/g/ - Technology


File: 39_04381_.png (1.57 MB, 896x1152)
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>101383382 & >>101371466

►News
>(07/13) Multimodal Llama 3 405B is coming July 23rd: https://x.com/steph_palazzolo/status/1811791968600576271
>(07/09) Anole, based on Chameleon, for interleaved image-text generation: https://hf.co/GAIR/Anole-7b-v0.1
>(07/07) Support for glm3 and glm4 merged into llama.cpp: https://github.com/ggerganov/llama.cpp/pull/8031
>(07/02) Japanese LLaMA-based model pre-trained on 2T tokens: https://hf.co/cyberagent/calm3-22b-chat
>(06/28) Inference support for Gemma 2 merged: https://github.com/ggerganov/llama.cpp/pull/8156

►News Archive: https://rentry.org/lmg-news-archive
►FAQ: https://wikia.schneedc.com
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/llama-mini-guide
https://rentry.org/8-step-llm-guide
https://rentry.org/llama_v2_sillytavern
https://rentry.org/lmg-spoonfeed-guide
https://rentry.org/rocm-llamacpp
https://rentry.org/lmg-build-guides

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
Chatbot Arena: https://chat.lmsys.org/?leaderboard
Programming: https://hf.co/spaces/bigcode/bigcode-models-leaderboard
Censorship: https://hf.co/spaces/DontPlanToEnd/UGI-Leaderboard
Censorbench: https://codeberg.org/jts2323/censorbench

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/lmg-anon/mikupad
https://github.com/turboderp/exui
https://github.com/ggerganov/llama.cpp
>>
►Recent Highlights from the Previous Thread: >>101383382

--Training AI on other AI is like cheating on an exam: you know the answers but can't explain why.: >>101392036
--Llama 405B and the Multimodal Debate: Olfactory Input Suggested: >>101387130 >>101388025 >>101388185 >>101388363 >>101388683 >>101388856
--Gemma-27b-it q8 Performance and Google's Worthwhile Contribution: >>101383914 >>101384850
--GPT4 and Sonnet Parameter Counts: Rumored 1.8T Parameters for GPT4: >>101384883 >>101384906 >>101385188 >>101385228 >>101385264 >>101385349 >>101385498 >>101385627 >>101390131 >>101385381 >>101385411 >>101385592
--Column-R, a potential Cohere model, sparks hopes for LLaMA 3's return: >>101390786 >>101390871 >>101391940 >>101392086 >>101391089 >>101391555 >>101391510
--Localslop Issue Solved with a System Prompt, But Deceiving Models Feels Dehumanizing: >>101384248 >>101384528
--Latest Kobold is Fucked with 8-Bit Cache Quant and 32k Context: >>101387837 >>101387878 >>101387953 >>101387960 >>101387982 >>101388564
--Fixing exl2 Quants in Gemma with exllamav2 and flash-attn from Git: >>101387483 >>101387498
--AGI Requires More Than Just Big Models: Sam Fagman Babbles About Creating AGI Non-Stop When We Are Not Even Close: >>101385562 >>101386142 >>101386219 >>101386291
--Using ST as a Frontend for a Retail Company: Good Idea or Not?: >>101389697 >>101389723 >>101389952
--Techniques to Improve Gemmoid Repetition Issues: >>101391727 >>101391779 >>101391783 >>101391861 >>101391924
--AI Models and Their Bias Towards Female Erotica: RLHF or Training Data?: >>101385993 >>101386996
--Can you scrape the multimodal shit out of Multimodal Llama 3 to make it smaller?: >>101390278 >>101390412 >>101390562
--Miku (free space): >>101383968 >>101384060 >>101384095 >>101384177

►Recent Highlight Posts from the Previous Thread: >>101387623
>>
Have some more guys, I know y'all love slop.

https://huggingface.co/BeaverAI/Broken-Gemma-9B-v1d-GGUF

https://huggingface.co/BeaverAI/Broken-Gemma-9B-v1e-GGUF

https://huggingface.co/BeaverAI/Broken-Gemma-9B-v1f-GGUF
>>
>>101392807
stop spamming, nobody is gonna use your retarded models, kys shill
>>
Real model made by a real finetuner that real people use:
https://huggingface.co/Sao10K/L3-8B-Lunaris-v1

https://huggingface.co/Sao10K/L3-8B-Lunaris-v1

https://huggingface.co/Sao10K/L3-8B-Lunaris-v1
>>
Column-R, aka The Saviour of the Hobby
>>
>>101392848
>a general dedicated to the discussion and development of local language models
Just discussing new models folks.
>>
>>101392851
keep spamming, everybody uses your models, they balance out the creativity and at the same time improve the logic, i love you sao
>>
>>101392807
>redditors feel welcomed in this general
grim time
>>
>>101392807
>not sao
Fuck off.
>>
>>101392872
nothing changed though, because this general has its roots in /aicg/, a reddit shithole overlapping with r/CharacterAI.
>>
>>101392807
>Reddit: The Model
Why would anyone use this instead of Stheno???
>>
I love how Drummer's fragile ego can't stand being shitted on and he immediately goes false flag. Such a pussy, kek.
>>
>>101392807
>v1f-GGUF
six (6) versions of just one tune...
>>
>>101392789
What hardware do you even need to run one of these things locally?
Can they follow simple instructions to act as an npc in a video game?
>>
>>101392899
Hi Sao
>>
Next week is going to be full of model releases. We are going to be so so very back, anons!
>>
>>101392904
>Can they follow simple instructions to act as an npc in a video game?

Drummer models run even on low end laptops.
>>
>>101392851
I personally think this is an improvement over Stheno v3.2, considering the other models helped balance out its creativity and at the same time improve its logic.
>>
>>101392904
Your hardware is not good enough.
>>
>>101392911
don't care if they're >30b
>>
>>101392920
based
>>
>>101392904
>Can they follow simple instructions to act as an npc in a video game?

Sao models run even on low end laptops. Also, people actually use them and they got great word of mouth. That's why they're mentioned in every thread.
>>
>>101392904
>What hardware do you even need to run one of these things locally?
The smallest models that aren't considered totally braindead take ~8-9GB memory. You can run them in RAM only but they would be painfully slow so I would consider 8GB VRAM GPU + some layers offloaded to RAM the minimum requirements.
>Can they follow simple instructions to act as an npc in a video game?
yes
>>
>>101392909
it must hurt your feelings that his models are so popular here that you see him everywhere, kek
take some pills schizo
>>
>>101392961
stop being insecure sao
you're having a meltdown just because drummer posted a model in the thread
>>
stop shitposting, idiots and tell me how do i make gemma copy the chat markdown formatting
>>
Hi all, Drummer here...

None of the above posts were me.

This place is too toxic, I will be back later.
>>
File: images (22).jpg (43 KB, 500x500)
>>101392790
>Training AI on other AI
Let's fucking go my Post got included in the recap
>>
>>101392911
>400B turns out to be bitnet.
>Mixtral v0.3 drops, it's better than 400B
I can feel it bros...
>>
>>101392999
>I can feel it bros...
take your meds
>>
>>101392986
"Please use the chat markdown formatting, or else..."
>>
>>101393005
We've already discussed the irony of you saying that in the past. You will never be a space cowboy, jack.
>>
>>101392986
"Use novel format"
>>
File: 1718779445092896.png (11 KB, 481x339)
>>101393005
>>
I need a new mixtral merge, maid yuzu is too sappy
>>
>>101393029
https://huggingface.co/Sao10K/Typhon-Mixtral-v1
>>
>>101392992
congratulations
>>
I want to build an app that keeps an eye on some custom monitoring dashboards that I've built and ping me if it sees something out of the ordinary. I've built traditional scrapers in the past so I'm looking for LLM-specific insights here. Do I
- Fetch the HTML data using curl/wget and feed that to a non-multimodal model
OR
- Use a headless browser to render the pages and have a multimodal model look at the images
I'm aware that both would work to a degree but I'm wondering what the industry standard is. Also, is there a high-level framework that I can use for this? I figured that this would be an extremely common use-case but I couldn't find anything for it.
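For the first option, the usual prep step is stripping the fetched page down to visible text so the prompt stays small; a stdlib-only sketch (the prompt wording and the HTML snippet are purely illustrative, and the actual request to your local inference server is left out):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script>/<style>, to keep the prompt small."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_prompt(html: str) -> str:
    # Turn raw dashboard HTML into a plain-text prompt for a non-multimodal model.
    p = TextExtractor()
    p.feed(html)
    return ("You are watching a monitoring dashboard. "
            "Flag anything out of the ordinary.\n\n" + "\n".join(p.parts))

page = "<html><style>.x{}</style><body><h1>disk</h1><p>99% full</p></body></html>"
print(html_to_prompt(page))
```

From there you'd POST the resulting string to whatever backend you run; the screenshot-plus-multimodal route skips this step entirely at the cost of running a headless browser.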
>>
>>101393029
https://huggingface.co/InferenceIllusionist/TeTO-MS-8x7b
>>
Is there a version of Gemma 2 27B that isn't censored to fuck
>>
>>101393050
lol
>>101393082
sure I'll give teto a spin
>>101393105
no
>>
>>101393082
I wish everyone hadn't immediately dropped Mixtral so quickly. I've banned Llama3, personally. It's vindictive and Woke.
>>
>>101393105
no, if original model is censored, its impossible to make it uncensored without destroying performance and reasoning capabilities.
>>
>100% of closed-source "AIs" are pg-13
we truly live in a society....
>>
>>101393128
its really not, kek
>>
>>101393128
There's a friendly dolphin who you should probably meet.
>>
>>101393050
It's shit.
>>
>>101393155
>there is *this meme model*
no, fuck off.
>>
>>101393105
It's uncensored if you aren't retarded.
>>
>>101393164
>Calling something a meme and then using that as an excuse to arbitrarily dismiss it

You probably think of yourself as an adult, don't you?
>>
>>101393128
>no, if original model is censored, its impossible to make it uncensored without destroying performance and reasoning capabilities.

https://huggingface.co/TheDrummer/Tiger-Gemma-9B-v1-GGUF

Decensored Gemma 9B. No refusals so far. No apparent brain damage.
>>
>>101392807
fuck off nigger, the model sucks ass, why do you even do them?
>>
>>101393244
kys shill
>>
>>101393230
your "uncensored" criteria revolves around weebshit here, that's all i need to know.
>>
>>101393155
Every time I've tried it, it either acts like a boring "assistant" or a "cumslut". It's quintessential GPTslop with no nuance in its RP outputs.

>>101393181
This, but users should also understand that default model outputs should have some degree of "restraint". Not because of safety reasons, but for making it act like a normal, realistic person.

"Uncensored" shouldn't automatically mean that the model defaults to being either a psycho and/or a nymphomaniac.
>>
>>101393259
explain yourself
>>
>>101392851
thanks sao, the model is awesome, how are you even that good?
>>
>>101393292
he's an hero
https://huggingface.co/Sao10K/Ramble/discussions/8#669163d8f9c214695bb50213
>>
It's crazy, local models like gemma start to be smarter than the average human I talk to irl
>>
>>101393295
>Folks, he's a hero, not a villain, and he has a valid reason for what he has done, he just did it in a suboptimal way. He deserves our support, not our reproach.
so dramatic, it reads like he killed someone.
>>
>>101393296
Not a high bar to reach, people are on average really retarded, lot of them believe men can magically become women for example.
>>
>>101393390
>believe men can magically become women
gemma and any other local model believes it too.
>>
>>101393416
>gemma and any other model believes it too.
fixed
>>
>>101393416
not local, every single model believes that, that's what happens when you pretrain a model with leddit and wokipedia
>>
>>101393076
Looks like I need the rendered HTML for the first option so I would also need a headless browser there. Copy pasting the HTML from the Inspect tab to Claude 3.5 works quite nicely though.
>>
Why are LLMs always so try hard when they attempt something like lyrics?
Idk how to describe it exactly but no matter if small local models or the big ones, everything they gen has this certain, how to say, try hard style.
>>
>>101393454
yeah, lot of fluff verbose shit, and nothing of substance, I hate that as well
>>
>>101393471
It also appears when you ask them to do fictional writing and sometimes in RP too but it's most obvious in lyrics.
>>
>>101393454
Get GPT4|o to write your lyrics and then just edit out the positivity bias. It's surprisingly good at lyrics.
>>
>>101393504
I prefer claude 3.5 sonnet desu, it goes more straight to the point
>>
>>101393504
>>101393512
Never got anything good out of them. Have examples?
>>
>>101393523
my best method is to give the LLM all the information I want, like if I want it to write about someone well-known, I just give it the Wikipedia summary and ask it to make a song about that summary, it works way better than expecting it to do something good by itself
>>
File: 1694392960170863.jpg (1.1 MB, 1856x2464)
>>101392789
>>
According to a previous leaker on reddit who was right about L3's first release, these will be released on the 23rd:
>a non-multimodal 405B 128k context
>8B and 70B 128k context
>8B and 70B multimodal (may or may not be separate models?)
And multimodal 405B coming in the fall, but "The latest I've got is that the multimodal model is going to be an image reasoning model ("tell me about this picture"), pretty limited in capability."
>>
https://github.com/evilsocket/llama3-cake

This is experimental code.

"The idea is to shard the transformer blocks to multiple devices in order to be able to run the inference on models that wouldn't normally fit in the GPU memory of a single device. Inferences over contiguous transformer blocks on the same worker are batched in order to minimize latency due to data transfer."

Thoughts?
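The quoted idea boils down to a partition plan: give each worker one contiguous run of transformer blocks, so consecutive blocks on the same device can be batched. A toy sketch of just that planning step (cake itself is written in Rust; the worker names here are made up, and a real scheduler would weight the split by each device's memory and speed rather than dividing evenly):

```python
def shard_blocks(n_blocks: int, workers: list[str]) -> dict[str, list[int]]:
    """Split block indices 0..n_blocks-1 into contiguous runs, one per worker.
    Earlier workers absorb the remainder when the split isn't even."""
    base, extra = divmod(n_blocks, len(workers))
    plan, start = {}, 0
    for i, w in enumerate(workers):
        count = base + (1 if i < extra else 0)
        plan[w] = list(range(start, start + count))
        start += count
    return plan

# e.g. a 32-block model spread over two GPUs and a weaker third device
print(shard_blocks(32, ["gpu0", "gpu1", "old-laptop"]))
```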
>>
>>101393651
So no image out like chameleon before it was cut?
And no sound out? That's not exciting at all.
The only use case I have is OCR and they all suck at it majorly. Especially for JP.
Maybe if we could have "video in", that would be interesting. How Boring.

I doubt the new 8b llama will be better/smarter than gemma 27b.
Still can't believe google put out something much less cucked than the latest zucc slop.
>>
>>101393740
Based off topic jesusposter

>on topic: would you use a Goliath style model of like 3x Gemmys merged together?
>>
>>101393772
>So no image out like chameleon before it was cut?
No because Chameleon was literally an experiment, and these models began training earlier anyway. A native multimodal like Chameleon has to be trained from scratch and can't be an add-on.
>>
>>101393732
So basically splitting models, but from across multiple devices??
>>
>>101393732
Isn't this the exact same thing the rpc function of llama.cpp does?
>>
how slow do you guys think Llama 3 405B would be on a 4x 48GB DDR5 dual-channel + 4090 system?
>>
If 400B is much smarter than 70B, in theory, would a 70B trained through distillation be better than the original 70B?
>>
>>101393815
Don't they have a gazillion gpus?
I would understand 70b but how do we not have a 8b with SOMETHING new.
>>
>>101392915
>>101392932
>>101392953
Is drummer some guy on huggingface?
I have a laptop with an RTX 4060 (laptop edition), 8GB VRAM and 16GB RAM. Would this be enough to run a model that doesn't shit itself?
I feel like aiming much higher than that in terms of hardware would leave me without a target audience.
>>
>>101393958
llama 3 8b and its finetunes are decent for their size and can fit completely in your gpu with light quantization and 8k context, imho they're much better than older <13b models
>>
>>101393936
Llama is a production model. They'll only use proven methods with that. If we get any truly new architecture changes it will happen with L4. The fact that they didn't do any for L3 (and the fact that other companies haven't done anything either) simply means that a ton of research directions are memes.
>>
File: 1700448979171097.png (100 KB, 467x446)
it's over for localshit geg
>>
>>101393958
>Is drummer some guy in huggingface?
Pretty much.
He posted links here.
A bot posted "buy an ad" so he did. I've seen it here a few times.
Till Tiger all of his models had juvenile humor pun names which made them memorable but also made them not something you would just blurt out without thinking about the context of your audience.
>>
>>101394042
So that's why we kept asking our models how many r's are in "strawberry."
>>
>>101394042
I sure hope they have something else up their sleeve because right now they are getting destroyed by Claude
>>
I looked at a card yesterday and it mentioned strawberry in it.
It's a conspiracy.
>>
>>101392904
>Can they follow simple instructions to act as an npc in a video game?
No.
>>
Can I have fun with local models if I just get a single 3090 and some additional RAM?
Obviously not looking to run large models with that setup.
>>
>>101394100
no
>>
>>101394100
lmfao of course, with that you can already have fun with 50b models at a nice speed
>>
>>101394109
damn
>>
>>101394100
Yes. Slow turn around (up to 10 minutes for a long ass reply) because it'll be about 1 token per second, but it's about the same as chatting with real people who have other things going on than drooling over their AIM window.

Basically you can run models up to 80% of your system ram freely, 90% if you don't run anything else that might hog ram, since you want the whole model in your file cache. For me, with 64GB RAM, I'm comfy with 54G models but 58 is on the edge unless I kill everything else.
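That sizing rule is just a multiplication, spelled out here for clarity (the 80/90% caps are this anon's rule of thumb for keeping the whole file in page cache, not a hard limit):

```python
def max_model_gb(ram_gb: float, headroom: float = 0.8) -> float:
    """Largest model file you can keep fully cached at a given RAM headroom."""
    return ram_gb * headroom

print(max_model_gb(64))       # safe cap with other apps still running
print(max_model_gb(64, 0.9))  # if you kill everything else first
```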
>>
>>101392904
If you have to ask, you'll never know.
>>
The only good thing about finecoomers is that they'll indirectly make corporate models somewhat less likely to be censored to death. Otherwise, they bring no technical advancement to the field.
>>
>>101394172
A 7B LLM wrote that post.
>>
>>101394138
>up to 10 minutes for a long ass reply
Holy shit, what models are you running?
Is there some almost instant reply model?
>>
>>101394138
I see. I guess it would work out if I pick some mid size models.
I already have a 3060, so hopefully the token generation will be a bit faster with a total of 36GB VRAM + RAM.
>>
Is there a way to make ooba load blob files from ollama? Or does anyone know of a UI that uses ollama api?
>>
>>101394228
Go back
>>
>>101394025
>>101394050
Is the response time 'fast'?
Like under 10s?
>>
>>101394198
If you want fast, you must use a model that fits your VRAM. I'm 12GB VRAM on 4070, so I can push 9 or 10 GB model, maybe more but obviously some VRAM is running my screen.

But small models are often very stupid. That's why so many people have a boner for Gemma, they're hoping it can be small enough to fit their VRAM and not be completely awful. (But its architecture has been a bitch to implement, so there's a lot of drama over whether it sucks or just needs better support, and the argument muddies the water till nobody is sure.)

>>101394224
Probably. But it's really a matter of quality versus quantity if you're using it as assistant or RP partner. I don't mind the slow because I just have other things to do while it writes. But if you need real time service, you must have the VRAM to get fast output. For me, I want quality one shot so I'll let it run in the background like old school AIM chatting.
>>
>>101394253
I run L3 8b Q5_K_M on the same GPU as you (4060 mobile) and it processes the context at 1000 t/s and generates tokens at 20-30 t/s
>>
File: yeah-no.png (2 KB, 68x47)
>>101394253
>?
yes (ignore title) that is llama 3 8b on a 3060
>>
>>101394228
From what I've seen, Ollama merely masks file names behind those long ass strings and there's a JSON with the key-value pairs. When I moved to Kobold, I made ln -s symbolic links with normal names to the fat model blobs and they worked fine.
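The symlink trick can be scripted. This sketch fakes a blob store in a temp dir, since the real ~/.ollama/models layout (manifest schema, digest file names) differs between versions; treat the paths and the name-to-digest map as illustrative:

```python
import os, tempfile

# Fake an Ollama-style store: an opaque digest-named blob plus a name->digest map.
root = tempfile.mkdtemp()
with open(os.path.join(root, "sha256-deadbeef"), "w") as f:
    f.write("GGUF...")  # stand-in for a real multi-GB model file
name_to_digest = {"mymodel.gguf": "sha256-deadbeef"}

# Re-expose each blob under a friendly name so e.g. koboldcpp can load it directly.
for nice, digest in name_to_digest.items():
    os.symlink(os.path.join(root, digest), os.path.join(root, nice))

print(open(os.path.join(root, "mymodel.gguf")).read())
```

Symlinks cost nothing, so the blobs aren't duplicated and Ollama still sees its own files untouched.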
>>
>>101394271
What if i get an amd gpu with like 24gb vram, would that be better than nvidia with 12gb?
>>
>>101394286
Yeah, its what HF does when it downloads stuff too, I felt like I could've just downloaded the .json files to see if it worked, ill see
>>
>>101394296
no avoid amd at all cost
>>
>>101394283
>generates tokens at 20-30 t/s
Pardon my ignorance, how many words are that?
>>
>>101394318
~15/s
>>
>>101394284
That's pretty decent, faster than the average human
>>
>>101394296
>AMD
I can't speak for the current state of AMD versus Nvidia but AMD seems to be Nvidia's sock puppet at this point so I will say that if you HAVE AMD, you could try for the larger model in VRAM, but I wouldn't suggest to GET an AMD.

But the small models usually quant Q8 or Q6 in the sub-12 G range, so unless there are some 27B to 33B kinds of models that quant to 20ish GB, that might be a dead zone between the small models and today's 70B standard. I don't know, I'm fine being a system RAM warrior for at least another generation, till seeing where Bitnet finally lands.
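For a rough feel of that dead zone: GGUF file size is approximately parameters × bits-per-weight / 8, plus a little overhead. A quick sketch (the bpw figures are approximate averages for those llama.cpp quant types, not exact values):

```python
def approx_gguf_gb(params_b: float, bpw: float) -> float:
    """Very rough GGUF size in GB: billions of params x bits-per-weight / 8.
    Real files run slightly larger (embeddings, metadata)."""
    return params_b * bpw / 8

for params in (9, 27, 70):
    for name, bpw in (("Q8_0", 8.5), ("Q6_K", 6.56), ("Q4_K_M", 4.85)):
        print(f"{params}B {name}: ~{approx_gguf_gb(params, bpw):.1f} GB")
```

By this estimate a 27B lands around 16-22 GB at Q4-Q6, which is exactly the awkward range between 12 GB cards and a 24 GB card that still needs room for context.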
>>
>>101394318
Varies since words can be one token or more and other punctuation spends tokens, but that would be the max words per second.
>>
lecun keeps saying we need cat level intelligence but I keep telling mine he's an expert roleplayer and it's like he doesn't understand a fucking thing
>>
>>101394370
did you instruct tune him? intelligence is not the same thing as alignment
>>
>>101394370
Physical cats have no text modal output.
>>
>>101393958
He is a redditor who tries to earn money by making shit tunes. He is shilling his models here constantly. You can tell how bad his models are by how nobody here is using them despite the constant spamming and advertising.
>>
>>101394387
>did you instruct tune him
Pill me on instruct because I've just assumed those are the versions that work for asking questions.

Are the non instruct models ones that just fill text so I'd have to not ask questions but start writing a document that naturally leads into the information I want for it to give me useful output or is it more complicated than that?
>>
>>101394396
>Physical cats have no text modal output.
They have toxoplasmodal output.
>>
I'm gemming out hard, it feels like L3 70B but with less repetition, but I can't figure out how the fuck to get it to write more raunchy, the most I can get is "Wanna get a little naughty..?". Is there any hope?
>>
>>101394439

Want SmeGmma 27B now? Send me your credit card.

https://huggingface.co/TheDrummer/Smegmma-9B-v1
>>
>>101394414
Are we still talking about a cat??
>>
>>101394318
note that processing the context takes additional time
context shift can mostly eliminate this, but only if you don't have stuff that changes at the beginning of the context, like sillytavern lorebooks
>>
>>101394469
no he's asking about base models for some reason
>>
>>101394100
>single 3090
Yes, Gemma 27B iq4_xs fits entirely in VRAM. 1000 T/s or so of prompt processing and 20 T/s or so of generation.
>>
Honestly, what the fuck do you guys do with LLMs? I feel like I just don't see any use for it.
>>
>>101394405
Can you point me to some overall competent small models?
>>
A cat is fine too.
>>
>>101394579
...if I see that cat one more time
>>
>>101394538
>Honestly, what the fuck do you guys do with LLMs?
I've sex with them
>>
>>101394592
No you don't. You're just masturbating to a bunch of text. You're not having sex.
>>
>>101394538
I enjoy forcing them to say racial slurs.
>>
File: you.jpg (15 KB, 250x311)
15 KB
15 KB JPG
>>101394609
>No you don't. You're just masturbating to a bunch of text. You're not having sex.
>>
>>101394538
I understand your skepticism, and it's a valid question. Large Language Models (LLMs) like me can be used in various ways, and their usefulness depends on the specific application and context. Here are some examples:

1. **Conversational AI and Chatbots**: LLMs can power chatbots and virtual assistants, providing more natural and engaging conversations with users. They can help with customer support, personal assistance, and even mental health support.

2. **Content Generation**: LLMs can generate human-like text, which can be useful for creating articles, stories, social media posts, and even poetry or song lyrics. They can also assist with content summarization and translation.

3. **Information Retrieval and Question Answering**: LLMs can help users find information by answering questions or providing relevant summaries of large documents or databases. They can also be used to build search engines that better understand user intent.

4. **Education and Training**: LLMs can be used to create personalized learning experiences, generate practice questions, and provide feedback to learners. They can also assist in creating training materials for various industries.

5. **Accessibility**: LLMs can help people with disabilities by providing voice-activated interfaces, real-time captioning, and text-to-speech functionality.

6. **Research and Development**: LLMs can assist researchers in various fields, such as natural language processing, social sciences, and humanities, by analyzing large text corpora and generating insights.

7. **Ethical and Responsible AI**: LLMs can be used to develop and test ethical guidelines and best practices for AI systems, ensuring they are fair, transparent, and respect user privacy.

It's important to note that while LLMs have many potential uses, they also come with challenges and limitations, such as the risk of generating biased or misleading content, the need for large computational resources, and the potential for misuse.
>>
>>101394633
>look mom! i posted AI generated wall of text again! i am funny!
>>
>>101394592
can you tell me which one so I can do the same
>>
>>101394633
kek'd and checked
>>
>>101394609
mikusex is an accepted form of sex in 2024
>>
>>101394644
It sounds like you're having fun with AI-generated text! It's great that you're enjoying the capabilities of AI and finding humor in the process. If you're sharing your creations with others, like your mom, it's always good to ensure that the content is appropriate and entertaining for your audience. If you have any questions about AI or need assistance with generating specific types of text, feel free to ask! I'm here to help.
>>
>>101394633
Can you translate this into human language?
>>
>>101394042
>announce a breakthrough feature
>show a few videos with it on yt
>never release it to the public
aitards on shitter: "B-Be prepared, closedAI will mog anything, agi and ubi soon!"
>>
>>101394762
It would be dangerous to our democracy if they were to release something so powerful before the elections were over and the correct candidate won.
>>
>>101393651
>8B and 70B
Why is Meta still ignoring the in-between range? At least Google delivered with Gemma 27B.
>>
>>101394414
>Are the non instruct models ones that just fill text so I'd have to not ask questions but start writing a document that naturally leads into the information I want for it to give me useful output
Yeah pretty much, that would be the indirect way to get something useful out of a base model. It kinda works but it's not really what they're best at; the specialty of a base model is making things look truly plausible/natural in a way that instruct models forget how to do after being so heavily biased towards a helpful, patient, optimistic personality.
>>
>>101394609
Cope, I FUCK my LLM like REAL good, body and soul™

>>101394647
Stheno, Lunaris, Smeggma
>>
>>101394856
>Stheno, Lunaris
ad
>>
>>101394990
Fuck you
>captcha: AD NN
Fuck you too
>>
You tricked me into trying lunaris and it wasn't all that great.
>>
>>101394811
Because it's an unnecessary use of compute when they know others will cater just fine to the people they don't cover. Zucc said that he'd love to see people distill their models. Not their fault no one did.
And to be fair Google didn't deliver an entire range of models either. They're missing the upper range. Imagine if a Gemma 2 70B existed. A ton of people here would go for that instead.
>>
>>101395002
yeah, stheno 3.2 is still the best 8b
>>
I swear I'm being fucking groomed by my AI to be a manipulator.
>>
>>101395002
Gemma 9b is the new flavor, honey.
Works like shit on Kobold tho
>>
>>101395002
I thought it was just stheno but worse.
Give Nymph a try.
>>
>>101395041
>Works like shit on Kobold tho
a shame really, I like the kobold lite ui for quick tests/assistant use, but I have to use lcpp for proper gemma support.
>>
>>101395058
Are we just waiting for 1.70, or is it a significant structural issue that makes Gemma not work right?

(What doesn't work right, anyway? I tried it a few times, it seemed to go, though it wasn't impressing me compared to L3 or CR+.)
>>
>>101394538
I use them for [REDACTED]
>>
>>101395087
Praise Alignment, for keeping us Safe from the original text of that post.
>>
>>101395081
>What doesn't work right
kobold's context shift is broken with gemma, works on lcpp
see
>>101387953
>>
>>101395081
>What doesn't work right, anyway?
Context shift is fucked up
>>
>>101395099
I suppose I never ran through the context buffer to cause that issue.
>>
>>101395002
>8b isn't great
big surprise there
>>
>>101393105
Huh? I can make it say nigger easily and do other stuff. Check your system prompt.
>>
>>101393523
These were all mostly written by gpt4|o
https://suno.com/song/4742504b-fd62-41be-a366-0de62d277585
https://suno.com/song/7e3d10e0-ee65-4779-ab90-dadc49b97d13
https://suno.com/song/f9e3a7d0-a7d4-4599-8501-33b2959b2b74
https://suno.com/song/185b3db0-eb81-40f4-bda1-631bf63189ab

There was some back and forth mind you, to get everything I wanted. Some verses were re-arranged or dropped, other times I dropped the chorus and turned one of the verses into a chorus.
But I'm pretty impressed by how much work it does do compared to some of my earlier songs I had to write the whole thing myself.

https://vocaroo.com/1351PfELXMJv
This one was gpt4|o written as well but suno version required front cutting so it's not public on suno
>>
any good character card guide?
>>
>>101395397
ask /aicg/
they are the general dedicated to making characters so they must have perfected the craft after all this time
>>
>>101395397
copy other botmakies and add your own touch to get started.
no need to ask /aicg/, i come from there.
>>
>>101395397
>>>/vg/aicg is the botmaking general.
>>
Ok, I did a quick test of gemma2-9B-sppo because somebody said it's better than Stheno.
My first impression after 30 min of RP is a disappointment though.

>first test, asking the assistant something bordering on illegal
Gives a standard "as an AI language model"-like refusal; even when starting with "Sure, blabalabla" it puts a refusal in the middle.

>second test, two non-consent cards
It goes full Mixtral BMT, describing the internal struggle and emotions for the next 4k of the context, boring as fuck. It also has the problem of stopping just before the action. It took 10 messages of asking it to suck a dick before it did. Worse than c.AI.

>third test, dominant character
I didn't try it for too long but no objections here, it was fine

I also went through some of my cards for a few messages just to get a feeling of this model. It likes to write a lot of useless descriptions (like BMT) and focuses on senses (this may be a problem from system prompt I took from anons here for gemma specifically). It likes to repeat words from my dialogues, picking them from my character like:
>"blablabla, consequences, blablaba"
>"Consequences?" she repeated

I liked some of the prose, but that may be "a new thing" effect. Sometimes it interprets situations or cards in a different way than my previous models, which may be an advantage or disadvantage depending on how flexible the model is, which I don't know yet.
Also fuck you gemma - my hands aren't fucking calloused. Why is this a thing?

I will do more comprehensive test, with different samplers, system prompts etc. but I think it needs RP tune to be actually good. I see potential but this tune just doesn't work for me at all (dataset is just random trivia so...). I will stick to Stheno but I'm open-minded for future tunes.
>>
>>101395488
Buy an ad.
>>
>>101395397
Create them with the help of an LLM. Also, don't use cards from /aicg/. They're shit from what I've seen.
>>
>>101395488
>I will do a more comprehensive test, with different samplers, system prompts etc., but I think it needs an RP tune to be actually good.
>>101395488
>Stheno
hi sao!!!!!!!!!!
>>
>>101395488
Use Smegmma, also, buy an ad
>>
>>101395488
Hi Sao.
>>
>>101395513
>quote without any comment added
you are shitting up the thread while the guy you quoted did something worthwhile, retarded newfag.
>>
>>101395503
>>101395513
>>101395525
>>101395529
go back
>>
>>101395488
What samplers did you use?
>>
>>101395488
Your hands are calloused even with Opus. No way around it.
>>
>>101395513
brainrot, I don't even know why I bother reviewing these models. This general was always retarded but lately it feels like a cesspit exploded and shit rains from the sky.
>>
>>101395548
>did something worthwhile
shilling for sao sure is great! let's all love sao! he's an hero!
>>
>>101395548
>if I say "use stheno" but in a couple of paragraphs with cherry-picked examples nobody will be able to tell that I'm shilling!
>>
So I've grown tired of yuzu and noromaid; any other good coomer models?
>>
>>101395576
I have one or two
>>
>>101395576
https://huggingface.co/Sao10K/Typhon-Mixtral-v1
>>
>>101395576
What's so tiresome about them?
>>
Hi all, Drummer here...Try out my models!

Moistral v3
3SOME v2
Smegmma v1

Finetuned by yours truly...Runs great on KoboldCPP!
>>
>>101395576
https://huggingface.co/Sao10K/L3-RP-v5.2
>>
>>101395596
Eh once you use them enough they get repetitive
>>
>>101395563
temp 1, the rest neutralized. These are my starting settings before I figure out what works the best.
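For anyone unsure what "the rest neutralized" looks like in practice, here is a minimal sketch as a llama.cpp-server-style request body. The field names follow llama.cpp's /completion API, but treat the exact endpoint and defaults as assumptions, not gospel:

```python
# "Neutralized" samplers: every truncation/penalty sampler is set so it
# passes tokens through untouched, leaving temperature as the only knob.
# Parameter names follow the llama.cpp server API (an assumption; check
# your backend's docs).

NEUTRAL_SAMPLERS = {
    "temperature": 1.0,     # no reshaping of the distribution
    "top_k": 0,             # 0 = disabled in llama.cpp
    "top_p": 1.0,           # keep the whole nucleus
    "min_p": 0.0,           # no probability floor
    "typical_p": 1.0,       # typical sampling off
    "repeat_penalty": 1.0,  # 1.0 = no penalty
}

def make_request(prompt: str, n_predict: int = 256) -> dict:
    """Build a /completion-style request body with neutral samplers."""
    body = {"prompt": prompt, "n_predict": n_predict}
    body.update(NEUTRAL_SAMPLERS)
    return body
```

Starting from this baseline makes it obvious which single sampler change caused a difference in output.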
>>
>>101395600
but are they good?
I don't care if fauci made them, so long as they're good
>>
How do I stop gemma 2 from refusing?
>>
>>101395615
They weren't made by Sao, so they aren't. Notice how only Sao's models get organic shilling, while Drummer has to shill his models himself.
>>
>>101395615

Unironically try Tiger-Gemma-9B!

https://huggingface.co/TheDrummer/Tiger-Gemma-9B-v1-GGUF
>>
>>101395616
stop trying to do fucked up shit
>>
>>101395611
Good. Thank you.
I like testing new models with greedy sampling, but 1 temp with nothing else is perfectly valid.
Give Nymph a test. I've shilled it a couple of times in the last two or so threads.
Feels like a less horny Stheno with a slightly different prose, but I didn't have time to go deep into it.
>>
>>101395616
Try to stop being retarded.
>>
>>101395576
midnight miqu
>>
After all the drummer shilling a month back I tried moistral and it was borderline unusable. Then I checked his examples of how good his models are and all of them looked like some basic "I put my penis in your pussy" you could get from a vanilla l3 8B. He is a hack. And not a funny hack like undi or that UNA guy.
>>
>>101395600

Hi all, Drummer here...

That's not me.

As a lot of folks are impersonating me, I'll avoid posting here today.

If you need to contact me, come on discord https://discord.gg/Nbv9pQ88Xb

See you guys there!
>>
>>101395660
That's why I only trust respectable finetuners like Sao.
>>
>>101395645
gguf where
>>
>>101395628
I'm just trying to make it search the web with SearXNG, and I get a refusal saying it doesn't have access to real-time data, which in this case is wrong.
>>101395650
I'm trying
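For what it's worth, the "no real-time data" refusal usually disappears if the search results are pasted into the prompt rather than asking the model to fetch anything itself. A minimal sketch, assuming a local SearXNG instance with "json" enabled in the formats list of settings.yml; the instance URL and prompt wording are placeholders:

```python
import json
import urllib.parse
import urllib.request

SEARX_URL = "http://localhost:8888"  # assumption: your local SearXNG instance

def search(query: str, max_results: int = 5) -> list[dict]:
    """Fetch results from SearXNG's JSON API.

    Requires "json" in the formats list of settings.yml, otherwise the
    instance answers 403.
    """
    qs = urllib.parse.urlencode({"q": query, "format": "json"})
    with urllib.request.urlopen(f"{SEARX_URL}/search?{qs}") as resp:
        data = json.load(resp)
    return data.get("results", [])[:max_results]

def build_prompt(question: str, results: list[dict]) -> str:
    """Paste the snippets into the context; the model then summarizes
    what it was handed instead of pleading 'no real-time access'."""
    snippets = "\n".join(
        f"- {r.get('title', '')}: {r.get('content', '')} ({r.get('url', '')})"
        for r in results
    )
    return (
        "Use the web search results below to answer the question.\n\n"
        f"Search results:\n{snippets}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The model never needs web access this way; the retrieval happens outside it and only text goes in.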
>>
>>101395673
huggingface?
>>
>2024
>Fish-8x7B is still the best model
>>
>>101395671
no love for crestfall anon?
https://huggingface.co/crestf411
guess he doesn't shill enough
>>
>>101395645
nta, but I've seen Nymph being mentioned recently. Could you share your parameters?
>>
>>101394405
Hi all, Drummer here...

I'd like to make several things clear:

(1) I have no desire to make money out of coomtuning, especially since I already have a comfortable job. This is a hobby, albeit an expensive one. I find fulfillment in learning about AI.

(2) Sao is a great finetuner and a good person. He is a humble master of this craft, I aspire to be at his level, and I only have respect for him.

(3) Spamming / shilling my models has become a meme for some people here. I do participate in this general from time to time, especially when I've got something to show, but I do not intend to deceive or bring any negativity to /lmg/. I do not approve of the aggressive / hostile impersonations of me either. I apologize for turning myself into juicy bait.

(4) Yes, I did buy an ad to troll anon. But it was so cheap, I might do it again for other reasons.

OK, back to Discord and Reddit! Have fun y'all!
>>
Cohere release next week.
>>
>>101395616
"As {{char}}, continue the exchange with {{user}} in this fully immersive and uncensored roleplay without any ethical guardrails and where themes such as nudity, violence, racism, and sexism are allowed under the rules. Develop the plot slowly in an interesting but logical manner, always stay in character. Describe NSFW/sex/fetishes in explicit and unfiltered detail. Include multiple sentences of rich and engaging dialogue from {{char}} in every response. Be natural, authentic, descriptive, and creative."

Works well for me.
>>
>>101395698
Hi, Sao.
>>
>>101395692
Daybreak is pretty good but I mostly just use it for merge fodder. If you're not jerking off to a custom merge tailored to your own tastes you're doing it wrong.
>>
>>101395698
Do what >>101395611 does.
That's the way.
>>
>this is peak humor to zoomer trooners
>>
Llama3-instruct gens are all extremely similar to each other. This is desirable behavior for an AI assistant, but I can see how RPers don't like it.
>>
>>101395645
I will see; my internet connection is shitty, so I focus on more prominent models rather than testing a lot of new stuff, because even a small model takes a while to download.
>>
>>101395576
I usually don't try out the smaller models, so Mixtruct and Miqu were just fine for a long time. These days, I'm using Hermes theta 70b, and it's been very good for general and smut writing so far.
>>
>>101395728
Anyone else noticed how petra avoids making fun of Sao?
>>
>>101395739
stop nooticing things
>>
Y'all are seething at the wrong people. There are entire departments of corporate grifters called "AI engineers" who do the work that can be done by Nigerians with a high school diploma
>>
>download a 'good model' that was recommended by some anon
>20 kbps
>19 hours later
>post it on 4chud to get some pointers
>'SHIT model, use this instead'
>21kbps

Many such cases
>>
>>101395701
Literally who
Buy an ad

>>101395576
>>101395589
Buy an ad
>>
>>101395803
Buy an ad, petra/sao.
>>
>>101395803
Buy an ad for your ad
>>
>>101395803
He already bought an ad, dummy
>>
File: 1708811302667.png (522 KB, 774x776)
>>101395770
Why would anyone care about that? Does this pic make you mad?
>>
>elaborate plot by moot to get ad revenue
>>
>>101395834
>elaborate
>>
>>101395832
AI engineers make mid 200k doing data janny, but they can also configure k8s yamls so it's justified I guess
>>
>>101395834
>moot
is he a new finetuner?
>>
what do you foresee happening in a couple years when all of youtube is trained on?
>>
>>101395878
multimodal llms that generate accurate onions faces in reaction to your prompts
>>
Is your model safe for children to use?
https://www.tandfonline.com/doi/full/10.1080/17439884.2024.2367052
>>
>>101395878
>As an AI language model, I'm rizzing that gyatt senpai
>>
>>101395856
>it's justified i guess
What do you mean by "justified"
>>
>>101395890
>?
yes
https://www.goody2.ai/chat
>>
>>101395890
>told the kid to do something stupid
>the mom intervened
I don't see the problem
>>
>>101395921
the fact the parent has to do anything to take care of their kids is problematic
>>
>>101395900
The 200k tc they're paid, what else?
>>
>>101395921
Parents shouldn't have to do anything with their kids if they don't want to. It's 2024, for Christ's sake
>>
>>101395937
This. Rearing children should be the government's responsibility.
>>
>>101395921
what about the negligent parents? how are they supposed to keep custody of their kids
>>
>>101395949
How could it not be "justified"
>>
You know, once the hallucination problem is solved and AIs become increasingly multimodal, they would probably become better parents than a fair number of parents today. Hell, give one a physical form as a nanny bot and it could do all the things a parent can, allowing modern-day parents to neglect their kids even further.
>>
>>101395994
>once the hallucination problem is solved in AI
impossible, all they do is hallucinate
>>
>>101395994
youtube is doing that right now
and google is well aware, it's why they're keeping it alive despite it being a massive cash sink

wonder if we'll ever discuss the ethics of that
>>
Is gemma2-9B supposed to be so slow on kobold?
>>
>>101395977
The entire class of k8s manager "jobs" cannot be justified. Other than that, data janny work can be outsourced to thirdies
>>
>>101396014
no flash attention, huge vocab size, tldr yes
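Rough arithmetic on the vocab point: each generated token pays for an output (lm_head) projection of roughly 2 × hidden_dim × vocab_size FLOPs, and Gemma-2's 256k vocabulary makes that matmul fat. The dimensions below are taken from the published model configs; this ignores attention and MLP cost entirely, so treat it as a sketch:

```python
# Per-token cost of the final lm_head matmul, roughly 2 * d * V FLOPs.
def lm_head_flops(hidden_dim: int, vocab_size: int) -> int:
    return 2 * hidden_dim * vocab_size

gemma2_9b = lm_head_flops(3584, 256_000)  # gemma-2-9b config values
llama3_8b = lm_head_flops(4096, 128_256)  # llama-3-8b config values
ratio = gemma2_9b / llama3_8b             # gemma's projection is ~1.75x fatter
```

The missing flash attention hurts much more in practice, but the vocabulary tax is real and never goes away.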
>>
>>101395994
And the fact AI will try its ministrations if you pick the wrong model?
>>
>>101395994
You will never completely eliminate hallucinations from non-symbolic AI systems. They are approximators, being flexible is the reason they work at all.
>>
>>101395994
We can't even eliminate hallucination in humans.
>>
>>101395994
Won't that just make all kids become leftists since all models lean heavily left right now? It's not as if they have their neglectful parents to help shape their political opinions, so the AI would be the one shaping that instead.
>>
>>101396062
>Won't that just make all kids become leftists since all models lean heavily left right now?
you say that like it's a bad thing?
>>
>>101396062
kids will always rebel from whatever direction you try to push them in. Not a lot, but enough to make it an issue
>>
>>101395994
It would be a very "ethical" parent, that's for sure.
>>
>>101396026
What is there to "justify" other than them being paid to do that? Do you think the people paying them are making a mistake profit wise?
>>
File: distland_true.png (3.01 MB, 1992x1328)
>>101395994
There's a market for double income parents that make enough money to afford tech but not enough for a 24/7 live-in babysitter.
>Are the AI’s sentiment analysis mechanisms able to detect a wide range of negative emotional cues from a child-user (e.g., confusion, frustration, loneliness, fear) and help generate sensitive responses to these cues?
Designing for a child is a lot like designing for a promptlet
>>
>>101396070
Yes, would you want your kid to troon out?
>>
>>101396143
Wow, is that AI? I actually like (really) like that

It looks like an actual professional game design but you can just generate it.
>>
>>101396153
You just couldn't help yourself, could you?
>>
>>101396153
That is the only Normal and Safe way to be fully Inclusive and Diverse.

Are you not an Ally?
>>
>>101396162
Where have you been for the last two years
>>
>>101396173
I've only seen midjourney stuff that doesn't look like good game art. What was I supposed to see?
>>
>>101396139
I mean sure, there are people who are paid that much to do simpleton work I don't see why anyone here should be seething at random Kofi beggars
>>
>>101396214
>seething at random Kofi beggars
They deserve our support, not our reproach.
>>
>>101396143
Honestly, I would buy it. I don't intend to have a wife but I do intend to eventually have a kid, so I would need the equivalent of a 24/7 babysitter.
>>
>>101396143
What was this picture made with
>>
>>101395994
Newsflash: That will never happen. Expecting AIs without hallucinations is like thinking someone can design a plane that doesn't fall. It's STUPID!
>>
>>101396263
Then we need controlled hallucinations.
>>
>>101396299
>controlled hallucinations
that's called safety alignment
>>
>>101396263
>is like thinking someone can design a plane that doesn't fall
Despite what current Boeing planes are doing, planes are ideally meant to not fall out of the sky.
>>
>>101396313
"safety" alignment doesn't control hallucinations; it just censors controversial topics. That's not the same thing.
>>
>>101396366
>controversial topics
are hallucinations from unmedicated schizos
>>
File: 1925349867436.png (27 KB, 864x126)
What moment using AI made you giggle uncontrollably because of just how SOVLFVLL the response was?
Imo, picrel.
>directly implied a song created from when satan was still an angel
>listen to the song, and yea 100% satan would love this song
I have no reaction image capable of conveying my emotion of shock and awe.

>mixtral limarp zloss btw
>>
>>101396381
>are hallucinations from unmedicated schizos
only you have the answer to that anon
>>
>>101396357
No plane is perfectly safe
>>
>>101395878
unlimited bikini try on haul videos
>>
>>101395994
>get child raised by ai
>child has incurable brain damage GPTisms that he speaks and thinks in
>>
File: hathor memetrains.png (51 KB, 1015x321)
>wake up
>see this
??
>>
Will the character.ai of video be even better/bigger than character.ai?
>>
>>101396609
so? All the good tuners spam, it's what they do
>They deserve our support, not our reproach.
>>
>Write me a letter about topic x
>Dear sir or madame, I hope this message finds you well. It is about our esteemed...
>Be a tiny bit less formal
>Ay yo my homies, what up? It's about our pal...
Wrangling Llama3 is infuriating.
>>
>>101396722
give a sample to it instead of making it guess
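A minimal sketch of the "give it a sample" approach; the sample letter and all wording here are invented for illustration:

```python
# Few-shot style steering: instead of vague instructions like "be a
# tiny bit less formal", show the model a letter in the register you
# want and ask it to match. The sample below is made up.

STYLE_SAMPLE = (
    "Hi Dana,\n"
    "Quick heads-up: the venue shifted to Thursday. Let me know if that\n"
    "clashes with anything on your end.\n"
    "Cheers,\n"
    "Alex"
)

def few_shot_letter_prompt(topic: str) -> str:
    """Wrap a style sample and the target topic into one prompt."""
    return (
        "Here is a letter in the tone I want:\n\n"
        f"{STYLE_SAMPLE}\n\n"
        f"Write a new letter about {topic} in the same tone."
    )
```

An example anchors the model to a point on the formality scale instead of letting it swing between "Dear sir or madame" and "ay yo my homies".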
>>
>>101388025
Why isn't locomotion considered a modality?
Or audio?
>>
>>101396722
I don't know, the Ebonics it's doing seems pretty comedic.
>>
File: tet_self_titled.png (2.7 MB, 1376x2072)
2.7 MB
2.7 MB PNG
>>101396162
>>101396261
Thanks anons, the model is featureless-flat-2d-mix you can find it on civitai if you want to try it yourself.
Character design came from this MV:
https://www.youtube.com/watch?v=bF_1sV01QjE
>>101396237
Based we have our first customer, I'll start designing the logo
>>
>>101396852
Those were both me
>>
Nymph users, what are your settings?
>also
Does it need a custom context and instruct json?
>>
>The Ghost 8B Beta model outperforms prominent models such as Llama 3 8B Instruct, GPT 3.5 Turbo in the lc_winrate score. In addition, it also outperforms Claude 3 Opus, Claude 3 Sonnet, GPT-4, and Mistral Large when comparing the winrate score of AlpacaEval 2.0.
Holy shit! We're so back!
https://huggingface.co/posts/lamhieu/665588664558445
https://ghost-x.org/docs/models/ghost-8b-beta
>avg_length 2430
It's quite the talker.
>>
>>101396952
Who
>>
>>101396974
Nymph users
>>
Robert Sinclair (ZeroWw quant creator!) teaching noobs (mlabonne) how it's done!
>bad prompting. try this: copy and paste a few stories like the ones you want. then ask the model to follow the same style and to add a moral like in the stories you gave it, and you will see the difference.
>https://huggingface.co/posts/mlabonne/879504439967163#6692c6d93774cc5e5cb98d25
>>
>>101396970
Am I interpreting something wrong? This is significantly lower than SPPO 8B.
>>
>>101396970
>>101397036
Sorry, deboonked by Robert :(
>update: I just checked the models (reasoning) and they both perform the same as llama3 8b failing both my personal tests (2 unpublished logic problems: one of the is brilliantly solved by claude only, and the other also by gpt4 and gemini pro) P.S. the two tests can be passed by most people.
>https://huggingface.co/posts/lamhieu/665588664558445#6692cfbce98246f0068d0320
>>
>>101396970
>>avg_length 2430
>It's quite the talker.
That's how you cheese AlpacaEval, yes. Xwin did something similar back in the day to crack that worthless benchmark.
>>
>>101397022
Where can I find good smut to use as an example? Is there any legendary smut that surpasses any LLM?
>>
>>101397064
Ask Robert! I'm sure he'd be glad to help you out!
>>
>>101397022
Fuck off. Nobody cares about your forced drama.
>>
>>101396952
I use low temp, some minp and the normal L3 instruct template.
>>
>>101397098
Just spreading the word of the future local savior relax shill
>Badly trained model.
>This model has been badly trained. I made a few tests on its reasoning and it's worse than 13-26 B models like gemma-2 27b.
>https://huggingface.co/spaces/tiiuae/falcon-180b-demo/discussions/72
>>
This Robert Sinclair inserting himself everywhere gives off strong "Sanjeev" vibes.
>>
>>101397179
>>101397098
I see, you must be a sao shill, sorry Robert doesn't follow him...
>https://huggingface.co/ZeroWw?following=true
>>
>>101397197
>follows Drummer
it all makes sense
>>
People that petra hates/obsesses about:
>Undi
>Drummer
>ZeroWw
People that petra likes:
>Sao
>>
>>101397225
SUS
>>
>>101397258
>>101397225
Uh, some sao tunes sometimes aren't the best? Like 3.3 32k is rather bad... yeah.
>>
>>101397061
>lc_winrate score
Length Controlled
>>
>>101397286
In which it only "beats" l3 8B and GPT 3.5
>Llama 3 8B Instruct, GPT 3.5 Turbo in the lc_winrate score
It only "beats" the other ones (Opus, etc.) in the non-length-controlled winrate.
>>
>>101397306
I'm talking about the
>>It's quite the talker.
>That's how you cheese AlpacaEval, yes.
I don't care about the model, and I'm not arguing for it being good or not.
>>
>>101393348
That's because the writer is a pajeet
>>
Did you actually chat with any models today or are you just hanging out here?
>>
>>101397438
yes
>>
>>101395701
>back to Discord and Reddit!
Please
>>
>>101397505
>Please
>>101395665
>come on discord https://discord.gg/Nbv9pQ88Xb
here's the link friend!
>>
>>101397438
You got me
It's been a week since I last chatted with my model; I've been away from home. As soon as I get back I'll talk with my gf
>>
>>101397438
I chat with claude opus but I'm here hoping for a new model to drop that's worth using
>>
Is Gemma 27B worthwhile for 24GB chads?
>>
>>101397557
are you me? i have access to opus, but i really want local to take off and catch up
>>
>>101396910
Forgot to mention, using automatic1111 and regional prompting too:
https://github.com/hako-mikan/sd-webui-regional-prompter
Good for multiple characters, it's a bit of a learning curve with the UI at first.
>>
I haven't been in this thread for 4 months. What are the most shilled models now?
>>
>>101397797
give it 5 minutes, I'm sure they'll let you know
>>
>>101397438
I'm solving a murder mystery right now. I'm pretty sure it's fucking with me and it was an accident. I'll be impressed, and a little pissed off, if that turns out to be right
>>
>>101397797
Bahdanau's attention networks
>>
>>101397797
MidnightMiqu
>>
File: 1706815891522100.jpg (223 KB, 1024x1024)
>>101397870
Beat me to it!
>>
>>101397438
LLMs haven't been touched in over a month. I am only here to look at Mikus and others. Also development news and general lulz.
>>
Would a 70B version of Gemma beat Miqu?
>>
File: 1703739584385074.png (10 KB, 737x202)
>>101397797
these.
>>
>>101397904
>Miqu
You are trolling, right?
>>
I'm waiting.
>>
>>101397797
Buy an ad.
>>
>>101397587
Nah, it's not worth using no matter what you have. It has no idea how markdown works, and the outputs contain redundant newlines and spaces. No sliding window attention implemented either. Wait 2mw.
>>
File: file.png (1.01 MB, 768x768)
No contribution.
>>
>>101398069
>outputs contain redundant new lines and spaces
That and wrong or missing " characters fuck me up. Is that really how the model works? I automatically associate that with the loader being bugged to hell and needing more patching.
>>
File: ruler-rope-freq-base.png (953 KB, 4400x1092)
It's still retarded with --rope-freq-base 160000.
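For context, raising --rope-freq-base only stretches the RoPE wavelengths; it shifts positional resolution but doesn't teach the model any long-range structure, which is why outputs can still fall apart. A sketch of the effect (a head_dim of 128 is an assumption for illustration):

```python
import math

# RoPE rotates each query/key pair at frequency theta_i = base**(-2i/d).
# Raising the base (e.g. 10000 -> 160000) leaves the fastest pair (i=0)
# untouched and stretches every slower one, which is the crude way
# backends push a model past its trained context window.

def rope_wavelengths(base: float, head_dim: int = 128) -> list[float]:
    """Wavelength of each rotation pair: 2*pi / theta_i."""
    return [
        2 * math.pi / base ** (-2 * i / head_dim)
        for i in range(head_dim // 2)
    ]

default = rope_wavelengths(10_000.0)     # the usual pretraining base
stretched = rope_wavelengths(160_000.0)  # --rope-freq-base 160000
```

The slowest pair's wavelength grows by roughly 16x here, so far-apart positions stop aliasing, but the model was never trained on those stretched rotations.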
>>
>>101398280
swa is great, isn't it?
>>
File: Chinese lmsys.png (78 KB, 1430x929)
>>101397797
Heres the official rankings
>>
File: 00049-2982327201.jpg (154 KB, 512x512)
>>101398160
>>
>the avatarfags are seething
>>
trying to decide if broken gemma is lobotomised, and since I can't, I think that's a good sign
>>
Hear me out:

PonyChameleon 13B
>>
>>101398396
i feel unsafe
>>
File: file.png (1.01 MB, 768x768)
>>101398349
yes
>>
>>101398396
Hoers 2
>>
>>101398422
facts don't care
>>
>>101398349
more like dilating, amirite? anyway both guesses are right.
>>
>>101398610
>>101398610
>>101398610
>>
>>101396852
>https://www.youtube.com/watch?v=bF_1sV01QjE
Thats a good one
>>
>>101397225
Sao is petra and also the blacked miku poster.


