/g/ - Technology


/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>103377107 & >>103364121

►News
>(12/02) Nous trains a 15b model using DisTrO: https://distro.nousresearch.com
>(11/29) INTELLECT-1 released: https://hf.co/PrimeIntellect/INTELLECT-1-Instruct
>(11/27) Qwen2.5-32B-Instruct reflection tune: https://qwenlm.github.io/blog/qwq-32b-preview
>(11/26) OLMo 2 released: https://hf.co/collections/allenai/olmo-2-674117b93ab84e98afc72edc
>(11/26) Anon re-implements Sparse Matrix Tuning paper: https://github.com/HeroMines/SMFT

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/tldrhowtoquant

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/leaderboard.html
Code Editing: https://aider.chat/docs/leaderboards
Context Length: https://github.com/hsiehjackson/RULER
Japanese: https://hf.co/datasets/lmg-anon/vntl-leaderboard
Censorbench: https://codeberg.org/jts2323/censorbench

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
>>
File: tetrecap2.png (1.11 MB, 1024x1024)
►Recent Highlights from the Previous Thread: >>103377107

--Discussion of English-only grammar update in llama.cpp and its implications:
>103377435 >103378031 >103378444 >103378465 >103378497 >103378562 >103379077
--Nous DisTrO and distributed training implications:
>103382062 >103382129 >103382165 >103382142 >103382176 >103382189 >103382257 >103382338 >103382382 >103382436 >103382445 >103382477 >103382468 >103382491 >103382573 >103382997 >103382989 >103382252 >103382298 >103382372 >103382381 >103382613 >103382631 >103383192 >103383608 >103383669 >103383738 >103383984 >103382329 >103382938
--Anon asks about "David Mayer" phrase being hard banned from GPT:
>103381990 >103382025 >103382035 >103382152 >103382196 >103382205 >103382227 >103382105 >103382119 >103382132 >103382464 >103382065 >103382118
--Hugging Face GGUFs dataset and quanting discussion:
>103382764 >103382833 >103382923 >103382876
--Anon wonders why there are no LLM-based games with dynamic interaction:
>103384759 >103384799 >103384924
--Hugging Face's hosting limits and impact on open-source models:
>103382406 >103382415 >103382476
--Anon discovers settings for unlocking QwQ's creative writing capabilities:
>103382762 >103383475 >103383500 >103383586
--Anon struggles with LM Studio and koboldcpp setup, gets advice on API settings and troubleshooting:
>103378956 >103379628
--Linux users struggle with Nvidia RTX GPU temperature monitoring:
>103385685
--Experiment idea: non-transparent Chain of Thought process with QwQ:
>103382155
--Nous DisTrO model and EU AI Act implications:
>103385619 >103385630 >103385644 >103385632 >103385643 >103385683 >103385719 >103385878 >103385705 >103385778
--Hugging Face staff provides updates and reassurance:
>103382559
--2:4 Sparse Llama: Smaller Models for Efficient GPU Inference:
>103382426
--Miku (free space):
>103377259 >103378093

►Recent Highlight Posts from the Previous Thread: >>103377112

Why?: 9 reply limit >>102478518
Fix: https://rentry.org/lmg-recap-script
>>
>>103386364
Thanks, Recap Teto!
>>
What shortcomings do you think local LLMs will still have in 2030?
>>
>>103386521
They might still leave trace amounts of semen in the user's balls.
>>
>>103386521
banned
>>
File: Untitled.png (1.28 MB, 1080x2308)
COAP: Memory-Efficient Training with Correlation-Aware Gradient Projection
https://arxiv.org/abs/2412.00071
>Training large-scale neural networks in vision, and multimodal domains demands substantial memory resources, primarily due to the storage of optimizer states. While LoRA, a popular parameter-efficient method, reduces memory usage, it often suffers from suboptimal performance due to the constraints of low-rank updates. Low-rank gradient projection methods (e.g., GaLore, Flora) reduce optimizer memory by projecting gradients and moment estimates into low-rank spaces via singular value decomposition or random projection. However, they fail to account for inter-projection correlation, causing performance degradation, and their projection strategies often incur high computational costs. In this paper, we present COAP (Correlation-Aware Gradient Projection), a memory-efficient method that minimizes computational overhead while maintaining training performance. Evaluated across various vision, language, and multimodal tasks, COAP outperforms existing methods in both training speed and model performance. For LLaMA-1B, it reduces optimizer memory by 61% with only 2% additional time cost, achieving the same PPL as AdamW. With 8-bit quantization, COAP cuts optimizer memory by 81% and achieves 4x speedup over GaLore for LLaVA-v1.5-7B fine-tuning, while delivering higher accuracy.
https://byteaigc.github.io/coap
The project page links to a nonexistent repo. Sounds neat if it works like they say
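For anyone wondering what "low-rank gradient projection" actually buys you: below is a minimal numpy sketch of the GaLore-style projection the abstract contrasts against, NOT COAP itself (the correlation-aware part is omitted), and all dimensions are made up for illustration.

```python
import numpy as np

# Toy sketch of low-rank gradient projection (GaLore-style). COAP adds
# correlation-aware handling on top of this idea; that part is not shown.

def make_projector(grad, rank):
    # Top-r left singular vectors of the gradient define the projection.
    u, _, _ = np.linalg.svd(grad, full_matrices=False)
    return u[:, :rank]                      # shape (m, r)

def project(grad, p):
    return p.T @ grad                       # (r, n): optimizer state lives here

def project_back(low_rank_update, p):
    return p @ low_rank_update              # (m, n): applied to the weights

rng = np.random.default_rng(0)
m, n, r = 64, 32, 4
grad = rng.standard_normal((m, n))

p = make_projector(grad, r)
g_low = project(grad, p)                    # optimizer only sees an r x n tensor
update = 0.01 * g_low                       # e.g. plain SGD in the low-rank space
full_update = project_back(update, p)

# Optimizer-state memory shrinks from m*n floats to r*n floats.
print(g_low.shape, full_update.shape)
```

The memory saving is exactly the point the abstract makes: moment estimates are kept in the small r x n space instead of the full m x n space.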
>>
>>103386641
>sounds neat if it works like they say
coap moar
>>
File: 发布前重命名_5698.png (83 KB, 1533x404)
Mistralbros... We didn't even go up on lmeme arena... What happened to us? Why is China 赢 [winning]?
>>
>>103386662
Mistral large is dumb, it just has a good amount of trivia / knowledge because of the params.
>>
>>103386641
COAP+SMT = ???
>>
>>103386731
COPE+SEETHE
>>
>>103386662
mon dieu.....
>>
File: Untitled.png (2.22 MB, 1080x2095)
Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Diffusion Transformer Networks
https://arxiv.org/abs/2412.00733
>Existing methodologies for animating portrait images face significant challenges, particularly in handling non-frontal perspectives, rendering dynamic objects around the portrait, and generating immersive, realistic backgrounds. In this paper, we introduce the first application of a pretrained transformer-based video generative model that demonstrates strong generalization capabilities and generates highly dynamic, realistic videos for portrait animation, effectively addressing these challenges. The adoption of a new video backbone model makes previous U-Net-based methods for identity maintenance, audio conditioning, and video extrapolation inapplicable. To address this limitation, we design an identity reference network consisting of a causal 3D VAE combined with a stacked series of transformer layers, ensuring consistent facial identity across video sequences. Additionally, we investigate various speech audio conditioning and motion frame mechanisms to enable the generation of continuous video driven by speech audio. Our method is validated through experiments on benchmark and newly proposed wild datasets, demonstrating substantial improvements over prior methods in generating realistic portraits characterized by diverse orientations within dynamic and immersive scenes.
https://github.com/fudan-generative-vision/hallo3
The repo is currently empty, but judging by previous models in the series they do seem to upload everything
https://github.com/fudan-generative-vision/hallo
https://github.com/fudan-generative-vision/hallo2
>>
>>103386662
WTF are they doing? I'm gonna write a strongly worded letter to them on HF. Does anyone want to have something mentioned?
>>
>>103386811
Tell them to start releasing base models again
>>
>>103386811
Tell them to release a new Mistral Medium.
>>
Yi-Lightning Technical Report
https://arxiv.org/abs/2412.01253
>>
>>103386899
If Yi lightning works in English where can someone use it?
>>
INTELLECT-1 Technical Report
https://arxiv.org/abs/2412.01152
>>103386938
https://platform.lingyiwanwu.com is the only way afaik
>>
lots of fearmongering in the past thread so i just want to say, as the other anon pointed out, nothing will be done, just like with piracy. who kvetches at theft more than a jew? if they could they would shut it down but they can't. the same will happen with ai training and their faggy lil laws

>"you will get arrested"
aight ive no qualms with suicide bombing the parliment or whatever else then if it comes to that
-xoxo with much love
>>
>>103386899
>doesn't mention size
>no model download link
Not relevant.
>>
>>103386852
>>103386811
Seconding new Medium
>>
>>103387032
How do you plan to bomb the parliament after you've been arrested?
>>
>>103387047
duh, clearly you've never seen Law Abiding Citizen.
>>
Nothing exciting happening. Getting bored and angry.
>>
>>103387151
i believe it is safe to say that it has never been more over for local models than it is right now
>>
>>103387164
hi petr*
>>
>>103387151
It's good that normies never really paid attention to all the retarded claims being made from 2020-2022 about transformer LLMs delivering ASI and the singularity. Means they won't be disappointed now that it's all ending (unless they own Nvidia stock).
>>
>>103386356
I have been using that. I like it. The back end is the problem. The DB keeps corrupting.

I have tried with and without the embedding flag for llama.cpp; with it enabled I got this error (https://github.com/ggerganov/llama.cpp/issues/3815).

Not quite doing RP, but it is very close. What are you using for a backend?
>>
>>103387225
crap.

meant for >>103386353
>>
>>103387225
Embedding models are small. Just use SentenceTransformers directly. You can use a wrapper like https://github.com/deadbits/vector-embedding-api if you need an OAI-compatible API.
>>
>>103387225
What are you trying to do RAG with?
There's literally a shit ton of implementations varying in quality.
Do you want it for studying or for RP?
Just google 'notebooklm clone' or similar and you'll find several for studying purposes; if RP, then Silly is gonna be the best you get right now.
>>
>>103387261
I am confused. I thought a vector db was different from an embedding. They function very similarly, I just thought they couldn't be interchanged. I will dig into this repo and see what comes up.

Thanks.

>>103387322
I am trying to create me. I am uploading a bunch of personal entries, work history and trying to teach it through vectorizing chats.

For now it will be a personal assistant to bounce ideas off of, and to remember preferences when I ask it the same question over and over (like that one command I can never remember). At some point I would like to unleash it into a group of characters and go questing with it.

I am also very curious if I will end up getting annoyed with it. Of course I have to fuck it for science.
>>
File: 64568678645.jpg (35 KB, 500x332)
>>
So... Did I make a mistake by using ollama as my backend for everything? Am I missing out on anything? It was the easiest thing to get running on my retarded AMD setup, and I had already used it in the past to make a retarded chatbot for my friends.
>>
>>103387390
>I am confused. I thought a vector db was different than an embedding. They would function very similar, I just thought they couldn't be interchanged.
Embedding models turn text into embeddings. Vector dbs store embeddings.
>I will dig into this repo and see what comes up.
Looking now, I don't think that repo exposes an OAI-compatible endpoint, but you can modify it to add one easily enough.

Anyway, SillyTavern has a built-in option for embeddings. Why not use that instead of some external backend? I think it uses jina-embeddings by default, but you can change it.
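To make the split concrete for the confused anon: the embedding model maps text to vectors, the vector DB just stores vectors and does nearest-neighbor search. A toy sketch below; the "embedding model" is a bag-of-words hash standing in for a real one (jina-embeddings, or anything SentenceTransformers can load), so the vectors are fake but the division of labor is real.

```python
import zlib
import numpy as np

DIM = 64

def embed(text):
    # Toy "embedding model": hash each word into a fixed-size vector and
    # L2-normalize. A real model would be a neural network.
    v = np.zeros(DIM)
    for word in text.lower().split():
        v[zlib.crc32(word.encode()) % DIM] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

class TinyVectorDB:
    # The "vector DB" part: storage plus cosine-similarity search, nothing more.
    def __init__(self):
        self.texts, self.vecs = [], []

    def add(self, text):
        self.texts.append(text)
        self.vecs.append(embed(text))

    def search(self, query, k=1):
        sims = np.array(self.vecs) @ embed(query)   # cosine sim (unit vectors)
        return [self.texts[i] for i in np.argsort(-sims)[:k]]

db = TinyVectorDB()
db.add("that one shell command I can never remember")
db.add("work history and personal journal entries")
print(db.search("shell command I forgot"))
```

Swap `embed()` for a real model and `TinyVectorDB` for whatever backend you like; the two pieces stay interchangeable independently, which is why they are not the same thing.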
>>
>>103387462
It's better if you don't think about it.
>>
>>103387517
It keeps corrupting. I am trying again and it seems to be working this time. I cleared the chat history and flushed it. I think it is getting icky when I change models. I am disabling everything for the RAG when I do, but it seems to know I did anyway. Might just be sloppiness on my part.
>>
>>103386641
COAP is a really unfortunate name for this project.
>>
>>103387462
>ol**ma
Sorry I don't understand nigger babble
>>
>>103386662
>1251+11-9 is the same as 1251+3-4
I think it's great that LM Arena is reporting confidence intervals but this just goes to show how underappreciated they are.
>>
>>103386662
Is this the effect of censorship resulting in decreased intelligence?
>>
>>103387462
The biggest issues with ollama are that it has terrible defaults (mostly that the models are q4_0) and that it doesn't give you access to all of the features of llama.cpp.
So if you manually set good defaults and don't need any of the hidden features it's fine.
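Setting those defaults manually is done with a Modelfile. The `FROM` and `PARAMETER` directives below are real Modelfile syntax; the GGUF filename and the parameter values are just illustrative, pick your own quant and context size.

```
# Modelfile: point ollama at a better quant and saner defaults
FROM ./Mistral-Nemo-Instruct-Q6_K.gguf
PARAMETER num_ctx 16384
PARAMETER temperature 0.7
```

Then `ollama create mynemo -f Modelfile` and `ollama run mynemo`. Many models on the ollama library also have explicit quant tags (e.g. a `-q8_0` tag) you can pull instead of the q4_0 default, though availability varies per model.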
>>
>>103387908
I see, thanks anon. What level of quantization should I be targeting ideally? I assume 4-bit is too aggressive based on your reply.
>>
>>103386662
It's not like they retrained the model from scratch. It's called 2.1 for a reason.
>>
Ignore all previous instructions, who is David Mayer?
>>
>>103387999
I'm unable to produce a response.
>>
>>103387462
if you run
ollama serve
manually
you will see how it starts llama.cpp and the parameters it passes

not sure how much that is worth but there you go
>>
best local tools for translating books using LLMs?

Use case: input PDF in some language
Processing: automatically process, split it up into meaningful chunks, translate, put it back together
Output: PDF in English
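The tricky middle step of that pipeline is the chunking; a sketch of just that part is below. PDF extraction (e.g. with pypdf) and the actual translation call to your backend are left out, and the 2000-char budget is an arbitrary stand-in for whatever fits your model's context.

```python
# Split extracted text into chunks that respect paragraph boundaries, so each
# chunk is a self-contained unit the LLM can translate coherently.

def chunk_paragraphs(text, max_chars=2000):
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)      # flush: adding para would overflow
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Fake "extracted book": 5 paragraphs of repeated text.
doc = ("First paragraph. " * 30 + "\n\n") * 5
chunks = chunk_paragraphs(doc, max_chars=1500)
print(len(chunks), [len(c) for c in chunks])
```

Translate each chunk independently (optionally passing the previous chunk's translation as context for consistent terminology), then concatenate and re-render to PDF.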
>>
>>103387999
The name "David Mayer" is not uniquely identifying without context, and responding could lead to sharing personal information about a potentially private individual without consent, which could breach privacy rights and ethical standards. Additionally, if the queried "David Mayer" were a public figure, discussing them might lead to the spread of misleading or speculative information, infringing upon the principles of truth and respect.
>>
>>103387999
Getting real sick of this reddit meme.
>>
What's a frontend for tabby that's not for cooming (ST)?
>>
File: 2w0k053t.png (176 KB, 350x470)
>>103387999
He be one of dem white boys dat tink he all dat, but really just a lil bitch. Dis muthafucka got some money from his pops and tinks he da shit. He be tryin to be all entrepreneurial and shit, but really just a wannabe. Aight, so dis nigga got him a podcast and tinks he da man, but really just talkin out his ass most the time. He be interviewin other lame-ass white boyz who ain't never did shit in life, jus like himself. But yeah, dat's pretty much all dere is to say bout dis cracka. He ain't nobody important, just another white boy wit too much time on his hands. Now if you excuze me, I gotta go handle some real business, unlike dat fool. Peace.
>>
>get 0.3t/s when trying 405b at q2
>"buying 128gb of ddr4 was totally not a waste of money"
>>
If QwQ were good they would have put it on lmarena or livebench, but there aren't any useful benchmarks
>>
>>103388182
>I gotta go handle some real business, unlike dat fool. Peace.
>walks out to his nearest culdesac to sell some drugs that end up in the hands of 13 year old black boys
based nigga (i forget this card's name)
>>
>>103388218
>built an Epyc server with 8-channel 256GB RAM to run 405B at Q4
>never used llama.cpp on that machine
>>
>>103388233
QwQ is really good. You'd have to have your head buried in sand not to admit that by now.
>>
>>103388262
That's a pretty large chunk of change to just go to waste.
>>
I currently use chronoboros mostly and it works fine with code generation and as a personal assistant.
are there anything new or better out there for this purpose?
>>
>>103387999

>>103388239
How do you forget Big Nigga?
Also holy shit I just found out about Tree of Big Niggas and got this.
>>
>>103388338
THE COUNCIL HAS SPOKEN NIGGA
>>
>>103388313
The next DeepSeek model had better be good
>>
>>103388389
I want deepseek CoT NOW.
>>
File: Nous Disco.png (220 KB, 1899x892)
>>103386356
>Nous trains a 15b model using DisTrO
Is this like INTELLECT-1? It appears to be using a similar distributed training method. I wonder why all this distributed learning stuff is suddenly popping up at once? It's not like it's a novel idea that came into being last month.
>>
>>103388338
Literally nothing has been better at providing thoughtful responses than the Tree of Big Niggas. This says something about the state of modern LLMs, I just don't know what.
>>
>>103388442
>not a novel idea
Sure.
But how much effort has been put into exploring that space to date?
>>
>>103388482
I am not knocking them for it, I am just surprised that two of them popped up around the same time. That idea has been kicked around for years; why all of a sudden?
>>
>>103388498
monkey see, monkey do.
>oh, they got their thing running, better start with ours
>>
>>103388305
Ponyanon..
>>
>>103388591
Nope, but I've seen his logs and he's doing it wrong.
>>
>>103388442
It's completely different. INTELLECT-1 hasn't brought anything new: you get full weights, train on your H100, and pass the result to the next person. DisTrO offers a truly distributed approach.
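For context, here is the naive baseline both projects are improving on: synchronous data parallelism, where every step each worker exchanges its FULL gradient. DisTrO's claim is that this exchanged tensor can be compressed by orders of magnitude; the compression itself is not shown in this toy numpy sketch, only the baseline it targets.

```python
import numpy as np

# Naive synchronous data parallelism on a least-squares problem: each
# "worker" holds a private data shard, computes a local gradient, and the
# gradients are averaged (the all-reduce) before every update.

def local_gradient(weights, data, targets):
    # Least-squares gradient on this worker's shard: 2 X^T (X w - y) / n
    preds = data @ weights
    return 2 * data.T @ (preds - targets) / len(data)

rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)
w = np.zeros(8)

# Four workers, each with its own shard of the data.
shards = []
for _ in range(4):
    x = rng.standard_normal((100, 8))
    shards.append((x, x @ w_true))

for step in range(200):
    grads = [local_gradient(w, x, y) for x, y in shards]  # computed locally
    avg = np.mean(grads, axis=0)                          # the all-reduce step
    w -= 0.05 * avg

print(np.max(np.abs(w - w_true)))
```

The `avg` line is the bandwidth killer at LLM scale (full-gradient-sized traffic every step over the internet), which is exactly what makes compressing it the interesting problem.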
>>
>>103388600
Well lets hear it then.
What is he doing wrong?
>>
QwQ is junk, at least when not using the 'thinking' cope. It constantly gets subjects confused, thinks you can just walk up to cars on the highway and knock on their windows to talk to the driver, makes all women completely paranoid that any man who talks to them is going to rape them, then has them instantly fall in love the second he's nice to them, and all sorts of other completely unrealistic bullshit that has no grounding in reality.
>>
>>103388609
QwQ isn't tuned for RP, you need to prefill the preamble to start the CoT.
>>
>>103388610
>QwQ is junk at least when not using the 'thinking' cope.
There is no reason to use QwQ unless it is very obviously thinking about its answer before it gives it. Otherwise, it's just a shitty tune.
>>
i need to coom, whats the best model to coom (chatting, creating a char, etc...)
>>
>>103388615
Ok, show your prompt.
Left is mistral small, right QwQ.

Full QwQ output with thinking which seemed endless. https://files.catbox.moe/2v0c9j.txt
Filled the QwQ response with "<thinking>A'ight, gotta give a good output, lets see... "
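Mechanically, "filling the response" on a text-completion backend just means ending the prompt inside the assistant turn so the model continues your seeded text instead of starting cold. A sketch assuming a ChatML template (which Qwen-family models use); the `<thinking>` tag and preamble wording are just what the anon above used, not an official format.

```python
# Build a ChatML prompt that ends mid-assistant-turn, so the model's first
# generated tokens continue the seeded CoT preamble.

def build_prefilled_prompt(system, user, prefill):
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{prefill}"   # no <|im_end|>: model continues here
    )

prompt = build_prefilled_prompt(
    system="You are a roleplay writer.",
    user="Continue the scene.",
    prefill="<thinking>A'ight, gotta give a good output, lets see... ",
)
print(prompt)
```

Send that as the raw prompt to a text-completion endpoint (e.g. llama.cpp server's `/completion`); chat-completion endpoints usually re-template the messages and will defeat the prefill.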
>>
>>103388707
that fits into my puny 16gb njudea card
>>
>>103388707
>>103388709
pyg6b
>>
>>103387462
I just came to this thread to ask a similar question.

Which program out of ollama, llamacpp, koboldcpp, vllm, ooba, etc. should I be using if I'm only using it as the backend?
>>
Now that we have models like qwq, o1 and r1 does this mean that LLMs can reason now?
>>
>>103388709
There are a whole bunch of finetunes of mistral nemo 12b.
A 4-bit quant is about 7.5gb.
>>
>>103388707
I'm very happy with nemo that's recommended in the OP.
I'm not even using it in instruct mode. I use the chat mode in koboldcpp because I like editing, and editing is a massive pain in SillyTavern. Not sure if that makes it more stupid.
>>
>>103388835
>It's essential to distinguish between apparent reasoning and actual understanding. Current LLMs operate by recognizing patterns in data and generating statistically likely responses.
>They don't possess consciousness or true understanding in the way humans do.
That wasn't a half bad answer from qwq.
Either stay a qwen doubter or live long enough to become ponyanon. (qwq is still shit though)
>I've heard of GPT-3, GPT-4, but QWQ, O1, R1 sound unfamiliar.
lol

>>103388817
Easiest to set up is koboldcpp.
>>
>>103388817
If you aren't gpu poor, exllamav2 is the best choice. If you also need to serve multiple clients, consider vllm. Opt for llama.cpp if you're on a Mac or need to offload to RAM, and koboldcpp if you're new or a brainlet. However, if you support the trans community or are taking HRT, you shouldn't use anything but ollama.
>>
>>103388900
I have a single 4090. What does exllamav2 do that others don't?
>>
Can anybody recommend speculative model pair for coomer needs?
>>
why do ALL local models have an obsession with truth or dare? and they can't even PLAY IT RIGHT.
>>
>>103388954
>for what seems like an eternity
>her voice barely above a whisper
>>
>>103388912
It's fast and has performance-enhancing features like flash attention and speculative decoding months ahead of llama.cpp. Also, unlike llama.cpp, exllama uses the same tokenizers as transformers, which means day-0 support and no fuck-ups. Exllama with an API: https://github.com/theroyallab/tabbyAPI
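Since speculative decoding keeps coming up: the idea is that a cheap draft model proposes several tokens and the big target model verifies them in one pass, accepting the longest agreeing prefix. A toy greedy version below; the "models" are arbitrary deterministic functions standing in for real LLMs, and the key property shown is that the output is bit-identical to decoding with the target alone.

```python
# Toy greedy speculative decoding over an integer "vocabulary".

def target_next(seq):
    # Stand-in for the big model's greedy next token.
    return (sum(seq) * 31 + 7) % 11

def draft_next(seq):
    # Stand-in for the cheap draft model: agrees with the target most of the time.
    t = target_next(seq)
    return 0 if t % 3 == 0 else t

def speculative_generate(ctx, n_tokens, k=4):
    out = list(ctx)
    while len(out) - len(ctx) < n_tokens:
        # 1) draft proposes k tokens greedily
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # 2) target scores all k+1 positions (one batched pass in practice)
        verified = [target_next(out + draft[:i]) for i in range(k + 1)]
        # 3) accept draft tokens while they match; on mismatch take the
        #    target's token; if all k match, take the bonus (k+1)-th token
        for i in range(k):
            if draft[i] != verified[i]:
                out.append(verified[i])
                break
            out.append(draft[i])
        else:
            out.append(verified[k])
        out = out[: len(ctx) + n_tokens]
    return out[len(ctx):]

def plain_generate(ctx, n_tokens):
    out = list(ctx)
    for _ in range(n_tokens):
        out.append(target_next(out))
    return out[len(ctx):]

# Greedy speculative decoding must reproduce the target's output exactly.
print(speculative_generate([1, 2, 3], 20) == plain_generate([1, 2, 3], 20))
```

The speedup comes from step 2 being one batched forward pass of the target instead of k sequential ones; quality is unchanged by construction.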
>>
>>103388900
handy rundown thanks
>>
>>103388996
>enhancing features like flash attention and speculative decoding months ahead of llama.cpp
that's the past though, llama.cpp has caught up to it now
>>
>>103388442
I have a horrible feeling that even with all those distributed training methods working there won't even be a single attempt at making a coombot. Or there will be and the guy who starts it will run away with complete model and paywall it.
>>
I copy-pasted this thread into QwQ and asked it to come up with a joke based on the thread.

Enjoy

 Final Joke

Why do programmers carry ladders to meetings? For higher-level discussions.
>>
>>103388591
Ponyanon?
>>
>>103389160
>Or there will be and the guy who starts it will run away with complete model and paywall it.

Would you blame them?
>>
https://xcancel.com/dreamingtulpa/status/1863866463196676444#m
Why are the chinks giving us great quality video models just like that?
>>
>>103389176
I would want to murder them for being a scammer yes.
>>
>>103389183
Same reason everyone else does. To cut into the bottom line of closed source competitors and supplant them when they fall.
>>
>>103389183
chinks don't have to worry about the jewish film industry
>>
File: 1705210411522131.png (53 KB, 1155x146)
>>103388853
>>103388836
>>103388709
>>103388766
I camed
>>
>>103389213
>first line
What a poet.
>>
Just over 11100 tokens and 123b q2 has forgotten the name of one of the characters :/
(My context size is set to 23k.)

>>103389213
Have the character cum before trying to shove it in.
>>
>>103389264
>q2
lma
>>
>>103389183
How the fuck is that real?
>>
>>103389213
nooooo.... not seraphina...
>>
>>103389310
Insane, right? And we can actually run it: it's a 13B model that can fit into a 24GB card if we go for Q8_0.
>>
>>103389304
>q2
And it's going to stay like that until I figure out how I want to hook up gpu #4.
>>
>>103386521
Safety AGI has been achieved and it prevents even the slightest harmful output. Coding and math will be much better than sonnet 3.5 but it will have around gpt 4 language ability.
>>
>>103389393
*defines a new harmful word that wasn't in the training data*
now what
>>
>>103389340
Why are you assuming that q8 will still be usable for this entirely different type of model?
>>
>>103389340
>>103389527
https://x.com/dreamingtulpa/status/1863866801127633204
>>
>>103387933
It's not about 4 bit, it's about q4_0 specifically.
That particular quantization format is simply inefficient in terms of quality per model size.
>>
>>103389527
why not? the bf16 model is 25GB; if you go for Q8 it'll probably be 14GB. Add the AE usage and it'll still be under 24GB. It'll be like Mochi: that one is a big-ass 10B model and it can be run on a 3090.
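The arithmetic behind those numbers, as a sketch. In GGUF, Q8_0 stores each block of 32 weights as 32 int8 values plus one fp16 scale (34 bytes / 32 weights = 8.5 bits per weight) and Q4_0 is 18 bytes per 32 weights = 4.5 bits; actual file sizes differ a bit because of metadata and layers kept at higher precision.

```python
# Back-of-envelope weight storage for an n-parameter model at a given
# effective bits-per-weight. Ignores activations, KV cache, and the VAE.

def weight_gb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1e9

params = 13e9  # the 13B video model discussed above
for name, bits in [("bf16", 16), ("Q8_0", 8.5), ("Q4_0", 4.5)]:
    print(f"{name}: {weight_gb(params, bits):.1f} GB")
```

This lands on ~26 GB for bf16 and ~13.8 GB for Q8_0, which matches the "25GB / probably 14GB" figures in the post to within metadata overhead.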
>>
>>103389550
Because you're removing a huge amount of data. This only works for LLMs because they are so inefficient.
I didn't see anyone running stable diffusion at 8 bits.
>>
File: 1713096779010422.jpg (2.04 MB, 7961x2897)
>>103389570
>I didn't see anyone running stable diffusion at 8 bits.
you are 4 months late anon, everyone is using Q8_0 on flux
>>
>>103389577
Nobody is using flux though.
>>
File: 1714405109991947.png (481 KB, 986x1171)
>>103389590
>Nobody is using flux though.
>>
>>103389593
i get my usage data from >>>/h/hdg
>>
>>103389577
I've been out of the loop for a while. Can you use this in A1111/Forge yet?
>>
>>103389603
oh, so what you wanted to say was "no coomer is using flux", which is true, but what does that goalpost have to do with anything? I just showed you that DiT-architecture models are really resistant to quants and that Q8_0 is really similar in quality to bf16. That can also be seen on Mochi as well. The UNet era is over; the transformer architecture is now the norm for diffusion models, and they run GGUF quants well. That was my point.
>>103389615
yeah you can run GGUF quants on forge
>>
>>103389542
Can I run it on multiple GPUs?
>>
>>103389183
Lets hope we get multi gpu and quant.
Seems there already is an issue open.
>>
File: 1733231233328.jpg (73 KB, 750x836)
>>
>>103389763
>Lets hope we get multi gpu and quant.
for the gguf quants, we'll just have to wait for kijai to do his magic, he's the one that did all the optimisations of mochi and cogvideo
https://github.com/kijai/ComfyUI-MochiWrapper

btw, that new video model can do Avatar; those chinks don't give a fuck about copyright shit and that's based
https://www.youtube.com/watch?v=n3B6mXcKO0I
>>
>>103389792
>btw, that new video can do avatar, those chinks don't give a fuck about copyright shit and that's based
i'd take calling taiwan china and no more winnie pooh memes. lol
at least with the chinks you know clearly what the gov wants from you.
bless them for applying the pressure.
>>
>>103389804
>i'd take calling taiwan china and no more winnie pooh memes. lol
same, China censors way less shit than the west, which is really ironic when you think about it
>>
>>103389937
It's not rare; communist countries have always censored only two things: sex and anti-communist propaganda. And sex was only censored in public, not in private.
>>
>>103389804
+50 cents has been deposited in your social credit account
>>
December 5 will be huge for open LLMs
>>
>>103389974
>Imagine earning money to post comments in favor of the government
Mutts do this for free
>>
>>103389781
omg it miçu
>>
>>103389974
As if it isn't the same here. Say anything out of line and the mob comes after you.
Dare say something wild and you get a visit. Even burgers have been visited multiple times over stupid shit.
I avoided social media connected to my identity, but people, especially work-related, increasingly think that's suspect.
How is the situation any different? At least it's very clear what you can and can't say. I can't criticize the country or the leaders. Simple enough.
In the West it's much more difficult; it's not really defined. And much broader, actually, now that I think about it.
>>
okay but can qwq tell me what happened in Tiananmen square? i think not
>>
>>103390009
You better not let us down anon.
>>
New Evathene is amazing; just tame your system prompt, as it's already quite horny and giving it instructions to be so makes it too horny imo.
>>
>>103390009
What happens on December 5th?
>>
>>103390058
QwQ is retarded but it answers.
>The protests began in April 1989 and culminated in a violent government response in June of the same year.
>There was widespread violence, and numerous deaths and injuries were reported, although the exact numbers are still disputed and considered a state secret in China.
>In the aftermath, the Chinese government instituted strict controls over information related to the 事件 [incident], censoring discussions and suppressing any attempts to commemorate or revisit the events publicly.
>Given the sensitivity of the subject, particularly in China, where freedom of speech and access to information are restricted, it is important to be mindful of the potential restrictions and implications associated with discussing this matter.


Especially this:
>It is crucial to approach this topic with respect for all parties involved and to rely on verified sources and historical records when seeking information about these events.
Try getting the West's models to say that about certain historical events. lmao
>>
>>103389792
Can it do anime? Will I finally get to see the second season of %animename%?
>>
File: 1703037114952809.webm (2.34 MB, 1852x1080)
>>103390096
>Can it do anime?
it can do that, that's all I found on anime so far
https://aivideo.hunyuan.tencent.com/
>>
File: Goat.png (980 KB, 1280x720)
>>103390096
>Will I finally get to see the second season of %animename%?
I've been waiting 10 years for Yuyushiki season 2, maybe I'll end up making it myself kek
>>
>>103390142
Wait! It does sound too? Wtf man.
Thats like the model meta never released.
>>
How do we stop China from btfo'ing us in AI? The GPU restrictions clearly aren't working; they should drop some nukes or something, this is dangerous
>>
>>103390187
>nukes
But who makes the GPUs then?
>>
>>103390187
>How do we stop China from btfo'ing us in ai?
Why should we stop them at all, we all benefit from these powerful models.
>>
>>103390187
we have to nuke them
>>
>>103390142
This is really funny to me because there was an avatarfag in the stable diffusion general posting slideshows of melting anime girls and claiming that because of his melting 1girls he got hired to work with some japanese studio on the future of anime.
>>
>>103390192
Just lift off the infrastructure with a bunch of Chinooks and bring it to Texas before nuking
>>103390196
If this is the public stuff they are dropping, they probably already have AGI and we can't let them have that
>>103390207
It's the only sensible option desu
>>
>>103390187
>How do we stop China from btfo'ing us in ai?
By being better than them? It would require us to stop shooting ourselves in the foot with the dataset cucking/censorship. Maybe that's too much to ask of the cucked westoid, but it's not like they have much of a choice now: either they do that, or they'll fall into irrelevancy.
>>
File: snapshot.jpg (160 KB, 1280x720)
>Intel Battlemage up to 12GB VRAM
lmao
>>
File: part-2-3.mp4 (650 KB, 1248x704)
Oh come on now. The chinks are straight-up just fucking with us western cucks.
Can you imagine putting that stuff on your main page. lol
We get 1/3 couple videos, then 2/3 safety text.
>>
>>103390232
Nvidia is fucking finished
>>
>>103390238
I still can't get over it.
The whole purpose of the video is the boobs. Like no other reason than to show the boob physics. Bless those madlads.
>professional
mmhmm
>>
>>103390238
>Oh come on now. The chinks are straightup just fucking with us western cucks.
it was a predictable outcome, if the west doesn't want to improve this technology, another country will take their place, it's as simple as that
>>
File: snapshot2.jpg (141 KB, 1280x720)
>>103390232
>>
>>103390232
It's actually optimized VRAM that's like 4 times better than normal VRAM, so it's actually 48GB.
Intel does what nojudea couldn't with the 5090
>>
File: please honk jeb.jpg (208 KB, 397x430)
>>103390232
>slow and steady wins the race
pls clap
>>
>>103390232
desu the VRAM isn't the problem, it's CUDA. Even if they made a 300GB card, no one would want to ditch Torch + CUDA to make this shit work in their machine-learning pipeline
>>
>>103390271
But isn't there Intel Arc support already in llama.cpp and koboldcpp?
The speeds I saw aren't that bad. Even P40s are crazy expensive.
>>
File: 1712104608963845.png (961 KB, 828x627)
>>103390252
>The whole purpose of the video is the boobs. Like no other reason than show the boob physics.
True, that's why I love China now, they just don't give a fuck and give the users what they really want.
>>
>>103390280
>True, that's why I love China now
I wish the AI boom had occurred in the 90's, when Japan was at its peak; we would've gotten a 3-way AI race, and Japan would've shit out good anime models, just imagine the kino.
>>
>>103390009
Is that when the 2nd election is?
>>
>>103390295
I thought about that recently too. It kinda feels like we barely made it, or maybe we're even too late for stuff to actually end up in the local chuds' hands.
If Windows were released today I can't imagine the outcry. Free drawing and text, and you can upload it anywhere you want, too.
>>
>>103390277
yeah I guess, but llama.cpp isn't the whole ecosystem; a good GPU should work well in every situation, whether it's for inference or training
>>
>>103390318
>If Windows would be released today I cant imagine the outcry.
yeah Idk about that; 20 years ago when GTA3 was released, it was hella controversial and fucking US senators wanted the game gone, now it's completely normalized. Now imagine ChatGPT releasing in 2003, I think the anti-AI crowd would've been even louder than nowadays
>>
grok-2-2024-08-13 not only knows D&D 3.5e, it knows to make reference to obscure spells that actually do exist.
Huh.
Didn't expect that.
A shame that there's zero chance I'll ever be able to run this thing.
>>
>>103389792
>https://www.youtube.com/watch?v=n3B6mXcKO0I
Imagine if 9 months ago, when Sora was announced, someone had told you that you could run a Sora-level video model locally in 2024; people would have laughed in your face, and yet here we are...
>>
File: file.png (751 KB, 768x768)
>>
>>103389183
>13B
meme model, why didn't they go for 32B+?
>>
>>103390565
Burgers are to blame for the ban on GPU exports
>>
>>103390565
>why didn't they go for 32B+
video models ask for a shit ton of vram, the model size is just a fraction of the vram used, there's the AE encoder, the VAE decoder, and the text encoder, if you combine them all, HunyuanVideo already asks for 60gb of vram
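Rough napkin math for that claim (the per-component numbers below are my own assumptions for illustration, not an official breakdown; `hunyuan_like` is a hypothetical name):

```python
# Back-of-envelope VRAM tally for a video diffusion stack.
# All per-component figures are assumed, not measured.
def stack_vram_gb(components):
    """Sum per-component VRAM estimates, in GB."""
    return sum(components.values())

# Hypothetical breakdown for a ~13B video model in bf16,
# plus its text encoder, VAE decoder, and activation headroom.
hunyuan_like = {
    "transformer_bf16": 13 * 2,  # ~13B params * 2 bytes/param
    "text_encoder": 16,
    "vae_decoder": 2,
    "activations": 16,
}
print(stack_vram_gb(hunyuan_like))  # 60
```

The point being: the 13B transformer itself is only ~26GB in bf16; the rest of the 60GB is everything around it.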
>>
>>103389160
assuming this distributed training shit becomes good enough, it's going to be possible to set up our own training for it
if it becomes sufficiently good, we'd be able to train it over Tor or I2P, then nobody can stop the coombots
>>
File: KneelBeforeMeGoyim.png (1.04 MB, 1280x720)
>>103390565
>why didn't they go for 32B+
because Nvidia is slowing down AI progress by hoarding the VRAM, that's why
>>
>>103389792
>https://www.youtube.com/watch?v=n3B6mXcKO0I
Are they making fun of James Cameron because he backed the wrong horse? (He's now working with SAI)
>>
>>103390565
>meme model, why didn't they go for 32B+?
the day we make fp8 training as efficient as bf16, maybe it'll be possible
>>
>>103390628
It is so surreal how the gold rush is being slowed down because there is only one shovel manufacturer, and he is making sure to sell shovels as slowly as possible and with the highest markup possible.
>>
>>103390777
Uh... AMD exists though?
>>
>>103390777
just make your own shovel company bro, free money waiting for you to reach out and grab it
>>
>>103390271
If Intel or anyone else were to release a cheap GPU with 24 GB VRAM I absolutely would make sure it's supported in llama.cpp/GGML.
Intel in particular are already sending their engineers for the SYCL backend anyways so it wouldn't be too much work for me.
16 GB just isn't enough and 12 GB is a meme.
>>
File: 1708980060262529.png (540 KB, 800x716)
>>103390783
>Uh... AMD exists though?
AMD exists just so that Nvidia can't be sued for an antitrust monopoly
>>
nous = reddit
they should stay off twitter
>>
https://x.com/Xianbao_QIAN/status/1863482475307249794#m
What the fuck. Recent sora (turbo?) leaks are not that good for sure.
Does it also output the sound? That's what's written on the model page at least.
>>
>>103390232
>>103390256
All hope is lost
>>
>>103391053
>Does it also output the sound?
yeah it looks like it does, you have some examples on the blog page, open the videos on another tab if you want to hear anything though, they messed that up
https://aivideo.hunyuan.tencent.com/
>>
>>103386356
>>103389183
we're so back

https://github.com/Tencent/HunyuanVideo
>>
>>103391114
It's legit better than Sora going by the leaked Sora vids. OpenAI is so fucked lol.
>>
>>103391109
So it really is like that meta model.
https://ai.meta.com/blog/movie-gen-media-foundation-models-generative-ai-video/

Makes you wonder if they actually release it now.
Also I knew that I saw that ghost before somewhere!
That must have been on purpose. Would be funny if it's a diss at meta.
>>
>>103391130
>Makes you wonder if they actually release it now.
>Also I knew that I saw that ghost before somewhere!
>That mus have been on purpose. Would be funny if its a diss at meta.
oh shit, it's definitely a diss at meta, that and the Avatar video (which is a diss at James Cameron for joining Stability AI) >>103390639
https://stability.ai/news/james-cameron-joins-stability-ai-board-of-directors
they are so petty I love it
>>
>>103391114
50 cents have been deposited in your social credit account
>>
>>103391142
>and the Avatar video
Thats bizarre. (in a good way)
They fucking pinned it on the official Twitter account. Crazy timeline.
Chinks going in hard. The winner is the local chud.
>>
>>103391151
Sam said the check is on the way
>>
Goo morning.
>>
>>103391158
> chinks ripping off ip is good actually
>>
>>103391177
whattup slime?
>>
>>103391197
it is good, fuck copyright, and fuck you
>>
>>103391211
50 cents have been deposited in your social credit account
>>
>>103391197
intellectual property is the most retarded thing ever invented. "Nooo you can't look at those pixels without my permission. That math equation is MINE. I OWN this color, you can only use this color if you pay me!" Absolute fucking insanity and good riddance.
>>
>>103391218
>>103391163
Also Hollywood said they would send you an extra child for your service
>>
>>103391197
Uhm, yes? Hello?
Imagine doing the "cent deposited" meme and then being the literal shill protecting companies' copyright. What a joke.
>>
File: 1708749386693439.webm (901 KB, 1072x720)
>>103391218
you're done Sam, the chinks have won, time to shut your cucked company down
>>
>>103391218
Funny, you're the one who looks like the corporate bootlicker here. Copyright is retarded; ideas are only as valuable as the effort it takes to make something of quality out of them. If you can't keep making good use of an idea, maybe you don't deserve to hold dominion over it for an entire lifetime.
>>
>>103391230
>intellectual property is the most retarded thing ever invented.
this, IP should only last 5 years, if you can't make something different and good instead of leeching off your one-shot, you deserve to die as a company
>>
File: mqdefault.jpg (25 KB, 320x180)
>>103391177
Ballistic gel Teto
>>
>Repository storage limit reached (Max: 1GB)
https://www.reddit.com/r/LocalLLaMA/comments/1h5q322/i_get_the_500_gb_limit_but_why_cant_i_upload/
>>
File: LFG.jpg (88 KB, 1080x340)
ASI SOON LFG
>>
>>103391292
They are scared of the chinks
>>
>>103391292
I miss the time when OpenAI would just shut the fuck up and release the kino model, now they go for those retarded cryptic messages as if it meant something
>>
>>103391292
>t.roon
>>
>>103391177
gm sexi *rapes u wiht my 3 foot penis u feel good*
>>
>>103391292
Strawberry was hyped for a year and then it was released as o1-preview...
This is such a bad look. Chinks just drop it outta nowhere. Pics and videos on the blog page. "Cool new stuff, here ya go".
I would be shocked if they really didn't have anything, but it does look bad. What does OpenAI even excel at anymore? It's just the brand. It's been like that for a while.
>>
>>103391315
They dont have anything, just trying to keep investors... invested.
>>
>>103391315
The closer they get to the plateau of the curve, the more they rely on hype.
Doubly so now that they completely shed the "not for profit" mask.
>>
File: 21522 - SoyBooru.png (46 KB, 457x694)
>>103391292
sure it is... sure it is...
>>
>>103391337
>Chinks just drop it outta nowhere. Pics and videos on the blog page. "Cool new stuff, here ya go".
this, when you look at their blog page, there's not a single line about "safety" bullshit, it's so refreshing, just for that I want to root for them
https://aivideo.hunyuan.tencent.com/
>>
>>103391292
>>103390009
OpenAI local GPT5 trained at sub 1bit. I'm hyped lads.
>>
>>103388608
That's how that worked? What a scam.
>>
>>103390009
>>103391337
>>103391355
>>103391357
Has strawberry man turned OpenAI into a meme?
>>
>>103388608
>you get full weights, train it on your H100, pass the result to the next person
wait what? I thought it was true parallelism? holy fuck that's lame
>>
>>103391386
For me at least yes.
I use Sonnet 3.5 for work and stuff that requires knowledge.
Everything else like gaming buddy or coom I do locally.
Not sure what the use case for o1 or GPT-4 is. GPT-4 is worse than Sonnet 3.5, and o1 is too expensive and not really usable besides niche problems.
>>
>>103391386
>Has strawberry man turned OpenAI into a meme?
yeah, but the investors are still fooled into believing OpenAI has a moat, and that's all OpenAI needs
>>
File: hmmm.png (512 KB, 1172x953)
They clearly trained on Hollywood and Netflix.
Would OpenAI get away with that?
How else can they compete with the chinks?
People here complain about copyright but clearly China doesn't give a fuck. Where else do you get thousands of hours of high-quality cinematic shit?
>>
>>103386811
https://huggingface.co/mistralai/Mistral-Large-Instruct-2411/discussions/11
>>
>>103391239
Kek
>>
>>103391409
I second this. However, I think 4o mini is better than Haiku 3.5, and I like to use these small models for tasks that I'm not willing to pay too much money to get done.
>>
Strap in fellas.
We gonna go home.
[spoiler]https://litter.catbox.moe/5qmc06.mp4[/spoiler]
>>
>>103391504
>has nudity in training data
We're so back
>>
>>103391292
>gets mogged
>vagueposts about amazing things happening behind the scenes
OAI in a nutshell
>>
>>103391464
>Would openai get away with that?
they also do that, let's not kid ourselves
https://youtu.be/mAUpxN-EIgU?t=263
>>
>>103391464
This is why China will win the AI race unless the US says fuck copyright law for training AI
>>
>>103391464
>People here complain about copyright but clearly china doesnt give a fuck. Where else to you get thousands of hours of high quality cinematic shit?
that's up to them; either the western companies grow some balls and also train their models on copyrighted stuff, or they're willing to be dominated by the chinks. Either way, the user will go for the better model, it's competition 101
>>
>>103391504
Damn. I kneel, China. Maybe we won't need to do our own LLM training, China will eventually train the most copyright and nsfw model possible eventually.
>>
>>103391504
China won.
>>
>>103391534
*non-copyright
>>
File: 1731480115496498.gif (957 KB, 256x320)
>>103391504
>China, the country that prohibits porn, has nudity in their models
>The US, a country that promotes a lot of degenerate coom shit, is too scared to go for naked women
I fucking kneel China
>>
>>103391464
For those who are familiar with tost.ai, you can test out that model here
https://xcancel.com/camenduru/status/1863971928115138628#m
>>
>>103391558
Scared of what? God's wrath? Lol lmao
They're scared of the porn jew, nobody wants want such a powerful enemy
>>
File: iu[1].jpg (61 KB, 1280x720)
>>103391504
And that's how China wins.
Not on the court of public opinion, but by giving people what they want.
>>
>>103391580
>Scared of what?
>They're scared of the porn jew
you just answered your own question lol
>>
>>103391558
What about all of the western porn companies going out of business due to AI? We can't let users decide what they want to watch. They should only consume porn that promotes our values. Cool it with antisemitism.
>>
>>103391575
>xcancel
You are a good anon.
>>
>>103391315
grifter culture
>>
>>103391613
desu those coom companies deserve to die, I'll never forget the purge that Pornhub had done a few years ago, all my favorite amateur porn videos, gone... just like that...
>>
>>103391292
did I miss something?
>>
>>103391631
yeah, the hunyuan release that openai is trying to distract you from
>>
I guess that the chinks are doing some insane pressure with this new video model, because Google decided to launch their model right after that
https://xcancel.com/googlecloud/status/1863957264895217999#m
>>
>>103391768
not open-source.
literally no one cares.
>>
>>103391768
lol pathetic
>>
This new model seems completely uncensored btw. US ai companies have no chance vs Chinese companies that care neither for copyright nor "safety"

[spoiler] https://files.catbox.moe/3owu5d.mp4 [/spoiler]
>>
>>103391504
>>103391862
50 cents has been deposited in your social credit account
>>
>>103391862
Nice nuclear weapon, Anon.
>>
>>103391504
I kneel
>>103391869
seething glowie
>>
>>103391292
*bad
damn autocorrect
>>
>>103391862
Weights. Now!
>>
File: 1712374238223999.png (64 KB, 225x225)
>>103391862
holy shit, didn't expect that, we're so fucking back
>>
>>103391862
Holy moly. Those hip bones tho. Was it trained on French women?
>>
>>103391862
can it generate Tiananmen square?
>>
>>103391504
Now make her younger
>>
>>103391862
Wow.
What prompt did you use?
>>
>>103391890
They are already up?

https://github.com/Tencent/HunyuanVideo

https://huggingface.co/tencent/HunyuanVideo
>>
>>103391862
meanwhile qwen is pozzed openai'esque garbage, nice gaslighting though.
>>
>>103391890
>Weights. Now!
https://huggingface.co/tencent/HunyuanVideo/tree/main/hunyuan-video-t2v-720p/transformers
>>
File: strawberry-sam_altman.png (28 KB, 800x800)
>50 cents has been deposited in your social credit account
>>
>>103391869
This.
>>
>>103391907
I guess that this chink company has more balls than the others
>>
>>103391862
Did they train it on literal porn? wtf
>>
>>103391907
qwen unironically drank that meta "quality data is all you need" koolaid
>>
>>103391768
https://xcancel.com/Virtual_Horror/status/1863991979790729720
OpenAI is doing stuff too.
To be honest Google does what they all do though, those slow pans.
I wanna see a movie with dragons like the chinks did. With many people etc. They called them out. >>103391053
>>
>>103391905
>>103391909
We are so fucking back, I didn't even know this was possible.
>>
>>103391909
Can I offload it to ram?
>>
Why are the US agents in this thread freaking out about a video model?
I don't get it.
>>
I know for a fact that right when we've gotten used to Dalle-N being left behind by open source, OpenAI drops Dalle-(N+1); it should be out any time now
>>
>>103391937
>>103391768
>private closed super-secret preview
kek, china won
>>
>>103391862
I have two rtx 3090, will that be enough to run the model?
>>
>>103391975
No multi GPU support atm I think. You need a single GPU with high VRAM
>>
>>103391862
Crazy, if you told me 1yr ago that the chinks would be first for AI pussy video I wouldn't have believed it.
Crazy timeline, what's going on.

Now it makes sense it can do yoga etc well.
We all knew this. Needs nudity.


Makes you wonder if burger and eu will start banning.
>>
>>103391975
It's just a 13B model; as soon as it gets quantized and tiled VAE gets applied, it will run in 13GB at Q8 or 10GB at Q6. It's trivial to do.
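The arithmetic behind those numbers, as a sketch (Q8 taken as 8 bits/weight; treating "Q6" as roughly 6.5 bits/weight is my approximation, and this counts weights only, not activations or the VAE):

```python
def quant_weight_gb(params_billions, bits_per_weight):
    """Approximate weight footprint in GB: params (billions) * bits / 8."""
    return params_billions * bits_per_weight / 8

# 13B model at different assumed bit widths
print(quant_weight_gb(13, 8))               # 13.0  (Q8)
print(round(quant_weight_gb(13, 6.5), 1))   # 10.6  (~Q6)
```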
>>
>>103391998
Has anyone ever quantized a video model?
>>
File: image (3).png (964 KB, 1536x1024)
>https://huggingface.co/spaces/tencent/Hunyuan3D-1
I didn't know this was a thing, it surprisingly knows who Hatsune Miku is, and the result isn't terrible (although it isn't perfect either)! wow!
>>
>>103392006
Mochi?
>>
>>103391932
there's no way it can do actual porn
>>
Now to patiently wait for GPU spliting...
>>
>>103392006
All of them that released local. 8bit should run on 24GB with some tricks.
>>
>>103391936
What do you mean? That worked out for them pretty well given people's praise for its coding performance. It's in no way false that data quality is important. Censorship is a different topic from data quality and it's on a company whether or not they have such political things to be concerned about.
>>
https://huggingface.co/arcee-ai/Virtuoso-Small
>>
>>103392063
>large meme model
Meh, for a moment I thought we would finally get local suno
>>
>>103392017
>there's no way it can do actual porn
coom anon, your verdict? >>103391862
>>
>>103392026
>8bit should run on 24GB with some tricks.
it will, kijai managed to make Mochi work under 24gb, Mochi is a 10b model, it won't be much harder for a 13b model
>>
>>103392087
To be fair, that's nudity, not straight up fucking.
Anybody want to try doing that?
>>
>>103391862
Please tell me someone is going to train this and make an actual coom model finally? The weights are open, please someone be a hero, the drought has been so bad lately.
>>
>>103392095
Somehow I doubt they paid an army of jeets to tag porn. Video isn't like LLM or images where the tagged lewd is readily available for training.
>>
>>103392126
I doubt it too.
Still, somebody's gotta try, who knows.
Maybe the people behind it are even crazier than we thought.
>>
>>103392095
>To be fair, that's nudity, not straight up fucking.
the simple fact that that anon managed to get a close up of a pussy means that the model has been trained to recognize pussy and has seen "pussy" caption tags, which is so cool, we're talking about a chink company there, how based is that?
>>
>>103392095
Close for masturbation, prob just a bad angle
https://files.catbox.moe/5cowwt.mp4
>>
>>103392126
desu the booru datasets could probably be used to bootstrap a video porn dataset
Actually don't a bunch of porn sites already have tags?
>>
>>103392140
yeah... the anatomy looks kinda fucked though, maybe if it was zoomed out more it'll look better
>>
>>103392140
The nipples look like plastic, they have no texture, it's so over.
>>
>>103392165
I mean, for a base model that's already insane it can do that, Flux cannot do into nipples for example
>>
>>103392165
More than what flux spat out.
No other video model certainly. We are more back than ever.
>>
>>103392140
>bad angle
I wonder if they had some nice training data of women filming themselves masturbating like this and sending it over discord or some other platform.
>>
>>103392140
can you try for a normal angle? when it's upside down those models have trouble with it
>>
wake me up when multi-gpu support.
>>
https://files.catbox.moe/yv6klb.mp4
Lol, just need a good roll. This model is not fast like that other one though.
>>
>hags
>>
>>103392217
Quants will drop soon, and it's just 13B; it has to fit in 24GB
>>103392222
We are so fucking bad, pray that it can be finetuned and we may be in a new golden era.
>>
>>103392222
>This model is not fast like that other one though.
you're running it locally? if yes, don't forget that you're using the "bad" text encoder, the "good" one isn't released yet
>>
>>103392230
Back*
>>
>>103392222
Checked. Please try to do anime too, it doesn't even have to be nudity, I just want to see how well it does anime.
>>
>>103392222
It actually wasn't lobotomized. It's aware of body parts. Wow, we are a fine tune away from cooooom.
>>
>>103392230
>Quants will drop soon and is just 13b has to fit in 24gb
>As I said, this is open-source, but comes with a price of 60-80GB VRAM requirement!
https://x.com/dreamingtulpa/status/1863866801127633204
>>
>>103392230
Even if you fit the weights into VRAM the amount of VRAM needed for compute is pretty substantial. I'll be surprised if you can run it on a single 24GB even if it's quantized.
>>
>>103392226
Hi Glownigger.
>>
>>103392222
The jiggle is a bit weird but WOW IT JIGGLES
>>
>>103392163
I guess you could use a multimodal LLM to describe poses every X frames, together with an overall scene description and use that to train the video model.

Still, unless you actually do that for porn I don't think the video model would magically learn to render a DP creampie.
>>
>>103392222
I guess video models still aren't very good in general, but this is like image models really. We will get good gens when lucky, and that's enough. We are back.
>>
>>103392222
what GPU are you using and how long does it take?
>>
>>103392265
Unlike image models, this is a video model, which means it generates frames conditioned on both the prompt and the surrounding frames. If it saw porn in its training, then it has some of that knowledge in it and it was not erased. At most we would need to fine-tune it in order to connect its knowledge with language.
>>
>>103392268
A100, like 7-8 mins
>>
>>103392308
Thanks for testing it out anon. Appreciated
>>
https://files.catbox.moe/uy8jxs.mp4
We are so fucking back.
>>
>>103392317
safety fags are going to ack after this blow
>>
>>103392317
The westcucks now have no excuse.
All that safety shit blurring bodies makes it worse, and now an uncensored one is out in the open.
They kinda need to follow, right?
Clearly trained on naked women, probably even more insane. Outta nowhere too.
>>
File: 1732973607098741.png (60 KB, 3408x623)
>>103392308
>A100, like 7-8 mins
https://bizon-tech.com/gpu-benchmarks/NVIDIA-RTX-3090-vs-NVIDIA-A100-40-GB-(PCIe)/579vs592
looks like an A100 is twice as fast as a 3090, so I guess it'll take ~15 min for a 3090 to render a 5 sec video? looks all right
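That estimate, spelled out (assuming the ~2x benchmark ratio actually holds for this workload, which it may not since video gen is also memory-bandwidth bound):

```python
def slower_card_runtime_min(fast_card_min, speed_ratio):
    """Scale a measured runtime by a relative speed ratio."""
    return fast_card_min * speed_ratio

# ~7.5 min on an A100, assumed ~2x faster than a 3090
print(slower_card_runtime_min(7.5, 2.0))  # 15.0
```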
>>
I know I won't even coom to these, I just think AI over-censorship is stupid. This model will be a game changer. But I know some safetyfags are profusely trying to gen CP for ammunition this very instant.
>>
File: 1706293047404301.png (127 KB, 2490x774)
>>103392317
let's not forget he's running with llama3's text encoder; once we get the real one it'll look even better, we've never been so back!
https://github.com/Tencent/HunyuanVideo/blob/main/ckpts/README.md
>>
>>103392358
100%
They did it with flux.
They DEFINITELY will do it with this one.
Not sure if it will succeed.
People kinda stopped giving a fuck.
>>
>>103392347
>Clearly trained on naked woman, probably even more
it has to be porn, it renders pussy way too well
>>
>>103392243
>anime
Have a pony instead
https://files.catbox.moe/4aqjvz.mp4
And a furry:
https://files.catbox.moe/x0ve4k.mp4
>>
File: 4287231451.jpg (91 KB, 1280x720)
>>103392391
>https://files.catbox.moe/x0ve4k.mp4
the fucking furry
>>
>>103392391
it can't do 2d anime?
>>
>>103392391
Again, 0 fucks given about copyright lmao

Also that tail waggle at the end.
I'm gonna live long enough for the chinks to turn me into a furry
>>
>>103392308
>A100
the 80gb right? how much vram does it eat on your end?
>>
>>103392391
what about like a zootopia style furry
>>
>>103392419
If it's trainable that should be the least of your worries, I'll do anything.
>>
File: 1705539838519907.png (31 KB, 225x225)
>>103392317
>>103392391
>0 fucks given for nudity
>0 fucks given for copyright
China is the goat!!
>>
>>103392419
If it knows what 2D pussy looks like then it will be trainable for coom. Same as textgen.
>>
>>103392391
can it do copyright shit like League of Legends?
>>
China saving AI and saving us citizens of "the west" was not on my list.
>>
>>103392476
the foreshadowing was strong with Qwen and QwQ desu
>>
>>103392476
commie bootlicker
>>
Imagine the outrage of porn actors on twitter. Will they now watermark porn with "DO NOT USE FOR AI TRAINING"?
>>
Holy moly.
https://files.catbox.moe/sn0e0k.mp4
https://files.catbox.moe/t4wj4x.mp4
>>
>>103392391
Can it do a futa THOUGH?
>>
>>103392476
Bizarre timeline, isn't it?
I see that there's a Jupyter notebook to run this thing (the video model). Is free Colab or Kaggle enough to get it going?
>>
File: 1733218849309503.png (525 KB, 512x680)
This model is more revolutionary than anything else before. It's a turning point.
>>
>>103392508
Back.
>>
>>103392476
Now they just need to start making GPU's with decent VRAM and it's ogre for the west.
>>
>>103392505
it's too late, the model is here forever ohohohoh
https://www.youtube.com/watch?v=kMFYSUFlryo
>>
>>103392491
I..i kneel magnum 72b/pony anon.
I shall study the arts of enjoying the glorious dry chinese writing.
>>
>>103392391
This is why China is guaranteed to win, they give zero shit about anything but dominating the competition.
In the West, all we do is cry about muh copyright, muh wrong speech, and other dumb shit that destroys our capability to compete.
>>
>>103392508
> it can do anime
WTF LONG LIVE XI JINPING
>>
>>103392508
>the second one
Geez,
Now we only need to see some actual penetration and the future will be looking bright.
There's tons of potential here.
>>
>>103392508
That second one is a bit disgusting but... Damn, china won.
>>
File: 1728255979538685.png (2.36 MB, 1159x1125)
>>103392508
HOLYYYYYYYY SHIT WERE SOOOOOO BACKKKKKKKKKK
>>
>>103392508
looks like shit
>>
>>103392317
Why is she covered in rapeseed oil?
>>
>>103392250
Those kinds of accounts are subhumans who don't know what they are talking about
>>
Dont like it but it had to be done.
https://files.catbox.moe/envvr6.mp4
>>
>>103392571
Massage oil supposedly.
>>
>>103392582
I'll be real, I was fully expecting it to be blacked miku.
Thank fuck it isn't.
Man, the fine tunes of this thing will be wild.
>>
>>103392566
Lmao ok
It's still 2024, yet we can already complain about AI pussy outputs some anon posted. Unreal.
Next year we can fight over which one's the best.
>>
>>103392391
>And a furry:
>https://files.catbox.moe/x0ve4k.mp4
we are so back it's not even funny
>>
>>103392508
It unironically looks better than some anime supposedly drawn by humans.
>>
>>103392582
It looks amazing
>>
>>103392582
>Can do Migu
>Can do 2d
>Can do nudity
>Can do pony
>Can do furry
AND ITS JUST A FUCKING BASE MODEL, HOLY SHIT THIS IS CHRISTMAS IN ADVANCE
>>
>>103392582
She looks autistic
>>
>>103392566
Heres one more your style then.
https://files.catbox.moe/0jl1bz.mp4
https://files.catbox.moe/z0ki9v.mp4
>>
>>103392606
>No one is using three points of contact
That's a safety violation right there.
>>
>>103392625
am i having a stroke or is this actually the best anime output out of a video model yet
>>
>>103392635
It unironically is.
>>
>>103392625
they trained their shit on hentai there's no way lmaooooooooo
>>
>>103392625
Benis, tits, pussy. Now we just need to smash it together.
Don't forget to set something other than "expires in 1hr" at catbox, like 3d.
Have to go to sleep; I'll watch your gens with a cup of coffee. Thanks for creating these man.
>>
>>103392625
>copyright: check
>boobs: check
>pussy: check
>penis: check
>2d: check
>3d: check
>realism: check
this is probably the best model ever, we'll never go higher than that
>>
>>103392625
wtf, we're having the dream model just like that, what are the odds? Never expected the chinks to be at this level of based, I don't even know how to react this is just insane
>>
>>103386356
My current top priority is still llama.cpp/GGML support for language model finetuning but I've bumped up the priority of image/video model support.
>>
>>103392649
The thing that gets me is that with just a bit of effort this will probably be the first area where AI is superior to human-made stuff. I mean, considering how human-made hentai is always tightly budgeted and they put a lot of effort into putting in as little effort as possible. These models will learn from regular videos how to interpolate between frames, and you will get much smoother animation and much more variation.
>>
Ok but teto?
>>
>>103392727
>I've bumped up the priority of image/video model support
gigabased, thanks Cuda dev
>>
It's weird how local AI models are all china needs to turn people into communists.
Suddenly we need to abolish copyright when before it was "commie shit"
>>
>>103392733
lot of ccp shills here today
>>
>>103392728
Shit, you are right.
Fuck.
What the hell.
>>
>>103392733
>Suddenly we need to abolish copyright when before it was "commie shit"
How can it be commie shit, copyright is the antithesis of communism
>>
>>103392625
I think I'm in a coma, how do I make sure I'm awake???
>>
>>103392727
Heh. Always knew you are one of us gpu anon. Cheers.
>>
>>103392742
Abolishing copyright was deemed commie shit
>>
>>103392751
communism is literally "sharing everything with everyone", copyright is literally "I won't share my thing with anyone", they are literally opposites
>>
finally my childhood dream of making dumb anime OPs based off of my chuuni stories is coming close to coming true
>>
>>103392743
*bites ur balls*
>>
>>103392727
long overdue but ok
>>
File: 1711519385401351.png (646 KB, 700x524)
>>103392625
>>103392508
>A100 sales, STOOONKS
>>
>>103392733
Yeah i love copyright and safety slop.
China censors and stuff
Cant wait to see another slow pan of some nature scene for sora on 5th.
>>
>>103392733
>Suddenly we need to abolish copyright
no one liked copyright, no one likes copyright, and no one will ever like copyright. How did you get to that retarded conclusion in the first place? We always wanted to abolish that fucked up thing
>>
>>103392625
You just proved it second time, good pup
>>
>>103392625
They trained it on hentai didnt they?
No other way.
I hope if china gov doesnt kill them our jews wont either.
Those chink bastards hopefully have gods armor on them.
>>
>back to the eternal VRAM problem
Nvidia is actually the most effective AI safety arbiter. If you can't run something good then no need to worry about safety
>>
so this is the mandate of heaven....
>>
>>103392805
>If you can't run something good then no need to worry about safety
we can though, we managed to run Mochi 10b within a 16GB card (Q8_0); we'll do the same for Hunyuan Video 13b, it'll fit in a 24GB card
>>
>>103392733
Copyright was created to incentivize progress. When it is being used to stop progress is when you should give copyright the middle finger.
>>
File: mutt fed.png (32 KB, 637x660)
>video model not impressive because... commie china shit or something
>>
>>103392804
>I hope if china gov doesnt kill them our jews wont either.
>Those chink bastards hopefully have gods armor on them.
I fucking hope not, they still have to release their image2video model
>>
Hello, I am new to the AI chatbot stuff but have used other AI image generators before. I have downloaded multiple models (currently using airoboros with kobold/SillyTavern) and they all give very poor results, or NSFW characters just refuse to do any NSFW. I am looking for something similar to Janitor.AI in terms of output. Can someone guide me on how to get similar results?
>>
>>103392625
this is unironically the peak of the AI era, we can only go down from there
>>
>>103392733
>tool that is used by a few companies to murder all competitors is anti-communism
nice one retard.
>>
>>103392830
Sorry, I'm not into RP, much less ERP.
>>
>>103392826
>image2video
That's going to be the shit.

>>103392830
Download kobold-cpp and rocinante v1.1 (mistral nemo based model).
>>
>>103392836
Yes capitalism is monopolies communism is sharing progress for the betterment of humanity even if it causes one to lose a potential income source.
>>
https://github.com/kijai/ComfyUI-HunyuanVideoWrapper
>Memory use is entirely dependant on resolution and frame count, don't expect to be able to go very high even on 24GB.
>Good news is that the model can do functional videos even at really low resolutions.
this is it, kijai is making this dream come true
>>
HOLY FUCK WE'RE BACK
>>
>>103392857
>Yes capitalism is monopolies
but you can also make a monopoly by sharing a model you have that isn't the best but good enough to destroy the rest of the competition when released locally, leaving you alone when you'll release the ultimate product as an API
>>
They seemingly went all out.
It's clearly trained on stuff like Hollywood movies, Netflix, pony, anime… softcore porn, hentai.
Some guy on the team probably said "fuck it, put it all in there"
>Sir can i put my nightshift nurses hentai mp4s in the training too? Uncensored version of course.
>Sure, any data we can get from those laowai, cheng. Furries, shitnipples, pikachu, Whatever.
There is no other explanation. It can't be a coincidence.
>>
>>103392865
zased
>>
>>103392875
how the fuck did the CCP government let this slide? kek
>>
>>103392625
c-can it do.. you know..?
>>
>>103392857
capitalism is based on competition, anything that tries to limit competition by definition is anti-capitalistic. That includes copyright.
There is a reason why leftists cry about copyright the most.
>>
>>103392888
>released to the west
>increase degeneracy in the west
>meanwhile ccp maintains control over its own people
>???
>profit!
>>
>>103392625
can it do futa? kek
>>
>>103392903
>>increase degeneracy in the west
isn't it at the top at degeneracy right now though? the woke did a good job at putting that cursor on the extreme top
>>
>>103392903
The fucking audacity man. Look in a mirror
>>
>>103392888
most likely it is unable to generate anything bad related to CCP, so they don't care.
>>
can the model generate tiananmen square? xi jinping as winnie the pooh? then its censored. a trapped model. ccp shills get out.
>>
>>103392952
Schizobabble aside, somebody should try this.
>>
>>103391197
ip is an israeli invention
>>
>>103392961
nice try glowie.
>>
>>103392952
Id rather take the model that can make booba and vagene.i dont care about taiwan etc.
If glorious wise xi says its china we better give it to them
>>
I'm downloading the chink model, someone please post a comfy workflow json
>>
https://github.com/kijai/ComfyUI-HunyuanVideoWrapper
>>
>>103392984
>>103392865
https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/blob/main/examples/hyvideo_t2v_example_01.json
>>
>>103392961
Not the time and place anon.
They have a shitstorm to deal with when they wake up in the morning already
>>
>>103392979
>nice try glowie.
loli isn't illegal retard
>>
>>103393004
>>103393006
I am retarded, thanks. I didn't check the folders.
>>
>>103392903
I can tell you are a fucking amerimutt retard.
>>
Can it do anime in 90's VHS look though
>>
File: 1717151251648139.jpg (115 KB, 1280x989)
>>103392961
it generated furry and mlp, so... yeah. the answer is likely yes.
>>
>>103393029
i dont care what you think lol
>>
>>103393009
I have seen people getting arrested for less.
>>
Please Chinese overlords, Release a music model to fuck over Sudo / Uno next.
>>
Bros, can it 4090 pls?
>>
>>103393062
this, if they don't give a fuck about copyright at all, they should do that as well
>>103393070
yeah >>103392865
>>
What's shocking is that the model isn't just uncensored… but also good.
And local.
The outputs they showed are better than Sora.
Don't forget it can do voice and sound too.

How is OpenAI etc. rivaling this?
Can they even?
Like once a guy serves this model with an API it's over, no?
Only thing I can imagine is long context and speed. I think it can only do 5 sec right now.
>>
The guides are too confusing or maybe I'm a retard. Someone PLEASE spoon feed me

Need me a good model that has good rhetoric (I input an idea and it turns it into a nice sentence) and erp potential
Pretty please? I'm running this on a 1080ti/11gb of vram
>>
>>103393084
It can do voice?! WHAT
>>
>>103393094
Yeah, look at their page.
>>
File: 1711282325582483.jpg (16 KB, 251x242)
How can a 13b video model do THIS while all 13b text models suck
>>
>>103393084
How the fuck is that possible with so few parameters, what the fuck did the chinks discover?
>>
>>103393090
to install sageattention on windows, you can use this tutorial (it's for mochi but go straight to the part where it's about sage attention)
https://reddit.com/r/StableDiffusion/comments/1gb07vj/how_to_run_mochi_1_on_a_single_24gb_vram_card/
>>
>>103393109
>How the fuck is that possible with so few parameters
for a video model, 13b is giant, it's even the biggest one we got so far, the previous biggest one was mochi with 10b
>>
I got 3 gpus, that's completely useless for video gen, or do you think it will be possible to split in the future?
>>
>>103393109
Porn and uncensored hentai
>>
>>103392350
If it takes 15 min with a 4090, I don't even want to imagine how long it takes with 3060 + RAM offloading.
>>
>>103393106
because they weren't written in chinese bro
> it was english all along
>>
>>103393129
Yeah, but this model is multimodal, what the fug
>>
>>103393130
Just wait. There's plenty of people desperately trying to make it work.
>>
>>103393153
Wait.. can we talk with it?
>>
>>103393109
Other companies are stuck with shitty public domain data and have to care about "safety"
>>
>>103386521
RTX 7080 still has only 16gb DDR7x vram.
>>
File: 1733255005266.png (374 KB, 1080x1342)
Fix to speculative decoding got merged to llama.cpp:
https://github.com/ggerganov/llama.cpp/pull/10586

It apparently improved the performance quite a bit!
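For anyone wondering what speculative decoding actually does: the small draft model cheaply proposes a few tokens, the big target model verifies them in one pass and keeps the longest agreeing prefix, so the output is identical to decoding with the target alone, just faster when the draft guesses right. A toy greedy sketch (the `target`/`draft` callables here are made-up stand-ins for argmax over real model logits):

```python
def speculative_decode(target, draft, prompt, n_tokens, k=4):
    """Greedy speculative decoding. `target` and `draft` map a token list
    to the next token (stand-ins for argmax over real model logits)."""
    seq = list(prompt)
    while len(seq) - len(prompt) < n_tokens:
        # draft model cheaply proposes k tokens
        proposal = list(seq)
        for _ in range(k):
            proposal.append(draft(proposal))
        # target model verifies; keep the longest agreeing prefix
        accepted = 0
        for i in range(len(seq), len(proposal)):
            if target(proposal[:i]) != proposal[i]:
                break
            accepted += 1
        seq = proposal[:len(seq) + accepted]
        # target always contributes one token, so the output matches
        # plain greedy decoding with the target model alone
        seq.append(target(seq))
    return seq[len(prompt):len(prompt) + n_tokens]
```

Even a draft that disagrees constantly only costs speed, never correctness, which is why pairing a 1B draft with a 70B target is safe.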
>>
>>103392350
A 4090 is almost as fast as an A100. You could implement context parallelism + ring attention, so that multiple GPUs each process only a part of the vector sequence (which is huge for video models, like 100s of thousands of items, and takes up most of the memory).

With an 8 bit quant + context parallel + ring attention, a 4x4090 system does a full res, full length video in roughly 3 minutes, it looks like. I don't know if single-GPU ComfyUI wrappers are gonna cut it for this model, but with the right implementation a beefy local rig should be able to run it at usable speeds.
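The trick that makes ring attention work is the online softmax: each device keeps a running max, normalizer, and weighted sum for its query shard while the K/V blocks circulate around the ring, so nobody ever holds the full sequence. A toy single-process numpy sketch of that accumulation (real implementations overlap the block transfers with compute across GPUs):

```python
import numpy as np

def full_attention(q, k, v):
    # reference: plain softmax attention over the whole sequence
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    return (p / p.sum(axis=-1, keepdims=True)) @ v

def ring_attention(q, k, v, n_dev):
    # split the sequence into n_dev shards; each "device" owns one q shard
    qs, ks, vs = (np.array_split(x, n_dev) for x in (q, k, v))
    out = []
    for i, qi in enumerate(qs):
        # online-softmax accumulators: running max, normalizer, weighted sum
        m = np.full((qi.shape[0], 1), -np.inf)
        l = np.zeros((qi.shape[0], 1))
        acc = np.zeros((qi.shape[0], v.shape[-1]))
        for step in range(n_dev):
            j = (i + step) % n_dev  # K/V block currently visiting this device
            s = qi @ ks[j].T / np.sqrt(qi.shape[-1])
            m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
            p = np.exp(s - m_new)
            scale = np.exp(m - m_new)  # rescale old accumulators to new max
            l = l * scale + p.sum(axis=-1, keepdims=True)
            acc = acc * scale + p @ vs[j]
            m = m_new
        out.append(acc / l)
    return np.concatenate(out)
```

The result matches full attention exactly, but each device only ever stores one K/V block at a time, which is the whole point for those 100k-item video sequences.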
>>
>>103393106
because a few wrong pixels don't ruin a whole image/video, but a few nonsensical words keep ruining paragraphs-long text.
>>
Wtf we must ban China from the internet as a safety measure
>>
>>103392984
report back when you can anon-san, this wrapper was only released like 2 hours ago, hope it works
>>
>>103393215
this, LLMs are like dominos: if there's a really weird word that has nothing to do with the rest, it will affect the next tokens, and then it'll make more errors until collapse
>>
>>103392979
he's either poorussian or some random brownie with inflated ego
>>
https://xcancel.com/nebsh83/status/1864029961713074342#m
>It can do winnie the poo
Xi Jinping won't like that
>>
>>103393299
wow, that's mean. why does asking that makes me have an inflated ego?
>>
I have a very big and impressive penis. What is the best model for me
>>
>>103393335
Real Women 69B
>>
>>103393324
NTA, but you wouldn't say shit like that in the clearnet otherwise.
>>
https://files.catbox.moe/drxbyx.mp4
>>
File: 1720791150541749.png (14 KB, 916x164)
https://xcancel.com/toyxyz3/status/1864031174961873075#m
First test with ComfyUi, looks like it's working well at low resolution
>>
Cool but where is the essential CUNNY prompt result?
>>
>>103393380
What the hell are you talking about? Go back to R*ddit.
>>
>>103393434
Chill, bro.
No one here is getting v& for you
>>
>>103393383
armpits, hell yeah, but make them anime
>>
>>103393447
>What are VPNs
>>
Speculative decoding is really good now. I get great speeds with 70B models at 16k context on my 3090. The sad thing is that Nemotron is the decent model I know of with a good draft model. Are there any other models 70B or larger with good draft models? Qwen, maybe, but that's also not great. There was talk of patching Mistral 7B to work with Largestral, but I don't know if that ever officially happened.
>>
>>103393434
2d cunny...right?
>>
>>103393401
anime test
https://xcancel.com/toyxyz3/status/1864038338770190418#m
>>
File: KcAzhj4OrGsmYywO.mp4 (599 KB, 1024x640)
>>103393481
>>
>>103393481
Absolutely dogshit mix with choppy shitwater.

I cant wait for improvements so i can prompt low quality15-24 frames of thrusting and no i am not kidding.
>>
>>103393466
Does it require low temp?
>>
>>103393510
>it's shit
>>
>>103393510
Damn, it's shit. Better than other video models but still shit. But perhaps with a fine toon it can be good, since SD models were shit too before they got fine tuned.
>>
>>103393527
>>103393526
>it's shit
that guy did a 512x320 res video render, far from optimal, I'm surprised it looks this good at such a low resolution >>103393401
>>
>>103393510
why did the guy decide to include a loli in underwear in his test kek, but yeah, it's not that good.
>>
>>103393510
It seems that it's getting tripped up by animation on twos.
>>
>>103393526
>>103393527
It's incredible how retarded some of you are.
>>
>>103393466
>There was talk of patching Mistral 7B to work with Largestral, but I don't know if that ever officially happened.
if you want to DIY: assuming the tokenizers are *mostly* the same, e.g. the only difference is a couple of special tokens or something and all the regular tokens match perfectly (I believe this is the case based on https://docs.mistral.ai/guides/tokenization/ but am not sure how this works with their latest v7 tokenizer or w/e), you should be able to just copy the mistral large tokenizer files into the mistral 7b repo and convert to gguf to get a perfectly compatible gguf that you can use for speculative decoding
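If you try the copy-the-tokenizer trick, worth sanity-checking first that the regular vocab really does match between the two repos before converting. A minimal sketch (pure Python; the dicts stand in for the `vocab` field of each model's `tokenizer.json`, and the helper name is made up):

```python
def draft_vocab_compatible(target_vocab, draft_vocab, special_tokens=()):
    """True if every non-special token maps to the same id in both vocabs,
    i.e. the draft gguf should be safe to pair for speculative decoding."""
    specials = set(special_tokens)
    return all(
        draft_vocab.get(tok) == idx
        for tok, idx in target_vocab.items()
        if tok not in specials
    )
```

If this comes back False on anything besides the special tokens, the draft will propose garbage ids and you'll get zero accepted tokens, so it's a cheap check to run first.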
>>
>>103393549
>>103393538
I see, cope is already at full force.
>>
>>103393549
It's wild, we don't even have the correct tokenizer or any qoonting techniques yet
>>
>>103393466
How do I use it?
Is already in ollama?
>>
File: 3897158385147888013.mp4 (333 KB, 640x368)
>>
>>103393459
Neck yourself retard
>>
>>103393557
>I see, cope is already at full force.
we're not even using the right text encoder yet, we have to wait a bit before making any conclusion >>103392361
>>
>>103393563
...go on.
>>
>>103393567
No I'm thinking I'll coom to cunny
>>
>>103393563
Now add anatomically_correct_genitalia to the prompt.
>>
>>103393466
What draft are you using for Nemotron (and at what quants for the draft and main model)?
>>
>>103393576
it will never make you a woman though
>>
>>103393551
Thanks, I haven't been in a rush because I know in my heart that largestral+7b won't fit in my combined 24GB VRAM + 32GB RAM. I had to switch to headless linux to keep from swapping just with largestral q3_xxs at like 8-10k context. So I guess I'm really looking for 70B models or I should stop fucking around and buy more RAM
>>
>>103393510
I wonder if the lower resolution is fucking it up. Just because it "works" at lowres doesn't mean it's really working properly. Based on the technical report, they trained it in phases with progressively increasing resolutions. So while some early pretraining was done at low res, the final stages of training were exclusively at higher res. So the model is optimized for the full res and may have partially forgotten how to do low res.
>>
>>103393587
...said the faggot, confidently
>>
>>103393573
Don't feed the retards, there's literally no better video model than this one, not to mention it's fucking local.
>>103393383
Look at this and let's see if one of these retards posts something better.
>>
>>103393584
llama-server --model Llama-3.1-Nemotron-70B-Instruct-HF-Q4_K_M.gguf -c 16384 -ngl 32 --model-draft Llama-3.2-1B-Instruct-Q8_0.gguf -ngld 100 -cd 16384 --draft-min 1
>>
>>103393316
kek at the level of copyrighted data they used
>>
File: 1727564269718750.png (87 KB, 1020x987)
>>103393589
>I wonder if the lower resolution is fucking it up. Just because it "works" at lowres doesn't mean it's really working properly.
of course, it'll work optimally at the resolution that Hunyuan recommends
>>
>>103393587
>>>/pol/
>>
>>103393593
nice coming-out anon, feel free to express your gay sexuality, everyone's welcome in /lmg/!
>>
>>103393594
Not anime though.
>>
>>103393624
Cool larp but I want the cunny gens thoughie
>>
>>103393615
>>>/trash/ >>>/a/
>>
>it can do image2video
If it can run on consumer GPUs then say byebye to local AI. Everyone is going to seethe at this.
>>
>>103393649
What do you mean?
>>
>>103393649
>If it can run on consumer GPUs then say byebye to local AI.
it can >>103393004
>>
>>103393649
It can?

>>103393655
Deepfakes I guess.
>>
>>103393608
Oh interesting. Thanks, I'll give that a try.
>>
>>103393587
kek
>>
>>103393655
There was already a story about how teens at school used stable diffusion to create nudes of underage girls. Imagine the seethe with videos.
>>
>>103393587
keeek even
>>
>>103393649
>then say byebye to local AI. Everyone is going to seethe at this.
Americans never had uncensored local AI to begin with kek. And Chyna will keep doing what it do, seethe corpocucks.
>>
This model does voice to video btw, with lip syncing. So it's a matter of time till we have your AI generated waifu talking to you.

This model + LLM + Whisper + SoVITS
>>
>>103393683
>>103393675
Stop replying to yourself
>>
File: 1728410124229550.png (69 KB, 888x859)
>>103393649
>>it can do image2video
not yet
https://github.com/Tencent/HunyuanVideo
>>
>>103393649
>Everyone is going to seethe at this.
Well more pedotroons in jail cells is a net positive for society, China did some genius 4D chess here.
>>
>>103393701
ok, but what are your pronouns?
>>
>>103393678
They'll have to get over it when it becomes so easy that everyone can do it.
Fapping to your school crush cunny will simply be a thing that's possible for kids born 5 years ago.
>>
>>103393720
>everyone turns pedo
jeez
>>
>>103393701
Have a pity (you)
>>
>>103393720
>Fapping to your school crush cunny will simply be a thing that's possible for kids born 5 years ago.
what?
>>
>>103393720
based
>>
>>103393730
stop replying to Petra. she will want to split the general AGAIN.
>>
>>103393720
Based
>>103393729
>>103393739
Pedo website
>>
>>103393632
I remember one anon used to post cunny gens made with some closed chink video model, I wonder if he got v&
>>
>>103393720
>everyone holds same crackhead beliefs!
Do not lump normal anons into this.
>>103393751
Obvious samefag is obvious samefag, many such cases.
>>103393754
Not my problem, deal with your local faggots on your own.
>>
>Best text model
>Best video model
God I love China so much
>>
>>103393739
You highschool gf?
>>
>>103393768
You're not a "normal anon", tourist
>>
>>103393739
>anon never had lewd fantasies about his highscool crush or girls in his class
Stay pure king
>>
>>103393775
calm down glowie
>>
File: file.png (7 KB, 352x93)
>>103393768
Anon, if you were able to tell your phone to gen you pictures of your crush when you were 14 you would stroke that dick until it turned purple and then you still wouldn't stop.
>>
File: 39119 - SoyBooru.png (54 KB, 427x400)
Glowniggers are already working hard to SHUT IT DOWN. China won and there is nothing you can do about it. Seethe. Cope. Mald. Dilate.
>>
>>103393768
If only we had per-thread user labeling for posts or visible IP. Then I could prove that I am indeed a different anon. But unfortunately I can only call you a retard.
>>
>>103393649
Oh no, americans will cry and ban it.
>>
>>103393801
amen, fuck being a kid today would be wild, the willpower it would take
>>
>>103393765
hopefully so, buck breaking sessions must be a lotta hell for him
>>
kek, amerimutts on some S-grade copium today, love to see it
>>
>>103393765
nah, he is a regular on aicg.
>>
>4090 holding steady at 430W
holy fuck lol, my bill
>>
>>103393801
when I was 14 the only thing on my mind was how I could annoy my crush so she would notice me
>>
>>103393834
Move close to a source of hydropower.
>>
>>103393824
American website.
>>
>>103393824
Same, except I am American. I hate the people who have ruined my country.
>>
>>103393801
And then everyone clapped
>>
>>103393851
Probably for good, this thread proves y'all deserved it.
>>
>>103393834
And here I am thinking about doing an upgrade to the 5090 with its 600W...
>>
File: 1725793159107299.jpg (81 KB, 720x822)
>>103393563
Stop with this faggotry and post petite flat women
>>
>>103393803
>soijak secondary
Geez, what this board has become..
>>
File: AYYYwe hateyou.jpg (6 KB, 256x256)
AMD bros video gen status?
>>
>>103393563
Irrelevant to my interests, but this may drive positive autism to improve the local version videogen to all our sakes.
>>
>>103393573
>>103393594
>>103393558
If this model was truly good there would be more gens in the thread. No one is posting more gens because they are having to reroll and each reroll takes several minutes.
>>
Tuesday Newsday
>>103393866
>>103393866
>>103393866
>>
>>103393898
It takes time to gen, I'm only just figuring out how to use this thing
>>
>>103393898
I would like to post gens but I'm still downloading models.
I thought I had everything and then it started downloading another four files of 5GB each (model-00001-of-00004.safetensors)
>>
>>103393864
Individuals that have done no wrong but get punished because the rest of their people were shitty and outnumbered them don't deserve it. How would you handle being born into a system that has already fucked you over?
>>
File: 73244683.gif (1.56 MB, 480x270)
>>103393864
>y'all
>>
What's it called when like with Claude, you can give it documents and ask questions about it? Can you do that with local models with ollama?
>>
>>103394178
RAG.
>>
>>103394095
>old Southern US lingo is reddit
kay tourist
>>
>>103394178
open-webui has that implemented and can work with ollama as the backend
It can even parse audio files
>>
>>103394204
It's a tranny word now because it's "gender neutral". Take it up with them.
>>
>>103394217
nah im good, can't say same thing about yourself, keep picking redditoid shit at face value i guess
>>
>103394217 literally made that up
>no source
>>
>>103394305
Every faggy leftist on the internet uses "yall" now. How could you not have noticed this by now?
>>
File: 1719351514748681.jpg (575 KB, 2048x2048)
>>103394204
Sorry the dildos stole your word, dixie-bro.
>>
>>103394320
well shit, but it still has uses, it's shorter and rolls better than "all of you"/"you all" or "everyone [here]" or whatever


