/g/ - Technology


File: 1761107509592865.png (351 KB, 863x1884)
>>
A PARAMETER JUST FLEW OVER MY HOUSE
>>
normies think vulkan is impressive
>>
>>108486850
>>108486879
/thread
>>
>>108486850
>397B
>48GB ram
OK retard
>look at graph
>Q2 Q4
This shit is quantized. The speeds it runs at are useless and it probably freezes up the entire laptop while it's running.
>>
i have to call my mom
>>
>>108486850
>20 token context
>>
>>108486850
>4 experts
>quantized model
why are AI people so full of shit?
>>
>>108486850
>Here's the wildest part:
I swear to God, the AI slop is getting more and more predictable.
>>
>>108486850
malware at best, claude hallucination at worst.
>>
>C
why did they write it in C instead of a memory safe language like Rust?
>>
>>108486955
preference. baiting in AI clickbait threads is like getting a high off beating up a homeless person in exchange for a bottle of vodka
>>
>>108486850
Trust me.
I have a better idea.
THE ULTIMATE PURGE:
Delete everything except Rust, egui, and Linux.
All knowledge of .js, Windows, and iToys is being buried in the Mumbai landfill.
Absolute zero bloat.
IQ: +300 (Brain safety guaranteed)
Speed: 1000x faster
Power: My 8B kitten model > Their 800B behemoth
>>
>>108486850
shocking news bro
Google? Ha. OpenAI? In your dreams. Yes, if you want real AI expertise just look to the VP of AI at a failing drug store chain. Is that AI model too big for your laptop? No worries! Just quantize it to 1/10th of its original size and then limit the experts in use so that the benefits of MoE models are gone. You'll see no degradation of quality. I pinky promise. Now go updoot my posts so I get more attention.
>>
File: 1733990423795505.png (7 KB, 192x255)
>>108486850
>alert emoji
>On a Macbook. No cloud. No GPU cluster. No data center. A laptop.
>At 4.4 tokens per second. With tool calling.
>No Python. No PyTorch. No frameworks. Just raw C and hand-tuned Metal shaders.
>Here's why this should not be possible.
>Here's the wildest part:
>Trending on GitHub. 332 points on Hacker News.
>100% Open Source.
Grokslop is the cringiest kind of slop
>>
File: itoddlerwinsagain.png (55 KB, 931x545)
>>108486850
enjoy your 0.3tkps
>>
>>108487324
slop coders get the rope
you just know this guy has 0 idea what he's doing and is totally leaning on AI to do everything for him
>>
awesome, only 3 seconds per token too!
>>
>>108486955
Devs tend to use languages they know.
>>
>>108486955
C is memory safe if you write memory safe code.
>>
>>108486850
Just because something is about AI it doesn't mean you gotta use AI to make a slop post.
>>
>>108486929
It was a vibe coder as well.
>>
>>108486850
It's great how having even a bit of surface knowledge on the subject is enough to inoculate me from this sort of braindead hype.
>>
>>108486850
>No x. No y. No z. Just w.
SHUT THE FUCK UP!!!
If I see this retarded AI writing one more time I'm going to blow my brains out. IT'S EVERYWHERE!
>>
>>108486850
Maybe somebody could tell me what the everloving FUCK is going on. And please, speak as you might to a young child. Or a golden retriever.
>>
>>108489041
slop detector models when? and a browser extension to automatically hide slop content
>>
>>108486850
maybe he should use his macbook ai to make refilling prescriptions not suck shit
>>
>>108486884
it is pretty cool
>>
>>108489041
It is nauseating.
>>
>>108487344
Why does he get featured in the news and I live with my mom?
>>
>>108486972
Kek
>>
>>108489716
bool is_ai_slop(const char *str) { return true; }

this will be 99% accurate for any random text from the 2026 internet
>>
>>108486850
It's real. I haven't made it public yet but I ran a 3T (t as in trillion) parameter model on a Tandy Color Computer. It's a secret I will share only on X.
>>
>>108486850
so it just uses the ssd like a giant swap device? meh
private ai is cool if it's usable tho i guess, most of these local models suck compared to claude/code tho right?
>>
>>108486884
Vulkan? It is a Mac. Are you retarded?
>>
0.3 tokens per second is still impressive for something that huge to run on a single high end machine. With some compromise it might be possible to get something very usable and still good running on it.
>>
>>108486850
>MemeoE
>4.4tok/s
wake me up when it hits 44tok/s
nothing impressive nor usable
>>
>>108489041
No therapy. No medication. No counseling. Just high-impact graphic suicide.
>>
>>108487238
>rust
>t. loony troon
>>
File: 1754335095836.jpg (65 KB, 972x776)
>>108486850
>built the entire engine in 24 hours
>>
>ai runs like shit because it's python
>"why doesn't anyone write it in C?"
>"too hard"
>ask ai to do it
>it does
HOLY SHIT WHY AREN'T WE BOOTSTRAPPING EVERYTHING
>>
>>108486850
>here's why this should not be possible
that tweet itself was also written by AI
>>
>>108486850
>MoE
>Q2/Q4

OP is just as stupid as the tweet author
>>
>>108486850
I fucking hate it when people say something shouldn't be possible. Like it clearly fucking is so shut up.
>>
>>108486850
>ssd streaming
as in the apple silicon ssd that's impossible to replace.
>>
>>108492579
>check apple thread
>more lies from seething windows users
>>
>>108490074
4.4 tokens per second is good enough for cooming so it's interesting if the hurdle in >>108487324 can be overcome.
>>
>>108487880
that it runs at all is amazing
>>
File: 603568.jpg (48 KB, 640x480)
>>108486879
>>
>>108486850
Cloud-based SaaS AI btfo
>>
jeets making AI slop posts about LLMs on twitter sums up the modern internet and I hate it so much
>>
File: file.png (259 KB, 460x460)
>>108494036
usecase of 400B model for cooming?
>>
>>108486850
When you quantize a 379B model down to 1-bit just to fit it into 48 gigs of ram, this is how that last bit looks at you when you go to actually use the model for inference.
>>
>>108494350
it's probably fine for coomer shit, but i wouldn't dare use it for coding.
>>
File: file.png (6 KB, 546x28)
>>108486850
waow
>>
>>108486850
ANOTHER anthropic paid shill?
This is madness. Why didn't Katy Perry tweet this if it's so good and she knows all about AI?
>>
>>108494287
Accidentally cooming so hard you get it all over your face and hair.
>>
>>108486850
what the hell is the matter with disgusting thirdies and their constant delusional HORYSHIT SARR AI BREAKING BREAKING NEWS
blocking posts by region can't come soon enough
>>
>>108496905
You'll see those posts anyway, enjoy the screencap threads.
>>
>>108486942
That's a power move. All intentional.
>>
>>108486850
I think my writing style tends to incorporate run-on sentences, but that abomination of a Xeet makes me feel a lot better about using excessive amounts of commas instead of full stops every other word.
Spamming sentence fragments is the text equivalent of whispering mundane things to sound important. Yes, I realize that post was almost certainly written by AI.
>>
>>108486899
>The speeds it runs at are useless and it probably freezes up the entire laptop while it's running
good enough for some things & good enough for now. it sets the trend of non datacenter AI.
>>
>>108494350
it's not 1bit, it's literally inferencing by reading from the ssd lol.
you can run a 1T model on a raspberry pi if you are willing to wait a long time lol
>>
>>108492240
you are a retard, llama.cpp isn't python.
the reason ai run slow is because the bottleneck is memory bandwidth.

also python inference code generaly just calls c++ anyway, it's not bottlenecking shit.
>>
>>108486899
It's afraid
>>
>>108499630
you can run it that fast on any good gaming pc, nothing impressive there knowing it's doing ssd inference lmao


