https://x.com/LiorOnAI/status/1913664684705874030
So is this actually worth looking into?
>>106794834
You already could.
>bitnet
>botnet
>For achieving the efficiency benefits demonstrated in the technical paper, you MUST use the dedicated C++ implementation: bitnet.cpp.
You MUST use botnet
Looks cool though, it could probably even run in my 8 GB of VRAM. Now if only it had a coding version instead of a general-purpose one, that would be super cool.
> bitnet.cpp
Benchmarks fucking suck. They always get those high numbers, but then you actually use it and you get this crap.
>>106795506
Who the fuck cares about that. Can it ERP?
>>106795506
>then you ask it stupid fucking questions that LLMs can't handle because of their architecture
Why not ask it to write a JS function to determine how many "r"s there are? Or even better: give it tool access? Every time you post this you're screaming to the world that you're retarded.
>>106796908
What's the point of a fucking 100B model if it can't do the basics?
>>106794834
I want local language models on my PC without needing an internet connection. Is this going to make local on-device AI a thing in the future?
>>106796957
Have you been living under a rock or something for the last three years, anon?
>>106796969
I don't want my AI to steal resources from my system. I'm talking about the 80% less energy claim in the OP pic, for one. Or will PCs need a dedicated AI chip for this?
>>106796990
Instead of schizoposting you can just read the repo and the scientific paper attached to it.
No spoonfeeding, no shortcuts. Do it.
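For what it's worth, the 80% number traces back to the weights being ternary. A rough sketch (mine, not code from the repo) of the absmean quantization described in the BitNet b1.58 paper:

// Rough sketch of the absmean weight quantization from the BitNet b1.58 paper,
// not actual bitnet.cpp code: scale by the mean absolute weight, then round
// and clip every weight to {-1, 0, +1}.
function quantizeTernary(weights) {
  const eps = 1e-5;
  // gamma: mean absolute value over the weight matrix
  const gamma = weights.reduce((sum, w) => sum + Math.abs(w), 0) / weights.length + eps;
  // RoundClip(w / gamma, -1, 1)
  const q = weights.map(w => Math.max(-1, Math.min(1, Math.round(w / gamma))));
  return { q, scale: gamma };
}

const { q, scale } = quantizeTernary([0.42, -1.3, 0.05, 0.9, -0.2]);
console.log(q, scale); // [ 1, -1, 0, 1, 0 ]  ~0.574

With every weight in {-1, 0, +1}, the matmuls reduce to adds, subtracts, and skips, which is where the paper's energy and memory savings come from; the dedicated bitnet.cpp kernels exist to actually exploit that packing.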
>>106796946
No model can do that without being hardcoded or using tools. LLMs aren't capable of introspecting on single characters in text. The vast majority of recent models can do this just fine if prompted to give a JS function to solve it and given tool access to a sandbox to run that JS.
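To make the workaround concrete, this is roughly the kind of throwaway function being asked for (the name and test word here are just examples, not from the thread): the model writes it, the sandbox runs it, and nobody has to count tokens.

// Example of the kind of JS a model can write and run via tools instead of
// trying to count characters it never sees through its tokenizer.
function countLetter(word, letter) {
  return [...word.toLowerCase()].filter(c => c === letter.toLowerCase()).length;
}

console.log(countLetter("strawberry", "r")); // 3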
>>106795506
What kind of retarded models do you use if they can't do that? I just had to edit the first reply because it wanted to show me Python code and I didn't care. This is a shitty 12B Q5 model, btw.
>>106797368