What are the most practical uses for AI and LLMs right now?
Search engine replacement, gooning, or coding if you're bad at code or just feeling lazy.
>>108551248
I am trying to code with my Nemo 12B LLM but I don't know anything. Very useful tool though.
>>108551232
Replacing search engines. I used to surf Stack Overflow for half an hour trying different examples to do my thing; now I just ask the AI and get a response instantly, plus I can ask it to adapt the answer to my specific case.
>>108551410
Thanks
stop telling (((them))) your ideas, they lack god's divinity to create
>>108551289
I'd encourage you to at least learn SOMETHING. If you can't double-check what the model is generating, you're going to run into problems, if not immediately then down the road.
>>108551533
I've been interested in Python and Rust just from reading about them. Are there any other languages that would be more beneficial in the long run?
porn
>>108551232
Porn
>>108551232
For AI anything less than 64GB is abysmal, but I'll consneed that 32GB is enough for wangblows and a single VM.
>>108551820
Basically every programming language you've heard of is good for something, which is why it's used. I'm a C/C++ fan, but Rust is generally used for the same purposes. To start out, just pick one language and stick with it for a while. I'd recommend you start with a statically typed one, then use dynamically typed languages like Python once you know how types actually work, because types still matter even when you can't see them as easily. If you're interested in Rust, probably start with that.
The other anon's point, though, is that you shouldn't ask the AI to just make things for you. You should ask the AI how to make it and then make it yourself.
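To illustrate the "types still matter even when you can't see them" point: the toy function below (a hypothetical example, not from any post in this thread) runs fine in Python but silently produces garbage when handed a string, which is exactly the class of bug a statically typed compiler would reject before the program ever ran.

```python
# Python types exist at runtime even though you never write them out.
def total(price, quantity):
    return price * quantity

print(total(3, 4))    # both ints: multiplies normally, prints 12
print(total("3", 4))  # str * int repeats the string: prints "3333" --
                      # no error raised, just a silently wrong result
# In a statically typed language (Rust, C) the second call would fail
# to compile instead of slipping through to runtime.
```

That silent failure mode is why learning types in a language that enforces them first makes the invisible typing in Python easier to reason about later.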
>>108552399
It's actually totally fine. If you depend on dual-channel DDR5 for any workload you're already dead. For inference you basically have an Ethereum mining workload, but instead of just fitting the DAG you want as much RAM as possible with high bandwidth. Even with old GPUs like the Radeon VII or 2080 Ti 22GB, prompt processing won't be the bottleneck for single-user inference.
>>108551232
Scamming investors
>>108551232
Clickbait engagement farming on X.
>>108551248
>>108551410
Google was better a decade ago than LLMs are today. The reason search engines became shit:
>they did censorship (this also applies to LLMs, but less so, because it's much harder to ensure the training data is kosher)
>they enshittified the results for maximum ad revenue
>they squeezed the cost of compute per query super low
LLMs will need to figure out how to do the 2nd and 3rd points or the money will dry up. That's why Google was so worried about rolling out Gemini in the first place, and why they were last movers. Enjoy it as a "search engine replacement" while you can.
>>108551232
I don't know. I use it for programming, but I have a family member who has never interacted with an LLM because he has gotten himself into some kind of cult/delusional thinking where he believes LLMs are "satan". Unfortunately this is a Gen X thing.
>>108557621
you don't know anything
>>108557635
Uh huh, explain how I'm wrong.
>censorship
Google ended up being forced to curate internet results to bias "authoritative sources", whereas it didn't before. Currently AI is in a "need more data" phase, so the training set includes "wrongthink", and their remedy is what they like to call "guardrails", which isn't a long-term solution. They will eventually need to limit the training set and enshittify the product.
>ad revenue
They currently don't have a working business model. Subscriptions are never going to work; wealthy "power users" of LLMs do not want to subsidize your usage. Whether they can even monetize LLMs with ads is still an open question.
>cost of compute
Obviously, LLMs have failed enormously here. It's uncertain how they will justify that long term.
>search engine optimization
I forgot to mention this, but it's analogous to the "AI death spiral" from training on your own output. Basically, it's an endogeneity problem, which makes your search engine index algorithm, or your training model in the case of LLMs, produce worse output.
>>108557762
I don't have to prove how you're wrong. You're an idiot who makes up lies and comes to 4chan to shit in a bucket of shit. If you really think Google Search compares to Claude in any way, shape or form, you are a fucking moron. Period. End discussion. Google was an okay search engine in 2016, it had suggestions, but it was nothing like an LLM, and you are being a retard. Grow the fuck up.
>enshittified results for ad rev
Google is an ad company. You should have switched to 4get if you were actually the oldfag you're faking to be.
>SEO
You don't know jack shit about SEO, you don't know anything.
>>108557797
>If you really think Google Search compares to claude in any way, shape or form, you are a fucking moron.
There is a slight difference, but overall it's equivalent in form, and the difference is a tradeoff. Let's consider the workflow described here: >>108551410
Old Google would put the best Stack Overflow results at the top and you'd get a Stack Overflow copy-paste solution much quicker. Granted, it merely indexes the solution, and old Google would promote an old-school website if it had a better solution to your problem. You also might need to adjust the solution slightly for your specific use case.
New Google does not give you the best Stack Overflow result nearly as quickly, most likely because of the reduction in search engine compute costs, and to force you to scroll.
LLMs, by comparison, mangle their training set rather than merely indexing it, which is a fundamental tradeoff. The mangling ideally adapts the old-Google Stack Overflow solution to your specific use case automatically, saving you some work. But that mangling is also where "hallucination" comes from, and it can result in spectacularly buggy code.
>>108557631
your family member is right
A reason to leave windows
use it to write the software you need, then put that software into an autoresearch loop targeting memory usage and make it very optimised
then delete all your bloat
don't need more than 8GB, as Apple has proven