/g/ - Technology


Thread archived.
You cannot reply anymore.




File: IMG_4448.jpg (217 KB, 1404x1027)
217 KB JPG
Now that AI companies aren't going to subsidise token usage by customers using funding from venture capital, i.e. customers are going to face an increase in the cost of AI subscriptions, is the AI bubble starting to pop?
>>
Everyone will pivot to local AI
Mark my words
>>
>>108722099
>customers are going to face increase in the costs of AI subscriptions
yo imma let you finish but deepseek 4 is like $3 per million tokens
>>
>>108722501
Ok, but it still doesn't do any useful work other than in niche applications, most companies that have adopted AI haven't benefitted at all
>>
Ed Zitron assures me that the bubble will pop in 2 weeks
>>
>>108722509
>but it still doesn't do any useful work other than in niche applications
true 18 months ago
no longer true today. there are things it's shit at, like low level programming, but by now most knowledge workers should be able to get some value out of it unless they’re retards
god bless chinks for keeping openai and anthropic honest though, shit would be at least 5x the price without chink models
>>
>>108722537
>most knowledge workers should be able to get some value out of it
Such as?
>>
>>108722551
Using ai to create promotional material for courses on making money with ai and spamming social media with the results.
>>
>>108722551
It’s good at refactoring parts of any small or mid-sized code base
It’s great at shitting out boilerplate
it’s good at exploratory work, spitballing design and architectural ideas
It’s great at shitting out one-off scripts and generally throwaway code
It’s good at PoCs

Outside codeslop it’s vastly more capable at general computer usage than the average normie who struggles with sending an email. Shaneeqa down at the dmv who manually enters data into excel could be replaced in 5 minutes. Bob whose job it is to translate all the invoices your company receives into a standard format has no reason to be employed with modern OCR, and you only keep him around as lightning rod for blame if some data is wrong. The entirety of bad to mediocre salesmen are now largely obsolete, and so are any knowledge worker gig spaces like fiverr etc. Your half blind grandma can now make and publish a full blown blog about her favorite potted plants in half an hour.

Shit’s useful if you’re good at tech, and actual magic if you’re not.
>>
Code written by AI should be scrutinized so much you could've written it yourself in less time.
>>
>>108722586
Every "use" you mentioned requires entering highly sensitive information into a public training database or letting a retarded chatbottalk to your customers. Both non starters.
>>
>>108722591
AI slop is already better than cheap jeetslop, and nobody does proper review of jeetslop
then again, anybody below L6 or so is mostly there for ops and not actual development, so it doesn’t really matter whether it’s ai or human written - none of it will ever touch anything important, unless it’s some tiny startup with 10 people and a dream
>>
>>108722509
>other than in niche applications
sir, ~70% of the code at the 4 trillion dollar company, alphabet, is written by ai
>>
>>108722603
>requires entering highly sensitive information into a public training database
both claude and chatgpt are on amazon bedrock, and amazon already has all your data
>or letting a retarded chatbottalk to your customers
>non-starter
you cannot be serious
>>
>>108722452
How can it be done if the hardware cost is prohibitive now?
>>
>>108722610
Is that why it takes 10 minutes to send an email with an attachment on mobile now?

Great for business. Love having to tell my customers "sorry for the delay" with almost every correspondence. On the bright side, there's a "write this email for me" option that can't be turned off, which nobody on earth who actually sells something asked for or wanted.
>>
File: file.png (193 KB, 1000x520)
193 KB PNG
>>108722623
it is, in fact, great for business.
and sounds like a you problem bro. even my clanker sends me email with no problems.
>>
>>108722631
Yeah, I was talking about my business obviously. I don't care about what type of quarter Google had.
>>
>>108722634
sure you were, buddy.
>>
>>108722452
How much money am I going to have to spend on a new pc to run a half decent local ai?
>>
>>108722643
Definitely was. Use a different hindi <-> English app and reread my reply.
>>
>>108722646
look man, maybe you're just not cut out to run a business. you can't even send emails out on time. maybe sell the thing and buy some google stock - you'll probably make more money that way.
>>
>>108722618
You can run gemma 4 on a midrange laptop and it’s better than chatgpt 4o at launch, which was less than 2 years ago
If the trend holds you’ll have models that are as good as current Good Shit running on your $400 stinkpad
>>
>>108722651
Thanks for the tip zoomer. Why didn't I think of just closing my 11 year operation due to enshittification? Would give me more time to masturbate and argue with you about retarded shit.

You're absolutely right: everyone producing real value should just fuck off as tech giants break products that have worked flawlessly for decades, and we should fully transition to a "funny cat video" economy.
>>
>>108722688
You’re the guy struggling with sending an email doe
maybe you could try asking ai for help?
>>
>>108722614
>you cannot be serious
Customers don't like talking to chatbots; when they call customer service they want to talk to another human.
>>
>>108722709
they put up with talking to indians for a couple of decades, a chatbot is going to be a welcome upgrade. and also more human.
>>
>>108722452
but I already do
in fact I've never personally paid for any services
>>
>>108722695
I don't think ai can assist with the time it takes ai to fully scan every attachment and enter it into a training database. You could be right though... Old guys like me, who keep the real economy moving with goods and services, are a thing of the past. We need to step aside so the kids can stream fortnite dances and code broken minecraft clones with 20 million claude tokens.

At least I own land.
>>
>>108722722
>I don't think ai can assist with the time it takes ai to fully scan every attachment and enter it into a training database.
anon... unironically, this is baby-tier work for an agent these days. not even kidding.
you should give it a fair shot.
$20 to find out if you can automate away hours of work. if you're not comfortable with full automation, you can get it to 90% and just approve the entries manually.
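To make the "get it to 90% and approve the rest manually" idea concrete, here's a minimal Python sketch. The field names, the confidence threshold, and the assumption that the agent attaches a confidence score to each extraction are all illustrative; the actual OCR/LLM step is not shown and would be whatever agent you plug in.

```python
# Stubbed triage step for agent-extracted invoice data: anything with
# missing fields or low model confidence is routed to a human instead
# of being entered automatically.
REQUIRED = {"vendor", "invoice_number", "total", "currency"}

def triage(extraction: dict, min_confidence: float = 0.9):
    """Return ("auto", record) or ("review", record) for one extraction."""
    missing = REQUIRED - extraction.keys()
    conf = extraction.get("confidence", 0.0)
    if missing or conf < min_confidence:
        return ("review", extraction)
    return ("auto", extraction)

# Two pretend extractions coming back from the agent.
batch = [
    {"vendor": "Acme", "invoice_number": "A-17", "total": 120.0,
     "currency": "EUR", "confidence": 0.97},
    {"vendor": "???", "total": 40.0, "currency": "USD", "confidence": 0.55},
]
routed = [triage(x) for x in batch]
```

The point of the sketch is that "90% automation" is just a routing rule: the model does the reading, the threshold decides who signs off.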
>>
File: IMG_4463.jpg (388 KB, 1488x435)
388 KB JPG
>>108722711
Stop being racist.
>>
>>108722739
I gave it a shot in 2022 and again in 2025 actually. Every interaction with a customer is in essence a commission for art work, in a manner of speaking. Another human generally couldn't even handle it properly. On occasion, my gf can stand in with correspondence, but I usually have to fix something. Ai just doesn't fit, even with a RAG populated by every correspondence I've had with a customer since 2015. Too much variation, expectations are too high, and when I don't have an answer right away, I need to be able to communicate that as part of the process. A sale can take weeks of back and forth.

People spending big money on stuff expect to interact with someone who knows exactly what they're talking about.
>>
>>108722452
still cheaper to pay for someone running that model on a big server
>>
>>108722709
>they want to talk to another Human
Tough shit. Besides talking to an ai bot will be an improvement over those automated lines and jeet call centres
>>
>>108722603
Companies have been trying hard to automate away from human customer service for decades. Do you realize that once upon a time, if you dialed a support phone number, your first interaction would be with a real human? It's just accepted now that to speak with a person you have to wade through a bunch of IVR shit that collects your data that the real person (if you manage to get one) always asks for again anyway. Anything that filters people away before they cost the company wages paid to a grunt employee.

Automation is already in place, LLMs can now make it actually effective support instead of just fake support that yields corporate benefits through customer frustration and attrition.
>>
>>108722812
I don't mean the cable company dealing with pops not being able to find channel 52. I'm talking in lieu of a sales department at a Cadillac dealership or whatever. Customer service is what it is. I get that.
>>
>>108722786
fair enough. i misread your post a bit and thought you were talking about data entry or smth, not bespoke art commissions.
keep checking in once in a while though.
the crazy hype that's built up in the last 6 months is not unwarranted. the models did genuinely get better at being agentic.
have a look at recursive language model setups vs rag. would be interesting if they worked for your case. obviously don't let the thing actually send an email for you, but it might be nice if it can keep an eye on your inbox and get you a draft that's most of the way there
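The "draft, don't send" workflow from the post above looks roughly like this as a sketch. `llm()` and `fetch_unread()` are stand-ins for whatever model API and mail client you'd actually wire in, not real library calls; the one real constraint is that nothing gets sent without a human.

```python
# Hedged sketch: the model watches an inbox and prepares replies,
# but every draft stays unsent until a person reviews it.
def llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return "Thanks for reaching out - I'll confirm the timeline this week."

def fetch_unread():
    # Placeholder for an IMAP/Gmail fetch.
    return [{"from": "customer@example.com", "body": "Any update on my order?"}]

def draft_replies():
    drafts = []
    for msg in fetch_unread():
        reply = llm(f"Draft a short reply to: {msg['body']}")
        drafts.append({"to": msg["from"], "draft": reply, "sent": False})
    return drafts  # human reviews before anything goes out

drafts = draft_replies()
```

The design choice worth keeping even when the stubs are replaced: the agent only ever produces `sent: False` records, so the failure mode is a bad draft, never a bad email to a customer.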
>>
>>108722812
You're an artist? Has the rise in AI slop affected the amount of commissions you get? Do you use programmes like glaze or nightshade to stop your art being scraped for AI training?
>>
>>108722840
Don't ask
>>
>>108722840
>Do you use programmes like glaze or nightshade to stop your art being scraped for AI training?
Anon, that's about as effective as making your AI waifus wear a scarf and hoping that stops crayon artists from sketching them from reference or memory and competing with you.

It doesn't work and it's retarded.
>>
>>108722840
I'm not sure if you were talking to me? (The elderly luddite who should sell his business because I can't cope)

If so, I don't do digital stuff. Real world products. Not technically art, but the sales process is similar.
>>
>>108722969
It won't stop AI from using already-scraped art, but it's supposed to stop style mimicry, so if an AI is prompted to make an image in the style of artist Anon, all the images it scraped would be distorted and the output would come out looking wrong
>>
>>108723075
Anon glaze and nightshade have been around for a couple of years now and the models have just gotten better. There is zero evidence suggesting any of them have ever been poisoned. It doesn't work.
>>
>>108723075
Ok, keep adding a cope scarf. It'll start working any time now.
>>
>>108722452
The best closed models have trillions of parameters, with model sizes measured in terabytes. Local can never reach the same performance in general knowledge.
Maybe task-specialized small models run locally are possible and competitive in performance, but what is the business model for a company making those?
>>
>>108722452
Yes, I too think literally everyone will buy $6000 laptops to run GPT-4 level LLMs locally!
>>
>>108723304
>run GPT-4 level LLMs locally
so like gemma 4 or qwen 3.6?
all you need is 8gb vram + 16 gb ram or just 24gb ram lol
>>
>>108723304
You’d maybe have a point if you were shilling yuge modern models like deepsneed 4 pro or kimi 2.6, but basic bitch gpt4 tier performance can be had on a midrange laptop from several years ago
>>
>>108723380
How viable is that anyway? Last time I messed around with gemma 4, I had like 3-5 tokens/sec output. I then started digging into what could be done to speed it up (apparently only buying expensive hardware), so I didn't get to the point of giving it any serious task and seeing if it works.
>>
>>108723492
dense will be slow unless you fit the entire thing in vram, which atm needs 32 gb. a 16 gb vram gpu like the 5070 ti is usable at q4, but by no means fast.

On the other hand the moe models are fast on any reasonably modern cpu. 32 gb ram + cpu gives you like 100k context, with the model at q5 or q6. I wouldn’t use it for agentic programming, but most tasks, be it sentiment analysis, writing ffmpeg and autohotkey scripts or your disgusting furry erp, work just fine. Gemma 4 also has vision support via the .mmproj files, and it’s reasonably easy to add tool calling support to it.
with 8gb vram + 16 gb ram you’d prolly struggle with longer tasks and will need q4 quantization, which is okay but not great.
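The RAM figures being thrown around in this thread follow from a simple bytes-per-weight estimate. The 27B parameter count and the ~4.5 / ~6.5 effective bits-per-weight for q4/q6-style GGUF quants are ballpark assumptions, and the estimate deliberately ignores KV cache and runtime overhead, which come on top.

```python
# Back-of-envelope weight footprint for a quantized local model.
def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weights only; KV cache and runtime overhead are extra."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

q4 = weight_footprint_gb(27, 4.5)  # roughly a q4_K-style quant, ~15 GB
q6 = weight_footprint_gb(27, 6.5)  # roughly a q6_K-style quant, ~22 GB
```

Which is why a 27B dense model at q4 is tight on a 16 GB card but comfortable in 32 GB of system RAM, matching the post above.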


