>every time I ask chatgpt to help me with my homework im using a gallon of water and 1000 kw of electricity
Can someone explain how this works? How is openai making so much money when this costs so much and it's completely free to use?
you were lied to and their free models are worse than models you can run on a commodity laptop
>>108755292
they're doing the tech startup thing where you spend years at a loss onboarding as many people into your ecosystem as possible off venture capital, before you hopefully sell for a fortune and have someone else enshittify it into something profitable.
>>108755292
>How is openai making so much money
They aren't
>when this costs so much and its completely free to use?
Goyim foot the bill
>>108755292
>How is openai making so much money when this costs so much
so, funny story
>>108755292
>>everytime I ask chatgpt to help me with my honework im using a gallon of water and 1000 kw of electricity
you aren't, that's absurd. the training uses a lot of power and some water (in the form of evaporation from cooling towers, which isn't a huge deal if the data center isn't in the desert). but inference isn't that resource hungry; one request to claude or chat gpt barely uses any power and an immeasurably small amount of water.
>>108755292
You're accessing a single GPU, which you're sharing with multiple other users. Even if that GPU could draw 1000 W (non-kilo), which it probably can't, 1000 kW would still be off by 3 orders of magnitude - and even that would be for the entire GPU, not for 1 user.
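A quick back-of-envelope on the "shared GPU" point. Every number here is a made-up but plausible assumption (GPU power draw, generation time, and batch size all vary a lot in practice), not a measured figure from any provider:

```python
# Rough energy cost of one chat request on a shared datacenter GPU.
# All inputs are illustrative assumptions, not real measurements.
gpu_power_w = 700          # assumed draw of one H100-class GPU, in watts
seconds_per_request = 5    # assumed time to generate one reply
concurrent_users = 16      # assumed number of requests batched on that GPU

# Energy attributable to one user's request, in joules and kWh.
joules = gpu_power_w * seconds_per_request / concurrent_users
kwh = joules / 3.6e6       # 1 kWh = 3.6 million joules

print(f"{joules:.0f} J ≈ {kwh:.6f} kWh per request")
```

Even if you assume every number above is off by 10x in the pessimistic direction, you're still nowhere near the "1000 kw" in the OP.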
>>108755292
>openai making so much money
>>108755292
The goyim genuinely believe that datacenters just take water an African child was about to drink and permanently delete it from existence
>>108755292
It's not making money in the way you think it is. It's making 0.2% of what they scammed investors out of per year, meaning that at this pace they'll have almost broken even in 500 years. That's what decades of taking 99% of the richest country in the world's income and paying no taxes can afford you.
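The arithmetic behind the 500-year figure in the post above, taking the 0.2%-per-year number at face value (it's the post's claim, not an audited figure):

```python
# If annual revenue recoups 0.2% of what was raised, break-even takes
# 1 / 0.002 years. Straight division, no compounding assumed.
fraction_recouped_per_year = 0.002   # the post's 0.2% figure
years_to_break_even = 1 / fraction_recouped_per_year
print(years_to_break_even)
```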
>>108755896
nobody thinks this
the goyim know that datacenters steal the water THEY paid for as their bills hike
>>108755634
That depends. Raw inference is getting slightly cheaper due to hardware improvements, but questions themselves vary in complexity and computability, and recent developments such as reasoning loops lead to token churn as the model tries to "think" about an answer by feeding itself its own slop a few times before spitting out the final answer. The savings aren't realized if the solution is to run more loops and pray.
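A toy model of that token churn. Each "thinking" loop re-reads everything generated so far before writing more, so the total tokens processed grow faster than linearly with the loop count. The function and its numbers are illustrative assumptions, not any provider's real billing logic:

```python
# Toy model: each reasoning loop appends its output to the context,
# and the next loop has to re-read that longer context.
def tokens_billed(prompt_tokens: int, output_tokens: int, reasoning_loops: int) -> int:
    """Total tokens processed when each pass re-reads prior output."""
    total = 0
    context = prompt_tokens
    for _ in range(reasoning_loops + 1):   # +1 for the final answer pass
        total += context + output_tokens   # read the context, write output
        context += output_tokens           # output joins the context
    return total

print(tokens_billed(100, 200, 0))  # single pass: 300
print(tokens_billed(100, 200, 3))  # three thinking loops: 2400, 8x the work
```

So three extra loops here means 8x the compute of a single pass, which is why hardware getting 2x cheaper doesn't automatically make the bill 2x smaller.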
>>108755896
stolen by peachnigger