Did they make nutritionists and PTs useless?
Pre-trial diversion?
>>77215170
Nutritionists and physical trainers are already useless.
Yup
Before, dyels would flock to PTs and get fucked over, because obviously if you make it, your PT does become useless, so he tries to make you spin your wheels as much as possible. Now ChatGPT does that even more efficiently and on a larger scale.
Thanks for the gains, AI (All Indians)
>>77215170
Nutritionists were always useless if you were smart enough to see through the food pyramid scams.
Physical therapists, however, are still useful when u need back surgery from deadlifting 2 many pl8z, kek.
>>77215186
I mean personal trainers
I think that, much like with my future job as a financial planner, those guys will always have a use as someone to sue if shit hits the fan.
I just don't trust these pieces of shit. I was asking for tax help and ChatGPPT kept saying "you know, if you want, you can just pay late, it's no big deal." And I was like, wtf, I shouldn't pay late. But eventually I was like fine, because I was sick of dealing with it. Then of course I pay late, and it was not "a couple dollars in fines" like it told me, but a pretty fucking penny. Then when I asked GPT about it, it told me "I never told you to pay late." Fucking shit is what these are.
>>77215170
I'm surrounded by women in my everyday life and it amazes me how fucking easy it is for them to fall for dumb scams. Putting meme therapies and supplements aside, women blindly trust whatever the fuck some influencer or just another woman says. Whenever a woman gets a panic attack over being a fat fuck, they just throw a shitload of money at PTs or nutritionists just to bitch and cry to them instead of doing what they're told to do.
With AI, it's dumb fuck easy to set up a diet and a supplement stack, and you can even ask it random questions, like that time I was fucking dying of DOMS and exhaustion but felt guilty for not training. Sure, you should double check everything, but if you read and check enough, you get to a point where you just know if the AI is bullshitting you or not.
I did good on my own, but when I started going full autist on fitness I started asking AI questions and it unlocked a shit ton of knowledge for me, especially when you ask shit like "how does X interact with Y", which are unusual questions you don't find an answer for on normie websites. AI even wipes its ass with private intellectual property and pulls data from paid sources, so you can get better quality answers than just browsing normie fitness sites. Just don't be a retard: double check your shit if you have doubts or something sounds too good to be true.
>>77215170
FACTS
>>77215170
BRO YOU NEED SOMEONE TO GO "COME ON" EVERY REP
>>77215270
kek, i tried using Gemini to identify a song in a YouTube video. pretty simple task, right? not only did it get the song wrong, it kept lying and giving wrong suggestions even after i called it out. the gaslighting these things do is something else. why is it so hard for them to just say "sorry, i don't know the answer to your question"?
i would never dream of using this shit to do my taxes
>>77215170
they were always useless
>>77215807
>why is it so hard for them to just say: "sorry, i dont know the answer to your question"?
You're not talking to the right ones.
>>77215807
Because it doesn't know that it doesn't know the answer. It doesn't know anything, and everything it says is basically a hallucination; it just happens to be roughly correct a lot of the time. I really wouldn't place any trust in LLMs for anything even moderately important.
>>77215935
Close. LLMs are statistical prediction engines. They analyze the context and predict the most likely next token, which for these purposes is generally a word. They don't "know" anything; they just react to your input with the output they've calculated to be most expected. Usefully, this is often the right answer to a question. Less usefully, it's almost never "I don't know", because the datasets used to train them just don't include a lot of that. Wrong answers land higher in the prediction than "I don't know", which leads to what we call hallucinations.
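The mechanism that anon describes can be sketched in a few lines. This is a toy illustration, not how any real model is implemented: the candidate tokens and their scores are completely made up, but the softmax-then-pick-the-max step is the basic shape of greedy next-token prediction.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores the model might assign after the context
# "The capital of France is". An honest "I don't know" scores low
# because training data rarely contains it, so it almost never wins.
logits = {"Paris": 9.1, "Lyon": 4.2, "I don't know": 0.3}

probs = softmax(logits)
next_token = max(probs, key=probs.get)
print(next_token)  # → Paris
```

When the highest-scoring continuation happens to be wrong, the model emits it just as confidently, which is exactly the "hallucination" behavior described above.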
Having a PT only makes sense if it is sport-specific and you are a professional in that sport.
>>77215878
Claude Code would have searched it up online if it didn't know something
>>77215170
yes, being lean and healthy is a solved game, as is looksmaxxing and shit. you can just send a couple pics of yourself and it will tell you what you need to prioritize in the body, face, hair, etc. probably not 100% right, but it's better than what retards here would suggest, and it's instant. it's almost addicting to send pictures to it and see what can be done to improve, or send your routine perchance. I swear i ask it for a routine every fucking day lol
>>77215170
Believe it or not, the main role of nutritionists and PTs isn't to explain the absolute barebones basics of nutrition and exercise to the average regular healthy retard. Clinical nutrition is niche enough for AI to struggle with, and more importantly it involves the same kind of responsibility as medicine does, which makes it impossible to replace with AI as it is right now. Actual legit PTs deal with specific sports and train professionals, and they have a track record to show for it.
>>77215170
The people that pay PTs usually aren't even self-sufficient enough to ask ChatGPT. The only exception would be someone that's already into fitness and wants to learn technique because they didn't play sports in HS or something.
>>77215951
Close. LLMs choose the format and approach of a reply using response patterns they were trained on: you ask this kind of question, this is the kind of reply that should be given. It then does as you say and selects the next most probable word in the response based on the data it evaluates.