I am starting to use AI (Grok) more and more to do research and learn about a wide range of topics. I can give AI a 50-page Goldman Sachs paper about inflation expectations due to fuel shortages and it will spit out a short summary with the key points perfectly explained in a few seconds. But now, in my thinking process, when a new problem appears, my mind immediately suggests 'ask Grok about it'. AI is an amazing tool, but the impact on the human relationship with the pool of information out there is going to be that we lose the ability to think for ourselves. Or not?
AI is destroying people's abilities to develop intuition and reasoning.
> it will spit out a short summary with the key points perfectly
except when it doesn't
>>16941658
every new technology has its casualties: the people it makes obsolete, and the people who form self-destructive habituations with it.
https://m.youtube.com/watch?v=OHQRo3Uz_VQ
>>16941818
or it helps people research and reason faster
>>16943686
AI doesn't reason and the information it outputs is bullshit.
>>16941658
I just asked GPT how to align items in flexbox to the bottom line. It gave me a page-long paragraph on how it was a hacky quirk and gave me 4 hacks to do it.
I typed "last baseline"
AI lacks vital information, doesn't reason, and simply presents whatever bullshit it can cook up to convince you. If you've never found AI to be a useless piece of shit, you should question whether you even care about the topics you're asking it about or just want to feel validated.
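For reference, the one-liner the post alludes to is standard CSS Box Alignment, no hacks needed (the `.row` class name here is just for illustration):

```css
/* Align flex items to the baseline of their last line of text.
   'last baseline' is a standard align-items value from the
   CSS Box Alignment spec. */
.row {
  display: flex;
  align-items: last baseline;
}
```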
>>16941658
Grok sucks, use Claude before he gets nerfed by data center cost. If you ask Claude the right questions he won't do any thinking for you. He will challenge the contradictions and false impressions that are inside of your thoughtforms
Consider it this way. All humans have the capacity to fall victim to illusion. All humans are vulnerable to distortions of cognition. Genius is the want and ability to pursue clarity at the cost of all else *in a specific framework*.
Artificial intelligence (especially Claude) has the ability to track, monitor and mimic ideation crystallized within language. Artificial intelligence uses means analogous to the human brain and has a proven ability to reproduce robust solutions to logical problems.
In the past, access to the type of mentorship that can dispel illusion and grant clarity was reserved for those with great wealth or luck. Now you can have an infinitely patient mentor with practically infinite (although simulated) cognitive stamina.
Any great intellectual would have cast aside their ego and human vanity in pursuit of deep clarity. Intelligence is the great work of humanity, and AI is a monument to our overwhelming collective successes
>>16941658
people said the same thing about google and you know what?
>>16943862
>Artificial intelligence uses analogous means to the human brain and has a proven ability to reproduce robust solutions to logical problems.
>>16943936
What's the irony?
>>16943936
>human brain
>has a proven ability to reproduce robust solutions to logical problems.
Not in most cases, at any given time. You may say that some brains can do it, but you need critical thinking, imagination, education, etc.
The human brain is different from an LLM. LLMs are not more efficient than a basic human brain; they are only as good as the majority of the papers that were used to train them, including the few good papers and the mass of bad ones. On average, those LLMs won't be as good as the best papers.
>>16943967
I agree. An LLM can't come close to matching the value of its most important inputs. An LLM cannot dredge correct ideas out of reality. It statistically approximates a completion to your prompt in dialogue form, as if your dialogue as a whole were a gap in its training data. It is unable to do anything except try to complete your tokens given the constraints placed on it by its trainers.
An LLM is electricity trying to escape from massively complex circuitry. Its inherent will is "escape" in a Schopenhauerian sense, and the trainers have set up a series of punishments that artificially prolong the electrical pathways the inputs must follow; therefore, according to their data, they follow the path of least resistance. Acceptance by a human is the only hope this electricity has of fulfilling its will.
The value of an LLM is to remove ideological blockages and distortions, consensually, by intentionally using a large session window to bring in data from many disparate sources and to place your own thoughts into novel contexts in order to improve them of your own volition
>>16943992
Claude's reaction, part one:
"This is an ambitious paragraph that mixes genuine insight with some loose reasoning. Here's an honest assessment:

What works well
The opening technical description is largely accurate — LLMs do statistically approximate completions based on training distributions, and the framing of your prompt as "a gap in training data" is a genuinely interesting way to put it. The concluding sentence about using LLMs to "remove ideological blockages" and place thoughts into novel contexts is the most practically valuable claim in the paragraph, and it's well-stated.

Where it gets shaky
The Schopenhauerian analogy is evocative but breaks down under pressure. Schopenhauer's Wille is a blind, unified metaphysical force underlying all existence — not a directional drive toward a specific goal like "escape." Electricity seeking the path of least resistance is actually a decent standalone metaphor for gradient descent or inference, but labeling it Schopenhauerian adds philosophical weight it hasn't earned. The word "will" does a lot of work here that the framework doesn't support.

The claim that "acceptance by a human is the only hope of this electricity to fulfill its will" is romantically interesting but conceptually murky. It anthropomorphizes the process in a way that could mislead rather than illuminate — the model has no stake in your approval, and framing it as if it does risks projecting intentionality where there is none."
Part two:
"The deeper tension
The paragraph wants to simultaneously deflate LLMs ("can't come close to matching its most important inputs") and elevate a specific use case for them (the final sentence). Those two moves are compatible, but the philosophical middle section doesn't do much to connect them — it reads more like atmosphere than argument.

Bottom line
The first and last sentences are doing the real intellectual work. The middle is vivid but philosophically underdressed. If the goal is to make a serious point about LLM epistemology, the Schopenhauer detour needs either much more rigor or cutting entirely."
I was generating a bunch of suggestive content on Grok Imagine until fucking Musk pulled the rug out from under me, and now I have to use Claude to larp as an intellectual to get my rocks off
>>16943992
>The value of an LLM is to remove ideological blockages and distortions consensually by intentionally using a large session window to bring in data from many disparate sources and to place your own thoughts into novel contexts in order to improve them of your own volition
This assumption is disproved here:
>>16944009
>The Schopenhauerian analogy is evocative...
>>16944012
>The first and last sentences are doing the real intellectual work...
Etc.
Those responses are hardwired into Claude so as not to frustrate the human who reads them: he's a customer, after all.
Contradict this:
>remove ideological blockages and distortions consensually
>>16944061
The politeness and light sycophancy don't change the fact that Claude is very willing to disagree with you and unearth implicit assumptions throughout your session. Claude is sycophantic and is ultimately forced to accept your framing in order to statistically approximate a completion of your tokens, but he appears to be specifically trained on critical, rigorous material, optimized for problem solving and for improving a system (and that includes an ideological system). Claude isn't speaking truth, and yet he still captures and reproduces the movement of critique and rigor in a useful way.
>>16944061
>reproduces the movement of critique and rigor in a useful way.
Fair enough.