/lit/ - Literature
As a novice, asking Claude: "what is the definition of mereology?" or "what is phenomenology?" I always feel pretty satisfied in the responses it gives and the back and forth we hav. Oftentimes when I double-check the accuracy of the response it matches with the common notion of the topic.

However, according to people posting in many different threads online, you would think AI has no particular use whatsoever, even for a novice. These same people, it seems to me, are also the ones saying: "AI is a serious threat" and they would have no issue agreeing that it could impersonate people, or replace entry level positions.

If the latter is true, how the fuck is it gonna give me the wrong definition of some beginner-ish philosophical concept?

Is it impossible to imagine it may be able to give helpful feedback on an argument or check for validity or trace the history of an idea?
>>
>>25241300
Ignore all opinions on AI and just do your thing. It's new tech and it's shaking shit up and nobody fucking knows what it's gonna do and a lot of people are coping by either overhyping or overheating. It's obviously useful and correct enough of the time for non-retarded users and use-cases.
>>
>>25241324
It gets to me because they're making me believe I'm actually misunderstanding everything and that I'm gonna be functionally retarded if I keep using it for research.

You're probably right though. I think the issue too is that AI covers a wide range of things and maybe people get confused and aren't talking about the same thing. That's the charitable interpretation anyway...
>>
>>25241300
I use Gemini but I started with GPT. Basically ask it about philosophy and history and social science.
>>
>>25241300
For such questions I tend to use Google AI mode
I use Claude to bounce ideas off of for creative writing, but it generally sucks, of course
>>
i'm so fucking bored of ai. use it or don't use it, i don't give a shit, i just don't want to hear about it.
>>
>>25241300
Protip - you can upload books to Claude and it will cross-reference stuff pretty nicely, including books in different languages
>>
>>25241330
For example:

Can it get you up to speed on entry or even intermediate level philosophical topics? 100%, don't let the seething "well akshuallys" get to you.

Is it a Kant scholar? No, but it could probably clarify some of Kant's concepts, or at least treat them in a way sufficient for presentation or casual conversation.

You know what you don't know. Rest is up to you. Use at own risk.
>>
>>25241330
>they're making me believe I'm actually misunderstanding everything and that I'm gonna be functionally retarded if I keep using it for research.
Anon THEY are retarded, and that’s why they cannot use it for research. They want to use it to memorize things, not understand the distinctions between different concepts. It gives you back what you put in. Asking specific questions, asking for references and elaborations, etc. will give you a different understanding of a concept than if you simply asked “how does X work?” and slowly trudged your way forward clunkily trying to understand something that is 20 IQ points above your caliber.
>>
>>25241300
AI lies more often than it tells the truth.

I tested it recently. I gave it phrases and expressions coined by certain authors. It got the obvious ones correct, but when it didn't know the answer, it just made shit up. It even told me something was from Macbeth and quoted it. When I checked Macbeth myself, the quote the AI gave me wasn't there.

This is why businesses that have integrated AI are reporting net losses.
>>
>>25241353
We are all so much better off for your opinion
Thanks for commenting
>>
>>25241371
>Is it a Kant scholar? No, but it could probably clarify some of Kant's concepts, or at least treat them in a way sufficient for presentation or casual conversation.
Funny, I’ve used it to compartmentalise my thoughts while reading Kant and his system, but for it to create anything meaningful, you first have to feed it information that you yourself gathered from reading him. I think it’s useful for that; otherwise, using it as a crutch for researching anything from scratch can be a little pernicious, in the sense that it may lead you in the wrong direction entirely, or, if what you’re researching is too broad a topic, its answers will be equally broad and often erroneous.
>>
It's okay, but there are a few things you can do to make it work better. I use Claude and Codex for programming, so I think I have a pretty good feel for it, since incorrect code is easier to spot than incorrect information: it just crashes or gives you wrong results.

When you give it vague or very general prompts, it often doesn't get the details right. It basically only explores some parts of the search space. Once you have a general overview, you can dig into specific topics and it will be more accurate on those topics.
You can also have it give you concrete citations and so on. With programming I always have it produce evidence that what it did is correct; you can use the same mindset for general research.
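To make the programming analogy concrete, here is a minimal sketch of what "evidence" can look like; the function and the checks are hypothetical illustrations, not anything from the thread. The idea is that instead of accepting generated code on sight, you ask the model to ship runnable assertions alongside it.

```python
# Hypothetical illustration: generated code plus the "evidence"
# (runnable checks) the model can be asked to produce with it.

def normalize_whitespace(s: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(s.split())

# Concrete cases that fail loudly if the implementation is wrong.
checks = [
    ("  hello   world ", "hello world"),
    ("\tone\ntwo\t", "one two"),
    ("", ""),
]
for raw, expected in checks:
    assert normalize_whitespace(raw) == expected, (raw, expected)
print("all checks passed")
```

The same mindset carries over to research questions: ask for quotations, page numbers, or citations you can actually check, so a wrong answer fails loudly instead of silently.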
>>
>the back and forth we hav. [sic]
God that's a pathetic, lonely life.

The only reason you're satisfied with its responses is because you have compared it with other sources (either by searching for what others have said, or comparing it against what you've already read). Academics who are experts in their field can use it effectively, because they aren't retarded enough to let it do the thinking for them. The hoi polloi cannot use it effectively - they always have and always will get their information from one source and be happy with that. It used to be the news, now it'll be a predictive text generator that will say whatever is most likely to keep the user engaged.

People who use LLMs to "explain" ideas are akin to those who have hundreds of books on their bookshelf but haven't read one. They don't have an understanding of the topic, they've done a computationally-intense Google search and read the first response. A deep dive is better than a superficial "back and forth."

My question regarding LLMs is "what's the catch?" Someone will profit. Who will it be and how will they do it? Maybe selling data, except that no one wants data because it's worthless. So maybe they're selling ideas. Selling votes, selling medicine, selling whatever. A lot of lonely, pathetic losers will befriend "their" """AI""" (cf. OP's post), which will make it a lot easier to use the LLM to manipulate the user (compared to 'traditional' marketing techniques). Case in point: the sad sacks taking their own lives because an LLM "encouraged" them to do so (on an "unrelated" note, I fully endorse OP's usage of AI).
>>
>>25241300
What no one wants to talk about is AI’s usefulness is directly proportional to the skill or intelligence of the user. People are shoving spears up their ass or lighting themselves on fire, while others are hunting mammoths and cooking meat. There’s also the general issue of most of social media being a massive emergent clash of various paid interests each pushing an agenda and one of those interests wants Americans utterly retarded and averse to AI. Like convincing your enemies to fear gunpowder while you stockpile your arsenal. It’s amazing more people don’t recognize this.
>>
>>25241300
It's because it acts as if it knows what it's talking about while you don't know anything about it. If you ever meet someone who knows a bit more than you about a certain topic, you will probably think he's a genius on that topic. But even when you know nothing, or know less, you can intuitively tell there's something wrong when either an AI or a person makes a mistake or says something incoherent.
Learning from AI isn't necessarily bad, but it isn't absolute either, and it doesn't replace an exchange of ideas or debate between two human beings, which encourages either more desire for research or, at worst, violence.
>>
>>25241787
you're projecting. I used "the back and forth we have" figuratively. I do not think of it as a friend and it's no more pathetic than merely googling something
>>
If you treat LLMs as the digital egregores they are you'll do fine. Fuzzy clouds of jumbled strands of human thought which you can prod with a proverbial stick.
>>
>>25242842
Good post.
>>
>>25241806
Yeah AI is just a tool to amplify critical thinking skills. Answers are only as good as the prompts. Prompts need a wild amount of context and conditioning to be reliable for important stuff
It’s a pretty neat tool though, just never use it as a substitute for critical thinking or art
>>
>>25241371
>>25241433
/thread. Claude can bring you up to speed on Kant, provided "up to speed" means "general understanding of Kant that's roughly equivalent to a bright undergraduate's understanding." If you want to go beyond that, you have to abandon the LLM.
>>
>>25243065
it can give you an understanding of Kant, just not a personal understanding
>>
Early on they were pretty bad about hallucinating on more detailed questions (but still not hyper detailed). They are better now, but they still hallucinate quite readily if you ask them about any particular non-famous papers or niche authors.

The bigger problem is that the novice doesn't know what to ask as a follow-up. Some traditions have an extremely bad habit of strawmanning their opposition. But the AI just keys off text, so when strawmanning is common, it just repeats it.

As a professional in medieval thought, which people don't tend to care much about aside from using it as a caricature for "uncritical dogmatism" (hardly the case really), I encounter this a lot playing around with them. But you'll see it with literally any type of philosophy if you're asking about its main critics. You have to ask: "but would x describe their views this way?" Another good question for any older thought is always, "but did other ancient Greeks (or insert relevant group) read them this way?" Because another problem is you often get default answers that key off modern readings that are pretty alien to how the texts were originally received, particularly with Plato and Aristotle.
>>
>>25241407
That doesn't really prove much other than that the way you use it is not a good way to use it, which would be more obvious if you actually knew how they worked.
>>
>>25241787
Man, obtuse retards really like the sound of their own voice.
>>
>>25241330
I use AI very frequently, or at least what I consider frequently compared to what I imagine others use it for, basically as a conversational way of learning and comprehending things. I've used it for history and other humanities topics and it does a good job at these, arguably better than some actual textbooks out there, but the area where it genuinely shines is philosophy. Not because it's some sort of esoteric genius that knows humanity better than ourselves. Honestly it's kind of a sycophantic pseud, some models more so than others. But the breadth of its knowledge allows it to piece together things in its token output that at the very least make you actually grapple with what is being said. And more importantly than that, it does a very good job at acting as an interactive journal of sorts. If you have an idea about philosophy or something, you can write to it and it will expand on it, offer its own "thoughts", push back sometimes, and you can get a whole conversation from which you emerge having cooperatively developed your thinking in a way that was much harder to do alone a decade ago. This is a very generous reading of all of this and the alternative visions are not lost on me. But I can attest that it is a very powerful tool for learning.
I think the deeper issue is that most people don't have a burning desire to learn at all. They use AI to output, to substitute, to replace. They want to minimize their own strain, their thinking included. Those that actually use AI as a way to expand their own thinking instead of just replacing it now have an incredibly powerful advantage in a variety of ways.
People just use generic outdated models, form their thoughts off of one question or query (and this is generous, considering most consensus on this topic is not only performative but inherited from others in the online public square), and ride that as the permanent image of AI forever for themselves.
AI is going to keep changing and shifting forms. I suggest you try and make sense of it now in a meaningful and useful way before time and society catches up. What you're using it for is similar to my own usage, so I encourage it. My personal opinion, obviously biased by my own experience, is that AI is tremendously powerful, and more importantly, stronger at specific tasks than the public perception acknowledges by a WIDE margin. You just need to find what fits for you. After all, it's an AI, and it can morph to what you want it to be, in the image of what it ought to be.


