People say you can't use large language models on any of the content websites, scientific journals, or anywhere else, even if you fact-check and correct all the errors. AI work is just bad by default. Why is that? Why is technology evil by default? If I want to write on a topic, I can direct AI to create content for me, then edit it, fact-check it, and make sure it's real, useful information, but that's not good enough for them.
>>107886196
It's more about raising the barrier to entry than restricting the quality of content.
Creation is divine, AI is material.
>>107886223
Are people lying about using it? I can produce a bunch of content I find interesting with Claude because it's so good at finding sources and information. I'm 100% sure people use it.
>>107886196
Even if you fact-check the errors, it's almost by definition plagiarism, and publishers get a bit wary about that kind of stuff. If an AI knocks something out that then turns out to be more or less lifted wholesale from some obscure text, everyone involved's rep takes a trip to the shitter. Which is not to say there aren't 'journalists' or authors currently knocking out trash that is 90%+ AI, because there most inarguably is. It's just that this garbage tends to be confined to the trashier end of the market, as most of it is blatantly obvious when pointed out.
>>107886196
Copyright infringement worries, for one.
>>107886299
It's not plagiarism when you use sources.
>>107886196
Because you're a retard. But we don't even need to counter you; you're already done, you see?
debate me
>>107886247
>it's so good at finding sources and information.
>what is a search engine
>>107886196
Because while some people like AI and like what they make with AI, they don't give a fucking rat's ass what you make with AI. Basically, fuck off with your garbage; they want to enjoy their own shit without seeing your gay-ass nonsense.
>>107886247
>It's so good at searching reddit
Outing yourself as a faggot isn't a good idea.
>but it
Basically all the websites it searches, it got the links from reddit.
>>107886196
You're describing AI as an assistive tool (directing it, refining output, and ensuring accuracy), which aligns with what many policies allow. Medium, for example, permits AI for outlining, grammar, or idea refinement but requires labeling for significant assistance and bans fully generated paywalled content. Scientific journals often mandate disclosure of AI use (e.g., in methods sections) and prohibit it as a co-author, but human-edited AI contributions are increasingly accepted if ethical.

The pushback comes from:
- Detection challenges: AI detectors are unreliable, so platforms err on the side of caution to avoid floods of undisclosed AI content.
- Reader expectations: Audiences often prefer "human" content for authenticity, especially in journals where peer review relies on human accountability.
- Precedent and risk: Even edited AI can subtly carry over flaws, and platforms/journals want to preserve their reputation by prioritizing human-led work.

That said, not everyone sees AI as evil; it's transformative for accessibility, efficiency, and creativity when used responsibly. If you're writing on a topic, your method is valid for many outlets; just check their specific guidelines and disclose AI involvement to build trust. Over time, as tools improve, these stigmas may fade.
If it's so useful, why don't you get the key ideas from it and rephrase it yourself?
>>107887836
This is an AI post. Merely recognizing it as such meant I wouldn't ever read it.