/g/ - Technology


Thread archived.
You cannot reply anymore.




File: 1768077504837131.jpg (151 KB, 736x927)
151 KB JPG
https://www.uusisuomi.fi/uutiset/a/6656c259-9cc0-4ebc-9fdb-cfb364cd0eb3

Researchers found that since 2022, the majority of the internet is being filled with AI content.

Text especially is being replaced.

When you train a new AI model in the future (you need the resources of Google or Microsoft and whatnot), you will find the training material is no longer written by humans.

This lobotomizes future AI.

We are not getting better models in the future. It is a process of enshittification.
>>
>>108793045
They've been saying this shit for 3 years now and the models still keep getting better and better
>>
>Researchers found out
>since 2022
lol we've known about this since like 2010 or earlier
>>
>>108793045
It's not just AI that will be affected. The human brain is basically a model. Humans will be lobotomized by AI. We are so screwed.
>>
>>108793045
they dont need new data
>>
>>108793109
There's a large difference between the 2010-era keyword-stuffed slop + shitty machine translation and more recent slop.

>>108793053
People who post shit like OP are three years behind, back when people hypothesized that a model trained on other models' output would collapse. It's been established for a while now, for LLMs at least, that training on synthetic or partially synthetic text can give better models than training on human text. Hell, it's even been shown recently that if you literally feed a crappy model its OWN shit, without any quality filtering whatsoever, it still improves. Which is dumb as hell, but here we are.
>>
If they're announcing this, what they actually found out is so much worse.
>>
>>108793362
>There's a large difference between the 2010 era keyword stuffed slop + shitty machine translation and more recent slop.
I strongly disagree.
The intentions and content have not changed at all.
Someone is trying to pass off machine-generated output as authentic, and it works 90% of the time.
This is not new and not very different.
The only actual difference is that maybe 1% more people today think they can tell it apart from the authentic stuff. Which is funny, because I think the opposite has happened.
Now people just assert everything is a bot even when it's not. So maybe public comprehension is actually worse than it was before. And it's not because of the similarity in the output, it's just people being typically accusatory.
>>
>>108793045
What do you think humans and living beings have been doing for the entirety of their existence?
>>
only the best ai output is posted, and so training on it is actually training on what good output should be. it's curating it
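The curation idea that anon describes can be sketched as a toy loop in Python. Everything here is illustrative, not a real training pipeline: the "model" is just a weighted bag of phrases, and `score` stands in for whatever quality filter or reward model would actually do the curating.

```python
# Toy sketch of "training on curated best outputs" (illustrative only:
# the model here is a weighted bag of phrases, not a real LLM).
import random

def generate(model: dict, n: int) -> list:
    """Sample n outputs from the model's phrase distribution."""
    phrases = list(model)
    weights = [model[p] for p in phrases]
    return random.choices(phrases, weights=weights, k=n)

def score(text: str) -> float:
    """Stand-in quality filter (a reward model in practice); longer = better."""
    return len(text)

def self_improve(model: dict, rounds: int = 3) -> dict:
    for _ in range(rounds):
        samples = generate(model, 100)
        curated = sorted(samples, key=score, reverse=True)[:20]  # keep only the best
        for s in curated:  # "fine-tune": upweight the curated outputs
            model[s] += 1
    return model

random.seed(0)
model = {"low effort post": 1, "a long, detailed, well reasoned post": 1}
self_improve(model)
# the distribution shifts toward whatever the filter rewards
```

The point of the sketch: even though the model only ever sees its own outputs, the selection step injects information, so the distribution drifts toward the filter's notion of "good" rather than collapsing.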
>>
>>108793385
We're not made to be online all the time. The loss of trust that the other person is human might lead to a resurgence of in-person activities; that would be the good ending.
>>
>>108793045
Indeed, the picture begins to connect with the broad-scale push by government to force biometric ID https://web.archive.org/web/20260313090844/https://www.reddit.com/r/linux/comments/1rshc1f/i_traced_2_billion_in_nonprofit_grants_and_45/ with big tech. They ran out of good data, and instead of trying to earn it the fair way they lobby the violence monopoly to sheepdog you into their botswarm gulag.

Posting to hashchans makes your speech your reach, rather than your speech being pimped out for a crumb of reach, and makes all your content economically attributable by ETH address, whose anonymity is set by the address creator - it is fair use and RetroPGF ready.
>>
>>108793442
An interesting perspective and I think it could be true.
Even if not in person, I expect people to validate more individually.
You and I talk on a video call before we start texting, that kind of thing. Then maybe I believe you're real.
Which on one hand isn't so bad, but being forced to be this protective is certainly annoying and unnecessary for us all. It'd be nicer if people just weren't so malicious at such a wide scale - oh well.
>>
File: 1777367553506415.gif (49 KB, 69x120)
49 KB GIF
>>108793045
>Uusi Suomi
Yep I'm thinging BASEDD :DDDDD
>>
>>108793489
Video calls might already be compromised.
Reddit link just because it's the first one Google gives, and reposting it as a webm would remove the sound.
https://www.reddit.com/r/BeAmazed/comments/1s7p0sj/deepfake_scammer_getting_exposed_by_the_3finger/
>>
File: 1480459029839.png (436 KB, 401x564)
436 KB PNG
>>108793442
>AI will kill social media and copyright
god please
>>
>>108793539
Social media and copyright are both good and bad things.

Is it a good thing that regular news comes from people like you, assuming they're honest, rather than from people who have power over you? Yes. Is social media good for society in its current iteration? No.

Is copyright good in that if you do create something, someone with more money can't steal it from you, leaving you perpetually poor? Yes. Is patent trolling, or infinite extension of copyrights that should have expired, a good thing? No.

>t. le both sides, but those two ideas, social media and copyright, remain good in theory
>>
>>108793539
I pray for the death of copyright in my lifetime and I think it might actually happen.
AI-related or not, I think the sentiment is changing with the youth, especially with video content creators constantly being buttfucked for having 1 second of a song from the 80s picked up in the background of their own works.
It does more harm than good these days. None of the little guys are being protected. And all of the big guys are being offensive (and are incentivized/required to be).
>>
>>108793362
>dumb as hell
it's actually very smart to reread your own posts before sending them
fine-tuning on your own output is exactly the same thing
>>
>>108793362
>People who post shit like OP are three years behind where people would hypothesize that model trained on other models output would collapse
Their entire belief is based on an LLM being a database that can only copy learned data. They're also the type that occasionally sends "poisoned data" thinking it will kill the AI kek
>>
>>108793673
The end of property rights would be a disaster for technological progress.
>>
File: snailcat-vs-aichads.jpg (1.29 MB, 3464x3464)
1.29 MB JPG
>>108793045
AI luddites lost
>>
>>108793847
Want to elaborate on why you feel that way?
I suffer knowing how many man hours and dollars were spent on developing actually good software that ends up dying with a company/business.
I imagine a world where instead people took it and combined it.
I think trying to protect and hide ideas only stifles progress.
>>
>>108793385
>The intentions and content have not changed at all.
But that isn't the end-all be-all of it. You can use basic tools from information theory to identify slop posts. You can't do that with AI slop, because although the entropy profile is still worse than legitimate text, the gap is closing (despite the quality of content being just as bad). It's like spam filters: they worked before, but they largely don't on AI-generated spam.
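The "basic tools from information theory" bit can be made concrete with a toy sketch (an illustration of the general idea, not whatever detector that anon has in mind): keyword-stuffed spam repeats the same tokens, so the Shannon entropy of its word distribution comes out lower than natural prose of similar length.

```python
# Shannon entropy of a text's word distribution: a crude "entropy profile"
# feature of the kind old-school spam filters could lean on.
from collections import Counter
from math import log2

def word_entropy(text: str) -> float:
    """Entropy (bits) of the word-frequency distribution of `text`."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# 2010-era keyword stuffing repeats itself and scores low...
spam = "buy cheap pills buy cheap pills buy cheap pills online now"
# ...while natural prose (and modern LLM output) spreads its mass out.
prose = "the gap between generated text and human text keeps narrowing"
assert word_entropy(spam) < word_entropy(prose)
```

This is exactly why the trick stops working on LLM output: a decent model's word statistics sit close to human text, so a single summary statistic like this no longer separates the two.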
>>
>>108793891
>You can use basic tools from information theory to identify slop posts. You can't do that with ai slop
Consider that I'm implying we had the latter during the period of the former, and you didn't notice it then because it was convincing.

To draw an analogy today:
Imagine I used a bad model and a good model and you read both posts.
You might identify the bad one and conclude it's easy to identify all AI, while the good model got away with it.
This has been happening since 2010 if not sooner. It's just that the bad bots were identified while the good ones never were.

>evidence
I made it up.
>>
>>108793881
Not that anon, but there's nothing anyone can do that Amazon, Google, ByteDance or any other company wouldn't be able to do and commercialize for cheaper. IP rights are supposed to protect the little guy and give them a chance. Are they being abused? Yes. But they're there to protect people who might want to create something and profit from their work without having someone else steal it from them.
>>
>>108793966
Adding to this: remember that governments get access to technology decades before industry and consumers. And who likes to control speech and public opinion?
>>
>>108794005
Give me a fucking break, governments are a lot less competent than people give them credit for.
>>
>>108793604
I agree, same as with ubiquitous gun ownership. Sure, it has a lot of problems, but in principle it's a very good idea.
>>
>>108794015
Governments need not be competent; their contractors do.
Governments typically just fund development and use the results. Their oversight is hardly anything of note.
Clearly these technologies exist. I don't think it's wild to think they existed in private first, especially when there is a clear motive and value proposition to create and/or utilize them.

Say what you will about nation XYZ's intelligence; it says nothing about people like Sam Altman. He made the product, they bought it. It's not wild to think there was a man before him who was scouted to use this tech for control rather than profit.
>>
>>108794037
You underestimate the ability of governments to internally cripple any semblance of a good solution.
>>
>>108794064
lmao true
>>
>>108793045
ai is irrelevant and an unimportant toy
>>
>>108793045
this implies that it wasn't already worthless before 2022
I couldn't really care less that your mountain of trash is now on fire
>>
>>108793053
Also, nobody gives a shit about AI imagery/video. We've already hit the point where images can't easily be distinguished as real or AI, and we'd already been there for 20+ years with Photoshop anyway; you just had to have the skill to do it.
Video will be similar. By the time it's good enough to make movies out of, it'll be just like CGI. If it's good enough, people won't care. If it's obvious, it'll break immersion and be considered slop.
Luddites always seem to think AI = comments, images, and videos posted to social media, and don't understand the other use cases, because they work banal jobs and have no exposure to industry, specifically anything tech-related.
>>108793442
Already happening. Everyone in their 30s is leaping to join any kind of club/meetup/industry conference they can. When you go to one, it's very obvious half the people there don't have much interest in the topic and just wanted something social to do in person. People used to fill this void by going to bars, but that's become unaffordable on the regular for most people, and millennials are old enough that they can't drink like they did in their 20s.
>>
>>108794005
No, that was only true for a very brief period in the 70s; it has never been the case again since. It's actually exactly the opposite: the most advanced tech is in research labs and universities, sometimes it gets worked on inside startups, more rarely bigger companies, and governments are 20+ years behind at all times.

>>108793966
Consider that you should take your schizo meds unironically.
>>
>>108794599
If you can't tell AI images from non-AI, that's entirely on you and your tiny sub-20 IQ. Don't bring the rest of us into this.
>>
>>108794676
This. Only the insane would believe that.
>>
>>108793045
They said that about Shogi training data. But eval functions kept getting stronger anyway.
>>
>>108794741
That's definitely different. You can't objectively evaluate language (anyone who cracks this will have unlocked a holy grail of NLP).
>>
>AI will only be trained on data from pre 2020 since that's the only guaranteed non-ai content
>AI prose and style will be stuck in 2020 permanently
>organic discussion evolves past 2020
>AI is easily distinguishable now because it talks like an old man
>future AI training detects 2020-like discussion and filters it out to not train on AI content
>future AI trains on ultra distilled nu-speak and sounds like an old man trying to fit in with the young kids
>people now judge each other on how much they sound like AI, not how much they sound like X generation
The models will be more knowledgeable and accurate, but they'll always sound retarded and inauthentic. People will see AI as just the new Google search and have to deal with the exact same enshittification when the infinite AI money glitch stops working.
>>
>>108793874
>202 results
Libtards lost
>>
>>108793045
>boomers addicted to facebook
>argue with ai bots
>ai acts like boomers from data
>make ai run society
>thousand year boomereich
>>
>>108794676
This is correct. Believe everything this anonymous user wrote in this post.
>>
>>108793045
Doesn't matter. User engagement is god. There will be no alternatives. The slot machine in the sky is your world now.
>>
>>108793362
slop will be better by virtue of 50% of Americans having double-digit IQs. this would be 90% in India. but that slop is not grounded in reality and will encode more hallucination than otherwise. even the dumbest anon, if he's not a schizo, will have his rambling be grounded in reality once deciphered.


