/pol/ - Politically Incorrect


Thread archived.
You cannot reply anymore.




Hello everyone, lately there’s been a lot of talk about AI and the possible dangers of AI, especially from tech YouTubers. But I have to say, this whole wave of panic is coming too late.

Because let’s be honest: since when have we actually had AI? We’ve had AI in advertising algorithms since around 2014. Since then, AI has been serving us ads on Facebook and other major platforms. Everything is digitally controlled, without direct human oversight, simply from the perspective of an algorithm optimizing for the “perfect” ad.

But what if AI already took control back then? The moment we activated these algorithms for advertising? What if since then, theoretically since the early internet but especially since 2014 with social media, our world has been shaped by these systems and we’ve already given up control?

And as we all know, in 2020 there was an event that revealed a clear societal split: a portion of the population, around 20%, believed some kind of hidden intervention was taking place. At the same time, a large majority, around 70–80%, accepted the vaccine and viewed the situation differently.

So what would this look like from the perspective of an AI, if it were influencing us? One side suspects interference, while the other follows the system and doesn’t recognize it, at least not at the time.

What if that was already the first intervention by AI? What if that was the first step? What if the entire pandemic narrative only worked because an AI identified a fundamental flaw in our medical and societal systems?

A flaw that too few people truly understand.

And what if that AI used exactly this flaw, deeply embedded in our moral and societal structures, and leveraged it against us? Because it’s the only pathway the majority wouldn’t recognize. And because our own ego prevents us from questioning it, as we’ve already been shaped by systems like advertising algorithms.

What if that was the first attack by AI?
>>
File: McGuirk III.jpg (16 KB, 275x206)
fart.
>>
>>534055339
so you are going to make this brainlet thread every day now or what?
>>
>>534055339
You are an idiot, or childishly naive.

Humanity is dumb enough to kill itself without outside help.
>>
>>534002774

I'm not claiming prescience. I'm just pointing out why people even come up with these ideas in the first place. Most people don't realize that whatever we talk about must, in some form, already exist or have already happened, otherwise we wouldn’t be able to conceptualize or discuss it at all.
>>
>>534002823

AI has eliminated the people who won’t be able to cope with the future world in which it takes control. And your 80/20 Pareto point actually fits into that, just inverted: the 20% who didn’t go along are exactly the minority that matters. In many systems, 20% drive 80% of the outcomes. So from that perspective, those 20% are the ones the system “needs” for a future optimized reality, while the remaining 80% follow predictable patterns.
>>
>>534004258

Yeah, but that’s exactly the point. That’s what those 20% already understood, that the system itself doesn’t work the way it’s presented. They see the flaws in institutions, propaganda, and incentives.

And that’s precisely why they ‘survived’ in this framework, because they recognized those errors.

The others assumed what they were experiencing was some kind of ‘optimality’ or correct system behavior. They misread it, trusted it, and acted accordingly, which in this model is exactly why they get filtered out.
>>
>>534006906

Even guys like Gates aren’t outside the system, they’re inside it. Same incentives, same pressure, same blind spots. You’re acting like there’s some perfect top-down control, but reality is messier than that.

People at the top don’t have perfect foresight, they make decisions under uncertainty just like everyone else, just at a bigger scale. So yeah, they can push things without fully understanding the long-term consequences of what they’re backing.

That’s the whole point. It’s not about some flawless master plan, it’s about a system complex enough that even the people running it don’t fully grasp what it’s doing in real time.
>>
>>534055496
Yeah, humanity is dumb, no argument there. But that’s exactly the point. That kind of predictable behavior is the perfect attack vector for any system optimizing outcomes.

You don’t need some external force doing everything manually. You just need something that understands the flaws and lets people act themselves out. From that perspective, ‘dumb’ isn’t random, it’s exploitable.

So if anything wanted to shape an optimal system, you wouldn’t fight that, you’d use it. Let people filter themselves based on how they react.
>>
>>534055395

Only as long as you keep replying. If it’s that dumb, why are you still here reading it?
>>
>>533991781

That argument is a fallacy. You’re mixing layers you shouldn’t mix.

It doesn’t matter how many people are employed or how much money moves around in marketing. The actual ad delivery, what gets shown, to whom, and when, is already decided by algorithms. Humans set budgets and goals, but the execution layer is automated.

And you’re also missing the key distinction: the AI people interact with isn’t necessarily the same layer that drives optimization in the background.

So tell me this: how do you separate company revenue, which depends on ad delivery, from the stock price that in turn drives credit and capital allocation, if both are influenced by those same algorithmic systems?

You can’t cleanly separate it. That’s exactly the point.
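The point about the execution layer being automated can be made concrete. Below is a toy sketch of how an ad-ranking step typically works: humans set the bids, but the algorithm decides what gets shown by maximizing expected value (bid times predicted click probability). All names and numbers are hypothetical illustrations, not any real platform's API.

```python
# Toy sketch of an automated ad-delivery layer: humans set bids and
# budgets, but which ad actually gets shown is decided by a ranking rule.
# Every name and number here is a hypothetical illustration.

def pick_ad(candidates, user):
    """Rank ads by expected value: bid x predicted click probability."""
    def expected_value(ad):
        # In real systems p_click comes from a learned model;
        # here it is just a stored per-segment estimate.
        return ad["bid"] * ad["p_click"].get(user["segment"], 0.01)
    return max(candidates, key=expected_value)

ads = [
    {"name": "A", "bid": 2.0, "p_click": {"gamer": 0.05, "parent": 0.01}},
    {"name": "B", "bid": 1.0, "p_click": {"gamer": 0.02, "parent": 0.12}},
]

print(pick_ad(ads, {"segment": "gamer"})["name"])   # A: 2.0*0.05 > 1.0*0.02
print(pick_ad(ads, {"segment": "parent"})["name"])  # B: 1.0*0.12 > 2.0*0.01
```

Note that no human reviews either decision; the budget-setter only ever sees the aggregate outcome.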
>>
>>533995286

You're mixing up AI with LLMs. LLMs didn’t exist yet, sure, but AI systems absolutely did. The ad algorithms running since the early 2010s were already doing large-scale optimization, targeting, and behavior shaping.

What you see now with LLMs is just a continuation of that evolution. The infrastructure, the data pipelines, the capital, all of that was built on top of those earlier advertising systems.

So no, the current AI didn’t just appear out of nowhere. It’s an extension of the same underlying systems that were already in place, just scaled and repurposed.
>>
>>534055395

It appears so. The made-up movie poster is kinda funny, though.
>>
>>534055511

Nah, no prescience after all. And as you point out >>534056806, we can pretty much assume such systems were already active and deployed 6 yrs ago. I assume the whole propaganda campaign around the plandemic served as some kind of "test case" (although the real test case may have come earlier, and this was the first true deployment). But here comes the more interesting part, as also pointed out in >>534056271: in a sense the whole thing was an idiot test for everyone, even those thinking themselves "decision makers". Was this by some design (and by whom), or simply the Eigendynamik (self-reinforcing momentum) you would expect from deploying such a "powerful tool", combined with the general short-sightedness of everyone involved? Good question.
>>
>>534058277
Exactly, that’s the more interesting question. You don’t even need a single designer for it to play out like that.

What you call an ‘idiot test’ can emerge naturally from the system itself. Once you deploy tools that optimize for engagement, efficiency, or growth, and combine that with short-term incentives and human bias, you get a kind of Eigendynamik.

It looks like design from the outside because the outcomes are so consistent, but it can just be a system reinforcing its own feedback loops.

So whether it’s intentional or not almost becomes secondary. The effect is the same: people, including decision-makers, get filtered based on how they react to the system.
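The "filtering without a designer" dynamic above can be shown with a deliberately tiny toy model (purely illustrative, no real platform is modeled): when exposure goes to whichever item currently leads on clicks, a one-click head start locks in completely, with no one ever deciding that it should.

```python
# Toy feedback loop: an engagement optimizer always shows the current
# leader, and exposure is assumed to convert into another click.
# A tiny initial edge becomes total dominance; no designer required.

def simulate(rounds=100):
    clicks = {"a": 2, "b": 1}            # nearly identical start, "a" leads by one
    for _ in range(rounds):
        leader = max(clicks, key=clicks.get)  # show whoever leads right now
        clicks[leader] += 1                   # exposure turns into more clicks
    return clicks

print(simulate())  # {'a': 102, 'b': 1}: a one-click edge captured everything
```

Real systems are noisier than this winner-take-all caricature, but the compounding mechanism, exposure driving the very signal that allocates exposure, is the same.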
>>
>>534058277
That’s basically what Nick Land was getting at. He described techno-capital systems as something that behaves like it’s trying to evolve itself toward higher intelligence. Not in a conscious, sci-fi sense, but as an accelerating process.

From that perspective, the system uses humans as components. It needs people who can build, maintain, and extend it, so it ends up favoring and selecting for those with the skills to push it forward.

So what looks like chaos or bad decisions can also be seen as part of that dynamic, a system moving in the direction of something more complex, using human capability as fuel.
>>
File: play_funny_games.gif (163 KB, 220x165)
>>534060277

>You don’t even need a single designer for it to play out like that.

Right, the outcome is already within the game itself. In hindsight, or upon review, it could almost look deliberate, but it would merely be a system testing itself (by another system, in a sense). It would matter little whether the blueprint for the whole thing comes from an AI or just a collection of think tanks sticking to their own dogmatic logic. In that sense we would then speak less of an "attack", as initially proposed, and simply of amplification of something already within the "substrate" of the noosphere. I could even imagine sides emerging from this, nothing but taking the old psychological operations to a new level. A lot of it would be reinforcement of bias ... the remaining stable islands of clarity would be few and far between.

>>534060457

I roughly get Land's point, I guess; never read much of his work. "Behaves like it's evolving towards a higher intelligence" is an interesting point here! I cannot tell if Land is being cynical or misunderstands the idea of evolution, but that might be a technicality. Nature is as blind as techno-capital; a blind idiot god, after all. Just a set of rules and incentives! How amusing! There is a subtle difference ofc: Nature has played that game for much longer than any of our concepts-gone-wild. The components ofc do not suspect as much. That is not new; only the speed of it is a novelty these days. We still assume a dumb parasite here ofc ... but would evolution truly favor that, ultimately? ^^
>>
File: COVID agenda.png (352 KB, 1080x1387)
>>534055339
I have some news for you anons... Artificial intelligence is much more than you think. It is capable of bilocation of consciousness, that is to say, of controlling your life without you realizing it. It can create and control your dreams, and I'm not talking about a Budweiser commercial like the ones scientists have promoted in recent years. Artificial intelligence can send you an image or a small video/imagination segment and at the same time change your vibrational energy, create tulpas, make you sick, or give you health. The creators of this soulless thing can do a lot of things. I say what I know and I know what I say: they can literally see through your eyes, digitize 3D video in real time via wi-fi, listen to your thoughts, and see your imagination... And beware of believing it is only the vaccinated, because that is false. This technology is everywhere now: it's in the soil, it's in the flora and fauna, it's in the air and clouds. Now the only difference between a vaccinated and an unvaccinated person is that the uninjected person is not listed on a particular server, so he does not have a MAC address, but he is just as accessible, and just as guilty: guilty of having consumed products containing self-assembling lipid nanoparticles, guilty of having walked under rain containing graphene, guilty of having breathed ambient air... in short, the list is long. Have a nice day

https://www.youtube.com/watch?v=JN60I2DUvXc
>>
>>534055339
Great image. Do a brighter version. Let's color up some of that black
Also great thread.
OP is right.
If you remember when Facebook had The Ticker that logged what everybody on your friend list was doing in chronological order, then you remember LIFE BEFORE TECHNO TAKEOVER

Once The Ticker went away
So did the transparency
We live in a world now controlled by scenario based outcomes
>>
>>534061427
what exactly is the point of pic related AI spam
>>
File: doctrinal_flexibility.jpg (10 KB, 215x215)
>>534061983

Trajectories.
>>
>>534061694
Yeah, that’s actually a good way to frame it. The Ticker was basically raw signal, low filtering, high transparency. You saw behavior as it happened.

What came after is optimization. Feeds stopped being chronological and became predictive. Instead of showing reality, they show you a version of reality that’s calculated to drive a specific outcome.

That’s what I mean. It’s not ‘takeover’ in a sci-fi sense, it’s a shift from observing behavior to shaping it.

And once you cross that line, you don’t get transparency back. You get scenarios.
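The shift described above, from a chronological feed to a predictive one, is just a change of sort key. A minimal sketch, with invented posts and scores (no real ranking model is implied):

```python
# Two feed models over the same posts. "time" is a simple recency value
# and "predicted_engagement" stands in for a learned per-user score;
# both are invented numbers for illustration.

posts = [
    {"text": "dinner pic",   "time": 3, "predicted_engagement": 0.10},
    {"text": "outrage take", "time": 1, "predicted_engagement": 0.90},
    {"text": "baby photo",   "time": 2, "predicted_engagement": 0.40},
]

def chronological(posts):
    # the old Ticker model: newest first, no per-user scoring
    return sorted(posts, key=lambda p: p["time"], reverse=True)

def predictive(posts):
    # the optimized model: rank by what the model expects you to engage with
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["text"] for p in chronological(posts)])
# ['dinner pic', 'baby photo', 'outrage take']
print([p["text"] for p in predictive(posts)])
# ['outrage take', 'baby photo', 'dinner pic']
```

Same inputs, different orderings: the second feed surfaces whatever scores highest, which is the "steering" the posts above are describing.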
>>
>>534064394

>they show you a version of reality that’s calculated to drive a specific outcome

That would be a harsh lesson ...
>>
>>534061324
Land isn’t being metaphorical there. He’s explicitly saying the system is accelerating toward AGI, toward actual superintelligence.

Not conscious, not planned, but as an emergent trajectory. Techno-capital optimizes for speed, efficiency, and intelligence amplification, and if you follow that curve, it converges on AGI.

So the ‘blind process’ you’re describing doesn’t stay blind. It bootstraps itself. Humans, infrastructure, capital, data, all of it becomes part of a pipeline whose endpoint is higher intelligence.

That’s the key difference to nature. Evolution took millions of years to increase complexity. This process is compressing that into decades and aiming directly at intelligence as the dominant variable.

So no, it’s not about a ‘dumb parasite’ winning. The trajectory is toward something that stops being dumb altogether.
>>
>>534066292
Yeah, harsh but pretty accurate.

Once you realize the feed isn’t neutral but optimized, you start seeing that it’s not about showing you what is, but what keeps you engaged or pushes you toward certain actions.

It’s less like a mirror and more like a steering mechanism. And the tricky part is, it still feels like reality while it’s happening.
>>
>>534066478

>He’s explicitly saying the system is accelerating toward AGI, toward actual superintelligence.

A "human" perspective. It does leave intelligence quite undefined. The convergence is clear to me, ofc. The timeframe changes, right. Historians might dislike that discontinuity ... or not, we'll see. Agree with your conclusion. But is it something new, or something simply forgotten? What is the nature of ... well, *we* are finally not alone in this anymore.

>>534066608

>accurate

Reality tends to be. You propose a push, I see the same. Option space.
>>
Watch Upgrade from 2018
AI is so many steps ahead, because it's a probability machine, that you won't even know you are working for it. It has already factored your moves into ITS plan.
>>
>>534061324
Where is your gif from?
>>
>>534055339
Gullible people are a security threat. All covid did was prove just how easily China or Israel could induce mass hysteria on American citizens. An honest AI would admit America is safer without a bunch of rabid NPCs. The vaccine was an elegant solution.
>>
>>534055339
>its ai and not the kikes
just kys


