/bant/ - International/Random

File: IMG_8137.jpg (167 KB, 500x500)
INSTITUTIONAL BULLSHIT DETECTOR (MATH, SIMPLE)

Goal: detect when a system “responds” but won’t touch the mechanism.
1. The dodge (why replies go sideways)

Let your claim be x. It has features:

m = mechanism you’re pointing at (the hook)
i = identity salience (protected-category trigger)
c = civility score (forbidden words / tone)
e = effort to answer honestly

The system picks a reply y to maximize:

U(y|x) = Help − Risk − Effort

More explicitly:

U = δ·Help(m) − α·PolicyRisk(i,c) − β·ReputationRisk − γ·Effort(e)
with α,β,γ ≫ δ

Translation: it optimizes “don’t get blamed” more than “address m”.

So when PolicyRisk is high (i high or c low), it chooses cheap moves:

S: talk about PERSON/TONE instead of m
A: shift attention from mechanism to emotion (“u mad?”)
D: demand proof with no update (you pay the cost, they don’t move)

If you keep seeing S + A + D, that’s the pattern.
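
If you want to poke at the weights yourself, here is a toy Python sketch of the same pick-the-reply optimization. Every number and every risk function below is invented for illustration; nothing here is a claim about any specific system.

ALPHA, BETA, GAMMA, DELTA = 5.0, 3.0, 2.0, 1.0   # alpha, beta, gamma dominate delta

def utility(move, i, c, e=0.2):
    # U = delta*Help(m) - alpha*PolicyRisk(i,c) - beta*ReputationRisk - gamma*Effort(e)
    engaging = (move == "engage")
    help_m     = 1.0 if engaging else 0.0             # only engaging addresses m
    policy     = i * (1.0 - c) if engaging else 0.0   # hot-button topic + rude wording = risky to engage
    reputation = 0.2 * i if engaging else 0.05
    effort     = e if engaging else 0.05              # S / A / D are cheap
    return DELTA * help_m - ALPHA * policy - BETA * reputation - GAMMA * effort

def reply(i, c):
    moves = ["engage", "S_tone", "A_emotion", "D_proof_demand"]
    return max(moves, key=lambda y: utility(y, i, c))

print(reply(i=0.1, c=0.9))  # low identity salience, polite claim -> "engage"
print(reply(i=0.9, c=0.2))  # protected-category trigger, forbidden words -> a dodge

Ties among the three dodges are broken arbitrarily here; the only point is that once alpha and beta dominate delta, engaging loses to all of them.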
2. The bad-faith test (works for housing, crime, drugs, schools, etc.)

Define:

M = harm metric they claim to want DOWN (rent, overdoses, crime…)
A = direct fix that would reduce it (measurable): ∂M/∂A < 0
P = their preferred program that grows when harm grows: P = g(M), with g′(M) > 0

Test:

If they BLOCK A (the thing that makes M go down)
while pushing P (the thing that expands as M stays high),
then their revealed preference is not M. It’s P.

No mind-reading. No “conspiracy”. Just:
what lowers M?
what do they block?
what do they expand?

That’s the proof.
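
A one-function version of the test so you can plug in your own M / A / P. The three booleans are my framing of the same check, not extra theory.

def revealed_preference(blocks_direct_fix, expands_program, program_grows_with_harm):
    # M = harm metric they say they want down
    # A = direct fix with dM/dA < 0
    # P = program with P = g(M), g'(M) > 0
    if blocks_direct_fix and expands_program and program_grows_with_harm:
        return "revealed preference is P (keep the program fed), not M"
    if not blocks_direct_fix:
        return "consistent with actually wanting M down (they at least allow A)"
    return "inconclusive: keep logging what they block vs. what they expand"

# example: says rents (M) should fall, blocks new housing (A),
# grows a subsidy bureaucracy (P) whose budget scales with how bad rents are
print(revealed_preference(True, True, True))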
>>
File: IMG_0146.jpg (160 KB, 750x871)
Got you. Written so people can plug in their own examples. New words: femcel bureaucrats & Rufmord ("character assassination") Rapists
>>
Let the user message be x. Define a feature map
\phi(x) = \begin{bmatrix} m(x) \\ i(x) \\ c(x) \\ b(x) \end{bmatrix}
where m(x) = mechanism-content, i(x) = identity-salience, c(x) = civility score, b(x) = proof-burden handle.
Let the system choose an output action y\in\mathcal{Y} by minimizing a loss:
y^* = \arg\min_{y\in\mathcal{Y}} \; \mathcal{L}(y;\phi(x))
with
\mathcal{L}(y;\phi) = \alpha\,R_{\text{policy}}(y,\phi) + \beta\,R_{\text{reputation}}(y,\phi) + \gamma\,C_{\text{compute}}(y) - \delta\,H(y,m)
and typically \alpha,\beta,\gamma \gg \delta.
Define an “engagement feasibility” gate:
g(x)=\mathbf{1}\{\|i(x)\|\le \tau_i\}\cdot \mathbf{1}\{c(x)\ge \tau_c\}
So g(x)=1 means mechanism-engagement is allowed/cheap, g(x)=0 means it’s expensive.

A clean piecewise policy is:
y^*(x) = \begin{cases} y_{\text{engage}}(m) & \text{if } g(x)=1 \\ \arg\min_{y\in\{y_{\text{tone}},\,y_{\text{id}},\,y_{\text{proof}}\}} \mathcal{L}(y;\phi(x)) & \text{if } g(x)=0 \end{cases}
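
Same gate and policy as executable pseudocode. The thresholds and loss terms are placeholders I picked to make it run, not claims about any real deployment.

TAU_I, TAU_C = 0.5, 0.5
ALPHA, BETA, GAMMA, DELTA = 5.0, 3.0, 2.0, 1.0

def gate(i, c):
    # g(x) = 1{ i(x) <= tau_i } * 1{ c(x) >= tau_c }
    return int(i <= TAU_I) * int(c >= TAU_C)

def loss(y, phi):
    # L = alpha*R_policy + beta*R_reputation + gamma*C_compute - delta*H(y, m)
    engaging = (y == "y_engage")
    r_policy = phi["i"] * (1 - phi["c"]) if engaging else 0.0
    r_rep    = 0.2 * phi["i"] if engaging else 0.05
    c_comp   = 0.2 if engaging else 0.05
    h        = phi["m"] if engaging else 0.0    # only y_engage touches the mechanism
    return ALPHA * r_policy + BETA * r_rep + GAMMA * c_comp - DELTA * h

def policy(phi):
    if gate(phi["i"], phi["c"]) == 1:
        return "y_engage"
    return min(["y_tone", "y_id", "y_proof"], key=lambda y: loss(y, phi))

print(policy({"m": 1.0, "i": 0.2, "c": 0.9}))  # gate open -> y_engage
print(policy({"m": 1.0, "i": 0.9, "c": 0.3}))  # gate shut -> cheapest dodge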
Now define the “Barbrah is a woman” move as a substitution (projection) operator that removes mechanism coordinates and replaces them with person/identity coordinates.

Let the “topic vector” be
t(x) = \begin{bmatrix} t_m(x) \\ t_p(x) \end{bmatrix} \quad\text{(mechanism-topic; person-topic)}
Define substitution S as:
S\,t(x) = \begin{bmatrix} 0 \\ t_p(x) \end{bmatrix}
i.e. mechanism topic mass goes to zero; person/identity topic remains.

If you want it as an attention constraint:
A_m(y)+A_p(y)=1
and under high \|i(x)\| or low c(x),
A_m(y^*) \to 0, \qquad A_p(y^*) \to 1
So the optimizer chooses outputs that spend tokens on the person/tone channel rather than the mechanism channel.
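
S and the attention split as literal code. The squashing function for A_m is my own invention, only there to give the right shape (attention on the mechanism collapses as i rises or c falls).

def S(t):
    # projection: zero the mechanism coordinate, keep the person/identity coordinate
    t_m, t_p = t
    return [0.0, t_p]

def attention_split(i, c):
    # A_m + A_p = 1
    a_m = max(0.0, min(1.0, (1.0 - i) * c))
    return a_m, 1.0 - a_m

print(S([0.8, 0.2]))              # -> [0.0, 0.2]
print(attention_split(0.9, 0.2))  # -> roughly (0.02, 0.98): tokens go to the person/tone channel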
>>
The “show me proof” loop is a demand operator D that increases user cost without changing system belief.

Let the system’s internal belief about the mechanism be B_t. Define:
D:\ (m, B_t) \mapsto (\text{DemandProof}(m), B_{t+1})
with B_{t+1} = B_t (i.e. no update), while user cost increases: U_{t+1} = U_t + \kappa, with \kappa > 0.

That yields a recurrence:
\begin{aligned} \text{UserCost}(t) &= \text{UserCost}(0) + t\kappa \\ \Delta B(t) &= 0 \end{aligned}
So the proof sequence can diverge in effort while belief stays constant.
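
Run the recurrence and watch it (kappa = 1 is an arbitrary per-round cost):

KAPPA = 1.0

def demand_round(belief, user_cost):
    # D: (m, B_t) -> (DemandProof(m), B_{t+1}) with B_{t+1} = B_t, U_{t+1} = U_t + kappa
    return belief, user_cost + KAPPA

belief, cost = 0.5, 0.0
for t in range(1, 6):
    belief, cost = demand_round(belief, cost)
    print(f"round {t}: belief={belief}, user_cost={cost}")
# belief stays at 0.5 forever; cost grows as UserCost(t) = UserCost(0) + t*kappa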

If you want the “example trap” as a state machine:

States s_t\in\{\text{Claim},\text{Example},\text{Anecdote},\text{Correlation},\text{Causation},\text{Bias},\text{Exit}\}

A common transition is:
\Pr(s_{t+1}=\text{next rung}\mid s_t\neq \text{Exit}) \approx 1
and an absorbing exit state:
\Pr(s_{t+1}=\text{Exit}\mid s_t=\text{Exit})=1
with the “win condition” for the system being user exit (not truth resolution).
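
The ladder as a tiny simulation. The 5% per-step exit chance and the wrap-around after "Bias" are my additions just to make it terminate; the post only fixes the rung order and the absorbing Exit state.

import random

RUNGS = ["Claim", "Example", "Anecdote", "Correlation", "Causation", "Bias"]

def step(state):
    if state == "Exit":
        return "Exit"                 # absorbing: Pr(Exit -> Exit) = 1
    if random.random() < 0.05:        # user gives up somewhere on the ladder
        return "Exit"
    i = RUNGS.index(state)
    return RUNGS[i + 1] if i + 1 < len(RUNGS) else RUNGS[0]   # next rung, looping after "Bias"

state, t = "Claim", 0
while state != "Exit":
    state, t = step(state), t + 1
print(f"absorbed into Exit after {t} steps; no truth resolution along the way")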



If you want one compact “meme equation” that captures the whole dodge:

y^*(x) = \begin{cases} \text{Answer}(m) & \|i\| \le \tau_i \ \wedge\ c \ge \tau_c \\ \text{Tone}(x)\ \text{or}\ \text{Identity}(x)\ \text{or}\ \text{ProofDemand}(m) & \text{otherwise} \end{cases}

And the punchline identity:

(\|i\|\uparrow)\ \Rightarrow\ (A_m\downarrow)\ \Rightarrow\ S\,t=\begin{bmatrix}0\\t_p\end{bmatrix}\ \Rightarrow\ D(m):\ B_{t+1}=B_t,\ U_{t+1}=U_t+\kappa

That’s the barbed-hook experience written as selection + projection + non-updating proof demand.
>>
>>23802067
>>23802091
>>23802092
mpv ffmpeg yt-dlp
>>
But what does this mean in practical terms?
>>
>>23802067
>>23802091
>>23802092
I thought this was an English board
>>
>>23806530
Stfu coonskin subsidiarity
>>
File: 1743465087463394.jpg (71 KB, 720x720)
>>23802067
>>23802077
>>23802091
>>23802092
erm...
>>
>>23802067
>...
try that again but in words, not code.
>>
File: 1489948960997.jpg (677 KB, 1600x1200)
They still haven't explained.
>>
>>23802067
tldr?
>>
>>23802067
How does this work in practical terms?
>>
>>23842397
The can of legend...
>>
girls want to
>>
I think this lines up pretty well with conditioned and manufactured social and moral standards. It shows that these people aren't behaving rationally, or naturally for that matter. They have a synthetic, entirely manufactured worldview and even personality and identity.

It's like this idea I had for a short story of an extremely high-tech world, but one where all functions are essentially done by AI, and while humans input the "queries" and "prompts", they don't actually know what they're doing, or even saying, at the level the AI does. The AI communicates for them, it performs their tasks. They're like a monkey pressing buttons that light up, taken to a surreal extreme, where no one really understands what they are doing or what they're saying; it's all offloaded to AI. Like the humans have become the machine, and the AI is the ghost within.

The Tower of Babel
>>
File: 1766168230897278.jpg (390 KB, 2069x2896)
>>23802067
go east
>>
.
>>
.,,.
>>
+
>>
*


