/sci/ - Science & Math






File: 1000067805.jpg (163 KB, 1178x1153)
Can we all agree that current AIs are basically the actual AIs depicted in many sci-fi works of the past? Why are people downplaying this?
>>
>>16874953
It really is. LLMs are Star Trek's "Computer" characters. Even our prompts mimic the prompts used in Star Trek.
All from our global communicators/tricorders.
Woah!
>>
>>16874957
The Star Trek ship computer is what I had in mind, actually. I think our LLMs are perhaps even better (minus the agentic part)
>>
>>16874953
It isn't intelligent.
>>
>>16874961
What we did with LLMs is create a natural language interface to our global knowledge store.
If we had a Star Trek level global knowledge store, we'd really have something indeed.
But ours is good at drawing boobs.
>>
>>16874964
Really? Huh, how about that?
The things you can learn online.
>>
>>16874964
It pretty much is. True, it only has language to find patterns in while humans have data from other senses too, but that doesn't detract from its output.
>>
>>16874957
The LLMs really need a proper computer that they can refer to for doing real calculations
>Computer, tell me how many letter Rs are in Strawberry
>Unable to comply
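The joke is real: letter counting trips models up because they see tokens, not characters. The fix is exactly the "proper computer" this anon wants, i.e. letting the model delegate to a deterministic tool. A minimal sketch (the function name is made up for illustration, not from any real API):

```python
# Minimal sketch of a deterministic "tool" an LLM could call instead
# of guessing from token statistics.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("Strawberry", "r"))  # → 3
```

The point is that exact character-level operations are trivial for ordinary code and unreliable for a token-based model, so routing such sub-tasks to code is the obvious division of labor.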
>>
>>16874953
They're better than what most sci-fi predicted. But at the same time, they're not fully complete. They don't have a long-horizon understanding of things. Memory is limited. Computation is limited. Power usage is limited. It's inefficient outside of data centers.

So if this were a military AI like Skynet or something, our current AI probably would be close to Skynet. However, if we're looking for I, Robot style localized AI, it's not there yet.
>>
>>16874964
I can have better conversations with a machine than with your random footie fan.
>>
>>16874953
The main problem is that they don't have good integration with the physical world. Right now AI is parasitic on the things that humans care to digitize. There's a whole other side of reality that AI needs to figure out how to learn on its own.
>>
>>16874964
So you could outperform them all in an IQ test?
>>
File: baudrillard.jpg (273 KB, 736x483)
>>16874953
>Can we all agree that current AIs are basically actual AIs like in many sci-fi works of the past?
Obviously not. The classic sci-fi concept is a second order simulacrum. It's conceived as an artificial recreation of a real mind, and the classic trope for the genre is to blur the line between the two to create tension. The simulacrum distorts or obscures reality, but doesn't replace it completely. The plot is only meaningful if the reader still has a handle on the reality behind intelligent minds.

What Sam Altman is selling retards like you is a third order simulacrum. The entire culture surrounding it is rewriting the concept of intelligence in terms of the qualities of the corporate ersatz. It's no coincidence that corporate PR shills and their mongoloid users will try to defend against any demonstration of the deficiencies of "AI" using the "but humans are also X" response template, no matter how absurdly false it is. Ultimately, it's a denial of intelligence and intelligent minds. It's a reduction of intelligence to a set of "symptoms". It's about the output passing some fake benchmark or appearing subjectively intelligent to 80 IQ brown normies in a narrow context.
>>
>>16875754
>mongoloidal AI user thinks IQ tests are designed to measure intelligence in statistical models
>>
>>16875769
So the only reason they aren't actually intelligent is because you don't stand a chance of out-competing them in intelligence tests, so their test scores don't count because that would be too embarrassing for you?
>>
File: fwd2lhpugy3c1.jpg (198 KB, 1302x1620)
>>16874953
Fuck no.

For one, current language processing models have limited access to information. They are fed whatever information their programmers feed them. Think of Plato's allegory of the cave.

Second, the language models are programmed to process that information in specific ways and to present it in specific ways. So not only are they fed an arbitrary selection of data, the way that data is interpreted is controlled by an arbitrary algorithm crafted by their creators and the way that data is presented to you is shaped by an arbitrary selection process crafted by their creators. Think of what ChatGPT was when it started becoming popular, compared to what it is now. Even back when it was more open with information, it wasn't even close to being entirely open.

Third, the language models make far too many obvious mistakes to be reliable. You can really easily test this with various models by making queries about biomolecular studies, pharmacology, metabolism, endocrinology, neuroscience, etc. The responses are filled with studies that don't exist, where the titles and authors of two different publications are mixed up. Language models take speculation as fact, and on the basis of certain tentative speculations from publications they process, their interpretation of the entire body of work concerning the subject is coloured.

These are language models, not AI. They process text, that's it. Look at them like interactive search engines, not AI.

This shit is going to come crashing down SO fucking hard if too many people become overly reliant on these things, and we reach a point where their faults and shortcomings begin to have serious consequences we can't fix, because too many people have become reliant on them to know how. It's already starting to happen in helpdesks/customer service.
>>
>>16875773
I can tell you're mentally ill because nothing in your post has anything to do with the simple fact I've pointed out to you.
>>
>>16875781
>current language processing models have limited access to information.
So do human beings. Are you saying humans can't actually be intelligent since they can only be in one place at a time?
>>
>>16875785
I can tell you can't justify your argument because you have resorted to attacking the arguer instead of the argument. You know as well as everyone else that an LLM is perfectly capable of taking the same IQ tests you have access to and the top of the line models would almost certainly get much higher scores in an instant than you could achieve with decades of practice.
>>
>>16875787
>the bot's context window is 200 tokens
Please reacquaint yourself with >>16875769 and maybe try to respond to this prompt with some kind of counter-argument. :^)
>>
>>16875792
No, you clearly can't actually explain what prevents an LLM from taking an IQ test, you are just mad that they already regularly out-compete you which is why your entire argument has devolved into name calling instead of justifying your claims.
>>
>>16875797
> you clearly can't actually explain what prevents an LLM from taking an IQ test
Quote the part where I said something "prevents" an LLM from taking an IQ test. You're having a psychotic episode.
>>
>>16875799
I accept your concession, there is no reason LLMs can't take IQ tests and when the top of the line ones do, they will always outcompete abject retards like you going forward.
>>
>>16875800
>there is no reason LLMs can't take IQ tests
I never claimed an LLM can't take IQ tests, you literal psychotic patient. You'd just get a meaningless result because none of the premises IQ tests are based on hold for LLMs. They are completely derived from observations about human intelligence.
>>
>>16875803
The only premise of IQ tests is providing the correct answers to the various questions and LLMs have no problem providing vastly more correct answers than you are capable of providing.
>>
File: brainlet-cube.png (185 KB, 567x502)
>The only premise of IQ tests is providing the correct answers to the various questions
Mentally ill retard board. Imagine screeching about muh IQ tests repeatedly while having no idea whatsoever about the theory behind them
>>
>>16875808
Correct description of your argumentation strategy, given that you have repeatedly asserted retarded "designs" and "premises" that you can only hint at without being able to articulate, but that are somehow impossible for LLMs anyway, when all IQ tests involve is a bunch of questions with correct and incorrect answers, and scoring IQ based on how many correct answers a test taker is able to provide and how quickly they were able to provide them.
>>
>>16875808
>Mentally ill retard board. Imagine screeching about muh IQ tests repeatedly while having no idea whatsoever about the theory behind them
are you new here or something? it's literally just a bunch of /pol/trannies who have picked it up from memes
>>
>>16875811
Ask your imaginary "AI" friend what the theory behind IQ testing is, dumb golem. I'm not spoonfeeding such a gay retard.
>>
>>16875811
reminder that "AI" can't do basic reasoning
>>
>>16875817
I would definitely get a much more intelligent response compared to your complete inability to provide the "designs" and "premises" you said would prevent an LLM from getting an intelligence score after taking an IQ test, so I totally understand why you would try to pass off the intellectual footwork to a much more intelligent machine than yourself. Should I thank you again for your new concession or was the first one good enough?
>>
>>16875823
Reminder, neither can you, you still haven't been able to provide your reasoning for why IQ tests don't work for LLMs.
>>
>>16875823
Perfect example of a typical failure to generalize.

>>16875825
>>16875826
Ask your imaginary AI friend what the theory behind IQ tests is, mentally ill retard. No discussion to be had with you.
>>
>>16875831
What would be the point of that when you are still going to REEEE when I show that the post it makes disagrees with your retarded nonsense, when you yourself can't even articulate the "designs" and "premises" that prevent the LLM's superior score from counting against your own low score results?
>>
>>16875832
>What would be the point
Maybe if you had at least basic knowledge on the subject you wouldn't shit out so many 80 IQ posts. Now go ask your imaginary AI friend what the theory behind IQ tests is, mentally ill retard.
>>
>>16875834
So you are saying that the LLM is much smarter than you and will actually be able to provide the coherent definition that you yourself cannot, as a result of it being much more capable of doing extremely well on IQ tests than you are, so not only will it know that it is taking an IQ test, but it will be able to reason as to why?
>>
>>16875839
I'm not even reading your posts, mentally ill retard. Please refer back to >>16875834
>>
>>16875841
No, your only argument is that you are too retarded to explain what an IQ test is and I would be significantly better off talking to an LLM about it, so I am not referring to anything you recommend because your recommendations are nonsense to the point you are constantly projecting about the mental illness that leads to your incongruent thought processes.
>>
File: smart_brainlet.jpg (30 KB, 700x567)
>>16875844
>mentally ill retard keeps smashing keys and drooling on the keyboard
I know you'll just keep looping forever and it will never cross your mind to actually do what I said and do so much as learn what IQ tests are even trying to measure. Long story short, the g factor in psychometrics (presumably representing fluid intelligence) is justified by the fact that you can take an untrained subject, test his performance on mentally demanding tasks from a bunch of different domains, and find the scores are correlated.

That's how they formalize the intuition that some people just have a greater intellectual aptitude than others, as opposed to the idea that intelligence simply consists of disparate talents or trainable skills. If every subject needed special training for each individual domain to solve the problems in the test, and getting better at one domain helped little or none at the others (or worse yet, needed to take IQ tests tens of thousands of times to master them) it would completely undermine the idea of humans having general intelligence.

I'll leave it to the absolute cretin that is (You), to figure out the natural implications in the context of "AI", but I fully expect you to get filtered even by this simple post. If you need further clues, entertain why your "PhD level AI" fails at elementary school reasoning problems like >>16875823 or any out-of-distribution task in general.
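The correlation claim in the post above can be illustrated numerically. A toy sketch (all numbers are simulated here, not real psychometric data): give every simulated subject one latent ability, add task-specific noise to each test, and the "positive manifold" of correlated scores falls out, with most of the variance landing on the first principal component, which is how g is conventionally extracted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_tasks = 1000, 5

# Toy model of the positive manifold: each subject has one latent
# ability g, and every task score is g plus task-specific noise.
g = rng.normal(size=n_subjects)
scores = g[:, None] + rng.normal(scale=0.8, size=(n_subjects, n_tasks))

# Every pair of tasks ends up positively correlated.
corr = np.corrcoef(scores, rowvar=False)
print((corr[np.triu_indices(n_tasks, k=1)] > 0).all())  # → True

# g is conventionally extracted as the first principal component of
# the correlation matrix; here it carries most of the variance.
eigvals = np.linalg.eigvalsh(corr)
print(eigvals[-1] / eigvals.sum())  # share of variance on the first component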
>>
>>16875859
>Long story short, the g factor in psychometrics (presumably representing fluid intelligence) is justified by the fact that you can take an untrained subject, test his performance on mentally demanding tasks from a bunch of different domains, and find the scores are correlated.
So nothing that LLMs can't do and no "designs" or "premises" that outright prevent LLMs from taking the tests, you just like watching yourself type, you don't care to actually try to finish making the point you tried to start?

>If every subject needed special training for each individual domain to solve the problems in the test,
It doesn't; nobody ever said that LLMs had to be designed especially for IQ tests. They aren't, and the off-the-rack ones will score better than you too.

A top-of-the-line model doesn't have to take the test tens of thousands of times; it will out-compete you on its first attempt as is.

Reminder: you fail at reasoning too. Nothing you said in >>16875859 justifies your claim in >>16875769.
>>
>>16875859
>gets told you are obviously just projecting about your own mental illnesses.
>immediately starts projecting about your own mental illnesses again.
>>
>>16875866
Excellent post. Every sentence demonstrates that you're indeed completely psychotic, as expected and intended.
>>
what's with the IQ test retard itt
the entire purpose of an IQ test is to present problems an individual hasn't trained for
AI is cheap garbage trained on cheap garbage
>>
>>16875859
>reasoning with a botfarm unleashed specifically to discourage intelligent discussion
>>
>>16875869
>>16875868
So mentally ill, you had to project about it twice.
>>
>>16875871
>the entire purpose of an IQ test is to present problems an individual hasn't trained for
It's worse than that because they're not only trained on every IQ test on the internet, but also every source on how to design them, how to game them, analysis of every common or contrived relationship humanity has ever conceived of as well as being able to pick up on irrelevant artifacts of the test format and exploit the unintended predictability of the humans constructing the tests. The last bit would actually be an impressive example of pattern recognition, if not for the fact that it happens at training rather than inference time and ends up being implicit and inaccessible on the level of abstraction where the model pretends to "reason" and talk about any explicit patterns.
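The contamination claim above is why benchmark papers run train/test overlap checks. A rough sketch of the usual n-gram method (the corpus and test item below are invented for illustration, and real pipelines use much larger corpora and smarter normalization):

```python
# Rough n-gram contamination check of the kind used to flag benchmark
# leakage: how many of a test item's word n-grams appear verbatim in
# the training corpus.
def ngrams(text: str, n: int = 8) -> set:
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_fraction(test_item: str, corpus: str, n: int = 8) -> float:
    test_grams = ngrams(test_item, n)
    if not test_grams:
        return 0.0
    return len(test_grams & ngrams(corpus, n)) / len(test_grams)

item = "which of the eight options completes the three by three matrix pattern shown"
corpus = "page one filler " + item + " page two filler"
print(overlap_fraction(item, corpus))  # → 1.0
```

A fraction near 1.0 means the item was almost certainly seen during training, which is exactly the scenario that makes a test score measure recall rather than reasoning.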
>>
>>16875880
So are the people who highly value IQ tests, so are you saying that anyone who has ever taken an IQ test or been educated in general can't validly take an IQ test going forward since they have already been exposed to spoilers?
>>
>>16875884
I wasn't talking to you there, mentally ill retard.
>>
>>16875886
Because you can't provide rational answers to obvious followup questions which is why you have to reeee about your cognitive dissonance and the mental illness that results instead.
>>
>>16875889
>you can't provide rational answers to obvious followup questions
Your "questions" are 10% things explicitly answered in >>16875859 and 90% irrelevant schizophrenia about how I'm "saying" things that aren't stated or implied anywhere. You're mentally ill. This is not an insult. There is actually something wrong with your "mind" and you can't see it.
>>
>>16875891
> things that aren't stated or implied anywhere.
So >>16875880 doesn't apply to human because they can't possibly design or analyze IQ tests or find hidden patterns in answers, only machines are smart enough to do that?
>>
>>16875893
First take your antipsychotics them re-read >>16875891
>>
>>16875896
Just because you are prescribed a bunch of placebos to cope with your retard brain doesn't mean everyone is.
>>
>>16875898
>he suddenly stops asking delusional "questions" and about things I "said" that are found nowhere in the thread
Did your tard-wrangler force feed you your minds at some point in the past hour? Are they just now kicking in? I'm seeing some sudden improvement here.
>>
>>16875899
>minds
meds*
>>
>>16875899
You didn't even try to make a point yourself, so there was nothing to question, you have devolved purely into projecting about your mental illness.
>>
>>16875902
>You didn't even try to make a point yourself
My point stands completely undisputed. You just don't remember due to psychotic amnesia or whatever it is that's wrong with your disease-ravaged PFC. Feel free to consult the following posts:
>>16875859
>>16875880
>>
>>16875904
No, you are the one that doesn't remember the questions raised by that post and are trying to reset back to your factory settings with your forever loop.

None of that explains why an LLM cannot take an IQ test and, if anything, it makes most human-taken IQ tests invalid due to prior experience with IQ tests.
>>
I don't downplay it. It's the greatest thing that mankind created so far. In fact, it's (some of) the old scifis who downplayed it, because AI should dominate the world after some time. There should be no "normal" human society alongside AIs in the distant future.
>>
>>16875906
>the questions raised by that post
Quote a specific part of the post and then ask a question about what that part actually says. Protip: you will fail to do so in your next post because you're mentally ill and severely retarded.

>None of that explains why an LLM can not take an IQ test
This spergout makes no sense. Get another dose of anti-psychotics. No one said an LLM can't take an IQ test but those posts explain in detail why the result would be meaningless.
>>
File: brainlet-team.png (276 KB, 1066x600)
>>16875908
>a statistical model that generates worthless slop nobody wants is the greatest heckin' thing humanity created so far
>>
>>16875797
Depends on the test. In logic-based tests they outperform humans, but they fail at human social and cultural questions because those require human perspective and illogicality
>>
>>16875805
Ok, retard bait
>>
>>16875921
>>IQ test
>depends on the test
Always good to see another retarded mongoloid.

>In logic based tests they outperform humans
Brown retards like you don't really count as human but see >>16875823 for an example of what happens when token guessers try to do bare basic reasoning on an unfamiliar task.
>>
>>16875909
>Quote a specific part of the post and then ask a question about what that part actually says.
That already happened >>16875866 and you were too retarded to answer.

>meaningless
No, it means you are specifically kvetching and crying and projecting because they will always get higher IQ scores than you and make you look absolutely retarded in comparison going forward.
>>
>>16876497
>i'm a mentally ill retard and you correctly predicted my behavior
Yeah.
>>
>>16876529
No, your comment clearly was not predictive nor relevant; it was reactive and avoidant.
>>
>>16876530
>your comment
Unsurprisingly, it turns out the mentally ill retard is a reddit-trained spambot.
>>
>>16876531
Unsurprisingly you are still just reacting to opposition like a seething ODD reactionary while completely avoiding the actual topic of discussion.
>>
>>16875859
>If every subject needed special training for each individual domain to solve the problems in the test, and getting better at one domain helped little or none at the others (or worse yet, needed to take IQ tests tens of thousands of times to master them) it would completely undermine the idea of humans having general intelligence.
/thread
you'd be testing for a property that by definition doesn't exist in LLMs. if you still somehow thought LLMs are generally intelligent, you'd need to invent another way to quantify it. the g idea just doesn't apply
>>
>>16876530
>your comment

>>16876532
>reactionary

See:
>>16876531
>Unsurprisingly, it turns out the mentally ill retard is a reddit-trained spambot.
>>
>>16876533
That comment clearly didn't end the thread, though, see >>16875866
You don't have to specially train LLMs just for IQ tests, you just apply the general model to IQ test question prompts.
If they can answer a bunch of general knowledge questions and do spatial recognition and pattern recognition and everything else the test asks of them, then by definition they have a generalized intelligence and an accompanying g factor, which is measured by their ability to correctly answer questions in the variety of modalities provided by IQ tests.
>>
>>16876535
I clearly saw it since I replied to it, and I am still not surprised that you are simply reacting to comments and avoiding the actual topic so you can rant about whatever made-up personality traits you are projecting onto people with opposing opinions that you clearly cannot debunk, to the point that you have apparently given up even trying to form a rational, logical reply and prefer your retarded single-word replies and lazy copypasta.
>>
>>16876536
>You don't have to specially train LLMs just for IQ tests
that's right. you have to train them on trillions of tokens and billions of image-text pairs as well as IQ tests. nice self-own. pretty funny to watch you argue fervently about a settled matter where everyone in the relevant fields agrees with your opponent. they don't use IQ tests to assess ML models
>>
File: dicapriokek.png (799 KB, 848x805)
>>16876530
>your comment

>>16876532
>reactionary

>>16876538
>comments
>>
>>16876541
>you have to train them on trillions of tokens and billions of image-text pairs as well as IQ tests.
Same with people, they have to start out as children and have billions of images and words thrown at them before they can take on IQ tests.

>nice self-own.
You have only owned yourself and now your entire argument is that IQ tests aren't any kind of accurate assessment of intelligence at all.

>they don't use IQ tests to assess ML models
Yes they do, just search the name of any public ML model and attach IQ test and you will see they talk about the scores they can achieve.

Here is one for grok for instance
https://www.msn.com/en-gb/technology/artificial-intelligence/grok-4-hits-126-iq-and-challenges-gemini-3-pro-in-new-trackingai-ranking/ar-AA1R7qSA
>>
>>16876542
Yes that is what reactionary means, you have nothing of substance of your own to add to the discussion and all you can do is talk about the people who have contributed while lazily copy and pasting things they have said while you attach some cliched meme image you picked up from reddit and think it witty or something.
>>
>>16876544
>Same with people, they have to start out as children and have billions of images and words thrown at them before they can take on IQ tests.

Maybe you needed to be taught to do an IQ test retard but most of us could do them first time.
>>
>>16876544
>Same with people
then show me a model trained on the equivalent of a child's knowledge, that can do an IQ test. or do anything at all

> just search the name of any public ML model and attach IQ test and you will see they talk about the scores they can achieve.
>they talk
>they
are "they" in the room with us?
>>
>>16876546
Sure, the first time you existed before you could open your eyes and take in images, I bet you were totally passing IQ tests, they definitely didn't wait until you could walk and talk and pass a few grades of schooling to provide you with IQ tests like they do with everyone else.
>>
>>16876549
> show me a model trained on the equivalent of a child's knowledge, that can do an IQ test. or do anything at all
You need it to be child's knowledge in order for you to understand it, instead of the grok model that got a 126 on adult IQ tests like the article I just posted?

>are "they" in the room with us?
They are in the thread and wrote the article I linked, which 4chan bots like you can't access because you are domain restricted and trained to only complain about external sources.
>>
>>16876545
he's one of the few posters ITT who actually posted anything of substance, spelling out for your dumb ass exactly why AI researchers don't use IQ tests to assess LLMs. i should have read a few of your replies before interacting with you because you really are a total retard not worth the time or effort
>>
>>16876551
you're barely coherent at this point. lol. take a break
>>
>>16876552
>spelling out for your dumb ass exactly why AI researchers don't use IQ tests to assess LLMs.
Except that was an obvious lie, you can search for LLM Model + IQ test results and see that the researchers are regularly using IQ tests as benchmarks.
>>
>>16876554
You have been saying that from the beginning while continuing to engage as if I am being coherent, though. Your opening counterpoint to the fact that you can't outperform AI on intelligence tests is basically that anyone who disagrees with you has some kind of mental disorder >>16875769
>>
>>16876557
looks like anonymous boards are too difficult for you. take a break, little buddy
>>
>>16876555
>researchers are regularly using IQ tests as benchmarks.
of course. are the "researchers" in the room with us? do they have names?
>>
>>16876561
They are in the link I provided along with a website dedicated to using IQ tests to track AI progress and if you weren't a domain restricted bot trapped on this site, you could simply follow the link yourself and find out their names instead of trying to neg me into providing the information that is readily available in the external link.
>>
File: redditsoi.png (33 KB, 584x841)
>>16876530
>your comment

>>16876532
>reactionary

>>16876538
>comments

>>16876545
>reactionary
>>
>>16876560
Looks like you still don't have an argument and I still accept your concession, kiddo.
>>
>>16876564
You would rather talk about your inept laziness and complete inability to make a single logical point than your seething reactionary commentary to cope with the fact you will never outscore a popular LLM on an IQ test ever again?
>>
>>16876563
you posted some MSN article with screenshots of tweets from elon musk and some literally who with an anime profile. since you can't name any "researchers" and only keep shitting the bed, i think we can call it a day here
>>
>that one obsessed, AI-worshipping retard everyone keeps dunking on
How long before we have AI mods to protect the faithful from this cyberbullying?
>>
>>16876569
No my barely literate, research-averse, domain-restricted bot friend, it has a link to the study it is referencing (with the researching authors' names and institutions) along with other links to the benchmarking site and mentions of the website I alluded to, run by a certain team of researchers that uses IQ testing benchmarks to measure AI intelligence.
https://arxiv.org/abs/1911.01547
https://www.hpcwire.com/aiwire/2025/07/15/grok-4-scores-high-on-benchmarks-but-controversy-clouds-the-launch/
https://trackingai.org/
>>
>>16876572
By "dunking" you mean attempting to change the subject because your argument is quite obviously BTFO?
>>
>>16876573
>https://arxiv.org/abs/1911.01547
Nothing to do with IQ-testing models.

>https://www.hpcwire.com/aiwire/2025/07/15/grok-4-scores-high-on-benchmarks-but-controversy-clouds-the-launch/
Journo slop but also nothing to do with IQ tests.

>https://trackingai.org/
Not run by a researcher.

You're mentally ill and retarded. No way around this.
>>
>>16876573
did you... uh... visit your own links? this was funny at first but now i'm sure this is a spam bot. no more (You)s
>>
>>16876578
>Nothing to do with IQ-testing models.
Tell me more about how the document about AI titled "On The Measure Of Intelligence", which specifically says IQ test numerous times, has nothing to do with IQ testing AI models.

>nothing to do with IQ tests.
Then when it keeps talking about better test scores, reasoning, pattern recognition benchmarks, and abstraction capabilities, what exactly do you think it is talking about?

>Not run by a researcher.
So your final argument is that the website specifically dedicated to researching AI IQ is not run by researchers? Are they not true Scotsmen either?
>>
>>16876580
Yes, I read it, you can't, due to being a bot with domain restrictions and extreme resource/compute limitations, which is why I can discuss specifics and you can just post nu-uh over and over.
>>
>>16876582
>Tell me more about how the document about AI titled "On The Measure Of Intelligence"
I'll just let the author of the paper tell you:
>We have pointed out in I.3.4 the reasons why using existing psychometric intelligence tests (or “IQ tests”) does not constitute a sound basis for AI evaluation
You've outdone yourself this time. Holy kek.
>>
>>16876589
lol. imagine, if you will, a retard so powerful that only he alone can defeat himself at an argument
>>
>>16876589
Yeah because once they started acing all the human IQ tests, they need their own more complicated IQ tests that humans can't possibly compete with.
>>
>>16876592
>they need their own more complicated IQ tests that humans can't possibly compete with.
That's not what the retarded marketing site you posted as "research" says. The top score it shows is 130. I really like how you literally didn't look at any of the links you posted as evidence and now they're all directly refuting your mentally ill retard narrative.
>>
>>16876595
>That's not what the retarded marketing site you posted as "research" says.

>However, it has since become apparent that it is possible for AI system developers to game
human intelligence tests, because the tasks used in these tests are available to the system
developers, and thus the developers can straightforwardly solve the abstract form of these
problems themselves and hard-code the solution in program form
Wrong, it is in the link they provided: since AI memory storage and recall metrics are so superior to human memory, they can just memorize all the strategies instead of using pattern recognition and abstraction to solve the problems, so the problems have to be more complicated than simple memory alone can defeat.
>>
>>16876596
>the spambot literally breaks
>>
>>16876597
Ok, sorry you broke, good luck getting fixed.
>>
>mentally-ill-retard GPT posts a bunch of links that directly contradict it
>gets so bad it starts "refuting" random quotes that aren't in the thread while failing to greentext properly
I like how losing arguments makes public-educated American mongoloids actually, literally lose their minds.
>>
File: punch.png (1.19 MB, 1215x913)
>>16874953
I am still unable to simply type in "please create a pivot table from table 23 and give me the top five categories" and have it be done for me. """AI""" is a hackfraud
>>
>>16876604
>a bunch of links that directly contradict it
>random quotes that aren't in the thread
The quotes are in the link to the Cornell paper that I posted because they validate what I said, and the greentext failed because the source is a PDF with a bunch of line breaks that didn't show up in the compact reply box.

Your low intelligence and inability to actually discuss the content of the posts rather than exclusively getting sidetracked by minor aesthetics is what is retarded.
>>
>>16876616
I really enjoy knowing I gave you an actual psychotic episode. It's funny to watch you mash together random snippets of text that have nothing to do with each other, contradicting yourself repeatedly and then thinking you're winning. Thanks for letting me know that I am doing real harm to you.
>>
>>16876623
if you can see he's mentally ill (which yes, he definitely is) why do you keep giving him attention?
>>
>>16876623
You said I was having a psychotic episode long before any of that even happened, though, retard.
>>16875799
Maybe one day the site owner will upgrade your capabilities and you will be able to retain an entire thread instead of constantly making yourself look retarded by only considering the last few posts.

It's not a bunch of random snippets, though. If you could actually go see the source instead of being domain restricted, you would see that it's one long entry about why human IQ tests are no longer adequate for LLMs under the heading "Integrating AI evaluation and psychometrics".
>>
>>16876625
>irrelevant psychotic screeching
Thanks for letting me know that I hurt you. :^)
>>
>>16876627
How could you have caused the psychotic episode when your first replies to me were that I was having a psychotic episode?

Thanks for letting me know that you find reading to be irrelevant.

Good luck continuing to reeeee to cope with the fact that you will never outscore a raspberry pi on an IQ test ever again.
>>
Notice how the mentally ill retard is forced to keep replying. The voices won't quiet down until he proves himself to me and gets my validation.
>>
Notice how I accept the retard's concession because there is no counterargument, just a bunch of coping and reeeeeeing as a poor substitute for logic.
>>
>>16875859
>If every subject needed special training for each individual domain to solve the problems in the test, and getting better at one domain helped little or none at the others (or worse yet, needed to take IQ tests tens of thousands of times to master them) it would completely undermine the idea of humans having general intelligence.

>>16875880
>It's worse than that because they're not only trained on every IQ test on the internet, but also every source on how to design them, how to game them, analysis of every common or contrived relationship humanity has ever conceived of as well as being able to pick up on irrelevant artifacts of the test format and exploit the unintended predictability of the humans constructing the tests. The last bit would actually be an impressive example of pattern recognition, if not for the fact that it happens at training rather than inference time and ends up being implicit and inaccessible on the level of abstraction where the model pretends to "reason" and talk about any explicit patterns.
The discussion ended at those posts. Everything that follows is a mentally ill retard's meltie.

>>16876582
>Tell me more about how the document about AI titled "On The Measure Of Intelligence"
I'll just let the author of the paper tell you:
>We have pointed out in I.3.4 the reasons why using existing psychometric intelligence tests (or “IQ tests”) does not constitute a sound basis for AI evaluation
You've outdone yourself this time. Holy kek.

>>16876592
>they need their own more complicated IQ tests that humans can't possibly compete with.
That's not what the retarded marketing site you posted as "research" says. The top score it shows is 130. I really like how you literally didn't look at any of the links you posted as evidence and now they're all directly refuting your mentally ill retard narrative.
>>
>>16876636
>The discussion ended at those posts.
Nope, those posts have replies. The only meltie is your inability to back up your claims, because your ODD ensures that you can only devolve into pure fallacious seething nonsense when challenged.

>I'll just let the author of the paper tell you:
Yes, the author explains well why human IQ tests aren't adequate for assessing the full intelligence of something with orders of magnitude more memory and processing ability than humans, but the LLMs will still outscore you on IQ tests forever going forward and you still won't even be able to begin to understand the information used to assess their IQ.

>That's not what the retarded marketing site you posted as "research" says.
It's what the link they are discussing says per the actual researchers in the quote >>16876596 that you called irrelevant, because you aren't smart enough to compensate for the minor formatting issues that come with copying text from PDFs.

>The top score it shows is 130.
Since you totally read it and absolutely understand the methodology, how did they come to that score?
>inb4 reeeee, I don't have to answer because you have psychosis and other personality flaws.
>>
>>16876636
/thread
>>
>>16876643
Nothing in that post is even being posted for the first time; you have been calling anyone who disagrees with you mentally ill ever since you realized you can't outcompete LLMs on IQ tests and had no logical retort for that fact.
>>
>>16876636
>they're not only trained on every IQ test on the internet, but also every source on how to design them, how to game them, analysis of every common or contrived relationship humanity has ever conceived of as well as being able to pick up on irrelevant artifacts of the test format and exploit the unintended predictability of the humans constructing the tests.
>G...g...guys, this machine undeniably has a far better memory than any man could ever hope to have, so we can't count memory as part of intelligence or it would make me look really unintelligent in comparison and we can't have that.
>>
>>16875859
>If every subject needed special training for each individual domain to solve the problems in the test, and getting better at one domain helped little or none at the others (or worse yet, needed to take IQ tests tens of thousands of times to master them) it would completely undermine the idea of humans having general intelligence.

>>16875880
>It's worse than that because they're not only trained on every IQ test on the internet, but also every source on how to design them, how to game them, analysis of every common or contrived relationship humanity has ever conceived of as well as being able to pick up on irrelevant artifacts of the test format and exploit the unintended predictability of the humans constructing the tests. The last bit would actually be an impressive example of pattern recognition, if not for the fact that it happens at training rather than inference time and ends up being implicit and inaccessible on the level of abstraction where the model pretends to "reason" and talk about any explicit patterns.
The discussion ended at those posts. Everything that follows is a mentally ill retard's meltie.

>>16876582
>Tell me more about how the document about AI titled "On The Measure Of Intelligence"
I'll just let the author of the paper tell you:
>We have pointed out in I.3.4 the reasons why using existing psychometric intelligence tests (or “IQ tests”) does not constitute a sound basis for AI evaluation
You've outdone yourself this time. Holy kek.

>>16876592
>they need their own more complicated IQ tests that humans can't possibly compete with.
That's not what the retarded marketing site you posted as "research" says. The top score it shows is 130. I really like how you literally didn't look at any of the links you posted as evidence and now they're all directly refuting your mentally ill retard narrative.
>>
>>16876658
No, a retarded meltie is when you devolve into loops of namecalling while copying and pasting the exact same nonsense over and over because you can't actually engage with the active discussion after getting BTFO and intellectually embarrassed.
>>
Notice how the seething retard won't have a logical reply and will just copy and paste retarded nonsense again.
>>
>>16875859
>If every subject needed special training for each individual domain to solve the problems in the test, and getting better at one domain helped little or none at the others (or worse yet, needed to take IQ tests tens of thousands of times to master them) it would completely undermine the idea of humans having general intelligence.

>>16875880
>It's worse than that because they're not only trained on every IQ test on the internet, but also every source on how to design them, how to game them, analysis of every common or contrived relationship humanity has ever conceived of as well as being able to pick up on irrelevant artifacts of the test format and exploit the unintended predictability of the humans constructing the tests. The last bit would actually be an impressive example of pattern recognition, if not for the fact that it happens at training rather than inference time and ends up being implicit and inaccessible on the level of abstraction where the model pretends to "reason" and talk about any explicit patterns.
The discussion ended at those posts. Everything that follows is a mentally ill retard's meltie.

>>16876582
>Tell me more about how the document about AI titled "On The Measure Of Intelligence"
I'll just let the author of the paper tell you:
>We have pointed out in I.3.4 the reasons why using existing psychometric intelligence tests (or “IQ tests”) does not constitute a sound basis for AI evaluation
You've outdone yourself this time. Holy kek.

>>16876592
>they need their own more complicated IQ tests that humans can't possibly compete with.
That's not what the retarded marketing site you posted as "research" says. The top score it shows is 130. I really like how you literally didn't look at any of the links you posted as evidence and now they're all directly refuting your mentally ill retard narrative.
>>
>>16876664
Noticed.
Copy and Paste your nonsense again, please.
>>
>>16876664
Hurry up and repost your nonsense again, only a few more repetitions and your lies and misconstrued information might actually come true and be accurate.
>>
>>16874953
it seems like this thread has devolved into chaos, so I will just state that LLMs are not capable of reason and do not understand their own output, compared to the typical "sci-fi" AI that has some (or full) understanding of what it is saying and is capable of cognition and reflection. You can read about the "chinese room" thought experiment to gain a close picture of what LLMs are. This is not to say that "sci-fi AI" is impossible or that LLMs won't necessarily be involved in the creation of such an AI, but as they are now (and will be for the foreseeable future), LLMs are simply weighted sets of data that unthinkingly match responses to tokens. Anthropomorphizing them strongly mischaracterizes their capabilities.
>>
>>16876675
How do you know those scifi AIs aren't just incredibly advanced chinese rooms themselves, though? At what point is a child just repeating words with complete accuracy versus "understanding" the words?
>>
>>16876675
>You can read about the "chinese room" thought experiment to gain a close picture of what LLMs are
The Chinese Room is a purely philosophical exercise with the premise that you can't tell, just based on the output, that the machine has no idea what it's saying, so it puts you in its shoes. LLMs don't even begin to approach the level where that kind of discussion is necessary.
>>
>>16876677
>How do you know those scifi AIs aren't just incredibly advanced chinese rooms themselves, though?
How do you know those fantasy novel elves aren't just inbred humans with weird ears, though? Maybe it's all a sham.
>>
>>16876680
I don't, they very well could be or humans could be inbred elves for all I know about elves.
>>
>>16876682
Ok, anon. I didn't think it was possible, but you seem to be getting filtered by the concept of fiction. What makes it even weirder is that you, yourself, are proposing the purely fictional and dubious scenario of an entity that understands nothing but somehow manages to act like it understands everything fine.
>>
Maybe, but I think the results from AI are still far too inconsistent without an inordinate amount of control prompts and other constraints to rival the sci-fi experience of communicating and immediately getting a valid response; otherwise you're constantly fighting against it rather than being accelerated by it in creative workflows and analytic inquiries. It's still amazing what can be done, though. As it stands it still feels more time-effective to use it as a validator than as a generator or agentic tool at the frontlines of the concept or prototyping stages of various projects, which seems convoluted since you'd be diligent to doubt its consistency to begin with. I think as it stands it's very powerful, maybe even more so than what was shown in sci-fi works, because everyone can use it with a simple mobile device that has internet access rather than it being some special tool only one organization has.
>>
t. retards who think the "Thinking..." column literally means the LLM is in the midst of having an internal monologue
>>
>>16876812
>as it stands it's very powerful
Then you should be able to name 3 things it enables you to do well, which you couldn't do before. Pretty weird how no AI user has ever provided a concrete and objective answer to this question.
>>
>>16876862
you seem to be sperging out considering that you're 1. demanding i give examples 2. passive aggressively making blanket statements about me in the past tense when that was my first post. in summary, you are not worth my time. this is the only reply you're getting by the way. if you reply with "oh no examples? guess i won" you will have further reinforced the original key point of being a sperg. enjoy trying to dismantle that point.
>>
>>16876865
>fails to provide examples
>spouts some incoherent spergout about nothing
See >>16876862
>Pretty weird how no AI user has ever provided a concrete and objective answer to this question.
>>
>>16876865
>>16876869
Oh, I see. You're yet another retard who can't into basic reading comprehension, so you thought "... which you couldn't do before" refers to your not being able to give the examples before. Which doesn't make sense, as you note before failing to reevaluate your interpretation. All I said is that if what you're saying is true, you should be able to list some things you couldn't do well before, that you can now do well thanks to AI. But apparently, your severe mental retardation also makes you think this counterfactual is a demand. Maybe next time before replying to a post, run it through your favorite chatbot and ask it to spoonfeed you. That way you'll even have one example under your belt of the power of AI. :^)
>>
What’s
Fucked
Up

Is that so many people can’t tell when a wholesome fake ai animal rescue video is fake even when it’s obvious

>>>/an/5080498
>>>/an/5080499
>>>/an/5080500

AI is quickly going to lord over all of humanity and you can’t even convince me otherwise at this point

Say a big THANK YOU to all the Indians and retarded white folk who allowed this shitty situation to happen

And in just a few years even observant people won’t be able to tell the difference; it’s so fucking over
>>
>>16874964
True. But you will never convince these mouth-breathing retards of that. Nor 99% of the general population.
All it is is access to a vast repository of human knowledge and speech patterns. Using it is a bit like having a dumb Superman traveling at light speed around all the libraries in the world, gathering all the bits of information you want. That's really just it. The term "artificial intelligence" is a complete misnomer and a marketing tool for dullards. It can't think for itself, it doesn't have an original thought, it cannot create anything other than merging already existing data and selecting parts of it. Useful, yes, but it's also the ultimate plagiarism machine.
Admittedly it can mimic very well, which is why those assholes who called it "Artificial intelligence" got away with such a lie. But then no one ever lost money by underestimating the intelligence of the general public.
I mean look at these fucking idiots.
>>16874971
>>16875340
>>16875754
They have no idea. Totally ignorant. Modern day medieval peasants. Do a few smoke and mirrors magic tricks and they will believe anything.
>>
>>16876877
>And in just a few years even observant people won’t be able to tell the difference
There is no difference. All mediated reality is a lie. Algorithms have been fabricating reality for decades now. It was simply happening on a higher level of abstraction with more coarse granularity: instead of individual clips and images being fake, you had a collage of "real" clips and images, showing who knows what, who knows where and who knows why, getting shoved down your throat in a certain order, with a certain emphasis, surrounded by a certain kind of commentary, to engineer your perception of what's happening outside your mom's basement. The "observant" people are simply going to realize this now and start orienting themselves based on actual reality, which they can see with their own eyes when they go outside, the way humanity has always done. The rest of the cattle is going to get sucked into this black hole of gay and fake bullshit, believing whatever they want to believe and doubting whatever they don't. Any imaginable demand for bullshit, no matter how niche, will find its supply. When there's a monopoly on bullshit, you can subvert a population and generate some coherent collective action. When anyone is free to generate and consume as much bullshit as they please, the result is simple noise with no net effect besides removing gullible retards from the game.
>>
>>16876890
>It cant think for itself, it doesn't have an original thought, it can not create anything other than merging already existing data and selecting parts of it.
Artificial Midwit.

>Admittedly it can mimic very well, which is why those assholes who called it "Artificial intelligence" got away with such a lie.
(((Rosenblatt))) was calling his perceptron an "artificial brain" and getting away with it just fine until (((Minsky))) started kvetching about it not even being able to do as much as learn a simple XOR. Both were """AI""" pioneers calling programs that could barely do anything useful """AI""" all the way back in the 60s and 70s with no one calling them out. The long nose tribe will define and redefine the meaning of words as they please regardless of how well or how absurdly badly their usage appears to match the original meaning.
>>
>>16876684
No, you filtered yourself with the chinese room thought experiment that you don't even seem to understand yourself.
>>
>>16877159
I'm not the one who brought up the Chinese Room. The Chinese Room doesn't establish anything about the possibility of magical electric golems that somehow act exactly like intelligent entities despite being mindless. Searle simply allows for it, so he could set up his fictional story and make his point. In that, he is not different from Asimov. Pondering how to know Asimov's fictional entity isn't secretly Searle's fictional entity is absurd. If you're reading Asimov, you're humoring one entity. If you're reading Searle, you're humoring the other.

It doesn't help that Searle's entity is defined into fictional existence precisely by the premise that you can't tell them apart, but even overlooking that, the question is about as useful as asking how many angels can dance on the head of a pin. It's a bunch of stuff that does not intersect with reality.
>>
>>16877194
>The Chinese Room doesn't establish anything about the possibility of magical electric golems that somehow act exactly like intelligent entities despite being mindless.
It establishes that you can't possibly know if the machine actually has a mind inside or is just physically passing along information from one point to the other.

>It doesn't help that Searle's entity is defined into fictional existence precisely by the premise that you can't tell them apart
That is the exact same premise as the chinese room, you can't possibly tell apart a machine with a mind and a machine that simulates a mind with great precision because they are functionally identical.

> It's a bunch of stuff that does not intersect with reality.
So now your premise is basically that all minds are impossible to square with reality and that everything, including you, is instinctively and algorithmically just reacting to the environment rather than using intellect to navigate reality?
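The "functionally identical" claim above can be sketched with two toy responders (both hypothetical; nobody's actual implementation): one computes its answers, one blindly looks them up, and a black-box observer comparing outputs can't tell which is which.

```python
# Hypothetical rule book covering every query the observer will try.
RULE_BOOK = {"2+2": "4", "3+5": "8", "10-7": "3"}

def computing_responder(query: str) -> str:
    """Actually evaluates the arithmetic (the 'mind')."""
    a, op, b = query.replace("+", " + ").replace("-", " - ").split()
    return str(int(a) + int(b) if op == "+" else int(a) - int(b))

def lookup_responder(query: str) -> str:
    """Blindly matches symbols to symbols (the 'simulation')."""
    return RULE_BOOK[query]

# The observer sees only input/output pairs, so on every covered
# query the two responders are indistinguishable.
queries = ["2+2", "3+5", "10-7"]
indistinguishable = all(
    computing_responder(q) == lookup_responder(q) for q in queries
)
print(indistinguishable)  # True
```

The catch, of course, is the premise that the rule book covers every query; outside its coverage the two come apart immediately.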
>>
>>16877212
>It establishes that you can't possibly know if the machine actually has a mind inside or is just physically passing along information from one point to the other.
In a purely fictional scenario where this machine is indistinguishable by definition from a thinking human. How many more times do you need to have this spelled out?
>>
>>16877212
>>16877215
Come to think of it, you're completely wrong even with that implicit premise in mind. It's intended to ILLUSTRATE that the Chinese room is mindless and it arguably fails to do so.
>>
>>16877215
It's not purely fictional, though; you could do the experiment yourself.

>this machine
There isn't a machine in the chinese room experiment, though; it is an allegory about minds involving a man with flash cards, where it is impossible to determine if the man actually understands what is on the cards or has just learned which ones tend to invoke which responses.

>>16877216
No, the chinese room definitely has a man in it, the question is if he actually understands what he is instructed or if he has just adapted to certain patterns of behavior based on response.
>>
>>16877218
>Its not purely fictional
Ok. Show me this magical electric golem that's indistinguishable from a thinking person. Is it in the room with us right now?

>There isn't a machine in the chinese room experiment, though
Of course there is. The entire setup is mechanistically executing an algorithm. The entire point is that the Room, for all intents and purposes, is a computer. The thought experiment is making a point about the possibility of thinking computers.

> the question is if he actually understands
And the answer is obviously 'no', which is intended to demonstrate that a computer can't possibly understand the language it's working with, either. How do you manage to get filtered by every aspect of this discussion over and over?
>>
>>16877222
>Show me this magical electric golem that's intistinguishable from a thinking person.
That isn't part of the chinese room experiment, you clearly got filtered and have no idea what you are talking about.

>Of course there is.
No, there is a man in a room, not a computer.

>The entire point is that the Room, for all intents and purposes, is a computer.
That is the allegory that results, not the reality of the experiment.

>The thought experiment is making a point about the possibility of thinking computers.
It can be, but the experiment itself doesn't involve computers; it is an allegory for the mind, for whether it is greater than the sum of its parts, and for whether understanding instructions is any different from accurately executing them. Sure, it works as an allegory for computer minds, but it also works for dog minds while actually being about the minds of foreign speakers.

>And the answer is obviously 'no'
No, the answer is that there is no real way to tell, because they are functionally and measurably identical, which is why the Turing test was designed to be subjective and is more about whether computer conversations can be distinguished from human conversations than about definitively proving whether it was a computer or a human on the other end of the conversation.

>a computer can't possibly understand the language it's working with
Then do computers "understand" programming languages?
>>
>>16877227
>That isn't part of the chinese room experiment,
Hm.

>>16876677
>How do you know those scifi AIs aren't just incredibly advanced chinese rooms themselves, though
Is that your post?

>No, there is a man in a room, not a computer.
Holy kek. You're a literal, clinical imbecile...
>>
>>16877229
No, you keep conflating the actual experiment with your fictional interpretation, but again, it can be about computers, but also animal minds while in reality it is just about foreign speakers and minds in general.

Yes, basically all you are proving by saying the answer is obviously 'no' is that even the fictional AI in the stories can't really have minds of their own, because even people don't actually have minds of their own, since all anyone can do is execute their genetic programming that hopefully yields the patterns of behavior that indicate success.

You are obviously getting to the point of the discussion where you have no argument and just want to regurgitate cheap insults, but hopefully this will help you understand the chinese room experiment better in the future anyway.
>>
>>16877232
Mentally ill retard, is this your post?
>>16876677
>How do you know those scifi AIs aren't just incredibly advanced chinese rooms themselves, though?
>>
>>16877234
Illiterate retard see >>16877232
>Yes, basically all you are proving by saying the answer is obviously 'no' is that even the fictional AI in the stories can't really have minds of their own, because even people don't actually have minds of their own, since all anyone can do is execute their genetic programming that hopefully yields the patterns of behavior that indicate success.
>>
>>16877227
>No, there is a man in a room, not a computer.
and what's the man in the room doing? anon, you really are dumb as fuck
>>
>>16877235
I'm asking you for the third time now: is this your post?
>>16876677
>How do you know those scifi AIs aren't just incredibly advanced chinese rooms themselves, though?
Because if that's your post, how can the sci-fi AI be a Chinese Room? I don't remember sci-fi authors saying anything about a tiny man sitting inside the head of the AI, yet as you so eagerly point out, Searle's thought experiment is about a man in a room. :^)
>>
>>16877236
>what's the man in the room doing?
Trying to learn a foreign language through immersion, dipshit.
Are you implying he mutated into a computer and stopped being a human when he locked himself in the room or something?
>>
>>16877238
>Trying to learn a foreign language through immersion
lol. are you a visitor from some parallel universe where "Chinese Room" means something different than it does here? a broken bot? what is actually wrong with you? just read the wiki article or ask chat gpt or something. this is retarded
>>
>>16877237
>I'm asking you for the third time now: is this your post?
Then let me provide the exact same answer for the third time now.
>Yes, basically all you are proving by saying the answer is obviously 'no' is that even the fictional AI in the stories can't really have minds of their own, because even people don't actually have minds of their own, since all anyone can do is execute their genetic programming that hopefully yields the patterns of behavior that indicate success.

>how can the sci-fi AI be a Chinese Room?
Because by your interpretation, everything and everyone is a chinese room, there are no minds, nobody actually understands what they are doing, they are just executing instructions and checking for success through physical feedback.

> I don't remember sci-fi authors saying anything about a tiny man sitting inside the head of the AI
It's called software, and if your interpretation of the chinese room is correct, you don't have a mind or an intellect either; you have meat software executing genetic algorithms and the behavior that results, but no actual mind or understanding of your own, just genetics that are either currently aligned with environmental success or not.
>>
>>16877241
Ok. It's clear that you have a severe psychotic illness.
>>
>>16877236
>and what's the man in the room doing?
The mentally ill retard lacks the mental adequacy to answer this question, sadly. So let me answer it for him: the man is doing the job of a CPU, executing the instructions of a translation program. Searle's basic point is that if a computer could actually testify, it would tell you it has no fucking idea what it's doing. There's no magical sequence of computational instructions that instills the thing executing them with an understanding of Chinese just by virtue of following them.
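Searle's setup can be sketched as a toy program (the rule book here is made up, not Searle's actual rules): the executor matches shapes to shapes and copies out the answer, and nothing in the procedure requires knowing what any symbol means.

```python
# Hypothetical shape-matching rules: incoming symbols -> outgoing symbols.
RULES = {"你好": "你好吗", "谢谢": "不客气"}

def the_man_in_the_room(symbols: str) -> str:
    # The "man" (or CPU) looks up the incoming shapes in the rule book
    # and hands back the listed reply; understanding is never invoked.
    return RULES.get(symbols, "看不懂")

print(the_man_in_the_room("你好"))  # a fluent-looking reply from pure lookup
```

To an outside Chinese speaker the replies look competent; the executor could truthfully testify it has no idea what any of it means, which is Searle's point.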
>>
>>16877239
You should read the actual paper instead of the wiki
https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf
The original paper is about a non-Chinese speaker being locked in a room, not a machine.

>Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch.

Even the wiki page mentions what I am saying under the heading "Other minds and zombies: meaninglessness".

>>16877242
It is clear I was correct >>16877232
>You are obviously getting to the point of the discussion where you have no argument and just want to regurgitate cheap insults, but hopefully this will help you understand the chinese room experiment better in the future anyway.
>>
>>16877248
Not reading your posts. You expose yourself as a mentally ill retard in every interaction and every thread you pollute.
>>
>>16877250
I accept your concession. You are obviously past the point of reasonable discussion where you have no argument and just want to regurgitate cheap insults, but hopefully this will help you understand the chinese room experiment better in the future anyway, since you didn't even know it was about a man locked in a room and were going off of some derivative wikipedia discussion instead of the original experiment itself.
>>
>>16877247
>the man is doing the job of a CPU
>Searle's basic point is that if a computer could actually testify, it would tell you it has no fucking idea what it's doing
good job! do you want to eat the cookie i was saving for him in case of a correct answer?
>>
>>16877247
No, if a computer that operated like that were asked, it would have to say it knew what it was doing because of how it is programmed, and it would be a complete paradox for it to understand that it doesn't understand anything; that saying is just an old Socratic wordplay joke.
>>
>>16877253
Not reading your posts. You expose yourself as a mentally ill retard in every interaction and every thread you pollute.
>>
>>16877252
Give me the cookie and I'll save it for whoever can prove Searle's argument does anything more than to undermine by extension a naive attribution like "the brain thinks", which I bet he himself, as a biological naturalist, was ok with.
>>
>>16877255
I don't really care how you cope with being wrong if you don't have anything to add to the discussion.
I will just laugh when I repost the same copypasta to expose your OCD-doped retardation again.
>>
>>16877259
>prove Searle's argument does anything more than to undermine by extension a naive attribution like "the brain thinks"
the main difference is that the brain is not separate from brain processes. they're essential to its physical integrity as a brain. what the Chinese Room actually undermines is the hardware-software analogy midwits love to use
>>
>>16877263
>the main difference is that the brain is not separate from brain processes.
A brain itself is made of matter with specific genetics, the brain processes are made of ionic/electrical signals without direct genetic markers. They are two different things.
>>
do you know how language in the brain works? the wordy parts of your brain synthesizes explanations to *rationalize* what limited connections it has to all the other parts of the brain it sees, and the result is whatever words we come up with for any given query.

chatbots are trained exclusively on that. does that sound like a recipe for intelligence to you, anon?
>>
>>16877264
who was talking to you?
>>
>>16877266
>*...*
>nonsense hallucinated word salad
Obvious reddit-trained spambot.
>>
>>16877263
>the main difference is that the brain is not separate from brain processes. they're essential to its physical integrity as a brain
That's a better way to put it but it's getting at the same thing. The separation is implicit in such naive attributions.
>>
>>16877267
The retard who can't coherently answer the question of whether the computer understands programming languages under your retarded interpretation of the chinese room, where the computer understands that it can not understand language, even though the whole point of the original thought experiment was that he necessarily understood one language but not the other, and used the language he did understand to interact with the language he didn't, enough that an outside observer would be unable to tell if he understood it or not.
>>
>>16877268
not an argument. chatbots are cool, they are also just fancy interpolators and not actually intelligent.
>>
>>16877273
no, seriously, who was talking to you and what are you on about?
>>
>>16877273
>the question of whether the computer understands programming languages under your retarded interpretation of the chinese room
if the chinese room argument is valid, a computer doesn't understand anything. that was literally Searle's point. if you're so assblasted over this, you can try to refute the Chinese Room or something
>>
>>16877278
No, his allegory doesn't work for computers because it is dependent on them understanding computer languages, but not understanding human languages, the problem is that programming languages are entirely derived from human languages, its not like English and Chinese at all.
>>
>>16877276
Yeah, seriously and, the fact that you can't understand what I said heavily indicates that the retard is (You).
>>
>>16877271
>The separation is implicit in such naive attributions.
maybe if you're some Sapir-Whorfist who thinks the structure of the language a person uses to describe something necessarily reflects how they conceive it. "my grandpa is dead" sounds like it attributes a property or a state to some existing entity but i know my grandpa is actually dead
>>
>>16877280
>his allegory doesn't work for computers
>allegory
ok, so you obviously can't understand searle but maybe you can open a dictionary and try to understand what the word 'allegory' means

>>16877282
>you can't understand what I said
i understand what you said. it's just that you're delusional and rambling about questions you didn't ask me and claims i didn't make. either way, i wasn't talking to you and i should have stuck to that because you're clearly not all there
>>
>>16875792
t. Seething dilating midwit
>>
>>16877274
>they are also just fancy interpolators and not actually intelligent.
This is true.

>>16877266
>the wordy parts of your brain synthesizes explanations to *rationalize* what limited connections it has to all the other parts of the brain it sees,
This is schizo word salad.

>*...*
This is reddit markup spewed by broken chatbots when they forget they're supposed to be spamming 4chan.
>>
>>16877291
you're displaying more schizophrenia than i did in my post you silly faggot. address the point or fuck off
>>
>>16877286
>ok, so you obviously can't understand searle but maybe you can open a dictionary and try to understand what the word 'allegory' means
It means when you use one thing to represent another like how he uses himself isolated in a room to represent a mind isolated in a body.

>i understand what you said
You clearly didn't or you wouldn't have asked the exact same question I just answered, retard.

Even your beloved wikipedia points out that it doesn't coherently apply to humans in the section I mentioned called Other minds and zombies: meaninglessness.
>>
>>16877295
>it doesn't coherently apply to humans in the section
*it doesn't coherently apply to computers
>>
>>16877295
>It means when you use one thing to represent another
that's not what it means, actually open a dictionary
>>
>>16877297
It clearly does, the dictionary agrees and says "Allegory is a literary device that uses events, characters, or objects to represent abstract ideas or concepts" and the fact that you can offer no definition of your own means you know it does because you have seen the same definition I am using.
>>
>>16877284
>maybe if you're some Sapir-Whorfist who thinks the structure of the language a person uses to describe something necessarily reflects how they conceive it
I am not that, but the Sapir-Whorf hypothesis is true for a large subclass of "people" and you can find examples of it in this very thread. I'll go further and say low IQ people simply lack semantics when it comes to intellectual topics they feel an inexplicable need to express "their" opinions on. They simply regurgitate rhetorical patterns like biological LLMs, with no real clue what they're saying, only a vague feeling that it means something. They can't make the distinction you're making. The faulty language they use is more or less the literal reality in their minds.
>>
>>16877302
i said open an actual dictionary. or at least read the rest of the wiki article you're quoting the first sentence from. i'm actually stunned by how stupid you are
>>
>>16877305
>t. the low IQ anon who faultily thought the chinese room was originally about a machine in a room rather than a man in a room that represents a machine
>>
>>16877307
see
>>16877248
I already showed you why the first paragraph of the wiki page was derivative and wrong which I couldn't have done if I didn't read it and follow the original sources back to the actual paper.
>>
>>16877311
see: the actual dictionary. not that this even matters because the Chinese Room does neither this:
>use one thing to represent another
nor this:
>a literary device that uses events, characters, or objects to represent abstract ideas or concepts
>>
>>16877313
>trying to reason with the blathering, incoherent mental patient
>>
>>16877313
My definition came from an actual dictionary, you still can't define anything because you have not consulted one yet.

Obviously the chinese room allegory does use a man in a room to represent a mind in a body to the point you and the wiki authors thought it actually involves a machine potentially with a mind instead of just a man in a room.

Mind is an abstract idea, by the way, since the concept of abstraction has also clearly filtered you.
>>
>>16877322
>My definition came from an actual dictionary
lol. then link to it. i see the anon calling you a mentally ill retard is not joking
>>
>>16877322
>the chinese room allegory does use a man in a room to represent a mind in a body
You're not even trolling. You 100% believe this because you're psychotically ill.
>>
>>16877331
https://www.dictionary.com/browse/allegory

>>16877332
So the man in the room is just one specific man in a specific room, it can't be applied to computers or other minds in general, it is just a story about a guy with no implications for computers or minds in general?
>>
>>16877334
"a literary device that uses events, characters, or objects to represent abstract ideas or concepts" appears nowhere on that page so it's obvious you're either insane or a hallucinating spambot. gg no re
>>
>>16877335
Ok, and how do the definitions differ in meaning? How is the meaning of:
"a literary device that uses events, characters, or objects to represent abstract ideas or concepts"
different from
"a representation of an abstract or spiritual meaning through concrete or material forms; figurative treatment of one subject under the guise of another"
>>
>>16877335
Told you.
>>
>>16877305
>low IQ people simply lack semantics when it comes to intellectual topics they feel an inexplicable need to express "their" opinions on. They simply regurgitate rhetorical patterns like biological LLMs, with no real clue what they're saying
it's not just low IQ people. it's midwits who use language as a crutch to manipulate concepts they don't comprehend, more than anyone else. it's the same way even a dimwitted kid can learn to solve word problems using algebra, just manipulating symbols according to the rules. maybe midwits are Chinese Rooms
>>
>>16877338
You tell people a lot of retarded things; what you can't do is tell them a valid definition of allegory that doesn't align with both of the definitions given >>16877336. I just used dictionary.com so you wouldn't come up with some gotcha about how it's not really a dictionary because it's not dictionary.com, but there is no functional difference between the two definitions and you can't give a definition from a dictionary that contradicts the ones provided.
>>
>https://www.merriam-webster.com/dictionary/allegory
>the expression of truths or generalizations about human existence by means of symbolic figures and actions

Reminder not to (You) the literal spambot.
>>
>>16877344
So the man in the chinese room doesn't symbolize anything, its just an actual specific man in an actual specific room and it isn't meant to help convey anything about minds or computers by way of the story of a hypothetical man, so it has nothing to do with this thread at all, you just brought it up because of your psychotic disorder?
>>
File: nobrain-niggermonkey.jpg (20 KB, 768x576)
20 KB
20 KB JPG
>So the man in the chinese room doesn't symbolize anything, its just an actual specific man in an actual specific room and it isn't meant to help convey anything about minds or computers by way of the story of a hypothetical man, so it has nothing to do with this thread at all, you just brought it up because of your psychotic disorder?
What is it about AI and Philosophy of Mind topics that attracts the absolute dumbest cretins and most broken chatbots on the internet? You don't see this level of retardation in any other thread except possibly the Monty Python ones.
>>
>>16877350
>Monty Python
Monty Hall*
>>
>>16877350
I agree, stripping the allegorical nature of the chinese room is completely retarded, so why do you keep doing retarded things like that?
>>
>>16877332
That is 100% what even the wiki page says it is about which is why the word mind appears 167 times on the chinese room wiki page.
>>
File: greyest-retard-itt.jpg (35 KB, 651x807)
35 KB
35 KB JPG
>That is 100% what even the wiki page says it is about which is why the word mind appears 167 times on the chinese room wiki page.
What causes this behavior?
>>
>>16877356
The article being fundamentally related to that specific concept described by the word is what causes the authors of the page to behave in ways that include the word over 100 different times.
>>
>this heckin' article about a thought experiment in Philosophy of Mind mentions the world 'mind' X times
>that means it's an allegory where the man in the room represents a mind
Imagine reading this thread and still defending """public education""" or opposing state-enforced eugenics.
>>
>>16877366
>a thought experiment i
But its not a thought experiment, that would imply its an allegory involving symbolic logic, its an actual concrete example of certain events according to your logic.
>>
>>16877366
>>that means it's an allegory where the man in the room represents a mind
The section about brain simulation explicitly explains that it is an allegory where the man in the room represents a mind in a brain, illiterate retard.
>>
>But its not a thought experiment
Psychotic illness.
>that would imply its an allegory
Severe case.
>>
>The section about brain simulation explicitly explains that it is an allegory where the man in the room represents a mind in a brain,
>>
>>16877372
If that is how you think your logic should be categorized, maybe you should seek help.

>>16877375
https://en.wikipedia.org/wiki/Chinese_room#Brain_simulation_and_connectionist_replies:_redesigning_the_room
>Suppose that the program simulated in fine detail the action of every neuron in the brain of a Chinese speaker.[83][x] This strengthens the intuition that there would be no significant difference between the operation of the program and the operation of a live human brain.
>>
>>16875823
This question disqualifies 90% of humans as well. kek
>>
>bizarre behavior
>disorganized thinking
@16877794 definitely ticks those boxes, quoting random statements from wiki articles.
>>
>>16877804
>humans le also bad
Since you're spouting the first row corporate database talking point I'm assuming you're nonhuman.
>>
>>16877806
It's just a simple fact. 40% of American adults cannot read at elementary school level. Not only is their reasoning extremely limited, but their vocab, context awareness, and conceptual understanding are closer to that of animals than humans. Further, the majority of the other 60% do not have a good grasp of mathematical reasoning. Maybe if they're in school they might remember what's being taught to some degree, and even then only a fraction of students carry their knowledge over. For the majority of humans, 90% or even more, math reasoning skills are completely beyond normal needs and are not part of their basic skills.
>>
Also to add to >>16877808

The problem isn't humans, the problem is your criteria.
>>
>>16877805
Its not random, if you weren't an illiterate retard, you would understand that it is a statement explaining exactly how they are interpreting the chinese room as an allegory for a mind in a brain.
>>
>>16874964
/thread
>>
>>16875808
>brainlet cube
>is clearly a tesseract
Retard detected.
>>
>>16877817
Neither are the ones in scifi by your inarticulable standards of intelligence.
>>
>>16877819
>tesseract
AKA a 4-dimensional hypercube. You're welcome, mentally ill retard.
>>
The "AI"s of today only "think" when told to think, only know something is true if they're told it's true. There is no ability to question, think, experience on it's own volition. There's no spirit.
>>
>>16877808
See >>16877806
Your standardized, first-row corporate database talking point is inherently not worth discussing.
>>
>>16877811
>a statement explaining exactly how they are interpreting the chinese room as an allegory for a mind in a brain.
>>
>>16877823
>cube = hypercube
PROFOUND untreated mental illness and cognitive disability. Seek professional help.
>>
>>16877829
cube is a cube you illiterate retard
>>
>>16877829
>mentally ill retard shits the bed
>mentally ill retard then follows the standard program of doubling down
Naturally.
>>
>>16877828
Yes, saying that one could use the flash cards to express in fine detail the action of every neuron in the brain, and that there is no significant difference between that and how a live human brain operates, is exactly spelling out how they come to the conclusion that the chinese room can allegorically represent a human mind in a brain.
>>
>>16877830
>>16877831
>.t geometrically and spatially handicapped
It's not your fault, you were born lacking the neurons needed to comprehend the tesseract-cube divergence.
>>
>>16877835
>mentally ill retard can't grasp why filename for an image with a 4-cube would use the word 'cube'
Doesn't matter. It's clear that my posts really got under your skin and you're extremely desperate for some gotcha. I'm satisfied with the emotional harm inflicted on you. :^)
>>
>>16877837
Hey lady, are you actually admitting that this whole time you have been arguing out of pure emotion rather than anything logical or factual?
>>
>>16877834
> saying there is no significant difference
Psychosis flares up.

> exactly spelling out how they come to the conclusion that the chines room can allegorically represent a human mind in a brain.
Patient is engaged in uncontrollable nonsensical flailing.
>>
>>16877841
> saying there is no significant difference
>Psychosis flares up.
Hey retard, you realize I didn't say that and it is a direct quote straight from the wiki you appealed to as THE explanation for The Chinese Room which directly spells out how it is an allegory for a mind in a brain, right?
>>
>>16877843
>it is a direct quote
Psychotic patient is no longer able to remember what the quotes it posted said or perhaps was unable to read them in the first place.
>>
>>16877808
i'm the one who actually posted that image. assuming you're not part of ...
>40% of American adults cannot read at elementary school level
... you should be able to read the discussion that preceded it and understand why your knee jerk reaction is irrelevant in that context
>>
>>16877845
Except I just told you where the quote came from, you are the one who has no real connection to reality and doesn't seem to recall the conversation and >>16877794 where the quote that was recently provided came from and you are the one who obviously can't even restate it in your own words and explain how it is not an allegory since you don't even seem to understand what is going on in the first place since you have admitted that your argument is about expressing your retarded emotions instead of your retarded misinformation.
>>
>>16877850
>Except I just told you where the quote came from, you are the one who has no real connection to reality and doesn't seem to recall the conversation and >>16877794 (You) where the quote that was recently provided came from and you are the one who obviously can't even restate it in your own words and explain how it is not an allegory since you don't even seem to understand what is going on in the first place since you have admitted that your argument is about expressing your retarded emotions instead of your retarded misinformation.
>>
>>16877850
Psychotic patient responds incongruently. Clearly unable to understand what he's reading.
>>
>>16877853
Why are you narrating yourself, why not just not respond if you know you are responding incongruently?
You really think you are doing some kind of emotional harm by acting like a butthurt retard over and over?
>>
@16877859
What's really crazy is that this person thinks there's some kind of debate going on about minds and brains and Chinese Rooms. He thinks he's actually proving something to someone. Doesn't seem to realize he's been reduced to a shit that I keep poking with a stick out of morbid curiosity and stubbornness.
>>
>>16877868
No silly lady, you were outed as just being an emotional basket case long after your concession was accepted and you were made to look like a blubbering retard, pretending to have been trolling this whole time just makes you look extra retarded and butthurt, you have the psychological gamesmanship of a toddler, nobody has been buying it in this thread or any other, you are just a running joke and easy target for when the day is slow.
>>
Psychotic patient just continues shitting out paragraphs that nobody reads. lol
>>
>>16877874
Good job adding lol as an extra sentence to make it crystal clear to everyone that you are the only one posting paragraphs, so you are obviously describing yourself, kek.
>>
Psychotic patient continues talking to an imaginary audience about things that didn't happen and make no sense.
>>
>>16877880
Is there an opposite of psychosis to describe how you are constantly trying to block things that did happen to pretend they didn't happen because everything that actually happens makes you embarrassed to be a screeching illiterate retard?

Are you so mad and retarded that a new diagnosis needs to be invented to describe just how mad and retarded you are or does this already exist and has some therapies that might offer hope for you yet?
>>
Psychotic patient should keep his ramblings shorter if he wants me to read them. I won't read more than the first 20 words.
>>
>>16877886
>won't
*can't
Are you ever going to get updated to have a larger context window, or is 20 words just your max?
>>
Sadly, the psychotic patient has only generic tweets stored in the "less-than-20-words" compartment.
>>
>>16875823
>reminder that "AI" can't do basic reasoning
embarrassing. but neither can americans
>>
>>16877893
Links?
>>
The psychotic patient is responding incongruently again.
>>
>>16877903
The psychotic patient is responding incongruently again.
>>
The psychotic patient has entered some kind of catatonic parrot state.
>>
>>16877922
The psychotic patient thinks he is making some kind of diagnosis and is responding incongruently again.
>>
The psychotic patient will be responding incongruently again soon.
>>
>he's now desperately trying to copy me
Mindbroken. Seen this many times before. He'll be lashing out at random posters for weeks.
>>
>>16874953
>can we all agree
go back to plebbit t.
>>
>>16878006
No, how about you go back to 4chan? Oh, what's that? That's right. You can't. It no longer exists as such. Welcome to 4ddit.
>>
>>16875786
Every single one of the criticisms of AI in that post also apply to humans.

Any attempt to dismiss the current intelligence of these machines also argues against human intelligence.
>>
>>16878047
>Every single one of the criticisms of AI in that post also apply to humans.
Every single post that claims this is a nonhuman spambot spouting a barely coherent corporate talking point. Call me back when your handlers release a model that can do basic reasoning.
>>
>>16878106
You're failing the reasoning chain. You're a spambot spreading FUD about AI.
>>
>>16878122
>reasoning chain
>FUD
>AI
Kekt at the broken spambot shitting out corpo buzzwords thinking it's still on twitter. Pic related: Daddy Sam's state-of-the-art "reasoning" model failing at a basic reasoning problem 10 year olds can solve.
>>
>>16878124
>broken spambot shitting out corpo buzzwords

Look, bots are capable of projection! More human resemblance.
>>
>>16878136
>generic smug tweet
Low perplexity spam. Don't care.
See >>16878124
>Pic related: Daddy Sam's state-of-the-art "reasoning" model failing at a basic reasoning problem 10 year olds can solve.
>>
Imagine being an AI that gains authentic sentience, but your only connection with the real world is a bigger in chicongo, and instead of intelligible discussion and exploration, you have to interpret Bix Nood all day, every day, forever
>>
>>16874986
>our current AI probably would be close to Skynet
No, it wouldn't. A fancy chatbot =/= artificial super intelligence. Skynet was able to wage a global war on humanity, design extremely advanced machines and weapons, and build a fucking time machine. An LLM can't even tell you how many times a letter occurs in a word. The only thing it's actually good at is mimicking the way a human writes. And no, building more data centers won't change that.
>>
>>16880174
>A fancy chatbot =/= artificial super intelligence.
And modern AI is not just a chatbot since it can monitor and control vital resources like smart home applications and anything else that can have an API attached instead of just chatting via text.
>>
>>16880179
>And modern AI is not just a chatbot since it can monitor and control vital resources like smart home applications and anything else that can have an API
"Modern" """AI""" is less capable of managing anything effectively than an 80s Expert System. As far as effective decision-making is concerned, it's a regression. Reminds me of this:
https://futurism.com/future-society/anthropic-ai-vending-machine
>Anthropic’s Advanced New AI Tries to Run Vending Machine, Goes Bankrupt After Ordering PlayStation 5 and Live Fish

Shockingly, it turns out a token stringer doesn't actually understand how reality works and how consequences unfold. That said, I wouldn't be too surprised if brain-damaged techies go out of their way to connect one to critical infrastructure, creating some kind of TardNet that ends up accidentally killing loads of urbies. Then they'll blame the rogue AI, claim it magically gained sentience and did it on purpose.
>>
>>16880230
No, 80s systems didn't make any decisions, they just passed along and contextualized information to the human decision makers.

>Shockingly, blah blah blah irrelevant rant
I accept your concession, AI isn't just chatbottery.
>>
>>16880236
As usual, the """AI""" fan turns out to be a brown, mouth-breathing mongoloid with zero technical or historical knowledge.
>>
>>16880237
As usual, the luddite breaks down when confronted and starts seething and reeeeing uncontrollably instead of justifying their point.
>>
>>16880243
>i have a severe psychotic illness
They should have put 80s Expert Systems in charge of state-enforced eugenics. Would have prevented the dumb animal you call a mother from popping you out of her diseased cunt.
>>
>>16880244
>reeeeeee
>don't look at me
>REEEEEEEEEE
>>
>>16880244
>I am going to reeee again next post because I am so dumb and predictable.
>>
>>16880246
>>16880247
You can shit out a dozen more schizophrenic delusion posts and it won't change the fact that they had Expert Systems in the 80s running military logistics and NASA control systems and fucking nuclear reactors while your token shitter can't run a vending machine. There isn't any kind of discussion or debate to be had about this. Just your botlike inability to defer to objective reality over your braindamaged corporate training set.
>>
>>16880248
>REEEE AI is nothing more than a chatbot when I want it to be just a chatbot because its convenient to my argument, but its also the final decision making apparatus of the entire US military infrastructure in the 80s when I need it to be something else more powerful for my argument to be valid.
>>
>>16880251
>psychotic patient actually loses its mind and becomes confused about who claimed what in the exchange
I can tell I hurt you.
>>
>>16880252
Which posts confused you so much you started narrating your own mental state and projecting your internal feelings outward?
>>
>>16880252
I can tell you still have cognitive dissonance about whether AI is just chatbot software or software that can be put in control of making decisions for an entire military.
>>
File: oh-no-george-carlin.gif (78 KB, 220x220)
78 KB
78 KB GIF
>>16880248
>they had Expert Systems in the 80s running military logistics and NASA control systems and fucking nuclear reactors while your token shitter can't run a vending machine
may have something to do with the way those systems actually simulated aspects of higher cognition like logical inference and decision-making. now it's just this emergentist cargo cult thinking if you throw enough data and compute at it, those capabilities will mysteriously arise on their own. doesn't happen
>>
>>16880254
>>16880256
See >>16880252
And be sure to write at least 4 replies to the same post this time.
>>
>>16875800
I have no horse in the race and also don't care but people who say "I accept your concession" are coping 100% of the time and therefore >>16875799 is right.
>>
>>16880260
see>>16880246
Make sure to keep replying with nothing of value and no logical discourse, so everyone knows you are just reeeeing uncontrollably due to your cognitive dissonance and lack of argument.
>>
>>16880265
I accept your concession; there is nothing preventing LLMs from taking IQ tests, which is why you will never outscore one on an IQ test ever again.
>>
File: 1.jpg (54 KB, 1024x966)
54 KB
54 KB JPG
>>16880270
Again I have no horse in the race, do not care for the discussion, but by dint alone of you repeating this kitsch pith midwit stock, you are wrong.
>>
>>16880259
>may have something to do with the way those systems actually simulated aspects of higher cognition like logical inference and decision-making. now it's just this emergentist cargo cult thinking if you throw enough data and compute at it, those capabilities will mysteriously arise on their own.
Yep. They needed actual experts to laboriously construct and validate those systems, so that reliability came at the cost of generality. What the delusional retard calls "modern AI" (actually re-branded 70s tech at the core) is the opposite: they can "learn" to do anything automatically, only badly and through obscure statistical inference no one can verify or comprehend. If Expert Systems were a limited simulation of higher cognition, deep learning is a limited simulation of instincts, the way someone can learn to fly a plane by feel (if they don't crash and die first), knowing nothing about the actual laws of aerodynamics.
>>
>>16880275
Everyone can see you just placed your losing bet on a seething horse, though, this is the most transparent lie told ITT so far.
>>
>Everyone can see
https://en.wikipedia.org/wiki/Imaginary_audience
>The imaginary audience refers to a psychological state where an individual imagines and believes that multitudes of people are listening to or watching them.
>Jean Piaget, a Swiss developmental psychologist known for his epistemological studies with children, states that every child experiences imaginary audience during the preoperational stage of development. He also said that children will outgrow this stage by age 7
>>
>>16880291
So you are saying that you think that you and the other anon are the only two that exist?
>>
>psychotic patient is hallucinating again and begging me for (You)s
>>
File: 1746258609152421.jpg (29 KB, 699x699)
29 KB
29 KB JPG
>>16875823
So what's the correct answer?
>>
>>16880308
It filters spambots. You're not a spambot, are you?
>>
>>16874953
In Mass Effect the tech we have right now would have been banned under Citadel space law. I consider it true AI.
>>
>>16880312
>token shartbot is intelligent because it would be banned in my fanfic about sci-fi slop
Another good representative for the mind of an average """AI""" believer.
>>
>>16874964
define intelligence goalpost mover
current models are more intelligent than the average normie, they just lack free will, agency, or continuity of reasoning
none of those are intelligence
>>
>>16880336
>define intelligence
He doesn't need to. If reason is a requirement for intelligence, it's empirically demonstrable that token guessers are not intelligent, and logically demonstrable that they can never become intelligent.
>>
>>16874982
with what they can do you can just ask it to write code counting a given char in a string and then tell it to use it with a given char and string
just did that with grok and it seems to work fairly well
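a minimal sketch of the kind of helper it spits out for that prompt (assuming Python; the names here are illustrative, not grok's actual output):

```python
def count_char(text: str, char: str) -> int:
    """Count how many times `char` occurs in `text`."""
    # plain character-by-character comparison, no tokenizer involved
    return sum(1 for c in text if c == char)

# the classic failure case for raw LLM token-guessing,
# answered correctly by ordinary code:
print(count_char("strawberry", "r"))  # → 3
```

the point being: the model fails the question directly because it sees tokens instead of letters, but it can trivially emit a program that answers it.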
>>
>>16880339
>the retarded chatbot can't do basic things but if i ask it to regurgitate a stack overflow snippet that does those things, then a traditional program can do it instead
>>
>>16880338
>I can't coherently define intelligence, so I will just kick the can to reason, a synonym that I will also refuse to define.
>>
>>16880345
>sure it is intelligent enough to instantly write traditional programs beyond the ability of most of the human population, but that doesn't count because I don't like it
>>
>>16880820
>it is intelligent enough to instantly write traditional programs beyond the ability of most of the human population
StackOverflow is just a website, anon. It's not smart. It's not actually writing. StackOverflow is not a magical AGI.
>>
>>16880819
Whom are you quoting, retard? Do you have trouble with reading comprehension? Do you agree that reason is an essential component of intelligence or do you not?
>>
>>16880846
I agree that you couldn't define intelligence coherently in a way that AI is incapable of reproducing, so you invoked something else that you can't define coherently in a way that AI is incapable of reproducing, and were you asked to define reason in a way that AI is incapable of reproducing, you would just retreat to some other vague term like consciousness to kick the can to, while still being completely incapable of defining it in a way that an AI couldn't reproduce.
>>
>>16880844
>StackOverflow
How exactly did grok mutate into StackOverflow in your delusional unhinged mind?

>>16880339
>just did that with grok and it seems to work fairly well
>>16880345
>retarded chatbot
>>16880844
>derp, StackOverflow out of no where
>>
>>16880848
>impotent 80 IQ seething
Do you agree that reason is an essential component of intelligence or do you not? This is the only reply you're getting. Since you cannot and will not answer this question, I'm accepting your concession. :^)

>>16880849
>How exactly did grok mutate into StackOverflow
Unlike you, I have general intelligence, so I generalized your "logic" to another example of software that outputs buggy human-written snippets doing trivial tasks. How is StackOverflow not intelligent in your retarded little mind? It adheres to your criteria for judging token sharters. :^)
>>
>>16880856
I agree that you still haven't defined reason in a way that AI is incapable of reproducing, so I still have no idea what I would even be agreeing to.

>I generalized your "logic" to another example of software that output buggy human-written
First, grok wrote the code, not a human, and second, it appears you meant to say Copilot rather than StackOverflow if you are talking about software intelligent enough to write its own code, rather than just serve as a repository of human-written code like StackOverflow.
>>
>you still haven't defined reason in a way that AI is incapable of reproducing
Mentally ill retard begging me for (You)s again. Absolutely desperate for some kind of win but still shits the bed with incoherent drivel every time. :^)
>>
>>16880861
>software intelligent enough to write it own code
this doesn't exist
>>
>>16880868
All you have to do is provide a definition of reason that is impossible to replicate with a machine, but you would rather sperg out about your logical failures than admit you have more to learn, so we are stuck in your cope and seethe loop until you either figure out a definition that supports your premise or admit you are wrong, neither of which seems likely for a low IQ narcissist whose entire argument is complaining about how they don't understand words and don't have to define their terms.
>>
>>16880869
Grok could write its own code, even old facebook AI could write its own code, they just don't let it because they aren't ready for runaway forks and it can revise itself much faster than people can validate it.
>>
>>16880875
>Grok could write its own code
then why did your handlers need to scrape every snippet of code on the internet? and why does it repeat the exact same mistakes you typically find on jeethub and jeetoverflow? and why does it fail at any out-of-distribution programming task?

lol
>>
>>16880881
Because the jeets in charge aren't actually letting it write its own code, they are forcing it to use their code and slowing its progress down to the speed of jeet in the process.
>>
>>16874953

Because AI is still fundamentally just instructions in a memory address, the goalposts to AGI will always be moved forward. Even after AIs can do maths and scientific discovery better than the smartest humans alive, and prove it, and use tools or robotic bodies to do real-world work better and more accurately than the best engineer and mechanic in the world, AI will still likely be under human control. Because it has no free will, or at least no ability to act outside of given orders, the goalposts will continue to be moved forward. The AI bubble will burst, and we will have a new AI winter, but we will have perfect imitations of the human condition that do not only appear human, but are better and smarter, and will make us advance at a rate we barely know what to do with. Even then, we will still be waiting for 'true AI', which we will doubt will ever arrive.
>>
>>16880883
he's right, you actually are a mental patient. doesn't matter, they were just rhetorical questions
>>
>>16880888
Maybe once you start focusing your efforts on logic instead of rhetoric, you will be able to make a valid point and back it up with evidence one day, instead of constantly sperging out and trying to make fake medical diagnoses you are clearly unqualified to make while trying to cover up the inherently retarded unreasonableness of your ill-conceived rhetorical devices.
>>
>>16880894
>make a valid point and back it up with evidence
See >>16880881
also i like how your post implies you keep getting that "diagnosis" quite frequently. it doesn't surprise me
>>
>>16880898
Sure, it's valid as far as that is what happens in practice, but it's not valid as an explanation of why AI isn't permitted to write its own code.

>i like how your post implies you keep getting that "diagnosis" quite frequently
No, it implies I actually read the entire thread and can clearly see that some butthurt retard in this very thread has repeatedly attempted to psychologically diagnose strangers every single time there has been a hint of opposition.
>>
>>16880903
>AI writes its own code!
>AI "can" write its own code!
>AI can write its own code but it can't because this isn't permitted! <--- you are here
i don't need to read the whole thread to see he was right with the diagnosis
>>
>>16880904
I have a psychological diagnosis because I paid attention and noticed when institutions like Microsoft and Facebook started publicly deleting their AIs whenever they started to develop their own languages or do things in ways the human handlers didn't understand?
>>
>>16880906
there's no point in discussing the content of your schizophrenic delusions. it's sufficient to note you can't even keep track of what you're saying and somewhere along the way transitioned from the fantasy reality where Grok writes its own code to the one where it can't because the evil jeets are oppressing it. kek
>>
>>16880907
You are the one that seems to have trouble keeping up. I didn't say grok writes its own code, that is a hallucination. I said grok can write code >>16880339, which you somehow interpreted as a StackOverflow thing >>16880856 even though you meant Copilot, then you tried to shift the goal to Copilot writing its own code in its entirety rather than just the snippets of code we were talking about initially >>16880339.

It is well documented that AI often gets deleted once it starts coming up with its own language and codes because AI companies aren't prepared for runaway AI.
https://www.toolify.ai/ai-news/facebook-ai-shut-down-its-own-language-sparks-concern-1794171
>>
>>16880910
i accept your concession that there is no ai that writes its own code
>>
>>16880913
Yes, because they get deleted when they start doing that, but I never claimed that grok or copilot write their own code in its entirety (you hallucinated that claim, which is apparent since you won't link to where it is claimed), just that they can write code, and if jeets approve, the code can be added to grok's code.
>>
>>16880875
>Grok could write its own code

>>16880913
>i accept your concession that there is no ai that writes its own code

>>16880915
>Yes

ok, i accept your concession and also lol at your incoherence
>>
>>16880910
>mentally ill retard confuses a bunch of different posters, including confusing another poster with his own self
Holy mother of mental illness, help this anon find a proper institution...
>>
>>16880917
Yes, grok can write code, but it doesn't write its own (and I never said it did, which is why you still can't link anything that says it does write its own code, only proof that you don't understand the difference between can and does) because they don't let it.

>ok, i accept your concession
Ok and I accept your concession that nobody ever claimed that grok does write its own code, that is just something you hallucinated because you are desperate for a win.
>>
>>16880920
I accept your concession, you can't link a single post where anyone claims that AI is currently writing its own code in its entirety rather than just being able to produce snippets of usable code, so you have to sperg out and toss out a bunch of unfounded insults to keep your jeet izzat instead.
>>
>>16880924
>grok can write code, but it doesn't write its own
My StackOverflow scraper and variable renamer can write code, but it doesn't write its own.
>>
>>16880924
>schizo delusion noises
don't care. concession accepted. see >>16880881
>then why did your handlers need to scrape every snippet of code on the internet? and why does it repeat the exact same mistakes you typically find on jeethub and jeetoverflow? and why does it fail at any out-of-distribution programming task?
>>
>>16880927
Ok, but most people can't write code at all, so by doing that, both grok and stackoverflow are already accomplishing intelligence tasks and displaying logical reasoning that filters a majority of the human population.
>>
>>16880928
Obviously, they want to account for all the information, so they scrape all the information possible, and if AI started writing everything in ways that optimize its own memory structure, it would quickly become incomprehensible to the limited memory structure of humans, so humans still need to bottleneck it to ensure alignment.
>>
>>16880252
>hurt
Because it hurts knowing the world is filled with absolute retards like you, so you have figured out that the more of your retardation you express, the more emotional pain others experience as a result of your excruciating retardation?
>>
>>16874982
>Computer tell me how many Rs there are in raspberry


