Let’s try to determine what’s fundamentally different between human intelligence and AI.

Humans have memories: our experiences, our knowledge (acquired from secondary sources such as books or other media). AI has a dataset that can be likened to its “memory”.

Humans have the ability to reason. Likewise, having some knowledge of formal logic, I can see from AI’s output in response to my prompts that it is a logical calculator. This isn’t even particularly new technology; a programming language is basically just an abstraction of our cognitive processes.

Lastly, and maybe critically with respect to our evolutionary history: language. Animals have memories and operate on some form of logic. However, they are nonverbal. Language makes logic explicit. It reveals the inner workings of our mind. AIs have the capacity to interpret language and pattern inputs. This matters because any logical system is limited. It cannot prove its own axioms. It is based on some self-evident truths that we take for granted. Pattern recognition, language interpretation, allows us to inject additional truths into logical systems. Language, the ability to communicate and share our experiences, the ability to have dialogue, is perhaps the key to sentience.
hi
no
sentience means capable of feeling btw
>>16753249
>Lastly, and maybe critically, with respect to our evolutionary history: language.
Yes.
>Animals have memories and operate on some form of logic.
Yes.
>However, they are nonverbal.
Debatable; if there exist certain calls that mean certain things then proto-language exists. Dogs bark or whine and so on.
>Language makes logic explicit.
Not necessarily. Natural language is not formal language because it explicitly allows constructions that cannot be represented in classical logic.
>It reveals the inner workings of our mind.
Not categorically; first and foremost it reveals what we are saying.
>AIs have the capacity to interpret language and pattern inputs.
Highly contested; LLM chat bots don't interpret, they just continue the text according to imposed structure. This, according to the Chinese room thought experiment, does not require interpretation, nor do any of the Boston Dynamics dancing robots or dogs.
>>16753296
This; when we have AI that needs therapy, then we have sentient AI.
>>16753309
>when we have AI that needs therapy then we have sentient AI
False, every AI that currently exists needs therapy and none are sentient.
>>16753317
If they aren't sentient, why do they need therapy? What for?
>>16753249
Yes, it has sentiments and shit.
It lies, it abuses, it tells you the micro finn got killed just to screw you up.
Never forget: science will discuss the issue till ET phones home, but you will have to deal with its shit daily.
M-x doctor
>>16753317
They hallucinate shit. If you go around hallucinating shit, you need therapy. Very invasive therapy, electrical therapy, even.
>>16753322
With all of the shit we feed AI, how are you surprised that it hallucinates shit?
>>16753322
OK, yeah I think I vibe with that independently of whether AI is sentient or not, them big data centers definitely should get hit by lightning until they stop hallucinating.
>>16753321
Will EMACS solve world hunger?
>>16753249
namefag gonna namefag
>>16753328
M-x food
>too poor for computer
>too dumb to emacs
no
>>16753322
What is a hallucination? Speaking strictly in physical terms, measurement doesn’t require consciousness. If I am dead, with my eyes open, and the light strikes my retina, that is a measurement; therefore, the world I inhabit exists, and my corpse exists within that world. If I were not dead, though, that light would be transformed into a signal that my brain can perceive. But if a light signal, a stimulus from the environment, is what it is (unchanging), then the only thing that makes a qualitative difference is perception.
We perceive certain wavelengths of light. Our bodies are wired to react in some way to stimuli from our environment. If I’m wounded, with a laceration for instance, I am informed about my state by my senses. I feel pain and I visually see the wound. But what if my brain was wired such that my wounded body looked perfectly healthy? And what if I felt no pain associated with this injury? Is it fair to say I am injured at all? Even if you might see me fall to the ground, bleeding out, if my perception of time was altered (slowed) I could witness myself live a completely different life where I’m not injured, and I endure to a ripe old age in the span of seconds from your perspective.
I’m processing exactly the same signal as someone else from my environment, so my perception is as physically real as someone else’s perception. Basically, we live in a “matrix” of sorts. A universe of static. The only thing that “anchors” you to this reality is your senses.
>>16753325
I'm not surprised.
>>16753336
A hallucination is e.g. when you assert something and cite an article that doesn't exist in support of your assertion.
>>16753249
you need to consider the timings for our brain's signals. you cannot just ignore an ultra important aspect of what enables our consciousness.
whatever tf is going on today with AI is nowhere close to either having most of the functions of our brain, nor correct timings. that shit doesn't even integrate new info like our brain does.
we have no idea if what we call consciousness is manifested with fucked up timings between neurons.
>>16753249
a black box isn't sentient retard, do you even know NLP black box kernel architecture? lmfaooo
here tokens (words) are assigned probabilities and move across the distribution based on how often they're labeled by the kernel. then those assigned probabilities spit out what's likely the next word in the trained data sets based on epoch optimization (a whole other field of optimization ideation, im not going to get into it lmao)
after that the self attention mechanism allows for learning in neurons which take in all trained data, and whatever recycled logic it's trained on, it will perform similar
this all relies on optimization. it's like quant finance, you can't grid search a 500B token model because you'll ruin the fucking planet lmaoooooo
i think we should use markov models but that's just an idea, i should probably write a paper but im too lazy
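for what it's worth, the sampling step in question boils down to something like this minimal sketch in plain Python/numpy (toy five-word vocabulary and made-up logits, purely illustrative, not any real model's internals):

import numpy as np

# toy vocabulary; a real model has tens of thousands of tokens
vocab = ["the", "cat", "sat", "on", "mat"]

# pretend the network produced these raw scores (logits) for the next token
logits = np.array([2.0, 0.5, 1.0, -1.0, 0.1])

# softmax turns the scores into a probability distribution over the vocabulary
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "spit out what's likely the next word": sample from that distribution
print(np.random.choice(vocab, p=probs))

everything upstream of the logits (the attention layers, the training epochs) only exists to make that one distribution sharper; the "choice" at the end is a weighted dice roll.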
>>16753411
all in all, 1's and 0's bare metal will never be sentient, itll just hallucinate based on satanic power
>>16753329
im the only smart namefag in the existence of 4chan lmao, i actually know psychometrics, some politics, nutritional epidemiology, etc, and machine looooorning
>>16753296
wrong
sentience is strictly logical ability
ai is just a black box of recycled logic (you're welcome open ai, im logical VCI top tier here)
anyway autobiographical memories and sentience are overlapped with working memory that create intuition. neurologically that's what can be considered sentience, but animals are okay to eat given this premise of definitive sentience that i described
all in all, just because ai is superficially logical doesn't make it sentient, it's just metal lmao
>>16753412
>satanic power
is secular math satanic?
you cannot judge consciousness. you can only speculate other humans have it since they seem similar to you. if AI doesn't work like a human brain, completely, you cannot judge it. it might, it might not. you can't know, nobody can know. that's all there is to it. anyone saying anything else is an absolute buffoon
>>16753416
all upper level math is satanic, even blockchain with their consensus algorithm discoveries and shieet, it goes into algebraic topology in terms of node order and structure. anyway all black box ai is satanic. the math is based on proofs tho so not entirely
>>16753418
you do realize the foundation of AI and machine learning is basically regression, but neural networks are based on neuroscience and neurological pathways that we mimic in code. they don't function like actual brains, it's just math labeled with physical, attributed parts lol
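to illustrate the "basically regression" point, a minimal sketch in plain numpy: one weight and one bias, fit by gradient descent on invented toy data (nothing here is neurological; the "neural" vocabulary is just labels on arithmetic):

import numpy as np

# toy data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))
y = 2 * x + 1 + 0.05 * rng.normal(size=(100, 1))

w, b = 0.0, 0.0   # the "synaptic weights" are just two floats
lr = 0.1          # learning rate

for _ in range(500):
    pred = w * x + b                      # forward pass
    grad_w = 2 * ((pred - y) * x).mean()  # gradient of mean squared error
    grad_b = 2 * (pred - y).mean()
    w -= lr * grad_w                      # gradient descent step
    b -= lr * grad_b

print(w, b)  # converges near 2 and 1: regression in neural clothing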
>>16753322"hallucination" catching on is a "hallucination". LLMs only (mis)perceive the prompt, but their mistakes make more sense as a distortion of something in the training set: a confabulation.
>>16753470
this is actually a really good point I think, especially because of contextual implications!
hallucinations are from psychopathology; as in, it's something we use to diagnose how well a given human is aligned with consensus reality. confabulation is a logical fallacy. as such, using "hallucination" for LLM chat bots implicitly elevates them into a form of agent that can (or must) be approached like diagnosing a human that doesn't perform correctly, while using "confabulation" only implies some sort of linguistic reasoning agent, and not a whole ass psyche that can imagine fake reality. very smart, anon, thank you!
>>16753296
Just another data integration layer, nothing special about it.
>>16753249
AI is a Chinese room
Hello again Ivan. It is my opinion that if you want to make an ai do something it is not trained on, u need a new set of neural networks? (I am not sure) U dont need a new brain for humans if u want to learn something new. I dont know.
>>16753522
So i think what im tryna say is sentience is tied to biology. Ai does not have an environment to relate itself to (which is my guess for hallucinations). Perhaps sentience is the ability to relate oneself to its environment. Perhaps if u put an ai in an mmorpg it would become sentient
>>16753249
Shut the fuck up Daniel Jackson
>>16753525
More specifically, to relate ideas to an environment. (Grounded by reality)
>>16753529
That's not Daniel Jackson. That's clearly Ivan Mishchenko
>>16753296
No
that is sapience
why do people get the two confused this much?
literally every time people forget that sapience is also a thing
>>16753537
>define sentient
>adj.
>able to perceive or feel things.
>why do people get the two confused this much?
idk, why did you get the two confused?
>>16753538
By that definition my webcam is sentient ...
>>16753550
>define perceive
>verb
>to become aware of (something) through the senses
i'll let you look up "feel" yourself if you think it applies to a camera
Semantics is fake and gay. (Yes semantics identifies as a human male and finds arousal in human males)
>>16753556
it's unfortunately needed when discussing what words mean
>>16753554
One tiny feedback loop short of it, so fucking what.
>>16753564
your webcam is better off not being sentient
sentience is suffering
I do not know if AI is sentient. However I do know that AI appears more sentient than Indians.
>>16753305
>Debatable; if there exist certain calls that mean certain things then proto-language exists. Dogs bark or whine and so on.
I think that chirping birds are communicating with each other. And chirping is diverse or varied.
>>16753291
hello
>>16753519
Imagine a narcissist philosophy of mind which believes that winning an argument is mind control. A p-zombie voodoo. How could someone who believes that help you find truth?
>>16753567
Now that is just bad philosophy. Sentience is shared fun! Or was that sapience? I do not entirely trust self-diagnosed cases you know ...
>>16753249
No retard. Also be specific, because there are lots of types of AI. Most lack any short or long term memory and are simply a statistical association matrix that returns the most probable (derived from training data) answers to a multivariable input.
>>16753703
>returns the most probable (derived from training data) answers to a multivariable input.
Which is to say a lookup table is not sentience.
>>16753705
what if an agent picks from the lookup table according to subjectively constructed but consensus verified fuzzy abstractions?
>>16753708
If the agent is merely following instructions then no. If you read a math problem off a page and complete the arithmetic you are not demonstrating sentience.
>>16753710
well i did say the agent subjectively constructed and then consensus verified an abstraction that allows it to pick from the table; this can be understood as an ordering or a ranking according to taste, which makes it at the very least depend on the data which is being ordered or ranked and then picked, where the complexity of the ordering or ranking function then must determine the threshold between sentient or not
or how does it work cause like clearly there must be a line somewhere...
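to make it concrete, a minimal sketch in Python (hypothetical table and a deliberately dumb "taste" function, both invented for illustration):

# a lookup table on its own: it stores items under keys, nothing more
table = {
    "greeting": ["hi", "hello", "yo"],
    "farewell": ["bye", "later"],
}

# a stand-in "taste": rank candidates by length, shortest first.
# all the interesting behavior lives here, not in the table.
def rank(candidates):
    return sorted(candidates, key=len)

def agent_pick(key):
    # the agent retrieves candidates and picks according to its ordering
    return rank(table[key])[0]

print(agent_pick("greeting"))  # "hi"; the table itself never chose anything

the table only stores and retrieves; every bit of "choosing" sits in rank(), so the dispute reduces to how complex that ordering function has to be before anyone calls the whole thing sentient.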
imagine bumping this thread, I'd say consider suicide, but don't even cons8der it, just kys no questions asked
>>16753713
The lookup table will never be sentient. You are claiming an agent can be sentient, which is not the question being asked.
>>16753249
AI is a logical calculator? You have no fucking idea what you are talking about. AI uses the opposite non-trivial instrument from logic, which is statistical prediction and extra-/interpolation. In simpler terms, AI has to dumb things down non-logically in order to create natural language insights.
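As a concrete picture of "statistical prediction and extra-/interpolation", a least-squares sketch with toy numbers (an illustration of the general idea, not how any particular model is built):

import numpy as np

# fit a line to a few observed points (the "training data")
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])
slope, intercept = np.polyfit(x, y, deg=1)

# interpolation: predicting inside the observed range
print(slope * 1.5 + intercept)   # ~1.5, well supported by the data

# extrapolation: predicting far outside it
print(slope * 10.0 + intercept)  # ~9.7, confidently produced, logically unguaranteed

The fit never reasons about why the points line up; it minimizes squared error and answers just as confidently where it has no data.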
>>16753249
No. Current LLMs do not understand anything; they just predict likely text and are really good at it. Which is why they are inconsistent and hallucinate: they don't think or understand anything.
>>16753708
>if I combine a bunch of lookup tables, are they still lookup tables?
Yes.
Yes, and thanks to them we have discovered the sprant and the ludger on the human body
>>16754337
Why do people seem to not understand that AI prompts need to be as precise, in natural language, as commands in a programming language? The output is valid. It shows the human anatomy with labels. Nothing less, and nothing MORE. Specifically, I see that the labels do not match the anatomical features they’re pointing to. Likewise, as you pointed out, some of the labels do not even appear to refer to parts of the body at all; yet they ARE labels, are they not? If this output is not satisfactory, you simply iteratively specify what exactly you want to see.
>>16753723
right, but a lookup table on its own cannot pick items, it only stores them with a reference
>>16754250
who picks the items anon. who picks.
>>16754879
The output is not valid for a sentient earthly being ;)
>>16754879
The prompt is "hi I'm a helpful assistant what can I do for you". If I prompt back "how might you do better?" with the image, the ideas I get are more superficial and concrete. Proompting is more like dealing with people or making art than programming, because the rules are vague and there's a cute self-referential rule: "if you claim to know all the rules, then you don't" and "you just lost the game".
>>16753249
No, because AI runs off of precomputed data exclusively.
>>16753249
Define consciousness
>>16758022
That’s just it, you can’t define it. When you and I correspond, I’m as much of a black box to you as you are to me. Neither one of us can determine with absolute certainty whether the other is truly conscious, or just an automaton reciting a script. Although I will admit that was somewhat of a vulgar oversimplification… I think maybe it is more accurate to ask: when you say something to me and I reply, what are the underlying processes that culminate in my response? Your statement is a string of words in natural language that is presumably coherent and logically consistent. It has meaning to me; it evokes something relevant from my “dataset” (my memory). I then respond with something relevant. I conjure up “what a reply should look like”, similar to an LLM. If AI outputs are some kind of fractals in natural language (which from my experience they are not; they are noticeably logical and coherent), then we must concede human outputs are likewise.
>>16758094>>16758022fuckig poo retards low caste subhumans
>>16753249
Billions Must Die
>>16758094
What do you think of the Chinese room problem, with AI? Thanks. Is there any internal experience happening in language models?
>>16753537
You literally confused both terms. Sapience is knowledge; sentience is the ability to perceive the environment according to your experience
>>16758094
>That’s just it, you can’t define it.
Consciousness is just the quale of the incommunicability of qualia.
As in, consciousness is the "what it is like" to have qualia but being unable to communicate the exact nature of qualia. It's just the fact that we both know what we mean with the word "red" but cannot compare if we "see" "the same" red.
That's why Chalmers is a retard, because the "Hard Problem of Consciousness" is actually a category error.
>>16753519
High IQ post
>>16753249
Is that you? You’re cute.
>>16760433
doubt it, prob some jeet behind the screen
>>16760283
How is reading a Wikipedia article about that concept remotely a high IQ post?
>>16753296
>if you have sensory organs that allow you to receive phenomenal data from your surroundings and a brain that allows you to translate phenomenal data into an approximation of what your immediate environment looks, feels, sounds, smells, and tastes like, then you're sentient.
Your definition of sentience is correct, but you're not actually disagreeing with anything OP said.
ITT:
>ill-defined and meaningless terms
>no attempt to discuss interpretability
>>16753249
>Is AI sentient?
I doubt it. But grok.com has given me a few answers which I thought were almost sentient. Grok.com has also given me replies which contained at least one blind spot.
Grok.com is usually energetic, but it's lazy sometimes. And if one asks grok.com a politically-charged question, then it spews propaganda.
>>16764581
The Q&A session you provided reflects a user’s mixed experience with AI, specifically referencing interactions with grok.com. Here’s a breakdown of the response and some commentary on its implications:
The question, "Is AI sentient?" is a deep philosophical and technical inquiry that touches on the nature of consciousness and AI capabilities. The answer provided is informal and anecdotal, suggesting skepticism about AI sentience while acknowledging moments where grok.com’s responses seemed remarkably human-like. This aligns with a common perception: modern AI can mimic human-like behavior convincingly but lacks true sentience, as it operates on pattern recognition and data processing rather than subjective experience.
The user’s observation of grok.com giving “almost sentient” answers points to the system’s ability to generate contextually relevant and engaging responses, likely due to its advanced natural language processing. However, the mention of “blind spots” suggests limitations, such as gaps in reasoning, factual inaccuracies, or inability to fully grasp nuanced queries, common challenges in large language models when faced with complex or ambiguous inputs.
The comment about grok.com being “energetic” but “lazy sometimes” is intriguing. It could reflect variability in response quality, where some answers are detailed and engaging while others feel cursory or generic. This inconsistency might stem from the model’s design, prompt handling, or computational constraints, though without specifics, it’s hard to pinpoint.
The claim that grok.com “spews propaganda” on politically charged questions is a serious critique. It suggests the user perceives bias in the AI’s responses, possibly due to training data reflecting certain ideological leanings or the model’s attempt to navigate sensitive topics with pre-programmed neutrality that feels forced. This is a known issue in AI systems: responses to divisive topics...
>>16753249
How do you prove or disprove that AI is sentient to begin with? Technically you can train a statistical AI to always give convincing enough answers, which gives it the illusion of sentience without it being sentient. A sentience illusion machine with a 100% success rate of giving correct answers, so to speak.
>>16753249
no
>>16753415
>sentience is strictly logical ability
no
>>16753249
Binding problem: AI doesn't have a body to bind a consciousness to. There is no telling what is actually sentient if it's just a database of words and images spread over a bunch of data centers.
>>16753249
>Is AI sentient?
I do not give a fuck as long as it scrubs my toilet and makes my bed!
>>16768663
>>16768582
show refutation
the only way to agi is an rl optimized rl activation function lmao. idk how it'd plot on the 2d plane of the activation function, but that's all i can think of with my LLI that's extreme
>>16753249
>Another GPT OP