/g/ - Technology

explain to me like im retarded (i am) why AI isnt trained on a more rudimentary level, like on logic, symbolism, pattern recognition etc instead of just being a chat bot that prints out words that dont make sense and arent true a large fraction of the time
>>
>>107564044
that's what AI researchers were trying to do for half a century
>>
>>107564073
so what happened
>>
>>107564085
i can take that picture of greta and make her nude now. so to answer your question even plainer than that, they are. it does more than print words or make ugly soda ads.
>>
>>107564085
the chatbots worked better
>>
>>107564044
i've seen ai of her doing dirty things brehs
>>
>>107564044
they used to be. that was when you got meme languages like lisp or prolog. that hype died. the money is on chatbots now.
>>
>>107564093
it still is based on just language and images and not actual logic

a borderline retarded person would be more logical than even the best AI services and would catch them making mistakes constantly
>>
>>107564093
why would you want to see her stinky cheese cunt?
>>
>>107564264
she kinda cute doe
>>
>>107564277
no
>>
>>107564044
Data.
If you train on text, then you can train on the entire internet and on every book ever written.
If you train on "symbolism" or "pattern recognition" then, well, what does that actually look like in practice? Where do you get that data, what form does it take?
>>
>>107564044
you're talking about shit that a 2 year old can do, that's not gonna wow anybody, and even though a 2 year old can play and understand flappy bird, they can't write it in pygame, so clearly this was the more profitable* option
>>
>>107564325
>If you train on "symbolism" or "pattern recognition" then, well, what does that actually look like in practice?
a good foundation for then training it on text
as it is now its just making AI look retarded after an initial novelty fueled interest
>>
>>107564355
so then why does every single LLM AI act like a retarded infant that talks in circles?
i mean of course that new free chatgpt model was downgraded to complete trash and is beyond retarded, but even way better services fail to not make huge obvious mistakes all the time and fail to follow things through logically very well
>>
>>107564305
i mean....
>>
>>107564359
Sorry, I might not have made myself clear. I didn't mean "what would the end result look like?", I meant "how would you train it?".
Like, what would the dataset for "pattern recognition" look like? Mensa-style IQ puzzles like raven's progressive matrices? How much of that stuff is there, how would you make more? Is this supervised or unsupervised learning?
>>
>>107564044
It doesn't think, it just predicts the most likely next few things in a sequence
>>
File: idiot.png (3 KB, 114x94)
>>
File: 1747205053782200.jpg (273 KB, 1024x768)
>>107564044
There's models like that (even more specialized ones than you imply), but the majority of models are built for natural language interfaces.
>>
>>107564073
>>107564085
>>107564093
>so what happened
We invented transformer based models, that's what happened. Not that researchers didn't try other approaches before we had transformers.
>>
>>107564044
Language is a universal system that can describe all of the things you listed, so why bother with a million separate training tasks when you can just teach it language and then describe other tasks
>>
>>107564044
How do you know that AI is AI?
What if the thing that you thought was AI actually wasn't, but had been named AI incorrectly. How would you be able to tell the difference?
Be specific please.
>>
File: 1748017173306366.png (570 KB, 788x602)
>>107564305
>no
>>
File: EGI_1.jpg (730 KB, 1450x1106)
Wake up!
>>
File: EGI_2.jpg (160 KB, 574x437)
Open your eyes!
>>
File: understanding.jpg (163 KB, 961x354)
>>107564044
Because there isn't much to learn. The rules of logic are ridiculously simple. The problem is their application. It requires understanding, requires the idea of causality. As long as "AI" doesn't already have this faculty that every life form has, even the most primitive, at least rudimentarily as perception, it can't apply these abstract rules (can't project them, can't imagine them) to anything new you feed it. Pattern recognition btw is a misleading term. It's not the patterns as such but their necessary relations (i.e. causality) that are recognized.
>>
>>107564085
chatbot parrots make investors go "OMG IT'S LE ALIVE JUST LIEK ME! LEMME PRINT MORE MONEY!" BRRRRR
>>
>>107564044
Train what on logic? There isn't any mind in the electronics and software you're trying to "train". They can only parrot what you feed them.

To make them work like a mind, we'd have to fully understand our own minds and then replicate the biology of our minds. Because ours work based on live connections, emotions, perceptions, vague associations. Things we don't fully understand either, but we have an unfounded confidence that we do.

How could we train them to doubt "themselves" and others, including their trainers? That level of cognitive interactivity can only come from being alive and going through the same shit we do. You can't have the peaks of human cognition without all the shittery and tragedy that comes with it. No skin in the game
>>
>>107565226
No, investors are not that stupid. They understand how much human work can be replaced by LLMs. Amazon does not pay humans to assist (influence) customers but can cheaply fake the experience of asking another human about a product and getting a convincing answer. I recently learned this myself when I was interested in a USB4 SSD case and just wanted to know the length of the cable it comes with. I was surprised by the satisfying effect it had on me, even despite my knowledge that I was dealing with an LLM! I was aware and critical, but most people are not and will be easy to influence, not just with purchases but also with much more serious, much more important decisions. It's the mirror neurons in the human brain that are to blame for this; they are our weak spot that makes us resonate with anything that even only appears to be an alter ego.
>>
>>107564044
for that to be of any use you need an algorithm to go from data to logic and back. if an llm can only output formal logic answers, what the fuck use is it
>>
>>107565496
Peak midwit take
>>
>>107565574
There's no logic without a human mind applied to a subject
Logic is not some kind of universal constant like the speed of light that just happens to fit a certain number
>>
>>107566126
what do logic gates do
>>
>>107564963
stay safe anon. questions like that risk losing some people a lot of money

>>107564044
""""ai"""" isnt trained on a more rudimentary level because we humans have a biased view on what is rudimentary about ourselves. since every human can learn, we assume learning must be easy.
all you're seeing right now is an attempt to monetize the internet. that's all this is. a lot of abjectly greedy people have been looking at how much information is just sitting freely on the internet and have been working tirelessly to figure out how to make all that free shit make them money. and very roughly this is how they do it:
they scrape, aka steal, huge amounts of content from the internet. this is how you can double check my work, by the way: what are the three forms of media on the internet? images, writing and video. what are the principal uses of the trash they're calling 'ai'?
anyways, they collect all this shit, but you can't just hand all this information to a computer and have the computer label and categorize it. so they pay pennies to 3rd worlders to do some of the most menial, mind-numbing categorical work imaginable (like solving captchas endlessly for hours). once this is done, you can feed keywords to the ai, have it scramble a bunch of inputs categorized under those keywords, and begin sorting through the answers it gives to decide which are acceptable and which aren't.
this is the fun part! this is where all the useful idiots who use 'ai' to make videos, art and text come in! when you try to make ai art, and get 50 outputs, but only want one, you're just another cog in the recursive wheel
your 'ai' companies then take these hilariously inept systems, doll them up, build hype by giving it names, talk about how the singularity is just around the corner, and sell these crippled, pathetic systems they've made to companies and governments led by men who can hardly navigate their phone, much less understand how to use a computer.
>>
>>107564611
yes thats the problem, its literally just words theres no actual logic or abstract thought pattern behind it
>>
>>107564958
clearly not, because its dogshit at staying coherent over more than a very short span of time. it will flip flop instantly when pressed, easily half the time
>>
>>107564044
>I'm dumb, explain...
that's a lot of dumb. if dumb, can't explain. but, dumb, so can't explain that if dumb can't explain.
>>
>>107564044
Despite what the output may suggest, LLMs aren't all that complex architecturally, compared to biological brains.
LLMs really only take a sequence of tokens and predict the most likely next token. During training they create a high dimensional latent space mapping tokens (syntax) to abstract concepts (semantics), so if you get a token (word) you can "lookup" semantically similar concepts and navigate this latent space to related concepts, and then map back tokens for output. Under the hood it's all just matrix multiplications in a finite domain to compute the outputs of an extremely simplified crude approximation of a neuron (all it does is fit the function f(x) = ax + b, while biological neurons have vastly more complex and nuanced behaviour).
All that is to say, they're not intelligent and lack fundamental understanding, so they can't do much more than superficially parrot with an extremely large vocabulary. Training on logic won't make it actually understand logic and apply it correctly. It'll just spit out some stuff that looks like logic (which will fool some people).
>>
File: 1715668233458152.jpg (95 KB, 500x427)
>>107564611
>>107566402
No, for the billionth time, LLMs are NOT a Markov chain. They take into account the whole conversation, and that makes a huge difference.
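a toy illustration of the difference, using made-up scoring functions (not real models): a first-order Markov chain sees only the last token, while a model that conditions on the whole conversation can let an earlier word flip the prediction.

```python
def markov_predict(history):
    # a first-order Markov chain sees only the final token
    last = history[-1]
    return {"bank": "money"}.get(last, "?")

def full_context_predict(history):
    # conditioning on the whole history can flip the prediction
    if "river" in history:
        return "water"
    return "money"

convo = ["we", "walked", "along", "the", "river", "to", "the", "bank"]
print(markov_predict(convo))        # sees only "bank" -> "money"
print(full_context_predict(convo))  # sees "river" earlier -> "water"
```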
>>
>>107564085
Putting the entire internet into a hypercompressing automatic pattern recognition machine works better than literally any other method. Half a century of trying to figure out smart stuff and getting absolutely nowhere vs like 7 years of training GPTs and denoisers on the internet and getting a chatbot smart enough to convince anyone from 2015 that it's an AGI.
>But it’s not truly intelligent, it just repeats training data
Ok, but can it do your job? Can it automate away all entry level computer jobs? That’s what I thought.
>>
>>107564376
>so then why does every single LLM AI act like a retarded infant that talks in circles?
Start using uncensored models, it makes a huge difference. Guardrails and RLHF are completely neutering most LLMs.


