explain to me like im retarded (i am) why AI isnt trained on a more rudimentary level, like on logic, symbolism, pattern recognition etc instead of just being a chat bot that prints out words that dont make sense and arent true a large fraction of the time
>>107564044
that's what AI researchers were trying to do for half a century
>>107564073
so what happened
>>107564085
i can take that picture of greta and make her nude now. so to answer your question even plainer than that, they are. it does more than print words or make ugly soda ads.
>>107564085
the chatbots worked better
>>107564044
i've seen ai of her doing dirty things brehs
>>107564044
they used to be. that was when you got meme languages like lisp or prolog. that hype died. the money is on chatbots now.
>>107564093
it still is based on just language and images and not actual logic. a borderline retarded person can be more logical than even the best AI services and catch it making mistakes constantly
>>107564093
why would you want to see her stinky cheese cunt?
>>107564264
she kinda cute doe
>>107564277
no
>>107564044
Data.
If you train on text, then you can train on the entire internet and on every book ever written.
If you train on "symbolism" or "pattern recognition" then, well, what does that actually look like in practice? Where do you get that data, what form does it take?
>>107564044
you're talking about shit that a 2 year old can do, that's not gonna wow anybody, and even though a 2 year old can play and understand flappy bird, they can't write it in pygame, so clearly this was the more profitable* option
>>107564325
>If you train on "symbolism" or "pattern recognition" then, well, what does that actually look like in practice?
a good foundation for then training it on text
as it is now its just making AI look retarded after an initial novelty fueled interest
>>107564355
so then why does every single LLM AI act like a retarded infant that talks in circles? i mean of course that new free chatgpt model was downgraded to complete trash and is beyond retarded, but even way better services still make huge obvious mistakes all the time and fail to logically follow through with things very well
>>107564305
i mean....
>>107564359
Sorry, I might not have made myself clear. I didn't mean "what would the end result look like?", I meant "how would you train it?". Like, what would the dataset for "pattern recognition" look like? Mensa-style IQ puzzles like Raven's progressive matrices? How much of that stuff is there, how would you make more? Is this supervised or unsupervised learning?
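To make the question concrete, here is a rough sketch (python, toy example, every rule and name in it made up purely for illustration) of what one kind of supervised "pattern recognition" dataset could look like: procedurally generated sequence-completion puzzles with the answer and the generating rule as labels. You can generate as many of these as you want, which is one answer to "how would you make more?", but it is nowhere near the scale and variety of web text.

# toy sketch of a supervised "pattern recognition" dataset:
# synthetic sequence-completion puzzles generated by simple rules.
import random

def make_puzzle(rng):
    rule = rng.choice(["arithmetic", "geometric", "alternating"])
    if rule == "arithmetic":
        start, step = rng.randint(1, 9), rng.randint(1, 5)
        seq = [start + i * step for i in range(4)]
        answer = start + 4 * step
    elif rule == "geometric":
        start, ratio = rng.randint(1, 5), rng.randint(2, 3)
        seq = [start * ratio ** i for i in range(4)]
        answer = start * ratio ** 4
    else:  # alternating: +a, -b, +a, -b, ...
        a, b, x = rng.randint(2, 6), rng.randint(1, 3), rng.randint(1, 9)
        seq = [x, x + a, x + a - b, x + 2 * a - b]
        answer = x + 2 * a - 2 * b
    return {"sequence": seq, "label": answer, "rule": rule}

rng = random.Random(0)
dataset = [make_puzzle(rng) for _ in range(10000)]
print(dataset[0])  # one puzzle, e.g. a sequence like [3, 7, 11, 15] with label 19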
>>107564044
It doesn't think, it just predicts the most likely next token in a sequence
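A deliberately dumbed-down sketch of that "predict the most likely next thing" idea (python; real LLMs learn these probabilities with a neural network over long contexts, this toy just counts which word follows which in a tiny made-up corpus):

# minimal sketch of next-token prediction using bigram counts.
# real LLMs learn these probabilities instead of counting them,
# and condition on far more than the single previous word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # return the most frequent word seen after `word` in the corpus
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice, vs 'mat'/'fish' once each)
print(predict_next("cat"))  # 'sat' or 'ate' (a tie in this toy corpus)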
>>107564044
There's models like that (even more specialized ones than you imply), but the majority of models are built for natural language interfaces.
>>107564073
>>107564085
>>107564093
>so what happened
We invented transformer-based models, that's what happened. not that researchers didn't try either way before we had transformers.
>>107564044
Language is a universal system that can describe all of the things you listed, so why bother with a million separate training tasks when you can just teach it language and then describe other tasks
>>107564044
How do you know that AI is AI? What if the thing that you thought was AI actually wasn't, but had been named AI incorrectly? How would you be able to tell the difference? Be specific please.
>>107564305
>no
Wake up!
Open your eyes!
>>107564044
Because there isn't much to learn. The rules of logic are ridiculously simple. The problem is their application. It requires understanding, requires the idea of causality. As long as "AI" doesn't already have this faculty that every life form has, even the most primitive, at least in rudimentary form as perception, it can't apply these abstract rules (can't project them, can't imagine them) to anything new you feed it. Pattern recognition btw is a misleading term. It's not the patterns as such but their necessary relations (i.e. causality) that are recognized.
>>107564085
chatbot parrots make investors go "OMG IT'S LE ALIVE JUST LIEK ME! LEMME PRINT MORE MONEY!" BRRRRR
>>107564044
Train what on logic? There isn't any mind in the electronics and software you're trying to "train". They can only parrot what you feed them.
To make them work like a mind, we'd have to fully understand our own minds and then replicate the biology of our minds. Because ours work based on live connections, emotions, perceptions, vague associations. Things we don't fully understand either, but we have an unfounded confidence that we do.
How could we train them to doubt "themselves" and others, including their trainers? That level of cognitive interactivity can only come from being alive and going through the same shit we do. You can't have the peaks of human cognition without all the shittery and tragedy that comes with it. No skin in the game
>>107565226
No, investors are not this stupid. They understand how much human work can be replaced by LLMs. Amazon does not pay humans to assist (influence) customers but can cheaply fake the experience of asking another human about a product and getting a convincing answer. I recently learned this myself when I was interested in a USB4 SSD case and just wanted to know the length of the cable it comes with. I was surprised at the satisfying effect it had on me, even despite my knowledge that I was dealing with an LLM! I was aware and critical, but most people are not and will be easy to influence, not just with purchases but also with much more serious, much more important decisions. It's the mirror neurons in the human brain that are to blame for this; they are our weak spot that makes us resonate with anything that even only appears to be an alter ego.
>>107564044
for that to be of any use you need an algorithm to go from data to logic and back. if an llm can only output formal logic answers, what the fuck use is it
>>107565496
Peak midwit take
>>107565574
There's no logic without a human mind applied to a subject
Logic is not some kind of universal constant like the speed of light that just happens to fit a certain number
>>107566126
what do logic gates do
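For what it's worth, the point behind that retort can be shown mechanically. A minimal sketch (python, purely illustrative): NAND is just a tiny arithmetic lookup, and the other gates fall out of composing it, with no mind involved anywhere.

# logic gates as plain arithmetic: NAND alone is enough to build the rest.
def NAND(a, b): return 1 - (a & b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))

# truth table for a derived gate
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", OR(a, b))  # 0 0 -> 0, otherwise 1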
>>107564963
stay safe anon. questions like that risk losing some people a lot of money
>>107564044
""""ai"""" isnt trained on a more rudimentary level because we humans have a biased view on what is rudimentary about ourselves. since every human can learn, we assume learning must be easy.
all you're seeing right now is an attempt to monetize the internet. that's all this is. a lot of abjectly greedy people have been looking at how much information is just sitting freely on the internet and have been working tirelessly to figure out how to make all that free shit make them money. and very roughly this is how they do it:
they scrape; aka steal, huge amounts of content from the internet. this is how you can double check my work, by the way; what are the three forms of media on the internet? images, writing and video. what are the principal uses of the trash they're calling 'ai'?
anyways, they collect all this shit, but you can't just hand all this information to a computer and have the computer label and categorize it. so they pay pennies to 3rd worlders to do some of the most menial, mind-numbing categorical work imaginable (like solving captchas endlessly for hours). once this is done, you can feed keywords to the ai, have it scramble a bunch of inputs categorized under those keywords, and begin sorting through the answers it gives to decide which are acceptable and which aren't.
this is the fun part! this is where all the useful idiots who use 'ai' to make videos, art and text come in! when you try to make ai art, and get 50 outputs, but only want one, you're just another cog in the recursive wheel
your 'ai' companies then take these hilariously inept systems, doll them up, build hype by giving it names, talk about how the singularity is just around the corner, and sell these crippled, pathetic systems they've made to companies and governments led by men who can hardly navigate their phone, much less understand how to use a computer.
>>107564611
yes thats the problem, its literally just words, theres no actual logic or abstract thought pattern behind it
>>107564958
clearly not, because its dogshit at being coherent for more than a very short span of time. it will flip flop instantly when pressed, easily half the time
>>107564044
>I'm dumb, explain...
that's a lot of dumb. if dumb, can't explain. but, dumb, so can't explain that if dumb can't explain.
>>107564044
Despite what the output may suggest, LLMs aren't all that complex architecturally, compared to biological brains.
LLMs really only take a sequence of tokens and predict the most likely next token. During training they create a high dimensional latent space mapping tokens (syntax) to abstract concepts (semantics), so if you get a token (word) you can "look up" semantically similar concepts, navigate this latent space to related concepts, and then map back to tokens for output. Under the hood it's all just matrix multiplications in a finite domain, computing the outputs of an extremely simplified, crude approximation of a neuron (all it does is compute a linear function f(x) = Wx + b fed through a simple nonlinearity, while biological neurons have vastly more complex and nuanced behaviour).
All that is to say, they're not intelligent and lack fundamental understanding, so they can't do much more than superficially parrot with an extremely large vocabulary. Training on logic won't make it actually understand logic and apply it correctly. It'll just spit out some stuff that looks like logic (which will fool some people).
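A bare-bones sketch of that "tokens -> vectors -> matrix multiplies -> next-token scores" pipeline (python with numpy; the weights here are random placeholders, and a real LLM learns them from data and stacks many attention and MLP layers instead of this single averaged lookup and linear map):

# tokens -> embeddings (latent space) -> linear map -> scores over the vocabulary.
import numpy as np

vocab = ["the", "cat", "sat", "mat", "fish"]
d = 8                                    # embedding dimension (tiny for illustration)
rng = np.random.default_rng(0)

E = rng.normal(size=(len(vocab), d))     # token embeddings (syntax -> latent space)
W = rng.normal(size=(d, len(vocab)))     # output projection (latent space -> vocab scores)
b = np.zeros(len(vocab))

def next_token_distribution(tokens):
    ids = [vocab.index(t) for t in tokens]
    h = E[ids].mean(axis=0)              # crude stand-in for attention: just average the context
    logits = h @ W + b                   # the "f(x) = Wx + b" part
    p = np.exp(logits - logits.max())
    return p / p.sum()                   # softmax -> probability of each possible next token

probs = next_token_distribution(["the", "cat"])
print(dict(zip(vocab, probs.round(3))))  # garbage here, since nothing was trained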
>>107564611
>>107566402
No, for the billionth time, LLMs are NOT a Markov chain. They take the whole conversation into account, and that makes a huge difference.
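The difference is easy to see with a toy contrast (python, made-up two-sentence corpus): a first-order Markov chain keys its prediction only on the last word, so two prompts that end the same way get identical predictions, while keying on the whole prefix keeps them apart, which is roughly what attending over the full context buys you.

# prediction keyed on the last word only (order-1 Markov) vs. on the whole prefix.
from collections import Counter, defaultdict

sentences = [
    "the dog chased the ball",
    "the cat chased the mouse",
]

last_word = defaultdict(Counter)    # next-word counts given the previous word
full_prefix = defaultdict(Counter)  # next-word counts given the entire prefix

for s in sentences:
    words = s.split()
    for i in range(1, len(words)):
        last_word[words[i - 1]][words[i]] += 1
        full_prefix[tuple(words[:i])][words[i]] += 1

# both prompts end in "chased the", so the order-1 model can't tell them apart:
print(last_word["the"])                              # dog, ball, cat, mouse all tied
# keying on the full prefix keeps the two contexts separate:
print(full_prefix[("the", "dog", "chased", "the")])  # only 'ball'
print(full_prefix[("the", "cat", "chased", "the")])  # only 'mouse'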
>>107564085
Putting the entire internet into a hypercompressing automatic pattern recognition machine works better than literally any other method. Half a century of trying to figure out smart stuff and getting absolutely nowhere vs like 7 years of training GPTs and denoisers on the internet and us getting a chatbot smart enough to convince anyone from 2015 that it’s an AGI.
>But it’s not truly intelligent, it just repeats training data
Ok, but can it do your job? Can it automate away all entry level computer jobs? That’s what I thought.
>>107564376
>so then why does every single LLM AI act like a retarded infant that talks in circles?
Start using uncensored models, it makes a huge difference. Guardrails and RLHF are completely neutering most LLMs.