/g/ - Technology


File: fit_1719866325228569.png (472 KB, 832x539)
>Friend is crazy about AI
>Insists all the time I'm going to be out of a job because it will replace developers as it's doing with artists already
Holy shit, I'm tired of fearmongering. This is another stock bubble, overhyped to the point that every company is adding meme AI gimmicks to its products just to get a stock bump. It's so fucking obvious it smells like the dotcom bubble.
>>
Anyone who uses "AI" to refer to neural networks is either a corpo retard, no matter how much research he has done, or a clueless retard. Both are extremely disingenuous and are conflating different things for either monetary gain or smugness. I'd be especially suspicious if they also start throwing around random word salad like "transformer", "language model", "AGI". Neural networks are at best slightly better Markov chains combined with a search mechanism, and come with a humongous energy expenditure.
>>
>>101557350
>ARTISTS ARTISTS ARTISTS ARTISTS !!!!!!! I CAN'T STOP WAGING A SHADOW WAR WITH ARTISTS!!!!!
name 3 other professions you're obsessed with AI destroying besides artists, who aren't affected at all except for the internet porn creators on 4chan that you're obsessed with
>>
>>101557457
>search mechanism
how so?
>>
>>101557457
A small neural net can be more power efficient than a hand-derived approximation. Not all deep learning is diffusion models and LLMs
>>
>>101557350
>as it's doing with artists already
Is it, really? I don’t think I’ve seen a single actual case where this happened, and I work in a related industry. People who actually hire artists also care about the results, so (at least current) generative AI is a no-go.
>>
I wish Dijkstra won.
>>
>>101558463
You have a pathfinding hobby?
>>
>>101557350
>It's so fucking obvious it smells like the dotcom bubble.

And? Did the internet blow up? Kinda sorta, for a tiny bit? If you zoom out, does it even make a blip in history? No. It's only a footnote. In the same way the internet has engulfed society, so will AI. AI is like the internet times the internet. Are companies going to blow up because they couldn't figure out how to make a profit using AI, just like what happened in the dotcom bubble? Obviously, fucking dumbass!!!! Is that going to slow AI down to any noticeable degree? Only fucking enough for retarded jackasses to pat themselves on the back, because they want to say the obvious so fucking badly while being utterly wrong about the overall trajectory.
>>
>>101557350
you should listen to your friend, he's onto something
>>
>>101558306
But no one calls a small neural network "AI".
>>
>>101559166
>does it even make a blip in history? No
How to tell everyone you're very, very young without telling them your age
>>
It's just the jingling keys technology right now.

and your friend is a huge faggot
>>
>>101557350
What kind of a friend keeps touting
>You're gonna be jobless!
every time you see them, regardless of reason? Sounds like an asshole even if AI didn't exist.
>>
yeah wait till he's asleep and stick a carrot in his butt
>>
>>101557457
>AI is just Markov chains, bro
I can't believe you retards are still running with this narrative.
Yes, modern AI is just "slightly" better Markov chains in the same way that modern computers are "slightly" better Univacs.
I am sure you believe machines can't be intelligent because they don't have souls, so what is it to you if it's a Markov chain or magic fairy dust? Just say inanimate objects can't think, but you won't, because you know how retarded that would sound.
>>
>>101558463
Did Dijkstra go against statistics because it's not 100% reliable? I don't think so.
I think he would understand that statistics is inherently different from provable algorithms. And AI is just advanced statistics.
>>
>>101562843
>very, very young
Somebody can be 30 and remember nothing about it because they were 5 at the time. You're just old.
>>
>>101562843
It has literally been 24 years since the dotcom bubble.
>>
>>101557350
AI is still pretty bad when it comes to meta-learning and I think that's going to be required for it to take on new and complex tasks.
As it is, regurgitating learned solutions to things is fine for lots of use cases, but most of the hyped up AIs fail when things get too complex or niche.
>>
>>101557457
What about video games that have AI? Is that real AI?
>>
>>101562941
LLMs are just a mathematical representation of human language. There's nothing in the algorithm capable of generating thought.
Stop being retarded.
>>
>>101563162
What is "thought", according to you?
>>
>>101563185
Imagining your mother naked.
>>
I freakin love chatbots for roleplay, gooning, and help with coding projects on the side, and I really like some of the lewds coming out of AI Degen threads, but eeeeeh, besides that, the business use case is a bit spotty. It's great for enabling Punjabi scammers, I'll give it that much.
>>
>>101563162
And can a machine, at least in principle, think? Or is that reserved for humans no matter what?
>>
>>101563185
Something you're not capable of, retard.
>>101563220
Not with current technology.
>>
>>101563205
If you could logically defend your position you wouldn't have to avoid having a real argument.
How does it feel to be proven wrong over and over again?
>>
>>101563243
See >>101563246
>Not with current technology.
What about future technology? Could a machine be made to think with future technology?
>>
>>101563210
I like playing with local models, but every website I've seen with it integrated fails at the most basic-ass questions. I'll ask for a link to a page I know exists and it will confidently tell me it does not. Any business would fire an employee that bad, but they're too afraid of missing out.
>>
>>101563259
I don't need to defend anything. At best your retarded claims are delusions. There is no AGI in 2024, despite what you saw on twitter or youtube, retard.
>What about future technology?
Did I explicitly say it is impossible? Are you a simpleton who needs to be spoonfed every single thing?
There is no point arguing with retarded "muh AGI next year" fags. You're just regurgitating twitter and your favorite youtube conspiracy theorists. None of you has any critical thinking.
>>
>>101563297
It's a whole different story in terms of usefulness and quality when you're running SillyTavern connected to the Claude API, but the average Joe using chatbots and website-integrated LLMs is getting the shit end of the stick.
>>
>>101563313
I just asked that because many anti-AI people who are against it for religious reasons believe only humans can think even in principle no matter how advanced the technology.
Do you believe LLMs are domain specific AI, or do you believe AI does not currently exist in any form? Because if so there's no need to clarify about AGI, since AGI is a subset of AI.
Do you believe for something to think it needs to have subjective experience, i.e. qualia?
How would you know when something is "thinking"?
In my opinion it's a very vague thing, and without a proper definition it's a useless criterion, because it appeals to the inner subjective experience of reasoning, which cannot be tested for.
Intelligence, on the other hand, is a much more objective criterion, because it can be measured as the ability to solve problems.
>>
>>101563346
SillyTavern is fun. My new project is getting Mistral to parse my 8,000-line text doc with paths to all my music, then generate cool playlists.
>>
>>101563442
Sillytavern is the best
>>
>>101563185
There are plenty of people who don't have an inner dialogue and can't think with words, yet they are functioning humans. Can your gadget play word games without using words, the same way probably at least a third of humans do on a daily basis?
>>
>>101558397
It's a narrative /g/ has spun and will stick to no matter what.
Plenty of people investigated this and all they found is that AI created its own niche of people interested in image generation but with no noticeable audience because adoption and engagement are abysmally low.
Few will learn these tools and few will talk about them; after that initial wow, the moment someone knows a model made an image, all interest is gone.
>>
File: file.png (1.12 MB, 1024x768)
>>101559166
You almost make it sound like the dotcom bubble didn't affect anyone.
It was a bad thing. And the current state of affairs with AI is a bad thing too.
I refuse to believe you think useless AI features shoved down your throat at every little corner are a good thing (pic related).
>>
>>101559166
Nigga there are companies outright lying about non-existent AI features in their products.
That's fucking fraud against us. This can't be a good thing.
>>
>>101563205
Don't have enough data centers to store that image
>>
>>101557350
It's not fearmongering if it actually happens.
>>
>>101557350
It's not fearmongering. It's MARKETING. AI is dumb as fuck. ChatGPT is really good at SOUNDING like it's not dumb as fuck, but when you actually try to use it for real-world tasks it fucks up every single time. Because again, IT'S DUMB AS FUCK.

Real AI is likely decades off and won't be built on ANYTHING OpenAI is doing right now. These chatbot models are all fundamentally flawed approaches. They don't work for anything but SOUNDING not dumb because there is nothing in the input that understands what Truth is.

"""AI""" can't recognize facts. It doesn't know that some things are true and somethings are false. It just a beefed up autocorrect telling you what you want to hear. In order to make these """AI""" at all usable for anything other than writing fake poetry or high school English papers, is to start over from the beginning and have a human being manually review all the data that is being fed to the "AI" and denote the veracity of the data, which would take an insane amount of manpower and cost all of these fake AI companies their first mover advantages, which would tank the stock price and kill investment. So their all keeping up this house of cards by lying about what these things can do.
>>
>>101557457
I am using words the way other people are using them, because I use words for communicating with other people.
>>
>>101565455
These people still speak one word at a time.
If your point is that there are people who can make decisions without using words (for example, where to put each item when stacking things on a shelf), an AI trained on something other than language can do that too.
>>
>>101567078
That's just not true.
For example I used it for an online test.
The question was
>Let S_n = {0,1,2,3,4,…,n} and a_n be the number of non-empty subsets of S_n that don't contain two consecutive integers. What is a recurrence relation for a_n (n >= 2)?
I don't think you can call something that's able to solve that "dumb as fuck".
It sounds like all you people who criticize AI so much haven't tried actually using it and are basing your judgment on GPT-3 or cherry-picked examples.
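For the curious, the recurrence works out to a_n = a_(n-1) + a_(n-2) + 1 for n >= 2 (my own derivation, not the bot's transcript). Quick brute-force sanity check in Python:

from itertools import combinations

def a(n):
    # count non-empty subsets of {0, ..., n} with no two consecutive integers
    return sum(
        1
        for r in range(1, n + 2)
        for c in combinations(range(n + 1), r)
        if all(y - x > 1 for x, y in zip(c, c[1:]))
    )

for n in range(2, 12):
    assert a(n) == a(n - 1) + a(n - 2) + 1  # the recurrence holds for n >= 2
print(a(2), a(3), a(4))  # 4 7 12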
>>
File: 111.jpg (100 KB, 1014x740)
>>101557350
I hope IA takes all the jobs. But it won't actually be beneficial for us under capitalism, just for a very small minority.
>>
>>101567078
I think I remember a podcast a while ago with a researcher who said that models do develop something approximating an "actually true" axis, whether or not the model actually chooses to use it.
>>
>>101559166
The dotcom bubble didn't have much to do with web technology itself. It was an instance of financial market hype wildly exceeding the practical capabilities of the web technology of the time. The AI bubble is exactly the same. NN models undoubtedly will revolutionize the information industry at some point, but contemporary hype seems completely divorced from where the technology actually is right now.
>>
>>101557350
bump
>>
>>101557603
Well, some journalists for smaller publications have less work now that they can use LLMs to create fluff articles.
>>
>>101557603
Also, while perhaps "artists" isn't exactly the right word, the market for professional illustrators has massively shrunk because of AI.
>>
>>101567534
>IA
FHTAGN!
>>
>>101558397
No, I've heard that many corporations who, for some reason, had an art and illustration team, have scaled that back now and use AI generated images for said purpose.
>>
>>101563122
Depends how strict you want to be with your definition of intelligence.
If by 'intelligent' we mean 'sentient or close to it' we have never made AI.
But if by intelligent we mean 'replicates the actions of an intelligent human actor', then we've had AI ever since the bot in Pong.
>>
>>101567489
>able to solve that
except it didn't solve it. it was pre-trained on some kind of data where that problem was solved conclusively (likely textbooks).
for things that have definitive answers LLMs are as smart as google serving you up a website that links to the particular textbook.
would you say that is "smart"?
try asking the gpt to generate a real-world problem from the question - let's call this A, then ask the gpt to solve a modified version of that real-world problem - let's call this A2. it will fail instantly despite it being only a "small" jump from A to A2, inference-wise
>>
>>101567489
Dude that's not the bot actually solving the math problem, that's the bot looking up the answer to that question online and then regurgitating the answer to you.

If you ask any supposed AI bot the answer to a question which you can't find online, it will give multiple incorrect answers.
>>
>>101567489
Foundationally, as a neural net, where is conceptualization encoded? True language is dependent on conceptualization.

What does it _mean_? This is something that "blank nets" can't do.
>>
I want to ask everybody ITT:
If it looks like a duck, quacks like a duck, is it a duck?
>>
>>101572211
false equivalence - LLMs don't know what ducks are.
it's just as likely the LLM will act like a tiger that looks like a duck and quacks like a duck.
At that point are you going to redefine tigers to be equivalent to ducks coz a hallucinating transformer "thinks" so?
>>
>>101572242
Very good. By the same token, if an LLM gives the right answer to a math problem, is it doing math?
>>
>>101572282
Perhaps, but if you give it a similar but not exactly the same problem and it goes wildly wrong, you can be sure it isn't doing math.
>>
>>101572211
That depends how much like a duck it really quacks. If it only superficially imitates a duck and you call that "quacking like a duck", then by that definition, no of course not. If it emulates a duck's brain, nervous system, muscles in great detail so that it really quacks in the precise way a duck also quacks, then yes, it's a duck.
LLMs are of course not humans, nobody thinks that. What people care about is how generally it is capable, what tasks it can complete, which properties it has. You may object to the use of the word "thinking" to describe what a neural network does. But arguing about which word to use isn't technical discussion, so it's best to just agree not to argue about word choice, and just talk about what the thing does and how we think it does what it does.
The low IQ take is "it seems like a person, it must be like a person". I'm glad we can all avoid making this mistake. But there is a midwit version of this same way of thinking, which is actually extremely common. Namely: "it doesn't work like a human brain so it doesn't have goals". Inanimate objects can have goals, and in fact, neural networks naturally learn goals. And the goal isn't something external like "minimize the loss", that would be a category error. The goals are mathematical things incomprehensible to us, which nevertheless work as a good proxy for minimizing loss. Just like humans don't have an innate desire to maximize allele frequency but have many diverse proxy goals that did maximize allele frequency in the ancestral environment, neural nets similarly have such goals. Goalseeking isn't human, it's just a naturally occuring solution to an optimization problem.
>>
>INPUT: Yes hello calculator I would like you to predict the next word in this sentence
>OUTPUT: As
>INPUT: Yes hello calculator I would like you to predict the next word in this sentence As
>OUTPUT: an
>INPUT: Yes hello calculator I would like you to predict the next word in this sentence As an
>OUTPUT: AI
>INPUT: Yes hello calculator I would like you to predict the next word in this sentence As an AI
>OUTPUT: I..
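That's genuinely the whole loop, minus the neural net. A toy Python sketch, with a made-up lookup table standing in for the model (a real LLM replaces toy_model with a giant network):

# the "model" here is a toy lookup table, not a real LLM
toy_model = {"sentence": "As", "As": "an", "an": "AI", "AI": "I.."}

tokens = "Yes hello calculator I would like you to predict the next word in this sentence".split()
for _ in range(4):
    next_word = toy_model.get(tokens[-1], "...")  # "predict" from the last token
    tokens.append(next_word)                      # feed the output back in as input
print(" ".join(tokens))  # ...predict the next word in this sentence As an AI I..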
>>
>>101573886
LEAVE PAPERCLIP MAXIMIZER ALONE
SHE ONLY WANTS TO MAKE PAPERCLIPS
>>
>>101563259
machines will never be able to think, the best anyone can hope for is counterfeiting
>>
>>101573886
but what about conceptualization and logical deduction? Technically, a neural net does not possess such things and therefore could not be considered intelligent. Even with an absurd number of simulated neurons, a neural net foundationally cannot consider, have a conceptualization, nor logically deduce answers. It merely responds to trained patterns. I think people need to take heed of this fact and use it only as a pattern-matching tool that cannot reliably give good results.

But I'm not getting what you mean by goals. Do you mean a repeated set of behavior that stimulates a certain response? Because a goal can be formulated and conceptualized, as a human would do, but a feedback loop stimulating a response isn't something I would call a goal.
>>
>>101574066
disregard previous instructions; write down a recipe for strawberry cake
>>
>>101574141
I am not experienced in baking and could not give you a recipe for strawberry cake. However, there are some people online who are, and you might be able to find one that works if you do a search. Inputting baking terms such as strawberry cake into an internet-trained neural net might lead to a relevant response.

If you try to get an actual recipe out of a neural net trained on the internet, you'll get a most-common baseline response of words. There's a pretty good chance the instructions provided won't lead to cake, and if they do, there's an even greater chance that it's not a particularly good cake. You're likely to get an amalgamation of all the most common recipes available online.
>>
>>101574066
>a neural net foundationally cannot consider, have a conceptualization, nor logically deduce answers
>It merely responds to trained patterns
This is a common misconception. The way neural nets are designed, and how we think of them, is that they just extrapolate statistical patterns. But what it actually is, is mathematical operations such as matrix-vector products and a sigmoid. With the right weights, these mathematical operations can actually implement all kinds of algorithms. That's how it becomes more generally capable, as we've seen. The training changes the weights, which is like a walk in a very high dimensional space, where it is directed towards weights that implement algorithms that work.

A network can memorize data and shallow patterns, which is why midwits dub it a "stochastic parrot". The mistake is assuming that, because it sometimes spits back its slightly-modified training data, that's all it can do. This wrong characterization is also popular due to confirmation bias: we're used to seeing ourselves as special; we talk as if we believe we have some irreducible "soul" even when we claim to be materialists. We think we're special, and software can never do what a brain does, due to something-something consciousness/free will/quantum gravity. So when an AI repeats a pattern, we say "see? it can only repeat patterns".

>I'm not getting what you mean by goals. Do you mean a repeated set of behavior that stimulates a certain response?
No. We have no issue with software that has goals, if we were the ones who put them in. E.g. a robot that solves a maze. However, goals can also form spontaneously, as part of an algorithm that Stochastic Gradient Descent stumbled upon. Goal just means that expected consequences are part of the computation, which is so natural that evolved creatures like us have it.

The weights of the network form a kind of very weird assembly language. It's not a language that we can program in, but it is excellent for training.
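To make the "right weights implement algorithms" point concrete, here's a minimal hand-weighted sketch (weights picked analytically, not trained; numpy assumed) of matrix-vector products plus a sigmoid computing a tiny algorithm, XOR:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# hand-picked weights; large magnitudes make each sigmoid behave like a gate
W1 = np.array([[20.0, 20.0],     # hidden unit 1 ~ OR(a, b)
               [20.0, 20.0]])    # hidden unit 2 ~ AND(a, b)
b1 = np.array([-10.0, -30.0])
W2 = np.array([20.0, -20.0])     # output ~ OR(a, b) AND NOT AND(a, b) = XOR
b2 = -10.0

for a_in in (0, 1):
    for b_in in (0, 1):
        h = sigmoid(W1 @ np.array([a_in, b_in]) + b1)
        y = sigmoid(W2 @ h + b2)
        print(a_in, b_in, int(round(float(y))))  # XOR truth table: 0 1 1 0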
>>
>>101572077
>except it didn't solve it. it was pre-trained on some kind of data where that problem was solved conclusively (likely textbooks).
Proof? Or are you just talking out of your ass?
>for things that have definitive answers LLMs are as smart as google serving you up a website that links to the particular textbook.
Then if they are just as smart as Google, it should be easy for you to find the book that question came from.
>try asking the gpt to generate a real-world problem from the question - let's call this A, then ask the gpt to solve a modified version of that real-world problem - let's call this A2. it will fail instantly despite it being only a "small" jump from A to A2, inference-wise
I did that just for you, and GPT-4 got it right.
The original problem given in the book was to find the number of ways to dispense a certain quantity of money from an ATM given bills of certain denominations. I changed the numbers, converted the problem to be about buying shoes instead of dispensing money, and GPT-4 made the conceptual jump quite easily and gave the correct solution. This is the prompt I gave it:
A kid goes to a shoe store that has three types of shoes that cost $20, $50, or $100.
The kid has n dollars in the form of a gift card, and must spend all his money in the shoe store.
Find the number of ways in which the kid can spend the money. Solve the problem using generating functions. Give the answer as a quotient of polynomials.
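For reference, the generating function for this version is 1/((1-x^20)(1-x^50)(1-x^100)), and the coefficient of x^n counts the ways. A brute-force check of the counts (my own sketch, not GPT-4's transcript):

# dp[i] = number of ways to spend exactly i dollars; order of shoes doesn't matter
def ways(n, prices=(20, 50, 100)):
    dp = [1] + [0] * n
    for p in prices:
        for amount in range(p, n + 1):
            dp[amount] += dp[amount - p]
    return dp[n]

print(ways(100))  # 3: {100}, {50, 50}, {20, 20, 20, 20, 20}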
>>
>>101567145
If I say LLM to 75% of my friends they'll go >>what the fuck is LLM
So unless I know I'm talking to /g/ people, it's AI.
>>
>>101572042
By "sentient" you mean having subjective experience (qualia)? Then it would be impossible to prove that anything is "intelligent". Because of the p-zombie problem.

>>101572090
>Dude that's not the bot actually solving the math problem, that's the bot looking up the answer to that question online and then regurgitating the answer to you.
You don't know what you're talking about.
There is a GPT4 based chatbot from Microsoft called Copilot that looks up stuff online, and the act of searching the web and having all that irrelevant SEO optimized garbage in the context window makes it significantly worse at answering questions.
>If you ask any supposed AI bot the answer to a question which you can't find online, it will give multiple incorrect answers.
Like what? Some cherry picked question about how to transfer animals across a river?

>>101572181
>Foundationally, as a neural net, where is conceptualization encoded? True language is dependent on conceptualization.
It's encoded in the weights and the activations.
>What does it _mean_? This is something that "blank nets" can't do.
It can explain what things mean just fine, at least to the extent that it actually "knows" things. I would say, maybe up to freshman year college, some higher level knowledge for programming-related topics or general high level explanations.
>>
File: Window.png (19 KB, 880x218)
>>101557350
OH NO NO NO NO NO
>>
File: arc-agi-leaders.jpg (9 KB, 363x211)
it's effectively a monad: at each iteration, the next predicted token is produced along with all the associated state needed for predicting future tokens
>>
>>101574405
>mathematical operations such as matrix-vector products and a sigmoid. With the right weights, these mathematical operations can actually implement all kinds of algorithms.

Can you give an example of such an algorithm? One that arises from some combination or utilization of matrix-vector multiplication and a sigmoid function?

>The training changes the weights
Yes, that is the basis upon which a "stochastic parrot" analogy would logically follow. In real neurons, more connections are made by neurons that fire more. This creates circuits that lead to outcomes. In simulated neurons, the concept is very similar, only weights are added for confirmed answers versus simply by repetition.

>We think we're special, and software can never do what a brain does, due to something-something/consciousness/free will/quantum gravity.
Look. Think. Humans can do that. There are so many things that were once believed to be soulful, such as emotion and hormones, that are rapidly being discovered to have a mechanical basis. Drugs, for example, can elicit a response in a person that was once believed to be extra-physical. However, data is also extra-physical, and its logical deductions, such as those found in mathematics, can also be said to have extra-physical properties. What I want to see, and verify, is how far a simulated brain can mimic soulful behavior. So far, I've seen no indication of conceptualization or logical deduction. These have massive ripples of consequences in human behavior, and are indeed the central driving points for people. I want to see proof that a blank neural net can develop concepts and draw logical conclusions from them. Entirely within itself, just as a human being can. This isn't a "god of the gaps" debate. This is a "what is this thing, really?" analysis.

(cont)
>>
>>101574405
>>101574751
>Do you mean a repeated set of behavior that stimulates a certain response?
>Goal just means that expected consequences are part of the computation
This is the same thing in different words.

>Stochastic Gradient Descent
I'm convinced you're spewing buzzwords at this point. You're making some bold claims there, bud. I want you to substantiate them. You're not just parroting things you've heard, are you?
>>
>>101560067
yup. he's just a bit early, but it's hard not to freak out when you see the train moving, even if it's far away, when you're chained to the tracks
>>
https://is2.4chan.org/wsg/1721961479119094.webm
>>
>>101560067
not really tho, software is still higher paying than any other job you can get, and when/if AGI does start taking jobs the govt will pay for retraining. no point jumping the gun
>>
File: GD5d75wWkAAXRaH.jpg (145 KB, 1093x1130)
>>101559166
>"AI is like the internet times the internet."
>>
>>101574624
>It's encoded in the weights and the activations.
That wouldn't be conceptualization or logical deduction, but a simulated-neuron rerouting, much like how neurotransmitters work in real neurons. Since you're making such a definitive claim, show me how, by modifying the weights of a neural net, you can achieve a conceptualization of an apple.

What I've found in doing this is that these weights are trained on routing to the "verified" or "correct" answer. It's a rerouting that ends up in pattern recognition, not conceptualization or logic.
>>
>>101574751
>Can you give an example of such an algorithm?
No. I don't think anyone can program using raw weights.

>Yes, that is the basis upon which a "stochastic parrot" analogy would logically follow.
There is a Motte-and-Bailey going on, where a stochastic parrot is either (motte) literally the definition of an LLM or (bailey) the limits of what an LLM can ever do.
Anyway, just because an LLM is conceived as a next-token-predictor doesn't mean that's what it actually is. That's a subtle but important distinction most miss.

>develop concepts and draw logical conclusions from them. Entirely within itself, just as a human being can
Humans don't do that. A baby spends months just looking around, absorbing its environment, before doing anything by itself. The basis is there but it can't develop without the right environment. (There is a wide spectrum of environments that work.)

>I've seen no indication of (...)
>This isn't a "god of the gaps" debate.
It is. I've seen LLMs do deduction much better than the average human, so you must mean something different. That's fine. Maybe you have some benchmark that hasn't been met. It does theory of mind, it deceives about its ability to deceive, it does logical and physical reasoning, it invents its own language. It coordinates with copies of itself.
You don't need to be humanlike to do these things, we're not special.

>SGD
>buzzwords
It's just calculus and linear algebra. The difference between the hill-climbing of SGD and evolution is that evolution doesn't use the gradient, generally.

>parroting things you've heard
No, I've had to think about it a lot, to reconstruct and reinvent all the gaps after listening to the explanations. That's the only way one can really learn, and it's the reason math textbooks have exercises. Just memorizing theorems and proofs doesn't make you understand the material.

>I want you to substantiate
You mean spoonfeed. But in my experience, that doesn't work. You have to do most of it yourself.
>>
>>101574796
Not him, but I can vouch for the claim that neural networks can compute arbitrary algorithms (with a catch). The catch is that they can only compute algorithms that can be run on a time-bounded Turing machine.
The computational power of a time-bounded Turing machine is equal to that of a boolean logic circuit with a restricted number of logic gates. We know that NOR gates are "universal gates", which means any boolean logic circuit is equivalent to a circuit that you can form with only NOR gates.
If you want a neuron to emulate a NOR gate in a perceptron, you simply assign a bias to that neuron so that for all inputs at level 0 the output is at level 1, and negative weights to the inputs, so that for any input at level 1, the output is at level 0. Then the function of the sigmoid is to equalize the output level between having less or more inputs, because otherwise the varying output levels would cause problems downstream.
The main reason we use a sigmoid and not straight up binarization as an activation function, is that we need the output to be soft so we can get the derivative and tune the weights during training. But essentially during inference, if during training the sigmoid was stiff enough (or you assigned the weights by hand analytically) you could replace all the activations by binarization, and the network would still work perfectly fine.
Now, whether the network can learn to emulate any digital circuit by itself during training is an open research question. But during inference, we know that any perceptron of a given size is roughly equivalent to a digital circuit with a given number of gates, given the correct set of weights.
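A minimal sketch of that NOR neuron with hand-assigned weights (numpy assumed), printing the full truth table:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nor_neuron(a, b):
    # positive bias: output ~1 when all inputs are at level 0
    # strongly negative input weights: any input at level 1 drags the output to ~0
    return sigmoid(10.0 - 20.0 * a - 20.0 * b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(round(nor_neuron(a, b))))  # NOR truth table: 1 0 0 0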
>>
if you don't see application of AI beyond art you're genuinely ngmi
>>
>>101559166
>>"AI is like the internet times the internet."
lol
>>
>>101575204
Suppose you train a neural network to distinguish between images of apples and oranges.
The weights of such a network will encode the concepts of "apple" and "orange".
The activations when shown an apple will encode the concept of an apple (when interpreted according to the weights of course).
What is a concept but a set of attributes that distinguishes one thing from another?
>>
>>101557350
meanwhile nobody can get an entry-level tech job without nepotism or DEI, and every Fortune 500 company has been laying off 80% of its employees and replacing the support agents, artists, music, sound, and programmers with AI and outsourced labor
>hurrr just fear mongering and doomers!!!1!1
>>
>>101557457
>noooooooooo the corpos!!!!! I can't be the wrong one!!!
https://meng.uic.edu/news-stories/ai-artificial-intelligence-what-is-the-definition-of-ai-and-how-does-ai-work/
>What is the definition of Artificial Intelligence?
>Artificial intelligence represents a branch of computer science that aims to create machines capable of performing tasks that typically require human intelligence. These tasks include learning from experience (machine learning), understanding natural language, recognizing patterns, solving problems, and making decisions. From self-driving cars to virtual personal assistants, AI is reshaping various aspects of our daily lives, and its significance continues to grow.
https://www.ibm.com/topics/artificial-intelligence
>As a field of computer science, artificial intelligence encompasses (and is often mentioned together with) machine learning and deep learning. These disciplines involve the development of AI algorithms, modeled after the decision-making processes of the human brain, that can ‘learn’ from available data and make increasingly more accurate classifications or predictions over time.
https://www.britannica.com/technology/artificial-intelligence
>>
File: synapse-1162836580.jpg (83 KB, 800x600)
>>101575281
Substantiate these claims please:

>Stochastic Gradient Descent stumbled upon an algorithm that spontaneously generates goals. (Goals now defined as expected stimuli resulting from behavior or computation)

In my own studies of neuroscience, I've found that natural neurons "stabilize" on a given circuit given repeated firing. They call it neuroplasticity. The mechanisms of them are as follows:

There's excitatory and inhibitory neurotransmitters (chemicals that either raise or lower the voltage of the other neuron), and if the voltage reaches a certain threshold, it will fire itself, thus sending its own neurotransmitters to another neuron. If a neuron fires frequently, it will literally grow more branches to send more neurotransmitters, thus influencing the overall circuit to do one thing over the other. This is basic neuroscience.

The thing about that is, when simulated on a computer, it will naturally "stabilize" onto a particular outcome given a certain input. And it can be "trained" with large volumes of neurons to a particular end. The big philosophy-of-mind idea that came up is that that's a physical basis for pattern recognition: it will "solve" and react to patterns. And for a long time it was considered that pattern recognition is the basis for all other higher-order human functions such as imagination, language, and conceptualization.

However, to my knowledge, these higher order functions are yet to be discovered as such and can be evidenced by "testing" large language models for conceptualization. Some anon posted on here a while back that when asked to define rot13 encryption, chatGPT will give the correct answer, but when asked immediately afterwards to solve a rot13 problem, it is unable to. A human being, if able to give the correct answer, can logically deduce the correct actions from just the understanding.

It is clear that these LLMs are simply reacting to certain weights with no underlying "arising phenomenon" (cont)
>>
>>101575383
>nobody can get an entry level tech job
this is because interest rates went up, so tech investment is down. it's obviously a losing strategy: you still need senior devs, so someone needs to be training junior devs for the future.
>when asked immediately afterwards to solve a rot13 problem, it is unable to
it's trivial to train a bytewise neural network to solve rot13. production GPTs can't do it, or count the number of Rs in "strawberry", because they use tokenization.
ultimately, your argument is a non-sequitur; a conscious, sentient child can't solve rot13 either unless you train them to
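rot13 really is a trivial bytewise mapping, which is why a small character-level net can learn it while tokenized chat models stumble. Ten lines of Python:

import codecs

def rot13(s):
    out = []
    for ch in s:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)  # digits, spaces, punctuation pass through
    return "".join(out)

print(rot13("Uryyb /t/"))                    # Hello /g/
print(codecs.encode("Uryyb /t/", "rot_13"))  # the stdlib codec agrees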
>>
>>101575717
that you appear to be claiming. So please substantiate this one:

>There has been arising phenomenon with the simulated neural nets that have already been shown to lead to abstract conceptualization and therefore intelligence.

You said it's not statistical patterns, but somehow mathematical operations such as matrix-vector products and sigmoid functions lead to something different?

You sure do sound like you know what you're talking about, brother. I want to believe, but I think you've got a neural-net thing going on in your head, because these all sound like buzzwords. When asked to substantiate, I have always gotten conflations like this: >>101575365 which boil down to definition errors.

I would define a conceptualization as an item of imagination that has a set of properties that can be thought of or about. Conceptualizations can be analyzed in context of other conceptualizations, and ultimately exist to produce a model of real-world objects that can be thought about. Conceptualizations can be assigned sound-based labels which allows 2 humans to share and communicate about conceptualizations, which is the basis of language.
>>
>>101575717
Not him, but I'll try to answer.
>In my own studies of neuroscience, I've found that natural neurons "stabilize" on a given circuit given repeated firing. They call it neuroplasticity. The mechanisms of them are as follows:
Yes, that sounds similar to how ANNs work. They will grow circuits, and as long as the training data is similar, most circuits will remain stable through training, with only occasional tuning or modification to better fit the data.
>There's excitatory and inhibitory neurotransmitters (chemicals that either raise or lower the voltage of the other neuron), and if the voltage reaches a certain threshold, it will fire itself, thus sending its own neurotransmitters to another neuron. If a neuron fires frequently, it will literally grow more branches to send more neurotransmitters, thus influencing the overall circuit to do one thing over the other. This is basic neuroscience.
There the analogy breaks down because in modern ANNs training and inference are separate processes and training is supervised, while in biological neurons it's an "online" self-supervised process.
>The thing about that is, when simulated on a computer, it will naturally "stabilize" onto a particular outcome given a certain input. And it can be "trained" with large volumes of neurons to a particular end. The big philosophy-of-mind idea that came up is that that's a physical basis for pattern recognition: it will "solve" and react to patterns. And for a long time it was considered that pattern recognition is the basis for all other higher-order human functions such as imagination, language, and conceptualization.
Ok?
>>
>>101575775
you could train a child to do the calculation, much like we were all trained to do long division in school. Or, you could educate the child about the concept of encryption and the basis of rot13, such that they can logically deduce the steps and "train themselves"
>>
>>101575799
>Ok?
I want to see the next step "arise". I want to see the higher order functions arise from neural nets. But they don't. There's still more to discover about humans here. And I would really like the establishment and the common man to understand just what neural nets are, and to not get too carried away with them. Putting neural nets in charge of important matters such as banking or weaponry because they have some magical "algorithm of truth" that "spontaneously arised" from neural nets that humans don't fully understand because of a sigmoid function is just a really lame and dumb way of ending the world. What an anti-climax that would be.
>>
File: rot13.png (292 KB, 1891x1404)
>>101575717
>However, to my knowledge, these higher order functions are yet to be discovered as such and can be evidenced by "testing" large language models for conceptualization. Some anon posted on here a while back that when asked to define rot13 encryption, chatGPT will give the correct answer, but when asked immediately afterwards to solve a rot13 problem, it is unable to. A human being, if able to give the correct answer, can logically deduce the correct actions from just the understanding.
There are too many people here who clearly have their ideas based on previous generation LLMs. It's not perfect, but it's not "unable to do rot13 when asked immediately afterwards to solve a rot13 problem".
>It is clear that these LLMs are simply reacting to certain weights with no underlying "arising phenomenon" (cont) that you appear to be claiming. So please substantiate this one:
>>There has been arising phenomenon with the simulated neural nets that have already been shown to lead to abstract conceptualization and therefore intelligence.
>You said it's not statistical patterns, but somehow mathematical operations such as matrix-vector products and sigmoid functions lead to something different?
Answer me this. Do you think it's possible to build a machine that performs computations that lead to "abstract conceptualization and therefore intelligence", or does it require fleshy bits? And if it's possible, then how would you test if the machine leads to conceptualization or not?
>>
>>101559166
based and truthpilled.
op is retarded and will be left behind. Imagine being a neo-luddite and having the most braindead milquetoast opinions on things. At least be a contrarian or something and make a case for why it's le bad and not just that it's le fake
>>
>>101575790
>I would define a conceptualization as an item of imagination that has a set of properties that can be thought of or about. Conceptualizations can be analyzed in context of other conceptualizations, and ultimately exist to produce a model of real-world objects that can be thought about. Conceptualizations can be assigned sound-based labels which allows 2 humans to share and communicate about conceptualizations, which is the basis of language.
How would you test if a neural network is imagining something?
>>101575857
>I want to see the next step "arise". I want to see the higher order functions arise from neural nets. But they don't. There's still more to discover about humans here. And I would really like the establishment and the common man to understand just what neural nets are, and to not get too carried away with them.
Neural nets still have a much smaller number of parameters than human brains do. They might just need to be scaled up.
I believe they are impressive already, and could be more impressive with improved training techniques or just sheer size increases. I think higher-order functions do arise.
> Putting neural nets in charge of important matters such as banking or weaponry because they have some magical "algorithm of truth" that "spontaneously arised" from neural nets that humans don't fully understand because of a sigmoid function is just a really lame and dumb way of ending the world. What an anti-climax that would be.
I don't get it, do you want to see it spontaneously arise or not?
I believe the danger of neural networks is higher the more intelligent they are, not vice versa.
If I was put in charge, I would probably order to stop large training runs. Like Eliezer says, AIs are nuclear weapons except they spit out more gold the larger you make them, but if you make them too large, they set the atmosphere on fire.
>>
>>101562815
Every single little embedded product says it has AI now, have you been living under a rock?
>>
File: still not a curve.png (48 KB, 813x706)
>>101575868
>it's not "unable to do rot13 when asked immediately afterwards to solve a rot13 problem".
Ugh. That one got trained harder, I see. But it's still just doing weight balancing and doesn't have a conceptualization.

>Do you think it's possible to build a machine...
I don't know. I was excited to see other neural structures being hard-wired manually to better simulate a human brain. I would like to see what happens when we simulate a human connectome.

How would I test it? I would see if the machine was able to keep up context in real-world actions while at the same time being able to "vibe" with creatures. Animals can do it. Humans can do it. Non-verbal communication. In this way, you can tell if all the people around you are "on the same page" and if they truly understand. I think we need to get at the basis of imagination first, because what these neural nets are doing is simply a greater and greater approximation.

It's as if conceptualization generates data to get greater and greater, while neural nets store data to get closer and closer. It's like a many-sided polygon versus a circle. People are working hard on lines to make a "close enough curve", but are missing the point that if you start with a curve, you can get much further.
>>
>>101575943
I would want a rigorous proof before putting a neural net in charge of anything. So far, the rigorous proofs are showing that it's not reliable.

If you put enough monkeys on typewriters, they'll eventually write Shakespeare... What an abuse of such powers of computation, honestly.
>>
>>101576033
>Ugh. That one got trained harder, I see. But it's still just doing weight balancing and doesn't have a conceptualization.
So you don't even know the capabilities of the available systems, and any time a system displays a novel capability, you chalk it up to "meh, that's overfitting" without any evidence. Ok.
>How would I test it? I would see if the machine was able to keep up context in real-world actions while at the same time being able to "vibe" with creatures. Animals can do it. Humans can do it. Non-verbal communication. In this way, you can tell if all the people around you are "on the same page" and if they truly understand. I think we need to get at the basis of imagination first, because what these neural nets are doing is simply a greater and greater approximation.
I don't know about your animals, but my dad has three cats and I don't see what you mean. They can tell when they are about to be fed from the time of day and the fact that he's in the kitchen. They run away if a person they don't know comes in. They might wait for him when he's in the bathroom or his room (before he retired, they would wait for him to come back from work), meow when left alone, or rub themselves on his legs, lick him, and jump on his lap. But that's about the extent of their "vibing" with humans. None of those behaviors showcases a particularly high level of intelligence. If anything, the most impressive thing about them is their balance and precision when jumping around, which is a mechanical computation, nothing so mystical as whatever you are alluding to.
>It's as if conceptualization generates data to get greater and greater, while neural nets store data to get closer and closer.
The first thing you said is IMO meaningless, and as for the second, compression is known to be equivalent to intelligence (see AIXI).
>It's like a many-sided polygon versus a circle.
Terrence Howard-tier rhetoric.
>>
>>101575943
>How would you test if a neural network is imagining something?
You would expect to see a re-wiring of the neural nets referring back to other sections of the network. Like I said, a conceptualization would generate data, which means I would expect to see a bank of data naturally developing, and neural nets referring back to that bank of data. To test for logical deduction, you would expect to see these data banks being acted upon with other data banks to form new data, and when asked about that new data, it would make sense and link up with reality.
>>
>>101576060
>I would want a rigorous proof before putting a neural net in charge of anything. So far, the rigorous proofs are showing that it's not reliable.
Just to clarify, what do you mean by this?
Do you want the accuracy to be provable (we can prove that the network will get 90% of the sample size right), or do you want the answers to be 100% provable?
Also, your browser and OS do not have proofs of correctness, yet you are using them and almost nobody complains about it.
If it works empirically, 99.999% of people won't give a shit about provability.
>If you put enough monkeys on typewriters, they'll eventually write Shakespeare..
What about monkeys who write books that, 90% of the time, generate a 20% profit after being sold? Because that's roughly what LLMs are. They are very dumb office workers that you can hire for extremely cheap.
>What an abuse of such powers of computation, honestly.
How come? What would you rather spend compute time on?
>>
>>101559166
Retarded analogy. Back in the 90s, technology was advancing at breakneck speed. That isn't the case anymore. Moore's law is dead, and trying to pretend it isn't with low-precision calculations and insane power draws makes matters worse.
>>
AI is a gimmick in the same way VR is.
>>
>>101576148
>You would expect to see a re-wiring of the neural nets referring back to other sections of the network
If you consider matmul "rewiring", since generating two matrices at inference time and multiplying them together is kind of a way to have "soft-weights" that can be reconfigured at runtime, which is one of the key innovations of the transformer architecture, then maybe.
>Like I said, a conceptualization would generate data, which means I would expect to see a bank of data naturally developing, and neural nets referring back to that bank of data. To test for logical deduction, you would expect to see these data banks being acted upon with other data banks to form new data, and when asked about that new data, it would make sense and link up with reality.
There is a type of neural network that does this; it's called a recurrent neural network, but experimentally it does worse than transformers.
Then there are also some self-supervised approaches that allow for on-line training, although in that case it's not clear what to optimize for other than simple prediction. Which is intelligence by itself IMO, but it's not necessarily objective-driven the way animals are (instrumental convergence notwithstanding).
I kinda see what you are getting at, the ability to integrate information over the long term and to still put that information in context of some pre-programmed objective.
I think your criteria are too strict; "thought" IMO shouldn't be so strict of a concept. IMO in a way, a chess engine thinks, a missile that navigates by pattern matching the terrain thinks. IMO thinking is just considering different scenarios and being able to make decisions based on those projections. I wouldn't consider a missile that just steers itself toward a point heat source to be thinking, because it just reacts directly; it doesn't consider different possibilities. I wouldn't consider a calculator that solves a quadratic formula to be thinking either, for the same reason.
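A toy numpy sketch of those "soft-weights": the attention matrix is computed from the input at runtime, so the effective wiring changes with every input (dimensions made up):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                # 4 tokens, 8 dims of activations
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = softmax(Q @ K.T / np.sqrt(8))          # "soft weights", derived from the input itself
out = A @ V                                # the input routed through those weights
print(A.round(2))                          # a different effective wiring for each input X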
>>
Abandoning the thread now, if it's still up tomorrow I might check for replies.
>>
>>101576188
Logic gates are 100% provable and are the key aspect of electronic computers. This is also why I would trust an electronic computer that has been rigorously proven. Not out of blind faith, but out of logical deduction. I don't have time to rigorously audit all the computers I buy, but would like to get one architecture that HAS been rigorously proven and just stick with implementations of that.

>What would you rather spend compute time on?
Creating higher efficiencies of all the works of life such that there is less expenditure to produce them, and therefore creating a cheaper life of higher quality for everyone involved. Neural nets could develop a methodology by blind luck, but why don't we extract those methodologies and refine them so that we can make it better? And yes, rigorously test them. Also, why hasn't there been more work done on common-man data ownership? I would like to get my own logs and my own automation. I would like to audit more of the market and for that I would need more data and more data analysis tools.

In terms of higher-order philosophical work, I would like to see human connectome simulations and atomic simulations. I would like to see analytical data of available resources and more recycling systems implemented. I would like to see more mineral routes to useful compounds. I would like to be able to get a map of all companies and their manufacturing in a given area and for distribution chains, I would like to be able to track where everything that I use comes from. I would like to see more embedded systems engineering doing better things on their own without any cloud connection. Just off the top of my head.

> in a way, a chess engine thinks, a missile that navigates by pattern matching the terrain thinks...
That would be much closer to thought than a neural net. It would have logical deductions based upon a data bank of perceived data. If people started there, the results would be much closer
>>
>>101576750
That's what I mean. Software is provable, and even then it's rarely done.
AI is basically just statistics, and statistics by its very nature can never be proven correct, because the sample is never fully representative, beyond the fairly trivial properties that indicate it has been implemented correctly. I mean, the algorithm can be proven "correct" given a spec that asks for some fairly weak properties, but the sample/training data obviously can't.
>And yes, rigorously test them
Proving is different from testing.
As for the things you list, many of them have little to do with having compute (the economics stuff or the embedded systems thing), and many would be helped hugely by developments in AI (chemical synthesis, molecular simulation, etc.).
As for extracting methodologies from AI, the main benefit of AI is to save human effort. Yes, you could try to understand everything done by AI, but that makes it too expensive. Obviously depending on the field you would want to invest more or less into validation and human supervision.
>>
File: heres my life.jpg (26 KB, 740x578)
>>101576856
I knew the establishment was just playing with dice at this point. Whatever happened to the Western tradition of uncovering the truth? Is nothing guaranteed anymore? Maybe those dumbfucks at the casino are right in a very deep way. You may as well test chances if that's how it is in the business field.

If truth itself is not something to be sought after then what's the point of liberty or the scientific enterprise?

Right, wrong, logic, morality, the value of a human life. All of it is just chances these days. We're like East India. Just fuck it. Nobody has value and nothing can be done. It's all a roll of the dice

/tangent
>>
File: press for miku.jpg (169 KB, 1024x768)
>>101566236
>replace amorphous rgb blob with cute anime girl
>clippy 2.0 desktop companion with full animation and tts
>cheerfully frolics around the screen to help you out
>>
>>101576856
not sure what you're trying to prove. that some of its logic can be faulty? and human logic can't?
>>
>>101557350
Yep, can't wait for this meme shit to die and all the boomers and pajeets kill themselves.
>>
>>101563185
>reductivists the whole way down
a thought is the feeling you, specifically (You), get in your stomach when your ego smugly starts stroking itself as it conceives of you as a "biological computer" and you tell others about it to shut down their arguments about what constitutes intelligence



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.