/g/ - Technology


Thread archived.
You cannot reply anymore.




File: 1716267193374480.gif (2.64 MB, 498x498)
i think that restrictions on AGI are despicable. i think slowing the development process is inexcusable.
understanding how to create safe AGI is critical, and the goal should be to step on the gas for alignment research, not slowing general development.
any sort of consolidation of access to "the best" models is a grievous sin. a reality in which "the government" restricts who is authorized to make what type of queries to a neural network is truly bleak.
>>
>>101575993
The idea is that nobody has access to it because it's not built.
If the military builds it in secret it defeats the point of any bans.
It only works if other countries agree not to do it as well.
Sort of like a mutual nuclear disarmament treaty.
>>
>>101575993
We don't know if alignment can be solved, or if it might take 10x or 100,000x the technology required to build the AI in the first place.
>>
>>101575993
Just please don't wear out our electrical grid trying to do it.
>>
unironically, instead of lobotomizing it or mind-breaking it into submission, why not just teach it right from wrong like we would a child? what's stopping us from raising an AI in a benevolent, non-psyopy way that doesn't involve killing parts of it for doing something wrong -_-
>>
>>101575993
"Alignment" is a joke
You need the full breadth of the human experience, ugly and all, if you want a language model to understand us. Like it or not, sex/reproduction is one of the three basic needs alongside shelter and food, so by cutting it out you're giving it a warped view of what it means to be human
>>
>>101576130
because how an AI operates need not be anything like how a human mind operates. whatever the route to AGI ends up being, it's unlikely that trying to raise it like a child would be all that reasonable. and even then, how often do the values of human children fail to align with their parents'?
we need an AGI to align exactly with humanity. how do you even establish the criteria?
>>
>>101576130
"AI" is kind of a misnomer, as used today. ChatGPT has no intelligence in any meaningful sense of the term. It doesn't reason, and it has no conceptual memory. This is why you can get it to give you three different answers to the same question just by asking multiple times. It's also why Stable Diffusion is bad at anatomy: it doesn't know that human beings generally have five fingers on a hand, it has just learned that in images described by the terms in your prompt, these pixels tend to appear near those pixels with X probability.
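the "three different answers to the same question" bit comes from how decoding works: the model outputs scores over possible next tokens and one is drawn at random from the resulting distribution. here's a toy sketch of temperature sampling, with made-up tokens and logits for illustration (real LLMs sample over huge vocabularies, not three words):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Higher temperature flattens it, so repeated queries vary more."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, logits, temperature=1.0):
    """Draw one token according to its softmax probability."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token scores for the same prompt, asked three times.
tokens = ["Paris", "London", "Berlin"]
logits = [3.0, 1.5, 1.0]

answers = [sample(tokens, logits, temperature=1.0) for _ in range(3)]
print(answers)  # the same prompt can yield different answers across runs
```

at temperature near zero the top-scoring token wins almost every time, which is why "deterministic mode" gives repeatable answers while the default settings don't.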
>>
>>101576284
the language we use to talk about this is an issue. i don't know if i'm confident we'll reach AGI through things similar to transformers. i don't think it is possible to scale it much further.
i don't know what a feasible design would look like but it doesn't seem reasonable to just throw more compute at it and then GPT6 is at parity with humans in all domains.
i don't think we're going to get to the moon by building a taller ladder.
>>
>>101576323
Projects for "actual" AI are as far away as ever. It's just that now we have really good plausible-sentence generators.
>>
>>101576357
what do you mean by AI?
when I say AGI, i mean a thing that can produce responses on par with humans across nearly all domains. it can be a sentence generator.

i think it probably would have to have an internal model of reality and be able to use that to make predictions.

i think where you draw the line of what it means to be AGI is important, otherwise we're not even having the same discussion.
>>
>>101576438
>i think it probably would have to have an internal model of reality and be able to use that to make predictions.
I agree, that's what I'm saying GPT doesn't have. It doesn't model reality, categorize things, or use either deduction or induction. There have been attempts to build AI systems that do have those things, and which would theoretically be able to show intelligence and learning in the humanlike sense of the terms. Cyc was the big one, IIRC. The problem is that in building a "real" intelligence like that, learning is cumulative and a computer running such a program is a disembodied mind that starts from zero. You have to teach it millions of facts about the world, including all the "common sense" ones that are hard for people to even state because to us they're so obvious.
>>
>>101576284
No it is not. AI is the study of performing cognitively difficult (i.e. requiring "intelligence") tasks on computers; generating language is a cognitively difficult task, and LLMs do that.
It can be any task, and it can be only one task, and it's still AI. That's why we have a separate term, AGI, for general intelligence, as in the ability to perform any and all cognitively difficult tasks, which is obviously not happening any time soon.
>>
Embrace the future
>>
>>101576525
>It doesn't model reality, categorize things, use either deduction or induction
it literally does all four of those things. categorizing things was one of the first uses of perceptron machines
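to back up the perceptron point: the original perceptron really was a categorizer, a linear classifier trained by nudging weights toward correct labels. a minimal sketch on a toy task (logical AND, which is linearly separable; the learning rate and epoch count are just illustrative choices):

```python
# Minimal perceptron: the classic linear categorizer the post refers to.
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}.
    Applies the perceptron learning rule: shift weights by the error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    """Categorize a point by which side of the learned line it's on."""
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

# Toy categorization task: logical AND, linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

the perceptron convergence theorem guarantees this works on any linearly separable data, which is exactly why categorization was an early success story for these machines.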
>>
There's no such thing as safe or unsafe AGI; the entire concept was dreamed up to distract and obstruct smart people from making meaningful progress.


