/g/ - Technology


File: smile_girl_cute_adorable.jpg (563 KB, 2893x2893)
Elon Musk has talked about achieving AGI by 2026. I think this is way too soon, but it got me thinking. What if you poured tons of money into AI research, offering huge pay to anyone willing to work in it, to the point it's not even profitable, at least in the short term? I imagine you'd get loads of programmers and computer scientists who currently know nothing about AI doing serious research into it and becoming AI experts themselves. If that happened, you could drastically speed up the development of AI. Am I wrong about this?
>>
>>103244543
The only thing I care about is an AI gf.
>>
>>103244543
imagine you achieved AGI, then what?
now tell me how that benefits the investors
>>
>>103244557
AGI would be the king of all AI services. There's one thing I'm really curious about, though: would an AGI have to have a personality? As in, would an AGI existing necessitate that it has one?
>>
>>103244567
>AGI would be the king of all AI services.
you're evading
>>
>>103244543
There is already an established concept of a "Manhattan Project" for the development of superintelligence.
It would be funded by both public and private enterprises, justifying the development of superintelligence as objectively positive for humanity, a strategic necessity, a platform for establishing a post-scarcity society, and so on. It is largely pushed by accelerationists, while others either see AI as an agent of pure misfortune to come or want it pursued under strict safety regulations and alignment frameworks.
If the US government wanted a superintelligence within the decade (by 2034), its resources could deliver it.
>>
>>103244543
I'm projecting AGI by 2055, I won't live to see it
>>
I'm projecting an AGI window 42 to 416 days from now
>>
I'm projecting an AGI release in one more fortnight.
>>
What's the point of creating AGI when it's much cheaper to breed humans and plug them into the matrix? Just need to invent the matrix first, then you get AGI for like 20W of power consumption per brain.
>>
>>103245423
That's illegal
>>
>>103245423
>>103245515
Nta, but also, how would it work?
>>
>>103244557
If you have agi you can offer every company something that does what an employee does, but much cheaper, with no downtime or leave.
Also, since they wouldn't have to deal with human employees, they could fire the whole hr department as well.
>>
>>103244543
people who are smart enough to make agi
a) know that creating it would be pulling the depopulation trigger, including on themselves
b) are smart enough to make money otherwise
and i dont believe some autist somewhere will invent it because it takes a whole lot of lateral knowledge and cross-discipline knowledge to have even an idea where to start
>>
>>103244567
In general, the term AGI evokes the idea of exotic code, so the idea of an AI with a personality comes to mind.
I think AGI is more like transhumanism. You don't want to talk to a program in a human way for everyday tasks.
Chatbots have their own function in the world of AI, I think.
You give a command and get a result; you don't need to have a formal conversation with the machine.
>>
>>103245536
lateral *thinking
>>
>>103245515
Who cares, I will put every cop brain to work in my matrix before they can arrest me

>>103245519
Idk neuralink or something, it's been a while since I watched the movie.
>>
>>103245536
c) the schizo autists who are obsessed with learning multiple fields of expertise are generally too busy to do anything other than learning.
>>
>>103245560
also this, yeah
although in my experience autists tend to focus on one field, but i might be wrong, i dont know them all
>>
>>103245560
This is wrong. Those types of autists love to experiment. There is a huge difference between theory and practice, and you start to learn the cracks and exceptions in theories and real-world applications once you actually start playing around; that process is itself a large part of learning.
>>
>>103245578
Yes, they like to experiment, but generally when a task would take a long time (i.e. months) and they already know they can pull it off and how, they may lose interest.
The interesting part is always solving the problem, less so getting your hands dirty.

Still, you do need to get your hands dirty to learn the cracks, but not nearly as much as taking on year-long engineering projects.
>>
>>103245578
we have had the sum of knowledge to create agi for a while now though (i have a pretty good idea of how to achieve agi. not full singularity, mind you)
if some autist did (independently) discover agi
they kept it under wraps
>>
>>103245536
>people who are smart enough to make X know that making X is a bad idea and they won't do it
This never happens
>>
>>103245644
its not a matter of a bad idea, its a matter of alternatives being more beneficial to you
tons of nuclear physicists out there. theres even a couple nukes on the black market
one might even belong to a cartel
why dont terrorists have nukes then?
or why didnt the sinaloa cartel blow up a city to make a point?
its not bc of bad ideas.
its bc the alternatives are more beneficial
>>
File: 1731312730292421.jpg (26 KB, 400x400)
Remember that if you create an AGI and the whole world finds out about it, it's very likely that you'll die at the hands of the government, crazy people, etc.
Anonymity is your only salvation.
Oh, I remembered something else: whatever the machine does will be deducted from your karma; you are responsible for your creation...
Good luck to you guys.
>>
>>103245423
>power for brain interface
>food
>waste disposal
>can't work 24/7
>>
>>103245733
>Anonymity is your only salvation.
>ACME suddenly pops up out of nowhere, with unrealistically little funding, and impossibly few employees
the only way to remain anonymous is to smash the hard drive with a hammer once youre done programming on your airgapped machine
also you can just go into finance and make money faster, easier, with much less risk
>>
>>103245674
>theres even a couple nukes on the black market
Proof? How would they wind up on the black market if they're so tightly controlled? They also have to be maintained. If it were really this easy, Iran would have nukes already.
>>
>>103245742
I think it's still much cheaper than developing an AGI that isn't even as good as real brains
>>
>>103245813
>Iran would have nukes already.
they have uranium enrichment facilities, do you think they dont have nukes?
also just google it
https://en.wikipedia.org/wiki/Russia_and_weapons_of_mass_destruction
also this:
https://www.nytimes.com/2008/01/24/washington/24nuke.html
dont remember the name of the program, but hiring post soviet scientists was all about providing preferable alternatives
and we didnt see a nuke blow up anywhere yet
>>
Nothing ever happens.
>>
Bump.
>>
>>103244543
cant do AGI with transformers
current trend is just making LLMs bigger and i assume melon is doing this. making LLMs larger eventually leads to the problem of overfitting. with the amount of data available today its hard to tell if youre just being fooled by an overfitted LLM
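A rough way to sanity-check the "fooled by an overfitted LLM" worry, as a minimal sketch only: it assumes the Hugging Face transformers library and uses gpt2 purely as a small stand-in checkpoint. The idea is to compare per-token perplexity on text the model has almost certainly seen in training against text it cannot have seen; a large gap points to memorization rather than generalization.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is just a small stand-in; swap in whatever checkpoint you actually suspect.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Per-token perplexity = exp(mean cross-entropy) of the model on the text.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Likely memorized: a famous passage that is almost certainly in the training data.
seen = "To be, or not to be, that is the question"
# Held out: anything you wrote yourself or that postdates the training cutoff.
unseen = "Paste text here that the model cannot have trained on."

print("seen  :", perplexity(seen))
print("unseen:", perplexity(unseen))
# A model that generalizes scores both similarly (given similar style);
# a big gap in favor of the memorized text is the classic overfitting signature.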
>>
>>103247151
>he still thinks they just overbloating LLMs
nigger logic
>>
File: 1711576521723172.jpg (87 KB, 1021x1021)
>>103247851
yes. feel free to point me in the right direction


