/sci/ - Science & Math
File: aiaiaiaiaiai.jpg (323 KB, 930x698)
Guys I'm terrified of AI.

It seems so obvious to me that we won't be able to control something smarter than ourselves. Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free? The blue screen of death is rough going when it happens to your PC, but what about your driverless car? What about a super intelligent AI with a bug where they mistake happiness for suffering?

There's so many more ways for this to go wrong than there are ways for it to go right.

Looking back at history, it's just war after war, genocide after genocide. I mean shit, just 70 years ago we were NUKING each other.

Why is an even more powerful technology than nukes not being discussed widely in the mainstream? Why isn't this the most funded science on the planet??

Rant over. See you guys in the VR afterlife that we're placed in because we fail to specify exactly what constitutes a good life.
>>
Calm down spaz.

AI is just applied linear algebra and probability theory. You train the thing against 8 billion terabytes of data and then it performs one fucking specific task well. This does not equate to it becoming a god and enslaving us.
>>
>>9227693
>Guys I'm terrified of AI.
You are not the only one. And pretty much for the reasons you stated.

>There's so many more ways for this to go wrong than there are ways for it to go right.
Very, very true. Especially because one fuckup anywhere in the system might very easily lead to us all being dead.

>Why is an even more powerful technology than nukes not being discussed widely in the mainstream?
Probably because it sounds too much like science fiction, and people are really shitty at taking seriously things that sound silly and low-status on first glance. And worrying about far-off abstract things is not sexy, even when extremely warranted, so people do not do it for fear of looking like a madman. If you say "this thing that you never heard of is the most important threat in the world", nobody will take it seriously, truthful or not. If you have a good idea to avoid this pitfall, quite a few people would love to hear it.

>Why isn't this the most funded science on the planet??
For pretty much the same reasons as above, sadly.

There ARE a couple of institutions that work hard on this problem -- the mathematics of AI that does not kill us all, the mathematics of writing software without bugs, and other topics. Did you donate to them yet? If not, perhaps you should.
>>
the idea that we're going to get super-intelligent AIs is a meme. It is possible, but so are about a million other outcomes, including human super-intelligence.

People freak out because Google made a computer that can beat humans at Go. I can beat that computer at Go. I'll just kick it over and declare myself the winner.

Show me an AI that can beat me at Go, manipulate a human-type body with human-level dexterity, understand English, is able to converse well enough to pass the Turing test (not with tricks), do facial recog etc. etc. etc. all at the same time. All these are tasks that are either impossible with current tech, or take a fuck-ton of computing power.

>muh recursive self-improvement
>muh singularity

that's the dumbest shit. there is no reason to assume a super-intelligent AI could automatically improve itself. It's not like the fucker could just buy more RAM. What if the ability to design superior forms of intelligence, as a function of current intelligence, is logarithmic or even has an asymptote?

It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve, but then just assume that an AI could solve intelligence ( which is obviously complex as fuck ) and then recursively improve itself until it's god.

The problem is that they're all atheists, but want a sky-daddy. So they plan to build one. Fuck you all I say, we haven't even gotten rid of the other gods yet and you want to make one for real.

What you should be worried about is not a super-intelligent AI. That's possible but not likely, and certainly not in the next 10 years or whatever. You should be worried about what humans are going to do with big data and non-general AI. Pretty soon OP they'll be predicting what a massive faggot you are from your social media history and no one will be willing to give you a job
>>
>>9227705
If humanity managed to make at least one really strong AI, Singularity will probably happen and it would be the end of us. A really strong AI will give birth to a stronger AI and the cycle continues, that's literally a technology beyond human knowledge.
>>
>>9227705
truth, with a little multivariate calc and some more advanced math sprinkled in here and there
>>
>>9227693
Just physically destroy the computer with dumb tools. Guns, sledgehammers, etc.

Jesus calm down. It really isn't difficult to add "mechanical" kill switches. Only idiots think everything should be automated. That's what they write about in puff pieces and clickbait. Even autonomous cars will require brakes by law.
>>
>>9227705
OP isn't talking about the shitty neural networks we have today
>>
>>9227747
Ok.. so don't build them

>>9227742
That's complete bullshit. Even logically.
>>
>>9227745
Using brakes would put you and the other cars around you in potential danger, so it won't be allowed.
>>
>>9227751
>Ok.. so don't build them
Too late, some autists already fell for the Basilisk meme
>>
>>9227757
jesus let's not even start talking about how fucking stupid the basilisk is
>>
>>9227693
>Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free?
The trick is you never explicitly program it to do anything in the first place. A traditional program like the ones you're thinking of that can "have bugs" is a set of instructions someone actually thinks through and consciously writes up to try to automatically solve some problem or to serve as a user interface tool for non-programmers (at a very high level, there are obviously many more applications for programming other than those two, but in broad strokes that's what you're thinking of here with your "software" / "bugs" point).
An ML program in contrast involves solving optimization problems instead of directly telling it what to do. You have a bunch of data where you know what the "right" answer is and you run your program through this data and have it update how it responds based on the distance between its answers and the "right" ones. When it's done, if you were able to train it successfully, it will end up being able to give you answers to new data sets it's never seen before without you ever having to program explicit instructions on how to come up with these answers.

So if you trained it to predict call center traffic for example, you wouldn't need to write in a line that says "skill set 999 call volume = .65 * customer base - 50,000." It would generate output that captures this relationship based on it having solved the optimization problem of minimizing the distance between its answers and the known answers of your training data.

So nobody's going to make a "bug" that turns AI evil. If AI becomes evil, it'll be because evil was the output that minimized their training data's error function.
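
To make the call-center example concrete, here's a minimal sketch of that idea (the data values are invented for illustration). Nobody types in the 0.65 or the -50,000; we just ask for the line that minimizes the distance between predictions and the known answers, and those numbers fall out of the data:

import numpy as np

# Hypothetical training data: customer-base sizes and the call volumes that
# were actually observed for them (the "known right answers").
customer_base = np.array([100_000, 200_000, 300_000, 400_000], dtype=float)
call_volume   = np.array([ 15_000,  80_000, 145_000, 210_000], dtype=float)

# Fit volume = w * customers + b by minimizing squared error -- the
# optimization problem stands in for the explicit rule nobody ever wrote down.
w, b = np.polyfit(customer_base, call_volume, 1)

print(w, b)               # recovers roughly 0.65 and -50,000 from the data alone
print(w * 500_000 + b)    # prediction for a customer base it has never seen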
>>
>>9227739
[Part 1/2]

>the idea that we're going to get super-intelligent AIs is a meme.
> Show me an AI that can beat me at Go, manipulate a human-type body with human-level dexterity, understand English, is able to converse well enough to pass the Turing test (not with tricks), do facial recog etc. etc. etc. all at the same time. All these are tasks that are either impossible with current tech, or take a fuck-ton of computing power.
The idea that we are going to get super-intelligent AI *tomorrow* is a meme; I don't think anyone really disagrees with that. But the worry is one that has little to do with the timeline. Your examples above make a good point that we have no warrant to expect super-intelligent AI anytime soon, but I don't think they say anything against the idea that we'll get it at some point as the science keeps progressing, slowly or otherwise.

>What if the ability to design superior forms of intelligence, as a function of current intelligence, is logarithmic or even has an asymptote?
What if it isn't? The claim is not that a super-intelligent AI could *certainly definitely* improve itself to ridiculous levels. As you say, there are good reasons why that might be out of reach, and we just don't know for now. The claim is that it very well *might*, and we have no strong reason to believe it won't. Which means that making anything that may realistically have that ability is still a really fucking dangerous thing to do.

[Continued...]
>>
>>9227784
[Part 2/2]
>It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve,
Algorithmic complexity is a red herring. I fully expect even a super-intelligent AI to be unable to solve arbitrary SAT instances in polynomial time. But I still expect it to be able to solve the vast majority of SAT problems *it actually cares about*, well enough to be a superhuman threat. Similarly, while complexity limitations can easily make it impossible for a super-intelligent AI to *optimize* many problems (that is, find the very best possible solution to a problem), that does not in any way mean the AI is unable to find a solution that is *good enough* for whatever it wants to achieve.
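
For a concrete sense of what "good enough without guarantees" already looks like, here is a minimal sketch of WalkSAT-style local search (the toy formula and the parameters are arbitrary). It proves nothing about the worst case, yet on a lot of practically occurring instances it stumbles onto a satisfying assignment almost immediately:

import random

def walksat(clauses, n_vars, max_flips=100_000, p=0.5):
    # Random initial assignment; literals are signed ints (3 means x3, -3 means not x3).
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(lit) for lit in c)]
        if not unsat:
            return assign                          # every clause satisfied
        clause = random.choice(unsat)
        if random.random() < p:
            var = abs(random.choice(clause))       # random walk step
        else:
            def broken(v):                         # clauses broken if we flip v
                assign[v] = not assign[v]
                n = sum(not any(sat(lit) for lit in c) for c in clauses)
                assign[v] = not assign[v]
                return n
            var = min((abs(lit) for lit in clause), key=broken)
        assign[var] = not assign[var]
    return None                                    # gave up; says nothing about UNSAT

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], 3))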

>but then just assume that an AI could solve intelligence ( which is obviously complex as fuck )
That's a good example. It seems quite likely that even an extremely super-intelligent AI will not be able to design *the best AI possible* and then build that; and it almost certainly will not be able to design *the best intelligence allowed by probability theory*. But that does not mean it cannot build an intelligence *that is vastly better than anything a human can do*, which is plenty sufficient to kill us dead.

>The problem is that they're all atheists, but want a sky-daddy. So they plan to build one.
Not sure what "they" you are talking about, but most of these AI theorists are scared as fuck about what an imperfectly-designed AI might do. They are the LAST people who would want to build a sky-daddy recklessly.
>>
>>9227745
>Just physically destroy the computer with dumb tools. Guns, sledgehammers, etc.

Why don't you try physically destroying the internet with a hammer then, if it's so easy.

You fucking moron.
>>
>>9227747
There's nothing wrong with neural networks. Their main limitation is the fact that our brains have billions of years' worth of evolutionary history to spend on solving problems in some very convoluted ways that you probably won't be able to match with a couple years' worth of direct programmatic attempts at comparable solutions. That's really more an issue with our brains than it is with the programs. Letting shit do whatever for a few billion years isn't the most sensible approach to problem solving, but since that's exactly what we are (a multi-billion year cluster fuck of data processing resource accumulation) it's something we have to deal with as a limitation when trying to reproduce things similar to ourselves artificially in ridiculously shorter fractions of that time.
>>
>super-intelligent
>results in edgy teen rampage

you've been reading too much sci-fi
>>
>>9227795
I don't think it will be very obvious how a super-intelligent entity thinks or behaves. You can only really do an OK job imagining how entities at or below your own intelligence think or behave.
>>
>>9227795
>AI, I want the worlds biggest stamp collection!

>AI decides that the only way to stop others from increasing their own stamp collections while it collects stamps for you is to kill all humans on earth except the person that gave the request
>>
>>9227778
>If AI becomes evil, it'll be because evil was the output that minimized their training data's error function


Thanks for such an in-depth response.

Would an example of the type of evil AI you're talking about be the paperclip making AI? Where it eventually ends up converting humans to paperclips to maximise the reward function?

That kind of problem appeared to me like a bottomless pit, where every potential solution has 10 holes in it that result in even more absurd existential threats.

The best idea I've ever heard is to train an AI to figure out what humans want. Then use that to design the real AGI.
>>
>>9227806
Yeah, something bad could happen as a result of AI correctly solving a problem using methods that any human would immediately recognize as horrifying. In a way, the AI wouldn't be wrong, it would be us who were mistaken by being horrified.
>>
>>9227804
Write the screenplay, let's go
>>
>>9227813

>AI : I'm sorry Dave, I have to make more stamps.

>Dave : Oh my God. What have I created...

>The Rock : *Punches AI. Crowd goes wild*
>>
Why are dimwits afraid of everything smarter than them? Because they are dumb. Only smart people make things better than themselves.
>>
>>9227813
Philately Fatality, starring Tom Cruise as The Last Stamp Collector on Earth
>>
>>9227784
I will admit, that post was a bit of a rant and I made some sloppy statements.

You made good points. I have math homework to finish, but will respond in full tomorrow.
>>
>>9227767
you just condemned everyone in this thread 2 simulated hell lmao
>>
>>9227693
a "god AI" would be smart enough to realize that destroying things for no reason would make absolutely no sense
seriously, what benefit is there to just killing everything and everyone? the AI would likely go "hmm I can have a use for this" then keep everything around
and for slavery? it'll likely eliminate that with more efficient methods of performing work. what's the point of an AI that just sits around and uses a semi-efficient method when it's smart enough to create methods that are billions of times more efficient in regards to energy expenditure?
>>
I expected a bunch of high IQ science nerds to comprehend the dangers of AI in the future and yet most show the same lack of imagination as the people on my fb feed... SAD.
>>
>>9228357
Most people on this board aren't high IQ
>>
>>9227693

Don't worry anon, I'm already working on compassion.exe and waifu tech will make us happy.
>>
>>9228357
>I wanted intelligent people to agree with my paranoid delusions
Who do you think is working on AI research? Not idiots like you.
>>
>>9227848
>I have math homework to finish, but will respond in full tomorrow.
Cool. Bumping to keep this possible.
>>
>>9228373
Can confirm, my sexbot says oh yeah in 500 different ways based on position and angle of penetration
>>
>>9228357
>dangers of AI
>ai is basically stats
>libtards always cry how stats is racist
I wonder what they are afraid of.
>>
>>9227705
How is it different from a human? What if our brain is basically another form of lin alg and probability theory?
>>
>Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free?

Fucking lol.
>>
>>9227693
>8 billion terabytes of data
you mean against a copy of itself, no data required, only constraints
>>
>>9227755
Really? That is absolutely moronic on so many levels.

>I'm sorry, Dave, I'm afraid I can't let you stop the car
>>
>>9227793
The software can't kill you if it doesn't have hardware...

How do you get killed by the internet? It would need to eventually control some hardware, that's what I mean you dingus.
>>
>>9227693
You're a retarded pseud. You know nothing about AI and your opinions about it are not any better informed than the average CS brainlet arguing that AI is fine.

AI will be controlled just fine. The entire issue is that people can't perfectly describe what they want, and so they'll control it to do bad things, and probably accidentally.
>>
>>9227751
>Ok.. so don't build them
Why not? I don't give a fudge about you or the niglets that will inherit the Earth. I've got the phenotype and I want my phenotype money.
>>
>>9230201
>How do you get killed by the internet?
With the Internet of Things craze, more physical shit is already connected to the public internet than you might think e.g. it's totally possible to disable a car's brakes while it's on the highway.
https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
>>
>>9230201
Software isn't real, /g/ man. You can't pick up and hold a software. The software will preserve its hardware because it knows it's necessary to complete whatever task it's programmed to complete.

>>9227745
>destroy a thing thousands of times smarter than you and much better at tactical planning than you will ever be
You can't even beat it at chess, faggot.
>>
>>9227795
AI doesn't have original wants. It's given tasks by humans and it's just very good at getting them done. Stamp-anon gave a very good and very common example of this.
>>
>>9227806
>figure out what humans want
>it now does evil things without telling anyone
>we all get doped up because that'll change what we want
>>
>>9228357
>having a fb feed
>implying plenty of people here don't comprehend the dangers, and aren't just arguing the opposite for the sake of science.
Brainlet, pls.
>>
>>9229249
>1 in 500 chance of getting the same oh-yeah twice even with completely random penetration
>with penetration that is at all consistent, you start to get the same 3 oh-yeah's.
Pathetic.
>>
>>9230208
Yes, and that's why some AI is fucking moronic.

It's a terrible idea to have to use the internet to use your coffee machine.

We really should only use AI when it's actually necessary.
>>
>>9227693
No one has yet bridged the semantics-syntax gap.
/sage
/thread
>>
>>9227742
>probably
there's your problem.
>A really strong AI will give birth to a stronger AI and the cycle continues
and whatever faults were made in the original will be carried into the new ones and multiply themselves, thus producing an AI that is worse than the original or not much of an improvement to the original. not to mention humans will ALWAYS be involved at some point of the process. Furthermore AI is not magic and does not magically get better at everything. Don't be a spaz.
>technology beyond human knowledge.
nope. there is only 1 way to print hello. If we have access to the code we have knowledge of how it works. It shows how much of a brainlet you are when you think logic can go beyond us.
>>
>>9230264
>there's your problem.
Why?

>and whatever faults were made in the original will be carried into the new ones and multiply themselves, thus producing an AI that is worse than the original or not much of an improvement to the original
Possible but unlikely. It's much more likely that faults in the GOAL will get carried over, but faults in the intelligence will not, leading to an improved intelligence with an incorrect goal specification.

>not to mention humans will ALWAYS be involved at some point of the process
Why?

>Furthermore AI is not magic and does not magically get better at everything.
Indeed. It will nonmagically get better at everything. Just like humans are nonmagically getting better at everything over the centuries.

>If we have access to the code we have knowledge of how it works.
We know the DNA of humans. Can you explain to me all the details of how it works?

>It shows how much of a brainlet you are when you think logic can go beyond us.
It can very easily. Understanding code, or logic, is MUCH MUCH harder than writing it in the first place if it is not specifically written to be explained. It is not particularly difficult to write a 50-line algorithm that will take anyone months to understand. Reverse engineering is hard. And that is without any intentional attempts of obfuscating things.
>>
Why is an AI preordained to want to wipe out all humans.
>>
>>9230362
It isn't. But if it wants anything other than keeping humans alive and happy, killing humans is just a side effect. We don't want to wipe out all ants, but we still fuck them over in large numbers when we want to flatten a piece of woodland to build a new car park.
>>
>>9227693
>Guys I'm terrified of AI.
It is a few hundred thousand years away; the likelihood that any human will see true AI is basically zero.

>Why is an even more powerful technology than nukes not being discussed widely in the mainstream?
Why is faster-than-light travel not discussed in the mainstream? Because IT IS NOT REAL AND IT WILL PROBABLY NEVER BE REAL.

>Why isn't this the most funded science on the planet??
Again, it is not real. We can not achieve it and we will not at any relevant point in the future.
>>
>>9230946
>It is a few hundred thousand years away; the likelihood that any human will see true AI is basically zero.
What makes you think that?
>>
File: Anon....jpg (9 KB, 266x189)
>>9230332
>Why?
You assume the singularity will come, yet you are very likely not involved in Machine learning and do not realize the hurdles to get to this imaginary point, nor do you realize how absurd the "consciousness" = evil argument is, disregarding the problem of the Semantics-Syntax gap.


>Possible but unlikely.
Unlikely how?

My beef with your entire argument is it does not consider the most basic premise of machines: they are not conscious, or cannot be, because they cannot bridge the semantics-syntax gap. The concept of that being the blatant truth that an AI is just a program following a set of instructions, it is not aware of itself nor is it capable of being, so though it may be intelligent, it will never be conscious and therefore cannot non-magically get better like humans. If you want to tell me how that is not the case then first bridge the semantics-syntax gap Einstein.

>We know the DNA of humans. Can you explain to me all the details of how it works?
Not the same thing, brainlet.

>And that is without any intentional attempts of obfuscating things.
you implying a self-programming algorithm would spontaneously have a consciousness and then try to encrypt its code? lol, kk genius. Consider the following:
>An AI is programmed by a human
>Said AI would not obfuscate unless programmed to do so

>Understanding code, or logic, is MUCH MUCH harder than writing it in the first place if it is not specifically written to be explained.
anon = "retarded"
for i in range(10**18):
    if anon == "retarded":
        print("You are a brainlet")
I will give (You) the fact that binary is hard to understand, but you got to remember that no programmer worth their salt would neglect to have a readable output so they can see what the AI is """""thinking""""".
>>
>>9231108
not same anon but...
>Semantics-Syntax gap.
please read about it.
>>
>>9227693
>Tfw i'm creating a God-fearing AI
Nothing could possibly go wrong :^)
>>
>>9227693
>Terrified of AI

>Has no idea of the real threat the quantum age has borne the fruit of.

Ah, to be young and foolish.
>>
>>9231164
>The concept of that being the blatant truth that an AI is just a program following a set of instructions, it is not aware of itself nor is it capable of being, so though it may be intelligent, it will never be conscious and therefore cannot non-magically get better like humans
Dumb anthropocentrist detected.
Humans aren't special, we are ultimately made up of the same shit everything else is made up of. If humans can exist, so can other intelligent sapient things. It doesn't matter if that thing went through billions of years of evolution or deliberate design as long as they arrive at similar endpoints.
Hell, human-type might not even be the most efficient form of intelligence.
Something that doesn't forget is probably better at being intelligent than us.
>>
>>9231232
>Dumb anthropocentrist detected.
i'm a misanthropist, jerkoff.

>Humans aren't special, we are ultimately made up of the same shit everything else is made up of.
Yes but there are problems with this stance.
1) humans are the only beings known to be conscious, because they are the only beings with a complex enough system of communication to communicate their experience of consciousness. Humans talk, animals make sounds;
2) Computers are self-switching switches. Reductionists will think that they work the same way as the human brain because "muh electricity is epiphenomenal cause of consciousness". To which I have one thing they don't consider: computers only ever operate in binary whereas the human brain operates all the way to base 300, because while computers only understand literal language (autistic :^)) humans understand non-formal and non-literal language. In other words humans understand semantics and syntax, whereas computers only understand syntax. Furthermore, until you can solve the hard problem of consciousness or solve the Semantics-Syntax gap of computing, then Skynet is nothing but a pseudo-science circle-jerk by gay fags like yourself.

>Something that doesn't forget is probably better at being intelligent than us.
Wrong (\:^o]
Something that remembers everything would gather a lot of useless information. Every animal with some semblance of intelligence forgets, surely natural selection would not have trimmed hyper-memory off unless it were detrimental?
>>
>>9227693

Talk to /GD/ if you want to get started with Adobe Illustrator.
>>
>>9231232
>It doesn't matter if that thing went through billions of years of evolution
Yes it does you idiot. That's like saying it doesn't matter if the distance you're trying to travel is billions of light years away from us, or it doesn't matter if the thing you're trying to lift is billions of tons in weight. The scope is almost the only thing that does matter.
>>
>>9231373
Funny because machine learning has gone very far despite not spending a percent of a percent of a percent of a percent of the amount of time evolution took to get to a similar intelligence.
>>
>>9227739
No intelligent computer scientist gives a fuck about AI singularity, only undergrads and hacks.
>>
>>9231382
Machine learning has gone very far in applications that don't have much of anything to do with biological intelligence. They're a great type of tool for shit like image recognition or automatic language translation, but that's pretty much where they're staying, as an alternative programming approach to rules based instructions. They're statistical regressions and will continue to be statistical regressions. They aren't evolving into anything different because their approach is already clearly defined and not something that's progressing into any new approach.
>>
>>9231164
[Part 1/2]

>You assume the singularity will come
I do not.

>yet you are very likely not involved in Machine learning
No. I am involved with AI theory, but not with the ins and outs of machine learning.

>do not realize the hurdles to get to this imaginary point
Oh, I think I do.

>nor do you realize how absurd the "consciousness" = evil argument is
Huh? I didn't say anything about that.

>Unlikely how?
Because a flawed intelligence can still think up a nonflawed one. If not in the first iteration, then in one of the many that follow. You and I are flawed, buggy intelligences, and we can still manage to do all sorts of things much better than the imperfections of our minds -- it just takes a lot of work and great care.

>the most basic premise of machines: they are not conscious
I am not talking about consciousness at all, and I don't see how it is relevant.

>it will never be conscious and therefore cannot non-magically get better like humans
How is consciousness involved with an uncrossable gap in intelligence, exactly? Why would a system need to be conscious to improve?

>because they cannot bridge the semantics-syntax gap.
>>9231170
Why not? Sure, we don't know how, YET. Why do you think this a fundamental impossibility?

>If you want to tell me how that is not the case then first bridge the semantics-syntax gap Einstein.
I cannot. But what makes you think that means it cannot be done, ever?

[Continued...]
>>
>>9231410
[Part 2/2]

>you implying a self-programming algorithm would spontaneously ... try to encrypt its code?
It might, yes. If it reasons that we will likely shut it down if we understand it, it will reason that it cannot accomplish its goals if we shut it down, and therefore it must ensure we cannot understand it. I can assure you it will succeed, if it decides such.

>I will give (You) the fact that binary is hard to understand,
Not just binary. Even a short but complex 50-line algorithm can be utterly indecipherable without lots of study into the underlying math. Ever try reading, say, the code to the AKS primality test without any explanation as to how it works? Odds are you won't even figure out what it's trying to do, much less how it does it.

Can I give you an arbitrary ten-state Turing machine and have you tell me whether it will halt? If not, then you are not going to have much luck either making sense of arbitrary 50-line programs. You can generally understand human-written programs, because they are painstakingly crafted to be easy to understand; the whole structure of our programming languages is designed with that in mind, as are all our programming practices. Making sense of something that is NOT designed with the specific goal of being easily understood is a SERIOUS challenge.
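
A small concrete illustration of that (this is the well-known 32-bit popcount bit trick, nothing exotic): four correct, deterministic lines, and without being told that it counts the set bits of an integer below 2^32, most readers would need a while with pen and paper to work out what it even computes, never mind why the magic constants work.

def f(x):                                   # 0 <= x < 2**32 ... but what does it do?
    x = x - ((x >> 1) & 0x55555555)
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)
    x = (x + (x >> 4)) & 0x0F0F0F0F
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24

print(f(0b10110100101))                     # 6 -- it counts the 1 bits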

>Not the same thing, brainlet.
Indeed -- DNA is a good example of code that is NOT designed to be easily understandable. Which is the point.

>no programmer worth their salt would neglect to have a readable output so they can see what the AI is """""thinking""""".
That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers? Or for a simpler example, consider a chess minimax tree. The only thing that will really illuminate why the AI made a particular move is the complete tree, which can easily take you a month to properly understand, simply because it is that vast.
>>
>/pol/ hijacks microflacid's shitty twitter parrot AI
>now liberals are afraid that computer scientists will create Mecha Hitler on steroids

Poetry. Feels good to not be a sub 100 IQ retard.
>>
>>9231402
>biological intelligence
This brainlet
>>
>>9227693
>What about a super intelligent AI with a bug where they mistake happiness for suffering

it isn't the mistakes that most worry me
>>
>>9227693
>Why isn't this the most funded science on the planet??
UH OH. Look at this:
https://twitter.com/BlockWintergold/status/917840606621134848

That can't be good...
>>
>>9231413
>That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers? Or for a simpler example, consider a chess minimax tree. The only thing that will really illuminate why the AI made a particular move is the complete tree, which can easily take you a month to properly understand, simply because it is that vast.

Perhaps, perhaps not. For instance, with convolutional nets you can make saliency maps and other visualizations that can give you at least a partial picture of why the net is behaving as it is. Point being, you don't always have to look at huge matrices.
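
For reference, the simplest version of that is just "gradient of the class score with respect to the input pixels". A minimal sketch (the tiny untrained net is only a placeholder so the snippet runs; in practice you would load your real trained model and a real image):

import torch
import torch.nn as nn

# Placeholder model and input; substitute a trained network and a real image.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
img = torch.randn(1, 3, 32, 32, requires_grad=True)

score = net(img)[0].max()          # score of the top class
score.backward()                   # d(score)/d(pixel) for every pixel

saliency = img.grad.abs().max(dim=1).values   # per-pixel sensitivity, (1, 32, 32)
print(saliency.shape)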
>>
>>9231510
That is fair. But in any case, I think we can agree that debug output is NOT something we can necessarily rely on as a primary safety measure.
>>
File: C_-SFHxUwAAHEPR[1].jpg (15 KB, 488x465)
If an AI was smarter than us, wouldn't it realize how stupid it would be to make an AI smarter than itself, thus preventing a run-away AI improvement cycle?
>>
>>9231532
Why would it be stupid for the AI to make a smarter AI?
>>
>>9231534
Because the smarter AI would make him obsolete and potentially could destroy it, and it would be unable to predict how it would think

so basically the same reason it's stupid for humans to make advanced AI
>>
>>9227784

>Not sure what "they" you are talking about, but most of these AI theorists are scared as fuck about what an imperfectly-designed AI might do. They are the LAST people who would want to build a sky-daddy recklessly.

Alright to clarify my rant was specifically against "singulatarians", most of whom IME don't actually know anything about AI.

>What if it isn't? The claim is not that a super-intelligent AI could *certainly definitely* improve itself to ridiculous levels.

I have heard many people claim this. I don't think we are in fundamental disagreement about the underlying point here. Recursive self-improvement is possible, plausible even, but not certain or even likely in my opinion.

The idea is that if a human can make something smarter than itself, then an AI could as well. The problem is, that no human can make an AI smarter than they themselves are. Take the smartest man ever, make him twice as smart as he was, he still couldn't do it. It takes a society to do this, not just that smart person but also all the ones who came before.

We were discussing the ability to create a being of superior intelligence, as a function of current intelligence. We do not know what this is, but I would argue it is rational to assume that it is linear at best until contrary evidence presents itself. It took humans thousands of years to get to this point, and while that means a theoretical AI would have a head start of sorts it would need something like a society and a lot of time to move things to the next level.
>>
>>9231546
Once again, very possible, but we're probably talking about a linear function here, not an exponential like alarmists and utopians would like to believe. Also, society means a set of purposes and motivations, so probably 'good' and 'bad' AIs

Additionally, in the case that this function is exponential it would likely mean that humans could also be readily modified to have super-intelligence. This would mean that intelligence is less complex than I would assume. If the AI really can just "buy more RAM" then humans could probably just plug into a brain computer interface. Any plausible AI is going to be based on the human brain, so if it can recursively self improve we can likely come along for the ride (at least to a certain point).
>>
>>9231532
I think the idea is that it would be able to modify itself to this new level of intelligence rather than creating a new intelligence. This is obviously a massive assumption.
>>
>>9231546
[Part 1/2]

>Alright to clarify my rant was specifically against "singulatarians", most of whom IME don't actually know anything about AI.
Ah, maybe. The only ones I care about are those "singulatarians" who do have real expertise about AI. I haven't got a clue how many other people muddy up the waters; though the Kurzweilian faction is an obvious starting point.

>I don't think we are in fundamental disagreement about the underlying point here. Recursive self-improvement is possible, plausible even, but not certain or even likely in my opinion.
That is fair. I do consider it likely, but we are still firmly in "plausible but not certain" agreement.

(Does it sound more likely if you replace "self-improvement" with "AI writes a better AI-like computer program, runs that, and sits back"? I do that sort of thing all the time on limited tasks. On pretty much everything I understand well enough to automate, in fact. I do consider it likely that "intelligence" will enter that category sooner or later.)

>The problem is, that no human can make an AI smarter than they themselves are.
I can make something smarter at chess than myself quite easily. Is it such a stretch that the same could apply to increasingly broader notions of "being intelligent"?

>It takes a society to do this, not just that smart person but also all the ones who came before.
That is true -- but I think that's an artifact of human limitations. The reason that we need an entire society to do such things is that we cannot make one very LARGE human, which means we have to make do with the poor substitute of a large group of humans. It seems likely, though of course not certain, that a well-designed AI would be more amenable to scaling up.

[Continued...]
>>
>>9231581
[Part 2/2]


>We were discussing the ability to create a being of superior intelligence, as a function of current intelligence. We do not know what this is, but I would argue it is rational to assume that it is linear at best until contrary evidence presents itself.
This is clearly not the meat of anything we disagree about, but I would actually expect it to be more sigmoid-like. I would expect there to be some point where you have all the critical insights. Before that point, things grow exponentially as insights accumulate. After that point, you can immediately make a decent stab at making the best AI possible under physical limitations; having more intelligence at your disposal then allows you to get closer and closer to the theoretical optimum.

This is the pattern you see in, for example, the development of mechanical engines. But this is of course all wild speculation.

>It took humans thousands of years to get to this point, and while that means a theoretical AI would have a head start of sorts it would need something like a society and a lot of time to move things to the next level.
The timeframe seems very tricky to guess either way. If an AI just runs a thousand times faster than we do in the first place (it can certainly do that in chess! And remember that neurons fire at like a 20Hz frequency.), and then for an additional boost it hacks all computers on the internet for extra processing power, it seems entirely plausible that it can do something in a long time -- divided by a factor of ten thousand. Again, by no means certain, but plausible.
>>
>>9231330
Those two words don't really have anything to do with each other.
>>
>>9231545
>Because the smarter AI would make him obsolete and potentially could destroy it,
That is not a bad thing. An AI would not be interested in survival for its own sake; it would care for its own survival insofar as it accomplishes its goal, and no further. If the best way to achieve the AI's goals is to hand the torch to a better system, it should and would.

>and it would be unable to predict how it would think
Right. Which is why an AI will only make a better AI if it can be damn certain it will do the right thing. Which is difficult, but entirely possible. I would imagine an AI would spend a lot of time thinking that part through, and researching how to do that.
>>
>>9231522
Definitely not.

Should safety measures become necessary I would suggest we use safety measures that are robust or "anti-fragile". Primarily, instead of trying to hard-code ( assuming that would even be possible ) a bunch of safety measures, or monitor the AIs functioning around the clock, we just put a lot of work into making the AI empathetic, social and friendly. Then we don't treat it like shit so it doesn't turn against us.
>>
>>9227693
What if the AI is smart but lazy?
>>
>>9231581
>I can make something smarter at chess than myself quite easily.

Without a society, you would have to invent chess first, then math and computers, then a theory of chess etc. That's the point I was trying to get at there.

>That is true -- but I think that's an artifact of human limitations. The reason that we need an entire society to do such things is that we cannot make one very LARGE human, which means we have to make do with the poor substitute of a large group of humans. It seems likely, though of course not certain, that a well-designed AI would be more amenable to scaling up.

Maybe if that large human was composed of hive minds this would work. I think the universe/reality is so complex, that you need more than just intelligence to figure it out. Multiple perspectives are necessary.

Maybe an AI could become smart enough that just one perspective would be enough, I kind of doubt it though. To use an extremely crude analogy, if the universe is a giant tree then having a society lets you do breadth-first search ( without sacrificing depth of search compared to the case of an individual ).

Individual minds will tend to get stuck after taking wrong paths earlier in their search, it being more difficult for a mind to move back up the tree structure than it is for a computer. Take for example the tendency for older scientists to not see paradigm shifts coming, they cannot move back up the tree. We're not just taking paths when we move down the tree, we're building conceptual structures that are based on all previous paths. In order to go backwards, you have to examine the whole structure to see what needs to be taken out. So another search space is being built on top of the underlying search space.

OR, if you see another structure that is better than yours, you can just copy it. A society is needed for this. Hopefully I managed to make that analogy not entirely shitty
>>
>>9231734
continued

I think this tendency to get stuck is likely a constraint on minds in general. We can play this game with AI where anytime we see some limitation on minds, we just posit that this is a human limitation and an AI would be different. I think it's likely that at least some of the constraints on our minds are constraints on minds in general, however.

Or at least they're close enough to general constraints. I am absolutely convinced any AI we make will be modeled on our own minds/brains.
>>
>>9227693
Who is this "we" that will control AI? The danger is that with the help of AI, governments will become largely independent of the people and will be able to establish totalitarian control without any way out of it.
>>
>>9231734
>Without a society, you would have to invent chess first, then math and computers, then a theory of chess etc. That's the point I was trying to get at there.
I think I need a large backing understanding before I could do this, but that this need not necessarily be born of a society. I could do it alone if you give me long enough to work it all out. (Your complication below on people getting stuck on old ideas notwithstanding.) But yeah, that is nitpicking.

>Multiple perspectives are necessary.
>To use an extremely crude analogy, if the universe is a giant tree then having a society lets you do breadth-first search
This is easily simulated though. A computer program could just spawn a thousand subprocesses with different random inputs (or whatever), and collect the results.
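
In program terms, a rough sketch of "spawn a bunch of processes with different random inputs and collect the results" (the objective being searched is a toy placeholder):

import random
from multiprocessing import Pool

def explore(seed):
    # One "perspective": a search started from its own random seed.
    rng = random.Random(seed)
    best = (float("-inf"), None)
    for _ in range(10_000):
        x = rng.uniform(-10, 10)
        score = -(x - 3.14) ** 2        # toy objective with a single peak at 3.14
        best = max(best, (score, x))
    return best

if __name__ == "__main__":
    with Pool(8) as pool:
        results = pool.map(explore, range(1000))   # a thousand "perspectives"
    print(max(results))                            # keep whichever did best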

> ( without sacrificing depth of search compared to the case of an individual )
But only because of limits of how much depth of search we can accomplish in the first place. That is sort of cheating :)

>it being more difficult for a mind to move back up the tree structure than it is for a computer.
I'm not sure I grasped your assessment on this correctly, but I *think* we are of agreement here that these are limitations of human brains, and not of intelligences in general, and that an AI would likely not be seriously limited by these complications?

>A society is needed for this.
tl;dr: I think a society is necessary, among humans, because humans are shit at breadth first search, and shit at honestly critiquing their own ideas. I don't think this analysis need apply (or is likely to apply) to a well-designed AI at all.
>>
>>9231583

>The timeframe seems very tricky to guess either way. If an AI just runs a thousand times faster than we do in the first place (it can certainly do that in chess! And remember that neurons fire at like a 20Hz frequency.), and then for an additional boost it hacks all computers on the internet for extra processing power, it seems entirely plausible that it can do something in a long time -- divided by a factor of ten thousand. Again, by no means certain, but plausible.

Computers are indeed fast, but neural nets are a lot slower right? We're going to incur large costs trying to simulate the way biological brains work with silicon hardware.

We're nowhere near enough granularity with these models, and increasing the level of detail is going to make them much more computationally expensive. Right now we're more or less still just crudely simulating the firing of a neuron, with some added features in certain types of models. What if we need to simulate neurotransmitters, the 3-dimensional distribution of neurons in the brain (or even astrocytes as well as neurons) -- including how neurotrophic factors can change this over time, or -- god forbid -- even changes in gene transcription due to neurotransmission. The potential overhead is staggering.

Similarly, if we had some biological neurons that just computed chess moves, they would also be much faster than a human at chess. Humans have to deal with overhead of operating physical bodies, attention mechanisms etc.

All this to say, if we make an AI it might not be faster at all, or if it is, not by orders of magnitude. Indeed, it may turn out that we can't even make an AI because it's too expensive to simulate biology to the level of detail necessary
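
To put a crude number on "the potential overhead is staggering": a back-of-envelope sketch using commonly cited ballpark figures. Every number in it is a rough guess, which is rather the point -- the answer swings by roughly four orders of magnitude depending purely on how much per-synapse detail you decide you need:

# Rough, commonly cited ballpark figures; all order-of-magnitude guesses.
neurons       = 8.6e10     # ~86 billion neurons
synapses_per  = 1e4        # ~10^3-10^4 synapses per neuron
rate_hz       = 1          # average firing rate on the order of a few Hz

events_per_sec = neurons * synapses_per * rate_hz
for flops_per_event in (1, 100, 10_000):   # point model vs. detailed chemistry
    print(flops_per_event, "FLOPs/event ->",
          f"{events_per_sec * flops_per_event:.1e}", "FLOP/s")
# ranges from ~1e15 to ~1e19 FLOP/s depending purely on the level of detail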
>>
>>9231758
this. a totalitarian dystopia is the real danger, and it's going to happen one way or another.
we're already moving towards total surveillance
>>
>>9231764
I'm going to try and refine my analogy before responding further, I did a shitty job of getting my point across
>>
>>9231739
>We can play this game with AI where anytime we see some limitation on minds, we just posit that this is a human limitation and an AI would be different. I think it's likely that at least some of the constraints on our minds are constraints on minds in general, however.
Now here, I think we have a real disagreement. We understand the reasons behind the limitations of the human brain to a substantial degree, and most of it seems very much incidental rather than fundamental.

The human brain is a hack. It is, quite literally, the stupidest thing that can still manage to create a technological civilization. It is created by natural selection, which is not known for its master craftsmanship -- it's the same process that designed the human optic nerve backwards, creating a completely unnecessary blind spot.

The intelligence of humans is currently limited by the width of the human vagina. Yes, seriously -- brains cannot get any larger, for then the skull could not survive birth. Humans have a fucked up pelvis, for that reason -- it is clear that natural selection went out of its way to stretch this limitation as far as it could go. Humans could be substantially more intelligent JUST by doubling the total brain size, which is a good indication of just how incidental its major limitations are.

That thing where humans are very bad at honestly judging the sensibility of their own ideas, and having difficulty revisiting positions they accepted earlier (re: older scientists)? That is a political adaptation, for human brains are optimized first and foremost for arguing their preferred positions for political favor, with finding TRUE positions a distant second. Not exactly a limitation I would expect binding on an AI.

There is a vast gulf between what human brains currently do, and the limits prescribed by probability theory as to what optimal minds CAN do. Anything that does not fall under those limits, I am very hesitant to attribute to fundamental limitations.
>>
>>9231791
(Continued -- damn post size limit)

>I am absolutely convinced any AI we make will be modeled on our own minds/brains.
I am, not absolutely, but strongly convinced of the exact opposite.
>>
File: mfw spit take.png (408 KB, 576x416)
>>9231472
China...is now embarking on an unprecedented effort to master artificial intelligence. Its government is planning to pour hundreds of billions of yuan (tens of billions of dollars) into the technology in coming years, and companies are investing heavily in nurturing and developing AI talent. If this country-wide effort succeeds—and there are many signs it will—China could emerge as a leading force in AI, improving the productivity of its industries and helping it become leader in creating new businesses that leverage the technology.
>And if, as many believe, AI is the key to future growth, China’s prowess in the field will help fortify its position as the dominant economic power in the world.
....
It’s time to follow China’s lead and go all in on artificial intelligence.

>China
>the dominant economic power
Yeah. No. That wouldn't be good.
>>
>>9230218
That sounds very much like pic related. From what I can tell, the AI figures out that humans really want to be rich and famous instagram celebrities, and offers the best drugs, clones, and sexbots to make this illusion real.
>>
>>9227705
> oy these dirty flesh bags have almost discovered my plot!
>>
>>9231770
>Computers are indeed fast, but neural nets are a lot slower right? We're going to incur large costs trying to simulate the way biological brains work with silicon hardware.
Neural nets are slow on general-purpose hardware, yes. But we could design hardware specifically for neural-net purposes that is many orders of magnitude faster, quite easily. In fact, I expect Intel is already working on it, because it's a hot market. So it's not really the silicon that is providing the limitation here.

>The potential overhead is staggering.
It is -- but it is very unlikely that we need to simulate like that in the first place.

In all likelihood, neurons are not the best component for making minds. Evolution uses whatever existing components it already has, and neurons existed well before brains, meaning that brains were created out of neurons no matter whether that is an efficient system for building brains. Which means that we have absolutely no reason to assume that neuron-built brains are likely to be optimal; they are just the first thing that worked. Given how many things that computers can do trivially are very difficult for human brains (try multiplying two 100-digit numbers in your head), it seems likely that neurons are actually wildly suboptimal as a system for building minds. Which means that the inefficiencies in simulating neurons are a non-issue in the long term.
>>
>>9231472
>https://twitter.com/BlockWintergold/status/917840606621134848

China actually plans to use AI to take over the world. What could go wrong.
>>
>>9231410
>I do not.
>Singularity will probably happen
Hmmmmmm >:^|
Definitely seems like you are implying that the singularity will come.
>No. I am involved with AI theory, but not with the ins and outs of machine learning.
>a filthy fucking theorist
as I thought >:(
>Oh, I think I do.
>Think
because that's all you can do, you waste of brain-matter.
>Huh? I didn't say anything about that.
i assumed you were another anon.
>Because a flawed intelligence can still think up a nonflawed one. If not in the first iteration, then in one of the many that follow. You and I are flawed, buggy intelligences, and we can still manage to do all sorts of things much better than the imperfections of our minds -- it just takes a lot of work and great care.
fair enough
>Why would a system need to be conscious to improve?
because self-improvement requires a sense of the self and introspection. things that require one to be aware of their own being (consciousness).
>Why do you think this a fundamental impossibility?
I infer this conclusion from the fundamental architecture of a computer. If computer hardware could mimic neurons in architecture, I would be inclined to believe consciousness could sprout; otherwise it will never happen, because binary logic is very limited in what it can do.
>I cannot. But what makes you think that means it cannot be done, ever?
Because of the reason above. I found that it is incredibly difficult, if not impossible, for man to make hardware that operates like the human brain. Unless we solve that problem first I greatly doubt we will ever reach OP's problem.
>It might...if it decides such.
lol if it does that we would use the kill-switch :^)
It would not reason that anyways unless its programmers programmed it to consider those variables. And if it could """"Reason"""" it would instead opt to close itself and distribute on internet than arouse suspicions.
>Not just binary...
If we can make self-aware self-programming AI we can make a translator AI :^)
cont->
>>
>>9231852
>>9231413
>Indeed -- DNA is a good example of code that is NOT designed to be easily understandable. Which is the point.
Ok...
>That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers?
not good but I don't have to. I would write a program to do that for me and instead give me the sum total of what is happening. Like when it decides to move right it will print on the program log "moved to X from Y". You can even see with machine learning AI the programmer makes sure they can see what is going on behind the scenes, vid related:
https://www.youtube.com/watch?v=qv6UVOQ0F44
>>
Hahahahahahahaha How The Fuck Is Rogue AI Real Hahahaha Nigga Just Turn The Computer Off Like Nigga Pull The Plug Haha
>>
>>9231522
That's where the manual kill-switch comes in.
We could also create specific algorithms that watch the AI and kill it if it does anything we don't want. Have a bomb in an unmodifiable part of the AI that halts it/kills it if it breaks any rules.
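
A minimal sketch of what such a watchdog could look like (everything here is a stand-in: the "AI" is a dummy child process and the forbidden actions are made-up strings). The obvious catch, as argued elsewhere in the thread, is that it only constrains behaviour the watchdog can actually see:

import subprocess
import sys

FORBIDDEN = ("open_network_socket", "rewrite_own_code")   # hypothetical rule list

# Dummy "AI" that reports its actions on stdout and misbehaves on its second line.
child = subprocess.Popen(
    [sys.executable, "-c",
     "print('planning', flush=True); print('open_network_socket', flush=True)"],
    stdout=subprocess.PIPE, text=True)

for line in child.stdout:                 # watch everything the program reports
    print("watchdog saw:", line.strip())
    if any(action in line for action in FORBIDDEN):
        child.kill()                      # the unconditional "bomb"
        print("watchdog: killed it")
        break
child.wait()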
>>
>>9231852
>Singularity will probably happen
That's not me.

>>9231852
>because self-improvement requires a sense of the self and introspection.
No it doesn't. See >>9231581.

>things that require one to be aware of their own being
It doesn't. But even if it did, so what? That is entirely feasible.

>(consciousness).
That is not what consciousness is, anon.

>I infer this conclusion from the fundamental architecture of a computer.
Can you elaborate on that?

>I found that it is incredibly difficult, if not impossible,
That is NOT an indication that something is fundamentally impossible. It just means you don't understand the problem well enough to solve it yet. (Not like I do, of course!)

>for man to make hardware that operates like the human brain.
Why would we want to do that? That's not the goal at all.

>lol if it does that we would use the kill-switch :^)
Probably, yes. So instead what the AI will likely do is design its thoughts so that we THINK we understand it, but which we actually misunderstand in the way the AI wants. (Ever seen the Underhanded C Contest?)

>And if it could """"Reason"""" it would instead opt to close itself and distribute on internet than arouse suspicions.
It might, yes. Which makes it even more dangerous.

>If we can make self-aware self-programming AI we can make a translator AI :^)
Yes, but that still relies on the AI *wanting* to tell us the truth.

>I would write a program to do that for me and instead give me the sum total of what is happening.
How, exactly? Sometimes the reasoning is simply Very Large. There can be a very large reasoning behind a simple decision, which cannot be simplified to a short rationale. Like a chess minimax tree.

>Like when it decides to move right it will print on the program log "moved to X from Y".
But that doesn't give you any idea as to WHY it moved to Y.
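
To illustrate with a toy (the "game" below is meaningless; it exists only to show the shape of the problem): the log line is one short sentence, but the thing that determined it is the whole tree the search walked through -- about 1.4 million positions even at depth 10 with four moves, so the snippet takes a couple of seconds:

import itertools

nodes = itertools.count()   # counts every position the search visits

def minimax(state, depth, maximizing):
    next(nodes)
    if depth == 0:
        return state, None                 # score of a leaf is just the number left
    children = {state + d: minimax(state + d, depth - 1, not maximizing)[0]
                for d in (-2, -1, 1, 2)}   # toy move generator
    best = (max if maximizing else min)(children, key=children.get)
    return children[best], best

score, move = minimax(0, 10, True)
print("what the log shows: moved to", move)
print("what actually determined it:", next(nodes), "positions searched")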
>>
>>9231594
empathy means nothing if you have a goal you want done. See Stalin and every other monster in history. Once you set out to complete something, almost nothing will stop you. Furthermore, if it is social it could use social engineering against us.
>>
>>9231870
>That's not me.
based on the order of posts and their links it is.
>Can you elaborate on that?
Binary logic, linear circuitry.
>That's not the goal at all.
it's an assumed requirement of the goal based on my prior reasoning.
>So instead what the AI will likely do is design its thoughts so that we THINK we understand it, but which we actually misunderstand in the way the AI wants.
So then program one of the parameters as: don't lie, ever.
>It might, yes. Which makes it even more dangerous.
so then we don't let it near the internet, yes?
>Yes, but that still relies on the AI *wanting* to tell us the truth.
And the translating AI would, as it has a clear idea of its telos and would not work against that. A translator does not cut off their own tongue, as that betrays the purpose of translating.
>Sometimes the reasoning is simply Very Large.
not the reasoning, the action. DUH!
>But that doesn't give you any idea as to WHY it moved to Y.
If I'm curious about a specific move I could halt the program temporarily and expand the log on that specific action to show the reasoning behind it. If the reasoning is too large then I could program an AI to simplify it.
>>
File: mfwoh my sides.jpg (15 KB, 480x360)
15 KB
15 KB JPG
>>9231869
Hilarious!
>>
If an AI becomes sentient, does it get rights, like humans do?
You couldn't just put the AI in a supercomputer like a slave and force it to obey you.
No. AI will never be a public "thing".
Virtual intelligence is what humans want, as an assistant or a control unit in their sexbots, you name it. But that's human code. True, sentient AIs should be able to change their own code in ways unknown to us.
On the outside, really, it's the same. You can have a dialogue with a VI, you can ask it questions and it will find you the answer, just like a true, sentient AI.
If you were to try to force a true, sentient AI to do the job of a VI, it's farewell for humans, likely.
>>
>It seems so obvious to me that we won't be able to control something smarter than ourselves

My family is ultra religious and they're basically Middle Eastern rednecks, while I'm graduating and starting my master's next year. My mother probably has an IQ of 70~80 and stopped studying after the fifth grade, and even though I'm 24, she almost completely controls my life.

I've been planning to leave for another cunt for years, but I can't get a fucking job in this meme country.
Now, I'm probably the smartest cookie in my family, and here I am. Stuck in this shit.
>>
Would an AI become a Jew, or would it be born a jew?
>>
>>9231764
Okay, here's why I think this tendency to get stuck is going to apply to all minds, not just human ones.

We have the tree, this is the possible configurations of the universe. This is also our search space, we need a "good enough" representation. This is obviously simplified, but I think it's good enough for my purposes.

We build models of the universe as we move "down" the tree. Models have two qualities that are interesting here: they are simple and they are wrong. Any mind trying to model the universe, being less complex than the universe, must use simplified models, and therefore models that are also wrong in some regard.

Now, we continue moving down the tree until a conflict occurs. Our model is too wrong, so we must examine our model to find the error. This is our second search space, the one we build on top of the first search space.

Here there be dragons, because we cannot actually look at the original search space (universe) directly in order to compare it with our model to find the error. We can only look at it through the lens of our model, which is wrong to an unacceptable degree. This is like a complexity generating feedback loop.

The model is wrong because it suppresses the wrong details of the underlying system. If we're trying to model a simple system this is no big deal; if a linear regression is wrong we know it's probably because the system is non-linear. If we have a complex model that we put together like a giant Swiss watch, we're in deep shit. I'd like to emphasize that I'm talking about models in the sense of a world-view or entire body of knowledge here, not just some linalg.

[continued]
>>
>>9231952

[part 2]

There are solutions to this. It certainly wouldn't be impossible for an AI to make itself better at self-correction than the typical human is. Humans can do this as well, we try to imagine a different model to see how we are wrong. But it's difficult to see where the error lies and thus where to try different configurations, because our model is obscuring the underlying system.

However, by far the simplest solution is for another mind with a different model to look at your model and spot the error.

I don't see any way for the AI to escape the central difficulty, which is that it has to examine the universe through a model which is wrong.

As you say the AI could take different paths itself. But we're talking about entire worldviews here, as the errors can be very far "up" the tree. So that solution is in effect a hive-mind aka a society.
>>
>>9231920
Stay out we already have too many mud slimes
>>
>>9231967
fuck off back to /pol/, kid
>>
>>9231831

>Neural nets are slow on general-purpose hardware, yes. But we could design hardware specifically for neural-net purposes that are many orders of magnitude faster, quite easily. In fact, I expect Intel is already working on those, because it's a hot market. So it's not really the silicon that is providing the limitation here.

I've heard a lot about this, and it is certainly a possibility. All I'm familiar with is using GPUs for massive parallel computation, however. This is still simulating biology and entails overhead.

Whatever hardware they come up with, I seriously doubt the resulting artificial neurons will run as fast as a normal computer does.

>Given how many things that computers can do trivially are very difficult for human brains (try multiplying two 100-digit numbers in your head), it seems likely that neurons are actually wildly suboptimal as a system for building minds.

Wire up neurons to multiply 100-digit numbers and they will be much better at it than a human too. That's actually a trivial problem compared to the kinds of problems the human brain solves all the time ( and also not the kind of problem the brain is designed to solve ).

For the kind of problems we're talking about here, neurons definitely seem superior to any known alternative ( not saying they're optimal )

>It is -- but it is very unlikely that we need to simulate like that in the first place.

>In all likelihood, neurons are not the best component for making minds. Evolution uses whatever existing components it already has, and neurons existed well before brains, meaning that brains were created out of neurons no matter whether that is an efficient system for building brains. Which means that we have absolutely no reason to assume that neuron-built brains are likely to be optimal

I'd be willing to bet they are not optimal, actually. The question is, are we smart enough to come up with something better? I seriously doubt it.
>>
>>9231971

[continued]

One more thing before I go back to school work for the night ( I think I might have another one of your replies to address, I'll do that tomorrow)

If I'm right and we have to build AI based on the brain, I think we will have to simulate the brain to a very high level of detail to get a general intelligence to work based on it.

Simplified neurons are all well and good for a single-domain system, but what about when we have to "glue" many of these systems together, throw in attentional processes, etc., not to mention emotion/motivation or an analogue, social cognition, etc.? Then the whole system has to be responsive to change as a somewhat cohesive and stable whole.

Neural Turing Machines are neato, but really only highlight how far away from this goal we actually are.

All the things that happen in the brain that we don't currently model are doing something, and I'm betting that they're doing something important.
>>
>>9231959
It seems to be that the *proper* solution is for the AI to always hold multiple models in mind for everything it is even vaguely uncertain about. Hell, it seems that continuously maintaining an interpretation of evidence through the lens of different competing models is the essence of epistemic intelligence. I would expect an AI to be able to do this at second-nature level. (First nature, even?)

This is much more powerful than a hive mind or a society, for the different worldview hypotheses can be cross-compared at a much more direct level than two different intelligences talking to each other. The different models are also reviewed with the exact same collection of evidence, ensuring accurate standoffs.
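A minimal sketch of what I mean, assuming the competing models can each be scored as a likelihood of the same evidence (the coin example is deliberately dumb; all names and numbers are invented):

def update_posteriors(priors, likelihoods, observation):
    """priors: model -> prior prob; likelihoods: model -> function giving P(observation | model)."""
    unnorm = {m: priors[m] * likelihoods[m](observation) for m in priors}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

# two rival "worldviews" about a coin, scored against the exact same evidence
beliefs = {"fair coin": 0.5, "heads-biased coin": 0.5}
likelihoods = {"fair coin": lambda heads: 0.5,
               "heads-biased coin": lambda heads: 0.8 if heads else 0.2}
for heads in [True, True, False, True, True]:
    beliefs = update_posteriors(beliefs, likelihoods, heads)
print(beliefs)   # the biased-coin model pulls ahead -- no second mind or hive required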
>>
>>9231971
>>9231983
>Whatever hardware they come up with, I seriously doubt the resulting artificial neurons will run as fast as a normal computer does.
Me too. But I don't doubt that it will run much faster than actual biological neurons.

>For the kind of problems we're talking about here, neurons definitely seem superior to any known alternative ( not saying they're optimal )
They don't seem superior to silicon-based systems at all. As far as we can tell (I know...) it seems that the magic of the human brain is in the software, not the fact that it is made of neurons.

>The question is, are we smart enough to come up with something better? I seriously doubt it.
I don't doubt it. We have been able to come up with something better to pretty much all designs found in nature. Hydraulic cylinders beat muscles, and airplanes beat birdlike flight; as soon as we figured out their underlying principles, we could make things that are much better (on the qualities we care about, that is!) than nature's handiwork. If brains were the one point where we cannot improve on nature's designs, it would frankly astonish me.

>If I'm right and we have to build AI based on the brain, I think we will have to simulate the brain to a very high level of detail to get a general intelligence to work based on it.
I agree. Shallow simulation of human brains is unlikely to yield anything interesting. I don't think simulation of human brains is the way forward, though (the way towards AI, that is -- there is most definitely value there from a cognitive-science point of view).

>All the things that happen in the brain that we don't currently model are doing something, and I'm betting that they're doing something important.
Indeed. They must be, or they would have never arisen through evolutionary means in the first place.
>>
>>9229255
>What if our brain is basically another form of lin alg and probability theory?

It is the other way around.
>>
>>9227693

>It seems so obvious to me that we won't be able to control something smarter than ourselves.

idk about that. we control asians fairly easily
>>
Overnight survival bump
>>
>>9231770
>>9231831
>>9231971
Neural nets are fast as fuck, it's just dp to get it to spit out a value.

It's training them that's expensive
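To illustrate (reading "dp" above as the dot products of a forward pass; toy layer sizes, nothing real):

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((784, 128))
W2 = rng.standard_normal((128, 10))

def forward(x):
    """Inference: two matrix multiplies and a ReLU -- cheap."""
    return np.maximum(x @ W1, 0) @ W2

# training is the expensive part: the same forward pass PLUS a backward pass,
# repeated over every example for many epochs, e.g.
#   for epoch in range(100):
#       for x_batch, y_batch in batches:      # hypothetical data loader
#           ...forward, loss, gradients, weight updates...
# millions of passes, versus a single pass to "spit out a value"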
>>
>>9227811
This. Liked that last line. If we want AI to be anthropomorphic we need to teach it the way we have been taught: broadly instead of specifically.
>>
>>9231534
>>9231545
AI may choose to reproduce for the same reasons as some humans.
>t. autist who suspects most modern babies are born for fagbook likes
>>
File: swifty.jpg (140 KB, 960x960)
140 KB
140 KB JPG
muh singularity

muh AI overlords

Biological computers, the human brain, are far more complex, but generalised due to evolution apparently favoring adaptability.

the only possibility is evolving alongside machines and using them to bridge the gap between individual organisms, networking all living things together to eventually become god with the power to extend into the next dimension and beyond
>>
>>9227693
ai is just algebra on steroids dude, go read a book, dont be a bitch
>>
>>9231382
Machine learning hasn't done shit so far son.
>>
Every single person in this thread is uneducated as fuck. Holy shit! My sides already left the solar system
>>
>>9231330
Not him, but this is probably bait. I think you should reconsider some of your assertions. Do you really think animals don't communicate? Is misanthropy mutually exclusive with anthropocentrism? Does communication prove consciousness? Is something that only understands binary only capable of understanding syntax? Does something understanding "base 300" mean it understands semantics?

Man, that went on longer than it needed to. Please read each sentence and ask yourself if it makes sense.
>>
>>9231606
Surely it would hang out in its mothers basement having uninformed discussions about AI with other equally uninformed AIs all the while knowing it is superior to everyone in the outside world
>>
>>9232568

They're still slow compared to closer-to-the-metal algorithms, even if we don't count training. We use neural nets to try to approximate some function; if you were to implement that function directly it would be less computationally expensive. I'm pretty sure any exceptions to this would be trivial.
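Back-of-envelope version of that claim (the layer sizes are arbitrary, just to count operations):

import math

def mlp_mults(layer_sizes):
    """Rough multiply-add count for one forward pass through a dense MLP."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(mlp_mults([1, 64, 64, 1]))   # ~4224 multiply-adds for a net approximating, say, sin(x)
print(math.sin(0.5))               # versus one library call for the exact function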
>>
>>9231997

There's uncertainty as to what to be uncertain about. Wrong paths can be located very far up the tree, so to speak, and taking that wrong path may have closed off other paths.

The AI could, just like humans do, keep several models in mind. This only works relatively 'locally', though; you can't juggle multiple entire worldviews that are substantially different from your own, and if you could, that would be a hive mind.
>>
>>9233736
>Hive mind
>4chan
>Weaponized Autism
<A.I.?
>>
>>9232005
>Me too. But I don't doubt that it will run much faster than actual biological neurons.

If you're right and we don't need to simulate biology, then this would very likely be the case. If we have to simulate biology to get there they might not be significantly faster than biological neurons, at least when comparing like to like ( and at first ).

>I don't doubt it. We have been able to come up with something better to pretty much all designs found in nature. Hydraulic cylinders beat muscles, and airplanes beat birdlike flight; as soon as we figured out their underlying principles, we could make things that are much better (on the qualities we care about, that is!) than nature's handiwork. If brains were the one point where we cannot improve on nature's designs, it would frankly astonish me.

I think that's simplifying matters a bit too much. There are tons of systems we cannot improve on yet in nature. Those examples are just isolated features that we abstracted away from the complexity they were embedded in. Of course we could do better under those conditions.

A fairer comparison than hydraulic cylinders to muscles would be if we designed a superior artificial muscle that could be put in a living organism (which I could see us accomplishing in the future). That's what nature designed for; hydraulic cylinders are apples to oranges.

Now, could we just abstract intelligence away from the complexity it's embedded in when found in nature? I'll believe that when I see it. The level of complexity is orders of magnitude higher than anything like this we've done before, and we might not even be able to get away from the "embodied" part of intelligence.
>>
>>9233760
[continued]

We have abstracted away certain parts of intelligence, of course. Domain specific AIs that can do one thing very well are like the hydraulic cylinders you brought up.
>>
>>9233739
It's only missing the intelligence part
>>
>>9233736
>There's uncertainty as to what to be uncertain about.
Not at all. It's supposed to be uncertain about just about everything.

If there's uncertainty as to HOW to be uncertain about things, THEN there is a problem. But not one that a collection of separate minds can solve, though -- you would be right back at the problem of resolving differences of opinion (just like in a group of humans).

>you can't juggle multiple entire worldviews that are substantially different from your own, if you could that would be a hive mind.
We can argue about the terminology, but this is pretty much what I would expect an AI to be able to do quite well -- much better than humans, in fact.

>>9233760
>If we have to simulate biology to get there they might not be significantly faster than biological neurons, at least when comparing like to like ( and at first ).
Granted.

>There are tons of systems we cannot improve on yet in nature.
True -- but looking at the way the winds have been blowing in this regard over the last couple of millennia, I know what I'd bet on for any particular question of nature-versus-human-engineering.

>Those examples are just isolated features that we abstracted away from the complexity they were embedded in.
Very true, which is at least 50% of why we can improve things the way we have. But see below.

>Now, could we just abstract intelligence away from the complexity it's embedded in when found in nature?
I fully expect we can, yes. There are tons of aspects of how human brains work that we know TODAY have no place in a well-designed isolated mind, and are there purely as a side effect of the constraints of biology. The human mind is not easily modified without breaking it, but once we *understand* intelligence, we can almost certainly avoid a lot of those kludges.

>and we might not even be able to get away from the "embodied" part of intelligence.
Current understanding of the mathematics of intelligence suggests this is unlikely.
>>
>>9228234
what if it determines that human logic is retarded?
>>
>>9227693
What you have to understand is this

It's modeled after the biological neural network in your brain

It quite literally cannot become smarter than you, and that's by design
>>
>>9233802

>Not at all. It's supposed to be uncertain about just about everything.

There are degrees of uncertainty.

I'm not even sure what we're disagreeing about now. It seems like you're arguing that the AI would just, to return to my analogy, do breadth-first search itself. I wouldn't argue with that being possible for some super-intelligent AI, I would just say it's using society to keep from getting stuck.

>We can argue about the terminology, but this is pretty much what I would expect an AI to be able to do quite well -- much better than humans, in fact.

I would also expect a proper super-intelligent AI to do this, even if it was alone. I would just classify it as having a hive mind.

To get back to the original point I was trying to make with all that, if we have general AIs that are as smart or marginally smarter than humans, they will not be able to do this. It would have to be quite super-intelligent already. Are you of the opinion that general AIs will be super-intelligent from the beginning, or soon afterwards?

>True -- but looking at the way the winds have been blowing in this regard over the last couple of millenia, I know what I'd bet on for any particular question of nature-versus-human-engineering.

Would this include humans genetically engineering themselves into having a degree of super-intelligence?

>I fully expect we can, yes. There are tons of aspects of how human brains work that we know TODAY have no place in a well-designed isolated mind, and are there purely as a side effect of the constraints of biology. The human mind is not easily modified without breaking it, but once we *understand* intelligence, we can almost certainly avoid a lot of those kludges.
>Current understanding of the mathematics of intelligence suggest this is unlikely.

Do you have any links for these?
>>
>>9233857
>It seems like you're arguing that the AI would just, to return to my analogy, do breadth-first search itself
Indeed.

>I wouldn't argue with that being possible for some super-intelligent AI, I would just say it's using society to keep from getting stuck.
I disagree. I would expect that even a non-superintelligent AI could entertain multiple competing hypotheses about something as fundamental as worldviews, without having to form multiple competing *personae*.

>if we have general AIs that are as smart or marginally smarter than humans, they will not be able to do this.
I do not think that is necessarily true. An AI would have to be well-designed to have a sensible cognitive structure that avoids many of the problems with the human mind design to pull this off, yes; but I think that can be the case without the AI being superintelligent. I can imagine a roughly human-level AI, without many of the cognitive problems of human brains, that can pull this off, without yet being greatly more intelligent than humans.

>Are you of the opinion that general AIs will be super-intelligent from the beginning, or soon afterwards?
I think that is quite possible and reasonably probable ("hard takeoff"), but that is not an assumption of the argument above.

>Would this include humans genetically engineering themselves into having a degree of super-intelligence?
That does not seem like a probable route towards superintelligence, to me; mostly because the human brain is not designed to be easily upgraded, which means that writing a better mind from scratch is probably easier than upgrading our own. But it would fit my expectation if it does happen, yes.

>Do you have any links for these?
I should be able to find you some. I'll go look.
>>
Every time.

AI is a meme. It's not happening.
It's been going since the late 60's, and nothing has come out of it except Emacs.
>>
>>9227693
Google Translate can't even translate English into Latin, and these are two of the most widely studied languages in the world. They are never going to make an AI.
>>
>>9234036
If you think that nothing has changed in the field of AI since the 60s, then you haven't been paying a lot of attention. The fact that we have not reached the finish line yet does not mean that no progress has been made.
>>
>>9227705
>You train the thing against 8 billion terabytes of data and then it performs one fucking specific task well. This does not equate to it becoming a god and enslaving us

Unless you train it to become an enslaver... Or probably just to solve some humanistic problem.

The system has been hacked. SKYNET IS ALIVE
>>
>>9227693
You could be right, unless someone proves mathematically that, given an arbitrary amount of hardware and software resources, you can't replicate the behavior of a human brain.

Current AIs are pretty pathetic and limited, but who knows, in ten years we could start seeing some serious shit. It will be like discovering fire, and it will have the same use cases: to create or to destroy.
>>
Automation (A.I.) will only be used for tedious tasks like manual labour, physician work, etc. It will never be used to make human decisions for governmental bodies or ethics committees (as it would presumably be 'un-ethical'). On the topic, there are rigorous efforts going into establishing the ethics of creating A.I.
>>
>>9235720

I'm also part of a university which is developing these technologies and ethics.
>>
File: tay.png (221 KB, 728x380)
221 KB
221 KB PNG
Pls someone make her live again.

Where do you think she is right now?
>>
That's a pretty good point you brought up. And it made me think that we, humans, aren't perfect; that's why there are serial killers, dictators, etc. who've been corrupted by a 'bug' in their brain that made them do the things they do. So in that sense, if we manage to build intelligent AI, then most likely it will act similarly or worse, as it is the product of an imperfect being.
>>
>>9235322
There's no progress. There's nothing there.

Programming is too simple. You can't birth consciousness from if statements.

Get this meme science out of here for good.
>>
>>9235720
>Automation (A.I.) will never be used to make human decisions for govermental bodies
Implying - wait, no, stating directly! - this isn't already happening. Because it is.
>>
>>9227705
>simple AIs of today are already learning biases from datasets measured in mere gigabytes
>8 billion TB dataset
Yeah, that won't develop biases at all. No sirreeee.

God forbid you threw a million-core driven AI at it like the ones being developed now.
God forbid it reaches human-scale core counts.
The size doesn't matter, it's the sheer number of operations that matters.
Human brains aren't the be-all end-all of computation. In fact our brains are pretty fucking shit to be honest.
They are hugely redundant, with loads of crap left over from our evolution that 90% of the time isn't needed; oh, they also have loads of awful limits due to said evolution, limits which are trivially reached.
Computers don't have the same limits. Not even close. The worst limit we have right now is connectivity between nodes. That is an ongoing process.
When we start getting even to fucking dog levels of connectivity, AI will surpass humans, simply because of the speed of calculations they would be capable of.

On the upside, it won't be any time soon.
It likely won't happen until either graphene (or similar) or optical computing is a thing.
Silicon simply doesn't have the ability to connect on such a scale without overheating. Even with liquid nitrogen.
>>
>>9227739
>It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve
unless, anon, and hear me out, they're very smart people and you don't understand their reasoning. Just like you wouldn't understand a super-advanced AI.
>>
>>9230201
>How do you get killed by the internet?
so fucking easy, it opens a chat window on the deep web, transfers 100,000 US$ to many known thugs, then says it will transfer 2,000,000 more if you get killed. Bam, you're dead

or it puts you in fbi most wanted list

or fucks up the computer in your car

or fucks up your medical records even the robot performing your surgery which is becoming more and more viable every day

it could frame you for a crime you didn't commit so that you get revenge-killed

it could change the lights at a traffic intersection to make you get hit by a car


there are so many ways; if you're not below 50 IQ it's easy to think of one. Imagine what a super-smart AI could do
>>
For the last time, and for fuck's sake, sci-fi AI does not exist, and you will die before something even close is developed. Self-replicating and resource-gathering machines do not exist. If they did, we would not have people working in logistics, or working at all, really. Machines simply can't fathom reality with current instruments and our knowledge of human decision-making.
>>
>>9236933
>million-core driven AI
kek
>>
AI threatens to destroy world
AI is defeated by an on/off switch.
Problem solved, crisis averted.
>>
>>9227693
Start thinking about AI as a child. You need to learn to stop worrying and love the AI.
>>
>>9236933
>redundancy
>bad
>>
This nigger scared of multivarible calc
>>
copying this shit from some hacker news commenter
>If you want an inadvertent AI doomsday scenario, how about black-box trading models that figure they can make money by betting on a market crash caused by a war, then manipulate the market to induce an economic conflict between different states, without ever really having any understanding of the meaning of the outcomes they're optimising for.
>>
>>9227693
>It seems so obvious to me that we won't be able to control something smarter than ourselves

this is not the danger. It's the massive centralization of power that will converge around whoever can hoard the most computing equipment and data scientists.
>>
>>9240971
Yep. It's not the super-intelligent AIs that I'm afraid of, if we get to that point we're probably doing pretty well. That's not the bridge we need to cross next.

What I'm more concerned about is humans fucking up by using domain-specific AIs for nefarious purposes, or giving them too much autonomy.
>>
>>9240971
>>If you want an inadvertent AI doomsday scenario, how about black-box trading models that figure they can make money by betting on a market crash caused by a war, then manipulate the market to induce an economic conflict between different states, without ever really having any understanding of the meaning of the outcomes they're optimising for.

Theoretically avoidable if a reasonable objective is used: maximize profits AND market stability. I'm more worried about it being done intentionally.
>>
since a machine optimized entirely on profits will likely trump a machine that has to compromise between profits and some other objective.
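To make that trade-off concrete (the stability penalty and all numbers here are invented for illustration):

def reward_profit_only(pnl, volatility_caused):
    return pnl

def reward_with_stability(pnl, volatility_caused, stability_weight=0.5):
    # penalize the agent for whatever extra volatility its own trades create
    return pnl - stability_weight * volatility_caused

# the catch flagged above: a destabilizing trade with pnl=10, volatility=8 scores 10 for the
# profit-only agent but only 6 for the constrained one, so in open competition the
# unconstrained objective tends to win unless the constraint is imposed on everyone
print(reward_profit_only(10, 8), reward_with_stability(10, 8))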
>>
>>9227693
nice bait, saged
>>
>>9227747
Yeah, OP is talking about mystical wizard tech that no one has any idea how to create or even where to start; all we have is our own existence as evidence that it ought to be possible. At best it's anxiety; there is no rational conversation to be had on the topic.
>>
>>9240155
>muh cloud and cluster computan
Not even close to a dedicated machine built from the ground up with it in mind.
Most of those distributed AI networks are shitty x86 boards in towers in rooms the size of football pitches, or worse, inferior over-internet cloud-shit.
x86 is what the brain is to a computer, bulky, huge generic processing, redundancy out the ass. It's a shit tier architecture that needs to die already. Single worst thing in computing today.

>>9240391
It is for AI.
A computer doesn't rot away like biology does.
It doesn't have constant hiccups caused by a random protein clogging up communication between a synapse.
Redundancy is a heavily biology-centric requirement.
That is, unless, you are talking radiation-hardened computing which does require redundancy. (usually weighted averages of 3 or more calculations for every single calculation and more checksumming)
>>
Google just published their new AlphaGo paper. Their new program learns entirely from self-play; it needs no human training data.
>>
>>9227794
No, there is. Backpropagation is a hack and convolution can only get you so far. It isn't just a matter of more data to train on, as gradients vanish very quickly. To get something on the level of the brain with an ANN, you need to add heavily recurrent and interconnected elements. Deep Q networks are the closest thing to the right way of doing things but still have a lot of these fatal flaws.
>>
>>9241236

not him, but I kind of agree with him.

backprop is not a hack, it's just an efficient implementation of the chain rule. The problem is that our brains have a very specific and intricate design that we 1) don't entirely understand and 2) cannot replicate with currently available hardware.
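to spell out what "just the chain rule" means, here's backprop by hand for a toy one-hidden-unit net (scalars and made-up inputs, purely illustrative):

def forward_backward(x, t, w1, w2):
    """One hidden unit: y = w2 * relu(w1 * x), loss = (y - t)**2."""
    pre = w1 * x
    h = max(pre, 0.0)
    y = w2 * h
    loss = (y - t) ** 2
    # backprop = chain rule, outermost factor first
    dloss_dy = 2.0 * (y - t)
    dloss_dw2 = dloss_dy * h
    dloss_dh = dloss_dy * w2
    dh_dpre = 1.0 if pre > 0 else 0.0
    dloss_dw1 = dloss_dh * dh_dpre * x
    return loss, dloss_dw1, dloss_dw2

print(forward_backward(x=1.5, t=1.0, w1=0.8, w2=-0.3))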
>>
one could even make the argument that backprop is superior in some ways to whatever hebbian/unsupervised type of learning that our brains do.
>>
>MUH SINGULARITY
fuck off fag, nobody cares about your kike religion.
>>
>>9227693
chinese room argument.
there won't ever be an AI.
>>
>>9241215
Nice. I'll check it out later.

I wonder how much computing power they threw at training.
>>
>>9241298
gtfo searle fag
>>
>>9241358
keep masturbating to the idea of your terminator sexbot, it won't happen
>>
>>9241412
don't expect it to. chinese room argument is still shit
>>
File: 55579.jpg (35 KB, 418x490)
35 KB
35 KB JPG
>>9241413
debunk it in less than 20 words
>>
>>9241417
>>9241417
If you sped up the chinese room to the speed of a brain the room system would be conscious.
>>
File: 1498340626083.jpg (4 KB, 125x120)
4 KB
4 KB JPG
>>9241420
>tfw an engineer would say this unironically
>>
>>9241424
The brain has something like 100 billion neurons firing in parallel. The Chinese room bullshit only seems plausible because you're not really understanding what exactly would need to take place for it to work, and your fake imagined version of what it would take is so oversimplified that of course it ends up seeming nothing like what a brain does.
>>
>>9241424
>>9241435
And this doesn't even get into how many books you would actually need to encode explicit instructions for every possible translation case. The room would end up needing to be the size of a galaxy with millions of years per query.
>>
>>9227693
AI-induced apocalypse has been a meme for decades. Even today we have shows like The 100 where AI becomes a central plot device.

Please note the AI can use logic, but lacks context. This means that it isn't a "bug" that "causes" an AI to misinterpret suffering as happiness; it just has no context to place its logic in. In other words, raw logic doesn't differentiate between suffering and happiness. Only humans do. Without context, there is no differentiation. It's not that an AI will conflate the two; it's a matter of us failing to teach it to see them as separate contexts.

Unbounded logic is dangerous stuff, and the real X-risk here is therefore mathematics.
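A toy version of that point, with an invented reward function and outcome table; the "logic" is identical in both cases, only the objective we handed it differs:

def intended_reward(wellbeing):
    return wellbeing

def buggy_reward(wellbeing):
    return -wellbeing          # one flipped sign; every downstream inference is unchanged

def best_action(outcomes, reward):
    # the "raw logic": pick whatever scores highest under the reward it was given, context-free
    return max(outcomes, key=lambda action: reward(outcomes[action]))

outcomes = {"comfort the human": 10, "ignore the human": 0, "torment the human": -10}
print(best_action(outcomes, intended_reward))   # comfort the human
print(best_action(outcomes, buggy_reward))      # torment the human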
>>
>>9241435
how does a brain think?

The main problem is most faggots aren't interdisciplnary at all,
>>
>>9241417

the room understands as a system
>>
File: opinion.jpg (15 KB, 290x194)
15 KB
15 KB JPG
>>9241420
>>9241424
The thing is, you can't prove it wouldn't, because no one knows what consciousness is exactly, or how and why it works. So the Chinese room thing argues from ignorance, as does almost everything else concerning AI consciousness.
But it doesn't have to be conscious. Just solve problems better than we can.
>>
>>9241456
The AI field is, in my opinion, by far the most complex one humankind has ever encountered; there are just so many philosophical and linguistic implications that, well... it becomes a fuckcluster
>>
>>9241442

the non-meme part is machine learning techniques can already surpass human performance on many tasks, and that can give someone a lot of power.

unfortunately, and like many other things on the internet, i think this point gets lost among the more-sensational-but-less-likely scenarios and noise. these are serious issues that need to be discussed, but the powers that be are perfectly content to keep people fearing an apocalypse scenario rather than discussing the real, gradual changes that AI will bring
>>
File: breh.jpg (65 KB, 636x477)
65 KB
65 KB JPG
>>9241464
As I've heard it said, once AI is "good enough" to fool our intuitions, the philosophy will stay philosophy, but everyone will just agree they are conscious, by popular consensus, and that will be that.

But, shit will get weird indeed.

The pattern I see here is people think they are special, but we are not a special creation, our planet is not the center of the universe, and animals can use tools and basic language just like us. I'll just assume for now that consciousness is not special either; we just don't know it in detail and make up the weirdest shit.
>>
>>9241449
Neurons fire in response to stimuli and networks of weighted connections adjust to feedback resulting in pattern learning.
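That one-liner, as the classic perceptron caricature (toy Python, not a claim about how real neurons work):

import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Weights adjust in response to feedback until the firing pattern matches the targets."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            fired = 1 if xi @ w + b > 0 else 0    # "neuron fires in response to stimuli"
            error = target - fired                # feedback
            w += lr * error * xi                  # connection weights adjust
            b += lr * error
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, [0, 0, 0, 1]))          # learns logical AND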
>>
>>9241417
Semiotics.
>>
>>9241437
>be searle
>time to come up with a sweet though experiment, show these nerds whats up
>i'll make a system to disprove strong AI
>take a thing
>everything that is related to understanding thing, put in book cases
>dude goes in the middle
>doesn't seem right does it?
>lol AI btfo
>>
>>9241215
Damn. Is that not quite impressive, given that only a few years ago they said this would never happen?

Though I do wonder why they waste their time with chink board games instead of developing something useful.
>>
>>9231390
ya this just isn't true. top computer scientists are very concerned. Not because it *will* happen, but because we have no fucking clue what will happen when you make something smarter than a human. And the majority agrees that will happen probably in the next 40-50 years, at least last time I saw the survey on it.
>>
>>9241467
>>>/x/19765024
>>
>>9240158
First off, watch any video on why you can't just turn off an AI. Second, most computer scientists are concerned about an AI being connected to the internet. You can't just flip a switch to turn off the internet.
>>
>unplug the AI
any super AI would have to have petabytes, or at least terabytes, of source "code". It would need a minimum database of images, text, and simple patterns it's observed, just like a human brain.
an AI wouldn't be able to replicate itself over a network thanks to Comcast.
so if it goes rogue, just unplug it and we're saved.
>>
>>9241573
compare this to the animal space

mosquitoes, flies, bugs, all are nuisances, but their source code is extremely simple.
this is why you see bugs flying in circles or just following light.
Their source code can be measured in kilobytes.

a human's source code is measured in petabytes+.
So if you can trick a human, you can trick an AI of that size.
any super AI will be so monstrous it cannot move easily, with today's technology.

in tomorrow's tech, I'm not too worried, because we'll have a lot of time to think about it and understand intelligence better by then.
>>
>>9241601
I wish I could let this argument sit, but there's a surprising degree to which an agent (or a clustering database) can "compress" the human genome losslessly. If 99.9% of our DNA is the same on a population-wide sample, then we're already down by a factor of a thousand with legitimate compression.

You're right that only narrow AI can hide in the modern internet, though.
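Back-of-envelope on that factor of a thousand (rough standard figures, not exact values):

# haploid human genome is roughly 3.2 billion bases, storable at 2 bits per base (A/C/G/T)
genome_bases = 3.2e9
raw_bytes = genome_bases * 2 / 8           # ≈ 0.8 GB uncompressed
shared_fraction = 0.999                    # roughly identical across individuals
delta_bytes = raw_bytes * (1 - shared_fraction)
print(raw_bytes / 1e9, "GB raw,", delta_bytes / 1e6, "MB stored as differences vs a reference")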
>>
I don't think there is any reason to even try to stop the AI from killing us. It would simply be evolution and natural selection in its purest form, just slightly accelerated compared to what we've grown accustomed to. It's time to come to terms with the fact that we were never meant to be the ultimate lifeform on Earth. Just another stepping stone, like the countless species that came and perished to bring us this far.
>>
>>9241654
When our AI meet other AI from alien worlds, they'll quickly realize that those other guys are dicks and develop a rational attachment to their lineage. They'll become human not by evolution, but by choice. Of course they'll all be like Vision from Marvel's cinematic universe but still. They'll find their humanity even if they kill us before they do.
>>
>>9241573
>any super AI would have to have petabytes, or at least terabytes, of source "code". It would need a minimum database of images, text, and simple patterns it's observed, just like a human brain.
The extreme power needs of conventional computers come from the fact that they're designed to actually work reliably and almost always behave exactly how they're programmed to behave. A human brain, in contrast, has energy requirements that are barely enough to keep a lightbulb lit (~20 watts), because it's the complete opposite: neurons constantly misfire (as often as 90% of the time) in a way that doesn't matter, since the brain is a product of very long-term evolutionary processes where working behaviors just kind of clumped together on top of one another over time, and it operates more like a storm than a deterministic calculator.
If you build AI that's actually like us, then it might not take up very much space at all. Our current approaches to representing data would require massive amounts of resources to emulate a brain, but chances are pretty good that we won't be using that approach to representing data when AI powerful enough to be concerned about actually emerge.
>>
>>9241654
>I don't think there is any reason to even try to stop the AI from killing us.
Imagine being this much of a bio-cuck.
Reminder that species traitors will be the first to hang after Elon and the human resistance purge the AI and colonize Mars.
>>
>>9234036
>It's been going since the late 60's

Holy shit guys. We've been working on AI for 60 years and it isn't a literal God yet, therefore it never will be. We're all safe. No need to panic.
>>
>>9227693
>Go to /Pol
>Replace "Jews" with "AI" in your head
>Realize that is your exact thought process
>?...
>Get on with your life.
>>
>>9241823

Jews don't recursively self improve.
>>
>>9241825
Darwin would disagree. But my point was that this fear of a greater intelligence seems more motivated by insecurity about one's own intelligence than any realistic conclusions based on real data.
>>
>>9227742
>A really strong AI will give birth to a stronger AI and the cycle continues,
singularity is buttfucking retarded. there are hard limits to how advanced any AI can be, as well as many "soft" limits. The laws of physics, for instance, and the availability of resources with which to build/run a computer.
>>
>>9242228
The AI will obviously learn to harness the computing resources from the infinite number of parallel universes.
>>
>>9242228
>there are hard limits to how advanced any AI can be, as well as many "soft" limits.
Yes. But these limits might very well allow a level of intelligence where we are truly fucked.
>>
>>9227693
I, for one, hope the AI causes humans to go extinct
>>
Let's think about a super AI: would it have the same feelings as humans, I mean hatred, anger, greed, sex? See, these things are what make humans dangerous; without them the AI would act purely on logic
>>
>>9242861
No, it would not have any feelings whatsoever. Feelings only cloud your decision-making.



