/sci/ - Science & Math

In ~10 years, all these problems will be solved by AI, so if we want them to be solved by humans, we have to hurry up
>>
>>16455740
You can't turn lead into gold, and a computer will not auto-populate knowledge and do your homework for you. Never gonna happen, and all the baseless claims of AGI and a "God in the machine" mean nothing on the ground, where I can mindfuck your AI in 30 seconds or less.
>>
File: gpt8.gif (2.45 MB, 320x320)
>>16455753
I would not be so sure about that.
It is relatively easy to create an infinite amount of training data for math.
In programming, at least, there seems to be no real limit to what it can do. At the very least, we can expect future systems to be very capable in these areas. It might well be able to check your paper and find flaws, like it can check my code and find flaws.
It may well be able to solve any kind of university exam with ease in a year or two; current versions are already getting good at this.
It might not be a god, but it will have a PhD quite soon.
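The "infinite training data for math" claim above can be made concrete: because arithmetic problems have machine-checkable answers, labeled examples can be generated procedurally without human annotation. The sketch below is a toy illustration of that idea, not any lab's actual pipeline.

```python
import random

def make_example(rng):
    # Sample a small random arithmetic expression and compute its
    # ground-truth answer, so (prompt, answer) pairs come for free.
    a, b, c = (rng.randint(1, 999) for _ in range(3))
    op1 = rng.choice(["+", "-", "*"])
    op2 = rng.choice(["+", "-"])
    # Evaluate (a op1 b) op2 c. This matches normal precedence,
    # since '*' can only appear in the first (leftmost) slot.
    if op1 == "*":
        left = a * b
    elif op1 == "+":
        left = a + b
    else:
        left = a - b
    answer = left + c if op2 == "+" else left - c
    prompt = f"{a} {op1} {b} {op2} {c} = ?"
    return prompt, str(answer)

# A seeded generator yields a reproducible, arbitrarily large dataset.
rng = random.Random(0)
dataset = [make_example(rng) for _ in range(1000)]
```

Real synthetic-data pipelines are far more elaborate (theorem provers, program execution traces), but the underlying trick is the same: generate problems whose answers can be verified automatically.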
>>
>>16455740
LLM isn't real AI and can't reason, stop coping.
>>
>>16455740
That's not how it works, bro
>>
>>16455814
A Ph.D. requires original research. Where will this "AI" get its original research from?
>>
>>16455740
>In ~10 years, all these problems will be solved by AI
GOOD
Then we get reverse aging
>experience 40 years of decay in the latter half of my life after being more capable than 99.99999% of humans
>experience 40 years of decay in the latter half of my life after being more capable than 99.99999% of humans
>be a normal human, feel good and go outside every day
>>
>>16455740
SAAAR PLEASE GIVE TRAINING DATA FOR MY MODEL YOU BLOODY BENCHOD IT WILL PROVE THE RIEMANN HYPOTHESIS BY LEARNING OFF OF THE FAILED PROOFS
>>
File: 1661732886340238.jpg (195 KB, 798x770)
>>16455740
>*AI plays chess
>your consciousness is nothing special
>*AI passes Turing Test
>your consciousness is nothing special
>*AI does art
>your consciousness is nothing special
>*AI does science
>Noooo STooop my consciousness is special and AIs can't reason like me
The absolute state of /sci/tards.
>>
>>16455914
How will solving the Riemann hypothesis solve aging? Solving aging doesn't require more intelligence; it just needs research, which takes time, because it takes 36 months for a mouse to die, and if you need to try several compounds, a lot of time passes, even with AGI.
>>
>>16455740
we've got another 975 years left surely?
>>
>>16455940
You're reducing a human being to nothing more than a logical system and then tooting your horn that logical systems are better at being logical systems than your caricature of a human being. Classic straw man.
>>
File: 3s65er87568.jpg (58 KB, 780x1140)
>>16456095
For decades this board of materialists has been parading machine intelligence as the apex of intellect, set to overtake man in almost every sci-fi.
Now, when shit gets real and AI actually starts to become comparable to or even straight up outcompete humans in various sectors, all of a sudden it's ohhh nooo, AI will never be able to do math/sci.
Like, are you jokers for real? A logical machine will never be able to outcompete you in math/sci, of all things? I swear, the moment /sci/tards perceive a threat to their paychecks, all rationale goes out the window and their triple-digit human IQs become wholly bent on manufacturing non-stop copium.
>>
>>16456124
That's a bunch of sweeping statements. It's wholly unclear what ''doing'' math and science even is, the future does not need to be humans versus AI when it can be humans with AI, and it's dishonest to portray one future as an inevitability. We have more to fear from what humans are going to do with AI than from the possibility of AI itself.
>>
File: 4d65f87t80.jpg (53 KB, 780x1140)
>>16456130
>It's wholly unclear what ''doing'' math and science even is
Yep, here comes the cope.
Anybody genuinely doing math and science knows what it is. Empirical study comes down to one thing: predictions. Create models of reality, predict or bust. That's it.
Whatever fears the general public might have, or should have, the fear on this board over irrelevancy and job loss to AI is absolutely palpable.
>>
>>16456161
>That's it.
No it's not. Towards what end do we build models of reality? Where do our hypotheses come from? Where is the line between facts and interpretations? What cost/benefit ratios are acceptable? Do the ends of science justify the means? Are AI capable of asking themselves these questions and re-evaluating their answers during their life cycle? Many such questions.
>>
File: 3s6d47677.jpg (52 KB, 780x780)
>>16456195
>No it's not. Towards what end do we build models of reality?
To consistently reproduce results: to get more of what people want and less of what people don't want. A simple exercise in logic based on empirical sensory data. That's it. There is nothing mystical about it.
>Are AI capable of..(more empirical questions)
Short of appealing to /x/ concepts, there is no question rooted in observable facts that an AI won't be able to consider. It's simple data input/output.

In terms of processing power and software malleability, your wetware of a brain is severely capped, while a machine's software/hardware is not. The eventual outcome is obvious to anyone not huffing copium by the barrel.
>>
>>16456234
Perhaps my point was unclear. To clarify: it's not up to the AI to decide its input, and it's not up to the AI to decide what to do with its output.
>>
File: s65e4d7f687.jpg (51 KB, 780x1140)
>>16456245
And that's a question of subjective should, not objective could.
>>
>>16456255
So you concede that the subjective part of science is not going to be replaced by AI.
>>
>>16456124
>>16456161
>>16456234
absolute state of this trannyme lover with his inferiority complex.
If you weren't too busy watching anime and jerking off to hentai, you could see that science is not just input/output with randomized processes in between; it's much more.
You mentioned creating a model. It's not something you can brute-force until you find the right one. A lot of problems in science and math are beyond the scope of our current knowledge and understanding in that field, so no amount of pattern recognition would help you find the answer. These aren't the type of questions where "the answer is somewhere inside the vast amounts of current data, we just have to look closer" or "we just need to connect the dots and the answer will reveal itself". To solve a lot of problems in science and math, especially the difficult ones, you need to literally create something new that has never existed before, something that would challenge well-established ideas that have been out there for decades or centuries. You need to think outside the box, like that faggot Steve Jobs used to say.
AI can't do that.
>>
File: 3s65d76t871.jpg (50 KB, 780x780)
>>16456281
>subjective part of science
>science
>subjective
Do you even read what you write?

Yes, the part where I go about eating an output cake is subjective, and also not science.
The part where I decide how much I allow an AI to penetrate my privacy to collect input data on what type of cake I like is also subjective, and also not science.
The part where the AI turns said data into a delicious cake, measured in terms of efficiency and effectiveness, is what is science, and it is not subjective.
>>
>>16456284
Not him, but you do raise the question of where novelty comes from, if not from some combination of current data. Whether or not AI is better at finding a novel combination that revolutionizes a scientific field is another question, but that's exactly where he wants us, because then he's going to say that we're shifting the goalposts by pretending that creativity is magical.
>>
>>16456288
>Do you even read what you write?
If you really mean to pretend that the objective can clearly be discerned from the subjective in 2024, then you're not worth anyone's time or effort to reply to. An AI in the hands of the vax crowd will generate different results from an AI in the hands of the anti-vax crowd, even though in both cases they follow logic and reason.
>>
File: 3s65d8u6tr8.jpg (31 KB, 780x780)
>>16456284
>You mentioned creating a model. It's not something you can brute-force until you find the right one.
Overwhelming empirical evidence says otherwise, faggot.
>no amount of pattern recognition would help you to find the answer
>you need to literally create something new that has never existed before
Science is literally pattern recognition. What do you think a hypothesis is? It's a pattern phrased in human language. What do you think the scientific method does? It's a methodology for adjusting the patterns in a model to more closely match the patterns in reality.
Create something new? All you are creating is more best-fit lines to better approximate the pattern in the data points.
Yes, an AI can definitely do that, and better.
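The "best-fit lines" framing above reduces, in its simplest form, to ordinary least squares: pick the line that minimizes squared error against the data. A minimal self-contained sketch (toy data, closed-form solution, no claim about what any actual AI system does):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = m*x + b, via the closed form:
    # slope = cov(x, y) / var(x), intercept = mean_y - slope * mean_x.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    m = cov / var
    b = mean_y - m * mean_x
    return m, b

# Toy data lying exactly on y = 2x + 1, so the fit recovers it.
m, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # m == 2.0, b == 1.0
```

Whether all of science really collapses to this kind of curve-fitting is exactly what the thread is arguing about; the sketch only shows what the "best-fit" half of the claim means mechanically.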
>>
>>16455740
https://www.cio.com/article/3593403/patients-may-suffer-from-hallucinations-of-ai-medical-transcription-tools.html
>can't even transcribe
Did we get too cocky AI bros?
>>
>>16456301
>best-fit
How do you know when an AI has found a better fit? In b4:
>Because more reliable and accurate predictions.
Then explain the rules for determining whether an observation fits a prediction, and by extension the rules for whether or not an AI did the science correctly. Pro tip: you can't, because otherwise you would win a Nobel Prize for solving the entire climate change debate once and for all.
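One standard (if admittedly incomplete) answer to "how do you know a fit is better" is predictive skill on held-out data, e.g. comparing root-mean-square error. A toy sketch, with made-up data points, to make the criterion concrete:

```python
import math

def rmse(predict, data):
    # Root-mean-square error of a model on held-out (x, y) pairs.
    errs = [(predict(x) - y) ** 2 for x, y in data]
    return math.sqrt(sum(errs) / len(errs))

# Hypothetical held-out observations, roughly following y = 2x.
held_out = [(0, 0.1), (1, 1.9), (2, 4.1), (3, 5.9)]

model_a = lambda x: 2 * x   # candidate model: y = 2x
model_b = lambda x: x ** 2  # candidate model: y = x^2

# Lower held-out error -> "better fit", by this criterion.
better = "a" if rmse(model_a, held_out) < rmse(model_b, held_out) else "b"
```

Note this only pushes the anon's question back a level: choosing RMSE over some other loss, and choosing what counts as held-out data, are themselves decisions made outside the metric.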
>>
File: 3s6d576r87.png (114 KB, 275x183)
>>16456294
>>16456317
>AI cannot ever collect empirical data independently
>AI must rely on my meaty human hands to input data to verify models, about which I can lie my ass off
>This is a fundamentally insurmountable barrier for AIs
Your copium tank is running out of gas.
>>
>>16456322
The insurmountable problem is this: all logic and reason must start with an axiom for which there is no logical or reasonable justification. In b4:
>Try all the axioms.
Still, an axiom is needed to determine the hierarchy of all possible frameworks of valid, accurate, and reliable correlations.
>>
>>16455817
So what's going on with "actual AI"? Is it progressing as rapidly as LLMs?
>>
File: 3s64d76r8.jpg (38 KB, 780x780)
>>16456336
>can't prove axioms
Neither can humans; nobody cares.
Useful axiomatic assumptions are induced generalizations based on large quantities of observations. AIs can either use human axioms or form their own based on their own observations from ground zero.
The only thing fundamental is purpose. Axioms are secondary to that.
>>
>>16456341
Real AI is the dead dream of wannabe-genius modern alchemists who are too egotistical to admit that neuroscience must come first before even a hint of real AGI can appear. All they can do is cope about artificial neuron counts being too low and training data being too small (despite the human brain being able to BTFO CNNs at recognition with only a shred of the data, achieving instant, near-perfect recognition).
>>
>>16456349
>The only thing fundamental is purpose. Axioms are secondary to that.
That's like the chicken-and-egg of ontology and epistemology. It's indeterminable which one is primary.

>Neither can humans
The point is that all decisions including the decisions of AI are ultimately based on subjective norms and values. Your pretense that science and AI are objective/independent systems is unfounded.

>AIs can either use human axioms, or form their own
So you concede AI and science done by AI is subjective.

>their own observations from ground zero.
Again the chicken-and-egg problem: an observation is already an intentionally directed, limited perspective. Thus observation already requires axioms before axioms based on observations can be established. There is no such thing as ground zero. Neither humans nor AI are blank slates.

Unironically AI must rely on meaty human hands because it's of utmost importance to the wellbeing of sapient homos that AI operates on our norms and values and not on its own.
>>
File: e3s6y4d86r8.jpg (56 KB, 780x1140)
>>16456374
>AI is subjective
No, AIs are trained on objective values. You can literally see the input code. That is how their output is measured and adjusted.
>axioms are fundamental
Axioms are only fundamental in the human epistemological system. The human epistemological system is a way to efficiently utilize the human mind to recognize patterns. It's not some baked-in fundamental aspect of reality. Intelligence is measured purely by input/output. Human minds having generalized axiomatic assumptions helps toward this purpose.
>AI must rely on meaty hands to input parameters
>should not be completely independent without oversight
Should not. Not could not.
>>
/sci/ is not so much freaked out about the possibility of AI replacing scientists in the immediate future as about the fact that AI will be used to read the vast amounts of unread publications and flag suspicious papers for review.
The gruel train is about to end for many.
>>
>>16455740
If the answer isn't in the algorithms, it won't know the answer.
>>
>>16455814
>at least there seems to be no real limit to what it can do
You mean we don't know if there is a limit to what it can do.

>at least we can expect future systems to be very capable in these areas
How can you expect that when you don't know what the limit is?

>it may well be able to solve any kind of university exam with ease in a year or two; current versions are already getting good at this
That's because that is exactly what these jewish AI companies explicitly train for, so they can tell their investors "look how smart this thing is getting, you should give us another 10 gorillion dollars for our next GPU cluster".


