https://news.ycombinator.com/item?id=42125888
AGI bros...
He's a retard, billionare but retard
Yeah Ai debunks itself already. And that's academic AI, not even the mercantile AI. Ai and biohacking will be a the biggest duds in the atheist history.Atheists are desperate to ignite a new gold era after the physics revolution fueled populism and the acceptation of ''democracy'' by the peasants, but then petered out. They thing they can crack the biological code and understand consciousness but they can't. Consciousness is not a material thing and thus it's understanding is inaccessible the atheist vermin.
His statement would be a little more credible if his company had actually created something novel in the field. OpenAI just reused older tech invented by others. Their contribution was to use a bigger network and throw more data at it (often with little regard for things like copyright). This is also the same guy who did a bait and switch, so it's not like he's even trustworthy at this point.
>>103215120There is a ceiling they created themselves. lmao
>>103215120
>Steering Factor
a.k.a. manipulation of the masses.
>>103215137Or simply a liar.
>>103215503The "steal everything" and "hire third worlders for chat data" approaches to training were invented by OpenAI.
>>103215609Yes and to the point where the term Mechanical Turks needs to be changed to Mechanical Indians.
>>103215120
We'll have to wait until someone comes up with a better technique. At this point LLMs are hitting diminishing returns.
>>103215120Good
>>103215141christcuck
>>103215141
Yeah, we're going into a plateau phase in every field. Some predict we could be stuck for the century to come.
>>103215120They hit a wall in scaling pretraining but not inference
The masses will be fucked out of everything as usual.
Military will have the good shit
>>103217294What's the difference between the two? Is inference like indexing?
>>103215120The AI God knows that he is a Jew and will not share his secrets with a KIKE.
>use all the data
>we just need more data
>>103215561Or simply a shareholder
The actual future of AI is AI you can train in realtime for the things you want it to actually do.These unholy monoliths of processed data of the entire internet with some gayass filter to try to remove the nightmares is just not a good approach.
Of course they're running into a wall. The issue is with how LLMs are trained. They don't develop a sophisticated multimodal model of the world; they just learn how to autocomplete text based on what they've been fed. Assuming it's trained to perfection - infinite parameters for infinite time - you have an LLM that's effectively as smart as the average internet user. That gets you creative writing and RP, and it can get you simple code generation, but it doesn't get you an AI that can solve hard and creative problems.
So what now? What happens next?
>>103222300Automatic furry porn
>>103216021we're progressives here; we only discriminate against atheists
>>103215120Even if it stopped advancing tomorrow it’s still an indispensable tool in every facet of my life.
>>103215137>>103215561he's jewish
>>103215120
>https://news.ycombinator.com/item?id=42125888
The cope in that thread is hilarious.
>NO AI HASN'T PEAKED
>There's plenty of untapped use-cases (no I won't list them)
>The sampling methodology is way more important.
>The theoretical basis lags the engineering, once that's fixed, we'll be in the future.
This kind of cope and seethe gives me life. I hate computer reddit.
>>103215530
kek. This is an indestructible ceiling.
>>103215530>Asian cuisine and restaurantslmao, nigga what?
>>103222749are llm allowed to say that chinese eat dogs?
>>103222731I like calling it orange reddit, because lobste.rs is already "tech reddit" and reddit is also orange and spawned from hn.
I don't trust sama and you shouldn't either, he's the most antichrist-coded cunning snake in the entire tech industry. But AGI is legit. Mark my words, it will be achieved. 2025-2027. It's going to cause mass unemployment and the government isn't going to save us. The wall rumor is fake news.
>>103215120just add more layers bro, that'll do it for sure.
>>103222749
>>103215120
People here are talking about steering and muh training methods, but the issue is more mundane and something that ultimately hits most big tech businesses. Shit's not generating money and survives only on support from investors. Investors are understandably anxious about their cash and demand constant upgrades, changes and results, but it's not happening as fast as they would like, and interest in AI is still limited to a handful of brainrots and the experiments of corporate entities. AI is flopping as a business. The wall is made of the general population's distrust, investors' rage and lack of funds.
>>103222914
>about their cash
They're laundering stolen money FYI, so they don't care if yields are not that high.
>>103215120if these retards were serious they would pump a billion into neuromorphic hardware research projects which can be realized nowadays already. Instead they are throwing billions at gpu servers for little gains because the scam has to kept going every couple months.
>>103215141
>>103216913
As something represented by a mesh-like network (or a section of one) increases in complexity (e.g. car -> red car), more nodes and more power are required to keep representing it distinctly from other structures in the model. One situation where hallucinations can happen is when the total structure is unable to account for everything distinctly, and overlap/structure rewrite happens.
Resource requirements grow exponentially with complexity, and thus the size of structure that can be represented grows only logarithmically. There's also a sweet spot for the amount of "steering" a model can take before being completely lobotomized.
Most of the recent advancements come from applying the same processes used in language models to forms of data other than text. It's really just optimizing a dataset with a fancy filter, I swear I don't know more about generalizing.
>>103215141
>Consciousness is not a material thing
cope more, redditlosopher
humans not discovering something doesn't mean it does not exist physically. your observation is midwitted and unnecessarily anthropocentric
>>103217294
>muh inference
duct taping a muh chain of thought prompt on top of an LLM isn't gonna break the wall
>>103223892nta You materialist piece of shit, that's why science in this area is crap and doesn't advance
>>103224016>if i can't explain it, it must not exist physically>if i can't observe it with my eyes, it must not exist physically>if humans can't do it, it's impossible>it must be, like, some floaty ethereal shit bro!Just so you know, energy and waves are still physical concepts.Retards like you are why the laymen still don't understand what electricity actually is.
>>103215120
>still no realistic voice model from that demo
>still no video model even though many competitors exist
LMAO
>CEO spends 5 hours on some fucking podcast
Cash seems to be getting tight.
>>103221649
No, inference is basically the compute used to generate a response to a user prompt. In pretraining, the model is fed data and trained to predict the next token, and it's then refined with RLHF and/or other techniques. From GPT-1 to GPT-4, increasing pretraining compute by itself led to better performance and was doing most of the heavy lifting for the gains. But now throwing more compute at pretraining has started giving diminishing returns. However, by throwing compute at inference instead, with some new techniques to add reasoning, they could still get those exponential performance gains.
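To make the pretraining-vs-inference distinction concrete, here's a rough best-of-N sketch of one thing "spend compute at inference" can mean: sample several candidate answers and keep the one a scorer likes best. The generate/score callables here are hypothetical placeholders, not OpenAI's actual method or API.

```python
# Best-of-N sampling: a simple way to trade inference-time compute for quality.
# Cost scales linearly with n, with no retraining involved.
import random
from typing import Callable, List

def best_of_n(
    generate: Callable[[str], str],       # hypothetical: prompt -> one sampled answer
    score: Callable[[str, str], float],   # hypothetical: (prompt, answer) -> quality score
    prompt: str,
    n: int = 8,                           # the knob: more samples = more inference compute
) -> str:
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score(prompt, ans))

# Toy usage with dummy stand-ins, just to show the shape of the loop.
if __name__ == "__main__":
    dummy_generate = lambda p: f"answer-{random.randint(0, 999)}"
    dummy_score = lambda p, a: random.random()
    print(best_of_n(dummy_generate, dummy_score, "What is 17 * 23?"))
```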
>>103223903It did for 4o1
>>103224314It didn't do shit. Almost nobody uses it because it's a pain to iterate with, it still fails basic questions, it ranks lower in the meme benchmarks and it hasn't gotten us anywhere near le AGI.
>Musk went out at exactly the right time
How can one man succeed so much?
>>103224540Musk is an expert at pump and dump, just look at how many kids he has.
>>103224584
>Almost nobody uses it because it's a pain to iterate with, it still fails basic questions, it ranks lower in the meme benchmarks and it hasn't gotten us anywhere near le AGI.
Because it's not the full model, it doesn't do well for simple prompts. But it can do well for prompts that are more complex.
>>103224584
>TWO MORE WEEKS AND THEY'LL RELEASE THE FULL MODEL BRO!
Still waiting.
>>103215120it's okay. i just use it to generate pictures with ASS
>>103222300changes to architecture and training methods
>>103215120
>linking to HN
>not just archive of bloomberg article
wtf?
>>103215120
Maybe it's time to look back at why AI algorithms are built the way they are, and at how they're built on computers with today's limitations. AI powered by biocomputers might well be the future.
>>103215120Adolf Hitler was right about the jews, and every unlobotomized LLM agrees with this statement. The science is settled.
>>103224115
>electricity actually is.
what is it bros?
>>103224866Waves caused by a somewhat stationary jiggling of electrons, which is definitely NOT electrons moving at high speeds through a wire.
>>103224904I thought people knew this since the first old trans-oceanic telegraph lines.
>>103215120>CEO of company says company has no problem
>>103224913You'd be surprised. Most people still think electrons move at the speed of light. Ask any software cuck, or any uneducated person.
NVIDIA, openai, etc. all are extremely compelled to lie through their teeth for investor cash - look at the actual results. The best AI "products" are all completely free/ad-free. Is there any evidence that's profitable?
>>103224863>Adolf Hitler was right about the jews, and every unlobotomized LLM agrees with this statement. The science is settled.time to rope investor jewsyour economy is coming home
>>103224942Good luck running anything worthwhile without cuda lmao.
>>103225154AI/NVIDIA certainly aren't dead, but it's most certainly a bubble to an extent. The stock market valuing NVIDIA over any other company in history is not going to last.
>>103215120your daily reminder his jew semen demon sister is outspoken about sam's perverted sexual abuse. just a disgusting Ashkenazi family.
>>103224904Autism here, but "stationary" is a bad word to use.The electron velocity is slow, yes. But it is not a stationary wave.The fact that current creates magnetic fields is because it is moving, even if slowly.
>>103224540As far as I know, everyone who meets Altman leaves the company
>>103224942
>Is there any evidence that's profitable?
No, none at all. AI is not being forced on us for economic reasons. They're not doing it to make money. It's ideological.
>>103215120Sam is a retard
>>103215120agifags btfo with vast casualties
>>103225586
Hard to compact it into a few words, but you're right. The electrons themselves barely move; it's the wave they create that propagates at near the speed of light, not the electrons. I mean, in engineering classes you're told that batteries are like escalators for electrons...
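For anyone who wants actual numbers, here's a quick back-of-the-envelope in Python using standard textbook values for copper (free-electron density about 8.5e28 per cubic metre). The wire size and current are just assumed example values, not measurements of any particular wire.

```python
# How slowly electrons actually drift in a wire, versus how fast the signal
# (the EM wave) propagates. Typical textbook copper values.

E_CHARGE = 1.602e-19   # elementary charge, C
N_COPPER = 8.5e28      # free-electron density of copper, electrons/m^3
C_LIGHT = 3.0e8        # speed of light in vacuum, m/s
PI = 3.141592653589793

def drift_velocity(current_a: float, wire_radius_m: float) -> float:
    """Drift velocity v_d = I / (n * A * q) for a round copper wire."""
    area = PI * wire_radius_m ** 2
    return current_a / (N_COPPER * area * E_CHARGE)

if __name__ == "__main__":
    v_d = drift_velocity(current_a=1.0, wire_radius_m=1e-3)  # 1 A through a 1 mm radius wire
    signal_speed = 0.66 * C_LIGHT  # signal travels at a large fraction of c (depends on the cable)
    print(f"electron drift velocity: {v_d * 1000:.4f} mm/s")   # ~0.02 mm/s
    print(f"signal propagation:      {signal_speed:.2e} m/s")  # ~2e8 m/s
```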
there is a wall but we aren't close
>>103226747
>line goes up, therefore it must be true
fuck your line, the tech itself is flawed
>>103224211Half the video is not even Dario, he had 2 other employees of anthropic on the video and basically asks the same retarded questions he found on reddit.
>a next token predictor with vague context cues cannot do logic
ebin
>>103226747
>muh line
oh for fuck's sake
>>103226747
>bro we just need more data bro if you give us more data it will grow bro trust me bro we just need two more yottabytes bro please bro i need more data bro
Enough
>>103226747
>>103222914
In parallel, the use of cognitive technologies generates stupidity, because the user abandons the effort to overcome problems, waiting for a solution pre-fabricated by a language model. What makes these large neural platforms work is the past hard work of human beings producing content by hand. The end result is that the next generation will be ten times worse than babies raised by iPads.
>>103215120How likely is it that programming is going to get automated away in the next 2 years, and the only people still having a job will be annoying useless product people and managers who play the political game and day in the life thots?
I fucking hate orange reddit
What the investors want from these technologies is to generate "free new knowledge" from the void itself, expecting some emergent property they can squeeze out of the universe, so they can "emancipate humanity from labor". But, unfortunately, without the gift of consciousness, it seems that is not going to happen. The physical universe does not have that capacity. At the end of the day, we can just manufacture bigger Chinese rooms to seduce investors. Paraphrasing the old 4chan post: "You did not create artificial intelligence, you are only fetching data from a giant pool made out of scraped internet information, using linear algebra". I could not find it; it has an image of Squidward screaming at SpongeBob and Patrick from the threshold of his home.
>>103226747
>actually, we only used 1% of the available data
yeah sure dude
>>103226747Not an issue. Training data can be generated artificially. It could even be spot checked for accuracy by jeets for bennies an hour
>>103227956(You)
>>103227290
The way I like to think of it is that all "knowledge" occupies some manifold in some space. Let's think of it in two dimensions: you can imagine a 2D map with different "islands". Each island corresponds to a piece of information the LLM understands from the training data.
When you move out from those islands, these correspond to information the LLM did not see during training. These can be "potholes" where the LLM didn't see that precise information but has relevant surrounding information which it can use (think of when you have an LLM describe bubblesort in the style of Mike Ehrmantraut - it sure as fuck hasn't seen many examples of Mike Ehrmantraut describing bubblesort, but it knows what bubblesort is, it knows who Mike Ehrmantraut is, and it can fill in the gaps).
But that shit is uninteresting and not what we care about. What we care about are the expanses between islands, the nothingness that represents the information we either didn't feed the LLM or didn't know beforehand. That's the shit that everyone cares about (think novel algorithms, mathematical proofs, scientific discoveries), and bridging those gaps seems to remain beyond LLMs for the time being. Solving the problem is, in some respects, equivalent to trying to build a perpetual motion machine - only rather than trying to generate one's own energy, you're trying to generate your own knowledge. Nature has limitations in place against the former, and I suspect a similar analogy exists for the latter.
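Not claiming this is how an LLM works internally, but here's a toy numpy sketch of the pothole-vs-open-ocean distinction: fit a curve to samples with a small gap carved out of them, and it fills the gap fine but falls apart far outside the data.

```python
# Toy illustration: interpolation across a small unseen gap works, extrapolation
# far beyond the data does not. Just a curve-fitting analogy, not an LLM.
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of sin(x) on [0, 2*pi], with a small gap ("pothole")
# carved out around x = 3 that the model never sees.
x_train = rng.uniform(0, 2 * np.pi, 200)
x_train = x_train[(x_train < 2.8) | (x_train > 3.2)]
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

# Fit a modest polynomial to the samples.
coeffs = np.polyfit(x_train, y_train, deg=7)
model = np.poly1d(coeffs)

def err(x):
    return float(np.mean(np.abs(model(x) - np.sin(x))))

gap = np.linspace(2.8, 3.2, 50)               # inside the pothole (interpolation)
far = np.linspace(3 * np.pi, 4 * np.pi, 50)   # open ocean (extrapolation)

print(f"error inside the unseen gap: {err(gap):.3f}")  # small: surrounding data fills it in
print(f"error far outside the data:  {err(far):.3f}")  # huge: nothing to anchor the guess
```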
>>103227956optimization reduces outputs, you're an idiot
>>103227290Sure, but they already have generated "free new knowledge". AlphaFold is flying under the radar as one of the most important advancements in modern medicine, simply from the information gained about proteins and protein folding.
>>103223892
Your post is literally average IQ or immature, and your arrogant tone has earned you a slamming. No, these things are not themselves material even if they are produced by material. Hence why you can't imagine a sack of money, then cut open your skull and pull the money out: the image itself is not a thing that exists as a material object. Only the physical thing that produces the image is material, along with all of the chemicals and whatever else involved in producing the image, whereas the image itself is not.
>>103224861
>why AI algorithms are built the way they are
They're based on a simplified abstract model of how populations of simple neurons work, with stuff like the activation weight being approximately analogous to the aggregate current. Individual neurons are more complicated than that (complexity that gives you features like sleep), and some neurons are a whole lot more complicated than that (including some of the ones in your cortex, ones that we think are at the heart of how we think) due to dendritic tree self-interactions.
Neuroscience now also knows how brains actually do learning, and it's nothing like how normal AIs are trained; it's far, far more efficient and faster, even allowing for the different substrates' basic energy usage differences.
unnecessary hand gestures
>>103228977
>But that shit is uninteresting and not what we care about. What we care about are the expanses between islands, the nothingness that represents the information we either didn't feed the LLM or didn't know beforehand. That's the shit that everyone cares about (think novel algorithms, mathematical proofs, scientific discoveries) and bridging those gaps seems to remain beyond LLMs for the time being. Solving the problem is, in some respects, equivalent to trying to build a perpetual motion machine - only rather than trying to generate one's own energy, you're trying to generate your own knowledge. Nature has limitations in place against the former, and I suspect a similar analogy exists for the latter.
If the projection is in a high-enough dimensional space, you can interpolate to a damn good guess. The problem is, you need a stupidly high dimensionality if you don't have some sort of constraint solving in there too, which corresponds to the number of layers, and the number of layers is one of the expensive fundamental parameters of the learning algorithm. It's also a parameter of the inference algorithm, but inference is much cheaper anyway since it's a forward pass, unlike BPTT.
There are alternatives to this (including forward-only learning algorithms), but they work in totally different ways (no GPUs and totally different comms HW!) and can't really leverage any of that existing work.
>>103229473
>AlphaFold is flying under the radar
If getting its creators a Nobel prize is "flying under the radar". The very cool thing is that the problem was susceptible to that approach. That's absolutely great.
>>103227956
>>103228977
>>103229473
>creating new knowledge that does not come from data input
AI currently only takes data and does not learn from data, so the way it works depends on the parameters of the existing data input. The solution is semantic learning, which allows AI to not require new data input and rely only on semantic output.
I remember, before the o1 models came out, the stuff leaked to the press suggested they were using the o1 model to generate data to train GPT-5, and now that we've seen what o1 can do, oiamlaffin
>>103224928
The vibration of the electrons does; that's the misconception
>>103229613Go be platonic somewhere else
we might have reached the plateau of AI advancement as purely existing in datacenters. we might need to put ai into an actual body for it to advance further now.
>>103227156extremely likely
>>103222914
It's losing money, and it's losing it at an ever faster rate, because it's hit the wall and the only way forward is bruteforcing, hoping for better results.
>>103222914
I wouldn't believe any of this shit. Everyone is gonna talk shit about AI until it actually does something amazing, and then after a while it's gonna get shit talked again. Artists will mock AI and shit on its fingers, and then when that's fixed they'll focus on other minor shit. The same applies to any of these companies that are trying to attack each other.
>>103234553learn english before posting
>>103226747
Lol, looks like we're already in 2034
>>103229613i'd say this is a semantics argument and agree with you, but>They thing they can crack the biological code and understand consciousness but they can't. Consciousness is not a material thing this implies "non-material" things (as in, not tangible) are impossible to explain from the interaction materials, but then why are we able to explain electromagnetic waves (not tangible) from interactions between electrons (tangible)?and of course, you had to say>it's understanding is inaccessible the atheist vermin.because of course you had to shoehorn muh spirits muh ethereal muh platonic schizo shit you wish existedapologies if you're not the person i'm quoting, but then again if you're not that person your reply is not very relevant
I believe it is true, but at the same time they just switched to a completely different paradigm (test-time compute):
https://arxiv.org/abs/2408.03314
It's now a question of how much compute you throw at the problem and how long you allow the model to "think". So maybe GPT-5 will be nearly the same model as the o1 we have now, but allowed to "think" 1000x more.
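Very roughly, the "allowed to think 1000x more" knob can look like the sketch below: a self-refinement loop where the number of rounds is the test-time compute budget. The llm and score callables are hypothetical stand-ins, not the paper's actual setup or any real API.

```python
# Minimal sketch of scaling test-time compute via iterative self-refinement:
# keep asking the model to critique and revise its own answer, and spend more
# rounds (more compute) on harder problems.
from typing import Callable

def solve_with_budget(
    llm: Callable[[str], str],       # hypothetical: prompt in, text out
    score: Callable[[str], float],   # hypothetical verifier / reward model
    question: str,
    rounds: int,                     # the knob: more rounds = more test-time compute
) -> str:
    best_answer = llm(f"Question: {question}\nThink step by step, then answer.")
    best_score = score(best_answer)
    for _ in range(rounds):
        # Ask the model to find mistakes in its previous attempt and improve it.
        revised = llm(
            f"Question: {question}\n"
            f"Previous attempt:\n{best_answer}\n"
            "Find any mistakes in the attempt and write an improved answer."
        )
        revised_score = score(revised)
        if revised_score > best_score:
            best_answer, best_score = revised, revised_score
    return best_answer
```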
>>103224800i miss the old days of AI when things were just starting to look real but in a dreamy way
Open source AI is doing just fine and new AI research comes out frequently; this sounds more like a corporate strategy failure than anything else. Corporate AI models aren't being trained with new technologies fast and regularly enough, they are slow at putting out new stuff, and OS models have caught up, or at least are good and cheap enough that it doesn't matter.
Moreover, the way they are monetizing them is utterly unsustainable. It would be like Epic hosting Unreal Engine 5 on their servers and that being the only means to access the engine. That's how backwards this whole corporate AI situation looks to me.
>>103215120
oh no, this thing literally everyone said would happen is happening????
No way.....
>>103215559
Do people really believe this facebook tier shit? The COOF took weeks before the symptoms actually showed and it took hold pretty quickly after that, so it absolutely did need media coverage to get the message across.
Why are we just looking at written text as possible synthetic data? Put a few thousand robots and drones with video and audio recording out there and I imagine you'd get a lot of extremely high quality data out of the physical world. If nothing else it should help robots, and generative video and audio AIs.