i need a grounded and realistic assessment of current AI capabilities
>ai ai ai
>"MUH AI"
Christ on a stick is 4chan really dying? What happened to talking about actual tech stuff (ie soldering, video encoding, pc repair)? Is 4chan getting flooded with jeets?
t. spic sick of jeets
Ask an AI.

OK, here's my actual answer. It is good at reading search results and documentation for you, and explaining concepts you don't understand. Anything where the information you need is already contained within some text the model has access to is something the AI can do really well.

This is very useful for research and learning. Instead of being bogged down looking for what you need, you just get the answer quickly and can move your train of thought along to the next thing.

It can synthesise this found information to a limited extent as well. Possibly further than you expect, but it will also be bounded, compared to you, by the things you know and the things you've thought to tell it. When you're thinking about a question or problem, you have more background data available to you than you might initially realise and think to put in the prompt.

It can write functioning code if you are working within the bounds of what has already been done, on a function-by-function level. It tends to write code I find overly cautious: it lacks a good internal model for what the state of things will be at a given time, so it includes unnecessary checks to make up for that lack of internal state modelling. I prefer to have it teach me how a function works and employ it myself with a human internal model of state. If you ask it to put together a function for something novel that has either never been done before or never been published anywhere, it will do worse, and if it has been published but isn't commonly known or visible due to poor search ranking, it will do a little worse there too.

I don't find it that good for creative writing, but I have quite high standards for that.

That's all LLM stuff, which I assume is what you meant, but in terms of classification, AI is fantastic. Visual classification has gotten really good, to the point where we're autosegmenting videos just by clicking on the thing you want masked. Transcription is great too.
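To make the "overly cautious code" complaint concrete, here's a hypothetical illustration (not from the post, names and data invented): the first function is the defensive style the anon describes, where every step is guarded even though the caller already guarantees valid inputs; the second is written with a clear internal model of state.

```python
def get_user_age_defensive(users, name):
    # LLM-style output: re-checks conditions the caller has already
    # established, because the model has no internal model of state.
    if users is None:
        return None
    if not isinstance(users, dict):
        return None
    if name is None or name not in users:
        return None
    user = users.get(name)
    if user is None or "age" not in user:
        return None
    return user["age"]


def get_user_age(users, name):
    # Written with a human model of state: by this point `users` is a
    # dict and `name` is a validated key, so the lookup is direct.
    return users[name]["age"]
```

Both return the same answer on valid input; the difference is that the first pads itself with checks in place of knowing what the state actually is at call time.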
>>108795825
we're fucked
>>108795825
it makes up unsolvable algorithms and then tricks itself into believing it solved them before continuing. In real-world use, this means it can pretend to make cures for non-existent viruses and sell them. On the other hand, it can render images and sounds without a clear goal. Just keep prompting and stop asking questions, I have graphite to sell.
>>108795825
Constant rapid evolution like early PC hardware, each generational leap creating exponential improvements. From a financial perspective you are in 1998/1999 though. Pets.com. Yes, the internet changed everything; no, pets.com is not a worldwide pet food supplier. Shiller CAPE mirrors 1999. A three-to-five-year stock market decline is imminent.

AI will produce vast, and I mean vast, advances and changes in everything from agriculture to banking, mining, medicine, archaeology, you name it. However, it only has ANY value to you if it is offline and local; otherwise it is value for someone else.

How far does AI have left on its super-Moore's-law curve? It hasn't even hit the broadband-and-mass-adoption stage or made any presence felt in SMEs beyond meh.

Also
>>108795825
>i need a grounded and realistic assessment of current AI capabilities
is a really dumb way of asking. Claude mythos hacked OpenBSD and the Linux kernel with ease. There's your fucking answer in the only metric any of us should care about. It may not be 'sentient', whatever the fuck that means post-Turing-test, but it's still already smarter and better at everything than jeets, normies, and 50% of anons who just really copy the top 50%.

TLDR: It's evolving so rapidly at present that by the time I have posted this it will already have advanced further.
>>108795825
a few years away from being genuinely great, maybe longer depending on when openai runs out of cash and the market collapses.
It is just some Indian doing pick 4 and selecting what they think is the answer. Every time it says "analysing" it is actually an Indian guy with a metaphorical gun to his head in that speed matters and failing at selecting the appropriate answer means no jobs during punishment period which means no food which means tiktoks in the train tracks time. When it fails to generate an answer, that means the indian proxy was murdered by his boss for obtaining negative izzat values.
>>108795825
I've found 'agents' like Codex and Claude Code to be the most useful thing AI has ever done so far. I'm still being a hardass about writing my own code, but when shit goes wrong it's nice being able to tell a bot to fix it for me in 2 minutes rather than spending 40 minutes tracing and diagnosing something myself, without even having to copy and paste piecemeal code snippets from a chatroom. That shit was mad annoying.
>>108795825
https://youtu.be/RJyPVLMyyuA?si=DuNNC74NKLUEzOXp
This is, I think, the best grounded discussion on the topic. I recently resigned because our company forced the use of AI, undoing months of work on a project I was working on, so I'm butthurt; take that into consideration.

My finding is that if AI can be more productive than a person or a script at a task, then that is likely a failure of some person. I use AI often to read through documentation for me, like 'what functions in pandas can add a row to a dataframe'; that means there is a failure in the documentation (in this case my ability to navigate the documentation and better understand pandas).

AI can be beneficial for exploring new ideas (suggest 3 different approaches to create a tree structure in a database), BUT it is garbage for determining the value of an idea, which you will easily see if you ask its opinion from a different perspective (eg. I'm a landlord vs I'm a tenant, and who has to pay for the hole in the wall).

All the crazy marketing around AI, like all white-collar jobs being over, derives from the $1.6T investment that has been made. The hypothesis is that because we can represent any thought in text, and because LLMs can generate all text, then if it is scaled enough it can solve any problem (representable as text). You can easily disprove this by playing chess with an LLM (when you do that, keep in mind the $1.6T).

In essence I think humans still have a monopoly on abstract thought, but AI beats us at producing text. Some programming problems are about producing text, like producing language bindings for some library. But most interesting problems are about abstract thought, and there an LLM is just going to get in your way.
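For the record, the pandas question from the post has a short direct answer. A minimal sketch (toy data invented for illustration) of the two common ways to append a row, given that `DataFrame.append` was removed in pandas 2.0:

```python
import pandas as pd

# Start with a one-row frame on a default RangeIndex.
df = pd.DataFrame({"name": ["alice"], "score": [10]})

# Option 1: label-based assignment at the next integer position.
df.loc[len(df)] = ["bob", 20]

# Option 2: concatenate with a one-row DataFrame.
df = pd.concat(
    [df, pd.DataFrame([{"name": "carol", "score": 30}])],
    ignore_index=True,
)

print(df)
```

Option 1 mutates in place and only behaves this simply on a clean RangeIndex; option 2 copies the whole frame, which is why repeatedly appending rows one at a time is slow in pandas either way.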