/g/ - Technology

File: imo.jpg (173 KB, 868x1023)
OpenAI solved the IMO using a plain LLM, unlike DeepMind, which used a custom system based on Lean 4.

Some myths that people are coping with (with answers):

* AIs saw these problems in their training data -- No, OpenAI got the problems within minutes of the IMO ending.
* AIs used a lot more time -- No, OpenAI's model had the same amount of time.
* They used a boutique system designed for solving IMO problems -- Only DeepMind. OpenAI used a general LLM.

AI is already smarter than you.

In b4: spelling gotchas and image-based tasks, or people who have never used top-of-the-line LLMs like gemini-3-pro and gpt-5-high.
>>
File: mpv-shot0204.jpg (287 KB, 1920x1080)
>the number crunching machine is even better at crunching numbers
>>
>>107629669
fuck off, sam
>>
>>107629669
Programmers are already unemployable, while mathematicians are still scamming their respective governments for grant money.

This is because writing software is an investment. Why spend 5 years paying programmers to develop something if all software will be on-demand in a couple of years?
>>
>>107629704
*smacks computer*
Yep, this baby computes.
>>
>>107629669
If it's smarter than me (it's not, it can't think)
Why is my C++ code still vastly superior every time I tell it to generate a solution?
>>
>>107629669
Also smarter than you, in the 1970s.
>>
bros it's fucking over, a calculator is better at calculating than me...
>>
>>107629976
If AI were real, they would have used AI to generate that picture instead of buying it from Getty Images.

Also, I guarantee you, nothing about that graph is 3π anything.
>>
File: fortune.jpg (1.26 MB, 1170x1692)
>>107630385
>Why is my C++ code still vastly superior every time I tell it to generate a solution?
Investors see the writing on the wall already: >>107629976
>>
>>107630491
>(((Developer)))
>Apple laptop
>trust me bro
kek, ye right
>>
>>107630523
Why not? If you can afford it, Apple makes the best hardware out there. Also, macOS is officially certified UNIX (unlike Linux).
>>
>>107629669
AI doesn't pay taxes.
>>
>>107630491
i demand you take down that picture of me beside my cousinwife's trailer
>>
>>107629669
I mean, I suck at math anyway.
But it still can't code the shit I am coding.
>>
>>107629669
Don't care. I can prostate cum myself. AIs recoil at the very idea. Useless garbage.
>>
>>107629669
openai is still using tools like lean for reinforcement learning. it's no coincidence that LLMs do really well in domains where reinforcement learning is easy to apply while remaining pretty mediocre in other domains.
internal chatter from openai suggests they gave up on the "generalist" approach to superintelligence last year and are now trying to get better in specific domains while cutting operating costs, relegating "can a generalist AI become a superintelligence?" back to a research question that requires an architecture different from LLMs. iow, they're trying to make the whole thing profitable and have put the superintelligence idea on the backburner.
also worth pointing out that harmonic, with another lean-based system, got a gold medal too. their LLM even outputs lean proof certificates, so you can easily convince yourself a proof is sound without checking it by hand. i think that's a huge advantage over deepmind and openai and massively decreases the amount of trust required.
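for anyone who hasn't touched lean: a proof certificate is just lean 4 source that the kernel either accepts or rejects, so the "trust" reduces to running the checker. minimal toy sketch below (made-up file name, nothing to do with harmonic's actual IMO proofs, purely to show the idea):
[code]
-- toy.lean (hypothetical file, not from harmonic's system)
-- `lean toy.lean` only succeeds if every proof actually checks.
-- change the 4 to a 5 below and the kernel rejects the file;
-- that accept/reject step is the whole "certificate" idea.

-- a definitional equality, closed by rfl
theorem two_plus_two : 2 + 2 = 4 := rfl

-- a lemma discharged by pointing at an existing proof in the core library
theorem add_comm_nat (a b : Nat) : a + b = b + a := Nat.add_comm a b
[/code]
point being the output is machine-checkable, unlike a natural-language proof that a human referee has to read line by line.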

>AIs used a lot more time
this is still true. it would be interesting to have an IMO-like competition where all competitors are more tightly constrained, both in how much time and how much compute they get, but given the shoddy organization of the AI IMO this year, it likely won't happen.
i've also heard through the grapevine that none of the AI companies are really interested in competing at the IMO again now that they have a gold medal. for all of them it was a one-off publicity stunt: its publicity value has diminished, and everyone built non-reusable IMO-specific offshoots of their systems (which also got far more computational resources than your average LLM prompt).
>>
>>107629669
Lmao, these models have 0 intelligence, a fucking cat is smarter let alone a human that's not brown.
>>
>>107631439
>these models have 0 intelligence
Sure, grandma. Let's get you to bed.
>>
>>107629669
Pattern matching without understanding of context is not intelligence.
>>
>>107629669
>within minutes
The olympiad was in July though?


