>>101572077
>except it didn't solve it. it was pre-trained on some kind of data where that problem was solved (likely textbooks) conclusively.
Proof? Or are you just talking out of your ass?
>for things that have definitive answers LLMs are as smart as google serving you up a website that links to the particular textbook.
Then if they are just as smart as Google, it should be easy for you to find the book that question came from.
>try asking the gpt to generate a real world problem from the question - let's call this A, then ask the gpt to solve a modified version of that real world problem - let's call this A2. it will fail instantly despite it being only a "small" jump from A to A2 inference-wise
I did that just for you, and GPT-4 got it right.
The original problem in the book was to count the ways an ATM can dispense a given amount of money using bills of given denominations. I changed the numbers and converted the problem to be about buying shoes instead of dispensing money, and GPT-4 made the conceptual jump easily and gave the correct solution. This is the prompt I gave it:
A kid goes to a shoe store that has three types of shoes that cost $20, $50, or $100.
The kid has n dollars in the form of a gift card, and must spend all his money in the shoe store.
Find the number of ways in which the kid can spend the money. Solve the problem using generating functions. Give the answer as a quotient of polynomials.
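For reference, the generating-function answer the prompt is fishing for is 1/((1-x^20)(1-x^50)(1-x^100)): the coefficient of x^n counts the ways to spend exactly n dollars. Here is a quick sketch (my own helper names, not from the thread) that checks those coefficients against brute-force enumeration, assuming each shoe type can be bought any number of times:

```python
def ways_bruteforce(n, prices=(20, 50, 100)):
    """Count (i, j, k) with 20i + 50j + 100k == n by direct enumeration."""
    a, b, c = prices
    total = 0
    for i in range(n // a + 1):
        for j in range((n - a * i) // b + 1):
            rem = n - a * i - b * j
            if rem % c == 0:
                total += 1
    return total

def ways_series(n, prices=(20, 50, 100)):
    """Coefficient of x^n in 1/((1-x^20)(1-x^50)(1-x^100)).

    Multiplying by each geometric series 1/(1 - x^p) is the in-place
    prefix update c[k] += c[k - p] over the coefficient array.
    """
    coeffs = [0] * (n + 1)
    coeffs[0] = 1
    for p in prices:
        for k in range(p, n + 1):
            coeffs[k] += coeffs[k - p]
    return coeffs[n]

# The two methods agree, e.g. there are 3 ways to spend exactly $100:
# 5x$20, 2x$50, or 1x$100.
```

Asking for "a quotient of polynomials" is a decent probe precisely because the shoe wrapper changes nothing about the underlying partition-counting structure.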