New ask Hacker News story: Hypothesis: Repeating the task description increases quality of ChatGPT output

4 by kuboble | 2 comments on Hacker News.
There have been some experiments showing that ChatGPT performs better when given incentives like tips or threats. It is also known that ChatGPT performs a constant amount of computation per token. I wanted to test the hypothesis that simply adding tokens after the initial task description increases the quality of the output.

The experiment consists of relatively simple coding tasks, comparing two prompts: "Please help me X." versus the identical task description repeated 10 times: "Please help me X. Please help me X. Please help me X. Please help me X. Please help me X. Please help me X. Please help me X. Please help me X. Please help me X. Please help me X."

I decided to run 3 experiments and not cherry-pick the results. Experiments:

1) Create an SVG element of a five-pointed star.
2) Write a function to check if a number is prime, in Python.
3) Write a function that, given a chess position in FEN notation as an argument, returns which side has a material advantage, in Python (a rough sketch of a possible answer is included below).

On task 2) both prompts returned exactly the same correct answer.

Results for 1): https://ift.tt/bmDJOaN
Results for 3): https://ift.tt/pZLRQ5j

For task 1) clearly, and for task 3) arguably, the results are in line with the hypothesis that simply increasing prompt length leads to better results.

Does anyone have similar experiences, or can anyone check this with other short coding prompts?
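For reference, here is a minimal sketch of what a correct answer to task 3 might look like. This is not the ChatGPT output from the linked results; it assumes standard piece values (pawn=1, knight=3, bishop=3, rook=5, queen=9) and only counts material from the piece-placement field of the FEN string.

```python
# Hypothetical sketch of task 3, not the actual ChatGPT output from the experiment.
# Assumes standard piece values; kings are ignored for material counting.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_advantage(fen: str) -> str:
    """Return 'white', 'black', or 'equal' depending on material in the FEN position."""
    board = fen.split()[0]  # the first FEN field is the piece placement
    white = black = 0
    for ch in board:
        value = PIECE_VALUES.get(ch.lower(), 0)  # digits, '/' and kings count as 0
        if ch.isupper():
            white += value
        elif ch.islower():
            black += value
    if white > black:
        return "white"
    if black > white:
        return "black"
    return "equal"

# Example: after 1.e4 d5 2.exd5, White is up a pawn.
print(material_advantage("rnbqkbnr/ppp1pppp/8/3P4/8/8/PPPP1PPP/RNBQKBNR b KQkq - 0 2"))  # -> "white"
```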

Comments