Yes, I will forward this feedback to @admin for consideration. Don't worry about that
I loved quite a few of them too. My favourite was CUTPLANT. AVGPR and WEIGHTNUM had good concepts involved as well, which I felt are good for beginners
Thank you 
I started this when I wrote ICPC editorials for Amritapuri- to put basically any content which I feel might be useful to some, but couldn't be put in the formal section. (I got too many complaints of my editorials being too long and too explanatory xD). Chef Vijju's corner, or the unofficial part, is a nice place for some light-hearted humor, other approaches, and "what if" things. Glad you liked it. (After all, I believe my editorials should stand out from the rest, shouldn't they? xD)
We are aware of that. Just to curb that, they reduced the limit from 500 to 200. If any further reductions are needed, they will do so as well :). Do you think it is direly needed right now?
A simple way of addressing the issue is to do something like what is done on the Yandex optimization track: just consider the LAST submission for the final score. This removes the incentive to submit a lot of times to randomly get a good outcome (since you don't know the full test suite). Rate limiting, while a decent idea, does not alleviate the issue that submitting a random algo a lot of times is beneficial. @admin @aryanc403 @vijju123
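Here is a toy simulation of that incentive (the noise model and all the numbers are assumptions purely for illustration): under "best submission counts", spamming a fixed random algorithm keeps raising your expected score; under "last submission counts", it doesn't.

```python
import random

random.seed(0)
TRIALS = 10_000

def noisy_score():
    # Hypothetical model: a fixed random algorithm whose score on the
    # hidden tests is a noisy draw around a true quality of 50 points.
    return random.gauss(50, 10)

for n in (1, 20, 200):
    # Expected final score when the BEST of n submissions is kept
    # (the current incentive) vs when only the LAST one is kept.
    best_kept = sum(max(noisy_score() for _ in range(n))
                    for _ in range(TRIALS)) / TRIALS
    print(f"{n:3d} submissions: best-kept ~ {best_kept:.1f}, last-kept ~ 50.0")
```

Under this toy model, best-of-200 comes out around 77 points against an honest 50, while last-only scoring stays flat no matter how many times you resubmit.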
Yes because I think it is better if the competition is to find the program that works best on a random problem instance, rather than being a competition to reverse-engineer the hidden instances. I think that will make CodeChef a more attractive environment to compete in. (At the moment the top entry in division 1 is engineered to work on the specific hidden instances and won’t work properly on random instances. I don’t blame the author for doing this, but in my opinion it’s a more healthy competition if this kind of answer isn’t possible.)
@admin will collect all feedback from the thread today, or by tomorrow at the latest. It will then be taken into consideration. Thank you, both of you
aryanc403: I appreciate that this is an allowed strategy and is not trivial to carry out, but I don't think it is a good one to encourage because it deflects from the primary purpose of making a good algorithm. I know (as you suggest) that I could do the same thing myself, but I am not going to: if that is the only way to win then I'd prefer to compete elsewhere instead. I don't think I'm the only one to think this, because vijju said that the maximum number of submissions has been reduced to try to prevent this. (I have a suggestion as to how to modify the rules in a separate post below.)
@alexthelemon I strongly agree with you that reverse-engineering isn't fair. Unfortunately, it hasn't been prohibited till now, hence it's allowed. Probably, it was the main reason why some contestants were too good at challenges for years. We'll discuss it and I hope we find some solution (hidden time/memory and few submissions sounds good). Thanks for your feedback!! Also, congrats on winning Div2!! Good luck in Div1
In my opinion it is better if the competition is to find the program that works best on a random problem instance
Oh! Are you suggesting that the TC on which the program runs should be dynamically generated rather than being a fixed case?
We fear that some contestants may get unlucky (or too lucky- both are bad). Like, once in the last problem of a Long Challenge (something on squarefree numbers) the TCs were dynamically generated. My solution, which TLE'd on some cases, got accepted on the 5th try. We will need to find a way to minimize- or even prevent- these instances from happening if we are to implement it.
I think we can implement hiding the time and memory taken for the challenge problem- merely telling if it's AC'd or not. That can help a lot.
I think we can do away with showing the raw "Score" of the problem- merely telling how many points it fetched you out of 100 seems good.
The suggestion to "submit up to 200 solutions, out of which at most 20 (which ran on the hidden TCs) will be considered for the leaderboard" seems nice. Capping it at 20 submissions limits reverse engineering by a lot.
Already pinged @admin to collect feedback by tomorrow or the day after, so feel free to suggest
Yes, they require approval to be public, else people paste all sorts of code and ideone links. I once decided to get them all disqualified- but later felt it would be too harsh on those who are new. Perhaps they didn't bother to read the rules.
Yeah, I tried to answer as many comments as I could xD. For the updated versions as well, big thanks to @mgch - Misha is one of the best people out there
No I wasn’t suggesting that the test case should be different for different people. That would make it far too random. You definitely need the same test cases for everyone, but no information about them should leak out. That way the problem from the programmer’s point of view is to get the best result on a random instance (because he or she knows nothing about the test case, so it is effectively a random instance from their point of view).
I didn’t mean to suggest that “at most 20 will be considered for the leaderboard”. Sorry if I wasn’t clear.
I was suggesting that when you submit a challenge problem solution you should have an extra option called “receive return code from hidden instances”. You are allowed to select this option at most (say) 20 times during the competition. When you select this option, if you get an AC it means you can be sure that your program worked for the hidden TCs.
The reason for restricting return codes like this is that the mechanism for information leaking back to the user is via the return code…
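A rough sketch of how the judge side of that rationing could look (the function names, the budget of 20, and the overall structure are my assumptions for illustration, not an existing CodeChef API):

```python
from collections import defaultdict

FEEDBACK_BUDGET = 20   # hypothetical per-user cap on hidden-test verdicts

hidden_verdicts_used = defaultdict(int)

def visible_response(user, wants_hidden_verdict, hidden_verdict):
    """What the submitter gets to see.

    The return code on the hidden TCs is the only channel through which
    information about them can leak, so it is rationed to the budget;
    every other submission is judged silently.
    """
    if wants_hidden_verdict and hidden_verdicts_used[user] < FEEDBACK_BUDGET:
        hidden_verdicts_used[user] += 1
        return hidden_verdict          # e.g. "AC" or "TLE", nothing more
    return "submission received"       # no hidden-test information leaks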
I get it now. Thanks 
In addition, the time and memory information that you get back should only be for the “feedback-instances”.
And I think it is also probably better not to include the feedback-instances in the final score because too much is known about them. (Another reason: if you do include them, as happens now, then this makes @algmyr’s suggestion not work properly. That is, even if only the last submitted program is scored in the final ranking, you still have an incentive to keep resubmitting a random algorithm until you get a good visible score, even though you can only see part of your score.)
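A toy illustration of that residual incentive (the 20/80 weighting and the noise model are assumed purely for illustration): even under last-submission scoring, if the visible feedback instances carry some weight in the final score, resubmitting until the visible part looks good still pays off.

```python
import random

random.seed(0)
TRIALS, RESUBMITS = 10_000, 200
VISIBLE_WEIGHT = 0.2   # assumed share of the final score from feedback instances

def attempt():
    visible = random.gauss(50, 10)   # score on the visible feedback instances
    hidden = random.gauss(50, 10)    # score on the fully hidden instances
    return visible, VISIBLE_WEIGHT * visible + (1 - VISIBLE_WEIGHT) * hidden

total = 0.0
for _ in range(TRIALS):
    # Resubmit the same random algorithm, stopping on the attempt whose
    # VISIBLE score is best; that attempt is then the "last" submission.
    best = max((attempt() for _ in range(RESUBMITS)), key=lambda a: a[0])
    total += best[1]   # the final score of the kept "last" submission
print(f"expected final score ~ {total / TRIALS:.1f} (honest baseline: 50.0)")
```

Under these toy numbers the gamed expectation comes out around 55-56 versus the honest 50, so the leak persists exactly as described.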
Personally I think one solution would be to completely separate the provisional tests from the actual tests. Provisional tests would only give a temporary hint of the performance of programs, but would not be included in the actual set of tests. No information should be given on the hidden tests. Since the data generation algorithm is given, you can easily test performance on your own system, and the provisional tests should catch most server-side stuff.
Combined with my earlier comment about only judging the last submission this should both prevent reverse engineering and discourage spam submissions.
I would be happy with that option (no feedback from actual tests at all), but I got the impression people wanted a bit of certainty that their programs would still work with the actual tests, so I made the above suggestion (20 return codes) as a compromise. But your suggestion has the virtue of simplicity, and as you say it's unlikely a program that passes the provisional tests would fail to complete the actual tests (though you could just about imagine that it runs in 3.96 seconds on the provisional tests, but TLEs at 4.01 seconds on the actual tests due to a slight difference in the data).
Judging on the last submission only is tempting, though it might make the comparison a bit more random because the tail of the score distribution obtained from a random algorithm (which you get from maxing over lots of attempts) probably has less variance than a single instance.
But this could be fixed by increasing the number of hidden test cases. And this wouldn’t require extra server time compared to what happens now because the server would only need to run a single entry, not all 200. (Though it may delay scoring slightly after the contest if they aren’t being run pre-emptively.)
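A quick Monte Carlo sketch of that variance argument (again, the score distribution and all numbers are illustrative assumptions, not real contest data):

```python
import random
import statistics

random.seed(0)
TRIALS = 10_000

def score(n_cases=1):
    # Hypothetical model: each hidden test case yields a noisy score
    # around a true quality of 50; the final score averages the cases.
    return sum(random.gauss(50, 10) for _ in range(n_cases)) / n_cases

# "Best of 200 attempts" (current rules): the max concentrates.
best_of = [max(score() for _ in range(200)) for _ in range(TRIALS)]
# "Last submission only", one hidden case: a single noisy draw.
last_one = [score() for _ in range(TRIALS)]
# "Last submission only", averaged over 25 hidden cases (the proposed fix).
last_many = [score(25) for _ in range(TRIALS)]

print(f"stdev best-of-200   : {statistics.stdev(best_of):.2f}")
print(f"stdev last, 1 case  : {statistics.stdev(last_one):.2f}")
print(f"stdev last, 25 cases: {statistics.stdev(last_many):.2f}")
```

In this toy model the max over 200 attempts has a spread of roughly 4 points versus about 10 for a single draw, while averaging over 25 hidden cases brings the last-only spread down to about 2 - matching both the concern and the fix above.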