Difference in actual and expected score in TESTGEN

I submitted a test case to TESTGEN that scores 85 when run on my PC, on the CodeChef IDE, and on ideone. However, on submission I got only 64 points. Can someone tell me the reason?

Problem link : CodeChef: Practical coding for everyone
Code link : https://s3.amazonaws.com/codechef_shared/download/DEC19/checker.cpp
Submission link : CodeChef: Practical coding for everyone
Running on ideone: kCRpT1 - Online C++0x Compiler & Debugging Tool - Ideone.com

That happened to me as well. This is mentioned in checker.cpp: "The only differences are in the input, output and in the 'seed' used for the random generator."


So how are we supposed to know whether a test case will get the desired score or not?

That can't be known without submitting; the author has probably kept the seed secret because of "anti-hashers".

I think this is a feature not a bug.

Imagine that you are a test setter. You want to make solid tests so that heuristics like these won’t get accepted. But of course you can’t know in advance what seed values the contestants will use.

If you fix the seed, it's very easy to generate a random test case that fails all 4 heuristics (at least with M = 300). This is somewhat similar to breaking rolling hashes: simple if you know the hash function.

So, to actually make sure that your test case is robust, try multiple seeds locally (change `const unsigned seed = 0x12345678;` in checker.cpp).

If your test case succeeds with many different seeds, then it is genuinely hard and unsolvable by these heuristics; otherwise the heuristics failed only by luck, and a different seed will change the picture.
