Should long challenges include a test generator?

Currently, most long challenges provide a very detailed “test generation” section that contestants can use to write their own test generator. However, this section is written in human language and thus leaves room for at least three types of errors:

  1. The problem author could make a human language error.
  2. The problem author could make a programming error.
  3. The contestant might make either a reading comprehension or programming error while trying to implement the test generator.
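For concreteness, here is roughly what a contestant ends up writing from such a human-language description. This is a minimal sketch for a hypothetical array problem (the constraints, such as T ≤ 10 and N ≤ 100, are made-up for illustration) — every one of these numbers and loops is a place where the author’s description or the contestant’s reading of it can silently diverge:

```python
import random

def generate_test(max_t=10, max_n=100, max_val=10**9, seed=None):
    """Build one test file for a hypothetical problem: first line T,
    then for each case a line with N followed by a line of N integers.
    All bounds here are assumptions, not from any real problem."""
    rng = random.Random(seed)
    t = rng.randint(1, max_t)
    lines = [str(t)]
    for _ in range(t):
        n = rng.randint(1, max_n)
        lines.append(str(n))
        lines.append(" ".join(str(rng.randint(1, max_val)) for _ in range(n)))
    return "\n".join(lines) + "\n"

# Fixing the seed makes the generated test reproducible for debugging.
print(generate_test(seed=42))
```

If the author instead shipped this exact script, both sides would be generating from the same source of truth.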

Now, if the test generator were shared during the long challenge, none of these errors would be possible:

  1. The author cannot make a human language error, as there is no need for a human language description - the code speaks for itself.
  2. The author cannot make a programming error, as they are correct by definition: their test generator is the only source of truth. (Well, as long as the generated tests at least follow the problem’s format, that is.)
  3. The contestant cannot make a mistake, as they don’t need to write any code themselves.

What do you think? Would sharing the generator code improve the quality of long challenges?

Would it make problemsetting/testing more or less difficult on average?


Hey @andres96, thanks for the suggestion. It does make sense; we will discuss it with our contest admins and post an update here.