MAXDIFFW - Editorial

PROBLEM LINK:

Practice
Contest

Author: Mugurel Ionut Andreica
Tester: Mugurel Ionut Andreica
Editorialist: Lalit Kundu

DIFFICULTY:

Challenge

PREREQUISITES:

challenge, heuristics, graphs, game-theory

PROBLEM:

================

Chef has a complete directed graph composed of N vertices. A game on this graph is played. The game consists of alternating moves and you make the first move. Your first move consists of choosing a directed edge from the graph. Let’s assume that this edge is (v_1, v_2). Then Chef will choose an edge (v_2, v_3). After that, at his/her turn, each player (whether you or Chef) will choose a directed edge which has not been chosen before by any of the players and whose starting vertex is the destination vertex of the directed edge chosen in the previous move (by the opponent). Thus, at your second move, you will choose an edge (v_3, v_4), then Chef will choose an edge (v_4, v_5), and so on. The game will end when the player whose turn is next cannot make any move (meaning that the destination vertex of the last chosen edge has no unselected outgoing edges left).

At the end of the game your game score GS_\textrm{you} and Chef’s game score GSChef will be computed. GS_\textrm{you} is defined as the sum of the costs of all the edges you selected during the game and GS\textrm{Chef} is analogously defined as the sum of the costs of all the edges Chef selected during the game. Your final score FS for the game is defined as the difference GS_\textrm{you} - GS_\textrm{Chef} (note that FS could be zero or negative as well).

Chef, has implemented one of the two basic strategies described below:

  • Strategy MAX: Let’s assume that the last edge you selected is (a, b) and it is now Chef’s turn to move. Chef will select the non-selected edge (b, c) having maximum cost among all the possible edges he can select (if any).
  • Strategy RANDOM: Let’s assume that the last edge you selected is (a, b) and it is now Chef’s turn to move. Among all the non-selected edges (b, c), Chef will select one uniformly at random.

EXPLANATION:

================

First step is to know the Chef’s strategy since that is unknown to us. With a certain probability we can actually make that guess based on what kind of moves Chef is making. If at any step, Chef doesn’t choose maximum outgoing edge then we know for sure that his strategy is RANDOM, or else after k moves with probability \frac{1}{N^k}, we can say that his strategy is MAX. You can tweak over the variable k to actually decide the strategy of Chef.

We build a game tree. Game tree is a directed graph whose nodes are positions in a game and whose edges are moves. The complete game tree for a game is the game tree starting at the initial position and containing all possible moves from each position. Here the state at each node is the current vertex and the turn index; we also have to store the score obtained to get at the current node in the Game tree. Now, we can’t build the complete tree but we can call functions in our tree upto a certain depth to work within the time limit. Again, you have to tweak the depth to get better results based on the test cases. We can also prune our tree, basically something like Alpha-Beta pruning, combined with our heuristics to get better results.

Now, depending on Chef’s strategy we have to decide our strategy.

FOR MAX STRATEGY

From each state we consider what options we have:

  • If it’s the opponent’s turn, just simulate the move (choose the maximum remaining outgoing edge).
  • If it’s your turn, consider every outgoing edge and call the function recursively.

Pruning strategies:

  • When it’s your turn, only consider a few moves with scores close to the best one(the best one is the one maximizing your score minus the score of the next move made by the opponent).
  • When you reach a state with some score, do not continue from that state if you previously reached the same state with a score much higher than the current score.
  • Do not continue the search after a certain number of moves (limited depth of the search tree).

FOR RANDOM STRATEGY

  • When it’s your turn, choose a move which maximizes some kind of score depending on the cost of the edge you chose (let’s call this score S) and the remaining outgoing edges of the destination vertex (let’s call this vertex v).

Possible functions to maximize:

  • S - (sum of costs of outgoing edges from v) / (number of outgoing edges from v)
  • RT*S - sqrt((sum of squares of costs of outgoing edges from v) / (number of outgoing edges from v)) where RT is a constant which can give more or less weight to the cost of the next selected edge.
  • S - (a * min cost of an outgoing edge from v + (1-a) * max cost of an outgoing edge from v), with a being a number between 0 and 1.

Top scorers have used last two strategies in their best scoring solutions.

TUNING PARAMETERS

We use lot of parameters which are unknown to us and we have to tune them to fit into the test cases well. Top scorers always use various techniques to fine tune these parameters to score well on the test cases.

For the visible test cases

Let’s say you have parameter P and you want to try two values of it: P_1 and P_2.
You make a submission with P=P_1 and allocate memory proportional to the overall score - you can see the memory reported, say M_1; you do the same for P=P_2 and you see the allocated memory M_2. Now since memory reported is actually shown for visible test cases, we easily know which parameter gave a larger value.

For the invisible test cases

You need to use asserts in your solutions.
For instance, you try P=P_1 and then say: if score on test case X is > 200000 then assert(0); else if score > 400000 then while(1); else if score > 600000 then printf(1/0) else if score > 800000 then exit(1). Basically here you cannot find the exact score, but you can find a range based on the outcome of your program (SIGABRT, TLE, SIGFPE, NZEC, AC, etc.).

This is the end of editorial. And I hereby thank @mugurelionut for helping me out with this editorial. At several places, I have verbatim quoted him in the editorial.

AUTHOR’S, TESTER’S SOLUTIONS:

setter
tester

1 Like

May be it will be better to do more tests or decrease number of available submit to avoid overlearning on hidden data? Seems that it makes competition much less interesting and technical.

1 Like

The setter and tester link’s are for SEAGM2.

Hi @argos, We will be limiting number of submissions to 250 in the next long contest. We will try to think of some other measures to prevent that too. If you have any suggestions, please feel free to share.

1 Like

@dpraveen: As I also wrote you in an email, my suggestion is to also hide memory and time limits for the “visible” test cases, too. So, basically, when one submits a solution, the only “feedback” one gets is the score on the “visible” test cases and the status of the submission (AC, RE, TLE, WA, etc.) [ I don’t have a better term than “visible” for the tests scored during the contest ]. This can be done pretty easily from what I’ve seen in the code of the master judge.

@dpraveen: And, finally, one step further for killing most possibilities of overfitting a solution to the test data is to always show the “Correct answer” verdict. The master judge can assign a default score (e.g. 0.001) to a test case if its outcome is TLE, WA, RE, etc. (i.e. for any outcome different from OK). This will mean that every submission will get AC this way, but if it behaves badly (WA, TLE, RE, etc.) this score will be very low.

Hi @mugurelionut,
I agree with your first suggestion. It will be used in next contest.

Regarding the second, Sometimes, it might be the case that if a solution is getting RE/TLE in few test cases and only “Correct answer” verdict is shown, then it is tough for the user to figure out whether it is getting RE, TLE or not and that might be frustrating for many of the users.

If needed, we might decrease number of submissions further (may be 150/200).
We were thinking about creating different set of files for rejudging after the contest, though we need to have a feedback of community for this.

My suggestion is to increase number of hidden tests and to exclude open tests from final score. Decreasing number of submissions to 150/200 will help, but in this case you should care about cheater test accounts. I agree with idea to hide memory and time counters, but I think it is not good idea to hide status of submission at all.

Hi @argos: We use SPOJ system for problems testing which set a limit of 20 test files per problem. We will definitely try to look for the option of increasing number of test files. Thanks a lot for the suggestion !!