CHNWGM - Editorial

PROBLEM LINK:

Contest
Practice

Author: Vasya Antoniuk
Tester: Istvan Nagy
Editorialist: Praveen Dhinwa

DIFFICULTY:

challenge

PREREQUISITES:

PROBLEM:

EXPLANATION:

Please feel free to discuss your approach in the comments. I will complete the editorial tomorrow with a compilation of the various approaches used by contestants.

AUTHOR’S AND TESTER’S SOLUTIONS:

Setter
Tester

I tried to place the pieces (figures) in the order they were received (I did not exploit the freedom to permute them).

My approach was to find all (row, col) positions where the piece can be placed without overlapping already filled cells or crossing the board boundaries. For each such position I calculated a score for the resulting board and chose the position that leads to the best score.

For score calculation I used the following:

  • The more lines you make disappear, the better your score is.
  • For each row and column, the fewer transitions (empty to filled and filled to empty) you have, the better your score is. This avoids scattering.

With the above two criteria, the best score I could get was 3622 on the 20% of the files. (To give an idea of how good this is, the best score was more than 1.46 million.)
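For illustration, here is a minimal sketch of the two scoring criteria above (cleared lines and row/column transitions). The 10x10 boolean board representation and the relative weights are assumptions, not the actual submission:

```cpp
#include <array>
#include <cstdio>

constexpr int N = 10;
using Board = std::array<std::array<bool, N>, N>;  // true = filled cell

// Count full rows plus full columns (the lines that would disappear).
int clearedLines(const Board& b) {
    int lines = 0;
    for (int r = 0; r < N; ++r) {
        bool full = true;
        for (int c = 0; c < N; ++c) full = full && b[r][c];
        lines += full;
    }
    for (int c = 0; c < N; ++c) {
        bool full = true;
        for (int r = 0; r < N; ++r) full = full && b[r][c];
        lines += full;
    }
    return lines;
}

// Count empty<->filled transitions along every row and column;
// fewer transitions means a less scattered board.
int transitions(const Board& b) {
    int t = 0;
    for (int r = 0; r < N; ++r)
        for (int c = 0; c + 1 < N; ++c) t += (b[r][c] != b[r][c + 1]);
    for (int c = 0; c < N; ++c)
        for (int r = 0; r + 1 < N; ++r) t += (b[r][c] != b[r + 1][c]);
    return t;
}

// Hypothetical linear combination; the real weights were not published.
double score(const Board& b) {
    return 100.0 * clearedLines(b) - 1.0 * transitions(b);
}

int main() {
    Board b{};                     // empty board
    b[9][0] = b[9][1] = true;      // place two cells on the bottom row
    std::printf("score = %.1f\n", score(b));
}
```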

Source code: (CodeChef submission link)


I first wrote an evaluation function for a board which simulates about 10^5 random playouts and takes the average score. It seemed good, but it was far too expensive, so I decided to approximate this function.

I took the count of each 2x2 block pattern as the feature vector.
For example,

...
..*
.**

from the (partial) board above, we get

..      ..      .*
.. x 1, .* x 2, ** x 1.
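A minimal sketch of this 2x2-pattern feature extraction (assuming a 10x10 boolean board; encoding each pattern as a 4-bit index is a choice made here, not taken from the post):

```cpp
#include <array>
#include <cstdio>

constexpr int N = 10;
using Board = std::array<std::array<bool, N>, N>;

// counts[p] = number of 2x2 windows whose four cells, read as bits
// (top-left, top-right, bottom-left, bottom-right), form pattern p.
std::array<int, 16> patternCounts(const Board& b) {
    std::array<int, 16> counts{};
    for (int r = 0; r + 1 < N; ++r)
        for (int c = 0; c + 1 < N; ++c) {
            int p = (b[r][c] << 3) | (b[r][c + 1] << 2) |
                    (b[r + 1][c] << 1) | b[r + 1][c + 1];
            ++counts[p];
        }
    return counts;
}

int main() {
    Board b{};
    // The filled cells of the small example above, placed on a 10x10 board.
    b[1][2] = b[2][1] = b[2][2] = true;
    auto cnt = patternCounts(b);
    for (int p = 0; p < 16; ++p)
        if (cnt[p]) std::printf("pattern %2d occurs %d times\n", p, cnt[p]);
}
```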

Then I ran a linear least-squares fit to get the coefficients. The actual coefficients look like:

..       .*       .*       
.. +0.36 .. -0.32 *. -2.65
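In other words (a reading of the above, with notation not used in the original post): the estimated value of a board $b$ is linear in its 2x2 pattern counts, and the weights are chosen by least squares against the playout averages,

$$\hat{v}(b) = \sum_{p} w_p \, c_p(b), \qquad w = \arg\min_{w} \sum_{i} \bigl(\hat{v}(b_i) - y_i\bigr)^2,$$

where $c_p(b)$ is the number of occurrences of the 2x2 pattern $p$ in board $b$ and $y_i$ is the average random-playout score observed from training board $b_i$.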

Moreover, I penalized positions that leave no space for big pieces.
This scoring function is the bottleneck. To speed it up, I compute only the difference from the board before the placement. I also use some bit manipulation.
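For reference, one common way to set up the bit manipulation (a sketch under assumptions, not the author's code) is to store each row as a 10-bit mask, so overlap tests and full-line clearing become a few AND/OR operations:

```cpp
#include <array>
#include <cstdint>
#include <cstdio>

constexpr int N = 10;
constexpr uint16_t FULL = (1 << N) - 1;
using Board = std::array<uint16_t, N>;   // bit c of rows[r] set -> cell filled

// A piece given as row masks relative to its top-left corner.
struct Piece { int h, w; std::array<uint16_t, 5> rows; };

bool canPlace(const Board& b, const Piece& p, int r, int c) {
    if (r + p.h > N || c + p.w > N) return false;
    for (int i = 0; i < p.h; ++i)
        if (b[r + i] & (p.rows[i] << c)) return false;   // overlap
    return true;
}

// Place the piece, then clear every full row and full column simultaneously.
void placeAndClear(Board& b, const Piece& p, int r, int c) {
    for (int i = 0; i < p.h; ++i) b[r + i] |= p.rows[i] << c;
    uint16_t fullCols = FULL;
    for (int i = 0; i < N; ++i) fullCols &= b[i];         // columns filled in every row
    for (int i = 0; i < N; ++i) {
        if (b[i] == FULL) b[i] = 0;                       // clear full row
        b[i] &= ~fullCols;                                // clear full columns
    }
}

int main() {
    Board b{};
    Piece bar5{1, 5, {0b11111}};
    if (canPlace(b, bar5, 9, 0)) placeAndClear(b, bar5, 9, 0);
    std::printf("bottom row mask = %x\n", (unsigned)b[9]);
}
```

The incremental part of the scoring would then only recompute the 2x2 pattern counts around the rows and columns touched by the placement, rather than over the whole board.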

After that, I did an almost naive full search over the permutations and positions of the three pieces.
I added some pruning that cuts the full search short when the board already looks good.

Although my final score is in first place (if the currently listed scores are final), there are several ideas I couldn't carry out:

  1. I couldn't speed the program up enough to reach the maximum of 50000 turns. More aggressive pruning is probably possible.
  2. I couldn't estimate the input distribution; my program is always optimized for the uniformly random case.
  3. I think I am too weak at this type of problem. I believe someone who is good at it could, in a short period such as one or two days, write a program that outperforms mine by a big margin.

I submitted quite a number of times, but mainly with 4 developments. :v

  1. Just printed one line, taking 3 of the 4 5*5 squares, to see whether I had understood the problem.

  2. Simulated the game in a straightforward manner: each time I take the pieces in order and put each one in the first valid place I find. Whenever I run out of options for at least one piece, I print -1.

  3. Instead of trying pieces 1, 2 and 3 just once, I try them in up to 3 rounds, because it may happen that piece 1 cannot be placed at first, while piece 2 or 3 can be placed and may clear some lines, freeing up a place for piece 1.

  4. Instead of going with the first place I can put the piece into, I look through all possible places and pick the one that clears the most rows and columns (if there is one).

  5. Instead of looking only at how many rows or columns would be cleared, I calculate a weighted measure of how much of each row and column is covered after the piece is placed. The weighting is such that clearing one row is better than occupying 9 cells of all the rows (and likewise for occupying 8, 7 or 6 cells; I am a little vague here myself and went mostly by intuition, but the score improved somewhat after I did this). A hypothetical sketch of such a weighting is shown below.
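Here is one hypothetical way such a weighting could look; the exact numbers are invented for illustration, with a full line made worth more than any amount of partial filling:

```cpp
#include <array>
#include <cstdio>

constexpr int N = 10;
using Board = std::array<std::array<bool, N>, N>;

// Hypothetical weight of a row/column with `filled` occupied cells.
double lineWeight(int filled) {
    if (filled == N) return 1000.0;   // a completed (cleared) line dominates
    return filled * filled;           // convex in the fill count for partial lines
}

double fillScore(const Board& b) {
    double s = 0;
    for (int r = 0; r < N; ++r) {
        int cnt = 0;
        for (int c = 0; c < N; ++c) cnt += b[r][c];
        s += lineWeight(cnt);
    }
    for (int c = 0; c < N; ++c) {
        int cnt = 0;
        for (int r = 0; r < N; ++r) cnt += b[r][c];
        s += lineWeight(cnt);
    }
    return s;
}

int main() {
    Board b{};
    for (int c = 0; c < 9; ++c) b[0][c] = true;   // a row with 9 of 10 cells filled
    std::printf("score = %.0f\n", fillScore(b));
}
```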

I know much more development is possible, but that's as far as I could go with the time I had.

Currently I see my score as 0.157. Not that bad, I guess.

My solution is as follows: At every step I consider all the 6 permutations of the 3 pieces (or fewer if some of the 3 pieces are identical) and use beam search in order to find the best sequence of moves. I can use different beam widths for each piece (e.g. a larger width for the 1st piece and a smaller width for the last one). The scoring function for evaluating the boards considers several features:

  • number of “corners” (i.e. points between grid cells which have an odd number of occupied cells around them, or exactly two diagonally-opposite occupied cells) - this is somewhat similar to @anta0’s 2x2 patterns, except that I add all the corners together and assign the same weight to all of them (a small sketch of this count appears after this list)

  • an approximation of the number of 3x3 pieces which I can place on the board: for every row R I consider all the maximal Ks such that a 3xK empty rectangle exists on the board, having R as its top row: for each such K I count K/3 such pieces (they may overlap with pieces counted on other rows, but that’s fine)

  • the number of 5x1 and 1x5 pieces which I can place on the board

  • the number of occupied cells on the board
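A small sketch of the corner count from the first bullet above. Treating cells outside the board as occupied is an assumption made here, and the sketch only counts corners, without the weighting:

```cpp
#include <array>
#include <cstdio>

constexpr int N = 10;
using Board = std::array<std::array<bool, N>, N>;

bool occ(const Board& b, int r, int c) {
    if (r < 0 || r >= N || c < 0 || c >= N) return true;   // outside = occupied
    return b[r][c];
}

int countCorners(const Board& b) {
    int corners = 0;
    for (int r = 0; r <= N; ++r)
        for (int c = 0; c <= N; ++c) {
            // The four cells around grid point (r, c).
            bool a = occ(b, r - 1, c - 1), d = occ(b, r - 1, c);
            bool e = occ(b, r, c - 1),     f = occ(b, r, c);
            int k = a + d + e + f;
            bool diag = (k == 2) && ((a && f) || (d && e));
            if (k % 2 == 1 || diag) ++corners;
        }
    return corners;
}

int main() {
    Board b{};
    b[4][4] = true;   // a single isolated cell
    // Expect 8: the 4 grid points around the cell, plus the 4 board corners
    // (since cells outside the board are treated as occupied here).
    std::printf("corners = %d\n", countCorners(b));
}
```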

I combine these features linearly (each with a different weight). In addition, if too few 5x1/1x5 pieces or too few 3x3 pieces can be placed, I apply some penalties (this essentially increases the weight of the first such pieces). The same goes for the number of occupied cells: up to a threshold I use one weight, above it a higher weight. However, I chose the weights manually. I wanted to choose them automatically, but I was worried that they wouldn't necessarily lead to the best solutions on the official test cases, because of the small number of test cases and the unknown probability distribution of the pieces.

When placing the next piece I consider only valid positions which also “touch” the edges of the board or other pieces.

This solution is not bad by itself, but a simple addition gave it a significant boost (on average). For each permutation of pieces I also simulate the addition of a 4th piece, of type 3x3. So I essentially pick the sequence of moves which leads to the best score after placing, in addition, also a 3x3 piece after them (if placing such a piece is possible).
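To make the search structure concrete, here is a compact, self-contained sketch of a per-turn beam search of this shape: try every permutation of the three pieces and keep only the best `width[i]` boards after placing the i-th piece. The evaluation is a trivial stand-in (fewer occupied cells is better) instead of the weighted features described above, and the virtual 4th 3x3 piece and the "touching" restriction are omitted:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

constexpr int N = 10;
constexpr uint16_t FULL = (1 << N) - 1;
using Board = std::array<uint16_t, N>;               // one 10-bit mask per row

struct Piece { int h, w; std::array<uint16_t, 5> rows; };
struct Move  { int piece, r, c; };
struct State { Board b; double score; std::vector<Move> moves; };

bool canPlace(const Board& b, const Piece& p, int r, int c) {
    if (r + p.h > N || c + p.w > N) return false;
    for (int i = 0; i < p.h; ++i)
        if (b[r + i] & (p.rows[i] << c)) return false;
    return true;
}

Board apply(Board b, const Piece& p, int r, int c) {
    for (int i = 0; i < p.h; ++i) b[r + i] |= p.rows[i] << c;
    uint16_t fullCols = FULL;
    for (int i = 0; i < N; ++i) fullCols &= b[i];
    for (int i = 0; i < N; ++i) {
        if (b[i] == FULL) b[i] = 0;
        b[i] &= ~fullCols;
    }
    return b;
}

// Stand-in evaluation: prefer boards with fewer occupied cells.
double evaluate(const Board& b) {
    int filled = 0;
    for (uint16_t row : b) filled += __builtin_popcount(row);
    return -filled;
}

// Beam search for one turn over all permutations of the three pieces.
std::vector<Move> bestSequence(const Board& start,
                               const std::array<Piece, 3>& pieces,
                               const std::array<int, 3>& width) {
    std::vector<State> beam{{start, evaluate(start), {}}};
    std::vector<Move> best;
    double bestScore = -1e18;
    std::array<int, 3> order{0, 1, 2};
    do {                                              // all 6 piece orders
        std::vector<State> layer = beam;
        for (int step = 0; step < 3; ++step) {
            const Piece& p = pieces[order[step]];
            std::vector<State> next;
            for (const State& s : layer)
                for (int r = 0; r < N; ++r)
                    for (int c = 0; c < N; ++c)
                        if (canPlace(s.b, p, r, c)) {
                            State t{apply(s.b, p, r, c), 0, s.moves};
                            t.score = evaluate(t.b);
                            t.moves.push_back({order[step], r, c});
                            next.push_back(std::move(t));
                        }
            std::sort(next.begin(), next.end(),
                      [](const State& x, const State& y) { return x.score > y.score; });
            if ((int)next.size() > width[step]) next.resize(width[step]);
            layer = std::move(next);
        }
        if (!layer.empty() && layer[0].score > bestScore) {
            bestScore = layer[0].score;
            best = layer[0].moves;
        }
    } while (std::next_permutation(order.begin(), order.end()));
    return best;                                      // empty => no full sequence found
}

int main() {
    Board b{};
    std::array<Piece, 3> pieces{{{1, 5, {0b11111}}, {3, 3, {7, 7, 7}}, {1, 1, {1}}}};
    auto moves = bestSequence(b, pieces, {10, 5, 3});
    return (int)moves.size() == 3 ? 0 : 1;
}
```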

Some issues that I had with this solution:

  • even with large beam widths, it’s not always capable of doing all the 50000 moves (it gets stuck at various points along the way) - e.g. with some beam widths of (10, 5, 3, 1) (which are already too large to execute all the moves in time), I could get 20% of the test cases running to 50000 moves, but only ~23000 moves (until stuck) on average

  • with large beam widths it is too slow, so it cannot complete 50000 moves within the strict time limit

  • with very small beam widths (<= 2) the solution is very bad (it gets stuck quickly, so it doesn’t execute too many moves)

  • with “reasonable” beam widths the scores have a huge variance (from executing the full 50000 moves to executing only 1000-2000)

The best trade-off was to use beam widths of 3, with heavy code optimizations (e.g. bit operations; some compiler-friendly optimizations - e.g. if you have an “if (A) { … } else { … }” and the “else” branch is more likely to be executed, then change this to “if (!A) { … } else { … }”; etc.) + pruning (cutting the scoring as soon as possible if the current state is worse than the worst state in the beam) + computing the features from scratch only on the parts of the board which were modified by the move (usually a much smaller area than the whole board). At best, on one of the 20% test cases, my code could score around 635K points before reaching the time limit. Given that with the full 50000 moves I usually scored around 670K-675K points on my local tests, this means I got pretty close to using up all the moves on this test case.
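A tiny illustration of the branch-reordering point (an invented example, not taken from the submission):

```cpp
#include <cstdio>

bool cellBlocked(int i) { return i % 1000 == 0; }   // made-up rare condition

int main() {
    long common = 0, rare = 0;
    for (int i = 1; i <= 1000000; ++i) {
        // Instead of `if (cellBlocked(i)) { rare path } else { common path }`,
        // test the negation so the frequent branch comes first.
        // (With GCC/Clang one could also hint this via __builtin_expect.)
        if (!cellBlocked(i)) ++common; else ++rare;
    }
    std::printf("common=%ld rare=%ld\n", common, rare);
}
```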

The extra 2 days of contest helped me because I had more time to optimize my code for speed (so my improvement in the last day was mostly faster code, which meant many more points on some of the 20% test data). But the lead I had on the 20% test data did not hold over the remaining 80% of the test data. Given the huge variance of my scores, I was somewhat expecting that. Still, I find it funny that my score and @anta0’s score ended up being so close (approx. 0.1% difference, i.e. around 5000-6000 points in absolute value). As a fun fact, in what ended up being my best scoring solution, I used a time limit of 4.86 sec. In other solutions I used a time limit of 4.90 sec - just this small difference was giving me ~10000 additional points on just the 20% test data (so maybe even more on the full test data), which is more than the difference between mine and @anta0’s score :slight_smile: However, the solutions that I submitted with the higher time limit also contained other changes, which essentially made them get the 10000 additional points on the 20% test data, but lose more than that on the rest of the data.

Finally, I would like to say that I don’t think this was a good challenge problem, for three main reasons:

  1. I believe all the solutions had very large score variance - essentially, if one gets stuck for some reason, it cannot continue at all, losing all the remaining moves (which can sometimes be a huge number, sometimes not); it would have been better to penalize the score somehow when a piece cannot be placed and allow the solution to continue with the next move after taking the penalty (or let the solution decide whether to stop or to take the penalty and continue)

  2. The probability distribution of the pieces was unknown. This means we cannot generate relevant local test cases and we cannot estimate the behavior on the official test cases - everything becomes very random in this case.

  3. For every probability distribution, one would need many test cases in order to properly test a solution. For instance, even if only the uniform distribution were used, 20 test cases would be too few to properly evaluate the average behavior on this distribution (because of the large score variance issue I mentioned). But when you use multiple such distributions, this essentially means that each distribution is tested on very few test cases, making the results even more random.


Please update this editorial. @admin
Thanks.

Tomorrow never comes

@anta0: Can you provide more details about how you computed the weights for your features? I see this problem as an optimization task rather than one of minimizing the mean squared error - e.g. pick the weights which lead to the largest total score on a set of randomly generated games. From this perspective, some kind of local search technique could be used to find the best weights (maybe even something like simulated annealing, gradient descent, etc.) - but, of course, it would take quite a lot of CPU time.
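To make the suggestion concrete, here is a sketch of such a tuning loop. Everything is illustrative: `averageGameScore` is a cheap stand-in for the expensive part (simulating a batch of random games with the candidate weights), and the schedule and step sizes are arbitrary:

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Stand-in objective: in the real setting this would play a batch of random
// games with the given feature weights and return the mean score.
double averageGameScore(const std::vector<double>& w) {
    double s = 0;
    for (size_t i = 0; i < w.size(); ++i)
        s -= (w[i] - 0.1 * (i + 1)) * (w[i] - 0.1 * (i + 1));
    return s;
}

int main() {
    std::mt19937 rng(12345);
    std::normal_distribution<double> noise(0.0, 0.05);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    std::vector<double> w(4, 0.0);                 // current weight vector
    double cur = averageGameScore(w);
    double temperature = 1.0;

    for (int iter = 0; iter < 20000; ++iter) {
        // Propose a small random perturbation of one weight.
        std::vector<double> cand = w;
        cand[rng() % cand.size()] += noise(rng);
        double val = averageGameScore(cand);
        // Simulated-annealing acceptance: always take improvements, sometimes
        // accept a worse candidate, less often as the temperature drops.
        if (val > cur || unif(rng) < std::exp((val - cur) / temperature)) {
            w = cand;
            cur = val;
        }
        temperature *= 0.9997;
    }
    std::printf("final average score %.4f with weights", cur);
    for (double x : w) std::printf(" %.3f", x);
    std::printf("\n");
}
```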

@anta0: About your comment that someone could make a much better program than yours - I think this might be true if the piece distribution were known in advance (e.g. uniform, or something else) and more tests were used for evaluation - this way someone could tune the features/weights for a better average score. But with the little information given in this problem, I find it unlikely that anyone could do significantly better, other than by sheer luck.

@mugurelionut I wrote a program which simulates random playouts (repeatedly generate a random figure and place it at a random valid position) and ran it to collect data (a list of pairs (board, average score of the playouts started from that board)). Then I fitted a linear regression to this data by least squares.
I got the idea of using random playouts from Monte Carlo tree search
(though I don't really understand that method).
The reason I believe someone else could get a much higher score is that I couldn't do any nontrivial pruning. Also, I believe the scoring function could be made more concise.
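For readers who want to experiment with the playout idea, here is a self-contained sketch of evaluating a board by random playouts. The piece set and the per-move scoring are simplified stand-ins, not the problem's exact rules:

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

constexpr int N = 10;
constexpr uint16_t FULL = (1 << N) - 1;
using Board = std::array<uint16_t, N>;

struct Piece { int h, w; std::array<uint16_t, 5> rows; };
const std::vector<Piece> PIECES = {   // a small, made-up piece set
    {1, 1, {1}}, {1, 5, {0b11111}}, {5, 1, {1, 1, 1, 1, 1}}, {3, 3, {7, 7, 7}},
};

bool canPlace(const Board& b, const Piece& p, int r, int c) {
    if (r + p.h > N || c + p.w > N) return false;
    for (int i = 0; i < p.h; ++i)
        if (b[r + i] & (p.rows[i] << c)) return false;
    return true;
}

// Place the piece, clear full lines, and return a stand-in score for the move.
int placeAndScore(Board& b, const Piece& p, int r, int c) {
    for (int i = 0; i < p.h; ++i) b[r + i] |= p.rows[i] << c;
    uint16_t fullCols = FULL;
    for (int i = 0; i < N; ++i) fullCols &= b[i];
    int cleared = __builtin_popcount(fullCols);
    for (int i = 0; i < N; ++i) {
        if (b[i] == FULL) { b[i] = 0; ++cleared; }
        b[i] &= ~fullCols;
    }
    return p.h * p.w + 10 * cleared;           // cells placed + bonus per cleared line
}

double playoutValue(const Board& start, int playouts, std::mt19937& rng) {
    double total = 0;
    for (int it = 0; it < playouts; ++it) {
        Board b = start;
        long long score = 0;
        while (true) {
            const Piece& p = PIECES[rng() % PIECES.size()];
            std::vector<std::pair<int, int>> pos;
            for (int r = 0; r < N; ++r)
                for (int c = 0; c < N; ++c)
                    if (canPlace(b, p, r, c)) pos.push_back({r, c});
            if (pos.empty()) break;            // drawn piece does not fit: stop the playout
            auto [r, c] = pos[rng() % pos.size()];
            score += placeAndScore(b, p, r, c);
        }
        total += score;
    }
    return total / playouts;
}

int main() {
    std::mt19937 rng(1);
    Board empty{};
    std::printf("estimated value of the empty board: %.1f\n",
                playoutValue(empty, 1000, rng));
}
```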