
SEAFUNC - Editorial


PROBLEM LINK:

Practice

Contest

Author: Sergey Nagin

Tester: Jingbo Shang

Editorialist: Hanlin Ren

DIFFICULTY:

Challenge

PREREQUISITES:

Greedy, Heuristics, etc

PROBLEM:

You are given an $n\times n$ matrix $A$. You need to construct a matrix $B$ of the same size; initially, every element of $B$ is $0$. In one operation you choose $9$ parameters $a_1,a_2,b_1,b_2,c_1,c_2,d,l,r$, and for all $l\le i\le r$, let $$j=\lfloor\frac{a_1i^3}{a_2}\rfloor+\lfloor\frac{b_1i^2}{b_2}\rfloor+\lfloor\frac{c_1i}{c_2}\rfloor+d,$$ then set $B_{i,j}\gets 1$. Using as few operations as possible, you must construct a $B$ that differs from $A$ in at most $100$ cells.
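For example, choosing $a_1=1$, $a_2=2$, $b_1=c_1=0$, $b_2=c_2=1$, $d=1$, $l=1$, $r=3$ gives $j=\lfloor i^3/2\rfloor+1$, so the single operation sets $B_{1,1}$, $B_{2,5}$ and $B_{3,14}$ (assuming $n$ is large enough for these cells to exist).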

EXPLANATION:

I'll list some common approaches (those used by more than one contestant) below.

The most trivial one

It's trivial to generate a valid solution.

Let $a_1=b_1=c_1=0$, $a_2=b_2=c_2=1$, $d=y$ and $l=r=x$; then one such operation sets $B[x][y]\gets 1$.

This approach also stays within the limit of $N^2$ operations, and it scores 3.21 pts. A solution.

If you choose to ignore the last $100$ $1$'s (using the allowance of $100$ differing cells), you get 3.261 pts, which is a little bit higher. A solution.
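As a sketch, the whole strategy fits in a few lines of C++. The I/O format assumed here (the matrix as $n$ binary strings, one output line with the nine parameters per operation, 1-indexed cells) is an assumption; check the problem statement for the real one.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<string> a(n);
    for (auto &row : a) cin >> row;

    // Collect all cells containing a 1 (1-indexed, which we assume the judge expects).
    vector<pair<int,int>> ones;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (a[i][j] == '1') ones.push_back({i + 1, j + 1});

    // Up to 100 differing cells are allowed, so skip the operations for the last 100 1's.
    int k = max(0, (int)ones.size() - 100);
    cout << k << "\n";
    for (int t = 0; t < k; t++) {
        auto [x, y] = ones[t];
        // With a1=b1=c1=0 and a2=b2=c2=1, j = d for every i,
        // so l=r=x and d=y sets exactly B[x][y] = 1.
        cout << "0 1 0 1 0 1 " << y << ' ' << x << ' ' << x << "\n";
    }
}
```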

Other approaches

Let's find consecutive $1$'s in a column. Any maximal (non-extendable) consecutive segment of $1$'s can be handled with a single operation: a constant function ($a_1=b_1=c_1=0$, $d$ equal to the column index) over the segment's row range. This approach gives you 36.365 pts. A solution.

We can, of course, stop the algorithm once at most $100$ $1$'s remain. However, this improves the score only slightly, to 37.512 pts. A solution.

A smarter approach is to sort all non-extendable consecutive segments of $1$'s by length and ignore the shortest ones until $100$ cells have been ignored. This gives a bigger improvement, to 43.704 pts. A solution.
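A minimal C++ sketch of this segment-sorting strategy, under the same assumed I/O format as above:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<string> a(n);
    for (auto &row : a) cin >> row;

    // Every maximal vertical run of 1's, stored as (length, column, top row).
    vector<array<int,3>> segs;
    for (int j = 0; j < n; j++)
        for (int i = 0; i < n; i++)
            if (a[i][j] == '1') {
                int top = i;
                while (i + 1 < n && a[i + 1][j] == '1') i++;
                segs.push_back({i - top + 1, j, top});
            }

    // Ignore the shortest segments while the total of ignored cells stays <= 100.
    sort(segs.begin(), segs.end());
    int skipped = 0, first = 0;
    while (first < (int)segs.size() && skipped + segs[first][0] <= 100)
        skipped += segs[first++][0];

    // One operation per remaining segment: the constant function j = d.
    cout << (int)segs.size() - first << "\n";
    for (int s = first; s < (int)segs.size(); s++) {
        auto [len, col, top] = segs[s];
        cout << "0 1 0 1 0 1 " << col + 1 << ' ' << top + 1 << ' ' << top + len << "\n";
    }
}
```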

Another approach: when we meet $a_{i,j}=0$ with $a_{i-1,j}=1$ and $a_{i+1,j}=1$, we modify $a_{i,j}$ to be $1$, merging the two runs into one segment. Each modification spends one of the $100$ allowed differing cells, so it can't be done more than $100$ times, and some contestants chose to make at most one modification per column. If the limit of $100$ differing cells isn't exhausted, we then use the sorting approach above. This gives you 43.915 pts. A solution.
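A sketch of that preprocessing step, reusing `n` and `a` from the previous snippet and run before its segment extraction; the budget bookkeeping shared with the segment-skipping step is deliberately simplified here:

```cpp
// If a single 0 separates two 1's in a column, pretend it is a 1 so the
// runs merge into one segment. Each flip spends part of the 100-cell budget,
// so the later segment-skipping step may only use what remains.
int flips = 0;
for (int j = 0; j < n; j++)
    for (int i = 1; i + 1 < n; i++)
        if (a[i][j] == '0' && a[i - 1][j] == '1' && a[i + 1][j] == '1' && flips < 100) {
            a[i][j] = '1';   // B gets a 1 here while A has a 0
            flips++;
            break;           // at most one modification per column, as some contestants did
        }
```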

ALTERNATIVE SOLUTION:

Please, feel free to discuss your approaches. As this is a challenge problem, there must be a bunch of different approaches!

asked 07 Sep '17, 19:17 by 7★r_64, edited 12 Sep '17, 10:13

No approach for the 100-point solution? Or will it be added later?

(11 Sep '17, 15:31) vijju123 ♦♦5★

Let me be the first one to share my non-trivial ideas.

We choose some of the valid functions $$ f(x) = \left\lfloor \frac {a_1 x^3}{a_2} \right\rfloor + \left\lfloor \frac {b_1 x^2}{b_2} \right\rfloor + \left\lfloor \frac {c_1 x}{c_2} \right\rfloor + d, $$ and then build a table over all points (i, j), i.e. the table stores, for each (i, j), which functions go through (i, j).

Then we handle every point (i, j) with value 1, in some order.

Valid functions are chosen empirically (simply by trying every candidate). When we handle a point (i, j), we greedily pick the function through it that covers the most points with value 1. Points are handled from right to left along the x-axis, and from the middle outward along the y-axis.
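A condensed C++ sketch of that greedy choice, assuming the function table described above is already built (all container names are illustrative, not taken from the linked code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// through[i][j]  : indices of candidate functions passing through cell (i, j)
// cells[f]       : the cells (x, y) that function f passes through on the board
// isOne, covered : n x n flags for "A has a 1 here" / "already covered by B"
int bestFunctionFor(int i, int j,
                    const vector<vector<vector<int>>> &through,
                    const vector<vector<pair<int,int>>> &cells,
                    const vector<vector<char>> &isOne,
                    const vector<vector<char>> &covered) {
    int best = -1, bestGain = -1;
    for (int f : through[i][j]) {
        int gain = 0;
        for (auto [x, y] : cells[f])            // count the 1-cells this
            if (isOne[x][y] && !covered[x][y])  // function would newly cover
                gain++;
        // The real solution also picks l and r so the chosen stretch of the
        // function stays on 1-cells only; that bookkeeping is omitted here.
        if (gain > bestGain) { bestGain = gain; best = f; }
    }
    return best;
}
```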

Here is my best code.

I'd like to invite the top three coders @ceilks, @mugurelionut and @dreamoon4 to share their approaches.

answered 11 Sep '17, 20:56 by 7★liouzhou_101

I really appreciate the effort you put into the solutions of challenge problems. I started attempting challenge problems only because I kept coming across your editorials/discussions, and they made me realize that by thinking a bit more optimally I can get a significant (non-zero :p) score in them. :)

(11 Sep '17, 21:09) vijju123 ♦♦5★

I just used "linear" function (i.e. a1=b1=0) but a whole lot of them. Witthin the timelimit I managed to consider all values of d and coprime pairs of c1,c2 with abs(c1)<50, c2<95 and abs(c1/c2)<5.5. The values for the constraints where chosen by experimentation.

Instead of dealing with each point at a time I collected all admissible (= covering only ones) and maximal (= not a subset of another admissible segemnt) segments. With the given constraints on c1,c2 there were just a few 100thousends of them.

The second step was a greedy algorithm, always picking a subset covering the most ones not yet covered.

in addition I used some rather heuristic approach to deal with the possibility of a larger segment covering a single zero and sgemnts with abs(c1/c2)>5.5. But those two modification yielded only an improvement of <1%.
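For illustration, a direct (unoptimized) C++ sketch of the segment collection described above; the actual submission has to be much more careful about the time limit, and the maximality filtering across functions is omitted:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Floor division, since C++ '/' truncates toward zero for negative values.
long long fdiv(long long p, long long q) {
    return p / q - ((p % q != 0) && ((p < 0) != (q < 0)));
}

struct Seg { int c1, c2, d, l, r; };

// Collect all admissible (1-cells only) segments along the linear functions
// j(i) = floor(c1*i/c2) + d, under the constraints quoted above.
vector<Seg> collectSegments(const vector<string> &a) {
    int n = a.size();
    vector<Seg> segs;
    for (int c2 = 1; c2 < 95; c2++)
        for (int c1 = -49; c1 <= 49; c1++) {
            if (__gcd(abs(c1), c2) != 1) continue;   // coprime pairs only
            if (abs(c1) >= 5.5 * c2) continue;       // abs(c1/c2) < 5.5
            for (int d = -n; d <= n; d++) {          // -N <= d <= N
                int run = 0;
                for (int i = 0; i < n; i++) {
                    long long j = fdiv((long long)c1 * i, c2) + d;
                    bool ok = j >= 0 && j < n && a[i][(int)j] == '1';
                    if (ok) run++;
                    if (run > 0 && (!ok || i == n - 1)) {
                        int r = ok ? i : i - 1;      // last row of the run
                        segs.push_back({c1, c2, d, r - run + 1, r});
                        run = 0;
                    }
                }
            }
        }
    return segs;
}
```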

answered 12 Sep '17, 03:00 by 7★ceilks

Damn! I also went for this approach, but faced a setback due to the constraint $-N\le d\le N$. I was considering that any 2 points can be joined by a straight line, so I just found the equation of that line. (I was getting ~30 at the time, so that would practically have halved my functions and would have been a good boost. :D)

(12 Sep '17, 10:01) vijju123 ♦♦5★
I noticed you have a function "impr" in your solution, which does something similar to what I did to improve my initial Greedy solution - but you seem to never call it :) Is it because it didn't bring sufficient score improvements compared to giving more time to your Greedy algorithm?

(13 Sep '17, 04:37) mugurelionut7★

Yes, exactly. I tried it, but increasing the number of functions turned out to be better.

(14 Sep '17, 03:01) ceilks7★

I pre-generated a set of linear and quadratic functions (all with d=0), for which I precomputed all their values. Out of these I sub-select some functions for every test. I think I keep almost all possible linear functions (with d=0 there are only approx. 6K); I filtered the quadratic functions much more - I keep only those with abs(b1/b2) and abs(c1/c2) below some small threshold (I think ~24K of them).

Then I run a Greedy algorithm. I pick a random point and I choose the function passing through it which covers the most yet uncovered points. When iterating through the selected functions, I now consider their correct value of d (that value which makes them pass through the current point, if it's within the appropriate range). When there aren't too many points left to cover, I also consider a small subset of cubic functions.
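In code, that per-point choice of d is just a subtraction (a tiny sketch; the names are hypothetical):

```cpp
#include <vector>
using namespace std;

// f0[f][i] = value at row i of pre-generated function f (all stored with d = 0).
// The only d that makes function f pass through a base point (i, j) is the
// difference below; it is kept only if it lies in the allowed range [-N, N].
inline int shiftFor(const vector<vector<int>> &f0, int f, int i, int j) {
    return j - f0[f][i];
}
```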

Choosing the "base" point randomly is important. I tried iterating in some order (top-to-bottom, left-to-right, and all their variations), and the results are from 10% to 4% worse.

I only cover 1s and I don't allow the functions to go outside of the board. I stop when there are <= 100 1s left to cover. I also remove redundant functions, if any (there were cases when functions added early on became redundant by the end).

This Greedy takes ~0.8 seconds on 20 test cases. I tried a lot to make it faster, but in the end that's all I could do. Making it faster would have improved my score in two ways: 1) I could have tried more functions in the available time (though the improvement would not have been that large); 2) I could have run more iterations of stage 2, described below.

In the remaining time, I try to improve the solution found by Greedy for the top 10 test cases with the largest scores (essentially those with many 1s). I select the F functions which cover the smallest number of unique 1s (i.e. the least efficient ones) and remove them from the solution. Then I run the Greedy algorithm to "repair" the solution (again, until <= 100 uncovered 1s remain). It's possible that a "repair" actually increases the size of the solution - that's fine, I accept the changes anyway, in order to reach more diverse solutions.
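A sketch of one such destroy-and-repair iteration, with hypothetical helpers standing in for the routines described above:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical types and helpers standing in for the real implementation:
// Solution holds the chosen functions; uniqueOnesCovered counts the 1s that
// only this function covers; runGreedy re-covers until <= 100 1s remain;
// removeRedundant drops functions made useless by later additions.
struct Func { /* parameters a1, a2, b1, b2, c1, c2, d, l, r */ };
struct Solution { vector<Func> funcs; /* plus coverage bookkeeping */ };
int  uniqueOnesCovered(const Solution &sol, const Func &f);
void runGreedy(Solution &sol);
void removeRedundant(Solution &sol);

// One "repair" iteration: drop the F least efficient functions and let the
// greedy fill the hole. The result is kept even if the solution grows,
// in order to reach more diverse solutions.
void improveOnce(Solution &sol, int F = 4) {
    sort(sol.funcs.begin(), sol.funcs.end(),
         [&](const Func &x, const Func &y) {
             return uniqueOnesCovered(sol, x) < uniqueOnesCovered(sol, y);
         });
    int drop = min<int>(F, sol.funcs.size());
    sol.funcs.erase(sol.funcs.begin(), sol.funcs.begin() + drop);
    runGreedy(sol);
    removeRedundant(sol);
}
```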

I could run only ~10 iterations with F=4 for each of the top 10 tests in the remaining ~0.2 seconds and this improved my score by ~2.5% (I implemented it in the last day of the contest). Giving just 0.15 more seconds to my solution, I could obtain an extra 1% improvement on my local tests (which I think are a good approximation of the official ones, because any improvement I obtained locally was also an improvement on the official tests). Giving it 1 more second, I could get more than 2% improvement, and with 9 more seconds, more than 4% improvement. So, again, I tried to optimize the code for speed, in order to be able to run as many iterations as possible, but in the end I reached my limit :) Anyway, it seems to me that the problem could have allowed more interesting approaches if the time limit had not been so tight (1 second for 20 test cases is very tight!).

answered 13 Sep '17, 04:32 by 7★mugurelionut
