Dear Codechef Community,

I am having a really hard time understanding the underlying theory of the MINIMAX problem (Contest Page | CodeChef). I looked at some of the solutions but was still unable to figure out the underlying idea intuitively. Can anyone please help with an explanation of how to solve it?

Look at this solution.

Let r_i denote the minimum element of row i and c_j the maximum element of column j, and let dude = max (r_1, r_2, \dots, r_n) = min (c_1, c_2, \dots, c_n). Firstly, observe that if \exists (i, j) such that A[i][j] is the minimum in row i and the maximum in column j, then A[i][j] = dude.
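A small sanity check of that observation, assuming r_i is the minimum of row i and c_j is the maximum of column j (the matrix below is my own example, not from the problem):

```python
A = [[3, 1, 2],
     [5, 4, 6],
     [9, 7, 8]]

row_mins = [min(row) for row in A]        # r_i = minimum of row i
col_maxs = [max(col) for col in zip(*A)]  # c_j = maximum of column j

# A[2][1] = 7 is the minimum of row 2 and the maximum of column 1,
# so it equals both max(r_1..r_n) and min(c_1..c_n).
assert max(row_mins) == min(col_maxs) == A[2][1] == 7
```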

Suppose for some x \in A we want to find the minimum number of changes to be made in A so that x = max (r_1, r_2, \dots, r_n) = min (c_1, c_2, \dots, c_n). Select some row i and some column j. Suppose we make all elements of row i that are less than x equal to x, and similarly make all elements of column j that are greater than x equal to x. Note that now A[i][j] = x, and it is the minimum in row i and the maximum in column j, which means (using our observation) that x = max (r_1, r_2, \dots, r_n) = min (c_1, c_2, \dots, c_n). So if for every row we know the number of elements less than x, we can choose the row with the minimum such count; similarly, if we know the number of elements greater than x in each column, we can choose the column with the minimum such count. Adding the two gives the minimum number of changes needed for this x.
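The cost computation for one fixed x can be sketched like this (a minimal Python sketch; the function name is mine):

```python
def changes_for(A, x):
    """Minimum edits so x is the min of some row and the max of some column.

    Cheapest row to fix: the one with the fewest elements strictly below x.
    Cheapest column to fix: the one with the fewest elements strictly above x.
    """
    row_cost = min(sum(v < x for v in row) for row in A)
    col_cost = min(sum(v > x for v in col) for col in zip(*A))
    return row_cost + col_cost
```

For a matrix that already has a saddle point with value x, both minima are 0, so `changes_for` returns 0 for that x.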

Observe that we only need to do this for x \in A. To maintain the required information we can iterate over x \in A in increasing or decreasing order. In the solution mentioned, @veeramanjula maintained a pair of sets to get the required information (which are vr and n - vc) and chose to iterate in increasing order.
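The increasing-order sweep could look roughly like this (a sketch under my own variable names, not @veeramanjula's code; counts are updated incrementally as x grows through the distinct values of A):

```python
from collections import defaultdict

def min_changes(A):
    n, m = len(A), len(A[0])
    pos = defaultdict(list)            # value -> list of positions holding it
    for i in range(n):
        for j in range(m):
            pos[A[i][j]].append((i, j))
    values = sorted(pos)

    cnt_less = [0] * n                 # per row: elements strictly below current x
    cnt_greater = [                    # per column: elements strictly above current x
        sum(A[i][j] > values[0] for i in range(n)) for j in range(m)
    ]

    best = float('inf')
    for k, x in enumerate(values):
        if k > 0:
            for (i, j) in pos[values[k - 1]]:
                cnt_less[i] += 1       # previous value is now strictly below x
            for (i, j) in pos[x]:
                cnt_greater[j] -= 1    # x itself is no longer strictly above x
        best = min(best, min(cnt_less) + min(cnt_greater))
    return best
```

Each distinct value is touched a constant number of times during the updates, so the sweep itself is linear in the matrix size apart from the `min(...)` scans (which the actual solution speeds up with the maintained sets).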