GRIDGAME - Editorial

Problem Link:

Practice

Contest

Difficulty:

Medium-Hard

Pre-requisites:

2-player Impartial games, line-sweep

Explanation:

Some definitions:

A losing cell is one such that, if the pawn were on that cell at the beginning of the game, the first player loses (with both players playing optimally).

A winning cell is one for which the first player can guarantee a win if the pawn were on that cell at the beginning of the game.

Also, we call a cell “bad” if it is impassable (just as a short form). Do not confuse “bad” with “losing”.

Characterization of losing and winning cells:

A cell X is winning iff there exists a move from X to some losing cell Y.

A cell X is losing iff there are no moves from it, or every cell Y reachable in one move from X is winning.
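
For intuition, here is a minimal memoised brute force of this characterization in C++ (a sketch under the assumption that a move slides the pawn any number of cells towards smaller x or smaller y, without landing on or crossing a bad cell, and that a player with no move loses). It is only usable for tiny grids, but it is handy for cross-checking the efficient algorithm below.

#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> bad;        // bad[x][y] == 1 iff the cell is impassable (sized to a toy grid)
map<pair<int,int>, bool> memo;  // memoised answer to "is the cell winning?"

bool winning(int x, int y) {
    auto it = memo.find({x, y});
    if (it != memo.end()) return it->second;
    bool win = false;
    // try every slide towards smaller x, stopping before the first bad cell
    for (int nx = x - 1; nx >= 0 && !bad[nx][y] && !win; --nx)
        if (!winning(nx, y)) win = true;
    // try every slide towards smaller y, stopping before the first bad cell
    for (int ny = y - 1; ny >= 0 && !bad[x][ny] && !win; --ny)
        if (!winning(x, ny)) win = true;
    return memo[{x, y}] = win;   // no move to a losing cell => (x, y) is losing
}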

We first try to find the set of losing cells.

Firstly, if there were no bad cells, then the set of losing cells would look like:


X
 X
  X
   X

In essence, the set of losing cells would be characterized by {(x, x): x >= 0}. This structure suggests the following strategy: let us try to write the set of losing cells of the game as a union of disjoint “diagonal segments of length K”: (x, y, K) = {(x+d, y+d): 0 <= d < K}. Clearly, when there are no bad cells, the set of losing positions is merely (0, 0, inf).
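
If you want to keep this representation explicit in code, a small record such as the following is enough (an illustrative name, not taken from the author's solution); the later sketches below use it.

struct Diag {           // the diagonal segment (x, y, K) = {(x+d, y+d) : 0 <= d < K}
    long long x, y, K;
};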

We now sweep a horizontal line, and keep track of “winning columns”. These are the columns in which the cell of the current row has a column-move to a losing cell (and is therefore winning). Note that these “winning columns” correspond to losing cells found in earlier rows, so we are also interested in which cells of the currently swept row are “losing” (since they will give us our “winning columns” in future rows).

Each contiguous run of non-bad cells in the row contains at most one losing cell. In fact, the losing cell (if it exists) sits in the left-most column of the run that is “not winning”: such a cell has no column-move to a losing cell, and every row-move from it lands in a winning column, so it is losing; any non-winning column further to the right can move left to this losing cell and is therefore winning. A good way to store these “winning columns” is as disjoint segments [L, R]. Adding a new column then only amounts to extending a winning segment by 1, and possibly merging two previously disjoint winning segments.

However, sweeping our line row by row is potentially very slow. Instead, we sweep our line only over rows having bad cells. We thus divide the rows into two types:

  1. Bad rows
  2. Good blocks

Good blocks are sets of consecutive rows having no bad cells. We need to handle these separately since they require us to update our “winning segments” appropriately before we encounter further bad rows.

Also, in order to look up segments quickly, we will store them in a balanced BST (such as an STL set) S.
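
One possible way to realise S with an STL set is sketched below (an assumption about the implementation, not the author's code; Seg, SegCmp and findCovering are illustrative names). The comparator orders segments by their right endpoint, so that lower_bound({y, y}) returns the only segment that can contain column y.

#include <set>

struct Seg { long long L, R; };

struct SegCmp {   // matches the comparator described in the pseudocode below
    bool operator()(const Seg& a, const Seg& b) const {
        if (a.R != b.R) return a.R < b.R;
        return a.L > b.L;
    }
};

using SegSet = std::set<Seg, SegCmp>;

// find the segment of S containing column y, or S.end() if column y is not winning
SegSet::iterator findCovering(SegSet& S, long long y) {
    auto it = S.lower_bound({y, y});
    if (it != S.end() && it->L <= y && y <= it->R) return it;
    return S.end();
}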

Initially, the winning segments set S is empty. Now,

Case 1. We encounter bad row x.

Suppose the winning segments and the row x (“B” denotes a bad cell) look like:


win:---WWWW---W----
 x: -----B------B--

Now, in subsequent rows, the column corresponding to the first B would no longer be winning, since you can’t pass through B along the column. Hence, for each bad cell (x, y), we need to delete column y from the winning segments, if y was winning.

This can be done by:


[L, R] = S.lower_bound([y, y])	//find which segment contains y
// NOTE: For lower_bound to work properly here, you need to define the segments' comparator as [L1, R1] < [L2, R2] <=> (R1 < R2 || (R1 == R2 && L1 > L2)). Note that L1 > L2 is used here.
if([L, R] exists and L <= y <= R)
	S.erase([L, R])
	//insert [L, y-1] and [y+1, R] if they are non-empty
	if(L < y) S.insert([L,y-1])
	if(R > y) S.insert([y+1,R])
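
A possible C++ rendering of this split step, using the SegSet and findCovering sketched earlier (again an assumption about implementation details, not the author's code):

void eraseColumn(SegSet& S, long long y) {
    auto it = findCovering(S, y);
    if (it == S.end()) return;                 // column y was not winning anyway
    Seg seg = *it;
    S.erase(it);
    if (seg.L < y) S.insert({seg.L, y - 1});   // left remainder, if non-empty
    if (seg.R > y) S.insert({y + 1, seg.R});   // right remainder, if non-empty
}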

This handles the maintenance of the “winning column segments”. We would still like to find the losing cells of this row. The logic for finding them has been described above (since this is just a single-row shift).

Introduce for simplicity two dummy bad cells (x, -1) and (x, inf). Now, in every contiguous set of non-bad cells, the left-most non-winning column would correspond to a losing cell.


//consider (x, y1) and (x, y2) to be bad cells such that there is a nonempty contiguous set of non-bad cells in-between.
//now find the earliest point in (y1, y2) that is not winning
y = y1+1
[L, R] = S.lower_bound([y,y])
if([L,R] exists and L<=y<=R)
	y = R+1	//it is the first one that is to the right of this covering segment
// else y = y1+1, since this itself is supposedly losing

//we need to check also that this cell y is within the (y1, y2) set of non-bad cells
if(y < y2)
	losingSet.add((x,y,1));
	add [y,y] to S

The process of adding [y, y] to S should also merge adjacent segments as required:


[L, R] = [y, y]
[L1, R1] = S.lower_bound([y+1, y+1])
if([L1, R1] exists and L1 == y+1)
	S.erase([L1, R1])
	R = R1

[L2, R2] = S.lower_bound([y-1,y-1])
if([L2, R2] exists and R2 == y-1)
	S.erase([L2, R2])
	L = L2

S.insert([L, R])

This handles the merging of [y, y] with any neighbouring segments ending at y-1 or starting at y+1.
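
A matching C++ sketch of this insert-with-merge step, with the same assumed SegSet representation (illustrative only):

void insertColumn(SegSet& S, long long y) {
    long long L = y, R = y;
    auto right = S.lower_bound({y + 1, y + 1});   // segment starting exactly at y+1, if any
    if (right != S.end() && right->L == y + 1) { R = right->R; S.erase(right); }
    auto left = S.lower_bound({y - 1, y - 1});    // segment ending exactly at y-1, if any
    if (left != S.end() && left->R == y - 1) { L = left->L; S.erase(left); }
    S.insert({L, R});
}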

Thus a bad row can be handled in O(log N) set operations per bad cell in it.

Case 2. Handling a good block.

Let the bounding bad rows be x = x1 and x = x2, where x1 = -1 means there are no bad rows before this block and x2 = inf means there are no bad rows after it.

Also remember that the current “winning segment set” S holds the information as of bad row x1; we need to update it so that it is correct when we reach bad row x2, and also record any losing segments inside the block.

Taking an example:


S: WWWWWWWWW---WW-------...---
x1
G:          X
O:           X
O:            X
D:               X
 :                X
B:                 X
L:                  X
O:                   X
C:                    X
K:                     X
x2
S: WWWWWWWWWWWWWWWWWWWWW---...---

In the above, “X” denotes a losing cell, and the desired configuration of S has been depicted after the good block.

What this means is that we need to add diagonal losing segments of total length x2 - x1 - 1, whose y-ranges neither intersect each other nor the existing winning segments of S. Afterwards, the columns covered by these new diagonals become winning, so some of the initial segments of S get merged into one large consecutive winning segment at the beginning.

Pseudocode for this updating is as follows


x = x1+1
need = x2-x1-1	//total length of losing diagonal segments still to be added
do
	[L1, R1] = first segment of S	//treat a missing segment as [inf, inf]
	[L2, R2] = second segment of S
	if(L1 != 0)	//the gap before the first segment is [0, L1-1]
		K = min(need, L1)
		y = 0
	else		//otherwise use the gap [R1+1, L2-1] between the first two segments
		K = min(need, L2-R1-1)	//take as much of this gap as possible
		y = R1+1
	losingSet.add((x, y, K))
	x = x+K
	need = need-K
	//columns [y, y+K-1] are winning from now on: insert them into S, merging with neighbours
while need > 0
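
A possible C++ rendering of this loop, building on the Diag and SegSet sketches above (illustrative only), with the merging of the newly winning columns [y, y+K-1] inlined:

#include <vector>
#include <algorithm>
#include <iterator>

void processGoodBlock(SegSet& S, std::vector<Diag>& losingSet, long long x1, long long x2) {
    long long x = x1 + 1;
    long long need = x2 - x1 - 1;                       // number of rows in the good block
    while (need > 0) {
        long long y, K;
        auto it = S.begin();
        if (it == S.end() || it->L != 0) {              // gap [0, L1-1] (everything, if S is empty)
            y = 0;
            K = (it == S.end()) ? need : std::min(need, it->L);
        } else {                                        // gap [R1+1, L2-1] between the first two segments
            auto nxt = std::next(it);
            y = it->R + 1;
            long long gap = (nxt == S.end()) ? need : nxt->L - it->R - 1;
            K = std::min(need, gap);
        }
        losingSet.push_back({x, y, K});                 // record the losing diagonal (x, y, K)
        // columns y .. y+K-1 become winning: insert them as one segment, merging with neighbours
        long long L = y, R = y + K - 1;
        auto right = S.lower_bound({R + 1, R + 1});
        if (right != S.end() && right->L == R + 1) { R = right->R; S.erase(right); }
        auto left = S.lower_bound({L - 1, L - 1});
        if (left != S.end() && left->R == L - 1) { L = left->L; S.erase(left); }
        S.insert({L, R});
        x += K; need -= K;
    }
}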

Although it may seem that we could end up adding many diagonal segments (as many as O(N^2)), consider the following amortized argument. Let X be the number of good blocks (X <= N) and Y the number of segments ever inserted into S; it can be seen that Y = O(N). Every iteration of the loop above either finishes the current good block (which happens at most once per block) or completely fills a gap between two segments of S, merging them into one; the number of such merges is bounded by the number of insertions. Hence the total number of losing diagonal segments we add is at most 2*X + Y = O(N).

Hence we get an O(N log N) algorithm for finding the losing cells.

The only thing left is to answer the queries, which is simple. Since the losing segments are diagonal, if we sort them by (x - y) (breaking ties by x), we can binary-search for the query cell (x, y) among them. If (x, y) lies inside the found segment, it is a losing cell; otherwise it cannot belong to any other losing segment and hence it is a winning cell.
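
One possible lookup routine (illustrative; Diag is the record sketched earlier, and losingSet is assumed to have been sorted once after the sweep, e.g. via std::sort(losingSet.begin(), losingSet.end(), diagLess)):

#include <algorithm>
#include <vector>

bool diagLess(const Diag& a, const Diag& b) {
    if (a.x - a.y != b.x - b.y) return a.x - a.y < b.x - b.y;   // sort by diagonal first
    return a.x < b.x;                                           // then by row
}

bool isLosing(const std::vector<Diag>& losingSet, long long x, long long y) {
    Diag key{x, y, 1};
    // first segment strictly after the query cell in (x - y, x) order
    auto it = std::upper_bound(losingSet.begin(), losingSet.end(), key, diagLess);
    if (it == losingSet.begin()) return false;
    --it;                                      // last segment not after the query cell
    return it->x - it->y == x - y              // it lies on the same diagonal ...
        && it->x <= x && x < it->x + it->K;    // ... and covers the query row
}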

Overall, this algorithm is O(N log N + Q log N).

Author’s Solution:

Can be found here

Tester’s Solution:

Can be found here

Please help… this is my first question here. I failed because of time optimization, can someone help?

#include <stdio.h>

int main(void)
{
	int i, j, k, c, f, p, z, T, N, Q;
	scanf("%d", &T);

	for(i = 0; i < T; i++)
	{
		scanf("%d", &N);
		int n[N][2];
		for(j = 0; j < N; j++)
		{
			scanf("%d%d", &n[j][0], &n[j][1]);
		}

		scanf("%d", &Q);
		int q[Q][2];
		for(j = 0; j < Q; j++)
		{
			scanf("%d%d", &q[j][0], &q[j][1]);
		}

		//real game

		for(j = 0; j < Q; j++) // game begins from a different position for each query
		{
			c = 1; f = 1; p = 0; z = 0; // reset the counters for every query

			while(1) // simulation loop
			{
				c++; // move counter
				if(f % 2 != 0)
					q[j][0]--;
				else
					q[j][1]--;

				for(k = 0; k < N; k++) // check whether we stepped on an impassable cell
				{
					if(q[j][0] == n[k][0] && q[j][1] == n[k][1]) // impassable condition
					{
						// undo the move
						if(f % 2 != 0)
							q[j][0]++;
						else
							q[j][1]++;
						c--;
					}
				}

				if(q[j][0] < 0) // ran off the top edge: undo
				{
					p = 1;
					q[j][0]++;
					c--;
				}

				if(q[j][1] < 0) // ran off the left edge: undo
				{
					z = 1;
					q[j][1]++;
					c--;
				}

				f++;
				if(p == 1 && z == 1) // no move possible in either direction
				{
					if(c % 2 == 0)
						puts("Bob");
					else
						puts("Alice");
					break;
				}
			}
		} // game ends
	}

	return 0;
}

I was trying to solve this and thought of it recursively. I am facing some implementation problems in my code, so I can’t check whether my logic is correct or not.
Can you help me by telling me whether I am right or not?

Logic:
I input the position from which Alice will start the game. From there I keep calling the same routine again and again recursively, to check who wins at each cell if the other player had started from it. That finally gives the answer.
Following is my code link. Any help or views regarding this would be appreciated, and I will be thankful to you.

:smiley: :smiley:

The implementation is by and large correct; I can’t find any bugs in it.
However, this kind of algorithm is way too slow. Recursion like this, without even memoizing, has exponential time complexity, O(2^(x+y)) for a query point (x, y). It will time out even for grids as small as 25x25.

Hmm… okay, yes, I agree about its inefficiency.