MEDIUM

# PROBLEM:

You are given a grid of size N\times M. K cells of the grid are black, and the rest are white. The cost of a path from (1,1) to (x,y) (where we move one cell down or right at each step) equals the number of times we move between cells of different colors.

Answer Q independent queries of the form (x,y): the minimum possible cost of a path from (1,1)\to (x,y), after inverting the colors of all cells in some set of columns.

# EXPLANATION:

Observation: In any optimal path from (1,1)\to (x,y), we only ever move between same-colored cells when taking a step to the right. That is, we can invert columns in such a way that we incur no cost when moving right, and this never affects the cost of moving down.

With this key observation, the problem is reduced to calculating the minimum number of times we must move downwards between different colored cells in the original grid to reach the destination cell.

Hint 1

Given below is a naive DP approach to the problem.

Let dp[i][j] represent the least cost over all paths from (1,1)\to (i,j). Then,

dp[i][j] = \min(dp[i][j-1], dp[i-1][j]+c)

where c=1 if cells (i,j) and (i-1,j) are differently colored, and 0 otherwise.

This solution, however, TLEs for the given constraints.
But we can optimise! Observe that c=1 in only \approx 2\cdot K states of the above DP, and computation of the other states is redundant. Use this information to rewrite the solution.

Hint 2

Call a cell (i,j) special if the cells (i,j) and (i+1,j) have different colors.
Also, let s(i,j) equal the smallest x\ge i such that cell (x,j) is special.

Observation: dp[i][j]=dp[i+1][j]=\dots=dp[s(i,j)][j].

Why?

By the definition of s, all cells (i,j),(i+1,j),\dots,(s(i,j),j) are of the same color. Moving down from (i,j) to any of these cells doesn’t increase the cost, and hence dp[i+1][j],\dots,dp[s(i,j)][j] \le dp[i][j].

To then show that dp[i+1][j],\dots,dp[s(i,j)][j] = dp[i][j], we make use of the following observation.

Observation: dp[i][j]\le dp[x][j] for all x\ge i.

How?

Let P=DRRDDRD\dots be the sequence of steps (down/right) taken in an optimal path from (1,1)\to(x,j). Erase the last x-i down steps (D's) from P. Then, invert the columns so that all R steps have cost 0.

Now, this new sequence of steps reaches cell (i,j). The cost of every R step is 0. Since inverting the columns doesn’t change the cost of D steps, and the original path changes colors on D steps at most dp[x][j] times, the new sequence also changes colors at most dp[x][j] times.

Thus, we have found a path from (1,1)\to(i,j) with cost \le dp[x][j], which implies dp[i][j]\le dp[x][j].

Hint 3

From the observations made in the previous spoiler, we know that the answer to any query (x,y) is equal to dp[s(x,y)][y] (or dp[N][y] if s(x,y) doesn’t exist). Thus, all we have to do is compute dp[i][j] for only special cells (i,j).

How will you accomplish this?

Hint

Try constructing a graph with weighted edges between special cells.

Solution

Firstly, generate the list P of all special cells; after grouping the black cells by column, this is easily done in O(K). Also, for convenience, add cells (N,1),(N,2),\dots,(N,M) to this list.

Create a graph of |P| nodes, each corresponding to a particular special cell.

The cost of moving down from special cell (i,j) to cell (s(i+1,j),j) is 1; add a directed edge between these nodes with weight 1. Also, since we may keep moving to the right with cost 0, add a directed edge from (i,j) to each of the nodes (s(i,j+1),j+1),(s(i,j+2),j+2),\dots with weight 0.

Running Dijkstra/0/1 BFS on the generated graph (after appropriately adding a node corresponding to cell (1,1)) will give us the minimum possible cost of a path to each of the special cells, which can be used to answer the queries.

But wait! The number of edges in the above graph is \approx |P|^2, which is too many for the given constraints.
To remedy this, for each (i,j), we insert only the single edge to (s(i,j+1),j+1) with weight 0. Then, after running Dijkstra/BFS on the graph - let the minimum cost to reach node (i,j) be f(i,j) - do:

dp[i][j] = \min(f(i,j), f(i_1,j), f(i_2,j), \dots), where i \le i_1 < i_2 < \dots are the rows of the special cells of column j at or below row i. In other words, dp[i][j] is a suffix minimum of f down column j.
Why does this work?

We reason with the help of a simplified example.

Consider 3 special cells (a,1),(b,2),(c,3) where a\le b,c.
Now, our above approach adds an edge from (a,1) to only (b,2). We go on to show that (b,2) adds an edge that implicitly propagates a 0 edge from (a,1)\to (c,3).

Case 1: b\le c
Here, node (b,2) adds a 0-weight edge to (s(b, 3),3)=(c,3). Thus, running Dijkstra on this graph would give us f(c,3)=0, which is what we’d have got if we added an edge from (a,1)\to(c,3).

Case 2: b > c.
Here, cell (b,2) adds a 0-weight edge to (s(b, 3),3)=(N,3). Then, running Dijkstra would give us f(N,3)=0. By the 2^{nd} observation in “Hint 2”, we know that dp[c][3] \le dp[N][3]. Thus dp[c][3] = \min(f(c,3),f(N,3))=0, which is the answer we were looking for.

A more robust proof of the approach is left to the reader as an exercise.

Refer to the attached code for implementation details.

# TIME COMPLEXITY:

There are at most P=M+2\cdot K special cells. Computing s(i,j) takes O(\log P) using binary search. Since the generated graph has O(P) nodes and edges, running 0/1 BFS on it takes O(P) time.
Updating dp[i][j] as the minimum of the computed distances of the special cells below cell (i,j) can be done in O(P) with a single suffix-minimum pass per column.
Finally, answering each query takes O(\log P).

The total time complexity is therefore:

O(P\log P + P + Q\log P) \approx O((P+Q)\log P)

per test case.

# SOLUTIONS:

Editorialist’s solution can be found here.

Author's solution
```cpp
#include <bits/stdc++.h>
using pii = std::pair<int,int>;
using namespace std;

const int maxk = 3e5 + 5;

int t, n, m, k, x[maxk], y[maxk];

// Furthest cell reachable by moving down from (x, y) without crossing a
// color border, i.e. the node corresponding to s(x, y) in the editorial.
tuple<int, int> get_next_lowest(int x, int y, vector<vector<int>>& black_cells, vector<vector<int>>& furthest_consecutive_black_cells)
{
    int next_lowest = lower_bound(black_cells[y].begin(), black_cells[y].end(), x) - black_cells[y].begin();
    if (black_cells[y][next_lowest] == x)    // current cell is black
        return {furthest_consecutive_black_cells[y][next_lowest], y};
    else
        return {black_cells[y][next_lowest] - 1, y};
}

int32_t main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    cin >> t;
    for (int cases = 0; cases < t; cases++)
    {
        cin >> n >> m >> k;
        for (int i = 0; i < k; i++)
        {
            cin >> x[i] >> y[i];
            x[i]--; y[i]--;
        }
        if (m > k)    // by pigeonhole, some column has no black cells
        {
            cout << "0\n";
            continue;
        }
        vector<vector<int>> black_cells(m), furthest_consecutive_black_cells(m);
        for (int i = 0; i < k; i++)
            black_cells[y[i]].push_back(x[i]);
        for (int i = 0; i < m; i++)
        {
            black_cells[i].push_back(n);    // sentinel row below the grid
            sort(black_cells[i].begin(), black_cells[i].end());
            int sz = black_cells[i].size();
            furthest_consecutive_black_cells[i].resize(sz);
            for (int j = sz - 1; j >= 0; j--)
            {
                if (j + 1 == sz || black_cells[i][j] + 1 != black_cells[i][j + 1])
                    furthest_consecutive_black_cells[i][j] = black_cells[i][j];
                else
                    furthest_consecutive_black_cells[i][j] = furthest_consecutive_black_cells[i][j + 1];
            }
        }
        // 0/1 BFS: crossing a border downwards costs 1 (push_back),
        // sliding into the next column costs 0 (push_front).
        deque<tuple<int, int>> q;
        map<tuple<int, int>, int> dist;
        auto start_lowest = get_next_lowest(0, 0, black_cells, furthest_consecutive_black_cells);
        dist[start_lowest] = 0;
        q.push_back(start_lowest);
        while (!q.empty())
        {
            int curx, cury;
            tie(curx, cury) = q.front();
            q.pop_front();
            int curdist = dist[{curx, cury}];
            if (curx >= n - 1 && cury == m - 1)
            {
                cout << curdist << "\n";
                break;
            }
            if (curx < n - 1)    // move down as much as possible in current column
            {
                auto cur_col_down = get_next_lowest(curx + 1, cury, black_cells, furthest_consecutive_black_cells);
                if (!dist.count(cur_col_down) || dist[cur_col_down] > curdist + 1)
                {
                    dist[cur_col_down] = curdist + 1;
                    q.push_back(cur_col_down);
                }
            }
            if (cury < m - 1)    // move down as much as possible in next column (to the right)
            {
                auto next_col_down = get_next_lowest(curx, cury + 1, black_cells, furthest_consecutive_black_cells);
                if (!dist.count(next_col_down) || dist[next_col_down] > curdist)
                {
                    dist[next_col_down] = curdist;
                    q.push_front(next_col_down);
                }
            }
        }
    }
    return 0;
}
```

Tester's solution
```cpp
// (Song lyrics from "Loving You at 105°C":
//  Super Idol's smile isn't as sweet as yours,
//  the August noon sun isn't as bright as you,
//  loving you at 105°C, drops of pure distilled water.)

#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>
using namespace std;
using namespace __gnu_pbds;
using namespace __gnu_cxx;

#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl

#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound

#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()

#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug

mt19937 rng(chrono::system_clock::now().time_since_epoch().count());

int n,m,k;
vector<int> pos[300005];    // black rows of each column
vector<ii> range[300005];   // maximal same-colored row segments of each column
vector<int> memo[300005];   // min cost to reach each segment of each column

signed main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    cout.tie(0);

    int TC;
    cin>>TC;
    while (TC--){
        cin>>n>>m>>k;
        rep(x,1,m+1) pos[x].clear(),range[x].clear();

        int a,b;
        rep(x,0,k){
            cin>>a>>b;
            pos[b].pub(a);
        }

        rep(x,1,m+1){
            sort(all(pos[x]));

            if (pos[x].empty()) range[x]={{1,n}};
            else{
                if (pos[x][0]!=1) range[x].pub({1,pos[x][0]-1});
                rep(y,0,sz(pos[x])){
                    if (y && pos[x][y-1]+1==pos[x][y]){
                        range[x].back().se=pos[x][y];    // extend black run
                    }
                    else{
                        range[x].pub({pos[x][y],pos[x][y]});
                    }

                    if (y!=sz(pos[x])-1){
                        if (pos[x][y]+1!=pos[x][y+1]){
                            range[x].pub({pos[x][y]+1,pos[x][y+1]-1});    // white gap
                        }
                    }
                    else{
                        if (pos[x][y]!=n){
                            range[x].pub({pos[x][y]+1,n});    // trailing white run
                        }
                    }
                }
            }

            memo[x].clear();
            rep(y,0,sz(range[x])) memo[x].pub(1e9);
        }

        // first column: cost equals the number of borders above the segment
        rep(x,0,sz(memo[1])) memo[1][x]=x;

        rep(x,2,m+1){
            int curr=0;    // segment of column x-1 overlapping the current segment
            rep(y,0,sz(memo[x])){
                while (range[x-1][curr].se<range[x][y].fi) curr++;

                if (y==0) memo[x][y]=0;
                else memo[x][y]=min(memo[x][y-1]+1,memo[x-1][curr]);
            }
        }

        cout<<memo[m].back()<<endl;
    }
}
```


Since the editorial is missing, I will post my solution.

First, notice that all horizontal moves in any path become free after appropriately flipping columns. Such flips also do not influence the cost of vertical steps.

Second, notice that we only pay to make a vertical step across a “border” (i, j), which means that cells (i, j) and (i+1, j) have different colors. Such borders can be precomputed and stored separately for each column, sorted by row.

Let’s define the function f(i, j) as the minimum cost of getting to cell (i, j). It is easy to see that f(i, 1) equals the number of borders above cell (i, 1) in the first column. Indeed, there is only one path to each cell of the first column.

Let’s now think of the transition from f(\cdot, j) to f(\cdot, j+1). Since horizontal moves are always free, f(i, j+1) \le f(i, j) for all i. Also, we can move down freely between two consecutive borders k and k' in column j+1, implying f(k+1, j+1), \ldots, f(k'-1, j+1) \le f(k, j+1). Finally, f(k', j+1) \le f(k, j+1) + 1.

It turns out that these three inequalities account for all possible paths to cells of the (j+1)-st column. Therefore, f(\cdot, j+1) is equal to the minimum given by these relations. Conveniently, a lazy segment tree supports such “min-assign on range” operations.

This allows us to compute values of f column-by-column, so that queries are answered offline when encountering their column.

Check the sample implementation for better understanding (skip the header with the AtCoder Library code; it’s just a template lazy segtree).


I think it is also possible to maintain a set of “bump points”, i.e. numbers i such that f(i+1, j) = f(i, j) + 1. Indeed, from the inequalities above, it follows that f(i+1, j) \le f(i, j) + 1 for all (i, j). In other words, the function f is piecewise constant with occasional “bumps” of size 1. However, it wasn’t clear to me how to process updates to the set of bump points, so I just used a segment tree instead.