CHEFCOMP - Editorial

PROBLEM LINK:

Division 1
Division 2

Author and Editorialist: Ritesh Gupta
Tester: Данило Мочернюк

DIFFICULTY:

Medium-Hard

PREREQUISITES:

DSU, DSU on Trees, Binary Search, DFS

PROBLEM:

You are given a tree T with N nodes (numbered 1 to N). Each node i has two positive integers A_i and B_i associated with it.

We perform this operation exactly once per day for N days. On the i-th day, we perform the operation on node P_i (P is a given permutation of the numbers 1 to N).

An operation on node p is defined as:

  • Subtract A_p from all B_q such that q is reachable from p.
  • Delete node p and the edges associated with it.

For each node i from 1 to N, you have to print the first day on which B_i becomes \le 0, or print -1 if it never becomes \le 0.

QUICK EXPLANATION:

  • Simply doing what the problem statement suggests in a brute-force manner is enough to pass subtask 1, but it will exceed the time limit for subtask 2.
  • To pass subtask 2, we’ll have to rearrange the tree (tree decomposition?).
  • It can be observed that when an operation is performed on a node (say x), only some particular nodes are affected (the nodes which are reachable from x on that particular day), and node x is never visited again.
  • So, the tree must be decomposed in such a way that, in the newly formed tree, any node’s subtree contains exactly the nodes that are affected when the operation is performed on that subtree’s root.
  • The new tree can be built in a bottom-to-top fashion (sounds like DSU?).
  • Once the new tree is formed, a DFS can be performed from its root, and the answer for each node can be found with a binary search along the root-to-node path.

EXPLANATION:

Subtask 1:

Consider the nodes in the permutation P in the given order. For each node in P, perform a DFS from it, treating it as the root, and subtract A_root from B_j for every j in the root’s component, i.e., every j reachable from the root. If the value B_j becomes non-positive for some j for the first time, the current day is the answer for that node j. Then remove all the edges that connect the root to other nodes. At the end of the N days, if there is some j such that B_j is still positive, the answer for node j is -1.
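For reference, here is a minimal brute-force sketch of this simulation (single test case, 1-indexed nodes, input read in the same order as in the reference solutions below; all identifiers are illustrative and not taken from the official code):

#include <bits/stdc++.h>
using namespace std;
 
int main()
{
    int n;
    cin >> n;
    vector<vector<int>> adj(n + 1);
    for (int i = 1; i < n; i++)
    {
        int u, v;
        cin >> u >> v;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
    vector<int> P(n + 1);
    vector<long long> A(n + 1), B(n + 1);
    for (int i = 1; i <= n; i++) cin >> P[i];
    for (int i = 1; i <= n; i++) cin >> A[i];
    for (int i = 1; i <= n; i++) cin >> B[i];
 
    vector<int> ans(n + 1, -1);
    vector<char> removed(n + 1, 0);
    for (int day = 1; day <= n; day++)
    {
        int p = P[day];
        // DFS from p over the nodes it can still reach
        vector<char> seen(n + 1, 0);
        stack<int> st;
        st.push(p);
        seen[p] = 1;
        while (!st.empty())
        {
            int u = st.top();
            st.pop();
            B[u] -= A[p];                                  // the operation affects u
            if (B[u] <= 0 && ans[u] == -1) ans[u] = day;   // first day B[u] became non-positive
            for (int w : adj[u])
                if (!removed[w] && !seen[w]) { seen[w] = 1; st.push(w); }
        }
        removed[p] = 1;                                    // delete p and, implicitly, its edges
    }
    for (int i = 1; i <= n; i++) cout << ans[i] << " ";
    cout << "\n";
    return 0;
}

Each day costs O(N) in the worst case, so the whole simulation is O(N^2), which is fine for subtask 1 only.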

Subtask 2:

Why decompose the given tree?

In the given tree, the operation performed on a particular node affects a specific set of nodes, but it is hard to maintain the affected nodes and the tree structure during/after each operation. So why not decompose the given tree so that, whenever i < j and the operation on P_i affects P_j, node P_i is an ancestor of P_j in the decomposed tree? In the decomposed tree, the lemmas below are satisfied:

Lemma 1: An operation on a node (say x) in the given tree is equivalent to an operation on node x in the decomposed tree that affects only the subtree of x (including x itself).

Proof: The decomposed tree is built so that every node’s parent is operated on an earlier day, so the operation on the root of any subtree is performed before the operations on the other nodes of that subtree. Moreover, by the construction described below, the nodes of x’s subtree in the decomposed tree are exactly the nodes that are still reachable from x in the original tree on the day x is operated, which is exactly the set of nodes the operation affects.

Lemma 2: A node x in the decomposed tree is affected only by the operations on the ancestors of x, i.e., the nodes on the path from the root to x excluding x, and by its own operation.

Proof: By Lemma 1, the operation on a node y affects only the nodes in y’s subtree. Node x lies in the subtree of y only when y is x itself or an ancestor of x; hence, apart from its own operation, x can be affected only by the operations on its ancestors, all of which are performed before x’s own operation.

Let’s denote the given tree by OT and the decomposed tree by NT. We construct the new tree NT with the help of DSU and the structure of OT. How?

Start from the node on which the operation will be performed on the N-th day and construct NT from the bottom to the top (so that every node’s parent is operated on an earlier day). In other words, for each day i from N down to 1, include P_i in NT and add edges from P_i to those neighbours of P_i (use OT to get this information) that have already been included in NT. Put simply, in NT, connect P_i with the root of each component that contains a neighbour of P_i (a sketch of this construction is given right after the list below).

  • How to decide who will be the root of the component?
    • The latest added node is always the root of its component, since the operation on this node affects every node already in the component. This is easy to maintain and process with DSU.
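Below is a minimal sketch of this construction, assuming a 1-indexed permutation P (P[i] is the node operated on day i) and the adjacency list adj of OT; the function and variable names are illustrative and not taken from the reference solutions:

#include <bits/stdc++.h>
using namespace std;
 
// Build the decomposed tree NT by scanning the permutation in reverse.
// Invariant: the DSU representative of every component is always its most
// recently added node, i.e. the root of that component in NT.
// Returns nt_parent, where nt_parent[x] is the parent of x in NT
// (0 for the root of NT, which is P[1]).
vector<int> buildNT(int n, const vector<vector<int>> &adj, const vector<int> &P)
{
    vector<int> dsu(n + 1), nt_parent(n + 1, 0);
    vector<char> added(n + 1, 0);
    iota(dsu.begin(), dsu.end(), 0);
    function<int(int)> find = [&](int u) {      // DSU find with path compression
        return dsu[u] == u ? u : dsu[u] = find(dsu[u]);
    };
    for (int i = n; i >= 1; i--)                // add nodes from day N back to day 1
    {
        int u = P[i];
        added[u] = 1;
        for (int w : adj[u])                    // neighbours of u in OT
        {
            if (!added[w]) continue;
            int r = find(w);                    // current NT root of w's component
            nt_parent[r] = u;                   // u is operated earlier, so it becomes r's parent
            dsu[r] = u;                         // u is now the representative of the merged component
        }
    }
    return nt_parent;
}

Keeping the most recently added node as the representative of its component appears to be exactly the role of the cache[] array in the setter’s solution below.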

Now that NT has been constructed, how do we calculate the answer for each node?

Perform a DFS from the root of NT, keeping track of the A_i values of the nodes on the path from the root to the current node (say x), including x itself (hint: use backtracking). Now, extending Lemma 2, if the sum of the first k values of A_i on the path from the root to x (with k as small as possible) is greater than or equal to B_x, then the answer for node x is the day on which the operation was performed on the k-th node of that path. If all the A_i values from the root to x sum to less than B_x, then the answer for node x is -1, because B_x can never become less than or equal to zero.
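For instance (illustrative numbers): suppose the path from the root of NT to x visits three nodes whose A values are 3, 1 and 4 and which were operated on days 2, 5 and 7 respectively (the last of them being x itself), and suppose B_x = 5. The prefix sums are 3, 4 and 8, and the smallest k with prefix sum \ge 5 is k = 3, so the answer for x is day 7. If instead B_x = 10, even the full sum 8 never reaches it, and the answer is -1.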

Instead of summing the A_i values from the root to x from scratch for every node, which could take O(N) time per node in the worst case, we maintain a prefix-sum array along the current DFS path and binary search in it for the value B_x, which takes O(log N) per node in the worst case.
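A minimal sketch of that per-node lookup (illustrative names; the DFS is assumed to push onto both vectors when it enters a node and pop when it leaves, exactly as in the backtracking described above):

#include <bits/stdc++.h>
using namespace std;
 
// pref[k] = sum of the A values of the first k nodes on the root-to-x path
//           (pref[0] = 0 acts as a sentinel, so pref is non-empty and sorted)
// days[k] = day on which the k-th node of that path was operated (days[0] = 0,
//           never selected because B_x >= 1)
// Returns the first day on which B_x drops to zero or below, or -1 if it never does.
int firstDay(const vector<long long> &pref, const vector<int> &days, long long bx)
{
    int k = lower_bound(pref.begin(), pref.end(), bx) - pref.begin();
    return k == (int)pref.size() ? -1 : days[k];
}

This is essentially the check performed inside dfs() in both solutions below.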

TIME COMPLEXITY:

TIME: O(N log N)
SPACE: O(N)

SOLUTIONS:

Setter's Solution
#include <bits/stdc++.h>
// #pragma comment(linker, "/STACK: 2000000")
#define int long long
 
using namespace std;
 
const int N = 200010;
int par[N], sz[N];
int parent[N], cache[N];    // parent[x]: parent of x in the new tree NT; cache[r]: latest-added node (= NT root) of the DSU component whose representative is r
vector <int> v[N], v1[N];
int A[N], B[N];
int ans[N], day[N];
vector <int> prefix_value, prefix_day;
 
char input[] = "input/input18.txt";
char output[] = "output/output18.txt";
 
// DSU find with path halving
int getPar(int u)
{
    while(u != par[u])
    {
        par[u] = par[par[u]];
        u = par[u];
    }
    return u;
}
 
// u is the node being added now: it becomes the NT parent of the current NT root
// of v's component (stored in cache[]), and the two DSU components are merged
void unite(int u, int v)
{
    int par1 = getPar(u), par2 = getPar(v);
 
    if(par1 == par2)
        return;
 
    parent[cache[par2]] = u;
 
    if(sz[par1] > sz[par2])
    {
        sz[par1] += sz[par2];
        sz[par2] = 0;
        par[par2] = par1;
        cache[par1] = u;
    }
    else
    {
        sz[par2] += sz[par1];
        sz[par1] = 0;
        par[par1] = par2;
        cache[par2] = u;
    }
}
 
// DFS over NT: prefix_value holds prefix sums of A along the root-to-curr path
// (with a sentinel 0 at index 0) and prefix_day holds the corresponding days
void dfs(int curr, int par)
{
    // cout << curr << endl;
    prefix_value.push_back(prefix_value.back()+A[curr]);
    prefix_day.push_back(day[curr]);
 
    if(prefix_value.back() >= B[curr])
    {
        int idx = lower_bound(prefix_value.begin(), prefix_value.end(), B[curr]) - prefix_value.begin();
        // cout << idx << " " << prefix_value.size() << " " << prefix_day.size() << endl;
        ans[curr] = prefix_day[idx];
    }
 
    for(int i:v1[curr])
    {
        if(i != par)
        {
            dfs(i, curr);
        }
    }
 
    prefix_value.pop_back();
    prefix_day.pop_back();
}
 
int32_t main()
{
    // freopen(input, "r", stdin);
    // freopen(output, "w", stdout);
 
    int t;
    cin >> t;
 
    while(t--)
    {
        int n;
        cin >> n;
 
        prefix_day.clear();
        prefix_value.clear();
 
        prefix_value.push_back(0);
        prefix_day.push_back(0);
 
        for(int i=0;i<=n;i++)
        {
            v[i].clear();
            v1[i].clear();
            par[i] = -1;
            sz[i] = 0;
            parent[i] = -1;
            ans[i] = -1;
            cache[i] = i;
        }
 
        for(int i=1;i<n;i++)
        {
            int x,y;
            cin >> x >> y;
 
            v[x].push_back(y);
            v[y].push_back(x);
        }
 
        vector <int> temp;
 
        for(int i=0;i<n;i++)
        {
            int x;
            cin >> x;
 
            temp.push_back(x);
            day[x] = i+1;
        }
 
        for(int i=1;i<=n;i++)
        {
            cin >> A[i];
        }
 
        for(int i=1;i<=n;i++)
        {
            cin >> B[i];
        }
 
        reverse(temp.begin(), temp.end());
 
        // add the nodes in reverse order of the permutation; each newly added node
        // becomes the NT parent of the already-added components it touches
        for(int i=0;i<n;i++)
        {
            par[temp[i]] = temp[i];
            sz[temp[i]] = 1;
            parent[temp[i]] = temp[i];
 
            for(int j:v[temp[i]])
            {
                if(par[j] != -1)
                {
                    unite(temp[i], j);
                }
            }
 
            // for(int i=1;i<=n;i++)
            //  cout << parent[i] << " ";
            // cout << endl;
 
        }
            
        int root = -1;
 
        for(int i=1;i<=n;i++)
        {
            if(parent[i] == i)
            {
                root = i;
                continue;
            }
 
            v1[i].push_back(parent[i]);
            v1[parent[i]].push_back(i);
        }
 
        int cnt = 0;
 
        for(int i=1;i<=n;i++)
        {
            if(parent[i] == i)
                cnt++;
        }
 
        dfs(root, -1);
 
        for(int i=1;i<=n;i++)
        {
            cout << ans[i] << " ";
        }
        cout << endl;
    }
} 
Tester's Solution
#include<bits/stdc++.h>
#define ll long long
#define pb push_back
#define x first
#define y second
#define sz(a) (int)(a.size())
using namespace std;
const int MAX = 200005;
vector<int> g[MAX] , t[MAX];
int P[MAX] , A[MAX], B[MAX], present[MAX], p[MAX], pos[MAX], answer[MAX];
int find(int u)
{
    return p[u] = (p[u] == u ? u : find(p[u]));
}
// attach the NT root of v's component as a child of u in the new tree t[]
void unite(int u, int v)
{
    v = find(v);
    p[v] = u;
    t[u].pb(v);
}
vector<ll> path;
vector<int> path2;
// path: prefix sums of A along the root-to-u path; path2: corresponding 0-indexed days
void dfs(int u)
{
    path.pb(sz(path) ? path.back() + A[u] : A[u]);
    path2.pb(pos[u]);
    int T = lower_bound(path.begin(), path.end(), B[u]) - path.begin();
    answer[u] = T == sz(path) ? -2 : path2[T];
    for(auto v : t[u])
        dfs(v);
    path.pop_back();
    path2.pop_back();
}
int main(){
    
    ios::sync_with_stdio(false);
    int T;
    cin >> T;
    while(T--)
    {
        int N;
        cin >> N;
        for(int i = 0; i < N; i++)
            p[i] = i;
        for(int i = 1; i < N; i++)
        {
            int u , v;
            cin >> u >> v;
            u--;v--;
            g[u].pb(v);
            g[v].pb(u);
        }
        for(int i = 0; i < N; i++)
            cin >> P[i];
        for(int i = 0; i < N; i++)
            cin >> A[i];
        for(int i = 0; i < N; i++)
            cin >> B[i];
        // process the permutation in reverse, building the new tree
        for(int i = N - 1; i >= 0; i--)
        {
            P[i]--;
            pos[P[i]] = i;
            for(auto j : g[P[i]])
                if(present[j])
                    unite(P[i] , j);
            present[P[i]] = 1;
        }
        dfs(P[0]);
        for(int i = 0; i < N; i++)
        {
            cout << answer[i] + 1 << " ";
            g[i].clear(); t[i].clear();
            present[i] = 0;
        }
        cout << endl;
    }
            
    return 0;     
}     

Alternatively, notice that when a node is removed, at most one of the resulting components has size >= half the size of the previous component. Clearly, a component’s size can only be halved log n times, so as long as we only traverse the nodes of all but the largest component, the total complexity is O(n log n). The only catch is that performing an interleaved BFS/DFS to figure out which component is the largest, while ensuring it doesn’t become O(n^2), is a bit of a pain to code properly, but I managed to get AC using this idea.

Regarding the interleaved BFS/DFS: we can’t add the entire adjacency list to the queue/stack in one step. Consider a single central node of maximum degree with arms of length 3 attached to it, where the nodes directly adjacent to the centre appear at the start of the permutation: we would add O(n) entries to the queue O(n) times. So we have to move one step at a time through the adjacency lists.


I rearranged the vertices in the given permutation so that for every vertex v, all the nodes affected by v are consecutive and to the right of v. Now it’s basically reduced to a segment tree question, where I have to add -A[i] over a range and then, in the same range, query the indices whose values became <= 0 on that update. Here’s my submission.


Can you please take a look at the solution below and tell me where I went wrong?
https://www.codechef.com/viewsolution/36835210

If you reverse the permutation, then removing becomes adding, and each time you add a node it should become the parent of the already-added nodes that are connected to it in the original graph. This ensures that each node is a child of a parent that can pass its value down to it. I used a link-cut tree implementation to get the root and make a node the root, in O(log n) time each. Then a single DFS on the new tree is sufficient for the subtree query. Here is my submission: CodeChef: Practical coding for everyone


https://www.codechef.com/viewsolution/36755620

Let’s see if someone finds what the error in this is…
J.K. Please tell me why it did not pass for 30 pts. I cannot seem to find the mistake.
Thanks :)

Can someone help me with the CHEFCOMP question? What is the error in my solution?
https://www.codechef.com/viewsolution/36786590

What is the “cache” array storing?
Is it fairly common to use such an array in DSU questions?

Can somebody explain how the DSU-on-trees approach works? I couldn’t find much material on this topic. There is a blog on Codeforces by ARPA, but that blog talks much more about the implementation than about the idea.


Yeah, same; I don’t know how to transform a tree using DSU :(


https://codeforces.com/blog/entry/67696
This may help.


thx :))

Could you please tell me what the “cache” array is storing in the problem setter’s solution?

Can anyone tell me the test cases/situations for which my code is failing 2 TCs (for the partial score)? I tried a brute-force approach with hashing.

https://www.codechef.com/viewsolution/36866599

I used a slightly different (more complicated?) approach. Instead of building a new tree, I simulate the removal of nodes in the original tree.
Arbitrarily rooting the tree, for the operation on day i, I add the pair {a_i, i} to the highest ancestor of p_i that can be reached without crossing a previously blocked node (this can be computed with binary lifting). The pair {a_i, i} means that this operation should be applied to the whole subtree, except for the subtrees that were blocked previously, on some day before i.
Then we can get the answers with a DFS as in the editorial, maintaining a list of ancestors and their operations. However, before processing each node, we need some way of ignoring all the operations after the time when this node was blocked. We can do this by maintaining a segment tree that represents the ancestors, and for each node we build a new segment tree in which some suffix is all zeros, using persistence, so only log(n) new nodes are created.

Hopefully my solution is a little more clear:
https://www.codechef.com/viewsolution/36581652

Also, I thought of the editorial’s idea, but I didn’t think it would work, because I didn’t realize that the new tree doesn’t need to have exactly the same edges as the original: what’s really important is that the notion of connectedness is preserved. Another good example of how starting from the end and adding edges can be more helpful than starting from the beginning and removing edges.

https://www.codechef.com/viewsolution/36684437

What is the error in my solution for the above problem, if I only aim to pass subtask 1?

I found something on YouTube which is great.


Nice Editorial!!

Check this code (CodeChef: Practical coding for everyone); I did it to pass subtask 1.

@harshil21 Can you tell whether the formation of the second tree from the first is a special technique, or is it just by observation?