VIPS - Editorial

Problem Link: Escape Plan (CodeChef)

Setter: Aman Singhal
Tester: Arpan Gupta

DIFFICULTY:

Easy

PREREQUISITES:

DFS, BFS, Tree

PROBLEM:

We are given a secured tree of N nodes rooted at node 1. Each node i has a value A_i assigned to it. We may change the values of some nodes to 0 so that the tree remains secured and the security of the tree is minimized. The security is defined as follows:
Security of a leaf node: bitwise OR of all the values on the path from the root to that leaf.
Security of the tree: sum of the securities of all leaf nodes.
A tree is called secured when the security of every leaf node is greater than 0.
We have to output the minimum security of the tree.

EXPLANATION:

Observation 1


The bitwise OR of two values is always greater than or equal to the maximum of the two values, since OR can only set bits, never clear them. Hence, on the path from the root to any leaf, we can set every value to 0 except one; zeroing all of them would make the tree unsecured.
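This property is easy to verify directly, for example with a quick Python check (the sample pairs are arbitrary):

```python
# Bitwise OR can only set bits, never clear them, so a | b >= max(a, b).
for a, b in [(5, 3), (8, 8), (0, 7), (12, 10)]:
    assert (a | b) >= max(a, b)
print("a | b >= max(a, b) holds for all pairs")
```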

Observation 2


The tree with minimum security is one in which, on the path from the root to every leaf node, exactly one node has a positive value.
From Observation 1, if there are two or more positive values on the path from the root to a leaf, we can set all but one of them to zero: the resulting bitwise OR will be less than or equal to the current one.

A tree remains secured as long as the security of every leaf node is greater than 0. So it suffices to keep at least one positive value on the path from the root to every leaf, and by the bitwise OR property this also yields the minimum value. To find the minimum security of the tree, we run a DFS on the tree and check the following cases:

Let count be the array that stores, for each node, the number of leaf nodes in its subtree.
Formally, for any node 1 <= u <= N, count_u stores the number of leaf nodes in the subtree of u, and A_u is the value of node u. For node u we run DFS on all its children and check whether count_u * A_u is at most the sum of the values returned for its children: if it is, we keep the value of the current node and return it; otherwise we discard it.

If $count_u \cdot A_u \le$ sum of dfs(children of u), then return $count_u \cdot A_u$;
else return the sum of dfs(children of u).
Equivalently, $dfs(u) = \min(count_u \cdot A_u, \sum_{c \in children(u)} dfs(c))$.

Why are we checking these conditions?
Suppose we keep the value of node u and set all values in its subtree to 0. Then every leaf in the subtree gets security A_u, so this subtree contributes A_u * count_u to the total security. If instead we set the value of u to 0, we must obtain the required positive values from its children, which is what the recursive calls compute.
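The recurrence above can be sketched as a plain recursive DFS (a minimal illustration, not the setter's exact code; the names `min_security`, `adj`, and `INF` are my own):

```python
import sys

INF = float("inf")

def min_security(n, adj, A):
    """adj: 1-indexed adjacency lists; A: 1-indexed node values (A[0] unused)."""
    sys.setrecursionlimit(max(10000, 2 * n))

    def dfs(u, parent):
        # Returns (leaf count, minimum security) for the subtree of u.
        children = [v for v in adj[u] if v != parent]
        if not children:
            # Leaf: its security is its own value; a zero value can never
            # be the kept positive value, so treat it as infinity.
            return 1, A[u] if A[u] > 0 else INF
        leaves, total = 0, 0
        for c in children:
            lc, sc = dfs(c, u)
            leaves += lc
            total += sc
        # Either keep u's value (every leaf below pays A[u]) or zero it
        # and take the best the children can do.
        cost_keep_u = (A[u] if A[u] > 0 else INF) * leaves
        return leaves, min(cost_keep_u, total)

    return dfs(1, 0)[1]
```

For example, a root with value 1 and two leaf children with values 2 and 3 gives min(1*2, 2+3) = 2: keeping only the root's value is cheapest.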

What if the value of the current node is already zero?
This is an edge case of the problem: values that are already zero cannot be changed, so such a node can never serve as the kept positive value on a path. To handle this, my solution replaces a zero value with infinity so it is never chosen; it could also be handled explicitly in the DFS call.

TIME COMPLEXITY:

  • Time complexity per test case is O(N).

SOLUTIONS

Setter's Solution (Python)
from collections import defaultdict

def solve(N, G, A):
    # Iterative DFS from the root to record a processing order
    # and to detect leaves (nodes with no unvisited neighbour).
    Q = [1]
    stk = []
    vis = [0 for i in range(N + 1)]
    cnt = [0 for i in range(N + 1)]  # cnt[u] = number of leaves in subtree of u
    while len(Q) != 0:
        a = Q.pop()
        vis[a] = 1
        stk.append(a)
        flag = 0
        for b in G[a]:
            if vis[b] == 0:
                flag = 1
                Q.append(b)
                vis[b] = 1
        if flag == 0:  # no unvisited neighbour: a is a leaf
            cnt[a] = 1
    stk.reverse()  # reversed order processes children before their parent
    # A zero value cannot be changed and cannot be the kept positive
    # value on a path, so treat it as infinity.
    for i in range(N):
        if A[i] == 0:
            A[i] = float("inf")
    val = [float("inf") for i in range(N + 1)]
    for a in stk:
        total = 0
        leaf = 0
        for b in G[a]:
            if G[b] == 0:  # b already processed, so b is a child of a
                total = total + val[b]
                leaf = leaf + cnt[b]
        if total == 0 or leaf == 0:  # a is a leaf
            val[a] = A[a - 1]
        else:
            # Either keep a's value (each leaf below pays A[a-1])
            # or zero it and take the children's best total.
            val[a] = min(leaf * A[a - 1], total)
            cnt[a] = leaf
        G[a] = 0  # mark a as processed
    return val[1]
        
T=int(input())
for _ in range(T):
    N=int(input())
    G=defaultdict(list)
    for i in range(N-1):
        a,b=map(int,input().split())
        G[a].append(b)
        G[b].append(a)
    A=list(map(int,input().split()))
    print(solve(N,G,A))
Setter's Solution (C++)
#include <bits/stdc++.h>


#define MAX 1000000000007
#define ll long long int
using namespace std;


vector<vector<ll> > adj;
bitset<1000005> vis;
vector<ll>  arr;

// Returns {number of leaves in the subtree, minimum security of the subtree}.
pair<ll,ll> dfs(int node)
{
	vis[node]=0;  // mark as visited (vis starts with all bits set)
	ll tot=0,curr_node=0,count=0,curr_val=MAX;
	for(auto i:adj[node])
	{
		if(vis[i])  // still-set bit means an unvisited neighbour, i.e. a child
		{
			count++;
			pair<ll,ll> x1=dfs(i);
			curr_node+=x1.first;   // leaves in the child's subtree
			tot+=x1.second;        // cost if this node's value is zeroed
		}
	}
	if(count==0)  // leaf node: it must supply its own value
	{
		tot=MAX;
		curr_node++;
	}
	if(arr[node-1])  // a zero value can never be kept, so leave curr_val at MAX
		curr_val=arr[node-1];
	pair<ll,ll> ans={0,0};
	ans.first=curr_node;
	// Keep this node's value (every leaf below pays it) or take the children's total.
	ans.second=min(curr_val*curr_node,tot);
	return ans;
}
int main()
{
	ios_base::sync_with_stdio(false);
    cin.tie(NULL);
	int t=1;
	cin>>t;
	while(t--)
	{
		ll n,i,a,b;
		string s;
		cin>>n;
		vector<ll> v1;
		vis.set();
		adj.assign(n+1,v1);
		for(i=1;i<n;i++)
		{
			cin>>a>>b;
			adj[a].push_back(b);
			adj[b].push_back(a);
		}
		for(i=0;i<n;i++)
		{
			cin>>a;
			arr.push_back(a);
		}
		pair<ll,ll> ans=dfs(1);
		cout<<ans.second<<"\n";
		arr.clear();
	}
}

Feel free to share your approach. If anything is unclear or you have any doubts, please ask in the comment section. Any suggestions are welcome.
