CHGORAM - Editorial

PROBLEM LINK:

Div1
Div2
Practice

Setter- Noszály Áron
Tester- Suchan Park
Editorialist- Abhishek Pandey

DIFFICULTY:

Medium-Hard

PRE-REQUISITES:

Principle of Counting, and knowledge of an appropriate data structure (e.g. Order Statistic Tree via the Policy Based Data Structures library) and, depending on the solution, techniques like Euler Tour.

PROBLEM:

Given a tree of N nodes, find the number of valid triplets (a_1,a_2,a_3) which have the same relative order of values (i.e. the same pattern of which value is greater or smaller) as the given permutation (p_1,p_2,p_3), such that a_2 lies on the shortest path between a_1 and a_3.

SUPER-DUPER-BUMPER-HYPER QUICK EXPLANATION:

Use an appropriate data structure to find the number of nodes greater than and smaller than the node assigned a_2 within its subtree, and use some maths to find the answer.

NOT SO SUPER-DUPER-BUMPER-HYPER-QUICK-EXPLANATION:

Key to AC- The solution is simple if you know how to come up with the formula and are able to pin down the problem to its well-known form!

We are given a permutation (p_1,p_2,p_3), and we have to assign triplets (a_1,a_2,a_3) so that the relative ordering is the same as that of the given permutation. Let's fix our node for a_2. Realize that since we can always flip the assignment of a_1 and a_3, we have to solve only 3 cases, i.e. p_2=1, p_2=2 and p_2=3.

Let S_S and B_S be the number of nodes smaller than and larger than a_2 in subtree of Node a_2 respectively. Similarly, let S_T and B_T be count of nodes smaller than and bigger than node a_2 in the entire tree.

The answer for each of the cases can be divided into 2 parts-

  1. Contribution of triplets (a_1,a_2,a_3) with LCA(a_1,a_3)=a_2.
  2. Contribution of triplets (a_1,a_2,a_3) with LCA(a_1,a_3) \neq a_2.

Great observation right? Just see how it helps-

The contribution for triplets where LCA(a_1,a_3) is NOT a_2 is as follows:

  1. p_2=1: One of a_1 or a_3 lies outside the subtree of a_2 and the other lies inside it. Both nodes must have values larger than the value of a_2. Hence-
    Contribution = B_S*(B_T-B_S)
  2. p_2=2 : One of a_1 or a_3 must be smaller than a_2 and the other larger, with exactly one of them inside the subtree of a_2. Hence, Contribution = B_S*(S_T-S_S)+S_S*(B_T-B_S)
  3. p_2=3 : Both a_1 and a_3 need to be smaller than the value of a_2. Hence-
    Contribution = S_S*(S_T-S_S).

Now, to get answer for the case when LCA(a_1,a_3)=a_2, there are different methods to compute it. One of it is as follows-

When LCA(a_1,a_3)=a_2, this means that a_1 and a_3 belong to subtrees of different children of our current node a_2. Say we are considering one of a_2's children, say Node v.

Let S_V and B_V be the number of nodes having values smaller than and bigger than the value of a_2 respectively in the subtree of v. Now, count the number of triplets with one of a_1 or a_3 in this subtree of Node v and the other node anywhere in the subtree of a_2 but outside the subtree of v.
The formulas for this, similar to the intuition above, are-

  1. p_2=1: Contribution= B_V*(B_S-B_V)
  2. p_2=2 : Contribution = S_V*(B_S-B_V)+B_V*(S_S-S_V)
  3. p_2=3: Contribution = S_V*(S_S-S_V).

Note that what this does is count every triplet (a_1,a_2,a_3) with LCA(a_1,a_3)=a_2 twice. Hence, we divide the sum by 2.

Now we just add the answers for both the cases and we are done! :slight_smile:

EXPLANATION:

Grab your food, grab yo drinks, tighten your seatbelts - this editorial is going to be a long ride. After struggling for days over whether I should write a concise editorial (which might be appreciated by high star coders) or an elaborate one (which will let lower rated folks solve it), I decided to heck it and go with the flow XD.

The editorial is modularized so you can look at the part you are interested in, although it's always advantageous to read the full editorial. Our plan for this editorial is-

  1. Discuss how we have to deal with only 3 type of queries.
  2. Derive the formulas stated above and see the intuition behind them. We will follow the implementation easiest to understand here.
  3. Some implementation details on how to obtain the values used in the formulas above.

Only 3 Types of Queries

Okay, so the problem first gives us the permutation (p_1,p_2,p_3), which is some permutation of [1,2,3]. Altogether there are 3!=6 permutations, and 6 conditions overall are a pain to evaluate and take care of.

However, note that once you choose a triplet of nodes, say (n_1,n_2,n_3), it is completely up to you which node is assigned a_1, which node is assigned a_2 and which node is assigned a_3 (with the constraint that a_2 must lie on the path from a_1 to a_3)! This lets us reduce the total number of conditions to 3: as we will see below, the 6 permutations fall into three pairs which have the exact same working and hence the exact same answer.

First taking an example for intuition, and then moving on to the formal proof-

Say my permutation (p_1,p_2,p_3) is (1,2,3). And also say that my triplet of nodes selected is (4,7,9). Clearly, a_1=4,a_2=7,a_3=9. Now, I will show that this triplet is also a valid answer for permutation (3,2,1). Simple, just switch the values assigned to a_1 and a_3, i.e. assign a_1 as 9, a_2 as 7 and a_3 as 4 - and now our (a_1,a_2,a_3) \equiv (9,7,4) satisfies the permutation. Look what we did here - we did NOT change the nodes, or the triplet of the nodes selected. All we did was to flip the assignment of a_1 and a_3.

Based on the above, can you work out a formal proof for this case? Once you feel ready, you can read below to check whether it is in sync with your thought process.

Say I have a triplet (n_1,n_2,n_3). Also, let's say that the permutation (p_1,p_2,p_3) given in the input was (1,2,3), and that my assignment was a_1=n_1,a_2=n_2,a_3=n_3. This means that n_1 < n_2 < n_3. Now, I will prove that the same triplet is also an answer for the permutation (3,2,1).
All I will do is assign a_1=n_3 and a_3=n_1, making my new triplet (a_1,a_2,a_3) \equiv (n_3,n_2,n_1) where n_3 > n_2 > n_1, which satisfies the permutation. Obviously, vice versa is also possible, i.e. a triplet satisfying permutation (3,2,1) also satisfies permutation (1,2,3) by a similar flip of a_1 and a_3. The converse is also true - i.e. if a triplet does not satisfy permutation (1,2,3), then there is no assignment of (a_1,a_2,a_3) that makes it satisfy permutation (3,2,1) under the given constraint (that a_2 must lie on the path between a_1 and a_3).

If you are able to follow above, can you prove the similarity for permutations [3,1,2] \equiv [2,1,3] and [2,3,1] \equiv [1,3,2] ? (The answer is in the bonus section of the editorial).

Notice that, since we are able to flip a_1,a_3, this means that a triplet satisfying (p_1,p_2,p_3) can be made to satisfy (p_3,p_2,p_1) by switching assignment of a_1 and a_3. Hence, we only have 3 cases, (which can be distinguished by value of p_2 which is constant in both (p_1,p_2,p_3) and (p_3,p_2,p_1)).

Hence, our cases are p_2=1,p_2=2,p_2=3.

Deriving Formula

Now, the exact formula used depends on the implementation you used. We will discuss one of them.

Let's discuss the answer for the 2 cases, i.e. when LCA(a_1,a_3)=a_2 and LCA(a_1,a_3) \neq a_2. For each of them, we will discuss formulas for all 3 permutation cases.

Let me define the following variables from quick explanation again:

S_S= Number of nodes smaller than a_2 in subtree of node a_2.
B_S = Number of nodes larger than a_2 in subtree of node a_2.
S_T= Number of nodes smaller than a_2 (in the entire tree).
B_T= Number of nodes larger than a_2 (in the entire tree).
S_V= Number of nodes smaller than a_2 in subtree of one of a_2's children, say vertex v.
B_V= Number of nodes larger than a_2 in subtree of one of a_2's children, say node v.

Case 1: When LCA(a_1,a_3) \neq a_2:

This case is easy if you are able to make a simple observation. The triplets contributing to this case will be such that exactly 1 of a_1 or a_3 lies inside the subtree of a_2 and the other one lies outside the subtree of a_2. Any other variation is rejected, as then a_2 will not lie on the path from a_1 to a_3. (Why? - Answer is in bonus section as usual)

So, we can frame the following formulas for the 3 permutation cases we reduced the problem to -

  1. p_2=1: Remember that only one of a_1 or a_3 will lie in the subtree of a_2. Since p_2=1, both a_1,a_3 must be bigger than a_2. There are B_S such nodes in subtree of a_2 and hence (B_T-B_S) such nodes outside subtree of a_2. Note that, since a triplet contributing to (2,1,3) can also contribute to (3,1,2) by simply switching the assignment of a_1 and a_3, our life has become much easier. We do not need to explicitly check that a_1 < a_3 when p_1 < p_3, or anything similar, because in case of violation we can simply switch the values of a_1 and a_3. The only thing to ensure is that both a_1 and a_3 are larger than a_2. Hence, we arrive at the simple formula of-
    Contribution = B_S*(B_T-B_S)
  2. p_2=2 : A little trickier than above, but nonetheless, quite intuitive and easy to follow. Remember that one of a_1 or a_3 needs to be smaller than a_2. There are S_S such nodes in subtree of a_2 and thus (S_T-S_S) nodes outside subtree of a_2. Similarly, the other one of a_1 or a_3 needs to be larger than a_2 and there are B_S such nodes in subtree of a_2 and (B_T-B_S) such nodes outside the subtree of a_2. Exactly one of a_1 or a_3 can lie in subtree of a_2, hence our formula becomes-
    Contribution = B_S*(S_T-S_S)+S_S*(B_T-B_S)
  3. p_2=3 : Very similar to case p_2=1. We need both a_1 and a_3 to be smaller than a_2 - we don't care about the relative ordering of a_1 and a_3, since a flip of assignment makes the triplet valid as long as both are less than a_2. There are S_S such nodes in subtree of a_2 and (S_T-S_S) such nodes outside subtree of a_2. Hence-
    Contribution = S_S*(S_T-S_S).

And with this, this case is done!

Case 2: When LCA(a_1,a_3)=a_2:

This happens when a_1 and a_3 are in the subtree of a_2, such that a_1 and a_3 belong to subtrees of different children of a_2. This is because, if they were to lie in the subtree of the same child of a_2, then a_2 would not be the LCA of a_1 and a_3. (Why? Answer in bonus section).

However, a brute computation for this case can be costly. Let us say we are talking about subtree of child v of a_2. Let B_V be number of nodes bigger than a_2 in this subtree of v, and let S_V be number of nodes smaller than a_2 in this subtree of v.

The brute force would try all possible pairs of children of a_2, i.e. (v_1,v_2) such that v_1,v_2 are both children of a_2 and v_1 \neq v_2. After that, it would find the contribution by putting one of a_1 and a_3 in each of them. Clearly, this is O(deg(a_2)^2), where deg(a_2) is the degree of node a_2. For star graphs and the like, this becomes O(N^2).

There is a famous saying, that, “Where there is darkness - where there is no hope, where even FFT and pragmas cannot shine - the only ray of hope of lowering the time complexity is mathematics!” :stuck_out_tongue:

We want to calculate number of valid triplets such that LCA(a_1,a_3)=a_2. And we also know that a_1 and a_3 must belong to different children of a_2. Can we frame some formula like above case to make our life easier?

Let's see-

Say that we are iterating over a_2's children to calculate the answer for this case. Say that we are currently considering a_2's child, node v.

  1. p_2=1: We need a_1 and a_3 to be larger than a_2, and both must belong to different children of a_2. Note that there are B_V such nodes in subtree of v and hence (B_S-B_V) nodes outside subtree of v and inside subtree of a_2 (i.e. there are (B_S-B_V) such nodes among the other children of a_2). That's all we want! Like earlier, we can come up with the formula-
    Contribution= B_V*(B_S-B_V)
  2. p_2=2 : Again, deriving a similar analogy as above. a_1 and a_3 must belong to different subtrees, and one of them must be smaller than a_2 and the other larger than a_2. There are S_V nodes smaller than a_2 in subtree of child v and hence (S_S-S_V) such nodes inside subtree of a_2 but outside subtree of v. Similarly there are B_V nodes larger than a_2 in subtree of v and (B_S-B_V) such nodes outside subtree of v and inside subtree of a_2. Hence, our formula becomes:
    Contribution = S_V*(B_S-B_V)+B_V*(S_S-S_V)
  3. p_2=3: We need both nodes smaller than a_2. There are S_V such nodes in the subtree of v and hence (S_S-S_V) such nodes outside the subtree of v and inside subtree of a_2. Hence, our formula becomes:
    Contribution = S_V*(S_S-S_V).

Note that, there is one thing to take care of in above - it counts each valid triplet twice!

Take a valid triplet (x,a_2,y) for example, where x and y are some nodes in subtree of a_2's children v_1 and v_2 respectively. Say we are considering each child of a_2 one by one.

When we are at v_1 , the triplet will be counted once. Remember that we are calculating with the intuition of “Number of triplets when one of the nodes is in subtree of v_1, and another anywhere in subtree of a_2 but outside subtree of v_1.” The triplet (x,a_2,y) gets counted as x is in subtree of v_1 and y is outside it, in subtree of v_2.

Now when we arrive at v_2, the triplet will be counted again as our intuition, again, is “Number of triplets with one of the nodes in subtree of v_2 and another anywhere in subtree of a_2 (and outside the subtree of v_2, of course)”. Here, (x,a_2,y) is again counted as y is in v_2, and x is outside subtree of v_2 in subtree of v_1.

Hence, to make the count correct, we divide the contribution by 2.

Once done, we just sum the contribution of both parts.

Some Implementation Details

Now, if the idea is clear to you, the only thing left is to obtain the required values.

B_T and S_T can be trivially calculated, as nodes carry distinct labels from [1,N]: S_T = a_2 - 1 and B_T = N - a_2.

Regarding finding S_S and B_S, we know that all labelings are distinct. Hence if we know even just one of the two, say S_S, then we can find the other one (say, B_S) as-
B_S=Size \ of \ subtree \ of \ a_2 - S_S - 1
(the extra -1 accounts for a_2 itself, which is counted in its own subtree size but is neither smaller nor larger than itself).

That eases it out!

So now, my question reduces to “How many nodes in subtree of a_2 are smaller than a_2?”. Note that if I can solve this for any general node a_2, then I can easily recycle the same method to find B_V and S_V as well! So we only need a way to compute S_S for any given node a_2.

Note that we can flatten the tree using an Euler Tour, so that questions like “How many nodes are smaller than a_2 in subtree of a_2?” reduce to “How many numbers in the range [L,R] of the array are smaller than a_2?”. Now, this is a known problem, which is solved by first sorting all the queries and doing point updates in a segment tree/Fenwick tree. One of the implementations is-

  1. Do Euler tour and make the required array, with all values set to 0.
  2. Iterate from node with smallest value to node with largest value. Update the values at required position to 1.
  3. Create a segment tree which would answer query for “Sum of values in range [L,R].” where L and R are the time when we enter the subtree and come out of the subtree of node a_2. Note that the values are 1 if the value corresponding to that node is already updated (because its smaller than a_2) else its 0 as we have not updated it yet (as its larger than a_2). Hence simply the sum of values in the range will give us the required value (i.e. S_S).

However, there is also a very nice and easy alternate method which I feel we should look at. Note that the operation here reduces to “Number of values smaller than a_2 in the given range.” We can hence use a data structure called “Order Statistic Tree”, popularly known as “indexed set”, to answer the queries easily.

This data structure supports operations like finding the median of the numbers inserted into it in O(LogN), finding the number of elements smaller than some element K in it, etc. Exact usage is shown in the bonus section, which contains a hall of fame for neat codes and my notes on the solutions :slight_smile:

SOLUTION

Setter
#include<iostream>
#include<vector>
#include<map>
#include<set>
#include<cassert>
#include<unordered_map>
#include<unordered_set>
#include<functional>
#include<queue>
#include<stack>
#include<cstring>
#include<algorithm>
#include<cmath>
#include<sstream>
#include<iomanip>
#include<cassert>
#include<cstdio>
#include<cstdlib>
#include<numeric>
#include<random>
#include<chrono>
#include<bitset>
using namespace std;
 
#define all(x) (x).begin(), (x).end()
#define pb push_back
#define xx first
#define yy second
#define sz(x) (int)(x).size()
#define gc getchar
#define IO ios_base::sync_with_stdio(false);cin.tie(0);cout.tie(0)
#define mp make_pair
 
#ifndef ONLINE_JUDGE
#  define LOG(x) (cerr << #x << " = " << (x) << endl)
#else
#  define LOG(x) ((void)0)
#endif
 
typedef long long ll;
typedef unsigned long long ull;
typedef long double ld;
 
const double PI=3.1415926535897932384626433832795;
const ll INF = 1LL<<62;
const ll MINF = -1LL<<62;
 
template<typename T> T getint() {
	T val=0;
	char c;
	
	bool neg=false;
	while((c=gc()) && !(c>='0' && c<='9')) {
		neg|=c=='-';
	}
 
	do {
		val=(val*10)+c-'0';
	} while((c=gc()) && (c>='0' && c<='9'));
 
	return val*(neg?-1:1);
}
 
//mt19937 rng(chrono::steady_clock::now().time_since_epoch().count()); uniform_int_distribution<int>(0, n-1)(rng)
 
const int MAXN=100003;
 
int n;
int p[3];
vector<int> adj[MAXN];
 
 
int arr[MAXN], sz[MAXN];
int L[MAXN], R[MAXN], par[MAXN], ind=0;
void dfs(int x) {
	L[x]=ind++;
	arr[L[x]]=x;
	sz[x]=1;
	for(auto i:adj[x]) {
		if(L[i]==-1) {
			par[i]=x;
			dfs(i);
			sz[x]+=sz[i];
		}
	}
	
	R[x]=ind-1;
}
 
struct fenwick {
	int arr[MAXN];
	
	fenwick() {memset(arr,0,sizeof arr);}
	
	void incr(int x, int by) {
		for(;x<MAXN;x+=(x&(-x))) {
			arr[x]+=by;
		}
	} 
	
	int sum(int x) {
		int sum=0;
		for(;x>0;x-=(x&(-x))) {
			sum+=arr[x];
		}
		
		return sum;
	}
};
 
struct ev {
	int typ;
	int val;
	int l,r;
	int idx;
	
	bool operator<(const ev& masik) const {
		if(val==masik.val) return typ>masik.typ;
		return val>masik.val;
	}
};
 
vector<int> pre[MAXN];
 
int main() {
	IO;
	int T;
	cin>>T;
	while(T--) {
		cin>>n; 
		
		for(int i=1;i<=n;++i) adj[i].clear();
		cin>>p[0]>>p[1]>>p[2];
		
		for(int i=1;i<n;++i) {
			int a,b;
			cin>>a>>b;
			adj[a].pb(b);
			adj[b].pb(a);
		}
		
		ind=0;
		for(int i=1;i<=n;++i) L[i]=-1;
		par[1]=-1;
		dfs(1);
					
		vector<ev> evs;
		evs.reserve(6*n);
		for(int i=1;i<=n;++i) {
			evs.pb({0,i,L[i],L[i]});
		}
		
		for(int i=1;i<=n;++i) {
			pre[i].resize(0);
			
			int szomsz=0;
			for(auto j:adj[i]) {
				if(j==par[i]) continue ;
				evs.pb({1,i, L[j],R[j], szomsz});
				szomsz++;
			}
			
			evs.push_back({1,i,0,L[i]-1, szomsz});
			evs.push_back({1,i,R[i]+1,ind-1, szomsz+1});
			
			pre[i].resize(szomsz+2);
		}
		
		sort(evs.begin(),evs.end());
		
		fenwick fw;
		for(auto i:evs) {
			if(i.typ==0) {
				fw.incr(i.l+1, 1);
			}else {
				pre[i.val][i.idx]=fw.sum(i.r+1)-fw.sum(i.l);
			}
		}
		
		ll res=0;
		for(int i=1;i<=n;++i) {
			ll ge, sm;
			vector<ll> gelst, smlst;
			
			gelst.reserve(sz(adj[i]));
			smlst.reserve(sz(adj[i]));
			
			int szomsz=0;
			for(auto j:adj[i]) {
				if(j==par[i]) continue ;
				
				ge=pre[i][szomsz];
				sm=sz[j]-ge;
				
				gelst.push_back(ge);
				smlst.push_back(sm);
				
				szomsz++;
			}
			
			ll kivul_ge=pre[i][szomsz]+pre[i][szomsz+1];
			ll kivul_sm=sz[1]-sz[i]-kivul_ge;
			
			gelst.pb(kivul_ge);
			smlst.pb(kivul_sm);
			
			int deg=gelst.size();
			
			ll sge=0, ssm=0;
			for(auto i:gelst) sge+=i;
			for(auto i:smlst) ssm+=i;
			
			if(p[1]==2) {
				res+=ssm*sge;
				for(int i=0;i<deg;++i) {
					res-=smlst[i]*gelst[i];
				}
			}else if(p[1]==3) {
				res+=ssm*ssm;
				for(int i=0;i<deg;++i) {
					res-=smlst[i]*smlst[i];
				}
			}else {
				res+=sge*sge;
				for(int i=0;i<deg;++i) {
					res-=gelst[i]*gelst[i];
				}
			}
		}
		
		if(p[1]!=2) res/=2;
		
		cout<<res<<"\n";
	}
	return 0;
}
 
Tester(KOTLIN)
import java.util.Random
import java.util.Stack

class Fenwick (private val n: Int) {
  private val tree = IntArray(n+1)
  fun upd(x: Int) {
    var i = x
    while(i <= n) {
      tree[i] += 1
      i += i and -i
    }
  }
  fun get(x: Int): Int {
    var i = x
    var ret = 0
    while(i > 0) {
      ret += tree[i]
      i = i and (i-1)
    }
    return ret
  }
  fun get(l: Int, r: Int): Int {
    return get(r) - get(l-1)
  }
}

class DisjointSet (private val n: Int) {
  private val grp = IntArray(n) { it }
  fun get(x: Int): Int {
    if(grp[x] != x) {
      grp[x] = get(grp[x])
    }
    return grp[x]
  }
  fun merge(a: Int, b: Int): Boolean {
    val p = get(a)
    val q = get(b)
    if(p != q) {
      grp[p] = q
    }
    return p != q
  }
}

fun main(Args: Array<String>) {
  val _NUM_TESTS = readLine()!!.toInt()
  require(_NUM_TESTS in 1..20)

  val random = Random()

  repeat(_NUM_TESTS) {
    val N = readLine()!!.toInt()
    require(N in 3..100000)

    val P = readLine()!!.split(" ").map{ it.toInt() }
    require(P.toSet() == setOf(1, 2, 3))
    val P2 = P[1]

    val gph = List(N) { mutableListOf<Int>() }
    val dj = DisjointSet(N)
    repeat(N-1) {
      val (a, b) = readLine()!!.split(" ").map{ it.toInt() - 1 }
      require(a in 0..N-1)
      require(b in 0..N-1)
      require(dj.merge(a, b))
      gph[a].add(b)
      gph[b].add(a)
    }

    val L = IntArray(N)
    val R = IntArray(N)
    val par = IntArray(N) { -1 }
    run {
      /* // Due to stack overflow, I have to change it to DFS without recursion
      var TIME = 0
      fun dfs(u: Int, p: Int) {
        TIME += 1
        L[u] = TIME
        par[u] = p
        for(v in gph[u]) if(v != p) dfs(v, u)
        R[u] = TIME
      }
      dfs(Random.nextInt(0, N-1), -1)
       */

      data class Element(val u: Int, val p: Int, var i: Int)
      val stk = Stack<Element>()
      stk.add(Element(random.nextInt(N), -1, 0))

      var TIME = 0
      while(!stk.empty()) {
        val cur = stk.peek()
        if(cur.i == gph[cur.u].size) {
          R[cur.u] = TIME
          stk.pop()
          continue
        }
        if(cur.i == 0) {
          TIME += 1
          L[cur.u] = TIME
        }
        val v = gph[cur.u][cur.i]
        if(v != cur.p) {
          par[v] = cur.u
          stk.add(Element(v, cur.u, 0))
        }
        cur.i += 1
      }
    }

    val fenwick = Fenwick(N)
    var ans = 0L
    for(u in 0 until N) {
      val a = mutableListOf<Int>()
      val b = mutableListOf<Int>()
      for(v in gph[u]) if(v != par[u]) {
        val num_smaller = fenwick.get(L[v], R[v])
        a.add(num_smaller)
        b.add((R[v] - L[v] + 1) - num_smaller)
      }
      run {
        val length = N - (R[u] - L[u] + 1)
        val num_smaller = fenwick.get(1, L[u] - 1) + fenwick.get(R[u] + 1, N)
        a.add(num_smaller)
        b.add(length - num_smaller)
      }

      fun go(p: List<Int>, q: List<Int>): Long {
        val sumq = q.map{ it.toLong() }.sum()
        return p.zip(q).map{ (x, y) -> x * (sumq - y) }.sum()
      }

      ans += when(P2) {
        1 -> go(b, b) / 2
        2 -> go(a, b)
        3 -> go(a, a) / 2
        else -> throw Exception("shouldn't happen!")
      }

      fenwick.upd(L[u])
    }

    println(ans)
  }
}

Time Complexity: O(N \log N)
Space Complexity: O(N)

CHEF VIJJU’S CORNER :smiley:

On equivalence of the other 2 permutations

First lets take up [2,1,3] and [3,1,2].

The process is EXACTLY the same! I have a node triplet (n_1,n_2,n_3) which satisfies permutation (2,1,3). Say a_1=n_1,a_2=n_2, a_3=n_3, then we can say that n_2 < n_1 < n_3. Now, what if I switch a_3 and a_1? I get the triplet (n_3,n_2,n_1) with n_3 > n_1 > n_2, which corresponds to permutation (3,1,2).

Proving equivalence of (2,3,1) and (1,3,2) also follows exact same steps.

Why only exactly one of a1,a3 must lie on subtree of a2 for case 1

Let's break it into cases covering all 4 possibilities of a_1,a_3 lying in the subtree of a_2-

  • When both a_1,a_3 lie in subtree of a_2 - In this case, LCA(a_1,a_3) \neq a_2 means that a_2 does not lie on the shortest path from a_1 to a_3. Hence the triplet is invalid.
  • When neither a_1 nor a_3 lies in subtree of a_2 - Again, a_2 will not lie on the shortest path from a_1 to a_3.
  • When exactly one of a_1 and a_3 lies in the subtree of a_2 - The path from the node outside the subtree of a_2 must cross a_2 to reach the node inside the subtree. Hence for these two cases (only a_1, or only a_3, in the subtree of a_2) the triplet is valid for LCA(a_1,a_3) \neq a_2.

For LCA condition, why a_1 and a_3 must lie in subtrees of different children

Say they lie in subtree of same child, say node v. Now, we will prove that a_2 cannot be LCA of a_1,a_3.

Assume that LCA(a_1,a_3)=a_2 even though they lie in the subtree of the same child. Now, look at the path from a_1 to a_2: it reaches a_2 via node v. The same observation holds for the path from a_3 to a_2, i.e. it also reaches a_2 via node v. By definition, the LCA is the first common node on these paths, and node v is a common node that appears before node a_2. Note that we are not claiming that v is the LCA - that would be wrong. We are claiming that a_2 is not the first common node on these paths, since both contain v. Hence, this contradicts LCA(a_1,a_3)=a_2.

TLDR : Hence, proved :smiley:

Can the editorial's approach work if multiple nodes can have same value?

No! Our formulas will no longer hold: if a_1 and a_3 have equal values, flipping them no longer turns an invalid triplet into a valid one. Also note that our proof that only 3 types of permutation-cases need to be handled breaks down.

Setter's Notes

Let’s consider the Euler tour of the tree and build a merge sort tree over it. Consider an arbitrary vertex x as a candidate for p_2. Then with the help of the merge sort tree we can calculate, for every adjacent vertex, the number of vertices in its subtree that are less than or greater than x in O(\log^2 n).

Let there be k adjacent vertices a_1, ..., a_k, let the number of vertices with smaller index in the appropriate subtree be l_1, ..., l_k, and similarly let the number of vertices with larger index be g_1, ..., g_k. Then, for example, if we have the permutation p = \{1,2,3\} or \{3,2,1\}, we want to calculate \sum_{i \neq j} l_i * g_j. This can easily be calculated in O(k) time in the following manner: let L=l_1+...+l_k and G=g_1+...+g_k; then the above sum is L*G-\sum^{k}_{i=1} l_i*g_i. The 2 other cases can be examined similarly. Because for this arbitrary vertex x we calculate the answer in O(k \log^2 n), for the whole problem we have an O(n \log^2 n) solution. We can reduce this to O(n \log n) by applying fractional cascading.

However, we can also notice that the queries are offline, so a sorting method is feasible too (and much faster in practice). I.e., let’s look only at queries like “in an interval [l;r] we want to count the numbers which are larger than y” (the case when we want to count the numbers which are smaller than y can be solved similarly). Then we put these queries and the indices of the vertices in one list and sort them by their index (for the queries the index is y and for the vertices it’s their index :P), and then just loop over them from the largest index to the smallest, doing a point increase in a Fenwick tree at a vertex and calculating the sum from l to r for a query.

  • Discuss how Fractional Cascading can be applied to this question.

  • HALL OF FAME: The following solution’s are selected for hall of fame: :smiley:

    • @l_returns - CodeChef: Practical coding for everyone
      Notes:
      "Dear @vijjji123 ,
      Today I learned that variable names can be very deceptive. The dp[x] array in his solution does not store any dp table, it is merely storing the pair <Size of subtree, Number of nodes smaller than Node i in the subtree of i (i.e. S_S for node i)>. I think it is named dp[x] in context to some special one changing the display picture on Facebook. Who knows :stuck_out_tongue: ? The Euler Tour and Segment tree operate as discussed in editorial. He made 3 functions , find_c1 , find_c2 and find_c3 to find answer for each case of permutation. Hence, this solution is pretty clean.
      Regards
      @vijju123 "
    • @tomsyd - https://www.codechef.com/viewsolution/25691825
      "Dear @vijjji123 ,
      I finally found a solution using the policy based data structure. Turns out, he made an unofficial editorial for div2 (remind me to thank him for covering for me in CHGORAM till I wrote the editorial!) and so I did not need to hunt down all solutions sorted by runtime. Whats even better is that his solution actually use descriptive variable names. One can hence easily map the variables of editorial to the variables he has used. He has used the Order-statistic tree (or indexed set) in the following manner-
       - He inserts the node values as we traverse the tree. He tries to find $S_V$ and $B_V$ by finding the difference between "Number of elements smaller than $a_2$ AFTER traversing the child $v$ $-$ Number of elements initially smaller than $a_2$ BEFORE traversing the subtree of $v$. ". 
       - Similar formula is used to find $B_V$. 
       - He then calculates $S_S=\sum S_V$ and $B_S = \sum B_V$. 
       - $S_T$ and $B_T$ are trivial to find.
    

    That ends the hunt for another solution.
    Cheers!
    With Regards
    @vijju123 "

    • @ssjgz - CodeChef: Practical coding for everyone
      NOTES:
      -"Dear @vijjji123 ,
      There is some old guy who came to Codechef from the far and distant country of Hackerrank. I see him writing mind blowing comments in his solution. I think he should apply as editorialist for codechef sometime. I am pretty sure the community appreciates everything he does for our forum :heart: .
      Yours Sincerely,
      @vijju123 "
Related Problems
22 Likes

Finally :slight_smile: Thank you @vijju123, you’re the best Editorialist.

7 Likes

I imagine the high star coders are adept at skimming posts and quickly extracting the info they need (they need to do this for every Problem description they read, after all, and under greater time-pressure), so writing a detailed one seems like the right approach, IMO :slight_smile:

And this Old Guy thanks you for the shout-out XD

6 Likes

XD @vijju123
Thanks for including my soln here :slight_smile:

4 Likes

BTW @vijju123 no links to related/similar problems this time?

1 Like

Ahhhhhhhhh. That section somehow got overwritten when I pasted the hall of fame XD. Getting formatting right was a horror :3 . Added, thanks for reminding me that!

1 Like

You can really see the struggle if you read the Quick Explanation. Its wayy too long from my usual Quick Explanations and I was kind of clueless on how to compress it as there were total 6 conditions according to my solutions to look at.

I really appreciate your words :heart: :smiley:

5 Likes

I guess “INTRAPATH” must be added to the related problems list as well.

1 Like

Its “INTRPATH” XD \hspace{0mm}
Yeah logically they are similar but that problem requires fast LCA and this one needs euler tour+segtree

1 Like

How about adding “hint” section here for those who still wanna solve it without looking at detailed solution or full solution ?
That section won’t reveal full solution but will give some hints.
We can have two hints. level 1 (which reveals very very less) and level 2 (which reveals little more ) ?
Not for this editorial but in general.
@vijju123 @taran_1407

1 Like

Euler Tour + BIT/segtree is usually the easy part, and my favourite one too.Without the experience of ‘INTRAPATH’, I won’t be able to solve CHOGRAM.

3 Likes

I think vijju123 is more capable than me to decide :sweat_smile:
Usually euler tour + segtree is tough to come up with unless you have already solved that type of problem.
Also there were cases and math here.
And we had to sort queries ( that idea was also non trivial) or use some BST or OST.
So i think that might be the reason for keeping it med hard.

3 Likes

Nah. I feel pre requisites give more hint than we give them credit for. It takes efforts to write good hints which reveal just the right amount, so I prefer not having them. The same effort can go to analyzing solutions, listing tricky test cases which I feel are more useful.

2 Likes

No.

Editorial difficulty tags are not randomly allotted. They are written according to the feedback and problem feedback agreed to by the setter, the tester and the contest admin. It is rightly a medium-hard problem.

Just because you solve a problem doesnt mean you should rate it as lower difficulty - it seems as if boasting and actually demotivates people who were not able to solve the problem. :slight_smile:

8 Likes

Yeah Karan, we know you are a good problem solver and you solved/and are capable of solving the problem… But doesn’t mean you should downgrade the difficulty level of a problem. If this is an easy-medium problem then what’s the level of those who weren’t able to solve it? Are they all noobs? Sorry, don’t take it in a wrong but it appears bragging to me. :disappointed_relieved:

Hey…hey…it was just my opinion…I felt that everyone felt that is was medium nothing else…I am not talking about myself. For me , it was very very hard. Please don’t misunderstand me…I did 75 wrong submissions to this problem…I’ll make sure not to share my idea on difficulty level in any comment. I am really sorry. And to the beginners:-This is very hard problem, trust me,very hard…

2 Likes

Don’t misunderstand me…I was adding the opinion which I heard from many people…nothing else. And this problem is very hard for me. I did 75 wrong submissions to this problem.I am also a great noob. Please don’t call me good. I don’t like it…

2 Likes

You are the best editorialist, @vijju123… It kinda makes me sad that after putting such huge efforts in writing such a wonderful and super-helpful editorial, u get just 9 likes to that…anyways, keep up the good work and a hearty thanks for editorial !

4 Likes

Need Help. I am getting TLE . I followed the approach as given in second editorial. Any one can tell the reason for TLE.

@l_returns , Could you please tell me what is the purpose of arrays s[] and e[] in your solution ? what values are they storing ?

EDIT 1 : the s[] array is storing values that are smaller than that particular node… am I right?

EDIT 2 :

Can you please explain this process in a bit more detail by taking a small example (or share links maybe) ??