# LCASUM Editorial

Tester: Harris Leung
Editorialist: Pratiyush Mishra, Jakub Safin

# DIFFICULTY:

Medium

# PREREQUISITES:

Tree algorithms - LCA, centroid or heavy-light decomposition, preorder/DFS labelling

# PROBLEM:

You’re given two rooted trees over vertices 1 through N, each rooted at node 1. Find a pair of vertices u,v with the maximum sum of depths of their LCAs in the two trees.

# EXPLANATION:

Problems with two trees are usually solved by processing one tree and using a suitable data structure to find required information about the other tree, and this one’s no exception.

We need to efficiently process all pairs of vertices along with their LCAs in the first tree. For each LCA l_1 in the first tree, we’ll find the maximum possible depth of the LCA l_2 in the second tree. A brute-force approach would be to list the vertices in each subtree of the first tree, so that for each l_1, we can check pairs of sons of l_1 and try all pairs (u,v) where u is in the subtree of one son and v is in the subtree of the other, plus all cases where u = l_1 and v is in the subtree of l_1. This is obviously too slow, since we look at O(N^2) pairs.
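For reference, the brute force can be sketched directly on parent arrays. This is an illustrative setup (0-indexed vertices, the root's parent set to itself, and assuming u \ne v), not the intended solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Walk up from the deeper vertex until the two paths meet.
// par is a parent array with par[root] = root, dep holds depths.
int naive_lca(const vector<int>& par, const vector<int>& dep, int u, int v) {
    while (u != v) {
        if (dep[u] < dep[v]) swap(u, v);
        u = par[u];
    }
    return u;
}

// Try every pair (u, v), u != v, and take the best depth sum.
// Far too slow for large N, but handy for checking a faster solution.
int brute_force(const vector<int>& par1, const vector<int>& dep1,
                const vector<int>& par2, const vector<int>& dep2) {
    int n = par1.size(), best = 0;
    for (int u = 0; u < n; u++)
        for (int v = u + 1; v < n; v++)
            best = max(best, dep1[naive_lca(par1, dep1, u, v)]
                           + dep2[naive_lca(par2, dep2, u, v)]);
    return best;
}
```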

There’s a particular technique that comes in useful here. We keep a data structure for each subtree, which contains a subset of vertices. When processing l_1 with sons s_1, s_2, \ldots, s_k sorted by non-increasing subtree size, we take the DS for son s_1, add vertices from s_2, from s_3, and so on until s_k, and then add l_1 itself. This way, we create the DS for l_1 from the DS for s_1, and we can discard the DSs of the other sons. Before adding vertices from some subtree s_j, we should also query the DS for s_1: for each v in the subtree of s_j, we want to know the maximum depth of l_2 over all u that are already in the DS. This is just a generalisation of the brute-force approach - there, the DS was a list of vertices and a query traversed that list, calculating l_2 for each element.
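The merging can be sketched as follows. This is an illustrative skeleton where a plain set<int> stands in for the real DS, and the usual swap trick replaces explicit sorting of sons by subtree size (keeping the larger of the two sets is equivalent to starting from the largest son); all names are hypothetical:

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> children;  // children[v] in the first tree
vector<set<int>> ds;           // one DS (here: a plain set) per vertex

// "Smaller to larger": always merge the smaller set into the larger one,
// so each vertex changes its set O(log N) times in total.
void solve(int v) {
    ds[v].clear();
    for (int s : children[v]) {
        solve(s);
        if (ds[s].size() > ds[v].size()) swap(ds[v], ds[s]);
        // In the real solution, query each vertex of the smaller set
        // against ds[v] here, before inserting it.
        for (int u : ds[s]) ds[v].insert(u);
        ds[s].clear();
    }
    ds[v].insert(v);
}
```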

The “smaller to larger” technique guarantees that each vertex is added/queried O(\log N) times: whenever a vertex is moved to a new DS, that DS contains at least twice as many vertices as its previous one. In fact, we’re building HLD and keeping one DS for each heavy path.

#### Efficient update/query

How should we design the DS? We already see that it needs to support two operations:

1. update: add v to set S
2. query: find the maximum depth of LCA_2(u,v) over all u \in S

In the query, we’re moving from v towards the root 1 in the second tree. At the start, v \not\in S, but there can be some vertices from its subtree (remember that this is in the second tree!) in S. If there’s at least one, then l_2 = v. Otherwise, we move up to the parent of v. Again, if there’s a vertex from its subtree in S, this current vertex is l_2. Otherwise we move up again, and so on - at worst we stop at the root. It turns out that the answer to a query is the depth of the lowest ancestor of v whose subtree contains something from S. We just need to find it efficiently.

The standard algorithm for LCA uses jumps to 2^k-th ancestors: for two distinct vertices u,v at the same depth, if their 2^k-th ancestors are distinct, we jump from each vertex to its 2^k-th ancestor; then we decrease k and repeat - if we start with 2^k \gt N, we never need to try the same k twice. At the end, the common parent of u and v is the LCA. Here, we can do the same for v alone:

```
query(v):
    if intersection(S, subtree2(v)) not empty:
        return depth(v)
    cur = v
    for k = 20 down to 0:
        if depth(cur) < 2^k:
            continue
        anc = get_ancestor(cur, 2^k)
        if intersection(S, subtree2(anc)) is empty:
            cur = anc
    return depth(parent(cur))
```


DFS gives us the 2^0-th ancestors (parents), and we can calculate the 2^{k+1}-th from the 2^k-th ancestors into a table of size O(N \log N). That means a query calculates intersection(S, subtree2(\cdot)) O(\log N) times, and this is the part we need to do quickly.
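The ancestor table can be built like this (an illustrative sketch; par holds the 2^0-th ancestors from DFS, with the root as its own parent, so jumps past the root simply stay at the root):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int LOG = 20;  // enough for N up to 2^20

// up[k][v] is the 2^k-th ancestor of v; size O(N log N).
vector<vector<int>> build_up(const vector<int>& par) {
    int n = par.size();
    vector<vector<int>> up(LOG, vector<int>(n));
    up[0] = par;  // 2^0-th ancestors = parents
    for (int k = 1; k < LOG; k++)
        for (int v = 0; v < n; v++)
            up[k][v] = up[k-1][up[k-1][v]];  // 2^k = 2^(k-1) + 2^(k-1)
    return up;
}

// Jump d levels up by decomposing d into powers of two.
int get_ancestor(const vector<vector<int>>& up, int v, int d) {
    for (int k = 0; k < LOG; k++)
        if (d >> k & 1) v = up[k][v];
    return v;
}
```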

Preorder labelling helps here. We run DFS and assign positive integers from 1 to N to vertices in the order in which the DFS visits them. This way, the labels of vertices in each subtree fully cover a contiguous range. If S contains preorder labels (in the second tree) instead of original labels of vertices and the range for v is [l,r], then intersection(S,subtree2(v)) is empty iff S.lower_bound(l) > r, and update is just inserting the label of v into S.
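A sketch of the labelling and the emptiness test (tin/tout are illustrative names for the endpoints [l,r] of a vertex’s preorder range in the second tree):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> ch2;   // children in the second tree
vector<int> tin, tout;     // subtree of v covers labels [tin[v], tout[v]]
int timer_ = 0;

// Assign preorder labels 1..N; a subtree's labels form a contiguous range.
void dfs_label(int v) {
    tin[v] = ++timer_;
    for (int s : ch2[v]) dfs_label(s);
    tout[v] = timer_;
}

// intersection(S, subtree2(v)) is empty iff S has no label in [tin[v], tout[v]].
bool empty_intersection(const set<int>& S, int v) {
    auto it = S.lower_bound(tin[v]);
    return it == S.end() || *it > tout[v];
}
```

With this, an update is a single `S.insert(tin[v])`.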

The DS extends an ordinary set with the query function above. Each query now performs O(\log N) lower-bound calculations, each taking O(\log N) time, and there are O(N \log N) queries, so the time complexity is O(N \log^3 N). This could already pass.

#### Faster!

We can speed up the query and cut off a logarithm. The key observation is that when we move from v to its parent, the range [l,r] (described above) only extends.

Let’s denote the closest preorder label of a vertex in S smaller than the label of v by u_l and the closest label greater than the label of v by u_r. In case no such vertex exists to either side, we set u_l=0 or u_r=N+1 respectively. If u_l or u_r is in [l,r], then the intersection is non-empty, and if neither of them is in [l,r], then u_l \lt l \le r \lt u_r, so the intersection is empty.

That means we only need one calculation of lower bound (u_r, then we get u_l as the label just before it) at the start of a query call. Each intersection can be evaluated in constant time and query takes only O(\log N) time. Otherwise, nothing changed - the DS still extends a set.
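The constant-time intersection test could look like this (illustrative names; n is the number of vertices and label is v’s preorder label, which lies in every range [l,r] queried during this call, since those ranges belong to ancestors of v):

```cpp
#include <bits/stdc++.h>
using namespace std;

// u_l: closest label in S below v's label (0 if none);
// u_r: closest label in S above it (n+1 if none).
struct Bounds { int ul, ur; };

// One lower_bound per query call; v itself is not in S.
Bounds find_bounds(const set<int>& S, int label, int n) {
    auto it = S.lower_bound(label);
    int ur = (it == S.end()) ? n + 1 : *it;
    int ul = (it == S.begin()) ? 0 : *prev(it);
    return {ul, ur};
}

// For a range [l, r] containing v's label, the intersection with S
// is non-empty iff u_l or u_r lands inside the range.
bool nonempty(Bounds b, int l, int r) {
    return (l <= b.ul && b.ul <= r) || (l <= b.ur && b.ur <= r);
}
```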

# TIME COMPLEXITY:

There are O(N \log N) queries/updates. Each query takes O(\log N) iterations and each update is one insertion into a set. The total time complexity is O(N \log^2 N).