If you think of the array as a tree (a chain), this is similar to centroid decomposition: divide the array (tree) into O(N log N) subarrays (paths) so that any subarray can be represented as a concatenation of two of those O(N log N) subarrays. If we append 1's to the array until its size becomes 2^k - 1 and then perform the decomposition, the largest power of 2 dividing a node at height h is 2^(k-1-h) (assuming the root is at height 0). The LCA of two nodes l, r will then be some number l <= i <= r of the form x*2^k, as mentioned in the editorial. We need to decrease r until it becomes a multiple of the largest possible power of 2. This is the same as picking a '1' in r's binary representation and setting all bits to its right to '0'. However, the '1' we pick cannot be to the left of the first point of difference in the binary representations of l and r, because then r would become less than l. That is exactly what the largest set bit of (l XOR r) identifies.
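The bit manipulation described above can be sketched in a few lines of Python (split_point is an assumed name, not from the editorial):

```python
def split_point(l, r):
    # Decrease r to the largest in-range value divisible by the highest
    # possible power of 2: clear every bit of r below the highest bit at
    # which l and r differ. r has a '1' at that position (since r > l),
    # so the result stays >= l.
    if l == r:
        return l
    k = (l ^ r).bit_length() - 1   # position of the first point of difference
    return r & ~((1 << k) - 1)     # keep bit k and everything above it
```

For instance, split_point(5, 7) returns 6 (5 = 101, 7 = 111, they first differ at bit 1, so bit 0 of 7 is cleared) and split_point(7, 11) returns 8.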
For example, consider an array of size 15: 8 is the root, with children 4 and 12, and so on. For the query from 5 to 7, the number at which we split is 6 (their LCA). Similarly, for the query from 7 to 11, the LCA is 8.
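One concrete realization of this decomposition is the "disjoint sparse table": at each level, store suffix aggregates over left halves and prefix aggregates over right halves of blocks, so that every query is the concatenation of exactly two stored subarrays, split at the point found via l XOR r. A minimal sketch, assuming min as the combining operation (build and range_min are assumed helper names):

```python
INF = float('inf')

def build(a):
    n = 2
    while n < len(a):
        n *= 2
    a = a + [INF] * (n - len(a))          # pad; INF is neutral for min
    levels = n.bit_length() - 1
    table = [[INF] * n for _ in range(levels)]
    for k in range(levels):
        half = 1 << k                      # half-block size at this level
        for start in range(0, n, 2 * half):
            mid = start + half
            acc = INF                      # suffix minima over the left half
            for i in range(mid - 1, start - 1, -1):
                acc = min(acc, a[i])
                table[k][i] = acc
            acc = INF                      # prefix minima over the right half
            for i in range(mid, start + 2 * half):
                acc = min(acc, a[i])
                table[k][i] = acc
    return table, a

def range_min(table, a, l, r):
    # min of a[l..r] inclusive: l and r fall into different halves of a
    # block exactly at the level of the highest set bit of (l XOR r)
    if l == r:
        return a[l]
    k = (l ^ r).bit_length() - 1
    return min(table[k][l], table[k][r])
```

For a query like [4, 6], range_min combines a stored suffix of the left half [4, 5] with a stored prefix of the right half starting at 6, which is exactly the two-subarray concatenation described above.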
