SONIQUE - Editorial

PROBLEM LINK:

Practice
Contest, div. 1
Contest, div. 2

Author: Danya Smelskiy
Tester: Zhong Ziqian
Editorialist: Oleksandr Kulkov

DIFFICULTY:
HARD
PREREQUISITES:
Treap, segment tree, Euler tour, LCA
PROBLEM:
You’re given a tree on N vertices. Each vertex of the tree has some value A_i assigned. Also, some of the tree’s edges are collapsed. You are to process queries of the following types:

  1. The state of the E-th road changes (a collapsed road is repaired, or an intact one collapses);
  2. For each vertex V in connected component of vertex X add value Y to A_V;
  3. For each vertex V in the connected component of vertex X, transfer A_V to A_X (i.e. A_X becomes the component sum, all other values become 0);
  4. Output A_X;
  5. Output sum of A_V across all vertices V in connected component of vertex X;
  6. For each vertex V in connected component of vertex X turn A_V to 0;
  7. Output the minimum number of roads to be repaired so it’s possible to travel between each pair of vertices U, V such that A_U > 0 and A_V > 0.

EXPLANATION:

It may look terrifying at first, but dealing with the first 6 types of queries is as easy as using a treap.

Let’s traverse all vertices in DFS order and for each vertex v memorize the time in_v when we entered it and the time out_v when we exited it. Then, if we order all vertices by in_v, vertex u lies in the subtree of vertex v iff in_v \leq in_u < out_v. Thus we may use the segment [in_v; out_v) to refer to the subtree of the vertex v. Let’s keep each connected component in a treap using in_v as the explicit key.
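As an illustration, here is a minimal self-contained sketch (not the author’s code; the names `adj`, `tin`, `tout` and the iterative DFS are illustrative) that assigns Euler-tour timestamps and checks the subtree condition. Both entering and leaving a vertex advance the timer here, so every in/out position is distinct:

```cpp
#include <cassert>
#include <utility>
#include <vector>
using namespace std;

// Euler-tour timestamps: u lies in the subtree of v iff
// tin[v] <= tin[u] && tin[u] < tout[v].
vector<int> tin, tout;

void euler_tour(const vector<vector<int>>& adj, int root) {
    int n = adj.size(), timer = 0;
    tin.assign(n, -1);
    tout.assign(n, -1);
    vector<pair<int, int>> st = {{root, -1}};  // (vertex, parent)
    while (!st.empty()) {
        auto [v, p] = st.back();
        if (tin[v] == -1) {                    // first visit: open the interval
            tin[v] = timer++;
            for (int to : adj[v])
                if (to != p) st.push_back({to, v});
        } else {                               // all children done: close it
            tout[v] = timer++;
            st.pop_back();
        }
    }
}

// is u inside the subtree of v?
bool in_subtree(int u, int v) {
    return tin[v] <= tin[u] && tin[u] < tout[v];
}
```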

What operations are we going to need? For queries of the first type we’ll have to be able to either cut out the segment of elements having in_u in the range [in_v; out_v) (if the road wasn’t collapsed before) or link a treap with such elements into another treap which doesn’t have any elements from this range yet (if it was).

So, is it a tree or a heap?

To do this, let’s recall how a treap is maintained in general. Each node of the treap has some key and priority. We maintain the treap in such a way that it stays a binary search tree with respect to key and a heap with respect to priority (random priorities keep it balanced in expectation). From this comes its name: tree + heap = treap:

struct node;
typedef node* treap;

struct node {
	node *l = 0, *r = 0, *p = 0;
	int key, val, prior;
	
	node(int key): key(key), val(0), prior(rand()) {}

We will also need some auxiliary functions which either cut nodes from their parents or, vice versa, set some node to be the left or right child of a vertex:

	#define safe(x, op) (x ? x->op : 0)
	
	treap fix() {
		safe(l, p = this);
		safe(r, p = this);
		return this;
	}
	treap cut_l() {
		safe(l, p = 0);
		return l;
	}
	treap cut_r() {
		safe(r, p = 0);
		return r;
	}
	treap set_l(treap x) {
		l = x;
		return fix();
	}
	treap set_r(treap x) {
		r = x;
		return fix();
	}
};

Here the safe(x, op) macro is used to refer to the treap x so that the caller doesn’t need additional checks that it’s not nullptr. Also, fix() is a function which finalizes the addition of new children, setting their links to the new parent and recalculating metadata (if any). We’ll later introduce a function push() which passes accumulated updates down to children. For now this interface is enough to implement the split and merge operations:

treap merge(treap a, treap b) {
	if(!a || !b) {
		return a ? a : b;
	} else if(a->prior < b->prior) {
		return a->set_r(merge(a->cut_r(), b));
	} else {
		return b->set_l(merge(a, b->cut_l()));
	}
}
pair<treap, treap> split(treap a, int k) {
	treap b, c;
	if(!a) {
		return {0, 0};
	} else if(a->key < k) {
		tie(b, c) = split(a->cut_r(), k);
		return {a->set_r(b), c};
	} else {
		tie(b, c) = split(a->cut_l(), k);
		return {b, a->set_l(c)};
	}
}

merge takes two treaps having elements in ranges [a; b) and [b; c) and returns their merged version having elements in the range [a; c). split takes a treap having elements in the range [a; b) and a number k, and returns two treaps having elements in ranges [a; k) and [k; b) correspondingly.
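For experimentation, the pieces above can be consolidated into one compact, keys-only treap (val and other metadata stripped); this is an illustrative sketch, not the contest code:

```cpp
#include <cstdlib>
#include <tuple>
#include <utility>
#include <vector>
using namespace std;

struct node;
typedef node* treap;

struct node {
    node *l = 0, *r = 0, *p = 0;
    int key, prior;
    node(int key): key(key), prior(rand()) {}
    treap fix() {
        if (l) l->p = this;
        if (r) r->p = this;
        return this;
    }
    treap cut_l() { if (l) l->p = 0; return l; }
    treap cut_r() { if (r) r->p = 0; return r; }
    treap set_l(treap x) { l = x; return fix(); }
    treap set_r(treap x) { r = x; return fix(); }
};

// min-heap on prior, BST on key
treap merge(treap a, treap b) {
    if (!a || !b) return a ? a : b;
    if (a->prior < b->prior)
        return a->set_r(merge(a->cut_r(), b));
    return b->set_l(merge(a, b->cut_l()));
}

pair<treap, treap> split(treap a, int k) {
    if (!a) return {nullptr, nullptr};
    treap b, c;
    if (a->key < k) {
        tie(b, c) = split(a->cut_r(), k);
        return {a->set_r(b), c};
    }
    tie(b, c) = split(a->cut_l(), k);
    return {b, a->set_l(c)};
}

// in-order traversal collects keys in sorted order
void collect(treap t, vector<int>& out) {
    if (!t) return;
    collect(t->l, out);
    out.push_back(t->key);
    collect(t->r, out);
}
```

A new key k is inserted by splitting at k and merging the singleton node in between the two halves.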

Let’s introduce two new operations, root() and min(). The first one finds the root node of the treap in which this node is currently contained. The latter seeks the minimum key accessible from the current node:

struct node {
    ...
    treap root() {
		return p ? p->root() : this;
	}
	treap min() {
		return l ? l->min() : this;
	}
};

With these operations we are able to implement the link and cut operations mentioned above.

pair<treap, treap> cut(treap a, int l, int r) {
	treap b, c;
	tie(a, b) = split(a, l);
	tie(b, c) = split(b, r);
	return {merge(a, c), b};
}
treap link(treap a, treap x) {
	treap b;
	tie(a, b) = split(a, x->min()->key);
	return merge(a, merge(x, b));
}

Here cut takes a treap on range [a; b) and a range [l; r) and splits it into two treaps having ranges [a; l) \cup [r; b) and [l; r) correspondingly. And link inverts this operation, i.e. takes two treaps with ranges [a; l) \cup [r; b) and [l; r) correspondingly and merges them into a single treap with range [a; b). These functions are enough to deal with queries of the first kind:

void inv(int u, int v) {
	if(in[u] > in[v]) {
		swap(u, v);
	}
	auto U = nod[u]->root();
	auto V = nod[v]->root();
	if(U != V) {
		link(U, V);
	} else {
		cut(U, in[v], out[v]);
	}
}

Here nod_v is the pointer to the treap node corresponding to vertex v of the initial tree.

Addition and summation

Going further, we’ll have to maintain metadata in our nodes. Namely, to process queries of the second type we have to keep the current value val of the vertex, the sum of values sum in the treap subtree of the current node, and the value to_add which was added to the whole subtree and must still be pushed further down. We also now have to keep the size of the subtree:

struct node {
    int64_t val = 0, sum = 0, to_add = 0;
    int siz = 1;
    ...
	treap fix() {
		safe(l, p = this);
		safe(r, p = this);
		sum = val + safe(l, sum) + safe(r, sum);
		siz = 1 + safe(l, siz) + safe(r, siz);
		return this;
	}
	treap push() {
		if(to_add) {
			safe(l, add(to_add));
			safe(r, add(to_add));
			to_add = 0;
		}
		return this;
	}
	treap add(int64_t k) {
		to_add += k;
		val += k;
		sum += siz * k;
		return this;
	}
    treap cut_l() {
		push();
		safe(l, p = 0);
		return l;
	}
	treap cut_r() {
		push();
		safe(r, p = 0);
		return r;
	}
    ...
};

When dealing with a treap it is important to keep some invariants and conventions. In our implementation we keep the convention that at any moment of time the values val and sum are calculated correctly, and to_add only keeps the value which should be pushed further down and doesn’t affect the current node at all. Also, we now recalculate our metadata in the fix() function and call the push() function before cutting children from a node. The way the merge and split functions are implemented guarantees that we only have to do this when the set_* or cut_* methods are called.

Now we’re able to process queries of 2, 4 and 5 types:

void add(int x, int c) {
	nod[x]->root()->add(c);
}
int64_t get(int x) {
	treap a, b;
	tie(a, b) = cut(nod[x]->root(), in[x], in[x] + 1); // cut out the single node of x
	int64_t res = b->sum;
	link(a, b);
	return res;
}
int64_t get_total(int x) {
	return nod[x]->root()->sum;
}

Note that for the get function we manually cut the single node from its treap to be sure that all necessary pushes above it are processed. It’s slightly slower than manually going down from the root node and inspecting or pushing the stored updates, but this way we maintain the same level of abstraction, using only the split and merge functions to work with the treap, which are guaranteed to work correctly.

Nullifying

Queries of types 3 and 6 are actually processed in the same manner as the previous ones. To deal with them we have to add a new field to_nul to our structure, with some slight changes:

struct node {
    int to_nul = 0;
    ...
	treap nullify() {
		to_nul = 1;
		sum = to_add = val = 0;
		return this;
	}
	treap push() {
		if(to_nul) {
			safe(l, nullify());
			safe(r, nullify());
			to_nul = 0;
		}
		if(to_add) {
            ...
		}
		return this;
	}		
    ...
};

Note that in the push function it is important to maintain the proper order. We actually push not one but two operations here, which we should treat as a single composite operation: “[do / do not] nullify, and then add k”, in this exact order. Thus when a new nullify operation comes we set the switch to [do] and make k = 0, and when an add x operation comes, we simply do k += x and leave the switch untouched.
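The composition rule can be captured in a few lines. This toy Lazy struct (names invented for illustration, detached from the treap) stores the composite tag and shows how the two updates collapse correctly:

```cpp
#include <cassert>

// Composite lazy tag: "(optionally) nullify, then add k", in that order.
// A new nullify sets the flag and resets k; a new add just accumulates.
struct Lazy {
    bool nul = false;        // do we nullify first?
    long long add = 0;       // value added after the (optional) nullify

    void push_nullify() { nul = true; add = 0; }
    void push_add(long long x) { add += x; }

    // apply the pending tag to a single stored value
    long long apply(long long v) const {
        return (nul ? 0 : v) + add;
    }
};
```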

Now we’re able to process queries of type 3 and 6:

void compose(int x) {
	treap a, b;
	tie(a, b) = cut(nod[x]->root(), in[x], in[x] + 1); // cut out the single node of x
	b->add(safe(a, sum));
	safe(a, nullify());
	link(a, b);
}
void full_nul(int x) {
	nod[x]->root()->nullify();
}

This handles all 6 types of queries and prepares us for the truly hard part of the problem.

Here is the link to the code which deals with the first 6 types of queries only.

The Big One

The last thing we should do is learn how to answer queries of the 7th type. To do this, for each component with non-zero sum we store its vertex with minimum in_v in a special set, and we store the answer for this query in the to_con variable.

int to_con;
set<int> non_zero;

to_con may change in two cases: either a component changes its status (its sum becomes zero or stops being zero), or some edge changes its status. When a component changes its status, we should calculate the number of collapsed edges from this component to the minimal subtree which includes all other non-zero components.

To get this number we reduce the question to calculating sums on paths in the tree and changing numbers in particular vertices. Let sum_root be the function calculating the number of collapsed edges on the path from a vertex to the root, and sum_path the function calculating the number of collapsed edges on the path between two vertices:

int sum_path(int a, int b) {
	return sum_root(a) + sum_root(b) - 2 * sum_root(lca(a, b));
}
int process(int x) {
	if(non_zero.empty()) {
		return 0;
	} else {
		int g = lca(rin[*begin(non_zero)], rin[*prev(end(non_zero))]);
		if(is_par(g, x)) {
			vector<int> pos;
			int t = sum_root(x);
			auto it = non_zero.lower_bound(in[x]);
			if(it != end(non_zero)) {
				pos.push_back(t - sum_root(lca(x, rin[*it])));
			}
			if(it != begin(non_zero)) {
				pos.push_back(t - sum_root(lca(x, rin[*prev(it)])));
			}
			return *min_element(begin(pos), end(pos));
		} else {
			return sum_path(g, x);
		}
	}
}
void ins(int x) {
	to_con += process(x);
	non_zero.insert(in[x]);
}
void rem(int x) {
	non_zero.erase(in[x]);
	to_con -= process(x);
}

Here rin[x] keeps the vertex v such that in_v = x, lca simply returns the lca of two vertices, and is_par(a, b) checks if vertex a is an ancestor of vertex b in the tree. Thus, to get the number of collapsed edges on the path from x to the minimal subtree containing all other non-zero vertices, we should check two separate cases. Let g be the overall lca of all vertices lying in non_zero. We calculate it by taking the lca of the first and last vertices according to in_v.

  1. If x doesn’t lie in the subtree of g, then we should look at the path from x to g;
  2. Otherwise we should look at the path from x to its lca with the non-zero vertex having the closest in_v.

Another kind of update we should consider is changing the status of some particular edge:

void upd(int x, int k) {
	auto it = non_zero.lower_bound(in[x]);
	auto jt = non_zero.lower_bound(out[x]);
	to_con += k * (it != jt && (it != begin(non_zero) || jt != end(non_zero)));
	add_root(x, k);
}

The function upd takes the lower endpoint of the edge. Here we check that the edge whose status was updated has non-zero vertices both below and above it. After that, in the function add_root we add k = 1 if the edge collapsed and k = -1 if the edge was repaired.
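To illustrate the containment test, here is a tiny standalone version of the check from upd (the function name and the concrete numbers are made up for the example): the edge matters iff some representative of a non-zero component lies inside the subtree interval [in_x, out_x) and some other lies outside it.

```cpp
#include <cassert>
#include <set>
using namespace std;

// Does the edge above the subtree [in_x, out_x) separate non-zero vertices?
bool edge_separates(const set<int>& non_zero, int in_x, int out_x) {
    auto it = non_zero.lower_bound(in_x);   // first representative inside (if any)
    auto jt = non_zero.lower_bound(out_x);  // first representative past the subtree
    bool inside  = (it != jt);                                        // someone below
    bool outside = (it != non_zero.begin()) || (jt != non_zero.end()); // someone above
    return inside && outside;
}
```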

Now we should implement the functions add_root and sum_root. The first one adds k at a particular vertex and the second one calculates the sum on the path from a vertex to the root of the tree.

int sum_root(int x) {
	return sum(out[x]);
}
void add_root(int x, int k) {
	add(0, in[x], k);
	add(1, out[x], k);
}

Usually, to deal with queries on paths you’d need to utilize heavy-light decomposition. But when it comes to queries of the kind “add at a vertex” and “get the sum to the root” we can make do with a segment tree only, as described here (№12).

To do this we keep two segment trees: one over vertices in in_v order and one over vertices in out_v order. In both trees the prefix containing the subtree of vertex v ends at position out_v. But in the first tree this prefix also includes the ancestors of v and some other garbage vertices, while in the second tree it includes the same elements as in the first tree except for the ancestors. Thus we may get the sum on the path to the root by subtracting the second prefix sum from the first one:

const int maxn = 5e5 + 42;
int sm[2][4 * maxn]; 
void add(int z, int p, int c, int v = 1, int l = 0, int r = maxn) {
	sm[z][v] += c;
	if(r - l > 1) {
		int m = (l + r) / 2;
		if(p < m) {
			add(z, p, c, 2 * v, l, m);
		} else {
			add(z, p, c, 2 * v + 1, m, r);
		}
	}
}
int sum(int p, int v = 1, int l = 0, int r = maxn) {
	if(r - l == 1) {
		return sm[0][v];
	} else {
		int m = (l + r) / 2;
		if(p < m) {
			return sum(p, 2 * v, l, m);
		} else {
			return (sm[0][2 * v] - sm[1][2 * v]) + sum(p, 2 * v + 1, m, r);
		}
	}
}
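To sanity-check the identity, here is a small self-contained model where the two segment trees are replaced by plain arrays and prefix sums (the tiny tree and the class name are illustrative, not from the original solution). It assumes timestamps where exiting a vertex also advances the timer, so all in/out positions are distinct. Note the asymmetry that mirrors the leaf case of sum above: position out_x itself is included in the first sum but excluded from the second, so a vertex’s own entry in the second array is never subtracted:

```cpp
#include <cassert>
#include <vector>
using namespace std;

struct PathToRoot {
    vector<int> par, tin, tout;
    vector<long long> s0, s1;   // s0: +k at tin[x], s1: +k at tout[x]
    int timer = 0;

    // par[v] = parent of v; par[0] == 0 marks the root
    PathToRoot(const vector<int>& parent): par(parent) {
        int n = par.size();
        tin.assign(n, 0);
        tout.assign(n, 0);
        s0.assign(2 * n, 0);
        s1.assign(2 * n, 0);
        dfs(0);
    }
    void dfs(int v) {
        tin[v] = timer++;
        for (int u = 0; u < (int)par.size(); ++u)
            if (u != 0 && par[u] == v) dfs(u);
        tout[v] = timer++;               // exiting also advances the timer
    }
    void add_root(int x, long long k) {
        s0[tin[x]] += k;
        s1[tout[x]] += k;
    }
    long long sum_root(int x) const {    // sum over x and all its ancestors
        long long res = 0;
        for (int q = 0; q <= tout[x]; ++q) res += s0[q];  // inclusive bound
        for (int q = 0; q < tout[x]; ++q) res -= s1[q];   // exclusive bound
        return res;
    }
};
```

For any vertex y, the two entries cancel out exactly when y is unrelated to x or a proper descendant of x, leaving only the contributions of x and its ancestors.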

Now that we have introduced almost all the new auxiliary functions (except for basics like lca), we may slightly change our main query functions to keep track of component and edge changes:

void inv(int u, int v) {
	if(in[u] > in[v]) {
		swap(u, v);
	}
	auto U = nod[u]->root();
	auto V = nod[v]->root();
	if(U != V) {
		if(V->sum > 0) {
			rem(rin[V->min()->key]);
		}
		if(U->sum > 0) {
			rem(rin[U->min()->key]);
		}
		link(U, V);
		U = nod[u]->root();
		if(U->sum > 0) {
			ins(rin[U->min()->key]);
		}
		upd(v, -1);
	} else {
		if(U->sum > 0) {
			rem(rin[U->min()->key]);
		}
		cut(U, in[v], out[v]);
		U = nod[u]->root();
		V = nod[v]->root();
		if(U->sum > 0) {
			ins(rin[U->min()->key]);
		}
		if(V->sum > 0) {
			ins(rin[V->min()->key]);
		}
		upd(v, 1);
	}
}
void add(int x, int c) {
	if(nod[x]->root()->sum == 0 && c > 0) {
		ins(rin[nod[x]->root()->min()->key]);
	}
	nod[x]->root()->add(c);
}
void full_nul(int x) {
	if(nod[x]->root()->sum > 0) {
		rem(rin[nod[x]->root()->min()->key]);
	}
	nod[x]->root()->nullify();
}

The solution turns out to be extremely tedious and hard to implement, but the main idea is not that hard. Please refer to the attached code for further understanding.

AUTHOR’S AND TESTER’S SOLUTIONS:
Author’s solution can be found here.
Tester’s solution can be found here.
Editorialist’s solution can be found here.
