KETEKI2D editorial



Setter: Abhilash

Tester: Harsh Raj

Editorialist: Harsh Raj




Prerequisites: Basic graph theory and DFS traversal


We answer the queries offline.
Do a DFS to label all the nodes with their depth from the root.
For each node that appears in a query, store the depth value to
be looked up. Then do a second DFS from the root: when you visit
a node that has a pending query, compute the absolute depth of
the nodes that can answer this query and record it in a global
lookup table.

In the given example, when we are at node 1, we add an entry
(depth 1 → query 1) to the lookup table, i.e. all the nodes in
the subtree at depth 1 will be answers to query 1.

If the current node's depth matches an entry in the lookup table,
update the answer to the corresponding query.

When we reach node 2, we see its depth is 1, which matches the
entry for query 1, so we increase the count of query 1 by 1.

When returning from a node's subtree (in the
DFS), remove the entries that node added from the global lookup table.
Finally, output all the answers.


O(N) for the DFS itself, as the input is a tree; the per-query
bookkeeping (binary searches / set updates) adds a logarithmic factor.
Setter's Solution

	#include <bits/stdc++.h>
	using namespace std;
	const int N=1e5+5;	// assumed bound on n and on depths
	vector<int> edges[N],level_nodes[N];
	int level[N],start[N],finish[N],counter=0;
	void dfs(int s,int parent=-1,int _level=0){
	    level[s]=_level; start[s]=counter++;
	    level_nodes[_level].push_back(start[s]);	// start times per depth, in DFS order (hence sorted)
	    for(auto x:edges[s]) if(x!=parent) dfs(x,s,_level+1);
	    finish[s]=counter++;
	}
	int main(){
	    int n,q; cin>>n>>q;
	    for(int i=1,u,v;i<=n-1;i++){ cin>>u>>v; edges[u].push_back(v); edges[v].push_back(u); }
	    dfs(1);
	    for(int i=0;i<q;i++){
	        int node,depth; cin>>node>>depth;
	        int _level=level[node]+depth;
	        if(_level>=N){ cout<<0<<"\n"; continue; }
	        // answer = number of start times at this depth inside [start[node],finish[node]]
	        auto r=upper_bound(level_nodes[_level].begin(),level_nodes[_level].end(),finish[node]);
	        auto l=lower_bound(level_nodes[_level].begin(),level_nodes[_level].end(),start[node]);
	        cout<<(r-l)<<"\n";
	    }
	}
Tester's Solution

	#include <bits/stdc++.h>
	using namespace std;
	#define ll long long int
	#define eb emplace_back
	const int N=2e5+4;
	int n,q,depth[N],ans_query[N],query[N];
	vector<int> g[N],down_query[N];
	set<int> up_query[N];
	// answer queries offline using single dfs
	// down_query stores indexes of queries from the given index(source)
	// query stores depth from the source of the given query index
	// up_query stores indexes of queries for which the current node is one of the answers
	void dfs1(int node,int par,int dep);
	void dfs2(int node,int par);
	int main(){
		cin>>n>>q;
		for(int i=1,to,from;i<n;i++){//store graph
			cin>>from>>to;
			g[from].eb(to);
			g[to].eb(from);
		}
		dfs1(1,0,0);	//mark depths of all nodes
		for(int i=1,source,dep;i<=q;i++){//store queries
			cin>>source>>dep;
			query[i]=dep;
			down_query[source].eb(i);
		}
		dfs2(1,0);//find answer to queries
		for(int i=1;i<=q;i++)
			cout<<ans_query[i]<<"\n";
		return 0;
	}
	void dfs2(int node,int par){
		for(auto it:up_query[depth[node]]){//all the query indexes to which the current node is answer
			ans_query[it]++;
		}
		for(auto it:down_query[node]){//contains query number
			int dep=query[it];	//depth from given source
			dep+=depth[node];//depth from root
			if(dep<N) up_query[dep].insert(it);//in the current subtree,at given depth,the node is answer to it'th query
		}
		for(auto it:g[node]){
			if(it!=par) dfs2(it,node);
		}
		for(auto it:down_query[node]){//contains query number
			int dep=query[it]+depth[node];//same key that was inserted above
			if(dep<N) up_query[dep].erase(it);//it'th query is complete as we are not allowed to go towards root and all subnodes have been visited
		}
	}
	void dfs1(int node,int par,int dep){
		depth[node]=dep;//store depth of all nodes
		for(auto it:g[node]){
			if(it!=par) dfs1(it,node,dep+1);
		}
	}

It’s also possible to solve the problem using DSU on Trees.
First store the queries at the nodes, then perform one DFS, during which you compute for each node how many nodes its subtree contains at each depth. If you store the counts from farthest away to nearest, you can append the root node to such a list in O(1) using a simple push_back. If you always merge the smaller list into the larger one, every node gets merged at most \log(n) times, which gives O(n \log n) complexity.

My (cleaned up) solution:


Hi, I need a little clarification on your method.
Are you doing the following:
for each node, store a lookup table depth → nodes in its subtree, and merge the lookup tables of the children?

If so, how are you merging two lookup tables in \log(n)? I mean, when the tree becomes a linear chain, how is \log(n) maintained?

There are two tricks.

First I store the lookup table in reverse. depth_counts[0] is the number of nodes that are the farthest away, depth_counts[1] is the number of nodes that are a little bit nearer, …, depth_counts.back() is the number of nodes at distance 0.
This means, if there is a long chain, I only have to append a 1 at the end of the vector once per node.
So if there is only one subtree at the current node, I can handle everything in constant time.

Second is DSU on trees.
The merge doesn’t work in \log n, but every node gets merged at most \log n times.
The trick of DSU on trees is, to always merge the smaller data structure into the larger data structure.
So if there are multiple subtrees, say two, then the total work is just the size of the smaller depth_counts array.

Let’s assume for a moment that a depth_count array is longer if the subtree is bigger. (This is of course not always true, but it simplifies the following argument.)
Now let’s think: for a node x, how many times will a depth_count array that represents a subtree containing x be merged into a larger depth_count array representing a larger subtree?
After a merge, the merged subtree is at least twice as large as the original subtree containing x. Therefore, during the complete DFS, a subtree containing x will be merged into a larger subtree at most \log n times.
Since the length of a depth_count array is certainly no larger than the size of its subtree, you can also apply the argument to the arrays themselves.
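Since the linked solution isn’t reproduced in the thread, here is a minimal self-contained sketch of the technique just described; the struct name `DsuOnTree` and the `(relative depth, query index)` bookkeeping are my own illustrative choices, not the poster’s actual code:

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU on trees with a REVERSED depth-count vector per subtree:
// cnt.back() is the count at the subtree's own root (relative depth 0),
// cnt[cnt.size()-1-k] is the count at relative depth k below that root.
struct DsuOnTree {
    vector<vector<int>> g;                   // adjacency list, 1-indexed
    vector<vector<pair<int,int>>> queries;   // node -> (relative depth, query index)
    vector<int> ans;                         // ans[i] = answer to query i

    DsuOnTree(int n, int q) : g(n + 1), queries(n + 1), ans(q, 0) {}

    // Returns the reversed depth-count array of v's subtree.
    vector<int> dfs(int v, int parent) {
        vector<int> cnt;
        for (int to : g[v]) {
            if (to == parent) continue;
            vector<int> child = dfs(to, v);
            if (child.size() > cnt.size()) swap(child, cnt);  // small-to-large
            // both vectors are reversed, so their backs line up at the same level
            for (size_t i = 0; i < child.size(); i++)
                cnt[cnt.size() - 1 - i] += child[child.size() - 1 - i];
        }
        cnt.push_back(1);  // add v itself: O(1) thanks to the reversed storage
        for (auto &pr : queries[v]) {
            int d = pr.first, idx = pr.second;
            ans[idx] = (d < (int)cnt.size()) ? cnt[cnt.size() - 1 - d] : 0;
        }
        return cnt;
    }
};
```

On a chain, every call just appends a 1 to the vector it got from its only child, so that case is linear; a branching node pays only for its smaller child’s vector, which is what gives the O(n \log n) total.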


hey can you explain Setter’s Solution?

His solution is actually much simpler; the dfs does 3 things:

  1. store the depth of every node
  2. store the start time when a node at a given depth was visited (entering its subtree)
  3. store the end time when a node at a given depth was visited (leaving its subtree)

For each query, the answer is the number of nodes at the given depth that lie in the queried subtree, i.e. the number of such nodes visited between the start and end times of the query node. The clever part is that with just a little precomputation his solution becomes even better, as it can answer online queries (think of interactive problems where you must answer the current query to get the next one), whereas mine is completely offline.
