BACREP - Editorial

@dardev If you look at the DFS order, you will find that every subtree of the original tree corresponds to a contiguous range (subarray) of it. So, since an update at a node affects all nodes in its subtree, an update is equivalent to a range update (over the range corresponding to the subtree). The range for updates on the subtrees of 1 and 4 in this example will include 11, since it is in those subtrees. Later, when you query for 11, you will find the sum of all previous updates on any of its ancestors.

2 Likes

Thank you for the reply @hemanth_1

I understand that we are doing DFS from the root.
So, first I need to think in terms of a function that returns the subtree for a given node as a query?

I am unable to understand how to store the queries and how to arrange them accordingly, i.e. by i - dep(v). Please help me to understand this…

Why does the add function in the setter's solution have a call to add(r, -value)?

Where can I get the editorial of the last problem, Faulty System?

Adding kj to all the nodes in the subtree of the j-th node with ordinary point updates could touch N nodes of the segment tree, and clearing them again would take O(N). So why did he use a segment tree for that? Or did he use some other technique to update the values of all the nodes in the subtree? Please, someone, answer this.

I think you are confused about prefix sum related techniques.
You can read more about prefix sums & fenwick trees, that’ll also help you in understanding how O(log n) operations can be used to maintain prefix sums using segment trees.

How did you come up with the intuition of building the segment tree keyed on time - depth[v]? Going through the DFS and updating and removing is intuitive and makes sense, but this sudden time - depth thing seems out of the blue. Shed some light.

@r_64 I followed your approach. I am getting TLE and SIGSEGV errors. Can you please tell me why this is happening? I hope you will help me. This is my code:
https://www.codechef.com/viewsolution/27471192

One thing that immediately caused a crash (in debug mode) for me is that your compare function is wrong: for a + query q, it is not irreflexive as it should be, i.e. compare(q,q) is not false. This will cause incorrect behaviour during sorting.

3 Likes

Thank you very much @ssjgz . Now this works - CodeChef: Practical coding for everyone

This one works:

bool compare(int a, int b)
{
    if (queries[a].seq_val != queries[b].seq_val)
        return queries[a].seq_val < queries[b].seq_val;
    return (queries[a].type == '+') && (queries[b].type == '?');
}

Could you please clarify why the previous compare function does not work?
This was my previous compare function:

bool compare(int a, int b)
{
    if (queries[a].seq_val < queries[b].seq_val) return true;
    if (queries[b].seq_val < queries[a].seq_val) return false;
    if (queries[a].type == '?') return false;
    else return true;
}

1 Like

Comparing any + query with itself with your original compare function will return true - try it!

The requirements on a comparator used with std::sort are that it give a Strict Weak Ordering over its domain - in particular, that it be irreflexive.

Here is the output from a simple testcase with gcc’s -D_GLIBCXX_DEBUG flag used at compilation time:

/usr/include/c++/7/bits/stl_algo.h:4866:
Error: comparison doesn't meet irreflexive requirements, assert(!(a < a)).

Objects involved in the operation:
    instance "functor" @ 0x0x7fffc2868758 {
      type = bool (*)(int, int);
    }
    iterator::value_type "ordered type" {
      type = int;
    }

3 Likes

Thanks a lot again :smiley:

1 Like

I used a different, not so efficient, but I believe instructive strategy. I'll provide a rough outline.
1) Static case (no updates)
This isn't so hard: you can have level-link pointers and store precomputed sums of bacteria from root to leaves. Now we can solve this easily by finding the level ancestor. We need to treat leaves differently (taking the level sum) as compared to non-leaves (taking the level value) for queries, but nothing hard.
2) Dynamic. Take 1.
Let's store the initial values as in case 1. For updates, we can maintain an update box for each node, and this update can be propagated downwards. The problem, however, is that the update time for each time unit can quickly climb to O(N), so we must rebuild the tree many times. Not good enough.
3) Dynamic. Take 2.
Let's group the nodes into groups of k = \sqrt{N \log N} (this value of k can be found by writing down the total time to do binary search on all group-leader ancestors and minimizing it with respect to k; see below). To do this, we do a BFS and mark each node with '1' until k nodes are marked. Then we do BFS recursively on the remaining elements in our BFS queue with value '2' (then '3', and so on). For each group, designate a group leader. Whenever our propagation reaches a group leader, stop propagating and store the timestamp and update value in the group leader. For each node, we store pointers to all the group leaders on its root-to-node path. For queries, we can thus return (static case + propagated value + checked value from all ancestor group leaders). Note that each update propagates at most k times, and Nk just barely fits into the time limit. There is one problem, though: if our tree is very unbalanced, the number of ancestors blows up. Not good.
3.1) Small improvement
Observe that as we move down the tree, the depth/size ratio of our groups lessens, so going downward we can afford larger groups. We can recursively multiply our k by, say, 1.2 for each incremented group ID.
4) Dynamic. Take 3.
I ended up using centroid decomposition, but only to designate group leaders and group sizes. Since the centroid-decomposed tree is sufficiently balanced, we avoid the problem from step 3. This means our group sizes are no longer exactly O(\sqrt{N \log N}), but this is an improvement, and I managed to use it to squeeze my time to under 2 seconds.
Final thoughts.
This was one hell of a question, and I used a crazy contrived approach. I should have thought of the given solution, which was much less complicated. Nevertheless, I learnt a lot solving this.
Good problem, and good editorial. Thanks!

2 Likes

My solution is exactly the same as the editorial's, but it is still failing and getting WA. I have tried it on many different cases but still couldn't find one which fails my solution. I have coded it in C++.

EDIT: the WA got corrected by converting only one occurrence of int into long long int,
but there is still a TLE issue.

Can anybody help, as it is getting TLE on the last 50pt subtask? Should I use a Fenwick tree instead of a segment tree?

https://www.codechef.com/viewsolution/27550087
@ssjgz @r_64.

1 Like

Glad you fixed the WA :slight_smile:

This is causing a lot of slowdown, according to my experiments:

int sort_by(vector<ll> a,vector<ll> b){

You should take the vectors a and b by reference instead of by value.

I’m not sure if that’s going to be enough to get AC, though.

3 Likes

How do I take them by reference?
I am new to C++, as I usually code in Python.

1 Like

Oh, sorry - replace it with

int sort_by(vector<ll>& a,vector<ll>& b){

or (even better, stylistically):

int sort_by(const vector<ll>& a,const vector<ll>& b){

The time limit is still very tight, though.

3 Likes

Although the times for the first and second subtasks reduced drastically, it still TLEs on the third one.
I will try using structs, as they are generally faster than vectors.

once again thanks for helping

1 Like