THOUSES - Editorial

I thought the reason could be the “Stack limit reached” exception. But you didn’t use recursion in your code.

Coming to the reason: the CSES Josephus problem can be solved recursively, but implementing the solution available on GFG and running it for larger values of N (say around 1 million) will result in a stack overflow exception. To get around this in Java, you have to write a bit of extra code using a Thread with a larger stack size.

public class Main implements Runnable {
    public void run() {
        // io, preCompute() and solve() are the submitter's own helpers (omitted here).
        int testcases = 0;
        // testcases++;                             // uncomment for single-test problems
        if (testcases == 0)
            testcases = io.nextInt();
        preCompute();
        for (int test = 0; test < testcases; test++) {
            // io.print("Case #" + (test + 1) + ": ");
            solve();
        }
    }

    public static void main(String[] args) {
        // Run the real entry point on a new thread with a 1 << 30 byte (~1 GB) stack,
        // so deep recursion does not hit the default JVM stack limit.
        new Thread(null, new Main(), "Main Thread", 1 << 30).start();
    }
}

But this is not required on CodeChef, since the judge environment already provides a large enough stack size for Java.

WA on tests 3 and 6. Please help.
Code: CodeChef: Practical coding for everyone

For people getting WAs on the first three files: treat the edges as undirected. An edge (u, v) doesn’t mean that node u is the parent of node v; it can be the other way around as well.
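
In case it helps, here is a minimal sketch (my own code, not the editorial’s or anyone’s submission, with an illustrative input format) of rooting the tree at node 1, so that parents are decided by a BFS rather than by the order the endpoints appear in the input:

import java.util.*;

public class RootTree {
    // Root an undirected tree at node 1: returns parent[], where parent[1] stays 0.
    static int[] rootAtOne(int n, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i <= n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {            // add BOTH directions: the edge may be written
            adj.get(e[0]).add(e[1]);       // child-first or parent-first
            adj.get(e[1]).add(e[0]);
        }
        int[] parent = new int[n + 1];
        boolean[] seen = new boolean[n + 1];
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.add(1);
        seen[1] = true;
        while (!queue.isEmpty()) {         // BFS from node 1 decides the real orientation
            int u = queue.poll();
            for (int v : adj.get(u)) {
                if (!seen[v]) {
                    seen[v] = true;
                    parent[v] = u;
                    queue.add(v);
                }
            }
        }
        return parent;
    }

    public static void main(String[] args) {
        // Edge (3, 1) is written "backwards": node 1 is still the parent of node 3 after rooting.
        int[] parent = rootAtOne(3, new int[][]{{3, 1}, {2, 1}});
        System.out.println(Arrays.toString(parent));   // [0, 0, 1, 1]
    }
}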


I have implemented it using Maps: one WA, two NZEC.

https://www.codechef.com/viewsolution/46363012
Hello Karina124, why go for BFS?
We have the following information available:
1) Node n is always a leaf.
2) Nodes 1 to n are numbered in order, like writing in a notebook.
By maintaining parent-child info and node-count info (subtree sizes), I wrote a solution, but I am facing issues.

Hello, below is my approach:
1) Maintain two Maps: NodeCount and ParentOf.
2) The NodeCount map gives the number of nodes each node covers, i.e. its subtree size (sketched just below).
3) The X_at_Node map is populated with the help of the NodeCount map.
4) Adding all the X values in the X_at_Node map gives the answer.
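
For step 2, here is a rough sketch of how the subtree-size map can be filled once the tree is rooted at node 1 (the names order, parentOf and nodeCount are only illustrative, not taken from your submission): process the nodes in reverse BFS order so every child is counted before its parent.

import java.util.*;

public class SubtreeSizes {
    // order: nodes in BFS order from the root (root first); parentOf: child -> parent.
    // Returns nodeCount: node -> size of its subtree (the node itself included).
    static Map<Integer, Integer> subtreeSizes(int[] order, Map<Integer, Integer> parentOf) {
        Map<Integer, Integer> nodeCount = new HashMap<>();
        for (int v : order) nodeCount.put(v, 1);          // every node counts itself
        for (int i = order.length - 1; i >= 1; i--) {     // reverse BFS order, skipping the root
            int v = order[i];
            int p = parentOf.get(v);
            nodeCount.put(p, nodeCount.get(p) + nodeCount.get(v));
        }
        return nodeCount;
    }

    public static void main(String[] args) {
        // Tiny hand-made example: 1 is the root, 2 and 3 are its children, 4 is a child of 2.
        int[] order = {1, 2, 3, 4};
        Map<Integer, Integer> parentOf = new HashMap<>();
        parentOf.put(2, 1);
        parentOf.put(3, 1);
        parentOf.put(4, 2);
        System.out.println(subtreeSizes(order, parentOf)); // subtree sizes: 1->4, 2->2, 3->1, 4->1
    }
}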

Why am I getting WA on the first two cases? Please help if anyone can.
My solution: CodeChef: Practical coding for everyone

Hello, is this valid input?
1
3 1
2 1
2 3

I used a slightly different approach than the editorial: I calculate the subtree sizes of all nodes and then rearrange the children from highest to lowest subtree size in my adjacency list, but I'm getting the first 2 cases wrong. Can anyone point out the error?
Thank you!
https://www.codechef.com/viewsolution/47844543
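
I can’t debug the linked solution itself, but here is a tiny sketch of the “rearrange children from largest to smallest subtree” step as I understand it (all names are illustrative, not from your code), in case comparing against it helps:

import java.util.*;

public class SortChildren {
    // Sort every node's child list so that larger subtrees come first.
    // children.get(u) must already contain only the children of u (tree rooted at 1),
    // and subtreeSize[v] must already be computed for every node v.
    static void sortBySubtreeSize(List<List<Integer>> children, int[] subtreeSize) {
        for (List<Integer> kids : children) {
            kids.sort((a, b) -> Integer.compare(subtreeSize[b], subtreeSize[a])); // descending
        }
    }

    public static void main(String[] args) {
        // Node 1 has children 2, 3, 4 with subtree sizes 1, 5, 2 respectively.
        List<List<Integer>> children = new ArrayList<>();
        for (int i = 0; i <= 4; i++) children.add(new ArrayList<>());
        children.get(1).addAll(Arrays.asList(2, 3, 4));
        int[] subtreeSize = {0, 9, 1, 5, 2};
        sortBySubtreeSize(children, subtreeSize);
        System.out.println(children.get(1));   // [3, 4, 2]
    }
}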

I believe the editorial does not explain one thing clearly (I don’t mean to bash the author, great work indeed; I just think the key observation could be phrased better and perhaps proven).

Here’s my take on this statement in case any beginner doubts the claim (I don’t like taking things for granted, so I like to prove them).

Say we are at node v. Suppose that the contributions of its n children are represented as f(1), f(2), \ldots, f(n). We should assign the numbers 1, 2, \ldots, n to them; let’s denote the value assigned to the i-th child as g(i). Without loss of generality, suppose that there are two children a and b such that f(a) < f(b); we can write this as f(b) = f(a) + t with t > 0. Watch what happens when g(a) < g(b).

g(a) < g(b) \implies g(b) = g(a) + x for some x > 0

The total contribution can be calculated as A = g(a) \cdot f(a) + g(b) \cdot f(b), which is the same as g(a) \cdot f(a) + g(a) \cdot f(a) + x \cdot f(a) + g(a) \cdot t + x \cdot t

Now if we swap the values of g(a) and g(b), we get:

B = g(a) \cdot f(b) + g(b) \cdot f(a), which is the same as
g(a) \cdot f(a) + g(a) \cdot t + g(a) \cdot f(a) + x \cdot f(a)

By subtracting the second equation from the first, we get A - B = x \cdot t > 0, which implies that it is always better to swap the values of g(a) and g(b) in the described case, since the swap strictly decreases the total. Applying this to the whole array, we get that for increasingly sorted f(i), the values g(i) should be assigned in decreasing order.
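
For a concrete instance of the swap (numbers chosen by me just for illustration): take f(a) = 2, f(b) = 5 (so t = 3) and g(a) = 1, g(b) = 4 (so x = 3). Then A = 1 \cdot 2 + 4 \cdot 5 = 22 while B = 1 \cdot 5 + 4 \cdot 2 = 13, and indeed A - B = 9 = x \cdot t, so the swapped (opposite-order) assignment is strictly smaller.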

Good job by the editorialist; I just wanted to provide more information for the doubters.
