The formula comes from the above-mentioned blog post, which unfortunately I can’t access at the moment. You can get it by expanding the recursive formula (the first formula in the EXPLANATION section above). The blog post also gives a very nice intuitive reason why the formula is true.

Once you have the formula, you just have to keep track of the subtree sizes as you change the root of the whole tree. If you change the root to one of its neighbors, only the corresponding two subtree sizes change; this was explained in the above editorial too. So you can calculate these for all possible roots. (This is the \mathtt{sizes} array in the code.) Now you could calculate the product of the subtree sizes for each possible root, but the resulting numbers would be too big to handle. However, you don’t need the exact values of these products; you just need to find the minimum (or the second smallest). Since log is a strictly increasing function, the order does not change if you apply the log first and then find the smallest. What you gain from the log is that the numbers stay small, since multiplications turn into additions. (The log of a big number is a small number.)
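A minimal sketch of that log trick (the names `sizes` and `log_product` are illustrative, not taken from the actual code):

```python
import math

def log_product(subtree_sizes):
    # log(a * b * ...) = log a + log b + ..., so the value stays small
    # even when the product itself would be astronomically large
    return sum(math.log(s) for s in subtree_sizes)

# Toy data: subtree sizes for two candidate roots of some small tree.
sizes = {0: [5, 3, 1, 1], 1: [5, 4, 1, 1]}
best_root = min(sizes, key=lambda r: log_product(sizes[r]))
```

Because log is strictly increasing, `best_root` is the same root that would minimize the exact product (here root 0, with product 15 versus 20).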

I also had that idea but didn’t implement it because of the risk with double precision.
For low K it will indeed work; for higher K two elements might get swapped because the precision of a double isn’t good enough to distinguish them. I am convinced test cases exist that would defeat your algorithm for general K, but they are quite hard to find. I am currently trying to prove that such test cases exist.

But when you modify peresz’ code to use 32-bit floats, it gives

35 709016226

However using 64-bit floats (default) still gives the right answer.
I do believe there are test cases that would also break the 64-bit float version, but I can’t find an efficient way to find them: N would have to be a lot larger, and my current search algorithm is O\left(\binom{N}{N/2}\right).
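The 32-bit precision loss is easy to reproduce without touching the original code. A small stdlib-only sketch (the pair n and n + 1 with n = 2^{24} is my illustrative choice, not from the thread): the logs of two adjacent large integers collapse to the same 32-bit float, while 64-bit floats still keep them apart.

```python
import math
import struct

def to_f32(x):
    # Round-trip a Python float (64-bit) through a 32-bit IEEE 754 float.
    return struct.unpack('f', struct.pack('f', x))[0]

n = 2 ** 24
print(to_f32(math.log(n)) == to_f32(math.log(n + 1)))  # True: indistinguishable in 32 bits
print(math.log(n) == math.log(n + 1))                  # False: still distinct in 64 bits
```

The gap between the two logs is about 2^{-24} \approx 6 \cdot 10^{-8}, well below the 32-bit float spacing near 16.6 (about 2 \cdot 10^{-6}), which is exactly the kind of swap discussed above.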

@spaanse Brother, you are right, but
since CP questions, or programming problems in general, have an upper limit or constraint according to which a solution is provided, it was right to provide that solution with the double data type.
I also gave one by maintaining ratios in MAXTOPO.
Even modern computers have an architectural limit up to which they can directly multiply two numbers, but we can extend that by storing the answers in larger arrays.

In a similar way, a sufficiently long data type can be used according to the constraints.

So it is not right to say that there exist test cases for which the solution will fail, because there also exist data types long enough (or they can be made user-defined) to satisfy the constraints of the problem.
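For what it’s worth, in Python the built-in int already is such a "long enough" type: it is arbitrary precision, so the products could in principle be compared exactly, only at a speed cost (the subtree sizes below are made up for illustration):

```python
# Python ints grow as many bits as the exact product needs; no overflow.
product = 1
for s in [499999, 499998, 3, 2, 1]:   # illustrative subtree sizes
    product *= s
print(product)               # exact value, no precision loss
print(product.bit_length())  # already over 40 bits for just five factors
```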

As @abhisek_12345 has already said, you only have to solve the problem within the constraints provided. Sure, you can always take N too large for any fixed-precision arithmetic. However, the problem statement says that N \leq 5 \cdot 10^5.

Let’s do some calculations! The largest number I store in the \mathtt{products} array is at most \ln(N!) \approx 6 \cdot 10^6, for N = 5 \cdot 10^5. You can get this using Stirling’s approximation. The smallest difference between any of these numbers is at least \ln(N) - \ln(N - 1) \approx 2 \cdot 10^{-6}, again for N = 5 \cdot 10^5. See line 43 in my code. The ratio of the two is around 3 \cdot 10^{12} < 2^{42}. This means that I need around 42 bits of precision for the code to work. Python floats are IEEE-754 double precision numbers which contain 53 bits of precision.
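The estimate above can be checked numerically with the standard library (variable names here are mine, not from the code; \mathtt{math.lgamma}(N + 1) evaluates \ln(N!) accurately enough for this purpose):

```python
import math

N = 5 * 10 ** 5
largest = math.lgamma(N + 1)                     # ln(N!), about 6e6
smallest_gap = math.log(N) - math.log(N - 1)     # about 2e-6
bits_needed = math.log2(largest / smallest_gap)  # about 41.5, safely below 53
```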

So all in all, I do not think you will be able to find a testcase that defeats my algorithm in the constraints given by the problem. Of course, I can be wrong in which case I would be curious to see such a testcase.