@mugurelionut I will have to look at your O(N+RMAX) solution for isBad(). I made some optimizations to avoid the full O(N*K) penalty, but I still had pretty easy-to-hit corner cases where I had to recalculate a maximum over a K subsequence. None of this got in the way of AC, but I need to see your solution here.
@anton one tack I took was to do a BFS and only consider altering element ‘i’ if element ‘i’ was indeed a local maximum in one or more of the K-subsets which satisfied the M-drunk criteria. I think this (and some bad coding) slowed down my isBad() routine (since I also had to calculate this new mask of “M-drunk contributors”), but the upshot is that the algorithm almost always terminated in fewer iterations, because all of the relevant 1-edits were considered before all of the relevant 2-edits, etc. I need to think about whether this is better or worse than the backtracking solution.
When I ran my backtracking solution (on my device, not on CodeChef) on the test file input.txt, it took more than 2 sec (for 10 test cases). Although, after seeing the solution, I realized that my solution was correct and got AC in the practice section. But I still don’t get how it could take more than 2 sec on my device and still get AC.
Could you please check that the attached test file does not cause TLE for the AC solutions.
Thanks for pointing it out.
Actually I wanted to add this but forgot at some point.
But note that the backtracking solution has complexity O(2^N * K), which is slightly better.
Nice. So, let’s say we generate the 2^N masks by recursive backtracking. After every decision we make for a bit i (i.e., setting it to 0 or 1), we compute the number of maximum values in the interval [i-K+1, i] in O(K) time (assuming 0-based indexing and i >= K-1). We also maintain a counter with the number of 1 bits in the current mask. The total number of “decisions” (setting a bit to 0 or 1) and counter updates (+/- 1) during the backtracking is at most 2^1 + 2^2 + … + 2^N = 2^(N+1) - 2 = O(2^N). Since we spend O(K) time per decision, we get the O(2^N * K) complexity. Pretty easy.
Are you kidding me?
You have check() inside the loop that fills the vector test.
So the complexity is O(2^N * N * (N-K+1) * K), which of course is slow.
Simply by moving one brace, I got AC with a max time of 0.27 s per test: http://www.codechef.com/viewsolution/1727641
So don’t blame the STL. The vector is the last thing you should blame.
Of course, that only holds if you are using it wisely.
The CodeChef server is very fast.
It is an Intel Pentium G860 @ 3 GHz.
Refer to this (it is Cube).
The tester’s first solution runs in 0.01 seconds on your test file, while the maximal time for your solution over our test data is 0.08 seconds.
Both of the given solutions are brute force. What I do not understand is: is there no algorithmic approach to this question? I understand that these solutions perform very well, but I am just curious whether there can be some algorithm for this.