SORTING - Editorial

I used an entirely different approach which may be interesting to the rest of you. As you may already know, the general idea of the task is to quickly recover the element in the median position of an array A’ - formed by taking from the initial array A only the numbers whose values lie in an interval [l,r].
For example,
if A=[4,3,5,1,2] and [l,r] = [2,4], then A’=[4,3,2] and the element in the median position is the second element, 3.
or if [l,r] = [1,4] then we would have A’=[4,3,1,2]. Again, the element in the median position is 3.
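The two examples above can be reproduced with a tiny helper (a sketch of the problem statement only, not of the linked solution; `interval_median` is my own name):

```python
def interval_median(A, l, r):
    """Keep the elements of A whose values lie in [l, r], preserving
    their original order, and return the element in the (lower)
    median position of the resulting array A'."""
    sub = [x for x in A if l <= x <= r]
    return sub[(len(sub) - 1) // 2]  # lower median for even lengths

# interval_median([4, 3, 5, 1, 2], 2, 4)  -> A' = [4, 3, 2], median 3
# interval_median([4, 3, 5, 1, 2], 1, 4)  -> A' = [4, 3, 1, 2], median 3
```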

So, how do we do this? When I have an array A’ described by [l,r], say [l,r] = [1,5] and A’ = [4,5,3,2,1], I turn it into a doubly linked list, and I always keep a pointer to the median element. Once I have found the median - here it is 3 - I need to split the list into two: one described by [1,2] and another described by [4,5]. I take the shorter of the two lists and set it aside for now. I build the longer of the two in place, by starting from the original list (described by [1,5]) and removing the elements that belong to the shorter one; each removal is O(1). What do we do with all the shorter lists we set aside? We put them in a queue and, when their turn comes, build each linked list from scratch. Because a list is only ever built from scratch when it is the shorter half, the overall complexity ends up being O(n log n) - interestingly, without using complex data structures or even recursion.
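The queue-driven splitting can be sketched as follows. This is a simplified illustration, not the linked solution: it uses Python lists instead of the doubly linked list, so each split costs O(length of the list) rather than O(shorter half), and it assumes all values are distinct; `all_medians` is my own name.

```python
from collections import deque

def all_medians(A):
    """Repeatedly find the element in the median position, then split the
    remaining elements by value around it, processing the pieces from a
    queue instead of by recursion (order within each piece is preserved)."""
    medians = []
    q = deque([list(A)])            # start with the whole array
    while q:
        elems = q.popleft()
        if not elems:
            continue
        m = elems[(len(elems) - 1) // 2]      # element in the median position
        medians.append(m)
        # split by value around m, keeping original order on both sides
        q.append([x for x in elems if x < m])
        q.append([x for x in elems if x > m])
    return medians

# all_medians([4, 5, 3, 2, 1]): first median is 3, then the pieces
# [2, 1] and [4, 5] are processed from the queue, and so on.
```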

I am aware that my explanation is not very clear, but as you will see, the solution [1] is quite simple. It solves the largest test case in 0.16 seconds.

  [1]: http://www.codechef.com/viewsolution/2581145

Edit: I just found out that using a different sorting approach changes the recurrence relation, and the solution then ends up being O(n). Very good! We have an optimal solution!