Problem Link:

Difficulty: Easy-Medium

Prerequisites: Persistent Data Structures, Segment Tree, Merge Sort Tree

Problem: Find the number of comparisons that will be made while sorting the given permutation of integers with the provided QuickSort code.

Explanation:

Solution to subtask 1: It's enough to translate the given code into your favorite programming language. The given implementation is an ordinary QuickSort, and it is widely known that it will make no more than N*N comparisons.

Solution to subtask 2: Again, it's enough to run the provided code. N is quite large here, but it is widely known that QuickSort makes about O(N*log(N)) comparisons on randomly generated arrays, so that is still fast enough.

Solution to all the subtasks: The solutions to subtasks 3 and 4 are in fact very similar; your score depends on how efficiently you implement the key idea.

Observation: every time a list A[] is given to the sorting function, A[] consists of a permutation of the numbers in some range [L; R]. Initially it's [1; N], then it's [1; pivot-1] and [pivot+1; N], and so on.

Another observation: the relative order of the numbers in every list A[] given to the sort never changes. So, to obtain the pivot we just have to find the middle number by position, i.e. the (R-L+2)/2-th number, among all the numbers with values in the range [L; R]. For this, we use the concept of a "rank query". A rank query asks: what is the k-th least element in a given set, for a queried k? If the set contains the positions of the elements with values in [L; R], then the (R-L+2)/2-th element is exactly what we are looking for. This is generally done by placing a '1' at the position of each element in the set; the k-th element is then found by asking for the first position that has k 1's to the left of it (inclusive).
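For subtasks 1 and 2, the count can be obtained by direct simulation. Below is a minimal Python sketch; since the problem's actual QuickSort code is not reproduced here, it assumes the variant described in this editorial: the pivot is the middle, (R-L+2)/2-th, element (1-based), and every other element is compared against the pivot exactly once per call.

```python
def count_comparisons(a):
    """Count comparisons made by a QuickSort that picks the middle element
    (the (m+1)//2-th, 1-based, for a list of length m) as pivot and compares
    every other element against it exactly once per call."""
    m = len(a)
    if m <= 1:
        return 0
    pivot = a[(m - 1) // 2]               # 0-based index of the (m+1)//2-th element
    left = [x for x in a if x < pivot]    # stable partition keeps relative order
    right = [x for x in a if x > pivot]
    return m - 1 + count_comparisons(left) + count_comparisons(right)
```

Since the input is a permutation, the stable partition above keeps each half in its original relative order, which is exactly the second observation used later in the editorial.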
This data structure is generally implemented with a BIT or a segment tree; in this problem we will see how a segment tree is of use to us. Let's define Pos(x) = the position of element x in the original array, and S(x) = { Pos(1), Pos(2), ..., Pos(x) }. Then, given (L, R), we need to find the (R-L+2)/2-th element of S(R) \ S(L-1). Using our convention of 1's to denote positions, set difference (where one set is a subset of the other) reduces to finding the first position x where (number of 1's to the left of x in S(R)) - (number of 1's to the left of x in S(L-1)) = (R-L+2)/2. Notice that once we have N+1 segment trees (one per set S(x)), we are done, because the traversal for a query (L, R, k) is simply: go to the left child with (L, R, k) if k <= (number of ones in the left child of S(R)'s tree) - (number of ones in the left child of S(L-1)'s tree); otherwise go to the right child with (L, R, k minus that count). But we cannot afford N+1 segment trees: that is far too much memory, and far too much time to create! Notice, however, that S(x+1) = S(x) U { Pos(x+1) }: only one element is added to the set. Can we save memory and time now? The answer is yes. The concept used is persistence. You can read about persistent data structures from various sources: an article on CodeProject, or the editorial of problem QUERY (the part on "Introducing Persistence"). For Wikipedia lovers, what we are aiming to build with our persistent segment tree looks a lot like the "Example on Trees" on the wiki page, whose diagrams are self-explanatory. One key thing to note about persistence, and persistent segment trees in particular, is that there is no child-to-parent reference. All references and traversals are one-way: down the tree. Also, node numbers are allocated dynamically, not with the standard left-child = 2i, right-child = 2i+1 scheme.

Putting it all together:
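The idea above can be sketched compactly. This is an illustrative Python version, not the setter's code; the names (build_versions, insert, kth) are my own. Version roots[x] represents S(x), each insertion creates O(log N) new nodes and shares the rest, and the k-th query walks two versions down in lockstep.

```python
class Node:
    __slots__ = ("cnt", "left", "right")
    def __init__(self, cnt=0, left=None, right=None):
        self.cnt, self.left, self.right = cnt, left, right

EMPTY = Node()
EMPTY.left = EMPTY.right = EMPTY   # self-referencing sentinel: an empty tree of any depth

def insert(node, lo, hi, pos):
    """Return a new version of `node` with a 1 added at `pos`; O(log N) new nodes."""
    if lo == hi:
        return Node(node.cnt + 1, EMPTY, EMPTY)
    mid = (lo + hi) // 2
    if pos <= mid:
        return Node(node.cnt + 1, insert(node.left, lo, mid, pos), node.right)
    return Node(node.cnt + 1, node.left, insert(node.right, mid + 1, hi, pos))

def kth(lo_node, hi_node, lo, hi, k):
    """Position of the k-th 1 in S(R) \\ S(L-1), given roots[L-1] and roots[R]."""
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    ones_left = hi_node.left.cnt - lo_node.left.cnt
    if k <= ones_left:
        return kth(lo_node.left, hi_node.left, lo, mid, k)
    return kth(lo_node.right, hi_node.right, mid + 1, hi, k - ones_left)

def build_versions(perm):
    """roots[x] represents S(x) = { Pos(1), ..., Pos(x) } for a 1-based permutation."""
    n = len(perm)
    pos = {v: i + 1 for i, v in enumerate(perm)}
    roots = [EMPTY]
    for v in range(1, n + 1):
        roots.append(insert(roots[-1], 1, n, pos[v]))
    return roots
```

For an interval [L; R], the pivot's position is kth(roots[L-1], roots[R], 1, N, (R-L+2)//2), and the pivot's value is the array element at that position. For example, with the permutation [4, 3, 5, 1, 2] and [L, R] = [2, 4], the query returns position 2, whose value 3 is the pivot.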
Code for the segment tree modification can be found, for example, in the setter's code.

Analysis of Time and Memory Complexity: Using persistence, each update operation takes O(log N) time and memory. Thus, the overall time and memory complexity of the solution is O(N log N).

Note on getting full points: The above implementation will get you only 86 points. This is not because of efficiency (subtasks 3 and 4 are meant to differentiate a complexity of O(log N) per query from O(log^2 N) per query), but because with N as large as 5 * 10^5, the recursion stack grows too large, resulting in a Runtime Exception/Segmentation Fault. The best way to overcome this is to convert the implementation into a non-recursive one. In fact, since the [L; R] intervals being queried are independent of each other (the subproblems don't even overlap), we need not worry about the order in which we query the intervals, only that we query all the relevant ones. Thus, we can turn our DFS-like recursive implementation into a BFS-like version using a queue. This will get you the full 100. There is also an O(log^2 N)-per-query approach, but it is more complicated from the setter's point of view.

Problems to practice Persistence:

Common mistake: Bugs in subtasks 1 and 2 can arise from an incorrect implementation of QuickSort.
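The queue-based rewrite can be sketched as follows. This is an illustrative Python version showing the control flow only: for brevity, a naive O(N) scan stands in for the persistent-tree rank query, so it does not achieve the target complexity.

```python
from collections import deque

def count_comparisons_bfs(perm):
    """Non-recursive interval processing: each queue entry is a value interval
    [L, R]; the sorted() scan below is a placeholder for the O(log N)
    persistent-segment-tree rank query."""
    n = len(perm)
    pos = {v: i for i, v in enumerate(perm)}   # 0-based position of each value
    comparisons = 0
    q = deque([(1, n)])
    while q:
        L, R = q.popleft()
        m = R - L + 1
        if m <= 1:
            continue
        comparisons += m - 1                   # every non-pivot element vs. pivot
        k = (m + 1) // 2                       # the (R-L+2)/2-th, i.e. middle, element
        # rank query: the value in [L, R] whose position is the k-th smallest
        positions = sorted(pos[v] for v in range(L, R + 1))
        pivot = perm[positions[k - 1]]
        q.append((L, pivot - 1))
        q.append((pivot + 1, R))
    return comparisons
```

Since the intervals are independent, the queue may hand them out in any order; a stack would work equally well, which is another way to see why the recursion was never essential.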
The bug is in the indices used in the last line of the code. To avoid this kind of bug, it is best to keep separate variables for the different index domains, and to avoid expressions like "L+i" entirely.

Setter's Solution: The solution to subtasks 1 and 2 can be found here.

Tester's Solution: Can be found here.

asked 25 Aug '13, 14:31

I used an entirely different approach, which may sound really interesting to the rest of you. As you may already know, the general idea of the task is to quickly recover the element in the median position of some array A', formed by taking only the numbers in an interval [l, r] from the initial array A. For example, if A = [4,3,5,1,2] and [l, r] = [2,4], then A' = [4,3,2] and the element in the median position is the second element, 3. Or, if [l, r] = [1,4], then A' = [4,3,1,2]; again, the element in the median position is 3.

So, how do we do this? When I have an array A' described by [l, r], say [l, r] = [1,5] and A' = [4,5,3,2,1], I make it a doubly linked list, and I always keep a pointer to the median element. When I find the median, here 3, I need to split the list into two: one described by [1,2] and another described by [4,5]. I take the shorter of these two lists and ignore it for now. I make the longer of the two lists by starting from the original one (described by [1,5]) and removing elements; element removal is O(1). What do we do with all the ignored shorter lists? We put them in a queue, every time building the linked list from scratch. Because we always build a new list from scratch only from the shorter half, the overall complexity ends up being O(n log n); interestingly, without using complex data structures or even recursion. I am aware that my explanation is not very clear, but as you will see, the code is quite simple. It solves the largest test case in 0.16 seconds.

Edit: I just found out that using a different approach for sorting changes the recurrence relation, and then the solution ends up being O(n). Very good! We have an optimal solution!

answered 26 Aug '13, 06:26
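The O(n log n) bound above rests on the classic small-to-large argument: an element is rebuilt only when it falls into the shorter half, whose size is at most half of the parent list's, so each element is rebuilt at most log2(n) times. The small Python simulation below (my own accounting sketch, not the linked-list structure itself) totals that rebuild work over the whole recursion.

```python
def rebuild_work(perm):
    """Total number of elements placed into 'shorter half' lists over the whole
    quicksort split tree; this models the rebuild cost of the linked-list
    approach, since the longer half is obtained by O(1)-per-element removals."""
    pos = {v: i for i, v in enumerate(perm)}
    work = 0
    stack = [(1, len(perm))]
    while stack:
        L, R = stack.pop()
        m = R - L + 1
        if m <= 1:
            continue
        positions = sorted(pos[v] for v in range(L, R + 1))
        pivot = perm[positions[(m + 1) // 2 - 1]]   # middle element by position
        left_len, right_len = pivot - L, R - pivot
        work += min(left_len, right_len)            # only the shorter half is rebuilt
        stack.append((L, pivot - 1))
        stack.append((pivot + 1, R))
    return work
```

Each element contributes to `work` at most log2(n) times, so the total is bounded by n*log2(n) regardless of how unbalanced the splits are.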
I only partially got your explanation. Can you tell me what you mean by building from scratch, and what complexity it takes? As far as I understand, deletion can take O(n) in the worst case, and building from scratch can also take O(n), but surely that's not the case, otherwise it wouldn't have got AC. A little more explanation would be great. :)
(31 Aug '13, 23:04)

I used the range selection data structure mentioned in this paper: http://users-cs.au.dk/gerth/papers/isaac09median.pdf (subsection 2.1). Range selection was used to find the position with rank (R-L+2)/2 among the positions corresponding to the interval of numbers [L, R]. In essence, the data structure is a segment tree with O(N log N) memory. A standard implementation takes O(log^2 N) per query (and, in fact, it only got me 86 points), but it can be improved to O(log N) by using fractional cascading (this way I got 100 points).

answered 25 Aug '13, 16:21
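For comparison with the persistent-tree approach, here is a sketch of a standard O(log^2 N)-per-query range selection structure of this kind, written as a merge-sort tree in Python (the names are my own, and this is a simplified stand-in for the paper's structure, not its exact construction). Each node over a range of positions stores the sorted values of that range; a query descends from the root, counting the values in [L, R] that lie in the left child with one binary search per level.

```python
from bisect import bisect_left, bisect_right

def build(perm):
    """Merge-sort tree over positions: tree[node] holds the sorted values of
    that node's position range. O(N log N) memory."""
    n = len(perm)
    size = 1
    while size < n:
        size *= 2
    tree = [[] for _ in range(2 * size)]
    for i, v in enumerate(perm):
        tree[size + i] = [v]
    for node in range(size - 1, 0, -1):
        tree[node] = sorted(tree[2 * node] + tree[2 * node + 1])
    return tree, size

def kth_position(tree, size, L, R, k):
    """1-based position of the k-th smallest position whose value lies in
    [L, R]: one binary search per level, O(log^2 N) per query. Fractional
    cascading removes the inner binary search, giving O(log N)."""
    node, lo, hi = 1, 0, size - 1
    while lo < hi:
        left = tree[2 * node]
        cnt = bisect_right(left, R) - bisect_left(left, L)   # matches in left child
        mid = (lo + hi) // 2
        if k <= cnt:
            node, hi = 2 * node, mid
        else:
            k -= cnt
            node, lo = 2 * node + 1, mid + 1
    return lo + 1
```

With the permutation [4, 3, 5, 1, 2] and the query [L, R] = [2, 4], k = 2, the structure returns position 2, matching the median example discussed elsewhere in this thread.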
Some O(log^2 N)-per-query solutions for range selection can pass if the constant factor is low; you can take a look at one of my solutions. Its key idea is very well known. However, there are many more solutions than I could have imagined; that's why I like this problem :)
(25 Aug '13, 17:09)
@xcwgf666: Yes, indeed, it seems that there are many ways to successfully approach this problem, which is very nice. Which is your O(log^2 N) solution that scores 100 points? Is it one of the solutions linked in the editorial, or did you submit it in the practice section? (so I can find it there)
(26 Aug '13, 02:08)
@mugurelionut, there is a link in the editorial.
(26 Aug '13, 03:06)
Sir Mugurel, I went through the paper but I am not able to understand it properly. It would be very helpful if you could show the structure of the BST with a small example. I would be very thankful to you.
(15 Oct '13, 12:15)

Can someone please explain how to solve DQUERY using a persistent segment tree? I am unable to find any way to count the number of distinct integers in a particular range without going right down to the leaves and there's no point in doing that. answered 10 Feb '15, 00:06

How can we solve this with a persistent tree? http://www.spoj.com/problems/DQUERY/ answered 18 Feb '16, 16:34


I think the tests are weak; for example, this solution will get TLE on the test:

N
N-1 N-3 N-5 ... 1 2 4 6 ... N-2 N

where N = 500000.