 # CDZ14E - Editorial

### PROBLEM LINKS:

Practice

Contest

Author: sikander_nsit

Tester: sikander_nsit

Editorialist: sikander_nsit

### DIFFICULTY:

MEDIUM

### PROBLEM:

Given an array, answer queries that each ask for the sum of the k smallest elements in a range [L, R].

### EXPLANATION:

A brute-force solution will not pass within the given constraints, so we use a segment tree.

Each node of the segment tree stores two arrays: the elements of the node's range in sorted order, and the corresponding cumulative (prefix) sums.
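This node layout can be sketched as follows. It is a minimal illustration, not the author's code; the names `build`, `sorted_vals`, and `prefix` are ours.

```python
from itertools import accumulate

def build(arr):
    """Build a segment tree where every node stores its range's elements
    in sorted order plus the prefix sums of that sorted list."""
    n = len(arr)
    size = 1
    while size < n:
        size *= 2
    sorted_vals = [[] for _ in range(2 * size)]
    prefix = [[] for _ in range(2 * size)]
    # leaves: one element each
    for i, v in enumerate(arr):
        sorted_vals[size + i] = [v]
    # internal nodes: merge the two (already sorted) children lists
    for node in range(size - 1, 0, -1):
        l, r = sorted_vals[2 * node], sorted_vals[2 * node + 1]
        merged, i, j = [], 0, 0
        while i < len(l) and j < len(r):
            if l[i] <= r[j]:
                merged.append(l[i]); i += 1
            else:
                merged.append(r[j]); j += 1
        merged.extend(l[i:]); merged.extend(r[j:])
        sorted_vals[node] = merged
    # cumulative sums over each node's sorted list
    for node in range(1, 2 * size):
        prefix[node] = list(accumulate(sorted_vals[node]))
    return size, sorted_vals, prefix
```

Building all levels by merging costs O(n log n) time and memory in total.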

For every query [L, R] we can find a set of S = O(log n) segment-tree nodes whose sorted arrays together cover exactly the range [L, R]. We then select the k smallest elements across these S sorted arrays:
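Collecting those S node lists can be sketched as below, assuming the flat-array layout from the build sketch above (leaves at indices size..2*size-1); the function name is illustrative.

```python
def range_arrays(size, sorted_vals, prefix, lo, hi):
    """Return the O(log n) (sorted list, prefix sums) pairs whose
    disjoint union covers arr[lo..hi] (0-indexed, inclusive)."""
    res = []
    lo += size
    hi += size + 1  # half-open [lo, hi)
    while lo < hi:
        if lo & 1:  # lo is a right child: take it and move right
            res.append((sorted_vals[lo], prefix[lo]))
            lo += 1
        if hi & 1:  # hi is a right boundary: step left and take it
            hi -= 1
            res.append((sorted_vals[hi], prefix[hi]))
        lo >>= 1
        hi >>= 1
    return res
```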

1. Go through all the arrays and truncate any array of length greater than k to length k.
2. Identify the largest remaining array A (if there is a tie, pick any one).
3. Pick the middle element M of A.
4. Binary-search the remaining arrays for M (or for the smallest element greater than M).
5. From the resulting indices, compute B, the total number of elements <= M across all arrays.
6. If k < B, truncate every array at its split point and iterate on the smaller arrays (the bottom halves).
7. If k > B, truncate every array at its split point and iterate on the smaller arrays (the top halves, now searching for element (k - B)); add the cumulative sums up to the split points to the answer.
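The selection above can be verified with a simplified sketch that binary-searches on the value threshold directly rather than repeatedly pivoting on the median of the largest array; it computes the same answer from the same inputs. The function name is ours, not from the author's solution.

```python
from bisect import bisect_left, bisect_right

def sum_smallest_k(arrays, k):
    """Sum of the k smallest elements across several sorted integer lists."""
    lo = min(a[0] for a in arrays if a)
    hi = max(a[-1] for a in arrays if a)
    # find the smallest value v with count(elements <= v) >= k,
    # i.e. v is the k-th smallest value overall
    while lo < hi:
        mid = (lo + hi) // 2
        cnt = sum(bisect_right(a, mid) for a in arrays)
        if cnt >= k:
            hi = mid
        else:
            lo = mid + 1
    v = lo
    total, below = 0, 0
    for a in arrays:
        i = bisect_left(a, v)       # elements strictly below v
        total += sum(a[:i])         # with prefix sums this is O(1) per array
        below += i
    total += (k - below) * v        # fill the remainder with copies of v
    return total
```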

When only one element (or none) remains per array, gather those elements into a single array of size at most S, sort it, and sum its first k elements.
Because each iteration is guaranteed to discard at least half of one array, every S iterations remove at least half of the remaining elements, so there are O(S log k) iterations in total. Each iteration costs O(S log k) for the binary searches, so a query is O(S^2 (log k)^2) in the worst case. In practice the performance is much better than this worst case.

### AUTHOR’S SOLUTION:

Author’s solution can be found here.

One more way of solving this problem is with a persistent segment tree.
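The editorial does not spell this alternative out, so here is a hedged sketch of one common version of the idea: build the tree over coordinate-compressed values, make version i contain the first i elements, and answer a query on the "difference" of two versions, descending left or right by element counts. All names are illustrative.

```python
class Node:
    __slots__ = ("left", "right", "cnt", "tot")
    def __init__(self, left=None, right=None, cnt=0, tot=0):
        self.left, self.right, self.cnt, self.tot = left, right, cnt, tot

# helpers that treat a missing child as an empty subtree
def _cnt(n): return n.cnt if n else 0
def _tot(n): return n.tot if n else 0
def _l(n): return n.left if n else None
def _r(n): return n.right if n else None

def build_versions(arr):
    vals = sorted(set(arr))
    rank = {v: i for i, v in enumerate(vals)}
    m = len(vals)

    def insert(node, lo, hi, pos, val):
        # path copying: new node on the path, untouched children are shared
        nn = Node(_l(node), _r(node), _cnt(node) + 1, _tot(node) + val)
        if lo < hi:
            mid = (lo + hi) // 2
            if pos <= mid:
                nn.left = insert(_l(node), lo, mid, pos, val)
            else:
                nn.right = insert(_r(node), mid + 1, hi, pos, val)
        return nn

    roots = [None]  # roots[i] holds the first i elements of arr
    for v in arr:
        roots.append(insert(roots[-1], 0, m - 1, rank[v], v))
    return roots, vals

def sum_k_smallest(roots, vals, L, R, k):
    """Sum of the k smallest elements of arr[L..R] (0-indexed, inclusive)."""
    def go(a, b, lo, hi, k):
        if k <= 0:
            return 0
        if lo == hi:
            return k * vals[lo]  # leaf may hold duplicates; take k of them
        mid = (lo + hi) // 2
        in_left = _cnt(_l(b)) - _cnt(_l(a))
        if k <= in_left:
            return go(_l(a), _l(b), lo, mid, k)
        left_sum = _tot(_l(b)) - _tot(_l(a))
        return left_sum + go(_r(a), _r(b), mid + 1, hi, k - in_left)
    return go(roots[L], roots[R + 1], 0, len(vals) - 1, k)
```

Each query walks one root-to-leaf path, so it runs in O(log n), faster than the merge-based approach above.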

" so the whole thing is S^2 (log k)^2 in worst case. But the actual performance would be much better than the worst case."

can anyone explain this why this statements holds?

because i followed it and my solution gets accepted in 0.19sec but i am expecting TLE instead…