INTCONCA - Editorial

@ajit123q do you have something against data structures?

https://www.codechef.com/viewsolution/52376246
https://www.codechef.com/viewsolution/52377329
https://www.codechef.com/viewsolution/52378003
https://www.codechef.com/viewsolution/52379864

Why is using a segment tree to calculate range frequency such a big deal? It would easily pass in 2-3s.
I spent 30 minutes optimizing this nonsense.

Please take the constraints into consideration next time.

3 Likes

I believe you can solve the problem in \sout{O(N)} using two pointers. It’s still O(N log N), but it should be more efficient than the editorial approach

for _ in range(int(input())):
    n, L, R = map(int, input().split())
    a = sorted(map(int, input().split()))

    ls = []  # ls[i]: smallest index l with concat(a[l], a[i]) >= L
    rs = []  # rs[i]: largest index r with concat(a[r], a[i]) <= R

    # Left pointer, Right pointer
    l, r = 0, n-1

    for i in range(n-1, -1, -1):
        while l < n and (not L <= int(str(a[l]) + str(a[i]))):
            l += 1

        ls.append(l)

    for i in range(n):
        while r >= 0 and (not int(str(a[r]) + str(a[i])) <= R):
            r -= 1

        rs.append(r)

    ls.reverse()

    s = 0  # Final answer
    for i in range(n):
        x = rs[i] - ls[i] + 1  # Number of possible j for the ith index
        if x > 0:
            s += x

    print(s)

Solution link: CodeChef: Practical coding for everyone

It’s true that they could’ve accommodated the constraints for a segment tree, but the constraints seem very typical (N \leq 10^5, \sum{N} \leq 10^6), and anything less might have let sub-optimal solutions pass.

It would easily pass in 2s; there was no need to modify the existing constraints. Only a change in the TL was required.

I get the intuition behind the right variable but am unable to get it for the left variable in the solutions provided in the editorial. I mean, in the setter’s solution, why has he added d - 1 to the numerator of the left variable? Can someone help me with that?

2 Likes

Exactly. All 3 solutions have used that same line, without any explanation whatsoever. It’s not even intuitive.

That is done to get the ceiling of the fraction; for right we are taking the floor.
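Concretely, a standalone Python illustration (not the setter’s actual code): for positive integers, adding d - 1 to the numerator before integer division yields the ceiling, while plain integer division yields the floor.

```python
def ceil_div(a, d):
    # ceil(a / d) for positive d: adding d - 1 before the floor
    # division bumps any non-exact quotient up to the next integer
    return (a + d - 1) // d

print(ceil_div(7, 2))  # 4 (7/2 = 3.5, rounded up)
print(ceil_div(6, 2))  # 3 (exact division is unchanged)
print(7 // 2)          # 3 (plain floor, as used for the right bound)
```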

1 Like

But you are sorting the array. That operation alone pushes the time complexity of your solution to at least O(N log N).

1 Like

Thanks! That’s a nice solution. Too bad the time limit was too tight. There are a few relatively easy ways to improve your code, though:

  1. Use fenwick tree instead of segment tree: CodeChef: Practical coding for everyone
  2. Optimize ‘func’ a little bit (divide by 100 instead of 10 on each loop iteration): CodeChef: Practical coding for everyone
  3. Apply both of these optimizations: CodeChef: Practical coding for everyone
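For context, here is a hedged sketch of what the second optimization amounts to, assuming ‘func’ counts the digits of a number by repeated division (the names below are mine, not the submission’s): consuming two digits per iteration with // 100 roughly halves the loop count.

```python
def func_slow(x):
    # One digit per iteration (the pattern being optimized)
    d = 0
    while x > 0:
        x //= 10
        d += 1
    return d

def func_fast(x):
    # Two digits per iteration: about half as many loop trips
    d = 0
    while x >= 100:
        x //= 100
        d += 2
    return d + (2 if x >= 10 else 1)
```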

Fenwick tree and Segment tree have the same time complexity, but Fenwick tree has a better constant factor. Segment tree is usable in more situations because it is more versatile, but Fenwick tree may be preferable if it does exactly what needs to be done.
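For readers unfamiliar with it, here is a minimal Fenwick (binary indexed) tree sketch in Python — the standard structure, not the code from the linked submissions — supporting the point-update / prefix-sum operations that range-frequency counting needs:

```python
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)  # 1-based internal array

    def add(self, i, v):
        # Point update: add v at 1-based position i
        while i <= self.n:
            self.t[i] += v
            i += i & -i  # jump to the next node covering i

    def prefix(self, i):
        # Sum of positions 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i  # strip the lowest set bit
        return s
```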

Running a profiler showed that the function ‘func’ was one of the worst performance bottlenecks in your code. This is not very visible in the codechef tests, but I think the codechef tests are just not trying very hard to construct the slowest possible testcase. The following naive Ruby script generates a random testcase that seems harder to handle than the codechef testcases:

t = 10
puts t
t.times do
  n = 100000
  l = rand(1 .. 10 ** 15)
  r = rand(l .. 10 ** 15)
  puts "#{n} #{l} #{r}"
  puts n.times.map { rand(1 .. 10 ** 7) }.join(" ")
end

On my 2.8 GHz processor, the timings are the following:

  • Your original code: 1.60 seconds
  • Fenwick tree: 1.15 seconds
  • Func optimization: 1.29 seconds
  • Fenwick tree+func optimization together: 0.78 seconds
1 Like

Hi, I just want to discuss: normally on CodeChef, a 1s time limit allows for solutions up to ~10^8 operations. How come even O(N log N) is not passing here? I am not blaming anyone, just trying to figure out the right way to think when seeing a problem.
Without coding and submitting a solution, it’s hard to know whether O(N log N) is allowed or not, because if we think on the stricter side, we might end up unable to solve a problem by over-optimising it even though O(N log N) may have been allowed.

1 Like

Thanks a lot for your response. The idea of dividing by 100 is really cool and yeah I will try learning fenwick eventually. :slightly_smiling_face:

Logically, the answer should be
(no. of pairs <= R) - (no. of pairs <= L - 1).
For getting pairs < L, shouldn’t you use the same upper_bound formula with L - 1?
I mean, the number of pairs < L is the same as the number of pairs <= L - 1.

I think instead of following the above editorials, it’s much more intuitive to count pairs <= R, count pairs <= L - 1, and then subtract the two.
In other words,

right = (r - arr[j]) / power[len];
left = (l - 1 - arr[j]) / power[len];

if (left <= right)
    ret += upper_bound(all(arr), right) - upper_bound(all(arr), left);

This is much easier to understand.
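The same counting idea can be sketched in Python with bisect (my own sketch of the approach above, not the editorial’s code; `count_for_j` and its parameter names are assumptions): for a fixed a[j] with multiplier p = 10^len(a[j]), the pair value is a[i]*p + a[j].

```python
from bisect import bisect_right

def count_for_j(arr_sorted, aj, p, L, R):
    # Count i with L <= arr_sorted[i] * p + aj <= R
    right = (R - aj) // p      # largest a[i] with a[i]*p + aj <= R
    left = (L - 1 - aj) // p   # largest a[i] with a[i]*p + aj <= L - 1
    if left > right:
        return 0
    return bisect_right(arr_sorted, right) - bisect_right(arr_sorted, left)
```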

1 Like

Welp, that’s a facepalm moment, though it should still be more efficient than the editorial approach

We are finding the first index that matches the left criterion, and for the right we are finding the first index that does not fit the right criterion, so our window length becomes rightIdx - leftIdx. If we get a fractional value for L, then since the array elements are integers, we can safely say that any number >= ceil(L) satisfies the condition. Similarly for a fractional value of R: since the elements are integers, the last possible valid value is floor(R).

Hi, I had asked you a question. Would it be possible for you to give your opinion on that?

Time complexities are just approximations. Using Big O notation, you’ll likely not get the exact number of operations. For example,

n = int(input())
c = 0
for i in range(n):
    for j in range(i):
        c += 1

print(c)

The number of operations this code performs is N*(N-1)/2 (still an approximation, since constant operations weren’t counted), but in Big O notation it becomes O(N^2).

Constant coefficients are not included in the final result. Therefore, if your O(N log N) solution has a constant factor of 4, which would be the case for a segment tree, it’s going to be 4 times as slow as an O(N log N) solution with a constant factor of 1.

The reason why fenwick trees are faster than segment trees is that the constant coefficient of a fenwick tree is smaller than that of a segment tree.

But is it possible to guess beforehand what amount of a constant factor can pass the TL?

1 Like

The lower bound I like to use for the number of operations per second is 3*10^8. Of course, it’s not precise, and can go up to 3*10^9 depending on what the operations are.
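As a rough sanity check, that rule of thumb can be turned into a quick back-of-the-envelope estimate (the 3*10^8 budget and the `fits` helper below are just this heuristic written out, not a guarantee):

```python
import math

def fits(n, tests, const, budget=3e8):
    # Estimated total operations for an O(N log N) solution with the
    # given constant factor, run over `tests` testcases
    ops = tests * const * n * math.log2(n)
    return ops, ops <= budget

print(fits(10**5, 10, 1))  # roughly 1.7e7 ops: well within budget
print(fits(10**5, 10, 4))  # roughly 6.6e7 ops: still fits, but 4x slower
```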

I had roughly the same idea as given in the editorial, but instead of fixing A_j, I am fixing A_i. I need help figuring out the test case where it has an issue, or some idea of why my code is giving TLE!

A link to my submission: Solution: 52432740 | CodeChef