I believe you can solve the problem in \sout{O(N)} using two pointers. It’s still O(NlogN) because of the initial sort, but it should be more efficient than the editorial approach.
for _ in range(int(input())):
    n, L, R = map(int, input().split())
    a = sorted(map(int, input().split()))
    ls = [] # For each index i, the smallest index l such that concat(a[l], a[i]) >= L
    rs = [] # For each index i, the largest index r such that concat(a[r], a[i]) <= R
    # Left pointer, right pointer
    l, r = 0, n - 1
    for i in range(n - 1, -1, -1):
        while l < n and not L <= int(str(a[l]) + str(a[i])):
            l += 1
        ls.append(l)
    for i in range(n):
        while r >= 0 and not int(str(a[r]) + str(a[i])) <= R:
            r -= 1
        rs.append(r)
    ls.reverse()
    s = 0 # Final answer
    for i in range(n):
        x = rs[i] - ls[i] + 1 # Number of possible j for the ith index
        if x > 0:
            s += x
    print(s)
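A side note on the concatenation step above: the string-based concat can also be written purely arithmetically (the identity below is an observation, not part of the original submission), which is what the C++ snippets later in this thread rely on:

x, y = 12, 345
# int(str(x) + str(y)) is the same as shifting x left by the digit count of y.
assert int(str(x) + str(y)) == x * 10 ** len(str(y)) + y == 12345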
It’s true that they could’ve adjusted the constraints to accommodate a segment tree, but the constraints seem very typical (N \leq 10^5, \sum{N} \leq 10^6), and any less might have made it so that sub-optimal solutions pass.
I get the intuition behind the right variable, but I’m unable to get it for the left variable in the solutions provided in the editorial. I mean, in the setter’s solution, why has he added d-1 to the numerator of the left variable? Can someone help me with that?
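If the setter’s left variable is meant to be the smallest Ai with Ai * d + Aj >= L (an assumption about the editorial code, since it isn’t quoted here), then the d-1 is most likely the standard ceiling trick for integer division, sketched below:

# For integers x and d > 0, ceil(x / d) == (x + d - 1) // d:
# any nonzero remainder pushes the quotient up by exactly one.
def ceil_div(x, d):
    return (x + d - 1) // d

assert ceil_div(7, 3) == 3 # ceil(7/3) = 3
assert ceil_div(6, 3) == 2 # exact division is unchanged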
Fenwick tree and Segment tree have the same time complexity, but Fenwick tree has a better constant factor. Segment tree is usable in more situations because it is more versatile, but Fenwick tree may be preferable if it does exactly what needs to be done.
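For concreteness, here is a minimal Fenwick (binary indexed) tree sketch in Python supporting point updates and prefix sums; it’s a generic illustration, not code from this problem:

class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1) # 1-indexed internally

    def update(self, i, delta):
        # Add delta at position i (1-based).
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i # move to the next node whose range covers i

    def query(self, i):
        # Sum of positions 1..i.
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i # strip the lowest set bit
        return s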
Running a profiler showed that the function ‘func’ was one of the worst performance bottlenecks in your code. This is not very visible in the CodeChef tests, but I think the CodeChef tests are just not trying very hard to construct the slowest possible test case. The following naive Ruby script generates a random test case, which seems to be more difficult to handle than the CodeChef test cases:
t = 10
puts t
t.times do
  n = 100000
  l = rand(1 .. 10 ** 15)
  r = rand(l .. 10 ** 15)
  puts "#{n} #{l} #{r}"
  puts n.times.map { rand(1 .. 10 ** 7) }.join(" ")
end
On my 2.8 GHz processor, the timings are the following:
Hi, I just want to discuss… normally on CodeChef, a 1s time limit allows for solutions up to about 10^8 operations. How is it that even NlogN is not passing here? I am not blaming anyone… just trying to figure out the right approach to take on seeing a problem.
Without coding and submitting a solution, it’s hard to know whether NlogN is allowed or not… because if we think on the stricter side, we might end up unable to solve a problem by over-optimising it, even though NlogN may have been allowed in that case.
Logically, the answer should be
(no. of pairs <= R) - (no. of pairs <= L - 1)
For getting pairs < L, shouldn’t you use the same upper_bound formula for L - 1?
I mean the number of pairs < L is the same as the number of pairs <= L - 1.
I think instead of following the above editorials, it’s more intuitive to count pairs <= R, count pairs <= L - 1, and then subtract the two.
In other words,
right = (r - arr[j]) / power[len];
left = (l - 1 - arr[j]) / power[len];
if (left <= right)
    ret += upper_bound(all(arr), right) - upper_bound(all(arr), left);
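As a sanity check of the subtraction idea, a tiny brute force in Python (hedged: it assumes the task counts ordered pairs i != j whose decimal concatenation lies in [L, R], which may not match the exact statement) can be used to cross-check on small inputs:

def count_pairs_at_most(a, x):
    # Brute force: ordered pairs (i, j), i != j, concatenating a[i] before a[j].
    return sum(int(str(a[i]) + str(a[j])) <= x
               for i in range(len(a))
               for j in range(len(a)) if i != j)

def answer(a, L, R):
    # Pairs in [L, R] = (pairs <= R) - (pairs <= L - 1).
    return count_pairs_at_most(a, R) - count_pairs_at_most(a, L - 1)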
We find the first index that matches the left criterion, and for the right we find the first index that does not satisfy the right criterion, so our window length becomes rightIdx - leftIdx. If we get a fractional value for the left bound, the array elements are integers, so we can safely say that any number >= its ceiling satisfies the condition. Similarly, for a fractional value of the right bound, since the elements are integers, the last possible valid value is its floor.
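A small sketch of that window logic using Python’s bisect (the function name and the fractional bounds lo/hi are illustrative assumptions):

from bisect import bisect_left, bisect_right
import math

def count_in_window(sorted_a, lo, hi):
    left_idx = bisect_left(sorted_a, math.ceil(lo)) # first index matching the left criterion
    right_idx = bisect_right(sorted_a, math.floor(hi)) # first index failing the right criterion
    return max(0, right_idx - left_idx) # window length rightIdx - leftIdx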
Time complexities are just approximations. Using Big O notation, you’ll likely not get the exact number of operations. For example,
n = int(input())
c = 0
for i in range(n):
    for j in range(i):
        c += 1
print(c)
The number of operations this code performs is N*(N-1)/2 (which is still an approximation, as constant operations weren’t counted), but in Big O notation the number of operations becomes O(N^2).
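For instance, with n = 1000 the code prints 499500 = 1000*999/2, while the O(N^2) bound alone would only tell you it’s on the order of 10^6.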
Constant coefficients are not included in the final result. Therefore, if your O(NlogN) solution has a constant factor of 4, which would be the case for a segment tree, then it’s going to be 4 times as slow as an NlogN solution with a constant factor of 1.
The reason Fenwick trees are faster than segment trees is that the constant coefficient of a Fenwick tree is smaller than that of a segment tree.
The baseline I like to use for the number of operations per second is 3*10^8. Of course, it’s not precise, and it can go up to around 3*10^9 depending on what the operations are.
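To illustrate how these rules of thumb combine (the numbers below are illustrative assumptions, not measurements):

import math

n = 10 ** 5
base_ops = n * math.log2(n) # ~1.7 * 10^6 abstract NlogN operations for one test
print(base_ops * 4) # ~6.6 * 10^6 after a hypothetical constant factor of 4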
I had roughly the same idea as given in the editorial, but instead of fixing Aj, I am fixing Ai. I need help figuring out the test case where it has an issue, or some idea of why my code is getting TLE!