COOLCHEF EDITORIAL using binary search

Problem Link

binary search, permutations with repetition

There is no need to use square root decomposition as in the official editorial. I solved this problem (for 100 points) with binary search, using a method that even a beginner can understand.

1. Maintain a dictionary s where s[i] is an array of the indexes of elements in A that are equal to i.
Example: A = [1,2,3,3,1] gives s = {1: [1,5], 2: [2], 3: [3,4]}

Remove every key from s whose s[key] has length 1 — a single occurrence contributes only a factor of 1! = 1.

2. Maintain an array fact with fact[i] = factorial(i),
e.g. fact = [1, 1, 2, 6, 24, 120, …]
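The factorial table can be precomputed once under the modulus (I'm assuming 10^9 + 7 and an upper bound of 10^5 on the array length — check the problem statement for the actual values):

```python
MOD = 10**9 + 7   # assumed modulus; verify against the problem statement
N = 10**5         # assumed upper bound on the array length

# fact[i] = i! mod MOD, computed iteratively
fact = [1] * (N + 1)
for i in range(1, N + 1):
    fact[i] = fact[i - 1] * i % MOD
```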

Now start iterating over the queries:

for l, r in queries:
     let answer first be fact[r-l+1]

     iterate over the keys in s:
          let ind_1 be the number of elements in s[key] less than l. Find it
          using binary search.

          let ind_2 be the number of elements in s[key] less than or equal to
          r. Find it using binary search again.

          set answer to answer / fact[ind_2 - ind_1]


Done!! Wait, not so fast — you won’t get 100 with this.

Now it’s time for some optimizations in the code.

When you iterate over the keys in s, add two early-exit checks at the top of the loop, so keys whose occurrences cannot fall inside [l, r] are skipped:

if s[key][0] > r: skip this key
if s[key][-1] < l: skip this key

As factorials can be huge numbers, don’t forget modular arithmetic.
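Putting the query loop together, it could look like the sketch below. Note that under a modulus the division by fact[ind_2 - ind_1] becomes multiplication by a modular inverse (Fermat’s little theorem, assuming the modulus is prime — I'm assuming 10^9 + 7 here, which is not stated in the post):

```python
from bisect import bisect_left, bisect_right

MOD = 10**9 + 7  # assumed prime modulus

def answer_query(l, r, s, fact):
    """Count distinct permutations of A[l..r] (1-indexed, inclusive)."""
    ans = fact[r - l + 1]
    for positions in s.values():
        # early exit: this value cannot occur inside [l, r]
        if positions[0] > r or positions[-1] < l:
            continue
        # occurrences of this value inside [l, r], via two binary searches
        cnt = bisect_right(positions, r) - bisect_left(positions, l)
        if cnt > 1:
            # modular division = multiplication by the inverse of fact[cnt]
            ans = ans * pow(fact[cnt], MOD - 2, MOD) % MOD
    return ans
```

For A = [1,2,3,3,1] and the query (1, 5) this gives 5!/(2!·2!) = 30.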
Done 100 points!!

Solution link: Python solution which got 100 points

Time Complexity:

For each key in s, one pair of binary searches costs log2(|s[key]|), so the per-query cost is

Σ log2(|s[key]|), summed over all keys in s,

which is O(d · log2(n)) in the worst case, where d is the number of distinct repeated values.

Hope you liked the editorial!!! It’s my first one.

@l_returns @vijju123 @anon55659401 @samarthtandon


Just one single question.
Editorials should have time complexity mentioned.
What is your time complexity ?
O(number_of_distinct_values · log2(n)) per query??

adding that…

So this shouldn’t pass in the given time limit, right?
In worst case it fails ?
It’s because maybe test cases are weak
Are you iterating all distinct values present in the array ??
If yes then it’s O(n*q) in worst case


You said it right. Weak test cases.


Your solution is O(nlogn) per query.

Simply iterating over the range from L to R, and counting frequencies will answer queries in O(n)

In the OP’s solution also you must be iterating over the elements. So why not count their frequencies and get the answer?

Edit: Iterating simply over L to R gets TLE (in C++)
Oh, I get it now. Since you are traversing only the repeated numbers, you reduce your work by at least half. (But it still grows by a factor of logn.)
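For reference, the frequency-counting brute force described above could look like this (assuming 1-indexed inclusive queries and a modulus of 10^9 + 7, which I haven't verified against the problem statement):

```python
from math import factorial

def brute_force(A, l, r, mod=10**9 + 7):
    """Answer one query by scanning A[l..r] (1-indexed) and counting frequencies."""
    freq = {}
    for x in A[l - 1:r]:
        freq[x] = freq.get(x, 0) + 1
    # distinct permutations = (r-l+1)! / product(count!)
    ans = factorial(r - l + 1)
    for c in freq.values():
        ans //= factorial(c)   # exact integer division, then reduce mod
    return ans % mod
```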

The problem setters probably didn’t think of this bruteforce approach.

a small typo- “no need to use root decomposition”.


No no, there is a need if you want to solve it against good test cases.


I know, but think from his perspective!! :stuck_out_tongue:


You got lucky Sir, I regret not using the cheap-trick.
Lesson learned:- “Always use cheap-tricks in Codechef”


Hey, I didn’t reply to you :stuck_out_tongue:

Removed it already … you turned out to be really quick.


I’ve been quick at these things since childhood xd :stuck_out_tongue:


@anon55659401 blocked me on FB :frowning:
Why bro ??

I’ve deactivated FB, not blocked you lol — who could ever block a great person like you :stuck_out_tongue:


:joy::joy: Why??

It is not nlogn, it’s way less — check the post now.

Let aa be the frequency dictionary; the sum of the values of aa is n.

The complexity is Σ log(aa[i]).

I had written the wrong complexity; I have edited the post.

You are using too many keywords.
Are you checking all the distinct elements of the array in each query or not???
Yes or no.

Meaning if given array is
1 1 1 2 3 4 5
Then you will check 1 2 3 4 5

For each distinct element the complexity is log2(frequency of that element in the array).

For your example the complexity is log2(3) + log2(1) + log2(1) + log2(1) + log2(1).