COOLCHEF EDITORIAL using binary search

https://www.codechef.com/problems/COOLCHEF

Prerequisites:
binary search, permutations with repetition

There is no need to use square root decomposition as in the official editorial. I solved this problem (for 100 points) with binary search, with a method that even a beginner can understand.

1. Maintain a dictionary s with s[i] an array of the indexes of the elements in A that are equal to i.
Example: A = [1, 2, 3, 3, 1] gives s = {1: [1, 5], 2: [2], 3: [3, 4]}

Remove every key from s whose s[key] has length 1.

2. Maintain an array fact with fact[i] = factorial(i),
e.g. fact = [1, 1, 2, 6, 24, 120, …]
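The two preprocessing steps can be sketched in Python like this (a minimal sketch; the modulus 10**9 + 7 and the helper name `preprocess` are my own assumptions, not taken from the linked solution):

```python
from collections import defaultdict

MOD = 10**9 + 7  # assumed modulus; use whatever the problem statement specifies

def preprocess(A, max_n):
    # s[v] = sorted list of 1-based positions where value v occurs in A
    s = defaultdict(list)
    for i, v in enumerate(A, start=1):
        s[v].append(i)
    # drop values that occur only once: they never contribute a factor
    s = {v: pos for v, pos in s.items() if len(pos) > 1}
    # fact[i] = i! mod MOD
    fact = [1] * (max_n + 1)
    for i in range(1, max_n + 1):
        fact[i] = fact[i - 1] * i % MOD
    return s, fact
```

For A = [1, 2, 3, 3, 1] this yields s = {1: [1, 5], 3: [3, 4]} (the key 2 is dropped) and fact = [1, 1, 2, 6, 24, 120].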

Start Iterating in the queries now

```
for l, r in queries:
    for key in s:
        let ind_1 = number of elements in s[key] that are less than l
                    (find it using binary search)
        let ind_2 = number of elements in s[key] that are less than or
                    equal to r (binary search again)
        then the frequency of key inside [l, r] is ind_2 - ind_1
```
Done!! Wait, not so fast: you won't get 100 with this.

Now it's time for some optimizations in the code.

When you iterate over the keys in s, add 2 conditions at the beginning.

```
if s[key][0] > r:
    continue
if s[key][-1] < l:
    continue
```

As factorials can be huge numbers, don't forget modular arithmetic.
Done 100 points!!

https://www.codechef.com/viewsolution/24724216 (the only Python solution that got 100)

Time Complexity:

```
complexity = 0
for i in s:
    complexity = complexity + log2(|s[i]|)
```

or, equivalently,

```
              n
complexity =  Σ  log2(|s[key_i]|)      (key_1 … key_n are the keys of s)
             i=1
```

so after summation the complexity will be around log2(n) per query

Hope you liked the editorial!!! It's my first one.

5 Likes

Just one single question.
Editorials should have the time complexity mentioned.
What is your time complexity?
O(number_of_distinct_values * log2(n)) per query??

So this shouldn't pass in the given time limit, right?
In the worst case it fails?
It's maybe because the test cases are weak.
Are you iterating over all distinct values present in the array??
If yes, then it's O(n*q) in the worst case

2 Likes

You said it right. Weak test cases.

2 Likes

Your solution is O(n log n) per query.

Simply iterating over the range from L to R and counting frequencies will answer each query in O(n).
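That brute force can be sketched as follows (a hypothetical illustration of the commenter's idea, again assuming the usual 10**9 + 7 prime modulus):

```python
from collections import Counter

MOD = 10**9 + 7  # assumed prime modulus

def brute_query(A, l, r):
    # O(r - l + 1) per query: count frequencies in the slice directly
    cnt = Counter(A[l - 1:r])            # l, r are 1-based and inclusive
    ans = 1
    for i in range(2, r - l + 2):        # (r - l + 1)! mod MOD
        ans = ans * i % MOD
    for c in cnt.values():
        f = 1
        for i in range(2, c + 1):        # c! mod MOD
            f = f * i % MOD
        # divide by c! via modular inverse (MOD is prime)
        ans = ans * pow(f, MOD - 2, MOD) % MOD
    return ans
```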

In the OP's solution you must also be iterating over the elements, so why not count their frequencies and get the answer?

Edit: Simply iterating over L to R gets TLE (in C++).
Oh, I get it now. Since you are traversing only the repeated numbers, you reduce your work by at least half (but it still grows by a factor of log n).

The problem setters probably didn't think of this brute-force approach.

a small typo: "no need to use root decomposition".

1 Like

No no, there is a need if you want to solve it with good test cases.

2 Likes

I know, but think about it from his point of view!!

1 Like

You got lucky, sir. I regret not using the cheap trick.
Lesson learned: "Always use cheap tricks on CodeChef"

3 Likes

You did pull it off… you turned out to be really quick.

1 Like

I've been quick at things like these since childhood xd

1 Like

@anon55659401 blocked me on FB
Why bro ??

I've deactivated FB, not blocked you lol; who could ever block a great person like you

1 Like

Why??
…

It is not n log n, it is way less; check the post now.

let aa be the frequency dictionary
the sum of the values of aa is n

```
              n
complexity =  Σ  log2(aa[i])
             i=1
```

I had written the wrong complexity; I have edited the post.

You are using too much notation.
Are you checking all distinct elements of the array in each query or not???
Yes or no.

Meaning if given array is
1 1 1 2 3 4 5
Then you will check 1 2 3 4 5

for each distinct element the complexity is log2(frequency of that element in the array)

For your example, the complexity is log2(3) + log2(1) + log2(1) + log2(1) + log2(1)