SUBLCM - editorial

Kudos to the Tester’s solution… very readable!
Thanks… :slight_smile:

4 Likes

Could somebody explain this to me from the Tester’s solution:

    if (i) {
        best[i] = MAX(best[i-1], x);
        ans = MAX(ans, i - best[i]);
    } else {
        best[i] = -1;
    }
}

And how does it work similarly to

dp[i]=min(dp[i-1]+1,i-Cal(i))

I’m getting WA and have no idea why. Here is my submission:
www.codechef.com/viewsolution/5149059

Can someone help me or provide tricky test cases? I wasn’t able to find any.

1 Like

Easy with the sliding-window algorithm… (or Kadane’s algorithm)

Why can’t we use the normal gcd function in Cal(i)?
I agree it increases the time, but it’s still within the limit, right?

a really nice problem :slight_smile:

Great problem and a great editorial _/\_

solution links broken…

1 Like

You don’t need to compute all the factors; you can work with just the prime factors.

1 Like

I didn’t mean all the factors. The prime factorization of every number could be calculated on the fly instead of being pre-computed. Nevertheless, I learnt something from the question, so no qualms :slight_smile:
PS: There is no Python solution, which clearly gives an idea of how strict the time limit is.

You can compute it on the fly, but the time complexity would be different.

Can you provide a worst case for computing on the fly? Provided I cache the factors as and when I calculate them, that is, I don’t compute the factors of any number more than once.

Yes, there was a file in which all the test cases were identical: n = 10^5 with every number equal to 720720.

10^5 numbers, each equal to 720720

3 Likes

We have to take the smaller one. What if i+1 - Cal(i) < dp[i-1]+1? Then within a window of length dp[i-1]+1 there is a number that is not coprime with the i-th element, so we have to take the valid segment, which is the smaller one.

Getting TLE… CodeChef: Practical coding for everyone
Please, someone help!

Hi, I tried to solve it by binary searching on the answer, but got TLE. Here is my solution. In every iteration of the binary search I check the subarrays of the expected length over the whole array, so I think the time complexity should be something like n*log(n)*alpha, where alpha depends on the length of the expected subarray in each iteration of the binary search. Though I am not sure about it; can anyone help me with this?
@l_returns ?? Apart from binary search, I had the idea of using two pointers to find the product of a segment in O(1) after preprocessing, and the LCM of a range in log(n) with a segment tree, but the alpha factor arises there as well, which makes me doubt that complexity would pass.

I think you are doing O(n*length) with binary search over the length, which makes it at least O(n^2) in the worst case (imagine the whole array having LCM = product).
What I suggest is O(n) for each length, which makes it O(n*\log_2(n)) overall.
Use hashing + a sliding window for each length in the binary search, or use two pointers with this method.

How do you check whether a subarray has LCM = product?
If I am not wrong, the LCM of n numbers equals their product if and only if the GCD of each and every pair of numbers is 1.
Now use hashing and push all the prime factors of all the numbers in the subarray, along with their frequencies (remember: if the number is 4, don’t insert 2 twice; what we want is how many numbers are divisible by 2).
If the frequency of every prime factor is 1, then LCM = product.
You can also use hashing to keep a frequency of frequencies, so you can check in O(1) whether every prime factor’s frequency is 1.
That makes it O(n) for each length, hence O(n*\log_2(n)) overall.
Correct me if I am wrong.

Isn’t your complexity the same as well? In every iteration of the binary search, you are inserting (or removing) elements across the n/k subarrays, i.e. O(n) elements in total, and checking them, right?

Nope, my complexity is O(1) per subarray: as the window slides, I remove one element and add one.
That is O(n) in total over the n/k subarrays of size k for each step of the binary search.
Hence the overall time complexity is O(n*\log_2(n)).