PROBLEM LINK:
Author: Kirill

DIFFICULTY:
Medium

PREREQUISITES:

PROBLEM:
Given an array of numbers A, support the following two queries: 1) for a given range [L, R], divide each number in A[L..R] by its smallest prime factor; 2) for a given range [L, R], find the number in A[L..R] with the largest smallest prime factor.

QUICK EXPLANATION:
Use a segment tree, additionally keeping track of the segments in which every number equals one.

EXPLANATION:
We are given an array A of size N, and we are supposed to support two kinds of range queries on this array:

1) Update the numbers in a range A[L..R].
2) Query some information about the numbers in the range A[L..R].

Let us first solve a simpler version of the problem, in which we support the following queries, and then come back to the original problem:

1) Update a single number A[x].
2) Query some information about the numbers in the range A[L..R].

This is a classical segment tree problem. We can create a segment tree for the array. As explained here, in the segment tree each node maintains some information about the elements A[k * 2^x, (k + 1) * 2^x). In our case, this information is the largest smallest prime factor of the numbers in the range.

    struct info {
        // The node corresponds to A[low..high]
        int low;
        int high;

        // The largest value of the smallest prime factors
        // of the numbers in A[low..high].
        int value;

        // The children of this node, containing information
        // about the ranges [low, mid] and [mid + 1, high].
        info* left;
        info* right;
    };

When we update a number A[x], the information in all nodes whose range contains the index x must be refreshed. There are at most O(lg N) such nodes, hence the update operation can be done in O(lg N) time. For a query on the range [L, R], we find the set of nodes whose ranges are contained in [L, R] and take the maximum of their values. Once again, there are O(lg N) such nodes, hence the query operation can also be implemented in O(lg N) time.
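The simpler point-update version described above can be sketched as follows. This is a minimal, array-based variant rather than the editorial's pointer-based `info` nodes; the names (`SegTree`, `spf`) are my own, and `spf()` is a naive helper standing in for the sieve-precomputed table a real solution would use.

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Naive smallest-prime-factor helper, for illustration only.
int spf(int x) {
    for (int d = 2; d * d <= x; ++d)
        if (x % d == 0) return d;
    return x;  // x is 1 or prime
}

struct SegTree {
    int n;
    std::vector<int> t;  // t[v] = max smallest-prime-factor over the node's range

    explicit SegTree(const std::vector<int>& a) : n(a.size()), t(4 * a.size()) {
        build(1, 0, n - 1, a);
    }
    void build(int v, int lo, int hi, const std::vector<int>& a) {
        if (lo == hi) { t[v] = spf(a[lo]); return; }
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid, a);
        build(2 * v + 1, mid + 1, hi, a);
        t[v] = std::max(t[2 * v], t[2 * v + 1]);
    }
    // Point update: set A[pos] = val; only O(lg N) nodes contain pos.
    void update(int v, int lo, int hi, int pos, int val) {
        if (lo == hi) { t[v] = spf(val); return; }
        int mid = (lo + hi) / 2;
        if (pos <= mid) update(2 * v, lo, mid, pos, val);
        else update(2 * v + 1, mid + 1, hi, pos, val);
        t[v] = std::max(t[2 * v], t[2 * v + 1]);
    }
    // Range query: max smallest-prime-factor over A[l..r], O(lg N).
    int query(int v, int lo, int hi, int l, int r) {
        if (r < lo || hi < l) return 1;  // identity: spf is always >= 1
        if (l <= lo && hi <= r) return t[v];
        int mid = (lo + hi) / 2;
        return std::max(query(2 * v, lo, mid, l, r),
                        query(2 * v + 1, mid + 1, hi, l, r));
    }
};
```

Returning 1 for disjoint ranges is safe here because the smallest prime factor of any value is at least 1.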
For the implementation details of update and query, please see here.

Now, let us see how we can solve the original problem. One possibility is that, on receiving a range modification query, we modify each element in the range one by one, performing (R - L + 1) single-element update queries. This approach is too slow, as it makes the complexity of a range update query O((R - L + 1) lg N).

Next, we show that many of these single-element update queries can be ignored, so the number of queries actually performed is much lower. Note that if the value of an element A[x] is 1, then an update query on A[x] has no effect on the segment tree. We call such queries degenerate, and we can discard them. Also note that a number M can be written as a product of at most O(lg M) primes. Hence, at most O(lg M) non-degenerate modification queries can be performed on A[x] = M; all later queries will be degenerate. This means that if all numbers in the array are smaller than M, then at most O(N lg M) non-degenerate single-element update queries are performed in total. Each single-element update query takes O(lg N) time, and hence all update queries can be performed in O(N lg M lg N) time.

However, the problem now is how to find the non-degenerate single-element update queries for the range [L, R]. For this purpose, we maintain an additional piece of information in each segment tree node: the number of elements in the range which are larger than one. The following snippet shows how to use this information so that we only perform the necessary single-element update queries, without iterating through all elements in the range. The idea is similar to lazy propagation.

    struct info {
        // The node corresponds to A[low..high]
        int low;
        int high;

        // Number of elements in A[low..high] which are larger than 1.
        int count_non_degenerate;

        // The largest value of the smallest prime factors
        // of the numbers in A[low..high].
        int value;

        // The children of this node, containing information
        // about the ranges [low, mid] and [mid + 1, high].
        info* left;
        info* right;
    };

Now, while performing a range modification query, we can skip every subrange in which all elements are equal to one.

    void update(info* nd, int L, int R) {
        // This node's range is disjoint from [L, R].
        if (nd->low > R || L > nd->high) return;

        // All elements in this range are 1, nothing to do.
        if (nd->count_non_degenerate == 0) return;

        if (nd->low == nd->high) {
            // Perform the single-element update query
            ...
            return;
        }

        update(nd->left, L, R);
        update(nd->right, L, R);

        // Recompute value and count_non_degenerate of nd from its children.
    }

This means all range update queries can be performed in O(N lg M lg N) time in total, while each range query takes O(lg N).

TIME COMPLEXITY:
O(N lg M lg N + Q lg N), where N is the size of the array, M is the bound on the elements of the array, and Q is the number of queries.

AUTHOR'S AND TESTER'S SOLUTIONS:
Author's solution
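Putting the pieces together, here is a compact runnable sketch of the editorial's approach, using array-based nodes instead of pointers; `DivideTree`, `alive`, and `best` are my own names, and `spf()` is again a naive stand-in for a sieve table.

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Naive smallest-prime-factor helper, for illustration only.
int spf(int x) {
    for (int d = 2; d * d <= x; ++d)
        if (x % d == 0) return d;
    return x;
}

struct DivideTree {
    int n;
    std::vector<int> a;      // current values at the leaves
    std::vector<int> best;   // best[v] = max smallest-prime-factor in the range
    std::vector<int> alive;  // alive[v] = #elements > 1 in the range

    explicit DivideTree(const std::vector<int>& init)
        : n(init.size()), a(init),
          best(4 * init.size()), alive(4 * init.size()) {
        build(1, 0, n - 1);
    }
    void build(int v, int lo, int hi) {
        if (lo == hi) {
            best[v] = spf(a[lo]);
            alive[v] = a[lo] > 1;
            return;
        }
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid);
        build(2 * v + 1, mid + 1, hi);
        pull(v);
    }
    void pull(int v) {
        best[v] = std::max(best[2 * v], best[2 * v + 1]);
        alive[v] = alive[2 * v] + alive[2 * v + 1];
    }
    // Divide every element of A[l..r] by its smallest prime factor,
    // skipping subtrees whose elements are all 1 (alive == 0).
    void update(int v, int lo, int hi, int l, int r) {
        if (r < lo || hi < l || alive[v] == 0) return;
        if (lo == hi) {
            a[lo] /= spf(a[lo]);          // the single-element update
            best[v] = spf(a[lo]);
            alive[v] = a[lo] > 1;
            return;
        }
        int mid = (lo + hi) / 2;
        update(2 * v, lo, mid, l, r);
        update(2 * v + 1, mid + 1, hi, l, r);
        pull(v);
    }
    // Max smallest-prime-factor over A[l..r].
    int query(int v, int lo, int hi, int l, int r) {
        if (r < lo || hi < l) return 1;   // spf(1) = 1 is the identity
        if (l <= lo && hi <= r) return best[v];
        int mid = (lo + hi) / 2;
        return std::max(query(2 * v, lo, mid, l, r),
                        query(2 * v + 1, mid + 1, hi, l, r));
    }
};
```

The `alive[v] == 0` check is exactly the `count_non_degenerate` pruning from the editorial: a subtree of all ones is never descended into, which is what caps the total update work at O(N lg M lg N).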
asked 15 Sep '16, 14:56

I think that the test cases were possibly weak, because they allow the following approach to pass (if there were a lot of queries with few updates, I would expect it to fail). First, for each possible input value, compute its smallest prime factor. This can be done using a sieve, for example, and is pretty standard. Then, the crucial observation is that every initial value in the array can be updated at most 19 times. This is true because $A[i] \leq 10^6$, and the number with the most prime divisors in this range is $2^{19}$, so it can be divided by its smallest prime factor $19$ times until it becomes $1$. Notice that as soon as a number becomes $1$, we don't have to care about it at all: an update operation on $1$ doesn't change its value, and moreover it doesn't have any effect on the result of any query operation, because the result is guaranteed to be at least $1$. Based on the above observation, the following solution is possible. Maintain a map $S$ from positions in the array to their values. For example, if $A = [2,3,1,5]$ then $S[1] = 2$, $S[2] = 3$, $S[3] = 1$ and $S[4] = 5$. Then, for each $update(L, R)$, use binary search (lower_bound in C++ can be used) to find the first key in $S$ greater than or equal to $L$, and go through the keys from this position up to at most $R$, dividing each value by its smallest prime divisor. As soon as any of these values becomes $1$, delete it from the map $S$. It follows that each input number can participate in updates at most $19$ times, which is a nice fact. The query operation is performed analogously. For $query(L, R)$, use binary search (lower_bound in C++ can be used) to find the first key in $S$ greater than or equal to $L$. Then go through all keys from the found key up to at most $R$, updating the result with the smallest prime factor of the current number.
So the query operation has time complexity O(K + log |S|), where K is the number of keys of S inside [L, R]; if no 1's get deleted during update operations, a single query can take O(N) time, which is a lot. My submission using exactly the above idea is available here. However, this approach can easily be transformed into a more efficient one using the same observation (any input number participates in at most $19$ updates) by using a segment tree rather than a map, which would probably be almost the same as the solution described in the editorial. answered 15 Sep '16, 15:46
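The map-based approach described above can be sketched as follows. `MapSolver` is my own name, positions are 0-based here, and `spf()` is a naive stand-in for the sieve-precomputed table the answer mentions.

```cpp
#include <cassert>
#include <vector>
#include <map>
#include <algorithm>

// Naive smallest-prime-factor helper, for illustration only.
int spf(int x) {
    for (int d = 2; d * d <= x; ++d)
        if (x % d == 0) return d;
    return x;
}

struct MapSolver {
    std::map<int, int> S;  // position -> value; only values > 1 are kept
    explicit MapSolver(const std::vector<int>& a) {
        for (int i = 0; i < (int)a.size(); ++i)
            if (a[i] > 1) S[i] = a[i];
    }
    // Divide every surviving value in [l, r] by its smallest prime factor.
    void update(int l, int r) {
        auto it = S.lower_bound(l);
        while (it != S.end() && it->first <= r) {
            it->second /= spf(it->second);
            if (it->second == 1) it = S.erase(it);  // never touched again
            else ++it;
        }
    }
    // Max smallest-prime-factor over positions [l, r].
    int query(int l, int r) {
        int best = 1;  // erased positions hold 1, whose spf is 1
        for (auto it = S.lower_bound(l); it != S.end() && it->first <= r; ++it)
            best = std::max(best, spf(it->second));
        return best;
    }
};
```

Erasing positions as soon as they reach 1 is what makes the amortized update cost small: each position is divided at most ~19 times over the whole run.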
My solution is similar to your first map approach: https://www.codechef.com/viewsolution/11406089 The difference is that I am not using a map; I am maintaining an array next[], where next[i] is the index of the next number to the right which is not 1: next[i] = (A[i+1] == 1) ? next[i+1] : i+1. In this way, updating the next pointer is O(1), and we always have a pointer to the next element which is not 1. Yes, after at most 19 divisions a particular element is guaranteed to be 1.
(15 Sep '16, 20:31)
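The next-pointer idea from the comment above can be sketched with DSU-style path compression (my own variant and names; `spf()` is again a naive stand-in for a sieve table): `nxt[i]` lazily resolves to the next index >= i whose value is still greater than 1, so finished positions are skipped in near-constant amortized time.

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Naive smallest-prime-factor helper, for illustration only.
int spf(int x) {
    for (int d = 2; d * d <= x; ++d)
        if (x % d == 0) return d;
    return x;
}

struct SkipSolver {
    std::vector<int> a, nxt;  // nxt[i]: candidate for the next live position after i
    explicit SkipSolver(const std::vector<int>& init) : a(init), nxt(init.size()) {
        for (int i = 0; i < (int)a.size(); ++i) nxt[i] = i + 1;
    }
    // Next index >= i with a[index] > 1 (or a.size() if none), compressing paths.
    int find(int i) {
        if (i >= (int)a.size()) return (int)a.size();
        if (a[i] > 1) return i;
        return nxt[i] = find(nxt[i]);
    }
    // Divide every live value in [l, r] by its smallest prime factor.
    void update(int l, int r) {
        for (int i = find(l); i < (int)a.size() && i <= r; i = find(i + 1))
            a[i] /= spf(a[i]);   // may become 1; find() will skip it from now on
    }
    // Max smallest-prime-factor over positions [l, r].
    int query(int l, int r) {
        int best = 1;            // positions holding 1 contribute spf = 1
        for (int i = find(l); i < (int)a.size() && i <= r; i = find(i + 1))
            best = std::max(best, spf(a[i]));
        return best;
    }
};
```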

I used the following techniques:
And just 2 simple if statements made a big difference: from TLE to 0.14 s. answered 15 Sep '16, 18:49

When I saw the problem for the first time, I was confused by the constraints T = 100 and N = 10^5 without any extra constraint on the sum of N in a single file. After thinking for a long time, I felt something was strange here, so I decided to check the real constraint on the sum of N and realized it is exactly 5*10^5. So what is the reason for not publishing the real limits? answered 15 Sep '16, 20:29

Can someone explain how this solution based on lazy propagation works? ( https://www.codechef.com/viewsolution/11417075 ) answered 15 Sep '16, 15:20
I think he is updating lazily in O(K log K), where K is the size of the lazily marked segment. Also, at most the complexity would be 20 N log(N).
(15 Sep '16, 15:33)

The Practice Question link is pointing to the Contest Page. The tester's solution link needs to be updated as well. Please update the links. answered 15 Sep '16, 15:54

@pkacprzak I also did the same solution. IO optimizations got it to pass. answered 15 Sep '16, 15:58

Well, just to say, one need not maintain the number of ones in the range [L, R] of a node. Look at my solution here to understand more clearly. I have used almost everything mentioned in the editorial, plus the fact that if a segment tree node's value is "1", I need not propagate down, as all elements below it must also be 1, since each node stores the max. answered 15 Sep '16, 16:29

I used sqrt decomposition: simply, if currentBlock.updates > 19 then i += sqrt(n), else i++. Solution: my code. I took each block size to be 500 instead of sqrt(n). answered 15 Sep '16, 17:39
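The sqrt-decomposition idea above can be sketched in a slightly different but equivalent form: instead of counting updates per block, skip a block during a range update once its maximum smallest prime factor is 1 (all elements finished). `SqrtSolver` and the field names are my own; `spf()` is a naive stand-in for a sieve table.

```cpp
#include <cassert>
#include <vector>
#include <cmath>
#include <algorithm>

// Naive smallest-prime-factor helper, for illustration only.
int spf(int x) {
    for (int d = 2; d * d <= x; ++d)
        if (x % d == 0) return d;
    return x;
}

struct SqrtSolver {
    int n, B;                       // array size and block size
    std::vector<int> a, blockMax;   // blockMax[b] = max spf in block b
    explicit SqrtSolver(const std::vector<int>& init)
        : n(init.size()),
          B(std::max(1, (int)std::sqrt((double)init.size()))),
          a(init),
          blockMax((init.size() + B - 1) / B) {
        for (int b = 0; b < (int)blockMax.size(); ++b) rebuild(b);
    }
    void rebuild(int b) {
        int lo = b * B, hi = std::min(n, (b + 1) * B);
        blockMax[b] = 1;
        for (int i = lo; i < hi; ++i)
            blockMax[b] = std::max(blockMax[b], spf(a[i]));
    }
    // Divide every element of [l, r] by its smallest prime factor,
    // skipping blocks that are already all 1's.
    void update(int l, int r) {
        for (int b = l / B; b <= r / B; ++b) {
            if (blockMax[b] == 1) continue;  // whole block finished
            int lo = std::max(l, b * B);
            int hi = std::min(r, std::min(n, (b + 1) * B) - 1);
            for (int i = lo; i <= hi; ++i) a[i] /= spf(a[i]);
            rebuild(b);
        }
    }
    // Max smallest-prime-factor over [l, r], whole blocks in O(1) each.
    int query(int l, int r) {
        int best = 1;
        for (int i = l; i <= r; ) {
            int b = i / B;
            if (i == b * B && (b + 1) * B - 1 <= r) {  // block fully inside
                best = std::max(best, blockMax[b]);
                i += B;
            } else {
                best = std::max(best, spf(a[i]));
                ++i;
            }
        }
        return best;
    }
};
```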

Guys, I got an RTE for this problem despite it working well on ideone for the sample test cases, as well as on Code::Blocks. Please help me with it. Here's the link to my solution: https://www.codechef.com/viewsolution/11528652 answered 15 Sep '16, 17:48
Increase the size of your segment tree and other arrays accordingly! You are trying to access an element beyond the size of the arrays you have made.
(15 Sep '16, 18:59)
@lohit_97 Then why am I not getting 10 points? My code is sufficient for the first two subtasks.
(15 Sep '16, 19:22)

Each number from 1 to 10^6 has at most 19 prime factors. Hence, each leaf in the tree can only be updated at most 19 times (bounded by the case of the smallest prime factor, 2). So the worst case for the combined cost of all leaf updates is on the order of 19 * N steps, which is still O(N). All queries can be answered in O(log n) because we are using a segment tree. answered 15 Sep '16, 19:52

I solved this using a basic segment tree. Notice that each number from 1 to 10^6 has at most log2(10^6) ~ 19 prime factors (counting repetition). Hence, each leaf in the tree can only be updated at most 19 times (i.e. bounded by the case of the smallest prime factor, 2). So the worst case for the combined complexity of all updates is on the order of 19 * N steps, which is still O(N). To ensure this, we must of course avoid propagating down during an update when a subtree no longer needs updating (i.e. when the max factor for the subtree is already 1). All queries can be solved in O(log n) because we are using a segment tree. To update a leaf, you can preprocess the prime factorization of each number from 1 to 10^6 using a sieve (preprocessing took 0.7-0.8 s on my laptop). answered 15 Sep '16, 19:57
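The sieve preprocessing mentioned above can be sketched like this: `spf[x]` receives the smallest prime factor of `x` for all `x` up to `M`, because each prime only marks the multiples no smaller prime has reached first. The function name is my own.

```cpp
#include <cassert>
#include <vector>

// spf[x] = smallest prime factor of x, for all x in [1, M].
std::vector<int> smallest_prime_factor(int M) {
    std::vector<int> spf(M + 1, 0);
    if (M >= 1) spf[1] = 1;  // convention: spf(1) = 1, the identity here
    for (int i = 2; i <= M; ++i)
        if (spf[i] == 0)                          // i is prime
            for (long long j = i; j <= M; j += i)
                if (spf[j] == 0) spf[j] = i;      // smallest prime wins
    return spf;
}
```

Dividing x repeatedly by spf[x] then yields its full prime factorization, e.g. 12 -> 6 -> 3 -> 1 lists the factors 2, 2, 3.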

What about T test cases? Isn't the complexity then O(T * N * log(MAX) * log(N)), which shouldn't pass? answered 15 Sep '16, 20:00

Can someone please tell me why I am getting TLE? https://www.codechef.com/viewsolution/11535913 answered 16 Sep '16, 06:55

I made a test case for this problem; below is the generator. It's pretty self-explanatory how I go about generating the test case. In the question it was given that the sum of M over all test cases is <= 10^6, so I have kept T = 10 and M = 10^5. All the numbers, i.e. the A[i]'s, are 10^6, which is again within the constraints of the problem. But this seems to give TLE for my own code, which got AC during the contest. Here is the link to my AC solution. It takes about 7 seconds to run on my Mac.
answered 19 Sep '16, 21:33

Can someone please explain why my solution is giving TLE on 2 cases? I have followed exactly the same method explained in the tutorial. answered 12 Oct '16, 00:42

Can somebody please prove how the time complexity of the algorithm got reduced from the naive bound? I have an intuition about the idea that we won't go into the subtree rooted at a node whose range contains only 1's. answered 08 Nov, 12:00

@shubamg First, see the point that an element can be divided only log(m) times, because after that it becomes 1. So how I approached this problem: for each range I calculate the largest smallest prime factor and maintain it in a segment tree. Then I perform point updates on all points in the query ranges. However, I don't go inside segments whose largest smallest prime factor is 1, as no operation affects a 1. Now notice that each element can be updated at most log m times, and each point update touches log n nodes, so all updates take O(N log(m) log(n)). Each query is log(n), hence the extra Q log(n) factor. So it is O(N log(m) log(n) + Q log(n)). answered 09 Nov, 10:38

I would like to give a proof of the time complexity of the algorithm stated in the editorial. I had a doubt earlier that there may be updates on ranges which have all their members equal to $1$. These will not decrease the value of any array element but still consume extra time. We can use amortised analysis to prove the time complexity of the algorithm nonetheless. Let the processing at any node of the segment tree take at most $\alpha$ time. Consider the potential function $\phi = 2\alpha\sum_{n \in \textrm{node}} \sum_{i=l(n)}^{r(n)} f(A[i])$. Here $n$ ranges over the nodes of the segment tree, where node $n$ covers the range $[l(n),r(n)]$, and $f(x)$ is the number of prime factors of $x$ counted with multiplicity, e.g. $f(2^n)=n$, $f(6)=2$. Each non-degenerate single-element update decreases $\phi$ by $2\alpha \lg N$ (the index appears in $\lg N$ nodes), which pays for the nodes visited on the way down, including subtrees that turn out to consist only of ones. Using this $\phi$, we can prove the desired time complexity of the algorithm. answered 14 Nov, 23:22
