### PROBLEM LINK

**Author:** Denis Anischenko

**Tester:** Alexey Zayakin

**Editorialist:** Jakub Safin

### DIFFICULTY

MEDIUM-HARD

### PREREQUISITES

segment trees

### PROBLEM

You’re given an array A of N integers and Q queries of two types:

- add x to A\lbrack i\rbrack
- for given i,L,R, compute the number of indices j such that there’s a ray from \left(i-\frac12,k-\frac12\right) to \left(\infty,k-\frac12\right) for some L \le k \le R that intersects the line segment from (j,0) to (j,A\lbrack j\rbrack) and doesn’t intersect any such segment for any smaller j

Answer the queries of the second type.

### QUICK EXPLANATION

Reduce the problem to counting numbers greater than any previous number, stopping at the first number \ge R. Use a segment tree maintaining max(); in each node of the same tree, also store the answer to a query of type 2 restricted to that node's segment. Use that to answer queries in O(\log^2 N).

### EXPLANATION

Geometry is bad. Let’s reformulate this problem without geometry.

We’re looking for indices j \ge i. If the ray at height k-1/2 intersects some segment with height A\lbrack j\rbrack, then (all elements of A are integers) A\lbrack j\rbrack \ge k. That means we need to count indices j \ge i such that A\lbrack j\rbrack \ge L, A\lbrack j\rbrack > A\lbrack l\rbrack for all i \le l < j (otherwise every ray would intersect an earlier, taller segment first) and \mathrm{max}(A\lbrack l\rbrack) over i \le l < j is less than R (otherwise all rays would already have stopped at a segment of height \ge R).

The third condition can be handled easily. If we find the smallest h \ge i such that A_h \ge R, then we can ignore segments A_j for j > h.

The first condition can be handled easily too - we can find the smallest index s \ge i such that A_s \ge L, then we can ignore segments A_j for j < s (because they’re shorter than L) and count segments in the range \lbrack s,h\rbrack that satisfy the second condition; thanks to A_s \ge L, they must satisfy the first condition too (the third condition is automatically satisfied by j \le h).
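As a sanity check, the reduced query can be answered by a direct linear scan. A minimal brute-force sketch (function and variable names are ours, not from the reference solutions):

```cpp
#include <vector>

// Brute-force check of the reduced query: starting at index i, count indices j
// with A[j] >= L that are strictly greater than every element seen so far,
// stopping as soon as an element >= R is counted. O(N) per query.
int bruteQuery(const std::vector<int>& A, int i, int L, int R) {
    int best = L - 1;  // tallest segment a surviving ray has passed so far
    int count = 0;
    for (int j = i; j < (int)A.size(); ++j) {
        if (A[j] > best) {         // visible over everything since i, and >= L
            ++count;
            best = A[j];
            if (A[j] >= R) break;  // all remaining rays are blocked
        }
    }
    return count;
}
```

Starting `best` at L-1 merges the first two conditions: since the values are integers, A\lbrack j\rbrack > L-1 is the same as A\lbrack j\rbrack \ge L.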

We’re left with the problem of counting indices j \in \lbrack s,h\rbrack such that A_k < A_j for all s \le k < j. It turns out we can do that in the same way as finding s, and even at the same time: using a segment tree.

In each node of this segment tree, we should store the maximum in the node’s segment and the answer to the problem - number of elements which are larger than any earlier element in that segment.

## Finding h

The “maximum” part of our segment tree will suffice for this. How do we find the smallest j \ge i such that A_j \ge R (or j=N if there’s no such index) within the segment of some node n? If n contains just 1 element, this is trivial; otherwise, it has 2 sons.

If the maximum is < R or all indices j \ge i are outside of this node’s segment, the answer is definitely “nothing”. Otherwise, let’s ask the left son the same question and look at the answer.

If it returns some index, we don’t need to bother with the right son – we’ve got what we need. If it returns “nothing”, we have to ask the right son.

The time complexity of computing this is O(\log{N}) even if it might not seem so at first. Once we’ve asked a left son that contains some indices \ge i, got “nothing” and branched to the right son, no further branching is needed: from then on, we only move to the left son if its maximum is \ge R (otherwise we move to the right son instantly), so the answer must lie in whichever son we descend into. It can be viewed as finding the O(\log{N}) segments which cover [i,N), then picking the first one with maximum \ge R and binary-searching inside it.
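A sketch of this descent on a plain segment tree over maxima (all names are illustrative and may differ from the reference solutions):

```cpp
#include <algorithm>
#include <vector>

// Segment tree over maxima; firstAtLeast(i, R) returns the smallest index
// j >= i with A[j] >= R, or n if there is no such index.
struct MaxTree {
    int n;
    std::vector<int> mx;
    MaxTree(const std::vector<int>& A) : n((int)A.size()), mx(4 * n) {
        build(1, 0, n, A);
    }
    void build(int v, int lo, int hi, const std::vector<int>& A) {
        if (hi - lo == 1) { mx[v] = A[lo]; return; }
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid, A);
        build(2 * v + 1, mid, hi, A);
        mx[v] = std::max(mx[2 * v], mx[2 * v + 1]);
    }
    // first index >= i with value >= R inside node v covering [lo, hi)
    int firstAtLeast(int v, int lo, int hi, int i, int R) {
        if (hi <= i || mx[v] < R) return n;  // "nothing"
        if (hi - lo == 1) return lo;         // leaf: its value is >= R
        int mid = (lo + hi) / 2;
        int res = firstAtLeast(2 * v, lo, mid, i, R);  // ask the left son first
        if (res < n) return res;
        return firstAtLeast(2 * v + 1, mid, hi, i, R);
    }
    int firstAtLeast(int i, int R) { return firstAtLeast(1, 0, n, i, R); }
};
```

The `mx[v] < R` check is what makes failed branches O(1): a fully covered son whose maximum is too small returns “nothing” immediately.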

If we do this, we’ve got h, can run the rest on \lbrack i,h\rbrack and forget about R forever.

## Answering a subquery

Let’s say that we need to know the answer for the whole segment of some node n with a given lower bound L. The stored answer for this segment (with L=-\infty) is a. If n contains just 1 element, this is trivial; otherwise, it has 2 sons whose stored answers are a_l,a_r. Let’s look at the maximum of the left son.

If it’s \le L, we can safely ignore this son and simply recurse to the right son.

Otherwise, we can recurse to the left son and get its part of the answer; since its maximum is > L, the right son’s part doesn’t depend on L (only on the left son’s maximum), so we know what it is: a-a_l (watch out, not a_r).

We have straightforward non-branching recursion here, so the time complexity is O(\log{N}).
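The stored answers and the subquery can be sketched as follows; this is a static version (no updates) with illustrative names, using the convention that an element must be strictly greater than L to count:

```cpp
#include <algorithm>
#include <vector>

// mx[v] is the maximum in node v's segment; ans[v] is the stored answer with
// L = -infinity, i.e. the number of prefix maxima inside the segment.
struct VisTree {
    int n;
    std::vector<int> mx, ans;
    VisTree(const std::vector<int>& A) : n((int)A.size()), mx(4 * n), ans(4 * n) {
        build(1, 0, n, A);
    }
    // calc: answer for node v's whole segment when elements must exceed L
    int calc(int v, int lo, int hi, int L) {
        if (mx[v] <= L) return 0;
        if (hi - lo == 1) return 1;  // a single element, and it is > L
        int mid = (lo + hi) / 2;
        if (mx[2 * v] <= L) return calc(2 * v + 1, mid, hi, L);
        // left max > L, so the right part doesn't depend on L at all;
        // it equals a - a_l (note: not a_r), taken from the stored answers
        return calc(2 * v, lo, mid, L) + (ans[v] - ans[2 * v]);
    }
    void build(int v, int lo, int hi, const std::vector<int>& A) {
        if (hi - lo == 1) { mx[v] = A[lo]; ans[v] = 1; return; }
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid, A);
        build(2 * v + 1, mid, hi, A);
        mx[v] = std::max(mx[2 * v], mx[2 * v + 1]);
        // right son's contribution is the subquery with L = left son's maximum
        ans[v] = ans[2 * v] + calc(2 * v + 1, mid, hi, mx[2 * v]);
    }
};
```

The pull-up in `build` is the same subquery again, which is why building takes O(N\log{N}) rather than O(N).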

## Completing the algorithm

How to answer a query of type 2 with given [z,k) (initially z=i,k=h+1) and L if we’re in some node n? If n covers exactly the segment \lbrack i,h\rbrack, that’s just running the above mentioned subquery. Cases where n contains one element or doesn’t cover anything from \lbrack i,h\rbrack are trivial. Otherwise, let’s recurse into the sons of the current node. A son that covers a segment S will get [z,k) modified to [z,k)\cap S, the right son will get L increased to \mathrm{max}(L,m_l) where m_l is the maximum from the left son, but that’s it. Again, this visits O(\log{N}) nodes which cover the original segment \lbrack i,h\rbrack.

We still need to deal with type 1 queries – updating the tree. You should know by now how updating maxima works. To recompute the answer stored in a node (and again, only O(\log{N}) of them are visited and changed), we ask the subquery on its segment with L=-1, using the already updated data of its sons. Note that we’ve got a separate method dealing with the hard stuff and everything else is just standard segment tree operations.

The total time complexity is O(N\log{N}+Q\log^2{N}) – we need to compute those subqueries every time we visit a node; the tree can be built by visiting each node just once and for each query, we visit O(\log{N}) nodes. As usual, the segment tree takes O(N) memory.

### ALTERNATIVE SLOWER SOLUTION

This type of problem can often be solved using sqrt decomposition in the following general form: split the queries into buckets of at most K queries; before processing each bucket, split the array at the indices which are going to change and do some preprocessing; use that to answer the bucket’s queries; update the array after processing the bucket.

In this case, we’ll keep the indices which change (appear in queries of the first type) in the current bucket as completely separate. In order to process queries of the second type, we should find a way to speed up the process "start with A_i, keep jumping to the first larger number to the right, increment the answer if it’s in the range \lbrack L,R\rbrack", since we can’t afford to just make all these jumps.

For each chunk of the array A that doesn’t change, we can compute all its suffix and prefix maxima. When solving a query of the second type, we go through all O(K) changing elements and unchanging chunks (subarrays); using the prefix maxima, we know where we encounter the first number \ge R and can compute the exact index by binary search. Using suffix maxima, we can compute the maximum element we must have jumped to before reaching each of them (starting with that equal to L-1), which lets us decide whether each changing element should increment the answer to this query. We can’t do that directly for the unchanging chunks, but for each of them except possibly the first and last (we’ll deal with those separately), we know the current maximum L when entering it, and we should add to the answer for this query all elements we jump to inside the chunk when entering it with maximum L.

We can’t answer these subqueries directly, but why not offline, at once for each chunk C? If we’re starting to the left of this chunk, we can discard any element C_i such that C_j \ge C_i for some j < i. This can be done in a single pass in linear time and gives us an increasing sequence. Then, we have several queries Q_j, each of which needs the answer for some L_j. We need the queries sorted by L_j; then we can use a 2-pointer algorithm where we’re going through the elements of that subsequence in increasing order and moving the second pointer to the query with the largest L_j that leads to jumping on the current element.
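A sketch of this per-chunk subquery, under the assumption that the thresholds L_j already arrive sorted (as the scheme described next maintains them); names are ours:

```cpp
#include <vector>

// chunk: an unchanging subarray; sortedL: thresholds of the queries hitting
// this chunk, in increasing order. res[j] = number of elements we jump to
// inside the chunk when entering it with current maximum sortedL[j], i.e. the
// number of "skyline" values (left-to-right maxima) strictly greater than L_j.
std::vector<int> chunkAnswers(const std::vector<int>& chunk,
                              const std::vector<int>& sortedL) {
    std::vector<int> skyline;  // discard elements dominated by an earlier one
    for (int v : chunk)
        if (skyline.empty() || v > skyline.back()) skyline.push_back(v);
    std::vector<int> res(sortedL.size());
    int p = (int)skyline.size();  // points at the first skyline value > L
    // walk L from largest to smallest, moving p left as more values qualify
    for (int j = (int)sortedL.size() - 1; j >= 0; --j) {
        while (p > 0 && skyline[p - 1] > sortedL[j]) --p;
        res[j] = (int)skyline.size() - p;
    }
    return res;
}
```

Both passes are linear in the chunk size plus the number of queries, matching the O(K)-per-bucket bound.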

How do we sort the queries? A standard O(K\log{K}) sort is too slow, we need it in O(K). Fortunately, we don’t need to sort separately for each chunk – we can keep a sorted array of them, add/delete queries when appropriate (each is added or deleted just once) and when updating their L to max (current L, maximum of the current chunk or changing element), that’s just taking a segment of queries with old L less than that maximum, giving them the same L and moving them to the right place in the array to keep it sorted. Everything is O(K) (pun intended).

We’re left with the starting and final chunk for each query. We’ll need to solve these offline as well; let’s describe just what to do with the starting chunks, since the final chunks are symmetric (plus there’s the case when the starting and final chunk coincide, which is annoying but similar). This time, Q_j has L_j and the first index i_j.

We can afford to quicksort the queries by i_j now. Let’s add elements from the end to an array and use the well-known stack algorithm for maintaining the current increasing sequence of jumps – before adding an element, we pop all elements \le it from the end of the array. Once we’ve added all elements with indices \ge i_j, we find how many elements of the array are \ge L_j and add that to the answer for Q_j. Since the array is sorted, that’s an easy binary search, O(\log{N}) per query.
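A sketch of this offline pass for the starting chunks (illustrative names; for brevity it sorts the queries with std::sort and binary-searches the stack as described):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// queries are pairs (i, L); res[j] = number of jump targets in the suffix
// starting at index i that are >= L. Elements are added right to left; `st`
// holds the jump sequence, strictly decreasing from bottom to top.
std::vector<int> startChunkCounts(const std::vector<int>& A,
                                  const std::vector<std::pair<int,int>>& queries) {
    std::vector<int> order(queries.size());
    for (size_t j = 0; j < order.size(); ++j) order[j] = (int)j;
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return queries[a].first > queries[b].first;  // rightmost start first
    });
    std::vector<int> st, res(queries.size());
    size_t q = 0;
    for (int i = (int)A.size() - 1; i >= 0; --i) {
        while (!st.empty() && st.back() <= A[i]) st.pop_back();  // pop <= elements
        st.push_back(A[i]);
        while (q < order.size() && queries[order[q]].first == i) {
            int L = queries[order[q]].second;
            // st is sorted decreasingly, so the values >= L form a prefix
            res[order[q]] = (int)(std::lower_bound(st.begin(), st.end(), L,
                                [](int v, int key) { return v >= key; })
                            - st.begin());
            ++q;
        }
    }
    return res;
}
```

Read top-down, `st` is exactly the increasing sequence of values jumped to from index i, so counting its entries \ge L_j answers Q_j.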

What’s the time complexity? For a fixed K, O(QK+Q\log{N}+NQ/K); if we choose K optimally around \sqrt{N}, that becomes O(Q\sqrt{N}). Memory complexity: O(K^2+N+Q)=O(N+Q).

This approach is slower and requires dealing with more small stupid parts, but if you’re fixated on solving this problem with sqrt decomposition or just don’t know how to modify segment trees, it’s possible.