### PROBLEM LINK:

**Author:** Sunny Agarwal

**Tester:** Hiroto Sekido

**Editorialist:** Kevin Atienza

### DIFFICULTY:

Medium

### PREREQUISITES:

stock span problem, stack, binary search, value compression, combinatorics

### PROBLEM:

Given an array [A_1, \ldots, A_N], let B be the list of maximums of all subarrays of A (B has length \frac{N(N+1)}{2}). M games will be played, each with a single constraint of the form "C K", where C is one of <, = or >, and K is an integer. In a single game, players alternate turns; in a move, a player marks off an integer in B satisfying the constraint "C K", and the first player with no valid move loses.

You have to determine the winner of each game.

### QUICK EXPLANATION:

Each game depends only on the *parity* of the number of integers satisfying the constraint. Thus, we only need to find the number of values in B satisfying the constraint.

Let L_i be the largest index less than i such that A_{L_i} \ge A_i (set L_i = 0 if no such index exists).

Let R_i be the smallest index greater than i such that A_{R_i} > A_i (set R_i = N+1 if no such index exists).

All $L_i$s and $R_i$s can be computed in O(N) using a stack.

There can be at most N distinct values in A, say [V_1, \ldots, V_M] with M \le N (in sorted order). Let f(j) be the number of subarrays in which V_j is the maximum. Then f(j) = \sum_{A_i = V_j} (R_i - i)(i - L_i).

There are three kinds of constraints: "< K", "= K" and "> K". We need the sum of f(j) over all j for which V_j satisfies the constraint. We introduce a fourth kind of constraint, "\le K", to which the other three can be reduced.

To answer a â€ś\le Kâ€ť constraint, first find the largest index j such that V_j \le K (with binary search). Then the result we want is f(1) + \cdots + f(j). This number can quickly be obtained if we precompute all prefix sums beforehand.

### EXPLANATION:

The solution has multiple parts. We start with the most obvious questions first, so that we know what to precompute, etc.

# A "game"

Let's first discuss how a game proceeds. Suppose we have the list B, which contains the \frac{N(N+1)}{2} maximums among all subarrays (of course, we can't construct this array in the harder subtasks because of its size). The first thing to notice is that the order of B's elements doesn't matter, so we can assume that B is sorted.

Now consider a single game with its constraint. The key thing to notice is that the game is really simple: the players simply alternate turns marking off numbers in B satisfying the constraint. In fact, the winner depends solely on the number of values in B satisfying the constraint, i.e., it doesn't matter what strategies the players use! To be specific:

- If there is an even number of such values, then the second player makes the last move and wins.
- If there is an odd number of such values, then the first player makes the last move and wins.

Therefore, we only need to count those elements of B satisfying the constraint! Because B is sorted and because of the nature of the constraints, this number can be computed using one or two binary searches!

To illustrate, let I(K) be the number of elements of B that are \le K (equivalently, the largest index i such that B_i \le K, which can be computed with a binary search). Then:

- The number of elements of B that are < K is I(K-1).
- The number of elements of B that are = K is I(K) - I(K-1).
- The number of elements of B that are > K is \frac{N(N+1)}{2} - I(K).

Thus, if we already have the array B, then we can easily compute the result of any game in O\left(\log \left(\frac{N(N+1)}{2}\right)\right) = O(\log N) time.
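If the full sorted array B were available, this counting step could be sketched as follows (a minimal illustration; the names `I` and `firstPlayerWins` are ours, not from the problem):

```cpp
#include <vector>
#include <algorithm>

using namespace std;
typedef long long ll;

// I(K): the number of elements of the sorted array B that are <= K.
ll I(const vector<ll>& B, ll K) {
    return upper_bound(B.begin(), B.end(), K) - B.begin();
}

// Decide a game with constraint "C K": the first player wins
// exactly when the number of markable values is odd.
bool firstPlayerWins(const vector<ll>& B, char C, ll K) {
    ll cnt;
    if (C == '<')      cnt = I(B, K - 1);            // integers < K are exactly those <= K-1
    else if (C == '=') cnt = I(B, K) - I(B, K - 1);
    else               cnt = (ll)B.size() - I(B, K); // C == '>'
    return cnt % 2 == 1;
}
```

Note that reducing "< K" to I(K-1) relies on all values being integers.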

# Run-length encoding

Unfortunately, the previous approach uses O(N^2) memory because of the array B. But we can reduce this significantly by noticing that **there are at most N distinct values in B**. (Why?) So let [V_1, \ldots, V_M] be the distinct values in B, and let f(i) be the number of occurrences of V_i in B.

We can then compute I(K) by doing a binary search in V instead for the largest i such that V_i \le K, and then I(K) is simply f(1) + f(2) + \cdots + f(i). If we precompute all prefix sums of the f(i), then I(K) can still be computed in O(\log N) time, but the memory requirements decrease to just O(N).
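Under this run-length representation, the lookup can be sketched like this (V and the prefix sums `pre` are assumed precomputed; the names are illustrative):

```cpp
#include <vector>
#include <algorithm>

using namespace std;
typedef long long ll;

// V:   the distinct values of B, in increasing order
// pre: prefix sums of f, i.e. pre[i] = f(1) + ... + f(i), with pre[0] = 0
// Returns I(K), the number of elements of B that are <= K, in O(log N).
ll I(const vector<ll>& V, const vector<ll>& pre, ll K) {
    // largest i (as a 1-based count) such that V_i <= K
    ll i = upper_bound(V.begin(), V.end(), K) - V.begin();
    return pre[i];
}
```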

# Computing V and f(i)

To complete the algorithm, we need to compute the array [V_1, \ldots, V_M] and the values f(1), \ldots, f(M). Note that the sequence V can be computed in O(N \log N) time from the array A (with a simple sort + uniquify operation). Thus, all that remains is to compute the $f(i)$s.
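The sort + uniquify step might look like this (a small sketch; `distinctSorted` is our name for it):

```cpp
#include <vector>
#include <algorithm>

using namespace std;

// Returns the distinct values of A in increasing order ("sort + uniquify").
vector<int> distinctSorted(vector<int> A) {
    sort(A.begin(), A.end());
    A.erase(unique(A.begin(), A.end()), A.end());
    return A;
}
```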

First, let's suppose that all elements of A are distinct. Consider the value A_i. How many subarrays have A_i as their maximum? Let L_i and R_i be the indices of the nearest larger elements to the left and right of A_i, respectively. The subarray cannot contain A_{L_i} or A_{R_i}, so it must be contained in [A_{L_i+1}, \ldots, A_{R_i-1}]; by the way L_i and R_i are defined, this condition is also sufficient. Thus, there are (R_i - i)(i - L_i) such subarrays: there are (R_i - i) and (i - L_i) ways to choose the right and left endpoints, respectively. Let us denote this quantity by m(i).

As an example, consider the following image (A_i is the height of the bar over the number i):

```
#
#             #
#       #     #
#       # #   #
#       # # # #
#     # # # # #
#   # # # # # #
# # # # # # # #
1 2 3 4 5 6 7 8
```

In this case, suppose i = 5. Then L_i = 1 and R_i = 8. Thus, there are (R_i - i) = 3 and (i - L_i) = 4 choices for the right and left endpoints, respectively, for a total of 3\times 4 = 12 subarrays, as shown below:

```
1 2 3 4 5 6 7 8
(2 3 4 5)
(3 4 5)
(4 5)
(5)
(2 3 4 5 6)
(3 4 5 6)
(4 5 6)
(5 6)
(2 3 4 5 6 7)
(3 4 5 6 7)
(4 5 6 7)
(5 6 7)
```

As a consequence, we have f(j) = m(i) for the (unique) i such that A_i = V_j.

In case L_i or R_i doesn't exist, we use the values 0 or N+1, respectively.

This formula, with the expression m(i) = (R_i - i)(i - L_i), is nice; unfortunately, it doesn't extend straightforwardly when the $A_i$s are not all distinct. To see why, consider the following example:

```
#         # #
#   #   # # # #
# # #   # # # #
# # # # # # # # #
1 2 3 4 5 6 7 8 9
```

The natural extension of f(j) = m(i) would be f(j) = \sum_{A_i = V_j} m(i). However, in the above case it doesn't quite work. Consider j = 3. There are three indices i with A_i = 3, namely i = 3, i = 5 and i = 8, and here m(3) = 6, m(5) = 4 and m(8) = 2, so this formula gives f(3) = 6 + 4 + 2 = 12. However, the correct value of f(3) is 10 (verify!): a subarray containing more than one occurrence of its maximum is counted once per occurrence. Clearly we need to fix this formula.

We can do that by redefining m(i) so that it counts the subarrays whose **leftmost maximum** is A_i. This uses the fact that each subarray has a **unique** leftmost maximum, so no subarray is double-counted. To implement this, redefine L_i by weakening it so that A_{L_i} can equal A_i (i.e. A_{L_i} \ge A_i). Then m(i) stays the same, i.e. (R_i - i)(i - L_i). In the example above, we get the new values m(3) = 6, m(5) = 2 and m(8) = 2, so f(3) is correctly computed as 6 + 2 + 2 = 10.

But how do we compute the $L_i$s and $R_i$s? These values can be computed in O(N) time by means of a stack: this is essentially the standard *stock span problem*. (Recognizing it as such requires experience; if you didn't know about it, note that the $L_i$s and $R_i$s can also be computed in O(N \log N) time with a segment tree.)
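The stack computation can be sketched as follows, using the weakened definition A_{L_i} \ge A_i from the previous paragraphs (the function name `nearestLarger` is illustrative, and the arrays are 1-indexed as in the editorial):

```cpp
#include <vector>

using namespace std;

// L[i] = largest index j < i with A[j] >= A[i], or 0 if none
// R[i] = smallest index j > i with A[j] > A[i], or N+1 if none
// A is 1-indexed: the array occupies A[1..N].
void nearestLarger(const vector<int>& A, int N, vector<int>& L, vector<int>& R) {
    L.assign(N + 2, 0);
    R.assign(N + 2, N + 1);
    vector<int> st; // stack of indices
    for (int i = 1; i <= N; i++) {
        // pop strictly smaller elements; the top (if any) satisfies A[top] >= A[i]
        while (!st.empty() && A[st.back()] < A[i]) st.pop_back();
        L[i] = st.empty() ? 0 : st.back();
        st.push_back(i);
    }
    st.clear();
    for (int i = N; i >= 1; i--) {
        // pop smaller-or-equal elements; the top (if any) satisfies A[top] > A[i]
        while (!st.empty() && A[st.back()] <= A[i]) st.pop_back();
        R[i] = st.empty() ? N + 1 : st.back();
        st.push_back(i);
    }
}
```

Each index is pushed and popped at most once per pass, so the total running time is O(N).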

Now, all that remains is to compute the $f(i)$s given the $m(i)$s. But this can be done easily in linear time also:

- Create an array for f, all initialized to zero.
- Create an inverse map of V, say \phi, where \phi(V_i) = i.
- For every 1 \le i \le N, add the value m(i) to f(\phi(A_i)).

That's it! We now have all the parts needed for the whole solution. The preprocessing takes O(N \log N) time, and each query can be answered in O(\log N) time. The following is an example implementation in C++:

```
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
using namespace std;
#define ll long long
#define LIM 1001111
#define INF (1<<30)
int A[LIM];
int L[LIM];
int R[LIM];
int s[LIM]; // stack
typedef pair<int,int> ct;
#define value first
#define count second
ct cts[LIM];
char typ[11];
char ans[LIM]; // will contain the answer string
int n, m;
int find(int k) {
    // binary search for the largest index whose value is <= k
    int L = 0, R = n + 1;
    while (R - L > 1) {
        int M = (L + R) >> 1;
        (cts[M].value <= k ? L : R) = M;
    }
    return cts[L].count;
}
int main() {
    scanf("%d%d", &n, &m);
    A[0] = A[n+1] = INF;
    for (int i = 1; i <= n; i++) scanf("%d", A + i);
    // compute L from left to right
    s[0] = 0;
    for (int q = 0, i = 1; i <= n; i++) {
        while (A[s[q]] < A[i]) q--;
        L[i] = s[q];
        s[++q] = i;
    }
    // compute R from right to left
    s[0] = n+1;
    for (int q = 0, i = n; i; i--) {
        while (A[s[q]] <= A[i]) q--;
        R[i] = s[q];
        s[++q] = i;
    }
    // compute the frequencies of maximums of subarrays in sorted order
    cts[0].value = -INF;
    cts[0].count = 0;
    for (int i = 1; i <= n; i++) {
        cts[i].value = A[i];
        cts[i].count = (R[i] - i) * (i - L[i]);
    }
    sort(cts, cts + n + 1);
    // compute cumulative sums. Since we only need the parity, we can use '^' instead of '+'
    for (int i = 1; i <= n; i++) {
        cts[i].count ^= cts[i-1].count;
    }
    // answer queries
    for (int i = 0; i < m; i++) {
        int k;
        // read the constraint type, K, and a name; its first letter lands at ans[i]
        scanf("%s%d%s", typ, &k, ans + i);
        // if the count of markable values is even, flip the stored initial
        // to the other player's (XOR with 7 maps 'C' <-> 'D')
        if (!((*typ == '>' ? n*(n+1LL)/2 - find(k) : *typ == '<' ? find(k-1) : find(k) - find(k-1)) & 1)) ans[i] ^= 7;
    }
    ans[m] = 0;
    printf("%s\n", ans);
}
```

As always, if you have any questions, feel free to ask!

### Time Complexity:

O((N + M) \log N)