# PROBLEM LINK:

**Author:** Ankush Khanna

# DIFFICULTY:

Medium

# PREREQUISITES:

Online Algorithms, Range trees like Segment Tree or Fenwick Tree, Order Statistics Tree (like GNU Policy Based Data Structure (PBDS) in C++) or various Decomposition techniques.

# PROBLEM:

You are given a sequence A of N integers, each a unique ID. You have to process Q queries (online); each query inserts a new unique integer into A and grants a reward equal to the number of elements already in A that are smaller than the inserted integer. Find the total reward after all queries are performed.

# EXPLANATION:

The first sub-task can be solved with a naive implementation: each query is processed in O(N) time by linearly traversing A to count the integers smaller than the current cat’s worthiness, so each test runs in O(N \cdot Q) time. The constraints of this sub-task are small enough for this approach to pass.

The real challenge is the second sub-task, which forces us to use a suitable data structure to answer the queries online. The problem can be solved with several different strategies, as listed in the *Prerequisites* section of this editorial. The solution discussed here is based on a Binary Indexed Tree (Fenwick Tree).

A Fenwick tree is a range tree that supports range sum queries in logarithmic time (which is exactly what the solution needs). According to the constraints, the worthiness of each cat lies in the range (1 \leq W \leq 10^{9}). We build a Fenwick tree over the value range in which every unique ID present in A contributes a count of 1. Before inserting a cat, the prefix sum over [1, W] gives the number of IDs smaller than its ID (since IDs are unique and W is not yet present, this equals the count of IDs strictly smaller than W) and hence the reward. Summing up all the rewards yields the answer for each test.

For a space- and time-efficient implementation that supports dynamic (online) insertions (W can be as large as 10^{9}, so a plain array over the whole value range is infeasible), a hash map backs the Fenwick tree and stores only the touched indices. Each query is then processed in O(\log_{2}{W}) expected time, which is fast enough.

# SOLUTION:

## Setter's Solution

```cpp
#include <bits/stdc++.h>

using namespace std;

// Upper bound on worthiness values (W <= 10^9), exclusive.
constexpr long MAXW = 1'000'000'001L;

// Hash-map-backed Fenwick tree: only touched indices are stored,
// so memory stays proportional to the number of insertions.
unordered_map<long, long> fenwick;

// Point update: add y at position x.
inline void add(long x, long y) {
    while (x < MAXW) {
        fenwick[x] += y;
        x += x & -x;
    }
}

// Prefix sum over [1, x].
inline long get(long x) {
    long res = 0L;
    while (x) {
        res += fenwick[x];
        x -= x & -x;
    }
    return res;
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int tt;
    cin >> tt;
    while (tt--) {
        int n;
        cin >> n;
        fenwick.clear();
        for (int i = 0; i < n; i++) {
            long x;
            cin >> x;
            add(x, 1L);  // mark each initial ID as present
        }
        int qq;
        cin >> qq;
        long ans = 0L;
        while (qq--) {
            long x;
            cin >> x;
            x ^= ans;       // decode the online query
            ans += get(x);  // IDs smaller than x (x is not yet present)
            add(x, 1L);     // insert x into A
        }
        cout << ans << '\n';
    }
    return 0;
}
```

# ALTERNATE SOLUTION:

This problem can be solved with less code using GNU PBDS in C++, which provides an ordered set implemented as an order statistics tree. It supports the operation `order_of_key`, which returns the position (0-indexed) an element would occupy in the sorted order of the set. Since all the IDs are unique, `order_of_key` for an element equals the number of elements currently in the set that are smaller than it. This operation runs in O(\log_{2}{X}) time, where X is the current size of the set.

So, this solution runs in O((N + Q) \log_{2}(N + Q)) time per test with O(N + Q) auxiliary space.

## Alternate Solution

```cpp
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>  // needed for tree_order_statistics_node_update

using namespace std;
using namespace __gnu_pbds;

// Order statistics tree: an ordered set that also answers
// order_of_key (rank) queries in logarithmic time.
template <typename T, typename C = less<T>>
using indexed_set = tree<T, null_type, C, rb_tree_tag,
                         tree_order_statistics_node_update>;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.tie(nullptr);
    int tt;
    cin >> tt;
    while (tt--) {
        int n;
        cin >> n;
        vector<int> a(n);
        for (int i = 0; i < n; i++) {
            cin >> a[i];
        }
        indexed_set<long> s(a.begin(), a.end());
        int qq;
        cin >> qq;
        long ans = 0L;
        while (qq--) {
            long x;
            cin >> x;
            x ^= ans;                  // decode the online query
            ans += s.order_of_key(x);  // elements currently smaller than x
            s.insert(x);
        }
        cout << ans << '\n';
    }
    return 0;
}
```

# COMPLEXITY:

**Time complexity:** O((N + Q) \log_{2}{\max\{ W \}}) per test.

**Space Complexity:** O(N + Q) auxiliary space per test.

Feel free to share your approach. If you have any queries, they are always welcome.