MGCHGYM - Editorial

Three small optimizations for the DP part:

  1. Don’t iterate up to w on every pass — only go up to the currently maximal reachable total weight. (Sorting the weights by maximal obtainable value may further decrease the runtime.)

  2. The first iteration can be done directly, without iterating over the whole array.

  3. Similarly to 2), the last iteration can be replaced by just checking the values w - k \cdot w_i.
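A minimal sketch of optimization 1, assuming a bounded-knapsack reachability DP (function and variable names are mine): `cur_max` tracks the largest sum reachable so far, so the inner scan never visits cells that cannot change yet.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// reachable[s] == 1 iff total weight s can be formed from the items,
// where each item is a (weight w_i, count k_i) pair.
// cur_max implements optimization 1: the dp scan stops at the largest
// sum that is currently attainable instead of always going up to w.
std::vector<char> reachable(const std::vector<std::pair<int, int>>& items, int w) {
    std::vector<char> dp(w + 1, 0);
    dp[0] = 1;
    int cur_max = 0; // highest reachable sum so far
    for (auto [wi, ki] : items) {
        for (int c = 0; c < ki; ++c) {
            int new_max = std::min(w, cur_max + wi);
            // descending scan so each copy of the item is used at most once
            for (int s = new_max; s >= wi; --s)
                if (dp[s - wi]) dp[s] = 1;
            cur_max = new_max;
        }
    }
    return dp;
}
```

Processing copies one by one keeps the sketch simple; in practice the counts would be handled with binary splitting or a sliding-window trick on top of the same `cur_max` cap.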


We actually do not need dp for this one. I did it using segment trees and bitsets.
Maintain a segment tree as usual, but this time, instead of using implicit indexing like \text{left}(\text{nodes}[i]) = \text{nodes}[2*i], explicitly store pointers to the left and right children. Also, for each node, store a bool indicating whether it is reversed or not.
During preprocessing, at each node we build an unordered set of all elements in its range. Since each element occurs in exactly \log N levels, the extra overhead is \mathcal O(N \log N).
When you have to update an element, traverse up the tree, deleting the old value from the unordered sets and inserting the new value. It takes \mathcal O(\log N) per update.
For reversals, just mark the node as reversed and exchange its left and right pointers, plus lazy propagation: whenever we “touch” a reversed node we propagate its reversal to its children.
In effect, each reversal takes \mathcal O(\log N) amortized time.
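The lazy-swap mechanism described above can be sketched as follows (a minimal illustration; the struct and function names are mine, and the per-node unordered sets are omitted):

```cpp
#include <algorithm>
#include <vector>

struct Node {
    Node *left = nullptr, *right = nullptr;
    bool rev = false; // pending reversal of this node's range
    int val = 0;      // leaf payload (per-node sets omitted for brevity)
};

void apply_rev(Node* t) { if (t) t->rev ^= true; }

// Resolve the lazy flag whenever we "touch" a node, before descending.
void push(Node* t) {
    if (!t || !t->rev) return;
    t->rev = false;
    std::swap(t->left, t->right); // exchange the child pointers
    apply_rev(t->left);           // ...and defer the swap inside each child
    apply_rev(t->right);
}

Node* build(const std::vector<int>& a, int lo, int hi) {
    Node* t = new Node();
    if (lo == hi) { t->val = a[lo]; return t; }
    int mid = (lo + hi) / 2;
    t->left = build(a, lo, mid);
    t->right = build(a, mid + 1, hi);
    return t;
}

// In-order traversal that honours pending reversals.
void collect(Node* t, std::vector<int>& out) {
    if (!t) return;
    push(t);
    if (!t->left && !t->right) { out.push_back(t->val); return; }
    collect(t->left, out);
    collect(t->right, out);
}
```

Reversing an entire subtree is then just `apply_rev(root)`. Note that the pointer swap reorders positions correctly only when sibling subtrees have equal size, i.e. on a tree padded to a power-of-two width.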

For the query:
We can do this in \mathcal O(N) per query using bitsets (more precisely \mathcal O(N \cdot w / 64), treating the word-size factor as a constant); refer to this. It is given that there are at most P of these queries.
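A minimal sketch of the bitset subset-sum check, assuming the weights on the queried segment have already been extracted from the tree (`MAXW` is an assumed bound on the target weight, not something fixed by the thread):

```cpp
#include <bitset>
#include <vector>

const int MAXW = 100000; // assumed upper bound on the target weight

// Bit s of dp means "some subset of the weights sums to exactly s".
// Requires 0 <= w <= MAXW.
bool subset_sum(const std::vector<int>& weights, int w) {
    std::bitset<MAXW + 1> dp;
    dp[0] = 1;
    for (int x : weights)
        dp |= dp << x; // every reachable sum stays reachable shifted by x
    return dp[w];
}
```

Each shift-or processes all \mathcal O(W) dp cells 64 at a time, which is where the word-size speedup over the plain boolean DP comes from.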

Overall time complexity is \mathcal O(NP + (N + Q) \log N).