# MULGAME - Editorial

Author: Miloš Purić
Testers: Felipe Mota and Radoslav Dimitrov
Editorialist: Miloš Purić

Medium

# PREREQUISITES:

Game theory, Square root optimizations

# PROBLEM:

You are given a strictly increasing sequence of positive integers A_1, A_2, ..., A_N of length N where A_i + A_1 \geq A_{i+1} for each 1 \leq i < N.

Alice and Bob will play Q independent games using this sequence. Before they start playing, Alice must choose an integer G such that 1 \leq G \leq M.

The rules of the i-th (1 \leq i \leq Q) game are:

• There is a set S_i = \{A_j | L_i \leq j \leq R_i\} representing the allowed moves in the game.
• There is a single pile of objects. In the beginning there are G objects on the pile.
• Let P be the current number of objects on the pile. During their move, the player should choose an integer b \in S_i such that b \leq P and take b objects from the pile.
• Alice and Bob alternate turns, with Alice moving first.
• The player that can’t make a move loses.

If both players play optimally, what is the maximum number of games Alice can win by choosing the best possible G?

# QUICK EXPLANATION:

The criterion (A_i + A_1 \geq A_{i+1} for all i with 1 \leq i < N) that the array A satisfies is a sufficient condition for each game i with the set of allowed moves S_i = \{A_j | j \in \Z \land L_i \leq j \leq R_i\} to be equivalent to the game with the set of allowed moves \{x | x \in \Z \land A_{L_i} \leq x \leq A_{R_i}\}.

With this simplification in mind, it can be shown that a game state with f objects on the pile is a winning position in the i-th game if and only if (f \mod (A_{L_i}+A_{R_i})) \geq A_{L_i}, where (a \mod b) denotes the remainder after division of a by b, when a and b are positive integers.

Let cnt_p denote how many of these Q inequalities would be satisfied if Alice chooses p as G. We can calculate cnt_p for each p from 0 to M in O((M+Q)\sqrt M) time by naively incrementing values at appropriate positions in the array cnt for games where A_{L_i}+A_{R_i} \geq \sqrt M, and handling the remaining games in a more clever way.

# EXPLANATION:

We will start by introducing two well-known notions from game theory, stated here in a form partly specific to the games described in the problem:

\textbf{Definition 1:} We define the winning and losing positions inductively:

• The state of the game when there are 0 objects on the pile is a losing position.

• The state of the game when there are a objects on the pile is a winning position if the player that is on turn can make a move by choosing b \in S, b \leq a such that a-b is a losing position. Otherwise, it is a losing position.

Namely, a certain position is winning if and only if the player on turn can go to a losing position from it.

We will now omit the initial numbers of objects on the piles in all Q games and we will characterize each game only with its set of allowed moves S_i. Instead, we will focus on the winning and losing states of the games. Then the following definition of equivalence of games arises naturally:

\textbf{Definition 2:} Two games are equivalent if and only if the following statement holds true for every nonnegative integer p:

• The state of the first game when there are p objects on the pile is a winning position if and only if the state of the second game when there are p objects on the pile is a winning position.

In other words, if we were to write down winning and losing positions as a list of ones and zeros respectively, two equivalent games would have the exact same list.

In the following text, (a \mod b) will denote the remainder after division of a by b, when a and b are positive integers and [a,b] will denote the set \{x | x \in \Z \land a \leq x \leq b\}.

Before we go any deeper into the actual solution, we’re going to state an assertion that will be helpful later.

\textbf{Lemma 1:} The state when there are F objects on the pile in the game with the set of allowed moves [l,r], where l and r are positive integers satisfying l \leq r, is a winning position if and only if (F \mod (l+r)) \geq l.

Proof of Lemma 1

All states F with 0 \leq F < l are obviously losing. Every state F with l \leq F < l+r is winning: if F \leq r the player on turn can take all F objects at once, and if F > r they can take r objects and reach the state F-r < l, which is losing. We have now proved that the first block of (l+r) states satisfies Lemma 1.

Suppose now a certain block from k*(l+r) to k*(l+r)+(l+r-1) satisfies the lemma, for a nonnegative integer k. We will show that the next block also satisfies Lemma 1.

From the state (k+1)*(l+r) the player on turn can take at most r objects from the pile, so the leftmost state he can reach is k*(l+r)+l. He must also take at least l objects, so the rightmost state he can reach is k*(l+r)+r. Since all the states from k*(l+r)+l to k*(l+r)+r are winning by the inductive hypothesis, the state (k+1)*(l+r) must be losing.

A similar argument can be used to prove that all states from (k+1)*(l+r) to (k+1)*(l+r)+(l-1) are losing - the player on turn can only move to a winning state from them.

Now, (k+1)*(l+r)+l is obviously winning, since by taking l objects from the pile we reach (k+1)*(l+r), which is losing. From each of the states from (k+1)*(l+r)+l to (k+1)*(l+r)+(l+r-1) we can reach at least one losing state from (k+1)*(l+r) to (k+1)*(l+r)+(l-1), so they are all winning.

Now we have inductively proved that Lemma 1 holds for all positive integers F.
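
Lemma 1 is easy to sanity-check against a direct computation of Definition 1. The sketch below (`bruteForce` and `lemma1` are our own helper names, not part of the solutions further down) compares the DP table with the closed form for small parameters:

```cpp
#include <cassert>
#include <vector>

// Brute-force the win/lose table for the game with allowed moves [l, r]
// by the inductive definition: F is winning iff some move b in [l, r],
// b <= F, leads to a losing state F - b.
std::vector<bool> bruteForce(int l, int r, int maxF) {
    std::vector<bool> win(maxF + 1, false);
    for (int f = 0; f <= maxF; f++)
        for (int b = l; b <= r && b <= f; b++)
            if (!win[f - b]) { win[f] = true; break; }
    return win;
}

// Closed form from Lemma 1: F is winning iff (F mod (l+r)) >= l.
bool lemma1(int l, int r, int f) { return f % (l + r) >= l; }
```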

The terminology we’ve established and Lemma 1 now enable us to formulate the following lemma, which should advance our understanding of the problem:

\textbf{Lemma 2:} A game with the set of allowed moves S is equivalent to a game played with the set of allowed moves [min(S),max(S)] = \{min(S),min(S)+1,...,max(S)-1,max(S)\} if and only if the following criterion holds:

• For every two a,b \in S such that a+min(S) < b there exists c \in S such that a < c < b.

Proof of Lemma 2

Let S = \{x_1,x_2,...,x_k\} where x_1 < x_2 < ... < x_k. Therefore, min(S)=x_1 and max(S)=x_k.

We need to prove the following: S is equivalent to [min(S),max(S)] \Leftrightarrow criterion.

Implication from left to right

Since S is equivalent to [min(S),max(S)], we know the exact configuration of winning and losing states in S because of Lemma 1. Namely, only states F with (F \mod (x_1+x_k)) \geq x_1 are winning.

Suppose that there exists an index i with 1 \leq i < k = |S| such that x_i+x_1 < x_{i+1}. The state x_i+x_1 must be winning because all the states from x_1 to x_1+x_k-1 are winning due to Lemma 1, and x_i+x_1 lies in that range since i < k.

However, from a pile of x_1+x_i objects we can take at most x_i objects (any larger element of S exceeds the pile, since x_{i+1} > x_1+x_i) and at least x_1 objects, which means that we can reach x_1 at the leftmost and x_i at the rightmost. Since all states from x_1 to x_i are winning, x_1+x_i must be losing.

This contradicts the initial assumption. Therefore, we have proved this direction.

Implication from right to left

We can notice that it is sufficient to prove that the first block of x_1+x_k states is the same as the first block of x_1+x_k states in the game with allowed moves [min(S),max(S)]. This is so because we can use the same ‘transition between blocks’ argument from the proof of Lemma 1 to show that all the other blocks of x_1+x_k states are also exactly the same as in the game with allowed moves [min(S),max(S)].

All the states from 0 to x_1-1 are obviously losing. Now, suppose there exists a state F that is a losing position such that x_1 < F < x_1+x_k. We first establish that F cannot be equal to any x_i because it would immediately be a winning position.

If F > x_k we can take x_k objects from the pile and go to the state F-x_k < x_1 which is a losing state. This would mean that F is a winning state, which brings us to the desired contradiction.

Suppose now that F < x_k. Then F is located between two consecutive elements: x_j < F < x_{j+1} holds for some j < k. The criterion tells us that x_j+x_1 \geq x_{j+1}, so x_j+x_1 > F. This means that F-x_j < x_1, so by taking x_j objects from the pile we can reach a losing state, which tells us that F is a winning state. We arrive at a contradiction in this case as well, which completes the proof of this direction.

For this problem we only need the implication from right to left to hold, but the derived equivalence is a nice result nonetheless.

We can now work with Q games with the sets of allowed moves [A_{L_i},A_{R_i}] instead of the sets of elements of A from the ranges [L_i,R_i]. With Lemma 1 in mind, which tells us the exact configuration of losing and winning states in this type of game, we can reduce our problem to finding the maximum number of the following Q inequalities that can be satisfied by an optimal choice of G such that 1 \leq G \leq M:

• (G \mod (A_{L_i}+A_{R_i})) \geq A_{L_i} for each i such that 1 \leq i \leq Q

We will introduce u_i = A_{L_i}+A_{R_i} and v_i = A_{L_i} for 1 \leq i \leq Q for convenience.
Now we have to satisfy the maximum possible number of the following Q inequalities:

• (G \mod u_i) \geq v_i for each i such that 1 \leq i \leq Q

Next, we will construct an array cnt of length M+1 where cnt_p is the number of games that Alice would win (i.e. the number of inequalities satisfied) if she chooses p as G, for each p such that 0 \leq p \leq M.

To do so, for every game i we will add 1 to every cnt_p where (p \mod u_i) \geq v_i. This is equivalent to adding 1 to every cnt_p with (p \mod u_i)=v_i and adding -1 to every cnt_p with (p \mod u_i)=0 (except for p=0) for every game i and doing cnt_p := cnt_p+cnt_{p−1} for all p such that 1 \leq p \leq M afterwards (making the prefix sum array out of it).

Consider a positive integer C. For all u_i \geq C we will naively go through the array cnt and add 1s and -1s on appropriate positions.

However, for the games where u_i < C we will build a table F, where F_{g,j} stands for how much should be added to every position p with (p \mod g) = j. For each such game we add 1 to F_{u_i,v_i} and -1 to F_{u_i,0}. Then, for every position p such that 1 \leq p \leq M and for every modulus g such that 1 \leq g < C, we add F_{g,(p \mod g)} to cnt_p.
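
The small-modulus part can be sketched like this (`smallGames` is an illustrative name; note that cnt is still a difference array afterwards, so a prefix-sum pass must follow, exactly as described in the text):

```cpp
#include <utility>
#include <vector>

// cnt: difference array of length M+1; games: pairs (u_i, v_i), all with
// u_i < C. Fold every game into F, then sweep once over all positions.
// Total cost is O(Q + M*C) regardless of how many small games there are.
void smallGames(std::vector<int>& cnt,
                const std::vector<std::pair<int,int>>& games, int C) {
    std::vector<std::vector<int>> F(C, std::vector<int>(C, 0));
    for (const auto& g : games) {
        int u = g.first, v = g.second;
        F[u][v]++;   // +1 where (p mod u) = v
        F[u][0]--;   // -1 where (p mod u) = 0
    }
    int M = (int)cnt.size() - 1;
    for (int p = 1; p <= M; p++)        // p = 0 is deliberately skipped
        for (int u = 3; u < C; u++)     // moduli are at least 3 here
            cnt[p] += F[u][p % u];
}
```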

As mentioned earlier, after all games/inequalities have been processed we will go through the array and assign cnt_p := cnt_p+cnt_{p−1} to get the desired prefix sum array.

Since we now know all the values in the array cnt we can give the answer to our problem by finding the biggest one.

The computational complexity of this approach is O(\frac{M}{C}*Q+M*C). If we choose C to be close to \sqrt M we can achieve the complexity O((M+Q)* \sqrt M).

Due to the nature of the problem, setting C to any value approximately between 30 and 1500 should probably result in a feasible runtime.

# AN ALTERNATIVE SOLUTION:

There is an alternative way of building the array cnt after we’ve reduced our problem to satisfying the maximum number of the following Q inequalities:

• (G \mod u_i) \geq v_i for each i such that 1 \leq i \leq Q

Firstly, we are going to use a map keyed by pairs of integers to make sure we never process the exact same query (L_i,R_i) twice. If there are repeated queries, we'll remember how many there are of each kind in the mentioned map and process them all at once.

Alternatively, it is possible to sort the games and solve them in segments of exactly same queries. This has the same effect as memorizing them in a map.

Now that we’ve made sure never to solve the same query twice, we can say that for a certain positive integer X we will solve queries with u_i=X no more than X times, since such a query is fully determined by the pair (u_i,v_i) and v_i < u_i = X.

Let num_X stand for how many queries there are with u_i = X. So far we’ve ensured
that num_X \leq X holds for every X.

We will now prove that if we do all the queries naively (add 1s and -1s on appropriate spots like in the first solution) we will achieve a feasible complexity.

The number of operations for all queries with u_i=X will be around \frac{M}{X}*num_X. If we sum this up across all X's, we’ll get the approximate total number of operations L:

L \approx \sum_{X=3}^{\infty} \frac{M}{X}*num_X = M* \sum_{X=3}^{\infty} \frac{num_X}{X}

Note that we start from X=3 because we can’t have X=1 or X=2 as the sum A_{L_i}+A_{R_i}: the sequence is strictly increasing and L_i < R_i, so A_{L_i}+A_{R_i} \geq 1+2 = 3. We go to X=\infty because it’s more convenient.

We can see that the sum \sum_{X=3}^{\infty} \frac{num_X}{X} attains the maximum value when all the queries have the minimum u_i possible. That is, there is the maximum possible number of queries with u_i=3, then the maximum possible number of queries with u_i=4, and so on until we use up all of the available Q queries. However, in that case, the maximum u_i over all queries will obviously be around \sqrt Q, because \sum_{X=3}^{\infty} num_X = Q.

This observation allows us to gauge the maximum possible total number of operations:

L \approx M*\sum_{X=3}^{\infty} \frac{num_X}{X} \leq M*(\sum_{X=3}^{\sqrt Q} 1) \approx M*\sqrt Q

Therefore, doing all the queries naively should indeed result in a feasible runtime as long as you make sure to memorize the queries that repeat and add their total contribution towards cnt
in one go.
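
The bound above can be sanity-checked numerically; `worstCaseSum` below is a hypothetical helper that packs Q queries at the smallest admissible moduli. Note the exact constant is closer to \sqrt{2Q} than \sqrt Q, which the asymptotic bound absorbs:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Pack Q distinct queries greedily at the smallest moduli (num_X = X
// for X = 3, 4, ... until the Q queries are used up) and evaluate
// sum_{X} num_X / X, the per-position operation count from the text.
double worstCaseSum(long long Q) {
    double s = 0;
    for (long long X = 3; Q > 0; X++) {
        long long take = std::min(X, Q); // at most X distinct queries per X
        s += double(take) / double(X);
        Q -= take;
    }
    return s;
}
```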

In conclusion, although the solutions that use maps of pairs might be a bit slower and the ones that replace the maps by sorting can work rather fast, the difference in their runtime is mostly insignificant.

# SOLUTIONS:

Setter's Solution 1
```cpp
/// THE FIRST SOLUTION
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int N = 2e5+23;

int n,m,q,a[N],l[N],r[N];
int cnt[N],f[504][504]; /// the size of these arrays (f and was) is 504 > sqrt(m) = 418 so it's okay
bool was[504];

/// solves a single query naively - adds 1 to every point p where (p mod (l+r)) = l
/// and adds -1 to every point p where (p mod (l+r)) = 0 (except p = 0)
void naive(int l,int r){
    /// it is important not to add anything to position 0 even though it's divisible by (l+r) - it's a losing position anyway
    int md = l+r;
    for(int i = 0; i <= m; i += md){ /// we must take care not to exceed m
        if(i+l <= m){
            cnt[i+l]++;
        }
        if(i+md <= m){
            cnt[i+md]--;
        }
    }
}

void solve(){ /// O((M+Q)*sqrt(M))
    cin >> n >> q >> m;
    for(int i = 1; i <= n; i++){
        cin >> a[i];
    }
    for(int i = 1; i <= q; i++){
        cin >> l[i] >> r[i];
    }

    /// clearing the arrays
    int c = sqrt(m);
    for(int i = 0; i <= m; i++) cnt[i] = 0;
    for(int i = 0; i <= c; i++){
        was[i] = 0;
        for(int j = 0; j < i; j++){
            f[i][j] = 0;
        }
    }

    /// partitioning the queries based on their size - the ones larger than c
    /// we immediately do naively and the ones smaller than c we save for later
    vector <int> todo;
    for(int i = 1; i <= q; i++){
        int md = a[l[i]]+a[r[i]];
        if(md > c){ /// do the query naively
            naive(a[l[i]],a[r[i]]);
        }
        else{ /// add the query's contribution towards CNT to the array f
            if(!was[md]){
                todo.push_back(md);
            }
            was[md] = 1;
            f[md][a[l[i]]]++;
            f[md][0]--;
        }
    }

    for(int i = 1; i <= m; i++){ /// solve all the queries smaller than c in O(m*c)
        for(auto md : todo){
            cnt[i] += f[md][i%md];
        }
    }

    for(int i = 1; i <= m; i++){ /// make the prefix sum array
        cnt[i] += cnt[i-1];
    }

    int ans = 0;
    for(int i = 0; i <= m; i++){ /// find the solution
        ans = max(ans,cnt[i]);
    }
    cout << ans << endl;
}

int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while(t--){
        solve();
    }
}
```

Setter's Solution 2
```cpp
/// THE SECOND SOLUTION
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const int N = 2e5+23;

int n,q,m,a[N],l[N],r[N];
map <pair<int,int>,int> mapa; /// the map where we store the summed contributions of repeated queries
map <int,int> mapv; /// mapv[X] stands for the index in the vector v (below) at which the vector
                    /// holding all queries with modulus X is located (v is a vector of vectors of pairs)
int cnt[N];

/// the same function from solution 1; the only difference is that it adds x and -x
/// instead of 1 and -1 because it solves multiple identical queries at once
void naive(int l,int r,int x){
    int md = l+r;
    for(int i = 0; i <= m; i += md){
        if(i+l <= m){
            cnt[i+l] += x;
        }
        if(i+md <= m){
            cnt[i+md] -= x;
        }
    }
}

void solve(){ /// O(q log q + m*sqrt(q))
    cin >> n >> q >> m;
    for(int i = 1; i <= n; i++){
        cin >> a[i];
    }
    for(int i = 1; i <= q; i++){
        cin >> l[i] >> r[i];
    }

    for(int i = 0; i <= m; i++) cnt[i] = 0;

    mapa.clear();
    mapv.clear();
    /// for each X that appears as a modulus in the queries we dedicate a vector of pairs
    /// holding all the queries with modulus X; for each query in it we store its left end (li)
    /// and the number of times it has appeared (that's why we need the pair)
    vector <vector<pair<int,int>>> v;
    for(int i = 1; i <= q; i++){ /// O(q log q)
        int li = a[l[i]],ri = a[r[i]];
        if(!mapv.count(li+ri)){ /// we haven't had a single query with modulus li+ri before
            vector <pair<int,int>> a; /// create a new vector for all queries with modulus li+ri
            a.push_back({li+ri,0}); /// store the modulus itself in the first (sentinel) pair
            v.push_back(a); /// add the new vector to the vector of all the vectors v
            mapv[li+ri] = v.size()-1; /// remember where (in v) the vector with modulus li+ri is located
        }
        if(!mapa.count({li,ri})){ /// we haven't encountered this particular query before
            int gde = mapv[li+ri]; /// gde - the position (in v) of the vector with modulus li+ri
            int vel = v[gde].size(); /// vel - the number of entries that vector holds so far
            v[gde].push_back({li,0}); /// add this query to the vector of all queries with modulus li+ri
            mapa[{li,ri}] = vel; /// remember where this query (li,ri) is located
        }
        /// we now have both the vector for queries with modulus li+ri and a place in it
        /// for this query (li,ri), so we increase the contribution of (li,ri) by one
        v[mapv[li+ri]][mapa[{li,ri}]].second++;
    }

    /// now we simply traverse all the queries and do all of them naively - go through
    /// each vector of pairs and add their separate contributions to CNT
    for(auto f : v){
        int md = f[0].first;
        for(int j = 1; j < (int)f.size(); j++){
            int l = f[j].first,puta = f[j].second;
            naive(l,md-l,puta);
        }
    }

    for(int i = 1; i <= m; i++) cnt[i] += cnt[i-1]; /// make the prefix sum array

    int ans = 0;
    for(int i = 0; i <= m; i++){ /// find the solution
        ans = max(ans,cnt[i]);
    }
    cout << ans << endl;
}

int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(0);
    int t;
    cin >> t;
    while(t--){
        solve();
    }
}
```

Tester's Solution
```cpp
#include <bits/stdc++.h>
using namespace std;
template<typename T = int> vector<T> create(size_t n){ return vector<T>(n); }
template<typename T, typename... Args> auto create(size_t n, Args... args){
    return vector<decltype(create<T>(args...))>(n, create<T>(args...));
}
/// the original strict validating readers were damaged in extraction;
/// they are reduced here to plain cin reads with a range assert
long long readInt(long long l,long long r,char endd){
    long long a;
    cin >> a;
    assert(l <= a && a <= r);
    return a;
}
long long readIntSp(long long l,long long r){ return readInt(l,r,' '); }
long long readIntLn(long long l,long long r){ return readInt(l,r,'\n'); }
int main(){
    ios::sync_with_stdio(false);
    cin.tie(0);
    const int maxn = 200000, maxa = 100000000; /// (the original also had sq = 30, unused here)
    int t = readIntLn(1, maxn); /// the bounds on t, q and m are a guess - the original limits were lost
    for(int _ = 0; _ < t; _++){
        int n = readIntSp(1, maxn);
        int q = readIntSp(1, maxn);
        int m = readIntLn(1, maxa);
        vector<int> a(n);
        for(int i = 0; i < n; i++){
            if(i == n - 1) a[i] = readIntLn(1, maxa);
            else a[i] = readIntSp(1, maxa);
        }
        vector<int> accum(m + 1, 0), ext(m + 1, 0); /// ext stays all zeros in this reconstruction
        map<pair<int,int>, int> ops; /// group identical queries so each is processed only once
        for(int i = 0; i < q; i++){
            int l = readIntSp(1, n), r = readIntLn(1, n);
            int mi = a[l - 1], ma = a[r - 1];
            int len = mi + ma;
            ops[{len, mi}] += 1;
        }
        for(auto e : ops){ /// difference-array updates, weighted by query multiplicity
            int len = e.first.first;
            int mi = e.first.second;
            int w = e.second;
            for(int j = 0; j <= m; j += len){
                if(j + mi <= m) accum[j + mi] += w;
                if(j + len <= m) accum[j + len] -= w;
            }
        }
        for(int i = 1; i <= m; i++)
            accum[i] += accum[i - 1];
        int res = 0;
        for(int i = 0; i <= m; i++)
            res = max(res, accum[i] + ext[i]);
        cout << res << '\n';
    }
    return 0;
}
```


# VIDEO EDITORIAL:


I think the complexity can be bounded by M \cdot \log(Q), similar to the harmonic series, but I’m not sure.

Edit- I realized there can be multiple values with the same denominator after reading the editorial.

I think that it’ll be M \sqrt{Q} only, as we can have multiple segments of the same length (but with a certain limit on the count for each length). The worst case time complexity works out to M \sqrt{Q}.


This is one of the amazing problems of the year 2021.


I was thinking along the lines of solution 1 for a long time, only to implement the naive solution with some optimizations at the end which passed.

To be honest, it’s kinda frustrating that a better solution is slower than the naive one, but I don’t think this was intended.

Can anybody tell me what’s wrong with my solution:
https://www.codechef.com/viewsolution/42833764

I have used a map so that I have a count of segments.

Can someone please tell me the error here? It’s giving TLE in 2 test cases only.
https://www.codechef.com/viewsolution/42880190

You can cache queries before building your dp array and not repeat if the same query appears multiple times.


Thanks @akshitm16 , it passed

Wow, the last optimization is simple once it comes to mind and may seem trivial, but it surely has a big effect.
