# LCMCONST - Editorial

Author: Noureldin

Editorialist: Noureldin

MEDIUM-HARD

# PREREQUISITES:

Meet in the middle, sum over subsets (SOS), vertex cover

# PROBLEM:

You are given M triplets (X_1, Y_1, B_1), (X_2, Y_2, B_2), \ldots, (X_M, Y_M, B_M). Find the number of sequences of positive integers A_1, A_2, \ldots, A_N such that for each valid i, \mathrm{lcm}(A_{X_i},A_{Y_i}) = B_i, or determine that there is an infinite number of such sequences.

# QUICK EXPLANATION:

The problem can be solved independently for each prime p: we want to determine e_1, \ldots, e_N, where p^{e_i} divides the i-th value but p^{e_i+1} does not, such that the choices don’t contradict any constraint.

The solution can be split into two stages. In the first stage we make logical deductions which determine upper bounds on some of the exponents and exact values for others, and which may detect contradictions. In the second stage, we reduce the remaining problem to a vertex-cover counting problem, which can be solved in O(m \cdot 2^{n/2}).

# EXPLANATION:

Let f(x, p) denote the largest e such that p^e divides x. Each constraint lcm(a, b) = x means that for each prime p, f(a, p) \leq f(x, p) and f(b, p) \leq f(x, p) and at least one of f(a, p) and f(b, p) equals f(x, p).
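As a quick illustration, this per-prime decomposition of an lcm constraint can be checked directly. A minimal self-contained sketch (`f` and `satisfied` are hypothetical helper names, not taken from the setter's code):

```cpp
// f(x, p): the largest e with p^e dividing x, as defined above.
int f(long long x, long long p) {
    int e = 0;
    while (x % p == 0) { x /= p; e++; }
    return e;
}

// One lcm(a, b) = x constraint holds at prime p iff both exponents are
// bounded by f(x, p) and at least one of them attains it.
bool satisfied(long long a, long long b, long long x, long long p) {
    int fa = f(a, p), fb = f(b, p), fx = f(x, p);
    return fa <= fx && fb <= fx && (fa == fx || fb == fx);
}
```

For example, lcm(4, 3) = 12 satisfies the constraint at p = 2 because f(4, 2) = f(12, 2) = 2, while a = b = 2 fails since neither exponent reaches 2.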

Let’s iterate over each prime p that divides any number in the input (other primes don’t affect the answer) and do the following:

For each triplet (X_i, Y_i, B_i), let T_i = f(B_i, p), and let E_x = \min(T_i) over all constraints i with x = X_i or x = Y_i. This means that the largest power of p that can divide the x-th element of a valid array is at most p^{E_x}.
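Computing the bounds E_x is a single pass over the triplets. A minimal sketch (hypothetical names; INT_MAX marks indices that appear in no constraint, which later feeds the "infinitely many sequences" check):

```cpp
#include <vector>
#include <array>
#include <algorithm>
#include <climits>

// tri[i] = {X_i, Y_i, T_i} with 0-based indices and T_i = f(B_i, p).
// Returns E where E[x] = min T_i over constraints touching x
// (INT_MAX if x is unconstrained).
std::vector<int> computeE(int n, const std::vector<std::array<int,3>>& tri) {
    std::vector<int> E(n, INT_MAX);
    for (auto& [x, y, t] : tri) {
        E[x] = std::min(E[x], t);
        E[y] = std::min(E[y], t);
    }
    return E;
}
```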

## First Stage (logical deductions)

Let cnt_i = [E_{X_i} = T_i] + [E_{Y_i} = T_i], so cnt_i \in \{0, 1, 2\}.
If at any moment we detect a triplet with cnt_i = 0, then there is a contradiction and the answer is zero.

If cnt_i = 1, let k be the element of \{X_i, Y_i\} with E_k = T_i. Then f(A_k, p) must equal T_i in any valid assignment, because that is the only way to satisfy the triplet. We then iterate over all triplets that k is a member of and update their cnt values.

If no contradiction is detected, then all triplets that were not visited in this stage have cnt_i = 2; we deal with these in the second stage.
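The whole first stage can be sketched as a queue-based propagation over constraints, mirroring the deduction loop in the setter's solution below (a simplified, self-contained version with assumed names; 0-based indices):

```cpp
#include <vector>
#include <queue>

// First-stage deductions. E[x] is the precomputed upper bound, T[i] the
// target exponent of constraint i between X[i] and Y[i]. Fills
// val[x] = forced exponent, or -1 if x remains free.
// Returns false if a contradiction (cnt_i = 0) is detected.
bool deduce(int n, const std::vector<int>& X, const std::vector<int>& Y,
            const std::vector<int>& T, const std::vector<int>& E,
            std::vector<int>& val) {
    int m = (int)X.size();
    val.assign(n, -1);
    std::vector<bool> done(m, false);
    std::queue<int> q;
    for (int i = 0; i < m; i++) {
        int c = (E[X[i]] == T[i]) + (E[Y[i]] == T[i]);
        if (X[i] == Y[i]) c = (E[X[i]] == T[i]);   // self-loop counts once
        if (c == 0) return false;                   // cnt_i = 0: contradiction
        if (c == 1) { q.push(i); done[i] = true; }  // forced constraint
    }
    while (!q.empty()) {
        int i = q.front(); q.pop();
        if (val[X[i]] == T[i] || val[Y[i]] == T[i]) continue;  // already satisfied
        int k = (val[X[i]] == -1 && E[X[i]] == T[i]) ? X[i]
              : (val[Y[i]] == -1 && E[Y[i]] == T[i]) ? Y[i] : -1;
        if (k == -1) return false;   // no endpoint can reach T[i] any more
        val[k] = T[i];               // the forced assignment
        // fixing k may lower cnt of neighbouring constraints; revisit them
        for (int j = 0; j < m; j++)
            if (!done[j] && (X[j] == k || Y[j] == k)) { done[j] = true; q.push(j); }
    }
    return true;
}
```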

## Second Stage (meet-in-the-middle, SOS)

Let’s build a graph whose vertices are the variables to which we have not yet assigned values, with an edge between two vertices whenever there is a triplet with cnt_i = 2 between them.

Lemma: within each connected component of this graph, the value E_v is the same for all nodes v.
Proof: Assume otherwise. Then there exist two adjacent nodes u, v with E_u \neq E_v; WLOG assume E_u < E_v. If the triplet between them is c, then by definition cnt_c \leq 1, which cannot happen since the first stage does not terminate until it has dealt with all constraints with cnt_c \leq 1.

### Observations:

• Each connected component is independent from other components
• If v is the value on the edges of a connected component, then an assignment in which we choose a subset S of its vertices, set their values to v, and set the values of the other vertices to anything in the half-open range [0, v) is valid if and only if S is a vertex cover of the connected component

Let V be the vertex set of one maximal connected component and E its edge set. If S is a vertex cover of the component, then its contribution to the answer is v^{|V| - |S|}; in other words, the number of valid assignments for this component is \sum_{k = 1}^{|V|} g(k, V, E)\, v^{|V| - k}, where g(k, V, E) counts the vertex covers of size k of the component.

Lemma: g(k, V, E) cannot be computed in polynomial time unless \mathcal{P} = \mathcal{NP}.
Proof: Assume it can be computed in polynomial time. Then for any graph we could binary search for the smallest k for which g is non-zero, obtaining the size of the minimum vertex cover (MVC). But MVC is a known \mathcal{NP}-hard problem, and solving it in polynomial time would imply \mathcal{P} = \mathcal{NP}, a contradiction (unless indeed \mathcal{P} = \mathcal{NP}, which is not known to date).

When dealing with \mathcal{NP}-hard problems, the meet-in-the-middle technique is often helpful. Let’s forget about g, as we will compute the summation without computing its individual terms. First we solve the problem of counting the vertex covers of a graph (V, E), then extend the solution to our weighted version. Split the vertices in half: V_1 = 1..\lfloor\frac{|V|}{2}\rfloor and V_2 = V - V_1.

Let’s try all subsets of vertices of V_1 and check whether each could be part of a vertex cover by iterating over all edges. For an edge (u, v) there are 3 cases:

1. u, v \in V_1 \implies at least one of u or v must be in the subset
2. u, v \notin V_1 \implies do nothing
3. WLOG u \in V_1, v \notin V_1 \implies v must be in whatever subset we choose from V_2

Let V' be the set of vertices from the third case. We then want to count the subsets S_2 of V_2 such that V' \subseteq S_2 and S_2 covers all the edges contained inside V_2. Done naively this takes O(m \cdot 3^{|V|/2}), which is too slow, but with the SOS (sum over subsets) technique it takes O(m \cdot 2^{|V|/2}).

To extend this solution to our problem, notice that in the SOS step dp_{subset} is initially a boolean recording whether the subset covers the edges contained inside V_2. If that boolean is multiplied by v^{|V_2 \setminus \mathrm{subset}|} (each vertex of V_2 left out of the cover contributes a factor of v, its number of free choices in [0, v)), then the answer to our original problem can also be computed in O(m \cdot 2^{|V|/2}).
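Putting the second stage together, here is a simplified self-contained sketch of the meet-in-the-middle count (plain `long long` arithmetic instead of the modular arithmetic a real solution needs; names are illustrative):

```cpp
#include <vector>
#include <utility>
#include <algorithm>

// Returns the sum over all vertex covers S of v^(n - |S|): each vertex
// outside the cover contributes a factor v. Vertices are 0..n-1.
long long countWeightedCovers(int n, const std::vector<std::pair<int,int>>& edges,
                              long long v) {
    int h = n / 2, n2 = n - h;   // V1 = [0, h), V2 = [h, n)
    // f[msk]: v^(# free vertices of V2) if msk covers all edges inside V2, else 0
    std::vector<long long> f(1 << n2, 0);
    for (int msk = 0; msk < (1 << n2); msk++) {
        bool cover = true;
        for (auto [a, b] : edges)
            if (a >= h && b >= h)
                cover &= ((msk >> (a - h)) & 1) || ((msk >> (b - h)) & 1);
        if (!cover) continue;
        long long w = 1;
        for (int i = 0; i < n2; i++) if (!((msk >> i) & 1)) w *= v;
        f[msk] = w;
    }
    // SOS: f[msk] := sum of f over all supersets of msk
    for (int b = 0; b < n2; b++)
        for (int msk = 0; msk < (1 << n2); msk++)
            if (!(msk & (1 << b))) f[msk] += f[msk | (1 << b)];
    long long res = 0;
    for (int msk = 0; msk < (1 << h); msk++) {   // try all subsets of V1
        bool cover = true;
        int need = 0;   // V' = vertices of V2 forced into the cover (case 3)
        for (auto [a, b] : edges) {
            if (a > b) std::swap(a, b);
            if (b < h) cover &= ((msk >> a) & 1) || ((msk >> b) & 1);  // case 1
            else if (a < h && !((msk >> a) & 1)) need |= 1 << (b - h); // case 3
        }
        if (!cover) continue;
        long long w = 1;   // free vertices of V1 each contribute v
        for (int i = 0; i < h; i++) if (!((msk >> i) & 1)) w *= v;
        res += w * f[need];
    }
    return res;
}
```

On a triangle with v = 1 this returns 4, the number of its vertex covers ({0,1}, {0,2}, {1,2} and {0,1,2}).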

# SOLUTIONS:

Setter's Solution
const int MAX = 40, hMAX = (MAX + 1)/2, MAXE = MAX*MAX, MAXM = MAX*MAX;
int f[1 << hMAX];
int fr[MAXE], to[MAXE];

bool in_range(int s, int e, int x){
return s <= x && x <= e;
}

void build(int L, int R, int *f, int *fr, int *to, int m, int *powers){
int N = R-L+1;

for(int msk = 0; msk < (1 << N); msk++) {
f[msk] = 0;
bool is_cover = 1;
loop(e, m) if(in_range(L, R, fr[e]) && in_range(L, R, to[e])){
int a = fr[e] - L;
int b = to[e] - L;
int ia = (msk >> a) & 1;
int ib = (msk >> b) & 1;
is_cover &= ia || ib;
if(!is_cover) break;
}
int cnt = N;
loop(i, N) if((msk >> i) & 1) {
cnt--;
}
f[msk] = powers[cnt] * is_cover;
}
// prArr(f, (1 << N), int);
for(int b = 0;b < N;b ++) {
loop(msk, (1 << N)) if(!(msk & (1 << b))) f[msk] = add(f[msk], f[msk ^ (1 << b)]);
}
// prArr(f, (1 << N), int);
}

int solve(int N, int k, int *fr, int *to, int m) {
static int powers[2*MAX];
powers[0] = 1;
loop(i, N) powers[i+1] = mul(powers[i], k);

build(N/2, N-1, f, fr, to, m, powers);

int n = N/2, res = 0;

loop(msk, (1 << n)) {
int other_msk = 0;
int cnt = n;
loop(i, n) if((msk >> i)&1){
cnt--;
}
bool is_cover = 1;
loop(e, m) {
int a = fr[e], b = to[e];
if(a >= n && b >= n) continue;
if(a < n && b < n) {
int ia = (msk >> a) & 1;
int ib = (msk >> b) & 1;
is_cover &= ia || ib;
continue;
}
if(a > b) swap(a, b);
assert(a < n);
if(msk & (1 << a)) continue;
other_msk |= 1 << (b - n);
}
if(!is_cover) continue;
// free V1 vertices contribute powers[cnt]; f[other_msk] sums the valid completions in V2
res = add(res, mul(powers[cnt], f[other_msk]));
}
return res;
}

int N, M;
bool is_infinite = 0;
map<int, int> Z[MAXM];
int A[MAXM], B[MAXM], target[MAXM];
int lim[MAX], val[MAX];
bool done[MAXM];

queue<int> q;
vi V;
int solve(int p) {
//cerr << "solve " << p << endl;
loop(i, N) lim[i] = INT_MAX, val[i] = -1;
loop(i, M){
target[i] = 0;
if(Z[i].count(p)) target[i] = Z[i][p];
lim[A[i]] = min(lim[A[i]], target[i]);
lim[B[i]] = min(lim[B[i]], target[i]);
done[i] = 0;
}

while(!q.empty()) q.pop();

//cerr << "make q" << endl;
loop(i, M){
int a = A[i], b = B[i];
if(lim[a] > lim[b]) swap(a, b);

int ways = 0;
int fa = lim[a] == target[i];
int fb = lim[b] == target[i];
ways = fa + fb - fa*(a == b);
if(ways == 0) {
return 0;
}
if(ways == 1) q.push(i), done[i] = 1;
}

//cerr << "deduct" << endl;
vi usable;
while(!q.empty()){
int i = q.front(); q.pop();
int a = A[i], b = B[i];
if(lim[a] > lim[b]) swap(a, b);

if(val[a] == target[i] || val[b] == target[i]) continue;
usable.clear();
if(val[a] == -1 && lim[a] == target[i]) usable.pb(a);
if(val[b] == -1 && lim[b] == target[i] && a != b) usable.pb(b);
//cerr << "@" << i << " ";
if(usable.empty()){
return 0;
}
assert(usable.size() == 1);

a = usable[0];

val[a] = target[i];
//cerr << "set " << a << " to " << target[i] << endl;

loop(j, M) if(!done[j] && (a == A[j] || a == B[j])) {
done[j] = 1;
q.push(j);
}
}
loop(i, N) if(lim[i] == INT_MAX) {
is_infinite = 1;
return 0;
}
//cerr << "VC" << endl;
set<int> freeVals;
loop(i, N) if(val[i] == -1) freeVals.insert(lim[i]);
int ret = 1;
for(int v : freeVals) {
V.clear();
loop(i, N) if(val[i] == -1 && lim[i] == v) V.pb(i);
int m = 0;
loop(c, M) if(binary_search(all(V), A[c]) && binary_search(all(V), B[c])){
int a = lower_bound(all(V), A[c]) - V.begin();
int b = lower_bound(all(V), B[c]) - V.begin();
if(a > b) swap(a, b);
fr[m] = a;
to[m] = b;
m++;
}
ret = mul(ret, solve(sz(V), v, fr, to, m));
}
return ret;
}

void test_case(){
set<int> P;
scanf("%d %d", &N, &M);
// cerr << "\t" << N << " " << M << endl;
map<pi, map<int, int> > mem;
int i = 0;
bool return_zero = 0;

loop(j, M){
scanf("%d %d ", A + i, B + i);
A[i]--, B[i]--;
int k; scanf("%d", &k);
Z[i].clear();
for(;k;k--) {
int p, e; scanf("%d %d", &p, &e);
assert(p);
Z[i][p] += e;
P.insert(p);
}
int a = A[i], b = B[i];
if(a > b) swap(a, b);
pi p(a, b);
if(mem.count(p)) {
if(mem[p] != Z[i]) {
return_zero = 1;
}
continue;
}
mem[p] = Z[i];
i++;
}
if(return_zero) {
puts("0");
return;
}
// cerr << "M was " << M << " will become " << i << endl;
M = i;
set<int> constrained;
loop(i, M) constrained.insert(A[i]), constrained.insert(B[i]);
// cerr << constrained.size() << " vs " << N << endl;
is_infinite = sz(constrained) != N;

int res = 1;
for(int p : P) {
res = mul(res, solve(p));
// a prime with zero valid assignments forces answer 0 (unless infinity was detected)
if(res == 0 && !is_infinite) {
puts("0");
return ;
}
}
if(is_infinite) {
puts("-1");
return ;
}
printf("%d\n", res);
}

int main(){
#ifdef HOME
freopen("in.in", "r", stdin);
#endif
int T; scanf("%d", &T);
while(T--) test_case();
return 0;
}


Am I the only one who has used backtracking with a worst case time complexity of O((2^N)*(N^2))?

And still got all 100 points that too in 0.0000s


You’re not alone, I used backtracking with some pruning. my submission. Was really surprised to see it pass

Ya same, I got 20 without pruning but for 100 backtracking with pruning worked magically.


The editorial is very difficult to understand… can someone please explain it in a simple way? And what does i mean in “where p^{e_i} divides the i-th value”???


Neither is there any video solution as far as I know

Can somebody help me by explaining the approach in a simpler way?
Thanks


Will there be a live stream this time?

I did the same too.

@aneee004 hey, can you explain your logic? I want to do this question but am not able to understand the editorial. If you can explain your logic in simpler words, then I may be able to do this question.

Let’s visualize this entire problem as a graph with n nodes (the array elements) and m edges, the LCM constraints. For simplicity, I will assume that there is only one prime p involved. Calculating for multiple primes is just the product of individual contribution.

Once we create our adjacency matrix to represent the powers of the prime, we loop through all vertices. For each vertex, we loop through all its edges and store the minimum power of p that occurs; let’s say this is base_i. We can clearly see that the vertex i (that is, the number A_i) cannot exceed p^{base_i}, as crossing this value would violate the LCM constraint on some edge from vertex i.

We do the same, and store the maximum possible power a node can take, for each node. After this, let’s loop through all the given edges.

If both base_i and base_j are strictly less than edge_{ij}, then this is not possible to achieve. So we can immediately print 0. (Also note that it is not possible for base_i > edge_{ij}, as base_i is the minimum of all edges from i). This leads to some observations and a couple of cases.

1. base_i, base_j \le edge_{ij}

Cases:

• If base_i < edge_{ij} and base_j < edge_{ij}, we print 0.
• If base_i = edge_{ij} and base_j < edge_{ij}, then node i doesn’t really have a choice. To maintain the LCM constraint, node i has to be at its maximum capacity, and node j can be anything between 0 and base_j. (Note that node j might also have a constraint depending on other edges from j).
• If base_i = edge_{ij} and base_j = edge_{ij}, here one of them has to equal to edge_{ij} and the other has a free choice.

After computing all these node constraints, we run a dfs. (Note that there can also be multiple components.) If we hit a node which has only one possibility (Type 2), we continue the dfs.

If we hit a node of Type 3, we first fix that node at base_i and give free will to all its neighbors. After we count this, we backtrack, give free will to node i, constrain all its neighbors, and add this to the final result!

So that’s it! After we run dfs on each component, the product of possibilities in each of them is the result for this particular prime. Multiply the result of all primes for the final answer.


After checking all subsets of V1 for vertex cover, what does this mean in setter’s solution? Can someone please help?

for(int b = 0;b < N;b ++) {
loop(msk, (1 << N)) if(!(msk & (1 << b))) f[msk] = add(f[msk], f[msk ^ (1 << b)]);
}

aneee can u suggest some reading material for problem solving or
describe the way u learnt cp;

I got familiar with the C++ STL by solving some problems on LeetCode.
For DS, and other advanced algorithms, I just learn it from CodeChef editorials as they appear after every contest. (Like this one).

And upsolving one or two problems after a contest always helps.


Hi there, could you guys say how you became so good at solving these kinds of problems, like using backtracking with pruning and all, let alone knowing what to do?
Thanks

Hey

I wanted to raise a concern about the test cases used. Below I am attaching 2 links; both of them are my submissions and are exactly the same (I have checked them using diff on the Linux terminal). The issue is that while one of them is partially correct, the other one gets a WA. There was no announcement during the contest about the test cases being changed. I would appreciate it if the problem author and the editorialist could shed some light on it.

https://www.codechef.com/viewsolution/35547395 - partially correct solution