MINDIFF1 - Editorial

#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n, m;
    cin >> n >> m;

    vector<vector<int>> a(n);
    vector<int> in(n, 0);
    for (int i = 0; i < m; i++) {
        int u, v;
        cin >> u >> v;
        u--;
        v--;
        a[u].push_back(v);
        a[v].push_back(u);
        in[u]++;
        in[v]++;
    }

    // Order nodes by remaining (unassigned-neighbour) degree.
    set<pair<int, int>> st;
    for (int i = 0; i < n; i++) {
        st.insert(make_pair(in[i], i));
    }

    int ans = 0;
    vector<int> ans_a(n);
    int fill_no = n + 1;

    while (!st.empty()) {
        fill_no--;
        // Copy the minimum element before erasing it: dereferencing
        // the iterator after erase() is undefined behaviour.
        auto curr = *st.begin();
        st.erase(st.begin());

        int adj_nodes_cnt = curr.first;
        int curr_node = curr.second;

        // Give the largest unused label to the node with the fewest
        // unassigned neighbours.
        ans = max(ans, adj_nodes_cnt);
        ans_a[curr_node] = fill_no;

        for (auto adj_node : a[curr_node]) {
            if (in[adj_node] > 0) {  // neighbour still unassigned
                in[adj_node]--;
                in[curr_node]--;
                st.erase(make_pair(in[adj_node] + 1, adj_node));
                st.insert(make_pair(in[adj_node], adj_node));
            }
        }
    }

    cout << ans << '\n';
    for (int i = 0; i < n; i++) {
        cout << ans_a[i] << " ";
    }
    cout << '\n';
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int tc;
    cin >> tc;

    while (tc--) {
        solve();
    }

    return 0;
}

Exactly what I had thought, but I couldn't execute it efficiently.

I have a very simple solution for the solve(k) function mentioned in the editorial that works in O(M).
As mentioned in the official editorial, we assign labels to the nodes in decreasing order, but maintaining a queue instead of a set.
Let's say we check whether max D_i <= k is achievable.
We maintain the number of unassigned neighbours of each node, starting with unassigned[i] = degree[i]. When we assign a label to node u, we update unassigned[v]-- for every neighbour v of u. We also maintain a queue of all nodes whose number of unassigned neighbours is <= k; this is easy to check when performing unassigned[v]--, and qualifying nodes are added to the queue at that point. If the whole graph ends up assigned, then we can safely declare that it is possible to label each node so that max D_i <= k.
Combined with binary search, the time complexity is O(M log N).
My implementation: here

Why the greedy is correct? Would someone please help explain?

Why does just assigning larger values to nodes having lower degrees work?

Because in any assignment, the order does not matter; only the set of assigned nodes at the end of the assignment matters. This is because we are already assuming that the answer (max D) is k. The number of unassigned neighbors can only decrease or stay the same after each assignment. So any vertex that has at most k unassigned neighbors at any point can be assigned a value; we can assign it now or later, it does not matter. You can think of each value of k as having a set of nodes that are "assignable", and we are trying to find the smallest value of k such that this set includes all the nodes of the graph.


I wouldn't even call it greedy: we're choosing literally any possible "move", rather than one that is maximal/minimal in some sense. A simple explanation of why it works: if we're stuck at some point, then the graph we're left with is an induced subgraph of the original graph with all degrees > K. But if there is such a subgraph, no possible sequence of assignments can get rid of it: at the point when we remove the first vertex of that subgraph, its degree is > K.

I sorted nodes on the basis of their count of neighbors and assigned n to the node having the smallest value.
I also got the correct answer for your mentioned TC,
but got WA while submitting.
link: CodeChef: Practical coding for everyone
Where does it fail, then?

1
5 4
1 3
3 2
2 4
4 5
The answer will be 1.
@sourabh_0123 yours is 2.


One observation that helps speed my solution along slightly is that
\sum D_i = M
since every edge contributes 1 to the D value of one of the nodes it links.

We also know that the node v that receives the label 1 will have D_v=0.

So my process goes as follows:

  • Assign the top numbers to any isolated node (with no edges), just to get them out of the way.
  • Now calculate the smallest possible value of D_{max} from the pigeonhole principle applied to the remaining nodes (reduced by one as above). For example, with 100 edges and 29 nodes, we know that D_{max} is at least (100/28) rounded up; in general D_{max} \ge \lceil M / (N_{con}{-}1) \rceil (where N_{con} is the count of nodes that have at least one edge).
  • You should also get the minimum degree of the remnant graph. Both of these are lower bounds, so you can set D_{max} to the larger.
  • Gather all nodes with degree \le D_{max} into an action set, assign labels, adjusting the adjacent nodes' degrees as the assignments are made, and add those adjacent nodes to the action set if their reduced degree qualifies.
  • If there are still unlabelled nodes, iterate on this reduced set of nodes/edges to find an increased D_{max}.

At each stage we know that we absolutely need to have D_{max} at the level we set it, so we are definitively making it as small as possible, and subsequently any remaining graph needing a higher D_{max} has had as many nodes pruned off it as possible, so we can treat that in the same way.

In principle the iteration for large graphs might be accelerated by detecting connected components (at each stage) and treating these separately.

My Python solution (which only uses pigeonhole principle first time, when it’s likely to have most benefit, and actual minimum remaining degree thereafter)

It will work if you check the degree of every node at each visit of its neighboring node.

I didn't get you. I am finding the degree once and then assigning the values as explained above. Are you saying the same?

thank you