#include<bits/stdc++.h>
#define ll long long int
#define pb push_back
#define vi vector<ll>
#define vb vector<bool>
#define vd vector<double>
#define vc vector<char>
#define vii vector<vi>
#define mp make_pair
#define vpi vector< pair<ll,ll> >
#define take_input freopen("input.txt", "r", stdin)
#define give_output freopen("output.txt", "w", stdout)
#define fastIO ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL)
#define fi first
#define se second
#define mod 1000000007
#define min_pql priority_queue< ll, vector<ll>, greater<ll> >
using namespace std;
using namespace std::chrono;
vi par(200001, 0);
vi rnk(200001, 0);
vi val(200001, 0);
vii teams(200001);
void make_set(int n){
    for(int i=1; i<=n; i++){
        par[i] = i;
        rnk[i] = 1;
        teams[i].pb(i);
    }
}
int find_set(int a){
    if(a == par[a]) return a;
    return par[a] = find_set(par[a]);
}
void union_set(int a, int b){
    a = find_set(a);
    b = find_set(b);
    if(a != b){
        if(rnk[a] > rnk[b]){
            par[b] = a;
            for(ll i:teams[b]){          // move every member of b's team into a's list
                teams[a].pb(i);
            }
            teams[b] = {b};
            rnk[a] += rnk[b];
        }else{
            par[a] = b;
            for(ll i:teams[a]){
                teams[b].pb(i);
            }
            teams[a] = {a};
            rnk[b] += rnk[a];
        }
    }
}
int main(){
    fastIO;
    // take_input;
    // give_output;
    int n, m; cin >> n >> m;
    make_set(n);
    for(int i=0; i<m; i++){
        string s; cin >> s;
        if(s == "add"){
            int a,b; cin >> a >> b;
            a = find_set(a);
            for(ll i:teams[a]){          // adds b to every team member, O(team size) per query
                val[i] += b;
            }
        }else if(s == "join"){
            int a,b; cin >> a >> b;
            union_set(a, b);
        }else if(s == "get"){
            int a; cin >> a;
            cout << val[a] << endl;
        }
    }
}
I used a 2D vector named teams which holds all the members of each DSU component under its leader. But I am getting TLE. What can be an optimized approach?
I just saw the problem statement. You're supposed to add b points to all members of the team to which player a belongs. Can we do it more efficiently?
Try using another array that keeps track of the experience points. I'm not sure whether I'd run into a bug with this approach; I should probably try solving this problem myself.
Okay, so we can solve this problem by augmenting the usual disjoint set data structure.
Notation
F(X) returns the root of the component in which X is present.
A[X] (X is the root of its component) represents the list of all nodes which are present in the component of X.
D[X] (X is the root of its component) is an array that stores the total pending value that is yet to be propagated to all elements of the component.
R[X] is the total experience value that has been propagated to node X.
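To make the notation concrete, here is roughly how that state could be declared in C++. This is just a sketch; the names F, A, D, R, par and make_sets mirror the definitions above rather than anything from the original code.
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 200001;                               // matches the array size in the code above
vector<long long> par(MAXN), R(MAXN, 0), D(MAXN, 0);   // parent, propagated value R[x], pending value D[x] (used at roots)
vector<vector<long long>> A(MAXN);                     // A[x]: members of x's component (valid only while x is a root)

// F(x): root of the component containing x, with path compression
long long F(long long x) {
    return par[x] == x ? x : par[x] = F(par[x]);
}

void make_sets(int n) {                                // n singleton components
    for (int i = 1; i <= n; i++) {
        par[i] = i;
        A[i] = {i};
    }
}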
So we’ll look at get and add operations first as they’re comparatively easier to understand.
Get Operation:
It's pretty clear that the experience value of X will be the value already propagated to X plus the pending value which is yet to be propagated to all nodes in the component containing X.
\therefore get(X)=R[X]+D[F(X)]
Add Operation:
To add a given value V to all nodes in the component of X, we'll be a little clever: instead of adding V to every node in the component, we'll increase D[F(X)] by V, i.e. increase the pending value at the root of X's component by V.
add(X,V) \implies D[F(X)]=D[F(X)]+V
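In code, both operations become one-liners. A minimal sketch, assuming the same global state as in the snippet above (get_exp and add_exp are illustrative names, not from the original code):
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 200001;
vector<long long> par(MAXN), R(MAXN, 0), D(MAXN, 0);   // parent, propagated value, pending value at root
long long F(long long x) { return par[x] == x ? x : par[x] = F(par[x]); }

// get(X) = R[X] + D[F(X)]: the value already propagated to X plus the value still pending at its root
long long get_exp(long long x) {
    return R[x] + D[F(x)];
}

// add(X, V): park V at the root instead of touching every member of the component
void add_exp(long long x, long long v) {
    D[F(x)] += v;
}
This turns add into a single root lookup instead of the loop over the whole team that was causing the TLE in the original code.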
Join Operation \implies join(X,Y):
First, we'll transfer all elements of the list A[X] to A[Y] (in practice, always moving the smaller of the two lists into the larger one, which the complexity analysis below relies on).
Pass on the pending value of X's component to every transferred element, i.e., for each u \in A[X], R[u] += D[F(X)].
Subtract D[F(Y)] from every transferred element, i.e., for each u \in A[X], R[u] -= D[F(Y)]. Note that the get operation adds the pending value stored at the (new) root, so without this step D[F(Y)] would be counted twice for the transferred nodes.
Do the rest of the standard DSU merge (re-linking the root and updating sizes) as usual; a sketch of the whole join is given below.
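Putting the three steps together, the whole join could look like the sketch below (again with illustrative names, and always moving the smaller member list into the larger one):
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 200001;
vector<long long> par(MAXN), R(MAXN, 0), D(MAXN, 0);   // parent, propagated value, pending value at root
vector<vector<long long>> A(MAXN);                     // member lists, valid at roots only
long long F(long long x) { return par[x] == x ? x : par[x] = F(par[x]); }

void join(long long x, long long y) {
    long long rx = F(x), ry = F(y);
    if (rx == ry) return;                              // already in the same component
    if (A[rx].size() > A[ry].size()) swap(rx, ry);     // make rx the smaller component
    for (long long u : A[rx]) {
        R[u] += D[rx];                                 // settle rx's pending value on every moved node
        R[u] -= D[ry];                                 // cancel ry's pending value, which get() will add back later
        A[ry].push_back(u);                            // transfer the node into ry's list
    }
    A[rx].clear();
    D[rx] = 0;                                         // rx's pending value has been spent
    par[rx] = ry;                                      // standard DSU link
}
Resetting D[rx] is only tidiness, since rx never acts as a root again after being linked; correctness comes from the two adjustments to R[u].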
Time Complexity analysis.
Adding complexity: \mathcal{O}(\log N)
Query complexity: \mathcal{O}(\log N)
Joining complexity: \mathcal{O}(N\log N)
Hence a naive analysis might give the impression that this solution should TLE, as the worst-case complexity of the entire solution looks like \mathcal{O}(Q \cdot N\log N). But that's not the case, thanks to the following observation.
There are at most N-1 join operations that actually merge two different components, and whenever a node is moved to another component (we always move the smaller member list into the larger one), the size of the component it belongs to at least doubles. Therefore a particular node is moved at most \mathcal{O}(\log N) times, and the total complexity of all the joining-type queries is \mathcal{O}(N\log^2 N).
Hence the correct complexity of solving the entire problem (considering the contributions of the query types separately) is \mathcal{O}(N\log^2 N + Q\log N), which fits in the bounds pretty well.
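To put rough numbers on that, assume N and Q are both around 2\cdot10^5 (N matches the array size used in the code above; the value of Q is my assumption). Then \log N \approx 18, so N\log^2 N \approx 2\cdot10^5 \cdot 324 \approx 6.5\cdot10^7 and Q\log N \approx 3.6\cdot10^6, comfortably below a typical budget of roughly 10^8 simple operations.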
I'd tried a similar approach, but instead, at the time of joining, I first distribute the pending XP from node x's parent to all of its children (and similarly for node y), and then reset the pending XP of both x's and y's parents to 0.
Though I'm getting RE (runtime error) on test case 2.
Here's my code:
#include <bits/stdc++.h>
#define int int64_t
using namespace std;
void setIO() {
    cin.tie(0)->sync_with_stdio(0);
}
clock_t tStart;                         // timer start; at file scope so setOJ() can read it
void setIJ() {
#ifdef ONLINEJUDGE
    tStart = clock();
    freopen("input.txt", "r", stdin);   // may need to change the file name; this one is for taking input
    freopen("output.txt", "w", stdout); // this one is for output
#endif
}
void setOJ() {
#ifdef ONLINEJUDGE
    fprintf(stderr, "\n>> Runtime: %.10fs\n", (double)(clock() - tStart) / CLOCKS_PER_SEC); // this line gives your code's runtime
#endif
}
class DSU {
private:
    vector<int> parents;
    vector<int> sizes;
    vector<int> exp, exp_;