Help Needed in understanding time complexity of DSU

I have seen a lot of DSU solutions for a problem that implement DSU as shown below.

class Dsu {
    private int[] parent;

    public Dsu(int N) {
        parent = new int[N];
        for (int i = 0; i < N; i++) {
            parent[i] = i; // every node starts as its own root
        }
    }

    // find with path compression: every node on the search path
    // is re-pointed directly at the root
    int find(int u) {
        if (parent[u] != u) return parent[u] = find(parent[u]);
        return parent[u];
    }

    // naive union: always attaches u's root under v's root
    int union(int u, int v) {
        return parent[find(u)] = find(v);
    }
}

I am not able to understand how the time complexity of the find() and union() operations is O(1).
Could someone explain, ideally with a proof, why the time complexity would be constant? Some learning resources on this would also be very useful.

The union operation (the way you've written it) doesn't use union by rank or size. You can improve it by comparing the rank or size of the trees containing u and v before deciding which leader to point at the other.

If you use union by rank/size together with path compression, the amortized time complexity does reduce to O(alpha(n)), where alpha(n) is the inverse Ackermann function. It grows so slowly that it is at most 4 for any practically sized input, which is why people informally call the operations "effectively constant" rather than truly O(1).
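For reference, here is a minimal sketch of the same class with both optimizations applied. The class and field names (DsuBySize, size) are my own choices, not from your code:

```java
// DSU with path compression (in find) and union by size (in union).
// Amortized cost per operation is O(alpha(n)).
class DsuBySize {
    private final int[] parent;
    private final int[] size;

    DsuBySize(int n) {
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) {
            parent[i] = i; // each node is initially its own root
            size[i] = 1;   // every tree starts with one node
        }
    }

    // Path compression: re-point every visited node straight at the root.
    int find(int u) {
        if (parent[u] != u) parent[u] = find(parent[u]);
        return parent[u];
    }

    // Union by size: attach the smaller tree under the larger one,
    // which keeps tree heights O(log n) even without compression.
    void union(int u, int v) {
        int ru = find(u), rv = find(v);
        if (ru == rv) return; // already in the same set
        if (size[ru] < size[rv]) { int t = ru; ru = rv; rv = t; }
        parent[rv] = ru;
        size[ru] += size[rv];
    }
}
```

Note that union is void here: returning the assignment, as in your version, works but hides the fact that union's job is to merge two sets, not to compute a value.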

You can learn about it by watching the Advanced Union-Find videos by Prof. Tim Roughgarden here: Algorithms - Tim Roughgarden | Stanford Lagunita - YouTube

He also talks about the Ackermann function.

Proof:

Have you heard of union by rank and path compression? You can think of them as optimizations over the standard technique.

They have an entire playlist…
