Being new to competitive programming, and knowing nothing about the methods described here (or at least not by the abbreviations used), I developed a completely different method based on my previous experience of flood-filling. This scored me 100 points, completing in 1.66 seconds. If anyone is interested please see my submitted solution (in C#), or ask me to explain it further here.
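For anyone unfamiliar with flood-filling, here is a minimal, generic sketch in Java (not the C# submission referred to above; the grid representation and all names are illustrative assumptions only):

```java
import java.util.ArrayDeque;

// Generic 4-directional flood fill over an int grid (illustrative only).
public class FloodFill {
    static final int[] DR = {1, -1, 0, 0};
    static final int[] DC = {0, 0, 1, -1};

    // Returns the number of cells in the connected region of equal values
    // containing (sr, sc), marking them in 'seen'.
    static int fill(int[][] grid, boolean[][] seen, int sr, int sc) {
        int n = grid.length, m = grid[0].length, value = grid[sr][sc], count = 0;
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{sr, sc});
        seen[sr][sc] = true;
        while (!queue.isEmpty()) {
            int[] cell = queue.poll();
            count++;
            for (int d = 0; d < 4; d++) {
                int r = cell[0] + DR[d], c = cell[1] + DC[d];
                if (r >= 0 && r < n && c >= 0 && c < m
                        && !seen[r][c] && grid[r][c] == value) {
                    seen[r][c] = true;
                    queue.add(new int[]{r, c});
                }
            }
        }
        return count;
    }
}
```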
I think there's another way to look at the question. I've traversed the matrix blocks (of two unique numbers) only once. I don't really know BFS or DFS, so here's a simple approach.
Here's my solution link.
For example, in the matrix

1 2 3
2 4 5

the traversal looks at neighbouring pairs like this:

1-2 (right), 1-2 (down),
2-3 (right), 2-4 (down),
3-5 (down),
2-4 (right), 4-5 (right)
There is repeated traversal only of those cells which form more than one block.
For single-number blocks I've defined a simple separate function.
P.S. It's my first post, so please excuse me if my approach is not clear.
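To make the idea concrete, here is a minimal Java sketch of the pair enumeration shown in the example above. This is my reconstruction, and the process() step is a hypothetical placeholder for whatever per-pair work the approach actually does:

```java
// Each cell is paired once with its right neighbour and once with its bottom
// neighbour, so every neighbouring pair in the grid is produced exactly once.
public class PairTraversal {
    static void visitAdjacentPairs(int[][] a) {
        int n = a.length, m = a[0].length;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                if (j + 1 < m) process(a[i][j], a[i][j + 1]); // pair to the right
                if (i + 1 < n) process(a[i][j], a[i + 1][j]); // pair below
            }
        }
    }

    // Hypothetical placeholder for the per-pair work
    // (e.g. counting blocks made of the two values x and y).
    static void process(int x, int y) {
        System.out.println(x + "-" + y);
    }
}
```

On the 2x3 matrix above this prints exactly the seven pairs listed in the example.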
I didn't do anything special to handle such a case, and when I tested my solution on a test case constructed like this, with N and M both on the order of 1000, it ran for more than two minutes and I had to kill it eventually… So I'm pretty sure my solution is O(N^2 * M^2). Yet it passed with 100 points in the contest, which may mean the test cases were weak?
Agree… I found it medium…
Update: You are correct, @akamoha. The test cases were weak. Thanks for sharing a similar test case. I have updated the editorial with a warning too.
Can you share the input file? (e.g. on Google Drive or Dropbox)
This is pointed out in the editorial. Please check again.
Yeah, I've seen that. Thank you!
I will have to ask the Codechef team and author to share the counter test case.
The union-find algorithm runs in near-linear amortised time only if it is implemented with union by rank/size, i.e. merging the smaller set into the larger one (usually together with path compression). But in practice even random merging works fine, and it is tough to generate a test case against it.
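For reference, a minimal Java sketch of union-find with union by size and path compression; the names are illustrative and this is not code from any solution in this thread:

```java
// Disjoint-set union with "merge smaller into larger" and path halving.
class DisjointSet {
    private final int[] parent;
    private final int[] size;

    DisjointSet(int n) {
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) { parent[i] = i; size[i] = 1; }
    }

    int find(int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]]; // path halving keeps trees shallow
            x = parent[x];
        }
        return x;
    }

    boolean union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return false;
        if (size[ra] < size[rb]) { int t = ra; ra = rb; rb = t; }
        parent[rb] = ra;          // attach the smaller root under the larger
        size[ra] += size[rb];
        return true;
    }
}
```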
I converted the editorialist's solution to Java, but in Java the same code gives TLE. Here is the link: CodeChef: Practical coding for everyone
You may want to check out my code xD
You can optimise the code. Obvious things are reducing function calls and replacing Math.max() etc. with the ternary operator. Also, remember that the constant factor associated with hashing is usually high. You can have a look at my code for reference.
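For illustration, two micro-optimisations of the kind described above, with hypothetical names; the array-counting trick only applies when the key range is small and known in advance:

```java
// Illustrative micro-optimisations: inline comparison instead of Math.max(),
// and a plain array instead of a HashMap to avoid the hashing constant.
public class MicroOpt {
    static int maxInline(int a, int b) {
        return (a > b) ? a : b;                 // instead of Math.max(a, b)
    }

    static int[] countValues(int[] values, int maxValue) {
        int[] count = new int[maxValue + 1];    // array lookup, no hashing
        for (int v : values) count[v]++;        // instead of map.merge(v, 1, Integer::sum)
        return count;
    }
}
```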
I was talking about akamoha's test case, the one which is mentioned in the comment above.
We perform one DSU operation per edge and, when processing a set of edges, reset only the values that have been affected.
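A rough Java sketch of that "reset only what was touched" idea, assuming union by size and a list of modified indices; this is a reconstruction for illustration, not the author's exact code:

```java
import java.util.ArrayList;
import java.util.List;

// DSU arrays are sized once for the whole grid, but after processing one set
// of edges only the entries that were actually modified are restored, so each
// batch costs time proportional to its own size.
class ResettableDSU {
    private final int[] parent;
    private final int[] size;
    private final List<Integer> touched = new ArrayList<>();

    ResettableDSU(int n) {
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) { parent[i] = i; size[i] = 1; }
    }

    int find(int x) {
        while (parent[x] != x) x = parent[x];
        return x;
    }

    void union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return;
        if (size[ra] < size[rb]) { int t = ra; ra = rb; rb = t; }
        parent[rb] = ra;
        size[ra] += size[rb];
        touched.add(ra);           // remember every entry we changed
        touched.add(rb);
    }

    // Undo the effect of the current batch of edges in O(#touched).
    void resetTouched() {
        for (int v : touched) { parent[v] = v; size[v] = 1; }
        touched.clear();
    }
}
```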
Yeah, you are right.
Corrected.
@smartnj, you are correct. My implementation would give TLE, because I traverse each component once for every element it contains. The test case provided above is built on a similar idea, with a lot of 1's clustered close to each other.