# Judge giving unusual verdict on FCTRE

I solved FCTRE from this Long Challenge using Mo's algorithm, and one of my correct solutions got TLE.

- Block size = 550: AC in 2.94s Here
- Block size = 700: AC in 2.83s Here
- Block size = 630: TLE Here

After the contest, I re-submitted the code with block size 630 and it got AC in 3.29s Here

It would have been understandable if the TLE were within about 1s of the passing times, but here the difference is around 4s.
This would be dangerous in short contests.

(Side note: I believe there should be a certain block size that gives the minimum running time, and that either increasing or decreasing the block size from there should increase the running time. Is that correct?)

6 Likes

Continuing from here. I completely agree that something unusual is going on with the judging time limits. @admin, please look into these issues.

2 Likes

Yes, it's true. On this question the judge is also giving an unusual verdict: my solution has the same complexity as other solutions, but mine gets TLE,
and this solution passes even with BLOCK SIZE > 1000: https://www.codechef.com/viewsolution/31464290

UPD: it works fine now after I updated the comparator function.

I think you misunderstood the post; it is not about why some code gets TLE while having the same complexity as others. Your code might be missing some optimizations, which is a separate issue.

The point is that the exact same code gives different times by a considerable margin, which is a clear fault.

Sorry about that, but you are right: when I changed my comparator function, it worked fine.

Same here. My contest solution got TLE and exactly the same solution got AC after the contest, although in my case the time difference was not that big.

Not necessarily. A subtask's constraints can be intended to accept certain approaches that will not pass the full constraints. That doesn't mean an approach that passes the full constraints would be faster under the subtask constraints.
For example: a problem with two parameters, N and M. The subtask has intended solutions of O(N*M) with constraints N < 5, M = 10^6. The full task has intended solutions of O(M) with constraints N = 1000, M = 10^6. Your O(M) approach will have comparable running time in both blocks.

1 Like

Thanks for reporting.

Running https://www.codechef.com/viewsolution/31853251 many times locally against a single test file gives an execution-time range of about 0.3 seconds, and the memory used also varies by a couple of hundred KB. So my guess is that there is some inherent non-determinism in the code; I haven't read the code, but it could be because of some library calls that are being made.

And the online judge definitely does have its own variance in execution times, due to various factors. Running the code multiple times on the online judge gives an execution time of 3 seconds plus or minus a few tenths, which is reasonable enough. So that explains most of your post, except for the submission that got TLE. Even after submitting it 20 times on the judge, I was not able to reproduce the TLE, which most probably means it was an anomaly in the judge. These do happen, mostly due to issues in the kernel, and are very rare. We will investigate this further, though.

2 Likes

What I wanted to know is: for the same constraints and the same test files (consider subtask 3 of the mentioned problem), i.e. for a single test file, how does block size vs. running time behave?

Thanks for considering the issue.

The reason the memory used varies locally might be a caching issue, as there are many accesses to an array of size 10^6.

I checked my code twice and didn't find any non-determinism in it; the library calls made are mainly getchar, sort, and memset.

This has happened again in DSA Learning Series Contest 3, Problem : Yet Again a Subarray Problem.

I approached this question using ordered_set, which was giving TLE, so I looked into its editorial and copied someone's code to check the difference.

Submission during the Long Challenge (March 19): AC in 1.74s Here.

Submission in the DSA Learning Series: TLE Here

I faced a similar issue with this question. I was experimenting with various optimisations to lower the run time of my code, and I accidentally submitted the exact same code that had already gotten AC (in around 5.5s). The same code gave TLE; to confirm this, I tried it again a few times, and the verdict was rather unstable, getting AC with run times between 5.5s and 7.3s and getting TLE with roughly equal probability. I really want to know the logic behind this.

1 Like

If the memory used differs for some reason, then I don't see what's unreasonable about expecting the execution time to also differ by a couple of tenths of a second because of it. That's also part of what I broadly called 'non-determinism': if the same code, run multiple times on the same machine in the same environment, behaves differently, I call that non-determinism, and there's really nothing that can be done about it. I'm not saying there's something wrong with your code; I'm just trying to explain the few tenths of a second of difference.

1 Like

Running that code 10 times gives AC 4 times, with execution times above 1.97 seconds, and TLE the other 6 times. That looks pretty consistent, because the execution time's range probably straddles 2 seconds, which is the time limit.

The only outlier is the earlier submission's time, and that's probably just because of checker configurations being tweaked over time. A couple of tenths of a second of difference over more than a year is not particularly unreasonable.

1 Like

Please run your code locally using some large random data, and if the execution times are consistent there, then please send us an email with the details and we'll investigate.

More often than not, these differences are due to some non-determinism in the code/library/process, as mentioned above.

1 Like

I got your point. Thank you.

1 Like