ANDOOR - Editorial

Ahhh! 125 submissions and no AC (failed after passing 7 test files). I have yet to find my mistake. Could you help me, @anton_lunyov?

P.S.: My code is OK with the test case given above.

Thanks!

It turns out that using (22.0/7.0) gives a different value of pi than acos(-1), causing the precision problem in my solution. Shouldn’t 22.0/7.0 be more accurate?
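
For reference, a quick standalone check of the two values (nothing problem-specific assumed here):

    #include <cmath>
    #include <cstdio>

    int main() {
        double approx = 22.0 / 7.0;  // 3.142857142857...
        double pi = acos(-1.0);      // 3.141592653589793, correct to double precision
        printf("22/7     = %.15f\n", approx);
        printf("acos(-1) = %.15f\n", pi);
        printf("diff     = %.2e\n", approx - pi);  // about 1.26e-3
        return 0;
    }

So 22/7 agrees with pi only to about 1.3e-3, far too coarse for the precision this problem demands.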

I’m still not clear on why either of the provided solutions is correct.

You have shown in the editorial that if the intersection between two circles is actually zero, then using doubles for comparisons can result in a WA: tiny errors in doubles can cause a tiny error in the size of the intersection, small but enough to change the verdict.

Consider two circles which do intersect. You have no choice but to use doubles, and tiny errors in doubles will cause a tiny error in the length of the intersection. Unless I’m mistaken, there could be just over 2 million intersections, all of which contribute to the final sum, so each intersection needs to be calculated to better than 10^-12 accuracy.

Can you prove that both provided solutions achieve this? Trig functions can quickly amplify any errors, as you also noted.

Feel free to correct my logic if this is not true.

1 Like

@anton_lunyov, I’d really appreciate seeing the test data on which my code didn’t match. I really need to know what was wrong.

Thanks!

P.S. I generated more than 20 challenging cases. My results, including on the case above, match those of successful submissions.

My reply to triplem.

In short, the reason is that when the circles genuinely intersect, acos() is computed for numbers that are not very close to the critical points 1 and −1, and hence the effects described above are not so severe.

When we calculate acos((W-X)/R) and the three similar expressions, the argument is at least 1e-5 away from the critical points. Since d/dx acos(x) = −1/sqrt(1 − x^2), an input error of 1e-17 for a number of the form x = 1 − 1e-5 is amplified by roughly 1/sqrt(2e-5), about 224, leading to an error of about 2e-15 in acos(x). Which IMHO is safe.

But when we calculate the intersection of two circles, the situation is much more interesting.
Even for the test I posted above:

1000 1000 2
10.00 10.00 0.02
674.94 756.87 1000.00

We need to calculate acos of a number like 1 − 1e-15 here:
double alp = acos((sqr(Ri) + Dij - sqr(Rj)) / (2 * Ri * sqrt(double(Dij))));
An input error of 1e-17 for a number of the form x = 1 − 1e-15 leads to an error of about 2e-10 in acos(x), which could be too large.
But in this test case we only need the answer with relative error 1e-9, so this formula still produces the correct result.
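
To make these two amplification figures concrete, here is a small standalone sketch (my own illustration, not part of any solution) evaluating the factor |d acos(x)/dx| = 1/sqrt(1 − x^2) at both distances from the critical point mentioned above:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double input_err = 1e-17;        // last-bit error of a double near 1
        const double dists[] = {1e-5, 1e-15};  // distances from the critical point
        for (double e : dists) {
            double x = 1.0 - e;
            double amp = 1.0 / sqrt(1.0 - x * x);  // |d acos(x)/dx| at x
            printf("x = 1 - %.0e: acos error ~ %.1e\n", e, input_err * amp);
        }
        return 0;
    }

It prints roughly 2e-15 for the rectangle-side case and 2e-10 for the circle-circle case, matching the estimates above.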

I have come up with another solution where I can guarantee precision near 1e-13 for every possible test case, and I used it to verify that our answers are correct. So the fact that all solutions pass is some combination of weak test data and luck on certain critical tests :slight_smile:

So I can guarantee that each AC solution produces the correct answer on the official test data.
But the tests are weak, and it is probably possible to create a set of test data on which all solutions, including the author’s and the tester’s, produce incorrect results.

My new solution is quite tricky.
So it is very sad for me that I did not have enough time to investigate all this before the contest.
I see now that with 5-6 digits after the decimal point in the input coordinates, and a required absolute or relative error of 1e-12, this could be a really HARD problem from the precision perspective, though it looks quite straightforward at first sight.

I don’t want to disclose my new solution, since I wish to post a similar problem in the near future, though I am afraid that this problem has already revealed too many tricks.

P.S.
Also, I forgot to mention that the number of arcs in the answer is unlikely to be anywhere close to a million. Note that exactly this value affects the resulting precision, since the answer is the sum of the lengths of all the separate arcs on each circle. The maximum in our test data is near 5000, and I have no idea what the real maximum is.

Also, I don’t know exactly what the maximal possible answer could be under the given constraints. My guess is something like O(W * sqrt(N)); the construction achieving this is non-intersecting circles arranged in a square grid. We have such a test in the system, and its answer is near 99000. Actually, many of you got WA on it due to the acos() trick described in the editorial.
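
As a sanity check of that guess (my own back-of-the-envelope construction; W = 1000 and N = 1000 are assumed here purely for illustration): a sqrt(N) x sqrt(N) grid of touching circles of radius W/(2*sqrt(N)) has total perimeter N * 2*pi * W/(2*sqrt(N)) = pi * W * sqrt(N):

    #include <cmath>
    #include <cstdio>

    int main() {
        double W = 1000, N = 1000;               // assumed values, for illustration only
        double r = W / (2 * sqrt(N));            // radius of each grid circle
        double total = N * 2 * acos(-1.0) * r;   // = pi * W * sqrt(N)
        printf("%.0f\n", total);                 // prints 99346, near the ~99000 above
        return 0;
    }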

So there are many open problems left here if someone wants to design the toughest possible test data.

3 Likes

@anton_lunyov: Please share a failing test case for Submission ID 1721687.

Is there not any test case where three or more circles intersect with each other, like http://www.daviddarling.info/images/Venn_diagram.gif ?
In such a case, if we find the intersection points of C and C’ and add the corresponding arc on C to the list of arcs that should be erased, then some part of the arc will be removed two or more times.
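
One standard way to avoid that double removal (a hedged sketch of my own, not the editorial’s code) is to collect the erased arcs of each circle as angle intervals and measure their union, so a region covered by several other circles is subtracted only once:

    #include <algorithm>
    #include <utility>
    #include <vector>

    // Total angle covered by the union of [l, r] arcs on one circle.
    // Arcs are assumed normalized so that 0 <= l <= r <= 2*pi
    // (a wrap-around arc is split into two pieces beforehand).
    double coveredAngle(std::vector<std::pair<double, double>> arcs) {
        std::sort(arcs.begin(), arcs.end());
        double total = 0, curL = 0, curR = 0;
        bool open = false;
        for (const auto& a : arcs) {
            if (!open || a.first > curR) {    // disjoint: start a new merged run
                if (open) total += curR - curL;
                curL = a.first;
                curR = a.second;
                open = true;
            } else {                          // overlapping: extend the current run
                curR = std::max(curR, a.second);
            }
        }
        if (open) total += curR - curL;
        return total;  // visible length on this circle = (2*pi - total) * R
    }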

Hi, eventually I found the bug in my code from the contest:
the fix was simply to delete a segment of code handling circles outside the rectangle.

But still, I cannot figure out why my previous method was wrong.
Can anyone help me find that? Thanks a lot!

Correct:
http://www.codechef.com/viewsolution/1744236
Wrong (but I don’t know the reason):
http://www.codechef.com/viewsolution/1744238

IMHO this issue with acos() is a fair trick for a high-quality problem. If you are so smart, then suggest some “quality contest problem” with precision issues yourself. I am sick of your unfair critical comments! Really!

5 Likes

Hey guys, don’t fight! I made 100 wrong submissions and am still laughing, and maybe I will learn a few things after reading this editorial. I have yet to figure out my mistake. In the last two days of the contest I solved 4-5 easy problems from the easy section of practice to lift my mood after the terrible loss of time and energy, with no points gained on this problem. Cheers, keep the energy positive :slight_smile:

10 Likes

^I agree with this completely!!!

We are here to learn new things and enjoy problems, not to fight with each other! :))

Also, this community works harder than any other, and the problems are BETTER than some found at CGJ imho… So, let’s just enjoy this.

I think that precision problems are fair game in a 10-day contest with an unlimited number of attempts. Actually, they deserve a place here.

However, I’m still not sure where triplem’s last WA submission failed. Long doubles saved the day in the end, but his solution seems to handle all the mentioned precision issues correctly.

Actually, he falls exactly into the situation explained in paragraph 2.
Note that he performs the right-side intersection check as
if (circ[i][0]+circ[i][2]>W) {
where circ[i][0] and circ[i][2] are doubles, while the integers testcirc[i][j] are used only for some intermediate checks.
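
For contrast, here is a minimal sketch (my own illustration, not triplem’s actual code) of the exact check this implies: since the inputs have two digits after the decimal point, coordinates scaled by 100 fit in integers, and the boundary comparison cannot be flipped by rounding:

    #include <cstdio>

    int main() {
        // All values scaled by 100 and stored as integers.
        long long W = 1000 * 100;      // right border of the rectangle
        long long x = 99998, r = 3;    // circle center x = 999.98, radius 0.03
        if (x + r > W) {               // exact integer comparison, no rounding
            puts("circle crosses the right side");
        } else {
            puts("circle stays inside (or is tangent)");
        }
        return 0;
    }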

This is one of the reasons why I am angry with him: I explained exactly this bug in the editorial, but he still considers this a crappy precision problem. BTW, I still haven’t forgotten his comments on MAXCIR and CLOSEST.

2 Likes

1000 1000 2
10.00 10.00 0.02
674.94 756.87 1000.00
Our answer is 28.5976846
While your answer is 28.5976824

Thanks. It’s like a balm to the soul :slight_smile:
Actually, we did not expect any such precision issues with this problem.
I was quite busy with other problems, so I forgot to consider this one carefully from that perspective.
I only noticed that we ask for 1e-11 relative error in the worst case.
Since the working precision of double is about 1e-16, I was absolutely sure that everything would be fine.
The acos() trick was noticed only in the middle of the contest.

Actually, I did some additional investigation. So you can expect the evilest precision problem ever from me in the near future :wink:

2 Likes

Thanks a lot! It turns out I was very lucky with my AC submission. In fact, my AC submission has more rounding errors, but they pull in opposite directions :slight_smile:

So I just googled it: 22/7 matches pi only to about 1.3e-3, far too coarse for the precision required here. Huge lesson learned.
Great problem altogether! Lots to learn; geometry + programming is one of my weak points.

1 Like

I apologise for the above post; it wasn’t called for. (Though I’m not sure what you mean regarding MAXCIR/CLOSEST; in MAXCIR I mentioned an issue with unsigned integers that you and everyone else seemed to agree with, rather than complaining about anything.) My more thought-out response is below.

1 Like

I think you cannot prove that these solutions are 100% correct, but one may reason as follows to “show” that a big error is unlikely to occur:

  • each precision error can be seen as a random value, say uniform in the interval (−eps, eps);

  • if we sum n such values, then by the CLT the probability that the total error exceeds 3·eps·sqrt(n) is negligible (see the sketch below).
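
As a quick illustration of this argument (a sketch of my own; n and eps are assumed values, not taken from the problem), one can simulate the sum of n uniform errors and compare it with the 3·eps·sqrt(n) bound:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        const int n = 1000000;        // assumed number of summed error terms
        const double eps = 1e-16;     // assumed per-term error magnitude
        std::mt19937_64 rng(12345);
        std::uniform_real_distribution<double> err(-eps, eps);

        double worst = 0;
        for (int t = 0; t < 100; ++t) {            // 100 simulated test files
            double total = 0;
            for (int i = 0; i < n; ++i) total += err(rng);
            worst = std::max(worst, std::fabs(total));
        }
        // The sum has standard deviation eps*sqrt(n/3) ~ 5.8e-14, so even the
        // worst observed total should stay well below 3*eps*sqrt(n) = 3e-13.
        printf("worst |error| = %.2e, bound = %.2e\n",
               worst, 3 * eps * std::sqrt((double)n));
        return 0;
    }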

Ah, and the edge cases only make a difference because there the errors are one-sided, so you don’t get negative errors cancelling the positive ones.

Still, with n being a million, and the fact that you have to raise the probability of success to the power of 1000 test cases…