# Why do we get wrong answer in questions based on floating point error?

Hi, I’d like to ask everyone why this kind of thing happens. I solved 2 questions in different events. I got the logic right and I can guarantee you that my solution was absolutely correct. The question mentioned that only output with an error of less than 10^-6 would be accepted. So I printed my output carefully using double precision (which is accurate to far more digits than that), i.e., I used a double variable for solving the equations and getting the value.

Now the worst part happens. The judge gives me Wrong Answer and I waste my time wondering where I went wrong?! This happened to me on 2 different occasions. In the first one, I cracked my head but to no avail — penalty but no AC! :’(. In the second one, instead of cracking my head, I just changed the language I coded in. First I coded in Java/C++ and got Wrong Answer, then switched to Python 3.5 (not Python 2.7) and voila!!! It got accepted. The same code with the same logic, and it is accepted. How in the world did the language alone make the difference? I believe CodeChef should also tell us whether our sample test cases are passing when running the code (without submitting to the online judge).

But first I’d like to know the reason. Thanks if you can help me.

[EDITED]:

E.g., write a program to find the roots of a quadratic equation. You’re given the values of a, b and c of the equation ax^2 + bx + c = 0.
The judge accepts the answer if the error is below 10^-6.

```
I/P
1 4 4
O/P
-2 -2
I/P
1 8 3
O/P
-0.394449 -7.605551

I/P
9 6 1
O/P
-0.333333 -0.333333
I/P
9 6 0
O/P
0 -0.666667
```

**Note that the actual question was different; I am just mimicking it here.**
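To make the example concrete, here is a minimal Python sketch (my own illustration, not the actual contest code) that solves the quadratic in double precision and prints more digits than the judge's 10^-6 tolerance requires:

```python
import math

def solve_quadratic(a, b, c):
    """Return the two real roots of a*x^2 + b*x + c = 0.
    Assumes a != 0 and a non-negative discriminant."""
    d = math.sqrt(b * b - 4 * a * c)  # everything stays in double precision
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# Matches the second sample above (a=1, b=8, c=3)
r1, r2 = solve_quadratic(1, 8, 3)
print(f"{r1:.6f} {r2:.6f}")  # -0.394449 -7.605551
```
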

This happens mostly to beginners. It is a common but interesting error. Since you have not shared the code where you are experiencing this error, I am going to assume it is a truncation error. When you code in C/C++, even if you store the result in a double-precision variable, the fractional part gets truncated when you perform integer operations. For example, the code below stores 1.0 instead of 1.5 in the double-precision variable.

```cpp
double x;
x = 3 / 2;  // integer division: 3/2 yields 1, which is then converted to 1.0
```

However, the code below stores 1.5 in x, since you are telling the compiler to use floating-point arithmetic.

```cpp
double x;
x = 1.0 * 3 / 2;  // 1.0 * 3 promotes the expression to double, so 3.0 / 2 is 1.5
```

In higher-level languages like Python 3, this conversion is implicit: the `/` operator always performs floating-point division.
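For example, in Python 3 the same expression never truncates unless you explicitly ask for it with `//`:

```python
# In C/C++, 3/2 is integer division and yields 1 even when assigned to a double.
# In Python 3, / is always true (floating-point) division:
print(3 / 2)   # 1.5
print(3 // 2)  # 1 -- floor division, the Python spelling of C's integer division
```
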

This usually happens when the setter decides to go for an ABSOLUTE error of 10^-6 instead of a relative error, and it IS really frustrating.

I really don’t know why they do that, because if you google any guide or tips for problem setting, one of the fundamental things you see there is:

• Don’t go too deep into doubles and floating point. Usually a relative error of 10^-6 is OK. No need to go deeper than that, as then language mechanics come into play over which contestants may not have control. From the contestants’ point of view, it is not fair to penalize logic in one language and give AC to the same logic in another.

The reason you see it at ICPC is that, well, I don’t think their setters are “routine” setters, i.e. they aren’t people who regularly set contests. I may be wrong though.

Unless there is a specific concept the setter wants to test that has to do with precision, it is never advisable to go too deep into floating point. Absolute error makes the situation even harsher for many candidates. I find the Codeforces system perfect: they judge on max(10^-6 absolute, relative error) [since if your correct solution fails during system testing due to these minor issues, it is seriously unfair].
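A checker in that style is only a few lines. This is my own sketch of the max(absolute, relative)-error rule, not Codeforces’ actual checker code:

```python
def close_enough(expected, actual, eps=1e-6):
    """Accept if the absolute OR relative error is within eps,
    i.e. |actual - expected| <= eps * max(1, |expected|)."""
    return abs(actual - expected) <= eps * max(1.0, abs(expected))

# Tiny absolute error on a tiny value: accepted
print(close_enough(0.000001, 0.0000015))  # True
# Large value off by 100, but the relative error is only 1e-7: still accepted
print(close_enough(1e9, 1e9 + 100))       # True
```

With a purely absolute 10^-6 tolerance, the second case would be rejected even though the answer is correct to 7 significant digits — which is exactly the frustration described above.
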

But how to solve that issue?