I have seen many competitive programmers use mid = low + (high - low)/2, but in many articles we read mid = (low + high)/2. Which should we prefer in competitive programming, and why do most competitive programmers prefer mid = low + (high - low)/2? Can anyone explain?

Because if we use mid = (low + high)/2, it might lead to overflow: even when low and high each fit in the data type, their sum (low + high) can exceed its range. But if we use mid = low + (high - low)/2, the possibility of overflow disappears: if low and high are in range and low <= high, then (high - low) is also in range and cannot overflow.

We know that if we add high and low, say 20 and 10, we get 30, and dividing that by 2 (30/2) gives the middle number, which is 15. That is the formula (high + low)/2.

Alternatively, take the distance between the two numbers, which is 20 - 10 = 10, and divide it by 2 (10/2). That gives half of the total distance, which is 5.

If we add this half distance to the lower number, we get the same middle number:

low (10) + 5 = 15

To find the mid value, we generally add both values and divide by two.

But if the numbers are close to the maximum value the data type can store, their sum can exceed that range and cause overflow.

So to avoid this chance of overflow,

we use mid = low + (high - low)/2

Why is CodeChef not blocking users who spam the comments?

Can you give an example? Because (low + high)/2 will always be within the range of low and high.

The final result (low + high)/2 is within range, but the intermediate sum low + high is not always in range. What if high is already at the maximum value of the type and we add a positive low to it? That intermediate sum overflows before the division ever happens.