 # #define giving unexpected results for fast i/o

Given two instances of code for this problem: https://www.codechef.com/problems/CLBRKT .
My doubt isn’t about the solution or its logic; rather, consider these two codes submitted for the problem.
TLE solution: https://www.codechef.com/viewsolution/35693152
Accepted solution: https://www.codechef.com/viewsolution/35692756
The only difference between the two solutions is that in the accepted one the fast I/O commands were predefined using `#define`. Everything else is literally the same.
My question: why did one pass while the other didn’t?
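For reference, the fast I/O setup usually hidden behind such a `#define` is along these lines (a common competitive-programming idiom, not necessarily the exact macro in those submissions; `setup_fast_io` is a hypothetical name):

```cpp
#include <iostream>

// Typical "fast I/O" commands often wrapped in a #define in
// competitive programming (hypothetical helper name):
void setup_fast_io() {
    std::ios_base::sync_with_stdio(false);  // unsync C++ streams from C stdio
    std::cin.tie(nullptr);                  // don't flush cout before each read
}
```

As the replies below explain, though, this turned out not to be the deciding factor between the two submissions.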

The sum of n over all test cases does not exceed 10^7,
but in the first submission you are initialising a vector of size 10^7 in every test case, so the total work becomes 10^3 × 10^7 = 10^10,
whereas in the second submission only 10^7 elements are initialised in total.
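To put numbers on this: constructing `std::vector<int> v(n)` value-initialises every one of its `n` elements, so rebuilding it inside the test-case loop multiplies that cost by the number of tests. A minimal sketch of the cost model (hypothetical function names, illustrative sizes):

```cpp
#include <cstddef>
#include <vector>

// std::vector<int> v(n) value-initialises all n elements to 0: Theta(n) work.
long long sum_of_fresh_vector(std::size_t n) {
    std::vector<int> v(n);            // zero-fills all n ints up front
    long long sum = 0;
    for (std::size_t i = 0; i < n; ++i) sum += v[i];
    return sum;                       // 0, since every element was zeroed
}

// Initialisation work if a size-`cap` vector is rebuilt for each test case.
long long per_test_init_work(long long tests, long long cap) {
    return tests * cap;               // e.g. 1e3 * 1e7 = 1e10 element writes
}
```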


@humane pointed out the error; removing it actually gave an AC verdict.
Solution without #define

Okay, I’m so sorry that I didn’t notice that. But even if I initialised a 1e7 vector, I still iterated over only n of its elements. Does assigning extra space somehow increase the complexity? I don’t agree that the complexity becomes 1e10 as you pointed out, because I am not iterating up to 1e7; I only allocated it, which is a very common practice as far as I know.

You do know that when a vector is initialised, all of its elements are initialised to 0.
How do you think that is implemented behind the scenes?
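Concretely, for an `int` vector the construction is roughly equivalent to the following (a simplified sketch; real standard-library implementations go through allocators and uninitialized-fill helpers, but the Theta(n) zeroing is the same):

```cpp
#include <cstddef>
#include <cstring>

// Roughly what std::vector<int> v(n) must do behind the scenes:
// allocate n ints and write a zero into every one of them.
int* make_zeroed_buffer(std::size_t n) {
    int* p = new int[n];
    std::memset(p, 0, n * sizeof(int));  // Theta(n) writes, whether or not
    return p;                            // the caller ever reads them
}
```

So "just assigning" a 1e7 vector already costs 1e7 element writes, before a single iteration of the actual algorithm.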

Okay, I get it. Thanks a lot. It all makes sense now. Guess we learn something every day.