Interesting Time Complexity

Consider the following program, which is dummy code I wrote to get a better grasp of how time complexities actually work. The interesting thing I found is that if you run this code in Codechef’s IDE, it exceeds the time limit of 1 sec, whereas if you run the same code with the 2nd statement of the nested loop commented out (the one where we push elements into the arr vector), you’ll notice a drastic difference in the time taken.

We know that the push_back operation takes O(1), and in both cases mentioned above the overall time complexity of the program is O(n^2). So what could be the reason for such a drastic change in the time taken by the program if the time complexity is the same?

#include <iostream>
#include <vector>
using namespace std;

int main() {

	long long n=1e4, sum=0;
	vector<long long> arr;
	for(long long i=0;i<n;i++)
	     for(long long j=0;j<n;j++) {
	         sum += j;          // 1st statement: plain arithmetic
	         arr.push_back(j);  // 2nd statement: the one being commented out
	     }
	cout << sum << '\n';
	return 0;
}

Yeah, but the running time should logically decrease by about half, since you have two O(1) operations inside an O(n^2) loop and you’re commenting out one of them. So this makes sense.

I don’t think complexities work this way. Try replacing that push_back statement with various other O(1) statements and you won’t notice it exceeding 1 sec. I’m somewhat suspicious of whether the push_back operation really takes O(1).

Asymptotic notations are theoretical constructs that aid complexity analysis. They do not map directly to the running time of programs.

If some procedure is executed a large number of times, its constant factor (which is not apparent from asymptotic analysis) matters all the more.

The first statement is just an assignment and an addition, but the second is much more complex (it involves function calls and occasional memory allocations and copies), so the constant factor of the second statement is higher than that of the first. The difference in running time becomes significant, especially because we execute it 10^8 times.