Just curious why using a vector gets me a TLE while a dynamic array doesn't.
I used a static array and that was working fine.
N can be very large (up to 10^9), so zeroing all N entries, as the vector constructor does, will take a very long time.
The array-based version does not explicitly set all entries to 0 (so the behaviour is technically undefined), but it seems that, by chance, the entries of a that end up being queried are 0, so you end up getting the correct answer, at least with SPOJ's system and the given testcases.
If you "correct" the array-based version by initialising all entries to 0, e.g.

memset(a, 0, sizeof(a[0]) * (n + 1));

or by replacing the static array with a zero-initialised dynamic one,

int *a = new int[n + 1]();

you should find that it also TLEs (and, in the latter case, leaks memory all over the place if you never delete it XD)
Shouldn't every element end up being a garbage value in the case of an array? How come, by chance, they are all set to 0?
Looks like the process is more complex than I thought: see e.g. c - Kernel zeroes memory? - Stack Overflow
This also helps explain another difference between the "don't initialise to 0" and "explicitly initialise to 0" solutions: the former finishes almost instantly on my machine (even with the maximum N), whereas the latter not only takes a long time, but also sucks up gigabytes of RAM, bringing my laptop to its knees.
Since the entries in a that actually end up being used are sparsely distributed (for example, with N = 10^9, only 241 indices of a are actually read/written, or thereabouts), what I presume is happening in the former case is that small pages of a are doled out to your program as and when they are requested, and each of these pages is zeroed out when it is first doled out. But in the latter case, we touch every entry of a while explicitly zeroing it out, so all pages of a end up having to be doled out to your program, using up a large amount of heap.
(This is all supposition on my part, by the way :))
That makes a lot of sense, actually. The AC code reports 0 megabytes of memory usage even when an array of size 10^9 is declared.