Why two different execution times?

I solved this problem by two methods:
1) Using a dynamic array (the "new" keyword) - Solution
2) Using a vector - Solution

In the dynamic-array solution, the execution time is 0.11 sec.
In the vector solution, the execution time is 0.03 sec.

Can anybody tell me why the execution time of the dynamic-array solution is higher than that of the vector solution, even though both use dynamic memory allocation?


Try repeating the experiment, as execution times can fluctuate quite a bit.


I have tried it twice; it gives the same results.
Is initializing a dynamic array O(n)?


Yes. However, your two solutions are very different: vector::resize does not necessarily cause a reallocation (if the requested size is less than the current capacity). In fact, in that case, vector::resize will be \mathcal{O}(1) (Oops - this last bit was wrong; see below).


No, it's not a reallocation.
It is setting the vector's size equal to the input, and the vector doesn't need to be initialized with 0 since that happens by default - but why does execution take longer with the dynamic array?

The values are saved from the last computation, so new memory is not allocated.

At a guess, because allocations are typically relatively expensive, and your dynamic array version allocates (and initialises) for every t.


But I am resizing the vector to n, so I don't think the following point is valid:

(if the requested size is less than the current capacity). In fact, in that case, vector::resize will be O(1).

If I am wrong, can you elaborate?

"and your dynamic array version allocates (and initialises) for every t"

The vector is also getting resized for every "t", so what's the difference?

Sorry, I was partially wrong; it should be:

if the requested size is less than the current capacity, then no reallocation occurs.
if the requested size is less than the current size, then vector::resize will be \mathcal{O}(1) (<-- Whoops - that's not right either!)

Edit:

No, that's not quite right, either: for a general vector<T>, resizing to a smaller size will have to destroy the excess Ts. However, for ints, for which destruction is a no-op, I can imagine that a smart Standard Library implementation might skip the destruction and do the whole process in \mathcal{O}(1).


"if the requested size is less than the current capacity"

"if the requested size is less than the current size"

Aren't these two statements the same?

To summarize:
Is vector.resize O(1)?

No - capacity and size are not the same.

#include <iostream>
#include <vector>
#include <numeric>

using namespace std;

int main()
{
    vector<int> v(10000);
    iota(v.begin(), v.end(), 0);
    cout << "size: " << v.size() << " capacity: " << v.capacity() << endl;
    v.resize(100);
    cout << "size: " << v.size() << " capacity: " << v.capacity() << endl;
}


In general, no - but I would speculate that a vector of primitive types, when asked to resize to less than its current size, would perform this in \mathcal{O}(1). I might be wrong about this, though.

Edit:

Whoo - based on a quick test, it looks like this speculation might be correct!

Compare the times taken when N is decreasing compared to when it is increasing:

#include <iostream>
#include <vector>
#include <numeric>

using namespace std;

int main()
{
    vector<int> v;
    for (int N = 1'000'000'000; N >= 1; N--)
    //for (int N = 1; N <= 1'000'000'000; N++)
    {
        v.resize(N);
        cout << "size: " << v.size() << endl;
    }
}


I am sure vector.resize() is called for every "t" and the whole vector is initialized to 0, because I am not clearing the vector (dp.clear()) for every "t" and I still got AC.

Am I right, @ssjgz?

Well, check this https://www.codechef.com/viewsolution/36183846 and
https://www.codechef.com/viewsolution/36183868
Memory allocation is costly. If you reallocate memory for each t, it will take more time.
Always reuse the same memory if possible.
This is also cache-friendly, whereas that might not be the case when we reallocate again and again.


Yes.

Every t? No. See the documentation: https://en.cppreference.com/w/cpp/container/vector/resize


Yeah, it looks like the value of dp[i] for a given i doesn’t actually depend on the value of n for that testcase, based on the contents of the sol function. And your solution


edited.

Why do the two solutions you mentioned have the same execution time (0.0 sec)?
Isn't memset making any difference?