 Practice

EASY-MEDIUM

# PREREQUISITES:

Dynamic Programming

# PROBLEM:

You are Pikachu. There are N cities numbered 1,2,\dots,N and you have to travel through the N cities in this order. You have an initial strength S_{in} and experience value XV, which is initially 0.

In each city, you can choose to either train or fight with the gym leader. The gym leader of city i has experience E_i.

In city i, if you decide to train, you increase your strength by the cube of the sum of the digits of your current strength. If your current strength is S and you choose to fight the gym leader, you increase your XV by S*E_i.

Find the maximum XV that you can attain after visiting all N cities.

# QUICK EXPLANATION:

Observation : Our strength after training in r cities is not dependent on the cities we decide to train in. In other words, for a given initial strength (denoted by S), our strength after training in any r cities is the same.

Thus, we can precompute the strength after training in exactly r cities, for all valid r. Call the strength after training in r cities \text{Strength}_r.

Define \text{maxXV}(n,r) as the maximum XV that we can achieve, if we visit the first n cities and train ourselves in exactly r cities among these. \text{maxXV}(n,r) is thus equal to

\max\{\text{maxXV}(n-1,r-1),\text{Strength}_r*E_n+\text{maxXV}(n-1,r)\}

where the former and latter terms represent the cases where we choose to train or fight in city n, respectively.
Overlapping subproblems exist in our recurrence which can be efficiently managed using DP.

# EXPLANATION:

In each of the N cities, we can either choose to train or to fight the gym leader. So, there are 2 options in each city, which implies that the total number of possible sequences is \underbrace{2*2*\dots*2}_\text{N times} = 2^N.

As N\le 20 in the first subtask, we can iterate over all possible sequences and output the maximum XV among them. We can do this recursively, as in the code below.

int ans = 0; //Variable to hold the maximal XV after visiting all N cities

void Current(int n, int s, int XV){
/*
n = city we are currently in
s = current strength
XV = current XV value
*/
if(n > N){ //if we have visited all N cities
ans = max(ans, XV); //Update variable ans to the maximum
return;
}
//CubeSum(s) returns the strength after training in this city
//which is equal to s + (sum of digits of s)^3
Current(n + 1, CubeSum(s), XV); //If we train in this city
Current(n + 1, s, XV + s*E[n]); //If we fight the leader in this city
return;
}

Current(1, S, 0); //The city we start in, our initial strength and XV
cout << ans << endl; //Our final answer!


This is what is done in the above code.
For each city n that we arrive at, with strength s and experience XV, there are two options :

• We train in this city - In this case, our XV doesn’t increase, but our strength increases to CubeSum(s), where CubeSum(s) equals s+(\text{Sum of digits of }s)^3. Thus, we move to city n+1, with changed strength and same XV value.
• We fight in this city - Here, our strength doesn’t increase, but our XV increases by s*E_n. Thus, we go to city n+1, with changed XV and same strength.

After visiting all N cities, we update ans to \max(ans, XV) which shall now hold the maximum XV over all different possibilities that we have processed.
The answer is finally the value of ans after iterating over all 2^N different possibilities.

The above solution however \textcolor{red}{TLE}'s for subtasks 2 and 3! Could we do better?

Observation : Our strength after training in r cities is not dependent on the cities we decide to train in. In other words, for a given initial strength (denoted by S), our strength after training in any r cities is the same.

The reasoning is left to the reader as an exercise. (Hint : fighting leaves our strength unchanged, so strength changes only through training, and each training step is a deterministic function of the current strength.)

Thus, we precompute our strength after training in exactly r cities, for all valid values of r\space(0\le r\le N). Let \text{Strength}_r represent our strength after training in r cities. Then,

• \text{Strength}_0=S which represents our initial strength
• \text{Strength}_r=\text{CubeSum(Strength}_{r-1}) which follows from our definition of \text{Strength}_r and \text{CubeSum()}

This small yet important observation is what leads us to the solution of this problem!

Define \text{maxXV}(n,r) as the maximum XV that we can achieve, if we visit the first n cities and train ourselves in exactly r cities among these. We work the solution backwards. The recurrence relation is then as follows :

int maxXV(int n, int r){
/*
n = city we are currently in
r = number of cities we train in
*/
if(r > n or r < 0) //Out-of-bounds case
return -INF; //INF = 1e9
if(n < 1) //We have completed processing the first n cities
return 0; //As 0 is our initial XV

int P1 = maxXV(n - 1, r - 1);
//Maximum XV possible if we train in this city

int P2 = Strength[r]*E[n] + maxXV(n - 1, r);
//Maximum XV possible if we fight in this city
return max(P1, P2);
}


The explanation of the above code is as follows.
Imagine we are at city n and have trained in exactly r cities. In the current city, we can either choose to train ourselves, or fight with the gym leader.

• We train - In this case, there is no increase in our XV. Thus, the maximum possible XV here is \text{maxXV}(n-1,r-1) which represents the maximum XV we can achieve if we visit the first n-1 cities, training in exactly r-1 cities (we subtract 1 from r since city n is 1 among r cities where we choose to train).
• We fight - In this case, our XV increases by \text{Strength}_r*E_n. Thus, we add this increase to the maximum XV attainable on visiting the first n-1 cities and fighting in r cities.

We then return \max(P1, P2) which represents the maximum XV over both the cases.
The answer to our problem is thus

\max\{\text{maxXV}(N,0),\text{maxXV}(N,1),\dots,\text{maxXV}(N,N)\}

You might have noticed overlapping cases in our recurrence. What does this call for? Yes, you got it right. We use DP to prevent multiple computations of the same state. We can easily tweak the above code to satisfy our requirements.
We define DP[n][r] as the maximum XV that we can achieve, if we visit the first n cities and train ourselves in exactly r cities among these. We then work the solution backwards as usual.

Note : As the answer may exceed 2*10^9, you may need to use a larger data type, such as a 64-bit type (e.g. long long in C++).

# SOLUTION SNIPPET:

void Calc(){
Strength[0] = S; //Initial strength
for(ll r = 1; r <= N; r++){
ll k = Strength[r - 1], sum = 0;
while(k > 0){ //Calculate sum of digits
sum += k % 10;
k /= 10;
}
Strength[r] = Strength[r - 1] + sum*sum*sum; //Avoid pow() to stay in integer arithmetic
}
return;
}

ll maxXV(int n, int r){
/*
n = city we are currently in
r = number of cities we train in
*/
if(r > n or r < 0) //Invalid case
return -INF; //INF must be huge (e.g. 1e18) so this branch never wins
if(n < 1) //Processing of n cities completed
return 0;
if(DP[n][r] != -1) //State already computed (DP is initialised to -1)
return DP[n][r];

ll P1 = maxXV(n - 1, r - 1); //We train
ll P2 = Strength[r]*E[n] + maxXV(n - 1, r); //We fight
return DP[n][r] = max(P1, P2);
}

memset(DP, -1, sizeof(DP)); //Mark all DP states as uncomputed
Calc();

ll maxi = 0; //Final answer is stored by this var
for(ll r = 0; r <= N; r++){
maxi = max(maxi, maxXV(N, r));
}
cout << maxi << endl; //The final answer!!


# TIME COMPLEXITY:

As the sum of digits of a number can be computed in constant time, we can construct the array \text{Strength} in O(N).
As there are N possible values for n\space(1\le n\le N) and N+1 possible values for r\space(0\le r\le N), there are a total of N*(N+1) different states in our DP solution. Computing each state takes O(1).

The overall complexity of our solution is thus

O(N+N*(N+1))\approx O(N^2)

which easily passes the given constraints.

# SOLUTIONS:

Editorialist’s solution can be found here.

# SIMILAR PROBLEMS:

1271D

### BONUS:

• Write a bottom-up solution for this problem! My bottom-up solution may be viewed here.

• Reduce the memory complexity of the DP table to O(2*N). Notice that computing row i of the DP table only requires the values in row i-1.

• What if an additional constraint was added to the question, stating that you can’t fight the gym leader in any 2 consecutive cities? How would you tackle this?

Hint

Add another state to the DP. Let DP_t[n][r] denote the maximum XV we can gain if we visit the first n cities and train in exactly r of them, given that we train in the n^{th} city. Define DP_f[n][r] similarly, but where we fight in the n^{th} city instead.

The recurrence is thus

\begin{aligned}DP_t[n][r]&=\max\{DP_t[n-1][r-1],DP_f[n-1][r-1]\}\\DP_f[n][r]&=\text{Strength}_r*E_n+DP_t[n-1][r]\end{aligned}

The two terms in the recurrence of DP_t[n][r] are justified by the fact that we train in the n^{th} city, while in the (n-1)^{th} city we may have either trained or fought.

The single term in DP_f[n][r] follows from the fact that we must have trained in the (n-1)^{th} city, as we have decided to fight in the current one; fighting in city n adds \text{Strength}_r*E_n to our XV.

Did you like the editorial? Do you have any other approaches to the problem? Any suggestions? Comment them below!


Also don’t forget to up-vote this post if you liked it ! 