### PROBLEM LINK:

**Author:** Praveen Dhinwa

**Tester:** Kevin Atienza

**Translators:** Vasya Antoniuk (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)

**Editorialist:** Kevin Atienza

### DIFFICULTY:

Simple

### PREREQUISITES:

Greedy algorithm

### PROBLEM:

There are two arrays, A and B, each of size N. You can perform the following operation at most K times:

- In a single operation, you can increase or decrease any of the elements of A by 1.

What is the maximum possible value of \sum_{i=1}^N A_i B_i? (We’ll call this the **interaction** of A and B.)

### QUICK EXPLANATION:

The answer is \left(\sum_{i=1}^N A_i B_i\right) + K\cdot \max_i |B_i|. (\max_i |B_i| is the maximum increase we can get with one operation, which we will perform K times.)

### EXPLANATION:

We must find a way to reduce the number of cases we need to check; otherwise, there are far too many cases to consider, even for the first subtask.

First, let us understand the effect of one operation on the *interaction* \sum_{i=1}^N A_i B_i. Suppose we increase A_j by 1. Then the summands of \sum_{i=1}^N A_i B_i remain the same except for the j-th summand, which becomes (A_j + 1)B_j instead of A_jB_j. In other words, the j-th summand increases by B_j (because (A_j + 1)B_j = A_jB_j + B_j). Thus, the effect of adding 1 to A_j is to increase the interaction by B_j, **regardless of the value of A_j**.

Similarly, the effect of *subtracting* 1 from A_j is to *decrease* the interaction by B_j, regardless of the value of A_j. Now, remember that we want to *maximize* the interaction. This means:

- We should pick the operation which gives us the greatest increase in the interaction.
- We must perform this best operation K times.

In other words, we can (and must) be greedy with our choice of operations. This *greatly* reduces the number of cases we need to check! Specifically, there are only 2N *distinct* operations (N possible indices, each of which can be incremented or decremented), so we only need to check 2N cases. Here are sample implementations.

C++:

```
#include <iostream>
#include <algorithm>
using namespace std;

#define ll long long int
#define INF (1LL<<60) // some really large number
#define N 100111

ll a[N];
ll b[N];
int n;
ll k;

ll interaction() {
    ll res = 0;
    for (int i = 0; i < n; i++) res += a[i] * b[i];
    return res;
}

ll max_interactions() {
    ll res = -INF;
    for (int i = 0; i < n; i++) {
        // try incrementing k times
        a[i] += k;
        res = max(res, interaction());
        a[i] -= k; // cancel

        // try decrementing k times
        a[i] -= k;
        res = max(res, interaction());
        a[i] += k; // cancel
    }
    return res;
}

int main() {
    int cases;
    cin >> cases;
    while (cases--) {
        cin >> n >> k;
        for (int i = 0; i < n; i++) cin >> a[i];
        for (int i = 0; i < n; i++) cin >> b[i];
        cout << max_interactions() << endl;
    }
}
```

Python:

```
def interaction(a, b):
    return sum(x*y for x, y in zip(a, b))

def interactions(k, a, b):
    for i in range(len(a)):
        # try incrementing k times
        a[i] += k
        yield interaction(a, b)
        a[i] -= k # cancel

        # try decrementing k times
        a[i] -= k
        yield interaction(a, b)
        a[i] += k # cancel

for cas in range(int(input())):
    n, k = map(int, input().split())
    a = list(map(int, input().split()))
    b = list(map(int, input().split()))
    print(max(interactions(k, a, b)))
```

In both codes, `interaction` returns the interaction between the two arrays `a` and `b`.

Both codes simply try out all 2N operations. Since computing the interaction between the two arrays takes O(N) time, the overall complexity is 2N\cdot O(N) = O(N^2). This passes the first two subtasks, but is too slow for the final subtask.

Actually, with a simple trick, this approach can be optimized to pass the final subtask. Suppose we know the interaction between the two arrays \sum_i A_iB_i before some operation. After performing an operation, it’s very easy to *update* this interaction! Specifically:

- After incrementing some value, say A_j, the interaction simply increases by B_j.
- After decrementing some value, say A_j, the interaction simply decreases by B_j.

Similarly, incrementing A_j a total of K times increases the interaction by K\cdot B_j, and decrementing it K times decreases the interaction by K\cdot B_j. This update can be done very quickly. More importantly, it doesn’t require us to perform another O(N) loop to compute the new interaction; the update runs in O(1) time. Thus, we can iterate through everything in 2N\cdot O(1) = O(N) time instead, which passes the time limit for the final subtask!

The following is a Python implementation:

```
def interactions(k, a, b):
    interaction = sum(x*y for x, y in zip(a, b))
    for i in range(len(a)):
        # try incrementing k times
        yield interaction + k*b[i]
        # try decrementing k times
        yield interaction - k*b[i]

for cas in range(int(input())):
    n, k = map(int, input().split())
    a = list(map(int, input().split()))
    b = list(map(int, input().split()))
    print(max(interactions(k, a, b)))
```

Finally, we can simplify this even further. Instead of checking all 2N operations, we can simply find the *best* operation and perform it K times. Finding the best operation is easy: just find the maximum value among B_1, -B_1, B_2, -B_2, B_3, -B_3, \ldots, B_N, -B_N. (This maximum value is equivalent to \max_i |B_i|.) This allows for much shorter code, such as the following:

```
for cas in range(int(input())):
    n, k = map(int, input().split())
    a = list(map(int, input().split()))
    b = list(map(int, input().split()))
    print(sum(x*y for x, y in zip(a, b)) + k*max(map(abs, b)))
```

**Tip:** If you’ve been getting wrong answers for some reason, you might be experiencing **arithmetic overflow**, which means your variables are too small to hold the intermediate values or the result. This can certainly happen in the final subtask, because the answer can exceed 10^{14}, which is too large for a 32-bit variable (such as C++'s `int`) to hold. To fix this, use a 64-bit type (`long long int` in C++, `long` in Java, etc.) for the final result AND for K and the arrays A and B (because you’ll be multiplying these values).

### TIME COMPLEXITY:

O(N) per test case