CHN03 - Editorial

PROBLEM LINK:

Contest

Author: ???
Tester: Kevin Atienza, Jingbo Shang
Editorialist: Kevin Atienza

PREREQUISITES:

Dynamic programming

PROBLEM:

There are n contests, each having three problems, categorized as easy, medium, and hard. For the i-th contest, the easy, medium, or hard problem takes t_{e,i}, t_{m,i} or t_{h,i} hours and gives p_{e,i}, p_{m,i} or p_{h,i} pleasure to solve, respectively.

You may swap up to k pairs of problems from different contests, regardless of difficulty. You can solve at most one problem per contest, and you have only \mathit{time} hours in total. What is the maximum amount of pleasure that can be achieved?

QUICK EXPLANATION:

The answer can be computed with dynamic programming. The key is to think of a swap as two separate events: taking a problem and placing it. We restrict the number of takes to k, and after processing all contests there must be no problems left to place. Also, it is only beneficial to swap two problems if you are going to solve exactly one of them.

Let f(i,\mathit{take},\mathit{place},t) be the maximum amount of pleasure you can have assuming:

  • We are only checking the first i contests.
  • We only have t hours remaining.
  • We can still take up to \mathit{take} problems.
  • We need to place \mathit{place} problems (which we are holding right now).

Note that \mathit{place} can be negative, which means there are |\mathit{place}| extra spaces we can place problems in. The base case is

f(0,\mathit{take},\mathit{place},t) = \begin{cases} 0 & \text{if $\mathit{place} \le 0$} \\ -\infty & \text{otherwise} \end{cases}

For the recurrence, we first say f(i,\mathit{take},\mathit{place},t) = -\infty if t < 0 or \mathit{take} < 0. Then f(i,\mathit{take},\mathit{place},t) is the maximum of the following candidates:

  • f(i-1, \mathit{take}, \mathit{place}-1, t)
  • f(i-1, \mathit{take}, \mathit{place}, t - t_{e,i}) + p_{e,i}
  • f(i-1, \mathit{take}, \mathit{place}, t - t_{m,i}) + p_{m,i}
  • f(i-1, \mathit{take}, \mathit{place}, t - t_{h,i}) + p_{h,i}
  • f(i-1, \mathit{take}-1, \mathit{place}+1, t - t_{e,i} - t_{m,i}) + p_{e,i} + p_{m,i}
  • f(i-1, \mathit{take}-1, \mathit{place}+1, t - t_{m,i} - t_{h,i}) + p_{m,i} + p_{h,i}
  • f(i-1, \mathit{take}-1, \mathit{place}+1, t - t_{h,i} - t_{e,i}) + p_{h,i} + p_{e,i}
  • f(i-1, \mathit{take}-2, \mathit{place}+2, t - t_{e,i} - t_{m,i} - t_{h,i}) + p_{e,i} + p_{m,i} + p_{h,i}

The answer is f(n,\min(k,\lfloor 2n/3 \rfloor),0,\mathit{time}).

EXPLANATION:

This looks like an odd optimization problem with lots of restrictions. This kind of problem suggests that a dynamic programming solution might work, because the power of dynamic programming comes partly from recursion: we call our solution recursively on smaller instances, which means we already have the full power of a (presumably correct) solver for those, and the only thing left to do is figure out how to glue the results together.

However, it gets complicated because the problem involves swaps between problems in vastly different locations, while with dynamic programming we want to process the contests linearly (for example from i = 1 to i = n). Thus, what we want is to break down a “swap” into two events that can be processed linearly.

Observations

Here are a few observations that will make our life easier:

  • It is not beneficial to swap two problems if you’re going to solve both of them anyway.
  • It is not beneficial to swap two problems if you’re not going to solve any of them.

In other words, if we’re going to swap two problems, exactly one of them will be solved. This is also quite intuitive; either of the above swaps simply wastes a swap. We must use swaps to let us solve multiple problems from the same contest.

From this, we can derive a few more (should-be-intuitive) observations:

  • No problem will need to be swapped more than once.
  • If none of the three problems from a given contest will be solved, then this contest can simply be thought of as “free space” for a problem that we might want to swap in later.
  • If at least one problem from a given contest will be solved, then exactly one of those to-be-solved problems stays in the contest, and the rest must be swapped out.

From these observations, we can show that the maximum number of swaps we’ll ever need is \lfloor 2n/3 \rfloor: the worst case is when all the problems we solve originally come from only n/3 of the contests, so each of those contests must send two of its problems elsewhere. Thus, we can replace k with \min(k,\lfloor 2n/3 \rfloor), potentially reducing the time complexity of our program :smiley:
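For instance, with n = 6 contests, at worst all six solved problems originally come from just two contests, each of which must send two of its problems elsewhere, for 2 \cdot 6/3 = 4 swaps. In code, the cap is a tiny helper (an illustrative sketch; the name is not from the original solutions):

```cpp
#include <algorithm>

// Cap the swap budget: more than 2n/3 swaps can never be useful, since in the
// worst case n/3 contests each send out two of their problems (2 swaps each).
int capSwapBudget(int k, int n) {
    return std::min(k, 2 * n / 3);
}
```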

But why does a problem need to be swapped at most once? The key is to notice that all we care about is whether each problem will be solved or not, and that, at the end, at most one problem in each contest is solved.

Let’s say we have three problems a, b and c at positions i, j and k, and first we swap (i,j) and then later (j,k). Note that the final outcome is that b, c and a will be at positions i, j and k, respectively. We’ll show that at most one swap is needed to achieve the same effect.

  • From the above, we are only allowed to swap problems if exactly one will be solved.
  • If a will be solved, then b and c will not be solved. Thus, the problems at positions (i,j,k) have the initial state (Y,N,N), where Y means “will be solved” and N means “will not be solved”, while the final state is (N,N,Y). Thus, we only needed to swap (i,k) to achieve the same effect.
  • If a won’t be solved, then b and c will be solved. Thus, the initial state is (N,Y,Y) and the final state is (Y,Y,N), which means we only needed to swap (i,k) to achieve the same effect.

In any case, swapping a particular problem more than once results in at least one wasted swap.

Now let’s figure out how to make a swap amenable to our left-to-right dynamic programming. A “swap” can be broken down into two events: taking a problem and then placing it into the position of another. As we process our contests from i = 1 to i = n, either event can take place first. Also, since exactly one problem will be solved, the other problem is effectively thrown away. It means we can assume the following:

  • When we take a problem, it means that we will eventually solve it.
  • We will only place a problem in a contest where none of the three problems will be solved, i.e. in a free space contest. (This is an observation from above.)

Dynamic programming

Armed with these takes and places, we can now formulate our dynamic programming solution.

Let f(i,\mathit{take},\mathit{place},t) be the maximum amount of pleasure you can have assuming:

  • We are only checking the first i contests.
  • We only have t hours remaining.
  • We can still take up to \mathit{take} problems.
  • We need to place \mathit{place} problems (which we are holding right now).

Remember that a place can happen before a take, so we allow \mathit{place} to be negative. If it is negative, it means we have |\mathit{place}| “free space” contests.

Let’s figure out the recurrence. Say we want to compute f(i,\mathit{take},\mathit{place},t). In contest i, there are eight possibilities, corresponding to the ways in which we select which of the three problems will be solved and which won’t be.

If none of the problems in contest i will be solved, then this contest is free space. It means that the first i-1 contests have an extra space for a problem. The maximum pleasure we can get is therefore f(i-1, \mathit{take}, \mathit{place}-1, t).

If exactly one problem will be solved, then no swaps will occur in this contest. We simply need to choose which one to solve. We have no way of knowing in advance which is best, but since there are only three, we just try them all. If we decide to solve the easy problem, then we gain p_{e,i} pleasure from it and spend t_{e,i} hours, which means for the first i-1 contests we only have t - t_{e,i} hours remaining. The maximum pleasure we can get is therefore f(i-1, \mathit{take}, \mathit{place}, t - t_{e,i}) + p_{e,i}. In general, we have the following candidates:

  • f(i-1, \mathit{take}, \mathit{place}, t - t_{e,i}) + p_{e,i}
  • f(i-1, \mathit{take}, \mathit{place}, t - t_{m,i}) + p_{m,i}
  • f(i-1, \mathit{take}, \mathit{place}, t - t_{h,i}) + p_{h,i}

If exactly two problems will be solved, then exactly one swap will occur. However, as we mentioned before, when we take a problem, it means that we will eventually solve it. So it doesn’t matter which problem will stay and which will be swapped. All we care about is that there is exactly one swap. Again we try all possibilities, giving the following candidates:

  • f(i-1, \mathit{take}-1, \mathit{place}+1, t - t_{e,i} - t_{m,i}) + p_{e,i} + p_{m,i}
  • f(i-1, \mathit{take}-1, \mathit{place}+1, t - t_{m,i} - t_{h,i}) + p_{m,i} + p_{h,i}
  • f(i-1, \mathit{take}-1, \mathit{place}+1, t - t_{h,i} - t_{e,i}) + p_{h,i} + p_{e,i}

Finally, if all problems will be solved, then exactly two swaps will occur. The maximum pleasure we will get here is f(i-1, \mathit{take}-2, \mathit{place}+2, t - t_{e,i} - t_{m,i} - t_{h,i}) + p_{e,i} + p_{m,i} + p_{h,i}.

We have exhausted all possibilities, so f(i,\mathit{take},\mathit{place},t) is simply the maximum among these eight candidates!

Now, let’s figure out some edge cases. First, \mathit{take} or t is not allowed to be negative, so we say f(i,\mathit{take},\mathit{place},t) = -\infty if \mathit{take} < 0 or t < 0. As for the base case, i = 0, there should be no more problems to place, but we’re allowed to have extra free space contests, so we only allow \mathit{place} \le 0. So we have:

f(0,\mathit{take},\mathit{place},t) = \begin{cases} 0 & \text{if $\mathit{place} \le 0$} \\ -\infty & \text{otherwise} \end{cases}

Finally, the answer is f(n,\min(k,\lfloor 2n/3 \rfloor),0,\mathit{time}).

By computing the values of f with dynamic programming, the time complexity becomes proportional to the number of entries we compute (each entry is the maximum of only eight candidates, so it takes constant time per entry). This is O(n^3\mathit{time}) because we only care about f(i,\mathit{take},\mathit{place},t) in the following ranges of arguments (a sketch implementation based on these ranges is given after the list):

  • 0 \le i \le n
  • 0 \le \mathit{take} \le 2n/3
  • -n \le \mathit{place} \le 2n/3
  • 0 \le t \le \mathit{time}
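
To make the recurrence concrete, here is a minimal top-down (memoized) C++ sketch that mirrors the formulation above; \mathit{place} is stored at index \mathit{place} + n so that the range listed above fits in an array. The input order, variable names, and table sizes are assumptions for illustration only, since this editorial does not restate the exact input format or constraints; in particular, the four-dimensional memo table may need to be compressed (for example, by keeping only two layers over i in a bottom-up version) if memory is tight. The value -1 is used as an “uncomputed” marker, which is safe because every real value is either non-negative or hugely negative.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Large negative sentinel; dividing LLONG_MIN by 4 leaves room to add pleasures
// to an "impossible" value without signed overflow.
const long long NEG_INF = LLONG_MIN / 4;

int n, K, T;                                     // contests, capped swap budget, total time
vector<int> tE, tM, tH, pE, pM, pH;              // 1-indexed times and pleasures per contest
vector<vector<vector<vector<long long>>>> memo;  // memo[i][take][place + n][t], -1 = unknown

long long f(int i, int take, int place, int t) {
    if (take < 0 || t < 0) return NEG_INF;        // ran out of swaps or time
    if (i == 0) return place <= 0 ? 0 : NEG_INF;  // base case: nothing left to place
    long long &res = memo[i][take][place + n][t];
    if (res != -1) return res;

    // Contest i is "free space": one held problem can be placed here.
    long long best = f(i - 1, take, place - 1, t);

    // Solve exactly one problem of contest i (no swap involving this contest).
    best = max(best, f(i - 1, take, place, t - tE[i]) + pE[i]);
    best = max(best, f(i - 1, take, place, t - tM[i]) + pM[i]);
    best = max(best, f(i - 1, take, place, t - tH[i]) + pH[i]);

    // Solve exactly two problems: one of them must be taken and placed elsewhere.
    best = max(best, f(i - 1, take - 1, place + 1, t - tE[i] - tM[i]) + pE[i] + pM[i]);
    best = max(best, f(i - 1, take - 1, place + 1, t - tM[i] - tH[i]) + pM[i] + pH[i]);
    best = max(best, f(i - 1, take - 1, place + 1, t - tH[i] - tE[i]) + pH[i] + pE[i]);

    // Solve all three problems: two of them must be taken and placed elsewhere.
    best = max(best, f(i - 1, take - 2, place + 2, t - tE[i] - tM[i] - tH[i])
                         + pE[i] + pM[i] + pH[i]);

    return res = best;
}

int main() {
    // Assumed input order: n, k, time, then t_e, t_m, t_h, p_e, p_m, p_h per contest.
    int k;
    cin >> n >> k >> T;
    K = min(k, 2 * n / 3);                        // cap the swap budget as argued above
    tE.assign(n + 1, 0); tM.assign(n + 1, 0); tH.assign(n + 1, 0);
    pE.assign(n + 1, 0); pM.assign(n + 1, 0); pH.assign(n + 1, 0);
    for (int i = 1; i <= n; i++)
        cin >> tE[i] >> tM[i] >> tH[i] >> pE[i] >> pM[i] >> pH[i];

    memo.assign(n + 1, vector<vector<vector<long long>>>(K + 1,
                vector<vector<long long>>(n + K + 1, vector<long long>(T + 1, -1))));

    cout << f(n, K, 0, T) << '\n';
    return 0;
}
```

Top-down memoization is used here because it only visits states reachable from f(n, \min(k,\lfloor 2n/3 \rfloor), 0, \mathit{time}); an equivalent bottom-up formulation works just as well.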

Time Complexity:

O(n^3\cdot \mathit{time})

AUTHOR’S AND TESTER’S SOLUTIONS:

[setter][333]
[tester][444]
[editorialist][555]

[333]: The link is provided by admins after the contest ends and the solutions are uploaded on the CodeChef Server.
[444]: The link is provided by admins after the contest ends and the solutions are uploaded on the CodeChef Server.
[555]: The link is provided by admins after the contest ends and the solutions are uploaded on the CodeChef Server.
