PROBLEM LINK:
Practice
Contest: Division 1
Contest: Division 2
Contest: Division 3
Contest: Division 4
Author: raysh07
Tester: sushil2006
Editorialist: iceknight1093
DIFFICULTY:
Easy
PREREQUISITES:
Binary search
PROBLEM:
You start at x = 1, and want to reach x = N+1.
When at x \leq N, you can do one of the following:
- Move to x+1 with a cost of A_x, or
- Move to some random integer in [1, N+1] with a cost of B_x.
If you make decisions optimally, compute the expected time it will take.
EXPLANATION:
Let \mathbb{E}[x] denote the expected time it takes to reach N+1 if we’re currently at x.
Our goal is to compute \mathbb E[1] under optimal decision-making.
We know \mathbb{E}[N+1] = 0 for sure, but all the other values are quite unclear initially.
However, the costs A_x and B_x tell us that
\mathbb E[x] = \min\left(\mathbb E[x+1] + A_x, \frac{\mathbb E[1] + \mathbb E[2] + \ldots + \mathbb E[N]}{N+1} + B_x\right)
since a random move lands on each integer in [1, N+1] with probability \frac{1}{N+1}, and \mathbb E[N+1] = 0.
If we knew the value of \mathbb E[1] + \mathbb E[2] + \ldots + \mathbb E[N], it would actually be quite easy to compute all of the \mathbb E[x] values, since this above formula lends itself to a straightforward right-to-left DP.
Unfortunately, we have no idea what this sum is either!
Let’s make a guess: suppose we took some value S, and decided that
S = \mathbb E[1] + \mathbb E[2] + \ldots + \mathbb E[N].
If S is fixed like this, we can then quite easily compute all the \mathbb E[x] values using
\mathbb E[x] = \min\left(\mathbb E[x+1] + A_x, \frac{S}{N+1} + B_x\right)
and processing x from N down to 1.
Once all of them are known, let S_0 denote the sum of all the computed \mathbb E[x] values.
Now, S_0 will likely differ from S, since S was just an initial guess.
However, we can in fact use this information to refine our guess!
In fact,
- If S_0 = S, we’ve reached a “stable state” and we’re done: our guess was correct, so we can just report the value of \mathbb E[1].
- If S_0 \lt S, our initial guess was too high - so it’s a good idea to reduce the guess a bit.
- If S_0 \gt S, our initial guess was too low instead - so we should increase it.
This “guess a value, then refine it” approach might seem familiar - it’s what we do in binary search!
And indeed, binary search is exactly what is needed to turn our approach into a full solution.
We’ll binary search on the appropriate value of S.
Start with a lower bound of 0 and an upper bound of 10^{18}.
When dealing with \text{mid}, compute f(\text{mid}) to be the value of S_0 assuming that S = \text{mid} (this can be done in linear time, as seen above), and then adjust either the upper bound or the lower bound of the binary search appropriately.
That is, if f(\text{mid}) \lt \text{mid}, set the upper bound to \text{mid} since this guess was too large; otherwise set the lower bound to \text{mid}.
In the end, use the final value of \text{mid} to compute the corresponding \mathbb E[x] values, and return \mathbb E[1] as the answer.
Since the binary search is being done on real numbers rather than on integers, the safe way to implement it is to simply run a fixed number of iterations, rather than relying on the difference between the bounds becoming small.
In this case, we want the error to be \lt 10^{-6}, so with a starting range of length 10^{18} that gets halved every iteration, we’ll need at least \log_2 10^{24} \approx 80 iterations.
A safe bound on the number of iterations is thus 100 or so.
The above discussion was a bit on the intuitive side, and glossed over some important details:
- We implicitly assumed that there is only a single value S for which f(S) = S; if there were more than one such value, our approach could fail.
- Similarly, it was intuitive that if f(S) \lt S then S should be decreased, and if f(S) \gt S then S should be increased - which is what justified the binary search.
This needs to be properly proven too.
So, below is a formal proof of why binary search can be applied here.
Proof
Let f(S) denote the value of S_0 obtained if we guess the sum of \mathbb E[x] to be S.
The first thing to note is that f(S) is monotonically non-decreasing.
The proof of this is not very hard.
Consider S_1 and S_2, with S_1 \lt S_2.
Let \mathbb E_1[x] be the computed expected value with parameter S_1, and \mathbb E_2[x] be the computed expected value with parameter S_2.
We’ll prove that \mathbb E_1[x] \leq \mathbb E_2[x] for all x.
\mathbb E_1[N+1] = \mathbb E_2[N+1] = 0 trivially.
Now, suppose we know \mathbb E_1[x+1] \leq \mathbb E_2[x+1] for some x.
Then,
- \mathbb E_1[x] = \min(A_x + \mathbb E_1[x+1], B_x + \frac{S_1}{N+1})
- \mathbb E_2[x] = \min(A_x + \mathbb E_2[x+1], B_x + \frac{S_2}{N+1})
- We know that S_1 \lt S_2, so B_x + \frac{S_1}{N+1} \lt B_x + \frac{S_2}{N+1}.
- We also know that \mathbb E_1[x+1] \leq \mathbb E_2[x+1] by supposition, so A_x + \mathbb E_1[x+1] \leq A_x + \mathbb E_2[x+1].
- Both terms in the expression for \mathbb E_1[x] are thus not larger than the corresponding terms in the expression for \mathbb E_2[x], so \mathbb E_1[x] cannot be larger than \mathbb E_2[x].
Since f(S) is obtained by summing up the computed \mathbb E[x] values, this proves that f(S_1) \leq f(S_2).
Next, we can prove that f is in fact a continuous function - specifically, if S_1 \lt S_2 and \mathbb E_1[x] and \mathbb E_2[x] are defined as previously, it can be shown that \mathbb E_1[x] \leq \mathbb E_2[x] \leq \mathbb E_1[x] + \frac{S_2 - S_1}{N+1}.
The argument is essentially the same: it’s trivially true for x = N+1, and then it can be shown for x by assuming the truth for x+1, noting that each of the corresponding terms of the minimum increases by at most \frac{S_2-S_1}{N+1}.
Summing across all x, we thus know that f(S_2) - f(S_1) \leq \frac{N}{N+1} \cdot (S_2 - S_1).
(The factor of N appears because only 1 \leq x \leq N needs to be considered, given that the value at N+1 is always 0 anyway.)
This is enough to show that f is continuous; but combined with f being non-decreasing, we obtain something even stronger: f is a contraction mapping (with Lipschitz constant \frac{N}{N+1} \lt 1), and so has a unique fixed point!
(More precisely, f is a contraction mapping when its domain is restricted to [0, M], where M is the sum of the suffix sums of A - that being the largest possible non-trivial input to it.)
We now know that f has a unique fixed point, which is exactly what we’re searching for; and f being monotonic allows binary search to find this fixed point, thus proving the correctness of our algorithm.
TIME COMPLEXITY:
\mathcal{O}(N\log{10^{24}}) per testcase.
CODE:
Editorialist's code (C++)
// #include <bits/allocator.h>
// #pragma GCC optimize("O3,unroll-loops")
// #pragma GCC target("avx2,bmi,bmi2,lzcnt,popcnt")
#include "bits/stdc++.h"
using namespace std;
using ll = long long int;
mt19937_64 RNG(chrono::high_resolution_clock::now().time_since_epoch().count());
int main()
{
    ios::sync_with_stdio(false); cin.tie(0);

    int t; cin >> t;
    while (t--) {
        int n; cin >> n;
        vector a(n, 0), b(n, 0);
        for (auto &x : a) cin >> x;
        for (auto &x : b) cin >> x;

        // e[i] = min(a[i] + e[i+1], b[i] + (e[1] + e[2] + ... + e[n]) / (n+1))
        // say s = e[1] + ... + e[n] is fixed
        // then computing all e[i] is trivial
        // let s' = sum of computed e[i]
        // if s' < s, s is too high
        // if s' > s, s is too low
        auto calc = [&] (long double s) {
            long double actual = 0, cur = 0;
            for (int i = n-1; i >= 0; --i) {
                cur = min(cur + a[i], s / (n + 1) + b[i]);
                actual += cur;
            }
            return pair{actual, cur};
        };

        long double lo = 0, hi = 1e18;
        for (int itr = 0; itr < 100; ++itr) {
            long double mid = (lo + hi) / 2;
            auto [s, val] = calc(mid);
            if (s < mid) hi = mid;
            else lo = mid;
        }
        cout << fixed << setprecision(15) << calc(lo).second << '\n';
    }
}