 # DAKIMAKU - Editorial

Setter: Yuriy Fedorov
Tester: Aryan
Editorialist: Taranpreet Singh

Easy-Medium

# PREREQUISITES

Binary search and Dynamic Programming

# PROBLEM

The problem statement is involved; please refer to the original problem statement.

# QUICK EXPLANATION

• We can binary search on the minimum time needed to distribute dakimakuras. Then, we have to check whether we can distribute all dakimakuras within time T.
• The cities shall be partitioned into segments [L_1, R_1], [L_2, R_2], \ldots, [L_K, R_K], where all cities in segment [L_i, R_i] are served by a branch at city R_i. Each segment operates independently of the others.
• Let’s define f(L, R, T) as the minimum productivity needed at a branch at city R to serve all cities from L to R within time T.
• We can use dynamic programming within each iteration of the binary search. Specifically, DP_i = \min_j(DP_{j-1} + f(j, i, T)).

# EXPLANATION

At first glance there is no immediate way to approach this problem: no closed-form equations, and no parameter with a range small enough to brute force.

### Binary Search on Minimum Time Needed

However, we can notice that there exists some time T such that it is possible to distribute dakimakuras in any time T' \geq T with cost up to X, and it is not possible in any time T' \lt T. We need to compute this minimal T.

As the conditions for binary search are satisfied, we can run a binary search.

Now, the problem becomes, given time T, compute the minimum cost of opening branches to distribute dakimakuras in time T.
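As a sketch, the outer binary search can look like the following (a minimal illustration; `minFeasibleTime` and `feasible` are illustrative names, not from the original solutions, and `feasible` stands for the check described in the rest of this editorial):

```cpp
#include <functional>

// Smallest T in [lo, hi] such that feasible(T) holds, assuming feasibility is
// monotone in T: once distribution is possible in time T, it is also possible
// in any larger time.
long long minFeasibleTime(long long lo, long long hi,
                          const std::function<bool(long long)> &feasible) {
    while (lo < hi) {
        long long mid = lo + (hi - lo) / 2;
        if (feasible(mid)) hi = mid;  // time mid suffices, try smaller times
        else lo = mid + 1;            // time mid is too short
    }
    return lo;
}
```

The setter's code implements the same loop inline with bounds `l = -1, r = inf` and a `check` predicate.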

### Minimum Cost to distribute dakimakuras in time T

Let’s focus on the distribution system now. Let’s assume branches are open at cities c_1, c_2 \ldots c_k.

We can see that these branches divide the whole city into segments [1, c_1], [c_1+1, c_2] \ldots [c_{k-1}+1, c_k], where cities in segment [c_{i-1}+1, c_i] are served by branch in city c_i.

Let us define F(x) as the minimum cost of opening branches such that all people in the first x cities are served within time T and there is a branch at city x. We are looking to compute F(N). Also, we have F(0) = 0.

Now, if we open a branch at city x and the previous branch was opened at city y, then the cost to serve people in the first x cities can be written as F(y) + f(y+1, x, T), where f(l, r, T) denotes the minimum productivity needed at the branch at city r to serve all people in cities [l, r] within time T.

Hence, we have the DP transition \displaystyle F(x) = \min_y (F(y-1) + f(y, x, T))

The problem now reduces to computing f(l, r, T)
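The DP layer described above can be sketched as follows (a hedged illustration: `minTotalCost` and `UNREACHABLE` are names introduced here, and `f` is passed in as a callback standing for the per-segment productivity function derived in the next section):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

const long long UNREACHABLE = 1e18;

// F[x] = minimum total cost of branches so that the first x cities are served
// within the fixed time T, with a branch at city x (cities are 1-indexed).
// f(l, r) = minimum productivity of a single branch at r serving cities [l, r].
long long minTotalCost(int n, const std::function<long long(int, int)> &f) {
    std::vector<long long> F(n + 1, UNREACHABLE);
    F[0] = 0;
    for (int x = 1; x <= n; ++x)
        for (int y = 1; y <= x; ++y)   // previous branch served cities up to y-1
            if (F[y - 1] < UNREACHABLE)
                F[x] = std::min(F[x], F[y - 1] + f(y, x));
    return F[n];
}
```

For instance, with f(l, r) = r - l + 1 the best partition of n cities always costs n, since every partition's segment lengths sum to n.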

### Minimum cost to serve a range of cities

Now we have an interval [l, r], a single branch at city r, and time T; we want to find the minimum productivity needed to serve all people in these cities within time T.

One decent way to proceed is to binary search on the productivity, which is a valid idea, but it makes the time complexity too high.

We can see that we now have pairs (C_1, T_1), (C_2, T_2), \ldots, (C_K, T_K), where K = r-l+1 and T_i \lt T_{i+1}, denoting that C_i people reach city r at time T_i, and we need to find the minimum productivity to serve them all within time T.

Now, the branch may sometimes serve some people, then have to wait for more people to arrive, then serve again, and so on. Whenever the queue is non-empty, it is optimal to serve rather than wait.

Let’s assume we need to process P people within time T, which implies that the productivity must be at least \displaystyle \left\lceil \frac{P}{T} \right\rceil.

Suppose tuple (C_x, T_x) is the last tuple before which the branch had to wait for people to arrive; call such a tuple the special tuple.

All people in earlier tuples are already served. The current time is T_x, so the minimum productivity at which everyone would be served within time T is \displaystyle \left\lceil \frac{\sum_{j = x}^K C_j}{T - T_x} \right\rceil. Intuitively, you can think of it as: wait until time T_x, and then serve \displaystyle \sum_{j = x}^K C_j people in the remaining T - T_x time.

Hence, let’s consider each tuple (C_i, T_i) as the special tuple and compute \displaystyle \left\lceil \frac{\displaystyle\sum_{j = i}^K C_j}{T - T_i}\right\rceil. We can now simply take the maximum of these values as the productivity needed.
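A minimal sketch of this computation (assuming the tuples are given as two parallel arrays sorted by arrival time, with `people[i]` reaching the branch at time `arrive[i]`; the names and the `IMPOSSIBLE` sentinel are illustrative, not from the original code):

```cpp
#include <algorithm>
#include <vector>

const long long IMPOSSIBLE = 4e18;  // sentinel: cannot serve within time T

// Minimum productivity for the single branch of a segment: try each tuple as
// the "special" one and take the maximum of ceil(suffix of people / time left).
long long minProductivity(const std::vector<long long> &people,
                          const std::vector<long long> &arrive, long long T) {
    long long best = 0, suffix = 0;
    for (int i = (int)people.size() - 1; i >= 0; --i) {  // latest arrivals first
        suffix += people[i];        // everyone arriving at time arrive[i] or later
        if (suffix == 0) continue;  // no one to serve from this point on
        if (arrive[i] >= T) return IMPOSSIBLE;  // they arrive too late
        long long remaining = T - arrive[i];
        best = std::max(best, (suffix + remaining - 1) / remaining);  // ceiling
    }
    return best;
}
```

For example, two people at the branch at time 0 plus two more arriving at time 2 can be served by time 4 with productivity 1.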

### Conclusion

Hence, we binary search on the time T; for each T, we divide the cities into segments, where each segment is served by a branch at its rightmost city, compute the minimum productivity needed at that branch, and check whether the total number of coins needed is at most X.

# TIME COMPLEXITY

The outer binary search takes O(log(MAX)) iterations. There are O(N^2) DP transitions, leading to O(N^2) calls to the function f(l, r, T). Function f works in O(N) time, leading to an overall time complexity of O(N^3 \cdot log(MAX)) per test case.

# SOLUTIONS

Setter's Solution
#include <bits/stdc++.h>

using namespace std;

const long long inf = 1e18 + 228;

long long calc(int i, int j, vector<long long> &sum_c, vector<long long> &sum_t, long long val) {
long long tmp_val = 0;
if (sum_c[i + 1] - sum_c[j] == 0)
return 0;
if (sum_t[i] - sum_t[j] >= val)
return inf;
for (int k = j; k <= i; ++k) {
long long s1 = sum_c[k + 1] - sum_c[j], s2 = sum_t[i] - sum_t[k];
tmp_val = max(tmp_val, (s1 + val - s2 - 1) / (val - s2));
}
return tmp_val;
}

bool check(int n, int x, vector<int> &c, vector<int> &t, long long val) {
vector<long long> dp(n + 1, inf);
vector<long long> sum_c(n + 1);
vector<long long> sum_t(n);
for (int i = 0; i < n; ++i) {
sum_c[i + 1] = c[i] + sum_c[i];
if (i != n - 1)
sum_t[i + 1] = t[i] + sum_t[i];
}
dp[0] = 0;
for (int i = 0; i < n; ++i) {
for (int j = 0; j <= i; ++j) {
dp[i + 1] = min(dp[i + 1], dp[j] + calc(i, j, sum_c, sum_t, val));
}
}
return dp.back() <= x;
}

void solve() {
int n, x;
cin >> n >> x;
vector<int> c(n), t(n - 1);
for (int i = 0; i < n; ++i)
cin >> c[i];
for (int i = 0; i < n - 1; ++i)
cin >> t[i];

long long l = -1, r = inf;
while (r - l > 1) {
long long mid = (l + r) / 2;
if (check(n, x, c, t, mid))
r = mid;
else
l = mid;
}
cout << r << '\n';
}

int main() {
ios_base::sync_with_stdio(0);
cout.tie(0);
cin.tie(0);

int t;
cin >> t;
while (t--)
solve();
}

Tester's Solution
/* in the name of Anton */

/*
Compete against Yourself.
Author - Aryan (@aryanc403)
Atcoder library - https://atcoder.github.io/ac-library/production/document_en/
*/

#ifdef ARYANC403
#else
#pragma GCC optimize ("Ofast")
#pragma GCC target ("sse,sse2,sse3,ssse3,sse4,popcnt,abm,mmx,avx")
//#pragma GCC optimize ("-ffloat-store")
#include<bits/stdc++.h>
#define dbg(args...) 42;
#endif

using namespace std;
#define fo(i,n)   for(i=0;i<(n);++i)
#define repA(i,j,n)   for(i=(j);i<=(n);++i)
#define repD(i,j,n)   for(i=(j);i>=(n);--i)
#define all(x) begin(x), end(x)
#define sz(x) ((lli)(x).size())
#define pb push_back
#define mp make_pair
#define X first
#define Y second
#define endl "\n"

typedef long long int lli;
typedef long double mytype;
typedef pair<lli,lli> ii;
typedef vector<ii> vii;
typedef vector<lli> vi;

const auto start_time = std::chrono::high_resolution_clock::now();
void aryanc403()
{
#ifdef ARYANC403
auto end_time = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> diff = end_time-start_time;
cerr<<"Time Taken : "<<diff.count()<<"\n";
#endif
}

long long readInt(long long l, long long r, char endd) {
long long x=0;
int cnt=0;
int fi=-1;
bool is_neg=false;
while(true) {
char g=getchar();
if(g=='-') {
assert(fi==-1);
is_neg=true;
continue;
}
if('0'<=g&&g<='9') {
x*=10;
x+=g-'0';
if(cnt==0) {
fi=g-'0';
}
cnt++;
assert(fi!=0 || cnt==1);
assert(fi!=0 || is_neg==false);

assert(!(cnt>19 || ( cnt==19 && fi>1) ));
} else if(g==endd) {
if(is_neg) {
x=-x;
}
assert(l<=x&&x<=r);
return x;
} else {
assert(false);
}
}
}
string readString(int l, int r, char endd) {
string ret="";
int cnt=0;
while(true) {
char g=getchar();
assert(g!=-1);
if(g==endd) {
break;
}
cnt++;
ret+=g;
}
assert(l<=cnt&&cnt<=r);
return ret;
}
long long readIntSp(long long l, long long r) {
return readInt(l,r,' ');
}
long long readIntLn(long long l, long long r) {
return readInt(l,r,'\n');
}
string readStringLn(int l, int r) {
return readString(l,r,'\n');
}
string readStringSp(int l, int r) {
return readString(l,r,' ');
}

void readEOF(){
assert(getchar()==EOF);
}

vi readVectorInt(lli n,lli l,lli r){
if(n==0){
return {};
}
vi a(n);
for(int i=0;i<n-1;++i)
a[i]=readIntSp(l,r);
a[n-1]=readIntLn(l,r);
return a;
}

const lli INF = 1e12;

lli seed;
mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count());
inline lli rnd(lli l=0,lli r=INF)
{return uniform_int_distribution<lli>(l,r)(rng);}

class CMP
{public:
bool operator()(ii a , ii b) //For min priority_queue .
{    return ! ( a.X < b.X || ( a.X==b.X && a.Y <= b.Y ));   }};

void add( map<lli,lli> &m, lli x,lli cnt=1)
{
auto jt=m.find(x);
if(jt==m.end())         m.insert({x,cnt});
else                    jt->Y+=cnt;
}

void del( map<lli,lli> &m, lli x,lli cnt=1)
{
auto jt=m.find(x);
if(jt->Y<=cnt)            m.erase(jt);
else                      jt->Y-=cnt;
}

bool cmp(const ii &a,const ii &b)
{
return a.X<b.X||(a.X==b.X&&a.Y<b.Y);
}

const lli mod = 1000000007L;
// const lli maxN = 1000000007L;

int main(void) {
ios_base::sync_with_stdio(false);cin.tie(NULL);
// freopen("txt.in", "r", stdin);
// freopen("txt.out", "w", stdout);
// cout<<std::fixed<<std::setprecision(35);
while(T--)
{

// reverse(all(a));a.pb(0);reverse(all(a));
vi tt(n+1),a_sum(n+1);
for(lli i=0;i+1<n;++i){
tt[i+2]=e[i]+tt[i+1];
}
for(lli i=0;i<n;++i)
a_sum[i+1]=a_sum[i]+a[i];

dbg(a,e,tt,a_sum);
auto checkCostRange = [&](lli l,lli r,lli tugriks){
for(lli i=r;i>=l;--i){
if(a[i]&&tugriks==0)
return INF;
}
if(tugriks==0)
return 0LL;
lli ans=0;
lli c=0,t=0;
for(lli i=r;i>=l;--i)
{
const lli delta = min((c+tugriks-1)/tugriks,tt[r]-tt[i]-t);
if(c)
ans=max(ans,t+delta);
c=max(0LL,c-delta*tugriks);
t=tt[r]-tt[i];
c+=a[i];
dbg(c,t);
}

if(c)
{
const lli delta = (c+tugriks-1)/tugriks;
ans=max(ans,t+delta);
t+=delta;
}

return ans;
};

auto findCost=[&](lli l,lli r,lli maxTime){
if(checkCostRange(l,r,INF)>maxTime)
return INF;
lli LT = -1, RT = INF;
while(RT-LT>1){
const lli MT = (LT+RT)/2;
if(checkCostRange(l,r,MT)<=maxTime)
RT=MT;
else
LT=MT;
}
return RT;
};

auto findCost2=[&](lli l,lli r,lli maxTime){
if(a_sum[r]==a_sum[l-1])
return 0LL;
if(tt[r]-tt[l]>=maxTime)
return INF;
lli ans=0;
for(lli i=l;i<=r;++i){
const lli c=a_sum[i]-a_sum[l-1],t= maxTime - (tt[r]-tt[i]);
ans=max(ans,(c+t-1)/t);
}
return ans;
};

auto chkTime=[&](lli maxTime){
vi dp(n+1,INF);
dp[0]=0;
for(lli r=1;r<=n;++r)
for(lli l=1;l<=r;++l)
dp[r]=min(dp[r],dp[l-1]+findCost2(l,r,maxTime));
return dp[n]<=maxTugriks;
};

{
lli L=-1,R=INF;
while(R-L>1){
const lli M = (L+R)/2;
if(chkTime(M))
R=M;
else
L=M;
}
cout<<R<<endl;
}
}   aryanc403();
return 0;
}

Editorialist's Solution
import java.util.*;
import java.io.*;
class DAKIMAKU{
//SOLUTION BEGIN
void pre() throws Exception{}
void solve(int TC) throws Exception{
int N = ni();
long X = nl();
long[] C = new long[1+N], T = new long[1+N];
for(int i = 1; i<= N; i++)C[i] = nl();
for(int i = 1; i<= N-1; i++)T[i] = nl();
long lo = 1, hi = (long)1e18;
while(lo < hi){
long mid = lo+(hi-lo)/2;
if(cost(N, C, T, mid) <= X)hi = mid;
else lo = mid+1;
}
pn(hi);
}
//Minimum cost to ensure delivery in time X
long cost(int N, long[] C, long[] T, long ti){
long[] DP = new long[1+N];//DP[i] = Minimum coins needed for first i cities, if there's a branch at city i
Arrays.fill(DP, Long.MAX_VALUE/2);
DP[0] = 0;
for(int i = 1; i<= N; i++){
List<long[]> list = new ArrayList<>();
long dist = 0;
for(int j = i; j> 0; j--){
//[j, i] are dealt with by a branch at city i; people of city j reach it at time dist
list.add(new long[]{C[j], dist});
DP[i] = Math.min(DP[i], DP[j-1] + minProductivityNeeded(list, ti));
dist += T[j-1];
}
}
return DP[N];
}
//finding the minimum productivity needed to serve all people upto maxTime
long minProductivityNeeded(List<long[]> people, long maxTime){
long minProd = 0;
long sum = 0;
for(int i = people.size()-1; i >= 0; i--){
sum += people.get(i)[0];
long ti = people.get(i)[1];
if(ti > maxTime)return Long.MAX_VALUE/2;//cannot serve people arriving after maxTime
long di = (maxTime-ti+1);
//sum = number of people arriving at time ti or later
minProd = Math.max(minProd, (sum+di-1)/di);
}
return minProd;
}
//SOLUTION END
void hold(boolean b)throws Exception{if(!b)throw new Exception("Hold right there, Sparky!");}
static boolean multipleTC = true;
void run() throws Exception{
in = new FastReader();
out = new PrintWriter(System.out);
//Solution Credits: Taranpreet Singh
int T = (multipleTC)?ni():1;
pre();for(int t = 1; t<= T; t++)solve(t);
out.flush();
out.close();
}
public static void main(String[] args) throws Exception{
new DAKIMAKU().run();
}
int bit(long n){return (n==0)?0:(1+bit(n&(n-1)));}
void p(Object o){out.print(o);}
void pn(Object o){out.println(o);}
void pni(Object o){out.println(o);out.flush();}
String n()throws Exception{return in.next();}
String nln()throws Exception{return in.nextLine();}
int ni()throws Exception{return Integer.parseInt(in.next());}
long nl()throws Exception{return Long.parseLong(in.next());}
double nd()throws Exception{return Double.parseDouble(in.next());}

FastReader in;PrintWriter out;

class FastReader{
BufferedReader br;
StringTokenizer st;
public FastReader(){
br = new BufferedReader(new InputStreamReader(System.in));
}
public FastReader(String s) throws Exception{
br = new BufferedReader(new FileReader(s));
}

String next() throws Exception{
while (st == null || !st.hasMoreElements()){
try{
st = new StringTokenizer(br.readLine());
}catch (IOException  e){
throw new Exception(e.toString());
}
}
return st.nextToken();
}

String nextLine() throws Exception{
String str = "";
try{
str = br.readLine();
}catch (IOException e){
throw new Exception(e.toString());
}
return str;
}
}
}


Feel free to share your approach. Suggestions are welcomed as always.

This problem had a very easy subtask (when x = 1). The optimal location for the branch would be the last city with non-zero population.


It seemed to me that a solution immediately follows from this subtask (we can compute f, and the DP is simple), but apparently I was mistaken :)


I used the same approach for the binary search on the minimum time needed, but a different approach to evaluate whether everyone can be given a daki in a given time. This was faster, approximately O(N^2 log(N)) overall. I ran speed tests with randomly generated C and T for different N and X. The method works fast enough for N = 500, but with N = 1000 and approximately 10 < X < 1000 it takes several seconds.

At the start, remove all cities with no citizens, adding the time to move to the previous city. This makes subsequent evaluations easier.

Here is the algorithm for the function ‘Evaluate’, which evaluates whether everyone can be given a daki in a given time using no more than X tugriks in total.

Evaluate the number of tugriks required to give every citizen in each city a daki without anyone moving, and the total number of tugriks required.

If this total S <= X (the number available) we have a solution.

Evaluate the number G we have to gain by moving citizens = S - X. When we move the citizens from one city to another, we may be able to gain no more than 1 tugrik compared to leaving them alone. So if G >= N no solution is possible with this total time.

Define a stack containing a pair of the item index and the value of G there, which may be added to as described below. Initially put the last city and initial G on the stack.

Work backwards from the end, one city at a time.

If the time to move from the current city to the last to which people have moved is less than the
time available, we may be able to gain one tugrik by moving people from this city.

(My submission during the contest excluded this step, making the method slow for small X.) If the current city requires only one tugrik, then when we move its citizens the number of tugriks required at the destination city remains unchanged. We can readily calculate in a single loop how many cities’ populations we can move in this way within the constraints, and so reduce G by this number. If G has reached 0, we have a solution.

Otherwise evaluate whether we can move from the current city to the last to which people have moved using the combined number of tugriks - 1 with the allowed time. If so, reduce G by 1 and continue to the previous city.

Otherwise evaluate whether we can move from the current city to the last to which people have moved using the combined number of tugriks with the allowed time. We don’t know if this will be helpful overall, but it usually is (when I submitted a version assuming we always move in this way, most tests passed). So add the option of not moving to the stack, and make the move and continue to the previous city.

On reaching a city where we do not move its citizens, check whether to continue. If G is more than the number of cities remaining, or we have visited this city on a previous item in the stack and G is greater than or equal to the lowest value found there on any pass so far, move on to try the next item in the stack, if any.

If we reach the end with G > 0, there is no solution with this total time.

You can see a solution with the full algorithm described here at https://www.codechef.com/submit/complete/48750637 and the one I successfully submitted during the contest at solution 48678947 on CodeChef.
Both passed in 0.02 seconds.


I don’t really understand how publishing editorials on CodeChef works, so I’ll just post my write-up here :)

Let d_i be the depth of vertex i, and val_i the depth of the deepest vertex in the subtree of vertex i; the number of leaves in the tree is cnt. Then, by the definition of the ladder decomposition, the length of the path leaving vertex i is val_i - d_i.

Let the lengths of the paths in our ladder decomposition be l_1 \dots l_k. Then \sum_{i = 1}^{n} (val_i - d_i) = \sum_{i = 1}^{k} l_i \cdot (l_i + 1) / 2 \iff 2 \sum_{i = 1}^{n} (val_i - d_i) = \sum_{i = 1}^{k} l_i^2 + \sum_{i = 1}^{k} l_i. And the latter sum is equal to n - cnt.

Thus, you need to be able to maintain the sum \sum_{i = 1}^{n} (val_i - d_i).

It is clear that when we add a vertex, the maximum depth changes only on a prefix of vertices, so we can use binary ascents (binary lifting) to check the maximum depth. The asymptotics of this solution is O(n \log^2 n).

Let’s solve it more optimally. Let the new vertex be v, and A_i the set of all vertices at depth i. Then note that if there is an old vertex in the subtree of vertex u at depth d_v, then its answer will not change. So we are only interested in vertices of the form lca(v, (A_{d_v})_j). And among them, we need to take the vertex with minimum depth, which is not difficult to do with a set and precomputed entry times for the vertices.


Thanks for the editorial !!

Actually I did it this way and I could fit in the time limit (but it was tight).


Managed to pass the tests in 0.06s : )
https://www.codechef.com/viewsolution/48637726


I tried binary search first and got TLE on the last subtask, so I mentioned that.