EXACTSAVINGS - Editorial

PROBLEM LINK:

Practice
Contest: Division 1
Contest: Division 2
Contest: Division 3
Contest: Division 4

Author: mhndalaa
Preparation: jay_1048576
Tester: mexomerf
Editorialist: iceknight1093

DIFFICULTY:

TBD

PREREQUISITES:

Dynamic programming

PROBLEM:

Chef earns X for each hour he works.
There are N gifts, the i-th has price A_i. Each gift can be bought at most once.

Find the minimum number of hours Chef needs to work so that he can buy some (possibly empty) subset of gifts and be left with exactly Z afterwards.

EXPLANATION:

Suppose Chef works for k hours, and buys gifts worth S in total.
We want the condition k\cdot X - S = Z to hold, or k\cdot X = S + Z.

Our aim is to minimize k, which as you can see is the same as minimizing S (provided S+Z is a multiple of X, of course).
That is, our objective is to find the smallest number S such that S can be written as the sum of some subset of A_i, and S+Z is a multiple of X.

Now, finding all values of S (i.e., all possible subset sums) is a classical dynamic programming task: the subset sum problem.
Unfortunately, its complexity of \mathcal{O}(N\cdot sum(A)) is too slow for us.
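
For reference, here is a minimal sketch of that classical DP, just to make the \mathcal{O}(N\cdot sum(A)) cost concrete; the array used is a made-up example, and this is not the intended solution for this problem.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Hypothetical example array, only to illustrate the classical subset-sum DP.
    vector<long long> a = {3, 5, 7};
    long long total = accumulate(a.begin(), a.end(), 0LL);

    // reachable[s] == true  <=>  some subset of the processed elements sums to s.
    vector<bool> reachable(total + 1, false);
    reachable[0] = true;                        // the empty subset
    for (long long v : a)
        for (long long s = total; s >= v; s--)  // go downwards so each element is used at most once
            if (reachable[s - v]) reachable[s] = true;

    // Every s with reachable[s] == true is an achievable subset sum S.
    // The two nested loops do O(N * sum(A)) work, which is what makes this too slow here.
    for (long long s = 0; s <= total; s++)
        if (reachable[s]) cout << s << ' ';
    cout << '\n';
}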

However, it can be modified to fit our use-case.
Notice that we don’t care about the actual sum too much: we’re more concerned about whether S+Z is a multiple of X or not (and only then, minimization).
So, it’s enough to keep information about the subset sums modulo X.
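For example, if X = 5, the sums 3, 8 and 13 all leave remainder 3, so whether S+Z is a multiple of 5 looks exactly the same for each of them; among sums with the same remainder we simply prefer the smallest.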

That is, define f(i, j) to be the smallest subset sum from the first i numbers, such that this sum is j modulo X.
Then, we can either take A_i into the subset or not, so we have:

f(i, j) = \min(f(i-1, j), f(i-1, j-A_i) + A_i)

where the second argument is always taken modulo X (so consider (j-A_i)\mod X in the second function call).
The base cases are, of course, f(0, 0) = 0 and f(0, j) = \infty for j\gt 0.

Notice that this gives us \mathcal{O}(N\cdot X) states, with \mathcal{O}(1) transitions from each, so dynamic programming gives us a complexity of \mathcal{O}(N\cdot X) overall.

Now, the optimal value of S is just f(N, j), where j is the unique remainder modulo X such that (j+Z) \equiv 0 \pmod X.
If this f(N, j) is \infty, no solution exists (so print -1); otherwise the answer is \frac{Z + f(N, j)}{X}.
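
As a quick sanity check, take N = 1, X = 3, Z = 2, A = [2] (a test case that also comes up in the discussion below): the only finite values are f(1, 0) = 0 and f(1, 2) = 2. We need the remainder j with (j + 2) \equiv 0 \pmod 3, i.e. j = 1, and f(1, 1) = \infty, so the answer is -1.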

TIME COMPLEXITY:

\mathcal{O}(N\cdot X) per testcase.

CODE:

Editorialist's code (Python)
for _ in range(int(input())):
    n, x, z = map(int, input().split())
    a = list(map(int, input().split()))
    # dp[j] = smallest subset sum that is congruent to j modulo x; 10**16 acts as infinity
    dp = [10**16]*x
    ndp = [0]*x
    dp[0] = 0
    for y in a:
        # option 1: don't buy this gift
        for i in range(x): ndp[i] = dp[i]
        # option 2: buy it, extending a subset whose sum is (i - y) mod x
        for i in range(x): ndp[i] = min(ndp[i], dp[(i-y)%x] + y)
        for i in range(x): dp[i] = ndp[i]
    # the unique remainder j with (j + z) divisible by x
    want = (-z) % x
    if dp[want] > 10**15: print(-1)
    else: print((dp[want] + z)//x)

Can anyone help me figure out where this solution fails? I did the same thing as written in the editorial.

https://www.codechef.com/viewsolution/1039693877
What’s wrong in this?

this might help:

Test Case: 
1
1 3 2
2 

Expected: 
-1 

Found: 
1

Can anyone help me with this? I used pretty much the same approach as in the editorial.

Can anyone tell me why I am getting TLE even when my code's time complexity looks like a clear O(N*X)?


I think the judging machine behaves strangely on this problem. After encountering a TLE, I spent a lot of time during the contest trying to reduce the time complexity, but failed. After the contest, I viewed the solution and found that it has the same time complexity as mine. I submitted the sample code but it also TLEd. Then I realized that I’d been using “Python 3” as my language. After switching to “PyPy 3”, the sample code passed the tests. After that, I submitted my code again a few times as a “PyPy 3” solution. Some of them passed while the others TLEd, EVEN THOUGH THE CODE IS ABSOLUTELY IDENTICAL. Maybe it’s because the data for testing is randomly generated, but I think it may be improper to let contestants deal with this kind of thing.

I’m new to this platform. If you know a better place for feedback, please kindly forward this message to the organizers of this contest, thanks!

Here is my O(N*X) solution; here is one passed submission and one failed submission, both for the code below.

from copy import deepcopy
t = int(input())
for _ in range(t):
    n, x, z = [int(i) for i in input().strip().split(' ')]
    a = [int(i) for i in input().strip().split(' ')]
    curr = [10**100] * x
    cp = [0] * x
    curr[z % x] = z
    res = 10**100
    for i in a:
        for j in range(x):
            cp[j] = curr[j]
        for j in range(x):  # residual of money required
            if cp[j] != 10 ** 100:
                curr[(j + i) % x] = min(curr[(j + i) % x], cp[j] + i)
    if curr[0] < 10 ** 100:
        print(curr[0] // x)
    else:
        print(-1)

Try using map instead of unordered_map

That’s because the verdict on this specific problem largely depends on the constant factor of your run-time. Some STL data structures allocate space on the heap, which is time-consuming. I would recommend C-style arrays for competitive programming.
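
For concreteness, here is a minimal sketch of the editorial’s DP written with plain arrays in that style; it is only an illustration (not any of the official solutions), and the fixed array size assumes X is at most 10^3, as quoted elsewhere in this thread.

#include <bits/stdc++.h>
using namespace std;

// dp[j] = smallest subset sum congruent to j modulo x; INF means "not achievable".
const long long INF = 1e18;
long long dp[1005], ndp[1005];   // plain C-style arrays, sized on the assumption X <= 1000

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        long long n, x, z;
        scanf("%lld %lld %lld", &n, &x, &z);
        for (int j = 0; j < x; j++) dp[j] = INF;
        dp[0] = 0;                                        // the empty subset has sum 0
        for (int i = 0; i < n; i++) {
            long long v;
            scanf("%lld", &v);
            for (int j = 0; j < x; j++) ndp[j] = dp[j];   // option 1: skip this gift
            for (int j = 0; j < x; j++)                   // option 2: buy this gift
                if (dp[j] != INF)
                    ndp[(j + v) % x] = min(ndp[(j + v) % x], dp[j] + v);
            for (int j = 0; j < x; j++) dp[j] = ndp[j];
        }
        long long want = (x - z % x) % x;                 // remainder j with (j + z) divisible by x
        if (dp[want] == INF) printf("-1\n");
        else printf("%lld\n", (dp[want] + z) / x);
    }
}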

Use C++, it is worth it and you will never regret it.

The time limit was 1 second, with:
Testcases (T): 5e4,
N: 1e5,
X: 1e3.
Does O(T * N * X) = O(5e12) fit into a 1-second time limit? How do we determine whether a given complexity fits the time limit, based on the number of basic operations performed?

Other blogs ignore the number of test cases too, such as,

There’s also the constraint

The sum of N over all test cases won’t exceed 10^5

It’s \mathcal{O}(N\cdot X) for a single testcase, so across all testcases the complexity is bounded above by \mathcal{O}(sum(N) \cdot X) (not just T\cdot N\cdot X).
This is 10^5\cdot 10^3 = 10^8 which is perfectly fine for one second.

I can’t figure out what I’m doing wrong with my implementation for this problem. I’m pretty positive that my logic is correct.
Here’s the code.

#include<cstdio>
#include<vector>
#include<algorithm>
#include<set>
#include<map>
#include<cstring>
#include<cmath>
#include<climits>
#include<iostream>
#include<ctime>
#include<cstdlib>
#include<queue>

#define int long long
#define pb push_back
#define all(a) (a).begin(), (a).end()
#define print(a) for (auto &x : a)cout<<x<<" ";
#define f(i, a, b) for (int i = a; i < b; i++)
#define printv(a,n) f(i,0,n) cout<<a[i]<<" ";

using namespace std;
using vi = vector<int>;
int Bit(int mask, int b) { return (mask >> b) & 1; }

int n, x, z;
vector<int>a;
vector<vector<int>>dp;

bool ok (int i, int sub) {
    if(sub==0) {
        return 1;
    }
    if(dp[i][sub]!=-1) {
        return dp[i][sub];
    }
    if(i>=n) {
        return 0;
    }
    bool pick = 0;
    if(sub>=a[i])
        pick = ok(i+1,sub-a[i]);
    bool np = ok(i+1,sub);

    return dp[i][sub] = pick|np;
}

void solve() {
    cin>>n>>x>>z;

    a = vector<int>(n);
    dp = vector<vector<int>>(n+1,vector<int>(x+1,-1));
    for(int i  = 0;i<n;i++) {
        cin>>a[i];
    }

    if(z%x==0) {
        cout<<z/x<<endl;
    }
    else {
        bool ans = ok(0,x-z%x);
        if(ans==1) {
            cout<<(z+(x-z%x))/x<<endl;
        }
        else {
            cout<<-1<<endl;
        }
    }

    // for(int i = 0;i<n;i++) {
    //     for(int j = 0;j<x;j++) {
    //         cout<<dp[i][j]<<" ";
    //     }
    //     cout<<endl;
    // }
}

signed main() {
    ios_base::sync_with_stdio(false);cin.tie(0);
    int t = 1; 
    cin >> t;
    for(int i = 1;i<=t;i++) {
        solve();
    }
}

Can someone explain how we can find all subset sums within the stated time complexity?