MGCHGYM - Editorial

Well here’s what I tried during the contest.

My complexity for query 3 is O(K*W), and for queries 1 and 2 it is O(log N), using a treap.

I tried to optimise as much as I could but the code was giving TLE on one test case.

Can you explain why? :confused:

I personally feel time constraints were a bit too strict.

My solution

1 Like

What is the purpose of all numbers being squarefree?

2 Likes

Ha! I have complexity ~ O(Q \sqrt{N} + QK + PK2^K + WK)! Okay, never mind, it’s something like O(QN). Luckily, it’s very unlikely to reach the worst case and my code was fast enough.

1 Like

My solution is a bit different from the one described in the editorial and has time complexity
O((W + P) \cdot 2^k) for the third kind of queries.

In order to check whether a sum S is possible, we compute the number of ways T(S) of obtaining that sum. The sum S is possible iff T(S) \ne 0.

Let us say we have coins of denominations x_1, x_2, \ldots, x_k, and queries are of the form (S, r_1, r_2, \ldots, r_k), i.e., is it possible to make a sum S using at most r_1 coins of the first type, r_2 coins of the second type, and so on (S \le W).

For this set of denominations, we pre-compute an array P[1 \ldots W], such that P[x] represents the number of ways of obtaining the sum x with no restriction on the number of coins. For the time being, assume that we already have this array; we will explain later how to compute it.

Now the number of ways of obtaining the sum S with the restrictions on the number of coins can be computed using the inclusion-exclusion principle.

T(S, r_1, r_2, \ldots, r_k) = P[S] - Q(S, r_1, r_2, \ldots, r_k),

where Q(S, r_1, r_2, \ldots, r_k) is the number of ways in which at least one of the coins is used more than its specified bound. By inclusion-exclusion over the non-empty subsets \{i_1, i_2, \ldots, i_u\} of coin types,

Q(S, r_1, r_2, \ldots, r_k)

= \sum (-1)^{u + 1} \cdot \text{(number of ways of obtaining } S \text{ such that coins } i_1, i_2, \ldots, i_u \text{ violate their bounds)}

= \sum (-1)^{u + 1} P[S - (r_{i_1} + 1) x_{i_1} - (r_{i_2} + 1) x_{i_2} - \ldots - (r_{i_u} + 1) x_{i_u}]

(The second step works because a way in which coin i exceeds its bound uses at least r_i + 1 coins of type i; stripping those r_i + 1 coins leaves an unrestricted way of forming the remaining sum, which is exactly what P counts.)

Hence, T(S, r_1, r_2, \ldots, r_k) can be computed in O(2^k) time using the P array.
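For concreteness, here is a minimal sketch of this O(2^k) query step (my own illustrative C++, not the editorial's code; it assumes the array P has already been built modulo a prime MOD, as described below):

#include <cstdint>
#include <vector>

const int64_t MOD = 10007; // the prime 10^4 + 7 mentioned below

// x[i] = denomination, r[i] = allowed count, S = target sum (S <= W).
// Returns T(S) mod MOD; the sum S is reachable iff the true count is non-zero.
int64_t countWays(int S, const std::vector<int>& x, const std::vector<int>& r,
                  const std::vector<int64_t>& P) {
    int k = (int)x.size();
    int64_t total = 0;
    for (int mask = 0; mask < (1 << k); ++mask) { // subset of "violating" coin types
        long long rest = S;
        int u = 0;
        for (int i = 0; i < k; ++i)
            if (mask >> i & 1) { rest -= (long long)(r[i] + 1) * x[i]; ++u; }
        if (rest < 0) continue;
        int64_t term = (u % 2 == 0) ? P[rest] : MOD - P[rest]; // sign (-1)^u
        total = (total + term) % MOD;
    }
    return total;
}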

Now let us see how to compute the array P.
If there were a single coin denomination x, this array would be easy to compute:

P[i] = 1 if i is divisible by x, and 0 otherwise.

Suppose we already have such an array Q for the coins \{x_2, x_3, \ldots, x_k\}, and we want to use it to compute the array P for the coin set \{x_1, x_2, \ldots, x_k\}:

P[i] = Q[i] + Q[i - x_1] + Q[i - 2x_1] + \ldots

Note that

P[i] = Q[i] + P[i - x_1]

Hence, the array P can be computed in O(W) time using the array Q.

We have k denominations, and we would want to compute the pre-computation array for each subset of denominations; hence, the pre-computation can be done in O(W \cdot 2^k) time.

Alternatively, we can always use the denomination set which consists of all k types, and add the bound r_i = 0 for denominations which are missing from the set. This means we only need to pre-compute the array for the coin set consisting of all denominations, which can be done in O(Wk) time.
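Here is a minimal sketch of that O(Wk) pre-computation (again my own illustrative code). The in-place forward pass implements P[i] = Q[i] + P[i - x_1] exactly, with Q being the array's state before the current denomination is processed:

#include <cstdint>
#include <vector>

const int64_t MOD = 10007; // same prime as above

// Builds P for the full denomination set x (counts taken modulo MOD).
std::vector<int64_t> buildP(int W, const std::vector<int>& x) {
    std::vector<int64_t> P(W + 1, 0);
    P[0] = 1; // one way to form the sum 0: take no coins
    for (int xi : x)                    // add one denomination at a time
        for (int i = xi; i <= W; ++i)   // forward pass = unbounded coin use
            P[i] = (P[i] + P[i - xi]) % MOD;
    return P;
}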

Also note that we need to do all the computation modulo some prime (I chose 10^4 + 7, mainly to reduce the number of modulo operations).

Of course, this means that the constraint P \le 1000 can be relaxed, and we could probably allow up to 10^5 queries. On the other hand, this approach has exponential complexity in terms of k, so even if we increase the value of k slightly (say to 15), this approach will fail.

8 Likes

CodeChef: Practical coding for everyone
This solution runs fine for most test cases but failed on one. Can somebody explain?

In the first optimization, k<=10.

We can actually do that knapsack in O(PWK) time instead of O(PWK \log N / 32). Although it does not run faster (at least not my implementation), it's better asymptotically.

Suppose we have K different values: \{(x_1, c_1), (x_2, c_2), \ldots, (x_K, c_K)\}.

x_k denotes the value and c_k denotes the quantity.

Then this pseudo-code calculates the dp array that we want:

f = [1, 0, 0, ..., 0]
for i = 1 to K:
  nf = [0, 0, 0, ..., 0]
  for j = 0 to W:
    for t = 0 to c[i]:
      if t * x[i] > j: break
      nf[j] |= f[j - t * x[i]]
  f = nf

Well, this obviously takes O(PWKN) time, but we can easily get rid of the third loop. The key insight is this observation: during any update of the array nf, j % x[i] = (j - t * x[i]) % x[i]. This means that if we group the indices according to their remainder modulo x[i], each group is independent of the others. That's why we can calculate them separately.

Here is the implementation for more details : https://www.codechef.com/viewsolution/8433813
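For a self-contained picture, here is a sketch of that remainder-class pass (my own illustrative code, not the linked solution): within each residue class we track the most recent reachable position, which replaces the innermost loop with a sliding-window check.

#include <vector>

// x[i]: value, c[i]: available count; returns reachability of sums 0..W.
std::vector<char> boundedKnapsack(int W, const std::vector<int>& x,
                                  const std::vector<long long>& c) {
    std::vector<char> f(W + 1, 0);
    f[0] = 1;
    for (size_t i = 0; i < x.size(); ++i) {
        std::vector<char> nf(W + 1, 0);
        for (int r = 0; r < x[i] && r <= W; ++r) { // one residue class at a time
            long long last = -1, step = 0;         // step index of last reachable f
            for (int j = r; j <= W; j += x[i], ++step) {
                if (f[j]) last = step;             // f[j] itself is the t = 0 case
                // nf[j] is reachable iff f[j - t*x[i]] holds for some 0 <= t <= c[i]
                if (last >= 0 && step - last <= c[i]) nf[j] = 1;
            }
        }
        f = std::move(nf);
    }
    return f;
}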


Can’t we use a segment tree to solve this one?

For the first and third queries it will take O(log n) time, and for the second query it will take O(X + log n), where X = the number of elements to be reversed.

I suppose the O(X + log n) part is wrong, because in the worst case, suppose you want to reverse n elements; then you have to rebuild the whole tree, which can take O(n + n log n). So, according to me, the second query would take O(X + X log X) with the segment tree approach.

[Solved]
The sample code in the editorial computes dp[] in reverse order for each summand (which is the right thing to do for this job).

But in the provided code, we have something like:

for(int j = 0; j <= limit - complete_shifts; j++)

It seems to be doing the job in the opposite direction.
I am confused whether it is really working in the opposite direction, or whether I am missing something.
Any help would be greatly appreciated.

[Explanation:]
I missed the fact that there are two arrays, reachable_sum[] and shifted_mask[], which are independent.
Since we loop over shifted_mask[] and then update reachable_sum[], the above loop direction works here :slight_smile: and it is logical.


Probably nothing.

1 Like

Test cases are definitely weak. A lot of people had similar O(n*W) solutions for query 3 and got 50 points. I think CodeChef should rejudge the solutions after revising the test cases.

https://www.codechef.com/viewsolution/8535260

Seriously what is up with the square-freeness :stuck_out_tongue:

And it is a shame that O(PWK) does not pass but O(PWK \log N / 32) passes.

The 1.5-second time limit is too harsh, considering that the setter’s solution runs in 1.45 seconds.

Don’t you think it is a bad idea to set the time limit to (time spent by the setter’s solution) + epsilon, even though it is a 10-day contest?

4 Likes

I got Accepted with O(PWK), but I had to do multiple code optimizations to get AC (and, even so, I had a runtime of 1.48 sec on one of the test cases :slight_smile: ). On the other hand, I had O(sqrt(N)) for the other operations (I didn’t use treaps), so I had to optimize the O(PWK) part even more to compensate for the slightly slower operations:

1) sort the K values in increasing order of value * count(value);

2) stop the DP once we have formed the sum W (obvious);

3) if value * count(value) >= W, then you can ignore the count and do the DP as if count = infinite;

4) iterate only up to the maximum sum formed so far.

A sketch combining these ideas follows below.
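This is a minimal sketch of optimizations 1)-4), combined with the remainder-class pass described earlier in the thread for the bounded case (my own illustrative code, not the poster's actual solution; all names are invented):

#include <algorithm>
#include <utility>
#include <vector>

// f[s] = 1 iff the sum s is reachable; items holds (value, count) pairs.
std::vector<char> reachableSums(int W, std::vector<std::pair<int, long long>> items) {
    // 1) sort by value * count so "cheap" denominations come first
    std::sort(items.begin(), items.end(), [](const auto& a, const auto& b) {
        return (long long)a.first * a.second < (long long)b.first * b.second;
    });
    std::vector<char> f(W + 1, 0), nf;
    f[0] = 1;
    long long maxSum = 0;                     // 4) largest sum formed so far
    for (auto [value, count] : items) {
        if (f[W]) break;                      // 2) stop once W is reachable
        if ((long long)value * count >= W) {  // 3) count is effectively infinite:
            for (int j = value; j <= W; ++j)  //    plain unbounded forward pass
                f[j] |= f[j - value];
        } else {
            int hi = (int)std::min<long long>(W, maxSum + (long long)value * count);
            nf.assign(W + 1, 0);
            for (int r = 0; r < value && r <= hi; ++r) { // remainder classes
                long long last = -1, step = 0;
                for (int j = r; j <= hi; j += value, ++step) {
                    if (f[j]) last = step;
                    if (last >= 0 && step - last <= count) nf[j] = 1;
                }
            }
            f.swap(nf);
        }
        maxSum = std::min<long long>(W, maxSum + (long long)value * count);
    }
    return f;
}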

I’m also disappointed that the square-freeness was bogus :slight_smile: I was looking forward to some fancy mathematical properties of the knapsack problem when all the values are square-free :slight_smile: On the other hand, Googling this didn’t provide anything useful, so I had suspected the square-freeness was useless.

2 Likes

Well, I tried all of these individually, but it seemed to increase the time taken by my code (weird).

I didn’t try all of them together in one code though.

Also, I had used treaps, so queries 1 and 2 were O(log N) per query.

Can you check out my code and tell me how to optimise it further?
It TLEs on one test :confused: (link given in the comment above)

Three small optimizations for the DP part:

  1. Don’t iterate up to w in each iteration, just up to the currently maximal reachable value (sorting the weights by maximal obtainable value may further decrease the runtime).

  2. The first iteration can be done directly, without iterating over the whole array.

  3. Similar to 2), the last iteration can be replaced by just looking at the values w - k*w_i (see the sketch below).
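A tiny sketch of optimization 3 (my own illustrative code): with f built from all denominations except the last one (x, c), only the positions w - t*x need to be inspected.

#include <vector>

// f[s] = 1 iff sum s is reachable using all denominations except the last.
// Returns whether the target w is reachable once the last item (x, c) is added.
bool lastItemCheck(const std::vector<char>& f, int w, int x, long long c) {
    for (long long t = 0; t <= c && t * x <= w; ++t)
        if (f[w - t * x]) return true; // use exactly t coins of the last type
    return false;
}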

1 Like

We actually do not need dp for this one. I did it using segment trees and bitsets.
Maintain a segment tree as usual, but this time, instead of saying \text{left}(\text{nodes}[i]) = \text{nodes}[2*i] etc., explicitly store pointers to the left and right children. Also, for each node, store a bool indicating whether it is reversed or not.
During preprocessing, at each node we have an unordered set of all elements in its range. Since each element occurs in exactly \log N levels, the extra overhead is \mathcal O(N \log N).
When you have to update an element, traverse up the tree, deleting the old value from the unordered sets and inserting the new value. This takes \mathcal O(\log N) per update.
For reversals, just mark the node as reversed and exchange its left and right pointers + lazy propagation: whenever we “touch” a reversed node, we propagate its reversal to its children.
In effect, a reversal takes amortized \mathcal O(\log N) time.
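A minimal sketch of those lazy-reversal mechanics (my own illustrative code; names are invented):

#include <unordered_set>
#include <utility>

struct Node {
    Node *left = nullptr, *right = nullptr;
    bool reversed = false;               // lazy flag: this subtree's order is flipped
    std::unordered_multiset<int> values; // all elements in this node's range
};

// Called whenever we "touch" a node: applies the pending reversal one level down.
void push(Node* v) {
    if (!v || !v->reversed) return;
    std::swap(v->left, v->right);        // exchange the child pointers
    if (v->left)  v->left->reversed  = !v->left->reversed;   // defer to children
    if (v->right) v->right->reversed = !v->right->reversed;
    v->reversed = false;
}

// Reversing the whole range under a node is O(1); push() spreads it lazily.
void reverseNode(Node* v) { v->reversed = !v->reversed; }

Note that the values multiset is unaffected by a reversal, since it is order-free; only descents into the tree need to call push() along the way.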

For the query:
We can do this in \mathcal O(N) per query using bitsets; refer to this. It is given that there are at most P of these queries.
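A sketch of that per-query bitset check (my own code; MAXW is an assumed compile-time bound on W + 1, and weights is assumed to hold the values collected from the queried range):

#include <bitset>
#include <vector>

const int MAXW = 100000 + 1; // assumed upper bound on W (problem-dependent)

// Subset-sum feasibility over the individual weights in the queried range.
bool canReach(int W, const std::vector<int>& weights) {
    std::bitset<MAXW> reach;
    reach[0] = 1;              // the empty subset forms sum 0
    for (int w : weights)
        reach |= reach << w;   // either skip or take this weight
    return reach[W];
}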

Overall time complexity is \mathcal O(NP + (N + Q) \log N).