We have two algorithms for such problems:

  1. For small y (y <= k): a DP solution where dp[x][y] holds the required value:
    dp[x][y] = dp[x+y][y] + a[x].
    This table can only be built for small y due to memory constraints (it takes O(n*k) memory).
    Preprocessing complexity: O(n*k)
    Query complexity: O(1)

  2. For large y (y > k): direct simulation of what is needed, i.e. an iterative approach:
    for (i = x; i <= n; i += y)
    Query complexity is O(n/y), worst case O(n/k).

Worst-case complexity if we use both algorithms with a fixed value of k:
O(n*k) + O(q*1) + O(q*n/k)
(preprocessing, plus O(1) per query with y <= k, plus the worst case where every query falls to the simulation).

The idea is to choose a wise value of k such that worst case complexity of both the algorithms lies within time constraints.
Ex: for k = 100 (with n = q = 300000):
O(300000*100) + O(300000) + O(300000*300000/100) ≈ O(9*10^8)

For k = 200:
O(300000*200) + O(300000) + O(300000*300000/200) ≈ O(5.1*10^8)

This should pass within the given time limits.

We arrive at the minimum complexity at k = sqrt(n): the preprocessing term n*k grows with k while the query term q*n/k shrinks, and since n == q in the constraints the two balance exactly at k = sqrt(n), giving O(n*sqrt(n)) overall.
Reference Solution

Setter Solution

Tester Solution

Choosing k = sqrt(n) gives a better complexity.

Firstly, how did you calculate that 450 million operations pass the 1-2 second time limit?

Also, I've gotten many WAs, and so have many other coders. Are there any corner cases, e.g. with y < 0 or …?

  1. my solutions:
  2. @archazey my algorithm for this problem does about 2e8 operations