PROBLEM LINK:
Practice
Contest: Division 1
Contest: Division 2
Contest: Division 3
Contest: Division 4
Author: raysh07
Tester: sushil2006
Editorialist: iceknight1093
DIFFICULTY:
Easy
PREREQUISITES:
None
PROBLEM:
You start with X = 1. In one move, you can either add D to X, or multiply X by 2.
Find the minimum number of moves needed to obtain X = N, or report that it's impossible.
EXPLANATION:
To make things simpler to reason about, let’s say that we combine consecutive moves of the same type - so our moves are effectively “multiply by some power of 2” and “add some multiple of D”.
With this reduction, we will end up alternating moves - that is, our sequence of moves will look like:
- Add a_1\cdot D. The current value is X = 1 + a_1\cdot D.
- Multiply by 2^{b_1}. The current value is X = 2^{b_1} + 2^{b_1}\cdot a_1\cdot D.
- Add a_2\cdot D. The current value is X = 2^{b_1} + 2^{b_1}\cdot a_1\cdot D + a_2\cdot D.
- Multiply by 2^{b_2}. The current value is X = 2^{b_1+b_2} + 2^{b_1+b_2}\cdot a_1\cdot D + 2^{b_2}\cdot a_2\cdot D.
- Add a_3\cdot D. The current value is X = 2^{b_1+b_2} + 2^{b_1+b_2}\cdot a_1\cdot D + 2^{b_2}\cdot a_2\cdot D + a_3\cdot D.

\vdots
Observe that apart from the first term (which will be a power of 2), every other term in the above expression will be a multiple of D.
That is, the final value will look like
N = 2^{b_1} + D\cdot \left(a_1\cdot 2^{b_1} + a_2\cdot 2^{b_2} + \ldots + a_k\cdot 2^{b_k}\right),
where b_1 \gt b_2 \gt b_3 \gt\ldots \gt b_k \geq 0, and a_1 \geq 0 while a_2, a_3, \ldots, a_k \gt 0.
(a_1 is allowed to be 0 to account for the very first move being a multiplication rather than an addition).
The cost of ending up with this expression is exactly b_1 + (a_1 + a_2 + \ldots + a_k), because the number of times we multiply by 2 is exactly b_1 while the number of times we add D is exactly
(a_1 + \ldots + a_k).
Our aim is thus to make this expression equal N while minimizing the cost.
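As a sanity check, here's a small Python sketch (with arbitrary illustrative values of D, the a_i, and the b_i, none of which come from the problem) that simulates one such alternating sequence of moves and confirms both the closed-form value and the cost formula:

```python
# Illustrative values only: D, a_i, b_i are arbitrary here.
D = 5
a = [2, 1, 3]   # a_1, a_2, a_3: sizes of the "add" blocks (a_1 may be 0)
b = [4, 2, 1]   # sizes of the "multiply" blocks

# Simulate the alternating moves directly.
x, moves = 1, 0
for ai, bi in zip(a, b):
    x += ai * D      # perform "add D" a_i times
    moves += ai
    x <<= bi         # perform "multiply by 2" b_i times
    moves += bi

# Closed form: the exponent attached to a_i is the suffix sum of the b's.
exps = [sum(b[i:]) for i in range(len(b))]
value = 2 ** exps[0] + D * sum(ai * 2 ** e for ai, e in zip(a, exps))
cost = exps[0] + sum(a)
assert (x, moves) == (value, cost)   # both give 1478 and 13 here
```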
Observe that the value of b_1 cannot be too large: since we require 2^{b_1} \le N and 2^{61} \gt 10^{18} \ge N, we surely must have b_1 \le 60.
So, let’s fix the value of b_1 in [0, 60].
Once that’s done, the remaining expression is D\cdot (2^{b_1}\cdot a_1 + 2^{b_2}\cdot a_2 + \ldots + 2^{b_k}\cdot a_k), and this must equal (N - 2^{b_1}).
Note that if (N - 2^{b_1}) is not a multiple of D, this is impossible, so we can ignore such b_1 entirely.
That leaves us with the case where (N - 2^{b_1}) is indeed a multiple of D.
Here, let R = \frac{N - 2^{b_1}}{D} be the sum we want to reach.
We now need to minimize a_1 + \ldots + a_k while making 2^{b_1}\cdot a_1 + \ldots + 2^{b_k}\cdot a_k = R.
One way of looking at this is, we have with us the set of values \{2^0, 2^1, \ldots, 2^{b_1}\}, and we can choose each power of 2 as many times as we like - with our goal being to obtain a sum of R while minimizing the number of powers of 2 chosen.
With this perspective, it’s easy to see that picking powers greedily is optimal: that is, take as many copies of 2^{b_1} as possible, then as many copies of 2^{b_1-1} as possible, then copies of 2^{b_1-2}, and so on. Greedy works here because any two copies of 2^i (with i \lt b_1) can always be exchanged for a single copy of 2^{i+1}, strictly reducing the count.
This leads to a simple algorithm:
- Iterate i in descending order, from b_1 down to 0.
- Compute the number of copies of 2^i that can be taken from R: this simply equals \left\lfloor \frac{R}{2^i} \right\rfloor.
- Add this count to the answer, and replace R with R\bmod 2^i to account for subtracting all these copies.
The count obtained this way, plus b_1, gives the number of moves required for this fixed value of b_1.
Try every b_1 in [0, 60] and take the minimum cost across all of them to obtain the final answer.
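As a sketch, the greedy count for one fixed b_1 can be written as follows (min_additions is a hypothetical helper name; R is the quotient defined above):

```python
def min_additions(R, b1):
    # Greedily write R as a sum of powers of 2 from {2^0, ..., 2^b1},
    # largest power first, returning the number of terms used.
    count = 0
    for i in range(b1, -1, -1):
        count += R >> i       # take floor(R / 2^i) copies of 2^i
        R &= (1 << i) - 1     # keep R mod 2^i
    return count
```

The move count for this choice of b_1 is then b_1 + min_additions(R, b_1).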
Once b_1 is fixed, we run a single loop in \mathcal{O}(b_1) to compute the addition cost.
Since b_1 \le \log N, the overall complexity is \mathcal{O}(\log^2 N), which is fast enough for the constraints.
It’s possible to improve the time complexity further, to \mathcal{O}(\log N).
To achieve this, note that after fixing b_1 and taking as many copies of 2^{b_1} as possible out of R = \frac{N - 2^{b_1}}{D}, the remainder is less than 2^{b_1}, so every smaller power will end up being chosen at most once.
This means the number of smaller powers used is simply the number of ones in the binary representation of the remaining value, which can be computed in constant time (for example, by the __builtin_popcountll function in C++).
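For instance, a Python version of this shortcut might look like the following (int.bit_count requires Python 3.10+; bin(r).count('1') is the portable equivalent):

```python
def min_additions_fast(R, b1):
    # floor(R / 2^b1) copies of the largest power, plus one copy
    # per set bit of the remainder R mod 2^b1.
    return (R >> b1) + (R & ((1 << b1) - 1)).bit_count()
```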
TIME COMPLEXITY:
\mathcal{O}(\log^2 N) per testcase.
CODE:
Editorialist's code (PyPy3)
```python
for _ in range(int(input())):
    n, d = map(int, input().split())
    ans = n + 1  # sentinel: any valid answer is smaller than this
    for i in range(63):  # fix b_1 = i
        x = 2 ** i
        if x > n: break
        y = n - x
        if y % d != 0: continue  # N - 2^i must be a multiple of D
        y //= d                  # R, the sum of powers of 2 to form
        cost = i                 # i multiplications by 2
        for j in reversed(range(i + 1)):  # greedy, largest power first
            cost += y // (2 ** j)
            y %= 2 ** j
        ans = min(ans, cost)
    if ans == n + 1: ans = -1  # no b_1 worked: N is unreachable
    print(ans)
```