 EASY

# PROBLEM:

You are given two positive integers a and b. You have a number x which is initially 0. First, you can add a to x any number of times. After that, you can divide x by b any number of times.

Determine if it is possible to make x equal 1.

# EXPLANATION:

Say it is possible to make x equal to 1 by:

• first adding a to it n times, and then
• dividing it by b, m times.

Mathematically, this can be written as

\frac{n\times a}{b^m} = 1\implies n=\frac{b^m}{a}

Since n is an integer, the equation implies that a divides b^m for some m. This is true if and only if b is divisible by every prime factor of a.

Proof

If there exists some prime p such that p \mid a, then

p \nmid b \implies p \nmid b^m \implies a \nmid b^m

Thus, b must be divisible by every prime factor of a.

Conversely, when every prime factor p of a divides b and m is large enough, the highest power of p that divides b^m will exceed the highest power of p that divides a, for every such prime p, implying a \mid b^m.

All that remains now is to check if b is divisible by each prime factor of a.

Naive solution

We can calculate all prime factors of a in O(\sqrt a) time using trial division. However, the time complexity over all test cases will be

O(T\sqrt a)

which will TLE for the given constraints.
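A minimal sketch of this naive per-test-case check (the function name is my own):

```cpp
// Check whether b is divisible by every prime factor of a,
// using trial division over a: O(sqrt(a)) per call.
bool reachableNaive(long long a, long long b) {
    for (long long p = 2; p * p <= a; ++p) {
        if (a % p != 0) continue;
        if (b % p != 0) return false;  // prime factor p of a does not divide b
        while (a % p == 0) a /= p;     // strip all copies of p
    }
    // What remains of a is either 1 or a single prime larger than sqrt(a).
    return a == 1 || b % a == 0;
}
```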

Faster solution

Let g = \gcd(a, b). While g \ne 1, divide a by g and recompute g. If at the end a equals 1, then b is divisible by each prime factor of a, and not otherwise.

Proof

If there exists some prime p such that p \mid a and p \nmid b, then p \nmid \gcd(a,b), so a will always retain a factor of p and will never become equal to 1.

When p|b, it can be seen that the highest power of p that divides a is successively reduced to 0 on dividing by g, which implies a becomes 1 eventually.
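A sketch of this gcd-based approach (assumes C++17's `std::gcd`; the function name is my own):

```cpp
#include <numeric>  // std::gcd (C++17)

// Repeatedly divide out of a the part it shares with b.
// a reaches 1 iff every prime factor of a also divides b.
bool reachable(long long a, long long b) {
    long long g = std::gcd(a, b);
    while (g != 1) {
        a /= g;              // reduce the exponent of each shared prime factor
        g = std::gcd(a, b);
    }
    return a == 1;
}
```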

The time complexity of this approach can be found in the corresponding section below.

# TIME COMPLEXITY:

Computing the gcd takes O(\log \min(a,b)) time. The number of prime factors in the factorization of a, counted with multiplicity, is \le \log a (why?). Since each division by g reduces the number of prime factors in the factorization of a (until g or a equals 1), the total time complexity is

O(\log\min(a,b)\times\log a)

per test case.
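As a side note, a one-line justification of the \le \log a bound (the "(why?)" above): every prime factor is at least 2, so if a has k prime factors counted with multiplicity,

```latex
a = p_1 p_2 \cdots p_k \;\ge\; 2^k
\quad\implies\quad k \le \log_2 a
```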

# SOLUTIONS:

Editorialist’s solution can be found here


5 Likes

I got it accepted with the naive solution.

2 Likes

https://www.codechef.com/viewsolution/51698307
Can someone tell me what is wrong with my code?

The logic behind my code is that if I can make x equal to some power of b, then and only then can we get 1.

2 Likes

Is there any algorithm to find all unique prime factors of N with lower time complexity than O(\sqrt N), for N = 10^9?

CodeChef really needs to improve their test cases. It has happened repeatedly that naive solutions pass due to weak test cases. This question would not have had nearly as many solves if the test cases had been stronger. In fact, many wouldn't even bother to code the naive solution since they know it would time out. This is extremely unfair to them. It is very difficult to add test cases and judge the solutions again, but every question should be tested properly to ensure this doesn't happen. Please try to improve the test cases.

7 Likes

You can do it in \displaystyle \mathcal{O}(\frac{\sqrt N}{\ln N})

Precompute all primes less than \sqrt {A_{\text{max}}} and then iterate over them: code
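A sketch of that idea (helper names are mine; assumes A_max = 10^9, so the sieve limit is about 31623):

```cpp
#include <vector>

// One-time Sieve of Eratosthenes up to limit ~ sqrt(A_max).
std::vector<int> sievePrimes(int limit) {
    std::vector<bool> composite(limit + 1, false);
    std::vector<int> primes;
    for (int i = 2; i <= limit; ++i) {
        if (composite[i]) continue;
        primes.push_back(i);
        for (long long j = (long long)i * i; j <= limit; j += i)
            composite[j] = true;
    }
    return primes;
}

// Distinct prime factors of n, trying only primes instead of every
// integer up to sqrt(n): roughly sqrt(n)/ln(sqrt(n)) divisions.
std::vector<long long> primeFactors(long long n, const std::vector<int>& primes) {
    std::vector<long long> factors;
    for (int p : primes) {
        if ((long long)p * p > n) break;
        if (n % p == 0) {
            factors.push_back(p);
            while (n % p == 0) n /= p;
        }
    }
    if (n > 1) factors.push_back(n);  // at most one prime factor > sqrt(n) remains
    return factors;
}
```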

2 Likes

In fact, many wouldn’t even bother to code the naive solution since they know it would time out.

Even if a naive solution is too slow, such a naive solution is often very useful for testing the correctness of a more optimized version. I would code it even if I knew that it would likely time out, especially if the only other option is to just give up and do nothing.

It is very difficult to add test cases and judge the solution again

And it would also be very unfair to re-judge solutions without first announcing such a change of the rules before the contest. The existing rules encourage constructing solutions that are just good enough to be accepted, and then moving on to the next problem. Constraints and time limits are the same for everyone.

@cubefreak777 Sir, I thought of a similar approach, but precalculating the primes up to \sqrt{10^9} (approx. 32000, with extra margin) gives approximately 3300 primes; with 10^5 test cases that would be about 10^5 \times 3300 \approx 3 \times 10^8 operations. I know the nuance of breaking the loop early if it exceeds the limit, but would you suggest using this type of approach if required in questions on CF or CC? I was not very sure of this. Would appreciate your help.

No, I was just demonstrating the approach he asked me to elaborate for finding prime factors faster than square root time. It’s definitely not the intended solution.

1 Like

Ohh, thanks a lot then.

I agree coding the naive solution is useful for testing. But would you submit the code if there were penalties for wrong answers (like in Cook-Offs)? Aren't the constraints of a question meant to make you think outside the box, rather than just code the most straightforward solution that exists? Even though this solution just works, it is not the intended solution (the editorialist clearly mentions this). It is also possible that I don't want to waste precious time coding a solution which I know for a fact is too slow, just to get a TLE verdict.
I'm not asking to re-judge all solutions; I understand that is unfair. But shouldn't we have strong test cases to start with? The editorial also mentions that this approach will time out, so shouldn't it time out?

Most of the solutions submitted TLE on T = 100000, a = 982451653 and b = 982451653 (this is valid as per the constraints).

2 Likes

Absolutely agree with Kabra Neel. I made ~5-6 submissions, of which the later 2 were naive but nevertheless correct solutions, and got TLE'd. I thought of implementing an O(n) Sieve of Eratosthenes (the exact same solution which passed for others) but then decided against it, as I thought it would be futile.

Also, this seems to be a trend for a lot of problems these days. If you cannot make proper test cases, why make the constraints tighter? Astoundingly thoughtless.

1 Like

Wow, this is a really clever solution!!!

Although it is glaringly obvious now that I've read your solution, I kept racking my brain, but it didn't occur to me that after all the primes up to \sqrt{10^9} have been divided out of a, there can be at most one factor left, which has to be a prime itself.

1 Like

I think that your mistake was not realizing how close you were to getting accepted. Did you benchmark your code on your computer with a large random testcase?

1 Like

You can also use this for checking for overflow.

3 Likes

No, I did not.

While I do know how to use timers to find the running time of a program, is there any established/preferred way of doing this for CP?
Also, a few more questions along similar lines, if you may please answer:

1. Is there any guide/template/reference which covers these and other such things? (As you pointed out before, I never thought of doing this in a live round of CP; I usually just try to compute complexity by hand.)

2. Also, how should one proceed (if there's any generic way at all, or rather a checklist) once you figure the time complexity is hovering around the allowed limit?

Thanks! That was informative and would be a nice addition to anybody's debugging repertoire.

1 Like

In complexity analysis of the editorial:

The number of prime factors in the factorisation of a is \le \log a

I can’t seem to reason why. Can somebody please explain why?