bit manipulation advanced concepts

Can you post your code? You may have miswritten nCk, because it works for me

More specifically:

## My code

```
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const ll mod = 998244353;

long long mpow(long long a, long long b) {
    long long x = 1;
    while (b > 0) {
        if (b & 1) {
            x = (x * a) % mod;
        }
        a = (a * a) % mod;
        b >>= 1;
    }
    return x;
}

long long fact[101];
long long ifact[101];

long long nck(long long n, long long k) {
    // nCk = n! * (k!)^-1 * ((n-k)!)^-1 mod p
    long long dem = (ifact[k] * ifact[n - k]) % mod;
    return (fact[n] * dem) % mod;
}

void solve(int tc) {
    int n = 100;  // was undeclared in the original paste
    fact[0] = fact[1] = 1;
    for (int i = 2; i <= n; i++) {
        fact[i] = (fact[i - 1] * i) % mod;
    }
    ifact[n] = mpow(fact[n], mod - 2);
    for (int i = n - 1; i >= 0; i--) {
        ifact[i] = ((i + 1) * ifact[i + 1]) % mod;
    }
    cout << nck(6, 0) << " " << nck(6, 3) << endl;
}
```

outputs `1 20`

as expected.

Yeah, maybe that's what happened. I actually deleted my code back then and used the GeeksforGeeks code instead. But your approach is much more convenient for me to grasp, so I would like to use this code.

Anything specific (like an advanced concept) you want? I don’t have much of a list of tricks for it.

Can you make a tutorial on the Aho-Corasick algorithm involving tries? I had a hard time understanding it from cp-algorithms. Since you already made a tutorial on KMP, it would now be easier for us to understand it in depth.

You provide both explanation and code, which I find really good for beginners.

Hi @galencolin, could you do Lagrange interpolation, especially power sums? (Basically unconventional concepts, not present on cp-algorithms.)

Reference questions: JETPACK and GCDMASK

I know editorials exist, but they are not clear to me.

Also, if you’re going as far as optimizing addition and subtraction, why not go for ints and (1LL * a * b) for multiplication? I have had significant speedups with this (mostly in NTT questions with tight time constraints).

I’ll give it a shot in the future. Currently I have no idea how that works myself, so it’ll take some time

Thanks for the note on multiplication! I’ll add it, and I didn’t mention it because I didn’t even know that optimization would make a difference.

As of now, I know very, very little about Lagrange interpolation (or any FFT/etc.-related stuff in general) apart from what it actually does. So writing a detailed blog will definitely take some time, since I'd need to learn it intuitively enough to explain it, but I’ll keep it in mind/in queue.

Just to be clear, it’s not the multiplication that optimizes things; it’s the use of int over long long. Using long long in modular arithmetic questions has given me quite a few TLEs (and MLEs).

Yep, got it

@galencolin For computing the inverses of all **i** from 1 to N, you can do it in an easier way:

```
inv[1] = 1;
for (int i = 2; i <= N; ++i)
    inv[i] = (m - (m / i) * inv[m % i] % m) % m;
```

Purely O(N)

Yes, I know about that formula, but the point was actually to avoid it because (at least, I find it) it's harder to remember and less intuitive. Doing it via the factorials is simpler to me, and could be for others too.

Yeah, that may be the case.

@galencolin, nice tutorial. I guess you missed a small condition here:

i.e., b should not be divisible by p for the statement above to hold.

While this is true, I assumed we would only consider b where b < m (because if b ≥ m, you can just take b mod m, which is 0 if m divides b and greater than 0 otherwise).

Nice tutorial… A future tutorial on hashing in depth would be helpful for us.

Yeah, I completely forgot about the fact that b would be less than m…

I found the best explanation of the binary exponentiation algorithm through this website: Binary Exponentiation Algorithm

Yep, sorry, just a copy/paste error. Fixed