It looks like for each i, you’re trying to find a j >= i such that the range [i,j] contains at most k 0’s - is that correct?
If so, it’s computing incorrect values for the sample testcase:
10 2
1 0 0 1 0 1 0 1 0 1
In particular, for i = 6, it’s giving j = 11, leading to m = 6 when it should be 5. So I think it might be as simple as clamping j so it never exceeds n - my randomised tests, with this change, give results that agree with a naive, brute-force approach.
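For reference, here’s a minimal sketch (in Python - I’m assuming the goal is the longest run of 1s after flipping at most k 0s) of a two-pointer version that can’t run past the end of the array, checked against a brute force the same way I tested yours. The function names `longest_ones` and `brute` are just mine, not from your code:

```python
import random

def longest_ones(a, k):
    # Sliding window: keep at most k zeros inside the window [i, j].
    # j is driven by enumerate, so it can never exceed the array bounds.
    best = zeros = i = 0
    for j, x in enumerate(a):
        zeros += (x == 0)
        while zeros > k:        # shrink from the left until <= k zeros
            zeros -= (a[i] == 0)
            i += 1
        best = max(best, j - i + 1)
    return best

def brute(a, k):
    # Naive O(n^2) check: try every starting index i.
    n, best = len(a), 0
    for i in range(n):
        zeros = 0
        for j in range(i, n):
            zeros += (a[j] == 0)
            if zeros > k:
                break
            best = max(best, j - i + 1)
    return best

# Sample testcase from above: n = 10, k = 2
sample = [1, 0, 0, 1, 0, 1, 0, 1, 0, 1]
print(longest_ones(sample, 2))  # prints 5

# Randomised cross-check against the brute force
for _ in range(500):
    a = [random.randint(0, 1) for _ in range(random.randint(1, 12))]
    k = random.randint(0, 3)
    assert longest_ones(a, k) == brute(a, k)
```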
Adding a bit of diagnostic output (like I did ;)) should help you debug.
Also, the question is ambiguous - for the sample input, there appear to be multiple ways of flipping k 0’s that give the same maximum run of 1s (5) - which are you supposed to output?