ARRANGE - Editorial

PROBLEM LINKS

Practice
Contest

DIFFICULTY

EASY

EXPLANATION

The straightforward approach of writing a function that simply reverses the bits of an index to get the new index suffices (recalling that indices with fewer than k bits are padded with leading zeros). The complexity of this algorithm is O(n log n), where n = 2^k is the length of the string. While this was fast enough for the contest, an O(n) algorithm can be devised using a slightly more clever approach.
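A minimal sketch of this straightforward approach (the function names bitReverse and scrambleNaive are my own, not from the editorial):

```cpp
#include <cassert>
#include <string>

// Reverse the lowest k bits of idx (indices with fewer than k bits
// are implicitly padded with leading zeros).
unsigned bitReverse(unsigned idx, int k) {
    unsigned r = 0;
    for (int j = 0; j < k; ++j) {
        r = (r << 1) | (idx & 1);  // shift the lowest bit of idx into r
        idx >>= 1;
    }
    return r;
}

// O(n log n) scramble: each of the n indices costs O(k) = O(log n).
std::string scrambleNaive(const std::string& a, int k) {
    std::string b(a.size(), ' ');
    for (unsigned x = 0; x < a.size(); ++x)
        b[bitReverse(x, k)] = a[x];
    return b;
}
```

For k = 3, index 3 (binary 011) maps to 110 = 6, matching the table given further down in the thread.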

Let a[] be the original string and b[] be the scrambled string we are to compute. Consider a recursive function scramble(x,n,d,i) that correctly places every character of the length-n substring of a[] starting at index x into its correct place in b[]. Here, d and i are auxiliary variables used to index a location in b[] at the base of the recursion. The base case is n == 1, in which case a[x] is assigned to b[d]. Otherwise, if n > 1, recursively call scramble(x, n/2, d, i*2) followed by scramble(x+n/2, n/2, d+i, i*2). Initially call scramble(0, n, 0, 1) to begin the recursion. One can see that at a particular recursive call at depth j in the recursion, the function is basically taking the j'th most significant bit of x and placing it as the j'th least significant bit of d, which, in effect, reverses the bits.
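The recursion described above can be sketched as follows (the wrapper scrambleLinear is my own addition for convenience):

```cpp
#include <cassert>
#include <string>

// scramble(x, n, d, i): place the length-n block of a[] starting at x
// into its bit-reversed positions in b[]. d accumulates the reversed
// index; i is the bit value contributed at the current depth.
void scramble(const std::string& a, std::string& b,
              unsigned x, unsigned n, unsigned d, unsigned i) {
    if (n == 1) { b[d] = a[x]; return; }
    scramble(a, b, x, n / 2, d, i * 2);             // current bit of x is 0
    scramble(a, b, x + n / 2, n / 2, d + i, i * 2); // current bit of x is 1
}

std::string scrambleLinear(const std::string& a) {
    std::string b(a.size(), ' ');
    scramble(a, b, 0, (unsigned)a.size(), 0, 1);
    return b;
}
```

Tracing "abcd" (k = 2): a[1] lands at b[2] and a[2] at b[1], giving "acbd", consistent with the 2-bit row 0, 2, 1, 3 listed below.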

While the depth of the recursion is O(log n), one sees that only 1 call to the function is at depth 1, 2 are at depth 2, 4 are at depth 3, 8 are at depth 4, and so on. Summing over all depths, the total number of times scramble() is called throughout the recursion is 1 + 2 + 4 + ... + n = 2n - 1, which is only O(n).

For interest’s sake, reversing the bits of a binary number is not just some contrived problem that resides solely in algorithms competitions. It is a fundamental subroutine used in some implementations of the Fast Fourier Transform which is, perhaps, one of the most important algorithms in computing science with applications in digital signal processing and fast integer multiplication.


I observed that the indices of the integers represented by the bits behave as follows:

0 Bit: 0

1 Bit: 0, 1

2 Bit: 0, 2, 1, 3

3 Bit: 0, 4, 2, 6, 1, 5, 3, 7

4 Bit: 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15

…

This means that with 3 bits (K=3), the character at (zero-indexed) position 3 would be at position 6 after scrambling.

It shows that, taking this as a matrix M, each row can be computed from the previous one: for the first half of row i (j < 2^(i-1)), M[i][j] = 2 * M[i-1][j]; the second half of the row, where no corresponding M[i-1][j] exists, is filled by M[i][j] = M[i][j - 2^(i-1)] + 1.

For example, M[3][3] = 2 * M[2][3] = 2 * 3 = 6 and M[3][5] = M[3][5-4] + 1 = M[3][1] + 1 = 4+1 = 5.
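This precomputation might look as follows (a sketch under my own naming; buildRow is not from the post, and it builds only the final row since each row depends only on the previous one):

```cpp
#include <cassert>
#include <vector>

// Build row k of the index matrix: row[j] is the scrambled position of
// index j for a string of length 2^k. Each row doubles the previous
// row's entries in its first half and adds 1 to those in its second half.
std::vector<int> buildRow(int k) {
    std::vector<int> row(1, 0);               // row 0: {0}
    for (int i = 1; i <= k; ++i) {
        int half = 1 << (i - 1);
        std::vector<int> next(1 << i);
        for (int j = 0; j < half; ++j) {
            next[j] = 2 * row[j];             // M[i][j] = 2 * M[i-1][j]
            next[j + half] = next[j] + 1;     // M[i][j] = M[i][j - 2^(i-1)] + 1
        }
        row.swap(next);
    }
    return row;
}
```

For k = 3 this produces 0, 4, 2, 6, 1, 5, 3, 7, matching the table and both worked examples above.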

In that way I precomputed the matrix for k up to 16. This should run in O(2^K) = O(N). The input is rearranged in O(N), since there is no way around going through the whole string and looking up each index in O(1). Alternatively, since the mapping is of course a bijection, the output could be produced directly by looking up the corresponding character in the input string. The built-in IDE gives me 0 seconds for the precomputation, yet I still get TLE when submitting here. Where's the mistake?

http://www.codechef.com/viewsolution/6834804

Hello Elpacogrande,
your first claim that it will run in O(n), i.e. O(2^k), will make your program give TLE.
Reason:
k <= 16 => Max = 2^16 >>> 10^7 operations.
Thus, it gives TLE.

Hi likecs,

first of all, thanks for your answer.
What made me think this will work is, that the editor’s solution is also said to run in O(2^k). The matrix is filled in 2^17 - 1 calculations , so I thought for the case there are at least two cases with K=16, I would be faster. Somehow I must have misunderstood the complexity of the sample or either miscalculated mine.

Made it work now. The problem was in the output function, which looped from 0 to strlen(S); after replacing that bound with (1 << K), it passed in 0.03 s. Here it is
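To illustrate the pitfall described above (a hedged sketch; the function names are mine and the actual solution is in the linked submission): strlen(S) in a loop condition rescans the string on every iteration, turning an O(n) output loop into O(n^2) for n = 2^k.

```cpp
#include <cstring>
#include <string>

// Slow pattern: strlen(S) is re-evaluated on every loop iteration,
// so the loop does O(n^2) total work.
std::string collectSlow(const char* S) {
    std::string out;
    for (size_t i = 0; i < strlen(S); ++i)
        out += S[i];
    return out;
}

// Fix from the post: bound the loop by 1 << K, computed once.
std::string collectFast(const char* S, int K) {
    std::string out;
    size_t n = (size_t)1 << K;
    for (size_t i = 0; i < n; ++i)
        out += S[i];
    return out;
}
```

Hoisting the length into a variable before the loop would fix it equally well.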

Could someone please explain the logic behind the recursion approach? What is meant by "at a particular recursive call at depth j in the recursion, the function is basically taking the j'th most significant bit of x and placing it as the j'th least significant bit of d"? Thanks in advance.