How did you approach the solution (100 pts) for the long challenge problems KILLKTH and XYHUMOQ?

I couldn’t solve KILLKTH, but I think it can be done with a suffix tree, maintaining the size of the subtree in each node (cumulative sum), and then binary searching for the answer. The answer for each prefix can be calculated easily.

For XYHUMOQ the main thing is that you can compute the value for a string in O(length). This can be done using two counters: ct (for alternating subsequences ending in 1) and ct2 (for those ending in 0). Initially ct = 1, ct2 = 0 (since the first character is 1). Then, for each subsequent 1,

ct = ct + ct2 + 1, and for each 0, ct2 = ct2 + ct. The answer is ct2.
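As a sketch of that computation (starting both counters at 0 and processing every character is equivalent to the post's ct = 1 initialisation):

```python
def value(s):
    """Number of subsequences of the form 1010...10 in the binary string s,
    computed in O(len(s))."""
    ct, ct2 = 0, 0  # ct: alternating subsequences ending in '1'; ct2: ending in '0'
    for c in s:
        if c == '1':
            ct = ct + ct2 + 1   # extend each ...0 subsequence, plus this '1' alone
        else:
            ct2 = ct2 + ct      # extend each ...1 subsequence with this '0'
    return ct2
```

For example, value("1100") is 4: the subsequence "10" can be formed by picking either of the two 1s and either of the two 0s.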

Now we must find strings whose value is exactly x.

Now use this and recursively compute the values of all 2^n strings. This will pass the small subtask.
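A brute-force sketch for the small subtask, assuming (per later posts in the thread) that the goal is a string with value exactly x at minimum Hamming distance from the input, and that candidate strings start with '1':

```python
from itertools import product

def value(s):
    # linear-time alternating-subsequence count, as described above
    ct = ct2 = 0
    for c in s:
        if c == '1':
            ct = ct + ct2 + 1
        else:
            ct2 = ct2 + ct
    return ct2

def solve_small(s, x):
    """Try all 2^(n-1) strings starting with '1'; among those whose value is
    exactly x, return the minimum number of positions differing from s."""
    n = len(s)
    best = None
    for tail in product('01', repeat=n - 1):
        cand = '1' + ''.join(tail)
        if value(cand) == x:
            d = sum(a != b for a, b in zip(cand, s))
            if best is None or d < best:
                best = d
    return best
```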

For the large subtask we must prune our choices: prune a branch if its already computed cost is >= the current minimum, or if ct + ct2 > x.

Also notice that for a partially filled string X, the maximum value can be obtained by appending either 101010… or 010101… to it.

So now, for each partially filled string, check whether this maximum value is >= x; if not, prune it.
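A sketch of that bound, taking the claim about the two alternating completions at face value (`run` and `max_reachable` are my names; `run` replays the ct/ct2 updates):

```python
def run(ct, ct2, bits):
    """Replay the ct/ct2 updates over the characters in bits."""
    for c in bits:
        if c == '1':
            ct = ct + ct2 + 1
        else:
            ct2 = ct2 + ct
    return ct, ct2

def max_reachable(ct, ct2, k):
    """Largest final value from state (ct, ct2) with k characters still to
    place, assuming (per the post) the max comes from 1010... or 0101...."""
    pats = (('10' * k)[:k], ('01' * k)[:k])
    return max(run(ct, ct2, p)[1] for p in pats)
```

A partial string whose max_reachable is below x can then be discarded.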

Hi, although I have solved ‘A Humongous Query’ only partially, I have a doubt. Why do these two submissions get different verdicts (the only difference is string instead of char array and cin instead of scanf)?

For KILLKTH: say you have the suffix tree of the string. Each node represents a set of substrings, and all the substrings that a node represents occur an equal number of times, namely the number of leaves under it in the tree (equivalently, the number of suffixes each substring is a prefix of). It is worth noting that all of these substrings occur consecutively in your hidden string. This can be computed for each node.

Now maintain, for each node, the \sum(length of substrings that are lexicographically smaller than the smallest string for that node). This is basically \sum(length of substrings of already encountered nodes * number of leaves each is an ancestor of).

Note that nodes are traversed in a depth-first manner, and one node is encountered before another if its substrings are lexicographically smaller than those of the other.

Say you are traversing the nodes in the order P_1, P_2, ..

You have a number X_i (increasing with i) and F_i (the frequency of each substring of that node) for each node P_i. X_i tells you the position of the last character corresponding to the largest string in P_{i-1} in the hidden string. So if you have a query K, you can find the node P_i corresponding to this query, where X_i is the smallest value > K; do this with a simple binary search.

You have the node. Now you find which substring the (K - X_i)^{th} character belongs to (-X_i because X_i characters have already been accounted for before this node). Substrings corresponding to a node P_i are of the form S_1, S_2, S_3, …, S_w, where |S_1| = L, |S_2| = L+1, …, |S_w| = L+w-1 = R, each occurring F_i times. Here you need to find the smallest t where F_i * \sum_{j = L}^{L+t-1} j > K - X_i. This calls for another binary search. So you have the substring, and with some arithmetic you get the position of the desired character within it.
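The inner binary search can be sketched as follows (function and parameter names are mine; rem stands for the offset K - X_i):

```python
def find_in_node(F, L, w, rem):
    """Smallest t in [1, w] such that F * sum_{j=L}^{L+t-1} j > rem,
    using the closed form sum = t*L + t*(t-1)/2; None if no such t."""
    def covered(t):
        return F * (t * L + t * (t - 1) // 2)
    lo, hi, ans = 1, w, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if covered(mid) > rem:
            ans, hi = mid, mid - 1
        else:
            lo = mid + 1
    return ans
```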

https://visualgo.net/en/suffixtree (try repeating characters for better understanding)

I think that my solution for XYHUMOQ is easier than using pruning, and it works independently of the actual value of X. It is essentially a meet-in-the-middle algorithm (https://www.geeksforgeeks.org/meet-in-the-middle/). The trick was to come up with two methods to calculate the number of humongous strings in linear time for a given 10-string.

Using either of those methods we can solve the smaller problem by simply computing the number of humongous strings in each of the 2^18 possible 10-strings.

For the larger problem that will not work. That is why we need two linear time methods. In reality they are sort of conjugate methods. One starts from the left and moves right the other moves from right to left. If we use each method to work through half of the length of each string, then we have at most 2^15 different possibilities on each side. In each case we end up with two integers. By combining the two numbers from each side we can then get the total number of humongous strings. This is essentially a third linear-time method.
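The combining step is not spelled out in the post; one plausible reconstruction (my derivation, not necessarily the author's exact bookkeeping) is that the final count is an affine function of the ct/ct2 state entering the suffix, with coefficients computable right to left:

```python
def left_state(prefix):
    """Left-to-right pass: the (ct, ct2) state after the prefix."""
    ct = ct2 = 0
    for c in prefix:
        if c == '1':
            ct = ct + ct2 + 1
        else:
            ct2 = ct2 + ct
    return ct, ct2

def right_coeffs(suffix):
    """Right-to-left pass: coefficients (A, B, C) such that the whole string's
    value equals A*ct + B*ct2 + C for any incoming state (ct, ct2)."""
    A, B, C = 0, 1, 0   # empty suffix: the value is just ct2
    for c in reversed(suffix):
        if c == '1':
            # state map (ct, ct2) -> (ct + ct2 + 1, ct2), substituted into A,B,C
            A, B, C = A, A + B, A + C
        else:
            # state map (ct, ct2) -> (ct, ct2 + ct)
            A, B, C = A + B, B, C
    return A, B, C
```

For example, splitting "1100" as "11" + "00" gives left state (2, 0) and coefficients (2, 1, 0), and 2*2 + 1*0 + 0 = 4, the value of the whole string. (This version carries three coefficients per suffix; the author's "two integers" may pack the information differently.)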

The trick is to avoid comparing all possible prefixes with all possible suffixes. We can store the two values associated with any prefix, call them p1 and p2, in a hash table. We then loop over all possible values of p1 (there are far fewer than 10^15). For each p1 we can calculate the value of p2 needed to get exactly X. If that p2 is an integer, we check the hash table to see whether the pair (p1, p2) is associated with any prefix.

If we think of the two-dimensional plane of values (p1, p2), it can contain as many as 2^15 points. This method allows us to loop over only one of the two dimensions and use the hash table with O(1) lookups to deal with the other. For the entire method this gives a complexity of O(2^(n/2-1) * C1(n)), where C1(n) is the number of possible values of p1. Here n is the length of the string minus 2.

We can then actually improve the complexity by making the prefixes longer than the suffixes. In my actual code I precomputed values for all prefixes of a single predetermined length. For smaller strings I used the simple linear-time method and for larger strings I worked with suffixes that made up the difference.

For KILLKTH I used a suffix array (http://web.stanford.edu/class/cs97si/suffix-array.pdf).

In a suffix array all substrings are simply prefixes of some suffix. The number of letters in all prefixes of any given suffix is just a triangular number. We can use that to construct a list of the cumulative sums of all letters associated with previous suffixes. That allows us to do a binary search and determine the first letter of the suffix associated with the kth letter. The binary search is only accurate to the first letter, since the substrings are only in the correct order up to the first letter. For example, the suffix “FFE” might be right before the suffix “FFX” in the suffix array, but the substring “F”, which is a prefix of “FFX”, comes before “FFE” alphabetically.
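A minimal sketch of the cumulative triangular sums and the first-letter binary search (function names are mine):

```python
import bisect

def cumulative_letters(suffix_lengths):
    """suffix_lengths: lengths of the suffixes in suffix-array order.
    A suffix of length m accounts for m*(m+1)//2 letters, one per prefix."""
    cum, total = [], 0
    for m in suffix_lengths:
        total += m * (m + 1) // 2
        cum.append(total)
    return cum

def suffix_for_kth_letter(cum, k):
    """Suffix-array index whose block contains the k-th letter (1-indexed).
    As noted above, this is only accurate to the first letter."""
    return bisect.bisect_left(cum, k)
```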

Having found the first letter of the associated suffix, we need to find the range of suffixes that start with that letter. It is then easy to determine whether the letter occurs in one of the substrings of length one: each suffix is associated with exactly one such substring. If the letter comes from a substring of length greater than one, we search for the second letter using the same method. While searching for the second letter it is important to account for the substrings of length one: alphabetically, they come before all other substrings with the same first letter.

Once we have found the correct suffix we need to directly determine the letter: Adding letters one at a time can be too slow.

I used a trie to solve KILLKTH and got my solution accepted in 0.07 seconds.

XYHUMOQ is almost the same as this problem:

Let the number of subsequences of the form 1010…10 in a string be ‘a’ and the number of subsequences of the form 1010…101 be ‘b’. So you can represent any given binary string as a pair (a, b). By appending a 1 or a 0 to such a string, you get (a, a+b) or (a+b, b) respectively. This is because adding a ‘1’ gives ‘a’ more sequences of type 1010…1 by extending each of the previously present sequences of type 1010…10; similarly, adding a ‘0’ gives ‘b’ more sequences of type 1010…10.

Now consider the reverse process. Given a string (a, b), you can go to either (a-b, b) or (a, b-a) by removing its last bit. Furthermore, there is only one way of going back, depending on whether a > b or b > a. This process is the same as the GCD algorithm. And since the string must always begin with ‘1’, we must always reach (0, 1). In the given problem, it’s mentioned that we need exactly ‘X’ subsequences of the form 1010…10, so any candidate string will be represented by a pair of the form (X, Y).

Also, since we must reach (0, 1), the gcd of X and Y must be 1. This still leaves a lot of candidates for ‘Y’. But since all strings must end with 0, Y will always be less than X. So we just loop through 1, 2, 3, …, X-1 as candidates for ‘Y’, and for each of them find the number of bits the string will have; in case it’s equal to ‘n’, we see in how many places this string differs from the input string. Finally, we print the one with the minimum number of differences as the answer. Overall, the complexity of this solution is O(X log X).
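Taking the (a, b) transitions described above at face value, the reverse walk can be sketched as:

```python
def reconstruct(x, y):
    """Walk (x, y) back to (0, 1) using the transitions described above,
    recording the removed bit at each step; returns the binary string,
    or None when (x, y) is unreachable (equivalently, gcd(x, y) != 1)."""
    a, b, bits = x, y, []
    while (a, b) != (0, 1):
        if a < 0 or b <= 0 or (a == 0 and b != 1):
            return None
        if a >= b:
            bits.append('0')  # last bit was 0: came from (a-b, b)
            a -= b
        else:
            bits.append('1')  # last bit was 1: came from (a, b-a)
            b -= a
    return '1' + ''.join(reversed(bits))
```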

Generate all the strings of length 20; for each string store two values X, Y, where X denotes the number of good subsequences ending at the last 0 and Y denotes the number of good subsequences ending at the last 1. This can be done simply in 20 * 2^20. Now the fun part is the observation that the number of distinct values of X is very small (on the order of 10^4); let this count be K. For each distinct X, build the next 2^12 strings and take the minimum over the strings where X equals the given number of ways. Time complexity: K * 2^12.

I too wanna know…

SOS DP+sqrt decomposition on queries

Refer to my solution; I guess it’s clear after reading the CF blog about SOS DP.

I solved MONSTER with square root decomposition and SOS DP. We can divide the queries into blocks and process each block with SOS DP. After that, we check each monster to see whether it is no longer alive after these queries. If so, we check all the queries in this block to find exactly which one killed the monster; otherwise we simply reduce the monster’s health after this block.
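For reference, the standard SOS (sum over subsets) DP pattern mentioned here, in its generic form (not tied to MONSTER’s exact state):

```python
def sum_over_subsets(f, n):
    """Standard SOS DP over n-bit masks: returns g where
    g[mask] = sum of f[sub] over all submasks sub of mask, in O(n * 2^n)."""
    g = list(f)
    for i in range(n):                 # process one bit position at a time
        for mask in range(1 << n):
            if mask & (1 << i):        # fold in the value with bit i cleared
                g[mask] += g[mask ^ (1 << i)]
    return g
```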

Thanks…

I too used sqrt decomposition.

You can even solve it using parallel binary search

How did you derive the formula? I spent 2 days to find it and gave up.

What is the value of C1(n) and how did you calculate it?

I took the first 128 numbers, represented them in binary, and calculated their answers using an n^2 DP. Then I observed the pattern. Another thing you can notice is that inverting all the bits from index 1 to s.length()-2 also gives the same answer.

That is, 1100 = 1010.
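The observation can be checked empirically; value re-implements the linear-time count described earlier in the thread, and invert_middle is my name for the bit-flipping step:

```python
def value(s):
    # alternating-subsequence count, as described earlier in the thread
    ct = ct2 = 0
    for c in s:
        if c == '1':
            ct = ct + ct2 + 1
        else:
            ct2 = ct2 + ct
    return ct2

def invert_middle(s):
    """Flip the bits at indices 1 .. len(s)-2, keeping both endpoints."""
    mid = ''.join('1' if c == '0' else '0' for c in s[1:-1])
    return s[0] + mid + s[-1]
```

For example, invert_middle("1100") is "1010", and both strings have value 4.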

Could you share your approach in detail?