A problem about infeasibly large amounts of seriously overpackaged candy.
Andy got a box of candies for Christmas. In fact, he discovered that the box contained several identical smaller boxes, and they could contain even smaller boxes, and so on. Formally, we say that candies are “boxes” of level 0, and for 1 ≤ i ≤ n, a level i box contains a_i boxes of level i - 1. The largest box has level n. Andy realized that it can take quite a long time to open all the boxes before he actually gets to eat some candies, so he put the box aside in frustration.
But today being his birthday, some friends came to visit Andy, and Andy decided to share some candies with them. In order to do that, he must open some of the boxes. Naturally, Andy cannot open a box that is still inside an unopened box. If Andy wants to retrieve X candies, what is the least number of boxes he must open? You must help him answer many such queries. Each query is independent.
My path to solution
My first thought was to solve the given small test cases by hand and use those to get ideas for handling the much bigger limits in the real test cases.
Clearly we know how many of the innermost boxes need to be opened: take X/a_1 and round up, i.e. ⌈X/a_1⌉. This suggested two ways forward:
- feed that number of boxes to the next level
- work out how many candies are ultimately in each level box.
I tried both versions; the second version seemed easier to understand and precalculate for.
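The first way forward can be sketched directly: feed each level's box count into the next level with a ceiling division, and sum the counts. A minimal sketch (function and variable names are my own, not from the judge's template):

```python
def ceil_div(x, y):
    # Ceiling of x / y for positive integers, without floats.
    return -(-x // y)

def min_opens_naive(X, a):
    # a[i - 1] = a_i, the number of child boxes inside a level-i box.
    # Open ceil(need / a_i) boxes at level i, feed that count upward.
    # O(n) per query -- fine as a reference, too slow for many queries
    # on a deep box structure.
    need, opens = X, 0
    for ai in a:
        need = ceil_div(need, ai)
        opens += need            # boxes opened at this level
    return opens

# Example: a = [2, 1, 3] -- 2 candies per level-1 box, a one-box
# "wrapper" at level 2, and a top box holding 3 level-2 boxes.
# Getting 5 candies takes 3 level-1 boxes, their 3 wrappers, and
# the top box: 7 opens.
print(min_opens_naive(5, [2, 1, 3]))  # -> 7
```

The second way forward falls out of the identity ⌈⌈x/a⌉/b⌉ = ⌈x/(a·b)⌉: a level-i box ultimately holds the prefix product a_1⋯a_i candies, so the number of level-i boxes opened is ⌈X / (a_1⋯a_i)⌉.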
Then I came up with the idea of effective boxes vs. wrappers. A wrapper is what I called a box that has only one box inside it (a_i = 1). If an effective box has k wrappers directly above it, then you don’t need to recalculate for each level of wrapping: every wrapper must be opened once for each effective box below it, so just multiply that effective box count by k + 1.
This means we can collapse the box structure into effective boxes, each with a multiplier based on the number of wrappers above it. Then we only need to track effective boxes until we have enough box structure to hold the largest number of candies in a query: every effective box at least doubles the capacity, so the limit on candy query size means that we only need to worry about at most 60 effective boxes. And bingo, we have a reasonable amount of work per query and we can deliver the answer in the time available.
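The collapse-and-cutoff can be sketched as follows (names are my own; I'm assuming a query limit around 10^18, which is what makes roughly 60 effective levels enough — adjust `max_x` to the actual constraint):

```python
def ceil_div(x, y):
    return -(-x // y)

def compress(a, max_x):
    # Collapse the structure into effective boxes (a_i >= 2), each with
    # multiplier = 1 + the number of wrappers (a_i == 1) directly above it.
    lead, i = 0, 0
    while i < len(a) and a[i] == 1:   # wrappers below any effective box
        lead += 1
        i += 1
    groups, cap, tail = [], 1, 0      # cap = candies per box so far
    while i < len(a):
        ai, mult = a[i], 1
        i += 1
        while i < len(a) and a[i] == 1:   # absorb wrappers above this box
            mult += 1
            i += 1
        if cap >= max_x:
            tail += mult              # count here is always 1: fixed cost
        else:
            cap *= ai
            groups.append((ai, mult))
    return lead, groups, tail

def min_opens(X, lead, groups, tail):
    opens = lead * X                  # each bottom wrapper holds 1 candy
    need = X
    for ai, mult in groups:           # at most ~60 iterations per query
        need = ceil_div(need, ai)
        opens += need * mult          # effective boxes plus their wrappers
    return opens + tail

lead, groups, tail = compress([2, 1, 3], 10**18)
print(min_opens(5, lead, groups, tail))  # -> 7, same as the direct walk
```

Once the tracked capacity exceeds the largest possible query, every remaining level is opened exactly once, so its cost is a constant (`tail`) shared by all queries.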
There’s more detail but I’ll leave that for you to work out.
Handy general hint
There were no (other) Python solutions (a first!), but I was still able to generate reliable test data: I copied an accepted answer into the IDE under its own language and used it to produce results for some additional (more complex) test cases. That helped me get rid of an error caused by an operator precedence mistake.