PROBLEM LINKS

DIFFICULTY

HARD

PREREQUISITES

Graph Theory, Segment Trees, Heavy Light Decomposition

PROBLEM

You are given a connected graph on N vertices consisting of N edges. You wish to support two operations
QUICK EXPLANATION

Firstly, it is obvious that a connected graph on N vertices with N edges contains exactly one cycle. In general that cycle need not be of odd length, but the problem statement assures you that it always is. This constraint ensures that there is always a unique shortest path between any two vertices in the graph (the two arcs of an odd cycle between two of its vertices have lengths of different parity, so they can never tie).

Let us explore an easy-to-state solution for this graph. For either the flip or the find operation, we will move along the unique path in the graph, edge by edge, and update / calculate the result. The update simply changes the sign of the edge weight, and the result can be calculated using Kadane's algorithm. How exactly do we trace the shortest path between any two vertices? We use the following insights
Now, we can orient each of the trees. This effectively means that for each vertex in a tree, we mark its unique parent, right up to the root, which lies on the cycle. We of course do not mark a parent for the root of the tree. For each vertex we also mark which tree it belongs to (say, by storing the label of the root vertex of its tree). This helps us figure out, for given vertices u and v, whether they are in the same tree or not. Now, we can trace the path from u to v
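The tracing just described can be sketched as below, in its plain O(N) form. All array and function names here are illustrative, not taken from the setter's code; `parent`, `rootOf` and `depth` are assumed to have been filled in while orienting the trees.

```cpp
#include <vector>
using namespace std;

// parent[v] = parent of v in its oriented tree (parent[r] = r for a cycle vertex r)
// rootOf[v] = label of the cycle vertex at the root of v's tree
// depth[v]  = distance from v to rootOf[v]
vector<int> parent, rootOf, depth;

// Are u and v in the same tree hanging off the cycle?
bool sameTree(int u, int v) { return rootOf[u] == rootOf[v]; }

// Collect the vertices on the path from u up to its ancestor t (inclusive).
vector<int> climb(int u, int t) {
    vector<int> path;
    while (u != t) { path.push_back(u); u = parent[u]; }
    path.push_back(t);
    return path;
}

// When u and v lie in the same tree, the path goes through their lowest
// common ancestor, found naively by equalising depths and climbing in lockstep.
vector<int> pathSameTree(int u, int v) {
    vector<int> up, down;
    while (depth[u] > depth[v]) { up.push_back(u); u = parent[u]; }
    while (depth[v] > depth[u]) { down.push_back(v); v = parent[v]; }
    while (u != v) {
        up.push_back(u);   u = parent[u];
        down.push_back(v); v = parent[v];
    }
    up.push_back(u);                               // the LCA itself
    up.insert(up.end(), down.rbegin(), down.rend());
    return up;
}
```

When u and v are in different trees, the same `climb` routine takes each of them to its root on the cycle, and the cycle portion is walked separately.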
The complexity of each flip or find operation will be O(N). As you can figure, this is too slow. Let us see how to make it O(log^{2} N).

EXPLANATION

There are two ideas that have to be mixed together, each contributing an O(log N) factor to a flip / find operation.
Then, for each flip / find operation on the path from u to v we will
We can consider the single cycle as a special case and build a segment tree over the cycle. For all the trees, we have to find a partitioning into paths such that any path from any vertex to the root visits O(log N) paths of the partitioning. One such decomposition is Heavy Light Decomposition. The idea is
Consider the impact of the existence of a light edge along the path from the root to some vertex. Each time you encounter a light edge on such a path, at most half as many vertices remain in the subtree you enter. This means that there can never be more than O(log N) light edges on any path from the root to any descendant! The flip / find operations still require care about whether the two vertices are in the same tree or not. But in either case, you are always considering the path from some vertex u to an ancestor t, and performing the flip / find there. The procedure to do so looks like
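A minimal sketch of the decomposition and of walking from a vertex u up to an ancestor t, chain by chain. Names are ours, not the setter's; in the full solution each chain visited here becomes one segment-tree operation.

```cpp
#include <vector>
using namespace std;

vector<vector<int>> children;                       // rooted tree, children lists
vector<int> subSize, heavyChild, chainHead, parent; // filled by the two passes below

// Pass 1: subtree sizes, and for each vertex the child with the largest subtree.
int computeSizes(int v) {
    subSize[v] = 1;
    heavyChild[v] = -1;
    for (int c : children[v]) {
        subSize[v] += computeSizes(c);
        if (heavyChild[v] == -1 || subSize[c] > subSize[heavyChild[v]])
            heavyChild[v] = c;
    }
    return subSize[v];
}

// Pass 2: the heavy child continues the current chain; a light edge starts a new one.
void buildChains(int v, int head) {
    chainHead[v] = head;
    for (int c : children[v])
        buildChains(c, c == heavyChild[v] ? head : c);
}

// Walk from u up to its ancestor t; each iteration covers one whole chain,
// so the loop runs O(log N) times.
int countChains(int u, int t) {
    int chains = 0;
    while (chainHead[u] != chainHead[t]) {
        ++chains;                    // here: one segment-tree op on chainHead[u]..u
        u = parent[chainHead[u]];
    }
    return chains + 1;               // final partial chain down to t
}
```

`countChains` only counts the chains touched; the real operation would query or update the contiguous range each chain occupies in its segment tree.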
The description above brings us to the most important aspect of the solution. In the find operation, we are expected to take complete segments and merge the information within them to generate the answer for the merged segment. More concretely, for each segment, storing only the maximum contiguous subsequence sum is not sufficient: we must be able to merge two segments for each of which we know the maximum contiguous subsequence sum.
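To make this concrete, here is a sketch of a per-segment record supporting merge, reversal and sign flip, anticipating the details developed in the following paragraphs. All names are ours, and 64-bit sums are assumed.

```cpp
#include <algorithm>
using namespace std;

struct Seg {
    long long sum;                    // total of the segment
    long long best, pre, suf;         // max contiguous / prefix / suffix sums
    long long worst, preMin, sufMin;  // their minimum counterparts (needed for flip)
};

Seg makeLeaf(long long w) { return {w, w, w, w, w, w, w}; }

// Merge a segment with the segment immediately to its right.
Seg combine(const Seg& a, const Seg& b) {
    Seg r;
    r.sum    = a.sum + b.sum;
    r.pre    = max(a.pre, a.sum + b.pre);
    r.suf    = max(b.suf, b.sum + a.suf);
    r.best   = max({a.best, b.best, a.suf + b.pre});
    r.preMin = min(a.preMin, a.sum + b.preMin);
    r.sufMin = min(b.sufMin, b.sum + a.sufMin);
    r.worst  = min({a.worst, b.worst, a.sufMin + b.preMin});
    return r;
}

// Reversing a segment just swaps prefix and suffix statistics.
Seg reversed(Seg s) {
    swap(s.pre, s.suf);
    swap(s.preMin, s.sufMin);
    return s;
}

// Negating every element swaps each maximum with its minimum counterpart.
Seg flipped(Seg s) {
    s.sum = -s.sum;
    swap(s.best, s.worst); s.best = -s.best;  s.worst  = -s.worst;
    swap(s.pre, s.preMin); s.pre  = -s.pre;   s.preMin = -s.preMin;
    swap(s.suf, s.sufMin); s.suf  = -s.suf;   s.sufMin = -s.sufMin;
    return s;
}
```

Note that `combine` is not commutative: the order of its arguments is the left-to-right order of the segments along the path, which is exactly why reversal matters.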
Thus, for each segment we should store not only the sum of the maximum contiguous subsequence, but also the sums of the maximum prefix and the maximum suffix.

There are yet more details to fill in to define the algorithm completely. Consider the path from u to v, where they belong to separate trees. The path from the root of the tree containing v down to v should be considered in reverse order. It is important, then, to be able to consider the reverse of a sequence. Fortunately, the information we have stored so far with each segment is enough: we need only swap the maximum prefix sum and the maximum suffix sum to obtain the statistics of the reversed segment. Considering a segment in reverse does not, of course, affect its maximum contiguous subsequence sum.

Now let us consider how flip is going to work. For some segment tree, the flip operation will have to support inverting the sign of all the elements in a complete segment (because that is the basic operation for a segment tree: an operation on a complete segment). We can do this via storing
the minimum counterparts (the minimum contiguous subsequence sum, and the minimum prefix and suffix sums) for each segment. These values can be merged just as easily as their respective maximum versions. A flip of a complete segment then simply swaps the maximums with the minimums (negating each).

Lastly, we need to store the current flipped state of each segment (each segment's initial state being "unflipped"). This is important because while doing a flip / find, we only update / consider the largest segments in the tree that lie completely within the range of the operation. The child segments of a segment that is marked as flipped are, of course, not touched during the operation. Only when we drill down from a parent segment to a child segment is the parent's flipped state actually applied to the children; the parent's state is then restored to "unflipped". The key to understanding this is to see it as lazy application of the flip operation: the flip actually happens only when some operation has to consider a strict subrange of the range covered by a segment. Without these lazy updates, we would have to apply updates to O(N log N) segments in the segment tree, even though the find operation would still take O(log N) anyway.

Thus, each flip / find in a segment tree can be done in O(log N) time. Combined with the fact that each flip / find operation on the graph touches O(log N) segment trees, the total complexity of a flip / find operation is O(log^{2} N).

RELATED PROBLEMS

SETTER'S SOLUTION

Can be found here.

TESTER'S SOLUTION

Tester's solution will be updated soon.
asked 16 May '13, 03:24

I did HLD and divided the paths into constant-sized blocks instead of building a segment tree. Later I experimented a bit with the solution and simply chose the (first) maximal outgoing edge to be heavy at each vertex. In a perfectly balanced graph there would be no heavy edges in HLD, but this way, if a vertex has at least one outgoing edge, it has exactly one heavy edge. I think practically there is no difference, but the implementation might be easier. This is the first time I did an HLD, that's why I played with it a bit. The solution runs in ~10s; the last version beats other codes only because of the fast I/O used. I have found a nice way to compute subtree sizes using the time_in and time_out counters: size = time_out - time_in, if we do time_in = t++; on entering, and time_out = t; before leaving the node (incrementing t only once). answered 18 May '13, 07:10
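The commenter's subtree-size trick can be sketched as follows (illustrative names; the point is that t is incremented only on entry, so the exit timestamp minus the entry timestamp counts exactly the vertices inside the subtree):

```cpp
#include <vector>
using namespace std;

vector<vector<int>> children;   // rooted tree, children lists
vector<int> timeIn, timeOut;
int t = 0;

void dfs(int v) {
    timeIn[v] = t++;            // increment only on entering
    for (int c : children[v]) dfs(c);
    timeOut[v] = t;             // note: no increment on leaving
}

int subtreeSize(int v) { return timeOut[v] - timeIn[v]; }
```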

The approach described in the editorial is almost identical to my solution. I am wondering if anyone solved this problem in a different manner (e.g. without using the heavy-light path decomposition technique, or maybe without building segment trees for each node of the decomposition?). Note that I am not sure other efficient-enough solutions exist.