QTREE - Editorial


PROBLEM LINKS

Practice
Contest

DIFFICULTY

HARD

PREREQUISITES

Graph Theory, Segment Trees, Heavy Light Decomposition

PROBLEM

You are given a connected graph on N vertices with N edges.

  • Each edge has a weight, not necessarily positive
  • The graph contains a single cycle
  • The cycle contains an odd number of vertices (and edges).

You wish to support two operations

  • flip(u,v): flip the sign of the edge weights along the shortest path from u to v. Note that the shortest path is defined as the path with the minimum number of hops, not the path with the least sum of edge weights.
  • find(u,v): consider the sequence of edge weights along the shortest path from u to v. In this sequence, find the maximum sum contiguous subsequence. Note that the shortest path is the same as defined above. You have to print the sum of the maximum sum contiguous subsequence, not the sequence itself.

QUICK EXPLANATION

First, it is obvious that a connected graph on N vertices with N edges will contain exactly one cycle.

It is not necessary for that cycle to be an odd-length cycle; but the problem statement assures you that such a cycle will always be odd length. This constraint ensures that there is always a unique shortest path between any two vertices in the graph.

Let us explore an easy-to-state solution for this graph.

For either of the flip or find operations, we will move along the unique path in the graph, edge by edge, and update / calculate the result. The update will of course change the sign of the edge weight. The result can of course be calculated using Kadane's algorithm.
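The edge-by-edge scan described above can be sketched as follows. This is a minimal illustration of Kadane's algorithm on a plain list of edge weights, assuming the chosen subsequence must be non-empty:

```python
def max_subarray(weights):
    """Kadane's algorithm: best sum of a non-empty contiguous run."""
    best = cur = weights[0]
    for w in weights[1:]:
        cur = max(w, cur + w)   # extend the current run, or start afresh at w
        best = max(best, cur)
    return best

assert max_subarray([1, -2, 3, 4, -1]) == 7   # 3 + 4
assert max_subarray([-5, -2, -7]) == -2       # all-negative: best single edge
```

Run once per query over the O(N) edges of the path, this is exactly the slow solution being described.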

How exactly do we trace the shortest path between any two vertices? We use the following insights:

  • The graph has a single cycle
  • Vertices of this cycle behave as roots of connected trees

Now, for each of the trees, we can orient them. This effectively means that for each vertex in the tree, we can mark its unique parent, right up to the root, which lies in the cycle. The root itself, of course, has no parent to mark.

For each vertex we can mark which tree it belongs to (probably just store the label of the root vertex of the tree). This helps us figure out for some given vertices u and v, whether they are in the same tree or not.

Now, we can trace the path from u to v

  • If they are in the same tree, we can find their LCA.
    • This will require calculating the depth for each vertex
  • If they are in separate trees
    • The paths from both vertices to their roots, say s and t respectively, are to be considered
    • Then the path from s to t, through the cycle, should be considered
    • The path through the cycle should be the shorter one. This can be determined trivially by arranging the vertices in an array in a clockwise (or counter-clockwise) traversal of the cycle.
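The "shorter arc" choice in the last step can be sketched as follows (the position indices into the clockwise listing of the cycle are illustrative). Because the cycle length is odd, the two arcs can never tie:

```python
def shorter_arc(pos_s, pos_t, cycle_len):
    """Given positions of s and t in a clockwise listing of the cycle
    vertices, return the hop count of the shorter of the two arcs."""
    clockwise = (pos_t - pos_s) % cycle_len
    counter = cycle_len - clockwise
    return min(clockwise, counter)

# On a 5-cycle, positions 0 and 2 are joined by arcs of 2 and 3 hops.
assert shorter_arc(0, 2, 5) == 2
assert shorter_arc(2, 0, 5) == 2
```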

The complexity of each flip or find operation will be O(N). As you can figure, this is too slow. Let us see how to make this O(log² N).

EXPLANATION

There are two ideas that have to be mixed together to make a flip / find operation take O(log N) time.

  • Split the graph into paths, such that the path between any two given vertices contains O(log N) different paths from the decomposition
  • For each path, maintain a segment tree to answer queries of flip / find in O(log N) time

Then, for each flip / find operation on the path from u to v we will

  • start from u
  • iterate over the sequence of "paths" that contain edges from the path from u to v
    • for each path, flip / find for the range of edges that belong to the path from u to v

We can consider the single cycle as a special case and build a segment tree for the cycle.

For all the trees, we will have to find their partitioning into paths, such that, any path from any vertex to the root visits O(log N) paths from the partitioning.

One way of such a decomposition is Heavy Light Decomposition. The idea is

  • For each node, find a heavy child
    • A heavy child is a child of the node that is the root of a subtree containing strictly more than half of the node's descendants
    • mark the edge connecting to this child as heavy
  • Each node will have at most 1 heavy child, and thus only one outgoing heavy edge
  • Heavy edges will merge into a disjoint set of paths. This is obvious because each vertex may have at most one incoming heavy edge and at most one outgoing heavy edge.
  • The rest of the edges, called light edges, can be considered individually, without the need to consider them as paths.

Consider the impact of the existence of a light edge along the path from the root to some vertex.

If you encounter a light edge on the path from the root to some vertex, then after that edge, at most half as many vertices remain in the sub-tree you are in. This means that there can never be more than O(log N) light edges on any path from the root to any descendant!
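A sketch of marking heavy children by subtree size, under the "strictly more than half of the descendants" rule stated above (the child-list representation and all names are illustrative):

```python
def mark_heavy(children, root):
    """Return (size, heavy): subtree sizes, and for each node the child
    whose subtree holds strictly more than half of the node's
    descendants (None if no such child exists)."""
    size, heavy, order = {}, {}, []
    stack = [root]
    while stack:                      # collect nodes in DFS order
        u = stack.pop()
        order.append(u)
        stack.extend(children.get(u, []))
    for u in reversed(order):         # process leaves first
        size[u] = 1 + sum(size[c] for c in children.get(u, []))
        heavy[u] = None
        for c in children.get(u, []):
            if 2 * size[c] > size[u] - 1:   # > half of the descendants
                heavy[u] = c
    return size, heavy

# A path 0-1-2 plus a leaf 3 under 0: node 0 has 3 descendants,
# child 1 holds 2 of them, so the edge 0-1 is heavy.
children = {0: [1, 3], 1: [2]}
size, heavy = mark_heavy(children, 0)
assert size[0] == 4 and heavy[0] == 1 and heavy[1] == 2
```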

The flip / find operations still require care about where the two vertices are - in the same tree or not. But in either case, you are always considering the path from some vertex u to an ancestor t, and performing the flip / find. The procedure to do so looks like

  • start at vertex u
  • if you are at t, stop
  • if the edge connecting you to your parent is a light edge
    • flip the edge weight to perform the flip operation
    • consider the single edge as a segment to be merged with other segments
  • if the edge connecting you to your parent is a heavy edge
    • if t belongs to the same heavy path, flip / find the segment from your position to t
    • otherwise flip / find the segment from your position to the highest vertex in the heavy path

The description above brings us to the most important aspect of the solution. In the find operation, we are expected to consider complete segments and merge information within the segments together to generate the answer for a merged segment.

More clearly, for each segment, storing only the maximum contiguous subsequence sum is not sufficient. We have to consider the possibility of merging two segments for which we know the maximum contiguous subsequence sum.

  • We of course take the larger of the two sums as candidate sum for the merged sequence.
  • It is also important to find the largest sum by some "suffix" of the first segment, and the largest sum by some "prefix" of the second segment. The sum of these two values is also a candidate to compete for the maximum contiguous subsequence sum in the merged segment.

Thus, for each segment we should store not only the sum of the maximum contiguous sum subsequence, but also the sum of the maximum sum prefix, and suffix.
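Concretely, a segment can be summarized by four values that merge in O(1). The sketch below is illustrative (the names are not from any reference solution):

```python
from collections import namedtuple
from functools import reduce

# Summary of a segment of edge weights: total sum, best prefix sum,
# best suffix sum, best contiguous (non-empty) sum.
Seg = namedtuple('Seg', 'total pref suf best')

def leaf(w):
    return Seg(w, w, w, w)

def merge(a, b):
    return Seg(a.total + b.total,
               max(a.pref, a.total + b.pref),   # a prefix may cross into b
               max(b.suf, b.total + a.suf),     # a suffix may reach back into a
               max(a.best, b.best, a.suf + b.pref))

xs = [2, -3, 4, -1, 2]
s = reduce(merge, map(leaf, xs))
assert s.best == 5       # 4 + (-1) + 2
assert s.total == 4
```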

There are yet more details to fill in to define the algorithm completely. Consider the path from u to v, where they belong to separate trees. The path from the root of the tree that contains v, down to the vertex v, should be considered in reverse order.

It is important for us, then, to be able to consider the reverse of a sequence. Fortunately, the information we have stored so far with each segment is enough. We need only swap the maximum prefix and suffix sums to achieve the effect of considering a segment in reverse. Of course, considering a segment in reverse does not affect the maximum contiguous subsequence sum for the segment.
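As a sketch, reversing such a summary is a constant-time operation (the Seg record below is illustrative):

```python
from collections import namedtuple

# Segment summary: total sum, best prefix sum, best suffix sum, best sum.
Seg = namedtuple('Seg', 'total pref suf best')

def reverse(s):
    """Summary of the same edge sequence read in the opposite direction:
    prefix and suffix swap roles; total and best are unaffected."""
    return Seg(s.total, s.suf, s.pref, s.best)

# Summary of [3, -1] is Seg(2, 3, 2, 3); of [-1, 3] it is Seg(2, 2, 3, 3).
assert reverse(Seg(2, 3, 2, 3)) == Seg(2, 2, 3, 3)
```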

Now let us consider how flip is going to work.

For some segment tree, the flip operation will have to support the ability to invert the sign of all the elements in a complete segment (because that is the basic operation, an operation on the complete segment, for a segment tree). We can do this via storing

  • the minimum contiguous subsequence sum
  • the minimum sum for some prefix
  • the minimum sum for some suffix

for each segment. These values can also be just as easily merged as the values for their respective maximum versions. A flip for a complete segment will simply swap the maximums and minimums.
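A sketch of the flip on a summary that carries both maximum and minimum aggregates (all names illustrative; the brute-force summary function below exists only to check the claim):

```python
from collections import namedtuple

# Summary with both maximum and minimum variants of each aggregate.
Seg = namedtuple('Seg', 'total pref suf best min_pref min_suf min_best')

def flip(s):
    """Negating all weights turns every maximum into the negated minimum
    and vice versa."""
    return Seg(-s.total, -s.min_pref, -s.min_suf, -s.min_best,
               -s.pref, -s.suf, -s.best)

def summary(xs):
    """Brute-force summary of a non-empty weight list, for checking."""
    pref = [sum(xs[:i + 1]) for i in range(len(xs))]
    suf = [sum(xs[i:]) for i in range(len(xs))]
    runs = [sum(xs[i:j]) for i in range(len(xs))
            for j in range(i + 1, len(xs) + 1)]
    return Seg(sum(xs), max(pref), max(suf), max(runs),
               min(pref), min(suf), min(runs))

xs = [2, -3, 4]
assert flip(summary(xs)) == summary([-x for x in xs])
```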

Lastly, we need to store the current flipped state for each segment (where each segment's initial state is 0). This is important because while doing a flip / find, we will only be updating / considering the largest segments in the tree that lie completely within the range of the operation. That means the child segments of a segment that is marked as flipped are of course not touched during the operation.

So only when you are drilling down from a parent segment to a child segment is the parent's flipped state actually applied to the children. The parent's state in this case is restored to "unflipped". The key to understanding this is to view it as lazy application of the flip operation. The flip actually happens only when some operation has to consider a strict sub-range of the range covered by a segment.

Without these lazy updates, a single flip could force us to update O(N) segments in the segment tree, even though the find operation would still take O(log N).
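Putting the pieces together, a segment tree for one path, with lazy sign flips, might look like the following sketch (illustrative only, checked against small examples; not a reference implementation):

```python
# Each node stores (total, max_pref, max_suf, max_best,
#                   min_pref, min_suf, min_best) for its range.

def leaf(w):
    return (w, w, w, w, w, w, w)

def merge(a, b):
    return (a[0] + b[0],
            max(a[1], a[0] + b[1]), max(b[2], b[0] + a[2]),
            max(a[3], b[3], a[2] + b[1]),
            min(a[4], a[0] + b[4]), min(b[5], b[0] + a[5]),
            min(a[6], b[6], a[5] + b[4]))

def negate(n):
    t, p, s, b, mp, ms, mb = n
    return (-t, -mp, -ms, -mb, -p, -s, -b)

class PathTree:
    def __init__(self, ws):
        self.n = len(ws)
        self.node, self.lazy = {}, {}
        self._build(1, 0, self.n - 1, ws)

    def _build(self, i, lo, hi, ws):
        self.lazy[i] = False
        if lo == hi:
            self.node[i] = leaf(ws[lo])
            return
        mid = (lo + hi) // 2
        self._build(2 * i, lo, mid, ws)
        self._build(2 * i + 1, mid + 1, hi, ws)
        self.node[i] = merge(self.node[2 * i], self.node[2 * i + 1])

    def _push(self, i):
        if self.lazy[i]:                       # lazily pass the flip down
            for c in (2 * i, 2 * i + 1):
                self.node[c] = negate(self.node[c])
                self.lazy[c] = not self.lazy[c]
            self.lazy[i] = False

    def flip(self, l, r):                      # negate weights in [l, r]
        self._flip(1, 0, self.n - 1, l, r)

    def _flip(self, i, lo, hi, l, r):
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.node[i] = negate(self.node[i])
            self.lazy[i] = not self.lazy[i]
            return
        self._push(i)
        mid = (lo + hi) // 2
        self._flip(2 * i, lo, mid, l, r)
        self._flip(2 * i + 1, mid + 1, hi, l, r)
        self.node[i] = merge(self.node[2 * i], self.node[2 * i + 1])

    def find(self, l, r):                      # best subsegment sum in [l, r]
        return self._find(1, 0, self.n - 1, l, r)[3]

    def _find(self, i, lo, hi, l, r):
        if l <= lo and hi <= r:
            return self.node[i]
        self._push(i)
        mid = (lo + hi) // 2
        if r <= mid:
            return self._find(2 * i, lo, mid, l, r)
        if l > mid:
            return self._find(2 * i + 1, mid + 1, hi, l, r)
        return merge(self._find(2 * i, lo, mid, l, r),
                     self._find(2 * i + 1, mid + 1, hi, l, r))

t = PathTree([2, -3, 4, -1, 2])
assert t.find(0, 4) == 5        # 4 - 1 + 2
t.flip(2, 4)                    # weights become 2, -3, -4, 1, -2
assert t.find(0, 4) == 2
```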

Thus, each flip / find in the segment tree can be done in O(log N) time. Combining this with the fact that each flip / find operation on the graph considers O(log N) segment trees, the total complexity for a flip / find operation is O(log² N).

RELATED PROBLEMS

DGCD
QUERY

SETTER'S SOLUTION

Can be found here.

TESTER'S SOLUTION

Tester's solution will be updated soon.

This question is marked "community wiki".

asked 16 May '13, 03:24

1★gamabunta

edited 16 May '13, 12:49

0★admin ♦♦

the approach described in the editorial is almost identical to my solution. i am wondering if anyone solved this problem in a different manner (e.g. without using the heavy-light path decomposition technique or maybe without building segment trees for each node of the decomposition ? note that i am not sure other efficient-enough solutions exist)

(18 May '13, 00:30) mugurelionut7★

I did HLD and divided the paths into constant-sized blocks instead of building a segment tree. Later I experimented a bit with the solution and simply chose the (first) maximal outgoing edge to be heavy at each vertex. In a perfectly balanced graph there will be no heavy edges in classical HLD, but this way, if there is at least one outgoing edge, there will be exactly one heavy edge. I think there is practically no difference, but the implementation might be easier. This is the first time I did HLD, that's why I played with it a bit.

The solution runs in ~10s; the last version beats other codes only because of the fast I/O used.

I have also found a nice way to compute subtree sizes using time_in and time_out counters: size = time_out - time_in, if we do time_in = t++; on entering and time_out = t; before leaving the node (incrementing t only once).
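A minimal sketch of the time_in / time_out trick described in this answer:

```python
def subtree_sizes(children, root):
    """size(u) = time_out(u) - time_in(u), with t incremented exactly
    once per node on entry, as described above."""
    t = 0
    time_in, time_out = {}, {}

    def dfs(u):
        nonlocal t
        time_in[u] = t
        t += 1
        for c in children.get(u, []):
            dfs(c)
        time_out[u] = t

    dfs(root)
    return {u: time_out[u] - time_in[u] for u in time_in}

sizes = subtree_sizes({0: [1, 2], 1: [3]}, 0)
assert sizes == {0: 4, 1: 2, 2: 1, 3: 1}
```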


answered 18 May '13, 07:10

7★bela_mervado

Hey, can't we use BFS to find the shortest path in the given problem?


answered 25 May '13, 03:15

3★paramvi

@paramvi: if you do this for every query, the solution will be way too slow.

(25 May '13, 04:41) mugurelionut7★