July challenge, scores

There are some very interesting scores for the challenge problem this time.

(100)
99.999
99.99
99.984
99.974

and around 50 people tied at 99.383

And I don’t think more tests will split the scores much further. At this point, it comes down to the choice of seeds for individual tests.

On a side note, when will they be updated?


I think the scores could be differentiated further by taking execution time into account: if the load-unbalancing factors are the same, multiply the factor by the execution time, so that the smaller the product, the higher the score.
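A minimal sketch of that tie-break, assuming the judge exposes a per-test load-unbalancing factor and an execution time (the struct and field names below are made up for illustration):

```cpp
// Hypothetical per-test result; the real judge data may look different.
struct Result {
    double unbalance; // load-unbalancing factor (lower is better)
    double timeSec;   // execution time on the test, in seconds
};

// Proposed tie-break: rank by unbalance * time, smallest product first.
bool better(const Result& a, const Result& b) {
    return a.unbalance * a.timeSec < b.unbalance * b.timeSec;
}
```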

They are going to have a hard time selecting the Top 3 challenge problem solvers, Indian and Global (apart from the winners), if the scores don’t change much xD

Well, if we don’t have to respect the constraint on C, then what’s the point at all? I have seen many solutions that simply give the load directly to the node with the minimum weight and still get around 99 points. I think the test cases for many problems in this contest are pretty weak!

Or by considering the network cost spent on each test? :open_mouth:


One reason against using execution time: some languages are slower, and introducing a correction factor for that is still too random. Also, a slugfest over shaving milliseconds off the execution time is not a very good thing either; people shouldn’t have to bother with that.

A weighted sum of network cost & load difference would probably work. As it stands, the network cost constraint does nothing whatsoever.
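Something like this, as a rough sketch (the weights and names are made up, not an actual judge formula):

```cpp
// Hypothetical combined objective: lower is better.
// loadDiff and networkCost would come from simulating a submission
// on a test; alpha and beta are arbitrary weights chosen here.
double combinedScore(double loadDiff, double networkCost,
                     double alpha = 1.0, double beta = 0.001) {
    return alpha * loadDiff + beta * networkCost;
}
```

With beta > 0 a solution can no longer ignore the network cost entirely, which is the whole point of the suggestion.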

Yes, I found that I got AC almost every time without even considering the constraint on the network cost…

https://www.codechef.com/viewsolution/10772639

See main(): I didn’t even store the cost matrix; the values get read into nwcost[504][504] = Scan_u() instead of nwcost[i][j], and it still gave 99.xxx pts.
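Roughly what that trick looks like (a reconstruction, not the exact code of the linked submission; Scan_u() here is just a stand-in for its fast input routine):

```cpp
#include <cstdio>

// Stand-in for the submission's fast unsigned-integer reader.
unsigned Scan_u() { unsigned x; std::scanf("%u", &x); return x; }

unsigned nwcost[505][505];

// Every cost value is read but dumped into a single dummy cell,
// so the matrix is never actually stored, yet it still scored 99.xxx.
void readCosts(int n) {
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j)
            nwcost[504][504] = Scan_u(); // instead of nwcost[i][j] = Scan_u();
}
```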

So it would be a good idea to consider it.

It’s too late now.

FYI, that’s because shortest paths in a randomly generated complete graph are quite small compared to the maximum edge length; you can check it by generating your own.
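A quick way to check (a self-contained sketch with arbitrary parameters: 200 nodes, uniform integer weights up to 1000, Floyd-Warshall for all-pairs shortest paths):

```cpp
#include <cstdio>
#include <random>
#include <vector>
#include <algorithm>

int main() {
    const int n = 200;       // arbitrary graph size
    const int maxW = 1000;   // arbitrary edge-weight range
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> dist(1, maxW);

    // Random complete graph: d[i][j] starts as the direct edge weight.
    std::vector<std::vector<long long>> d(n, std::vector<long long>(n, 0));
    long long maxEdge = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            long long w = dist(rng);
            d[i][j] = d[j][i] = w;
            maxEdge = std::max(maxEdge, w);
        }

    // Floyd-Warshall all-pairs shortest paths.
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                d[i][j] = std::min(d[i][j], d[i][k] + d[k][j]);

    long long maxSP = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            maxSP = std::max(maxSP, d[i][j]);

    std::printf("max edge weight = %lld, max shortest path = %lld\n",
                maxEdge, maxSP);
    return 0;
}
```

On a run like this the largest shortest-path distance should come out far below the largest edge weight, which is why the network-cost constraint ends up so easy to satisfy.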