Answers to: FRMQ - APRIL CHALLENGE PROBLEM
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem

<p>I submitted the problem using segment trees but got only 70 points. Then I saw the inherent pattern in the queries generated by the formulae for the corresponding N values, and came up with a faster DP approach, but still got 70 points. Finally I wrote a sparse table for querying in O(1), but still got 70 points. People got AC using sparse tables; where did I go wrong?
Here is my solution: <a href="http://www.codechef.com/viewsolution/6754646">http://www.codechef.com/viewsolution/6754646</a></p>

Answer by sumeet_varma
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem/67805
<p>I can't understand the purpose behind asking this question. First of all, giving 70 points just for writing a classic segment-tree solution, and even worse, 100 points for just optimising the sparse-table solution!</p>
sumeet_varma, Tue, 14 Apr 2015 13:21:06 +0530

Comment by pavel on sampritipanda's answer
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem#67787
<p>Simple answer: slow RAM, fast CPU, and CPU caches. The problem is that current CPUs are much faster than RAM; see <a href="http://www.extremetech.com/extreme/188776-how-l1-and-l2-cpu-caches-work-and-why-theyre-an-essential-part-of-modern-chips">this article</a> for more information about the CPU-RAM gap. As a result, a CPU access to RAM costs a lot of CPU cycles. To reduce this problem, current CPUs have several levels of caches, which usually work best when accessing memory addresses that are close to each other.</p>
<p>You can try a simple example. Try running this program and measure its execution time:</p>
<pre><code>#define N 10000
int a[N][N];
int main(void) {
    for (int i = 0; i &lt; N; i++) {
        for (int j = 0; j &lt; N; j++) {
            a[i][j] = 0;
        }
    }
}
</code></pre>
<p>Then switch indices used to access the array - change <code>a[i][j] = 0</code> to <code>a[j][i] = 0</code> - and measure the time again. Depending on your machine and compiler optimizations, the second version will be slower (on my machine it is about 5 times slower). You can try the same with different values of <code>N</code>.</p>
<p>In the first version, you access the array (or matrix) row by row. As each row is stored as a contiguous memory block, you start at the lowest memory address and continue up to the highest memory address assigned to the array. This access pattern is cache-friendly. In the second version, on the other hand, you access the array column by column, which means jumping to a distant memory address on each iteration.</p>
<p>As for the original question, one of the problems could also be the <code>log2</code> function, which seems to be quite slow. Try replacing it with a lookup table. See my <a href="http://www.codechef.com/viewsolution/6657245">solution</a> for more details.</p>
pavel, Tue, 14 Apr 2015 03:13:13 +0530

Comment by ironmandhruv on sampritipanda's answer
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem#67783
<p>Can you explain why?</p>
ironmandhruv, Tue, 14 Apr 2015 01:03:37 +0530

Comment by sp1rs on coolsduy's question
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem#67780
<p>Please post the editorial of this problem.</p>
sp1rs, Tue, 14 Apr 2015 00:20:42 +0530

Comment by fauzdar65 on karanaggarwal's answer
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem#67763
<p>I used this same observation: for each x, find the sum over [x, y], [x, y-11], and so on. The queries are uniformly distributed among all x, excluding those that are never visited. So we can find the number of y's for each x and compute an individual sum for each x.
However, complexity-wise this is still O(M), assuming we use sparse tables. But apparently it is better optimized than processing all M queries naively, because for each chunk of queries we keep x the same and only change y. That makes a significant difference in execution time.</p>
fauzdar65, Mon, 13 Apr 2015 21:44:57 +0530

Comment by karanaggarwal on karanaggarwal's answer
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem#67747
<p>What would be the complexity of your solution?</p>
karanaggarwal, Mon, 13 Apr 2015 20:00:21 +0530

Comment by anuj95 on karanaggarwal's answer
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem#67741
<p>I used the observation that for a particular X, if the initial Y mapping was Y0, then after each subsequent cycle X would be mapped to Y0 - 11, Y0 - 2*11, and so on. So, using a simple formula, I could skip all the queries that mapped X to a Y later than the leftmost position of the maximum between X and Y0.</p>
anuj95, Mon, 13 Apr 2015 18:59:33 +0530

Answer by coolsduy
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem/67737
<p><a href="/users/7736/karanaggarwal">@karanaggarwal</a>: yes, I tried solving it using maths. First I found the pattern in the queries:
no matter what the initial x and y were, the query pattern repeats with period N-1 for x and period N for y,
since x=(x+7)%(t-1)
and y=(y+11)%t.
That means the queries repeat exactly after N*(N-1) queries. So if M was greater than N*(N-1), with
q=N*(N-1),
I calculated only the sum of the first N*(N-1) queries, multiplied it by M/q, and processed the remaining M%q queries.
Also, the queries for x depend on N in such a way that if N%7==1, then x repeats after N/7 steps,
and the queries for y depend on N in such a way that if N%11==0, then y repeats after N/11 steps,
so the q term would be (number of times x repeats) * (number of times y repeats),
combined with the above DP approach. But it still fell apart when N satisfied neither N%7==1 nor N%11==0;
that's why I used sparse tables.</p>
coolsduy, Mon, 13 Apr 2015 18:48:48 +0530

Answer by karanaggarwal
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem/67735
<p>Did anyone actually use any maths to solve this problem?</p>
karanaggarwal, Mon, 13 Apr 2015 18:36:18 +0530

Answer by lebron
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem/67733
<p>First of all, you have stuff like "(b+11)%(t);" there. The % operation is very slow; try rewriting it using subtraction instead. Also, you may change M[maxi][17] to M[17][maxi] to avoid large jumps over memory.</p>
lebron, Mon, 13 Apr 2015 18:34:56 +0530

Answer by sampritipanda
https://discuss.codechef.com/questions/67728/frmq-april-challenge-problem/67730
<p>For performance improvements, you need to use int M[17][maxi]. This makes your code much faster.</p>
sampritipanda, Mon, 13 Apr 2015 18:32:43 +0530