Invitation to CodeChef April Long Challenge 2018!

Hello Fellow Coders!

Greetings from Chef! Close on the heels of an exciting lineup of contests you all enjoyed, here’s more from Chefland to satisfy your coding appetites in April. This time it’s not a quick snack, it’s a full-course meal with ten days prep time!

I would like to invite you to the April Long Challenge 2018 for 10 days of exciting programming challenges. Joining me on the problem setting panel are:

Please give your feedback on the problem set in the comments below, after the contest.

Contest Details:

Time: 6th April 2018 (1500 hrs) to 16th April 2018 (1500 hrs). (Indian Standard Time — +5:30 GMT) — Check your timezone.

Contest link: https://www.codechef.com/APRIL18

Registration: You just need a CodeChef handle to participate. If you are interested but do not yet have a CodeChef handle, please register in order to participate.

Prizes: Top 10 global and top 20 Indian winners get 300 Laddus each, with which the winners can claim cool CodeChef goodies. Know more here: https://www.codechef.com/laddu . (If you have not yet received your previous winnings, please send an email to winners@codechef.com.)

Good Luck!
Hope to see you participating!!
Happy Programming !!


asked 04 Apr, 18:45 by 6★mgch, edited 13 Apr, 13:24 by 0★admin ♦♦


So with this we conclude our April Long Challenge 2018.

We hope you enjoyed the problems, and will enjoy the editorials as well. Any feedback on problems, editorials, or any other issue is appreciated. We will try our best to improve in those areas :)

If there's anything we can do for you, do let us know. It's been a pleasurable experience for me on the other side of the panel (thanks to the team and @mgch ) and I hope I can say the same for you guys as well :).

answered 17 Apr, 16:48 by 4★vijju123 ♦, edited 17 Apr, 16:48

loved the problems this time around :)

(17 Apr, 17:03) harrypotter03★
2

I loved quite a few of them too. My favourite was CUTPLANT. AVGPR and WEIGHTNUM had good concepts involved as well, which I felt are good for beginners :)

(17 Apr, 17:06) vijju123 ♦4★

@admin @mgch - I was watching the all-submissions page just after the contest was over. There were submissions still processing after 3:30 pm, and a big queue had built up on the system. Around 5-6 am, the system was taking about 3-4 minutes to process a result.

One suggestion: please don't make the challenge problem such that anyone can get at least an AC. For this contest's challenge problem, submissions which simply printed A[i] with no processing, or which printed A[i]+k, were awarded around ~80-90 points. This was the main reason for the big queue.

answered 17 Apr, 16:40 by 5★aryanc403, edited 17 Apr, 16:45

I already replied to your query. I will copy it here for documentation purposes :) -

@aryanc403 - That's not correct, and won't help. One instance I can give: just today, someone made ≈60 submissions in PyPy, each of which took 20 seconds to get a verdict.

A better solution, which we suggested, was "Limit the number of submissions a user can make in X minutes to Y" - i.e. don't allow indefinite submissions for any problem.

(17 Apr, 16:45) vijju123 ♦4★

Yes, this is a nice idea. A time-duration constraint is okay. Currently we can make 500 submissions in a long challenge - please don't reduce this limit. One more suggestion: the limit could also be based on the language used, because PyPy takes around 20 seconds, so X can be smaller for PyPy, while C takes less time, so X can be greater for C.

(17 Apr, 16:54) aryanc4035★
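A minimal sketch of the per-language, sliding-window limit discussed in the two comments above ("at most Y submissions in X minutes", with a tighter cap for slower-judging languages). The caps, window lengths, and function names are illustrative assumptions, not actual CodeChef judge code:

```python
import time
from collections import defaultdict, deque

# Hypothetical per-language caps: (max submissions, window in seconds).
# Slower-to-judge languages get a tighter cap, as suggested above.
LIMITS = {
    "PYPY": (3, 30 * 60),
    "C":    (10, 30 * 60),
}
DEFAULT_LIMIT = (5, 30 * 60)

_history = defaultdict(deque)  # (user, language) -> recent submission timestamps

def may_submit(user, language, now=None):
    """Record and allow the submission only if the user is under the cap."""
    now = time.time() if now is None else now
    cap, window = LIMITS.get(language, DEFAULT_LIMIT)
    q = _history[(user, language)]
    while q and now - q[0] > window:   # drop timestamps outside the window
        q.popleft()
    if len(q) >= cap:
        return False
    q.append(now)
    return True

# The 4th rapid PyPy submission is rejected: True, True, True, False
for i in range(4):
    print(may_submit("someuser", "PYPY", now=float(i)))
```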

Yes, I will forward this feedback to @admin for consideration. Don't worry about that :)

(17 Apr, 17:06) vijju123 ♦4★
3

A simple way of mitigating the issue is to do something like what is done on the Yandex optimization track: just consider the LAST submission for the final score. This removes the incentive to submit a lot of times to randomly get a good outcome (since you don't know the full test suite). Rate limiting, while a decent idea, does not address the fact that submitting a random algo a lot of times is beneficial. @admin @aryanc403 @vijju123

(17 Apr, 19:47) algmyr6★
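A small sketch of the "score only the last submission" policy @algmyr describes (Yandex-style). The data layout and names are assumptions for illustration:

```python
def final_scores(submissions):
    """submissions: iterable of (user, submit_time, score_on_final_tests).
    Returns {user: score of that user's chronologically last submission}."""
    last = {}
    for user, t, score in submissions:
        if user not in last or t > last[user][0]:
            last[user] = (t, score)
    return {user: score for user, (t, score) in last.items()}

subs = [
    ("alice", 1, 80.0),  # a lucky early run...
    ("alice", 5, 62.0),  # ...no longer counts; only the last one does
    ("bob",   3, 70.0),
]
print(final_scores(subs))  # {'alice': 62.0, 'bob': 70.0}
```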
1

That's also an idea worth considering, @algmyr. Let's see the developers' stand on it. :)

(2 days ago) vijju123 ♦4★

One more suggestion I have: during a long contest we could have some judging machines reserved (kind of), which give priority only to challenge-problem submissions, and others that do the reverse. A person who is making ~60 submissions doesn't really care about the output/points of the problem; they just want to submit the same question with as many random attempts as they can. So some machines could give priority to non-challenge problems, and verdicts for the other problems could then be given without much delay.

(2 days ago) aryanc4035★

@algmyr, that's a very interesting suggestion. We will discuss this. Thanks!

(2 days ago) admin ♦♦0★

@admin Also check out my comment under https://discuss.codechef.com/questions/125323/invitation-to-codechef-april-long-challenge-2018/125673, it expands the idea into a more complete solution, both regarding reverse engineering and spam submissions.

(yesterday) algmyr6★
2

@algmyr, what is the point of having provisional tests there? You can optimize your solution for them and end up overfitting (as they say in ML), and in the end your time will be wasted because the final tests will be completely different. There would be almost no point in checking solutions during the contest then, am I wrong? I have another suggestion: what if we try using multitests in the challenge (around 50-1000 per test case, with different types combined), and testing during the contest is provided on only 5-10% of the data? I guess it would be hard for unfair solutions to extract the test data. What do you think about that?

(yesterday) mgch6★
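A rough sketch of the multitest idea above: bundle many sub-cases per test file and reveal live verdicts on only a small slice, keeping the rest for final scoring. The generator and the 10% split are illustrative assumptions, not actual problem data:

```python
import random

def make_multitest(seed, n_cases=200):
    """Stand-in generator; a real one would follow the problem's published
    data-generation algorithm."""
    rng = random.Random(seed)
    return [(rng.randint(1, 10**5), rng.randint(1, 10**9)) for _ in range(n_cases)]

def split_provisional(cases, frac=0.10, seed=42):
    """Pick ~frac of the sub-cases for live (provisional) judging; the rest
    stay hidden and are scored only after the contest."""
    rng = random.Random(seed)
    chosen = set(rng.sample(range(len(cases)), max(1, int(len(cases) * frac))))
    provisional = [c for i, c in enumerate(cases) if i in chosen]
    hidden      = [c for i, c in enumerate(cases) if i not in chosen]
    return provisional, hidden

cases = make_multitest(seed=2018)
prov, hidden = split_provisional(cases)
print(len(prov), len(hidden))  # 20 180
```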
1

Yes, problems which have $T=1$ are far more prone to this. With mixed tests of different kinds, I think we can reduce the issue by a good factor.

(yesterday) vijju123 ♦4★

@mgch Provisional tests would be there only to give a rough indicator of how you stack up. If you have a solution that performs consistently, it will also be a decent estimate of your final score, similar to what the visible test is today. Even today you have no idea if the hidden tests are vastly different; you pretty much presume that the visible test is representative already. Also, importantly, you are given the data generation algorithm, so you can generate your own test cases to benchmark your program and see that it performs well in general.

(yesterday) algmyr6★
1

@mgch If you're worried that the provisional test cases are not representative, you could always add a few more cases of each type to reduce the impact of potential outliers. If the final tests are run after the competition (and only on the final submission), this would still be less computationally intensive than running the full test suite on every submission, as is done today (from what I've understood). What I fundamentally would like to enforce is a separation between the sets of test cases, so you can't gather information on the point-giving tests during the competition.

(yesterday) algmyr6★

One of the high-scoring solutions to CHEFPAR worked by reverse-engineering some of the parameters for the hidden problems and individually optimising different random seeds for each hidden problem. This was done by making use of the fact that you get told if your program aborts on a hidden problem, so some hidden information leaks back. The program would not work so well (or might not run at all) on a new random problem instance.

I don't know if this is allowed or not, but I don't think it is a desirable feature (and I wouldn't use this method) as it seems to be against the spirit of the competition, in my opinion anyway. I suggest that the challenge problems be adjusted in future to reduce the effectiveness of this method. One thing that could be done is reduce the number of submissions allowed, since 200 is rather a lot: 50 should be plenty since you can try your method offline on test data. (Maybe even less than 50 is better.)

Another problem is that there are potentially even larger information leaks from the non-hidden problem instances (four instances in this case), because you get to see about 20 digits of output and there is execution time and memory usage that can be manipulated. Perhaps it would be a good idea to exclude the non-hidden problem instances entirely from the final judgement (in this case judging on 16 instances rather than 20). At least, that would fix this subproblem (information leak from non-hidden problem).

I suppose the nuclear option is not to give any information back at all (even TLE, RE etc) about what your program did on the hidden instances. I would personally be in favour of this, though I realise this could lead to disappointments if none of your submitted solutions run properly on the hidden set of instances.

answered 17 Apr, 17:58 by 4★alexthelemon

We are aware of that. Precisely to reduce that, the limit was already cut from 500 to 200. If any further reductions are needed, they will be made as well :). Do you think it is direly needed right now?

(17 Apr, 19:17) vijju123 ♦4★

Yes because I think it is better if the competition is to find the program that works best on a random problem instance, rather than being a competition to reverse-engineer the hidden instances. I think that will make CodeChef a more attractive environment to compete in. (At the moment the top entry in division 1 is engineered to work on the specific hidden instances and won't work properly on random instances. I don't blame the author for doing this, but in my opinion it's a more healthy competition if this kind of answer isn't possible.)

(2 days ago) alexthelemon4★

My personal opinion - @alexthelemon, one thing being neglected here is that challenge problems require strategy, which differs from user to user, and this is also not so trivial. What you did, I see as another strategy, which probably nobody else tried, and doing it is also not an easy task. I wish you all the best for the next question where you can apply this strategy again. There is no harm in doing this; for other problems too, we try to find edge cases and sometimes hard-code them. Others might not agree with me.

(2 days ago) aryanc4035★
1

@admin will collect all feedback from the thread today, or by tomorrow at the latest. It will be taken into consideration. Thank you, both of you :)

(2 days ago) vijju123 ♦4★
1

aryanc403: I appreciate that this is an allowed strategy and is not trivial to carry out, but I don't think it is a good one to encourage, because it detracts from the primary purpose of making a good algorithm. I know (as you suggest) that I could do the same thing myself, but I am not going to: if that is the only way to win then I'd prefer to compete elsewhere instead. I don't think I'm the only one to think this, because vijju said that the maximum number of submissions has been reduced to try to prevent this. (I have a suggestion on how to modify the rules in a separate post below.)

(2 days ago) alexthelemon4★
3

@alexthelemon I strongly agree with you that reverse-engineering isn't fair. Unfortunately, it hasn't been prohibited until now, hence it's allowed. Probably it was the main reason why some contestants were too good at challenges for years. We'll discuss it, and I hope we find some solution (hidden time/memory and fewer submissions sounds good). Thanks for your feedback!! Also, congrats on winning Div2!! Good luck in Div1 :)

(2 days ago) mgch6★

@vijju123 - CHEF VIJJU'S CORNER is a nice initiative. It increases the effectiveness of the editorial and helps a lot in learning from mistakes.

answered 17 Apr, 18:58 by 5★aryanc403

2

Thank you :D

I started this when I wrote ICPC editorials for Amritapuri - to put in basically any content which I feel might be useful to some, but couldn't be put in the formal section. (I got too many complaints of my editorials being too long and too explanatory xD.) Chef Vijju's corner, or the unofficial part, is a nice place for some light-hearted humor, other approaches, and "what if" things. Glad you liked it :) . (After all, I believe my editorials should stand out from the rest, shouldn't they? xD)

(17 Apr, 19:09) vijju123 ♦4★

Yes, they should stand out from the rest.

(17 Apr, 19:21) aryanc4035★

Suggestion for how to prevent reverse-engineering of challenge problems.

Taken from the thread on CHEFPAR and reverse-engineering instance parameters. In my opinion it is better if the competition is to find the program that works best on a random problem instance, rather than being a competition to reverse-engineer the hidden (or visible) instances. I think that would make CodeChef a more attractive environment to compete in (it is already very good, by the way - I hope this suggestion might help make it a bit better).

Definitions: by "feedback-instances" I mean those which contribute to your provisional rank, and for which you get back a score (four of these in the CHEFPAR contest, one for each type). By "hidden instances" I mean those for which you don't get a score back during the contest (sixteen of these in the CHEFPAR contest, four for each type).

As far as I can make out, there are two competing needs:

(i) It is nice if people can be confident during the competition that their program will work (not abort, time-out etc) on hidden instances, so they need some kind of feedback during the competition that it works on hidden instances. It's also nice if people feel able to try out lots of solutions over the 10 days.

(ii) It is good (in my opinion, anyway) if you can't use the information from the result code (AC, TLE etc) to reverse-engineer parameters of the hidden instances. It's also good if you can't use the result information (score, time, memory usage) from the feedback-instances to reverse-engineer their parameters.

I think you can satisfy both of these needs by the following modifications to the rules:

(i) You are still allowed to submit a lot of answers (say 200), but in only (say) 20 of these will you get result code feedback from the hidden instances. 200 should be plenty to try out lots of ideas, and 20 should be plenty to make sure your program (that you already know runs successfully on feedback-instances) doesn't crash on hidden instances. The user would get to choose which (up to 20) submissions count for the extra result code information. (The displayed time and memory usage would be taken from the feedback-instances, so you can't get extra information about the hidden instances that way.)

(ii) feedback-instances are excluded entirely from the final score (after competition ends). In the case of the April 2018 challenge, that would mean you would be judged on 16 hidden instances rather than 20 = 4 + 16 instances. The reason for this is that it is very easy to leak information back from the feedback-instances, much easier than from the hidden instances. For example in CHEFPAR, during the competition you got back a 16-digit final score: you could encode information in some of these digits, so you get back much more than 1 bit of information per submission.

I think these changes would keep most of the necessary feedback so that users know their code is working and preserve the fun of seeing a live scoreboard, but would mostly prevent the problem being "hacked".
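A small sketch of how the two rule changes above could look on the judge side: displayed score/time/memory always come from the feedback-instances, and a result code from the hidden instances is returned only on submissions the user flags, up to a budget of 20. All names and the result structure are assumptions for illustration, not an actual CodeChef design:

```python
HIDDEN_FEEDBACK_BUDGET = 20  # rule (i): at most 20 flagged submissions

def build_report(result, want_hidden_code, hidden_codes_used):
    """result is assumed to look like:
       {'feedback': {'verdict': 'AC', 'score': 91.2, 'time': 3.8, 'mem_mb': 210},
        'hidden':   {'verdict': 'TLE'}}"""
    report = {
        # Everything shown live comes from the feedback-instances only.
        "provisional_score": result["feedback"]["score"],
        "time": result["feedback"]["time"],
        "memory_mb": result["feedback"]["mem_mb"],
        "verdict": result["feedback"]["verdict"],
    }
    if want_hidden_code and hidden_codes_used < HIDDEN_FEEDBACK_BUDGET:
        # Rule (i): reveal only the result code (AC / TLE / RE ...), nothing else.
        report["hidden_verdict"] = result["hidden"]["verdict"]
        hidden_codes_used += 1
    # Rule (ii): the final ranking would use only the hidden instances,
    # computed after the contest, so nothing about them is reported here.
    return report, hidden_codes_used
```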

answered 2 days ago by 4★alexthelemon

In my opinion it is better if the competition is to find the program that works best on a random problem instance

Oh! Are you suggesting that the test cases the program runs on should be dynamically generated rather than being fixed?

We fear that some contestants may get unlucky (or too lucky - both are bad :( ). For instance, once in the last problem of a long challenge (something on squarefree numbers), the test cases were dynamically generated; my solution, which TLE'd on some cases, got accepted on the 5th try. We would need to find a way to minimize, or even prevent, such instances if we are to implement it.

(2 days ago) vijju123 ♦4★

I think we can implement hiding the time and memory taken for the challenge problem - merely telling whether it's AC'd or not. That can help a lot.

I think we can do away with reporting the detailed "Score" of the problem - merely telling how many points it fetched you out of 100 seems good.

The suggestion to "submit up to 200 solutions, out of which at most 20 (which ran on the hidden TC) will be considered for the leaderboard" seems nice. 20 submissions limits reverse engineering by a lot.

Already pinged @admin to collect feedback by tomorrow or the day after, so feel free to suggest :)

(2 days ago) vijju123 ♦4★

No I wasn't suggesting that the test case should be different for different people. That would make it far too random. You definitely need the same test cases for everyone, but no information about them should leak out. That way the problem from the programmer's point of view is to get the best result on a random instance (because he or she knows nothing about the test case, so it is effectively a random instance from their point of view).

(2 days ago) alexthelemon4★

I didn't mean to suggest that "at most 20 will be considered for the leaderboard". Sorry if I wasn't clear.

I was suggesting that when you submit a challenge problem solution you should have an extra option called "receive return code from hidden instances". You are allowed to select this option at most (say) 20 times during the competition. When you select this option, if you get an AC it means you can be sure that your program worked for the hidden TCs.

The reason for restricting return codes like this is that the mechanism for information leaking back to the user is via the return code...

(2 days ago) alexthelemon4★

I get it now. Thanks :)

(2 days ago) vijju123 ♦4★

In addition, the time and memory information that you get back should only be for the "feedback-instances".

And I think it is also probably better not to include the feedback-instances in the final score because too much is known about them. (Another reason: if you do include them, as happens now, then this makes @algmyr's suggestion not work properly. That is, even if only the last submitted program is scored in the final ranking, you still have an incentive to keep resubmitting a random algorithm until you get a good visible score, even though you can only see part of your score.)

(2 days ago) alexthelemon4★

Personally I think one solution would be to completely separate the provisional tests from the actual tests. Provisional tests would only give a temporary hint of the performance of programs, and would not be included in the actual set of tests. No information should be given on the hidden tests. Since the data generation algorithm is given, you can easily test performance on your own system, and the provisional tests should catch most server-side issues.

Combined with my earlier comment about only judging the last submission, this should both prevent reverse engineering and discourage spam submissions.

(2 days ago) algmyr6★

I would be happy with that option (no feedback from actual tests at all), but I got the impression people wanted a bit of certainty that their programs would still work on the actual tests, so I made the above suggestion (20 return codes) as a compromise. But your suggestion has the virtue of simplicity, and as you say it's unlikely that a program which passes the provisional tests would fail to complete the actual tests (though you could just about imagine that it runs in 3.96 seconds on the provisional tests but TLEs at 4.01 seconds on the actual tests due to a slight difference in the data).

(2 days ago) alexthelemon4★
1

Judging on the last submission only is tempting, though it might make the comparison a bit more random because the tail of the score distribution obtained from a random algorithm (which you get from maxing over lots of attempts) probably has less variance than a single instance.

But this could be fixed by increasing the number of hidden test cases. And this wouldn't require extra server time compared to what happens now because the server would only need to run a single entry, not all 200. (Though it may delay scoring slightly after the contest if they aren't being run pre-emptively.)

(2 days ago) alexthelemon4★
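A quick simulation of the variance point above: the best of many noisy attempts varies much less between contestants than a single attempt does. The score distribution below is made up purely for illustration:

```python
import random, statistics

def spread(trials=2000, attempts=200):
    single, best_of_n = [], []
    for _ in range(trials):
        scores = [random.gauss(50, 10) for _ in range(attempts)]
        single.append(scores[-1])      # "judge only the last submission"
        best_of_n.append(max(scores))  # "judge the best of all submissions"
    return statistics.stdev(single), statistics.stdev(best_of_n)

print(spread())  # roughly (10, 3): max-of-200 is far less spread out than one draw
```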

Thank you for this informative discussion :) We will definitely take these points into consideration while figuring out what to do. We'll get back to you soon.

(yesterday) admin ♦♦0★
2

I should correct something I said above. It's not just the return code that leaks information: another mechanism is the reported memory usage. If you want to discover the number 'x' from a hidden test case you just allocate 'x' MB in your code then stop. The results page will then show you what 'x' was. (I didn't realise that the memory usage from the results page included that of the hidden cases.)

So a simple change, even if nothing else is changed, would be to make the result page only report the time and memory usage from the provisional test cases, not the hidden test cases.

(yesterday) alexthelemon4★
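For concreteness, the memory-usage leak described above amounts to something like the following sketch (purely illustrative; the input format is assumed):

```python
import sys

def main():
    data = sys.stdin.read().split()
    x = int(data[0])                  # some parameter of the (hidden) test case
    pad = bytearray(x * 1024 * 1024)  # allocate x MB, so reported peak memory ~ x MB
    # Exit without producing a real answer: the verdict doesn't matter,
    # only the memory figure shown on the results page does.
    del pad

if __name__ == "__main__":
    main()
```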

For documentation: today, as soon as the editorials were released, @vijju123 was quite active in the forum answering users' queries. This is also a nice initiative - having someone from the problem-setting panel available to resolve our queries.

answered 2 days ago by 5★aryanc403

@aryanc403- Why? Thanks mate for the compliments!

We intend to do that as a regular practice to be honest, but problem setters are really busy at times. Like, some have end-semester exams etc. Even I have those, but comparatively I am a bit free right now, so I am trying to resolve as much as I can :)

This time, I also tried to answer as many comments as I could xD, just to assist the setters. You won't believe it, but we had over 150 comments for WEIGHTNUM alone, all saying one thing: "Why is W up to 300?"

Well, there is only 1 @vijju123 in the entire world. And I have the motto that "If it's me, things should change for good :)". I tried my best to act along the lines I expect the panel to act - e.g. answering all comments, even if it's a trivial "Comment denied" or "Asking for hints isn't allowed." It's really important, in my opinion, that the setting panel responds to comments, else there hangs a feeling of aloofness. Like in CF rounds, all queries get answered, even if it takes 5-10 minutes. If it happens there, why not here?

Though I must agree with the other side too: answering comments is tiring. For AVGPR and WEIGHTNUM, there were, like, 10 new comments every hour. Don't believe me? See the tab below -

View Content

And that brings me to one of my key points. People, especially in Div2, should NOT share ideone links etc. in comments, nor ask for hints, nor ask the setter to debug their code. That's simply outrageous. Because of that, we keep missing some valid comments :(

answered 2 days ago by 4★vijju123 ♦, edited 2 days ago

One query left: do comments on a question during a live contest require approval before they are made public?

And the activeness I mentioned was from after the contest, which I saw today after 3 pm. I never knew there were this many comments during a live contest, or how hard it is to reply to them all. Maybe that's because I rarely open the comments section of a question; whenever I did, I only saw a few comments. :) Because if there is an error, we soon get an updated question anyway.

(2 days ago) aryanc4035★
1

Yes, they require approval to be public, else people paste all sorts of code and ideone links. I once decided to get them all disqualified - but later felt it would be too harsh on those who are new; perhaps they didn't bother to read the rules.

Yeah, I tried to answer as many comments as I could xD. For the updated versions as well, thanks go to @mgch - Misha is one of the best people out there :)

(2 days ago) vijju123 ♦4★

This is to inform you that all the Setter's, Tester's and Editorialist's solutions are successfully linked to the editorials, except for CUTPLANT. (I am in talks over the delay for that editorial; we will look into it.)

The editorialist's solution is commented most of (I think all :p) the time, so you can refer to it in case you have any further doubts. Hope you enjoyed the editorials as much as (or preferably more than :p) the problems. In case there is any further issue accessing the solution of any other editorial, except CUTPLANT, do ping me here and we will look into it. With that, I would like to conclude the final announcement for this long challenge with three magical words.....

View Content
answered 2 days ago by 4★vijju123 ♦

Just chiming in to the lovely discussion of @alexthelemon and @algmyr.

As I see it, the essence of the CodeChef tie-break problems is to design an efficient algorithm that finds an approximate solution to an NP-hard problem. You are given the time and memory budgets, and also - which may not happen in real life - a nicely defined domain of the possible input values. Since a lot of approximation algorithms carry a significant amount of randomization, it is appropriate to ask how the fairness of the judging process can be assured.

20 test cases is definitely too low of a number to have a 95% confidence that one algorithm is better than another. Furthermore, the current practice of taking the best result out of (potentially) 200 solutions favors the algorithms with high volatility of the results - which is kind of gambling.

Definitely, the first step is the clear separation of the provisional tests from the hidden/final ones. The purpose of the provisional tests is to give you some kind of calibration of your solution to the CodeChef system resources, and to give a quick test run. The final score should be calculated from the hidden tests generated for the final scoring only. Maybe a hundred test cases to give a good spread of the potential input values, and to reduce the element of darn luck.

The second step (which is more laborious to implement) is to give a contestant the choice which solution he/she considers as a final solution. Not the last solution, but the actual choice to define the final solution. We can even go one step further - not a single solution but multiple solutions. But out of multiple solutions, not the maximum score but the average score. Some people may prefer to gamble and choose a single final solution, some people may want to mitigate the risk (the downside) and choose multiple solutions (CodeChef may limit the number of final solutions by, say, 10). This way the final score calculation can still proceed at reasonable pace (more test cases but less solutions to check) and the judging as a whole appears more fair.
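A quick simulation backing the sample-size point above: with a small true gap between two algorithms and noisy per-test scores, 20 test cases misrank them far more often than 100 would. The score distributions here are made-up assumptions for illustration:

```python
import random

def misrank_rate(n_tests, trials=5000, gap=2.0, noise=10.0):
    """How often does the truly better algorithm A average below B?"""
    wrong = 0
    for _ in range(trials):
        a = sum(random.gauss(50 + gap, noise) for _ in range(n_tests)) / n_tests
        b = sum(random.gauss(50,        noise) for _ in range(n_tests)) / n_tests
        wrong += a < b
    return wrong / trials

print(misrank_rate(20), misrank_rate(100))  # roughly 0.26 vs 0.08
```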

answered 12 hours ago by 6★oleg_b

What I have to say is: in the next contest I'm expecting to see something new on the table, after so many suggestions. Finally, the admin has to decide. :P

(12 hours ago) aryanc4035★
1

Will ask them to take this feedback as well, though I feel the second step may take time to implement.

@aryanc403 xD. I know what you're feeling, but usually we test things out and bring changes to contests only when fully sure. So things may take some time. :)

(5 hours ago) vijju123 ♦4★

@oleg_b Thanks for elucidating the reasons so well :) We are looking into this.

(4 hours ago) admin ♦♦0★

As an additional reminder -

Kindly discuss any issues, clarifications, etc. related to problems/problem statements in the COMMENTS section of the problem with the setting panel. Any feedback related to this long challenge should be posted here only - the use of separate threads should be avoided as it makes collecting feedback a mess.

Any instance of your code becoming public through online IDEs like ideone.com is punishable, and you will be penalized with the same punishment as any regular plagiarist.

answered 04 Apr, 20:05 by 4★vijju123 ♦

If possible, the challenge should be extended by just a day, as we are having Google Code Jam today.

answered 07 Apr, 09:25 by 4★aadeshrathore

Duration of APRIL18 will be extended by one day soon.

(07 Apr, 17:09) mgch6★

It really shouldn't matter. GCJ was just for 27 hours.

(08 Apr, 16:00) dollarakshay4★
1

Yes, but we extended APRIL18A for another reason - JADUGAR

(08 Apr, 18:03) mgch6★

For the 300 laddus, should the rank be in the top 20 (Indians) of Division 1, or the top 20 of both divisions combined?

answered 11 Apr, 19:06 by 5★rj25

Div1, afaik. The page there says nothing about a Div2 top 20. It would be unfair to those who have just moved up to Div1 if an equal prize were given in Div2 - that would be undesirable.

(11 Apr, 19:55) vijju123 ♦4★

Why is the duration of the APRIL18 long challenge not extended for Div B, while for Div A it is already extended? @mgch @vijju123

answered 11 Apr, 22:53 by 3★souradeep1999

Div1 got extension because the problem JADUGAR was split into two, and a new subproblem was added in the newer version, JADUGAR2. This did not affect div2 in any way.

However, if you want an extension as well, I can talk to @mgch regarding that- but I need a valid reason to do so.

(12 Apr, 02:36) vijju123 ♦4★

Because Code Jam takes almost one full day, many participants can't try the problems throughout the whole day. If you could kindly extend it, it would be better... @vijju123

answered 12 Apr, 19:21 by 3★souradeep1999

I will consult the contest admin @mgch about your concern.

(13 Apr, 13:38) vijju123 ♦4★

The contest for Div2 is extended now. It will end on Tuesday with Div1.

(13 Apr, 22:42) vijju123 ♦4★

thank you @vijju123

answered 13 Apr, 22:50 by 3★souradeep1999

When will the editorials get uploaded?

answered 17 Apr, 16:18 by 2★ankit_3005, edited 17 Apr, 16:20

They already are.

(17 Apr, 16:35) vijju123 ♦4★