In the current system, many languages appear in the supported list, and (as far as I know) differences in time & memory constraints are handled by applying constant multipliers.
What I’ve observed is that only C/C++ reliably work here; Python & Java may do the job occasionally. This is to be expected, since C/C++ are a must for extreme-end performance. Moreover, setters/testers have very limited bandwidth to implement solutions in multiple languages, and the performance differences between languages can’t be accurately reduced to constant multipliers.
The following two improvements could make the system more transparent to users:

- A new section for each problem, called “Tested Languages”, listing only the languages for which at least one passing solution exists (the setter’s, a tester’s, or a competitor’s).
-> This not only warns users when they are entering uncharted territory, but also enables nice features such as filtering problems by whether they were tested in your language of choice.
- Variable multipliers for tested languages, which authors can tweak per problem.
-> I have never viewed the system from an author’s perspective, so my knowledge of this part is limited.
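To make the second suggestion concrete, here is a minimal sketch of what per-problem multipliers might look like, compared to the current single global constant per language. All names here are hypothetical, not any real judge’s API:

```python
# Hypothetical sketch: per-language time-limit multipliers that a problem
# author could override per problem, instead of one global constant.

BASE_TIME_LIMIT = 1.0  # seconds, calibrated against the reference C++ solution

# Current scheme (assumed): one fixed multiplier per language, applied everywhere.
GLOBAL_MULTIPLIERS = {"cpp": 1.0, "java": 2.0, "python": 3.0}

def effective_limit(language, per_problem_overrides=None):
    """Return the time limit for a language; per-problem overrides take priority."""
    multipliers = dict(GLOBAL_MULTIPLIERS)
    if per_problem_overrides:
        multipliers.update(per_problem_overrides)
    return BASE_TIME_LIMIT * multipliers[language]

# An author who has verified a passing Python solution could loosen
# (or tighten) just this problem's Python limit:
print(effective_limit("python"))                   # global default: 3.0
print(effective_limit("python", {"python": 5.0}))  # per-problem tweak: 5.0
```

The point of the sketch is that the override dictionary lives with the problem, so a global constant remains the fallback while authors adjust only the languages they have actually tested.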
Personally, the language I really love solving problems in is Haskell, and I’d love to see which problems are known to work well with it.
I’d also love to know when I can safely switch to Python in competitions to avoid most of the verbosity.