Before the contest starts, a number of things need to be configured by the administrator. You can check this information, such as the problem set(s), test data and time limits, the contest start and end times, and the times at which the scoreboard will be frozen and unfrozen, via the links on the front page.
Note that multiple contests can be defined, with corresponding problem sets, for example a practice session and the real contest.
The problem sets are listed under `Problems'. It is possible to change whether teams can submit solutions for a problem (using the toggle switch `allow submit'). If disallowed, submissions for that problem will be rejected, but, more importantly, teams will not see that problem on the scoreboard. Disallowing `allow judge' will make DOMjudge accept submissions but leave them queued; this is useful in case an unexpected issue shows up with one of the problems. The timelimit is the maximum number of seconds a submission for this problem is allowed to run before a `TIMELIMIT' response is given (possibly multiplied by a language time factor, see `Languages' below). Note that a `timelimit overshoot' can be configured to let submissions run a bit longer; although DOMjudge uses the actual limit to determine the verdict, this lets judges see whether a submission is close to the timelimit.
Problems can be imported into and exported from DOMjudge using zip-files that contain the problem metadata and testdata files, based on the problemarchive.org format. See the appendix `Problem package format specification' for details.
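As an illustration, the contents of such a zip-file might be laid out roughly as follows (the file names are examples only; the appendix gives the exact requirements):

    problem.yaml             - problem metadata (name, validation type, limits)
    problem_statement/       - the problem text
    data/sample/1.in         - sample test input, visible to teams
    data/sample/1.ans        - corresponding sample answer
    data/secret/1.in         - secret test input
    data/secret/1.ans        - corresponding answer
    submissions/accepted/    - reference solutions that should be judged correct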
Problems can have special compare and run scripts associated with them, to deal with problems that require non-standard evaluation. For more details see the administrator's manual.
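As a sketch, a compare script follows the problemarchive.org output validator interface: it is called with the testcase input file, the judge's answer file and a feedback directory as arguments, receives the team's output on standard input, and signals the verdict via its exit code (42 for accepted, 43 for wrong answer). A minimal example in Python, for a hypothetical problem where the input holds two integers and any output equal to their sum is accepted, might look like this:

    #!/usr/bin/env python3
    # Minimal compare-script sketch following the problemarchive.org
    # output validator interface: arguments are the testcase input, the
    # judge's answer and a feedback directory; the team's output arrives
    # on stdin; exit code 42 means accepted, 43 means wrong answer.
    # Hypothetical problem: input holds two integers, output their sum.
    import os
    import sys

    def main():
        input_file, answer_file, feedback_dir = sys.argv[1:4]

        with open(input_file) as f:
            a, b = map(int, f.read().split())

        tokens = sys.stdin.read().split()

        # Accept exactly one token whose integer value equals the sum;
        # the judge's answer file is not needed for this check.
        try:
            correct = len(tokens) == 1 and int(tokens[0]) == a + b
        except ValueError:
            correct = False

        if correct:
            sys.exit(42)  # accepted
        # Explain the verdict to the judges via the feedback directory.
        with open(os.path.join(feedback_dir, 'judgemessage.txt'), 'w') as f:
            f.write('expected %d, got %s\n' % (a + b, ' '.join(tokens)))
        sys.exit(43)      # wrong answer

    if __name__ == '__main__':
        main()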
The `Languages' overview is quite similar. It has a time factor column: submissions in a language with time factor 2 are allowed to run for twice the timelimit specified under `Problems'. This can be used to compensate for the execution speed of a language, e.g. Java; for example, with a time factor of 2, a problem timelimit of 5 seconds becomes an effective limit of 10 seconds for Java submissions.
For checking whether your testdata conforms to the specifications of your problem statement, we recommend the checktestdata program, which is available from a separate repository. It not only checks for simple (spacing) layout errors: you specify a grammar file for the testdata, against which the testdata is then verified. This allows e.g. for bounds checking.
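For example, a grammar for a hypothetical testdata file consisting of a count n on the first line, followed by n lines each holding one integer, might look roughly as follows (see the checktestdata documentation for the exact syntax):

    INT(1,100,n) NEWLINE
    REP(n)
        INT(-1000,1000) NEWLINE
    END
    EOF

Running checktestdata with such a script over a testdata file then flags any value outside the given bounds as well as stray whitespace.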
This program is built upon the separate library libchecktestdata.h
that can be used to write the syntax checking part of special
compare scripts: it can easily handle the tedious task of verifying
that a team's submission output is syntactically valid, leaving just
the task of semantic validation to another program.
Before a contest, you will want to have tested your reference solutions on the system, to check whether they are judged as expected and perhaps to use their runtimes for setting the timelimits of the problems. There is no special method to test such solutions; the easiest way is to submit them as a special team before the contest. This requires some special care and coordination with the contest administrator. See the administrator's manual for more details.
If your contest has a test session or practice contest, also use it as a general rehearsal of the jury system: judge test submissions as you would during the real contest and answer incoming clarification requests.