Hi Tobi,
On Wed, October 31, 2018 19:59, Tobias Werth wrote:
> The main idea is to skip the next-to-judge calls (for as long as possible) and to group the add-judging-run calls. The next-to-judge calls are not necessary as long as all previous test cases are correct, and for the most commonly used judging model (fail on the first error) that holds for at least N-1 of the N test cases. We still want to ping back from time to time to a) signal progress, and b) avoid posting too much data back to the database at once.
>
> So I could imagine reasonable defaults would be to post back at least every 10s and whenever we have accumulated more than twice the output limit, but that should be configurable.
yes, that seems like a good approach. It does diminish a bit the experience of watching a judging happen "live" in the DOMjudge interface: you now get updates less frequently when you are watching the page of a submission that is being judged.
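Very roughly, the batching in the judgedaemon's testcase loop could then look something like the sketch below. All names here (judge_testcase(), post_judging_runs(), the surrounding variables) are made up for illustration, not the actual code:

    <?php
    // Hypothetical sketch of the batching heuristic; none of these names are
    // the real judgedaemon code.
    $post_interval = 10;                 // post back at least every 10 seconds ...
    $size_limit    = 2 * $output_limit;  // ... or once we exceed 2x the output limit

    $pending  = [];
    $pendsize = 0;
    $lastpost = time();

    foreach ($testcases as $tc) {
        $run = judge_testcase($tc);      // run + compare a single testcase
        $pending[] = $run;
        $pendsize += strlen($run['output_run'] ?? '');

        if ((time() - $lastpost) >= $post_interval || $pendsize > $size_limit) {
            // One (batched) add-judging-run API call for everything gathered so far.
            post_judging_runs($judgingid, $pending);
            $pending  = [];
            $pendsize = 0;
            $lastpost = time();
        }
    }
    if (!empty($pending)) {
        post_judging_runs($judgingid, $pending);  // flush whatever is left at the end
    }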
> For the actual grouping/batching I can see two options: either extend the one POST add-judging-run call to allow multiple results to be posted, or
I think I prefer this approach.
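To make that concrete, the body of such a batched add-judging-run POST could look roughly like this; the field names are modelled on what the single-run call sends today, but treat them as illustrative:

    <?php
    // Illustrative payload only; the field names are assumptions.
    $batch = [
        [
            'testcaseid'   => 11,
            'runresult'    => 'correct',
            'runtime'      => 0.03,
            'output_run'   => base64_encode($team_output_tc11),
            'output_error' => base64_encode(''),
        ],
        [
            'testcaseid'   => 12,
            'runresult'    => 'wrong-answer',
            'runtime'      => 0.05,
            'output_run'   => base64_encode($team_output_tc12),
            'output_error' => base64_encode($team_stderr_tc12),
        ],
    ];
    // The endpoint would then accept the whole array in a single POST.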
I was also wondering whether you'd want to take fetch_executable() outside of the inner loop; I know it does not fire off API calls each time, but it seems unlikely, or even undesirable, for executables to change between test cases. Call it once before judging starts?
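Something like this, roughly; the argument list and the judge_testcase() helper are placeholders, not the actual signatures:

    <?php
    // Fetch the executable once, before the per-testcase loop ...
    $runpath = fetch_executable($workdirpath, $execid);

    foreach ($testcases as $tc) {
        // ... instead of fetching it again in here for every testcase.
        judge_testcase($tc, $runpath);   // placeholder for the per-testcase work
    }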
Also, I think we could save some time by not rebuilding the cURL connection for each and every request. I made a pull request about that here: https://github.com/DOMjudge/domjudge/pull/445
It's not really tested, since MySQL on my real development instance is broken due to dependency hell, but maybe you can measure the effect with your testset. We should do it in any case, I guess.
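The gist of it is to keep one cURL handle around instead of doing curl_init()/curl_close() per request, so the underlying connection can be reused. A minimal sketch of that idea (not the PR's actual code; the helper name and the endpoints are made up):

    <?php
    // Reuse a single cURL handle (and thus the connection) across API requests.
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    function api_request($ch, string $url, array $post = [])
    {
        curl_setopt($ch, CURLOPT_URL, $url);
        if (!empty($post)) {
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($post));
        } else {
            curl_setopt($ch, CURLOPT_HTTPGET, true);
        }
        $response = curl_exec($ch);
        if ($response === false) {
            throw new RuntimeException('API request failed: ' . curl_error($ch));
        }
        return $response;
    }

    // Illustrative calls; the same handle is passed in every time.
    api_request($ch, $resturl . '/judgehosts', ['hostname' => $myhost]);
    api_request($ch, $resturl . '/judgings?judgehost=' . urlencode($myhost));
    curl_close($ch);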
Cheers, Thijs