Environment

Team hardware

The contest takes place remotely this year. Exceptionally, each contestant can participate from their own machine (or even from multiple machines); there are no restrictions on which machines contestants may use.

Team software

Exceptionally this year, participants are free to use any software environment to solve the problems. The supported programming languages are as follows, with details about the versions and setup on the judging system:

  • Languages
    • C
      • gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
    • C++
      • g++ version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
    • Java
      • openjdk version "11.0.10" 2021-01-19
      • OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)
      • OpenJDK 64-Bit Server VM (build 11.0.10+9-Ubuntu-0ubuntu1.20.04, mixed mode, sharing)
    • Kotlin
      • Kotlin version 1.4.30 (JRE 11.0.10+9-Ubuntu-0ubuntu1.20.04)
    • Python 3
      • PyPy 7.3.3 with GCC 7.5.0 (see the list of installed modules)
      • Like at the world finals, there is no support for the Python 2 programming language.
    • OCaml
      • OCaml version 4.08.1

Virtual machine

We provide an optional Open Virtualization Archive (OVA) image that can be used with virtualization software such as VirtualBox. It contains a system with the same program versions as the one used to judge submissions: the latest packages currently available on Ubuntu 20.04, plus a manual installation of Kotlin.

This image has the same configuration as the judging system. It is provided so that you can debug problems caused by version differences between your own environment and the judging environment. If you encounter issues, such as compilation errors that occur on the judging system but not on your local machine, try to reproduce them in this VM before asking the judges about them.

The use of this virtual machine is not required for the contest. The login of the default user in the image is swerc, and the password is also swerc. You can run the image in graphical mode or access it over SSH. If you run the image in graphical mode, note that the keyboard layout on the login screen is QWERTY by default. No programming environments have been installed, but you can install them yourself (the user swerc has sudo rights).

Warning! The default login and password do not provide any security! Please make sure that you do not run the machine in a way that would allow untrusted people to access it via the network.

Compilation flags

The judging system compiles (and, where relevant, runs) submissions with the following commands.

Language   Implementation   Command(s)
C          gcc              gcc -g -O2 -Wall -Wextra -std=gnu11 -static "$@" -lm
C++        g++              g++ -g -O2 -Wall -Wextra -std=gnu++17 -static "$@" -lm
Java       OpenJDK          javac "$@" (compile); java -Xss8m "$@" (run)
Kotlin     Kotlin           kotlinc "$@" (compile); kotlin "$@" (run)
Python     pypy3            pypy3 "$@"
OCaml      ocaml            ocamlopt str.cmxa "$@"

Language features

The following language features are not permitted in any of the contest languages:

  • inline assembly code
  • threads
  • file I/O
  • file management
  • device management
  • interprocess communication
  • forking and execution of external commands

In addition, in C and C++, the use of pragmas is forbidden and should raise a Compiler-error.

More generally, any system call other than memory management, reading from the standard input, writing to the standard output, and exception management, is forbidden.

Submissions using any of these features will be rejected, either automatically by the judging system, or manually by the judges.

Judging hardware

Submissions are judged on three machines, each with an Intel Xeon E5-2660 CPU (2.6 GHz, 20 cores) and between 256 GB and 512 GB of RAM.

Judging software

The judge machines run an Ubuntu virtual machine with exactly the same software versions as described in the team software section above.

The contest control system used is DOMjudge, version 7.3.2.

Submissions are evaluated automatically unless something unexpected happens (system crash, error in a test case, etc.).

Verdicts are given in the following order:

  • Too-late: This verdict is given if the submission was made after the end of the contest. This verdict does not lead to a time penalty.
  • Compiler-error: This verdict is given if the contest control system failed to compile the submission. Warnings are not treated as errors. This verdict does not lead to a time penalty. The judging system does not show details of compilation errors. If your code compiles correctly in your own environment but receives a Compiler-error verdict on the judge, submit a clarification request to the judges.
  • The submission is then evaluated on several secret test cases in some fixed order. Each test case is independent, i.e., the time limits, memory limits, etc., apply to each individual test case. If the submission fails to correctly process a test case, evaluation stops, an error verdict is returned (see the next list), and a penalty of 20 minutes is added for the problem (penalties are only counted against the team if the problem is eventually solved). If a submission is rejected, no information is provided about which test case(s) it failed.
  • Correct: If the evaluation process completes and the submission has returned the correct answer on each secret test case following all requirements, then the submission is accepted. Note that this verdict may still be overridden manually by judges.

The following errors can be raised on a submission. The verdict returned is the one for the first test case where the submission has failed. The verdicts are as follows, in order of priority:

  • Error verdicts where execution did not complete: the verdict returned will be the one of the first error amongst:
    • Output-limit: The submission produced too much output. (The precise output limit is not specified.)
    • Run-error: The submission failed to execute properly on a test case (segmentation fault, divide by zero, exceeding the memory limit, etc.). Details of the error are not shown.
    • Timelimit: The submission exceeded the time limit on one test case, which may indicate that your code went into an infinite loop or that the approach is not efficient enough. (The precise time limit is not specified, but the order of magnitude is of a few seconds.)
  • Error verdicts where execution completed but did not produce the correct answer: the verdict returned will be the first matching verdict amongst:
    • No-output: On at least one test case, the submission executed properly but produced no output at all; on every other test case, it executed properly and produced the correct output.
    • Wrong-answer: The submission executed properly on a test case but it did not produce the correct answer. Details are not specified.

Note that there is no "presentation-error" verdict: errors in output format are treated as wrong answers.

Problem set

The problem set is provided as PDF files on the judge system (allowing you to copy and paste the sample inputs and outputs). We may also provide an archive of the sample inputs and outputs to be used directly.
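One way to use such an archive is to run your solution on each sample input and diff the result against the expected output. A minimal sketch (the file names and the stand-in one-liner solution are hypothetical; the judge runs Python under pypy3, but python3 behaves the same for this purpose):

```shell
# Hypothetical sample files; real names come from the problem archive.
printf '2 3\n' > sample.in
printf '5\n'   > sample.ans
# Run a trivial stand-in solution on the sample input:
python3 -c 'a, b = map(int, input().split()); print(a + b)' < sample.in > sample.out
# Compare actual and expected output; diff is silent when they match:
diff sample.out sample.ans && echo "sample OK"
```

Remember that there is no presentation-error verdict, so even small formatting differences from the expected output will count as a wrong answer.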

Making a submission

Once you have written code to solve a problem, you can submit it to the contest control system (DOMjudge) for evaluation. To do so, navigate to the private contest control system URL that you have received, log in with your private credentials, and submit from the Web interface.

Asking questions

If a contestant has an issue with the problem set (e.g., a statement is ambiguous or incorrect), they can ask the judges a question using the clarification request mechanism of DOMjudge. Usually, the judges will either decline to answer or issue a general clarification to all teams that clarifies the statement or fixes the error.

As contestants use their own machines, the judges will not provide any technical support for problems arising in the contestant environment (e.g., hardware malfunction, crashes, data loss). These are each contestant's own responsibility.

Likewise, the judges will not answer requests for help with your code, such as debugging it or interpreting a compiler error.

Printing

Exceptionally this year, there is no printing support. Teams are free to print anything they wish on their own.

Location and rooms

Exceptionally this year, teams can participate from any location that they like.

Requests for additional software

Exceptionally this year, there is no official contest environment, so there is no mechanism to request the installation of additional software.

Gold sponsor

Jane Street

Bronze sponsors

Jump Trading
Sopra Steria

Institutional sponsors

Région Île de France

ICPC Global sponsors

Huawei
JetBrains
IBM

Lisbon local sponsors

Critical TechWorks
Unbabel