Environment

Team hardware

SWERC 2023-2024 will be onsite. Computers for the contest are provided to the teams.

All the machines will be laptops with AZERTY keyboards with the French layout. Teams are not permitted to bring their own keyboards, but may put stickers on the keyboard if they wish; see regulations.

Of course, regardless of the physical layout of the keyboards, it is always possible to reconfigure them in software to a different layout (one that does not match what is printed on the keys).
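
For instance, assuming the standard X11 tools are available on the contest machines (an assumption, not a guarantee), the layout can be switched from a terminal:

    setxkbmap us    # switch to a US QWERTY layout
    setxkbmap fr    # switch back to the French AZERTY layout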

Team software

The software configuration of the team environment is described below.

(This section is not final, and is subject to modifications.)

  • OS
    • Debian
  • Desktop
    • GNOME
    • Xfce
  • Editors
    • vi/vim
    • gvim
    • emacs
    • gedit
    • geany
    • kate
    • Atom
  • Languages
    • Java
      • OpenJDK version 17.0.9
    • C
      • gcc 12.2.0
    • C++
      • g++ 12.2.0
    • Python 3.9.6
      • PyPy3 7.3.11
    • OCaml
      • ocamlopt 4.13.1
    • Kotlin 1.9.21
  • IDEs
    • IntelliJ IDEA Community Edition
    • PyCharm Community
    • Eclipse IDE for Java Developers
    • Eclipse IDE for Python Developers
    • Visual Studio Code
      • C/C++ extension by Microsoft
      • Language Support for Java extension by Red Hat
      • Python extension by Microsoft
      • Vim by vscodevim (disabled by default)
    • Apache NetBeans IDE
  • Debuggers
    • gdb
    • valgrind
    • ddd
  • Browsers
    • Firefox

The exact versions of the packages above will be published closer to the contest.

  • Python: NumPy will not be available, nor will networkx.

Teams may ask for more software to be installed by emailing us (at swerc@lip6.fr). We will consider all incoming requests, but we reserve the right to deny any request we consider unreasonable.

An offline version of the documentation for all the supported languages will be installed on the team computers. The documentation will be very similar to devdocs.io, restricted to the supported languages.

Compilation flags

The judging system will compile submissions with the following options. In some cases, it may add additional flags to specify the path of the produced binary (e.g., -o ... for C/C++).

Each command exists as an alias on the team machines:

  • C (gcc)
    • mygcc: gcc -x c -Wall -Wextra -O2 -std=gnu11 -static -pipe "$@" -lm
  • C++ (g++)
    • myg++: g++ -x c++ -Wall -Wextra -O2 -std=gnu++20 -static -pipe "$@"
    • Note: unlike the ICPC World Finals, the C++ standard version is 20, not 17.
  • Java (OpenJDK 17)
    • myjavac: javac -encoding UTF-8 -sourcepath . -d . "$@"
    • myjava: taskset -c 0 java -Dfile.encoding=UTF-8 -XX:+UseSerialGC -Xss128m -Xms1856m -Xmx1856m "$@"
    • Note: 1856m is the task's memory limit (2GB) minus 192MB.
  • OCaml (OCaml 4.13.1)
    • myocamlopt: ocamlopt unix.cmxa str.cmxa bigarray.cmxa "$@"
  • Kotlin (Kotlin 1.9.21)
    • mykotlinc: kotlinc -d . "$@"
    • mykotlin: taskset -c 0 kotlin -Dfile.encoding=UTF-8 -J-XX:+UseSerialGC -J-Xss128m -J-Xms1856m -J-Xmx1856m "$@"
    • Note: 1856m is the task's memory limit (2GB) minus 192MB.
  • Python (pypy3)
    • mypython3: pypy3 "$@"
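
For example, assuming a C++ solution saved as sol.cpp (a hypothetical file name), you can compile it on a team machine with the same flags as the judge:

    myg++ sol.cpp -o sol

Since the aliases forward their arguments via "$@", extra flags such as -o above are simply appended to the predefined command.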

Judging hardware

Compilation and execution as described above will take place in a “sandbox” on dedicated judging machines. The judging machines will be as identical as possible to, and at least as powerful as, the machines used by teams. The sandbox will allocate 2GB of memory; the entire program, including its runtime environment, must execute within this memory limit. For languages that run on a managed runtime (Java, Python, and Kotlin), the runtime environment includes the interpreter or virtual machine (that is, the JVM for Java and Kotlin, and the Python interpreter for Python).

The sandbox memory allocation size will be the same for all languages and all contest problems. For Java and Kotlin, the above commands show the stack size and heap size settings which will be used when the program is run in the sandbox.

Judging software

The software configuration of the judge machines is based on an Ubuntu 22.04 64-bit machine with exactly the same software versions as the team machines above.

The contest control system that will be used is DOMjudge.

Submissions will be evaluated automatically unless something unexpected happens (system crash, error in a test case, etc.).

Verdicts are given in the following order:

  • Too-late: This verdict is given if the submission was made after the end of the contest. This verdict does not lead to a time penalty.
  • Compiler-error: This verdict is given if the contest control system failed to compile the submission. Warnings are not treated as errors. This verdict does not lead to a time penalty. Details of compilation errors will not be shown by the judging system. If your code compiles correctly in the team environment but leads to a Compiler-error verdict on the judge, submit a clarification request to the judges.
  • The submission is then evaluated on several secret test cases in some fixed order. Each test case is independent, i.e., the time limits, memory limits, etc., apply to each individual test case. If the submission fails to correctly process a test case, evaluation stops and an error verdict is returned (see the next list), and a penalty of 20 minutes is added for the problem (counted against the team only if the problem is eventually solved). If a submission is rejected, no information will be provided about which test case(s) the submission failed on.
  • Correct: If the evaluation process completes and the submission has returned the correct answer on each secret test case, meeting all requirements, then the submission is accepted. Note that this verdict may still be overridden manually by the judges.

The following errors can be raised on a submission. The verdict returned is the one for the first test case where the submission has failed. The verdicts are as follows, in order of priority:

  • Error verdicts where execution did not complete: the verdict returned will be that of the first error among:
    • Run-error: The submission failed to execute properly on a test case (segmentation fault, division by zero, exceeding the memory limit, etc.). Details of the error are not shown.
    • Timelimit: The submission exceeded the time limit on a test case, which may indicate that your code went into an infinite loop or that the approach is not efficient enough.
  • Error verdict where execution completed but did not produce the correct answer:
    • Wrong-answer: The submission executed properly on a test case but did not produce the correct answer. This could be too much output (correct output followed by extra output), a wrong answer, or no output at all. Further details are not specified.

Note that there is no "presentation-error" verdict: errors in the output format are treated as wrong answers. DOMjudge does allow a reasonable amount of extra whitespace, but we advise against relying on this in your solution.

Problem set

The problem set will be provided on paper (one copy per contestant), and also in PDF files on the judge system (allowing you to copy and paste the sample inputs and outputs). We may also provide an archive of the sample inputs and outputs to be used directly.

Making a submission

Once you have written code to solve a problem, you can submit it to the contest control system for evaluation. Each team will be automatically logged into the contest control system. You can submit using the web interface, by opening a web browser and using the provided links/bookmarks, or you can submit by command line using the submit program. (If you use the submit program, make sure that the file that you wish to submit has the correct name, e.g., a.cpp, because submit uses this to determine automatically for which problem you are submitting.)
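
For instance, assuming your solution to problem A is in a file named a.cpp, a command-line submission might look like this:

    submit a.cpp

The submit program infers the problem (A) from the file name and the language (C++) from the extension, and asks for confirmation before sending.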

Asking questions

If a contestant has an issue with the problem set (e.g., it is ambiguous or incorrect), they can ask a question to the judges using the clarification request mechanism of DOMjudge. Usually, the judges will either decline to answer or issue a general clarification to all teams to clarify the meaning of the problem set or fix the error.

If a contestant has a technical issue with the team workstation (hardware malfunction, computer crash, etc.), they should ask a volunteer in their room for help.

Neither the judges nor the volunteers will answer requests for technical support, such as debugging your code or understanding a compiler error.

Printing

During the contest, teams will have the possibility to request printouts, e.g., of their code. These printouts will be delivered by volunteers. Printouts can be requested within reason, i.e., as long as the requested quantities do not negatively impact contest operations.

You can print using the web interface or by command line using the printout program. (Make sure that the file that you wish to print has the correct extension, because printout uses this to determine automatically which syntax highlighting to apply.)

Reading large instances

We recommend taking care when reading large input instances, as slow input routines can easily dominate the running time; see the C++ sketch below.

  • C++: prefer scanf over cin
  • Python: prefer sys.stdin.readline over input
  • Java: prefer BufferedReader over Scanner(System.in)
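
As an illustration, here is a minimal C++ sketch of fast input with scanf; the input format (an integer n followed by n integers to sum) is hypothetical and chosen only to show the pattern:

    #include <cstdio>

    int main() {
        // Hypothetical input format: an integer n, then n integers.
        int n;
        if (scanf("%d", &n) != 1) return 0;
        long long sum = 0;
        for (int i = 0; i < n; i++) {
            int x;
            scanf("%d", &x);  // scanf avoids the overhead of synchronized C++ streams
            sum += x;
        }
        printf("%lld\n", sum);
        return 0;
    }

Alternatively, cin can be made comparably fast by calling std::ios::sync_with_stdio(false) and std::cin.tie(nullptr) at the start of main.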

SWERC off - competition for everyone

Another DOMjudge server will be set up with the SWERC problem set, open to anyone who wants to compete. You can self-register, but you will not appear on the scoreboard of the official SWERC competition.

ICPC Global sponsors

Huawei, JetBrains, Jane Street

Gold sponsor

Bending Spoons

Silver sponsors

Hexaly, CFM, meritis

Bronze sponsors

CNES, onepoint, LIP6, CNRS, IRIL