Environment

Team hardware

SWERC 2022-2023 will be held onsite. The contest computers are provided to the teams and will be randomly allocated.

All the machines will be laptops with QWERTY keyboards using the Italian layout. Teams are not permitted to bring their own keyboards, but may put stickers on the keyboard if they wish; see the regulations.

Of course, regardless of the physical layout of the keyboards, it is always possible to reconfigure them in software to a different layout (one that does not match what is printed on the keys).
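For example, on the Ubuntu machines described below, the layout can be switched from a terminal in an X11 session (a minimal sketch, not official contest guidance; under a Wayland session, GNOME's keyboard settings can be used instead):

    # Switch the keyboard to the US layout.
    setxkbmap us
    # Switch back to the Italian layout printed on the keys.
    setxkbmap it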

Laptop

All the laptops are Dell Precision 3520 15.6″ models with the following hardware:

  • CPU: Intel Core i5-7440HQ
  • RAM: 8 GB
  • Storage: 512 GB NVMe SSD
  • GPU: Intel HD Graphics 630 + Nvidia Quadro M620 2 GB

Note that the Nvidia GPU will be disabled.

Team software

The software configuration of the team environment is described here.

  • OS
    • Ubuntu 22.04 LTS Linux (64-bit)
  • Desktop
    • GNOME 3
  • Editors
    • vi/vim
    • gvim
    • emacs
    • gedit
    • geany
    • kate
    • Atom
    • kakoune (Note: this is typically not available at ICPC World Finals)
  • Languages
    • Java
      • OpenJDK version 17
    • C
      • gcc
    • C++
      • g++
    • Python 3
      • PyPy
    • Kotlin
    • Note that Python 2 is not supported.
  • IDEs
    • IntelliJ IDEA Community Edition
    • CLion
    • PyCharm Community
    • Eclipse IDE for Java Developers
    • Eclipse IDE for Python Developers
    • Visual Studio Code
      • C/C++ extension by Microsoft
      • Language Support for Java extension by Red Hat
      • Python extension by Microsoft
      • Vim by vscodevim (disabled by default)
    • Apache NetBeans IDE
    • Code::Blocks (Note: Code::Blocks will be installed, but it is unsupported: use at your own risk)
    • CodeLite (Note: this is not present at ICPC World Finals, but can be used as a replacement for Code::Blocks)
  • Debuggers
    • gdb
    • valgrind
    • ddd
  • Browsers
    • Firefox
    • Chromium

The exact versions of the packages above will be published closer to the contest; where applicable, they will be the latest versions available in the official Ubuntu 22.04 repositories.

Teams may ask for additional software to be installed by emailing us (at swerc@appc.team) no later than January 16th. We will consider all incoming requests, but we reserve the right to deny any request we do not consider reasonable.

An offline version of the documentation for all the supported languages will be installed on the team laptops. The documentation will be very similar to devdocs.io, with only the supported languages visible.

Compilation flags

The judging system will compile submissions with the following commands. In some cases, it may add extra flags to specify the path of the produced binary (e.g., -o ... for C/C++).

Each command exists as an alias on the team machines:

  • C (gcc)
    • Command: gcc -x c -Wall -Wextra -O2 -std=gnu11 -static -pipe "$@" -lm
    • Alias: mygcc
  • C++ (g++)
    • Command: g++ -x c++ -Wall -Wextra -O2 -std=gnu++20 -static -pipe "$@"
    • Alias: myg++
    • Note: unlike ICPC World Finals, the C++ standard version is 20, not 17.
  • Java (OpenJDK 17)
    • Compile: javac -encoding UTF-8 -sourcepath . -d . "$@" (alias: myjavac)
    • Run: taskset -c 0 java -Dfile.encoding=UTF-8 -XX:+UseSerialGC -Xss128m -Xms1856m -Xmx1856m "$@" (alias: myjava)
    • Note: 1856m is the task's memory limit (2 GB) minus 192 MB.
  • Kotlin (Kotlin 1.6.0)
    • Compile: kotlinc -d . "$@" (alias: mykotlinc)
    • Run: taskset -c 0 kotlin -Dfile.encoding=UTF-8 -J-XX:+UseSerialGC -J-Xss128m -J-Xms1856m -J-Xmx1856m "$@" (alias: mykotlin)
    • Note: 1856m is the task's memory limit (2 GB) minus 192 MB.
  • Python 3 (pypy3)
    • Command: pypy3 "$@"
    • Alias: mypython3
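For example, a C++ solution can be compiled and run locally with the myg++ alias, which applies the same flags as the judge; the file names solution.cpp and sample.in below are hypothetical:

    # Compile with the judge's flags; -o sets the output path, as the judge does itself.
    myg++ solution.cpp -o solution
    # Run against a sample input.
    ./solution < sample.in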

Judging hardware

Compilation and execution as described above will take place in a “sandbox” on dedicated judging machines. The judging machines will be as identical as possible to, and at least as powerful as, the machines used by teams. The sandbox will allocate 2 GB of memory; the entire program, including its runtime environment, must execute within this memory limit. For languages with a managed runtime (Java, Kotlin, and Python), the runtime environment includes the virtual machine or interpreter (that is, the JVM for Java/Kotlin and the PyPy interpreter for Python).

The sandbox memory allocation size will be the same for all languages and all contest problems. For Java and Kotlin, the above commands show the stack size and heap size settings which will be used when the program is run in the sandbox.

Judging software

The software configuration of the judge machines is based on Ubuntu 22.04 (64-bit) with exactly the same software versions as the team machines described above.

The contest control system that will be used is DOMjudge.

Submissions will be evaluated automatically unless something unexpected happens (system crash, error in a test case, etc.).

Verdicts are given in the following order:

  • Too-late: This verdict is given if the submission was made after the end of the contest. This verdict does not lead to a time penalty.
  • Compiler-error: This verdict is given if the contest control system failed to compile the submission. Warnings are not treated as errors. This verdict does not lead to a time penalty. Details of compilation errors will not be shown by the judging system. If your code compiles correctly in the team environment but receives a Compiler-error verdict from the judge, submit a clarification request to the judges.
  • The submission is then evaluated on several secret test cases in some fixed order. Each test case is independent, i.e., the time limit, memory limit, etc., apply to each individual test case. If the submission fails to correctly process a test case, evaluation stops, an error verdict is returned (see the next list), and a penalty of 20 minutes is added for the problem (penalties are only counted against the team if the problem is eventually solved). If a submission is rejected, no information will be provided about which test case(s) failed.
  • Correct: If the evaluation process completes and the submission has returned the correct answer, following all requirements, on every secret test case, then the submission is accepted. Note that this verdict may still be overridden manually by the judges.

The following errors can be raised on a submission. The verdict returned is the one for the first test case where the submission has failed. The verdicts are as follows, in order of priority:

  • Error verdicts where execution did not complete: the verdict returned will be that of the first error among:
    • Run-error: The submission failed to execute properly on a test case (segmentation fault, divide by zero, exceeding the memory limit, etc.). Details of the error are not shown.
    • Timelimit: The submission exceeded the time limit on one test case, which may indicate that your code went into an infinite loop or that the approach is not efficient enough.
  • Error verdict where execution completed but did not produce the correct answer: the verdict returned will be:
    • Wrong-answer: The submission executed properly on a test case, but it did not produce the correct answer. This could be too much output (a correct answer followed by extra output), a wrong answer, or no output at all. Further details are not specified.

Note that there is no "presentation-error" verdict: errors in the output format are treated as wrong answers. DOMjudge does allow a reasonable amount of extra whitespace, but we advise against relying on this in your solution.

Problem set

The problem set will be provided on paper (one copy per contestant), and also in PDF files on the judge system (allowing you to copy and paste the sample inputs and outputs). We may also provide an archive of the sample inputs and outputs to be used directly.

Making a submission

Once you have written code to solve a problem, you can submit it to the contest control system for evaluation. Each team will be automatically logged into the contest control system. You can submit using the web interface, by opening a web browser and using the provided links/bookmarks, or you can submit from the command line using the submit program. (If you use the submit program, make sure that the file you wish to submit has the correct name, e.g., a.cpp, because submit uses this to automatically determine which problem you are submitting for.)
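For example, a command-line submission could look as follows (a.cpp is the hypothetical file name from above; submit deduces the problem from it):

    # Submit a.cpp; the target problem is deduced from the file name.
    submit a.cpp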

Asking questions

If a contestant has an issue with the problem set (e.g., it is ambiguous or incorrect), they can ask the judges a question using the clarification request mechanism of DOMjudge. Usually, the judges will either decline to answer or issue a general clarification to all teams to clarify the meaning of the problem set or fix the error.

If a contestant has a technical issue with the team workstation (hardware malfunction, computer crash, etc.), they should ask a volunteer in their room for help.

Neither the judges nor the volunteers will answer requests for programming support, e.g., debugging your code or understanding a compiler error.

Printing

During the contest, teams will be able to request printouts, e.g., of their code. These printouts will be delivered by volunteers. Printouts can be requested within reason, i.e., as long as the requested quantities do not negatively impact contest operations.

You can print using the web interface or from the command line using the printout program. (Make sure that the file you wish to print has the correct extension, because printout uses this to determine automatically which syntax highlighting to apply.)
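For example, a command-line print request could look as follows (a.cpp is again a hypothetical file name; its .cpp extension selects C++ syntax highlighting):

    # Request a printout of a.cpp, highlighted as C++ code.
    printout a.cpp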
