Progcomp web is our programming challenge marker, succeeding an F# command-line marker. This version has a lovely interface and avoids some of that marker's problems, such as piping input/output limits and occasional deadlocks.
As is tradition, progcomp-web is usually worked on late into the night before each progcomp. Keegan made the initial version, which stored all data in program memory only, making scoring and rankings very awkward. Owen rewrote it before the next progcomp to use a proper DB, so the server can be restarted without losing data, and the data can be inspected directly.
Following that progcomp, it was further updated to allow multiple progcomps to be hosted on the website, so we'll get old ones on at some point. Each progcomp has a set of (typically five) problems, each with multiple test cases. The problems are described in one problems.pdf per progcomp (this should really be markdown on each problem's page, but Keegan likes writing LaTeX). While data is stored in the DB, these PDFs and the test cases are stored as plain files (we could outsource file hosting to a CDN or similar to make containerizing easier).
Running the website is mostly just using a set of scripts, as SQLAlchemy makes interacting with the DB very easy.
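To give a feel for why SQLAlchemy keeps the scripts short, here is a minimal self-contained sketch of the kind of DB access involved. The `Progcomp` model and its columns are made up for illustration, not the real schema:

```python
# Hypothetical sketch of script-style DB access; the model name and
# columns are assumptions, not progcomp-web's actual schema.
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Progcomp(Base):
    __tablename__ = "progcomps"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

# In-memory DB just for this demo; the real app points at its own DB file.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Progcomp(name="progcomp-2024"))
    session.commit()
    # A script can fetch and tweak rows in a couple of lines.
    pc = session.execute(
        select(Progcomp).where(Progcomp.name == "progcomp-2024")
    ).scalar_one()
    print(pc.name)
```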
Progcomp web is currently hosted in an LXC container: access it from Localtoast at keegan@progcomps.internal.uwcs.co.uk.
pip install pipenv
pipenv install
pipenv run python progcomp/scripts/initialize.py
pipenv run gunicorn progcomp:app -b 0.0.0.0:8192
The website will run at localhost:8192.
Once the server is running, you control it by running the scripts in /progcomp/scripts from a separate terminal. First run pipenv shell to avoid prefixing each command with pipenv run.
pipenv shell
python progcomp/scripts/create_progcomp.py <progcomp_name>
export SCRIPT_PROGCOMP="<progcomp_name>" # Future scripts reference this env var
python progcomp/scripts/set_pg_start_time.py "in 5 mins" # Time parsing is flexible...
python progcomp/scripts/set_pg_start_time.py "8pm"       # ...relative or absolute forms both work
/problems/<progcomp_name>
- One directory per problem.
- Within each problem directory, an `input` and an `output` directory, each containing a text file per test set.
- The problems PDF at the root of the progcomp directory.
problems
├───<progcomp_name>
│ │ problems.pdf
│ ├───<problem_one>
│ │ ├───input
│ │ │ <0-test>.txt
│ │ │ <1-test>.txt
│ │ │ ...
│ │ └───output
│ │ <0-test>.txt
│ │ <1-test>.txt
│ │ ...
│ ├───<problem_two>
│ │ ├───input
│ . │ ...
│ . └───output
│ . ...
│
├───<other_progcomp>
│ ...
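A sketch of how a script might walk the layout above to discover problems and their test sets. The function name and return shape are assumptions, not the real update_pg_problems.py internals:

```python
# Illustrative walk of the problems directory layout; not the real code.
import tempfile
from pathlib import Path

def discover_problems(problems_root, progcomp_name):
    """Map each problem directory to its sorted list of test-set names."""
    root = Path(problems_root) / progcomp_name
    problems = {}
    for problem_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        # Test sets are named by the files in the problem's input directory.
        tests = sorted(f.stem for f in (problem_dir / "input").glob("*.txt"))
        problems[problem_dir.name] = tests
    return problems

# Demo against a throwaway tree matching the layout above:
demo = Path(tempfile.mkdtemp())
for prob in ("alpha", "beta"):
    (demo / "mycomp" / prob / "input").mkdir(parents=True)
    (demo / "mycomp" / prob / "output").mkdir()
    (demo / "mycomp" / prob / "input" / "0-test.txt").write_text("1 2\n")
    (demo / "mycomp" / prob / "output" / "0-test.txt").write_text("3\n")

print(discover_problems(demo, "mycomp"))  # {'alpha': ['0-test'], 'beta': ['0-test']}
```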
python progcomp/scripts/toggle_pg_leaderboard.py # Toggle whether the leaderboard is shown
python progcomp/scripts/update_pg_problems.py # Read problems in from the directory
python progcomp/scripts/get_problems.py # List the problems that were read
# Set visibility for each problem - can reveal one-by-one during competition
python progcomp/scripts/set_problem_visibility.py <problem_one> <open|closed|hidden>
python progcomp/scripts/set_problem_visibility.py <problem_two> <open|closed|hidden>
...
# Set visibility for progcomp once ready
python progcomp/scripts/set_pg_visibility.py <open|closed|hidden>
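The three visibility states taken by the commands above could plausibly be modelled as an enum like the following. The state names come from the scripts; the comments on what each state means are my assumptions:

```python
# Plausible enum for the open/closed/hidden states the visibility
# scripts accept; the descriptions are guesses, not confirmed behaviour.
from enum import Enum

class Visibility(Enum):
    OPEN = "open"      # shown and (presumably) accepting submissions
    CLOSED = "closed"  # shown, but no longer accepting submissions
    HIDDEN = "hidden"  # not shown on the site at all

# The string arguments on the command line map straight onto the enum:
print(Visibility("open"))
```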
python progcomp/scripts/swap_pg_pdf.py <new_pdf_path>
python progcomp/scripts/get_problems.py # List all problems from all progcomps
python progcomp/scripts/get_scores.py <problem> <test> # Get scores for a specific problem and test set
python progcomp/scripts/get_scores_overall.py # Get overall leaderboard
python progcomp/scripts/get_submissions.py # List all submissions
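Underlying the scoring scripts, marking a test presumably boils down to comparing a submission's output with the stored expected-output file. A whitespace-tolerant sketch of such a check, purely for illustration:

```python
# Illustrative only: compare submitted output with the expected output
# file, ignoring trailing whitespace. The real marker may be stricter.
def outputs_match(submitted: str, expected: str) -> bool:
    normalise = lambda s: [line.rstrip() for line in s.strip().splitlines()]
    return normalise(submitted) == normalise(expected)

print(outputs_match("3 \n", "3"))  # True: trailing whitespace is forgiven
```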