python Captive Portal

Installation

Either clone the repository (and install the dependencies either through your distribution or into a virtualenv with ./setup-venv.sh), or install it as a package.

pipx (available in Debian as a package) can be used to install it in a separate virtual environment:

pipx install https://git-nks-public.tik.uni-stuttgart.de/net/python-capport

In production, put a reverse proxy in front of the local web ui (listening on 127.0.0.1:8000), and serve the /static path either from src/capport/api/static/ or from your customized version of the static files.

See the contrib directory for configuration of other software needed to set up a captive portal.

Customization

Create custom/templates and put customized templates (based on those in src/capport/api/templates) there.

Create i18n/<langcode> folders to put localized templates into (note that localized templates extending a base template must use the full i18n/.../basetmpl path).
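
One plausible layout is sketched below; the file names and the placement of the i18n folders below the custom template directory are assumptions based on the description above, not taken from the project:

custom/
    templates/
        portal.html            (overrides the template of the same name from src/capport/api/templates)
        i18n/
            de/
                portal.html    (localized variant; extends must reference the full i18n/de/... path)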

Requests with a setlang=<langcode> query parameter will set the language and try to store the choice in a session cookie.

Run

Run ./start-api.sh to start the web ui (listens on 127.0.0.1:8000 by default).

Run ./start-control.sh to start the "controller" ("enforcement") part; this needs to run as root (i.e. with CAP_NET_ADMIN in the current network namespace).

The controller expects this nft set to exist:

table inet captive_mark {
        set allowed {
                type ether_addr
                flags timeout
        }
}

Restarting the controller will push again all entries the set should contain, but won't clean up any other entries.
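
The controller itself programs this set via netlink (see the nft_*.py modules mentioned under Internals below). Purely as an illustration of what the set ends up containing, a manual shell-out doing the same for a single made-up client MAC could look like this:

import subprocess

# Illustration only: add one made-up client MAC to the "allowed" set with a
# timeout - the controller achieves the same effect through netlink.
subprocess.run(
    [
        "nft", "add", "element", "inet", "captive_mark", "allowed",
        "{ 02:00:00:00:00:01 timeout 30m }",
    ],
    check=True,
)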

Internals

Login/Logout

This is for an "open" network, i.e. no actual logins are required, just a "we accept the ToS" form.

Designed to work without cookies; CSRF protection is implemented by verifying the Origin header against the Host header (a missing Origin header is allowed), and by also requiring the client's MAC address (which an attacker on the same L2 could know, or guess from a non-temporary IPv6 address).
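
A minimal sketch of that Origin check (a hypothetical helper, not the actual code in src/capport/api):

from typing import Optional
from urllib.parse import urlsplit

def origin_matches_host(origin: Optional[str], host: str) -> bool:
    # A missing Origin header is accepted, as described above.
    if not origin:
        return True
    # Compare the host[:port] part of the Origin URL with the Host header.
    return urlsplit(origin).netloc == host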

HA

The list of "allowed" clients is stored in a "database"; each instance has the full database, and each time two instances connect to each other, they send their full database for sync (and all received updates are broadcast to all other peers, but only if they actually led to a change in the local database).

On each node there are two instances: a "controller" (also responsible for deploying the list to the kernel, aka "enforcement") and the webui (which also serves the RFC 8908 API).

The "controller" also stores updates to disk, and loads it on start.

This synchronization works because it shouldn't matter in which order "changes" to the database are merged (each change is itself just the new state of an entry from another database); see the merge method of the MacEntry class in src/capport/database.py.
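
A much simplified sketch of such an order-independent merge (the field names are made up; the real MacEntry carries more state):

import dataclasses

@dataclasses.dataclass
class EntryState:
    # Hypothetical stand-in for MacEntry: a MAC address, the timestamp of the
    # last change, and whether the client is currently allowed.
    mac: str
    last_change: float
    allowed: bool

    def merge(self, other: "EntryState") -> bool:
        # Last write wins: merging the same set of states in any order (and any
        # number of times) converges to the same result.
        if other.last_change > self.last_change:
            self.last_change = other.last_change
            self.allowed = other.allowed
            return True  # changed - only then is the update broadcast further
        return False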

Protocol

The controllers should form a full mesh (each connected to all other controllers), and the webui instances are connected to all controllers (but not to other webui instances). The controllers listen on the fixed TCP port 5000 for a custom "database sync" protocol.

This protocol is based on an anonymous TLS connection, which is then verified using a shared secret (not perfect yet; it would be better if Python simply supported SRP - https://bugs.python.org/issue11943).

Then both sides can send protobuf messages; each message is prefixed by its 4-byte length. The basic message is defined in protobuf/message.proto.
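
A sketch of that framing over a trio stream (the byte order of the length prefix is an assumption here; the authoritative definition is in the connection handling code):

import struct

def frame_message(serialized: bytes) -> bytes:
    # Prefix the serialized protobuf message with its length as 4 bytes
    # (big-endian assumed for this sketch).
    return struct.pack(">I", len(serialized)) + serialized

async def read_message(stream) -> bytes:
    # Read exactly one length-prefixed message from a trio ReceiveStream.
    async def read_exact(n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = await stream.receive_some(n - len(buf))
            if not chunk:
                raise EOFError("connection closed mid-message")
            buf += chunk
        return buf

    (length,) = struct.unpack(">I", await read_exact(4))
    return await read_exact(length)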

Web-UI

The ui needs to know the client's MAC address to add it to the database. Right now this means that the webui must run on a host connected to the clients' L2 to see them in the neighbor table (and client connections to the ui must use this L2 connection - the ui doesn't actively query for neighbors, it only looks at the neighbor cache).
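
The project's own lookup goes through netlink (ipneigh.py, see below); purely to illustrate the idea, the same information can be read from the kernel's neighbor cache with the ip tool:

import json
import subprocess
from typing import Optional

def lookup_mac(client_ip: str) -> Optional[str]:
    # Illustration only: ask the kernel's neighbor cache (ARP/NDP) for the MAC
    # address belonging to client_ip, as seen by this host.
    output = subprocess.run(
        ["ip", "-json", "neigh", "show"],
        check=True, capture_output=True, text=True,
    ).stdout
    for neigh in json.loads(output):
        if neigh.get("dst") == client_ip and "lladdr" in neigh:
            return neigh["lladdr"]
    return None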

async

This project uses the trio Python library for async IO.

Only the netlink handling (ipneigh.py, nft_*.py) uses blocking IO - but that should be OK, as we only make requests to the kernel, which should be answered immediately.

Disk-IO for writing the database to disk is done in a separate thread.
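
A sketch of how such a blocking write can be pushed off the trio event loop (a hypothetical helper, not the project's actual persistence code):

import trio

async def write_database_blob(path: str, blob: bytes) -> None:
    def _blocking_write() -> None:
        # Plain blocking file IO, executed in a worker thread.
        with open(path, "wb") as f:
            f.write(blob)

    # trio.to_thread.run_sync runs the blocking function in a separate thread,
    # so the event loop keeps serving other tasks in the meantime.
    await trio.to_thread.run_sync(_blocking_write)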