# Python Captive Portal
## Installation
Either clone the repository (and install dependencies either through your distribution or in a virtualenv via `./setup-venv.sh`), or install it as a package.

[`pipx`](https://pypa.github.io/pipx/) (available in Debian as a package) can be used to install it in a separate virtual environment:

    pipx install https://git-nks-public.tik.uni-stuttgart.de/net/python-capport
In production, put a reverse proxy in front of the local web UI (listening on 127.0.0.1:8000), and serve the `/static` path either from `src/capport/api/static/` or from your customized version of the static files.

See the `contrib` directory for configuration of other software needed to set up a captive portal.
## Customization
Create `custom/templates` and put customized templates (copied from `src/capport/api/templates`) there.

Create `i18n/<langcode>` folders to put localized templates into (localized templates that extend a base template must use the full `i18n/.../basetmpl` path, though).

Requests with a `setlang=<langcode>` query parameter will set the language and try to store the choice in a session cookie.
## Run
Run `./start-api.sh` to start the web UI (it listens on 127.0.0.1:8000 by default).

Run `./start-control.sh` to start the "controller" ("enforcement") part; this needs to be run as root (i.e. with `CAP_NET_ADMIN` in the current network namespace).

The controller expects this nft set to exist:
```
table inet captive_mark {
    set allowed {
        type ether_addr
        flags timeout
    }
}
```
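For manual setup or testing, the table and set can be created before starting the controller. A minimal sketch, assuming the `nft` command-line tool is installed and the script runs as root (the project itself manages nftables over netlink via its `nft_*.py` modules, not via this tool):

```
# Sketch only: create the expected table/set with the nft CLI before
# starting the controller. Requires root and the `nft` binary.
import subprocess

NFT_RULESET = """
table inet captive_mark {
    set allowed {
        type ether_addr
        flags timeout
    }
}
"""

def ensure_captive_set() -> None:
    # `nft -f -` reads a ruleset from stdin and applies it.
    subprocess.run(["nft", "-f", "-"], input=NFT_RULESET, text=True, check=True)

if __name__ == "__main__":
    ensure_captive_set()
```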
Restarting the controller will push all the entries the set should contain again, but it won't clean up other entries.
## Internals
### Login/Logout
This is for an "open" network, i.e. no actual logins are required, just a "we accept the ToS" form.

It is designed to work without cookies; CSRF protection is implemented by verifying the `Origin` header against the `Host` header (a missing `Origin` header is allowed), and by also requiring the client's MAC address (which an attacker on the same L2 could know, or guess from a non-temporary IPv6 address).
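A minimal sketch of that header check (function and variable names here are illustrative, not the project's actual API):

```
# Sketch of the Origin-vs-Host CSRF check described above; names are
# illustrative and do not match the project's actual code.
from typing import Optional
from urllib.parse import urlsplit

def origin_allowed(origin: Optional[str], host: str) -> bool:
    # A missing Origin header is accepted (see above).
    if origin is None:
        return True
    # Otherwise the Origin's host[:port] must match the Host header.
    return urlsplit(origin).netloc == host

# origin_allowed("https://portal.example.net", "portal.example.net")  -> True
# origin_allowed("https://attacker.example", "portal.example.net")    -> False
```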
### HA
The list of "allowed" clients is stored in a "database"; each instance holds the full database. Whenever two instances connect to each other they send each other their full database for synchronization, and all received updates are broadcast to all other peers, but only if they actually led to a change in the local database.

On each node there are two instances: one "controller" (also responsible for deploying the list to the kernel, aka "enforcement") and the webui (which also provides the RFC 8908 API).

The "controller" also stores updates to disk and loads the database on start.
This synchronization of the database works because it shouldn't matter in which order "changes" are merged (each change is just the new state of an entry from another database); see the `merge` method of the `MacEntry` class in `src/capport/database.py`.
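As an illustration of such an order-independent merge (a simplified stand-in, not the actual `MacEntry` implementation; the field names and tie-breaking rule are made up):

```
# Simplified stand-in for an order-independent merge; not the project's
# actual MacEntry code, just the general idea.
from dataclasses import dataclass

@dataclass
class EntryState:
    mac: str
    allowed: bool
    last_change: float  # timestamp of the state this entry represents

    def merge(self, other: "EntryState") -> bool:
        """Merge another node's state for the same MAC address.

        Returns True if the local state changed (only then would the
        update be broadcast to the other peers).
        """
        assert self.mac == other.mac
        # Keep whichever state is newer; ties are broken deterministically,
        # so merging the same set of states in any order converges.
        if (other.last_change, other.allowed) > (self.last_change, self.allowed):
            self.allowed = other.allowed
            self.last_change = other.last_change
            return True
        return False
```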
#### Protocol
The controllers should form a full mesh (each connected to all other controllers), and the webui instances connect to all controllers (but not to other webui instances).

The controllers listen on the fixed TCP port 5000 for a custom "database sync" protocol.
This protocol is based on an anonymous TLS connection, which then uses a shared secret to verify the connection (not perfect yet; it would be better if Python simply supported SRP, see https://bugs.python.org/issue11943).
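For illustration, one generic way to verify a shared secret over an otherwise unauthenticated channel is to exchange HMACs over connection-specific data; this is only a sketch of the idea, not the project's actual handshake:

```
# Generic illustration of proving knowledge of a shared secret over an
# unauthenticated channel; not the project's actual handshake.
import hashlib
import hmac

def make_proof(shared_secret: bytes, channel_binding: bytes, nonce: bytes) -> bytes:
    # Both sides compute an HMAC over data tied to this specific connection.
    return hmac.new(shared_secret, channel_binding + nonce, hashlib.sha256).digest()

def verify_proof(shared_secret: bytes, channel_binding: bytes, nonce: bytes,
                 proof: bytes) -> bool:
    expected = make_proof(shared_secret, channel_binding, nonce)
    return hmac.compare_digest(expected, proof)
```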
Then both sides can send protobuf messages; each message is prefixed by its 4-byte length. The basic message is defined in `protobuf/message.proto`.
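A sketch of that framing (the big-endian byte order is an assumption; see the actual connection code and `protobuf/message.proto` for the real details):

```
# Sketch of 4-byte length-prefixed framing; the byte order is assumed.
import struct
from typing import Tuple

def frame(payload: bytes) -> bytes:
    # Prefix a serialized protobuf message with its length.
    return struct.pack("!I", len(payload)) + payload

def unframe(buffer: bytes) -> Tuple[bytes, bytes]:
    # Split one complete message off the front of a receive buffer.
    (length,) = struct.unpack("!I", buffer[:4])
    return buffer[4:4 + length], buffer[4 + length:]
```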
### Web-UI
The UI needs to know the client's MAC address to add it to the database. Right now this means that the webui must run on a host connected to the clients' L2 segment so that it sees them in its neighbor table (and the client's connection to the UI must use that L2 link; the UI doesn't actively query for neighbors, it only looks at the neighbor cache).
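A simplified sketch of that lookup; the project reads the neighbor cache over netlink (`ipneigh.py`), while this version just shells out to `ip neigh show`:

```
# Simplified sketch: map a client IP to its MAC via the kernel neighbor
# cache, using the `ip` tool instead of netlink.
import subprocess
from typing import Optional

def lookup_mac(client_ip: str) -> Optional[str]:
    out = subprocess.run(["ip", "neigh", "show"],
                         capture_output=True, text=True, check=True).stdout
    # Lines look like: "192.0.2.10 dev eth0 lladdr 00:11:22:33:44:55 REACHABLE"
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == client_ip and "lladdr" in fields:
            return fields[fields.index("lladdr") + 1]
    return None  # not (or no longer) in the neighbor cache
```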
### async
This project uses the `trio` Python library for async IO.

Only the netlink handling (`ipneigh.py`, `nft_*.py`) uses blocking IO, but that should be fine, as we only make requests to the kernel, which should be answered immediately.
Disk IO for writing the database is done in a separate thread.
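A sketch of how blocking disk IO can be pushed to a worker thread with trio (illustrative only; the project's actual persistence code differs):

```
# Illustrative only: offload blocking file IO to a worker thread with trio.
import json
import trio

async def save_database(path: str, entries: dict) -> None:
    def write() -> None:
        # Plain blocking file IO, run off the trio event loop.
        with open(path, "w") as f:
            json.dump(entries, f)

    await trio.to_thread.run_sync(write)

# Example (hypothetical path and data):
# trio.run(save_database, "capport-state.json", {"00:11:22:33:44:55": 1700000000})
```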
|