# Usage

## Tauri viewer (local)

If `trunk` or `cargo-tauri` is missing, enter the nix shell (or install the tools manually). `direnv allow` also sets `EVERY_CHANNEL_ROOT` so Tauri can find the UI folder.

```sh
direnv allow
cd apps/tauri
cargo tauri dev
```

To run the desktop app directly from Cargo against the bundled frontend instead of the dev server:

```sh
EVERY_CHANNEL_ROOT=$PWD cargo run -p ec-tauri --features custom-protocol
```

If you want deterministic transcoding instead of stream copy:

```sh
EVERY_CHANNEL_TRANSCODE=1 cargo tauri dev
```

For node ingest/MoQ publish, you can force deterministic transcode with:

```sh
EVERY_CHANNEL_DETERMINISTIC=1 cargo run -p ec-node -- ingest hdhr --channel 8.1
```

iroh discovery is opt-in (DNS discovery is off by default). Enable DHT and/or mDNS like this:

```sh
EVERY_CHANNEL_IROH_DISCOVERY=dht,mdns cargo tauri dev
```

In the Tauri app, use **Add stream** to add an HDHomeRun host, a direct HLS URL, or a yt-dlp-supported URL (e.g. YouTube Live). The flow rejects non-live sources.

`https://www.nbc.com/watch/...` URLs are also supported in the Tauri app.
This path is browser-backed:

- on macOS, the app first opens an in-app Tauri webview backed by `WKWebView`
- NBC / Adobe Pass authentication stays in that native app window, including popup sign-in flows
- if native playback cannot become ready, the app falls back to the existing external Chrome path
- once playback is live, the app captures rendered video frames and feeds them into the existing ffmpeg ladder

Notes:

- the first run may require you to finish your MVPD login in the native app window or, if native playback falls back, in the launched Chrome window
- on macOS, the default native webview data directory is app-local; override it with `EVERY_CHANNEL_NBC_WEBVIEW_DATA_DIR=/path/to/webview-data`
- for future unattended runs with a warm session, set `EVERY_CHANNEL_NBC_HIDE_WINDOWS=1` to keep the native NBC webviews hidden; if interactive auth is needed, the app surfaces the window instead of silently hanging
- the desktop app also exposes `bootstrap_nbc_auth`; in the Add menu, use `Bootstrap selected NBC` or `Bootstrap pasted NBC URL` to warm the hidden session before later playback runs
- the fallback Chrome profile directory is app-local; override it with `EVERY_CHANNEL_NBC_PROFILE_DIR=/path/to/profile`
- override the Chrome binary with `EVERY_CHANNEL_NBC_CHROME_PATH=/path/to/chrome`
- when `EVERY_CHANNEL_NBC_HIDE_WINDOWS=1` is set, the app refuses visible Chrome fallback if the native path fails
- the app also pulls NBC's public live guide before auth so browseable NBC channel rows can appear in the Channels list; override that guide shaping with `EVERY_CHANNEL_NBC_PUBLIC_TIMEZONE`, `EVERY_CHANNEL_NBC_PUBLIC_NBC_AFFILIATE`, `EVERY_CHANNEL_NBC_PUBLIC_TELEMUNDO_AFFILIATE`, and `EVERY_CHANNEL_NBC_PUBLIC_BROADCAST_TYPE`
- capture is currently video-first; audio is not guaranteed in the first cut
- adjust the startup timeout and capture rate with `EVERY_CHANNEL_NBC_CAPTURE_TIMEOUT_SECS`, `EVERY_CHANNEL_NBC_CAPTURE_FPS`, and `EVERY_CHANNEL_NBC_CAPTURE_QUALITY`

On Linux / forge hosts, the equivalent worker path lives in `ec-node`:

- warm auth with `ec-node nbc-bootstrap --source-url 'https://www.nbc.com/live?brand=nbc-sports-philadelphia'`
- publish with `ec-node nbc-wt-publish --url https://cdn.moq.dev/anon --name forge-nbc-sports-philly --source-url 'https://www.nbc.com/live?brand=nbc-sports-philadelphia'`
- for unattended hosts, persist the Chrome profile with `EVERY_CHANNEL_NBC_PROFILE_DIR=/path/to/profile`
- to automate a Verizon popup on Linux / forge, pass MVPD credentials via env vars or file paths: `EVERY_CHANNEL_NBC_MVPD_USERNAME`, `EVERY_CHANNEL_NBC_MVPD_PASSWORD`, `EVERY_CHANNEL_NBC_MVPD_USERNAME_FILE`, `EVERY_CHANNEL_NBC_MVPD_PASSWORD_FILE`
- the NixOS module can point the Linux worker at root-managed credential files with `services.every-channel.ec-node.nbc.mvpdUsernameFile` and `services.every-channel.ec-node.nbc.mvpdPasswordFile`
- for forge-style isolation, the NixOS module can keep only the NBC publisher inside a rootless user+network namespace backed by `slirp4netns` with `services.every-channel.ec-node.nbc.isolateWithUserNetns = true`
- pair that with `services.every-channel.ec-node.nbc.requireMullvad = true` to block worker startup until the host Mullvad daemon is connected; optionally pin a region/country family with `services.every-channel.ec-node.nbc.mullvadLocation = "USA"`
- the NixOS module exposes `services.every-channel.ec-node.nbc.*` for a persistent Xvfb display plus an optional local-only VNC bridge so MVPD auth can be completed only when the session is cold
- on Linux virtual displays, the worker disables Chrome GPU acceleration by default; only set `EVERY_CHANNEL_NBC_ENABLE_GPU=1` if the host has a real GL-capable display path
- the forge path is also currently video-first; audio is still a follow-up item

Linux DVB sources can be added with a URL like:

```
linux-dvb://localhost?adapter=0&dvr=0&tune=dvbv5-zap&tune=-r&tune=Channel%20Name
```

On Linux, if `/dev/dvb` exists and a `channels.conf` is found, Linux DVB channels are auto-listed in the Channels panel. You can override the `channels.conf` path with `EVERY_CHANNEL_DVB_CHANNELS_CONF=/path/to/channels.conf`.

In the UI, you can still type `linux-dvb` in the Add stream field to open the Linux DVB picker (adapter, channels.conf, channel). Select a channel and click **Share** to start a MoQ publisher. The share bundle (endpoint addr, broadcast, track) appears under the viewer panel and can be pasted into **Manual MoQ connect** on another node.

For gossip announcements, you can provide peers in the UI or set `EVERY_CHANNEL_GOSSIP_PEERS` (comma-separated). mDNS peer discovery is used on LANs to supplement the peer list when available.

## Coverage

See `docs/COVERAGE.md`.

## HDHomeRun E2E Test (Local Network)

This runs two local `ec-node` processes (publish then subscribe) against a real HDHomeRun source and validates that chunks are encrypted, manifests are required, and the subscriber produces a playable HLS output directory. Requires Nix (so `ac-ffmpeg` finds FFmpeg headers):

```sh
./scripts/e2e-hdhr.sh --host <host> --channel <channel>
```

## HDHomeRun + Observation Chain E2E Test

This runs a local Anvil chain, deploys the observation registry/ledger, publishes one HDHomeRun manifest epoch, and verifies that the manifest-derived observation finalizes on-chain. Requires Nix, Foundry, and a reachable local HDHomeRun:

```sh
./scripts/e2e-hdhr-blockchain.sh --host <host> --channel <channel>
```

## Local HDHomeRun Publisher Against Remote Observation Chain

The remote OP Stack RPC on `ecp-forge` is intentionally local-only.
From the local publisher box, tunnel it first:

```sh
ssh -N -L 9545:127.0.0.1:28545 root@git.every.channel
```

Then run a local HDHomeRun publisher with observation submission enabled:

```sh
cargo run -p ec-node -- moq-publish \
  --publish-manifests \
  --epoch-chunks 1 \
  --broadcast-name local-hdhr-8-1 \
  --observation-rpc-url http://127.0.0.1:9545 \
  --observation-ledger <ledger-address> \
  --observation-private-key-file /path/to/witness.key \
  hdhr --host <host> --channel <channel>
```

Environment fallbacks are also supported:

- `EVERY_CHANNEL_OBSERVATION_RPC_URL`
- `EVERY_CHANNEL_OBSERVATION_LEDGER`
- `EVERY_CHANNEL_OBSERVATION_PRIVATE_KEY`
- `EVERY_CHANNEL_OBSERVATION_PRIVATE_KEY_FILE`
- `EVERY_CHANNEL_OBSERVATION_PARENT_HASH`

## Mesh E2E Test (Split Sources)

This runs two publishers over the same broadcast:

- peer A publishes **manifests only** (`--publish-chunks=false`)
- peer B publishes **objects only**

The subscriber fetches objects from peer B and manifests from peer A using `--remote-manifests`.

```sh
./scripts/e2e-mesh-split.sh --host <host> --channel <channel>
```

### yt-dlp bundling (YouTube Live URLs)

To enable the “Add stream (yt-dlp)” UI flow, bundle the Python runtime + yt-dlp into the app resources:

```sh
scripts/vendor-yt-dlp.sh
```

The app will use the bundled runtime under `apps/tauri/resources/yt-dlp/<platform>/venv`. You can override it with `EVERY_CHANNEL_YTDLP_PYTHON`.
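For example, the `EVERY_CHANNEL_YTDLP_PYTHON` override can point the app at any Python interpreter that has yt-dlp importable; the interpreter path below is illustrative, not a project default:

```sh
# Use a system Python with yt-dlp instead of the bundled venv
# (/usr/bin/python3 is an example path; substitute your own interpreter).
EVERY_CHANNEL_YTDLP_PYTHON=/usr/bin/python3 cargo tauri dev
```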
## Node ingest (MoQ file relay)

### HDHomeRun

```sh
cargo run -p ec-node -- ingest hdhr --channel 8.1
```

Enable deterministic transcode:

```sh
cargo run -p ec-node -- ingest hdhr --channel 8.1 --deterministic
```

Use a specific device:

```sh
cargo run -p ec-node -- ingest hdhr --device-id <device-id> --channel 8.1
```

### Linux DVB

```sh
cargo run -p ec-node -- ingest linux-dvb --adapter 0 --dvr 0 --tune-cmd dvbv5-zap --tune-cmd -r --tune-cmd "Channel Name"
```

### HLS playlist

```sh
cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8
```

Use deterministic transcode if needed:

```sh
cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8 --mode transcode
```

### Raw TS file or URL

```sh
cargo run -p ec-node -- ingest ts --input /path/to/stream.ts
```

## Time sync inspection

```sh
cargo run -p ec-cli -- ts-sync /path/to/stream.ts --chunk-ms 2000 --max-events 50
```

## MoQ publish over iroh

```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1
```

Publish with deterministic transcode:

```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 --deterministic
```

Use a specific device:

```sh
cargo run -p ec-node -- moq-publish hdhr --device-id <device-id> --channel 8.1
```

Publish an HLS source:

```sh
cargo run -p ec-node -- moq-publish hls --url https://example.com/live.m3u8
```

Set a stable identity:

```sh
IROH_SECRET=<hex-secret> cargo run -p ec-node -- moq-publish ts --input /path/to/stream.ts
```

Enable discovery (DHT + mDNS):

```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --discovery dht,mdns
```

Announce to catalog gossip:

```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --announce --gossip-peer <peer-addr>
```

Publish per-chunk manifests alongside chunks:

```sh
EVERY_CHANNEL_MANIFEST_SIGNING_KEY=<hex-key> cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --publish-manifests --announce --gossip-peer <peer-addr>
```

Batch multiple chunks per manifest (epoch):

```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --publish-manifests \
  --epoch-chunks 8
```

## MoQ subscribe (HLS output)

```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <node-addr> \
  --broadcast-name <name>
```

Enable discovery (DHT + mDNS):

```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <node-addr> \
  --broadcast-name <name> \
  --discovery dht,mdns
```

Segments and `index.m3u8` will be written to `./tmp/moq-hls` by default.

Subscribe to manifests and require them for validation:

```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <node-addr> \
  --broadcast-name <name> \
  --subscribe-manifests --require-manifest
```

Restrict to a manifest signer:

```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <node-addr> \
  --broadcast-name <name> \
  --subscribe-manifests --require-manifest \
  --manifest-signers ed25519:<pubkey>
```

Throttle incoming data:

```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <node-addr> \
  --broadcast-name <name> \
  --max-bytes-per-sec 5000000 --max-bytes-burst 10000000
```

If the stream is encrypted, pass the same network secret used by the publisher:

```sh
EVERY_CHANNEL_NETWORK_SECRET=<secret> cargo run -p ec-node -- moq-subscribe \
  --remote <node-addr> \
  --broadcast-name <name>
```

## MoQ self-test (local round trip)

```sh
cargo run -p ec-node -- moq-selftest /path/to/stream.ts
```

Pass a URL (for HDHomeRun):

```sh
cargo run -p ec-node -- moq-selftest http://<hdhr-host>/auto/v8.1 --max-chunks 8
```

## Determinism test (libx264 single-thread)

```sh
cargo run -p ec-cli -- determinism-test /path/to/stream.ts ./tmp/determinism --runs 3
```

## Tests and coverage

Run the core unit tests:

```sh
just test-core
```

Generate a coverage report (requires `nix develop` so ffmpeg headers are visible to `ac-ffmpeg`):

```sh
just cov-core-html
```
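The MoQ subscribe flow above writes segments and `index.m3u8` to `./tmp/moq-hls` by default. A minimal smoke check of that output directory (a convenience sketch, not part of the project tooling; valid HLS playlists start with the `#EXTM3U` header):

```sh
# Check that the subscriber's default output contains a plausible HLS playlist.
out=./tmp/moq-hls
if head -n 1 "$out/index.m3u8" 2>/dev/null | grep -q '^#EXTM3U'; then
  echo "playlist looks valid"
else
  echo "no playlist yet (is moq-subscribe running?)"
fi
```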