every.channel/docs/USAGE.md

Usage

Tauri viewer (local)

If trunk or cargo-tauri is missing, enter the nix shell (or install the tools manually). direnv allow also sets EVERY_CHANNEL_ROOT so Tauri can find the UI folder.

direnv allow
cd apps/tauri
cargo tauri dev

If you want to run the desktop app directly from Cargo against the bundled frontend instead of the dev server, run:

EVERY_CHANNEL_ROOT=$PWD cargo run -p ec-tauri --features custom-protocol

If you want deterministic transcoding instead of stream copy:

EVERY_CHANNEL_TRANSCODE=1 cargo tauri dev

For node ingest/MoQ publish, you can force deterministic transcode with:

EVERY_CHANNEL_DETERMINISTIC=1 cargo run -p ec-node -- ingest hdhr --channel 8.1

iroh discovery is opt-in (DNS discovery is off by default). Enable DHT and/or mDNS like this:

EVERY_CHANNEL_IROH_DISCOVERY=dht,mdns cargo tauri dev

In the Tauri app, use Add stream to add an HDHomeRun host, a direct HLS URL, or a yt-dlp-supported URL (e.g. YouTube Live). The flow rejects non-live sources.

https://www.nbc.com/watch/... URLs are also supported in the Tauri app. This path is browser-backed:

  • on macOS, the app first opens an in-app Tauri webview backed by WKWebView
  • NBC / Adobe Pass authentication stays in that native app window, including popup sign-in flows
  • if native playback cannot become ready, the app falls back to the existing external Chrome path
  • once playback is live, the app captures rendered video frames and feeds them into the existing ffmpeg ladder

Notes:

  • the first run may require you to finish your MVPD login in the native app window or, if native playback falls back, in the launched Chrome window
  • on macOS, the default native webview data directory is app-local; override it with EVERY_CHANNEL_NBC_WEBVIEW_DATA_DIR=/path/to/webview-data
  • for future unattended runs with a warm session, set EVERY_CHANNEL_NBC_HIDE_WINDOWS=1 to keep the native NBC webviews hidden; if interactive auth is needed, the app will surface the window instead of silently hanging
  • the desktop app also exposes bootstrap_nbc_auth; in the Add menu, use Bootstrap selected NBC or Bootstrap pasted NBC URL to warm the hidden session before later playback runs
  • the fallback Chrome profile directory is app-local; override it with EVERY_CHANNEL_NBC_PROFILE_DIR=/path/to/profile
  • override the Chrome binary with EVERY_CHANNEL_NBC_CHROME_PATH=/path/to/chrome
  • when EVERY_CHANNEL_NBC_HIDE_WINDOWS=1 is set, the app refuses visible Chrome fallback if the native path fails
  • the app also pulls NBC's public live guide before auth so browseable NBC channel rows can appear in the Channels list; override that guide shaping with EVERY_CHANNEL_NBC_PUBLIC_TIMEZONE, EVERY_CHANNEL_NBC_PUBLIC_NBC_AFFILIATE, EVERY_CHANNEL_NBC_PUBLIC_TELEMUNDO_AFFILIATE, and EVERY_CHANNEL_NBC_PUBLIC_BROADCAST_TYPE
  • capture is currently video-first; audio is not guaranteed in the first cut
  • adjust startup timeout / capture rate with EVERY_CHANNEL_NBC_CAPTURE_TIMEOUT_SECS, EVERY_CHANNEL_NBC_CAPTURE_FPS, and EVERY_CHANNEL_NBC_CAPTURE_QUALITY
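Putting the notes together, an unattended capture run might be configured like this. This is a hedged sketch: only the variable names come from the notes above; every value is illustrative, not a documented default.

```shell
# Hedged sketch: variable names are from the docs; values are illustrative.
export EVERY_CHANNEL_NBC_HIDE_WINDOWS=1                                   # keep NBC webviews hidden
export EVERY_CHANNEL_NBC_WEBVIEW_DATA_DIR="$HOME/.every-channel/nbc-webview"  # warm session storage
export EVERY_CHANNEL_NBC_CAPTURE_TIMEOUT_SECS=120                         # startup timeout
export EVERY_CHANNEL_NBC_CAPTURE_FPS=30                                   # capture rate
echo "timeout=${EVERY_CHANNEL_NBC_CAPTURE_TIMEOUT_SECS}s fps=${EVERY_CHANNEL_NBC_CAPTURE_FPS}"
```

With these exported, start the app as usual with cargo tauri dev.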

On Linux / forge hosts, the equivalent worker path lives in ec-node:

  • warm auth with ec-node nbc-bootstrap --source-url 'https://www.nbc.com/live?brand=nbc-sports-philadelphia'
  • publish with ec-node nbc-wt-publish --url https://cdn.moq.dev/anon --name forge-nbc-sports-philly --source-url 'https://www.nbc.com/live?brand=nbc-sports-philadelphia'
  • for unattended hosts, persist the Chrome profile with EVERY_CHANNEL_NBC_PROFILE_DIR=/path/to/profile
  • to automate a Verizon popup on Linux / forge, pass MVPD credentials via env or file paths: EVERY_CHANNEL_NBC_MVPD_USERNAME, EVERY_CHANNEL_NBC_MVPD_PASSWORD, EVERY_CHANNEL_NBC_MVPD_USERNAME_FILE, EVERY_CHANNEL_NBC_MVPD_PASSWORD_FILE
  • the NixOS module can point the Linux worker at root-managed credential files with services.every-channel.ec-node.nbc.mvpdUsernameFile and services.every-channel.ec-node.nbc.mvpdPasswordFile
  • for forge-style isolation, the NixOS module can keep only the NBC publisher inside a rootless user+network namespace backed by slirp4netns with services.every-channel.ec-node.nbc.isolateWithUserNetns = true
  • pair that with services.every-channel.ec-node.nbc.requireMullvad = true to block worker startup until the host Mullvad daemon is connected; optionally pin a region/country family with services.every-channel.ec-node.nbc.mullvadLocation = "USA"
  • the NixOS module exposes services.every-channel.ec-node.nbc.* for a persistent Xvfb display plus an optional local-only VNC bridge so MVPD auth can be completed only when the session is cold
  • on Linux virtual displays, the worker disables Chrome GPU acceleration by default; only set EVERY_CHANNEL_NBC_ENABLE_GPU=1 if the host has a real GL-capable display path
  • the forge path is also currently video-first; audio is still a follow-up item
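For the file-based MVPD credential path, a hedged sketch of the setup follows. The directory and file names are illustrative; only the EVERY_CHANNEL_NBC_MVPD_*_FILE variable names come from the list above.

```shell
# Hedged sketch: MVPD credentials via files rather than plain env vars.
# Directory/file names are illustrative; only the env var names are documented.
cred_dir=$(mktemp -d)
printf '%s' 'user@example.com' > "$cred_dir/mvpd-user"
printf '%s' 'example-password' > "$cred_dir/mvpd-pass"
chmod 600 "$cred_dir/mvpd-user" "$cred_dir/mvpd-pass"   # keep secrets owner-only
export EVERY_CHANNEL_NBC_MVPD_USERNAME_FILE="$cred_dir/mvpd-user"
export EVERY_CHANNEL_NBC_MVPD_PASSWORD_FILE="$cred_dir/mvpd-pass"
```

File-based credentials keep secrets out of process environments and shell history, which is why the NixOS module options wrap this same mechanism.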

Linux DVB sources can be added with a URL like:

linux-dvb://localhost?adapter=0&dvr=0&tune=dvbv5-zap&tune=-r&tune=Channel%20Name
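Channel names with spaces must be percent-encoded in the tune parameter, as in the URL above. A minimal sketch for spaces only (other reserved characters would need a full URL encoder):

```shell
# Hedged helper: percent-encode spaces in the channel name, then build the URL.
channel='Channel Name'
encoded=$(printf '%s' "$channel" | sed 's/ /%20/g')   # handles spaces only
url="linux-dvb://localhost?adapter=0&dvr=0&tune=dvbv5-zap&tune=-r&tune=$encoded"
echo "$url"
```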

On Linux, if /dev/dvb exists and a channels.conf is found, Linux DVB channels are auto-listed in the Channels panel. You can override the channels.conf path with EVERY_CHANNEL_DVB_CHANNELS_CONF=/path/to/channels.conf.

In the UI, you can still type linux-dvb in the Add stream field to open the Linux DVB picker (adapter, channels.conf, channel).

Select a channel and click Share to start a MoQ publisher. The share bundle (endpoint addr, broadcast, track) appears under the viewer panel and can be pasted into Manual MoQ connect on another node.

For gossip announcements, you can provide peers in the UI or set EVERY_CHANNEL_GOSSIP_PEERS (comma-separated). mDNS peer discovery is used on LANs to supplement the peer list when available.
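A hedged sketch of the comma-separated format (the peer addresses below are placeholders, not real endpoints):

```shell
# Hedged sketch: EVERY_CHANNEL_GOSSIP_PEERS is a comma-separated peer list.
# The addresses are placeholders for illustration.
export EVERY_CHANNEL_GOSSIP_PEERS='peer-a.example:4433,peer-b.example:4433'
# Demonstrate the split on the first comma:
IFS=',' read -r first_peer rest <<EOF
$EVERY_CHANNEL_GOSSIP_PEERS
EOF
echo "$first_peer"
```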

Coverage

See docs/COVERAGE.md.

HDHomeRun E2E Test (Local Network)

This runs two local ec-node processes (publish then subscribe) against a real HDHomeRun source and validates that chunks are encrypted, manifests are required, and the subscriber produces a playable HLS output directory.

Requires Nix (so ac-ffmpeg finds FFmpeg headers):

./scripts/e2e-hdhr.sh --host <HDHR_HOST> --channel <CHANNEL>

HDHomeRun + Observation Chain E2E Test

This runs a local Anvil chain, deploys the observation registry/ledger, publishes one HDHomeRun manifest epoch, and verifies that the manifest-derived observation finalizes on-chain.

Requires Nix, Foundry, and a reachable local HDHomeRun:

./scripts/e2e-hdhr-blockchain.sh --host <HDHR_HOST> --channel <CHANNEL>

Local HDHomeRun Publisher Against Remote Observation Chain

The remote OP Stack RPC on ecp-forge is intentionally local-only. From the local publisher box, tunnel it first:

ssh -N -L 9545:127.0.0.1:28545 root@git.every.channel

Then run a local HDHomeRun publisher with observation submission enabled:

cargo run -p ec-node -- moq-publish \
  --publish-manifests \
  --epoch-chunks 1 \
  --broadcast-name local-hdhr-8-1 \
  --observation-rpc-url http://127.0.0.1:9545 \
  --observation-ledger <OBSERVATION_LEDGER_ADDRESS> \
  --observation-private-key-file /path/to/witness.key \
  hdhr --host <HDHR_HOST> --channel <CHANNEL>

Environment fallbacks are also supported:

  • EVERY_CHANNEL_OBSERVATION_RPC_URL
  • EVERY_CHANNEL_OBSERVATION_LEDGER
  • EVERY_CHANNEL_OBSERVATION_PRIVATE_KEY
  • EVERY_CHANNEL_OBSERVATION_PRIVATE_KEY_FILE
  • EVERY_CHANNEL_OBSERVATION_PARENT_HASH
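A hedged sketch of driving the same observation settings through the env fallbacks instead of flags (the ledger address is a placeholder; the key path mirrors the flag example):

```shell
# Hedged sketch: env fallbacks for the observation flags above.
# The ledger address is a placeholder; substitute your deployed address.
export EVERY_CHANNEL_OBSERVATION_RPC_URL=http://127.0.0.1:9545
export EVERY_CHANNEL_OBSERVATION_LEDGER=0x0000000000000000000000000000000000000000
export EVERY_CHANNEL_OBSERVATION_PRIVATE_KEY_FILE=/path/to/witness.key
```

With these exported, the moq-publish invocation can drop the corresponding --observation-* flags.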

Mesh E2E Test (Split Sources)

This runs two publishers over the same broadcast:

  • peer A publishes manifests only (--publish-chunks=false)
  • peer B publishes objects only

The subscriber fetches objects from peer B and manifests from peer A using --remote-manifests.

./scripts/e2e-mesh-split.sh --host <HDHR_HOST> --channel <CHANNEL>

yt-dlp bundling (YouTube Live URLs)

To enable the "Add stream (yt-dlp)" UI flow, bundle the Python runtime + yt-dlp into the app resources:

scripts/vendor-yt-dlp.sh

The app will use the bundled runtime under apps/tauri/resources/yt-dlp/<platform>/venv. You can override with EVERY_CHANNEL_YTDLP_PYTHON.
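For example, to point the app at a system interpreter instead of the vendored one (a hedged sketch; assumes a python3 or python binary is on PATH):

```shell
# Hedged sketch: override the vendored yt-dlp runtime with a system Python.
export EVERY_CHANNEL_YTDLP_PYTHON="$(command -v python3 || command -v python)"
echo "$EVERY_CHANNEL_YTDLP_PYTHON"
```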

Node ingest (MoQ file relay)

HDHomeRun

cargo run -p ec-node -- ingest hdhr --channel 8.1

Enable deterministic transcode:

cargo run -p ec-node -- ingest hdhr --channel 8.1 --deterministic

Use a specific device:

cargo run -p ec-node -- ingest hdhr --device-id <DEVICE_ID> --channel 8.1

Linux DVB

cargo run -p ec-node -- ingest linux-dvb --adapter 0 --dvr 0 --tune-cmd dvbv5-zap --tune-cmd -r --tune-cmd "Channel Name"

HLS playlist

cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8

Use deterministic transcode if needed:

cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8 --mode transcode

Raw TS file or URL

cargo run -p ec-node -- ingest ts --input /path/to/stream.ts

Time sync inspection

cargo run -p ec-cli -- ts-sync /path/to/stream.ts --chunk-ms 2000 --max-events 50

MoQ publish over iroh

cargo run -p ec-node -- moq-publish hdhr --channel 8.1

Publish with deterministic transcode:

cargo run -p ec-node -- moq-publish hdhr --channel 8.1 --deterministic

Use a specific device:

cargo run -p ec-node -- moq-publish hdhr --device-id <DEVICE_ID> --channel 8.1

Publish an HLS source:

cargo run -p ec-node -- moq-publish hls --url https://example.com/live.m3u8

Set a stable identity:

IROH_SECRET=<hex> cargo run -p ec-node -- moq-publish ts --input /path/to/stream.ts
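One hedged way to mint such a secret (assumes openssl is available; any fixed 64-hex-char value behaves the same):

```shell
# Hedged: generate a random 32-byte hex secret for a stable iroh identity.
IROH_SECRET=$(openssl rand -hex 32)
echo "${#IROH_SECRET}"   # 64 hex characters
```

Persist the value somewhere private and reuse it across runs so the node keeps the same identity.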

Enable discovery (DHT + mDNS):

cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --discovery dht,mdns

Announce to catalog gossip:

cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --announce --gossip-peer <ENDPOINT_ADDR>

Publish per-chunk manifests alongside chunks:

EVERY_CHANNEL_MANIFEST_SIGNING_KEY=<hex> cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --publish-manifests --announce --gossip-peer <ENDPOINT_ADDR>

Batch multiple chunks per manifest (epoch):

cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --publish-manifests --epoch-chunks 8

MoQ subscribe (HLS output)

cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID>

Enable discovery (DHT + mDNS):

cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --discovery dht,mdns

Segments and index.m3u8 will be written to ./tmp/moq-hls by default.

Subscribe to manifests and require them for validation:

cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --subscribe-manifests --require-manifest

Restrict to a manifest signer:

cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --subscribe-manifests --require-manifest \
  --manifest-signers ed25519:<hex>

Throttle incoming data:

cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --max-bytes-per-sec 5000000 --max-bytes-burst 10000000

If the stream is encrypted, pass the same network secret used by the publisher:

EVERY_CHANNEL_NETWORK_SECRET=<hex> cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID>

MoQ self-test (local round trip)

cargo run -p ec-node -- moq-selftest /path/to/stream.ts

Pass a URL (for HDHomeRun):

cargo run -p ec-node -- moq-selftest http://<hdhr>/auto/v8.1 --max-chunks 8

Determinism test (libx264 single-thread)

cargo run -p ec-cli -- determinism-test /path/to/stream.ts ./tmp/determinism --runs 3

Tests and coverage

Run the core unit tests:

just test-core

Generate a coverage report (requires nix develop so ffmpeg headers are visible to ac-ffmpeg):

just cov-core-html