every.channel: sanitized baseline

every.channel 2026-02-15 16:17:27 -05:00
commit 897e556bea
258 changed files with 74298 additions and 0 deletions

docs/ARCHITECTURE.md Normal file
@@ -0,0 +1,57 @@
# Architecture
## Layers
1. Capture
- Hardware: ATSC antennas -> HDHomeRun or Linux IPTV capture devices.
- Output: MPEG-TS or ATSC 3.0 streams.
2. Normalize
- Demux and normalize timestamps.
- Select program IDs and identify audio/video tracks.
3. Deterministic transcode
- Encode with a fixed profile (codec, GOP, bitrate, keyframe cadence).
- Emit fixed-duration chunks with stable ordering.
- Hash chunks to produce content identifiers.
4. MoQ publish
- Map each channel to a track namespace.
- Each chunk becomes a MoQ object in a group.
- Objects are named and addressed deterministically.
5. Relay mesh
- Relays cache objects and announce tracks.
- iroh provides programmable topology and peer routing.
- Multiple relays can serve identical objects.
6. Playback
- Desktop: Tauri app that subscribes to tracks.
- CLI: debugging, inspection, and headless clients.
- Web: static site that connects to a relay gateway.
## Roles
- Runner: owns capture + transcode + publish.
- Chopper: executes deterministic chunking profiles.
- Relay: stores and forwards MoQ objects.
- Manager: configures nodes and applies policy.
- Provisioner: bootstraps nodes and manages deployment.
## Determinism
- The same input with the same profile should yield identical chunks.
- Chunk hashes are the primitive for availability and de-duplication.
- Deterministic names allow relays to converge without coordination.
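The determinism contract above can be sketched as a pure function from (profile, position, bytes) to a content identifier. A minimal illustration using the standard library's hasher; the real pipeline would use a cryptographic digest (e.g. BLAKE3 or SHA-256), and `chunk_id` is a hypothetical name, not the `ec-core` API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical content identifier. In production this would be a
/// cryptographic digest, not a 64-bit hash; the point is only that the
/// identifier is a deterministic function of profile + position + bytes.
fn chunk_id(profile: &str, chunk_index: u64, payload: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    profile.hash(&mut h);     // encoding profile pins the output bytes
    chunk_index.hash(&mut h); // position in the stream
    payload.hash(&mut h);     // the chunk bytes themselves
    h.finish()
}

fn main() {
    let a = chunk_id("h264-gop60-2s", 42, b"chunk bytes");
    let b = chunk_id("h264-gop60-2s", 42, b"chunk bytes");
    assert_eq!(a, b); // same input + profile => same identifier
}
```

Because the identifier depends only on the inputs, two relays that transcode the same broadcast with the same profile converge on the same names without talking to each other.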
## Time synchronization
- Chunk boundaries are derived from PCR and, when available, broadcast UTC (ATSC STT / DVB TDT/TOT).
- Unsynced sources remain source-scoped until broadcast time is present.
- Discontinuities force a new chunk group boundary.
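The boundary rule can be illustrated with the 90 kHz PCR base clock: every PCR value maps to exactly one chunk index, so independent encoders observing the same PCR agree on boundaries. Function names here are illustrative, not the actual API:

```rust
/// PCR base ticks run at 90 kHz, so one millisecond is 90 ticks.
const PCR_TICKS_PER_MS: u64 = 90;

/// Deterministic chunk index for a given PCR value and chunk duration.
fn chunk_index(pcr_base: u64, chunk_ms: u64) -> u64 {
    pcr_base / (chunk_ms * PCR_TICKS_PER_MS)
}

/// A boundary occurs whenever consecutive PCR samples land in
/// different chunk indices.
fn is_boundary(prev_pcr: u64, pcr: u64, chunk_ms: u64) -> bool {
    chunk_index(prev_pcr, chunk_ms) != chunk_index(pcr, chunk_ms)
}

fn main() {
    // 2000 ms chunks span 180_000 ticks: tick 179_999 is still chunk 0,
    // tick 180_000 starts chunk 1.
    assert_eq!(chunk_index(179_999, 2000), 0);
    assert_eq!(chunk_index(180_000, 2000), 1);
    assert!(is_boundary(179_999, 180_000, 2000));
}
```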

docs/BABY_STEPS.md Normal file
@@ -0,0 +1,57 @@
# Baby steps
These are the smallest useful steps to get a real MoQ pipeline running end-to-end.
1. Capture and inspect transport streams
- Confirm HDHomeRun discovery on the local network.
- Fetch lineup JSON and map it to `ec-core::Channel`.
- Open a raw MPEG-TS stream for a single channel and write it to disk.
2. Deterministic transcode + chunking
- Choose a reference ffmpeg profile (encoder, GOP, keyframe cadence).
- Emit fixed-duration chunks with deterministic timestamps.
- Hash chunks and verify that repeated runs are byte-identical.
3. MoQ object model
- Model track/group/object IDs and object metadata in `ec-moq`.
- Map each chunk to a MoQ object with deterministic naming.
- Validate that object IDs are stable across runs.
4. Single-node publish + replay
- Build a local relay that stores objects on disk.
- Publish from the chopper to the relay.
- Replay stored objects to a local subscriber and validate playback.
5. Multi-node relay mesh
- Integrate iroh for node discovery and routing.
- Replicate objects between two relays.
- Verify that a subscriber can pull from either relay.
6. Client surfaces
- Tauri shell that lists available tracks and plays one.
- CLI that can subscribe and dump objects for inspection.
- Static web UI that connects to a relay gateway.
7. Time-synchronized chunking
- Parse PCR + STT/TDT/TOT to anchor chunk boundaries to broadcast UTC.
- Emit deterministic TS chunks with sync metadata.
- Promote source-scoped streams to broadcast-scoped streams when synced.
8. MoQ publish path
- Wrap time-aligned chunks as MoQ objects (timing metadata attached).
- Write objects to the local relay store.
- Wire to iroh transport once MoQ session adapters are stable.
9. Production hardening
- Crash recovery and backfill.
- Observability: tracing, metrics, object retention.
- Network policies for DMCA resilience and takedown mitigation.
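Step 2's "repeated runs are byte-identical" check can be sketched as a directory comparison. The layout and file names below are illustrative, not the repo's actual output structure:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Two transcode runs over the same input must produce byte-identical
/// chunk files. Compares every file in `run_a` against its counterpart
/// in `run_b`.
fn runs_identical(run_a: &Path, run_b: &Path) -> io::Result<bool> {
    let mut names: Vec<_> = fs::read_dir(run_a)?
        .filter_map(|e| e.ok().map(|e| e.file_name()))
        .collect();
    names.sort(); // deterministic ordering for the comparison
    for name in names {
        if fs::read(run_a.join(&name))? != fs::read(run_b.join(&name))? {
            return Ok(false); // first divergent chunk fails the run
        }
    }
    Ok(true)
}

fn main() -> io::Result<()> {
    let base = std::env::temp_dir().join(format!("ec-det-{}", std::process::id()));
    let (a, b) = (base.join("run-a"), base.join("run-b"));
    fs::create_dir_all(&a)?;
    fs::create_dir_all(&b)?;
    fs::write(a.join("chunk-0000.ts"), b"payload")?;
    fs::write(b.join("chunk-0000.ts"), b"payload")?;
    assert!(runs_identical(&a, &b)?); // identical runs pass
    fs::write(b.join("chunk-0000.ts"), b"drifted")?;
    assert!(!runs_identical(&a, &b)?); // any byte drift fails
    fs::remove_dir_all(&base)?;
    Ok(())
}
```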

@@ -0,0 +1,45 @@
# Claude Code prompt (opus-4.6): design + IA pass
Goal: Do a design pass on the every.channel web site and apply the same information architecture and language to the desktop app UI.
Constraints:
- Keep it clean and minimal, but warmer (a subtle nod to old TV).
- Do not use protocol jargon in user-facing labels. Avoid words like "MoQ", "mDNS", "DHT", "endpoint", "broadcast", "track", "gossip", "pkarr".
- Prefer plain language:
- "Watch a link"
- "Nearby (same Wi-Fi)"
- "Public (internet)"
- "Directory"
- "Sharing key"
- Keep layout readable on mobile and desktop.
- Reuse the existing visual direction (soft gradient background, rounded cards) but tighten typography and spacing.
- Preserve existing functionality. This is a design and IA pass, not a behavior rewrite.
Scope:
1. Web site IA and design
- Implemented site lives at `apps/web/`:
- `apps/web/src/main.rs`
- `apps/web/style.css`
- `apps/web/index.html`
- Improve the IA and copy, but keep the same sections: Watch, Directory, Participate, About.
- Make the top nav and sections feel cohesive and calm.
- Add small touches that feel "old TV" without being kitsch. Very subtle scanlines/noise is OK.
2. App UI IA and design alignment
- Desktop UI lives at:
- `apps/tauri/ui/src/main.rs`
- `apps/tauri/ui/style.css`
- Ensure the same language and IA as the web site.
- Hide advanced fields. Default flows should be link-first.
- Make the Directory controls read like product features, not network plumbing.
3. Deliverables
- Update CSS and UI copy.
- If you add new components, keep them in the same files for now.
- Keep changes reversible and minimal in surface area.
- Provide a short summary of what changed and why.
Please edit the repo directly.

docs/COVERAGE.md Normal file
@@ -0,0 +1,31 @@
# Coverage
every.channel uses `cargo-llvm-cov` to produce formal coverage reports.
## Run (Recommended: Nix)
```sh
direnv allow
./scripts/coverage.sh
```
To run the unit-subset coverage (does not require FFmpeg headers):
```sh
./scripts/coverage.sh unit
```
Artifacts:
- `tmp/coverage/workspace.lcov`
- `tmp/coverage/workspace.summary.txt`
- `tmp/coverage/workspace-html/index.html`
## Run (Inside nix develop)
If you are already in `nix develop`, you can use:
```sh
just cov-workspace
just cov-workspace-html
```

docs/DEPLOY_CLOUDFLARE.md Normal file
@@ -0,0 +1,22 @@
# Cloudflare Deploy (Forgejo Actions)
This repo deploys `https://every.channel` via Wrangler.
## Prereqs
- Forgejo Actions enabled on the repo.
- A Cloudflare API token stored as a Forgejo Actions secret:
- name: `CLOUDFLARE_API_TOKEN`
The workflow is defined in `.forgejo/workflows/deploy-cloudflare.yml`.
## Manual deploy (local)
```sh
cd apps/tauri/ui
trunk build --release --public-url /
cd deploy/cloudflare-worker
npm ci
npm run deploy
```

docs/IROH_EXAMPLES.md Normal file
@@ -0,0 +1,46 @@
# iroh examples and references
Cloned under `third_party/iroh-org/` for local inspection.
## iroh-examples
- **browser-chat**: gossip-based chat in browser + CLI.
- **browser-blobs**: running iroh-blobs in WebAssembly.
- **custom-router**: manual protocol routing by ALPN.
- **dumbpipe-web**: HTTP forwarding over iroh.
- **tauri-todos**: Tauri + iroh documents.
## iroh-experiments
- **content-discovery**: tracker + pkarr discovery flow.
- **h3-iroh**: HTTP/3 over iroh connections.
- **iroh-pkarr-naming-system**: IPNS-style naming experiment.
## dumbpipe
- P2P stream forwarding via iroh (good reference for live media piping).
## iroh-gossip
- Pub/sub swarm model with topic IDs; ALPN routing examples.
## iroh-docs
- Shows how to layer blobs + gossip + docs into a protocol stack with Router.
## iroh-willow
- Confidential sync + encryption concepts relevant to swarm security.
## callme / iroh-live
- Audio/video streaming examples over iroh (already cloned in `third_party/`).
## Recent iroh blog + changelog highlights
- The changelog lists releases through 0.33.0 (Feb 24, 2025) with browser/Wasm support, discovery data publishing, and 0-RTT connections.
- 0.32.0 added a browser alpha, QUIC Address Discovery (QAD) on relays, and the n0-future crate.
- 0.29.0 rebranded `iroh-net` to `iroh` and removed the bundled Node in favor of endpoints + routers (protocols moved to separate crates).
- Sources:
- https://iroh.computer/changelog
- https://iroh.computer/blog/iroh-0-29

docs/IROH_NOTES.md Normal file
@@ -0,0 +1,13 @@
# iroh notes
## Recent highlights (from the iroh blog)
- iroh 0.96.0 focuses on QUIC multipaths on the road to 1.0.
- Custom transports (including Tor via `iroh-tor`) can be used to establish anonymous connections.
- Address lookup migration to a new DNS system (`N0Dns`) is underway.
## Implications for every.channel
- Multipath QUIC aligns with our resilience goals.
- Tor transport is a strong fit for anti-takedown posture.
- Endpoint discovery and DNS naming should be designed to tolerate evolution in iroh's discovery infrastructure.

@@ -0,0 +1,16 @@
# MoQ implementations (Rust)
## moq-rs (kixelated/moq)
- Rust implementation of Media over QUIC.
- Includes `moq-lite` and `hang` catalog components.
## iroh-live
- Uses `moq-rs` with iroh connections via `web-transport-iroh` and `iroh-moq` adapters.
- Provides publish/watch examples for media streams over iroh.
## callme (iroh-roq)
- Audio-only proof that low-latency media over iroh works at scale.
- Useful reference for handshake and session management patterns.

docs/MOQ_NOTES.md Normal file
@@ -0,0 +1,16 @@
# MoQ notes
Reference: https://blog.cloudflare.com/moq
- MoQ (Media over QUIC) is a publish/subscribe protocol for media on QUIC.
- The data model uses track namespaces, tracks, groups, and objects.
- Relays can cache and forward objects to many subscribers.
- MoQ is designed to bridge low-latency media delivery with scalable distribution.
- Browsers are expected to connect via WebTransport.
## Mapping to every.channel
- Each channel becomes a track namespace with tracks for video/audio.
- Each chunk is a MoQ object with deterministic IDs.
- Relays store objects and can serve them from any location.
- Object metadata includes optional timing fields (chunk index, PCR/UTC anchors) to support convergence.
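The mapping above can be sketched as a small addressing function: one namespace per channel, tracks for the media kinds, and chunk indices split deterministically into group/object positions. Field and function names are assumptions for illustration, not the `ec-moq` API:

```rust
/// Illustrative MoQ address for one chunk of one channel.
struct ObjectId {
    namespace: String, // one namespace per channel
    track: String,     // e.g. "video" or "audio"
    group: u64,        // chunk group (e.g. one per keyframe cadence)
    object: u64,       // chunk position inside the group
}

/// Deterministic chunk -> object mapping: any node with the same
/// channel, chunk index, and group length derives the same address.
fn object_id(channel: &str, track: &str, chunk_index: u64, group_len: u64) -> ObjectId {
    ObjectId {
        namespace: format!("every.channel/{channel}"),
        track: track.to_string(),
        group: chunk_index / group_len,  // deterministic group boundary
        object: chunk_index % group_len, // position within the group
    }
}

fn main() {
    let id = object_id("8.1", "video", 17, 8);
    assert_eq!(id.namespace, "every.channel/8.1");
    assert_eq!(id.group, 2);  // 17 / 8
    assert_eq!(id.object, 1); // 17 % 8
}
```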

docs/USAGE.md Normal file
@@ -0,0 +1,279 @@
# Usage
## Tauri viewer (local)
If `trunk` or `cargo-tauri` is missing, enter the nix shell (or install the tools manually).
`direnv allow` also sets `EVERY_CHANNEL_ROOT` so Tauri can find the UI folder.
```sh
direnv allow
cd apps/tauri
cargo tauri dev
```
If you want deterministic transcoding instead of stream copy:
```sh
EVERY_CHANNEL_TRANSCODE=1 cargo tauri dev
```
For node ingest/MoQ publish, you can force deterministic transcode with:
```sh
EVERY_CHANNEL_DETERMINISTIC=1 cargo run -p ec-node -- ingest hdhr --channel 8.1
```
iroh discovery is opt-in (DNS discovery is off by default). Enable DHT and/or mDNS like this:
```sh
EVERY_CHANNEL_IROH_DISCOVERY=dht,mdns cargo tauri dev
```
In the Tauri app, use **Add stream** to add an HDHomeRun host, a direct HLS URL, or a yt-dlp-supported URL (e.g. YouTube Live). The flow rejects non-live sources.
Linux DVB sources can be added with a URL like:
```
linux-dvb://localhost?adapter=0&dvr=0&tune=dvbv5-zap&tune=-r&tune=Channel%20Name
```
On Linux, if `/dev/dvb` exists and a `channels.conf` is found, Linux DVB channels are auto-listed in the Channels panel.
You can override the `channels.conf` path with `EVERY_CHANNEL_DVB_CHANNELS_CONF=/path/to/channels.conf`.
In the UI, you can still type `linux-dvb` in the Add stream field to open the Linux DVB picker (adapter, channels.conf, channel).
Select a channel and click **Share** to start a MoQ publisher. The share bundle (endpoint addr, broadcast, track) appears under the viewer panel and can be pasted into **Manual MoQ connect** on another node.
For gossip announcements, you can provide peers in the UI or set `EVERY_CHANNEL_GOSSIP_PEERS` (comma-separated). mDNS peer discovery is used on LANs to supplement the peer list when available.
## Coverage
See `docs/COVERAGE.md`.
## HDHomeRun E2E Test (Local Network)
This runs two local `ec-node` processes (publish then subscribe) against a real HDHomeRun source and validates that chunks are encrypted, manifests are required, and the subscriber produces a playable HLS output directory.
Requires Nix (so `ac-ffmpeg` finds FFmpeg headers):
```sh
./scripts/e2e-hdhr.sh --host <HDHR_HOST> --channel <CHANNEL>
```
## Mesh E2E Test (Split Sources)
This runs two publishers over the same broadcast:
- peer A publishes **manifests only** (`--publish-chunks=false`)
- peer B publishes **objects only**
The subscriber fetches objects from peer B and manifests from peer A using `--remote-manifests`.
```sh
./scripts/e2e-mesh-split.sh --host <HDHR_HOST> --channel <CHANNEL>
```
### yt-dlp bundling (YouTube Live URLs)
To enable the “Add stream (yt-dlp)” UI flow, bundle the Python runtime + yt-dlp into the app resources:
```sh
scripts/vendor-yt-dlp.sh
```
The app will use the bundled runtime under `apps/tauri/resources/yt-dlp/<platform>/venv`. You can override with `EVERY_CHANNEL_YTDLP_PYTHON`.
## Node ingest (MoQ file relay)
### HDHomeRun
```sh
cargo run -p ec-node -- ingest hdhr --channel 8.1
```
Enable deterministic transcode:
```sh
cargo run -p ec-node -- ingest hdhr --channel 8.1 --deterministic
```
Use a specific device:
```sh
cargo run -p ec-node -- ingest hdhr --device-id <DEVICE_ID> --channel 8.1
```
### Linux DVB
```sh
cargo run -p ec-node -- ingest linux-dvb --adapter 0 --dvr 0 --tune-cmd dvbv5-zap --tune-cmd -r --tune-cmd "Channel Name"
```
### HLS playlist
```sh
cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8
```
Use deterministic transcode if needed:
```sh
cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8 --mode transcode
```
### Raw TS file or URL
```sh
cargo run -p ec-node -- ingest ts --input /path/to/stream.ts
```
## Time sync inspection
```sh
cargo run -p ec-cli -- ts-sync /path/to/stream.ts --chunk-ms 2000 --max-events 50
```
## MoQ publish over iroh
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1
```
Publish with deterministic transcode:
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 --deterministic
```
Use a specific device:
```sh
cargo run -p ec-node -- moq-publish hdhr --device-id <DEVICE_ID> --channel 8.1
```
Publish an HLS source:
```sh
cargo run -p ec-node -- moq-publish hls --url https://example.com/live.m3u8
```
Set a stable identity:
```sh
IROH_SECRET=<hex> cargo run -p ec-node -- moq-publish ts --input /path/to/stream.ts
```
Enable discovery (DHT + mDNS):
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --discovery dht,mdns
```
Announce to catalog gossip:
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --announce --gossip-peer <ENDPOINT_ADDR>
```
Publish per-chunk manifests alongside chunks:
```sh
EVERY_CHANNEL_MANIFEST_SIGNING_KEY=<hex> cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --publish-manifests --announce --gossip-peer <ENDPOINT_ADDR>
```
Batch multiple chunks per manifest (epoch):
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
  --publish-manifests --epoch-chunks 8
```
## MoQ subscribe (HLS output)
```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID>
```
Enable discovery (DHT + mDNS):
```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --discovery dht,mdns
```
Segments and `index.m3u8` will be written to `./tmp/moq-hls` by default.
Subscribe to manifests and require them for validation:
```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --subscribe-manifests --require-manifest
```
Restrict to a manifest signer:
```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --subscribe-manifests --require-manifest \
  --manifest-signers ed25519:<hex>
```
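The signer restriction amounts to an allow-list check on the manifest's signing key. A sketch of the matching step only; entry parsing and names are assumptions, and real verification would use an ed25519 library, which is out of scope here:

```rust
/// Hypothetical allow-list check: a manifest is accepted only if its
/// signer matches a configured `<scheme>:<hex>` entry. This compares
/// identities; it does NOT verify the ed25519 signature itself.
fn signer_allowed(allowed: &[&str], scheme: &str, key_hex: &str) -> bool {
    let want = format!("{scheme}:{key_hex}");
    allowed.iter().any(|entry| entry.eq_ignore_ascii_case(&want))
}

fn main() {
    let allowed = ["ed25519:ab12cd34"];
    assert!(signer_allowed(&allowed, "ed25519", "AB12CD34")); // hex compares case-insensitively
    assert!(!signer_allowed(&allowed, "ed25519", "ffffffff")); // unknown key rejected
}
```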
Throttle incoming data:
```sh
cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID> \
  --max-bytes-per-sec 5000000 --max-bytes-burst 10000000
```
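The `--max-bytes-per-sec` / `--max-bytes-burst` pair behaves like a token bucket: tokens refill at the sustained rate and cap at the burst size. A minimal sketch (names are illustrative, not ec-node's internals):

```rust
use std::time::Duration;

/// Minimal token-bucket throttle: `rate` is the sustained byte rate,
/// `burst` the maximum accumulated allowance.
struct Throttle {
    rate: f64,   // bytes per second
    burst: f64,  // maximum accumulated bytes
    tokens: f64, // currently available bytes
}

impl Throttle {
    fn new(rate: f64, burst: f64) -> Self {
        Self { rate, burst, tokens: burst } // start with a full bucket
    }

    /// Add tokens for elapsed wall time, capped at the burst size.
    fn refill(&mut self, elapsed: Duration) {
        self.tokens = (self.tokens + self.rate * elapsed.as_secs_f64()).min(self.burst);
    }

    /// Returns true if `bytes` may be consumed now.
    fn admit(&mut self, bytes: f64) -> bool {
        if self.tokens >= bytes {
            self.tokens -= bytes;
            true
        } else {
            false
        }
    }
}

fn main() {
    // 5 MB/s sustained with a 10 MB burst, as in the example above.
    let mut t = Throttle::new(5_000_000.0, 10_000_000.0);
    assert!(t.admit(10_000_000.0));   // the burst drains the bucket at once
    assert!(!t.admit(1.0));           // then nothing until a refill
    t.refill(Duration::from_secs(1)); // 1 s at 5 MB/s
    assert!(t.admit(5_000_000.0));
}
```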
If the stream is encrypted, pass the same network secret used by the publisher:
```sh
EVERY_CHANNEL_NETWORK_SECRET=<hex> cargo run -p ec-node -- moq-subscribe \
  --remote <ENDPOINT_ADDR> \
  --broadcast-name <STREAM_ID>
```
## MoQ self-test (local round trip)
```sh
cargo run -p ec-node -- moq-selftest /path/to/stream.ts
```
Pass a URL (for HDHomeRun):
```sh
cargo run -p ec-node -- moq-selftest http://<hdhr>/auto/v8.1 --max-chunks 8
```
## Determinism test (libx264 single-thread)
```sh
cargo run -p ec-cli -- determinism-test /path/to/stream.ts ./tmp/determinism --runs 3
```
## Tests and coverage
Run the core unit tests:
```sh
just test-core
```
Generate a coverage report (requires `nix develop` so ffmpeg headers are visible to `ac-ffmpeg`):
```sh
just cov-core-html
```

docs/allowed_signers Normal file
@@ -0,0 +1 @@
founder@every.channel ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJCBTSEEcBOhOkf3WF1e8xmblAZHvgTibFsqck2GY8D/