Advance forge rollout, Ethereum rails, and NBC sources

This commit is contained in:
every.channel 2026-04-01 15:58:49 -07:00
parent be26313225
commit 7d84510eac
No known key found for this signature in database
88 changed files with 11230 additions and 302 deletions


@ -24,13 +24,24 @@
- Each chunk becomes a MoQ object in a group.
- Objects are named and addressed deterministically.
5. Settlement rails
- Ethereum-compatible commitments mirror stream identity, manifests, and transport announcements.
- Observation consensus is a separate rail on top of those commitments: a reality-derived
`ObservationHeader` is hashed, witnessed, and finalized per `(stream, epoch)` slot.
- The chain stores compact commitment pointers only; media bytes and full manifests remain on iroh,
relays, and archive storage.
- OP Stack is the current private-chain operator target, with `ecp-forge` as the head node for the
first Sepolia-anchored testnet tranche.
- Private-chain operation is a protocol extension, not a replacement for transport.
6. Relay mesh
- Relays cache objects and announce tracks.
- iroh provides programmable topology and peer routing.
- Multiple relays can serve identical objects.
7. Playback
- Desktop: Tauri app that subscribes to tracks.
- CLI: debugging, inspection, and headless clients.
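The "named and addressed deterministically" property above can be illustrated: two nodes that produce the same chunk bytes converge on the same object name with no coordination. A minimal sketch, with stdlib `blake2b` standing in for the project's BLAKE3 and a hypothetical `track/group/digest` naming scheme:

```python
import hashlib

def moq_object_name(track: str, group: int, chunk: bytes) -> str:
    """Deterministic object name: any node hashing the same chunk bytes
    derives the same address (blake2b stands in for BLAKE3 here)."""
    digest = hashlib.blake2b(chunk, digest_size=32).hexdigest()
    return f"{track}/{group}/{digest}"
```

Because the name is a pure function of the bytes, relays can de-duplicate and serve identical objects without a coordinator.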
@ -43,15 +54,30 @@
- Relay: stores and forwards MoQ objects.
- Manager: configures nodes and applies policy.
- Provisioner: bootstraps nodes and manages deployment.
- Witness: attests to a reality-derived observation hash for a stream epoch.
## Determinism
- The same input with the same profile should yield identical chunks.
- Chunk hashes are the primitive for availability and de-duplication.
- Deterministic names allow relays to converge without coordination.
- Observation consensus derives a second deterministic summary from the archive/manifests layer:
`streamHash`, `epochHash`, `dataRoot`, and `locatorHash` become the on-chain observation header.
- Local manifests keep BLAKE3 `manifest_id`s and `merkle+blake3` proofs, while the Ethereum rail
adds Keccak ABI/data commitments and optional secp256k1 EIP-712 body signatures for settlement.
- Discovery identity should prefer broadcast-scoped channel identity when available and only fall
back to source-scoped IDs when the ingest path cannot yet prove a usable broadcast identity.
- PAT-derived identity is accepted only when the stream exposes a single non-zero program; ambiguous
multi-program TS inputs remain source-scoped to avoid accidental convergence on the wrong channel.
- `ec-ts` parses ATSC PSIP at the table layer (`MGT`, `TVCT/CVCT`, `STT`, `RRT`, `EIT`, `ETT`),
including `EIT` / `ETT` on the PIDs advertised by `MGT`.
- Current discovery promotion uses `PAT` plus `VCT` fields; the rest of PSIP is parsed and preserved
for inspection, validation, and future policy rather than guessed into the stream key.
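The deterministic-summary bullets above can be sketched end to end. This is an illustrative model only: the field names come from this document, but `sha3_256` stands in for the real Keccak/BLAKE3 digests, and the data-root construction is simplified relative to the `merkle+blake3` proofs:

```python
import hashlib
import json

def h(data: bytes) -> bytes:
    # Stand-in digest: the real rail uses Keccak ABI/data commitments and
    # BLAKE3 manifest ids; sha3_256 is used here only because it is stdlib.
    return hashlib.sha3_256(data).digest()

def observation_header(stream_id: str, epoch: int,
                       chunk_hashes: list, locator: str) -> dict:
    """Derive a deterministic observation header for a (stream, epoch) slot."""
    stream_hash = h(stream_id.encode())
    epoch_hash = h(stream_hash + epoch.to_bytes(8, "big"))
    # Simplified data root: hash over sorted chunk hashes; the real layer
    # uses a Merkle tree so inclusion proofs stay cheap.
    data_root = h(b"".join(sorted(chunk_hashes)))
    locator_hash = h(locator.encode())
    return {
        "streamHash": stream_hash.hex(),
        "epochHash": epoch_hash.hex(),
        "dataRoot": data_root.hex(),
        "locatorHash": locator_hash.hex(),
    }

def observation_hash(header: dict) -> str:
    # Canonical JSON keeps the hash independent of dict insertion order.
    return h(json.dumps(header, sort_keys=True).encode()).hex()
```

The point of the construction is that any witness re-deriving the header from the same archive/manifest state lands on the same `observation_hash`.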
## Time synchronization
- Chunk boundaries are derived from PCR and, when available, broadcast UTC (ATSC STT / DVB TDT/TOT).
- ATSC STT is interpreted as GPS time plus offset correction, then converted to Unix time for chunk
anchoring and diagnostics.
- Unsynced sources remain source-scoped until broadcast time is present.
- Discontinuities force a new chunk group boundary.
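The STT interpretation above reduces to a single offset: the GPS epoch (1980-01-06T00:00:00Z) is Unix second 315964800, and the STT's `GPS_UTC_offset` (accumulated leap seconds, by which GPS runs ahead of UTC) is subtracted to land on UTC. A minimal sketch:

```python
GPS_UNIX_EPOCH_OFFSET = 315_964_800  # Unix time of 1980-01-06T00:00:00Z (GPS epoch)

def stt_to_unix(system_time: int, gps_utc_offset: int) -> int:
    """Convert ATSC STT fields to Unix (UTC) seconds.

    system_time: GPS seconds since the GPS epoch, from the STT.
    gps_utc_offset: leap-second count by which GPS leads UTC, from the STT.
    """
    return GPS_UNIX_EPOCH_OFFSET + system_time - gps_utc_offset
```

For example, `stt_to_unix(0, 0)` is the GPS epoch itself, `315964800`.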


@ -20,7 +20,7 @@
Optional overrides:
```sh
EVERY_CHANNEL_FORGE_HOST=https://git.every.channel \
EVERY_CHANNEL_FORGE_REPO=every-channel/every.channel \
EVERY_CHANNEL_PROTECTED_BRANCH=main \
EVERY_CHANNEL_REQUIRED_CHECKS="ci-gates / checks" \
@ -31,7 +31,7 @@ EVERY_CHANNEL_REQUIRED_APPROVALS=1 \
Token source order:
1. `EVERY_CHANNEL_FORGE_TOKEN` / `FORGE_TOKEN` / `CODEBERG_TOKEN` env var
2. `secrets/forgejo-api-token.age` (preferred) via `agenix` or `age`
3. `secrets/forge-token.age` or `secrets/codeberg-token.age` (compat) via `agenix` or `age`
The token must have repository admin scope to edit branch protection.
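The source order above can be sketched as a resolver. The `decrypt` callable is a placeholder for the `agenix`/`age` decryption step, not a real API:

```python
import os
from pathlib import Path
from typing import Optional

ENV_VARS = ["EVERY_CHANNEL_FORGE_TOKEN", "FORGE_TOKEN", "CODEBERG_TOKEN"]
AGE_FILES = [
    "secrets/forgejo-api-token.age",  # preferred
    "secrets/forge-token.age",        # compat
    "secrets/codeberg-token.age",     # compat
]

def resolve_forge_token(environ=os.environ,
                        decrypt=lambda path: None) -> Optional[str]:
    """Return the first token found, following the documented source order."""
    for var in ENV_VARS:
        if environ.get(var):
            return environ[var]
    for path in AGE_FILES:
        if Path(path).exists():
            token = decrypt(path)
            if token:
                return token
    return None
```

Environment variables always win over on-disk secrets, so a CI job can override a host's default token without touching files.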

docs/DEPLOY_ECP_FORGE.md Normal file

@ -0,0 +1,46 @@
# Sovereign Deploy: `ecp-forge`
This repository owns deployment of `git.every.channel` (Hetzner 300TB host).
## Requirements
- SSH access to `root@git.every.channel`.
- Local key that matches host `authorized_keys` (default: `~/.ssh/id_ed25519`).
- `nix` with flakes enabled.
## Deploy
```sh
./scripts/deploy-ecp-forge.sh
```
For the OP Stack operator path and observation-rail validation, see:
```sh
cat docs/OP_STACK_ECP_FORGE.md
```
Equivalent:
```sh
NIX_SSHOPTS="-o BatchMode=yes -o IdentityAgent=none -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519" \
nix run nixpkgs#nixos-rebuild -- \
--flake .#ecp-forge \
--target-host root@git.every.channel \
--build-host root@git.every.channel \
--use-remote-sudo \
switch
```
## Overrides
- `EVERY_CHANNEL_FORGE_TARGET_HOST` (default `root@git.every.channel`)
- `EVERY_CHANNEL_FORGE_BUILD_HOST` (default same as target)
- `EVERY_CHANNEL_FORGE_SSH_IDENTITY` (default `~/.ssh/id_ed25519`)
## Verify
```sh
ssh -o BatchMode=yes -o IdentityAgent=none -i ~/.ssh/id_ed25519 root@git.every.channel \
'hostnamectl --static; systemctl is-active forgejo caddy every-channel-netboot-stage every-channel-netboot'
```


@ -2,7 +2,7 @@
Primary host:
- Forgejo (`origin`, `git.every.channel`)
Mirrors (push-only):
@ -19,7 +19,7 @@ Codeberg and GitHub are distribution mirrors only. CI/actions should run on Forgejo.
Defaults:
- `origin`: `ssh://forgejo@git.every.channel:2222/every-channel/every.channel.git`
- `mirror-codeberg`: `git@codeberg.org:every-channel/every.channel.git`
- `mirror-github`: `git@github.com:every-channel/every.channel.git`


@ -1,35 +1,88 @@
# NUC Fleet Netboot (Unifi)
This runbook provisions x86_64 NUCs from runner netboot artifacts without USB image flashing.
Supported modes:
- ProxyDHCP mode: recommended when you want automatic stage-1/2 iPXE handling.
- UniFi-only mode: DHCP options 66/67 in UniFi, no ProxyDHCP.
## Prerequisites
- Linux boot server on the same VLAN/L2 domain as the NUCs.
- Unifi network with DHCP enabled.
- Local DNS record on that VLAN: `boot.every.channel -> <boot-server-ip>`.
- `curl`, `tar`, `python3`, `dnsmasq` installed on the boot server.
- For UniFi-only mode with reliable chainloading: `git` and `make` to build embedded iPXE.
- `openssl` (or equivalent) if you want generated chain tokens.
- Runner netboot artifact published to Forgejo Releases (or available as a local tarball).
## Persistent NixOS service (recommended)
Instead of running scripts manually, use the exported NixOS module and keep netboot
staging/serving declarative:
```nix
{
imports = [ every-channel.nixosModules.ec-netboot ];
services.every-channel.netboot = {
enable = true;
listenIP = "10.20.30.2";
interface = "enp195s0";
hostname = "boot.every.channel";
tftpBootFilename = "ec-ipxe.efi";
httpAllowedCIDRs = [ "10.20.30.0/24" ];
chainTokenFile = "/run/agenix/every-channel-netboot-chain-token";
# UniFi-only mode by default (no ProxyDHCP):
proxyDhcp.enable = false;
release.host = "https://git.every.channel";
release.repo = "every-channel/every.channel";
# release.tag = "boot-v2026.03.02"; # optional pin
# release.tokenFile = "/run/agenix/forgejo-api-token"; # optional private repo token
};
}
```
Operational commands:
```sh
sudo systemctl start every-channel-netboot-stage.service
sudo systemctl restart every-channel-netboot.service
sudo systemctl status every-channel-netboot.service
```
If you prefer ProxyDHCP mode:
```nix
services.every-channel.netboot.proxyDhcp.enable = true;
services.every-channel.netboot.proxyDhcp.subnet = "10.20.30.0/24";
```
## 1) Build embedded iPXE (UniFi-only mode)
This removes iPXE boot loops without requiring ProxyDHCP.
```sh
EVERY_CHANNEL_NETBOOT_HOSTNAME=boot.every.channel \
EVERY_CHANNEL_NETBOOT_HTTP_PORT=8080 \
EVERY_CHANNEL_NETBOOT_CHAIN_TOKEN="$(openssl rand -hex 16)" \
./scripts/netboot-build-ipxe.sh
```
Output:
- `tmp/netboot/tftp/ec-ipxe.efi` (use this as DHCP option 67 filename)
## 2) Stage runner netboot artifacts
```sh
EVERY_CHANNEL_NETBOOT_HOSTNAME=boot.every.channel \
EVERY_CHANNEL_NETBOOT_CHAIN_TOKEN="<same-token-as-step-1>" \
EVERY_CHANNEL_NETBOOT_IPXE_EFI_PATH=tmp/netboot/tftp/ec-ipxe.efi \
EVERY_CHANNEL_NETBOOT_IPXE_EFI_FILENAME=ec-ipxe.efi \
./scripts/netboot-stage.sh
```
@ -38,16 +91,31 @@ Optional inputs:
- `EVERY_CHANNEL_NETBOOT_RELEASE_TAG=boot-v2026.02.28`
- `EVERY_CHANNEL_NETBOOT_TARBALL=/path/to/ec-runner-x86_64-netboot-....tar.gz`
- `EVERY_CHANNEL_FORGE_TOKEN=<token>` for private releases
- `EVERY_CHANNEL_NETBOOT_HOSTNAME=boot.every.channel`
- `EVERY_CHANNEL_NETBOOT_ALLOW_REMOTE_IPXE=true` only if you intentionally want to download iPXE from URL
- `EVERY_CHANNEL_IPXE_EFI_SHA256=<sha256>` to pin iPXE binary integrity
This stages:
- `tmp/netboot/http/{kernel,initrd,netboot.ipxe}`
- `tmp/netboot/tftp/ec-ipxe.efi`
## 3) Serve HTTP + TFTP
UniFi-only mode (no ProxyDHCP):
```sh
sudo \
EVERY_CHANNEL_NETBOOT_LISTEN_IP=10.20.30.2 \
EVERY_CHANNEL_NETBOOT_INTERFACE=eth0 \
EVERY_CHANNEL_NETBOOT_HOSTNAME=boot.every.channel \
EVERY_CHANNEL_NETBOOT_CHAIN_TOKEN="<same-token-as-step-1>" \
EVERY_CHANNEL_NETBOOT_HTTP_ALLOWED_CIDRS=10.20.30.0/24 \
EVERY_CHANNEL_NETBOOT_PROXY_DHCP=false \
EVERY_CHANNEL_NETBOOT_TFTP_BOOT_FILENAME=ec-ipxe.efi \
./scripts/netboot-serve.sh
```
ProxyDHCP mode:
```sh
sudo \
EVERY_CHANNEL_NETBOOT_INTERFACE=eth0 \
EVERY_CHANNEL_NETBOOT_PROXY_SUBNET=10.20.30.0/24 \
EVERY_CHANNEL_NETBOOT_HOSTNAME=boot.every.channel \
EVERY_CHANNEL_NETBOOT_CHAIN_TOKEN="<same-token-as-step-1>" \
EVERY_CHANNEL_NETBOOT_HTTP_ALLOWED_CIDRS=10.20.30.0/24 \
EVERY_CHANNEL_NETBOOT_PROXY_DHCP=true \
EVERY_CHANNEL_NETBOOT_TFTP_BOOT_FILENAME=ec-ipxe.efi \
./scripts/netboot-serve.sh
```
## 4) UniFi settings (you do this)
UniFi-only mode:
- `Network Boot`: enabled
- `Server`: `boot.every.channel` (or boot server IP)
- `Filename`: `ec-ipxe.efi`
- `TFTP Server`: `boot.every.channel`
ProxyDHCP mode:
- Keep DHCP enabled for the provisioning VLAN.
- Leave UniFi boot/TFTP options unset.
- Create/verify the local DNS host override: `boot.every.channel -> <boot-server-ip>`.
NUC BIOS:
- Enable UEFI PXE boot.
- Disable Legacy/CSM where possible.
- Put network boot first for initial install.
## 5) Provision the fleet
1. Boot each NUC on the provisioning VLAN.
2. PXE chainloads into iPXE and then the runner `netboot.ipxe`.
3. Complete the install/bootstrap flow on each node.
4. After a successful install, switch the boot order back to local disk.
## Troubleshooting
- Symptom: iPXE loop (`ipxe.efi` loads repeatedly).
  Cause: a static DHCP bootfile without iPXE-aware logic.
  Fix: use the ProxyDHCP flow (`netboot-serve.sh`), the embedded `ec-ipxe.efi`, or conditional DHCP rules.
- Symptom: NUC gets an IP but never downloads boot artifacts.
  Fix: verify the firewall allows UDP 67/68, UDP 69, and TCP 8080 between NUCs and the boot server.
- Symptom: no `dnsmasq` offers seen.
  Fix: verify `EVERY_CHANNEL_NETBOOT_INTERFACE` and `EVERY_CHANNEL_NETBOOT_PROXY_SUBNET`.
## Security hardening
- Tailscale is not required for provisioning.
- Keep provisioning on an isolated VLAN.
- Allow only required ports from NUC VLAN to boot server: UDP 69, TCP 8080 (and DHCP if ProxyDHCP mode).
- Keep provisioning services up only during rollout, then stop them.
- Use `EVERY_CHANNEL_NETBOOT_HTTP_ALLOWED_CIDRS` to limit HTTP artifact access to NUC subnet(s).
- Use `EVERY_CHANNEL_NETBOOT_CHAIN_TOKEN` so only tokened iPXE chain requests receive `netboot.ipxe`.
- Use checksum verification in `netboot-stage.sh` (enabled by default when release has `SHA256SUMS.txt`).
- `netboot-stage.sh` now defaults to local iPXE binaries; remote URL download requires explicit opt-in.
- Prefer embedded `ec-ipxe.efi` with a fixed chain target over generic unsigned internet binaries.
- If Secure Boot is required, use a signed boot chain and keys for your environment (outside the scope of this runbook).
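The chain-token hardening in the list above amounts to a shared-secret check on each iPXE chain request. A sketch of the idea, with `hmac.compare_digest` chosen here for constant-time comparison (the actual scripts' check may differ):

```python
import hmac
import secrets

def new_chain_token() -> str:
    # Same shape as the `openssl rand -hex 16` command used in this runbook.
    return secrets.token_hex(16)

def chain_request_authorized(presented: str, expected: str) -> bool:
    # Constant-time comparison so an HTTP handler gating `netboot.ipxe`
    # does not leak token bytes through response timing.
    return hmac.compare_digest(presented.encode(), expected.encode())
```

The same token must be baked into the embedded `ec-ipxe.efi` build and exported when staging/serving, or every chain request is refused.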

docs/OP_STACK_ECP_FORGE.md Normal file

@ -0,0 +1,110 @@
# OP Stack on `ecp-forge`
This runbook covers the repo-owned OP Stack testnet surface introduced by [ECP-0093](evolution/proposals/ECP-0093-ecp-forge-op-stack-sepolia-observation-testnet.md).
## Scope
- `ecp-forge` runs the every.channel OP Stack services with pinned official container images.
- The chain is Sepolia-anchored and private by default on the RPC side.
- Application-level consensus lives in the observation rail:
  - [EveryChannelWitnessRegistry.sol](contracts/EveryChannelWitnessRegistry.sol)
  - [EveryChannelObservationLedger.sol](contracts/EveryChannelObservationLedger.sol)
## Required inputs
- `secrets/op-stack-sepolia-private-key.age`
- Sepolia operator key for `op-deployer`, `op-node`, `op-batcher`, and `op-proposer`.
- `secrets/op-stack-challenger-prestate.bin.gz.age`
- Cannon absolute prestate artifact for `op-challenger`.
If the private key secret is absent, `services.every-channel.op-stack.enable` remains `false` on `ecp-forge`.
If the prestate artifact is absent, `op-challenger` and `op-dispute-mon` stay disabled even when the
core rollup services are enabled.
## Local validation
Contracts:
```sh
nix shell .#foundry .#solc -c forge test -vv
```
Real archive data through Anvil:
```sh
nix shell .#foundry .#solc nixpkgs#jq nixpkgs#openssh nixpkgs#curl -c ./scripts/op-stack/anvil-reality-smoke.sh
```
The smoke script:
- deploys the witness registry and observation ledger to Anvil,
- reads a real archive JSONL entry from `root@git.every.channel`,
- derives `stream_hash`, `epoch_hash`, `locator_hash`, and `observation_hash`,
- finalizes the observation with two Anvil witnesses.
Default remote source:
- `/var/lib/every-channel/manifests/la-cbs/video0.m4s.jsonl`
Output artifact:
- `test-results/anvil-reality-smoke.json`
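The finalize step the smoke script exercises can be modeled as a quorum over witness attestations. A toy sketch only: the real ledger contract additionally checks witness registration and signatures on-chain:

```python
def finalize_observation(observation_hash: str,
                         attestations: dict, quorum: int = 2) -> bool:
    """Return True when at least `quorum` witnesses attest to the same hash.

    attestations maps witness address -> the observation hash it attested.
    """
    matching = {w for w, h in attestations.items() if h == observation_hash}
    return len(matching) >= quorum
```

With two Anvil witnesses attesting the same derived hash, as in the smoke script, the slot finalizes; a single dissenting witness is not enough to block it once quorum is met.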
## Deploy
```sh
./scripts/deploy-ecp-forge.sh
```
The host bootstrap service is:
- `every-channel-op-stack-bootstrap.service`
It writes runtime state under:
- `/var/lib/every-channel/op-stack`
Key generated artifacts:
- `/var/lib/every-channel/op-stack/deployment.json`
- `/var/lib/every-channel/op-stack/sequencer/genesis.json`
- `/var/lib/every-channel/op-stack/sequencer/rollup.json`
## Verify
Core services:
```sh
ssh -o BatchMode=yes -o IdentityAgent=none -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519 root@git.every.channel \
'systemctl is-active every-channel-op-stack-bootstrap podman-every-channel-op-geth podman-every-channel-op-node podman-every-channel-op-batcher podman-every-channel-op-proposer'
```
Full stack including challenger/dispute monitor:
```sh
ssh -o BatchMode=yes -o IdentityAgent=none -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519 root@git.every.channel \
'systemctl is-active podman-every-channel-op-challenger podman-every-channel-op-dispute-mon'
```
Bootstrap outputs:
```sh
ssh -o BatchMode=yes -o IdentityAgent=none -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519 root@git.every.channel \
'jq . /var/lib/every-channel/op-stack/deployment.json'
```
## Contract deployment on the rollup
Once the rollup RPC is live, deploy the observation rail to the L2 RPC:
```sh
EVERY_CHANNEL_RPC_URL=http://127.0.0.1:8545 \
EVERY_CHANNEL_PRIVATE_KEY_FILE=/path/to/private-key \
./scripts/op-stack/deploy-observation-ledger.sh
```
## Notes
- `op-geth` and `op-node` RPC surfaces bind to `127.0.0.1` on `ecp-forge`.
- The public firewall opening is only for the `op-node` P2P port.
- The bootstrap uses `op-deployer/v0.6.0-rc.3` by default and official OP Labs container images.


@ -9,6 +9,12 @@ This repo exports reproducible NixOS runner configurations via flake outputs:
- `nixosConfigurations.ec-runner-x86_64-iso`
- `nixosConfigurations.ec-runner-aarch64-sdimage`
It also exports reusable NixOS modules:
- `nixosModules.ec-runner`
- `nixosModules.ec-node`
- `nixosModules.ec-netboot` (persistent HTTP/TFTP netboot stage+serve service)
The runner OS exposes this repo's flake source inside the system at:
- `/etc/every-channel/flake`


@ -11,6 +11,13 @@ cd apps/tauri
cargo tauri dev
```
If you want to run the desktop app directly from Cargo against the bundled frontend instead of the
dev server, run:
```sh
EVERY_CHANNEL_ROOT=$PWD cargo run -p ec-tauri --features custom-protocol
```
If you want deterministic transcoding instead of stream copy:
```sh
@ -31,6 +38,40 @@ EVERY_CHANNEL_IROH_DISCOVERY=dht,mdns cargo tauri dev
In the Tauri app, use **Add stream** to add an HDHomeRun host, a direct HLS URL, or a yt-dlp supported URL (e.g. YouTube Live). The flow rejects non-live sources.
`https://www.nbc.com/watch/...` URLs are also supported in the Tauri app. This path is
browser-backed:
- on macOS, the app first opens an in-app Tauri webview backed by `WKWebView`
- NBC / Adobe Pass authentication stays in that native app window, including popup sign-in flows
- if native playback cannot become ready, the app falls back to the existing external Chrome path
- once playback is live, the app captures rendered video frames and feeds them into the existing
ffmpeg ladder
Notes:
- the first run may require you to finish your MVPD login in the native app window or, if native
playback falls back, in the launched Chrome window
- on macOS, the default native webview data directory is app-local; override it with
`EVERY_CHANNEL_NBC_WEBVIEW_DATA_DIR=/path/to/webview-data`
- for future unattended runs with a warm session, set `EVERY_CHANNEL_NBC_HIDE_WINDOWS=1` to keep
the native NBC webviews hidden; if interactive auth is needed, the app will surface the window
instead of silently hanging
- the desktop app also exposes `bootstrap_nbc_auth`; in the Add menu, use `Bootstrap selected NBC`
or `Bootstrap pasted NBC URL` to warm the hidden session before later playback runs
- the fallback Chrome profile directory is app-local; override it with
`EVERY_CHANNEL_NBC_PROFILE_DIR=/path/to/profile`
- override the Chrome binary with `EVERY_CHANNEL_NBC_CHROME_PATH=/path/to/chrome`
- when `EVERY_CHANNEL_NBC_HIDE_WINDOWS=1` is set, the app refuses visible Chrome fallback if the
native path fails
- the app also pulls NBC's public live guide before auth so browseable NBC channel rows can
appear in the Channels list; override that guide shaping with
`EVERY_CHANNEL_NBC_PUBLIC_TIMEZONE`, `EVERY_CHANNEL_NBC_PUBLIC_NBC_AFFILIATE`,
`EVERY_CHANNEL_NBC_PUBLIC_TELEMUNDO_AFFILIATE`, and
`EVERY_CHANNEL_NBC_PUBLIC_BROADCAST_TYPE`
- capture is currently video-first; audio is not guaranteed in the first cut
- adjust startup timeout / capture rate with `EVERY_CHANNEL_NBC_CAPTURE_TIMEOUT_SECS`,
`EVERY_CHANNEL_NBC_CAPTURE_FPS`, and `EVERY_CHANNEL_NBC_CAPTURE_QUALITY`
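- the capture knobs above can be read as plain env-var config; in this sketch the fallback
  values are illustrative placeholders, not the app's actual built-in defaults:

```python
import os

def nbc_capture_config(environ=os.environ) -> dict:
    """Read NBC capture tuning from the documented env vars.

    The fallbacks below are placeholder defaults for illustration only.
    """
    return {
        "timeout_secs": int(environ.get("EVERY_CHANNEL_NBC_CAPTURE_TIMEOUT_SECS", "120")),
        "fps": float(environ.get("EVERY_CHANNEL_NBC_CAPTURE_FPS", "30")),
        "quality": environ.get("EVERY_CHANNEL_NBC_CAPTURE_QUALITY", "high"),
    }
```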
Linux DVB sources can be added with a URL like:
```