every.channel: sanitized baseline

every.channel 2026-02-15 16:17:27 -05:00
commit 897e556bea
No known key found for this signature in database
258 changed files with 74298 additions and 0 deletions

2
.envrc Normal file
View file

@@ -0,0 +1,2 @@
use flake
export EVERY_CHANNEL_ROOT="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
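# direnv loads this file; run `direnv allow` once after cloning. `use flake`
# assumes nix-direnv (or an equivalent `use flake` implementation) is installed.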

View file

@@ -0,0 +1,40 @@
name: deploy-cloudflare
on:
  push:
    branches: [main]
  workflow_dispatch: {}
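# Serialize deploys: at most one run per ref; a newer push cancels the in-flight run.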
concurrency:
  group: cloudflare-deploy-${{ forgejo.ref }}
  cancel-in-progress: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - uses: dtolnay/rust-toolchain@stable
        with:
          targets: wasm32-unknown-unknown
      - name: Build Web App (Trunk)
        run: |
          set -euo pipefail
          cargo install trunk --locked
          cd apps/tauri/ui
          trunk build --release --public-url /
      - name: Deploy Worker (Wrangler)
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
        run: |
          set -euo pipefail
          cd deploy/cloudflare-worker
          npm ci
          npm run deploy

28
.gitignore vendored Normal file
View file

@@ -0,0 +1,28 @@
apps/tauri/resources/yt-dlp/*/venv/
tmp/
target/
apps/tauri/ui/target/
apps/tauri/ui/apps/
apps/tauri/dist/
apps/tauri/gen/
.direnv/
result
.wrangler/
# third_party is managed as submodules for build-critical deps (iroh-live, iroh-gossip).
# Everything else under third_party is treated as local scratch space.
third_party/*
!third_party/iroh-live
!third_party/iroh-org
third_party/iroh-org/*
!third_party/iroh-org/iroh-gossip
# Cloudflare worker local deps / builds
deploy/cloudflare-worker/node_modules/
# NEVER commit private keys
every_channel_ed25519
*.pem
*.key
*.p12
*.pfx
**/.env

24
AGENTS.md Normal file
View file

@@ -0,0 +1,24 @@
# Agent Instructions
This repo runs on explicit governance. Agents should operate autonomously and record decisions as ECPs.
## Principles
- The constitution states enduring principles, not specific technical choices.
- Technical decisions belong in ECPs.
- Use ECPs as the primary review surface for the founder.
## Workflow
- For any non-trivial change, draft an ECP in `evolution/proposals/` before or alongside implementation.
- Keep ECPs short, decisive, and reversible where possible.
- Prefer incremental commits; document rationale in ECPs rather than inline comments.
## Identity and signing
- Commits must be signed with SSH or age identities (minimum: SSH-signed commits).
- If unsure about signing configuration, pause and ask.
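A minimal SSH-signing setup, as a reference sketch (the key path is illustrative, not a repo mandate):
```sh
# Assumes an existing Ed25519 key; adjust the path to your identity.
git config gpg.format ssh
git config user.signingkey ~/.ssh/id_ed25519.pub
git config commit.gpgsign true
```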
## Autonomy
- Proceed independently; ask for input only when blocked by ambiguous design or missing constraints.

42
CONSTITUTION.md Normal file
View file

@@ -0,0 +1,42 @@
# every.channel Constitution
1. Mission
Make broadcast television universally reachable.
Build a global, disaggregated network of relays that lets anyone, anywhere, watch every.channel on any device, for free.
2. Principles
These are non-negotiable. Amendments require explicit constitutional process.
- **Free access.** No paywalls or tiers for viewing or participation. Donations and grants are welcome.
- **Public-first.** Broadcast spectrum is public. The network exists to expand public access and reduce artificial scarcity.
- **User sovereignty.** Nodes are user-run, user-owned, and programmable. Leaving the network must be as easy as joining it.
- **Resilient by design.** The system must tolerate takedowns, failures, and hostile pressure without losing the whole.
- **Transparent operation.** Source, protocols, and governance are public. Hidden control planes are not acceptable.
- **Composable layers.** The system is built from separable components so multiple implementations can coexist.
3. Infrastructure
**The project controls its own infrastructure.** CI, deployment, and secrets are defined in this repository.
External services may be used when practical but must not create dependencies that prevent independent operation.
4. Contributor Conduct
- Non-trivial changes require a written proposal in `evolution/proposals/` referencing this constitution.
- Capture decisions and rationale in the repository. If it is not written down, it did not happen.
- When tradeoffs appear, prefer choices that maximize user control and network resiliency.
- Security-sensitive changes require senior contributor review.
5. Governance
- ECPs (every.channel proposals) are the legislative process.
- Senior contributors are named in `CONTRIBUTORS.md`.
- All changes merge through pull requests.
- Constitutional amendments require a dedicated ECP quoting the affected section with explicit rationale.
6. Origin
This constitution implements the intent of the every.channel genesis documents.

3
CONTRIBUTORS.md Normal file
View file

@@ -0,0 +1,3 @@
# Contributors
- Founder: founder@every.channel

9029
Cargo.lock generated Normal file

File diff suppressed because it is too large

36
Cargo.toml Normal file
View file

@@ -0,0 +1,36 @@
[workspace]
resolver = "2"
members = [
"crates/ec-core",
"crates/ec-moq",
"crates/ec-direct",
"crates/ec-hdhomerun",
"crates/ec-linux-iptv",
"crates/ec-iroh",
"crates/ec-crypto",
"crates/ec-ts",
"crates/ec-chopper",
"crates/ec-node",
"crates/ec-cli",
"apps/tauri",
]
exclude = [
# Vendored upstream crates; we build them as dependencies but do not treat them
# as first-class workspace members (their upstream tests are timing-sensitive).
"third_party/iroh-org/iroh-gossip",
"third_party/iroh-live/iroh-moq",
"third_party/iroh-live/web-transport-iroh",
]
[workspace.package]
edition = "2021"
license = "AGPL-3.0-only"
[workspace.dependencies]
anyhow = "1"
blake3 = "1"
clap = { version = "4", features = ["derive"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tracing = "0.1"
tracing-subscriber = "0.3"

644
LICENSE Normal file
View file

@@ -0,0 +1,644 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
if it includes a convenient and prominently visible feature that
(1) displays an appropriate copyright notice, and (2) tells the user
that there is no warranty for the work (except to the extent that
warranties are provided), that licensees may convey the work under
this License, and how to view a copy of this License. If the
interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders
of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor was required to provide the Corresponding Source.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under
this License and any other pertinent obligations, then as a consequence
you may not convey it at all. For example, if you agree to terms that
obligate you to collect a royalty for further conveying from those to
whom you convey the Program, the only way you could satisfy both those
terms and this License would be to refrain entirely from conveying the
Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero
General Public License "or any later version" applies to it, you have
the option of following the terms and conditions either of that
numbered version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number
of the GNU Affero General Public License, you may choose any version
ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that
proxy's public statement of acceptance of a version permanently
authorizes you to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

94
README.md Normal file
View file

@@ -0,0 +1,94 @@
# every.channel
A global, disaggregated mesh of relays that turns local ATSC antennas into a coherent, worldwide stream. The stack is Rust-first, MoQ-native, and designed for deterministic chunking so identical broadcasts yield identical data.
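As a mental model for the determinism goal: two nodes that capture the same broadcast should emit byte-identical chunks, so their content hashes agree. A hypothetical check (paths invented; `b3sum` is the BLAKE3 CLI, matching the workspace's `blake3` dependency):
```sh
# If chunking is deterministic, both hashes are identical.
b3sum node-a/chunks/ch7-000042.ts node-b/chunks/ch7-000042.ts
```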
## Goals
- Free, global access to broadcast TV through user-run relays.
- Deterministic encoding and chunking to make availability a coordination problem.
- Clean layering: capture -> transcode -> MoQ publish -> relay -> client playback.
- Cross-platform clients: Tauri app, CLI, and a static web UI.
## Repository layout
- `crates/ec-core`: shared types and determinism profiles.
- `crates/ec-hdhomerun`: HDHomeRun discovery and lineup scaffolding.
- `crates/ec-linux-iptv`: Linux DVB ingest scaffolding.
- `crates/ec-iroh`: iroh transport scaffolding.
- `crates/ec-crypto`: stream key derivation helpers.
- `crates/ec-ts`: MPEG-TS timing and table parsing.
- `crates/ec-chopper`: deterministic ffmpeg chunking scaffolding.
- `crates/ec-moq`: MoQ data model and relay scaffolding.
- `crates/ec-node`: node runner (ingest + publish).
- `crates/ec-cli`: CLI for discovery and node control.
- `apps/tauri`: desktop client shell.
- `apps/tauri/ui`: Dioxus web frontend embedded in the Tauri app.
- `docs/USAGE.md`: runbook for viewer and ingest pipelines.
- `docs/IROH_EXAMPLES.md`: summary of iroh repos/examples used for design.
- `docs/`: architecture, roadmap, and MoQ notes.
## Development
Nix:
```sh
nix develop
```
Rust:
```sh
cargo build
```
Runbook:
```sh
cat docs/USAGE.md
```
Coverage:
```sh
./scripts/coverage.sh
```
Build static web:
```sh
./scripts/build-web.sh
```
Deploy to Cloudflare Workers (static site):
```sh
./scripts/deploy-workers.sh
```
Remote website E2E (local publisher -> deployed every.channel web):
```sh
./scripts/e2e-remote-website-direct.sh
```
Remote website E2E (public list/signaling -> website selects stream automatically):
```sh
./scripts/e2e-remote-website-directory.sh
```
Tauri viewer (Dioxus + Trunk):
```sh
cd apps/tauri/ui
trunk serve --port 1420 --public-url /
```
Then, in a second terminal, run the Tauri shell:
```sh
cd apps/tauri
cargo run
```
## Status
This repository is intentionally minimal. It captures the initial architecture and scaffold for a MoQ-first network and will expand as proposals are accepted.

29
apps/tauri/Cargo.toml Normal file
View file

@@ -0,0 +1,29 @@
[package]
name = "ec-tauri"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
axum = "0.7"
blake3.workspace = true
ec-crypto = { path = "../../crates/ec-crypto" }
ec-core = { path = "../../crates/ec-core" }
ec-chopper = { path = "../../crates/ec-chopper" }
ec-hdhomerun = { path = "../../crates/ec-hdhomerun" }
ec-linux-iptv = { path = "../../crates/ec-linux-iptv" }
ec-iroh = { path = "../../crates/ec-iroh" }
ec-moq = { path = "../../crates/ec-moq" }
hex = "0.4"
iroh = "0.96"
reqwest = { version = "0.12", default-features = false, features = ["blocking", "rustls-tls"] }
serde.workspace = true
serde_json = "1"
tauri = { version = "2", features = [] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
tower-http = { version = "0.5", features = ["fs"] }
tracing.workspace = true
[build-dependencies]
tauri-build = { version = "2", features = [] }

3
apps/tauri/build.rs Normal file
View file

@@ -0,0 +1,3 @@
fn main() {
tauri_build::build();
}

BIN
apps/tauri/icons/icon.png Normal file

Binary file not shown.

After

Size: 70 B

View file

@@ -0,0 +1,8 @@
The bundled yt-dlp runtime lives under platform-specific folders.
Use `scripts/vendor-yt-dlp.sh` to populate:
- macos/venv
- linux/venv
- windows/venv
These directories are intentionally empty in git.
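If the script is unavailable, the population it performs is roughly equivalent to this sketch (assuming a standard Python venv; the script's exact pinning may differ):
```sh
python3 -m venv macos/venv
macos/venv/bin/pip install yt-dlp
```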

3452
apps/tauri/src/main.rs Normal file

File diff suppressed because it is too large

View file

@@ -0,0 +1,31 @@
{
"$schema": "https://schema.tauri.app/config/2",
"productName": "every.channel",
"version": "0.0.0",
"identifier": "channel.every.app",
"build": {
"beforeBuildCommand": "cd ui && trunk build --release",
"beforeDevCommand": "cd ui && trunk serve --port 1420 --public-url /",
"devUrl": "http://localhost:1420",
"frontendDist": "dist"
},
"app": {
"withGlobalTauri": true,
"windows": [
{
"title": "every.channel",
"width": 1280,
"height": 820,
"resizable": true
}
],
"security": {
"csp": null
}
},
"bundle": {
"resources": [
"resources/**/*"
]
}
}

3619
apps/tauri/ui/Cargo.lock generated Normal file

File diff suppressed because it is too large

22
apps/tauri/ui/Cargo.toml Normal file
View file

@@ -0,0 +1,22 @@
[package]
name = "ec-tauri-ui"
version = "0.0.0"
edition = "2021"
[dependencies]
dioxus = { version = "0.6", features = ["web"] }
ec-direct = { path = "../../../crates/ec-direct" }
bytes = "1"
futures-util = { version = "0.3", features = ["sink"] }
gloo-timers = { version = "0.3", features = ["futures"] }
gloo-net = { version = "0.6", features = ["websocket"] }
js-sys = "0.3"
just-webrtc = "0.2"
serde = { version = "1", features = ["derive"] }
serde-wasm-bindgen = "0.6"
serde_json = "1"
wasm-bindgen = "0.2"
wasm-bindgen-futures = "0.4"
web-sys = { version = "0.3", features = ["Window", "Navigator", "Clipboard", "MediaSource", "SourceBuffer", "Url", "HtmlVideoElement", "HtmlMediaElement", "EventTarget", "Event", "Blob", "Request", "RequestInit", "Response", "Headers", "Location"] }
[workspace]

3
apps/tauri/ui/Trunk.toml Normal file
View file

@@ -0,0 +1,3 @@
[build]
dist = "../dist"
public_url = "/"

Binary file not shown.

After

Size: 984 B

Binary file not shown.

After

Size: 1.1 KiB

Binary file not shown.

After

Size: 3.7 KiB

34
apps/tauri/ui/index.html Normal file
View file

@@ -0,0 +1,34 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, viewport-fit=cover" />
<meta name="theme-color" content="#f7f4ef" />
<meta name="color-scheme" content="light" />
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="apple-mobile-web-app-status-bar-style" content="default" />
<title>every.channel</title>
<link rel="manifest" href="manifest.webmanifest" />
<link rel="icon" href="icons/icon-192.png" />
<link rel="apple-touch-icon" href="icons/apple-touch-icon.png" />
<link data-trunk rel="css" href="style.css" />
<link data-trunk rel="rust" data-wasm-opt="z" />
<link data-trunk rel="copy-file" href="manifest.webmanifest" />
<link data-trunk rel="copy-file" href="sw.js" />
<link data-trunk rel="copy-dir" href="icons" />
</head>
<body>
<div id="main"></div>
<script>
// Installable app shell (PWA). Keep this tiny and resilient.
if ("serviceWorker" in navigator) {
window.addEventListener("load", () => {
navigator.serviceWorker.register("./sw.js").catch(() => {});
});
}
</script>
</body>
</html>

View file

@@ -0,0 +1,29 @@
{
"name": "every.channel",
"short_name": "every.channel",
"description": "every.channel viewer",
"start_url": "/",
"scope": "/",
"display": "standalone",
"background_color": "#f7f4ef",
"theme_color": "#f7f4ef",
"icons": [
{
"src": "icons/icon-192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "icons/icon-512.png",
"sizes": "512x512",
"type": "image/png"
},
{
"src": "icons/icon-512.png",
"sizes": "512x512",
"type": "image/png",
"purpose": "maskable"
}
]
}

2329
apps/tauri/ui/src/main.rs Normal file

File diff suppressed because it is too large

671
apps/tauri/ui/style.css Normal file
View file

@@ -0,0 +1,671 @@
@import url("https://fonts.googleapis.com/css2?family=Space+Grotesk:wght@400;500;600&family=IBM+Plex+Sans:wght@400;500&display=swap");
:root {
color-scheme: light;
--bg: #f7f4ef;
--bg-ink: #151410;
--bg-muted: #f1ede6;
--bg-card: #ffffff;
--accent: #18a89b;
--accent-strong: #0c6f68;
--accent-warm: #d4915a;
--ink: #151410;
--ink-muted: #5a564c;
--border: rgba(21, 20, 16, 0.12);
--shadow: 0 24px 50px rgba(21, 20, 16, 0.15);
font-family: "Space Grotesk", "IBM Plex Sans", "Segoe UI", sans-serif;
}
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
background: radial-gradient(circle at top left, #fff7ec 0%, #f7f4ef 42%, #eef5f3 100%);
color: var(--ink);
min-height: 100vh;
}
body::before {
content: "";
position: fixed;
inset: 0;
background-image: radial-gradient(circle at 20% 20%, rgba(24, 168, 155, 0.15), transparent 45%),
radial-gradient(circle at 85% 12%, rgba(255, 155, 82, 0.18), transparent 40%),
radial-gradient(circle at 40% 80%, rgba(24, 168, 155, 0.1), transparent 50%);
pointer-events: none;
z-index: -1;
}
#main {
min-height: 100vh;
}
.app {
display: flex;
flex-direction: column;
gap: 20px;
padding: 24px clamp(16px, 4vw, 40px) 36px;
animation: fadeIn 0.6s ease-out;
}
.topbar {
display: flex;
align-items: center;
justify-content: space-between;
gap: 16px;
}
.topbar-actions {
display: flex;
align-items: center;
gap: 12px;
position: relative;
}
.add-source {
border: none;
border-radius: 14px;
background: var(--accent-strong);
color: white;
padding: 8px 14px;
font-size: 13px;
font-weight: 600;
cursor: pointer;
transition: transform 0.2s ease, box-shadow 0.2s ease;
}
.add-source:hover {
transform: translateY(-1px);
box-shadow: 0 12px 20px rgba(12, 111, 104, 0.25);
}
.source-menu {
position: absolute;
top: 48px;
right: 0;
width: 280px;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: 16px;
padding: 12px;
box-shadow: var(--shadow);
display: flex;
flex-direction: column;
gap: 8px;
max-height: 520px;
overflow-y: auto;
z-index: 10;
}
.source-menu-title {
font-size: 13px;
font-weight: 500;
}
.source-menu-status {
font-size: 12px;
color: var(--ink-muted);
}
.source-menu-item {
display: flex;
justify-content: space-between;
font-size: 12px;
color: var(--ink-muted);
}
.source-menu-action {
border: none;
border-radius: 12px;
background: var(--bg-muted);
padding: 8px 10px;
font-size: 12px;
cursor: pointer;
}
.source-menu-action.small {
padding: 6px 10px;
border-radius: 10px;
}
.source-menu-inline {
display: flex;
gap: 8px;
align-items: center;
}
.source-menu-inline .source-menu-input {
flex: 1;
}
.source-menu-divider {
height: 1px;
background: var(--border);
margin: 6px 0;
}
.source-menu-section {
display: flex;
flex-direction: column;
gap: 6px;
}
.source-menu-subsection {
display: flex;
flex-direction: column;
gap: 8px;
}
.source-menu-label {
font-size: 11px;
color: var(--ink-muted);
text-transform: uppercase;
letter-spacing: 0.08em;
}
.source-menu-input {
border: 1px solid var(--border);
border-radius: 10px;
padding: 6px 8px;
font-size: 12px;
background: var(--bg-muted);
}
.source-menu-button {
border: none;
border-radius: 12px;
background: var(--accent);
color: #fff;
padding: 8px 10px;
font-size: 12px;
cursor: pointer;
}
.source-menu-toggle {
display: flex;
align-items: center;
gap: 8px;
font-size: 12px;
color: var(--ink-muted);
}
.brand {
display: flex;
flex-direction: column;
gap: 6px;
}
.brand-title {
font-size: clamp(24px, 2.6vw, 30px);
font-weight: 600;
letter-spacing: -0.03em;
}
.brand-subtitle {
font-size: 12px;
color: var(--ink-muted);
}
.status-pill {
display: inline-flex;
align-items: center;
gap: 10px;
padding: 8px 14px;
border-radius: 999px;
border: 1px solid var(--border);
background: var(--bg-card);
font-size: 13px;
color: var(--ink-muted);
box-shadow: 0 10px 20px rgba(21, 20, 16, 0.08);
}
.status-dot {
width: 8px;
height: 8px;
border-radius: 50%;
background: var(--accent);
box-shadow: 0 0 0 4px rgba(24, 168, 155, 0.2);
animation: pulse 1.8s ease-in-out infinite;
}
@keyframes pulse {
0% {
box-shadow: 0 0 0 4px rgba(24, 168, 155, 0.2);
}
50% {
box-shadow: 0 0 0 7px rgba(24, 168, 155, 0.08);
}
100% {
box-shadow: 0 0 0 4px rgba(24, 168, 155, 0.2);
}
}
.grid {
display: grid;
grid-template-columns: minmax(260px, 1fr) minmax(320px, 2fr);
gap: 20px;
}
.left-column {
display: flex;
flex-direction: column;
gap: 24px;
}
.panel {
background: var(--bg-card);
border-radius: 18px;
border: 1px solid var(--border);
box-shadow: var(--shadow);
padding: 20px;
}
.panel-title {
font-size: 12px;
text-transform: uppercase;
letter-spacing: 0.14em;
color: var(--ink-muted);
margin-bottom: 12px;
}
.panel-header {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
margin-bottom: 12px;
}
.panel-header .panel-title {
margin-bottom: 0;
}
.panel-actions {
display: flex;
align-items: center;
gap: 10px;
}
.panel-select {
border: 1px solid var(--border);
background: var(--bg-muted);
border-radius: 10px;
padding: 6px 10px;
font-size: 11px;
text-transform: uppercase;
letter-spacing: 0.1em;
}
.panel-button {
border: none;
border-radius: 12px;
background: var(--bg-muted);
padding: 8px 12px;
font-size: 12px;
cursor: pointer;
}
.pager {
display: flex;
align-items: center;
gap: 6px;
}
.pager-button {
border: none;
border-radius: 10px;
background: var(--bg-muted);
padding: 6px 10px;
font-size: 11px;
cursor: pointer;
}
.pager-label {
font-size: 12px;
color: var(--ink-muted);
}
.channel-list {
display: flex;
flex-direction: column;
gap: 10px;
max-height: 480px;
overflow: auto;
padding-right: 4px;
}
.channel-card {
border-radius: 14px;
border: 1px solid transparent;
background: var(--bg-muted);
padding: 12px 14px;
display: flex;
flex-direction: column;
gap: 4px;
cursor: pointer;
transition: transform 0.2s ease, border 0.2s ease, box-shadow 0.2s ease;
}
.channel-badge {
align-self: flex-start;
border-radius: 999px;
padding: 4px 10px;
font-size: 10px;
letter-spacing: 0.12em;
text-transform: uppercase;
font-weight: 600;
background: rgba(21, 20, 16, 0.08);
color: var(--ink-muted);
}
.channel-badge.drm {
background: rgba(255, 155, 82, 0.2);
color: #a14d00;
}
.channel-badge.source {
background: rgba(24, 168, 155, 0.16);
color: #0f6c63;
}
.channel-card:hover {
transform: translateY(-2px);
border: 1px solid rgba(24, 168, 155, 0.4);
box-shadow: 0 12px 24px rgba(21, 20, 16, 0.1);
}
.channel-card.active {
border: 1px solid var(--accent);
background: rgba(24, 168, 155, 0.1);
}
.channel-title {
font-size: 15px;
font-weight: 600;
}
.channel-meta {
font-size: 13px;
color: var(--ink-muted);
}
.source-status {
font-size: 13px;
color: var(--ink-muted);
margin-bottom: 12px;
}
.source-list {
display: flex;
flex-direction: column;
gap: 12px;
margin-bottom: 20px;
}
.source-card {
border-radius: 16px;
border: 1px solid var(--border);
background: #fbf9f6;
padding: 12px 14px;
}
.source-name {
font-weight: 600;
margin-bottom: 6px;
}
.source-meta {
font-size: 12px;
color: var(--ink-muted);
}
.catalog-panel {
margin-bottom: 18px;
}
.player-shell {
display: flex;
flex-direction: column;
gap: 16px;
}
.video-frame {
width: 100%;
aspect-ratio: 16 / 9;
border-radius: 14px;
overflow: hidden;
background: #0f0f0f;
border: 1px solid rgba(21, 20, 16, 0.2);
position: relative;
}
.video-frame::after {
content: "";
position: absolute;
inset: 0;
pointer-events: none;
opacity: 0.11;
mix-blend-mode: overlay;
background:
linear-gradient(
to bottom,
rgba(255, 255, 255, 0.045),
rgba(0, 0, 0, 0.045)
),
repeating-linear-gradient(
to bottom,
rgba(255, 255, 255, 0.04) 0px,
rgba(255, 255, 255, 0.04) 1px,
rgba(0, 0, 0, 0) 2px,
rgba(0, 0, 0, 0) 4px
);
}
video {
width: 100%;
height: 100%;
object-fit: cover;
}
.placeholder {
display: grid;
place-items: center;
height: 100%;
color: rgba(255, 255, 255, 0.7);
font-size: 14px;
}
.meta-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
gap: 12px;
}
.meta-card {
border-radius: 12px;
background: #fdfbf8;
border: 1px solid var(--border);
padding: 12px 14px;
font-size: 13px;
color: var(--ink-muted);
}
.meta-card.drm {
border-color: rgba(255, 155, 82, 0.5);
background: rgba(255, 155, 82, 0.12);
}
.meta-card strong {
display: block;
font-size: 12px;
text-transform: uppercase;
letter-spacing: 0.12em;
color: var(--ink-muted);
margin-bottom: 4px;
}
.share-card {
margin-top: 16px;
border-radius: 14px;
border: 1px solid var(--border);
background: rgba(255, 255, 255, 0.7);
padding: 14px 16px;
display: flex;
flex-direction: column;
gap: 8px;
}
.share-title {
font-size: 12px;
text-transform: uppercase;
letter-spacing: 0.2em;
color: var(--ink-muted);
}
.share-row {
display: flex;
flex-direction: column;
gap: 4px;
}
.share-label {
font-size: 11px;
color: var(--ink-muted);
text-transform: uppercase;
letter-spacing: 0.08em;
}
.share-value {
font-size: 12px;
color: var(--ink);
word-break: break-all;
}
.share-link-row {
display: grid;
grid-template-columns: 1fr auto;
gap: 10px;
align-items: center;
}
.share-link {
border-radius: 10px;
border: 1px solid var(--border);
padding: 8px 10px;
font-size: 12px;
background: #fbf9f6;
color: var(--ink);
}
.share-copy {
border-radius: 10px;
border: 1px solid var(--border);
background: rgba(255, 255, 255, 0.8);
padding: 8px 12px;
font-size: 12px;
color: var(--ink);
cursor: pointer;
transition: transform 140ms ease, box-shadow 140ms ease, background 140ms ease;
}
.share-copy:hover {
transform: translateY(-1px);
box-shadow: 0 10px 24px rgba(30, 23, 17, 0.08);
}
.toggle {
display: flex;
align-items: center;
gap: 8px;
font-size: 12px;
color: var(--ink);
}
.toggle input {
accent-color: var(--accent);
}
.share-input {
margin-top: 8px;
border-radius: 10px;
border: 1px solid var(--border);
padding: 8px 10px;
font-size: 12px;
background: #fbf9f6;
color: var(--ink);
}
.moq-panel {
margin-top: 24px;
padding-top: 20px;
border-top: 1px dashed var(--border);
display: flex;
flex-direction: column;
gap: 10px;
}
.moq-title {
font-size: 13px;
font-weight: 600;
color: var(--ink);
}
.moq-label {
font-size: 12px;
color: var(--ink-muted);
text-transform: uppercase;
letter-spacing: 0.12em;
}
.moq-input {
border-radius: 10px;
border: 1px solid var(--border);
padding: 10px 12px;
font-size: 13px;
background: #fbf9f6;
color: var(--ink);
}
.moq-input:focus {
outline: none;
border-color: rgba(24, 168, 155, 0.6);
box-shadow: 0 0 0 3px rgba(24, 168, 155, 0.15);
}
.moq-button {
margin-top: 6px;
border: none;
border-radius: 12px;
background: var(--accent);
color: white;
padding: 10px 14px;
font-size: 13px;
font-weight: 600;
cursor: pointer;
transition: transform 0.2s ease, box-shadow 0.2s ease;
}
.moq-button:hover {
transform: translateY(-1px);
box-shadow: 0 12px 20px rgba(24, 168, 155, 0.25);
}
@keyframes fadeIn {
from {
opacity: 0;
transform: translateY(12px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
@media (max-width: 900px) {
.grid {
grid-template-columns: 1fr;
}
}

103
apps/tauri/ui/sw.js Normal file
View file

@@ -0,0 +1,103 @@
/* every.channel PWA service worker
*
* Goal: cache the app shell so it can be installed and load offline.
 * Do not interfere with media fetching/streaming: non-GET requests bypass the
 * worker, media-segment requests always go straight to the network, and large
 * binary responses are never cached.
*/
const CACHE_NAME = "every.channel-shell-v1";
const SHELL = [
"./",
"./index.html",
"./style.css",
"./manifest.webmanifest",
"./icons/icon-192.png",
"./icons/icon-512.png",
"./icons/apple-touch-icon.png",
];
self.addEventListener("install", (event) => {
event.waitUntil(
caches
.open(CACHE_NAME)
.then((cache) => cache.addAll(SHELL))
.then(() => self.skipWaiting())
);
});
self.addEventListener("activate", (event) => {
event.waitUntil(
caches
.keys()
.then((keys) =>
Promise.all(
keys.map((key) => {
if (key !== CACHE_NAME) return caches.delete(key);
return Promise.resolve();
})
)
)
.then(() => self.clients.claim())
);
});
function isNavigationRequest(request) {
return request.mode === "navigate";
}
function isMediaRequest(request) {
const url = new URL(request.url);
const path = url.pathname.toLowerCase();
return (
path.endsWith(".m3u8") ||
path.endsWith(".m4s") ||
path.endsWith(".mp4") ||
path.endsWith(".ts")
);
}
self.addEventListener("fetch", (event) => {
const { request } = event;
if (request.method !== "GET") return;
// Don't cache/modify streaming media requests.
if (isMediaRequest(request)) {
event.respondWith(fetch(request));
return;
}
// For navigations, prefer network but fall back to cached shell.
if (isNavigationRequest(request)) {
event.respondWith(
fetch(request).catch(() => caches.match("./index.html").then((r) => r || Response.error()))
);
return;
}
// Cache-first for same-origin static assets; network fallback.
const url = new URL(request.url);
if (url.origin === self.location.origin) {
event.respondWith(
caches.match(request).then((cached) => {
if (cached) return cached;
return fetch(request)
.then((resp) => {
// Avoid caching huge binary responses.
const len = resp.headers.get("content-length");
const tooBig = len && Number(len) > 5_000_000;
if (resp.ok && !tooBig) {
const clone = resp.clone();
caches.open(CACHE_NAME).then((cache) => cache.put(request, clone)).catch(() => {});
}
return resp;
})
.catch(() => Response.error());
})
);
return;
}
// Default: network.
event.respondWith(fetch(request));
});
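The worker only takes effect once the page registers it. A minimal sketch of that bootstrap, assuming registration happens in the app shell (the actual index.html wiring is not shown in this file and may differ):

```js
// Hypothetical registration from the app shell (illustrative only).
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker
      .register("./sw.js")
      .catch((err) => console.warn("service worker registration failed:", err));
  });
}
```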

1857
apps/web/Cargo.lock generated Normal file

File diff suppressed because it is too large

14
apps/web/Cargo.toml Normal file
View file

@@ -0,0 +1,14 @@
[package]
name = "ec-web"
version = "0.0.0"
edition = "2021"
[dependencies]
dioxus = { version = "0.6", features = ["web"] }
js-sys = "0.3"
serde = { version = "1", features = ["derive"] }
wasm-bindgen = "0.2"
wasm-bindgen-futures = "0.4"
web-sys = { version = "0.3", features = ["Window", "Navigator", "Clipboard"] }
[workspace]

18
apps/web/README.md Normal file
View file

@@ -0,0 +1,18 @@
# every.channel web site (static)
This is a static website built in Rust with Dioxus and compiled to WebAssembly.
## Dev
From repo root:
```bash
nix develop -c bash -lc 'cd apps/web && trunk serve --port 1421 --public-url /'
```
## Build
```bash
nix develop -c bash -lc 'cd apps/web && trunk build --release --public-url /'
```
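To sanity-check the release build, any static file server pointed at the `dist` output works; for example (illustrative command and port, run from repo root):

```bash
python3 -m http.server 8080 --directory apps/web/dist
```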

5
apps/web/Trunk.toml Normal file
View file

@@ -0,0 +1,5 @@
[build]
target = "index.html"
dist = "dist"
public_url = "/"

17
apps/web/index.html Normal file
View file

@@ -0,0 +1,17 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>every.channel</title>
<meta
name="description"
content="Watch and share free over-the-air TV. Local first, global when you want."
/>
<link data-trunk rel="css" href="style.css" />
<link data-trunk rel="rust" data-wasm-opt="z" />
</head>
<body>
<div id="main"></div>
</body>
</html>

125
apps/web/src/main.rs Normal file
View file

@@ -0,0 +1,125 @@
use dioxus::prelude::*;
use wasm_bindgen_futures::JsFuture;
fn main() {
dioxus::launch(App);
}
#[component]
fn App() -> Element {
let mut link = use_signal(|| "".to_string());
let mut status = use_signal(|| "".to_string());
rsx! {
div { class: "page",
header { class: "top",
div { class: "brand",
div { class: "brand-title", "every.channel" }
div { class: "brand-subtitle",
"Watch and share free over-the-air TV. Local first, global when you want."
}
}
nav { class: "nav",
a { href: "#watch", "Watch" }
a { href: "#directory", "Directory" }
a { href: "#join", "Join" }
a { href: "#about", "Info" }
}
}
div { class: "grid",
section { class: "card section", id: "watch",
div { class: "card-title", "Watch" }
div { class: "h1", "Watch a link" }
div { class: "p",
"Got a link from a friend? Paste it here to copy, then open the desktop app."
}
div { class: "row",
input {
class: "input",
placeholder: "every.channel://watch?...",
value: "{link.read()}",
oninput: move |evt| link.set(evt.value()),
}
button {
class: "btn primary",
onclick: move |_| {
let value = link.read().trim().to_string();
if value.is_empty() {
status.set("Paste a link first".to_string());
return;
}
let mut status = status.clone();
spawn(async move {
match copy_to_clipboard(value).await {
Ok(_) => status.set("Copied! Open the app and paste under Watch a Link.".to_string()),
Err(err) => status.set(format!("Copy failed: {err}")),
}
});
},
"Copy"
}
}
if !status.read().is_empty() {
div { class: "kicker",
span { class: "dot" }
span { "{status.read()}" }
}
}
}
section { class: "card section", id: "directory",
div { class: "card-title", "Directory" }
div { class: "h1", "Find channels from people you trust" }
div { class: "p",
"The directory is opt-in. You choose what to share and who to connect with."
}
div { class: "kicker",
span { class: "dot" }
span { "Enable Nearby or Public reach in the app to find others." }
}
}
section { class: "card section", id: "join",
div { class: "card-title", "Join" }
div { class: "h1", "Run your own" }
div { class: "p",
"Anyone can watch, share, and relay. Works with HDHomeRun, Linux TV tuners, and live streams."
}
div { class: "kicker",
span { class: "dot" }
span { "Desktop app and CLI available now." }
}
}
section { class: "card section", id: "about",
div { class: "card-title", "About" }
div { class: "h1", "A small promise" }
div { class: "p",
"TV signals are just waves in the air. This project makes it easier to pick them up and share them with others."
}
div { class: "kicker",
span { class: "dot" }
span { "Open source. No central server." }
}
}
}
footer { class: "footer",
span { "AGPLv3" }
span { "every.channel" }
a { href: "https://every.channel", "every.channel" }
}
}
}
}
async fn copy_to_clipboard(text: String) -> Result<(), String> {
let window = web_sys::window().ok_or_else(|| "window unavailable".to_string())?;
let clipboard = window.navigator().clipboard();
let promise = clipboard.write_text(&text);
JsFuture::from(promise)
.await
.map_err(|err| format!("clipboard write rejected: {err:?}"))?;
Ok(())
}

285
apps/web/style.css Normal file
View file

@@ -0,0 +1,285 @@
@import url("https://fonts.googleapis.com/css2?family=Space+Grotesk:wght@400;500;600&family=IBM+Plex+Sans:wght@400;500&display=swap");
:root {
color-scheme: light;
--bg: #f8f5f0;
--bg-muted: #f2ede5;
--card: rgba(255, 255, 255, 0.82);
--ink: #1a1814;
--ink-muted: #5c574d;
--border: rgba(26, 24, 20, 0.10);
--shadow: 0 16px 40px rgba(26, 24, 20, 0.10);
--accent: #18a89b;
--accent-ink: #0c6f68;
--warm: #d4915a;
--warm-muted: rgba(232, 160, 92, 0.12);
font-family: "Space Grotesk", "IBM Plex Sans", "Segoe UI", system-ui, sans-serif;
font-feature-settings: "ss01" 1;
}
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
min-height: 100vh;
color: var(--ink);
background: linear-gradient(168deg, #fdfaf5 0%, #f8f5f0 35%, #f4f1ec 70%, #f0ece6 100%);
}
/* Subtle "old TV" nod: soft phosphor glow and faint scanlines */
body::before {
content: "";
position: fixed;
inset: 0;
background-image:
radial-gradient(ellipse 80% 60% at 15% 10%, rgba(232, 160, 92, 0.08), transparent 50%),
radial-gradient(ellipse 60% 50% at 85% 15%, rgba(24, 168, 155, 0.06), transparent 45%),
radial-gradient(ellipse 70% 40% at 50% 90%, rgba(232, 160, 92, 0.05), transparent 50%),
repeating-linear-gradient(
0deg,
transparent 0px,
transparent 2px,
rgba(26, 24, 20, 0.012) 2px,
rgba(26, 24, 20, 0.012) 4px
);
pointer-events: none;
z-index: -1;
}
#main {
min-height: 100vh;
}
.page {
max-width: 1120px;
margin: 0 auto;
padding: 24px clamp(16px, 4vw, 40px) 40px;
display: flex;
flex-direction: column;
gap: 20px;
animation: fadeIn 0.5s ease-out;
}
.top {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: 16px;
flex-wrap: wrap;
}
.brand {
display: flex;
flex-direction: column;
gap: 4px;
}
.brand-title {
font-size: clamp(24px, 2.6vw, 30px);
font-weight: 600;
letter-spacing: -0.025em;
color: var(--ink);
}
.brand-subtitle {
font-size: 12px;
color: var(--ink-muted);
max-width: 44ch;
line-height: 1.4;
}
.nav {
display: flex;
gap: 5px;
flex-wrap: wrap;
justify-content: flex-end;
}
.nav a {
text-decoration: none;
color: var(--ink-muted);
font-size: 12px;
font-weight: 500;
padding: 6px 10px;
border-radius: 999px;
border: 1px solid var(--border);
background: rgba(255, 255, 255, 0.65);
transition: transform 120ms ease, box-shadow 120ms ease, background 120ms ease, color 120ms ease;
}
.nav a:hover {
transform: translateY(-1px);
background: rgba(255, 255, 255, 0.9);
color: var(--ink);
box-shadow: 0 8px 20px rgba(26, 24, 20, 0.06);
}
.grid {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 14px;
}
@media (max-width: 720px) {
.grid {
grid-template-columns: 1fr;
}
}
.card {
border-radius: 16px;
border: 1px solid var(--border);
background: var(--card);
box-shadow: var(--shadow);
padding: 18px;
backdrop-filter: blur(12px);
}
.card-title {
font-size: 10px;
text-transform: uppercase;
letter-spacing: 0.12em;
color: var(--ink-muted);
margin-bottom: 8px;
}
.h1 {
font-size: clamp(16px, 1.8vw, 19px);
font-weight: 600;
letter-spacing: -0.015em;
margin-bottom: 8px;
line-height: 1.3;
}
.p {
font-size: 12px;
line-height: 1.5;
color: var(--ink-muted);
}
.row {
display: grid;
grid-template-columns: 1fr auto;
gap: 6px;
margin-top: 10px;
}
.input {
border: 1px solid var(--border);
border-radius: 10px;
padding: 8px 10px;
font-size: 11px;
background: rgba(242, 237, 229, 0.6);
color: var(--ink);
}
.input:focus {
outline: none;
border-color: rgba(24, 168, 155, 0.4);
box-shadow: 0 0 0 3px rgba(24, 168, 155, 0.08);
}
.btn {
border: 1px solid var(--border);
border-radius: 10px;
padding: 8px 12px;
font-size: 11px;
font-weight: 500;
cursor: pointer;
background: rgba(255, 255, 255, 0.85);
color: var(--ink);
transition: transform 120ms ease, box-shadow 120ms ease, background 120ms ease;
}
.btn:hover {
transform: translateY(-1px);
box-shadow: 0 6px 16px rgba(26, 24, 20, 0.06);
}
.btn.primary {
background: var(--accent-ink);
color: #fff;
border-color: rgba(12, 111, 104, 0.3);
}
.btn.primary:hover {
box-shadow: 0 6px 16px rgba(12, 111, 104, 0.2);
}
.kicker {
margin-top: 8px;
display: inline-flex;
align-items: center;
gap: 6px;
padding: 6px 10px;
border-radius: 999px;
border: 1px solid var(--border);
background: var(--warm-muted);
color: var(--ink-muted);
font-size: 11px;
}
.dot {
width: 6px;
height: 6px;
border-radius: 50%;
background: var(--warm);
box-shadow: 0 0 0 3px rgba(232, 160, 92, 0.2);
}
.section {
scroll-margin-top: 12px;
}
code {
font-family: "IBM Plex Mono", ui-monospace, monospace;
font-size: 0.9em;
background: rgba(26, 24, 20, 0.05);
padding: 2px 5px;
border-radius: 4px;
}
.footer {
margin-top: 6px;
padding-top: 12px;
border-top: 1px solid var(--border);
font-size: 10px;
color: var(--ink-muted);
display: flex;
flex-wrap: wrap;
gap: 10px;
justify-content: space-between;
}
.footer a {
color: var(--ink-muted);
text-decoration: none;
}
.footer a:hover {
color: var(--ink);
}
@keyframes fadeIn {
from {
opacity: 0;
transform: translateY(4px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
@media (max-width: 720px) {
.top {
flex-direction: column;
align-items: stretch;
}
.nav {
justify-content: flex-start;
}
}


13
crates/ec-chopper/Cargo.toml Normal file
View file

@@ -0,0 +1,13 @@
[package]
name = "ec-chopper"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
ac-ffmpeg = "0.19.0"
anyhow.workspace = true
blake3.workspace = true
ec-core = { path = "../ec-core" }
ec-ts = { path = "../ec-ts" }
serde.workspace = true

899
crates/ec-chopper/src/lib.rs Normal file
View file

@@ -0,0 +1,899 @@
//! Deterministic chunking and transcode scaffolding.
use ac_ffmpeg::format::{
demuxer::Demuxer,
io::IO,
muxer::{Muxer, OutputFormat},
};
use anyhow::{anyhow, Context, Result};
use ec_core::{
merkle_root_from_hashes, DeterminismProfile, ManifestBody, StreamId, StreamMetadata,
};
use ec_ts::{SectionAssembler, TimeSyncEngine, TimeSyncUpdate, TsReader};
use serde::{Deserialize, Serialize};
use std::fs;
use std::io::{Read, Write};
use std::path::{Path, PathBuf};
use std::process::{Child, Command, Stdio};
use std::time::Duration;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StreamProbe {
pub index: usize,
pub kind: String,
pub decoder: Option<String>,
pub width: Option<usize>,
pub height: Option<usize>,
pub sample_rate: Option<u32>,
pub channels: Option<u32>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum ChunkFormat {
Fmp4,
MpegTs,
Matroska,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChunkerConfig {
pub output_dir: PathBuf,
pub segment_duration_ms: u64,
pub segment_template: String,
pub format: ChunkFormat,
pub profile: DeterminismProfile,
}
impl ChunkerConfig {
pub fn default_segment_template() -> String {
"segment_%06d.m4s".to_string()
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChunkSegment {
pub index: usize,
pub path: PathBuf,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChunkManifest {
pub output_dir: PathBuf,
pub segments: Vec<ChunkSegment>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TsChunk {
pub index: u64,
pub path: PathBuf,
pub timing: ChunkTiming,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct HashedTsChunk {
pub index: u64,
pub path: PathBuf,
pub timing: ChunkTiming,
pub hash: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct HashedTsChunkManifest {
pub output_dir: PathBuf,
pub chunks: Vec<HashedTsChunk>,
pub merkle_root: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChunkTiming {
pub chunk_index: u64,
pub chunk_start_27mhz: Option<u64>,
pub chunk_duration_27mhz: u64,
pub utc_start_unix: Option<i64>,
pub sync_status: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TsChunkManifest {
pub output_dir: PathBuf,
pub chunks: Vec<TsChunk>,
}
#[derive(Debug, Clone)]
pub enum ChunkerInput {
Url(String),
File(PathBuf),
}
#[derive(Debug)]
pub struct SegmenterProcess {
pub child: Child,
pub output_dir: PathBuf,
}
#[derive(Debug, Clone)]
pub struct FfmpegCliSegmenter {
pub ffmpeg_bin: PathBuf,
}
impl Default for FfmpegCliSegmenter {
fn default() -> Self {
Self {
ffmpeg_bin: PathBuf::from("ffmpeg"),
}
}
}
impl FfmpegCliSegmenter {
pub fn spawn(&self, input: ChunkerInput, config: &ChunkerConfig) -> Result<SegmenterProcess> {
fs::create_dir_all(&config.output_dir)
.with_context(|| format!("failed to create {}", config.output_dir.display()))?;
let input_arg = match input {
ChunkerInput::Url(url) => url,
ChunkerInput::File(path) => path
.to_str()
.ok_or_else(|| anyhow!("invalid input path"))?
.to_string(),
};
let segment_time = format!("{:.3}", config.segment_duration_ms as f64 / 1000.0);
let output_template = config.output_dir.join(&config.segment_template);
let output_template = output_template
.to_str()
.ok_or_else(|| anyhow!("invalid output template path"))?
.to_string();
let mut cmd = Command::new(&self.ffmpeg_bin);
cmd.arg("-hide_banner")
.arg("-loglevel")
.arg("error")
.arg("-nostdin")
.arg("-y")
.arg("-i")
.arg(&input_arg);
for arg in ffmpeg_profile_args(&config.profile) {
cmd.arg(arg);
}
cmd.arg("-f")
.arg("segment")
.arg("-segment_time")
.arg(segment_time)
.arg("-reset_timestamps")
.arg("1")
.arg("-segment_format")
.arg(segment_format_arg(&config.format))
.arg(&output_template)
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::inherit());
let child = cmd
.spawn()
.with_context(|| "failed to spawn ffmpeg".to_string())?;
Ok(SegmenterProcess {
child,
output_dir: config.output_dir.clone(),
})
}
}
pub fn collect_segments(output_dir: &Path) -> Result<ChunkManifest> {
let mut entries = fs::read_dir(output_dir)?
.filter_map(Result::ok)
.filter(|entry| entry.file_type().map(|t| t.is_file()).unwrap_or(false))
.map(|entry| entry.path())
.collect::<Vec<_>>();
entries.sort();
let segments = entries
.into_iter()
.enumerate()
.map(|(index, path)| ChunkSegment { index, path })
.collect();
Ok(ChunkManifest {
output_dir: output_dir.to_path_buf(),
segments,
})
}
pub fn probe_read_stream<T: Read>(stream: T) -> Result<Vec<StreamProbe>> {
let io = IO::from_read_stream(stream);
let demuxer = Demuxer::builder()
.build(io)
.map_err(|err| anyhow!(err.to_string()))?;
let demuxer = demuxer
.find_stream_info(Some(Duration::from_secs(2)))
.map_err(|(_, err)| anyhow!(err.to_string()))?;
let mut probes = Vec::new();
for (index, stream) in demuxer.streams().iter().enumerate() {
let params = stream.codec_parameters();
let mut probe = StreamProbe {
index,
kind: if params.is_video_codec() {
"video".to_string()
} else if params.is_audio_codec() {
"audio".to_string()
} else if params.is_subtitle_codec() {
"subtitle".to_string()
} else {
"data".to_string()
},
decoder: params.decoder_name().map(|name| name.to_string()),
width: None,
height: None,
sample_rate: None,
channels: None,
};
if let Some(video) = params.as_video_codec_parameters() {
probe.width = Some(video.width());
probe.height = Some(video.height());
}
if let Some(audio) = params.as_audio_codec_parameters() {
probe.sample_rate = Some(audio.sample_rate());
probe.channels = Some(audio.channel_layout().channels());
}
probes.push(probe);
}
Ok(probes)
}
pub fn analyze_ts_time<T: Read>(
stream: T,
chunk_duration_ms: u64,
max_events: usize,
) -> Result<Vec<TimeSyncUpdate>> {
let mut reader = TsReader::new(stream);
let mut assembler = SectionAssembler::default();
let mut engine = TimeSyncEngine::new(chunk_duration_ms);
let mut events = Vec::new();
while let Some(packet) = reader.read_packet()? {
for update in engine.ingest_packet(&packet, &mut assembler) {
events.push(update);
if events.len() >= max_events {
return Ok(events);
}
}
}
Ok(events)
}
pub fn chunk_ts_stream<T: Read>(
stream: T,
output_dir: &Path,
chunk_duration_ms: u64,
max_chunks: Option<usize>,
) -> Result<TsChunkManifest> {
let mut chunks = Vec::new();
chunk_ts_stream_live(stream, output_dir, chunk_duration_ms, max_chunks, |chunk| {
chunks.push(chunk);
Ok(())
})?;
Ok(TsChunkManifest {
output_dir: output_dir.to_path_buf(),
chunks,
})
}
pub fn chunk_ts_stream_live<T: Read, F: FnMut(TsChunk) -> Result<()>>(
stream: T,
output_dir: &Path,
chunk_duration_ms: u64,
max_chunks: Option<usize>,
mut on_chunk: F,
) -> Result<()> {
fs::create_dir_all(output_dir)
.with_context(|| format!("failed to create {}", output_dir.display()))?;
let mut reader = TsReader::new(stream);
let mut assembler = SectionAssembler::default();
let mut engine = TimeSyncEngine::new(chunk_duration_ms);
let mut current_index: Option<u64> = None;
let mut current_file: Option<std::fs::File> = None;
let mut current_timing: Option<ChunkTiming> = None;
let mut emitted = 0usize;
let mut close_and_emit =
|index: u64, timing: ChunkTiming, file: std::fs::File| -> Result<bool> {
drop(file);
let path = chunk_path(output_dir, index);
on_chunk(TsChunk {
index,
path,
timing,
})?;
emitted += 1;
Ok(max_chunks.map(|limit| emitted >= limit).unwrap_or(false))
};
while let Some(packet) = reader.read_packet()? {
let updates = engine.ingest_packet(&packet, &mut assembler);
for update in updates {
if update.discontinuity {
if let (Some(index), Some(timing), Some(file)) = (
current_index.take(),
current_timing.take(),
current_file.take(),
) {
if close_and_emit(index, timing, file)? {
return Ok(());
}
}
}
if let Some(index) = update.chunk_index {
if current_index != Some(index) {
if let (Some(prev_index), Some(timing), Some(file)) = (
current_index.take(),
current_timing.take(),
current_file.take(),
) {
if close_and_emit(prev_index, timing, file)? {
return Ok(());
}
}
let path = chunk_path(output_dir, index);
let file = std::fs::File::create(&path)
.with_context(|| format!("failed to create {}", path.display()))?;
current_file = Some(file);
current_index = Some(index);
current_timing = Some(ChunkTiming {
chunk_index: index,
chunk_start_27mhz: update.chunk_start_27mhz,
chunk_duration_27mhz: chunk_duration_ms * 27_000,
utc_start_unix: update.utc_start_unix,
sync_status: if update.synced {
"synced".to_string()
} else {
"unsynced".to_string()
},
});
}
}
}
if let Some(file) = current_file.as_mut() {
file.write_all(packet.as_bytes())?;
}
}
if let (Some(index), Some(timing), Some(file)) = (
current_index.take(),
current_timing.take(),
current_file.take(),
) {
let _ = close_and_emit(index, timing, file);
}
Ok(())
}
fn chunk_path(output_dir: &Path, index: u64) -> PathBuf {
output_dir.join(format!("chunk_{index:010}.ts"))
}
pub fn hash_file_blake3(path: &Path) -> Result<String> {
let mut file =
fs::File::open(path).with_context(|| format!("failed to open {}", path.display()))?;
let mut hasher = blake3::Hasher::new();
let mut buffer = [0u8; 8192];
loop {
let read = file.read(&mut buffer)?;
if read == 0 {
break;
}
hasher.update(&buffer[..read]);
}
Ok(hasher.finalize().to_hex().to_string())
}
pub fn chunk_stream_ffmpeg<T: Read>(
stream: T,
output_dir: &Path,
chunk_duration_ms: u64,
max_chunks: Option<usize>,
) -> Result<TsChunkManifest> {
fs::create_dir_all(output_dir)
.with_context(|| format!("failed to create {}", output_dir.display()))?;
let io = IO::from_read_stream(stream);
let demuxer = Demuxer::builder()
.build(io)
.map_err(|err| anyhow!(err.to_string()))?;
let demuxer = demuxer
.find_stream_info(Some(Duration::from_secs(2)))
.map_err(|(_, err)| anyhow!(err.to_string()))?;
let stream_info = demuxer
.streams()
.iter()
.map(|stream| (stream.codec_parameters(), stream.time_base()))
.collect::<Vec<_>>();
let mut demuxer = demuxer.into_demuxer();
let chunk_duration_micros = chunk_duration_ms as i64 * 1000;
let mut chunks = Vec::new();
let mut current_index: Option<u64> = None;
let mut current_muxer: Option<Muxer<std::fs::File>> = None;
let mut current_timing: Option<ChunkTiming> = None;
loop {
let Some(packet) = demuxer.take().map_err(|err| anyhow!(err.to_string()))? else {
break;
};
let ts = packet
.pts()
.as_micros()
.or_else(|| packet.dts().as_micros());
let chunk_index = ts
.and_then(|micros| {
if micros < 0 {
None
} else {
Some((micros / chunk_duration_micros) as u64)
}
})
.or(current_index);
if let Some(index) = chunk_index {
if current_index != Some(index) {
if let Some(mut muxer) = current_muxer.take() {
muxer.flush().map_err(|err| anyhow!(err.to_string()))?;
let _ = muxer.close();
}
if let (Some(prev_index), Some(timing)) =
(current_index.take(), current_timing.take())
{
chunks.push(TsChunk {
index: prev_index,
path: chunk_path(output_dir, prev_index),
timing,
});
// Check the limit before opening the next chunk so we stop with exactly
// `limit` chunks instead of leaving a trailing empty chunk file behind
// (matching the behavior of chunk_stream_ffmpeg_live below).
if let Some(limit) = max_chunks {
if chunks.len() >= limit {
return Ok(TsChunkManifest {
output_dir: output_dir.to_path_buf(),
chunks,
});
}
}
}
let path = chunk_path(output_dir, index);
let file = std::fs::File::create(&path)
.with_context(|| format!("failed to create {}", path.display()))?;
let io = IO::from_write_stream(file);
let mut builder = Muxer::builder();
for (params, _) in &stream_info {
builder
.add_stream(params)
.map_err(|err| anyhow!(err.to_string()))?;
}
for (stream, (_, tb)) in builder.streams_mut().iter_mut().zip(stream_info.iter()) {
stream.set_time_base(*tb);
}
let format = OutputFormat::find_by_name("mpegts")
.ok_or_else(|| anyhow!("mpegts format not found"))?;
let muxer = builder
.interleaved(true)
.build(io, format)
.map_err(|err| anyhow!(err.to_string()))?;
current_muxer = Some(muxer);
current_index = Some(index);
current_timing = Some(ChunkTiming {
chunk_index: index,
chunk_start_27mhz: ts.map(|micros| (micros as u64) * 27),
chunk_duration_27mhz: chunk_duration_ms * 27_000,
utc_start_unix: None,
sync_status: "pts".to_string(),
});
}
}
if let Some(muxer) = current_muxer.as_mut() {
let packet = packet.with_time_base(ac_ffmpeg::time::TimeBase::MICROSECONDS);
muxer.push(packet).map_err(|err| anyhow!(err.to_string()))?;
}
}
if let Some(mut muxer) = current_muxer.take() {
let _ = muxer.flush();
let _ = muxer.close();
}
if let (Some(index), Some(timing)) = (current_index.take(), current_timing.take()) {
chunks.push(TsChunk {
index,
path: chunk_path(output_dir, index),
timing,
});
}
Ok(TsChunkManifest {
output_dir: output_dir.to_path_buf(),
chunks,
})
}
pub fn hash_ts_chunks(manifest: &TsChunkManifest) -> Result<HashedTsChunkManifest> {
let mut ordered = manifest.chunks.clone();
ordered.sort_by_key(|chunk| chunk.index);
let mut hashes = Vec::with_capacity(ordered.len());
let mut chunks = Vec::with_capacity(ordered.len());
for chunk in ordered {
let hash = hash_file_blake3(&chunk.path)?;
hashes.push(hash.clone());
chunks.push(HashedTsChunk {
index: chunk.index,
path: chunk.path.clone(),
timing: chunk.timing.clone(),
hash,
});
}
let merkle_root = merkle_root_from_hashes(&hashes)?;
Ok(HashedTsChunkManifest {
output_dir: manifest.output_dir.clone(),
chunks,
merkle_root,
})
}
pub fn build_manifest_body_for_chunks(
stream_id: StreamId,
epoch_id: impl Into<String>,
chunk_duration_ms: u64,
chunk_start_index: u64,
encoder_profile_id: impl Into<String>,
created_unix_ms: u64,
metadata: Vec<StreamMetadata>,
chunk_hashes: &[String],
) -> Result<ManifestBody> {
let merkle_root = merkle_root_from_hashes(chunk_hashes)?;
Ok(ManifestBody {
stream_id,
epoch_id: epoch_id.into(),
chunk_duration_ms,
total_chunks: chunk_hashes.len() as u64,
chunk_start_index,
encoder_profile_id: encoder_profile_id.into(),
merkle_root,
created_unix_ms,
metadata,
chunk_hashes: chunk_hashes.to_vec(),
variants: None,
})
}
pub fn manifest_for_ts_chunks(
stream_id: StreamId,
epoch_id: impl Into<String>,
chunk_duration_ms: u64,
chunk_start_index: u64,
encoder_profile_id: impl Into<String>,
created_unix_ms: u64,
metadata: Vec<StreamMetadata>,
manifest: &TsChunkManifest,
) -> Result<(ManifestBody, HashedTsChunkManifest)> {
let hashed = hash_ts_chunks(manifest)?;
let chunk_hashes = hashed
.chunks
.iter()
.map(|chunk| chunk.hash.clone())
.collect::<Vec<_>>();
let body = build_manifest_body_for_chunks(
stream_id,
epoch_id,
chunk_duration_ms,
chunk_start_index,
encoder_profile_id,
created_unix_ms,
metadata,
&chunk_hashes,
)?;
Ok((body, hashed))
}
pub fn chunk_stream_ffmpeg_live<T: Read, F: FnMut(TsChunk) -> Result<()>>(
stream: T,
output_dir: &Path,
chunk_duration_ms: u64,
max_chunks: Option<usize>,
mut on_chunk: F,
) -> Result<()> {
fs::create_dir_all(output_dir)
.with_context(|| format!("failed to create {}", output_dir.display()))?;
let io = IO::from_read_stream(stream);
let demuxer = Demuxer::builder()
.build(io)
.map_err(|err| anyhow!(err.to_string()))?;
let demuxer = demuxer
.find_stream_info(Some(Duration::from_secs(2)))
.map_err(|(_, err)| anyhow!(err.to_string()))?;
let stream_info = demuxer
.streams()
.iter()
.map(|stream| (stream.codec_parameters(), stream.time_base()))
.collect::<Vec<_>>();
let mut demuxer = demuxer.into_demuxer();
let chunk_duration_micros = chunk_duration_ms as i64 * 1000;
let mut current_index: Option<u64> = None;
let mut current_muxer: Option<Muxer<std::fs::File>> = None;
let mut current_timing: Option<ChunkTiming> = None;
let mut emitted = 0usize;
loop {
let Some(packet) = demuxer.take().map_err(|err| anyhow!(err.to_string()))? else {
break;
};
let ts = packet
.pts()
.as_micros()
.or_else(|| packet.dts().as_micros());
let chunk_index = ts
.and_then(|micros| {
if micros < 0 {
None
} else {
Some((micros / chunk_duration_micros) as u64)
}
})
.or(current_index);
if let Some(index) = chunk_index {
if current_index != Some(index) {
if let Some(mut muxer) = current_muxer.take() {
muxer.flush().map_err(|err| anyhow!(err.to_string()))?;
let _ = muxer.close();
}
if let (Some(prev_index), Some(timing)) =
(current_index.take(), current_timing.take())
{
let chunk = TsChunk {
index: prev_index,
path: chunk_path(output_dir, prev_index),
timing,
};
on_chunk(chunk)?;
emitted += 1;
if let Some(limit) = max_chunks {
if emitted >= limit {
return Ok(());
}
}
}
let path = chunk_path(output_dir, index);
let file = std::fs::File::create(&path)
.with_context(|| format!("failed to create {}", path.display()))?;
let io = IO::from_write_stream(file);
let mut builder = Muxer::builder();
for (params, _) in &stream_info {
builder
.add_stream(params)
.map_err(|err| anyhow!(err.to_string()))?;
}
for (stream, (_, tb)) in builder.streams_mut().iter_mut().zip(stream_info.iter()) {
stream.set_time_base(*tb);
}
let format = OutputFormat::find_by_name("mpegts")
.ok_or_else(|| anyhow!("mpegts format not found"))?;
let muxer = builder
.interleaved(true)
.build(io, format)
.map_err(|err| anyhow!(err.to_string()))?;
current_muxer = Some(muxer);
current_index = Some(index);
current_timing = Some(ChunkTiming {
chunk_index: index,
chunk_start_27mhz: ts.map(|micros| (micros as u64) * 27),
chunk_duration_27mhz: chunk_duration_ms * 27_000,
utc_start_unix: None,
sync_status: "pts".to_string(),
});
}
}
if let Some(muxer) = current_muxer.as_mut() {
let packet = packet.with_time_base(ac_ffmpeg::time::TimeBase::MICROSECONDS);
muxer.push(packet).map_err(|err| anyhow!(err.to_string()))?;
}
}
if let Some(mut muxer) = current_muxer.take() {
let _ = muxer.flush();
let _ = muxer.close();
}
if let (Some(index), Some(timing)) = (current_index.take(), current_timing.take()) {
let chunk = TsChunk {
index,
path: chunk_path(output_dir, index),
timing,
};
on_chunk(chunk)?;
}
Ok(())
}
fn segment_format_arg(format: &ChunkFormat) -> &'static str {
match format {
ChunkFormat::Fmp4 => "mp4",
ChunkFormat::MpegTs => "mpegts",
ChunkFormat::Matroska => "matroska",
}
}
pub fn ffmpeg_profile_args(profile: &DeterminismProfile) -> Vec<String> {
let mut args = Vec::new();
if !profile.encoder.is_empty() {
args.push("-c:v".to_string());
args.push(profile.encoder.clone());
}
for arg in &profile.encoder_args {
args.push(arg.clone());
}
args
}
pub fn deterministic_h264_profile() -> DeterminismProfile {
DeterminismProfile {
name: "deterministic-h264-aac".to_string(),
description: "Single-threaded H.264 + AAC with fixed GOP and bitexact flags".to_string(),
encoder: "libx264".to_string(),
encoder_args: vec![
"-c:a".to_string(),
"aac".to_string(),
"-b:a".to_string(),
"128k".to_string(),
"-ac".to_string(),
"2".to_string(),
"-ar".to_string(),
"48000".to_string(),
"-pix_fmt".to_string(),
"yuv420p".to_string(),
"-g".to_string(),
"60".to_string(),
"-keyint_min".to_string(),
"60".to_string(),
"-sc_threshold".to_string(),
"0".to_string(),
"-bf".to_string(),
"0".to_string(),
"-threads".to_string(),
"1".to_string(),
"-fflags".to_string(),
"+bitexact".to_string(),
"-flags:v".to_string(),
"+bitexact".to_string(),
"-flags:a".to_string(),
"+bitexact".to_string(),
],
chunk_duration_ms: 2000,
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Cursor;
fn ts_packet_with_pcr(pid: u16, cc: u8, pcr_27mhz: u64) -> [u8; ec_ts::TS_PACKET_SIZE] {
// Match ec_ts parser expectations.
let base = pcr_27mhz / 300;
let ext = pcr_27mhz % 300;
let mut pcr = [0u8; 6];
pcr[0] = ((base >> 25) & 0xFF) as u8;
pcr[1] = ((base >> 17) & 0xFF) as u8;
pcr[2] = ((base >> 9) & 0xFF) as u8;
pcr[3] = ((base >> 1) & 0xFF) as u8;
pcr[4] = (((base & 0x1) << 7) as u8) | 0x7E | (((ext >> 8) & 0x1) as u8);
pcr[5] = (ext & 0xFF) as u8;
let mut data = [0u8; ec_ts::TS_PACKET_SIZE];
data[0] = 0x47;
data[1] = ((pid >> 8) as u8) & 0x1F;
data[2] = (pid & 0xFF) as u8;
data[3] = (2 << 4) | (cc & 0x0F); // adaptation only
data[4] = 7;
data[5] = 0x10;
data[6..12].copy_from_slice(&pcr);
data
}
#[test]
fn segment_format_mapping_is_correct() {
assert_eq!(segment_format_arg(&ChunkFormat::Fmp4), "mp4");
assert_eq!(segment_format_arg(&ChunkFormat::MpegTs), "mpegts");
assert_eq!(segment_format_arg(&ChunkFormat::Matroska), "matroska");
}
#[test]
fn deterministic_profile_args_are_single_threaded_and_bitexact() {
let profile = deterministic_h264_profile();
let args = ffmpeg_profile_args(&profile);
assert!(args.iter().any(|a| a == "-threads"));
assert!(args.iter().any(|a| a == "1"));
assert!(args.iter().any(|a| a == "+bitexact"));
assert!(args.iter().any(|a| a == "libx264"));
}
#[test]
fn hash_file_blake3_matches_direct_hash() {
let dir = std::env::temp_dir().join(format!("ec-chopper-hash-{}", std::process::id()));
let _ = fs::create_dir_all(&dir);
let path = dir.join("x.bin");
fs::write(&path, b"hello").unwrap();
let h = hash_file_blake3(&path).unwrap();
assert_eq!(h, blake3::hash(b"hello").to_hex().to_string());
let _ = fs::remove_file(&path);
}
#[test]
fn chunk_ts_stream_emits_expected_chunk_indices() {
let chunk_ms = 1000u64;
let dir = std::env::temp_dir().join(format!("ec-chopper-chunks-{}", std::process::id()));
let _ = fs::remove_dir_all(&dir);
fs::create_dir_all(&dir).unwrap();
let mut bytes = Vec::new();
bytes.extend_from_slice(&ts_packet_with_pcr(0x0100, 0, 0));
bytes.extend_from_slice(&ts_packet_with_pcr(0x0100, 1, 27_000_000));
bytes.extend_from_slice(&ts_packet_with_pcr(0x0100, 2, 54_000_000));
let manifest = chunk_ts_stream(Cursor::new(bytes), &dir, chunk_ms, None).unwrap();
let indices = manifest.chunks.iter().map(|c| c.index).collect::<Vec<_>>();
assert_eq!(indices, vec![0, 1, 2]);
for chunk in &manifest.chunks {
let data = fs::read(&chunk.path).unwrap();
assert_eq!(data.len() % ec_ts::TS_PACKET_SIZE, 0);
}
let _ = fs::remove_dir_all(&dir);
}
#[test]
fn hashed_manifest_merkle_root_matches_core() {
let dir = std::env::temp_dir().join(format!("ec-chopper-merkle-{}", std::process::id()));
let _ = fs::remove_dir_all(&dir);
fs::create_dir_all(&dir).unwrap();
let mut bytes = Vec::new();
bytes.extend_from_slice(&ts_packet_with_pcr(0x0100, 0, 0));
bytes.extend_from_slice(&ts_packet_with_pcr(0x0100, 1, 27_000_000));
let manifest = chunk_ts_stream(Cursor::new(bytes), &dir, 1000, None).unwrap();
let hashed = hash_ts_chunks(&manifest).unwrap();
let hashes = hashed
.chunks
.iter()
.map(|c| c.hash.clone())
.collect::<Vec<_>>();
let expected = ec_core::merkle_root_from_hashes(&hashes).unwrap();
assert_eq!(hashed.merkle_root, expected);
let _ = fs::remove_dir_all(&dir);
}
}
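Putting the pieces above together, a minimal sketch of the segmenter flow (mirroring the CLI's chunk path; the input and output paths here are illustrative):

```rust
use std::path::PathBuf;

use ec_chopper::{
    collect_segments, deterministic_h264_profile, ChunkFormat, ChunkerConfig, ChunkerInput,
    FfmpegCliSegmenter,
};

fn main() -> anyhow::Result<()> {
    let profile = deterministic_h264_profile();
    let config = ChunkerConfig {
        output_dir: PathBuf::from("tmp/segments"),
        segment_duration_ms: profile.chunk_duration_ms,
        segment_template: ChunkerConfig::default_segment_template(),
        format: ChunkFormat::Fmp4,
        profile,
    };
    // Spawn ffmpeg, wait for it to finish, then enumerate the segments it wrote.
    let segmenter = FfmpegCliSegmenter::default();
    let mut process = segmenter.spawn(ChunkerInput::File(PathBuf::from("stream.ts")), &config)?;
    let status = process.child.wait()?;
    anyhow::ensure!(status.success(), "ffmpeg exited with {status}");
    let manifest = collect_segments(&process.output_dir)?;
    println!("{} segments in {}", manifest.segments.len(), manifest.output_dir.display());
    Ok(())
}
```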

17
crates/ec-cli/Cargo.toml Normal file
View file

@@ -0,0 +1,17 @@
[package]
name = "ec-cli"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
blake3.workspace = true
clap.workspace = true
ec-chopper = { path = "../ec-chopper" }
ec-core = { path = "../ec-core" }
ec-hdhomerun = { path = "../ec-hdhomerun" }
ec-linux-iptv = { path = "../ec-linux-iptv" }
serde_json.workspace = true
tracing.workspace = true
tracing-subscriber.workspace = true

379
crates/ec-cli/src/main.rs Normal file
View file

@@ -0,0 +1,379 @@
use anyhow::{anyhow, Context, Result};
use blake3;
use clap::{Parser, Subcommand};
use std::fs::{self, File};
use std::io::{Read, Write};
use std::path::{Path, PathBuf};
#[derive(Parser, Debug)]
#[command(name = "every.channel")]
#[command(about = "CLI for the every.channel mesh", long_about = None)]
struct Cli {
#[command(subcommand)]
command: Commands,
}
#[derive(Subcommand, Debug)]
enum Commands {
/// Discover HDHomeRun devices on the network.
Discover,
/// Fetch channel lineup for a device.
Lineup {
/// Hostname or IP (e.g. 192.168.1.10 or hdhomerun.local).
#[arg(long)]
host: Option<String>,
/// Device ID (used as <deviceid>.local).
#[arg(long)]
device_id: Option<String>,
},
/// Parse lineup JSON from a file on disk.
LineupFile { path: String },
/// Open an HDHomeRun stream and dump MPEG-TS to a file.
StreamDump {
/// Hostname or IP (e.g. 192.168.1.10).
#[arg(long)]
host: Option<String>,
/// Device ID (used as <deviceid>.local).
#[arg(long)]
device_id: Option<String>,
/// Guide number (e.g. 8.1).
#[arg(long)]
channel: Option<String>,
/// Guide name (e.g. KQED).
#[arg(long)]
name: Option<String>,
/// Optional duration in seconds (if supported by the tuner URL).
#[arg(long)]
duration: Option<u32>,
/// Output path for the transport stream.
#[arg(long, default_value = "stream.ts")]
output: PathBuf,
},
/// Chunk an input stream using ffmpeg.
Chunk {
/// Input URL or file path.
input: String,
/// Output directory for segments.
output_dir: PathBuf,
},
/// Probe a media file using ac-ffmpeg.
Probe {
/// Input file path.
input: String,
},
/// Analyze TS timing and chunk boundaries.
TsSync {
/// Input TS file.
input: String,
/// Chunk duration in ms.
#[arg(long, default_value_t = 2000)]
chunk_ms: u64,
/// Maximum number of events to print.
#[arg(long, default_value_t = 50)]
max_events: usize,
},
/// Re-encode the same input multiple times and compare segment hashes.
DeterminismTest {
/// Input file path (TS or other supported by ffmpeg).
input: String,
/// Output directory root (runs will be placed under run-*/).
output_dir: PathBuf,
/// Number of runs to compare.
#[arg(long, default_value_t = 2)]
runs: usize,
},
/// Open a Linux DVB DVR device and dump MPEG-TS to a file.
LinuxDvbDump {
/// DVB adapter index.
#[arg(long, default_value_t = 0)]
adapter: u32,
/// DVR device index.
#[arg(long, default_value_t = 0)]
dvr: u32,
/// Optional tune command (repeat for each arg).
#[arg(long, allow_hyphen_values = true)]
tune_cmd: Vec<String>,
/// Optional tune wait (ms).
#[arg(long)]
tune_wait_ms: Option<u64>,
/// Output path for the transport stream.
#[arg(long, default_value = "linux-dvb.ts")]
output: PathBuf,
},
}
fn main() -> Result<()> {
tracing_subscriber::fmt().init();
let cli = Cli::parse();
match cli.command {
Commands::Discover => {
let devices = ec_hdhomerun::discover()?;
println!("{}", serde_json::to_string_pretty(&devices)?);
}
Commands::Lineup { host, device_id } => {
let device = resolve_device(host, device_id)?;
let lineup = ec_hdhomerun::fetch_lineup(&device)?;
println!("{}", serde_json::to_string_pretty(&lineup)?);
}
Commands::LineupFile { path } => {
let bytes = fs::read(&path)?;
let lineup = ec_hdhomerun::lineup_from_json_bytes(&bytes, None)?;
println!("{}", serde_json::to_string_pretty(&lineup)?);
}
Commands::StreamDump {
host,
device_id,
channel,
name,
duration,
output,
} => {
let device = resolve_device(host, device_id)?;
let lineup = ec_hdhomerun::fetch_lineup(&device)?;
let entry = if let Some(channel) = channel {
ec_hdhomerun::find_lineup_entry_by_number(&lineup, &channel)
.or_else(|| ec_hdhomerun::find_lineup_entry_by_name(&lineup, &channel))
.ok_or_else(|| anyhow!("channel not found: {channel}"))?
} else if let Some(name) = name {
ec_hdhomerun::find_lineup_entry_by_name(&lineup, &name)
.ok_or_else(|| anyhow!("channel not found: {name}"))?
} else {
return Err(anyhow!("--channel or --name required"));
};
let mut stream = ec_hdhomerun::open_stream_entry(entry, duration)?;
let mut file = File::create(&output)
.with_context(|| format!("failed to create {}", output.display()))?;
let mut buf = [0u8; 8192];
loop {
let read = stream.read(&mut buf)?;
if read == 0 {
break;
}
file.write_all(&buf[..read])?;
}
}
Commands::Chunk { input, output_dir } => {
let profile = ec_chopper::deterministic_h264_profile();
let config = ec_chopper::ChunkerConfig {
output_dir,
segment_duration_ms: profile.chunk_duration_ms,
segment_template: ec_chopper::ChunkerConfig::default_segment_template(),
format: ec_chopper::ChunkFormat::Fmp4,
profile,
};
let input = if input.starts_with("http://") || input.starts_with("https://") {
ec_chopper::ChunkerInput::Url(input)
} else {
ec_chopper::ChunkerInput::File(PathBuf::from(input))
};
let segmenter = ec_chopper::FfmpegCliSegmenter::default();
let mut process = segmenter.spawn(input, &config)?;
let status = process.child.wait()?;
if !status.success() {
return Err(anyhow!("ffmpeg exited with status {status}"));
}
let manifest = ec_chopper::collect_segments(&process.output_dir)?;
println!("{}", serde_json::to_string_pretty(&manifest)?);
}
Commands::Probe { input } => {
let file = File::open(&input).with_context(|| format!("failed to open {}", input))?;
let probes = ec_chopper::probe_read_stream(file)?;
println!("{}", serde_json::to_string_pretty(&probes)?);
}
Commands::TsSync {
input,
chunk_ms,
max_events,
} => {
let file = File::open(&input).with_context(|| format!("failed to open {}", input))?;
let events = ec_chopper::analyze_ts_time(file, chunk_ms, max_events)?;
println!("{}", serde_json::to_string_pretty(&events)?);
}
Commands::DeterminismTest {
input,
output_dir,
runs,
} => {
if runs < 1 {
return Err(anyhow!("runs must be >= 1"));
}
let profile = ec_chopper::deterministic_h264_profile();
let format = ec_chopper::ChunkFormat::Fmp4;
let template = ec_chopper::ChunkerConfig::default_segment_template();
let mut baseline: Option<Vec<String>> = None;
for run in 0..runs {
let run_dir = output_dir.join(format!("run-{}", run + 1));
let _ = fs::remove_dir_all(&run_dir);
let config = ec_chopper::ChunkerConfig {
output_dir: run_dir.clone(),
segment_duration_ms: profile.chunk_duration_ms,
segment_template: template.clone(),
format: format.clone(),
profile: profile.clone(),
};
let input_spec = if input.starts_with("http://") || input.starts_with("https://") {
ec_chopper::ChunkerInput::Url(input.clone())
} else {
ec_chopper::ChunkerInput::File(PathBuf::from(&input))
};
let segmenter = ec_chopper::FfmpegCliSegmenter::default();
let mut process = segmenter.spawn(input_spec, &config)?;
let status = process.child.wait()?;
if !status.success() {
return Err(anyhow!("ffmpeg exited with status {status}"));
}
let hashes = hash_segments(&process.output_dir)?;
match baseline.as_ref() {
None => {
baseline = Some(hashes);
println!(
"run {}: baseline ({}) segments",
run + 1,
baseline.as_ref().unwrap().len()
);
}
Some(base) => {
let mismatches = compare_hashes(base, &hashes);
if mismatches > 0 {
return Err(anyhow!(
"determinism mismatch on run {} ({} mismatches)",
run + 1,
mismatches
));
}
println!("run {}: matched baseline", run + 1);
}
}
}
}
Commands::LinuxDvbDump {
adapter,
dvr,
tune_cmd,
tune_wait_ms,
output,
} => {
let config = ec_linux_iptv::LinuxDvbConfig {
adapter,
frontend: 0,
dvr,
tune_command: if tune_cmd.is_empty() {
None
} else {
Some(tune_cmd)
},
tune_timeout_ms: tune_wait_ms,
};
let mut stream = ec_linux_iptv::open_stream(&config)?;
let mut file = File::create(&output)
.with_context(|| format!("failed to create {}", output.display()))?;
let mut buf = [0u8; 8192];
loop {
let read = stream.read(&mut buf)?;
if read == 0 {
break;
}
file.write_all(&buf[..read])?;
}
}
}
Ok(())
}
fn hash_segments(output_dir: &Path) -> Result<Vec<String>> {
let manifest = ec_chopper::collect_segments(output_dir)?;
let mut hashes = Vec::new();
for segment in manifest.segments {
let bytes = fs::read(&segment.path)
.with_context(|| format!("failed to read {}", segment.path.display()))?;
let hash = blake3::hash(&bytes);
hashes.push(hash.to_hex().to_string());
}
Ok(hashes)
}
fn compare_hashes(base: &[String], candidate: &[String]) -> usize {
let mut mismatches = 0usize;
let max_len = base.len().max(candidate.len());
for idx in 0..max_len {
let base_hash = base.get(idx);
let candidate_hash = candidate.get(idx);
if base_hash != candidate_hash {
mismatches += 1;
}
}
mismatches
}
fn resolve_device(
host: Option<String>,
device_id: Option<String>,
) -> Result<ec_hdhomerun::HdhomerunDevice> {
if let Some(host) = host {
ec_hdhomerun::discover_from_host(&host)
} else if let Some(device_id) = device_id {
let host = format!("{device_id}.local");
ec_hdhomerun::discover_from_host(&host)
} else {
let mut devices = ec_hdhomerun::discover()?;
devices
.pop()
.ok_or_else(|| anyhow!("no HDHomeRun devices found"))
}
}
#[cfg(test)]
mod tests {
use super::*;
use clap::Parser;
#[test]
fn clap_parses_common_subcommands() {
let cli = Cli::try_parse_from(["every.channel", "discover"]).unwrap();
assert!(matches!(cli.command, Commands::Discover));
let cli = Cli::try_parse_from([
"every.channel",
"ts-sync",
"input.ts",
"--chunk-ms",
"1000",
"--max-events",
"5",
])
.unwrap();
assert!(matches!(cli.command, Commands::TsSync { .. }));
let cli = Cli::try_parse_from([
"every.channel",
"linux-dvb-dump",
"--adapter",
"0",
"--dvr",
"0",
"--tune-cmd",
"dvbv5-zap",
"--tune-cmd",
"-r",
"--tune-cmd",
"KQED",
])
.unwrap();
assert!(matches!(cli.command, Commands::LinuxDvbDump { .. }));
}
}
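For orientation, some hedged invocations of the subcommands above (assumes the workspace builds `ec-cli` and that an HDHomeRun is reachable; hosts, channels, and paths are illustrative):

```bash
cargo run -p ec-cli -- discover
cargo run -p ec-cli -- lineup --host 192.168.1.10
cargo run -p ec-cli -- stream-dump --host 192.168.1.10 --channel 8.1 --output kqed.ts
cargo run -p ec-cli -- ts-sync kqed.ts --chunk-ms 2000 --max-events 10
cargo run -p ec-cli -- determinism-test kqed.ts tmp/det --runs 3
```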

10
crates/ec-core/Cargo.toml Normal file
View file

@@ -0,0 +1,10 @@
[package]
name = "ec-core"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
serde.workspace = true
blake3.workspace = true
serde_json.workspace = true

463
crates/ec-core/src/lib.rs Normal file
View file

@@ -0,0 +1,463 @@
//! Core types shared across every.channel.
use serde::{Deserialize, Serialize};
use std::fmt;
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct ChannelId(pub String);
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct DeviceId(pub String);
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct StreamId(pub String);
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StreamDescriptor {
pub id: StreamId,
pub title: String,
pub number: Option<String>,
pub source: String,
pub metadata: Vec<StreamMetadata>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StreamMetadata {
pub key: String,
pub value: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BroadcastId {
pub standard: String,
pub transport_stream_id: Option<u16>,
pub program_number: Option<u16>,
pub callsign: Option<String>,
pub region: Option<String>,
pub frequency: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SourceId {
pub kind: String,
pub device_id: Option<String>,
pub channel: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StreamKey {
pub version: u16,
pub broadcast: Option<BroadcastId>,
pub source: Option<SourceId>,
pub profile: Option<String>,
pub variant: Option<String>,
}
impl StreamKey {
pub fn to_stream_id(&self) -> StreamId {
let mut parts = vec![
"ec".to_string(),
"stream".to_string(),
format!("v{}", self.version),
];
if let Some(broadcast) = &self.broadcast {
parts.push("broadcast".to_string());
parts.push(sanitize(&broadcast.standard));
if let Some(tsid) = broadcast.transport_stream_id {
parts.push(format!("tsid-{tsid}"));
}
if let Some(program) = broadcast.program_number {
parts.push(format!("program-{program}"));
}
if let Some(callsign) = &broadcast.callsign {
parts.push(format!("callsign-{}", sanitize(callsign)));
}
if let Some(region) = &broadcast.region {
parts.push(format!("region-{}", sanitize(region)));
}
if let Some(freq) = &broadcast.frequency {
parts.push(format!("freq-{}", sanitize(freq)));
}
} else if let Some(source) = &self.source {
parts.push("source".to_string());
parts.push(sanitize(&source.kind));
if let Some(device) = &source.device_id {
parts.push(format!("device-{}", sanitize(device)));
}
if let Some(channel) = &source.channel {
parts.push(format!("channel-{}", sanitize(channel)));
}
} else {
parts.push("unknown".to_string());
}
if let Some(profile) = &self.profile {
parts.push(format!("profile-{}", sanitize(profile)));
}
if let Some(variant) = &self.variant {
parts.push(format!("variant-{}", sanitize(variant)));
}
StreamId(parts.join("/"))
}
}
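// Illustrative mapping (hedged example): a key such as
//   StreamKey { version: 1, broadcast: Some(BroadcastId { standard: "ATSC1".into(),
//     transport_stream_id: Some(1), program_number: Some(3),
//     callsign: Some("KQED".into()), region: None, frequency: None }),
//     source: None, profile: None, variant: None }
// yields the id "ec/stream/v1/broadcast/atsc1/tsid-1/program-3/callsign-kqed".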
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Channel {
pub id: ChannelId,
pub name: String,
pub number: Option<String>,
pub program_id: Option<u16>,
pub metadata: Vec<ChannelMetadata>,
}
fn sanitize(value: &str) -> String {
value
.chars()
.map(|c| match c {
'a'..='z' | '0'..='9' | '-' | '_' => c,
'A'..='Z' => c.to_ascii_lowercase(),
_ => '_',
})
.collect()
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum ChannelMetadata {
Callsign(String),
Network(String),
Region(String),
Frequency(String),
Extra(String, String),
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PacketDigest {
pub algorithm: String,
pub hex: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DeterminismProfile {
pub name: String,
pub description: String,
pub encoder: String,
pub encoder_args: Vec<String>,
pub chunk_duration_ms: u64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NodeDescriptor {
pub node_id: String,
pub human_name: String,
pub location_hint: Option<String>,
pub capabilities: Vec<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StreamEncryptionInfo {
pub alg: String,
pub key_id: String,
pub nonce_scheme: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MoqStreamDescriptor {
pub endpoint: String,
pub broadcast_name: String,
pub track_name: String,
pub encryption: Option<StreamEncryptionInfo>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StreamCatalogEntry {
pub stream: StreamDescriptor,
pub moq: Option<MoqStreamDescriptor>,
pub manifest: Option<ManifestSummary>,
pub updated_unix_ms: u64,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StreamCatalog {
pub entries: Vec<StreamCatalogEntry>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ManifestSummary {
pub manifest_id: String,
pub merkle_root: String,
pub epoch_id: String,
pub total_chunks: u64,
pub chunk_start_index: u64,
pub encoder_profile_id: String,
pub signed_by: Vec<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChunkId {
pub stream_id: StreamId,
pub epoch_id: String,
pub chunk_index: u64,
pub chunk_hash: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ManifestVariant {
pub variant_id: String,
pub stream_id: StreamId,
pub chunk_start_index: u64,
pub total_chunks: u64,
pub merkle_root: String,
pub chunk_hashes: Vec<String>,
#[serde(default)]
pub metadata: Vec<StreamMetadata>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ManifestBody {
pub stream_id: StreamId,
pub epoch_id: String,
pub chunk_duration_ms: u64,
pub total_chunks: u64,
pub chunk_start_index: u64,
pub encoder_profile_id: String,
pub merkle_root: String,
pub created_unix_ms: u64,
pub metadata: Vec<StreamMetadata>,
pub chunk_hashes: Vec<String>,
#[serde(default)]
pub variants: Option<Vec<ManifestVariant>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ManifestSignature {
pub signer_id: String,
pub alg: String,
pub signature: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Manifest {
pub body: ManifestBody,
pub manifest_id: String,
pub signatures: Vec<ManifestSignature>,
}
impl Manifest {
pub fn summary(&self) -> ManifestSummary {
ManifestSummary {
manifest_id: self.manifest_id.clone(),
merkle_root: self.body.merkle_root.clone(),
epoch_id: self.body.epoch_id.clone(),
total_chunks: self.body.total_chunks,
chunk_start_index: self.body.chunk_start_index,
encoder_profile_id: self.body.encoder_profile_id.clone(),
signed_by: self
.signatures
.iter()
.map(|sig| sig.signer_id.clone())
.collect(),
}
}
}
#[derive(Debug, Clone)]
pub enum ManifestError {
Empty,
InvalidHash(String),
}
impl fmt::Display for ManifestError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ManifestError::Empty => write!(f, "no chunk hashes supplied"),
ManifestError::InvalidHash(value) => write!(f, "invalid chunk hash: {value}"),
}
}
}
impl std::error::Error for ManifestError {}
impl ManifestBody {
pub fn manifest_id(&self) -> Result<String, serde_json::Error> {
let bytes = serde_json::to_vec(self)?;
Ok(blake3::hash(&bytes).to_hex().to_string())
}
}
pub fn merkle_root_from_hashes(hashes: &[String]) -> Result<String, ManifestError> {
if hashes.is_empty() {
return Err(ManifestError::Empty);
}
let mut nodes: Vec<blake3::Hash> = Vec::with_capacity(hashes.len());
for hash in hashes {
let parsed = blake3::Hash::from_hex(hash.as_bytes())
.map_err(|_| ManifestError::InvalidHash(hash.clone()))?;
nodes.push(parsed);
}
while nodes.len() > 1 {
if nodes.len() % 2 == 1 {
if let Some(last) = nodes.last().cloned() {
nodes.push(last);
}
}
let mut parents = Vec::with_capacity(nodes.len() / 2);
for pair in nodes.chunks(2) {
let left = pair[0].as_bytes();
let right = pair[1].as_bytes();
let mut merged = [0u8; 64];
merged[..32].copy_from_slice(left);
merged[32..].copy_from_slice(right);
parents.push(blake3::hash(&merged));
}
nodes = parents;
}
Ok(nodes[0].to_hex().to_string())
}
pub fn merkle_proof_for_index(
hashes: &[String],
index: usize,
) -> Result<Vec<String>, ManifestError> {
if hashes.is_empty() {
return Err(ManifestError::Empty);
}
if index >= hashes.len() {
return Err(ManifestError::InvalidHash(format!(
"index {index} out of bounds"
)));
}
let mut nodes: Vec<blake3::Hash> = Vec::with_capacity(hashes.len());
for hash in hashes {
let parsed = blake3::Hash::from_hex(hash.as_bytes())
.map_err(|_| ManifestError::InvalidHash(hash.clone()))?;
nodes.push(parsed);
}
let mut proof = Vec::new();
let mut pos = index;
while nodes.len() > 1 {
if nodes.len() % 2 == 1 {
if let Some(last) = nodes.last().cloned() {
nodes.push(last);
}
}
let sibling_index = if pos % 2 == 0 { pos + 1 } else { pos - 1 };
let sibling = nodes
.get(sibling_index)
.ok_or_else(|| ManifestError::InvalidHash("missing sibling".to_string()))?;
proof.push(sibling.to_hex().to_string());
let mut parents = Vec::with_capacity(nodes.len() / 2);
for pair in nodes.chunks(2) {
let left = pair[0].as_bytes();
let right = pair[1].as_bytes();
let mut merged = [0u8; 64];
merged[..32].copy_from_slice(left);
merged[32..].copy_from_slice(right);
parents.push(blake3::hash(&merged));
}
nodes = parents;
pos /= 2;
}
Ok(proof)
}
pub fn verify_merkle_proof(
leaf_hash: &str,
mut index: usize,
branch: &[String],
expected_root: &str,
) -> bool {
let Ok(mut acc) = blake3::Hash::from_hex(leaf_hash.as_bytes()) else {
return false;
};
for sibling_hex in branch {
let Ok(sibling) = blake3::Hash::from_hex(sibling_hex.as_bytes()) else {
return false;
};
let (left, right) = if index % 2 == 0 {
(acc, sibling)
} else {
(sibling, acc)
};
let mut merged = [0u8; 64];
merged[..32].copy_from_slice(left.as_bytes());
merged[32..].copy_from_slice(right.as_bytes());
acc = blake3::hash(&merged);
index /= 2;
}
acc.to_hex().to_string() == expected_root
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn manifest_id_changes_with_body() {
let body = ManifestBody {
stream_id: StreamId("s".to_string()),
epoch_id: "e".to_string(),
chunk_duration_ms: 2000,
total_chunks: 1,
chunk_start_index: 0,
encoder_profile_id: "p".to_string(),
merkle_root: "00".repeat(32),
created_unix_ms: 1,
metadata: Vec::new(),
chunk_hashes: vec!["11".repeat(32)],
variants: None,
};
let id1 = body.manifest_id().unwrap();
let mut body2 = body.clone();
body2.created_unix_ms = 2;
let id2 = body2.manifest_id().unwrap();
assert_ne!(id1, id2);
}
#[test]
fn merkle_root_single_is_leaf() {
let leaf = blake3::hash(b"leaf").to_hex().to_string();
let root = merkle_root_from_hashes(&[leaf.clone()]).unwrap();
assert_eq!(root, leaf);
}
#[test]
fn merkle_root_rejects_invalid_hash() {
let err = merkle_root_from_hashes(&["not-hex".to_string()]).unwrap_err();
assert!(matches!(err, ManifestError::InvalidHash(_)));
}
#[test]
fn merkle_proof_roundtrip_small_sets() {
for size in 1..=9usize {
let leaves = (0..size)
.map(|i| blake3::hash(&[i as u8]).to_hex().to_string())
.collect::<Vec<_>>();
let root = merkle_root_from_hashes(&leaves).unwrap();
for idx in 0..size {
let proof = merkle_proof_for_index(&leaves, idx).unwrap();
assert!(
verify_merkle_proof(&leaves[idx], idx, &proof, &root),
"size {size} idx {idx} failed"
);
}
}
}
#[test]
fn merkle_proof_detects_tampering() {
let leaves = (0..4usize)
.map(|i| blake3::hash(&[i as u8]).to_hex().to_string())
.collect::<Vec<_>>();
let root = merkle_root_from_hashes(&leaves).unwrap();
let mut proof = merkle_proof_for_index(&leaves, 2).unwrap();
proof[0] = blake3::hash(b"evil").to_hex().to_string();
assert!(!verify_merkle_proof(&leaves[2], 2, &proof, &root));
}
}
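A hedged usage sketch of the manifest hashing primitives above (the chunk hashes here are synthetic; a consumer crate would need blake3 alongside ec-core):

```rust
use ec_core::{merkle_proof_for_index, merkle_root_from_hashes, verify_merkle_proof};

fn main() -> Result<(), ec_core::ManifestError> {
    // Pretend these are blake3 hashes of four stream chunks.
    let leaves: Vec<String> = (0u8..4)
        .map(|i| blake3::hash(&[i]).to_hex().to_string())
        .collect();
    let root = merkle_root_from_hashes(&leaves)?;
    // Prove and verify inclusion of chunk 2 without shipping all leaves.
    let proof = merkle_proof_for_index(&leaves, 2)?;
    assert!(verify_merkle_proof(&leaves[2], 2, &proof, &root));
    Ok(())
}
```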

12
crates/ec-crypto/Cargo.toml Normal file
View file

@@ -0,0 +1,12 @@
[package]
name = "ec-crypto"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
blake3 = "1"
chacha20poly1305 = "0.10"
ed25519-dalek = { version = "2", features = ["pkcs8"] }
hex = "0.4"
ec-core = { path = "../ec-core" }

227
crates/ec-crypto/src/lib.rs Normal file
View file

@@ -0,0 +1,227 @@
//! Cryptographic helpers for every.channel.
use chacha20poly1305::{aead::Aead, KeyInit, XChaCha20Poly1305, XNonce};
use ec_core::ManifestSignature;
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
use std::env;
use std::fs;
pub const MANIFEST_SIG_ALG: &str = "ed25519";
pub const ENCRYPTION_ALG: &str = "xchacha20poly1305";
/// Derive a stream encryption key from a stream id and optional network secret.
///
/// This is deterministic: identical stream ids produce identical keys.
pub fn derive_stream_key(stream_id: &str, network_secret: Option<&[u8]>) -> [u8; 32] {
let mut input = Vec::new();
if let Some(secret) = network_secret {
input.extend_from_slice(secret);
input.push(0);
}
input.extend_from_slice(stream_id.as_bytes());
blake3::derive_key("every.channel stream key v1", &input)
}
/// Derive a deterministic nonce for a stream chunk.
pub fn derive_stream_nonce(stream_id: &str, chunk_index: u64) -> [u8; 24] {
let mut hasher = blake3::Hasher::new();
hasher.update(b"every.channel stream nonce v1");
hasher.update(stream_id.as_bytes());
hasher.update(&chunk_index.to_be_bytes());
let hash = hasher.finalize();
let mut nonce = [0u8; 24];
nonce.copy_from_slice(&hash.as_bytes()[..24]);
nonce
}
#[derive(Debug, Clone)]
pub struct EncryptedPayload {
pub ciphertext: Vec<u8>,
pub nonce: [u8; 24],
pub alg: &'static str,
}
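/// Encrypt one stream chunk with XChaCha20-Poly1305, using a key derived from
/// the stream id (and optional network secret) and a nonce derived from the
/// chunk index. Key and nonce are both deterministic, so a given
/// (stream, chunk, secret) triple always produces the same ciphertext.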
pub fn encrypt_stream_data(
stream_id: &str,
chunk_index: u64,
plaintext: &[u8],
network_secret: Option<&[u8]>,
) -> EncryptedPayload {
let key_bytes = derive_stream_key(stream_id, network_secret);
let cipher = XChaCha20Poly1305::new_from_slice(&key_bytes).expect("key size");
let nonce_bytes = derive_stream_nonce(stream_id, chunk_index);
let nonce = XNonce::from_slice(&nonce_bytes);
let ciphertext = cipher
.encrypt(nonce, plaintext)
.expect("encryption failure");
EncryptedPayload {
ciphertext,
nonce: nonce_bytes,
alg: ENCRYPTION_ALG,
}
}
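/// Decrypt one stream chunk. Returns `None` when authentication fails, e.g.
/// tampered ciphertext or a wrong chunk index (and therefore a wrong nonce).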
pub fn decrypt_stream_data(
stream_id: &str,
chunk_index: u64,
ciphertext: &[u8],
network_secret: Option<&[u8]>,
) -> Option<Vec<u8>> {
let key_bytes = derive_stream_key(stream_id, network_secret);
let cipher = XChaCha20Poly1305::new_from_slice(&key_bytes).expect("key size");
let nonce_bytes = derive_stream_nonce(stream_id, chunk_index);
let nonce = XNonce::from_slice(&nonce_bytes);
cipher.decrypt(nonce, ciphertext).ok()
}
#[derive(Debug, Clone)]
pub struct ManifestKeypair {
pub signing_key: SigningKey,
pub verifying_key: VerifyingKey,
}
pub fn load_manifest_keypair_from_env() -> Result<Option<ManifestKeypair>, String> {
let value = match env::var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY") {
Ok(value) => value,
Err(env::VarError::NotPresent) => return Ok(None),
Err(err) => return Err(err.to_string()),
};
let trimmed = value.trim();
let key_bytes = if std::path::Path::new(trimmed).exists() {
let text = fs::read_to_string(trimmed).map_err(|err| err.to_string())?;
hex::decode(text.trim()).map_err(|err| err.to_string())?
} else {
hex::decode(trimmed).map_err(|err| err.to_string())?
};
let bytes = if key_bytes.len() == 32 {
key_bytes
} else if key_bytes.len() == 64 {
key_bytes[..32].to_vec()
} else {
return Err("manifest signing key must be 32 or 64 hex bytes".to_string());
};
let mut secret = [0u8; 32];
secret.copy_from_slice(&bytes[..32]);
let signing_key = SigningKey::from_bytes(&secret);
let verifying_key = signing_key.verifying_key();
Ok(Some(ManifestKeypair {
signing_key,
verifying_key,
}))
}
pub fn signer_id_from_key(key: &VerifyingKey) -> String {
format!("ed25519:{}", hex::encode(key.to_bytes()))
}
pub fn sign_manifest_id(manifest_id: &str, keypair: &ManifestKeypair) -> ManifestSignature {
let signature: Signature = keypair.signing_key.sign(manifest_id.as_bytes());
ManifestSignature {
signer_id: signer_id_from_key(&keypair.verifying_key),
alg: MANIFEST_SIG_ALG.to_string(),
signature: hex::encode(signature.to_bytes()),
}
}
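/// Verify a manifest signature produced by `sign_manifest_id`. The signer id
/// may carry the `ed25519:` prefix or be raw hex; anything malformed verifies
/// as false rather than erroring.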
pub fn verify_manifest_signature(manifest_id: &str, sig: &ManifestSignature) -> bool {
if sig.alg != MANIFEST_SIG_ALG {
return false;
}
let signer_id = sig
.signer_id
.strip_prefix("ed25519:")
.unwrap_or(&sig.signer_id);
let Ok(pk_bytes) = hex::decode(signer_id) else {
return false;
};
if pk_bytes.len() != 32 {
return false;
}
let mut pk = [0u8; 32];
pk.copy_from_slice(&pk_bytes);
let Ok(verifying_key) = VerifyingKey::from_bytes(&pk) else {
return false;
};
let Ok(sig_bytes) = hex::decode(&sig.signature) else {
return false;
};
let Ok(signature) = Signature::from_slice(&sig_bytes) else {
return false;
};
verifying_key
.verify(manifest_id.as_bytes(), &signature)
.is_ok()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn stream_key_is_deterministic_and_secret_sensitive() {
let k1 = derive_stream_key("s1", None);
let k2 = derive_stream_key("s1", None);
assert_eq!(k1, k2);
let k3 = derive_stream_key("s2", None);
assert_ne!(k1, k3);
let secret = [7u8; 32];
let ks1 = derive_stream_key("s1", Some(&secret));
assert_ne!(k1, ks1);
let ks2 = derive_stream_key("s1", Some(&secret));
assert_eq!(ks1, ks2);
}
#[test]
fn nonce_changes_per_chunk_index() {
let n1 = derive_stream_nonce("s", 1);
let n2 = derive_stream_nonce("s", 2);
assert_ne!(n1, n2);
}
#[test]
fn encrypt_decrypt_roundtrip() {
let plaintext = b"hello world";
let enc = encrypt_stream_data("s", 42, plaintext, None);
assert_ne!(enc.ciphertext, plaintext);
let out = decrypt_stream_data("s", 42, &enc.ciphertext, None).unwrap();
assert_eq!(out, plaintext);
}
#[test]
fn decrypt_fails_with_wrong_index() {
let plaintext = b"hello world";
let enc = encrypt_stream_data("s", 42, plaintext, None);
assert!(decrypt_stream_data("s", 43, &enc.ciphertext, None).is_none());
}
#[test]
fn manifest_sign_verify_roundtrip() {
let secret = [1u8; 32];
let signing_key = SigningKey::from_bytes(&secret);
let verifying_key = signing_key.verifying_key();
let keypair = ManifestKeypair {
signing_key,
verifying_key,
};
let sig = sign_manifest_id("m", &keypair);
assert!(verify_manifest_signature("m", &sig));
assert!(!verify_manifest_signature("evil", &sig));
}
#[test]
fn load_keypair_from_env_hex() {
let prev = env::var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY").ok();
env::set_var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", "00".repeat(32));
let loaded = load_manifest_keypair_from_env().unwrap().unwrap();
let id = signer_id_from_key(&loaded.verifying_key);
assert!(id.starts_with("ed25519:"));
match prev {
Some(value) => env::set_var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", value),
None => env::remove_var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY"),
}
}
}

16
crates/ec-direct/Cargo.toml Normal file
View file

@ -0,0 +1,16 @@
[package]
name = "ec-direct"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
base64 = "0.22"
just-webrtc = { version = "0.2", default-features = true }
serde.workspace = true
serde_json.workspace = true
[dev-dependencies]
bytes = "1"
tokio = { version = "1", features = ["rt-multi-thread", "macros", "time"] }

94
crates/ec-direct/src/lib.rs Normal file
View file

@ -0,0 +1,94 @@
use anyhow::{anyhow, Context, Result};
use base64::engine::general_purpose::URL_SAFE_NO_PAD;
use base64::Engine;
use just_webrtc::types::{ICECandidate, SessionDescription};
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct DirectCodeV1 {
pub v: u8,
pub desc: SessionDescription,
pub candidates: Vec<ICECandidate>,
#[serde(default)]
pub label: Option<String>,
}
const PREFIX: &str = "every.channel://";
pub fn encode_code(code: &DirectCodeV1) -> Result<String> {
let json = serde_json::to_vec(code)?;
Ok(URL_SAFE_NO_PAD.encode(json))
}
pub fn decode_code(code: &str) -> Result<DirectCodeV1> {
let bytes = URL_SAFE_NO_PAD
.decode(code.trim())
.context("invalid base64url code")?;
let parsed: DirectCodeV1 = serde_json::from_slice(&bytes).context("invalid code json")?;
if parsed.v != 1 {
return Err(anyhow!("unsupported direct code version {}", parsed.v));
}
Ok(parsed)
}
pub fn build_direct_link(code_b64: &str) -> String {
format!("every.channel://direct?c={code_b64}")
}
pub fn encode_direct_link(code: &DirectCodeV1) -> Result<String> {
let b64 = encode_code(code)?;
Ok(build_direct_link(&b64))
}
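/// Decode either a bare base64url code or a full
/// `every.channel://direct?c=...` deep link. The path and query key are
/// matched case-insensitively; the scheme prefix must match exactly.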
pub fn decode_direct_link(link_or_code: &str) -> Result<DirectCodeV1> {
let s = link_or_code.trim();
if !s.starts_with(PREFIX) {
return decode_code(s);
}
let rest = &s[PREFIX.len()..];
let (path, query) = rest.split_once('?').ok_or_else(|| anyhow!("missing '?'"))?;
if !path.eq_ignore_ascii_case("direct") {
return Err(anyhow!("not a direct link"));
}
for pair in query.split('&') {
let pair = pair.trim();
if pair.is_empty() {
continue;
}
let (k, v) = pair.split_once('=').unwrap_or((pair, ""));
if k.eq_ignore_ascii_case("c") {
return decode_code(v);
}
}
Err(anyhow!("missing code parameter"))
}
#[cfg(test)]
mod tests {
use super::*;
use just_webrtc::types::SDPType;
#[test]
fn code_roundtrips() {
let code = DirectCodeV1 {
v: 1,
desc: SessionDescription {
sdp_type: SDPType::Offer,
sdp: "x".to_string(),
},
candidates: vec![ICECandidate {
candidate: "c".to_string(),
sdp_mid: Some("0".to_string()),
sdp_mline_index: Some(0),
username_fragment: None,
}],
label: Some("ec".to_string()),
};
let enc = encode_code(&code).unwrap();
let dec = decode_code(&enc).unwrap();
assert_eq!(dec, code);
let link = encode_direct_link(&code).unwrap();
let dec2 = decode_direct_link(&link).unwrap();
assert_eq!(dec2, code);
}
}

View file

@ -0,0 +1,134 @@
use anyhow::{anyhow, Result};
use bytes::Bytes;
use ec_direct::{decode_direct_link, encode_direct_link, DirectCodeV1};
use just_webrtc::types::{
DataChannelOptions, PeerConfiguration, PeerConnectionState, SessionDescription,
};
use just_webrtc::{DataChannelExt, PeerConnectionBuilder, PeerConnectionExt};
async fn wait_connected(pc: &impl PeerConnectionExt) -> Result<()> {
tokio::time::timeout(std::time::Duration::from_secs(20), async {
loop {
match pc.state_change().await {
PeerConnectionState::Connected => break Ok(()),
PeerConnectionState::Failed => break Err(anyhow!("peer connection failed")),
PeerConnectionState::Closed => break Err(anyhow!("peer connection closed")),
_ => {}
}
}
})
.await
.map_err(|_| anyhow!("timed out waiting for peer connection"))?
}
// Ignored by default: WebRTC can be timing-sensitive on some hosts.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
#[ignore]
async fn e2e_direct_connect_loopback_sends_bytes() -> Result<()> {
// Avoid depending on external STUN servers in tests: use host candidates only.
let cfg = PeerConfiguration {
ice_servers: vec![],
..Default::default()
};
let offerer = PeerConnectionBuilder::new()
.set_config(cfg.clone())
.with_channel_options(vec![(
"simple_channel_".to_string(),
DataChannelOptions::default(),
)])
.map_err(|e| anyhow!("{e:#}"))?
.build()
.await
.map_err(|e| anyhow!("{e:#}"))?;
let offer_desc: SessionDescription = offerer
.get_local_description()
.await
.ok_or_else(|| anyhow!("missing offer local description"))?;
let offer_candidates = offerer
.collect_ice_candidates()
.await
.map_err(|e| anyhow!("{e:#}"))?;
let offer_link = encode_direct_link(&DirectCodeV1 {
v: 1,
desc: offer_desc,
candidates: offer_candidates,
label: Some("every.channel0".to_string()),
})?;
let offer_code = decode_direct_link(&offer_link)?;
let answerer = PeerConnectionBuilder::new()
.set_config(cfg.clone())
.with_remote_offer(Some(offer_code.desc.clone()))
.map_err(|e| anyhow!("{e:#}"))?
.build()
.await
.map_err(|e| anyhow!("{e:#}"))?;
answerer
.add_ice_candidates(offer_code.candidates.clone())
.await
.map_err(|e| anyhow!("{e:#}"))?;
let answer_desc = answerer
.get_local_description()
.await
.ok_or_else(|| anyhow!("missing answer local description"))?;
let answer_candidates = answerer
.collect_ice_candidates()
.await
.map_err(|e| anyhow!("{e:#}"))?;
let answer_link = encode_direct_link(&DirectCodeV1 {
v: 1,
desc: answer_desc,
candidates: answer_candidates,
label: Some("every.channel0".to_string()),
})?;
let answer_code = decode_direct_link(&answer_link)?;
offerer
.set_remote_description(answer_code.desc.clone())
.await
.map_err(|e| anyhow!("{e:#}"))?;
offerer
.add_ice_candidates(answer_code.candidates.clone())
.await
.map_err(|e| anyhow!("{e:#}"))?;
// Wait for both peers to report a full connection before waiting for the data channel.
wait_connected(&offerer).await?;
wait_connected(&answerer).await?;
let offerer_ch = offerer
.receive_channel()
.await
.map_err(|e| anyhow!("{e:#}"))?;
let answerer_ch = answerer
.receive_channel()
.await
.map_err(|e| anyhow!("{e:#}"))?;
offerer_ch.wait_ready().await;
answerer_ch.wait_ready().await;
let payload = Bytes::from_static(b"hello");
offerer_ch
.send(&payload)
.await
.map_err(|e| anyhow!("{e:#}"))?;
let got = tokio::time::timeout(std::time::Duration::from_secs(10), answerer_ch.receive())
.await
.map_err(|_| anyhow!("timed out waiting for receive"))?
.map_err(|e| anyhow!("{e:#}"))?;
assert_eq!(&got[..], b"hello");
// Confirm the reverse direction works too (this also guards against one-way readiness bugs).
answerer_ch
.send(&Bytes::from_static(b"world"))
.await
.map_err(|e| anyhow!("{e:#}"))?;
let got = tokio::time::timeout(std::time::Duration::from_secs(10), offerer_ch.receive())
.await
.map_err(|_| anyhow!("timed out waiting for receive"))?
.map_err(|e| anyhow!("{e:#}"))?;
assert_eq!(&got[..], b"world");
Ok(())
}

14
crates/ec-hdhomerun/Cargo.toml Normal file
View file

@ -0,0 +1,14 @@
[package]
name = "ec-hdhomerun"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
ec-core = { path = "../ec-core" }
crc32fast = "1"
hex = "0.4"
serde.workspace = true
serde_json.workspace = true
ureq = { version = "2", default-features = true, features = ["tls"] }

676
crates/ec-hdhomerun/src/lib.rs Normal file
View file

@ -0,0 +1,676 @@
//! HDHomeRun discovery, lineup ingest, and stream scaffolding.
use anyhow::{anyhow, Context, Result};
use ec_core::{Channel, ChannelId, ChannelMetadata, DeviceId};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::io::Read;
use std::net::{Ipv4Addr, SocketAddrV4, UdpSocket};
use std::time::{Duration, Instant};
const DISCOVER_UDP_PORT: u16 = 65001;
const TYPE_DISCOVER_REQ: u16 = 0x0002;
const TYPE_DISCOVER_RPY: u16 = 0x0003;
const TAG_DEVICE_TYPE: u8 = 0x01;
const TAG_DEVICE_ID: u8 = 0x02;
const TAG_TUNER_COUNT: u8 = 0x10;
const TAG_DEVICE_AUTH_BIN: u8 = 0x29;
const TAG_BASE_URL: u8 = 0x2A;
const TAG_DEVICE_AUTH_STR: u8 = 0x2B;
const DEVICE_TYPE_TUNER: u32 = 0x00000001;
const DEVICE_ID_WILDCARD: u32 = 0xFFFFFFFF;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DeviceField {
pub key: String,
pub value: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct HdhomerunDevice {
pub id: DeviceId,
pub ip: String,
pub tuner_count: u8,
pub lineup_url: Option<String>,
pub discover_url: Option<String>,
pub base_url: Option<String>,
pub device_auth: Option<String>,
pub friendly_name: Option<String>,
pub model_number: Option<String>,
pub firmware_name: Option<String>,
pub firmware_version: Option<String>,
pub device_type: Option<String>,
pub discovery_tags: Vec<DeviceField>,
pub raw_discover_json: Option<Value>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LineupEntry {
pub channel: Channel,
pub stream_url: String,
pub tags: Vec<String>,
pub raw: Value,
}
pub struct HdhomerunStream {
pub url: String,
reader: Box<dyn Read + Send>,
}
impl std::fmt::Debug for HdhomerunStream {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("HdhomerunStream")
.field("url", &self.url)
.finish_non_exhaustive()
}
}
impl Read for HdhomerunStream {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
self.reader.read(buf)
}
}
#[derive(Debug, Clone, Deserialize)]
struct DiscoverJson {
#[serde(rename = "DeviceID")]
device_id: Option<String>,
#[serde(rename = "DeviceAuth")]
device_auth: Option<String>,
#[serde(rename = "BaseURL")]
base_url: Option<String>,
#[serde(rename = "LineupURL")]
lineup_url: Option<String>,
#[serde(rename = "DiscoverURL")]
discover_url: Option<String>,
#[serde(rename = "FriendlyName")]
friendly_name: Option<String>,
#[serde(rename = "ModelNumber")]
model_number: Option<String>,
#[serde(rename = "FirmwareName")]
firmware_name: Option<String>,
#[serde(rename = "FirmwareVersion")]
firmware_version: Option<String>,
#[serde(rename = "DeviceType")]
device_type: Option<String>,
#[serde(rename = "TunerCount")]
tuner_count: Option<u8>,
}
#[derive(Debug, Clone, Deserialize)]
struct LineupJsonEntry {
#[serde(rename = "GuideNumber")]
guide_number: Option<String>,
#[serde(rename = "GuideName")]
guide_name: Option<String>,
#[serde(rename = "Tags")]
tags: Option<String>,
#[serde(rename = "URL")]
url: Option<String>,
}
/// Discover devices using UDP broadcast, then hydrate with /discover.json when possible.
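///
/// A minimal discovery-to-lineup sketch (needs a tuner reachable on the LAN):
///
/// ```ignore
/// for device in discover()? {
///     for entry in fetch_lineup(&device)? {
///         println!("{} -> {}", entry.channel.name, entry.stream_url);
///     }
/// }
/// ```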
pub fn discover() -> Result<Vec<HdhomerunDevice>> {
let mut devices = discover_udp(Duration::from_millis(400))?;
if devices.is_empty() {
if let Ok(device) = discover_from_host("hdhomerun.local") {
devices.push(device);
}
}
Ok(devices)
}
/// Discover a device by hostname or IP using the HTTP discover.json endpoint.
pub fn discover_from_host(host: &str) -> Result<HdhomerunDevice> {
let base_url = format!("http://{host}");
let discover_url = format!("{base_url}/discover.json");
let json = fetch_json(&discover_url)?;
let discover: DiscoverJson = serde_json::from_value(json.clone())
.with_context(|| format!("invalid discover.json from {discover_url}"))?;
let device = HdhomerunDevice {
id: DeviceId(
discover
.device_id
.clone()
.unwrap_or_else(|| "unknown".to_string()),
),
ip: host.to_string(),
tuner_count: discover.tuner_count.unwrap_or(0),
lineup_url: discover.lineup_url.clone(),
discover_url: discover.discover_url.clone().or(Some(discover_url)),
base_url: discover.base_url.clone().or(Some(base_url)),
device_auth: discover.device_auth.clone(),
friendly_name: discover.friendly_name.clone(),
model_number: discover.model_number.clone(),
firmware_name: discover.firmware_name.clone(),
firmware_version: discover.firmware_version.clone(),
device_type: discover.device_type.clone(),
discovery_tags: Vec::new(),
raw_discover_json: Some(json),
};
Ok(device)
}
/// Fetch and normalize lineup information for a device.
pub fn fetch_lineup(device: &HdhomerunDevice) -> Result<Vec<LineupEntry>> {
let lineup_url = resolve_lineup_url(device)?;
let json = fetch_json(&lineup_url)?;
lineup_from_json_value(&json, Some(&device.id))
.with_context(|| format!("invalid lineup.json from {lineup_url}"))
}
/// Parse a lineup.json file already loaded into memory.
pub fn lineup_from_json_bytes(
bytes: &[u8],
device_id: Option<&DeviceId>,
) -> Result<Vec<LineupEntry>> {
let json: Value = serde_json::from_slice(bytes)?;
lineup_from_json_value(&json, device_id)
}
/// Open a raw MPEG-TS stream by channel ID (lineup lookup required).
pub fn open_stream(device: &HdhomerunDevice, channel: &ChannelId) -> Result<HdhomerunStream> {
let lineup = fetch_lineup(device)?;
let entry = lineup
.into_iter()
.find(|entry| entry.channel.id == *channel)
.ok_or_else(|| anyhow!("channel {} not found in lineup", channel.0))?;
open_stream_entry(&entry, None)
}
/// Open a raw MPEG-TS stream from a lineup entry.
pub fn open_stream_entry(
entry: &LineupEntry,
duration_secs: Option<u32>,
) -> Result<HdhomerunStream> {
open_stream_url(&entry.stream_url, duration_secs)
}
/// Open a raw MPEG-TS stream by URL.
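///
/// A short-capture sketch (the URL here is illustrative): `duration_secs` is
/// forwarded as a `duration=` query parameter and also bounds the read
/// timeout, so the request self-terminates.
///
/// ```ignore
/// use std::io::Read;
/// let mut stream = open_stream_url("http://192.0.2.10:5004/auto/v2.1", Some(5))?;
/// let mut buf = Vec::new();
/// stream.read_to_end(&mut buf)?; // roughly five seconds of MPEG-TS
/// ```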
pub fn open_stream_url(url: &str, duration_secs: Option<u32>) -> Result<HdhomerunStream> {
let url = if let Some(duration) = duration_secs {
append_query_param(url, "duration", &duration.to_string())
} else {
url.to_string()
};
// Streams can be long-lived, so only apply a read timeout when the caller
// requests `duration=...` (useful for tests and short captures).
let mut agent_builder = ureq::AgentBuilder::new().timeout_connect(Duration::from_secs(3));
if let Some(duration) = duration_secs {
agent_builder = agent_builder.timeout_read(Duration::from_secs(duration as u64 + 10));
}
let agent = agent_builder.build();
let response = agent
.get(&url)
.call()
.with_context(|| format!("failed to open stream {url}"))?;
if response.status() < 200 || response.status() >= 300 {
return Err(anyhow!(
"stream returned http {} for {}",
response.status(),
url
));
}
Ok(HdhomerunStream {
url,
reader: response.into_reader(),
})
}
pub fn find_lineup_entry_by_number<'a>(
lineup: &'a [LineupEntry],
guide_number: &str,
) -> Option<&'a LineupEntry> {
lineup
.iter()
.find(|entry| entry.channel.number.as_deref() == Some(guide_number))
}
pub fn find_lineup_entry_by_name<'a>(
lineup: &'a [LineupEntry],
guide_name: &str,
) -> Option<&'a LineupEntry> {
lineup.iter().find(|entry| entry.channel.name == guide_name)
}
fn discover_udp(timeout: Duration) -> Result<Vec<HdhomerunDevice>> {
let socket = UdpSocket::bind((Ipv4Addr::UNSPECIFIED, 0))?;
socket.set_broadcast(true)?;
socket.set_read_timeout(Some(Duration::from_millis(100)))?;
let packet = build_discover_packet()?;
let broadcast_addr = SocketAddrV4::new(Ipv4Addr::BROADCAST, DISCOVER_UDP_PORT);
socket.send_to(&packet, broadcast_addr)?;
let mut devices = Vec::new();
let start = Instant::now();
let mut buf = [0u8; 2048];
while start.elapsed() < timeout {
match socket.recv_from(&mut buf) {
Ok((len, addr)) => {
if let Ok(device) = parse_discover_response(&buf[..len], addr.ip().to_string()) {
devices.push(device);
}
}
Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => continue,
Err(err) if err.kind() == std::io::ErrorKind::TimedOut => continue,
Err(err) => return Err(err.into()),
}
}
for device in devices.iter_mut() {
if let Ok(json) = try_fetch_discover_json(&device.ip) {
apply_discover_json(device, json);
}
}
Ok(devices)
}
fn build_discover_packet() -> Result<Vec<u8>> {
let mut payload = Vec::new();
payload.extend(tlv(TAG_DEVICE_TYPE, &DEVICE_TYPE_TUNER.to_be_bytes()));
payload.extend(tlv(TAG_DEVICE_ID, &DEVICE_ID_WILDCARD.to_be_bytes()));
let mut packet = Vec::with_capacity(4 + payload.len() + 4);
packet.extend(TYPE_DISCOVER_REQ.to_be_bytes());
packet.extend((payload.len() as u16).to_be_bytes());
packet.extend(payload);
let crc = crc32fast::hash(&packet);
packet.extend(crc.to_le_bytes());
Ok(packet)
}
fn parse_discover_response(bytes: &[u8], ip: String) -> Result<HdhomerunDevice> {
if bytes.len() < 8 {
return Err(anyhow!("discover reply too short"));
}
let packet_type = u16::from_be_bytes([bytes[0], bytes[1]]);
if packet_type != TYPE_DISCOVER_RPY {
return Err(anyhow!("unexpected packet type"));
}
let payload_len = u16::from_be_bytes([bytes[2], bytes[3]]) as usize;
if bytes.len() < 4 + payload_len + 4 {
return Err(anyhow!("truncated discover reply"));
}
let payload = &bytes[4..4 + payload_len];
let expected_crc = u32::from_le_bytes([
bytes[4 + payload_len],
bytes[4 + payload_len + 1],
bytes[4 + payload_len + 2],
bytes[4 + payload_len + 3],
]);
let actual_crc = crc32fast::hash(&bytes[..4 + payload_len]);
if expected_crc != actual_crc {
return Err(anyhow!("bad crc"));
}
let mut cursor = 0usize;
let mut device_id: Option<String> = None;
let mut tuner_count: Option<u8> = None;
let mut base_url: Option<String> = None;
let mut device_auth: Option<String> = None;
let mut tags: Vec<DeviceField> = Vec::new();
while cursor < payload.len() {
let tag = payload[cursor];
cursor += 1;
let (length, consumed) = read_varlen(&payload[cursor..])?;
cursor += consumed;
if cursor + length > payload.len() {
return Err(anyhow!("discover TLV length overflow"));
}
let value = &payload[cursor..cursor + length];
cursor += length;
match tag {
TAG_DEVICE_ID => {
if value.len() == 4 {
let id = u32::from_be_bytes([value[0], value[1], value[2], value[3]]);
device_id = Some(format!("{id:08X}"));
}
}
TAG_TUNER_COUNT => {
if let Some(first) = value.first() {
tuner_count = Some(*first);
}
}
TAG_BASE_URL => {
if let Ok(text) = std::str::from_utf8(value) {
base_url = Some(text.trim_end_matches('\0').to_string());
}
}
TAG_DEVICE_AUTH_STR => {
if let Ok(text) = std::str::from_utf8(value) {
device_auth = Some(text.trim_end_matches('\0').to_string());
}
}
TAG_DEVICE_AUTH_BIN => {
tags.push(DeviceField {
key: "device_auth_bin".to_string(),
value: hex::encode(value),
});
}
TAG_DEVICE_TYPE => {
tags.push(DeviceField {
key: "device_type".to_string(),
value: hex::encode(value),
});
}
other => {
tags.push(DeviceField {
key: format!("tag_{other:02X}"),
value: hex::encode(value),
});
}
}
}
let id = device_id.unwrap_or_else(|| "unknown".to_string());
let device = HdhomerunDevice {
id: DeviceId(id),
ip,
tuner_count: tuner_count.unwrap_or(0),
lineup_url: None,
discover_url: None,
base_url,
device_auth,
friendly_name: None,
model_number: None,
firmware_name: None,
firmware_version: None,
device_type: None,
discovery_tags: tags,
raw_discover_json: None,
};
Ok(device)
}
fn read_varlen(buf: &[u8]) -> Result<(usize, usize)> {
if buf.is_empty() {
return Err(anyhow!("missing varlen"));
}
let first = buf[0];
if first & 0x80 == 0 {
Ok((first as usize, 1))
} else {
if buf.len() < 2 {
return Err(anyhow!("missing varlen second byte"));
}
let len = ((first & 0x7F) as usize) | ((buf[1] as usize) << 7);
Ok((len, 2))
}
}
fn tlv(tag: u8, value: &[u8]) -> Vec<u8> {
let mut out = Vec::with_capacity(2 + value.len());
out.push(tag);
out.extend(encode_varlen(value.len()));
out.extend(value);
out
}
fn encode_varlen(len: usize) -> Vec<u8> {
if len <= 0x7F {
vec![len as u8]
} else {
vec![((len & 0x7F) as u8) | 0x80, (len >> 7) as u8]
}
}
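// Worked example: length 200 does not fit in 7 bits, so it encodes as two
// bytes: the low seven bits with the continuation bit set
// ((200 & 0x7F) | 0x80 = 0xC8) followed by the remaining bits (200 >> 7 = 1),
// giving [0xC8, 0x01]. `read_varlen` reverses this:
// (0xC8 & 0x7F) | (1 << 7) = 200.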
fn fetch_json(url: &str) -> Result<Value> {
let agent = ureq::AgentBuilder::new()
.timeout_connect(Duration::from_secs(3))
.timeout_read(Duration::from_secs(6))
.build();
let response = agent
.get(url)
.call()
.with_context(|| format!("request failed for {url}"))?;
if response.status() < 200 || response.status() >= 300 {
return Err(anyhow!("http {} for {url}", response.status()));
}
let mut body = String::new();
response
.into_reader()
.read_to_string(&mut body)
.with_context(|| format!("failed to read response body for {url}"))?;
serde_json::from_str::<Value>(&body)
.with_context(|| format!("invalid json body for {url}"))
}
fn try_fetch_discover_json(host: &str) -> Result<Value> {
let url = format!("http://{host}/discover.json");
fetch_json(&url)
}
fn apply_discover_json(device: &mut HdhomerunDevice, json: Value) {
if let Ok(discover) = serde_json::from_value::<DiscoverJson>(json.clone()) {
if let Some(device_id) = discover.device_id {
device.id = DeviceId(device_id);
}
if let Some(tuner_count) = discover.tuner_count {
device.tuner_count = tuner_count;
}
device.lineup_url = discover.lineup_url.or(device.lineup_url.take());
device.discover_url = discover.discover_url.or(device.discover_url.take());
device.base_url = discover.base_url.or(device.base_url.take());
device.device_auth = discover.device_auth.or(device.device_auth.take());
device.friendly_name = discover.friendly_name.or(device.friendly_name.take());
device.model_number = discover.model_number.or(device.model_number.take());
device.firmware_name = discover.firmware_name.or(device.firmware_name.take());
device.firmware_version = discover.firmware_version.or(device.firmware_version.take());
device.device_type = discover.device_type.or(device.device_type.take());
}
device.raw_discover_json = Some(json);
}
fn resolve_lineup_url(device: &HdhomerunDevice) -> Result<String> {
if let Some(lineup_url) = device.lineup_url.as_ref() {
return Ok(lineup_url.clone());
}
if let Some(base_url) = device.base_url.as_ref() {
return Ok(format!("{base_url}/lineup.json"));
}
if !device.ip.is_empty() {
return Ok(format!("http://{}/lineup.json", device.ip));
}
Err(anyhow!("no lineup URL available"))
}
fn append_query_param(url: &str, key: &str, value: &str) -> String {
if url.contains('?') {
format!("{url}&{key}={value}")
} else {
format!("{url}?{key}={value}")
}
}
fn lineup_from_json_value(json: &Value, device_id: Option<&DeviceId>) -> Result<Vec<LineupEntry>> {
let entries = json
.as_array()
.ok_or_else(|| anyhow!("lineup json is not an array"))?;
let mut output = Vec::with_capacity(entries.len());
for (index, entry) in entries.iter().enumerate() {
let parsed: LineupJsonEntry = serde_json::from_value(entry.clone())
.with_context(|| format!("invalid lineup entry at index {index}"))?;
let guide_number = parsed.guide_number.clone();
let guide_name = parsed
.guide_name
.clone()
.or_else(|| guide_number.clone())
.unwrap_or_else(|| format!("Channel {index}"));
let tags = parsed
.tags
.unwrap_or_default()
.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect::<Vec<_>>();
let url = parsed.url.clone().unwrap_or_default();
let id = match (device_id, guide_number.as_ref()) {
(Some(device_id), Some(guide_number)) => {
ChannelId(format!("hdhr:{}:{}", device_id.0, guide_number))
}
(_, Some(guide_number)) => ChannelId(guide_number.clone()),
(_, None) => ChannelId(format!("hdhr:unknown:{index}")),
};
let mut metadata = Vec::new();
for tag in &tags {
metadata.push(ChannelMetadata::Extra("tag".to_string(), tag.clone()));
}
if let Some(guide_number) = guide_number.clone() {
metadata.push(ChannelMetadata::Extra(
"guide_number".to_string(),
guide_number,
));
}
if let Some(obj) = entry.as_object() {
for (key, value) in obj.iter() {
if key == "GuideNumber" || key == "GuideName" || key == "Tags" || key == "URL" {
continue;
}
metadata.push(ChannelMetadata::Extra(key.clone(), value.to_string()));
}
}
let channel = Channel {
id,
name: guide_name,
number: parsed.guide_number,
program_id: None,
metadata,
};
output.push(LineupEntry {
channel,
stream_url: url,
tags,
raw: entry.clone(),
});
}
Ok(output)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn varlen_roundtrip_small_and_large() {
for len in [0usize, 1, 10, 127, 128, 200, 1024] {
let enc = encode_varlen(len);
let (decoded, consumed) = read_varlen(&enc).unwrap();
assert_eq!(decoded, len);
assert_eq!(consumed, enc.len());
}
}
#[test]
fn parse_discover_response_happy_path() {
let device_id = 0x10ACEBB9u32;
let ip = "192.0.2.10"; // RFC 5737 TEST-NET-1
let mut payload = Vec::new();
payload.extend(tlv(TAG_DEVICE_ID, &device_id.to_be_bytes()));
payload.extend(tlv(TAG_TUNER_COUNT, &[4u8]));
payload.extend(tlv(TAG_BASE_URL, b"http://192.0.2.10\0"));
payload.extend(tlv(TAG_DEVICE_AUTH_STR, b"auth-token\0"));
payload.extend(tlv(0x99, b"unknown"));
let mut packet = Vec::new();
packet.extend(TYPE_DISCOVER_RPY.to_be_bytes());
packet.extend((payload.len() as u16).to_be_bytes());
packet.extend(&payload);
let crc = crc32fast::hash(&packet);
packet.extend(crc.to_le_bytes());
let dev = parse_discover_response(&packet, ip.to_string()).unwrap();
assert_eq!(dev.id.0, "10ACEBB9");
assert_eq!(dev.ip, ip);
assert_eq!(dev.tuner_count, 4);
assert_eq!(dev.base_url.as_deref(), Some("http://192.0.2.10"));
assert_eq!(dev.device_auth.as_deref(), Some("auth-token"));
assert!(dev.discovery_tags.iter().any(|t| t.key == "tag_99"));
}
#[test]
fn parse_discover_response_rejects_bad_crc() {
let mut payload = Vec::new();
payload.extend(tlv(TAG_TUNER_COUNT, &[2u8]));
let mut packet = Vec::new();
packet.extend(TYPE_DISCOVER_RPY.to_be_bytes());
packet.extend((payload.len() as u16).to_be_bytes());
packet.extend(&payload);
let crc = crc32fast::hash(&packet);
packet.extend(crc.to_le_bytes());
// corrupt the last byte
*packet.last_mut().unwrap() ^= 0xFF;
assert!(parse_discover_response(&packet, "1.2.3.4".to_string()).is_err());
}
#[test]
fn lineup_parsing_generates_channel_ids_and_metadata() {
let device_id = DeviceId("ABCDEF01".to_string());
let json = serde_json::json!([
{
"GuideNumber": "2.1",
"GuideName": "KCBS-HD",
"Tags": "drm,encrypted,",
"URL": "http://hdhr/auto/v2.1",
"Foo": "Bar"
},
{
"GuideNumber": "2.2",
"GuideName": "StartTV",
"Tags": "",
"URL": "http://hdhr/auto/v2.2"
}
]);
let entries = lineup_from_json_value(&json, Some(&device_id)).unwrap();
assert_eq!(entries.len(), 2);
assert_eq!(entries[0].channel.id.0, "hdhr:ABCDEF01:2.1");
assert_eq!(entries[0].channel.name, "KCBS-HD");
assert_eq!(entries[0].channel.number.as_deref(), Some("2.1"));
assert_eq!(entries[0].stream_url, "http://hdhr/auto/v2.1");
assert!(entries[0].tags.iter().any(|t| t == "drm"));
assert!(entries[0].channel.metadata.iter().any(|m| match m {
ChannelMetadata::Extra(key, value) => key == "guide_number" && value == "2.1",
_ => false,
}));
assert!(entries[0].channel.metadata.iter().any(|m| match m {
ChannelMetadata::Extra(key, _) => key == "Foo",
_ => false,
}));
}
}

16
crates/ec-iroh/Cargo.toml Normal file
View file

@ -0,0 +1,16 @@
[package]
name = "ec-iroh"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
blake3 = "1"
bytes = "1"
ec-core = { path = "../ec-core" }
futures-lite = "2"
iroh = { version = "0.96", features = ["address-lookup-mdns", "address-lookup-pkarr-dht"] }
iroh-gossip = { path = "../../third_party/iroh-org/iroh-gossip", features = ["net"] }
serde_json.workspace = true
tokio = { version = "1", features = ["time"] }

328
crates/ec-iroh/src/lib.rs Normal file
View file

@ -0,0 +1,328 @@
//! iroh transport scaffolding for every.channel.
use anyhow::{Context, Result};
use bytes::Bytes;
use ec_core::StreamCatalogEntry;
use futures_lite::StreamExt;
use iroh::address_lookup::{
DhtAddressLookup, DiscoveryEvent, DnsAddressLookup, MdnsAddressLookup, PkarrPublisher, UserData,
};
use iroh::endpoint::RelayMode;
use iroh::{
address_lookup::memory::MemoryLookup, protocol::Router, Endpoint, EndpointAddr, PublicKey,
SecretKey,
};
use iroh_gossip::{
api::{Event, GossipReceiver, GossipSender},
net::{Gossip, GOSSIP_ALPN},
proto::TopicId,
};
use std::collections::BTreeMap;
use std::env;
use std::time::{Duration, Instant};
pub const ALPN_MOQ: &[u8] = b"every.channel/moq/0";
pub const DEFAULT_CATALOG_TOPIC: &str = "every.channel/catalog/v1";
pub const MDNS_USER_DATA: &str = "every.channel";
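/// A continuously-refilling token bucket: `allow(amount)` spends tokens when
/// enough are available, and tokens accrue at `refill_per_sec` up to
/// `capacity` (both clamped to at least 1).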
#[derive(Debug, Clone)]
pub struct TokenBucket {
capacity: u64,
tokens: f64,
refill_per_sec: f64,
last_refill: Instant,
}
impl TokenBucket {
pub fn new(capacity: u64, refill_per_sec: u64) -> Self {
let capacity = capacity.max(1);
let refill_per_sec = refill_per_sec.max(1) as f64;
Self {
capacity,
tokens: capacity as f64,
refill_per_sec,
last_refill: Instant::now(),
}
}
pub fn allow(&mut self, amount: u64) -> bool {
self.refill();
let amount = amount as f64;
if amount <= self.tokens {
self.tokens -= amount;
true
} else {
false
}
}
fn refill(&mut self) {
let now = Instant::now();
let elapsed = now.duration_since(self.last_refill).as_secs_f64();
if elapsed <= 0.0 {
return;
}
self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity as f64);
self.last_refill = now;
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn token_bucket_allows_and_refills() {
let mut bucket = TokenBucket::new(10, 10);
assert!(bucket.allow(7));
assert!(bucket.allow(3));
assert!(!bucket.allow(1));
// Force a refill without sleeping.
bucket.last_refill = Instant::now() - Duration::from_secs(1);
assert!(bucket.allow(1));
}
}
#[derive(Debug, Clone, Copy, Default)]
pub struct DiscoveryConfig {
pub dht: bool,
pub mdns: bool,
pub dns: bool,
}
impl DiscoveryConfig {
pub fn from_env() -> Result<Self> {
match env::var("EVERY_CHANNEL_IROH_DISCOVERY") {
Ok(value) => Self::from_list(&value),
Err(env::VarError::NotPresent) => Ok(Self::default()),
Err(err) => Err(err.into()),
}
}
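/// Parse a discovery list such as `"mdns,dht"`. Separators may be commas,
/// semicolons, or whitespace; `all` enables everything, and `none`/`off`
/// resets the config, so later tokens win (`"all none mdns"` is mdns-only).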
pub fn from_list(value: &str) -> Result<Self> {
let mut config = DiscoveryConfig::default();
for raw in value.split(|c: char| c == ',' || c == ';' || c.is_whitespace()) {
let token = raw.trim().to_ascii_lowercase();
if token.is_empty() {
continue;
}
match token.as_str() {
"dht" => config.dht = true,
"mdns" => config.mdns = true,
"dns" => config.dns = true,
"all" => {
config.dht = true;
config.mdns = true;
config.dns = true;
}
"none" | "off" => {
config = DiscoveryConfig::default();
}
_ => {
return Err(anyhow::anyhow!("unknown discovery mode: {token}"));
}
}
}
Ok(config)
}
}
pub async fn build_endpoint(
secret: Option<SecretKey>,
discovery: DiscoveryConfig,
) -> Result<Endpoint> {
// Surface misconfigured relay values instead of silently falling back.
let relay_mode = relay_mode_from_env()?;
let mut builder = Endpoint::empty_builder(relay_mode);
if let Some(secret) = secret {
builder = builder.secret_key(secret);
}
if discovery.dns {
builder = builder
.address_lookup(PkarrPublisher::n0_dns())
.address_lookup(DnsAddressLookup::n0_dns());
}
if discovery.dht {
builder = builder.address_lookup(DhtAddressLookup::builder());
}
if discovery.mdns {
builder = builder.address_lookup(MdnsAddressLookup::builder());
}
let endpoint = builder.bind().await?;
endpoint.set_alpns(vec![ALPN_MOQ.to_vec()]);
Ok(endpoint)
}
fn relay_mode_from_env() -> Result<RelayMode> {
let value = match env::var("EVERY_CHANNEL_IROH_RELAY") {
Ok(value) => value,
Err(env::VarError::NotPresent) => return Ok(RelayMode::Default),
Err(err) => return Err(err.into()),
};
match value.trim().to_ascii_lowercase().as_str() {
"" | "default" => Ok(RelayMode::Default),
"disabled" | "off" => Ok(RelayMode::Disabled),
other => Err(anyhow::anyhow!("unknown relay mode: {other}")),
}
}
pub async fn start_endpoint() -> Result<Endpoint> {
let discovery = DiscoveryConfig::from_env()?;
build_endpoint(None, discovery).await
}
pub fn catalog_topic() -> TopicId {
let hash = blake3::hash(DEFAULT_CATALOG_TOPIC.as_bytes());
TopicId::from_bytes(*hash.as_bytes())
}
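/// Parse a peer address from either a JSON-encoded `EndpointAddr` or a bare
/// endpoint id; the bare form carries no transport addresses, so reaching it
/// relies on the endpoint's configured address lookup.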
pub fn parse_endpoint_addr(value: &str) -> Result<EndpointAddr> {
let value = value.trim();
if value.starts_with('{') {
let addr =
serde_json::from_str::<EndpointAddr>(value).context("invalid EndpointAddr json")?;
return Ok(addr);
}
let id = value.parse::<PublicKey>().context("invalid endpoint id")?;
Ok(EndpointAddr::new(id))
}
#[derive(Debug, Clone)]
pub struct MdnsDiscovery {
mdns: MdnsAddressLookup,
endpoint_id: PublicKey,
user_data: Option<UserData>,
}
impl MdnsDiscovery {
pub async fn start(
endpoint: &Endpoint,
user_data: Option<&str>,
advertise: bool,
) -> Result<Self> {
let mdns = MdnsAddressLookup::builder()
.advertise(advertise)
.build(endpoint.id())
.context("mdns address lookup failed")?;
endpoint.address_lookup().add(mdns.clone());
let user_data = if let Some(value) = user_data {
let data = UserData::try_from(value.to_string()).context("invalid mdns user data")?;
endpoint.set_user_data_for_address_lookup(Some(data.clone()));
Some(data)
} else {
None
};
Ok(Self {
mdns,
endpoint_id: endpoint.id(),
user_data,
})
}
pub async fn discover_peers(&self, timeout: Duration) -> Result<Vec<EndpointAddr>> {
let mut stream = self.mdns.subscribe().await;
let deadline = Instant::now() + timeout;
let mut peers: BTreeMap<PublicKey, EndpointAddr> = BTreeMap::new();
loop {
let now = Instant::now();
if now >= deadline {
break;
}
let remaining = deadline - now;
match tokio::time::timeout(remaining, stream.next()).await {
Ok(Some(DiscoveryEvent::Discovered { endpoint_info, .. })) => {
if endpoint_info.endpoint_id == self.endpoint_id {
continue;
}
if let Some(expected) = self.user_data.as_ref() {
if endpoint_info.data.user_data() != Some(expected) {
continue;
}
}
let addr = EndpointAddr::from(endpoint_info);
peers.insert(addr.id, addr);
}
Ok(Some(DiscoveryEvent::Expired { .. })) => {}
Ok(None) => break,
Err(_) => break,
}
}
Ok(peers.into_values().collect())
}
}
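/// Gossip-backed stream catalog: peers subscribe to a shared topic derived
/// from `DEFAULT_CATALOG_TOPIC` and exchange `StreamCatalogEntry` values as
/// JSON broadcasts.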
#[derive(Debug)]
pub struct CatalogGossip {
sender: GossipSender,
receiver: GossipReceiver,
_router: Router,
_gossip: Gossip,
_memory_lookup: MemoryLookup,
}
impl CatalogGossip {
pub async fn join(endpoint: Endpoint, peers: &[String]) -> Result<Self> {
let memory_lookup = MemoryLookup::new();
endpoint.address_lookup().add(memory_lookup.clone());
let gossip = Gossip::builder().spawn(endpoint.clone());
let router = Router::builder(endpoint.clone())
.accept(GOSSIP_ALPN, gossip.clone())
.spawn();
let peer_addrs = peers
.iter()
.map(|peer| parse_endpoint_addr(peer))
.collect::<Result<Vec<_>, _>>()
.context("failed to parse gossip peer addr")?;
for peer in &peer_addrs {
memory_lookup.add_endpoint_info(peer.clone());
}
let peer_ids = peer_addrs
.iter()
.map(|addr| addr.id)
.collect::<Vec<PublicKey>>();
let (sender, receiver) = gossip
.subscribe_and_join(catalog_topic(), peer_ids)
.await?
.split();
Ok(Self {
sender,
receiver,
_router: router,
_gossip: gossip,
_memory_lookup: memory_lookup,
})
}
pub async fn announce(&mut self, entry: StreamCatalogEntry) -> Result<()> {
let bytes = serde_json::to_vec(&entry)?;
self.sender.broadcast(Bytes::from(bytes)).await?;
Ok(())
}
pub async fn next_entry(&mut self) -> Result<Option<StreamCatalogEntry>> {
while let Some(event) = self.receiver.try_next().await? {
if let Event::Received(msg) = event {
if let Ok(entry) = serde_json::from_slice::<StreamCatalogEntry>(&msg.content) {
return Ok(Some(entry));
}
}
}
Ok(None)
}
/// Add peers after the gossip topic has already been joined. This enables
/// "nearby" discovery to continuously contribute new peers over time.
pub fn add_peers(&self, peers: Vec<EndpointAddr>) {
for peer in peers {
self._memory_lookup.add_endpoint_info(peer);
}
}
}

9
crates/ec-linux-iptv/Cargo.toml Normal file
View file

@ -0,0 +1,9 @@
[package]
name = "ec-linux-iptv"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
serde.workspace = true

292
crates/ec-linux-iptv/src/lib.rs Normal file
View file

@ -0,0 +1,292 @@
//! Linux IPTV (LinuxDVB) ingest scaffolding.
use anyhow::{anyhow, Result};
use serde::{Deserialize, Serialize};
use std::collections::BTreeSet;
use std::fs;
use std::fs::File;
use std::io::Read;
use std::path::{Path, PathBuf};
use std::process::Child;
#[cfg(target_os = "linux")]
use std::{process::Command, time::Duration};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LinuxDvbConfig {
pub adapter: u32,
pub frontend: u32,
pub dvr: u32,
pub tune_command: Option<Vec<String>>,
pub tune_timeout_ms: Option<u64>,
}
#[derive(Debug)]
pub struct LinuxDvbStream {
file: File,
_tuner: Option<Child>,
pub path: PathBuf,
}
impl Read for LinuxDvbStream {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
self.file.read(buf)
}
}
/// Open the Linux DVB DVR device. Optionally spawns a tune command (like dvbv5-zap).
#[cfg(target_os = "linux")]
pub fn open_stream(config: &LinuxDvbConfig) -> Result<LinuxDvbStream> {
let tuner = if let Some(cmd) = config.tune_command.clone() {
spawn_tune_command(cmd, config.tune_timeout_ms)?
} else {
None
};
let path = dvb_path(config.adapter, config.dvr);
let file =
File::open(&path).map_err(|err| anyhow!("failed to open {}: {err}", path.display()))?;
Ok(LinuxDvbStream {
file,
_tuner: tuner,
path,
})
}
#[cfg(not(target_os = "linux"))]
pub fn open_stream(_config: &LinuxDvbConfig) -> Result<LinuxDvbStream> {
Err(anyhow!("Linux DVB support requires Linux"))
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LinuxDvbAdapterInfo {
pub adapter: u32,
pub dvrs: Vec<u32>,
pub frontends: Vec<u32>,
}
pub fn list_adapters() -> Result<Vec<LinuxDvbAdapterInfo>> {
list_adapters_in(Path::new("/dev/dvb"))
}
fn list_adapters_in(root: &Path) -> Result<Vec<LinuxDvbAdapterInfo>> {
if !root.exists() {
return Ok(Vec::new());
}
let mut adapters = Vec::new();
for entry in fs::read_dir(root)? {
let entry = entry?;
if !entry.file_type()?.is_dir() {
continue;
}
let name = entry.file_name();
let name = name.to_string_lossy();
if !name.starts_with("adapter") {
continue;
}
let Ok(adapter) = name.trim_start_matches("adapter").parse::<u32>() else {
continue;
};
let path = entry.path();
let mut dvrs = BTreeSet::new();
let mut frontends = BTreeSet::new();
for dev in fs::read_dir(&path)? {
let dev = dev?;
let dev_name = dev.file_name().to_string_lossy().to_string();
if dev_name.starts_with("dvr") {
if let Ok(idx) = dev_name.trim_start_matches("dvr").parse::<u32>() {
dvrs.insert(idx);
}
} else if dev_name.starts_with("frontend") {
if let Ok(idx) = dev_name.trim_start_matches("frontend").parse::<u32>() {
frontends.insert(idx);
}
}
}
adapters.push(LinuxDvbAdapterInfo {
adapter,
dvrs: dvrs.into_iter().collect(),
frontends: frontends.into_iter().collect(),
});
}
adapters.sort_by_key(|info| info.adapter);
Ok(adapters)
}
pub fn channels_conf_candidates() -> Vec<PathBuf> {
// Prefer an explicit path for determinism and testability.
if let Ok(value) = std::env::var("EVERY_CHANNEL_DVB_CHANNELS_CONF") {
let value = value.trim();
if !value.is_empty() {
return vec![PathBuf::from(value)];
}
}
let home = std::env::var("HOME").ok().map(PathBuf::from);
let mut out = Vec::new();
if let Some(home) = home {
out.push(home.join(".dvb").join("channels.conf"));
out.push(home.join(".config").join("dvb").join("channels.conf"));
}
out.push(PathBuf::from("/etc/dvb/channels.conf"));
out
}
pub fn find_channels_conf() -> Option<PathBuf> {
for candidate in channels_conf_candidates() {
if candidate.exists() {
return Some(candidate);
}
}
None
}
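/// Parse a dvbv5-style `channels.conf`, returning the de-duplicated, sorted
/// channel names (the text before the first `:` on each non-comment line).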
pub fn parse_channels_conf(path: &Path) -> Result<Vec<String>> {
let text = fs::read_to_string(path)
.map_err(|err| anyhow!("failed to read {}: {err}", path.display()))?;
let mut channels = BTreeSet::new();
for line in text.lines() {
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
continue;
}
if let Some((name, _)) = line.split_once(':') {
let name = name.trim();
if !name.is_empty() {
channels.insert(name.to_string());
}
}
}
Ok(channels.into_iter().collect())
}
pub fn default_zap_tune_command(adapter: u32, channels_conf: &Path, channel: &str) -> Vec<String> {
vec![
"dvbv5-zap".to_string(),
"-a".to_string(),
adapter.to_string(),
"-c".to_string(),
channels_conf.display().to_string(),
"-r".to_string(),
channel.to_string(),
]
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn parse_channels_conf_extracts_names() {
let dir = std::env::temp_dir().join(format!("ec-channels-{}", std::process::id()));
let _ = fs::create_dir_all(&dir);
let path = dir.join("channels.conf");
fs::write(
&path,
"\
# comment
KQED:foo
KQED:duplicate
KCBS-HD:bar
",
)
.unwrap();
let channels = parse_channels_conf(&path).unwrap();
assert_eq!(channels, vec!["KCBS-HD".to_string(), "KQED".to_string()]);
let _ = fs::remove_file(&path);
}
#[test]
fn default_zap_command_contains_adapter_and_channel() {
let conf = Path::new("/tmp/channels.conf");
let cmd = default_zap_tune_command(2, conf, "KQED");
assert_eq!(cmd[0], "dvbv5-zap");
assert!(cmd.iter().any(|arg| arg == "2"));
assert!(cmd.iter().any(|arg| arg == "KQED"));
}
#[test]
fn find_channels_conf_prefers_env_override() {
let dir = std::env::temp_dir().join(format!("ec-channels-env-{}", std::process::id()));
let _ = fs::create_dir_all(&dir);
let path = dir.join("channels.conf");
fs::write(&path, "KQED:foo\n").unwrap();
let prev = std::env::var("EVERY_CHANNEL_DVB_CHANNELS_CONF").ok();
std::env::set_var(
"EVERY_CHANNEL_DVB_CHANNELS_CONF",
path.display().to_string(),
);
let found = find_channels_conf().unwrap();
assert_eq!(found, path);
match prev {
Some(value) => std::env::set_var("EVERY_CHANNEL_DVB_CHANNELS_CONF", value),
None => std::env::remove_var("EVERY_CHANNEL_DVB_CHANNELS_CONF"),
}
let _ = fs::remove_file(&path);
}
#[test]
fn list_adapters_parses_fake_dev_tree() {
let root = std::env::temp_dir().join(format!("ec-dvb-root-{}", std::process::id()));
let _ = fs::remove_dir_all(&root);
fs::create_dir_all(root.join("adapter1")).unwrap();
fs::create_dir_all(root.join("adapter0")).unwrap();
fs::write(root.join("adapter0").join("dvr0"), "").unwrap();
fs::write(root.join("adapter0").join("frontend0"), "").unwrap();
fs::write(root.join("adapter1").join("dvr2"), "").unwrap();
fs::write(root.join("adapter1").join("frontend0"), "").unwrap();
fs::write(root.join("adapter1").join("frontend1"), "").unwrap();
let list = list_adapters_in(&root).unwrap();
assert_eq!(list.len(), 2);
assert_eq!(list[0].adapter, 0);
assert_eq!(list[0].dvrs, vec![0]);
assert_eq!(list[0].frontends, vec![0]);
assert_eq!(list[1].adapter, 1);
assert_eq!(list[1].dvrs, vec![2]);
assert_eq!(list[1].frontends, vec![0, 1]);
let _ = fs::remove_dir_all(&root);
}
}
#[cfg(target_os = "linux")]
fn spawn_tune_command(command: Vec<String>, tune_timeout_ms: Option<u64>) -> Result<Option<Child>> {
if command.is_empty() {
return Ok(None);
}
let mut cmd = Command::new(&command[0]);
if command.len() > 1 {
cmd.args(&command[1..]);
}
let child = cmd.spawn()?;
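// Give the external tuner time to lock before the caller opens the DVR device.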
if let Some(timeout_ms) = tune_timeout_ms {
std::thread::sleep(Duration::from_millis(timeout_ms));
}
Ok(Some(child))
}
#[cfg(not(target_os = "linux"))]
fn spawn_tune_command(
_command: Vec<String>,
_tune_timeout_ms: Option<u64>,
) -> Result<Option<Child>> {
Ok(None)
}
fn dvb_path(adapter: u32, dvr: u32) -> PathBuf {
Path::new("/dev/dvb")
.join(format!("adapter{adapter}"))
.join(format!("dvr{dvr}"))
}

23
crates/ec-moq/Cargo.toml Normal file
View file

@ -0,0 +1,23 @@
[package]
name = "ec-moq"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
bytes = "1"
ec-core = { path = "../ec-core" }
ec-iroh = { path = "../ec-iroh" }
iroh = "0.96"
iroh-moq = { path = "../../third_party/iroh-live/iroh-moq" }
moq-lite = "0.10.1"
serde.workspace = true
serde_json.workspace = true
tokio = { version = "1", features = ["sync", "rt", "macros"] }
tracing.workspace = true
[dev-dependencies]
blake3.workspace = true
ec-crypto = { path = "../ec-crypto" }
hex = "0.4"

832
crates/ec-moq/src/lib.rs Normal file
View file

@ -0,0 +1,832 @@
//! Media over QUIC (MoQ) scaffolding.
use anyhow::{anyhow, Context, Result};
use bytes::Bytes;
use ec_core::Manifest;
use ec_iroh::DiscoveryConfig;
use iroh::{protocol::Router, Endpoint, EndpointAddr, SecretKey};
use moq_lite::{BroadcastConsumer, BroadcastProducer, Group, Track};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;
use std::time::Duration;
use tokio::sync::mpsc;
use tokio::task::JoinHandle;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrackName {
pub namespace: String,
pub name: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GroupId(pub u64);
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ObjectId(pub u64);
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ObjectMeta {
pub created_unix_ms: u64,
pub content_type: String,
pub size_bytes: u64,
pub timing: Option<TimingMeta>,
pub encryption: Option<EncryptionMeta>,
pub chunk_hash: Option<String>,
pub chunk_hash_alg: Option<String>,
pub chunk_proof: Option<Vec<String>>,
pub chunk_proof_alg: Option<String>,
pub manifest_id: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ObjectPayload {
pub meta: ObjectMeta,
pub data: Vec<u8>,
}
pub const DEFAULT_TRACK_NAME: &str = "chunks";
pub const DEFAULT_MANIFEST_TRACK_NAME: &str = "manifests";
pub trait Publisher {
fn publish_object(
&self,
track: &TrackName,
group: GroupId,
object: ObjectPayload,
) -> Result<()>;
}
pub trait Subscriber {
fn subscribe_track(&self, track: &TrackName) -> Result<()>;
}
pub trait Relay {
fn announce_track(&self, track: &TrackName) -> Result<()>;
fn cache_object(&self, track: &TrackName, group: GroupId, object: ObjectPayload) -> Result<()>;
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TimingMeta {
pub chunk_index: u64,
pub chunk_start_27mhz: u64,
pub chunk_duration_27mhz: u64,
pub utc_start_unix: Option<i64>,
pub sync_status: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EncryptionMeta {
pub alg: String,
pub key_id: String,
pub nonce_hex: String,
}
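/// A trivial relay that caches objects on disk under
/// `<root>/<namespace>/<track>/group-<G>/object-<O>/{data.bin,meta.json}`,
/// with path components sanitized to lowercase `[a-z0-9_-]`.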
#[derive(Debug, Clone)]
pub struct FileRelay {
root: PathBuf,
}
impl FileRelay {
pub fn new(root: impl Into<PathBuf>) -> Self {
Self { root: root.into() }
}
pub fn write_object(
&self,
track: &TrackName,
group: GroupId,
object_id: ObjectId,
object: &ObjectPayload,
) -> Result<()> {
let base = self.object_dir(track, group, object_id);
fs::create_dir_all(&base)
.with_context(|| format!("failed to create {}", base.display()))?;
let data_path = base.join("data.bin");
let meta_path = base.join("meta.json");
fs::write(&data_path, &object.data)
.with_context(|| format!("failed to write {}", data_path.display()))?;
fs::write(&meta_path, serde_json::to_vec_pretty(&object.meta)?)
.with_context(|| format!("failed to write {}", meta_path.display()))?;
Ok(())
}
fn object_dir(&self, track: &TrackName, group: GroupId, object_id: ObjectId) -> PathBuf {
let namespace = sanitize_component(&track.namespace);
let name = sanitize_component(&track.name);
self.root
.join(namespace)
.join(name)
.join(format!("group-{}", group.0))
.join(format!("object-{}", object_id.0))
}
}
impl Relay for FileRelay {
fn announce_track(&self, _track: &TrackName) -> Result<()> {
Ok(())
}
fn cache_object(&self, track: &TrackName, group: GroupId, object: ObjectPayload) -> Result<()> {
self.write_object(track, group, ObjectId(0), &object)
}
}
fn sanitize_component(value: &str) -> String {
value
.chars()
.map(|c| match c {
'a'..='z' | '0'..='9' | '-' | '_' => c,
'A'..='Z' => c.to_ascii_lowercase(),
_ => '_',
})
.collect()
}
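/// Encode an object frame: a 4-byte big-endian metadata length, the JSON
/// `ObjectMeta`, then the raw chunk bytes. `decode_object_frame` reverses the
/// layout.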
pub fn encode_object_frame(meta: &ObjectMeta, data: &[u8]) -> Result<Vec<u8>> {
let meta_bytes = serde_json::to_vec(meta)?;
let meta_len = u32::try_from(meta_bytes.len()).map_err(|_| anyhow!("object meta too large"))?;
let mut out = Vec::with_capacity(4 + meta_bytes.len() + data.len());
out.extend_from_slice(&meta_len.to_be_bytes());
out.extend_from_slice(&meta_bytes);
out.extend_from_slice(data);
Ok(out)
}
pub fn decode_object_frame(bytes: &[u8]) -> Result<ObjectPayload> {
if bytes.len() < 4 {
return Err(anyhow!("object frame too short"));
}
let meta_len = u32::from_be_bytes([bytes[0], bytes[1], bytes[2], bytes[3]]) as usize;
if bytes.len() < 4 + meta_len {
return Err(anyhow!("object frame missing metadata bytes"));
}
let meta = serde_json::from_slice(&bytes[4..4 + meta_len])?;
let data = bytes[4 + meta_len..].to_vec();
Ok(ObjectPayload { meta, data })
}
pub fn encode_manifest_frame(manifest: &Manifest) -> Result<Vec<u8>> {
Ok(serde_json::to_vec(manifest)?)
}
pub fn decode_manifest_frame(bytes: &[u8]) -> Result<Manifest> {
Ok(serde_json::from_slice(bytes)?)
}
#[derive(Debug)]
pub struct MoqNode {
endpoint: Endpoint,
router: Router,
moq: iroh_moq::Moq,
}
impl MoqNode {
pub async fn bind(secret: Option<SecretKey>) -> Result<Self> {
let discovery = DiscoveryConfig::from_env()?;
Self::bind_with_discovery(secret, discovery).await
}
pub async fn bind_with_discovery(
secret: Option<SecretKey>,
discovery: DiscoveryConfig,
) -> Result<Self> {
let endpoint = ec_iroh::build_endpoint(secret, discovery).await?;
let moq = iroh_moq::Moq::new(endpoint.clone());
let router = Router::builder(endpoint.clone())
.accept(iroh_moq::ALPN, moq.protocol_handler())
.spawn();
Ok(Self {
endpoint,
router,
moq,
})
}
pub fn endpoint(&self) -> &Endpoint {
&self.endpoint
}
pub fn endpoint_addr(&self) -> EndpointAddr {
self.router.endpoint().addr()
}
pub async fn publish_objects(
&self,
broadcast_name: impl Into<String>,
track_name: impl Into<String>,
) -> Result<MoqPublisher> {
let broadcast_name = broadcast_name.into();
let track_name = track_name.into();
let mut broadcast = BroadcastProducer::default();
let track = broadcast.create_track(Track {
name: track_name.clone(),
priority: 0,
});
self.moq
.publish(broadcast_name.clone(), broadcast.clone())
.await?;
Ok(MoqPublisher {
broadcast_name,
track_name,
broadcast,
track,
})
}
/// Publish a broadcast containing multiple tracks, all created before publishing.
///
/// This avoids subtle issues in some MoQ implementations where tracks added after the
/// initial publish are not reliably deliverable to subscribers.
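///
/// A publish-side sketch (broadcast and track names here are illustrative):
///
/// ```ignore
/// let mut set = node
///     .publish_track_set("station-1", vec!["chunks".into()], vec!["manifests".into()])
///     .await?;
/// set.publish_object("chunks", GroupId(0), object)?;
/// set.publish_manifest("manifests", 0, &manifest)?;
/// ```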
pub async fn publish_track_set(
&self,
broadcast_name: impl Into<String>,
object_tracks: Vec<String>,
manifest_tracks: Vec<String>,
) -> Result<MoqPublishSet> {
let broadcast_name = broadcast_name.into();
let mut broadcast = BroadcastProducer::default();
let mut object = HashMap::new();
for name in object_tracks {
let track = broadcast.create_track(Track {
name: name.clone(),
priority: 0,
});
object.insert(name, track);
}
let mut manifests = HashMap::new();
for name in manifest_tracks {
let track = broadcast.create_track(Track {
name: name.clone(),
priority: 0,
});
manifests.insert(name, track);
}
self.moq.publish(broadcast_name.clone(), broadcast).await?;
Ok(MoqPublishSet {
broadcast_name,
object,
manifests,
})
}
pub async fn subscribe_objects(
&self,
remote: EndpointAddr,
broadcast_name: impl Into<String>,
track_name: impl Into<String>,
) -> Result<MoqObjectStream> {
let broadcast_name = broadcast_name.into();
let track_name = track_name.into();
let mut session = self.moq.connect(remote).await?;
let broadcast = session.subscribe(&broadcast_name).await?;
let track = subscribe_track(&broadcast, &track_name)?;
MoqObjectStream::spawn(session, track)
}
pub async fn subscribe_manifests(
&self,
remote: EndpointAddr,
broadcast_name: impl Into<String>,
track_name: impl Into<String>,
) -> Result<MoqManifestStream> {
let broadcast_name = broadcast_name.into();
let track_name = track_name.into();
let mut session = self.moq.connect(remote).await?;
let broadcast = session.subscribe(&broadcast_name).await?;
let track = subscribe_track(&broadcast, &track_name)?;
MoqManifestStream::spawn(session, track)
}
}
pub struct MoqPublishSet {
broadcast_name: String,
object: HashMap<String, moq_lite::TrackProducer>,
manifests: HashMap<String, moq_lite::TrackProducer>,
}
impl MoqPublishSet {
pub fn publish_object(
&mut self,
track_name: &str,
group: GroupId,
object: ObjectPayload,
) -> Result<()> {
let Some(track) = self.object.get_mut(track_name) else {
return Err(anyhow!("unknown object track {}", track_name));
};
let Some(mut group_writer) = track.create_group(Group { sequence: group.0 }) else {
return Err(anyhow!("group {} already published", group.0));
};
let frame = encode_object_frame(&object.meta, &object.data)?;
group_writer.write_frame(Bytes::from(frame));
group_writer.close();
Ok(())
}
pub fn publish_manifest(
&mut self,
track_name: &str,
sequence: u64,
manifest: &Manifest,
) -> Result<()> {
let Some(track) = self.manifests.get_mut(track_name) else {
return Err(anyhow!("unknown manifest track {}", track_name));
};
let Some(mut group_writer) = track.create_group(Group { sequence }) else {
return Err(anyhow!("manifest group {} already published", sequence));
};
let frame = encode_manifest_frame(manifest)?;
group_writer.write_frame(Bytes::from(frame));
group_writer.close();
Ok(())
}
pub fn broadcast_name(&self) -> &str {
&self.broadcast_name
}
}
pub struct MoqPublisher {
broadcast_name: String,
track_name: String,
broadcast: BroadcastProducer,
track: moq_lite::TrackProducer,
}
impl MoqPublisher {
pub fn publish_object(&mut self, group: GroupId, object: ObjectPayload) -> Result<()> {
let Some(mut group_writer) = self.track.create_group(Group { sequence: group.0 }) else {
return Err(anyhow!("group {} already published", group.0));
};
let frame = encode_object_frame(&object.meta, &object.data)?;
group_writer.write_frame(Bytes::from(frame));
group_writer.close();
Ok(())
}
pub fn create_side_track(&mut self, track_name: impl Into<String>) -> Result<MoqSidePublisher> {
let track_name = track_name.into();
let track = self.broadcast.create_track(Track {
name: track_name.clone(),
priority: 0,
});
Ok(MoqSidePublisher { track_name, track })
}
pub fn create_manifest_track(
&mut self,
track_name: impl Into<String>,
) -> Result<MoqManifestPublisher> {
let track_name = track_name.into();
let track = self.broadcast.create_track(Track {
name: track_name.clone(),
priority: 0,
});
Ok(MoqManifestPublisher { track_name, track })
}
pub fn broadcast_name(&self) -> &str {
&self.broadcast_name
}
pub fn track_name(&self) -> &str {
&self.track_name
}
}
pub struct MoqSidePublisher {
track_name: String,
track: moq_lite::TrackProducer,
}
impl MoqSidePublisher {
pub fn publish_object(&mut self, group: GroupId, object: ObjectPayload) -> Result<()> {
let Some(mut group_writer) = self.track.create_group(Group { sequence: group.0 }) else {
return Err(anyhow!("group {} already published", group.0));
};
let frame = encode_object_frame(&object.meta, &object.data)?;
group_writer.write_frame(Bytes::from(frame));
group_writer.close();
Ok(())
}
pub fn track_name(&self) -> &str {
&self.track_name
}
}
pub struct MoqManifestPublisher {
track_name: String,
track: moq_lite::TrackProducer,
}
impl MoqManifestPublisher {
pub fn publish_manifest(&mut self, sequence: u64, manifest: &Manifest) -> Result<()> {
let Some(mut group_writer) = self.track.create_group(Group { sequence }) else {
return Err(anyhow!("manifest group {} already published", sequence));
};
let frame = encode_manifest_frame(manifest)?;
group_writer.write_frame(Bytes::from(frame));
group_writer.close();
Ok(())
}
pub fn track_name(&self) -> &str {
&self.track_name
}
}
pub struct MoqObjectStream {
receiver: mpsc::Receiver<ObjectPayload>,
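    // Held only to keep the demux task and the underlying MoQ session alive
    // for as long as this stream exists.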
_task: JoinHandle<()>,
_session: iroh_moq::MoqSession,
}
impl MoqObjectStream {
fn spawn(session: iroh_moq::MoqSession, mut track: moq_lite::TrackConsumer) -> Result<Self> {
let (tx, rx) = mpsc::channel(32);
let task = tokio::spawn(async move {
loop {
let next_group = track.next_group().await;
                let mut group = match next_group {
                    Ok(Some(group)) => group,
                    Ok(None) => break,
                    Err(err) => {
                        tracing::warn!("moq track error: {err:#}");
                        break;
                    }
                };
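                // One object per MoQ group: concatenate every frame in the
                // group, then decode the reassembled payload below.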
let mut buffer = Vec::new();
loop {
match group.read_frame().await {
Ok(Some(frame)) => buffer.extend_from_slice(&frame),
Ok(None) => break,
Err(err) => {
tracing::warn!("moq group error: {err:#}");
break;
}
}
}
if buffer.is_empty() {
continue;
}
match decode_object_frame(&buffer) {
Ok(object) => {
if tx.send(object).await.is_err() {
break;
}
}
Err(err) => {
tracing::warn!("failed to decode object frame: {err:#}");
}
}
}
});
Ok(Self {
receiver: rx,
_task: task,
_session: session,
})
}
pub async fn recv(&mut self) -> Option<ObjectPayload> {
self.receiver.recv().await
}
}
pub struct MoqManifestStream {
receiver: mpsc::Receiver<Manifest>,
_task: JoinHandle<()>,
_session: iroh_moq::MoqSession,
}
impl MoqManifestStream {
fn spawn(session: iroh_moq::MoqSession, mut track: moq_lite::TrackConsumer) -> Result<Self> {
let (tx, rx) = mpsc::channel(8);
let task = tokio::spawn(async move {
loop {
let next_group = track.next_group().await;
                let mut group = match next_group {
                    Ok(Some(group)) => group,
                    Ok(None) => break,
                    Err(err) => {
                        tracing::warn!("moq manifest track error: {err:#}");
                        break;
                    }
                };
let mut buffer = Vec::new();
loop {
match group.read_frame().await {
Ok(Some(frame)) => buffer.extend_from_slice(&frame),
Ok(None) => break,
Err(err) => {
tracing::warn!("moq manifest group error: {err:#}");
break;
}
}
}
if buffer.is_empty() {
continue;
}
match decode_manifest_frame(&buffer) {
Ok(manifest) => {
if tx.send(manifest).await.is_err() {
break;
}
}
Err(err) => {
tracing::warn!("failed to decode manifest frame: {err:#}");
}
}
}
});
Ok(Self {
receiver: rx,
_task: task,
_session: session,
})
}
pub async fn recv(&mut self) -> Option<Manifest> {
self.receiver.recv().await
}
}
fn subscribe_track(broadcast: &BroadcastConsumer, name: &str) -> Result<moq_lite::TrackConsumer> {
let track = broadcast.subscribe_track(&Track::new(name));
Ok(track)
}
#[derive(Debug, Clone)]
pub struct HlsWriter {
output_dir: PathBuf,
window: usize,
target_duration: f64,
init_filename: String,
segments: std::collections::VecDeque<HlsSegment>,
}
#[derive(Debug, Clone)]
struct HlsSegment {
index: u64,
duration: f64,
filename: String,
}
impl HlsWriter {
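    /// Create a CMAF-flavored writer that maintains `init.mp4`, numbered
    /// `segment_*.m4s` files, and a sliding-window `index.m3u8`.
    ///
    /// A minimal usage sketch (path, durations, and payloads are
    /// illustrative):
    /// ```ignore
    /// let mut writer = HlsWriter::new_cmaf("/tmp/hls-demo", 2.0, 6)?;
    /// writer.write_init_segment(b"init bytes")?;
    /// writer.write_segment(0, 2.0, b"segment bytes")?;
    /// ```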
pub fn new_cmaf(
output_dir: impl Into<PathBuf>,
target_duration: f64,
window: usize,
) -> Result<Self> {
// CMAF-only writer: init.mp4 + segment_*.m4s + HLS playlist as a local compatibility artifact.
let output_dir = output_dir.into();
fs::create_dir_all(&output_dir)
.with_context(|| format!("failed to create {}", output_dir.display()))?;
Ok(Self {
output_dir,
window: window.max(1),
target_duration,
init_filename: "init.mp4".to_string(),
segments: std::collections::VecDeque::new(),
})
}
pub fn write_init_segment(&mut self, data: &[u8]) -> Result<PathBuf> {
let path = self.output_dir.join(&self.init_filename);
fs::write(&path, data).with_context(|| format!("failed to write {}", path.display()))?;
self.write_playlist()?;
Ok(path)
}
pub fn write_segment(&mut self, index: u64, duration: f64, data: &[u8]) -> Result<PathBuf> {
let filename = format!("segment_{index:06}.m4s");
let path = self.output_dir.join(&filename);
fs::write(&path, data).with_context(|| format!("failed to write {}", path.display()))?;
self.segments.push_back(HlsSegment {
index,
duration,
filename,
});
while self.segments.len() > self.window {
self.segments.pop_front();
}
self.write_playlist()?;
Ok(path)
}
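    // Render the sliding-window media playlist. Example output (illustrative):
    //
    //   #EXTM3U
    //   #EXT-X-VERSION:7
    //   #EXT-X-INDEPENDENT-SEGMENTS
    //   #EXT-X-MAP:URI="init.mp4"
    //   #EXT-X-TARGETDURATION:2
    //   #EXT-X-MEDIA-SEQUENCE:0
    //   #EXTINF:2.000,
    //   segment_000000.m4s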
fn write_playlist(&self) -> Result<()> {
let mut lines = Vec::new();
lines.push("#EXTM3U".to_string());
lines.push("#EXT-X-VERSION:7".to_string());
lines.push("#EXT-X-INDEPENDENT-SEGMENTS".to_string());
lines.push(format!("#EXT-X-MAP:URI=\"{}\"", self.init_filename));
let target = self.target_duration.ceil().max(1.0) as u64;
lines.push(format!("#EXT-X-TARGETDURATION:{target}"));
if let Some(first) = self.segments.front() {
lines.push(format!("#EXT-X-MEDIA-SEQUENCE:{}", first.index));
}
for seg in &self.segments {
lines.push(format!("#EXTINF:{:.3},", seg.duration));
lines.push(seg.filename.clone());
}
let playlist_path = self.output_dir.join("index.m3u8");
fs::write(&playlist_path, lines.join("\n") + "\n")
.with_context(|| format!("failed to write {}", playlist_path.display()))?;
Ok(())
}
}
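/// Duration of a chunk in seconds, derived from its 27 MHz (MPEG system
/// clock) timing metadata when present; e.g. 54_000_000 ticks / 27_000_000 Hz
/// = 2.0 s. Falls back to `fallback` when timing is absent or zero.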
pub fn chunk_duration_secs(meta: &ObjectMeta, fallback: Duration) -> f64 {
if let Some(timing) = &meta.timing {
let secs = timing.chunk_duration_27mhz as f64 / 27_000_000.0;
if secs > 0.0 {
return secs;
}
}
fallback.as_secs_f64()
}
#[cfg(test)]
mod tests {
use super::*;
use std::env;
#[test]
fn sanitize_component_is_stable() {
assert_eq!(sanitize_component("Hello World!"), "hello_world_");
assert_eq!(sanitize_component("a-b_C9"), "a-b_c9");
}
#[test]
fn object_frame_roundtrip() {
let meta = ObjectMeta {
created_unix_ms: 1,
content_type: "application/octet-stream".to_string(),
size_bytes: 3,
timing: Some(TimingMeta {
chunk_index: 7,
chunk_start_27mhz: 0,
chunk_duration_27mhz: 54_000_000,
utc_start_unix: None,
sync_status: "synthetic".to_string(),
}),
encryption: None,
chunk_hash: Some("00".repeat(32)),
chunk_hash_alg: Some("blake3".to_string()),
chunk_proof: Some(vec!["00".repeat(32)]),
chunk_proof_alg: Some("merkle+blake3".to_string()),
manifest_id: Some("m".to_string()),
};
let data = b"abc".to_vec();
let frame = encode_object_frame(&meta, &data).unwrap();
let decoded = decode_object_frame(&frame).unwrap();
assert_eq!(decoded.data, data);
assert_eq!(decoded.meta.created_unix_ms, meta.created_unix_ms);
assert_eq!(
decoded.meta.timing.as_ref().unwrap().chunk_index,
meta.timing.as_ref().unwrap().chunk_index
);
assert_eq!(decoded.meta.manifest_id, meta.manifest_id);
}
#[test]
fn decode_rejects_short_frame() {
assert!(decode_object_frame(&[]).is_err());
assert!(decode_object_frame(&[0, 0, 0]).is_err());
}
#[test]
fn manifest_frame_roundtrip() {
let manifest = ec_core::Manifest {
body: ec_core::ManifestBody {
stream_id: ec_core::StreamId("s".to_string()),
epoch_id: "e".to_string(),
chunk_duration_ms: 2000,
total_chunks: 1,
chunk_start_index: 0,
encoder_profile_id: "p".to_string(),
merkle_root: "00".repeat(32),
created_unix_ms: 1,
metadata: Vec::new(),
chunk_hashes: vec!["11".repeat(32)],
variants: None,
},
manifest_id: "m".to_string(),
signatures: Vec::new(),
};
let bytes = encode_manifest_frame(&manifest).unwrap();
let decoded = decode_manifest_frame(&bytes).unwrap();
assert_eq!(decoded.manifest_id, "m");
assert_eq!(decoded.body.epoch_id, "e");
}
#[test]
fn manifest_frame_signed_roundtrip_verifies() {
let prev = env::var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY").ok();
env::set_var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", "11".repeat(32));
let keypair = ec_crypto::load_manifest_keypair_from_env()
.expect("load should not error")
.expect("keypair should exist");
let chunk_hashes = vec![blake3::hash(b"chunk0").to_hex().to_string()];
let merkle_root = ec_core::merkle_root_from_hashes(&chunk_hashes).unwrap();
let body = ec_core::ManifestBody {
stream_id: ec_core::StreamId("s".to_string()),
epoch_id: "e".to_string(),
chunk_duration_ms: 2000,
total_chunks: 1,
chunk_start_index: 0,
encoder_profile_id: "p".to_string(),
merkle_root,
created_unix_ms: 1,
metadata: Vec::new(),
chunk_hashes,
variants: None,
};
let manifest_id = body.manifest_id().unwrap();
let sig = ec_crypto::sign_manifest_id(&manifest_id, &keypair);
assert!(ec_crypto::verify_manifest_signature(&manifest_id, &sig));
let manifest = ec_core::Manifest {
body,
manifest_id: manifest_id.clone(),
signatures: vec![sig],
};
let bytes = encode_manifest_frame(&manifest).unwrap();
let decoded = decode_manifest_frame(&bytes).unwrap();
assert_eq!(decoded.manifest_id, manifest_id);
assert_eq!(decoded.signatures.len(), 1);
assert!(ec_crypto::verify_manifest_signature(
&decoded.manifest_id,
&decoded.signatures[0]
));
match prev {
Some(value) => env::set_var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", value),
None => env::remove_var("EVERY_CHANNEL_MANIFEST_SIGNING_KEY"),
}
}
#[test]
fn object_frame_encrypt_decrypt_roundtrip_and_hash_matches_plaintext() {
let stream_id = "ec/stream/v1/source/test/device-a/channel-b";
let chunk_index = 7u64;
let plaintext = b"hello every.channel";
let expected_hash = blake3::hash(plaintext).to_hex().to_string();
let enc = ec_crypto::encrypt_stream_data(stream_id, chunk_index, plaintext, None);
let meta = ObjectMeta {
created_unix_ms: 1,
content_type: "application/octet-stream".to_string(),
size_bytes: enc.ciphertext.len() as u64,
timing: Some(TimingMeta {
chunk_index,
chunk_start_27mhz: 0,
chunk_duration_27mhz: 54_000_000,
utc_start_unix: None,
sync_status: "synthetic".to_string(),
}),
encryption: Some(EncryptionMeta {
alg: enc.alg.to_string(),
key_id: stream_id.to_string(),
nonce_hex: hex::encode(enc.nonce),
}),
chunk_hash: Some(expected_hash.clone()),
chunk_hash_alg: Some("blake3".to_string()),
chunk_proof: None,
chunk_proof_alg: None,
manifest_id: None,
};
let frame = encode_object_frame(&meta, &enc.ciphertext).unwrap();
let decoded = decode_object_frame(&frame).unwrap();
let out = ec_crypto::decrypt_stream_data(stream_id, chunk_index, &decoded.data, None)
.expect("decrypt should succeed");
assert_eq!(out, plaintext);
assert_eq!(
decoded.meta.chunk_hash.as_deref(),
Some(expected_hash.as_str())
);
assert_eq!(
blake3::hash(&out).to_hex().to_string(),
decoded.meta.chunk_hash.unwrap()
);
}
}

35
crates/ec-node/Cargo.toml Normal file
View file

@ -0,0 +1,35 @@
[package]
name = "ec-node"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
blake3.workspace = true
clap.workspace = true
ec-core = { path = "../ec-core" }
ec-crypto = { path = "../ec-crypto" }
ec-direct = { path = "../ec-direct" }
ec-moq = { path = "../ec-moq" }
ec-chopper = { path = "../ec-chopper" }
ec-hdhomerun = { path = "../ec-hdhomerun" }
ec-iroh = { path = "../ec-iroh" }
ec-linux-iptv = { path = "../ec-linux-iptv" }
hex = "0.4"
iroh = "0.96"
just-webrtc = "0.2"
bytes = "1"
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
urlencoding = "2"
serde.workspace = true
serde_json.workspace = true
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
tokio-tungstenite = { version = "0.24", default-features = false, features = ["connect", "rustls-tls-webpki-roots"] }
futures-util = "0.3"
tracing.workspace = true
tracing-subscriber.workspace = true
[dev-dependencies]
headless_chrome = "1"
which = "6"

4193
crates/ec-node/src/main.rs Normal file

File diff suppressed because it is too large

View file

@ -0,0 +1,283 @@
use anyhow::{anyhow, Result};
use clap::ValueEnum;
use ec_chopper::{deterministic_h264_profile, ffmpeg_profile_args};
use ec_core::SourceId;
use ec_hdhomerun::{find_lineup_entry_by_name, find_lineup_entry_by_number};
use ec_linux_iptv::LinuxDvbConfig;
use std::io::Read;
use std::process::{Child, Command, Stdio};
use std::thread;
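/// A byte-oriented MPEG-TS source: implementations hand back a blocking
/// reader for the transport stream plus a stable `SourceId` describing where
/// the bytes came from.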
pub trait StreamSource: Send {
fn open_stream(&self) -> Result<Box<dyn Read + Send>>;
fn source_id(&self) -> SourceId;
}
#[derive(Debug, Clone)]
pub struct HdhrSource {
pub host: Option<String>,
pub device_id: Option<String>,
pub channel: Option<String>,
pub name: Option<String>,
pub prefer_mdns: bool,
}
impl StreamSource for HdhrSource {
fn open_stream(&self) -> Result<Box<dyn Read + Send>> {
let device = resolve_hdhr_device(self)?;
let lineup = ec_hdhomerun::fetch_lineup(&device)?;
let entry = if let Some(channel) = &self.channel {
find_lineup_entry_by_number(&lineup, channel)
.or_else(|| find_lineup_entry_by_name(&lineup, channel))
.ok_or_else(|| anyhow!("channel not found: {channel}"))?
} else if let Some(name) = &self.name {
find_lineup_entry_by_name(&lineup, name)
.ok_or_else(|| anyhow!("channel not found: {name}"))?
} else {
return Err(anyhow!("--channel or --name required for hdhr"));
};
Ok(Box::new(ec_hdhomerun::open_stream_entry(entry, None)?))
}
fn source_id(&self) -> SourceId {
let device_id = self.device_id.clone().or_else(|| self.host.clone());
SourceId {
kind: "hdhr".to_string(),
device_id,
channel: self.channel.clone().or_else(|| self.name.clone()),
}
}
}
fn resolve_hdhr_device(source: &HdhrSource) -> Result<ec_hdhomerun::HdhomerunDevice> {
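    // Resolution order: explicit host, then "<device-id>.local", then the
    // shared "hdhomerun.local" name when mDNS is preferred, and finally UDP
    // discovery (using the last device returned).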
if let Some(host) = &source.host {
return ec_hdhomerun::discover_from_host(host);
}
if let Some(device_id) = &source.device_id {
let host = format!("{device_id}.local");
return ec_hdhomerun::discover_from_host(&host);
}
if source.prefer_mdns {
if let Ok(device) = ec_hdhomerun::discover_from_host("hdhomerun.local") {
return Ok(device);
}
}
let mut devices = ec_hdhomerun::discover()?;
devices
.pop()
.ok_or_else(|| anyhow!("no HDHomeRun devices found"))
}
#[derive(Debug, Clone)]
pub struct LinuxDvbSource {
pub adapter: u32,
pub dvr: u32,
pub tune_cmd: Vec<String>,
pub tune_wait_ms: Option<u64>,
}
impl StreamSource for LinuxDvbSource {
fn open_stream(&self) -> Result<Box<dyn Read + Send>> {
let config = LinuxDvbConfig {
adapter: self.adapter,
frontend: 0,
dvr: self.dvr,
tune_command: if self.tune_cmd.is_empty() {
None
} else {
Some(self.tune_cmd.clone())
},
tune_timeout_ms: self.tune_wait_ms,
};
Ok(Box::new(ec_linux_iptv::open_stream(&config)?))
}
fn source_id(&self) -> SourceId {
SourceId {
kind: "linux-dvb".to_string(),
device_id: Some(format!("adapter{}:dvr{}", self.adapter, self.dvr)),
channel: None,
}
}
}
#[derive(Debug, Clone)]
pub struct TsSource {
pub input: String,
}
impl StreamSource for TsSource {
fn open_stream(&self) -> Result<Box<dyn Read + Send>> {
if self.input.starts_with("http://") || self.input.starts_with("https://") {
Ok(Box::new(ec_hdhomerun::open_stream_url(&self.input, None)?))
} else {
Ok(Box::new(std::fs::File::open(&self.input)?))
}
}
fn source_id(&self) -> SourceId {
SourceId {
kind: "ts".to_string(),
device_id: None,
channel: None,
}
}
}
#[derive(Debug, Clone, Copy, Default, ValueEnum)]
pub enum HlsMode {
    #[default]
    Passthrough,
    Remux,
    Transcode,
}
#[derive(Debug, Clone)]
pub struct HlsSource {
pub url: String,
pub mode: HlsMode,
}
impl StreamSource for HlsSource {
fn open_stream(&self) -> Result<Box<dyn Read + Send>> {
let mut cmd = Command::new("ffmpeg");
cmd.arg("-hide_banner")
.arg("-loglevel")
.arg("error")
.arg("-nostdin")
.arg("-i")
.arg(&self.url);
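        // Passthrough copies streams untouched; Remux also copies but
        // regenerates PTS (+genpts); Transcode re-encodes with the
        // deterministic H.264 profile.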
match self.mode {
HlsMode::Passthrough => {
cmd.arg("-c").arg("copy");
}
HlsMode::Remux => {
cmd.arg("-fflags").arg("+genpts").arg("-c").arg("copy");
}
HlsMode::Transcode => {
let profile = deterministic_h264_profile();
for arg in ffmpeg_profile_args(&profile) {
cmd.arg(arg);
}
}
}
cmd.arg("-f")
.arg("mpegts")
.arg("pipe:1")
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut child = cmd
.spawn()
.map_err(|err| anyhow!("failed to spawn ffmpeg: {err}"))?;
let stdout = child
.stdout
.take()
.ok_or_else(|| anyhow!("ffmpeg stdout unavailable"))?;
Ok(Box::new(FfmpegChildStream { child, stdout }))
}
fn source_id(&self) -> SourceId {
SourceId {
kind: "hls".to_string(),
device_id: None,
channel: Some(self.url.clone()),
}
}
}
struct FfmpegChildStream {
child: Child,
stdout: std::process::ChildStdout,
}
impl Read for FfmpegChildStream {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
self.stdout.read(buf)
}
}
impl Drop for FfmpegChildStream {
    fn drop(&mut self) {
        // Kill ffmpeg, then reap it so we do not leave a zombie process.
        let _ = self.child.kill();
        let _ = self.child.wait();
    }
}
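/// Pipe a raw MPEG-TS reader through ffmpeg using the deterministic H.264
/// profile, returning the re-encoded TS as a new reader. A dedicated thread
/// feeds ffmpeg's stdin so that reading the output does not starve the
/// encoder's input.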
pub fn deterministic_transcode_stream(
reader: Box<dyn Read + Send>,
) -> Result<Box<dyn Read + Send>> {
let profile = deterministic_h264_profile();
let mut cmd = Command::new("ffmpeg");
cmd.arg("-hide_banner")
.arg("-loglevel")
.arg("error")
.arg("-nostdin")
.arg("-i")
.arg("pipe:0");
for arg in ffmpeg_profile_args(&profile) {
cmd.arg(arg);
}
cmd.arg("-f")
.arg("mpegts")
.arg("pipe:1")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut child = cmd
.spawn()
.map_err(|err| anyhow!("failed to spawn ffmpeg: {err}"))?;
let mut stdin = child
.stdin
.take()
.ok_or_else(|| anyhow!("ffmpeg stdin unavailable"))?;
let stdout = child
.stdout
.take()
.ok_or_else(|| anyhow!("ffmpeg stdout unavailable"))?;
let writer = thread::spawn(move || {
let mut reader = reader;
let _ = std::io::copy(&mut reader, &mut stdin);
});
Ok(Box::new(FfmpegTranscodeStream {
child,
stdout,
writer: Some(writer),
}))
}
struct FfmpegTranscodeStream {
child: Child,
stdout: std::process::ChildStdout,
writer: Option<thread::JoinHandle<()>>,
}
impl Read for FfmpegTranscodeStream {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
self.stdout.read(buf)
}
}
impl Drop for FfmpegTranscodeStream {
    fn drop(&mut self) {
        // Kill and reap ffmpeg, then join the stdin feeder thread.
        let _ = self.child.kill();
        let _ = self.child.wait();
        if let Some(writer) = self.writer.take() {
            let _ = writer.join();
        }
    }
}

View file

@ -0,0 +1,308 @@
use std::io::{BufRead, BufReader};
use std::path::Path;
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
fn ec_node_path() -> std::path::PathBuf {
if let Ok(value) = std::env::var("EC_NODE_BIN") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec_node") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec-node") {
return value.into();
}
let exe = std::env::current_exe().expect("current_exe");
let debug_dir = exe
.parent()
.and_then(|p| p.parent())
.expect("expected target/debug/deps");
debug_dir.join("ec-node")
}
fn wait_for_line_prefix(
lines: &mut dyn Iterator<Item = std::io::Result<String>>,
prefix: &str,
timeout: Duration,
) -> Option<String> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
match lines.next() {
Some(Ok(line)) => {
if let Some(rest) = line.strip_prefix(prefix) {
return Some(rest.trim().to_string());
}
}
Some(Err(_)) => continue,
None => break,
}
}
None
}
fn blake3_hex(path: &Path) -> anyhow::Result<String> {
let bytes = std::fs::read(path)?;
Ok(blake3::hash(&bytes).to_hex().to_string())
}
fn concat_init_and_segment(init: &Path, seg: &Path, out: &Path) -> anyhow::Result<()> {
let init_bytes = std::fs::read(init)?;
let seg_bytes = std::fs::read(seg)?;
let mut bytes = Vec::with_capacity(init_bytes.len() + seg_bytes.len());
bytes.extend_from_slice(&init_bytes);
bytes.extend_from_slice(&seg_bytes);
std::fs::write(out, bytes)?;
Ok(())
}
fn first_video_frame_keyframe_flag(mp4: &Path) -> anyhow::Result<u32> {
if Command::new("ffprobe")
.arg("-version")
.stdout(Stdio::null())
.stderr(Stdio::null())
.status()
.is_err()
{
        // Some environments lack ffprobe; return 1 so the keyframe assertion
        // passes vacuously rather than failing the whole test.
        return Ok(1);
}
// Read only the first decoded frame record. For fMP4 this works reliably if we concat init+seg.
let out = Command::new("ffprobe")
.arg("-v")
.arg("error")
.arg("-select_streams")
.arg("v:0")
.arg("-show_frames")
.arg("-read_intervals")
.arg("%+#1")
.arg("-show_entries")
.arg("frame=key_frame")
.arg("-of")
.arg("csv=p=0")
.arg(mp4)
.output()?;
if !out.status.success() {
anyhow::bail!("ffprobe failed: {}", String::from_utf8_lossy(&out.stderr));
}
let s = String::from_utf8_lossy(&out.stdout);
let first = s.lines().next().unwrap_or("").trim();
// Some ffprobe builds may append extra columns (e.g. side data) even with restricted
// `-show_entries`. We only care about the first token.
let token = first.split(',').next().unwrap_or("").trim();
let flag: u32 = token
.parse()
.map_err(|_| anyhow::anyhow!("unexpected ffprobe output: {first:?}"))?;
Ok(flag)
}
fn write_deterministic_ts(out_path: &Path) -> anyhow::Result<()> {
// Deterministic synthetic A/V source: 30fps CFR with a fixed sine audio tone.
// Output: MPEG-TS, constrained to a stable keyframe cadence (g=60 -> 2s GOP).
let status = Command::new("ffmpeg")
.arg("-hide_banner")
.arg("-loglevel")
.arg("error")
.arg("-nostdin")
.arg("-y")
.arg("-f")
.arg("lavfi")
.arg("-i")
.arg("testsrc2=size=1280x720:rate=30")
.arg("-f")
.arg("lavfi")
.arg("-i")
.arg("sine=frequency=1000:sample_rate=48000")
.arg("-t")
.arg("10")
.arg("-map")
.arg("0:v:0")
.arg("-map")
.arg("1:a:0")
.arg("-c:v")
.arg("libx264")
.arg("-pix_fmt")
.arg("yuv420p")
.arg("-g")
.arg("60")
.arg("-keyint_min")
.arg("60")
.arg("-sc_threshold")
.arg("0")
.arg("-bf")
.arg("0")
.arg("-threads")
.arg("1")
.arg("-fflags")
.arg("+bitexact")
.arg("-flags:v")
.arg("+bitexact")
.arg("-c:a")
.arg("aac")
.arg("-b:a")
.arg("128k")
.arg("-ac")
.arg("2")
.arg("-ar")
.arg("48000")
.arg("-flags:a")
.arg("+bitexact")
.arg("-f")
.arg("mpegts")
.arg(out_path)
.status()?;
if !status.success() {
anyhow::bail!("ffmpeg synthetic TS generation failed with {status}");
}
Ok(())
}
fn run_ladder(ec_node: &Path, input_ts: &Path, out_dir: &Path) -> anyhow::Result<()> {
let signing_key = "11".repeat(32);
let network_secret = "22".repeat(32);
let stream_id = "every.channel/determinism/cmaf-ladder";
let broadcast_name = "every.channel/determinism/cmaf-ladder";
let mut cmd = Command::new(ec_node);
cmd.env("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", &signing_key)
.arg("moq-publish")
.arg("--publish-manifests")
.arg("--encode")
.arg("cmaf")
.arg("--cmaf-ladder")
.arg("hd3")
.arg("--epoch-chunks")
.arg("1")
.arg("--max-chunks")
.arg("3")
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(stream_id)
.arg("--broadcast-name")
.arg(broadcast_name)
.arg("--track-name")
.arg("chunks")
.arg("--init-track")
.arg("init")
.arg("--network-secret")
.arg(&network_secret)
.arg("--chunk-dir")
.arg(out_dir)
.arg("--startup-delay-ms")
.arg("0")
.arg("ts")
.arg(input_ts)
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
// This will run until --max-chunks is reached, then exit.
let mut child = cmd.spawn()?;
let stdout = child.stdout.take().expect("publisher stdout missing");
let mut lines = BufReader::new(stdout).lines();
let _remote = wait_for_line_prefix(&mut lines, "moq endpoint addr: ", Duration::from_secs(10))
.ok_or_else(|| anyhow::anyhow!("publisher did not print endpoint addr"))?;
let status = child.wait()?;
if !status.success() {
anyhow::bail!("publisher failed: {status}");
}
Ok(())
}
#[test]
#[ignore]
fn deterministic_cmaf_ladder_outputs_match_across_runs() {
let ec_node = ec_node_path();
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let tmp = std::env::temp_dir().join(format!("ec-determinism-cmaf-ladder-{ts}"));
let _ = std::fs::create_dir_all(&tmp);
let input_ts = tmp.join("input.ts");
write_deterministic_ts(&input_ts).expect("write deterministic TS");
let run1 = tmp.join("run1");
let run2 = tmp.join("run2");
let _ = std::fs::remove_dir_all(&run1);
let _ = std::fs::remove_dir_all(&run2);
std::fs::create_dir_all(&run1).unwrap();
std::fs::create_dir_all(&run2).unwrap();
run_ladder(&ec_node, &input_ts, &run1).expect("run ladder 1");
run_ladder(&ec_node, &input_ts, &run2).expect("run ladder 2");
for variant in ["1080p", "720p", "480p"] {
let v1 = run1.join("cmaf-ladder").join(variant);
let v2 = run2.join("cmaf-ladder").join(variant);
let init1 = v1.join("init.mp4");
let init2 = v2.join("init.mp4");
assert!(
init1.exists() && init2.exists(),
"missing init for {variant}"
);
assert_eq!(
blake3_hex(&init1).unwrap(),
blake3_hex(&init2).unwrap(),
"init differs for {variant}"
);
for idx in 0..3 {
let s1 = v1.join(format!("segment_{idx:06}.m4s"));
let s2 = v2.join(format!("segment_{idx:06}.m4s"));
assert!(
s1.exists() && s2.exists(),
"missing segment {idx} for {variant}"
);
assert_eq!(
blake3_hex(&s1).unwrap(),
blake3_hex(&s2).unwrap(),
"segment {idx} differs for {variant}"
);
}
}
}
#[test]
#[ignore]
fn cmaf_ladder_segments_start_with_keyframes() {
let ec_node = ec_node_path();
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let tmp = std::env::temp_dir().join(format!("ec-determinism-cmaf-ladder-kf-{ts}"));
let _ = std::fs::create_dir_all(&tmp);
let input_ts = tmp.join("input.ts");
write_deterministic_ts(&input_ts).expect("write deterministic TS");
let run = tmp.join("run");
let _ = std::fs::remove_dir_all(&run);
std::fs::create_dir_all(&run).unwrap();
run_ladder(&ec_node, &input_ts, &run).expect("run ladder");
for variant in ["1080p", "720p", "480p"] {
let v = run.join("cmaf-ladder").join(variant);
let init = v.join("init.mp4");
assert!(init.exists(), "missing init for {variant}");
for idx in 0..3 {
let seg = v.join(format!("segment_{idx:06}.m4s"));
assert!(seg.exists(), "missing segment {idx} for {variant}");
let stitched = tmp.join(format!("stitched-{variant}-{idx:06}.mp4"));
concat_init_and_segment(&init, &seg, &stitched).unwrap();
let keyflag = first_video_frame_keyframe_flag(&stitched).unwrap();
assert_eq!(
keyflag, 1,
"segment {idx} not keyframe-aligned for {variant}"
);
}
}
}

View file

@ -0,0 +1,231 @@
use std::io::{BufRead, BufReader, Read};
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
const TS_PACKET_SIZE: usize = 188;
fn env_required(key: &str) -> Option<String> {
std::env::var(key)
.ok()
.map(|v| v.trim().to_string())
.filter(|v| !v.is_empty())
}
fn ec_node_path() -> std::path::PathBuf {
if let Ok(value) = std::env::var("EC_NODE_BIN") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec_node") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec-node") {
return value.into();
}
let exe = std::env::current_exe().expect("current_exe");
let debug_dir = exe
.parent()
.and_then(|p| p.parent())
.expect("expected target/debug/deps");
debug_dir.join("ec-node")
}
fn wait_for_line_prefix(
lines: &mut dyn Iterator<Item = std::io::Result<String>>,
prefix: &str,
timeout: Duration,
) -> Option<String> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
match lines.next() {
Some(Ok(line)) => {
if let Some(rest) = line.strip_prefix(prefix) {
return Some(rest.trim().to_string());
}
}
Some(Err(_)) => continue,
None => break,
}
}
None
}
fn write_short_ts_recording(
host: &str,
channel: &str,
out_path: &std::path::Path,
) -> anyhow::Result<()> {
// Use lineup to resolve name -> number, but capture from the provided host.
// (OrbStack/Linux may not resolve the lineup URL's mDNS hostname.)
let device = ec_hdhomerun::discover_from_host(host)?;
let lineup = ec_hdhomerun::fetch_lineup(&device)?;
let entry = ec_hdhomerun::find_lineup_entry_by_number(&lineup, channel)
.or_else(|| ec_hdhomerun::find_lineup_entry_by_name(&lineup, channel))
.ok_or_else(|| anyhow::anyhow!("channel not found in lineup: {channel}"))?;
let guide_number = entry.channel.number.as_deref().unwrap_or(channel);
let capture_url = format!("http://{host}:5004/auto/v{guide_number}");
// Capture a short TS sample directly from the HDHR.
// Retry a few times to handle "no tuner available" 5xx responses.
let mut last_err: Option<anyhow::Error> = None;
for attempt in 0..10 {
match ec_hdhomerun::open_stream_url(&capture_url, Some(14)) {
Ok(mut stream) => {
let mut file = std::fs::File::create(out_path)?;
std::io::copy(&mut stream, &mut file)?;
last_err = None;
break;
}
Err(err) => {
last_err = Some(err);
std::thread::sleep(Duration::from_millis(400 * (attempt + 1) as u64));
continue;
}
}
}
if let Some(err) = last_err {
return Err(err);
}
let mut file = std::fs::File::open(out_path)?;
let mut bytes = Vec::new();
file.read_to_end(&mut bytes)?;
let mut len = bytes.len();
let rem = len % TS_PACKET_SIZE;
if rem != 0 {
len -= rem;
std::fs::write(out_path, &bytes[..len])?;
}
    if len < TS_PACKET_SIZE * 200 {
anyhow::bail!("recorded TS too small ({} bytes) from HDHR {}", len, host);
}
Ok(())
}
#[test]
#[ignore]
fn e2e_cmaf_ladder_one_publisher_three_subscribers_verify_manifests() {
let host = match env_required("EVERY_CHANNEL_E2E_HDHR_HOST") {
Some(v) => v,
None => return, // skip
};
let channel = match env_required("EVERY_CHANNEL_E2E_HDHR_CHANNEL") {
Some(v) => v,
None => return, // skip
};
let ec_node = ec_node_path();
// Keep secrets deterministic for reproducibility.
let signing_key = "11".repeat(32);
let network_secret = "22".repeat(32);
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let stream_id = format!("every.channel/e2e/cmaf-ladder/{ts}");
let broadcast_name = stream_id.clone();
let tmp = std::env::temp_dir().join(format!("ec-e2e-cmaf-ladder-{ts}"));
let _ = std::fs::create_dir_all(&tmp);
let input_ts = tmp.join("input.ts");
write_short_ts_recording(&host, &channel, &input_ts).expect("failed to record TS from HDHR");
let mut publisher = Command::new(&ec_node);
publisher
.env("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", &signing_key)
.arg("moq-publish")
.arg("--publish-manifests")
.arg("--encode")
.arg("cmaf")
.arg("--cmaf-ladder")
.arg("hd3")
.arg("--epoch-chunks")
.arg("1")
.arg("--max-chunks")
.arg("3")
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(&stream_id)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg("chunks")
.arg("--init-track")
.arg("init")
.arg("--manifest-track")
.arg("manifests")
.arg("--network-secret")
.arg(&network_secret)
.arg("--chunk-dir")
.arg(tmp.join("pub-chunks"))
.arg("--startup-delay-ms")
.arg("4000")
.arg("ts")
        .arg(&input_ts)
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut pub_child = publisher.spawn().expect("spawn publisher");
let pub_stdout = pub_child.stdout.take().expect("publisher stdout missing");
let mut pub_lines = BufReader::new(pub_stdout).lines();
let remote = wait_for_line_prefix(
&mut pub_lines,
"moq endpoint addr: ",
Duration::from_secs(10),
)
.expect("publisher did not print endpoint addr");
let variants = ["1080p", "720p", "480p"];
let mut subscribers = Vec::new();
for variant in variants {
let out_dir = tmp.join(format!("sub-{variant}"));
let mut sub = Command::new(&ec_node);
sub.arg("moq-subscribe")
.arg("--remote")
.arg(&remote)
.arg("--remote-manifests")
.arg(&remote)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg(format!("chunks/{variant}"))
.arg("--subscribe-manifests")
.arg("--require-manifest")
.arg("--manifest-track")
.arg("manifests")
.arg("--container")
.arg("cmaf")
.arg("--subscribe-init")
.arg("--init-track")
.arg(format!("init/{variant}"))
.arg("--raw-cmaf")
.arg("--stop-after")
.arg("2")
.arg("--network-secret")
.arg(&network_secret)
.arg("--output-dir")
.arg(&out_dir)
.stdout(Stdio::inherit())
.stderr(Stdio::inherit());
subscribers.push((
variant.to_string(),
out_dir,
sub.spawn().expect("spawn subscriber"),
));
}
for (variant, out_dir, mut child) in subscribers {
let status = child.wait().expect("wait subscriber");
assert!(status.success(), "subscriber {variant} failed: {status}");
let init = out_dir.join("init.mp4");
assert!(init.exists(), "subscriber {variant} missing init.mp4");
let seg0 = out_dir.join("segment_000000.m4s");
assert!(seg0.exists(), "subscriber {variant} missing first segment");
}
let _ = pub_child.kill();
}

View file

@ -0,0 +1,211 @@
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
fn env_required(key: &str) -> Option<String> {
std::env::var(key)
.ok()
.map(|v| v.trim().to_string())
.filter(|v| !v.is_empty())
}
fn looks_drm(value: &str) -> bool {
let value = value.to_lowercase();
value.contains("drm")
|| value.contains("encrypted")
|| value.contains("protected")
|| value.contains("copy")
|| value.contains("widevine")
}
fn autodiscover_hdhr_host_and_channel() -> Option<(String, String)> {
let devices = ec_hdhomerun::discover().ok()?;
let device = devices.into_iter().next()?;
let lineup = ec_hdhomerun::fetch_lineup(&device).ok()?;
let entry = lineup.iter().find(|e| {
let tag_drm = e.tags.iter().any(|t| looks_drm(t));
let raw_drm = e
.raw
.as_object()
.map(|obj| {
obj.iter()
.any(|(k, v)| looks_drm(k) || looks_drm(&v.to_string()))
})
.unwrap_or(false);
!tag_drm && !raw_drm && e.channel.number.as_deref().unwrap_or("").trim() != ""
})?;
let host = device.ip.clone();
    let channel = entry
        .channel
        .number
        .clone()
        .unwrap_or_else(|| entry.channel.name.clone());
Some((host, channel))
}
fn ec_node_path() -> std::path::PathBuf {
if let Ok(value) = std::env::var("EC_NODE_BIN") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec_node") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec-node") {
return value.into();
}
// Fallback: assume a standard cargo target layout.
let exe = std::env::current_exe().expect("current_exe");
let debug_dir = exe
.parent()
.and_then(|p| p.parent())
.expect("expected target/debug/deps");
    debug_dir.join("ec-node")
}
#[test]
#[ignore]
fn e2e_hdhr_publish_then_subscribe_with_manifest_and_encryption() {
let host = env_required("EVERY_CHANNEL_E2E_HDHR_HOST");
let channel = env_required("EVERY_CHANNEL_E2E_HDHR_CHANNEL");
let (host, channel) = match (host, channel) {
(Some(host), Some(channel)) => (host, channel),
_ => match autodiscover_hdhr_host_and_channel() {
Some(v) => v,
None => return, // skip
},
};
let ec_node = ec_node_path();
// Keep secrets deterministic for reproducibility.
let signing_key = "11".repeat(32);
let network_secret = "22".repeat(32);
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let broadcast_name = format!("every.channel/e2e/{ts}");
let tmp = std::env::temp_dir().join(format!("ec-e2e-hdhr-{ts}"));
let publish_chunks = tmp.join("publish-chunks");
let subscribe_out = tmp.join("subscribe-out");
let mut publisher = Command::new(&ec_node);
publisher
.env("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", &signing_key)
.arg("moq-publish")
.arg("--publish-manifests")
.arg("--epoch-chunks")
.arg("1")
.arg("--max-chunks")
.arg("8")
.arg("--chunk-ms")
.arg("2000")
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--network-secret")
.arg(&network_secret)
.arg("--chunk-dir")
.arg(&publish_chunks)
.arg("hdhr")
.arg("--host")
.arg(&host)
.arg("--channel")
.arg(&channel)
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut child = publisher.spawn().expect("failed to spawn publisher");
let stdout = child.stdout.take().expect("publisher stdout missing");
let mut lines = BufReader::new(stdout).lines();
let mut remote: Option<String> = None;
let mut track: Option<String> = None;
let deadline = Instant::now() + Duration::from_secs(10);
while Instant::now() < deadline {
let line = match lines.next() {
Some(Ok(line)) => line,
Some(Err(_)) => continue,
None => break,
};
if let Some(rest) = line.strip_prefix("moq endpoint addr: ") {
remote = Some(rest.trim().to_string());
} else if let Some(rest) = line.strip_prefix("moq track: ") {
track = Some(rest.trim().to_string());
}
if remote.is_some() && track.is_some() {
break;
}
}
let remote = remote.expect("publisher did not print endpoint addr in time");
let track = track.expect("publisher did not print track in time");
let mut subscriber = Command::new(&ec_node);
subscriber
.arg("moq-subscribe")
.arg("--remote")
.arg(&remote)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg(&track)
.arg("--subscribe-manifests")
.arg("--require-manifest")
.arg("--max-invalid-chunks")
.arg("0")
.arg("--stop-after")
.arg("3")
.arg("--output-dir")
.arg(&subscribe_out)
.arg("--chunk-ms")
.arg("2000")
.arg("--network-secret")
.arg(&network_secret)
.stdout(Stdio::inherit())
.stderr(Stdio::inherit());
let mut sub_child = subscriber.spawn().expect("failed to spawn subscriber");
let start = Instant::now();
loop {
if let Ok(Some(status)) = sub_child.try_wait() {
assert!(status.success(), "subscriber exited with {status}");
break;
}
if start.elapsed() > Duration::from_secs(30) {
let _ = sub_child.kill();
panic!("subscriber timed out");
}
std::thread::sleep(Duration::from_millis(200));
}
// Publisher should exit after max chunks; don't hang forever.
let start = Instant::now();
loop {
if let Ok(Some(status)) = child.try_wait() {
assert!(status.success(), "publisher exited with {status}");
break;
}
if start.elapsed() > Duration::from_secs(30) {
let _ = child.kill();
panic!("publisher timed out");
}
std::thread::sleep(Duration::from_millis(200));
}
let playlist = subscribe_out.join("index.m3u8");
assert!(
playlist.exists(),
"missing playlist at {}",
playlist.display()
);
let segments = std::fs::read_dir(&subscribe_out)
.unwrap()
.filter_map(|e| e.ok())
.filter(|e| e.file_name().to_string_lossy().starts_with("segment_"))
.count();
assert!(segments >= 1, "expected at least one segment");
}

View file

@ -0,0 +1,305 @@
use std::io::{BufRead, BufReader, Read, Write};
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
const TS_PACKET_SIZE: usize = 188;
fn env_required(key: &str) -> Option<String> {
std::env::var(key)
.ok()
.map(|v| v.trim().to_string())
.filter(|v| !v.is_empty())
}
fn ec_node_path() -> std::path::PathBuf {
if let Ok(value) = std::env::var("EC_NODE_BIN") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec_node") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec-node") {
return value.into();
}
let exe = std::env::current_exe().expect("current_exe");
let debug_dir = exe
.parent()
.and_then(|p| p.parent())
.expect("expected target/debug/deps");
debug_dir.join("ec-node")
}
fn wait_for_line_prefix(
lines: &mut dyn Iterator<Item = std::io::Result<String>>,
prefix: &str,
timeout: Duration,
) -> Option<String> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
match lines.next() {
Some(Ok(line)) => {
if let Some(rest) = line.strip_prefix(prefix) {
return Some(rest.trim().to_string());
}
}
Some(Err(_)) => continue,
None => break,
}
}
None
}
fn write_short_ts_recording(
host: &str,
channel: &str,
out_path: &std::path::Path,
) -> anyhow::Result<()> {
// Use the lineup's stream URL so we get the correct host/port (often :5004).
// HDHomeRun supports `duration=...` on the stream URL on many models.
// We also cap by time/bytes to avoid hanging if duration is ignored.
let device = ec_hdhomerun::discover_from_host(host)?;
let lineup = ec_hdhomerun::fetch_lineup(&device)?;
let entry = ec_hdhomerun::find_lineup_entry_by_number(&lineup, channel)
.or_else(|| ec_hdhomerun::find_lineup_entry_by_name(&lineup, channel))
.ok_or_else(|| anyhow::anyhow!("channel not found in lineup: {channel}"))?;
    // Tuner allocation can transiently fail (503) if another client is using
    // all tuners. Retry with a bounded attempt count so the test cannot hang;
    // we only need a short capture.
    let mut attempts = 0;
    let mut stream = loop {
        match ec_hdhomerun::open_stream_entry(entry, Some(8)) {
            Ok(stream) => break stream,
            Err(err) => {
                attempts += 1;
                if attempts < 20 && format!("{err:#}").contains("503") {
                    std::thread::sleep(Duration::from_millis(500));
                    continue;
                }
                return Err(err);
            }
        }
    };
let mut file = std::fs::File::create(out_path)?;
let start = Instant::now();
let mut bytes = 0usize;
let mut buf = [0u8; 64 * 1024];
loop {
let n = stream.read(&mut buf)?;
if n == 0 {
break;
}
file.write_all(&buf[..n])?;
bytes += n;
if bytes >= 8 * 1024 * 1024 {
break;
}
if start.elapsed() > Duration::from_secs(6) {
break;
}
}
file.flush()?;
// Ensure the TS file ends on a packet boundary.
let len = file.metadata()?.len();
let rem = (len as usize) % TS_PACKET_SIZE;
if rem != 0 {
file.set_len(len - rem as u64)?;
bytes = (len as usize) - rem;
}
    if bytes < TS_PACKET_SIZE * 20 {
anyhow::bail!("recorded TS too small ({} bytes) from HDHR {}", bytes, host);
}
Ok(())
}
#[test]
#[ignore]
fn e2e_split_sources_manifests_from_one_peer_objects_from_another() {
let host = match env_required("EVERY_CHANNEL_E2E_HDHR_HOST") {
Some(v) => v,
None => return, // skip
};
let channel = match env_required("EVERY_CHANNEL_E2E_HDHR_CHANNEL") {
Some(v) => v,
None => return, // skip
};
let ec_node = ec_node_path();
// Keep secrets deterministic for reproducibility.
let signing_key = "11".repeat(32);
let network_secret = "22".repeat(32);
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let stream_id = format!("every.channel/e2e/mesh/{ts}");
let broadcast_name = stream_id.clone();
let tmp = std::env::temp_dir().join(format!("ec-e2e-mesh-split-{ts}"));
let _ = std::fs::create_dir_all(&tmp);
let input_ts = tmp.join("input.ts");
let manifest_chunks = tmp.join("chunks-manifests");
let object_chunks = tmp.join("chunks-objects");
let subscribe_out = tmp.join("subscribe-out");
write_short_ts_recording(&host, &channel, &input_ts).expect("failed to record TS from HDHR");
// Publisher A: leader/signer, publishes manifests only.
// Give subscribers time to connect before ingest starts.
let mut pub_manifests = Command::new(&ec_node);
pub_manifests
.env("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", &signing_key)
.arg("moq-publish")
.arg("--publish-manifests")
.arg("--publish-chunks")
.arg("false")
.arg("--epoch-chunks")
.arg("1")
.arg("--max-chunks")
.arg("6")
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(&stream_id)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg("noop")
.arg("--manifest-track")
.arg("manifests")
.arg("--network-secret")
.arg(&network_secret)
.arg("--chunk-dir")
.arg(&manifest_chunks)
.arg("--startup-delay-ms")
.arg("5000")
.arg("ts")
        .arg(&input_ts)
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut pub_a = pub_manifests.spawn().expect("spawn manifest publisher");
let a_stdout = pub_a
.stdout
.take()
.expect("manifest publisher stdout missing");
let mut a_lines = BufReader::new(a_stdout).lines();
let remote_manifests =
wait_for_line_prefix(&mut a_lines, "moq endpoint addr: ", Duration::from_secs(10))
.expect("manifest publisher did not print endpoint addr");
// Publisher B: relay/data, publishes chunk objects only.
// Delay longer than the manifest publisher so the subscriber can receive manifests first.
let mut pub_objects = Command::new(&ec_node);
pub_objects
.arg("moq-publish")
.arg("--publish-chunks")
.arg("true")
.arg("--max-chunks")
.arg("6")
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(&stream_id)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg("objects")
.arg("--network-secret")
.arg(&network_secret)
.arg("--chunk-dir")
.arg(&object_chunks)
.arg("--startup-delay-ms")
.arg("9000")
.arg("ts")
        .arg(&input_ts)
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut pub_b = pub_objects.spawn().expect("spawn object publisher");
let b_stdout = pub_b
.stdout
.take()
.expect("object publisher stdout missing");
let mut b_lines = BufReader::new(b_stdout).lines();
let remote_objects =
wait_for_line_prefix(&mut b_lines, "moq endpoint addr: ", Duration::from_secs(10))
.expect("object publisher did not print endpoint addr");
// Subscriber: stitch objects from B with manifests from A.
let mut subscriber = Command::new(&ec_node);
subscriber
.arg("moq-subscribe")
.arg("--remote")
.arg(&remote_objects)
.arg("--remote-manifests")
.arg(&remote_manifests)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg("objects")
.arg("--manifest-track")
.arg("manifests")
.arg("--subscribe-manifests")
.arg("--require-manifest")
.arg("--max-invalid-chunks")
.arg("0")
.arg("--stop-after")
.arg("2")
.arg("--output-dir")
.arg(&subscribe_out)
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(&stream_id)
.arg("--network-secret")
.arg(&network_secret)
.stdout(Stdio::inherit())
.stderr(Stdio::inherit());
let mut sub_child = subscriber.spawn().expect("failed to spawn subscriber");
let start = Instant::now();
loop {
if let Ok(Some(status)) = sub_child.try_wait() {
assert!(status.success(), "subscriber exited with {status}");
break;
}
if start.elapsed() > Duration::from_secs(30) {
let _ = sub_child.kill();
panic!("subscriber timed out");
}
std::thread::sleep(Duration::from_millis(200));
}
// Ensure publishers exit after max chunks.
for child in [&mut pub_a, &mut pub_b] {
let start = Instant::now();
loop {
if let Ok(Some(status)) = child.try_wait() {
assert!(status.success(), "publisher exited with {status}");
break;
}
if start.elapsed() > Duration::from_secs(30) {
let _ = child.kill();
panic!("publisher timed out");
}
std::thread::sleep(Duration::from_millis(200));
}
}
let playlist = subscribe_out.join("index.m3u8");
assert!(
playlist.exists(),
"missing playlist at {}",
playlist.display()
);
let segments = std::fs::read_dir(&subscribe_out)
.unwrap()
.filter_map(|e| e.ok())
.filter(|e| e.file_name().to_string_lossy().starts_with("segment_"))
.count();
assert!(segments >= 1, "expected at least one segment");
}

View file

@ -0,0 +1,345 @@
use std::io::{BufRead, BufReader, Read};
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
const TS_PACKET_SIZE: usize = 188;
fn env_required(key: &str) -> Option<String> {
std::env::var(key)
.ok()
.map(|v| v.trim().to_string())
.filter(|v| !v.is_empty())
}
fn looks_drm(value: &str) -> bool {
let value = value.to_lowercase();
value.contains("drm")
|| value.contains("encrypted")
|| value.contains("protected")
|| value.contains("copy")
|| value.contains("widevine")
}
fn autodiscover_hdhr_host_and_channel() -> Option<(String, String)> {
let devices = ec_hdhomerun::discover().ok()?;
let device = devices.into_iter().next()?;
let lineup = ec_hdhomerun::fetch_lineup(&device).ok()?;
let entry = lineup.iter().find(|e| {
// Prefer a likely-clear channel to avoid false negatives in E2E.
let tag_drm = e.tags.iter().any(|t| looks_drm(t));
let raw_drm = e
.raw
.as_object()
.map(|obj| {
obj.iter()
.any(|(k, v)| looks_drm(k) || looks_drm(&v.to_string()))
})
.unwrap_or(false);
!tag_drm && !raw_drm && e.channel.number.as_deref().unwrap_or("").trim() != ""
})?;
let host = device.ip.clone();
    let channel = entry
        .channel
        .number
        .clone()
        .unwrap_or_else(|| entry.channel.name.clone());
Some((host, channel))
}
fn ec_node_path() -> std::path::PathBuf {
if let Ok(value) = std::env::var("EC_NODE_BIN") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec_node") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec-node") {
return value.into();
}
let exe = std::env::current_exe().expect("current_exe");
let debug_dir = exe
.parent()
.and_then(|p| p.parent())
.expect("expected target/debug/deps");
debug_dir.join("ec-node")
}
fn wait_for_line_prefix(
lines: &mut dyn Iterator<Item = std::io::Result<String>>,
prefix: &str,
timeout: Duration,
) -> Option<String> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
match lines.next() {
Some(Ok(line)) => {
if let Some(rest) = line.strip_prefix(prefix) {
return Some(rest.trim().to_string());
}
}
Some(Err(_)) => continue,
None => break,
}
}
None
}
fn write_short_ts_recording(
host: &str,
channel: &str,
out_path: &std::path::Path,
) -> anyhow::Result<()> {
// Use lineup to resolve name -> number, but capture from the provided host.
// (OrbStack/Linux may not resolve the lineup URL's mDNS hostname.)
let device = ec_hdhomerun::discover_from_host(host)?;
let lineup = ec_hdhomerun::fetch_lineup(&device)?;
let entry = ec_hdhomerun::find_lineup_entry_by_number(&lineup, channel)
.or_else(|| ec_hdhomerun::find_lineup_entry_by_name(&lineup, channel))
.ok_or_else(|| anyhow::anyhow!("channel not found in lineup: {channel}"))?;
let guide_number = entry.channel.number.as_deref().unwrap_or(channel);
let capture_url = format!("http://{host}:5004/auto/v{guide_number}");
// Capture a short TS sample directly from the HDHR.
// Retry a few times to handle "no tuner available" 5xx responses.
let mut last_err: Option<anyhow::Error> = None;
for attempt in 0..10 {
match ec_hdhomerun::open_stream_url(&capture_url, Some(12)) {
Ok(mut stream) => {
let mut file = std::fs::File::create(out_path)?;
std::io::copy(&mut stream, &mut file)?;
last_err = None;
break;
}
Err(err) => {
last_err = Some(err);
std::thread::sleep(Duration::from_millis(400 * (attempt + 1) as u64));
continue;
}
}
}
if let Some(err) = last_err {
return Err(err);
}
let mut file = std::fs::File::open(out_path)?;
let mut bytes = Vec::new();
file.read_to_end(&mut bytes)?;
let mut len = bytes.len();
// Ensure the TS file ends on a packet boundary.
let rem = len % TS_PACKET_SIZE;
if rem != 0 {
len -= rem;
std::fs::write(out_path, &bytes[..len])?;
}
    if len < TS_PACKET_SIZE * 200 {
anyhow::bail!("recorded TS too small ({} bytes) from HDHR {}", len, host);
}
Ok(())
}
#[test]
#[ignore]
fn e2e_split_sources_cmaf_init_from_objects_peer_segments_verified_by_manifests_peer() {
let host = env_required("EVERY_CHANNEL_E2E_HDHR_HOST");
let channel = env_required("EVERY_CHANNEL_E2E_HDHR_CHANNEL");
let (host, channel) = match (host, channel) {
(Some(host), Some(channel)) => (host, channel),
_ => match autodiscover_hdhr_host_and_channel() {
Some(v) => v,
None => return, // skip
},
};
let ec_node = ec_node_path();
// Keep secrets deterministic for reproducibility.
let signing_key = "11".repeat(32);
let network_secret = "22".repeat(32);
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let stream_id = format!("every.channel/e2e/mesh-cmaf/{ts}");
let broadcast_name = stream_id.clone();
let tmp = std::env::temp_dir().join(format!("ec-e2e-mesh-split-cmaf-{ts}"));
let _ = std::fs::create_dir_all(&tmp);
let input_ts = tmp.join("input.ts");
let manifest_chunks = tmp.join("chunks-manifests");
let object_chunks = tmp.join("chunks-objects");
let subscribe_out = tmp.join("subscribe-out");
write_short_ts_recording(&host, &channel, &input_ts).expect("failed to record TS from HDHR");
// Publisher A: leader/signer, publishes manifests only (for CMAF segments).
let mut pub_manifests = Command::new(&ec_node);
pub_manifests
.env("EVERY_CHANNEL_MANIFEST_SIGNING_KEY", &signing_key)
.arg("moq-publish")
.arg("--publish-manifests")
.arg("--publish-chunks")
.arg("false")
.arg("--encode")
.arg("cmaf")
.arg("--epoch-chunks")
.arg("1")
.arg("--max-chunks")
.arg("4")
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(&stream_id)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg("noop")
.arg("--manifest-track")
.arg("manifests")
.arg("--network-secret")
.arg(&network_secret)
.arg("--chunk-dir")
.arg(&manifest_chunks)
.arg("--startup-delay-ms")
.arg("6000")
.arg("ts")
        .arg(&input_ts)
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut pub_a = pub_manifests.spawn().expect("spawn manifest publisher");
let a_stdout = pub_a
.stdout
.take()
.expect("manifest publisher stdout missing");
let mut a_lines = BufReader::new(a_stdout).lines();
let remote_manifests =
wait_for_line_prefix(&mut a_lines, "moq endpoint addr: ", Duration::from_secs(10))
.expect("manifest publisher did not print endpoint addr");
// Publisher B: publishes init + segments as objects only.
let mut pub_objects = Command::new(&ec_node);
pub_objects
.arg("moq-publish")
.arg("--publish-chunks")
.arg("true")
.arg("--encode")
.arg("cmaf")
.arg("--init-track")
.arg("init")
.arg("--max-chunks")
.arg("4")
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(&stream_id)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg("objects")
.arg("--network-secret")
.arg(&network_secret)
.arg("--chunk-dir")
.arg(&object_chunks)
.arg("--startup-delay-ms")
.arg("10000")
.arg("ts")
        .arg(&input_ts)
.stdout(Stdio::piped())
.stderr(Stdio::inherit());
let mut pub_b = pub_objects.spawn().expect("spawn object publisher");
let b_stdout = pub_b
.stdout
.take()
.expect("object publisher stdout missing");
let mut b_lines = BufReader::new(b_stdout).lines();
let remote_objects =
wait_for_line_prefix(&mut b_lines, "moq endpoint addr: ", Duration::from_secs(10))
.expect("object publisher did not print endpoint addr");
// Subscriber: init+segments from B, manifests from A.
let mut subscriber = Command::new(&ec_node);
subscriber
.arg("moq-subscribe")
.arg("--remote")
.arg(&remote_objects)
.arg("--remote-manifests")
.arg(&remote_manifests)
.arg("--broadcast-name")
.arg(&broadcast_name)
.arg("--track-name")
.arg("objects")
.arg("--manifest-track")
.arg("manifests")
.arg("--subscribe-manifests")
.arg("--require-manifest")
.arg("--max-invalid-chunks")
.arg("0")
.arg("--container")
.arg("cmaf")
.arg("--subscribe-init")
.arg("--init-track")
.arg("init")
.arg("--stop-after")
.arg("2")
.arg("--output-dir")
.arg(&subscribe_out)
.arg("--chunk-ms")
.arg("2000")
.arg("--stream-id")
.arg(&stream_id)
.arg("--network-secret")
.arg(&network_secret)
.stdout(Stdio::inherit())
.stderr(Stdio::inherit());
let mut sub_child = subscriber.spawn().expect("failed to spawn subscriber");
let start = Instant::now();
loop {
if let Ok(Some(status)) = sub_child.try_wait() {
assert!(status.success(), "subscriber exited with {status}");
break;
}
if start.elapsed() > Duration::from_secs(60) {
let _ = sub_child.kill();
panic!("subscriber timed out");
}
std::thread::sleep(Duration::from_millis(200));
}
// Ensure publishers exit after max chunks.
for child in [&mut pub_a, &mut pub_b] {
let start = Instant::now();
loop {
if let Ok(Some(status)) = child.try_wait() {
assert!(status.success(), "publisher exited with {status}");
break;
}
if start.elapsed() > Duration::from_secs(90) {
let _ = child.kill();
panic!("publisher timed out");
}
std::thread::sleep(Duration::from_millis(200));
}
}
let playlist = subscribe_out.join("index.m3u8");
assert!(
playlist.exists(),
"missing playlist at {}",
playlist.display()
);
let init = subscribe_out.join("init.mp4");
assert!(init.exists(), "missing init segment at {}", init.display());
let segments = std::fs::read_dir(&subscribe_out)
.unwrap()
.filter_map(|e| e.ok())
.filter(|e| e.file_name().to_string_lossy().ends_with(".m4s"))
.count();
assert!(segments >= 1, "expected at least one .m4s segment");
}

View file

@ -0,0 +1,314 @@
use std::ffi::OsStr;
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
fn which(cmd: &str) -> Option<std::path::PathBuf> {
    which::which(cmd).ok()
}
fn chrome_path() -> Option<std::path::PathBuf> {
// Prefer the standard macOS Chrome app bundle.
let mac =
std::path::PathBuf::from("/Applications/Google Chrome.app/Contents/MacOS/Google Chrome");
if mac.exists() {
return Some(mac);
}
which("google-chrome")
.or_else(|| which("google-chrome-stable"))
.or_else(|| which("chromium"))
}
fn ec_node_path() -> std::path::PathBuf {
if let Ok(value) = std::env::var("EC_NODE_BIN") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec_node") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec-node") {
return value.into();
}
let exe = std::env::current_exe().expect("current_exe");
let debug_dir = exe
.parent()
.and_then(|p| p.parent())
.expect("expected target/debug/deps");
debug_dir.join("ec-node")
}
fn read_line_with_timeout(
lines: &mut dyn Iterator<Item = std::io::Result<String>>,
timeout: Duration,
) -> Option<String> {
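    // Best-effort timeout: `lines.next()` blocks, so the deadline is only checked between lines.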
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
match lines.next() {
Some(Ok(line)) => {
let line = line.trim().to_string();
if !line.is_empty() {
return Some(line);
}
}
Some(Err(_)) => continue,
None => break,
}
}
None
}
fn generate_ts_fixture(out: &std::path::Path) -> anyhow::Result<()> {
// Deterministic-ish fixture: single-threaded x264, fixed GOP, sine audio.
let status = Command::new("ffmpeg")
.arg("-hide_banner")
.arg("-loglevel")
.arg("error")
.arg("-nostdin")
.arg("-y")
.arg("-f")
.arg("lavfi")
.arg("-i")
.arg("testsrc2=size=1280x720:rate=30")
.arg("-f")
.arg("lavfi")
.arg("-i")
.arg("sine=frequency=1000:sample_rate=48000")
.arg("-t")
.arg("12")
.arg("-map")
.arg("0:v:0")
.arg("-map")
.arg("1:a:0")
.arg("-c:v")
.arg("libx264")
.arg("-pix_fmt")
.arg("yuv420p")
.arg("-g")
.arg("60")
.arg("-keyint_min")
.arg("60")
.arg("-sc_threshold")
.arg("0")
.arg("-bf")
.arg("0")
.arg("-threads")
.arg("1")
.arg("-c:a")
.arg("aac")
.arg("-b:a")
.arg("128k")
.arg("-ac")
.arg("2")
.arg("-ar")
.arg("48000")
.arg("-f")
.arg("mpegts")
.arg(out)
.status()?;
if !status.success() {
anyhow::bail!("ffmpeg fixture generation failed with {status}");
}
Ok(())
}
fn click_button_by_text(tab: &headless_chrome::Tab, text: &str) -> anyhow::Result<()> {
let js = format!(
r#"(function() {{
let btns = Array.from(document.querySelectorAll('button'));
let btn = btns.find(b => (b.innerText || '').trim() === {t});
if (!btn) return false;
btn.click();
return true;
}})();"#,
t = serde_json::to_string(text).unwrap()
);
let v = tab.evaluate(&js, false)?;
let ok = v.value.and_then(|v| v.as_bool()).unwrap_or(false);
if !ok {
anyhow::bail!("button not found: {text}");
}
Ok(())
}
fn fill_input_by_placeholder(
tab: &headless_chrome::Tab,
placeholder: &str,
value: &str,
) -> anyhow::Result<()> {
let js = format!(
r#"(function() {{
let input = document.querySelector('input[placeholder={p}]');
if (!input) return false;
input.focus();
input.value = {v};
input.dispatchEvent(new Event('input', {{ bubbles: true }}));
input.dispatchEvent(new Event('change', {{ bubbles: true }}));
return true;
}})();"#,
p = serde_json::to_string(placeholder).unwrap(),
v = serde_json::to_string(value).unwrap()
);
let v = tab.evaluate(&js, false)?;
let ok = v.value.and_then(|v| v.as_bool()).unwrap_or(false);
if !ok {
anyhow::bail!("input not found for placeholder: {placeholder}");
}
Ok(())
}
fn get_reply_link(tab: &headless_chrome::Tab) -> anyhow::Result<Option<String>> {
// Read the last readonly input inside the add menu; this is where we render the reply code.
let js = r#"(function() {
let menu = document.querySelector('.source-menu');
if (!menu) return null;
let inputs = Array.from(menu.querySelectorAll('input.source-menu-input[readonly]'));
if (!inputs.length) return null;
return inputs[inputs.length - 1].value || null;
})();"#;
let v = tab.evaluate(js, false)?;
Ok(v.value.and_then(|v| v.as_str().map(|s| s.to_string())))
}
fn wait_for_text(
tab: &headless_chrome::Tab,
needle: &str,
timeout: Duration,
) -> anyhow::Result<()> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let js = format!(
r#"(function() {{
return document.body && (document.body.innerText || '').includes({n});
}})();"#,
n = serde_json::to_string(needle).unwrap()
);
let v = tab.evaluate(&js, false)?;
if v.value.and_then(|v| v.as_bool()).unwrap_or(false) {
return Ok(());
}
std::thread::sleep(Duration::from_millis(200));
}
anyhow::bail!("timed out waiting for text: {needle}");
}
fn wait_for_blob_video(tab: &headless_chrome::Tab, timeout: Duration) -> anyhow::Result<()> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let js = r#"(function() {
let v = document.querySelector('video');
if (!v) return false;
if (typeof v.src !== 'string') return false;
return v.src.startsWith('blob:');
})();"#;
let v = tab.evaluate(js, false)?;
if v.value.and_then(|v| v.as_bool()).unwrap_or(false) {
return Ok(());
}
std::thread::sleep(Duration::from_millis(200));
}
anyhow::bail!("timed out waiting for video blob src");
}
#[test]
#[ignore]
fn e2e_remote_website_connects_to_local_direct_publisher() -> anyhow::Result<()> {
if which("ffmpeg").is_none() {
return Ok(()); // skip
}
let chrome = match chrome_path() {
Some(p) => p,
None => return Ok(()), // skip
};
let site_url = std::env::var("EVERY_CHANNEL_SITE_URL")
.unwrap_or_else(|_| "https://every.channel/".to_string());
let ec_node = ec_node_path();
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let tmp = std::env::temp_dir().join(format!("ec-e2e-remote-website-direct-{ts}"));
let _ = std::fs::create_dir_all(&tmp);
let input_ts = tmp.join("input.ts");
let chunk_dir = tmp.join("chunks");
generate_ts_fixture(&input_ts)?;
let mut pub_child = Command::new(&ec_node)
.arg("direct-publish")
.arg("--chunk-dir")
.arg(&chunk_dir)
.arg("--chunk-ms")
.arg("2000")
.arg("--max-segments")
.arg("6")
.arg("ts")
.arg(&input_ts)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::inherit())
.spawn()?;
let stdout = pub_child.stdout.take().expect("publisher stdout missing");
let mut lines = BufReader::new(stdout).lines();
let offer = read_line_with_timeout(&mut lines, Duration::from_secs(60))
.ok_or_else(|| anyhow::anyhow!("publisher did not print offer link in time"))?;
if !offer.starts_with("every.channel://direct?c=") {
anyhow::bail!("unexpected offer link: {offer}");
}
let launch_options = headless_chrome::LaunchOptionsBuilder::default()
.path(Some(chrome))
.headless(true)
.args(vec![
OsStr::new("--autoplay-policy=no-user-gesture-required"),
OsStr::new("--mute-audio"),
])
.build()
.unwrap();
let browser = headless_chrome::Browser::new(launch_options)?;
let tab = browser.new_tab()?;
tab.navigate_to(&site_url)?;
tab.wait_until_navigated()?;
// Open the add menu via class selector (stable).
tab.wait_for_element("button.add-source")?.click()?;
tab.wait_for_element(".source-menu")?;
// Use Watch a link flow.
fill_input_by_placeholder(&tab, "every.channel://watch?...", &offer)?;
click_button_by_text(&tab, "Parse link")?;
click_button_by_text(&tab, "Tune in")?;
// Poll for reply link.
let deadline = Instant::now() + Duration::from_secs(60);
let reply = loop {
if let Some(v) = get_reply_link(&tab)? {
if v.starts_with("every.channel://direct?c=") {
break v;
}
}
if Instant::now() > deadline {
anyhow::bail!("timed out waiting for reply link in UI");
}
std::thread::sleep(Duration::from_millis(200));
};
// Feed reply back to publisher.
let stdin = pub_child.stdin.as_mut().expect("publisher stdin missing");
writeln!(stdin, "{reply}")?;
stdin.flush()?;
// Website should go Live and show a blob video source.
wait_for_text(&tab, "Live", Duration::from_secs(60))?;
wait_for_blob_video(&tab, Duration::from_secs(60))?;
// Cleanup.
let _ = pub_child.kill();
let _ = pub_child.wait();
let _ = std::fs::remove_dir_all(&tmp);
Ok(())
}
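Both the offer printed by the publisher and the reply produced by the UI share the every.channel://direct?c= prefix asserted above. A tiny sketch of splitting off the payload (hypothetical helper; the payload itself is treated as opaque here):

// Returns the opaque payload after `c=` for a direct link, or None for anything else.
fn direct_link_payload(link: &str) -> Option<&str> {
    link.trim().strip_prefix("every.channel://direct?c=")
}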

View file

@ -0,0 +1,243 @@
use std::ffi::OsStr;
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
fn which(cmd: &str) -> Option<std::path::PathBuf> {
which::which(cmd).ok()
}
fn chrome_path() -> Option<std::path::PathBuf> {
// Prefer the standard macOS Chrome app bundle.
let mac =
std::path::PathBuf::from("/Applications/Google Chrome.app/Contents/MacOS/Google Chrome");
if mac.exists() {
return Some(mac);
}
which("google-chrome")
.or_else(|| which("google-chrome-stable"))
.or_else(|| which("chromium"))
}
fn ec_node_path() -> std::path::PathBuf {
if let Ok(value) = std::env::var("EC_NODE_BIN") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec_node") {
return value.into();
}
if let Ok(value) = std::env::var("CARGO_BIN_EXE_ec-node") {
return value.into();
}
let exe = std::env::current_exe().expect("current_exe");
let debug_dir = exe
.parent()
.and_then(|p| p.parent())
.expect("expected target/debug/deps");
debug_dir.join("ec-node")
}
fn generate_ts_fixture(out: &std::path::Path) -> anyhow::Result<()> {
// Deterministic-ish fixture: single-threaded x264, fixed GOP, sine audio.
let status = Command::new("ffmpeg")
.arg("-hide_banner")
.arg("-loglevel")
.arg("error")
.arg("-nostdin")
.arg("-y")
.arg("-f")
.arg("lavfi")
.arg("-i")
.arg("testsrc2=size=1280x720:rate=30")
.arg("-f")
.arg("lavfi")
.arg("-i")
.arg("sine=frequency=1000:sample_rate=48000")
.arg("-t")
.arg("12")
.arg("-map")
.arg("0:v:0")
.arg("-map")
.arg("1:a:0")
.arg("-c:v")
.arg("libx264")
.arg("-pix_fmt")
.arg("yuv420p")
.arg("-g")
.arg("60")
.arg("-keyint_min")
.arg("60")
.arg("-sc_threshold")
.arg("0")
.arg("-bf")
.arg("0")
.arg("-threads")
.arg("1")
.arg("-c:a")
.arg("aac")
.arg("-b:a")
.arg("128k")
.arg("-ac")
.arg("2")
.arg("-ar")
.arg("48000")
.arg("-f")
.arg("mpegts")
.arg(out)
.status()?;
if !status.success() {
anyhow::bail!("ffmpeg fixture generation failed with {status}");
}
Ok(())
}
fn click_css(tab: &headless_chrome::Tab, css: &str) -> anyhow::Result<()> {
tab.wait_for_element(css)?.click()?;
Ok(())
}
fn wait_for_text(
tab: &headless_chrome::Tab,
needle: &str,
timeout: Duration,
) -> anyhow::Result<()> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let js = format!(
r#"(function() {{
return document.body && (document.body.innerText || '').includes({n});
}})();"#,
n = serde_json::to_string(needle).unwrap()
);
let v = tab.evaluate(&js, false)?;
if v.value.and_then(|v| v.as_bool()).unwrap_or(false) {
return Ok(());
}
std::thread::sleep(Duration::from_millis(200));
}
anyhow::bail!("timed out waiting for text: {needle}");
}
fn wait_for_blob_video(tab: &headless_chrome::Tab, timeout: Duration) -> anyhow::Result<()> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let js = r#"(function() {
let v = document.querySelector('video');
if (!v) return false;
if (typeof v.src !== 'string') return false;
return v.src.startsWith('blob:');
})();"#;
let v = tab.evaluate(js, false)?;
if v.value.and_then(|v| v.as_bool()).unwrap_or(false) {
return Ok(());
}
std::thread::sleep(Duration::from_millis(200));
}
anyhow::bail!("timed out waiting for video blob src");
}
fn click_global_watch(tab: &headless_chrome::Tab, stream_id: &str) -> anyhow::Result<bool> {
let js = format!(
r#"(function() {{
let target = {sid};
let btn = document.querySelector(`button[data-stream-id="${{target}}"]`)
|| document.querySelector(`button[data_stream_id="${{target}}"]`);
if (!btn) return false;
btn.click();
return true;
}})();"#,
sid = serde_json::to_string(stream_id).unwrap()
);
let v = tab.evaluate(&js, false)?;
Ok(v.value.and_then(|v| v.as_bool()).unwrap_or(false))
}
#[test]
#[ignore]
fn e2e_remote_website_directory_connects_to_local_direct_publisher() -> anyhow::Result<()> {
if which("ffmpeg").is_none() {
return Ok(()); // skip
}
let chrome = match chrome_path() {
Some(p) => p,
None => return Ok(()), // skip
};
let site_url = std::env::var("EVERY_CHANNEL_SITE_URL")
.unwrap_or_else(|_| "https://every.channel/".to_string());
let directory_url = std::env::var("EVERY_CHANNEL_DIRECTORY_URL")
.unwrap_or_else(|_| "https://every.channel".to_string());
let ec_node = ec_node_path();
let ts = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
let stream_id = format!("every.channel/e2e/{ts}");
let title = format!("E2E {ts}");
let tmp = std::env::temp_dir().join(format!("ec-e2e-remote-website-directory-{ts}"));
let _ = std::fs::create_dir_all(&tmp);
let input_ts = tmp.join("input.ts");
let chunk_dir = tmp.join("chunks");
generate_ts_fixture(&input_ts)?;
let mut pub_child = Command::new(&ec_node)
.arg("direct-publish")
.arg("--directory-url")
.arg(&directory_url)
.arg("--stream-id")
.arg(&stream_id)
.arg("--title")
.arg(&title)
.arg("--chunk-dir")
.arg(&chunk_dir)
.arg("--chunk-ms")
.arg("2000")
.arg("--max-segments")
.arg("6")
.arg("ts")
.arg(&input_ts)
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::inherit())
.spawn()?;
let launch_options = headless_chrome::LaunchOptionsBuilder::default()
.path(Some(chrome))
.headless(true)
.args(vec![
OsStr::new("--autoplay-policy=no-user-gesture-required"),
OsStr::new("--mute-audio"),
])
.build()
.unwrap();
let browser = headless_chrome::Browser::new(launch_options)?;
let tab = browser.new_tab()?;
tab.navigate_to(&site_url)?;
tab.wait_until_navigated()?;
// Refresh public list and watch our stream_id.
click_css(&tab, "button[data-testid='global-refresh']")?;
let deadline = Instant::now() + Duration::from_secs(60);
loop {
if click_global_watch(&tab, &stream_id)? {
break;
}
if Instant::now() > deadline {
anyhow::bail!("timed out waiting for stream_id to appear in global list");
}
std::thread::sleep(Duration::from_millis(250));
}
// Website should go Live and show a blob video source.
wait_for_text(&tab, "Live", Duration::from_secs(60))?;
wait_for_blob_video(&tab, Duration::from_secs(60))?;
// Cleanup.
let _ = pub_child.kill();
let _ = pub_child.wait();
let _ = std::fs::remove_dir_all(&tmp);
Ok(())
}
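The announce that `direct-publish --directory-url` performs maps onto the bootstrap API's /api/announce endpoint defined later in this commit. A sketch of doing the same announce by hand — assuming `reqwest` (with the `blocking` and `json` features) and `serde_json`, neither of which this test crate actually depends on:

fn announce_by_hand(directory_url: &str, stream_id: &str, title: &str, offer: &str) -> anyhow::Result<()> {
    let client = reqwest::blocking::Client::new();
    // Fields mirror the worker's AnnounceReq; expires_ms is optional and clamped server-side.
    let resp = client
        .post(format!("{directory_url}/api/announce"))
        .json(&serde_json::json!({
            "stream_id": stream_id,
            "title": title,
            "offer": offer,
        }))
        .send()?
        .error_for_status()?;
    println!("announced: {}", resp.text()?);
    Ok(())
}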

View file

@ -0,0 +1,174 @@
use std::ffi::OsStr;
use std::time::{Duration, Instant};
fn which(cmd: &str) -> Option<std::path::PathBuf> {
which::which(cmd).ok()
}
fn chrome_path() -> Option<std::path::PathBuf> {
let mac =
std::path::PathBuf::from("/Applications/Google Chrome.app/Contents/MacOS/Google Chrome");
if mac.exists() {
return Some(mac);
}
which("google-chrome")
.or_else(|| which("google-chrome-stable"))
.or_else(|| which("chromium"))
}
fn click_css(tab: &headless_chrome::Tab, css: &str) -> anyhow::Result<()> {
tab.wait_for_element(css)?.click()?;
Ok(())
}
fn wait_for_text(
tab: &headless_chrome::Tab,
needle: &str,
timeout: Duration,
) -> anyhow::Result<()> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let js = format!(
r#"(function() {{
return document.body && (document.body.innerText || '').includes({n});
}})();"#,
n = serde_json::to_string(needle).unwrap()
);
let v = tab.evaluate(&js, false)?;
if v.value.and_then(|v| v.as_bool()).unwrap_or(false) {
return Ok(());
}
std::thread::sleep(Duration::from_millis(200));
}
anyhow::bail!("timed out waiting for text: {needle}");
}
fn wait_for_blob_video(tab: &headless_chrome::Tab, timeout: Duration) -> anyhow::Result<()> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let js = r#"(function() {
let v = document.querySelector('video');
if (!v) return false;
if (typeof v.src !== 'string') return false;
return v.src.startsWith('blob:');
})();"#;
let v = tab.evaluate(js, false)?;
if v.value.and_then(|v| v.as_bool()).unwrap_or(false) {
return Ok(());
}
std::thread::sleep(Duration::from_millis(200));
}
anyhow::bail!("timed out waiting for video blob src");
}
fn wait_for_video_element(tab: &headless_chrome::Tab, timeout: Duration) -> anyhow::Result<()> {
let deadline = Instant::now() + timeout;
while Instant::now() < deadline {
let js = r#"(function() {
return !!document.querySelector('video');
})();"#;
let v = tab.evaluate(js, false)?;
if v.value.and_then(|v| v.as_bool()).unwrap_or(false) {
return Ok(());
}
std::thread::sleep(Duration::from_millis(200));
}
anyhow::bail!("timed out waiting for <video> element");
}
fn debug_player_state(tab: &headless_chrome::Tab) -> anyhow::Result<String> {
let js = r#"(function() {
let v = document.querySelector('video');
let src = v ? (v.src || '') : null;
let placeholder = document.querySelector('.placeholder');
let placeholderText = placeholder ? (placeholder.innerText || '') : null;
let status = document.querySelector('.source-status');
let statusText = status ? (status.innerText || '') : null;
let sources = Array.from(document.querySelectorAll('button[data-testid="global-watch"]')).length;
return JSON.stringify({ hasVideo: !!v, videoSrc: src, placeholderText, statusText, sources });
})();"#;
let v = tab.evaluate(js, false)?;
Ok(v.value
.and_then(|v| v.as_str().map(|s| s.to_string()))
.unwrap_or_default())
}
fn click_global_watch(tab: &headless_chrome::Tab, stream_id: &str) -> anyhow::Result<bool> {
let js = format!(
r#"(function() {{
let target = {sid};
let btn = document.querySelector(`button[data-stream-id="${{target}}"]`);
if (!btn) return false;
// Some SPA frameworks attach delegated listeners; dispatch a real click event.
btn.dispatchEvent(new MouseEvent('click', {{ bubbles: true, cancelable: true, view: window }}));
return true;
}})();"#,
sid = serde_json::to_string(stream_id).unwrap()
);
let v = tab.evaluate(&js, false)?;
Ok(v.value.and_then(|v| v.as_bool()).unwrap_or(false))
}
#[test]
#[ignore]
fn e2e_remote_website_watch_existing_stream_id() -> anyhow::Result<()> {
let chrome = match chrome_path() {
Some(p) => p,
None => return Ok(()), // skip
};
    // Require ffmpeg for parity with the other E2E tests, even though this one does not transcode.
if which("ffmpeg").is_none() {
return Ok(()); // skip
}
let site_url = std::env::var("EVERY_CHANNEL_SITE_URL")
.unwrap_or_else(|_| "https://every.channel/".to_string());
let stream_id = match std::env::var("EVERY_CHANNEL_STREAM_ID") {
Ok(v) if !v.trim().is_empty() => v,
_ => return Ok(()), // skip
};
let launch_options = headless_chrome::LaunchOptionsBuilder::default()
.path(Some(chrome))
.headless(true)
.args(vec![
OsStr::new("--autoplay-policy=no-user-gesture-required"),
OsStr::new("--mute-audio"),
OsStr::new("--disable-application-cache"),
OsStr::new("--disable-service-worker"),
OsStr::new("--disk-cache-size=0"),
])
.build()
.unwrap();
let browser = headless_chrome::Browser::new(launch_options)?;
let tab = browser.new_tab()?;
tab.navigate_to(&site_url)?;
tab.wait_until_navigated()?;
click_css(&tab, "button[data-testid='global-refresh']")?;
let deadline = Instant::now() + Duration::from_secs(60);
loop {
if click_global_watch(&tab, &stream_id)? {
break;
}
if Instant::now() > deadline {
anyhow::bail!("timed out waiting for stream_id to appear in global list");
}
std::thread::sleep(Duration::from_millis(250));
}
// Ensure the player is instantiated.
if let Err(err) = wait_for_video_element(&tab, Duration::from_secs(90)) {
let st = debug_player_state(&tab).unwrap_or_default();
anyhow::bail!("{err}\nplayer_state={st}");
}
// We consider playback "started" when the video uses a blob: URL (MSE).
if let Err(err) = wait_for_blob_video(&tab, Duration::from_secs(90)) {
let st = debug_player_state(&tab).unwrap_or_default();
anyhow::bail!("{err}\nplayer_state={st}");
}
Ok(())
}

10
crates/ec-ts/Cargo.toml Normal file
View file

@ -0,0 +1,10 @@
[package]
name = "ec-ts"
version = "0.0.0"
edition.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
serde.workspace = true
serde-big-array = "0.5"

648
crates/ec-ts/src/lib.rs Normal file
View file

@ -0,0 +1,648 @@
//! Minimal MPEG-TS parsing for timing and table extraction.
use anyhow::{anyhow, Result};
use serde::{Deserialize, Serialize};
use serde_big_array::BigArray;
use std::collections::HashMap;
use std::io::Read;
pub const TS_PACKET_SIZE: usize = 188;
pub const PID_ATSC_PSIP: u16 = 0x1FFB;
pub const PID_DVB_TDT_TOT: u16 = 0x0014;
const SYNC_BYTE: u8 = 0x47;
const TABLE_ID_ATSC_STT: u8 = 0xCD;
const TABLE_ID_DVB_TDT: u8 = 0x70;
const TABLE_ID_DVB_TOT: u8 = 0x73;
const GPS_EPOCH_TO_UNIX: i64 = 315964800;
const MJD_UNIX_EPOCH: i64 = 40587;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TsPacket {
#[serde(with = "BigArray")]
data: [u8; TS_PACKET_SIZE],
pub pid: u16,
pub payload_unit_start: bool,
pub continuity_counter: u8,
pub discontinuity: bool,
pub pcr_27mhz: Option<u64>,
payload_offset: usize,
payload_len: usize,
}
impl TsPacket {
pub fn payload(&self) -> &[u8] {
&self.data[self.payload_offset..self.payload_offset + self.payload_len]
}
pub fn as_bytes(&self) -> &[u8; TS_PACKET_SIZE] {
&self.data
}
}
pub struct TsReader<R> {
reader: R,
}
impl<R: Read> TsReader<R> {
pub fn new(reader: R) -> Self {
Self { reader }
}
pub fn read_packet(&mut self) -> Result<Option<TsPacket>> {
let mut data = [0u8; TS_PACKET_SIZE];
let mut read = 0usize;
while read < TS_PACKET_SIZE {
let n = self.reader.read(&mut data[read..])?;
if n == 0 {
if read == 0 {
return Ok(None);
}
return Err(anyhow!("truncated TS packet"));
}
read += n;
}
let packet = parse_packet(data)?;
Ok(Some(packet))
}
}
pub fn parse_packet(data: [u8; TS_PACKET_SIZE]) -> Result<TsPacket> {
if data[0] != SYNC_BYTE {
return Err(anyhow!("missing sync byte"));
}
let payload_unit_start = (data[1] & 0x40) != 0;
let pid = ((data[1] as u16 & 0x1F) << 8) | data[2] as u16;
let continuity_counter = data[3] & 0x0F;
let adaptation_control = (data[3] >> 4) & 0x03;
let has_adaptation = adaptation_control == 2 || adaptation_control == 3;
let has_payload = adaptation_control == 1 || adaptation_control == 3;
let mut offset = 4usize;
let mut discontinuity = false;
let mut pcr_27mhz = None;
if has_adaptation {
let length = data[offset] as usize;
offset += 1;
if offset + length > TS_PACKET_SIZE {
return Err(anyhow!("invalid adaptation field length"));
}
if length > 0 {
let flags = data[offset];
discontinuity = (flags & 0x80) != 0;
let pcr_flag = (flags & 0x10) != 0;
if pcr_flag && length >= 7 {
let pcr_bytes = &data[offset + 1..offset + 7];
pcr_27mhz = Some(parse_pcr_27mhz(pcr_bytes));
}
}
offset += length;
}
let payload_len = if has_payload && offset <= TS_PACKET_SIZE {
TS_PACKET_SIZE - offset
} else {
0
};
Ok(TsPacket {
data,
pid,
payload_unit_start,
continuity_counter,
discontinuity,
pcr_27mhz,
payload_offset: offset,
payload_len,
})
}
fn parse_pcr_27mhz(data: &[u8]) -> u64 {
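    // PCR is a 33-bit base clocked at 90 kHz plus a 9-bit extension; base * 300 + ext yields 27 MHz ticks.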
let base = ((data[0] as u64) << 25)
| ((data[1] as u64) << 17)
| ((data[2] as u64) << 9)
| ((data[3] as u64) << 1)
| ((data[4] as u64) >> 7);
let ext = (((data[4] as u64) & 0x01) << 8) | data[5] as u64;
base * 300 + ext
}
pub fn parse_pts_90khz(packet: &TsPacket) -> Option<u64> {
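    // PES PTS is a 33-bit 90 kHz value spread across 5 bytes, with marker bits between the fields.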
if !packet.payload_unit_start {
return None;
}
let payload = packet.payload();
if payload.len() < 14 {
return None;
}
if payload[0] != 0 || payload[1] != 0 || payload[2] != 1 {
return None;
}
let flags = payload[7];
let pts_dts_flags = (flags >> 6) & 0x03;
if pts_dts_flags == 0 {
return None;
}
let header_length = payload[8] as usize;
let pts_start = 9usize;
if header_length < 5 || payload.len() < pts_start + 5 {
return None;
}
let b = &payload[pts_start..pts_start + 5];
let pts = ((b[0] as u64 & 0x0E) << 29)
| ((b[1] as u64) << 22)
| ((b[2] as u64 & 0xFE) << 14)
| ((b[3] as u64) << 7)
| ((b[4] as u64) >> 1);
Some(pts)
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Section {
pub pid: u16,
pub table_id: u8,
pub data: Vec<u8>,
}
#[derive(Debug, Default)]
pub struct SectionAssembler {
buffers: HashMap<u16, SectionBuffer>,
}
#[derive(Debug)]
struct SectionBuffer {
expected_len: usize,
data: Vec<u8>,
}
impl SectionAssembler {
pub fn push_packet(&mut self, packet: &TsPacket) -> Vec<Section> {
let mut sections = Vec::new();
let payload = packet.payload();
if payload.is_empty() {
return sections;
}
if packet.payload_unit_start {
let pointer = payload[0] as usize;
if pointer + 1 > payload.len() {
return sections;
}
let mut idx = 1 + pointer;
            while idx + 3 <= payload.len() {
                let table_id = payload[idx];
                // 0xFF is stuffing after the last section in this payload; stop scanning.
                if table_id == 0xFF {
                    break;
                }
                let section_length =
                    (((payload[idx + 1] & 0x0F) as usize) << 8) | payload[idx + 2] as usize;
let total_len = 3 + section_length;
if idx + total_len <= payload.len() {
let data = payload[idx..idx + total_len].to_vec();
sections.push(Section {
pid: packet.pid,
table_id,
data,
});
idx += total_len;
} else {
let data = payload[idx..].to_vec();
self.buffers.insert(
packet.pid,
SectionBuffer {
expected_len: total_len,
data,
},
);
break;
}
}
} else if let Some(buffer) = self.buffers.get_mut(&packet.pid) {
buffer.data.extend_from_slice(payload);
if buffer.data.len() >= buffer.expected_len {
let data = buffer.data[..buffer.expected_len].to_vec();
let table_id = data[0];
sections.push(Section {
pid: packet.pid,
table_id,
data,
});
self.buffers.remove(&packet.pid);
}
}
sections
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum TimeSource {
AtscStt,
DvbTdt,
DvbTot,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BroadcastUtc {
pub unix_seconds: i64,
pub source: TimeSource,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TimeSyncUpdate {
pub pcr_27mhz: Option<u64>,
pub utc_unix_seconds: Option<i64>,
pub chunk_index: Option<u64>,
pub chunk_start_27mhz: Option<u64>,
pub utc_start_unix: Option<i64>,
pub synced: bool,
pub discontinuity: bool,
}
#[derive(Debug)]
pub struct TimeSyncEngine {
chunk_ticks: u64,
last_pcr: Option<u64>,
utc_offset_ticks: Option<i64>,
synced: bool,
last_chunk_index: Option<u64>,
}
impl TimeSyncEngine {
pub fn new(chunk_duration_ms: u64) -> Self {
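        // 27 MHz = 27,000 ticks per millisecond, so e.g. a 2000 ms chunk spans 54,000,000 ticks.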
let chunk_ticks = chunk_duration_ms * 27_000;
Self {
chunk_ticks,
last_pcr: None,
utc_offset_ticks: None,
synced: false,
last_chunk_index: None,
}
}
pub fn ingest_packet(
&mut self,
packet: &TsPacket,
assembler: &mut SectionAssembler,
) -> Vec<TimeSyncUpdate> {
let mut updates = Vec::new();
if packet.discontinuity {
self.last_pcr = None;
self.utc_offset_ticks = None;
self.synced = false;
self.last_chunk_index = None;
updates.push(TimeSyncUpdate {
pcr_27mhz: packet.pcr_27mhz,
utc_unix_seconds: None,
chunk_index: None,
chunk_start_27mhz: None,
utc_start_unix: None,
synced: false,
discontinuity: true,
});
}
for section in assembler.push_packet(packet) {
if let Some(utc) = parse_time_section(&section) {
if let Some(pcr) = self.last_pcr {
let utc_ticks = utc.unix_seconds.saturating_mul(27_000_000);
let offset = utc_ticks - pcr as i64;
self.utc_offset_ticks = Some(offset);
self.synced = true;
}
updates.push(TimeSyncUpdate {
pcr_27mhz: self.last_pcr,
utc_unix_seconds: Some(utc.unix_seconds),
chunk_index: self.current_chunk_index(),
chunk_start_27mhz: self.current_chunk_start_27mhz(),
utc_start_unix: self.current_chunk_utc_start(),
synced: self.synced,
discontinuity: false,
});
}
}
if let Some(pcr) = packet.pcr_27mhz {
self.last_pcr = Some(pcr);
let chunk_index = self.current_chunk_index();
if chunk_index != self.last_chunk_index {
self.last_chunk_index = chunk_index;
updates.push(TimeSyncUpdate {
pcr_27mhz: Some(pcr),
utc_unix_seconds: self.current_utc_seconds(),
chunk_index,
chunk_start_27mhz: self.current_chunk_start_27mhz(),
utc_start_unix: self.current_chunk_utc_start(),
synced: self.synced,
discontinuity: false,
});
}
}
updates
}
fn current_utc_seconds(&self) -> Option<i64> {
let pcr = self.last_pcr? as i64;
let offset = self.utc_offset_ticks?;
Some((pcr + offset) / 27_000_000)
}
fn current_chunk_index(&self) -> Option<u64> {
let pcr = self.last_pcr? as i128;
let offset = self.utc_offset_ticks.unwrap_or(0) as i128;
let t = pcr + offset;
if t < 0 {
return None;
}
Some((t as u128 / self.chunk_ticks as u128) as u64)
}
fn current_chunk_start_27mhz(&self) -> Option<u64> {
let chunk_index = self.current_chunk_index()? as i128;
let offset = self.utc_offset_ticks.unwrap_or(0) as i128;
let anchored = chunk_index * self.chunk_ticks as i128;
let pcr = anchored - offset;
if pcr < 0 {
return None;
}
Some(pcr as u64)
}
fn current_chunk_utc_start(&self) -> Option<i64> {
let _ = self.utc_offset_ticks?;
let chunk_index = self.current_chunk_index()? as i128;
let anchored = chunk_index * self.chunk_ticks as i128;
Some((anchored / 27_000_000) as i64)
}
}
pub fn parse_time_section(section: &Section) -> Option<BroadcastUtc> {
match section.table_id {
TABLE_ID_ATSC_STT if section.pid == PID_ATSC_PSIP => {
parse_atsc_stt(&section.data).map(|utc| BroadcastUtc {
unix_seconds: utc,
source: TimeSource::AtscStt,
})
}
TABLE_ID_DVB_TDT if section.pid == PID_DVB_TDT_TOT => {
parse_dvb_time(&section.data).map(|utc| BroadcastUtc {
unix_seconds: utc,
source: TimeSource::DvbTdt,
})
}
TABLE_ID_DVB_TOT if section.pid == PID_DVB_TDT_TOT => {
parse_dvb_time(&section.data).map(|utc| BroadcastUtc {
unix_seconds: utc,
source: TimeSource::DvbTot,
})
}
_ => None,
}
}
fn parse_atsc_stt(data: &[u8]) -> Option<i64> {
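    // ATSC STT carries seconds since the GPS epoch (1980-01-06); subtract the GPS-UTC
    // leap-second offset, then rebase onto the UNIX epoch.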
if data.len() < 3 + 1 + 4 + 1 {
return None;
}
let system_time = u32::from_be_bytes([data[4], data[5], data[6], data[7]]) as i64;
let gps_utc_offset = data[8] as i64;
let utc_since_1980 = system_time - gps_utc_offset;
Some(utc_since_1980 - GPS_EPOCH_TO_UNIX)
}
fn parse_dvb_time(data: &[u8]) -> Option<i64> {
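    // DVB TDT/TOT encode UTC as a 16-bit Modified Julian Date plus BCD hh:mm:ss; MJD 40587 is 1970-01-01.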
if data.len() < 8 {
return None;
}
let mjd = u16::from_be_bytes([data[3], data[4]]);
let hour = bcd_to_dec(data[5])?;
let minute = bcd_to_dec(data[6])?;
let second = bcd_to_dec(data[7])?;
let days = mjd as i64 - MJD_UNIX_EPOCH;
Some(days * 86_400 + hour as i64 * 3_600 + minute as i64 * 60 + second as i64)
}
fn bcd_to_dec(value: u8) -> Option<u32> {
let high = (value >> 4) & 0x0F;
let low = value & 0x0F;
if high > 9 || low > 9 {
return None;
}
Some((high as u32) * 10 + low as u32)
}
#[cfg(test)]
mod tests {
use super::*;
fn build_ts_packet_with_adaptation_pcr(
pid: u16,
continuity_counter: u8,
pcr_27mhz: u64,
) -> [u8; TS_PACKET_SIZE] {
// Encode PCR into base (90kHz) and extension (27MHz remainder).
let base = pcr_27mhz / 300;
let ext = pcr_27mhz % 300;
let mut pcr = [0u8; 6];
pcr[0] = ((base >> 25) & 0xFF) as u8;
pcr[1] = ((base >> 17) & 0xFF) as u8;
pcr[2] = ((base >> 9) & 0xFF) as u8;
pcr[3] = ((base >> 1) & 0xFF) as u8;
pcr[4] = (((base & 0x1) << 7) as u8) | 0x7E | (((ext >> 8) & 0x1) as u8);
pcr[5] = (ext & 0xFF) as u8;
let mut data = [0u8; TS_PACKET_SIZE];
data[0] = SYNC_BYTE;
data[1] = ((pid >> 8) as u8) & 0x1F;
data[2] = (pid & 0xFF) as u8;
// adaptation only (no payload): adaptation_control=2
data[3] = (2 << 4) | (continuity_counter & 0x0F);
// adaptation length: 1 byte flags + 6 bytes PCR = 7
data[4] = 7;
// flags: PCR flag
data[5] = 0x10;
data[6..12].copy_from_slice(&pcr);
data
}
fn encode_pts_90khz(pts: u64) -> [u8; 5] {
let mut b = [0u8; 5];
b[0] = 0x20 | ((((pts >> 30) & 0x07) as u8) << 1) | 1;
b[1] = ((pts >> 22) & 0xFF) as u8;
b[2] = ((((pts >> 15) & 0x7F) as u8) << 1) | 1;
b[3] = ((pts >> 7) & 0xFF) as u8;
b[4] = (((pts & 0x7F) as u8) << 1) | 1;
b
}
#[test]
fn parse_packet_extracts_pid_and_payload() {
let pid = 0x0033u16;
let mut data = [0u8; TS_PACKET_SIZE];
data[0] = SYNC_BYTE;
data[1] = 0x40 | (((pid >> 8) as u8) & 0x1F); // payload_unit_start
data[2] = (pid & 0xFF) as u8;
data[3] = (1 << 4) | 0x0A; // payload only
data[4] = 0xAA;
let pkt = parse_packet(data).unwrap();
assert_eq!(pkt.pid, pid);
assert!(pkt.payload_unit_start);
assert_eq!(pkt.continuity_counter, 0x0A);
assert_eq!(pkt.payload()[0], 0xAA);
}
#[test]
fn parse_packet_rejects_bad_sync() {
let mut data = [0u8; TS_PACKET_SIZE];
data[0] = 0;
assert!(parse_packet(data).is_err());
}
#[test]
fn parse_packet_rejects_invalid_adaptation_length() {
let pid = 0x0011u16;
let mut data = [0u8; TS_PACKET_SIZE];
data[0] = SYNC_BYTE;
data[1] = ((pid >> 8) as u8) & 0x1F;
data[2] = (pid & 0xFF) as u8;
data[3] = (3 << 4) | 0x00; // adaptation + payload
data[4] = 250; // too large
assert!(parse_packet(data).is_err());
}
#[test]
fn parse_packet_reads_pcr_27mhz() {
let pcr = 54_000_123u64;
let data = build_ts_packet_with_adaptation_pcr(0x0100, 0, pcr);
let pkt = parse_packet(data).unwrap();
assert_eq!(pkt.pcr_27mhz, Some(pcr));
}
#[test]
fn parse_pts_extracts_expected_value() {
let pid = 0x0020u16;
let pts = 90_000u64 * 3;
let pts_bytes = encode_pts_90khz(pts);
let mut data = [0u8; TS_PACKET_SIZE];
data[0] = SYNC_BYTE;
data[1] = 0x40 | (((pid >> 8) as u8) & 0x1F);
data[2] = (pid & 0xFF) as u8;
data[3] = (1 << 4) | 0x00; // payload only
// Minimal PES header with PTS.
let payload = &mut data[4..];
payload[0..3].copy_from_slice(&[0, 0, 1]);
payload[3] = 0xE0;
payload[7] = 0x80; // pts_dts_flags = 2
payload[8] = 5; // header length
payload[9..14].copy_from_slice(&pts_bytes);
let pkt = parse_packet(data).unwrap();
let parsed = parse_pts_90khz(&pkt).unwrap();
assert_eq!(parsed, pts);
}
#[test]
fn section_assembler_reassembles_across_packets() {
let pid = 0x0014u16;
let table_id = TABLE_ID_DVB_TDT;
// A tiny "section" with declared length 10 (3 + 10 = 13 bytes total).
let total_len = 13usize;
let section_length = (total_len - 3) as u16;
let mut section = vec![0u8; total_len];
section[0] = table_id;
section[1] = 0x00 | (((section_length >> 8) as u8) & 0x0F);
section[2] = (section_length & 0xFF) as u8;
for i in 3..total_len {
section[i] = i as u8;
}
// Packet 1: payload is intentionally short (via a large adaptation field) so the assembler
// must buffer until packet 2 arrives.
let mut pkt1 = [0u8; TS_PACKET_SIZE];
pkt1[0] = SYNC_BYTE;
pkt1[1] = 0x40 | (((pid >> 8) as u8) & 0x1F);
pkt1[2] = (pid & 0xFF) as u8;
pkt1[3] = (3 << 4) | 0; // adaptation + payload
let payload_len_1 = 8usize; // 1 pointer + 7 bytes of section
let adaptation_len_1 = (TS_PACKET_SIZE - 5) - payload_len_1;
pkt1[4] = adaptation_len_1 as u8;
// adaptation flags at pkt1[5] left as 0; rest is stuffing 0.
let payload_start_1 = 4 + 1 + adaptation_len_1;
pkt1[payload_start_1] = 0; // pointer = 0
pkt1[payload_start_1 + 1..payload_start_1 + 1 + 7].copy_from_slice(&section[..7]);
let mut pkt2 = [0u8; TS_PACKET_SIZE];
pkt2[0] = SYNC_BYTE;
pkt2[1] = ((pid >> 8) as u8) & 0x1F;
pkt2[2] = (pid & 0xFF) as u8;
pkt2[3] = (1 << 4) | 1; // payload only
pkt2[4..4 + (total_len - 7)].copy_from_slice(&section[7..]);
let p1 = parse_packet(pkt1).unwrap();
let p2 = parse_packet(pkt2).unwrap();
let mut asm = SectionAssembler::default();
assert!(asm.push_packet(&p1).is_empty());
let out = asm.push_packet(&p2);
assert_eq!(out.len(), 1);
assert_eq!(out[0].pid, pid);
assert_eq!(out[0].table_id, table_id);
assert_eq!(out[0].data.len(), total_len);
assert_eq!(out[0].data[3], 3u8);
}
#[test]
fn parse_time_sections_for_dvb_and_atsc_epoch() {
// DVB TDT at UNIX epoch.
let mut dvb = vec![0u8; 8];
dvb[0] = TABLE_ID_DVB_TDT;
dvb[3] = 0x9E;
dvb[4] = 0x8B; // MJD 40587
dvb[5] = 0x00;
dvb[6] = 0x00;
dvb[7] = 0x00;
let section = Section {
pid: PID_DVB_TDT_TOT,
table_id: TABLE_ID_DVB_TDT,
data: dvb,
};
let utc = parse_time_section(&section).unwrap();
assert_eq!(utc.unix_seconds, 0);
// ATSC STT at UNIX epoch according to our parser logic.
let mut atsc = vec![0u8; 9];
atsc[0] = TABLE_ID_ATSC_STT;
let system_time = GPS_EPOCH_TO_UNIX as u32;
atsc[4..8].copy_from_slice(&system_time.to_be_bytes());
atsc[8] = 0;
let section = Section {
pid: PID_ATSC_PSIP,
table_id: TABLE_ID_ATSC_STT,
data: atsc,
};
let utc = parse_time_section(&section).unwrap();
assert_eq!(utc.unix_seconds, 0);
}
#[test]
fn time_sync_engine_emits_chunk_boundaries_from_pcr() {
let mut engine = TimeSyncEngine::new(1000);
let mut asm = SectionAssembler::default();
let p0 = parse_packet(build_ts_packet_with_adaptation_pcr(0x0100, 0, 0)).unwrap();
let p1 = parse_packet(build_ts_packet_with_adaptation_pcr(0x0100, 1, 27_000_000)).unwrap();
let u0 = engine.ingest_packet(&p0, &mut asm);
assert!(u0.iter().any(|u| u.chunk_index == Some(0)));
let u1 = engine.ingest_packet(&p1, &mut asm);
assert!(u1.iter().any(|u| u.chunk_index == Some(1)));
}
}
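Putting the pieces together, a minimal sketch of scanning a capture for chunk boundaries. It uses only APIs defined above; the surrounding wiring is hypothetical:

use ec_ts::{SectionAssembler, TimeSyncEngine, TsReader};

fn scan_chunks(path: &std::path::Path) -> anyhow::Result<()> {
    let file = std::fs::File::open(path)?;
    let mut reader = TsReader::new(std::io::BufReader::new(file));
    let mut assembler = SectionAssembler::default();
    // 2000 ms chunks, matching the publisher flags used in the tests above.
    let mut engine = TimeSyncEngine::new(2000);
    while let Some(packet) = reader.read_packet()? {
        for update in engine.ingest_packet(&packet, &mut assembler) {
            if update.synced {
                println!(
                    "chunk {:?} (utc start {:?}, discontinuity: {})",
                    update.chunk_index, update.utc_start_unix, update.discontinuity
                );
            }
        }
    }
    Ok(())
}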

View file

@ -0,0 +1,5 @@
**
# Only include the container build context we need.
!containers/
!containers/**

View file

@ -0,0 +1,19 @@
[package]
name = "ec-cf-bootstrap-api"
version = "0.0.0"
edition = "2021"
license = "AGPL-3.0-only"
[dependencies]
anyhow = "1"
axum = { version = "0.7", features = ["json"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1", features = ["macros", "rt-multi-thread", "signal"] }
tower-http = { version = "0.6", features = ["trace"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
# Keep this out of the workspace; it's built in a container image.
[workspace]

View file

@ -0,0 +1,18 @@
# Cloudflare Containers build: compile a small, portable Rust HTTP server.
# This container only serves the bootstrap /api/* endpoints used for WebRTC rendezvous.
FROM rust:1.86-bookworm AS build
WORKDIR /app
# Copy the manifest and sources and build once in release mode. (There is no separate
# dependency-caching layer; that would require a dummy-main pre-build step.)
COPY Cargo.toml /app/Cargo.toml
COPY src /app/src
RUN cargo build --release
FROM debian:bookworm-slim
WORKDIR /app
COPY --from=build /app/target/release/ec-cf-bootstrap-api /app/ec-cf-bootstrap-api
ENV RUST_LOG=info
EXPOSE 8080
CMD ["/app/ec-cf-bootstrap-api"]

View file

@ -0,0 +1,252 @@
use axum::{
extract::Query,
http::{HeaderMap, StatusCode},
response::IntoResponse,
routing::{get, post},
Json, Router,
};
use serde::{Deserialize, Serialize};
use std::{
collections::HashMap,
net::SocketAddr,
sync::Arc,
time::{Duration, SystemTime, UNIX_EPOCH},
};
use tokio::sync::RwLock;
use tower_http::trace::TraceLayer;
#[derive(Clone, Debug, Serialize, Deserialize)]
struct DirectoryEntry {
stream_id: String,
title: String,
offer: String,
updated_ms: u64,
expires_ms: u64,
}
#[derive(Clone, Debug, Serialize, Deserialize)]
struct AnswerEntry {
stream_id: String,
answer: String,
updated_ms: u64,
expires_ms: u64,
}
#[derive(Clone, Debug, Serialize)]
struct DirectoryList {
now_ms: u64,
entries: Vec<DirectoryEntry>,
}
#[derive(Clone, Debug, Serialize)]
struct HealthResp {
ok: bool,
}
#[derive(Clone, Debug, Deserialize)]
struct AnnounceReq {
stream_id: String,
title: String,
offer: String,
expires_ms: Option<u64>,
}
#[derive(Clone, Debug, Serialize)]
struct AnnounceResp {
ok: bool,
ttl_ms: u64,
entry: DirectoryEntry,
}
#[derive(Clone, Debug, Deserialize)]
struct AnswerPostReq {
stream_id: String,
answer: String,
}
#[derive(Clone, Debug, Deserialize)]
struct AnswerGetReq {
stream_id: String,
}
#[derive(Default)]
struct State {
entries: HashMap<String, DirectoryEntry>,
answers: HashMap<String, AnswerEntry>,
}
fn now_ms() -> u64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or(Duration::from_secs(0))
.as_millis() as u64
}
fn clamp_str(mut s: String, max_len: usize) -> String {
if s.len() <= max_len {
return s;
}
s.truncate(max_len);
s
}
fn json_headers() -> HeaderMap {
let mut headers = HeaderMap::new();
headers.insert("content-type", "application/json; charset=utf-8".parse().unwrap());
headers.insert("cache-control", "no-store".parse().unwrap());
headers
}
fn prune_state(state: &mut State, now: u64) {
state.entries.retain(|_, v| v.expires_ms > now);
state.answers.retain(|_, v| v.expires_ms > now);
// Cap growth defensively. This is not spam-resistant; it's a bootstrap rendezvous.
if state.entries.len() > 200 {
let mut items = state.entries.values().cloned().collect::<Vec<_>>();
items.sort_by_key(|e| std::cmp::Reverse(e.updated_ms));
items.truncate(200);
state.entries = items.into_iter().map(|e| (e.stream_id.clone(), e)).collect();
}
if state.answers.len() > 500 {
let mut items = state.answers.values().cloned().collect::<Vec<_>>();
items.sort_by_key(|e| std::cmp::Reverse(e.updated_ms));
items.truncate(500);
state.answers = items.into_iter().map(|e| (e.stream_id.clone(), e)).collect();
}
}
async fn health() -> impl IntoResponse {
(json_headers(), Json(HealthResp { ok: true }))
}
async fn directory(state: axum::extract::State<Arc<RwLock<State>>>) -> impl IntoResponse {
let now = now_ms();
let mut guard = state.write().await;
prune_state(&mut guard, now);
let mut entries = guard.entries.values().cloned().collect::<Vec<_>>();
entries.sort_by_key(|e| std::cmp::Reverse(e.updated_ms));
(json_headers(), Json(DirectoryList { now_ms: now, entries }))
}
async fn announce(
state: axum::extract::State<Arc<RwLock<State>>>,
Json(body): Json<AnnounceReq>,
) -> impl IntoResponse {
let now = now_ms();
if body.stream_id.is_empty() || body.title.is_empty() || body.offer.is_empty() {
let resp = serde_json::json!({ "error": "missing stream_id/title/offer" });
return (StatusCode::BAD_REQUEST, json_headers(), Json(resp)).into_response();
}
if body.offer.len() > 64_000 {
let resp = serde_json::json!({ "error": "offer too large" });
return (StatusCode::PAYLOAD_TOO_LARGE, json_headers(), Json(resp)).into_response();
}
let requested_expires = body.expires_ms.unwrap_or(now + 20_000);
let requested_ttl = requested_expires.saturating_sub(now);
let ttl_ms = requested_ttl.clamp(5_000, 60_000);
let entry = DirectoryEntry {
stream_id: clamp_str(body.stream_id, 256),
title: clamp_str(body.title, 128),
offer: body.offer,
updated_ms: now,
expires_ms: now + ttl_ms,
};
let mut guard = state.write().await;
prune_state(&mut guard, now);
guard.entries.insert(entry.stream_id.clone(), entry.clone());
(
json_headers(),
Json(AnnounceResp {
ok: true,
ttl_ms,
entry,
}),
)
.into_response()
}
async fn post_answer(
state: axum::extract::State<Arc<RwLock<State>>>,
Json(body): Json<AnswerPostReq>,
) -> impl IntoResponse {
let now = now_ms();
if body.stream_id.is_empty() || body.answer.is_empty() {
let resp = serde_json::json!({ "error": "missing stream_id/answer" });
return (StatusCode::BAD_REQUEST, json_headers(), Json(resp)).into_response();
}
if body.answer.len() > 64_000 {
let resp = serde_json::json!({ "error": "answer too large" });
return (StatusCode::PAYLOAD_TOO_LARGE, json_headers(), Json(resp)).into_response();
}
let entry = AnswerEntry {
stream_id: clamp_str(body.stream_id, 256),
answer: body.answer,
updated_ms: now,
expires_ms: now + 2 * 60_000,
};
let mut guard = state.write().await;
prune_state(&mut guard, now);
guard.answers.insert(entry.stream_id.clone(), entry);
(json_headers(), Json(serde_json::json!({ "ok": true }))).into_response()
}
async fn get_answer(
state: axum::extract::State<Arc<RwLock<State>>>,
Query(q): Query<AnswerGetReq>,
) -> impl IntoResponse {
let now = now_ms();
if q.stream_id.is_empty() {
let resp = serde_json::json!({ "error": "missing stream_id" });
return (StatusCode::BAD_REQUEST, json_headers(), Json(resp)).into_response();
}
let mut guard = state.write().await;
prune_state(&mut guard, now);
// One-shot: first reader consumes.
let Some(answer) = guard.answers.remove(&q.stream_id) else {
let resp = serde_json::json!({ "error": "not found" });
return (StatusCode::NOT_FOUND, json_headers(), Json(resp)).into_response();
};
(json_headers(), Json(answer)).into_response()
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
tracing_subscriber::fmt()
.with_env_filter(
std::env::var("RUST_LOG").unwrap_or_else(|_| "info,tower_http=info".to_string()),
)
.init();
let state = Arc::new(RwLock::new(State::default()));
let app = Router::new()
.route("/api/health", get(health))
.route("/api/directory", get(directory))
.route("/api/announce", post(announce))
.route("/api/answer", post(post_answer).get(get_answer))
.with_state(state)
.layer(TraceLayer::new_for_http());
let addr: SocketAddr = "0.0.0.0:8080".parse().unwrap();
tracing::info!("ec-cf-bootstrap-api listening on {}", addr);
let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app)
.with_graceful_shutdown(shutdown_signal())
.await?;
Ok(())
}
async fn shutdown_signal() {
let _ = tokio::signal::ctrl_c().await;
}
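The answer endpoint is one-shot: the first successful GET consumes the stored answer, so a publisher typically polls until its viewer replies. A sketch of that client side — assuming `reqwest` with the `blocking` feature, which is not a dependency of this server crate:

use std::time::{Duration, Instant};

// Poll GET /api/answer?stream_id=... until an answer appears or the deadline passes.
// The first successful read consumes the answer server-side.
fn poll_answer(base: &str, stream_id: &str, timeout: Duration) -> anyhow::Result<String> {
    let client = reqwest::blocking::Client::new();
    let deadline = Instant::now() + timeout;
    loop {
        let resp = client
            .get(format!("{base}/api/answer"))
            .query(&[("stream_id", stream_id)])
            .send()?;
        if resp.status().is_success() {
            return Ok(resp.text()?);
        }
        if Instant::now() > deadline {
            anyhow::bail!("no answer for {stream_id} within {timeout:?}");
        }
        std::thread::sleep(Duration::from_millis(500));
    }
}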

1519
deploy/cloudflare-worker/package-lock.json generated Normal file

File diff suppressed because it is too large

View file

@ -0,0 +1,15 @@
{
"name": "every-channel-site",
"private": true,
"type": "module",
"dependencies": {},
"devDependencies": {
"typescript": "5.9.3",
"wrangler": "^4.63.0"
},
"scripts": {
"cf-typegen": "wrangler types",
"dev": "wrangler dev --local --port 8787",
"deploy": "wrangler deploy"
}
}

View file

@ -0,0 +1,575 @@
function json(data: unknown, init?: ResponseInit): Response {
return new Response(JSON.stringify(data), {
...init,
headers: {
"content-type": "application/json; charset=utf-8",
...(init?.headers ?? {}),
},
});
}
function jsonNoStore(data: unknown, init?: ResponseInit): Response {
return json(data, {
...init,
headers: {
"cache-control": "no-store",
...(init?.headers ?? {}),
},
});
}
async function hmacBase64(
secret: string,
msg: string,
hash: "SHA-1" | "SHA-256",
): Promise<string> {
const enc = new TextEncoder();
const key = await crypto.subtle.importKey(
"raw",
enc.encode(secret),
{ name: "HMAC", hash: { name: hash } },
false,
["sign"],
);
const sig = await crypto.subtle.sign("HMAC", key, enc.encode(msg));
const bytes = new Uint8Array(sig);
let s = "";
for (const b of bytes) s += String.fromCharCode(b);
return btoa(s);
}
async function handleTurn(env: Env): Promise<Response> {
// Always provide STUN. TURN is optional and requires a shared secret.
// Response shape is compatible with `just-webrtc` types.
const ice_servers: Array<{
urls: string[];
username: string;
credential: string;
// `just-webrtc` uses serde defaults on enum variants (Password/Oauth).
credential_type: "Password" | "Oauth";
}> = [];
ice_servers.push({
urls: [
"stun:stun.cloudflare.com:3478",
"stun:stun.l.google.com:19302",
"stun:stun1.l.google.com:19302",
],
username: "",
credential: "",
credential_type: "Password",
});
const shared = env.EC_TURN_SHARED_SECRET?.trim();
if (shared) {
const ttlSec = Number(env.EC_TURN_TTL_SECS ?? "3600") || 3600;
const exp = Math.floor(Date.now() / 1000) + Math.max(60, ttlSec);
const prefix = (env.EC_TURN_USER_PREFIX ?? "every.channel").trim();
const username = `${exp}:${prefix}`;
const hash = (env.EC_TURN_HMAC ?? "sha1").toLowerCase() === "sha256" ? "SHA-256" : "SHA-1";
const credential = await hmacBase64(shared, username, hash);
const host = (env.EC_TURN_HOST ?? "turn.cloudflare.com").trim();
ice_servers.push({
urls: [
`turn:${host}:3478?transport=udp`,
`turn:${host}:3478?transport=tcp`,
`turns:${host}:5349?transport=tcp`,
],
username,
credential,
credential_type: "Password",
});
}
return json({ ice_servers });
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
// ICE bootstrap for WebRTC clients (browser + native).
if (url.pathname === "/api/turn") {
return handleTurn(env);
}
// Stream relay (one-to-many) for CMAF object frames transported via direct-wire chunks.
// This is a bootstrap relay so browsers can watch live streams without direct P2P.
if (url.pathname === "/api/stream/ws") {
const stream_id = url.searchParams.get("stream_id") ?? "";
if (!stream_id) {
return jsonNoStore({ error: "missing stream_id" }, { status: 400 });
}
const id = env.EC_STREAM.idFromName(stream_id);
const stub = env.EC_STREAM.get(id);
return stub.fetch(request);
}
// Minimal bootstrap API: proxy /api/* to a single durable object instance ("global").
// This exists only to rendezvous WebRTC offers/answers and list "live" entries.
if (url.pathname.startsWith("/api/")) {
const id = env.EC_API.idFromName("global");
const stub = env.EC_API.get(id);
return stub.fetch(request);
}
// Serve static assets from the Worker Assets binding.
// SPA fallback: unknown paths serve the app shell (`/index.html`).
const assets = (env as unknown as { ASSETS?: Fetcher }).ASSETS;
if (!assets || typeof (assets as any).fetch !== "function") {
return new Response("Assets binding not configured", { status: 500 });
}
const res = await assets.fetch(request);
if (res.status !== 404) return res;
url.pathname = "/index.html";
return assets.fetch(new Request(url.toString(), request));
},
};
interface Env {
ASSETS: Fetcher;
EC_API: DurableObjectNamespace;
EC_STREAM: DurableObjectNamespace;
// Optional TURN REST shared secret (Worker secret). When set, `/api/turn` includes TURN URLs
// with short-lived credentials derived from this shared secret.
EC_TURN_SHARED_SECRET?: string;
EC_TURN_TTL_SECS?: string;
EC_TURN_USER_PREFIX?: string;
EC_TURN_HOST?: string;
EC_TURN_HMAC?: string;
}
type DirectoryEntry = {
stream_id: string;
title: string;
offer: string;
updated_ms: number;
expires_ms: number;
};
type AnswerEntry = {
stream_id: string;
answer: string;
updated_ms: number;
expires_ms: number;
};
type DirectoryList = {
now_ms: number;
entries: DirectoryEntry[];
};
function nowMs(): number {
return Date.now();
}
function clampStr(s: string, maxLen: number): string {
if (s.length <= maxLen) return s;
return s.slice(0, maxLen);
}
function entryKey(streamId: string): string {
return `e:${streamId}`;
}
function answerKey(streamId: string): string {
return `a:${streamId}`;
}
async function listWithPrefix<T>(
storage: DurableObjectStorage,
prefix: string,
): Promise<Array<[string, T]>> {
const out: Array<[string, T]> = [];
// Pull in small pages; this is a bootstrap rendezvous, not a global index.
  // `DurableObjectStorage.list` returns a Map and has no cursor; paginate with `startAfter`.
  let startAfter: string | undefined = undefined;
  for (;;) {
    const page = await storage.list<T>({ prefix, startAfter, limit: 256 });
    for (const [k, v] of page) out.push([k, v]);
    if (page.size < 256) break;
    startAfter = Array.from(page.keys()).pop();
  }
return out;
}
async function pruneAndCap(
storage: DurableObjectStorage,
now: number,
): Promise<void> {
const entries = await listWithPrefix<DirectoryEntry>(storage, "e:");
const answers = await listWithPrefix<AnswerEntry>(storage, "a:");
const toDelete: string[] = [];
for (const [k, v] of entries) if (v.expires_ms <= now) toDelete.push(k);
for (const [k, v] of answers) if (v.expires_ms <= now) toDelete.push(k);
// Cap growth defensively. This is not spam-resistant; it's a bootstrap rendezvous.
if (entries.length > 200) {
const sorted = entries
.map(([, v]) => v)
.sort((a, b) => b.updated_ms - a.updated_ms)
.slice(0, 200);
const keep = new Set(sorted.map((e) => entryKey(e.stream_id)));
for (const [k] of entries) if (!keep.has(k)) toDelete.push(k);
}
if (answers.length > 500) {
const sorted = answers
.map(([, v]) => v)
.sort((a, b) => b.updated_ms - a.updated_ms)
.slice(0, 500);
const keep = new Set(sorted.map((a) => answerKey(a.stream_id)));
for (const [k] of answers) if (!keep.has(k)) toDelete.push(k);
}
if (toDelete.length > 0) {
// Delete in chunks to avoid oversized requests.
for (let i = 0; i < toDelete.length; i += 128) {
await storage.delete(toDelete.slice(i, i + 128));
}
}
}
type AnnounceReq = {
stream_id: string;
title: string;
offer: string;
expires_ms?: number;
};
type AnswerPostReq = {
stream_id: string;
answer: string;
};
// Minimal bootstrap API Durable Object. The binding name is historical; we keep it stable so
// existing migrations and wrangler config remain valid while removing Cloudflare Containers.
export class EcApiContainer implements DurableObject {
private state: DurableObjectState;
constructor(state: DurableObjectState) {
this.state = state;
}
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url);
const now = nowMs();
// Best-effort pruning on any request.
await pruneAndCap(this.state.storage, now);
if (url.pathname === "/api/health") {
return jsonNoStore({ ok: true });
}
if (url.pathname === "/api/directory") {
const items = await listWithPrefix<DirectoryEntry>(this.state.storage, "e:");
const entries = items
.map(([, v]) => v)
.filter((v) => v.expires_ms > now)
.sort((a, b) => b.updated_ms - a.updated_ms);
const resp: DirectoryList = { now_ms: now, entries };
return jsonNoStore(resp);
}
if (url.pathname === "/api/announce") {
if (request.method !== "POST") {
return jsonNoStore({ error: "method not allowed" }, { status: 405 });
}
let body: AnnounceReq;
try {
body = (await request.json()) as AnnounceReq;
} catch {
return jsonNoStore({ error: "invalid json" }, { status: 400 });
}
if (!body.stream_id || !body.title || !body.offer) {
return jsonNoStore({ error: "missing stream_id/title/offer" }, { status: 400 });
}
if (body.offer.length > 64_000) {
return jsonNoStore({ error: "offer too large" }, { status: 413 });
}
const requestedExpires = body.expires_ms ?? now + 20_000;
const requestedTtl = Math.max(0, requestedExpires - now);
const ttlMs = Math.min(60_000, Math.max(5_000, requestedTtl));
const entry: DirectoryEntry = {
stream_id: clampStr(body.stream_id, 256),
title: clampStr(body.title, 128),
offer: body.offer,
updated_ms: now,
expires_ms: now + ttlMs,
};
await this.state.storage.put(entryKey(entry.stream_id), entry);
return jsonNoStore({ ok: true, ttl_ms: ttlMs, entry });
}
if (url.pathname === "/api/answer") {
if (request.method === "POST") {
let body: AnswerPostReq;
try {
body = (await request.json()) as AnswerPostReq;
} catch {
return jsonNoStore({ error: "invalid json" }, { status: 400 });
}
if (!body.stream_id || !body.answer) {
return jsonNoStore({ error: "missing stream_id/answer" }, { status: 400 });
}
if (body.answer.length > 64_000) {
return jsonNoStore({ error: "answer too large" }, { status: 413 });
}
const entry: AnswerEntry = {
stream_id: clampStr(body.stream_id, 256),
answer: body.answer,
updated_ms: now,
expires_ms: now + 2 * 60_000,
};
await this.state.storage.put(answerKey(entry.stream_id), entry);
return jsonNoStore({ ok: true });
}
if (request.method === "GET") {
const streamId = url.searchParams.get("stream_id") ?? "";
if (!streamId) {
return jsonNoStore({ error: "missing stream_id" }, { status: 400 });
}
const key = answerKey(streamId);
const ans = await this.state.storage.get<AnswerEntry>(key);
if (!ans || ans.expires_ms <= now) {
await this.state.storage.delete(key);
return jsonNoStore({ error: "not found" }, { status: 404 });
}
// One-shot: first reader consumes.
await this.state.storage.delete(key);
return jsonNoStore(ans);
}
return jsonNoStore({ error: "method not allowed" }, { status: 405 });
}
return jsonNoStore({ error: "not found" }, { status: 404 });
}
}
// Historical class referenced by older migrations. It is not bound anymore,
// but exporting it keeps workerd/wrangler happy if migrations mention it.
export class DirectoryDO implements DurableObject {
async fetch(): Promise<Response> {
return new Response("gone", { status: 410 });
}
}
const DIRECT_WIRE_TAG_FRAME = 0x00;
const DIRECT_WIRE_TAG_STREAM = 0x01;
const DIRECT_WIRE_TAG_PING = 0x02;
const DIRECT_WIRE_CHUNK_BYTES = 16 * 1024;
type TimingMeta = {
chunk_index: number;
};
type ObjectMeta = {
content_type: string;
timing?: TimingMeta;
};
// One-to-many fanout for direct-wire message chunks. Publisher sends direct-wire messages,
// subscribers receive the same stream, plus a buffered init + recent segments upon join.
export class StreamRelayDO implements DurableObject {
private publisher: WebSocket | null = null;
private subs = new Set<WebSocket>();
// Reassemble publisher STREAM chunks into full object frames for buffering.
private buf = new Uint8Array(0);
private want: number | null = null;
private initFrame: Uint8Array | null = null;
private segFrames = new Map<number, Uint8Array>();
private maxSegments = 12;
async fetch(request: Request): Promise<Response> {
const upgrade = request.headers.get("Upgrade")?.toLowerCase();
if (upgrade !== "websocket") {
return json({ error: "expected websocket" }, { status: 400 });
}
const url = new URL(request.url);
const role = (url.searchParams.get("role") ?? "sub").toLowerCase();
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const pair = new (WebSocketPair as any)();
const client: WebSocket = pair[0];
const server: WebSocket = pair[1];
server.accept();
if (role === "pub") {
if (this.publisher) {
try {
this.publisher.close(1013, "publisher replaced");
} catch {}
}
this.publisher = server;
} else {
this.subs.add(server);
// Best-effort fast start: send init + last segments.
try {
if (this.initFrame) {
this.sendFrame(server, this.initFrame);
}
const idxs = Array.from(this.segFrames.keys()).sort((a, b) => a - b);
for (const idx of idxs) {
const fr = this.segFrames.get(idx);
if (fr) this.sendFrame(server, fr);
}
} catch {}
}
server.addEventListener("message", (evt: MessageEvent) => {
const data = evt.data;
if (!(data instanceof ArrayBuffer)) {
return;
}
const msg = new Uint8Array(data);
if (role === "pub") {
// Fanout immediately (no buffering delay).
for (const sub of this.subs) {
try {
sub.send(data);
} catch {}
}
// Also decode for buffered join.
this.handlePublisherMsg(msg);
} else {
// Subscribers can send pings; ignore.
if (msg.length > 0 && msg[0] === DIRECT_WIRE_TAG_PING) {
return;
}
}
});
server.addEventListener("close", () => {
if (server === this.publisher) {
this.publisher = null;
// Keep buffered init/segments; publisher may reconnect soon.
}
this.subs.delete(server);
});
server.addEventListener("error", () => {
if (server === this.publisher) {
this.publisher = null;
}
this.subs.delete(server);
});
return new Response(null, { status: 101, webSocket: client });
}
private handlePublisherMsg(msg: Uint8Array) {
if (msg.length === 0) return;
const tag = msg[0];
if (tag === DIRECT_WIRE_TAG_PING) return;
if (tag === DIRECT_WIRE_TAG_FRAME) {
const frame = msg.subarray(1);
this.bufferFrame(frame);
return;
}
if (tag !== DIRECT_WIRE_TAG_STREAM) {
// Legacy: assume the whole thing is a frame.
this.bufferFrame(msg);
return;
}
// Append to reassembly buffer.
const chunk = msg.subarray(1);
this.buf = concatU8(this.buf, chunk);
// Pull as many framed payloads as possible:
// [u32be len][frame bytes...]
// The `frame` bytes are `encode_object_frame(meta, data)`.
    while (true) {
      if (this.want === null) {
        if (this.buf.length < 4) return;
        // Read the length as unsigned; `<<` alone goes negative for lengths >= 2^31.
        this.want =
          (((this.buf[0] << 24) | (this.buf[1] << 16) | (this.buf[2] << 8) | this.buf[3]) >>> 0);
        this.buf = this.buf.subarray(4);
        // Reject absurd lengths up front so a junk publisher cannot grow the buffer unboundedly.
        if (this.want > 4 * 1024 * 1024) {
          this.buf = new Uint8Array(0);
          this.want = null;
          return;
        }
      }
      const want = this.want ?? 0;
      if (this.buf.length < want) return;
      const frame = this.buf.subarray(0, want);
      this.buf = this.buf.subarray(want);
      this.want = null;
      this.bufferFrame(frame);
    }
}
private bufferFrame(frame: Uint8Array) {
const meta = tryDecodeObjectMeta(frame);
const idx = meta?.timing?.chunk_index;
if (typeof idx !== "number") return;
if (idx === 0) {
this.initFrame = frame.slice();
return;
}
this.segFrames.set(idx, frame.slice());
while (this.segFrames.size > this.maxSegments) {
const oldest = Math.min(...this.segFrames.keys());
this.segFrames.delete(oldest);
}
}
private sendFrame(ws: WebSocket, frame: Uint8Array) {
// Mirror the Rust `direct_wire_send_frame` format:
// Stream bytes are [u32be frame_len][frame]
const out = new Uint8Array(4 + frame.length);
const len = frame.length >>> 0;
out[0] = (len >>> 24) & 0xff;
out[1] = (len >>> 16) & 0xff;
out[2] = (len >>> 8) & 0xff;
out[3] = len & 0xff;
out.set(frame, 4);
for (let i = 0; i < out.length; i += DIRECT_WIRE_CHUNK_BYTES) {
const chunk = out.subarray(i, Math.min(out.length, i + DIRECT_WIRE_CHUNK_BYTES));
const msg = new Uint8Array(1 + chunk.length);
msg[0] = DIRECT_WIRE_TAG_STREAM;
msg.set(chunk, 1);
ws.send(msg);
}
}
}
function concatU8(a: Uint8Array, b: Uint8Array): Uint8Array {
if (a.length === 0) return b.slice();
if (b.length === 0) return a.slice();
const out = new Uint8Array(a.length + b.length);
out.set(a, 0);
out.set(b, a.length);
return out;
}
function tryDecodeObjectMeta(frame: Uint8Array): ObjectMeta | null {
if (frame.length < 4) return null;
const metaLen = (frame[0] << 24) | (frame[1] << 16) | (frame[2] << 8) | frame[3];
if (metaLen < 2 || frame.length < 4 + metaLen) return null;
const metaBytes = frame.subarray(4, 4 + metaLen);
try {
const txt = new TextDecoder().decode(metaBytes);
return JSON.parse(txt) as ObjectMeta;
} catch {
return null;
}
}

@ -0,0 +1,37 @@
name = "every-channel-site"
main = "src/index.ts"
compatibility_date = "2026-02-08"
workers_dev = false
account_id = "9a54fd76c3d5d9abac437382a9027e9b"
# Bind this worker to the every.channel zone.
# This uses Workers Custom Domains (not "routes to an origin"), so it can serve the site without
# an application server behind it.
routes = [
{ pattern = "every.channel", custom_domain = true },
{ pattern = "www.every.channel", custom_domain = true },
]
# Static assets built by Trunk (apps/tauri/ui -> apps/tauri/dist)
[assets]
directory = "../../apps/tauri/dist"
[[durable_objects.bindings]]
name = "EC_API"
class_name = "EcApiContainer"
[[durable_objects.bindings]]
name = "EC_STREAM"
class_name = "StreamRelayDO"
[[migrations]]
tag = "v2"
new_sqlite_classes = ["DirectoryDO"] # historical; safe to keep (namespace already created)
[[migrations]]
tag = "v3"
new_sqlite_classes = ["EcApiContainer"]
[[migrations]]
tag = "v4"
new_sqlite_classes = ["StreamRelayDO"]

docs/ARCHITECTURE.md Normal file
@ -0,0 +1,57 @@
# Architecture
## Layers
1. Capture
- Hardware: ATSC antennas -> HDHomeRun or Linux IPTV capture devices.
- Output: MPEG-TS or ATSC 3.0 streams.
2. Normalize
- Demux and normalize timestamps.
- Select program IDs and identify audio/video tracks.
3. Deterministic transcode
- Encode with a fixed profile (codec, GOP, bitrate, keyframe cadence).
- Emit fixed-duration chunks with stable ordering.
- Hash chunks to produce content identifiers.
4. MoQ publish
- Map each channel to a track namespace.
- Each chunk becomes a MoQ object in a group.
- Objects are named and addressed deterministically.
5. Relay mesh
- Relays cache objects and announce tracks.
- iroh provides programmable topology and peer routing.
- Multiple relays can serve identical objects.
6. Playback
- Desktop: Tauri app that subscribes to tracks.
- CLI: debugging, inspection, and headless clients.
- Web: static site that connects to a relay gateway.
## Roles
- Runner: owns capture + transcode + publish.
- Chopper: executes deterministic chunking profiles.
- Relay: stores and forwards MoQ objects.
- Manager: configures nodes and applies policy.
- Provisioner: bootstraps nodes and manages deployment.
## Determinism
- The same input with the same profile should yield identical chunks.
- Chunk hashes are the primitive for availability and de-duplication.
- Deterministic names allow relays to converge without coordination.
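A minimal sketch of that primitive, assuming blake3 as the hash (ECP-0012 also names blake3 for `chunk_hash`); the function name is illustrative:
```rust
// The content identifier of a chunk is just the hash of its bytes, so
// identical chunks converge on identical identifiers with no coordination.
fn chunk_id(chunk: &[u8]) -> blake3::Hash {
    blake3::hash(chunk)
}
```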
## Time synchronization
- Chunk boundaries are derived from PCR and, when available, broadcast UTC (ATSC STT / DVB TDT/TOT).
- Unsynced sources remain source-scoped until broadcast time is present.
- Discontinuities force a new chunk group boundary.

docs/BABY_STEPS.md Normal file
@ -0,0 +1,57 @@
# Baby steps
These are the smallest useful steps to get a real MoQ pipeline running end-to-end.
1. Capture and inspect transport streams
- Confirm HDHomeRun discovery on the local network.
- Fetch lineup JSON and map it to `ec-core::Channel`.
- Open a raw MPEG-TS stream for a single channel and write it to disk.
2. Deterministic transcode + chunking
- Choose a reference ffmpeg profile (encoder, GOP, keyframe cadence).
- Emit fixed-duration chunks with deterministic timestamps.
- Hash chunks and verify that repeated runs are byte-identical.
3. MoQ object model
- Model track/group/object IDs and object metadata in `ec-moq`.
- Map each chunk to a MoQ object with deterministic naming.
- Validate that object IDs are stable across runs.
4. Single-node publish + replay
- Build a local relay that stores objects on disk.
- Publish from the chopper to the relay.
- Replay stored objects to a local subscriber and validate playback.
5. Multi-node relay mesh
- Integrate iroh for node discovery and routing.
- Replicate objects between two relays.
- Verify that a subscriber can pull from either relay.
6. Client surfaces
- Tauri shell that lists available tracks and plays one.
- CLI that can subscribe and dump objects for inspection.
- Static web UI that connects to a relay gateway.
7. Time-synchronized chunking
- Parse PCR + STT/TDT/TOT to anchor chunk boundaries to broadcast UTC.
- Emit deterministic TS chunks with sync metadata.
- Promote source-scoped streams to broadcast-scoped streams when synced.
8. MoQ publish path
- Wrap time-aligned chunks as MoQ objects (timing metadata attached).
- Write objects to the local relay store.
- Wire to iroh transport once MoQ session adapters are stable.
9. Production hardening
- Crash recovery and backfill.
- Observability: tracing, metrics, object retention.
- Network policies for DMCA resilience and takedown mitigation.

@ -0,0 +1,45 @@
# Claude Code prompt (opus-4.6): design + IA pass
Goal: Do a design pass on the every.channel web site and apply the same information architecture and language to the desktop app UI.
Constraints:
- Keep it clean and minimal, but warmer (a subtle nod to old TV).
- Do not use protocol jargon in user-facing labels. Avoid words like "MoQ", "mDNS", "DHT", "endpoint", "broadcast", "track", "gossip", "pkarr".
- Prefer plain language:
- "Watch a link"
- "Nearby (same Wi-Fi)"
- "Public (internet)"
- "Directory"
- "Sharing key"
- Keep layout readable on mobile and desktop.
- Reuse the existing visual direction (soft gradient background, rounded cards) but tighten typography and spacing.
- Preserve existing functionality. This is a design and IA pass, not a behavior rewrite.
Scope:
1. Web site IA and design
- Implemented site lives at `apps/web/`:
- `apps/web/src/main.rs`
- `apps/web/style.css`
- `apps/web/index.html`
- Improve the IA and copy, but keep the same sections: Watch, Directory, Participate, About.
- Make the top nav and sections feel cohesive and calm.
- Add small touches that feel "old TV" without being kitsch. Very subtle scanlines/noise is OK.
2. App UI IA and design alignment
- Desktop UI lives at:
- `apps/tauri/ui/src/main.rs`
- `apps/tauri/ui/style.css`
- Ensure the same language and IA as the web site.
- Hide advanced fields. Default flows should be link-first.
- Make the Directory controls read like product features, not network plumbing.
3. Deliverables
- Update CSS and UI copy.
- If you add new components, keep them in the same files for now.
- Keep changes reversible and minimal in surface area.
- Provide a short summary of what changed and why.
Please edit the repo directly.

docs/COVERAGE.md Normal file
@ -0,0 +1,31 @@
# Coverage
every.channel uses `cargo-llvm-cov` to produce formal coverage reports.
## Run (Recommended: Nix)
```sh
direnv allow
./scripts/coverage.sh
```
To run the unit-subset coverage (does not require FFmpeg headers):
```sh
./scripts/coverage.sh unit
```
Artifacts:
- `tmp/coverage/workspace.lcov`
- `tmp/coverage/workspace.summary.txt`
- `tmp/coverage/workspace-html/index.html`
## Run (Inside nix develop)
If you are already in `nix develop`, you can use:
```sh
just cov-workspace
just cov-workspace-html
```

docs/DEPLOY_CLOUDFLARE.md Normal file
@ -0,0 +1,22 @@
# Cloudflare Deploy (Forgejo Actions)
This repo deploys `https://every.channel` via Wrangler.
## Prereqs
- Forgejo Actions enabled on the repo.
- A Cloudflare API token stored as a Forgejo Actions secret:
- name: `CLOUDFLARE_API_TOKEN`
The workflow is defined in `.forgejo/workflows/deploy-cloudflare.yml`.
## Manual deploy (local)
```sh
# from the repo root
cd apps/tauri/ui
trunk build --release --public-url /
cd ../../../deploy/cloudflare-worker
npm ci
npm run deploy
```

docs/IROH_EXAMPLES.md Normal file
@ -0,0 +1,46 @@
# iroh examples and references
Cloned under `third_party/iroh-org/` for local inspection.
## iroh-examples
- **browser-chat**: gossip-based chat in browser + CLI.
- **browser-blobs**: running iroh-blobs in WebAssembly.
- **custom-router**: manual protocol routing by ALPN.
- **dumbpipe-web**: HTTP forwarding over iroh.
- **tauri-todos**: Tauri + iroh documents.
## iroh-experiments
- **content-discovery**: tracker + pkarr discovery flow.
- **h3-iroh**: HTTP/3 over iroh connections.
- **iroh-pkarr-naming-system**: IPNS-style naming experiment.
## dumbpipe
- P2P stream forwarding via iroh (good reference for live media piping).
## iroh-gossip
- Pub/sub swarm model with topic IDs; ALPN routing examples.
## iroh-docs
- Shows how to layer blobs + gossip + docs into a protocol stack with Router.
## iroh-willow
- Confidential sync + encryption concepts relevant to swarm security.
## callme / iroh-live
- Audio/video streaming examples over iroh (already cloned in `third_party/`).
## Recent iroh blog + changelog highlights
- The changelog lists releases through 0.33.0 (Feb 24, 2025) with browser/Wasm support, discovery data publishing, and 0-RTT connections.
- 0.32.0 added browser alpha, QUIC Address Discovery (QAD) on relays, and the n0-future crate.
- 0.29.0 rebranded `iroh-net` to `iroh` and removed the bundled Node in favor of endpoints + routers (protocols moved to separate crates).
- Sources:
- https://iroh.computer/changelog
- https://iroh.computer/blog/iroh-0-29

docs/IROH_NOTES.md Normal file
@ -0,0 +1,13 @@
# iroh notes
## Recent highlights (from the iroh blog)
- iroh 0.96.0 focuses on QUIC multipaths on the road to 1.0.
- Custom transports (including Tor via `iroh-tor`) can be used to establish anonymous connections.
- Address lookup migration to a new DNS system (`N0Dns`) is underway.
## Implications for every.channel
- Multipath QUIC aligns with our resilience goals.
- Tor transport is a strong fit for anti-takedown posture.
- Endpoint discovery and DNS naming should be designed to tolerate evolution in iroh's discovery infrastructure.

@ -0,0 +1,16 @@
# MoQ implementations (Rust)
## moq-rs (kixelated/moq)
- Rust implementation of Media over QUIC.
- Includes `moq-lite` and `hang` catalog components.
## iroh-live
- Uses `moq-rs` with iroh connections via `web-transport-iroh` and `iroh-moq` adapters.
- Provides publish/watch examples for media streams over iroh.
## callme (iroh-roq)
- Audio-only proof that low-latency media over iroh works at scale.
- Useful reference for handshake and session management patterns.

docs/MOQ_NOTES.md Normal file
@ -0,0 +1,16 @@
# MoQ notes
Reference: https://blog.cloudflare.com/moq
- MoQ (Media over QUIC) is a publish/subscribe protocol for media on QUIC.
- The data model uses track namespaces, tracks, groups, and objects.
- Relays can cache and forward objects to many subscribers.
- MoQ is designed to bridge low-latency media delivery with scalable distribution.
- Browsers are expected to connect via WebTransport.
## Mapping to every.channel
- Each channel becomes a track namespace with tracks for video/audio.
- Each chunk is a MoQ object with deterministic IDs.
- Relays store objects and can serve them from any location.
- Object metadata includes optional timing fields (chunk index, PCR/UTC anchors) to support convergence.
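As an illustration of this mapping, a hypothetical addressing shape (names are ours, not `ec-moq`'s actual types):
```rust
// One chunk object under the mapping above.
struct ObjectAddress {
    broadcast: String,   // track namespace: the canonical stream id ("ec/stream/v1/...")
    track: &'static str, // e.g. the fixed "chunks" track from ECP-0012
    group: u64,          // group sequence: the deterministic chunk_index
}
```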

docs/USAGE.md Normal file
@ -0,0 +1,279 @@
# Usage
## Tauri viewer (local)
If `trunk` or `cargo-tauri` is missing, enter the nix shell (or install the tools manually).
`direnv allow` also sets `EVERY_CHANNEL_ROOT` so Tauri can find the UI folder.
```sh
direnv allow
cd apps/tauri
cargo tauri dev
```
If you want deterministic transcoding instead of stream copy:
```sh
EVERY_CHANNEL_TRANSCODE=1 cargo tauri dev
```
For node ingest/MoQ publish, you can force deterministic transcode with:
```sh
EVERY_CHANNEL_DETERMINISTIC=1 cargo run -p ec-node -- ingest hdhr --channel 8.1
```
iroh discovery is opt-in (DNS discovery is off by default). Enable DHT and/or mDNS like this:
```sh
EVERY_CHANNEL_IROH_DISCOVERY=dht,mdns cargo tauri dev
```
In the Tauri app, use **Add stream** to add an HDHomeRun host, a direct HLS URL, or a yt-dlp-supported URL (e.g. YouTube Live). The flow rejects non-live sources.
Linux DVB sources can be added with a URL like:
```
linux-dvb://localhost?adapter=0&dvr=0&tune=dvbv5-zap&tune=-r&tune=Channel%20Name
```
On Linux, if `/dev/dvb` exists and a `channels.conf` is found, Linux DVB channels are auto-listed in the Channels panel.
You can override the `channels.conf` path with `EVERY_CHANNEL_DVB_CHANNELS_CONF=/path/to/channels.conf`.
In the UI, you can still type `linux-dvb` in the Add stream field to open the Linux DVB picker (adapter, channels.conf, channel).
Select a channel and click **Share** to start a MoQ publisher. The share bundle (endpoint addr, broadcast, track) appears under the viewer panel and can be pasted into **Manual MoQ connect** on another node.
For gossip announcements, you can provide peers in the UI or set `EVERY_CHANNEL_GOSSIP_PEERS` (comma-separated). mDNS peer discovery is used on LANs to supplement the peer list when available.
## Coverage
See `docs/COVERAGE.md`.
## HDHomeRun E2E Test (Local Network)
This runs two local `ec-node` processes (publish then subscribe) against a real HDHomeRun source and validates that:
chunks are encrypted, manifests are required, and the subscriber produces a playable HLS output directory.
Requires Nix (so `ac-ffmpeg` finds FFmpeg headers):
```sh
./scripts/e2e-hdhr.sh --host <HDHR_HOST> --channel <CHANNEL>
```
## Mesh E2E Test (Split Sources)
This runs two publishers over the same broadcast:
- peer A publishes **manifests only** (`--publish-chunks=false`)
- peer B publishes **objects only**
The subscriber fetches objects from peer B and manifests from peer A using `--remote-manifests`.
```sh
./scripts/e2e-mesh-split.sh --host <HDHR_HOST> --channel <CHANNEL>
```
### yt-dlp bundling (YouTube Live URLs)
To enable the “Add stream (yt-dlp)” UI flow, bundle the Python runtime + yt-dlp into the app resources:
```sh
scripts/vendor-yt-dlp.sh
```
The app will use the bundled runtime under `apps/tauri/resources/yt-dlp/<platform>/venv`. You can override with `EVERY_CHANNEL_YTDLP_PYTHON`.
## Node ingest (MoQ file relay)
### HDHomeRun
```sh
cargo run -p ec-node -- ingest hdhr --channel 8.1
```
Enable deterministic transcode:
```sh
cargo run -p ec-node -- ingest hdhr --channel 8.1 --deterministic
```
Use a specific device:
```sh
cargo run -p ec-node -- ingest hdhr --device-id <DEVICE_ID> --channel 8.1
```
### Linux DVB
```sh
cargo run -p ec-node -- ingest linux-dvb --adapter 0 --dvr 0 --tune-cmd dvbv5-zap --tune-cmd -r --tune-cmd "Channel Name"
```
### HLS playlist
```sh
cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8
```
Use deterministic transcode if needed:
```sh
cargo run -p ec-node -- ingest hls --url https://example.com/live.m3u8 --mode transcode
```
### Raw TS file or URL
```sh
cargo run -p ec-node -- ingest ts --input /path/to/stream.ts
```
## Time sync inspection
```sh
cargo run -p ec-cli -- ts-sync /path/to/stream.ts --chunk-ms 2000 --max-events 50
```
## MoQ publish over iroh
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1
```
Publish with deterministic transcode:
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 --deterministic
```
Use a specific device:
```sh
cargo run -p ec-node -- moq-publish hdhr --device-id <DEVICE_ID> --channel 8.1
```
Publish an HLS source:
```sh
cargo run -p ec-node -- moq-publish hls --url https://example.com/live.m3u8
```
Set a stable identity:
```sh
IROH_SECRET=<hex> cargo run -p ec-node -- moq-publish ts --input /path/to/stream.ts
```
Enable discovery (DHT + mDNS):
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
--discovery dht,mdns
```
Announce to catalog gossip:
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
--announce --gossip-peer <ENDPOINT_ADDR>
```
Publish per-chunk manifests alongside chunks:
```sh
EVERY_CHANNEL_MANIFEST_SIGNING_KEY=<hex> cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
--publish-manifests --announce --gossip-peer <ENDPOINT_ADDR>
```
Batch multiple chunks per manifest (epoch):
```sh
cargo run -p ec-node -- moq-publish hdhr --channel 8.1 \
--publish-manifests --epoch-chunks 8
```
## MoQ subscribe (HLS output)
```sh
cargo run -p ec-node -- moq-subscribe \
--remote <ENDPOINT_ADDR> \
--broadcast-name <STREAM_ID>
```
Enable discovery (DHT + mDNS):
```sh
cargo run -p ec-node -- moq-subscribe \
--remote <ENDPOINT_ADDR> \
--broadcast-name <STREAM_ID> \
--discovery dht,mdns
```
Segments and `index.m3u8` will be written to `./tmp/moq-hls` by default.
Subscribe to manifests and require them for validation:
```sh
cargo run -p ec-node -- moq-subscribe \
--remote <ENDPOINT_ADDR> \
--broadcast-name <STREAM_ID> \
--subscribe-manifests --require-manifest
```
Restrict to a manifest signer:
```sh
cargo run -p ec-node -- moq-subscribe \
--remote <ENDPOINT_ADDR> \
--broadcast-name <STREAM_ID> \
--subscribe-manifests --require-manifest \
--manifest-signers ed25519:<hex>
```
Throttle incoming data:
```sh
cargo run -p ec-node -- moq-subscribe \
--remote <ENDPOINT_ADDR> \
--broadcast-name <STREAM_ID> \
--max-bytes-per-sec 5000000 --max-bytes-burst 10000000
```
If the stream is encrypted, pass the same network secret used by the publisher:
```sh
EVERY_CHANNEL_NETWORK_SECRET=<hex> cargo run -p ec-node -- moq-subscribe \
--remote <ENDPOINT_ADDR> \
--broadcast-name <STREAM_ID>
```
## MoQ self-test (local round trip)
```sh
cargo run -p ec-node -- moq-selftest /path/to/stream.ts
```
Pass a URL (for HDHomeRun):
```sh
cargo run -p ec-node -- moq-selftest http://<hdhr>/auto/v8.1 --max-chunks 8
```
## Determinism test (libx264 single-thread)
```sh
cargo run -p ec-cli -- determinism-test /path/to/stream.ts ./tmp/determinism --runs 3
```
## Tests and coverage
Run the core unit tests:
```sh
just test-core
```
Generate a coverage report (requires `nix develop` so ffmpeg headers are visible to `ac-ffmpeg`):
```sh
just cov-core-html
```

docs/allowed_signers Normal file
@ -0,0 +1 @@
founder@every.channel ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJCBTSEEcBOhOkf3WF1e8xmblAZHvgTibFsqck2GY8D/

@ -0,0 +1,56 @@
# ECP-0001: every.channel proposals process
Status: Draft
## Problem
We need a lightweight, consistent way to propose and review changes while keeping the constitution focused on long-lived principles.
## Decision
Adopt ECP (every.channel proposals) as the primary decision record.
Each ECP is a short, versioned document in `evolution/proposals/` that records context, decisions, and rollout steps.
### Required sections
- Title and status
- Problem / context
- Decision
- Alternatives considered
- Rollout / teardown plan
- Open questions (optional)
### Status values
- Draft
- Accepted
- Implemented
- Superseded
- Rejected
### Numbering
- `ECP-0001` is this process.
- New ECPs increment by 1 and use a short, descriptive slug.
### Review and acceptance
- Non-trivial changes require an ECP.
- The founder is the final reviewer and sign-off authority.
- Once accepted, implementation may begin or continue.
### Identity and signing
- Commits must be signed with SSH or age identities (minimum: SSH-signed commits).
- If signing is not configured, implementation pauses until clarified.
## Alternatives considered
- Free-form issues: rejected due to loss of decision history.
- Heavyweight RFCs: rejected due to unnecessary overhead.
## Rollout / teardown
- Add this process and create initial ECPs.
- If ECPs become too heavy, supersede this with a lighter process.

@ -0,0 +1,30 @@
# ECP-0002: initial technical direction
Status: Draft
## Problem
We need a coherent starting stack that aligns with the constitution and can evolve without locking the project into fragile dependencies.
## Decision
Adopt the following initial technical direction:
- Rust-first core for protocol and node runtime.
- Media over QUIC (MoQ) for publish/subscribe media delivery.
- Deterministic chunking profiles to support de-duplication and availability.
- iroh as the programmable mesh substrate for peer routing.
- Client surfaces: Tauri desktop app, CLI, and static web UI.
These are starting points, not immutable commitments. They may be revised by later ECPs.
## Alternatives considered
- HLS/DASH for primary delivery: rejected due to mismatch with low-latency pub/sub goals.
- Centralized relay only: rejected due to resilience and user sovereignty goals.
- Non-Rust core: deferred to keep early integration cohesive.
## Rollout / teardown
- Scaffold crates and documentation that align with this direction.
- Revisit once MoQ implementations and browser support stabilize.

@ -0,0 +1,29 @@
# ECP-0003: HDHomeRun discovery and lineup ingest
Status: Draft
## Problem
We need a reliable way to discover HDHomeRun devices on a LAN and ingest their channel lineups (including all fields) so we can map channels into every.channel.
## Decision
Implement a two-path discovery and ingest flow:
1. UDP discovery broadcast to port 65001 using the HDHomeRun TLV packet format, wildcard device ID, and tuner device type filter.
2. HTTP hydration using `/discover.json` and `/lineup.json` for full metadata.
The UDP response supplies the IP and base capabilities, while the HTTP endpoints provide rich metadata and lineup entries. The ingest layer stores all unknown fields as raw JSON to keep future flexibility.
Where mDNS is available, allow host-based discovery via `hdhomerun.local` or `<deviceid>.local` by fetching `http://<host>/discover.json`.
## Alternatives considered
- mDNS-only discovery: rejected because UDP discovery is the documented primary path and does not depend on mDNS configuration.
- Manual IP entry only: rejected because it prevents zero-config onboarding.
## Rollout / teardown
- Add discovery and lineup ingestion to `ec-hdhomerun`.
- Expose CLI commands for discovery and lineup JSON parsing.
- If discovery proves unreliable on some platforms, add interface-specific broadcast addresses or a user-provided host override.

@ -0,0 +1,54 @@
# ECP-0004: global stream identifiers and swarms
Status: Draft
## Problem
We need a global, deterministic way to identify streams so that identical broadcasts from different antennas can converge in the same swarm without coordination.
## Decision
Adopt a two-layer identifier scheme:
1. **Broadcast identity**: a logical identifier derived from broadcast metadata (PSIP/ATSC) such as transport stream ID and program number. This is the primary convergence key.
2. **Source identity**: a physical identifier used when broadcast identity is not yet available (e.g., early ingestion). This is a fallback.
A canonical stream ID string will be generated as:
`ec/stream/v1/<scope>/<fields...>/profile-<profile>/variant-<variant>`
Where `<scope>` is `broadcast` or `source`.
### Broadcast scope fields
- `standard` (e.g., `atsc`)
- `tsid-<transport_stream_id>` (when known)
- `program-<program_number>` (when known)
- optional `callsign-<callsign>` and `region-<region>` as hints
### Source scope fields
- `kind` (e.g., `hdhr` or `linux-dvb`)
- `device-<device_id>` when available
- `channel-<channel_reference>` when available
### Profile and variant
- `profile` identifies the deterministic encoding profile.
- `variant` identifies audio language, resolution, or alternate tracks.
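As an illustration, one plausible rendering of a broadcast-scoped id (the exact field encoding here is a sketch, not the `ec-core` implementation):
```rust
// Hypothetical formatter for a broadcast-scoped ATSC stream id.
fn broadcast_stream_id(tsid: u16, program: u16, profile: &str, variant: &str) -> String {
    format!("ec/stream/v1/broadcast/atsc/tsid-{tsid}/program-{program}/profile-{profile}/variant-{variant}")
}
```
For example, `broadcast_stream_id(1051, 3, "det2s", "main")` (hypothetical profile and variant names) would yield `ec/stream/v1/broadcast/atsc/tsid-1051/program-3/profile-det2s/variant-main`.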
## Consequences
- The network can converge on the same stream even when multiple relays ingest the same broadcast.
- Early ingestion may start with `source` scope and migrate to `broadcast` scope once PSIP metadata is parsed.
## Alternatives considered
- Single opaque UUID per stream: rejected because it prevents convergence without coordination.
- Content hash as stream ID: deferred; may be layered later as an availability primitive.
## Rollout / teardown
- Implement `StreamKey` in `ec-core`.
- Extend ingest pipeline to parse PSIP/ATSC IDs and promote `source` to `broadcast` IDs.
- Revise once PSIP parsing is in place.

@ -0,0 +1,30 @@
# ECP-0005: deterministic chunking pipeline
Status: Draft
## Problem
We need a deterministic encoding and chunking pipeline so that identical broadcasts produce identical objects across relays.
## Decision
Adopt a deterministic chunking profile with the following constraints:
- Single-threaded encoding by default.
- Fixed GOP cadence and keyframe placement.
- Bitexact flags enabled wherever possible.
- Fixed chunk duration (default 2 seconds) and stable object naming.
Initial implementation uses ffmpeg CLI piping into the ac-ffmpeg chunker for rapid iteration, while the primary goal is to move to libav bindings for deeper control and determinism validation.
## Alternatives considered
- Multithreaded encoders: rejected because they reduce reproducibility.
- Hardware encoders: deferred because determinism is uncertain.
- HLS segmentation: rejected as the primary format; MoQ objects will be derived directly.
## Rollout / teardown
- Introduce `ec-chopper` with a deterministic profile and ffmpeg-based segmenter.
- Measure determinism with repeated runs.
- Replace the CLI with libav bindings once the control surface is verified.

@ -0,0 +1,27 @@
# ECP-0006: Linux IPTV (LinuxDVB) ingest
Status: Draft
## Problem
We need a second ingest source beyond HDHomeRun so that Linux tuner stacks can participate in the relay mesh.
## Decision
Provide a LinuxDVB ingest module that:
- Opens `/dev/dvb/adapterX/dvrY` as a byte stream.
- Optionally spawns a tuning command (e.g., `dvbv5-zap -r`) before opening the DVR stream.
- Enumerates local adapters from `/dev/dvb` to support "zero config" device discovery in the UI.
- Optionally reads channel names from a `channels.conf` file (common locations like `~/.dvb/channels.conf`) to populate pickers without scanning.
- Runs on Linux only, returning an explicit error on other platforms.
## Alternatives considered
- Raw ioctl tuning in Rust: deferred due to complexity and lack of immediate testing hardware.
- IPTV UDP-only ingest: deferred to a later proposal.
## Rollout / teardown
- Add `ec-linux-iptv` crate.
- Expand with ioctl tuning once hardware is available.

@ -0,0 +1,30 @@
# ECP-0007: Tauri + Dioxus viewer bridge
Status: Draft
## Problem
We need a macOS-ready viewer so we can watch every.channel streams while MoQ playback matures.
## Decision
Ship an embedded Dioxus web frontend inside a Tauri shell. The viewer lists generic stream descriptors (no HDHomeRun-specific UI) and plays a selected stream via a local HTTP bridge.
The bridge:
- Spawns ffmpeg to transcode a stream URL into short CMAF-style HLS segments (fMP4).
- Serves the generated playlist and segments from a local HTTP server.
- Returns a playback URL to the frontend.
This preserves the stream abstraction while providing a pragmatic playback path on macOS (WebKit/HLS).
## Alternatives considered
- Wait for MoQ playback: rejected for now because it blocks local iteration.
- Native AVFoundation player: deferred to keep UI portable and simple.
## Rollout / teardown
- Implement `list_streams` + `start_stream` commands in the Tauri backend.
- Add Dioxus frontend with channel list and player.
- Replace the HLS bridge with MoQ playback when the transport stack stabilizes.

@ -0,0 +1,33 @@
# ECP-0008: iroh discovery + transport design
Status: Draft
## Problem
We need a discovery and transport layer that can survive hostile networks, reduce central points of failure, and support stream swarms across independently hosted relays.
## Decision
Adopt iroh as the mesh substrate with the following design:
- **Dial by public key**: every node has an iroh endpoint identity; connections are established by endpoint id with automatic hole punching and relay fallback.
- **MoQ over iroh**: run MoQ sessions over iroh QUIC connections using a dedicated ALPN (e.g. `every.channel/moq/0`).
- **Discovery via gossip**: use iroh-gossip to disseminate stream catalogs and track announcements, keyed by `StreamId`.
- **Transport flexibility**: support iroh custom transports (e.g. Tor) for nodes that need anonymity or censorship resistance.
- **Multipath first**: leverage iroh's multipath QUIC capabilities to increase resilience and adapt to changing network conditions.
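For concreteness, the dedicated ALPN as a constant (the string comes from this ECP; the constant name is illustrative):
```rust
pub const EC_MOQ_ALPN: &[u8] = b"every.channel/moq/0";
```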
## Consequences
- Nodes can join the swarm with minimal configuration and no centralized registry.
- The mesh can degrade gracefully under takedown pressure by routing through relays or Tor.
## Alternatives considered
- Centralized rendezvous servers: rejected due to fragility and governance risk.
- Manual peer lists: rejected due to onboarding friction.
## Rollout / teardown
- Add an iroh transport crate that exposes a `connect` API for MoQ sessions.
- Implement gossip-based stream catalog synchronization.
- Add optional Tor transport profile for high-risk regions.

@ -0,0 +1,36 @@
# ECP-0009: MoQ implementation selection
Status: Draft
## Problem
We need a practical MoQ implementation to begin integration with iroh while the protocol evolves.
## Survey (Rust implementations)
- **moq-dev/moq**: MoQ reference repository with Rust crates including `moq-lite`, `moq-relay`, and `hang` catalogs.
- https://github.com/moq-dev/moq
- **cloudflare/moq-rs**: Rust implementation of the IETF MoQ Transport draft.
- https://github.com/cloudflare/moq-rs
## Decision
Start with `moq-lite` (from moq-dev/moq) and reuse the adapters proven by `iroh-live`:
- `web-transport-iroh` for WebTransport-style bindings over iroh.
- `iroh-moq` for creating MoQ sessions over iroh connections.
Track `cloudflare/moq-rs` in parallel for standards-aligned MoQ Transport and schedule
interoperability tests once we have live relays.
## Alternatives considered
- Building a fresh MoQ stack: rejected due to duplication and slow iteration.
- Waiting for browser-native MoQ: rejected because it delays core network validation.
- Starting directly on cloudflare/moq-rs: deferred until we need draft-accurate transport.
## Rollout / teardown
- Vendor or depend on `moq-rs` crates.
- Implement a minimal publish/subscribe pipeline over iroh.
- Re-evaluate once MoQ standardization and browser support stabilize.

@ -0,0 +1,46 @@
# ECP-0010: time-synchronized chunking
Status: Draft
## Problem
To achieve byte-identical chunks across uncoordinated antennas, we need a shared timeline for chunk boundaries. If chunk boundaries are derived from local capture start time, identical broadcasts will diverge.
## Decision
Define a **time-synchronized chunking model** based on broadcast clocks:
1. **Primary clock: PCR**
- Use MPEG-TS PCR (27 MHz) as the canonical timeline for chunk boundaries.
2. **UTC anchor (when available)**
- ATSC: use PSIP STT to compute UTC time.
- DVB: use TDT/TOT to compute UTC time.
3. **Chunk boundary rule**
- Compute `chunk_index = floor(t_anchor / D)` where `t_anchor` is UTC time when available, otherwise PCR time (see the sketch after this list).
- `D` is fixed chunk duration (e.g., 2000 ms).
4. **Alignment policy**
- On ingest, drop data until the next full boundary.
- Each chunk contains frames/packets whose PTS fall within `[chunk_start, chunk_end)`.
5. **Discontinuity handling**
- If PCR/PTS discontinuity is detected, close the current group and start a new one at the next boundary.
6. **Sync status**
- Streams with UTC anchor are marked `synced` and eligible for swarm convergence.
- Streams without UTC anchor are `unsynced` and remain source-scoped.
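A minimal sketch of the boundary rule, assuming millisecond anchors (function names are illustrative, not the ingest pipeline's API):
```rust
/// chunk_index = floor(t_anchor / D), with D the fixed chunk duration.
fn chunk_index(t_anchor_ms: u64, chunk_ms: u64) -> u64 {
    t_anchor_ms / chunk_ms
}

/// Frames whose PTS fall in [start, end) belong to chunk `index`.
fn chunk_bounds(index: u64, chunk_ms: u64) -> (u64, u64) {
    (index * chunk_ms, (index + 1) * chunk_ms)
}
```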
## Consequences
- Identical broadcasts on different antennas converge to the same chunk boundaries and can produce byte-identical object streams.
- Early ingestion can begin unsynced and promote to synced when broadcast time appears.
- Chunk IDs become deterministic functions of (UTC or PCR timeline, duration).
## Alternatives considered
- Local wall clock (NTP/PTP): rejected as primary anchor because it is not broadcast-authoritative and varies between hosts.
- Capture-start-based chunking: rejected because it prevents convergence without coordination.
## Rollout / teardown
- Add a time model to the ingest pipeline (PCR + optional UTC anchor).
- Require chunkers to align to the computed boundary.
- Promote `StreamId` from source scope to broadcast scope only when `synced`.
- Supersede this ECP if MoQ introduces a standardized global timeline.

@ -0,0 +1,33 @@
# ECP-0011: stream encryption keys
Status: Draft
## Problem
We need a consistent encryption model so streams can be protected in transit while remaining discoverable by stream id.
## Decision
Derive a symmetric stream key deterministically from the stream id, with an optional network secret:
- `stream_key = BLAKE3-derive("every.channel stream key v1", network_secret || 0x00 || stream_id)`
- If `network_secret` is absent, the key is public and provides obfuscation only.
- If `network_secret` is present, the stream is private to holders of the secret.
Encryption will be applied at the object layer (MoQ objects), not at the transport layer. This allows relays to store and forward encrypted objects without visibility.
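A sketch of the derivation, assuming the `blake3` crate's `derive_key`; treating an absent secret as empty key material is our reading of the rule above, and the function name is illustrative:
```rust
// stream_key = BLAKE3-derive("every.channel stream key v1",
//                            network_secret || 0x00 || stream_id)
fn derive_stream_key(network_secret: Option<&[u8]>, stream_id: &str) -> [u8; 32] {
    let mut material = Vec::new();
    if let Some(secret) = network_secret {
        material.extend_from_slice(secret);
    }
    material.push(0x00);
    material.extend_from_slice(stream_id.as_bytes());
    blake3::derive_key("every.channel stream key v1", &material)
}
```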
## Consequences
- Streams can be encrypted deterministically without coordination.
- Private swarms can be created by sharing a network secret.
## Alternatives considered
- Per-session negotiated keys: rejected because it prevents deterministic convergence.
- PKI per stream: deferred due to operational complexity.
## Rollout / teardown
- Add key derivation helper in `ec-crypto`.
- Implement object-layer encryption in the MoQ publisher.
- Add configuration for network secret.

@ -0,0 +1,42 @@
# ECP-0012: MoQ object wire format + track mapping
Status: Draft
## Problem
We need a concrete, interoperable way to represent every.channel chunk objects over MoQ tracks so that publishers and subscribers can exchange data without relying on file relay conventions.
## Constraints
- Works over `moq-lite` tracks with group/frame semantics.
- Must carry both metadata (timing, encryption) and chunk bytes.
- Should be simple to parse and deterministic.
## Decision
- **Broadcast name**: the canonical stream id (e.g. `ec/stream/v1/...`).
- **Track name**: fixed `chunks` track for object payloads.
- **Group sequence**: use `chunk_index` (u64) as the group sequence.
- **Frame payload**: a single frame per group containing:
- 4-byte big-endian length prefix for JSON metadata
- JSON-encoded `ObjectMeta`
- raw chunk bytes (ciphertext or plaintext)
This yields a deterministic, self-contained payload with minimal framing and easy debugging.
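A minimal encode sketch of this layout (the relay worker refers to an `encode_object_frame(meta, data)` helper; this signature is our sketch of it, not necessarily the `ec-moq` API):
```rust
// [u32 be meta_len][JSON-encoded ObjectMeta][raw chunk bytes]
fn encode_object_frame(meta_json: &[u8], chunk: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + meta_json.len() + chunk.len());
    frame.extend_from_slice(&(meta_json.len() as u32).to_be_bytes());
    frame.extend_from_slice(meta_json);
    frame.extend_from_slice(chunk);
    frame
}
```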
`ObjectMeta` may include optional integrity helpers:
- `chunk_hash` (blake3 of plaintext chunk bytes)
- `chunk_proof` (Merkle branch proving membership in an epoch manifest)
## Alternatives considered
- Separate meta/data frames: rejected due to ordering ambiguity and more framing.
- Binary codec (postcard/bincode): deferred until interop is proven.
## Rollout / teardown
- Implement encode/decode helpers in `ec-moq`.
- Update publisher to emit one frame per chunk group.
- Update subscriber to decode into `ObjectPayload`.
- If superseded, provide a compatibility adapter in `ec-moq`.

@ -0,0 +1,30 @@
# ECP-0013: stream catalog gossip for discovery
Status: Draft
## Problem
We need a decentralized way to discover live streams and their MoQ endpoints without a central registry.
## Constraints
- Must work over iroh transports (QUIC, relays).
- Should be encrypted at the transport layer and signed by endpoint identities.
- Lightweight enough to broadcast frequently.
## Decision
- Introduce a `StreamCatalogEntry` structure that pairs a `StreamDescriptor` with MoQ transport metadata and optional encryption parameters.
- Publish catalog entries over `iroh-gossip` on a well-known topic derived from `every.channel/catalog/v1`.
- `ec-node` can optionally announce entries when publishing a stream.
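This ECP only fixes the string the topic is derived from; one plausible derivation (purely illustrative) is a 32-byte hash, which matches iroh-gossip's topic-id size:
```rust
// Illustrative: derive a 32-byte gossip topic id from the well-known string.
fn catalog_topic_id() -> [u8; 32] {
    *blake3::hash(b"every.channel/catalog/v1").as_bytes()
}
```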
## Alternatives considered
- Centralized HTTP registry: rejected due to resilience and governance risks.
- Static peer lists only: rejected due to poor discoverability.
## Rollout / teardown
- Add catalog types to `ec-core` and gossip helpers to `ec-iroh`.
- Add CLI flags to `ec-node` to announce catalog entries.
- Defer full catalog sync/merge semantics until initial interoperability is validated.

@ -0,0 +1,37 @@
# ECP-0014: In-app MoQ sharing (relay-first)
## Status
Draft
## Context
We need a low-friction way for a viewer node to share a live stream with other nodes. Today the CLI can publish MoQ streams, but the Tauri app cannot initiate a publish session or surface share details. Early adoption needs a quick path to “click Share, send details.”
We also need a near-term relay path that works across NATs without extra configuration. iroh provides default public relays; we can use those until we add custom relay selection.
## Decision
Add an in-app MoQ publish path for the currently selected channel. When a user clicks **Share**, the app starts a MoQ publisher and returns a share bundle (endpoint addr, broadcast, track). The bundle is shown in the UI for copy/paste and can be used by any MoQ subscriber.
For now, the publish flow relies on iroh's default relay configuration (relay-first). A later ECP can formalize relay selection and custom relay registries.
## Details
- New `start_moq_publish` Tauri command that:
- Opens the selected stream source.
- Chunks with the existing ffmpeg pipeline.
- Publishes objects over MoQ with deterministic encryption metadata.
- Returns a share bundle (sketched below): `{ endpoint_addr, broadcast_name, track_name, stream_id }`.
- The viewer UI shows a **Share** button in the Viewer panel and surfaces the share bundle.
- Manual MoQ connect stays available in the **Add source** menu for now.
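A hypothetical Rust shape for that bundle (field names come from this ECP; the serde derive is an assumption):
```rust
#[derive(serde::Serialize)]
struct ShareBundle {
    endpoint_addr: String,  // iroh endpoint address to dial
    broadcast_name: String, // canonical stream id
    track_name: String,     // e.g. "chunks"
    stream_id: String,
}
```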
## Consequences
- Sharing a stream consumes a tuner when the source is a live HDHomeRun stream.
- Publishing is long-lived; the app keeps a MoQ node alive until exit.
- The share bundle is ephemeral unless a stable iroh secret is configured.
## Risks
- Relay capacity and policy may change; a future ECP should specify relay configuration and redundancy.
- DRM-protected streams may fail to publish or play; UI should surface DRM hints.
## Follow-ups
- Add stable identity and share token signing.
- Add catalog gossip announcements for published streams.
- Provide a web gateway (MoQ -> HLS/MSE) for browsers without MoQ support.

@ -0,0 +1,23 @@
# ECP-0015: Gossip announce toggle and bootstrap peers
## Status
Draft
## Context
Sharing a stream should optionally announce it into the global catalog so other nodes can discover it. Today gossip requires explicit peers, and there is no UI surface for managing announcements.
## Decision
Add an "Announce on share" toggle and a "Gossip peers" field to the viewer UI. When enabled, the app sends a catalog announcement over iroh-gossip immediately after a share begins. Peers can be supplied in-app or via `EVERY_CHANNEL_GOSSIP_PEERS` (comma-separated).
## Details
- `start_moq_publish` accepts `announce` and `gossip_peers`.
- If `announce` is true and no peers are provided, we fall back to `EVERY_CHANNEL_GOSSIP_PEERS`.
- The share card surfaces announce status (announced / error).
## Consequences
- Announcements remain opt-in and require at least one peer.
- Users can keep their share local by leaving announce off.
## Follow-ups
- Add local peer discovery via mDNS to remove manual peer entry for LANs.
- Add public bootstrap peer lists and relay selection once governance approves.

Some files were not shown because too many files have changed in this diff.