# Web Server
There is one web server,
which serves static content
and a single entrypoint for dynamic state information.
The static content is some HTML and JavaScript,
which the browser runs to pull the dynamic state
and update the page with the current status of everything.
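A minimal sketch of that layout,
using only Python's standard library;
the endpoint name, port, and `get_state()` helper are assumptions,
not the project's actual API.

```python
import json
from http.server import HTTPServer, SimpleHTTPRequestHandler


def get_state() -> dict:
    """Hypothetical helper: collect current status from all Workers."""
    return {"workers": [], "jobs": []}


class Handler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/state":
            # The single dynamic entrypoint: current status as JSON.
            body = json.dumps(get_state()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # Everything else is static HTML/JavaScript served from disk.
            super().do_GET()


if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```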
# Workers
There are at least two Workers:
a Reader and an Encoder.
Each Worker runs in its own thread,
and can do its job without interfering with the other Workers.
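One way to structure that,
sketched with Python's `threading` module;
the class and method names are illustrative,
not the project's actual code.

```python
import threading


class Worker(threading.Thread):
    """Base class: each Worker loops independently in its own thread."""

    def __init__(self, name: str):
        super().__init__(name=name, daemon=True)
        self.progress = 0.0            # surfaced through the Web Server's dynamic state
        self._stopping = threading.Event()

    def run(self):
        # The thread's main loop; Readers and Encoders supply work_once().
        while not self._stopping.is_set():
            self.work_once()

    def work_once(self):
        raise NotImplementedError

    def stop(self):
        self._stopping.set()
```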
## Readers
Readers monitor a device for media.
Right now, those devices are always CD-ROM drives.
As soon as media is inserted,
a MediaHandler is created to scan and then copy it.
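A sketch of that loop in Python;
how discs are actually detected is not specified here,
so the `media_present()` check, the polling interval,
and the `handler_factory` callable are all assumptions.
The handler methods correspond to the stages listed under MediaHandlers below.

```python
import time


def reader_loop(device: str, handler_factory) -> None:
    """Runs inside a Reader's thread: watch one drive, hand discs to a MediaHandler."""
    while True:
        if media_present(device):
            handler = handler_factory()   # fresh MediaHandler with its own work directory
            handler.scan(device)          # title, track list, other metadata
            handler.copy(device)          # raw data into the work directory
            while media_present(device):  # wait for the disc to be ejected before re-arming
                time.sleep(5.0)
        time.sleep(5.0)                   # polling interval is a guess


def media_present(device: str) -> bool:
    # Placeholder check: an optical drive typically refuses to open
    # until media is loaded, so a successful open suggests a disc is present.
    try:
        with open(device, "rb"):
            return True
    except OSError:
        return False
```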
## Encoders
Encoders wait for jobs to show up,
and then re-invoke a MediaHandler to encode everything in each job.
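A matching sketch for the Encoder side;
how jobs "show up" is not spelled out,
so scanning a spool directory for work directories
whose on-disk state says copying has finished
is an assumption that at least fits the disk-based state model described below.

```python
import os
import time


def encoder_loop(spool_dir: str, handler_factory) -> None:
    """Runs inside an Encoder's thread: pick up copied jobs and encode them."""
    while True:
        for entry in sorted(os.listdir(spool_dir)):
            handler = handler_factory(os.path.join(spool_dir, entry))
            # Assumption: the work directory's saved state records the last finished stage.
            if handler.state().get("stage") == "copy-done":
                handler.encode()   # into the desired format, e.g. MP3 or MKV
                handler.clean()
        time.sleep(5.0)
```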
# MediaHandlers
MediaHandlers have a work directory,
where they store all of their working files and state.
They have the following stages of execution:
1. *scan* the media to figure out its title, list of tracks, and other metadata
2. *copy* the media to the work directory
3. *encode* the work directory into the desired format (e.g. MP3, MKV)
4. *clean* the work directory

Before each step,
state is read out of the work directory.
During each step,
a MediaHandler continually updates its Worker with a completion percentage.
This is passed up to the Web Server's dynamic state.
After each step,
a MediaHandler updates its state,
which is stored on disk.
The only way to communicate state between execution stages is by writing to disk.
This provides some tolerance of job interruption, power loss, etc.
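A sketch of that stage machine in Python;
the `state.json` file name, the `*-done` stage markers,
and the `report_progress` callback are all assumptions,
but the shape follows the description above:
read state from disk before a step,
report progress during it,
and write state back to disk after it.

```python
import json
import os


class MediaHandler:
    STAGES = ("scan", "copy", "encode", "clean")

    def __init__(self, work_dir: str, report_progress=lambda pct: None):
        self.work_dir = work_dir
        self.report_progress = report_progress   # flows up through the Worker to the Web Server
        os.makedirs(work_dir, exist_ok=True)

    def state(self) -> dict:
        # Before each step: read state back out of the work directory.
        try:
            with open(os.path.join(self.work_dir, "state.json")) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def _save(self, state: dict) -> None:
        # After each step: write state straight back to disk, so an
        # interrupted job can resume from the last completed stage.
        with open(os.path.join(self.work_dir, "state.json"), "w") as f:
            json.dump(state, f)

    def run_stage(self, name: str) -> None:
        state = self.state()
        # ... the real scan/copy/encode/clean work happens here,
        # calling self.report_progress(percentage) as it goes ...
        self.report_progress(100.0)
        state["stage"] = f"{name}-done"
        self._save(state)
```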