Moonraker for the PLUS4

whb0514
2024-09-02 13:31:06 +08:00
parent 1006bcb85e
commit f8fad844d7
101 changed files with 27357 additions and 7021 deletions

.gitignore vendored

@@ -3,5 +3,11 @@ __pycache__/
*.py[cod]
*$py.class
.devel
.venv
.venv
venv
start_moonraker
*.env
.pdm-python
build
dist

.readthedocs.yaml

@@ -1,10 +1,14 @@
version: 2
build:
os: ubuntu-22.04
tools:
python: "3.11"
mkdocs:
configuration: mkdocs.yml
fail_on_warning: false
python:
version: 3.8
install:
- requirements: docs/doc-requirements.txt

LICENSE

@@ -1,4 +1,4 @@
GNU GENERAL PUBLIC LICENSE
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>

README.md

@@ -1,40 +1,43 @@
<p align="center"><img src="https://github.com/QIDITECH/QIDI_MAX3/blob/main/other/QIDI.png" height="240" alt="QIDI's logo" /></p>
<p align="center"><a href="LICENSE"><img alt="GPL-V3.0 License" src="https://github.com/QIDITECH/QIDI_MAX3/blob/main/other/qidi.svg"></a></p>
# Document Instructions
The 3D printers of QIDI are based on Klipper. Building on the Klipper open source project, we have made some modifications to its source code to meet our users' needs. At the same time, we have also modified Moonraker so that our screens stay consistent with the operations on the page.
Thanks to the developers and maintainers of these open source projects. Please consider using or supporting these powerful projects.
- <a href="https://github.com/Klipper3d/klipper">**Klipper**</a>
- <a href="https://github.com/Arksine/moonraker">**Moonraker**</a>
# Moonraker - API Web Server for Klipper
1. This repository provides QIDI's modified version of Moonraker.
2. This document only describes how to update by replacing the source code.
***Please note that manual updates may affect normal after-sales service.***
Moonraker is a Python 3 based web server that exposes APIs with which
client applications may use to interact with the 3D printing firmware
[Klipper](https://github.com/KevinOConnor/klipper). Communication between
the Klippy host and Moonraker is done over a Unix Domain Socket. Tornado
is used to provide Moonraker's server functionality.
## Detailed update process
1. Connect your printer device through SSH.
2. Confirm which software you need to replace. Download the corresponding file and replace the software over the SSH connection. The following are the paths of each software component within the system.
Documentation for users and developers can be found on
[Read the Docs](https://moonraker.readthedocs.io/en/latest/).
Software|Directory
---|---
klipper|/home/mks/
moonraker|/home/mks/
### Clients
3. If there is no need to update xindi, simply replace the files. For example, if you replace the klipper folder, save it and then restart the printer (a sketch of this step follows below).
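A hypothetical example of steps 2 and 3, assuming the printer's SSH user is `mks` (implied by the `/home/mks/` paths above) and that the service name is `klipper`:
```
scp -r ./klipper mks@<printer-ip>:/home/mks/
ssh mks@<printer-ip> "sudo systemctl restart klipper"
```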
Note that Moonraker does not come bundled with a client, you will need to
install one. The following clients are currently available:
## Report Issues and Make Suggestions
You can contact [After-Sales Service](https://qidi3d.com/pages/warranty-policy-after-sales-support) to report issues and make suggestions.
- [Mainsail](https://github.com/mainsail-crew/mainsail) by [Mainsail-Crew](https://github.com/mainsail-crew)
- [Fluidd](https://github.com/fluidd-core/fluidd) by Cadriel
- [KlipperScreen](https://github.com/jordanruthe/KlipperScreen) by jordanruthe
- [mooncord](https://github.com/eliteSchwein/mooncord) by eliteSchwein
### Raspberry Pi Images
Moonraker is available pre-installed with the following Raspberry Pi images:
- [MainsailOS](https://github.com/mainsail-crew/MainsailOS) by [Mainsail-Crew](https://github.com/mainsail-crew)
- Includes Klipper, Moonraker, and Mainsail
- [FluiddPi](https://github.com/fluidd-core/FluiddPi) by Cadriel
- Includes Klipper, Moonraker, and Fluidd
### Docker Containers
The following projects deploy Moonraker via Docker:
- [prind](https://github.com/mkuf/prind) by mkuf
- A suite of containers which allow you to run Klipper in
Docker. Includes support for OctoPrint and Moonraker.
### Changes
Please refer to the [changelog](https://moonraker.readthedocs.io/en/latest/changelog)
for a list of notable changes to Moonraker.

docs/api_changes.md

@@ -1,10 +1,53 @@
##
This document keeps a record of all changes to Moonraker's web APIs.
This document keeps a record of notable changes to Moonraker's Web API.
### July 18th 2023
- Moonraker API Version 1.3.0
- Added [Spoolman](web_api.md#spoolman-apis) APIs.
- Added [Rollback](web_api.md#rollback-to-the-previous-version) API to
the `update_manager`
- The `update_manager` status response has new fields for items of the
`git_repo` and `web` types:
- `recovery_url`: Url of the repo a "hard" recovery will fetch from
- `rollback_version`: Version the extension will revert to when a rollback
is requested
- `warnings`: An array of strings containing various warnings detected
during repo init. Some warnings may explain an invalid state while
others may alert users to potential issues, such as a `git_repo` remote
url not matching the expected (ie: configured) url.
- Additionally, the `need_channel_update` field has been removed as the method
changing channels is done exclusively in the configuration.
### February 20th 2023
- The following new endpoints are available when at least one `[sensor]`
section has been configured:
- `GET /server/sensors/list`
- `GET /server/sensors/sensor`
- `GET /server/sensors/measurements`
See [web_api.md](web_api.md) for details on these new endpoints.
- A `sensors:sensor_update` notification has been added. When at least one
monitored sensor is reporting a changed value Moonraker will broadcast this
notification.
See [web_api.md](web_api.md) for details on this new notification.
### February 17 2023
- Moonraker API Version 1.2.1
- An error in the return value for some file manager endpoints has
been corrected. Specifically, the returned result contains an `item` object
with a `path` field that was prefixed with the root (ie: "gcodes").
This is inconsistent with the websocket notification and has been corrected
to remove the prefix. This affects the following endpoints:
- `POST /server/files/directory` | `server.files.post_directory`
- `DELETE /server/files/directory` | `server.files.delete_directory`
- `POST /server/files/move` | `server.files.move`
- `POST /server/files/copy` | `server.files.copy`
### March 4th 2022
- Moonraker API Version 1.0.1
- The `server.websocket.id` endpoint has been deprecated. It is
recommended to use `server.connection.idenitfy` method to identify
recommended to use `server.connection.identify` method to identify
your client. This method returns a `connection_id` which is
the websocket's unique id. See
[the documentation](web_api.md#identify-connection) for details.

docs/changelog.md Normal file

@@ -0,0 +1,181 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog].
## [Unreleased]
### Added
- **notifier**: The `attach` option now supports Jinja2 templates.
- **notifier**: The `attach` option may now contain multiple attachments,
each separated by a newline.
- **notifier**: Added support for a configurable `body_format`
- **power**: Added support for generic `http` type switches.
- **metadata**: Added support for OrcaSlicer
- **zeroconf**: Added support for a configurable mDNS hostname.
- **zeroconf**: Added support for UPnP/SSDP Discovery.
- **spoolman**: Added integration to the
[Spoolman](https://github.com/Donkie/Spoolman) filament manager.
- **update_manager**: Added support for update rollbacks
- **update_manager**: Added support for stable `git_repo` updates
- **server**: Added a `--unixsocket` command line option
- **server**: Command line options may also be specified as env variables
- **server**: Added a `route_prefix` option
- **webcam**: Webcam APIs can now specify cameras by `uid` or `name`
- **deps**: Added support for optional `msgspec` and `uvloop` packages
- **extensions**: Agents may now register remote methods with Klipper
- **file_manager**: Add `check_klipper_config_path` option
- **button**: Added `debounce_period` option
- **history**: Added a check for previous jobs not finished (ie: when power is
lost during a print). These jobs will report their status as `interrupted`.
- **build**: Added support for optional speedup dependencies `uvloop` and `msgspec`
- **update_manager**: Added support for "zipped" application updates
- **file_manager**: Added `enable_config_write_access` option
- **machine**: Add support for system peripheral queries
- **mqtt**: Added the `status_interval` option to support rate limiting
- **mqtt**: Added the `enable_tls` option to support ssl/tls connections
- **history**: Added `user` field to job history data
- **history**: Added support for auxiliary history fields
- **spoolman**: Report spool ids set during a print in history auxiliary data
- **sensor**: Added support for history fields reported in auxiliary data
- **power**: Added support for `uhubctl` devices
- **update_manager**: Add support for pinned git commits
### Fixed
- **simplyprint**: Fixed import error preventing the component from loading.
- **update_manager**: Moonraker will now restart the correct "moonraker" and
"klipper" services if they are not the default values.
- **job_queue**: Fixed transition when auto is disabled
- **history**: Added modification time to file existence checks.
- **dbus_manager**: Fixed PolKit warning when PolKit features are not used.
- **job_queue**: Fixed a bug where the `job_transition_gcode` runs when the
queue is started. It will now only run between jobs during automatic
transition.
- **klippy_connection**: Fixed a race condition that can result in
skipped subscription updates.
- **confighelper**: Fixed inline comment parsing.
- **authorization**: Fixed blocking call to `socket.getfqdn()`
- **power**: Fixed "on_when_job_queued" behavior when the internal device
state is stale.
### Changed
- **build**: Bumped apprise to version `1.8.0`.
- **build**: Bumped lmdb to version `1.4.1`
- **build**: Bumped tornado to version `6.4.0`
- **build**: Bumped jinja2 to version `3.1.4`
- **build**: Bumped zeroconf to version `0.131.0`
- **build**: Bumped libnacl to version `2.1.0`
- **build**: Bumped distro to version `1.9.0`
- **build**: Bumped pillow to version `10.3.0`
- **build**: Bumped streaming-form-data to version `1.15.0`
- **machine**: Added `ratos-configurator` to list of default allowed services
- **update_manager**: It is now required that an application be "allowed"
for Moonraker to restart it after an update.
- **update_manager**: Git repo validation no longer requires a match for the
remote URL and/or branch.
- **update_manager**: Fixed potential security vulnerabilities in `web` type updates.
This change adds a validation step to the install, front-end developers may refer to
the [configuration documentation](./configuration.md#web-type-front-end-configuration)
for details.
- **update_manager**: The `env` option for the `git_repo` type has been deprecated, new
configurations should use the `virtualenv` option.
- **update_manager**: The `install_script` option for the `git_repo` has been
deprecated, new configurations should use the `system_dependencies` option.
- **update_manager**: APIs that return status report additional fields.
See the [API Documentation](./web_api.md#get-update-status) for details.
- **proc_stats**: Improved performance of Raspberry Pi CPU throttle detection.
- **power**: Bound services are now processed during initialization when
`initial_state` is configured.
- **gpio**: Migrate from libgpiod to python-periphery
- **authorization**: The authorization module is now loaded as part of Moonraker's
core.
- **database**: Migrated the underlying database from LMDB to Sqlite.
- **history**: Use dedicated SQL tables to store job history and job totals.
- **authorization**: Use a dedicated SQL table to store user data.
## [0.8.0] - 2023-02-23
!!! Note
This is the first tagged release since a changelog was introduced. The list
below contains notable changes introduced beginning in February 2023. Prior
notable changes were kept in [user_changes.md] and [api_changes.md].
### Added
- Added this changelog!
- Added pyproject.toml with support for builds through [pdm](https://pdm.fming.dev/latest/).
- **sensor**: New component for generic sensor configuration.
- [Configuration Docs](configuration.md#sensor)
- [API Docs](web_api.md#sensor-apis)
- [Websocket Notification Docs](web_api.md#sensor-events)
- **file_manager**: Added new [scan metadata](web_api.md#scan-gcode-metadata) endpoint.
- **file_manager**: Added new [thumbnails](web_api.md#get-gcode-thumbnails) endpoint.
- **file_manager**: Added [file_system_observer](configuration.md#file_manager)
configuration option.
- **file_manager**: Added [enable_observer_warnings](configuration.md#file_manager)
configuration option.
- **file_manager**: Added ability to upload to symbolic links.
- **metadata**: Added support for Simplify3D V5 metadata parsing
- **machine**: Added [shutdown_action](configuration.md#machine) configuration
option.
- **machine**: Added service detection to the `supervisord_cli` provider.
- **machine**: Added `octoeverywhere` to the list of default allowed services.
- **power**: Added support for "Hue" device groups.
- **websockets**: Added support for [direct bridge](web_api.md#bridge-websocket)
connections.
- **update_manager**: Added new [refresh](web_api.md#refresh-update-status) endpoint.
- **update_manager**: Added support for pinned pip upgrades.
- **websockets**: Added support for post connection authentication over the websocket.
- **scripts**: Added database backup and restore scripts.
### Changed
- Converted Moonraker source into a Python package.
- The source from `moonraker.py` has been moved to `server.py`. The remaining code in
`moonraker.py` serves as a legacy entry point for launching Moonraker.
- **file_manager**: Improved inotify synchronization with API requests.
- **file_manager**: Endpoint return values are now consistent with their
respective websocket notifications.
- **machine**: The [provider](configuration.md#machine) configuration option
now expects `supervisord_cli` instead of `supervisord`.
- **update_manager**: Relaxed requirement for git repo tag detection. Now only two
parts are required (ie: v1.5 and v1.5.0 are acceptable).
### Deprecated
- **file_manager**: The `enable_inotify_warnings` configuration option has been
deprecated in favor of `enable_observer_warnings`.
### Fixed
- **file_manager**: Fix edge condition where `create_file` notifications
may be sent before a `create_dir` notification.
- **power** - Fixed URL encoding issues for http devices.
- **template**: A ConfigError is now raised when a template fails to
render during configuration.
- **machine**: Fixed support for Supervisord Version 4 and above.
- **update_manager**: Added package resolution step to the APT backend.
- **update_manager**: Fixed PackageKit resolution step for 64-bit systems.
- **update_manager**: Fixed Python requirements file parsing. Comments are now ignored.
### Removed
- Pycurl dependency. Moonraker no longer uses Tornado's curl based http client.
## [0.7.1] - 2021-07-08
- Experimental pre-release
<!-- Links -->
[keep a changelog]: https://keepachangelog.com/en/1.0.0/
[semantic versioning]: https://semver.org/spec/v2.0.0.html
[user_changes.md]: user_changes.md
[api_changes.md]: api_changes.md
<!-- Versions -->
[unreleased]: https://github.com/Arksine/moonraker/compare/v0.8.0...HEAD
[0.8.0]: https://github.com/Arksine/moonraker/compare/v0.7.1...v0.8.0
[0.7.1]: https://github.com/Arksine/moonraker/releases/tag/v0.7.1

File diff suppressed because it is too large.

docs/contributing.md

@@ -1,10 +1,28 @@
# Contributing to Moonraker
While Moonraker exists as a service independently from Klipper, it relies
on Klipper to be useful. Thus, the tentative plan is to eventually merge
the Moonraker application into the Klipper repo after Moonraker matures,
at which point this repo will be archived. As such, contibuting guidelines
are near those of Klipper:
Prior to submitting a pull request prospective contributors must read this
entire document. Care should be taken to [format git commits](#git-commit-format)
correctly. This eases the review process and provides the reviewer with
confidence that the submission will be of sufficient quality.
Prospective contributors should consider the following:
- Does the contribution have significant impact? Bug fixes to existing
functionality and new features requested by 100+ users qualify as
items of significant impact.
- Has the submission been well tested? Submissions with substantial code
change must include details about the testing procedure and results.
- Does the submission include blocking code? Moonraker is an asynchronous
application, thus blocking code must be avoided.
- If any dependencies are included, are they pure python? Many low-powered SBCs
running Armbian do not have prebuilt wheels and are not capable of building wheels
themselves, thus breaking updates on these systems.
- Does the submission change the API? If so, could the change potentially break
frontends using the API?
- Does the submission include updates to the documentation?
When performing reviews these are the questions that will be asked during the
initial stages.
#### New Module Contributions
@@ -105,24 +123,23 @@ By making a contribution to this project, I certify that:
```
#### Code Style
Python methods should be fully annotated. Variables should be annotated where
the type cannot be inferred. Moonraker uses the `mypy` static type checker for
code validation with the following options:
the type cannot be inferred. Moonraker uses `mypy` version 1.5.1 for static
type checking with the following options:
- `--ignore-missing-imports`
- `--follow-imports=silent`
No line in the source code should exceed 80 characters. Be sure there is no
No line in the source code should exceed 88 characters. Be sure there is no
trailing whitespace. To validate code before submission one may use
`pycodestyle` with the following options:
`flake8` version 6.1.0 with the following options:
- `--ignore=E226,E301,E302,E303,W503,W504`
- `--max-line-length=80`
- `--max-doc-length=80`
- `--max-line-length=88`
- `--max-doc-length=88`
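For instance, a sketch of the validation steps implied above, assuming both tools are installed in Moonraker's virtualenv and run from the repo root:
```
~/moonraker-env/bin/pip install mypy==1.5.1 flake8==6.1.0
~/moonraker-env/bin/mypy --ignore-missing-imports --follow-imports=silent moonraker/
~/moonraker-env/bin/flake8 --ignore=E226,E301,E302,E303,W503,W504 \
    --max-line-length=88 --max-doc-length=88 moonraker/
```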
Generally speaking, each line in submitted documentation should also be no
longer than 80 characters, however there are situations where this isn't
possible, such as long hyperlinks or example return values. Documentation
isn't linted, so it
longer than 88 characters, however there are situations where this isn't
possible, such as long hyperlinks or example return values.
Don't peek into the member variables of another class. Use getters or
Avoid peeking into the member variables of another class. Use getters or
properties to access object state.

docs/doc-requirements.txt

@@ -1,2 +1,2 @@
mkdocs==1.3.0
pymdown-extensions==9.1
mkdocs-material==9.5.4
compact_tables@git+https://github.com/Arksine/markdown-compact-tables@v1.0.0

docs/index.md

@@ -2,7 +2,7 @@
Moonraker is a Python 3 based web server that exposes APIs with which
client applications may use to interact with the 3D printing firmware
[Klipper](https://github.com/KevinOConnor/klipper). Communcation between
[Klipper](https://github.com/Klipper3d/klipper). Communication between
the Klippy host and Moonraker is done over a Unix Domain Socket. Tornado
is used to provide Moonraker's server functionality.
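As a quick check that the server is reachable, one can query its HTTP API directly; a minimal sketch, assuming a default instance listening on port 7125:
```
curl http://localhost:7125/server/info
```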
@@ -14,7 +14,7 @@ Client developers may refer to the [Client API](web_api.md)
documentation.
Backend developers should refer to the
[contibuting](contributing.md) section for basic contribution
[contributing](contributing.md) section for basic contribution
guidelines prior to creating a pull request. The
[components](components.md) document provides a brief overview
of how to create a component and interact with Moonraker's

docs/installation.md

@@ -31,10 +31,10 @@ missing one or both, you can simply add the bare sections to `printer.cfg`:
[display_status]
[virtual_sdcard]
path: ~/gcode_files
path: ~/printer_data/gcodes
```
### Enabling the Unix Socket
### Enabling Klipper's Unix Domain Socket Server
After Klipper is installed it may be necessary to modify its `defaults` file in
order to enable the Unix Domain Socket. Begin by opening the file in your
@@ -69,12 +69,9 @@ KLIPPY_ARGS="/home/pi/klipper/klippy/klippy.py /home/pi/printer.cfg -l /tmp/klip
the default LSB script. In this case, you need to modify the
klipper.service file.
You may also want to take this opportunity to change the location of
printer.cfg to match Moonraker's `config_path` option (see the
[configuration document](configuration.md#primary-configuration)
for more information on the config_path). For example, if the `config_path`
is set to `~/printer_config`, your klipper defaults file might look
like the following:
You may also want to take this opportunity to configure `printer.cfg` and
`klippy.log` so they are located in Moonraker's `data_path`, for example:
```
# Configuration for /etc/init.d/klipper
@@ -82,14 +79,17 @@ KLIPPY_USER=pi
KLIPPY_EXEC=/home/pi/klippy-env/bin/python
KLIPPY_ARGS="/home/pi/klipper/klippy/klippy.py /home/pi/printer_config/printer.cfg -l /tmp/klippy.log -a /tmp/klippy_uds"
KLIPPY_ARGS="/home/pi/klipper/klippy/klippy.py /home/pi/printer_data/config/printer.cfg -l /home/pi/printer_data/logs/klippy.log -a /tmp/klippy_uds"
```
If necessary, create the config directory and move printer.cfg to it:
Moonraker's install script will create the data folder, however you
may wish to create it now and move `printer.cfg` to the correct
location, ie:
```
cd ~
mkdir printer_config
mv printer.cfg printer_config
mkdir ~/printer_data
mkdir ~/printer_data/logs
mkdir ~/printer_data/config
mv printer.cfg ~/printer_data/config
```
### Installing Moonraker
@@ -101,10 +101,15 @@ cd ~
git clone https://github.com/Arksine/moonraker.git
```
Now is a good time to create [moonraker.conf](configuration.md). If you are
using the `config_path`, create it in the specified directory otherwise create
it in the HOME directory. The [sample moonraker.conf](./moonraker.conf) in
the `docs` directory may be used as a starting point.
The install script will attempt to create a basic configuration if
`moonraker.conf` does not exist at the expected location, however if you
prefer to have Moonraker start with a robust configuration you may create
it now. By default the configuration file should be located at
`$HOME/printer_data/config/moonraker.conf`, however the location of the
data path may be configured using the script's command line options.
The [sample moonraker.conf](./moonraker.conf) may be used as a starting
point, full details can be found in the
[configuration documentation](./configuration.md).
For a default installation run the following commands:
```
@@ -112,29 +117,40 @@ cd ~/moonraker/scripts
./install-moonraker.sh
```
Or to install with `moonraker.conf` in the `config_path`:
```
cd ~/moonraker/scripts
./install-moonraker.sh -f -c /home/pi/printer_config/moonraker.conf
```
The install script has a few command line options that may be useful,
particularly for those upgrading:
- `-r`:
Rebuilds the virtual environment for existing installations.
Sometimes this is necessary when a dependency has been added.
- `-f`:
Force an overwrite of Moonraker's systemd script. By default the
the systemd script will not be modified if it exists.
- `-c /home/pi/moonraker.conf`:
Specifies the path to Moonraker's config file. The default location
is `/home/<user>/moonraker.conf`. When using this option to modify
an existing installation it is necessary to add `-f` as well.
- `-a <alias>`:
The installer uses this option to determine the name of the service
to install. If `-d` is not provided then this option will also be
used to determine the name of the data path folder. If omitted this
defaults to `moonraker`.
- `-d <path to data folder>`:
Specifies the path to Moonraker's data folder. This folder organizes
files and directories used by moonraker. See the `Data Folder Structure`
section for details. If omitted this defaults to `$HOME/printer_data`.
- `-c <path to configuration file>`
Specifies the path to Moonraker's configuration file. By default the
configuration is expected at `<data_folder>/config/moonraker.conf`. ie:
`/home/pi/printer_data/config/moonraker.conf`.
- `-l <path to log file>`
Specifies the path to Moonraker's log file. By default Moonraker logs
to `<data_folder>/logs/moonraker.log`. ie:
`/home/pi/printer_data/logs/moonraker.log`.
- `-z`:
Disables `systemctl` commands during install (ie: daemon-reload, restart).
This is useful for installations that occur outside of a standard environment
where systemd is not running.
- `-x`:
Skips installation of [polkit rules](#policykit-permissions). This may be
necessary to install Moonraker on systems that do not have policykit
installed.
- `-s`:
Installs Moonraker's [speedup](#optional-speedups) Python packages in the
Python environment.
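For example, a sketch of an upgrade-style invocation combining the options above (the data path shown simply makes the default explicit):
```
cd ~/moonraker/scripts
./install-moonraker.sh -f -d ~/printer_data -s
```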
Additionally, installation may be customized with the following environment
variables:
@@ -143,17 +159,20 @@ variables:
- `MOONRAKER_REBUILD_ENV`
- `MOONRAKER_FORCE_DEFAULTS`
- `MOONRAKER_DISABLE_SYSTEMCTL`
- `MOONRAKER_SKIP_POLKIT`
- `MOONRAKER_CONFIG_PATH`
- `MOONRAKER_LOG_PATH`
- `MOONRAKER_DATA_PATH`
- `MOONRAKER_SPEEDUPS`
When the script completes it should start both Moonraker and Klipper. In
`/tmp/klippy.log` you should find the following entry:
`klippy.log` you should find the following entry:
`webhooks client <uid>: Client info {'program': 'Moonraker', 'version': '<version>'}`
Now you may install a client, such as
[Mainsail](https://github.com/mainsail-crew/mainsail) or
[Fluidd](https://github.com/cadriel/fluidd).
[Fluidd](https://github.com/fluidd-core/fluidd).
!!! Note
Moonraker's install script no longer includes the nginx dependency.
@@ -162,42 +181,267 @@ Now you may install a client, such as
debian/ubuntu distros).
### Data Folder Structure
As mentioned previously, files and folders used by Moonraker are organized
in a primary data folder. The example below illustrates the folder
structure using the default data path of `$HOME/printer_data`.
```
/home/pi/printer_data
├── backup
│   └── 20220822T202419Z
│       ├── config
│       │   └── moonraker.conf
│       └── service
│           └── moonraker.service
├── certs
│   ├── moonraker.cert (optional)
│   └── moonraker.key (optional)
├── config
│   ├── moonraker.conf
│   └── printer.cfg
├── database
│   └── moonraker-sql.db
├── gcodes
│   ├── test_gcode_one.gcode
│   └── test_gcode_two.gcode
├── logs
│   ├── klippy.log
│   └── moonraker.log
├── systemd
│   └── moonraker.env
├── moonraker.secrets (optional)
└── moonraker.asvc
```
If it is not desirable for the files and folders to exist in these specific
locations it is acceptable to use symbolic links. For example, it is common
for the gcode folder to be located at `$HOME/gcode_files`. Rather than
reconfigure Klipper's `virtual_sdcard` it may be desirable to create a
`gcodes` symbolic link in the data path pointing to this location.
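For example, a minimal sketch of that symlink, assuming the default data path and an existing `$HOME/gcode_files` folder:
```
ln -s ~/gcode_files ~/printer_data/gcodes
```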
!!! Note
It is still possible to directly configure the paths to the configuration
and log files if you do not wish to use the default file names of
`moonraker.conf` and `moonraker.log`
When Moonraker attempts to update legacy installations symbolic links
are used to avoid an unrecoverable error. Additionally a `backup`
folder is created which contains the prior configuration and/or
systemd service unit, ie:
```
/home/pi/printer_data
├── backup
│   └── 20220822T202419Z
│       ├── config
│       │   ├── include
│       │   │   ├── extras.conf
│       │   │   ├── power.conf
│       │   │   └── updates.conf
│       │   └── moonraker.conf
│       └── service
│           └── moonraker.service
├── certs
│   ├── moonraker.cert -> /home/pi/certs/certificate.pem
│   └── moonraker.key -> /home/pi/certs/key.pem
├── config -> /home/pi/klipper_config
├── database -> /home/pi/.moonraker_database
├── gcodes -> /home/pi/gcode_files
├── logs -> /home/pi/logs
├── systemd
│   └── moonraker.env
└── moonraker.secrets -> /home/pi/moonraker_secrets.ini
```
!!! Warning
The gcode and config paths should not contain symbolic links
that result in an "overlap" of one another. Moonraker uses
inotify to watch files in each of these folders and takes action
when a file change is detected. The action taken depends on the
"root" folder, thus it is important that they be distinct.
### The systemd service file
The default installation will create `/etc/systemd/system/moonraker.service`.
Below is a common example of service file, installed on a Raspberry Pi:
```ini
# systemd service file for moonraker
[Unit]
Description=API Server for Klipper SV1
Requires=network-online.target
After=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=simple
User=pi
SupplementaryGroups=moonraker-admin
RemainAfterExit=yes
WorkingDirectory=/home/pi/moonraker
EnvironmentFile=/home/pi/printer_data/systemd/moonraker.env
ExecStart=/home/pi/moonraker-env/bin/python $MOONRAKER_ARGS
Restart=always
RestartSec=10
```
Following are some items to take note of:
- The `Description` contains a string that Moonraker uses to validate
the version of the service file, (notice `SV1` at the end, ie: Service
Version 1).
- The `moonraker-admin` supplementary group is used to grant policykit
permissions.
- The `EnvironmentFile` field contains Moonraker's arguments. See the
[environment file section](#the-environment-file) for details.
- The `ExecStart` field begins with the python executable, followed by
the environment variable `MOONRAKER_ARGS`. This variable is set in
the environment file.
### Command line usage
This section is intended for users that need to write their own
installation script. Detailed are the command line arguments
available to Moonraker:
```
usage: moonraker.py [-h] [-c <configfile>] [-l <logfile>] [-n]
usage: moonraker.py [-h] [-d <data path>] [-c <configfile>] [-l <logfile>] [-u <unixsocket>] [-n] [-v] [-g] [-o]
Moonraker - Klipper API Server
optional arguments:
options:
-h, --help show this help message and exit
-d <data path>, --datapath <data path>
Location of Moonraker Data File Path
-c <configfile>, --configfile <configfile>
Location of moonraker configuration file
Path to Moonraker's configuration file
-l <logfile>, --logfile <logfile>
log file name and location
Path to Moonraker's log file
-u <unixsocket>, --unixsocket <unixsocket>
Path to Moonraker's unix domain socket
-n, --nologfile disable logging to a file
-v, --verbose Enable verbose logging
-g, --debug Enable Moonraker debug features
-o, --asyncio-debug Enable asyncio debug flag
```
The default configuration is:
- config file path- `~/moonraker.conf`
- log file path - `/tmp/moonraker.log`
- logging to a file is enabled
If one needs to start moonraker without generating a log file, the
- `data path`: `$HOME/printer_data`
- `config file`: `$HOME/printer_data/config/moonraker.conf`
- `log file`: `$HOME/printer_data/logs/moonraker.log`
- `unix socket`: `$HOME/printer_data/comms/moonraker.sock`
- logging to a file is enabled
- Verbose logging is disabled
- Moonraker's debug features are disabled
- The asyncio debug flag is set to false
!!! Tip
While the `data path` option may be omitted it is recommended that it
always be included for new installations. This allows Moonraker
to differentiate between new and legacy installations.
!!! Warning
Moonraker's `--unixsocket` option should not be confused with Klipper's
`--api-server` option. The `unixsocket` option for Moonraker specifies
the path where Moonraker will create a unix domain socket that serves its
JSON-RPC API.
If it is necessary to run Moonraker without logging to a file, the
`-n` option may be used, for example:
```
~/moonraker-env/bin/python ~/moonraker/moonraker/moonraker.py -n -c /path/to/moonraker.conf
~/moonraker-env/bin/python ~/moonraker/moonraker/moonraker.py -d ~/printer_data -n
```
In general it is not recommended to install moonraker with this option.
While moonraker will still log to stdout, all requests for support must
be accompanied by moonraker.log.
These options may be changed by editing
`/etc/systemd/system/moonraker.service`. The `install-moonraker.sh` script
may also be used to modify the config file location.
!!! Tip
It is not recommended to install Moonraker with file logging disabled
While moonraker will still log to stdout, all requests for support
must be accompanied by `moonraker.log`.
Each command line argument has an associated environment variable that may
be used to specify options in place of the command line.
- `MOONRAKER_DATA_PATH="<data path>"`: equivalent to `-d <data path>`
- `MOONRAKER_CONFIG_PATH="<configfile>"`: equivalent to `-c <configfile>`
- `MOONRAKER_LOG_PATH="<logfile>"`: equivalent to `-l <logfile>`
- `MOONRAKER_UDS_PATH="<unixsocket>"`: equivalent to `-u <unixsocket>`
- `MOONRAKER_DISABLE_FILE_LOG="y"`: equivalent to `-n`
- `MOONRAKER_VERBOSE_LOGGING="y"`: equivalent to `-v`
- `MOONRAKER_ENABLE_DEBUG="y"`: equivalent to `-g`.
- `MOONRAKER_ASYNCIO_DEBUG="y"`: equivalent to `-o`
!!! Note
Command line arguments take priority over environment variables when
both are specified.
[The environment file](#the-environment-file) may be used to set Moonraker's
command line arguments and/or environment variables.
### The environment file
The environment file, `moonraker.env`, is created in the data path during
installation. A default installation's environment file will contain the path
to `moonraker.py` and the data path option, ie:
```
MOONRAKER_DATA_PATH="/home/pi/printer_data"
MOONRAKER_ARGS="-m moonraker"
PYTHONPATH="/home/pi/moonraker"
```
A legacy installation converted to the updated flexible service unit
might contain the following. Note that this example uses command line
arguments instead of environment variables, either would be acceptable:
```
MOONRAKER_ARGS="/home/pi/moonraker/moonraker/moonraker.py -d /home/pi/printer_data -c /home/pi/klipper_config/moonraker.conf -l /home/pi/klipper_logs/moonraker.log"
```
Post installation it is simple to customize
[arguments and/or environment variables](#command-line-usage)
supplied to Moonraker by editing this file and restarting the service.
The following example sets a custom config file path, log file path,
enables verbose logging, and enables debug features:
```
MOONRAKER_DATA_PATH="/home/pi/printer_data"
MOONRAKER_CONFIG_PATH="/home/pi/printer_data/config/moonraker-1.conf"
MOONRAKER_LOG_PATH="/home/pi/printer_data/logs/moonraker-1.log"
MOONRAKER_VERBOSE_LOGGING="y"
MOONRAKER_ENABLE_DEBUG="y"
MOONRAKER_ARGS="-m moonraker"
PYTHONPATH="/home/pi/moonraker"
```
### Optional Speedups
Moonraker supports two optional Python packages that can be used to reduce
its CPU load:
- [msgspec](https://github.com/jcrist/msgspec): Replaces the builtin `json`
encoder/decoder. Requires Python >= 3.8.
- [uvloop](https://github.com/MagicStack/uvloop/): Replaces the default asyncio
eventloop implementation.
If these packages are installed in Moonraker's python environment Moonraker will
load them. For existing installations this can be done manually with a command
like:
```
~/moonraker-env/bin/pip install -r ~/moonraker/scripts/moonraker-speedups.txt
```
After installing the speedup packages it is possible to revert back to the
default implementation by specifying one or both of the following
environment variables in [moonraker.env](#the-environment-file):
- `MOONRAKER_ENABLE_MSGSPEC="n"`
- `MOONRAKER_ENABLE_UVLOOP="n"`
### PolicyKit Permissions
@@ -267,6 +511,37 @@ enable_system_updates: False
Previously installed PolicyKit rules can be removed by running
`set-policykit-rules.sh -c`
### Completing Privileged Upgrades
At times an update to Moonraker may require a change to the systemd service
file, which requires sudo permission to complete. Moonraker will present
an announcement when it needs the user's password and the process can
be completed by entering the password through Moonraker's landing page.
Some users prefer not to provide these credentials via the web browser and
instead would like to do so over ssh. These users may run
`scripts/finish-upgrade.sh` to provide Moonraker the necessary credentials
via ssh:
```
Utility to complete privileged upgrades for Moonraker
usage: finish-upgrade.sh [-h] [-a <address>] [-p <port>] [-k <api_key>]
optional arguments:
-h show this message
-a <address> address for Moonraker instance
-p <port> port for Moonraker instance
-k <api_key> API Key for authorization
```
By default the script will connect to a Moonraker instance on the local
machine at port 7125. If the instance is not bound to localhost or is
bound to another port the user may specify a custom address and port.
The API Key (`-k`) option is only necessary if the localhost is not authorized
to access Moonraker's API.
### Retrieving the API Key
Some clients may require an API Key to connect to Moonraker. After the
@@ -290,6 +565,86 @@ Retrieve the API Key via the browser from a trusted client:
{"result": "8ce6ae5d354a4365812b83140ed62e4b"}
### Database Backup and Restore
Moonraker stores persistent data using a SQLite database. By default
the database file is located at `<data_folder>/database/moonraker-sql.db`.
API Endpoints are available to backup and restore the database. All
backups are stored at `<data_folder>/backup/database/<backup_name>` and
restored from the same location. Database files may contain sensitive
information, therefore they are not served by Moonraker. Another protocol
such as SCP, SMB, etc is required to transfer a backup off of the host.
Alternatively it is possible to perform a manual backup by copying the
existing database file when the Moonraker service has been stopped.
Restoration can be performed by stopping the Moonraker service and
overwriting the existing database with the backup.
#### LMDB Database (deprecated)
Previous versions of Moonraker used a [LMDB Database](http://www.lmdb.tech/doc/)
for persistent storage of procedurally generated data. LMDB database files are
platform dependent, and thus cannot be easily transferred between different
machines. A file generated on a Raspberry Pi cannot be directly transferred
to an x86 machine. Likewise, a file generated on a 32-bit version of Linux
cannot be transferred to a 64-bit machine.
Moonraker includes two scripts, `backup-database.sh` and `restore-database.sh`
to help facilitate database backups and transfers.
```shell
~/moonraker/scripts/backup-database.sh -h
Moonraker Database Backup Utility
usage: backup-database.sh [-h] [-e <python env path>] [-d <database path>] [-o <output file>]
optional arguments:
-h show this message
-e <env path> Moonraker Python Environment
-d <database path> Moonraker LMDB database to backup
-o <output file> backup file to save to
```
```shell
~/moonraker/scripts/restore-database.sh -h
Moonraker Database Restore Utility
usage: restore-database.sh [-h] [-e <python env path>] [-d <database path>] [-i <input file>]
optional arguments:
-h show this message
-e <env path> Moonraker Python Environment
-d <database path> Moonraker LMDB database path to restore to
-i <input file> backup file to restore from
```
Both scripts include default values for the Moonraker Environment and Database
Path. These are `$HOME/moonraker-env` and `$HOME/printer_data/database`
respectively. The `backup` script defaults the output value to
`$HOME/database.backup`. The `restore` script requires that the user specify
the input file using the `-i` option.
To backup a database for a default Moonraker installation the user may ssh into
the machine and run the following command:
```shell
~/moonraker/scripts/backup-database.sh -o ~/moonraker-database.backup
```
And to restore the database:
```shell
sudo service moonraker stop
~/moonraker/scripts/restore-database.sh -i ~/moonraker-database.backup
sudo service moonraker start
```
The backup file contains [cdb like](https://manpages.org/cdb/5) entries
for each key/value pair in the database. All keys and values are base64
encoded, however the data is not encrypted. Moonraker's database may
contain credentials and other sensitive information, so users should treat
this file accordingly. It is not recommended to keep backups in any folder
served by Moonraker.
### Recovering a broken repo
Currently Moonraker is deployed using `git`. Without going into the gritty
@@ -327,16 +682,44 @@ git clone https://github.com/Klipper3d/klipper.git
sudo systemctl restart klipper
```
### Additional Notes
### Debug options for developers
- Make sure that Moonraker and Klipper both have read and write access to the
directory set in the `path` option for the `[virtual_sdcard]` in
`printer.cfg`.
- Upon first starting Moonraker is not aware of the gcode file path, thus
it cannot serve gcode files, add directories, etc. After Klippy enters
the "ready" state it sends Moonraker the gcode file path.
Once Moonraker receives the path it will retain it regardless of Klippy's
state, and update it if the path is changed in printer.cfg.
Moonraker accepts several command line arguments that can be used to
assist both front end developers and developers interested in extending
Moonraker.
Please see [configuration.md](configuration.md) for details on how to
configure moonraker.conf.
- The `-v` (`--verbose`) argument enables verbose logging. This includes
logging that reports information on all requests received and responses.
- The `-g` (`--debug`) argument enables Moonraker's debug features,
including:
- Debug endpoints
- The `update_manager` will bypass strict git repo validation, allowing
updates from unofficial remotes and repos in a `detached HEAD` state.
- The `-o` (`--asyncio-debug`) argument enables the asyncio debug flag. This
will substantially increase logging and is intended for low level debugging
of the asyncio event loop.
!!! Warning
The debug option should not be enabled in production environments. The
database debug endpoints grant read/write access to all namespaces,
including those typically exclusive to Moonraker. Items such as user
credentials are exposed.
Installations using systemd can enable debug options by editing `moonraker.env`
via ssh:
```
nano ~/printer_data/systemd/moonraker.env
```
Once the file is open, append the debug option(s) (`-v` and `-g` in this example) to the
value of `MOONRAKER_ARGS`:
```
MOONRAKER_ARGS="/home/pi/moonraker/moonraker/moonraker.py -d /home/pi/printer_data -c /home/pi/klipper_config/moonraker.conf -l /home/pi/klipper_logs/moonraker.log -v -g"
```
Save the file, exit the text editor, and restart the Moonraker service:
```
sudo systemctl restart moonraker
```

docs/moonraker.conf

@@ -6,8 +6,7 @@
[server]
# Bind server defaults of 0.0.0.0, port 7125
enable_debug_logging: True
config_path: ~/printer_config
enable_debug_logging: False
[authorization]
enabled: True

docs/printer_objects.md

@@ -47,7 +47,7 @@ The `gcode_move` object reports the current gcode state:
- `speed_factor`: AKA "feedrate", this is the current speed multiplier
- `speed`: The current gcode speed in mm/s.
- `extrude_factor`: AKA "extrusion multiplier".
- `absolute_coorinates`: true if the machine axes are moved using
- `absolute_coordinates`: true if the machine axes are moved using
absolute coordinates, false if using relative coordinates.
- `absolute_extrude`: true if the extruder is moved using absolute
coordinates, false if using relative coordinates.
@@ -236,7 +236,11 @@ The `virtual_sdcard` object reports the state of the virtual sdcard:
"print_duration": 0.0,
"filament_used": 0.0,
"state": "standby",
"message": ""
"message": "",
"info": {
"total_layer": null,
"current_layer": null
}
}
```
The `print_stats` object reports `virtual_sdcard` print state:
@@ -260,6 +264,17 @@ The `print_stats` object reports `virtual_sdcard` print state:
- `"error"` - Note that if an error is detected the print will abort
- `message`: If an error is detected, this field contains the error
message generated. Otherwise it will be a null string.
- `info`: This is a dict containing information about the print provided by the
slicer. Currently this is limited to the `total_layer` and `current_layer` values.
Note that these values are set by the
[SET_PRINT_STATS_INFO](https://www.klipper3d.org/G-Codes.html#set_print_stats_info)
gcode command. It is necessary to configure the slicer to include this command
in the print. `SET_PRINT_STATS_INFO TOTAL_LAYER=total_layer_count` should
be called in the slicer's "start gcode" to initialize the total layer count.
`SET_PRINT_STATS_INFO CURRENT_LAYER=current_layer` should be called in the
slicer's "on layer change" gcode. The user must substitute the
`total_layer_count` and `current_layer` with the appropriate
"placeholder syntax" for the slicer.
!!! Note
After a print has started all of the values above will persist until

docs/src/css/extras.css Normal file

@@ -0,0 +1,7 @@
[data-md-color-scheme="slate"] {
--md-table-color: rgb(20, 20, 20);
}
thead th {
background-color: var(--md-table-color)
}

docs/user_changes.md

@@ -1,6 +1,81 @@
##
This file will track changes that require user intervention,
such as a configuration change or a reinstallation.
This file tracks configuration changes and deprecations. Additionally,
changes to Moonraker that require user intervention will be tracked
here.
### December 24th 2023
- The `gpio` component no longer depends on `libgpiod`. Instead,
Moonraker now uses the [python-periphery](https://github.com/vsergeev/python-periphery)
library to manage GPIOs. This comes with several benefits:
- Distributions that do not ship with `libgpiod` will not fail during
installation if the `python3-libgpiod` package isn't present.
- Distributions with a Kernel Version of 5.5 or higher support bias
flags (ie: pull up or pull down). Previously this functionality
was tied to the `libgpiod` version. Specifically, Debian Buster
ships with a Kernel that supports bias, however the `libgpiod`
version does not.
- Version 2.0+ of `libgpiod` includes dramatic API changes that are
wholly incompatible with prior versions. Therefore maintaining
future versions would effectively require supporting two APIs.
- The `[button]` component now includes a `debounce_period` option.
This addition is the result of a behavior change in how gpio state
changes are debounced. Debouncing will now delay the event by the
time specified in the `debounce_period`. Additional state changes
received during this delay will not trigger a button event. The
`[button]` module retains the `minimum_event_time` option which will
ignore events shorter than the specified time.
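A minimal `moonraker.conf` sketch showing the two timing options side by side; the section name, pin, and values are illustrative only:
```
[button my_button]
type: gpio
pin: gpiochip0/gpio26
# Delay the event and ignore further state changes during this window
debounce_period: .05
# Ignore events shorter than this duration
minimum_event_time: .1
```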
### July 18th 2023
- The following changes have been made to `[update_manager <name>]`
extensions of the `git_repo` type:
- The `env` option has been deprecated. New configurations should
use the `virtualenv` option in its place.
- The `install_script` option has been deprecated. New configurations
should use the `system_dependencies` option to specify system package
dependencies.
- Configuration options for `[spoolman]` have been added
- Configuration options for `[sensor]` have been added
### February 8th 2023
- The `provider` option in the `[machine]` section no longer accepts
`supervisord` as an option. It has been renamed to `supervisord_cli`.
### January 2nd 2023
- The `bound_service` option for `[power]` devices has been deprecated in
favor of `bound_services`. Currently this change does not generate a
warning as it can be reliably resolved internally.
### October 14th 2022
- The systemd service file is now versioned. Moonraker can now detect when
the file is out of date and automate corrections as necessary.
- Moonraker's command line options are now specified in an environment file,
making it possible to change these options without modifying the service file
and reloading the systemd daemon. The default location of the environment
file is `~/printer_data/systemd/moonraker.env`.
- Moonraker now manages files and folders in a primary data folder supplied
by the `-d` (`--data-path`) command line option. As a result, the following
options have been deprecated:
- `ssl_certificate_path` in `[server]`
- `ssl_key_path` in `[server]`
- `database_path` in `[database]`
- `config_path` in `[file_manager]`
- `log_path` in `[file_manager]`
- `secrets_path` in `[secrets]`
- Debugging options are now supplied to Moonraker via the command line.
The `-v` (`--verbose`) option enables verbose logging, while the `-g`
(`--debug`) option enables debug features, including access to debug
endpoints and the repo debug feature in `update_manager`. As a result,
the following options are deprecated:
- `enable_debug_logging` in `[server]`
- `enable_repo_debug` in `[update_manager]`
### July 27th 2022
- The behavior of `[include]` directives has changed. Included files
are now parsed as they are encountered. If sections are duplicated
options in the last section parsed take precedence. If you are
using include directives to override configuration in `moonraker.conf`
the directives should be moved to the bottom of the file.
- Configuration files now support inline comments.
### April 6th 2022
- The ability to configure core components in the `[server]` section

File diff suppressed because it is too large.

mkdocs.yml

@@ -2,24 +2,116 @@ site_name: Moonraker
site_url: https://moonraker.readthedocs.io
repo_url: https://github.com/Arksine/moonraker
nav:
- 'User Documentation':
- Installation: installation.md
- Configuration : configuration.md
- User Changes: user_changes.md
- 'Client Developers':
- Client API: web_api.md
- Installation: installation.md
- Configuration : configuration.md
- 'Developer Documentation':
- Remote API: web_api.md
- Printer Objects: printer_objects.md
- API Changes: api_changes.md
- 'Backend Developers':
- Contributing: contributing.md
- Components: components.md
- Contribution Guidelines: contributing.md
- Changelog: changelog.md
theme:
name: readthedocs
name: material
palette:
- scheme: default
primary: blue grey
accent: light blue
toggle:
icon: material/weather-sunny
name: Switch to Dark Mode
- scheme: slate
primary: black
accent: light blue
toggle:
icon: material/weather-night
name: Switch to Light Mode
font:
text: Roboto
code: Roboto Mono
features:
- navigation.top
- navigation.instant
- navigation.indexes
- navigation.expand
- toc.follow
- content.tabs.link
- search.share
- search.highlight
- search.suggest
- content.code.copy
- content.code.annotations
plugins:
- search
markdown_extensions:
- abbr
- admonition
- pymdownx.superfences
- pymdownx.highlight:
use_pygments: false
- attr_list
- def_list
- footnotes
- md_in_html
- toc:
permalink: true
- pymdownx.arithmatex:
generic: true
- pymdownx.betterem:
smart_enable: all
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_index: !!python/name:materialx.emoji.twemoji
emoji_generator: !!python/name:materialx.emoji.to_svg
- pymdownx.highlight
- pymdownx.inlinehilite
- pymdownx.keys
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.superfences
- pymdownx.tabbed:
alternate_style: true
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
- pymdownx.blocks.details:
types:
- name: details-new
class: new
- name: details-settings
class: settings
- name: details-note
class: note
- name: details-abstract
class: abstract
- name: details-info
class: info
- name: details-tip
class: tip
- name: details-success
class: success
- name: details-question
class: question
- name: details-warning
class: warning
- name: details-failure
class: failure
- name: details-danger
class: danger
- name: details-bug
class: bug
- name: details-example
class: example
- name: details-quote
class: quote
- name: api-example-response
class: example
title: "Example Response"
- name: api-response-schema
class: info
title: "Response Schema"
- name: api-parameters
class: info
title: "Parameters"
- tables
- compact_tables:
auto_insert_break: true
extra_css:
- src/css/extras.css

moonraker/__init__.py Normal file

@@ -0,0 +1,5 @@
# Top level package definition for Moonraker
#
# Copyright (C) 2022 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license

moonraker/__main__.py Normal file

@@ -0,0 +1,9 @@
# Package entry point for Moonraker
#
# Copyright (C) 2022 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from .server import main
main()

moonraker/assets/__init__.py

@@ -0,0 +1 @@
# Assets Package Definition

moonraker/assets/default_allowed_services

@@ -0,0 +1,10 @@
klipper_mcu
webcamd
MoonCord
KlipperScreen
moonraker-telegram-bot
moonraker-obico
sonar
crowsnest
octoeverywhere
ratos-configurator

moonraker/assets/welcome.html

@@ -123,6 +123,137 @@
background-color: rgb(160, 64, 8);
}
}
.modal {
display: none;
position: fixed;
z-index: 1;
left: 0;
top: 0;
width: 100%;
height: 100%;
overflow: auto;
background-color: rgb(0,0,0);
background-color: rgba(0,0,0,0.4);
}
.modal-card {
background: none;
position: relative;
border: 0px;
border-radius: 1rem;
background-color: #1a1a1a;
margin: 20% auto 2rem auto;
padding: 0rem;
border: 0px;
width: 50%;
animation-name: fadein;
animation-duration: .5s;
}
.modal-card h1 {
background-color: #006f7e;
text-align: center;
line-height: 3rem;
font-size: 1.1rem;
height: 3rem;
margin: 0;
border-top-left-radius: 1rem;
border-top-right-radius: 1rem;
}
.modal-content {
background-color: #3e3e3e;
padding: 1rem;
margin: 0;
height: auto
}
.modal-content .entry {
display: inline-block;
width: 100%;
}
.modal-content .entry:not(:last-child) {
margin-bottom: .5rem;
}
.modal-content .value {
float: right;
display: inline;
}
.modal-content input {
width: 100%;
padding: 8px;
border-radius: 4px;
-moz-border-radius: 4px;
-webkit-border-radius: 4px;
font-size: 1rem; color: #222;
background: #F7F7F7;
}
.modal-footer {
display: inline-block;
background-color: #3e3e3e;
margin: 0;
height: auto;
width: 100%;
border-bottom-left-radius: 1rem;
border-bottom-right-radius: 1rem;
}
.modal-button {
float: right;
background: #cecece;
border: none;
width: auto;
overflow: visible;
font-size: 1rem;
font-weight: bold;
color: rgb(0, 0, 0);
padding: .4rem .5rem;
margin: 0rem .5rem .5rem 0rem;
border-radius: .5rem;
-webkit-border-radius: .5rem;
-moz-border-radius: .5rem;
}
.modal-button:hover {
color: rgb(8, 154, 45);
text-decoration: none;
cursor: pointer;
}
.modal-status {
display: none;
position: relative;
border: 0;
border-radius: 1rem;
background-color: #3e3e3e;
margin: auto;
padding: 0rem;
width: 50%;
animation-name: fadebottom;
animation-duration: .5s;
}
.modal-status:hover {
cursor: pointer;
}
.modal-status .content {
display: inline-block;
margin: 1rem;
}
@keyframes fadebottom {
from {top: 10em; opacity: 0}
to {top: 0em; opacity: 1}
}
@keyframes fadein {
from {opacity: 0}
to {opacity: 1}
}
</style>
<script>
function setClickable(id) {
@@ -162,11 +293,11 @@
<div class="content">
<div class="entry">
Request IP:
<div class="value">{{ ip_address }}</div>
<div class="value">{{ remote_ip }}</div>
</div>
<div class="entry">
Trusted:
<div class="value">{{ authorized}}</div>
Authorized:
<div class="value">{{ authorized }}</div>
</div>
<div class="entry">
CORS Enabled:
@@ -197,10 +328,10 @@
<div class="content">
{% for item in summary %}
<article class="item">{{ item }}</article>
{% end %}
{% endfor %}
</div>
</article>
{% end %}
{% endif %}
{% if announcements %}
<article class="card messages">
<h1>Announcements</h1>
@@ -215,21 +346,155 @@
<script>
setClickable("{{ id }}");
</script>
{% end %}
{% endfor %}
</div>
</article>
{% end %}
{% endif %}
{% if warnings %}
<article class="card messages warning">
<h1>Warnings</h1>
<div class="content">
{% for warn in warnings %}
<article class="item">{{ warn }}</article>
{% end %}
{% endfor %}
</div>
</article>
{% end %}
{% endif %}
</div>
<div id="update_modal" class="modal">
<div class="modal-card">
<h1 id="modal_header_msg">
Moonraker Sudo Password Request
</h1>
<div id="modal_body" class="modal-content">
<div id="main_form">
<div class="entry">
Service Name:
<div class="value">
{{ service_name }}
</div>
</div>
<div class="entry">
Host Name:
<div class="value">
{{ hostname }}
</div>
</div>
<div class="entry">
Host IP Address:
<div class="value">
{{ local_ip }}
</div>
</div>
<div class="entry">
{{ sudo_request_message }}
Please enter the password for linux user <b>{{ linux_user }}</b>:
</div>
<div class="entry">
<input id="sudo_passwd" name="sudo_passwd" type="password" />
</div>
</div>
</div>
<div class="modal-footer">
<button type="button" id="modal_close" class="modal-button">Cancel</button>
<button type="button" id="modal_submit" class="modal-button">Submit</button>
</div>
</div>
<div id="modal_status" class="modal-status">
<div id="status_msg" class="content">
Status Text
</div>
</div>
</div>
<script>
const modal = document.getElementById("update_modal");
{% if sudo_requested %}
modal.style.display = "block";
{% endif %}
const main_form = document.getElementById("main_form")
const status_item = document.getElementById("modal_status");
const status_div = document.getElementById("status_msg");
function update_modal(status_msg) {
status_div.innerHTML = status_msg
status_item.style.display = "block";
}
function dismiss_status() {
status_item.style.display = "none";
}
function check_success(req) {
return (req.status < 205 || req.status == 304);
}
function post_machine_password(passwd) {
let pwd_req = new XMLHttpRequest();
pwd_req.onload = () => {
if (check_success(pwd_req)) {
console.log("Successfully Set Sudo Password");
let resp = JSON.parse(pwd_req.responseText)
let msg = resp.result.sudo_responses.join("<br/>")
msg += "<br/><br/>You may close this window and return to the front end.";
update_modal(msg);
} else {
console.log("Password Request Error");
let err_msg = `Code ${pwd_req.status}: `;
let response = pwd_req.responseText;
try {
let json_resp = JSON.parse(response);
err_msg = json_resp.error.message;
} catch (error) {}
update_modal(
"Request failed with error:<br/><br/>" + err_msg +
"<br/><br/>You may need to manually update your installation."
);
}
};
pwd_req.onerror = () => {
console.log("Error setting password");
update_modal(
"Request to set sudo password failed with " +
"a network error."
)
};
pwd_req.open("POST", "/machine/sudo/password");
pwd_req.setRequestHeader("Content-Type", "application/json");
pwd_req.send(JSON.stringify({"password": passwd}));
}
const modal_submit = document.getElementById("modal_submit");
const pwd_input = document.getElementById("sudo_passwd");
const modal_close = document.getElementById("modal_close");
modal_submit.onclick = () => {
let val = pwd_input.value;
pwd_input.value = "";
dismiss_status();
post_machine_password(val);
};
pwd_input.addEventListener("keypress", (event) => {
if (event.key === "Enter") {
event.preventDefault();
modal_submit.click();
}
});
modal_close.onclick = () => {
modal.style.display = "none";
};
status_item.onclick = () => {
dismiss_status();
}
window.onclick = (event) => {
if (event.target == modal) {
modal.style.display = "none";
}
};
</script>
</main>
</body>
</html>
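
The modal above posts the sudo password as JSON to Moonraker's /machine/sudo/password endpoint. For reference, the same request can be issued from a script; this is a minimal sketch using only the Python standard library, assuming Moonraker listens on localhost:7125 (host, port and password are illustrative):

import json
import urllib.request

url = "http://localhost:7125/machine/sudo/password"
payload = json.dumps({"password": "my_sudo_password"}).encode()
req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    # On success the result carries a list of sudo_responses,
    # mirroring what the modal joins into its status message.
    print(json.load(resp)["result"]["sudo_responses"])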

moonraker/common.py (new file, 1302 lines)

File diff suppressed because it is too large

View File

@@ -11,20 +11,20 @@ import asyncio
import logging
import email.utils
import xml.etree.ElementTree as etree
from ..common import RequestType
from typing import (
TYPE_CHECKING,
Awaitable,
List,
Dict,
Any,
Optional,
Union
Optional
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from http_client import HttpClient
from components.database import MoonrakerDatabase
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .http_client import HttpClient
from .database import MoonrakerDatabase
MOONLIGHT_URL = "https://arksine.github.io/moonlight"
@@ -58,23 +58,23 @@ class Announcements:
)
self.server.register_endpoint(
"/server/announcements/list", ["GET"],
"/server/announcements/list", RequestType.GET,
self._list_announcements
)
self.server.register_endpoint(
"/server/announcements/dismiss", ["POST"],
"/server/announcements/dismiss", RequestType.POST,
self._handle_dismiss_request
)
self.server.register_endpoint(
"/server/announcements/update", ["POST"],
"/server/announcements/update", RequestType.POST,
self._handle_update_request
)
self.server.register_endpoint(
"/server/announcements/feed", ["POST", "DELETE"],
"/server/announcements/feed", RequestType.POST | RequestType.DELETE,
self._handle_feed_request
)
self.server.register_endpoint(
"/server/announcements/feeds", ["GET"],
"/server/announcements/feeds", RequestType.GET,
self._handle_list_feeds
)
self.server.register_notification(
@@ -143,12 +143,7 @@ class Announcements:
async def _handle_update_request(
self, web_request: WebRequest
) -> Dict[str, Any]:
subs: Optional[Union[str, List[str]]]
subs = web_request.get("subscriptions", None)
if isinstance(subs, str):
subs = [sub.strip() for sub in subs.split(",") if sub.strip()]
elif subs is None:
subs = list(self.subscriptions.keys())
subs = web_request.get_list("subscriptions", list(self.subscriptions.keys()))
for sub in subs:
if sub not in self.subscriptions:
raise self.server.error(f"No subscription for {sub}")
@@ -176,13 +171,13 @@ class Announcements:
async def _handle_feed_request(
self, web_request: WebRequest
) -> Dict[str, Any]:
action = web_request.get_action()
req_type = web_request.get_request_type()
name: str = web_request.get("name")
name = name.lower()
changed: bool = False
db: MoonrakerDatabase = self.server.lookup_component("database")
result = "skipped"
if action == "POST":
if req_type == RequestType.POST:
if name not in self.subscriptions:
feed = RssFeed(name, self.entry_mgr, self.dev_mode)
self.subscriptions[name] = feed
@@ -193,7 +188,7 @@ class Announcements:
"moonraker", "announcements.stored_feeds", self.stored_feeds
)
result = "added"
elif action == "DELETE":
elif req_type == RequestType.DELETE:
if name not in self.stored_feeds:
raise self.server.error(f"Feed '{name}' not stored")
if name in self.configured_feeds:
@@ -241,8 +236,15 @@ class Announcements:
"feed": feed
}
self.entry_mgr.add_entry(entry)
self.eventloop.create_task(self._notify_internal())
return entry
async def _notify_internal(self) -> None:
entries = await self.entry_mgr.list_entries()
self.server.send_event(
"announcements:entries_updated", {"entries": entries}
)
async def remove_announcement(self, entry_id: str) -> None:
ret = await self.entry_mgr.remove_entry(entry_id)
if ret is not None:
@@ -260,6 +262,15 @@ class Announcements:
) -> List[Dict[str, Any]]:
return await self.entry_mgr.list_entries(include_dismissed)
def register_feed(self, name: str) -> None:
name = name.lower()
if name in self.subscriptions:
logging.info(f"Feed {name} already configured")
return
logging.info(f"Registering feed {name}")
self.configured_feeds.append(name)
self.subscriptions[name] = RssFeed(name, self.entry_mgr, self.dev_mode)
def close(self):
self.entry_mgr.close()
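
The endpoint registrations above replace string lists like ["POST", "DELETE"] with a combinable flag enum, and handlers branch on web_request.get_request_type(). A simplified standalone sketch of that pattern (Moonraker's actual RequestType lives in moonraker/common.py):

from enum import Flag, auto

class RequestType(Flag):
    GET = auto()
    POST = auto()
    DELETE = auto()

allowed = RequestType.POST | RequestType.DELETE

def handle_feed_request(req_type: RequestType) -> str:
    # Membership tests work directly on combined flags
    if req_type not in allowed:
        raise ValueError(f"{req_type} not supported here")
    return "added" if req_type == RequestType.POST else "removed"

print(handle_feed_request(RequestType.POST))    # added
print(handle_feed_request(RequestType.DELETE))  # removed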

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -6,7 +6,6 @@
from __future__ import annotations
import asyncio
import logging
from confighelper import SentinelClass
from typing import (
TYPE_CHECKING,
@@ -14,11 +13,9 @@ from typing import (
Dict
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from .gpio import GpioFactory
from app import InternalTransport as ITransport
from ..confighelper import ConfigHelper
from .application import InternalTransport as ITransport
SENTINEL = SentinelClass.get_instance()
class ButtonManager:
def __init__(self, config: ConfigHelper) -> None:
@@ -29,7 +26,7 @@ class ButtonManager:
for section in prefix_sections:
cfg = config[section]
# Reserve the "type" option for future use
btn_type = cfg.get('type', "gpio")
btn_type = cfg.get('type', "gpio") # noqa: F841
try:
btn = GpioButton(cfg)
except Exception as e:
@@ -48,25 +45,21 @@ class GpioButton:
self.server = config.get_server()
self.eventloop = self.server.get_event_loop()
self.name = config.get_name().split()[-1]
self.itransport: ITransport = self.server.lookup_component(
'internal_transport')
self.itransport: ITransport = self.server.lookup_component("internal_transport")
self.mutex = asyncio.Lock()
gpio: GpioFactory = self.server.load_component(config, 'gpio')
self.gpio_event = gpio.register_gpio_event(
config.get('pin'), self._on_gpio_event)
min_event_time = config.getfloat(
'minimum_event_time', .05, minval=.010)
self.gpio_event.setup_debounce(min_event_time, self._on_gpio_error)
self.press_template = config.gettemplate(
"on_press", None, is_async=True)
self.release_template = config.gettemplate(
"on_release", None, is_async=True)
self.gpio_event = config.getgpioevent("pin", self._on_gpio_event)
self.min_event_time = config.getfloat("minimum_event_time", 0, minval=0.0)
debounce_period = config.getfloat("debounce_period", .05, minval=0.01)
self.gpio_event.setup_debounce(debounce_period, self._on_gpio_error)
self.press_template = config.gettemplate("on_press", None, is_async=True)
self.release_template = config.gettemplate("on_release", None, is_async=True)
if (
self.press_template is None and
self.release_template is None
):
raise config.error(
f"[{config.get_name()}]: No template option configured")
f"[{config.get_name()}]: No template option configured"
)
self.notification_sent: bool = False
self.user_data: Dict[str, Any] = {}
self.context: Dict[str, Any] = {
@@ -101,11 +94,11 @@ class GpioButton:
data['aux'] = result
self.server.send_event("button:button_event", data)
async def _on_gpio_event(self,
eventtime: float,
elapsed_time: float,
pressed: int
) -> None:
async def _on_gpio_event(
self, eventtime: float, elapsed_time: float, pressed: int
) -> None:
if elapsed_time < self.min_event_time:
return
template = self.press_template if pressed else self.release_template
if template is None:
return
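
For context, a [button] section exercising the options parsed above might look like the following sketch; the pin name and the invoked method are illustrative only, and debounce_period/minimum_event_time use the defaults and limits shown in the diff:

[button example_button]
type: gpio
pin: ^!gpiochip0/gpio26
debounce_period: 0.05
minimum_event_time: 0.5
on_press:
  {% do call_method("printer.gcode.script", script="M118 button pressed") %}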

View File

@@ -8,6 +8,7 @@ from __future__ import annotations
import logging
import time
from collections import deque
from ..common import RequestType
# Annotation imports
from typing import (
@@ -16,19 +17,23 @@ from typing import (
Optional,
Dict,
List,
Tuple,
Deque,
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from . import klippy_apis
APIComp = klippy_apis.KlippyAPI
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .klippy_connection import KlippyConnection
from .klippy_apis import KlippyAPI as APIComp
GCQueue = Deque[Dict[str, Any]]
TempStore = Dict[str, Dict[str, Deque[float]]]
TempStore = Dict[str, Dict[str, Deque[Optional[float]]]]
TEMP_UPDATE_TIME = 1.
def _round_null(val: Optional[float], ndigits: int) -> Optional[float]:
if val is None:
return val
return round(val, ndigits)
class DataStore:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
@@ -36,16 +41,15 @@ class DataStore:
self.gcode_store_size = config.getint('gcode_store_size', 1000)
# Temperature Store Tracking
self.last_temps: Dict[str, Tuple[float, ...]] = {}
kconn: KlippyConnection = self.server.lookup_component("klippy_connection")
self.subscription_cache = kconn.get_subscription_cache()
self.gcode_queue: GCQueue = deque(maxlen=self.gcode_store_size)
self.temperature_store: TempStore = {}
self.temp_monitors: List[str] = []
eventloop = self.server.get_event_loop()
self.temp_update_timer = eventloop.register_timer(
self._update_temperature_store)
# Register status update event
self.server.register_event_handler(
"server:status_update", self._set_current_temps)
self.server.register_event_handler(
"server:gcode_response", self._update_gcode_store)
self.server.register_event_handler(
@@ -56,11 +60,13 @@ class DataStore:
# Register endpoints
self.server.register_endpoint(
"/server/temperature_store", ['GET'],
self._handle_temp_store_request)
"/server/temperature_store", RequestType.GET,
self._handle_temp_store_request
)
self.server.register_endpoint(
"/server/gcode_store", ['GET'],
self._handle_gcode_store_request)
"/server/gcode_store", RequestType.GET,
self._handle_gcode_store_request
)
async def _init_sensors(self) -> None:
klippy_apis: APIComp = self.server.lookup_component('klippy_apis')
@@ -71,8 +77,10 @@ class DataStore:
except self.server.error as e:
logging.info(f"Error Configuring Sensors: {e}")
return
sensors: List[str]
sensors = result.get("heaters", {}).get("available_sensors", [])
heaters: Dict[str, List[str]] = result.get("heaters", {})
sensors = heaters.get("available_sensors", [])
self.temp_monitors = heaters.get("available_monitors", [])
sensors.extend(self.temp_monitors)
if sensors:
# Add Subscription
@@ -85,59 +93,56 @@ class DataStore:
return
logging.info(f"Configuring available sensors: {sensors}")
new_store: TempStore = {}
valid_fields = ("temperature", "target", "power", "speed")
for sensor in sensors:
fields = list(status.get(sensor, {}).keys())
reported_fields = [
f for f in list(status.get(sensor, {}).keys()) if f in valid_fields
]
if not reported_fields:
logging.info(f"No valid fields reported for sensor: {sensor}")
self.temperature_store.pop(sensor, None)
continue
if sensor in self.temperature_store:
new_store[sensor] = self.temperature_store[sensor]
for field in list(new_store[sensor].keys()):
if field not in reported_fields:
new_store[sensor].pop(field, None)
else:
initial_val: Optional[float]
initial_val = _round_null(status[sensor][field], 2)
new_store[sensor][field].append(initial_val)
else:
new_store[sensor] = {
'temperatures': deque(maxlen=self.temp_store_size)}
for item in ["target", "power", "speed"]:
if item in fields:
new_store[sensor][f"{item}s"] = deque(
maxlen=self.temp_store_size)
if sensor not in self.last_temps:
self.last_temps[sensor] = (0., 0., 0., 0.)
new_store[sensor] = {}
for field in reported_fields:
if field not in new_store[sensor]:
initial_val = _round_null(status[sensor][field], 2)
new_store[sensor][field] = deque(
[initial_val], maxlen=self.temp_store_size
)
self.temperature_store = new_store
# Prune unconfigured sensors in self.last_temps
for sensor in list(self.last_temps.keys()):
if sensor not in self.temperature_store:
del self.last_temps[sensor]
# Update initial temperatures
self._set_current_temps(status)
self.temp_update_timer.start()
self.temp_update_timer.start(delay=1.)
else:
logging.info("No sensors found")
self.last_temps = {}
self.temperature_store = {}
self.temp_monitors = []
self.temp_update_timer.stop()
def _set_current_temps(self, data: Dict[str, Any]) -> None:
for sensor in self.temperature_store:
if sensor in data:
last_val = self.last_temps[sensor]
self.last_temps[sensor] = (
round(data[sensor].get('temperature', last_val[0]), 2),
data[sensor].get('target', last_val[1]),
data[sensor].get('power', last_val[2]),
data[sensor].get('speed', last_val[3]))
def _update_temperature_store(self, eventtime: float) -> float:
# XXX - If klippy is not connected, set values to zero
# as they are unknown?
for sensor, vals in self.last_temps.items():
self.temperature_store[sensor]['temperatures'].append(vals[0])
for val, item in zip(vals[1:], ["targets", "powers", "speeds"]):
if item in self.temperature_store[sensor]:
self.temperature_store[sensor][item].append(val)
for sensor_name, sensor in self.temperature_store.items():
sdata: Dict[str, Any] = self.subscription_cache.get(sensor_name, {})
for field, store in sensor.items():
store.append(_round_null(sdata.get(field, store[-1]), 2))
return eventtime + TEMP_UPDATE_TIME
async def _handle_temp_store_request(self,
web_request: WebRequest
) -> Dict[str, Dict[str, List[float]]]:
async def _handle_temp_store_request(
self, web_request: WebRequest
) -> Dict[str, Dict[str, List[Optional[float]]]]:
include_monitors = web_request.get_boolean("include_monitors", False)
store = {}
for name, sensor in self.temperature_store.items():
store[name] = {k: list(v) for k, v in sensor.items()}
if not include_monitors and name in self.temp_monitors:
continue
store[name] = {f"{k}s": list(v) for k, v in sensor.items()}
return store
async def close(self) -> None:
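
The reworked store above keeps one fixed-length deque per reported field and preserves None for unknown readings instead of substituting zeros. A minimal standalone sketch of that ring-buffer behavior (store size and sensor names are illustrative):

from collections import deque
from typing import Dict, Optional

TEMP_STORE_SIZE = 1200  # assumed: 20 minutes of 1-second samples

def _round_null(val: Optional[float], ndigits: int) -> Optional[float]:
    return val if val is None else round(val, ndigits)

store: Dict[str, Dict[str, deque]] = {
    "extruder": {"temperature": deque([None], maxlen=TEMP_STORE_SIZE)}
}
update = {"extruder": {"temperature": 210.4567}}
for sensor, fields in store.items():
    sdata = update.get(sensor, {})
    for field, buf in fields.items():
        # Repeat the last sample when no new value arrived this tick
        buf.append(_round_null(sdata.get(field, buf[-1]), 2))
print(store["extruder"]["temperature"])  # deque([None, 210.46], maxlen=1200)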

File diff suppressed because it is too large

View File

@@ -5,6 +5,7 @@
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import os
import asyncio
import pathlib
import logging
import dbus_next
@@ -16,11 +17,13 @@ from typing import (
TYPE_CHECKING,
List,
Optional,
Any,
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ..confighelper import ConfigHelper
STAT_PATH = "/proc/self/stat"
DOC_URL = (
"https://moonraker.readthedocs.io/en/latest/"
"installation/#policykit-permissions"
@@ -34,7 +37,11 @@ class DbusManager:
self.bus: Optional[MessageBus] = None
self.polkit: Optional[ProxyInterface] = None
self.warned: bool = False
proc_data = pathlib.Path(f"/proc/self/stat").read_text()
st_path = pathlib.Path(STAT_PATH)
self.polkit_subject: List[Any] = []
if not st_path.is_file():
return
proc_data = st_path.read_text()
start_clk_ticks = int(proc_data.split()[21])
self.polkit_subject = [
"unix-process",
@@ -51,6 +58,8 @@ class DbusManager:
try:
self.bus = MessageBus(bus_type=BusType.SYSTEM)
await self.bus.connect()
except asyncio.CancelledError:
raise
except Exception:
logging.info("Unable to Connect to D-Bus")
return
@@ -60,20 +69,31 @@ class DbusManager:
"org.freedesktop.PolicyKit1",
"/org/freedesktop/PolicyKit1/Authority",
"org.freedesktop.PolicyKit1.Authority")
except self.DbusError:
self.server.add_warning(
"Unable to find DBus PolKit Interface, this suggests PolKit "
"is not installed on your OS.")
except asyncio.CancelledError:
raise
except Exception as e:
if self.server.is_debug_enabled():
logging.exception("Failed to get PolKit interface")
else:
logging.info(f"Failed to get PolKit interface: {e}")
self.polkit = None
async def check_permission(self,
action: str,
err_msg: str = ""
) -> bool:
if self.polkit is None:
self.server.add_warning(
"Unable to find DBus PolKit Interface, this suggests PolKit "
"is not installed on your OS.",
"dbus_polkit"
)
return False
try:
ret = await self.polkit.call_check_authorization( # type: ignore
self.polkit_subject, action, {}, 0, "")
except asyncio.CancelledError:
raise
except Exception as e:
self._check_warned()
self.server.add_warning(
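
A recurring change in this file is re-raising asyncio.CancelledError ahead of any broad except clause, so task cancellation is never swallowed by error handling. The shape of that pattern as a standalone sketch:

import asyncio

async def guarded(coro):
    try:
        return await coro
    except asyncio.CancelledError:
        raise  # cancellation must propagate to the awaiting task
    except Exception as e:
        # Only genuine failures are handled (logged, warned, etc.)
        print(f"call failed: {e}")
        return None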

View File

@@ -4,8 +4,11 @@
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
from websockets import WebSocket
import asyncio
import pathlib
import logging
from ..common import BaseRemoteConnection, RequestType, TransportType
from ..utils import get_unix_peer_credentials
# Annotation imports
from typing import (
@@ -18,25 +21,36 @@ from typing import (
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from ..server import Server
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .klippy_connection import KlippyConnection as Klippy
UNIX_BUFFER_LIMIT = 20 * 1024 * 1024
class ExtensionManager:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.agents: Dict[str, WebSocket] = {}
self.agents: Dict[str, BaseRemoteConnection] = {}
self.agent_methods: Dict[int, List[str]] = {}
self.uds_server: Optional[asyncio.AbstractServer] = None
self.server.register_endpoint(
"/connection/send_event", ["POST"], self._handle_agent_event,
transports=["websocket"]
"/connection/register_remote_method", RequestType.POST,
self._register_agent_method,
transports=TransportType.WEBSOCKET
)
self.server.register_endpoint(
"/server/extensions/list", ["GET"], self._handle_list_extensions
"/connection/send_event", RequestType.POST, self._handle_agent_event,
transports=TransportType.WEBSOCKET
)
self.server.register_endpoint(
"/server/extensions/request", ["POST"], self._handle_call_agent
"/server/extensions/list", RequestType.GET, self._handle_list_extensions
)
self.server.register_endpoint(
"/server/extensions/request", RequestType.POST, self._handle_call_agent
)
def register_agent(self, connection: WebSocket) -> None:
def register_agent(self, connection: BaseRemoteConnection) -> None:
data = connection.client_data
name = data["name"]
client_type = data["type"]
@@ -55,16 +69,20 @@ class ExtensionManager:
}
connection.send_notification("agent_event", [evt])
def remove_agent(self, connection: WebSocket) -> None:
def remove_agent(self, connection: BaseRemoteConnection) -> None:
name = connection.client_data["name"]
if name in self.agents:
klippy: Klippy = self.server.lookup_component("klippy_connection")
registered_methods = self.agent_methods.pop(connection.uid, [])
for method in registered_methods:
klippy.unregister_method(method)
del self.agents[name]
evt: Dict[str, Any] = {"agent": name, "event": "disconnected"}
connection.send_notification("agent_event", [evt])
async def _handle_agent_event(self, web_request: WebRequest) -> str:
conn = web_request.get_connection()
if not isinstance(conn, WebSocket):
conn = web_request.get_client_connection()
if conn is None:
raise self.server.error("No connection detected")
if conn.client_data["type"] != "agent":
raise self.server.error(
@@ -82,6 +100,16 @@ class ExtensionManager:
conn.send_notification("agent_event", [evt])
return "ok"
async def _register_agent_method(self, web_request: WebRequest) -> str:
conn = web_request.get_client_connection()
if conn is None:
raise self.server.error("No connection detected")
method_name = web_request.get_str("method_name")
klippy: Klippy = self.server.lookup_component("klippy_connection")
klippy.register_method_from_agent(conn, method_name)
self.agent_methods.setdefault(conn.uid, []).append(method_name)
return "ok"
async def _handle_list_extensions(
self, web_request: WebRequest
) -> Dict[str, List[Dict[str, Any]]]:
@@ -101,7 +129,129 @@ class ExtensionManager:
if agent not in self.agents:
raise self.server.error(f"Agent {agent} not connected")
conn = self.agents[agent]
return await conn.call_method(method, args)
return await conn.call_method_with_response(method, args)
async def start_unix_server(self) -> None:
sockfile: str = self.server.get_app_args()["unix_socket_path"]
sock_path = pathlib.Path(sockfile).expanduser().resolve()
logging.info(f"Creating Unix Domain Socket at '{sock_path}'")
try:
self.uds_server = await asyncio.start_unix_server(
self.on_unix_socket_connected, sock_path, limit=UNIX_BUFFER_LIMIT
)
except asyncio.CancelledError:
raise
except Exception:
logging.exception(f"Failed to create Unix Domain Socket: {sock_path}")
self.uds_server = None
def on_unix_socket_connected(
self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter
) -> None:
peercred = get_unix_peer_credentials(writer, "Unix Client Connection")
UnixSocketClient(self.server, reader, writer, peercred)
async def close(self) -> None:
if self.uds_server is not None:
self.uds_server.close()
await self.uds_server.wait_closed()
self.uds_server = None
class UnixSocketClient(BaseRemoteConnection):
def __init__(
self,
server: Server,
reader: asyncio.StreamReader,
writer: asyncio.StreamWriter,
peercred: Dict[str, int]
) -> None:
self.on_create(server)
self.writer = writer
self._peer_cred = peercred
self._connected_time = self.eventloop.get_loop_time()
pid = self._peer_cred.get("process_id")
uid = self._peer_cred.get("user_id")
gid = self._peer_cred.get("group_id")
self.wsm.add_client(self)
logging.info(
f"Unix Socket Opened - Client ID: {self.uid}, "
f"Process ID: {pid}, User ID: {uid}, Group ID: {gid}"
)
self.eventloop.register_callback(self._read_messages, reader)
async def _read_messages(self, reader: asyncio.StreamReader) -> None:
errors_remaining: int = 10
while not reader.at_eof():
try:
data = await reader.readuntil(b'\x03')
decoded = data[:-1].decode(encoding="utf-8")
except (ConnectionError, asyncio.IncompleteReadError):
break
except asyncio.CancelledError:
logging.exception("Unix Client Stream Read Cancelled")
raise
except Exception:
logging.exception("Unix Client Stream Read Error")
errors_remaining -= 1
if not errors_remaining or self.is_closed:
break
continue
errors_remaining = 10
self.eventloop.register_callback(self._process_message, decoded)
logging.debug("Unix Socket Disconnection From _read_messages()")
await self._on_close(reason="Read Exit")
async def write_to_socket(self, message: Union[bytes, str]) -> None:
if isinstance(message, str):
data = message.encode() + b"\x03"
else:
data = message + b"\x03"
try:
self.writer.write(data)
await self.writer.drain()
except asyncio.CancelledError:
raise
except Exception:
logging.debug("Unix Socket Disconnection From write_to_socket()")
await self._on_close(reason="Write Exception")
async def _on_close(
self,
code: Optional[int] = None,
reason: Optional[str] = None
) -> None:
if self.is_closed:
return
self.is_closed = True
kconn: Klippy = self.server.lookup_component("klippy_connection")
kconn.remove_subscription(self)
if not self.writer.is_closing():
self.writer.close()
try:
await self.writer.wait_closed()
except Exception:
pass
self.message_buf = []
for resp in self.pending_responses.values():
resp.set_exception(
self.server.error("Client Socket Disconnected", 500)
)
self.pending_responses = {}
logging.info(
f"Unix Socket Closed: ID: {self.uid}, "
f"Close Code: {code}, "
f"Close Reason: {reason}"
)
if self._client_data["type"] == "agent":
extensions: ExtensionManager
extensions = self.server.lookup_component("extensions")
extensions.remove_agent(self)
self.wsm.remove_client(self)
def close_socket(self, code: int, reason: str) -> None:
if not self.is_closed:
self.eventloop.register_callback(self._on_close, code, reason)
def load_component(config: ConfigHelper) -> ExtensionManager:
return ExtensionManager(config)
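
The Unix socket server above frames each JSON-RPC message with a trailing ETX (0x03) byte, matching the readuntil(b'\x03') in _read_messages. A minimal client sketch under that assumption; the socket path varies by install, ~/printer_data/comms/moonraker.sock is typical:

import asyncio
import json
import pathlib

SOCK = pathlib.Path("~/printer_data/comms/moonraker.sock").expanduser()

async def server_info() -> dict:
    reader, writer = await asyncio.open_unix_connection(SOCK)
    req = {"jsonrpc": "2.0", "method": "server.info", "id": 1}
    writer.write(json.dumps(req).encode() + b"\x03")  # ETX-terminated frame
    await writer.drain()
    data = await reader.readuntil(b"\x03")
    writer.close()
    await writer.wait_closed()
    return json.loads(data[:-1])  # strip the ETX delimiter

print(asyncio.run(server_info()))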

View File

@@ -9,7 +9,7 @@ from . import file_manager as fm
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ...confighelper import ConfigHelper
def load_component(config: ConfigHelper) -> fm.FileManager:
return fm.load_component(config)

File diff suppressed because it is too large

View File

@@ -17,6 +17,7 @@ import tempfile
import zipfile
import shutil
import uuid
import logging
from PIL import Image
# Annotation imports
@@ -35,40 +36,39 @@ if TYPE_CHECKING:
UFP_MODEL_PATH = "/3D/model.gcode"
UFP_THUMB_PATH = "/Metadata/thumbnail.png"
def log_to_stderr(msg: str) -> None:
sys.stderr.write(f"{msg}\n")
sys.stderr.flush()
logging.basicConfig(stream=sys.stderr, level=logging.INFO)
logger = logging.getLogger("metadata")
# regex helpers
def _regex_find_floats(pattern: str,
data: str,
strict: bool = False
) -> List[float]:
# If strict is enabled, pattern requires a floating point
# value, otherwise it can be an integer value
fptrn = r'\d+\.\d*' if strict else r'\d+\.?\d*'
# Regex helpers. These methods take patterns with placeholders
# to insert the correct regex capture group for floats, ints,
# and strings:
# Float: (%F) = (\d*\.?\d+)
# Integer: (%D) = (\d+)
# String: (%S) = (.+)
def regex_find_floats(pattern: str, data: str) -> List[float]:
pattern = pattern.replace(r"(%F)", r"([0-9]*\.?[0-9]+)")
matches = re.findall(pattern, data)
if matches:
# return the maximum height value found
try:
return [float(h) for h in re.findall(
fptrn, " ".join(matches))]
return [float(h) for h in matches]
except Exception:
pass
return []
def _regex_find_ints(pattern: str, data: str) -> List[int]:
def regex_find_ints(pattern: str, data: str) -> List[int]:
pattern = pattern.replace(r"(%D)", r"([0-9]+)")
matches = re.findall(pattern, data)
if matches:
# return the maximum height value found
try:
return [int(h) for h in re.findall(
r'\d+', " ".join(matches))]
return [int(h) for h in matches]
except Exception:
pass
return []
def _regex_find_first(pattern: str, data: str) -> Optional[float]:
def regex_find_float(pattern: str, data: str) -> Optional[float]:
pattern = pattern.replace(r"(%F)", r"([0-9]*\.?[0-9]+)")
match = re.search(pattern, data)
val: Optional[float] = None
if match:
@@ -78,7 +78,8 @@ def _regex_find_first(pattern: str, data: str) -> Optional[float]:
return None
return val
def _regex_find_int(pattern: str, data: str) -> Optional[int]:
def regex_find_int(pattern: str, data: str) -> Optional[int]:
pattern = pattern.replace(r"(%D)", r"([0-9]+)")
match = re.search(pattern, data)
val: Optional[int] = None
if match:
@@ -88,12 +89,22 @@ def _regex_find_int(pattern: str, data: str) -> Optional[int]:
return None
return val
def _regex_find_string(pattern: str, data: str) -> Optional[str]:
def regex_find_string(pattern: str, data: str) -> Optional[str]:
pattern = pattern.replace(r"(%S)", r"(.*)")
match = re.search(pattern, data)
if match:
return match.group(1).strip('"')
return None
def regex_find_min_float(pattern: str, data: str) -> Optional[float]:
result = regex_find_floats(pattern, data)
return min(result) if result else None
def regex_find_max_float(pattern: str, data: str) -> Optional[float]:
result = regex_find_floats(pattern, data)
return max(result) if result else None
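
A short usage sketch of the placeholder helpers above, assuming they are in scope; each token is expanded into its capture group before the search runs:

header = "; first_layer_height = 0.25\n; layer_height = 0.2\n"
print(regex_find_float(r"; first_layer_height = (%F)", header))  # 0.25
print(regex_find_floats(r"; layer_height = (%F)", header))       # [0.2]
print(regex_find_string(r"; filament_type = (%S)",
                        "; filament_type = PLA\n"))              # PLA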
# Slicer parsing implementations
class BaseSlicer(object):
def __init__(self, file_path: str) -> None:
@@ -111,28 +122,6 @@ class BaseSlicer(object):
self.footer_data = footer_data
self.size: int = fsize
def _parse_min_float(self,
pattern: str,
data: str,
strict: bool = False
) -> Optional[float]:
result = _regex_find_floats(pattern, data, strict)
if result:
return min(result)
else:
return None
def _parse_max_float(self,
pattern: str,
data: str,
strict: bool = False
) -> Optional[float]:
result = _regex_find_floats(pattern, data, strict)
if result:
return max(result)
else:
return None
def _check_has_objects(self,
data: str,
pattern: Optional[str] = None
@@ -144,12 +133,12 @@ class BaseSlicer(object):
if match is not None:
# Objects already processed
fname = os.path.basename(self.path)
log_to_stderr(
logger.info(
f"File '{fname}' currently supports cancellation, "
"processing aborted"
)
if match.group(1).startswith("DEFINE_OBJECT"):
log_to_stderr(
logger.info(
"Legacy object processing detected. This is not "
"compatible with official versions of Klipper."
)
@@ -229,61 +218,73 @@ class BaseSlicer(object):
try:
os.mkdir(thumb_dir)
except Exception:
log_to_stderr(f"Unable to create thumb dir: {thumb_dir}")
logger.info(f"Unable to create thumb dir: {thumb_dir}")
return None
thumb_base = os.path.splitext(os.path.basename(self.path))[0]
parsed_matches: List[Dict[str, Any]] = []
has_miniature: bool = False
#has_miniature: bool = False
for match in thumb_matches:
lines = re.split(r"\r?\n", match.replace('; ', ''))
info = _regex_find_ints(r".*", lines[0])
info = regex_find_ints(r"(%D)", lines[0])
data = "".join(lines[1:-1])
if len(info) != 3:
log_to_stderr(
logger.info(
f"MetadataError: Error parsing thumbnail"
f" header: {lines[0]}")
continue
if len(data) != info[2]:
log_to_stderr(
logger.info(
f"MetadataError: Thumbnail Size Mismatch: "
f"detected {info[2]}, actual {len(data)}")
continue
thumb_name = f"{thumb_base}-{info[0]}x{info[1]}.png"
thumb_path = os.path.join(thumb_dir, thumb_name)
thumb_jpg_name = f"{thumb_base}-{info[0]}x{info[1]}.jpg"
thumb_jpg_path = os.path.join(thumb_dir, thumb_jpg_name)
rel_thumb_path = os.path.join(".thumbs", thumb_name)
with open(thumb_path, "wb") as f:
f.write(base64.b64decode(data.encode()))
with Image.open(thumb_path) as img:
if img.mode != "RGBA":
img = img.convert("RGBA")
new_img = Image.new("RGB", size=(info[0], info[1]), color=(255, 255, 255))
img = img.resize((info[0], info[1]))
new_img.paste(img, (0, 0), mask=img)
new_img.save(thumb_jpg_path, "JPEG", quality=90)
parsed_matches.append({
'width': info[0], 'height': info[1],
'size': os.path.getsize(thumb_path),
'relative_path': rel_thumb_path})
if info[0] == 32 and info[1] == 32:
has_miniature = True
if len(parsed_matches) > 0 and not has_miniature:
# find the largest thumb index
largest_match = parsed_matches[0]
for item in parsed_matches:
if item['size'] > largest_match['size']:
largest_match = item
# Create miniature thumbnail if one does not exist
thumb_full_name = largest_match['relative_path'].split("/")[-1]
thumb_path = os.path.join(thumb_dir, f"{thumb_full_name}")
rel_path_small = os.path.join(".thumbs", f"{thumb_base}-32x32.png")
thumb_path_small = os.path.join(
thumb_dir, f"{thumb_base}-32x32.png")
# read file
try:
with Image.open(thumb_path) as im:
# Create 32x32 thumbnail
im.thumbnail((32, 32))
im.save(thumb_path_small, format="PNG")
parsed_matches.insert(0, {
'width': im.width, 'height': im.height,
'size': os.path.getsize(thumb_path_small),
'relative_path': rel_path_small
})
except Exception as e:
log_to_stderr(str(e))
# find the smallest thumb index
smallest_match = parsed_matches[0]
max_size = min_size = smallest_match['size']
for item in parsed_matches:
if item['size'] < smallest_match['size']:
smallest_match = item
if item["size"] < min_size:
min_size = item["size"]
if item["size"] > max_size:
max_size = item["size"]
# Create thumbnail for screen
thumb_full_name = smallest_match['relative_path'].split("/")[-1]
thumb_path = os.path.join(thumb_dir, f"{thumb_full_name}")
thumb_QD_full_name = f"{thumb_base}-{smallest_match['width']}x{smallest_match['height']}_QD.jpg"
thumb_QD_path = os.path.join(thumb_dir, f"{thumb_QD_full_name}")
rel_path_QD = os.path.join(".thumbs", thumb_QD_full_name)
try:
with Image.open(thumb_path) as img:
if img.mode != "RGBA":
img = img.convert("RGBA")
new_img = Image.new("RGB", size=(smallest_match['width'], smallest_match['height']), color=(255, 255, 255))
img = img.resize((smallest_match['width'], smallest_match['height']))
new_img.paste(img, (0, 0), mask=img)
new_img.save(thumb_QD_path, "JPEG", quality=90)
except Exception as e:
logger.info(str(e))
parsed_matches.append({
'width': smallest_match['width'], 'height': smallest_match['height'],
'size': (max_size + min_size) // 2,
'relative_path': rel_path_QD})
return parsed_matches
def parse_layer_count(self) -> Optional[int]:
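
The QD thumbnail generation above converts a PNG with transparency to JPEG by pasting it onto a white canvas, since JPEG has no alpha channel. The core of that conversion as a standalone sketch (file names and size are illustrative):

from PIL import Image

def flatten_png_to_jpg(src: str, dest: str, width: int, height: int) -> None:
    with Image.open(src) as img:
        img = img.convert("RGBA").resize((width, height))
        canvas = Image.new("RGB", (width, height), (255, 255, 255))
        # Use the alpha channel as the paste mask so transparent
        # regions show the white background
        canvas.paste(img, (0, 0), mask=img)
        canvas.save(dest, "JPEG", quality=90)

flatten_png_to_jpg("thumb-32x32.png", "thumb-32x32_QD.jpg", 32, 32)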
@@ -297,22 +298,19 @@ class UnknownSlicer(BaseSlicer):
return {'slicer': "Unknown"}
def parse_first_layer_height(self) -> Optional[float]:
return self._parse_min_float(r"G1\sZ\d+\.\d*", self.header_data)
return regex_find_min_float(r"G1\sZ(%F)\s", self.header_data)
def parse_object_height(self) -> Optional[float]:
return self._parse_max_float(r"G1\sZ\d+\.\d*", self.footer_data)
return regex_find_max_float(r"G1\sZ(%F)\s", self.footer_data)
def parse_first_layer_extr_temp(self) -> Optional[float]:
return _regex_find_first(
r"M109 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M109 S(%F)", self.header_data)
def parse_first_layer_bed_temp(self) -> Optional[float]:
return _regex_find_first(
r"M190 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M190 S(%F)", self.header_data)
def parse_chamber_temp(self) -> Optional[float]:
return _regex_find_first(
r"M191 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M191 S(%F)", self.header_data)
def parse_thumbnails(self) -> Optional[List[Dict[str, Any]]]:
return None
@@ -320,10 +318,12 @@ class UnknownSlicer(BaseSlicer):
class PrusaSlicer(BaseSlicer):
def check_identity(self, data: str) -> Optional[Dict[str, str]]:
aliases = {
'QIDIStudio': r"QIDIStudio\s(.*)",
'QIDISlicer': r"QIDISlicer\s(.*)\son",
'PrusaSlicer': r"PrusaSlicer\s(.*)\son",
'SuperSlicer': r"SuperSlicer\s(.*)\son",
'OrcaSlicer': r"OrcaSlicer\s(.*)\son",
'MomentSlicer': r"MomentSlicer\s(.*)\son",
'SliCR-3D': r"SliCR-3D\s(.*)\son",
'BambuStudio': r"BambuStudio[^ ]*\s(.*)\n",
'A3dp-Slicer': r"A3dp-Slicer\s(.*)\son",
@@ -343,20 +343,19 @@ class PrusaSlicer(BaseSlicer):
def parse_first_layer_height(self) -> Optional[float]:
# Check percentage
pct = _regex_find_first(
r"; first_layer_height = (\d+)%", self.footer_data)
pct = regex_find_float(r"; first_layer_height = (%F)%", self.footer_data)
if pct is not None:
if self.layer_height is None:
# Failed to parse the original layer height, so it is not
# possible to calculate a percentage
return None
return round(pct / 100. * self.layer_height, 6)
return _regex_find_first(
r"; first_layer_height = (\d+\.?\d*)", self.footer_data)
return regex_find_float(r"; first_layer_height = (%F)", self.footer_data)
def parse_layer_height(self) -> Optional[float]:
self.layer_height = _regex_find_first(
r"; layer_height = (\d+\.?\d*)", self.footer_data)
self.layer_height = regex_find_float(
r"; layer_height = (%F)", self.footer_data
)
return self.layer_height
def parse_object_height(self) -> Optional[float]:
@@ -369,23 +368,31 @@ class PrusaSlicer(BaseSlicer):
pass
else:
return max(matches)
return self._parse_max_float(r"G1\sZ\d+\.\d*\sF", self.footer_data)
return regex_find_max_float(r"G1\sZ(%F)\sF", self.footer_data)
def parse_filament_total(self) -> Optional[float]:
return _regex_find_first(
r"filament\sused\s\[mm\]\s=\s(\d+\.\d*)", self.footer_data)
line = regex_find_string(r'filament\sused\s\[mm\]\s=\s(%S)\n', self.footer_data)
if line:
filament = regex_find_floats(
r"(%F)", line
)
if filament:
return sum(filament)
return None
def parse_filament_weight_total(self) -> Optional[float]:
return _regex_find_first(
r"total\sfilament\sused\s\[g\]\s=\s(\d+\.\d*)", self.footer_data)
return regex_find_float(
r"total\sfilament\sused\s\[g\]\s=\s(%F)",
self.footer_data
)
def parse_filament_type(self) -> Optional[str]:
return _regex_find_string(
r";\sfilament_type\s=\s(.*)", self.footer_data)
return regex_find_string(r";\sfilament_type\s=\s(%S)", self.footer_data)
def parse_filament_name(self) -> Optional[str]:
return _regex_find_string(
r";\sfilament_settings_id\s=\s(.*)", self.footer_data)
def parse_filament_name(self) -> Optional[str]:
return regex_find_string(
r";\sfilament_settings_id\s=\s(%S)", self.footer_data
)
def parse_estimated_time(self) -> Optional[float]:
time_match = re.search(
@@ -406,33 +413,36 @@ class PrusaSlicer(BaseSlicer):
return round(total_time, 2)
def parse_first_layer_extr_temp(self) -> Optional[float]:
return _regex_find_first(
r"; first_layer_temperature = (\d+\.?\d*)", self.footer_data)
return regex_find_float(
r"; first_layer_temperature = (%F)", self.footer_data
)
def parse_first_layer_bed_temp(self) -> Optional[float]:
return _regex_find_first(
r"; first_layer_bed_temperature = (\d+\.?\d*)", self.footer_data)
return regex_find_float(
r"; first_layer_bed_temperature = (%F)", self.footer_data
)
def parse_chamber_temp(self) -> Optional[float]:
return _regex_find_first(
r"; chamber_temperature = (\d+\.?\d*)", self.footer_data)
return regex_find_float(
r"; chamber_temperature = (%F)", self.footer_data
)
def parse_nozzle_diameter(self) -> Optional[float]:
return _regex_find_first(
r";\snozzle_diameter\s=\s(\d+\.\d*)", self.footer_data)
return regex_find_float(
r";\snozzle_diameter\s=\s(%F)", self.footer_data
)
def parse_layer_count(self) -> Optional[int]:
return _regex_find_int(
r"; total layers count = (\d+)", self.footer_data)
return regex_find_int(r"; total layers count = (%D)", self.footer_data)
def parse_gimage(self) -> Optional[str]:
return _regex_find_string(
return regex_find_string(
r";gimage:(.*)", self.footer_data)
def parse_simage(self) -> Optional[str]:
return _regex_find_string(
return regex_find_string(
r";simage:(.*)", self.footer_data)
class Slic3rPE(PrusaSlicer):
def check_identity(self, data: str) -> Optional[Dict[str, str]]:
match = re.search(r"Slic3r\sPrusa\sEdition\s(.*)\son", data)
@@ -444,8 +454,7 @@ class Slic3rPE(PrusaSlicer):
return None
def parse_filament_total(self) -> Optional[float]:
return _regex_find_first(
r"filament\sused\s=\s(\d+\.\d+)mm", self.footer_data)
return regex_find_float(r"filament\sused\s=\s(%F)mm", self.footer_data)
def parse_thumbnails(self) -> Optional[List[Dict[str, Any]]]:
return None
@@ -461,15 +470,15 @@ class Slic3r(Slic3rPE):
return None
def parse_filament_total(self) -> Optional[float]:
filament = _regex_find_first(
r";\sfilament\_length\_m\s=\s(\d+\.\d*)", self.footer_data)
filament = regex_find_float(
r";\sfilament\_length\_m\s=\s(%F)", self.footer_data
)
if filament is not None:
filament *= 1000
return filament
def parse_filament_weight_total(self) -> Optional[float]:
return _regex_find_first(
r";\sfilament\smass\_g\s=\s(\d+\.\d*)", self.footer_data)
return regex_find_float(r";\sfilament\smass\_g\s=\s(%F)", self.footer_data)
def parse_estimated_time(self) -> Optional[float]:
return None
@@ -485,61 +494,52 @@ class Cura(BaseSlicer):
return None
def has_objects(self) -> bool:
return self._check_has_objects(
self.header_data, r"\n;MESH:")
return self._check_has_objects(self.header_data, r"\n;MESH:")
def parse_first_layer_height(self) -> Optional[float]:
return _regex_find_first(r";MINZ:(\d+\.?\d*)", self.header_data)
return regex_find_float(r";MINZ:(%F)", self.header_data)
def parse_layer_height(self) -> Optional[float]:
self.layer_height = _regex_find_first(
r";Layer\sheight:\s(\d+\.?\d*)", self.header_data)
self.layer_height = regex_find_float(
r";Layer\sheight:\s(%F)", self.header_data
)
return self.layer_height
def parse_object_height(self) -> Optional[float]:
return _regex_find_first(r";MAXZ:(\d+\.?\d*)", self.header_data)
return regex_find_float(r";MAXZ:(%F)", self.header_data)
def parse_filament_total(self) -> Optional[float]:
filament = _regex_find_first(
r";Filament\sused:\s(\d+\.?\d*)m", self.header_data)
filament = regex_find_float(r";Filament\sused:\s(%F)m", self.header_data)
if filament is not None:
filament *= 1000
return filament
def parse_filament_weight_total(self) -> Optional[float]:
return _regex_find_first(
r";Filament\sweight\s=\s.(\d+\.\d+).", self.header_data)
return regex_find_float(r";Filament\sweight\s=\s.(%F).", self.header_data)
def parse_filament_type(self) -> Optional[str]:
return _regex_find_string(
r";Filament\stype\s=\s(.*)", self.header_data)
return regex_find_string(r";Filament\stype\s=\s(%S)", self.header_data)
def parse_filament_name(self) -> Optional[str]:
return _regex_find_string(
r";Filament\sname\s=\s(.*)", self.header_data)
return regex_find_string(r";Filament\sname\s=\s(%S)", self.header_data)
def parse_estimated_time(self) -> Optional[float]:
return self._parse_max_float(r";TIME:.*", self.header_data)
return regex_find_max_float(r";TIME:(%F)", self.header_data)
def parse_first_layer_extr_temp(self) -> Optional[float]:
return _regex_find_first(
r"M109 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M109 S(%F)", self.header_data)
def parse_first_layer_bed_temp(self) -> Optional[float]:
return _regex_find_first(
r"M190 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M190 S(%F)", self.header_data)
def parse_chamber_temp(self) -> Optional[float]:
return _regex_find_first(
r"M191 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M191 S(%F)", self.header_data)
def parse_layer_count(self) -> Optional[int]:
return _regex_find_int(
r";LAYER_COUNT\:(\d+)", self.header_data)
return regex_find_int(r";LAYER_COUNT\:(%D)", self.header_data)
def parse_nozzle_diameter(self) -> Optional[float]:
return _regex_find_first(
r";Nozzle\sdiameter\s=\s(\d+\.\d*)", self.header_data)
return regex_find_float(r";Nozzle\sdiameter\s=\s(%F)", self.header_data)
def parse_thumbnails(self) -> Optional[List[Dict[str, Any]]]:
# Attempt to parse thumbnails from file metadata
@@ -565,7 +565,7 @@ class Cura(BaseSlicer):
'relative_path': rel_path_full
})
# Create 32x32 thumbnail
im.thumbnail((32, 32), Image.ANTIALIAS)
im.thumbnail((32, 32), Image.Resampling.LANCZOS)
im.save(thumb_path_small, format="PNG")
thumbs.insert(0, {
'width': im.width, 'height': im.height,
@@ -573,16 +573,16 @@ class Cura(BaseSlicer):
'relative_path': rel_path_small
})
except Exception as e:
log_to_stderr(str(e))
logger.info(str(e))
return None
return thumbs
def parse_gimage(self) -> Optional[str]:
return _regex_find_string(
return regex_find_string(
r";gimage:(.*)", self.header_data)
def parse_simage(self) -> Optional[str]:
return _regex_find_string(
return regex_find_string(
r";simage:(.*)", self.header_data)
class Simplify3D(BaseSlicer):
@@ -598,39 +598,39 @@ class Simplify3D(BaseSlicer):
return None
def parse_first_layer_height(self) -> Optional[float]:
return self._parse_min_float(r"G1\sZ\d+\.\d*", self.header_data)
return regex_find_min_float(r"G1\sZ(%F)\s", self.header_data)
def parse_layer_height(self) -> Optional[float]:
self.layer_height = _regex_find_first(
r";\s+layerHeight,(\d+\.?\d*)", self.header_data)
self.layer_height = regex_find_float(
r";\s+layerHeight,(%F)", self.header_data
)
return self.layer_height
def parse_object_height(self) -> Optional[float]:
return self._parse_max_float(r"G1\sZ\d+\.\d*", self.footer_data)
return regex_find_max_float(r"G1\sZ(%F)\s", self.footer_data)
def parse_filament_total(self) -> Optional[float]:
return _regex_find_first(
r";\s+(?:Filament\slength|Material\sLength):\s(\d+\.?\d*)\smm",
return regex_find_float(
r";\s+(?:Filament\slength|Material\sLength):\s(%F)\smm",
self.footer_data
)
def parse_filament_weight_total(self) -> Optional[float]:
return _regex_find_first(
r";\s+(?:Plastic\sweight|Material\sWeight):\s(\d+\.?\d*)\sg",
return regex_find_float(
r";\s+(?:Plastic\sweight|Material\sWeight):\s(%F)\sg",
self.footer_data
)
def parse_filament_name(self) -> Optional[str]:
return _regex_find_string(
r";\s+printMaterial,(.*)", self.header_data)
return regex_find_string(
r";\s+printMaterial,(%S)", self.header_data)
def parse_filament_type(self) -> Optional[str]:
return _regex_find_string(
r";\s+makerBotModelMaterial,(.*)", self.footer_data)
return regex_find_string(
r";\s+makerBotModelMaterial,(%S)", self.footer_data)
def parse_estimated_time(self) -> Optional[float]:
time_match = re.search(
r';\s+Build (t|T)ime:.*', self.footer_data)
time_match = re.search(r';\s+Build (t|T)ime:.*', self.footer_data)
if not time_match:
return None
total_time = 0
@@ -690,8 +690,8 @@ class Simplify3D(BaseSlicer):
return self._get_first_layer_temp("Heated Bed")
def parse_nozzle_diameter(self) -> Optional[float]:
return _regex_find_first(
r";\s+(?:extruderDiameter|nozzleDiameter),(\d+\.\d*)",
return regex_find_float(
r";\s+(?:extruderDiameter|nozzleDiameter),(%F)",
self.header_data
)
@@ -708,28 +708,28 @@ class KISSlicer(BaseSlicer):
return None
def parse_first_layer_height(self) -> Optional[float]:
return _regex_find_first(
r";\s+first_layer_thickness_mm\s=\s(\d+\.?\d*)", self.header_data)
return regex_find_float(
r";\s+first_layer_thickness_mm\s=\s(%F)", self.header_data)
def parse_layer_height(self) -> Optional[float]:
self.layer_height = _regex_find_first(
r";\s+max_layer_thickness_mm\s=\s(\d+\.?\d*)", self.header_data)
self.layer_height = regex_find_float(
r";\s+max_layer_thickness_mm\s=\s(%F)", self.header_data)
return self.layer_height
def parse_object_height(self) -> Optional[float]:
return self._parse_max_float(
r";\sEND_LAYER_OBJECT\sz.*", self.footer_data)
return regex_find_max_float(
r";\sEND_LAYER_OBJECT\sz=(%F)", self.footer_data)
def parse_filament_total(self) -> Optional[float]:
filament = _regex_find_floats(
r";\s+Ext\s.*mm", self.footer_data, strict=True)
filament = regex_find_floats(
r";\s+Ext #\d+\s+=\s+(%F)\s*mm", self.footer_data)
if filament:
return sum(filament)
return None
def parse_estimated_time(self) -> Optional[float]:
time = _regex_find_first(
r";\sCalculated.*Build\sTime:\s(\d+\.?\d*)\sminutes",
time = regex_find_float(
r";\sCalculated.*Build\sTime:\s(%F)\sminutes",
self.footer_data)
if time is not None:
time *= 60
@@ -737,16 +737,13 @@ class KISSlicer(BaseSlicer):
return None
def parse_first_layer_extr_temp(self) -> Optional[float]:
return _regex_find_first(
r"; first_layer_C = (\d+\.?\d*)", self.header_data)
return regex_find_float(r"; first_layer_C = (%F)", self.header_data)
def parse_first_layer_bed_temp(self) -> Optional[float]:
return _regex_find_first(
r"; bed_C = (\d+\.?\d*)", self.header_data)
return regex_find_float(r"; bed_C = (%F)", self.header_data)
def parse_chamber_temp(self) -> Optional[float]:
return _regex_find_first(
r"; chamber_C = (\d+\.?\d*)", self.header_data)
return regex_find_float(r"; chamber_C = (%F)", self.header_data)
class IdeaMaker(BaseSlicer):
@@ -760,54 +757,49 @@ class IdeaMaker(BaseSlicer):
return None
def has_objects(self) -> bool:
return self._check_has_objects(
self.header_data, r"\n;PRINTING:")
return self._check_has_objects(self.header_data, r"\n;PRINTING:")
def parse_first_layer_height(self) -> Optional[float]:
layer_info = _regex_find_floats(
r";LAYER:0\s*.*\s*;HEIGHT.*", self.header_data)
if len(layer_info) >= 3:
return layer_info[2]
return None
return regex_find_float(
r";LAYER:0\s*.*\s*;HEIGHT:(%F)", self.header_data
)
def parse_layer_height(self) -> Optional[float]:
layer_info = _regex_find_floats(
r";LAYER:1\s*.*\s*;HEIGHT.*", self.header_data)
if len(layer_info) >= 3:
self.layer_height = layer_info[2]
return self.layer_height
return None
return regex_find_float(
r";LAYER:1\s*.*\s*;HEIGHT:(%F)", self.header_data
)
def parse_object_height(self) -> Optional[float]:
bounds = _regex_find_floats(
r";Bounding Box:.*", self.header_data)
if len(bounds) >= 6:
return bounds[5]
return None
return regex_find_float(r";Bounding Box:(?:\s+(%F))+", self.header_data)
def parse_filament_total(self) -> Optional[float]:
filament = _regex_find_floats(
r";Material.\d\sUsed:.*", self.footer_data, strict=True)
filament = regex_find_floats(
r";Material.\d\sUsed:\s+(%F)", self.footer_data
)
if filament:
return sum(filament)
return None
def parse_filament_type(self) -> Optional[str]:
return _regex_find_string(
r";Filament\stype\s=\s(.*)", self.header_data)
return (
regex_find_string(r";Filament\sType\s.\d:\s(%S)", self.header_data) or
regex_find_string(r";Filament\stype\s=\s(%S)", self.header_data)
)
def parse_filament_name(self) -> Optional[str]:
return _regex_find_string(
r";Filament\sname\s=\s(.*)", self.header_data)
return (
regex_find_string(r";Filament\sName\s.\d:\s(%S)", self.header_data) or
regex_find_string(r";Filament\sname\s=\s(%S)", self.header_data)
)
def parse_filament_weight_total(self) -> Optional[float]:
pi = 3.141592653589793
length = _regex_find_floats(
r";Material.\d\sUsed:.*", self.footer_data, strict=True)
diameter = _regex_find_floats(
r";Filament\sDiameter\s.\d:.*", self.header_data, strict=True)
density = _regex_find_floats(
r";Filament\sDensity\s.\d:.*", self.header_data, strict=True)
length = regex_find_floats(
r";Material.\d\sUsed:\s+(%F)", self.footer_data)
diameter = regex_find_floats(
r";Filament\sDiameter\s.\d:\s+(%F)", self.header_data)
density = regex_find_floats(
r";Filament\sDensity\s.\d:\s+(%F)", self.header_data)
if len(length) == len(density) == len(diameter):
# calc individual weight for each filament with m=pi/4*d²*l*rho
weights = [(pi/4 * diameter[i]**2 * length[i] * density[i]/10**6)
@@ -816,24 +808,20 @@ class IdeaMaker(BaseSlicer):
return None
def parse_estimated_time(self) -> Optional[float]:
return _regex_find_first(
r";Print\sTime:\s(\d+\.?\d*)", self.footer_data)
return regex_find_float(r";Print\sTime:\s(%F)", self.footer_data)
def parse_first_layer_extr_temp(self) -> Optional[float]:
return _regex_find_first(
r"M109 T0 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M109 T0 S(%F)", self.header_data)
def parse_first_layer_bed_temp(self) -> Optional[float]:
return _regex_find_first(
r"M190 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M190 S(%F)", self.header_data)
def parse_chamber_temp(self) -> Optional[float]:
return _regex_find_first(
r"M191 S(\d+\.?\d*)", self.header_data)
return regex_find_float(r"M191 S(%F)", self.header_data)
def parse_nozzle_diameter(self) -> Optional[float]:
return _regex_find_first(
r";Dimension:(?:\s\d+\.\d+){3}\s(\d+\.\d+)", self.header_data)
return regex_find_float(
r";Dimension:(?:\s\d+\.\d+){3}\s(%F)", self.header_data)
class IceSL(BaseSlicer):
def check_identity(self, data) -> Optional[Dict[str, Any]]:
@@ -847,59 +835,59 @@ class IceSL(BaseSlicer):
return None
def parse_first_layer_height(self) -> Optional[float]:
return _regex_find_first(
r";\sz_layer_height_first_layer_mm\s:\s+(\d+\.\d+)",
return regex_find_float(
r";\sz_layer_height_first_layer_mm\s:\s+(%F)",
self.header_data)
def parse_layer_height(self) -> Optional[float]:
self.layer_height = _regex_find_first(
r";\sz_layer_height_mm\s:\s+(\d+\.\d+)",
self.layer_height = regex_find_float(
r";\sz_layer_height_mm\s:\s+(%F)",
self.header_data)
return self.layer_height
def parse_object_height(self) -> Optional[float]:
return _regex_find_first(
r";\sprint_height_mm\s:\s+(\d+\.\d+)", self.header_data)
return regex_find_float(
r";\sprint_height_mm\s:\s+(%F)", self.header_data)
def parse_first_layer_extr_temp(self) -> Optional[float]:
return _regex_find_first(
r";\sextruder_temp_degree_c_0\s:\s+(\d+\.?\d*)", self.header_data)
return regex_find_float(
r";\sextruder_temp_degree_c_0\s:\s+(%F)", self.header_data)
def parse_first_layer_bed_temp(self) -> Optional[float]:
return _regex_find_first(
r";\sbed_temp_degree_c\s:\s+(\d+\.?\d*)", self.header_data)
return regex_find_float(
r";\sbed_temp_degree_c\s:\s+(%F)", self.header_data)
def parse_chamber_temp(self) -> Optional[float]:
return _regex_find_first(
r";\schamber_temp_degree_c\s:\s+(\d+\.?\d*)", self.header_data)
return regex_find_float(
r";\schamber_temp_degree_c\s:\s+(%F)", self.header_data)
def parse_filament_total(self) -> Optional[float]:
return _regex_find_first(
r";\sfilament_used_mm\s:\s+(\d+\.\d+)", self.header_data)
return regex_find_float(
r";\sfilament_used_mm\s:\s+(%F)", self.header_data)
def parse_filament_weight_total(self) -> Optional[float]:
return _regex_find_first(
r";\sfilament_used_g\s:\s+(\d+\.\d+)", self.header_data)
return regex_find_float(
r";\sfilament_used_g\s:\s+(%F)", self.header_data)
def parse_filament_name(self) -> Optional[str]:
return _regex_find_string(
r";\sfilament_name\s:\s+(.*)", self.header_data)
return regex_find_string(
r";\sfilament_name\s:\s+(%S)", self.header_data)
def parse_filament_type(self) -> Optional[str]:
return _regex_find_string(
r";\sfilament_type\s:\s+(.*)", self.header_data)
return regex_find_string(
r";\sfilament_type\s:\s+(%S)", self.header_data)
def parse_estimated_time(self) -> Optional[float]:
return _regex_find_first(
r";\sestimated_print_time_s\s:\s+(\d*\.*\d*)", self.header_data)
return regex_find_float(
r";\sestimated_print_time_s\s:\s+(%F)", self.header_data)
def parse_layer_count(self) -> Optional[int]:
return _regex_find_int(
r";\slayer_count\s:\s+(\d+)", self.header_data)
return regex_find_int(
r";\slayer_count\s:\s+(%D)", self.header_data)
def parse_nozzle_diameter(self) -> Optional[float]:
return _regex_find_first(
r";\snozzle_diameter_mm_0\s:\s+(\d+\.\d+)", self.header_data)
return regex_find_float(
r";\snozzle_diameter_mm_0\s:\s+(%F)", self.header_data)
class KiriMoto(BaseSlicer):
def check_identity(self, data) -> Optional[Dict[str, Any]]:
@@ -917,20 +905,19 @@ class KiriMoto(BaseSlicer):
return None
def parse_first_layer_height(self) -> Optional[float]:
return _regex_find_first(
r"; firstSliceHeight = (\d+\.\d+)", self.header_data
return regex_find_float(
r"; firstSliceHeight = (%F)", self.header_data
)
def parse_layer_height(self) -> Optional[float]:
self.layer_height = _regex_find_first(
r"; sliceHeight = (\d+\.\d+)", self.header_data
self.layer_height = regex_find_float(
r"; sliceHeight = (%F)", self.header_data
)
return self.layer_height
def parse_object_height(self) -> Optional[float]:
return self._parse_max_float(
r"G1 Z\d+\.\d+ (?:; z-hop end|F\d+\n)",
self.footer_data, strict=True
return regex_find_max_float(
r"G1 Z(%F) (?:; z-hop end|F\d+\n)", self.footer_data
)
def parse_layer_count(self) -> Optional[int]:
@@ -945,25 +932,25 @@ class KiriMoto(BaseSlicer):
return None
def parse_estimated_time(self) -> Optional[float]:
return _regex_find_int(r"; --- print time: (\d+)s", self.footer_data)
return regex_find_int(r"; --- print time: (%D)s", self.footer_data)
def parse_filament_total(self) -> Optional[float]:
return _regex_find_first(
r"; --- filament used: (\d+\.?\d*) mm", self.footer_data
return regex_find_float(
r"; --- filament used: (%F) mm", self.footer_data
)
def parse_first_layer_extr_temp(self) -> Optional[float]:
return _regex_find_first(
r"; firstLayerNozzleTemp = (\d+\.?\d*)", self.header_data
return regex_find_float(
r"; firstLayerNozzleTemp = (%F)", self.header_data
)
def parse_first_layer_bed_temp(self) -> Optional[float]:
return _regex_find_first(
r"; firstLayerBedTemp = (\d+\.?\d*)", self.header_data
return regex_find_float(
r"; firstLayerBedTemp = (%F)", self.header_data
)
READ_SIZE = 512 * 1024
READ_SIZE = 1024 * 1024 # 1 MiB
SUPPORTED_SLICERS: List[Type[BaseSlicer]] = [
PrusaSlicer, Slic3rPE, Slic3r, Cura, Simplify3D,
KISSlicer, IdeaMaker, IceSL, KiriMoto
@@ -997,10 +984,10 @@ def process_objects(file_path: str, slicer: BaseSlicer, name: str) -> bool:
preprocess_m486
)
except ImportError:
log_to_stderr("Module 'preprocess-cancellation' failed to load")
logger.info("Module 'preprocess-cancellation' failed to load")
return False
fname = os.path.basename(file_path)
log_to_stderr(
logger.info(
f"Performing Object Processing on file: {fname}, "
f"sliced by {name}"
)
@@ -1018,7 +1005,7 @@ def process_objects(file_path: str, slicer: BaseSlicer, name: str) -> bool:
elif isinstance(slicer, IdeaMaker):
processor = preprocess_ideamaker
else:
log_to_stderr(
logger.info(
f"Object Processing Failed, slicer {name}"
"not supported"
)
@@ -1026,7 +1013,7 @@ def process_objects(file_path: str, slicer: BaseSlicer, name: str) -> bool:
for line in processor(in_file):
out_file.write(line)
except Exception as e:
log_to_stderr(f"Object processing failed: {e}")
logger.info(f"Object processing failed: {e}")
return False
if os.path.islink(file_path):
file_path = os.path.realpath(file_path)
@@ -1084,7 +1071,7 @@ def extract_metadata(
def extract_ufp(ufp_path: str, dest_path: str) -> None:
if not os.path.isfile(ufp_path):
log_to_stderr(f"UFP file Not Found: {ufp_path}")
logger.info(f"UFP file Not Found: {ufp_path}")
sys.exit(-1)
thumb_name = os.path.splitext(
os.path.basename(dest_path))[0] + ".png"
@@ -1107,12 +1094,12 @@ def extract_ufp(ufp_path: str, dest_path: str) -> None:
os.mkdir(dest_thumb_dir)
shutil.move(tmp_thumb_path, dest_thumb_path)
except Exception:
log_to_stderr(traceback.format_exc())
logger.info(traceback.format_exc())
sys.exit(-1)
try:
os.remove(ufp_path)
except Exception:
log_to_stderr(f"Error removing ufp file: {ufp_path}")
logger.info(f"Error removing ufp file: {ufp_path}")
def main(path: str,
filename: str,
@@ -1124,12 +1111,12 @@ def main(path: str,
extract_ufp(ufp, file_path)
metadata: Dict[str, Any] = {}
if not os.path.isfile(file_path):
log_to_stderr(f"File Not Found: {file_path}")
logger.info(f"File Not Found: {file_path}")
sys.exit(-1)
try:
metadata = extract_metadata(file_path, check_objects)
except Exception:
log_to_stderr(traceback.format_exc())
logger.info(traceback.format_exc())
sys.exit(-1)
fd = sys.stdout.fileno()
data = json.dumps(
@@ -1164,5 +1151,5 @@ if __name__ == "__main__":
args = parser.parse_args()
check_objects = args.check_objects
enabled_msg = "enabled" if check_objects else "disabled"
log_to_stderr(f"Object Processing is {enabled_msg}")
main(args.path, args.filename, args.ufp, check_objects)
logger.info(f"Object Processing is {enabled_msg}")
main(args.path, args.filename, args.ufp, check_objects)
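The refactor above swaps the module-private `_regex_find_*` helpers for shared `regex_find_float`/`regex_find_int`/`regex_find_max_float` functions that take `%F` (float) and `%D` (integer) placeholders instead of hand-written digit patterns. A minimal sketch of how such placeholder expansion can work; the helper names match the diff, but the exact patterns here are assumptions for illustration, not the verbatim implementation:

```python
import re
from typing import Optional

FLOAT_PAT = r"\d+\.?\d*"   # assumed expansion for %F
INT_PAT = r"\d+"           # assumed expansion for %D

def regex_find_float(pattern: str, data: str) -> Optional[float]:
    # Expand the placeholder, then return the first capture as a float
    match = re.search(pattern.replace("%F", FLOAT_PAT), data)
    return float(match.group(1)) if match else None

def regex_find_int(pattern: str, data: str) -> Optional[int]:
    match = re.search(pattern.replace("%D", INT_PAT), data)
    return int(match.group(1)) if match else None

print(regex_find_float(r"; firstSliceHeight = (%F)", "; firstSliceHeight = 0.25"))
# -> 0.25
```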

View File

@@ -4,8 +4,13 @@
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import os
import re
import asyncio
import pathlib
import logging
from utils import load_system_module
import periphery
from ..utils import KERNEL_VERSION
# Annotation imports
from typing import (
@@ -18,167 +23,135 @@ from typing import (
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from eventloop import EventLoop
GPIO_CALLBACK = Callable[[float, float, int], Optional[Awaitable[None]]]
from ..confighelper import ConfigHelper
from ..eventloop import EventLoop
GpioEventCallback = Callable[[float, float, int], Optional[Awaitable[None]]]
GPIO_PATTERN = r"""
(?P<bias>[~^])?
(?P<inverted>!)?
(?:(?P<chip_id>gpiochip[0-9]+)/)?
(?P<pin_name>gpio(?P<pin_id>[0-9]+))
"""
BIAS_FLAG_TO_DESC: Dict[str, str] = {
"^": "pull_up",
"~": "pull_down",
"*": "disable" if KERNEL_VERSION >= (5, 5) else "default"
}
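The verbose pattern above encodes the full pin descriptor grammar: an optional bias flag (`^` pull-up, `~` pull-down), an optional `!` inversion, an optional chip prefix, and the pin name. A quick demonstration of what it captures for a fully qualified descriptor (the descriptor itself is illustrative):

```python
import re

GPIO_PATTERN = r"""
(?P<bias>[~^])?
(?P<inverted>!)?
(?:(?P<chip_id>gpiochip[0-9]+)/)?
(?P<pin_name>gpio(?P<pin_id>[0-9]+))
"""

m = re.match(GPIO_PATTERN, "^!gpiochip1/gpio23", re.VERBOSE)
assert m is not None
print(m.group("bias"))      # "^" -> "pull_up" per BIAS_FLAG_TO_DESC
print(m.group("inverted"))  # "!" -> active low
print(m.group("chip_id"))   # "gpiochip1"
print(m.group("pin_id"))    # "23"
```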
class GpioFactory:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.gpiod: Any = load_system_module("gpiod")
GpioEvent.init_constants(self.gpiod)
self.chips: Dict[str, Any] = {}
self.reserved_gpios: Dict[str, GpioBase] = {}
version: str = self.gpiod.version_string()
self.gpiod_version = tuple(int(v) for v in version.split('.'))
self.server.add_log_rollover_item(
"gpiod_version", f"libgpiod version: {version}")
def _get_gpio_chip(self, chip_name) -> Any:
if chip_name in self.chips:
return self.chips[chip_name]
chip = self.gpiod.Chip(chip_name, self.gpiod.Chip.OPEN_BY_NAME)
self.chips[chip_name] = chip
return chip
def setup_gpio_out(self,
pin_name: str,
initial_value: int = 0
) -> GpioOutputPin:
def setup_gpio_out(self, pin_name: str, initial_value: int = 0) -> GpioOutputPin:
initial_value = int(not not initial_value)
pparams = self._parse_pin(pin_name)
pparams['initial_value'] = initial_value
line = self._request_gpio(pparams)
pparams = self._parse_pin(pin_name, initial_value)
gpio = self._request_gpio(pparams)
try:
gpio_out = GpioOutputPin(line, pparams)
gpio_out = GpioOutputPin(gpio, pparams)
except Exception:
logging.exception("Error Instantiating GpioOutputPin")
line.release()
gpio.close()
raise
full_name = pparams['full_name']
full_name = pparams["full_name"]
self.reserved_gpios[full_name] = gpio_out
return gpio_out
def register_gpio_event(self,
pin_name: str,
callback: GPIO_CALLBACK
) -> GpioEvent:
pin_params = self._parse_pin(pin_name, type="event")
line = self._request_gpio(pin_params)
def register_gpio_event(
self, pin_name: str, callback: GpioEventCallback
) -> GpioEvent:
pin_params = self._parse_pin(pin_name, req_type="event")
gpio = self._request_gpio(pin_params)
event_loop = self.server.get_event_loop()
try:
gpio_event = GpioEvent(event_loop, line, pin_params, callback)
gpio_event = GpioEvent(event_loop, gpio, pin_params, callback)
except Exception:
logging.exception("Error Instantiating GpioEvent")
line.release()
gpio.close()
raise
full_name = pin_params['full_name']
full_name = pin_params["full_name"]
self.reserved_gpios[full_name] = gpio_event
return gpio_event
def _request_gpio(self, pin_params: Dict[str, Any]) -> Any:
full_name = pin_params['full_name']
def _request_gpio(self, pin_params: Dict[str, Any]) -> periphery.GPIO:
full_name = pin_params["full_name"]
if full_name in self.reserved_gpios:
raise self.server.error(f"GPIO {full_name} already reserved")
chip_path = pathlib.Path("/dev").joinpath(pin_params["chip_id"])
if not chip_path.exists():
raise self.server.error(f"Chip path {chip_path} does not exist")
try:
chip = self._get_gpio_chip(pin_params['chip_id'])
line = chip.get_line(pin_params['pin_id'])
args: Dict[str, Any] = {
'consumer': "moonraker",
'type': pin_params['request_type']
}
if 'flags' in pin_params:
args['flags'] = pin_params['flags']
if 'initial_value' in pin_params:
if self.gpiod_version < (1, 3):
args['default_vals'] = [pin_params['initial_value']]
else:
args['default_val'] = pin_params['initial_value']
line.request(**args)
gpio = periphery.GPIO(
str(chip_path),
pin_params["pin_id"],
pin_params["direction"],
edge=pin_params.get("edge", "none"),
bias=pin_params.get("bias", "default"),
inverted=pin_params["inverted"],
label="moonraker"
)
except Exception:
logging.exception(
f"Unable to init {full_name}. Make sure the gpio is not in "
"use by another program or exported by sysfs.")
raise
return line
return gpio
def _parse_pin(self,
pin_name: str,
type: str = "out"
) -> Dict[str, Any]:
def _parse_pin(
self, pin_desc: str, initial_value: int = 0, req_type: str = "out"
) -> Dict[str, Any]:
params: Dict[str, Any] = {
'orig': pin_name,
'invert': False,
"orig": pin_desc,
"inverted": False,
"request_type": req_type,
"initial_value": initial_value
}
pin = pin_name
if type == "event":
params['request_type'] = self.gpiod.LINE_REQ_EV_BOTH_EDGES
flag: str = "disable"
if pin[0] == "^":
pin = pin[1:]
flag = "pullup"
elif pin[0] == "~":
pin = pin[1:]
flag = "pulldown"
if self.gpiod_version >= (1, 5):
flag_to_enum = {
"disable": self.gpiod.LINE_REQ_FLAG_BIAS_DISABLE,
"pullup": self.gpiod.LINE_REQ_FLAG_BIAS_PULL_UP,
"pulldown": self.gpiod.LINE_REQ_FLAG_BIAS_PULL_DOWN
}
params['flags'] = flag_to_enum[flag]
elif flag != "disable":
raise self.server.error(
f"Flag {flag} configured for event GPIO '{pin_name}'"
" requires libgpiod version 1.5 or later. "
f"Current Version: {self.gpiod.version_string()}")
elif type == "out":
params['request_type'] = self.gpiod.LINE_REQ_DIR_OUT
if pin[0] == "!":
pin = pin[1:]
params['invert'] = True
if 'flags' in params:
params['flags'] |= self.gpiod.LINE_REQ_FLAG_ACTIVE_LOW
else:
params['flags'] = self.gpiod.LINE_REQ_FLAG_ACTIVE_LOW
chip_id: str = "gpiochip0"
pin_parts = pin.split("/")
if len(pin_parts) == 2:
chip_id, pin = pin_parts
elif len(pin_parts) == 1:
pin = pin_parts[0]
# Verify pin
if not chip_id.startswith("gpiochip") or \
not chip_id[-1].isdigit() or \
not pin.startswith("gpio") or \
not pin[4:].isdigit():
pin_match = re.match(GPIO_PATTERN, pin_desc, re.VERBOSE)
if pin_match is None:
raise self.server.error(
f"Invalid Gpio Pin: {pin_name}")
pin_id = int(pin[4:])
params['pin_id'] = pin_id
params['chip_id'] = chip_id
params['full_name'] = f"{chip_id}:{pin}"
f"Invalid pin format {pin_desc}. Refer to the configuration "
"documentation for details on the pin format."
)
bias_flag: Optional[str] = pin_match.group("bias")
params["inverted"] = pin_match.group("inverted") is not None
if req_type == "event":
params["direction"] = "in"
params["edge"] = "both"
params["bias"] = BIAS_FLAG_TO_DESC[bias_flag or "*"]
elif req_type == "out":
if bias_flag is not None:
raise self.server.error(
f"Invalid pin format {pin_desc}. Bias flag {bias_flag} "
"not available for output pins."
)
initial_state = bool(initial_value) ^ params["inverted"]
params["direction"] = "low" if not initial_state else "high"
chip_id: str = pin_match.group("chip_id") or "gpiochip0"
pin_name: str = pin_match.group("pin_name")
params["pin_id"] = int(pin_match.group("pin_id"))
params["chip_id"] = chip_id
params["full_name"] = f"{chip_id}:{pin_name}"
return params
def close(self) -> None:
for line in self.reserved_gpios.values():
line.release()
for chip in self.chips.values():
chip.close()
for gpio in self.reserved_gpios.values():
gpio.close()
class GpioBase:
def __init__(self,
line: Any,
pin_params: Dict[str, Any]
) -> None:
self.orig: str = pin_params['orig']
self.name: str = pin_params['full_name']
self.inverted: bool = pin_params['invert']
self.line: Any = line
self.value: int = pin_params.get('initial_value', 0)
def __init__(
self, gpio: periphery.GPIO, pin_params: Dict[str, Any]
) -> None:
self.orig: str = pin_params["orig"]
self.name: str = pin_params["full_name"]
self.inverted: bool = pin_params["inverted"]
self.gpio = gpio
self.value: int = pin_params.get("initial_value", 0)
def release(self) -> None:
self.line.release()
def close(self) -> None:
self.gpio.close()
def is_inverted(self) -> bool:
return self.inverted
@@ -195,85 +168,107 @@ class GpioBase:
class GpioOutputPin(GpioBase):
def write(self, value: int) -> None:
self.value = int(not not value)
self.line.set_value(self.value)
self.gpio.write(bool(self.value))
MAX_ERRORS = 20
MAX_ERRORS = 50
ERROR_RESET_TIME = 5.
class GpioEvent(GpioBase):
EVENT_FALLING_EDGE = 0
EVENT_RISING_EDGE = 1
def __init__(self,
event_loop: EventLoop,
line: Any,
pin_params: Dict[str, Any],
callback: GPIO_CALLBACK
) -> None:
super().__init__(line, pin_params)
def __init__(
self,
event_loop: EventLoop,
gpio: periphery.GPIO,
pin_params: Dict[str, Any],
callback: GpioEventCallback
) -> None:
super().__init__(gpio, pin_params)
self.event_loop = event_loop
self.fd = line.event_get_fd()
self.callback = callback
self.on_error: Optional[Callable[[str], None]] = None
self.min_evt_time = 0.
self.last_event_time = 0.
self.debounce_period: float = 0
self.last_event_time: float = 0.
self.error_count = 0
self.last_error_reset = 0.
self.started = False
self.debounce_task: Optional[asyncio.Task] = None
os.set_blocking(self.gpio.fd, False)
@classmethod
def init_constants(cls, gpiod: Any) -> None:
cls.EVENT_RISING_EDGE = gpiod.LineEvent.RISING_EDGE
cls.EVENT_FALLING_EDGE = gpiod.LineEvent.FALLING_EDGE
def fileno(self) -> int:
return self.gpio.fd
def setup_debounce(self,
min_evt_time: float,
err_callback: Optional[Callable[[str], None]]
) -> None:
self.min_evt_time = max(min_evt_time, 0.)
def setup_debounce(
self, debounce_period: float, err_callback: Optional[Callable[[str], None]]
) -> None:
self.debounce_period = max(debounce_period, 0)
self.on_error = err_callback
def start(self) -> None:
if not self.started:
self.value = self.line.get_value()
self.value = int(self.gpio.read())
self.last_event_time = self.event_loop.get_loop_time()
self.event_loop.add_reader(self.fd, self._on_event_trigger)
self.event_loop.add_reader(self.gpio.fd, self._on_event_trigger)
self.started = True
logging.debug(f"GPIO {self.name}: Listening for events, "
f"current state: {self.value}")
def stop(self) -> None:
if self.debounce_task is not None:
self.debounce_task.cancel()
self.debounce_task = None
if self.started:
self.event_loop.remove_reader(self.fd)
self.event_loop.remove_reader(self.gpio.fd)
self.started = False
def release(self) -> None:
def close(self) -> None:
self.stop()
self.line.release()
self.gpio.close()
def _on_event_trigger(self) -> None:
evt = self.line.event_read()
last_val = self.value
if evt.type == self.EVENT_RISING_EDGE:
evt = self.gpio.read_event()
last_value = self.value
if evt.edge == "rising": # type: ignore
self.value = 1
elif evt.type == self.EVENT_FALLING_EDGE:
elif evt.edge == "falling": # type: ignore
self.value = 0
else:
return
if self.debounce_period:
if self.debounce_task is None:
coro = self._debounce(last_value)
self.debounce_task = self.event_loop.create_task(coro)
else:
self._increment_error()
elif last_value != self.value:
# No debounce period and change detected
self._run_callback()
async def _debounce(self, last_value: int) -> None:
await asyncio.sleep(self.debounce_period)
self.debounce_task = None
if last_value != self.value:
self._run_callback()
def _run_callback(self) -> None:
eventtime = self.event_loop.get_loop_time()
evt_duration = eventtime - self.last_event_time
if last_val == self.value or evt_duration < self.min_evt_time:
self._increment_error()
return
self.last_event_time = eventtime
self.error_count = 0
ret = self.callback(eventtime, evt_duration, self.value)
if ret is not None:
self.event_loop.create_task(ret)
self.event_loop.create_task(ret) # type: ignore
def _increment_error(self) -> None:
eventtime = self.event_loop.get_loop_time()
if eventtime - self.last_error_reset > ERROR_RESET_TIME:
self.error_count = 0
self.last_error_reset = eventtime
self.error_count += 1
if self.error_count >= MAX_ERRORS:
self.stop()
if self.on_error is not None:
self.on_error("Too Many Consecutive Errors, "
f"GPIO Event Disabled on {self.name}")
self.on_error(
f"Too Many Consecutive Errors, GPIO Event Disabled on {self.name}"
)
def load_component(config: ConfigHelper) -> GpioFactory:
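Taken together, a consuming component would use the factory roughly as follows; the pin descriptor, callback body, and 50 ms debounce value are illustrative, not taken from this commit:

```python
# Hypothetical consumer, e.g. a button component.
def setup_button(gpio_factory):
    async def on_change(eventtime: float, elapsed: float, value: int) -> None:
        # value is 1 after a rising edge, 0 after a falling edge
        print(f"button changed to {value} after {elapsed:.3f}s")

    # "^gpiochip0/gpio17" requests gpio17 on chip 0 with a pull-up
    event = gpio_factory.register_gpio_event("^gpiochip0/gpio17", on_change)
    event.setup_debounce(0.05, err_callback=None)  # ignore sub-50 ms glitches
    event.start()
    return event
```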

View File

@@ -1,11 +1,20 @@
# History cache for printer jobs
#
# Copyright (C) 2024 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import time
import logging
from asyncio import Lock
from ..common import (
JobEvent,
RequestType,
HistoryFieldData,
FieldTracker,
SqlTableDefinition
)
# Annotation imports
from typing import (
@@ -15,97 +24,284 @@ from typing import (
Optional,
Dict,
List,
Tuple
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from ..confighelper import ConfigHelper
from ..common import WebRequest, UserInfo
from .database import MoonrakerDatabase as DBComp
from .job_state import JobState
from .file_manager.file_manager import FileManager
from .database import DBProviderWrapper
Totals = Dict[str, Union[float, int]]
AuxTotals = List[Dict[str, Any]]
HIST_NAMESPACE = "history"
MAX_JOBS = 10000
BASE_TOTALS = {
"total_jobs": 0,
"total_time": 0.,
"total_print_time": 0.,
"total_filament_used": 0.,
"longest_job": 0.,
"longest_print": 0.
}
HIST_TABLE = "job_history"
TOTALS_TABLE = "job_totals"
def _create_totals_list(
job_totals: Dict[str, Any],
aux_totals: List[Dict[str, Any]],
instance: str = "default"
) -> List[Tuple[str, str, Any, Any, str]]:
"""
Returns a list of Tuples formatted for SQL Database insertion.
Fields of each tuple are in the following order:
provider, field, maximum, total, instance_id
"""
totals_list: List[Tuple[str, str, Any, Any, str]] = []
for key, value in job_totals.items():
total = value if key.startswith("total_") else None
maximum = value if total is None else None
totals_list.append(("history", key, maximum, total, instance))
for item in aux_totals:
if not isinstance(item, dict):
continue
totals_list.append(
(
item["provider"],
item["field"],
item["maximum"],
item["total"],
instance
)
)
return totals_list
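For the default totals with no auxiliary fields registered, the helper yields one row per counter; keys prefixed with `total_` fill the `total` column while the `longest_*` keys fill `maximum`, matching the table schema below:

```python
rows = _create_totals_list(dict(BASE_TOTALS), [])
# rows contains, among others:
#   ("history", "total_jobs",  None, 0,    "default")
#   ("history", "longest_job", 0.0,  None, "default")
```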
class TotalsSqlDefinition(SqlTableDefinition):
name = TOTALS_TABLE
prototype = (
f"""
{TOTALS_TABLE} (
provider TEXT NOT NULL,
field TEXT NOT NULL,
maximum REAL,
total REAL,
instance_id TEXT NOT NULL,
PRIMARY KEY (provider, field, instance_id)
)
"""
)
version = 1
def migrate(self, last_version: int, db_provider: DBProviderWrapper) -> None:
if last_version == 0:
# Migrate from "moonraker" namespace to a table
logging.info("Migrating history totals from moonraker namespace...")
hist_ns: Dict[str, Any] = db_provider.get_item("moonraker", "history", {})
job_totals: Dict[str, Any] = hist_ns.get("job_totals", BASE_TOTALS)
aux_totals: List[Dict[str, Any]] = hist_ns.get("aux_totals", [])
if not isinstance(job_totals, dict):
job_totals = dict(BASE_TOTALS)
if not isinstance(aux_totals, list):
aux_totals = []
totals_list = _create_totals_list(job_totals, aux_totals)
sql_conn = db_provider.connection
with sql_conn:
sql_conn.executemany(
f"INSERT OR IGNORE INTO {TOTALS_TABLE} VALUES(?, ?, ?, ?, ?)",
totals_list
)
try:
db_provider.delete_item("moonraker", "history")
except Exception:
pass
class HistorySqlDefinition(SqlTableDefinition):
name = HIST_TABLE
prototype = (
f"""
{HIST_TABLE} (
job_id INTEGER PRIMARY KEY ASC,
user TEXT NOT NULL,
filename TEXT,
status TEXT NOT NULL,
start_time REAL NOT NULL,
end_time REAL,
print_duration REAL NOT NULL,
total_duration REAL NOT NULL,
filament_used REAL NOT NULL,
metadata pyjson,
auxiliary_data pyjson NOT NULL,
instance_id TEXT NOT NULL
)
"""
)
version = 1
def _get_entry_item(
self, entry: Dict[str, Any], name: str, default: Any = 0.
) -> Any:
val = entry.get(name)
if val is None:
return default
return val
def migrate(self, last_version: int, db_provider: DBProviderWrapper) -> None:
if last_version == 0:
conn = db_provider.connection
for batch in db_provider.iter_namespace("history", 1000):
conv_vals: List[Tuple[Any, ...]] = []
entry: Dict[str, Any]
for key, entry in batch.items():
if not isinstance(entry, dict):
logging.info(
f"History migration, skipping invalid value: {key} {entry}"
)
continue
try:
conv_vals.append(
(
None,
self._get_entry_item(entry, "user", "No User"),
self._get_entry_item(entry, "filename", "unknown"),
self._get_entry_item(entry, "status", "error"),
self._get_entry_item(entry, "start_time"),
self._get_entry_item(entry, "end_time"),
self._get_entry_item(entry, "print_duration"),
self._get_entry_item(entry, "total_duration"),
self._get_entry_item(entry, "filament_used"),
self._get_entry_item(entry, "metadata", {}),
self._get_entry_item(entry, "auxiliary_data", []),
"default"
)
)
except KeyError:
continue
if not conv_vals:
continue
placeholders = ",".join("?" * len(conv_vals[0]))
with conn:
conn.executemany(
f"INSERT INTO {HIST_TABLE} VALUES({placeholders})",
conv_vals
)
db_provider.wipe_local_namespace("history")
class History:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.file_manager: FileManager = self.server.lookup_component(
'file_manager')
self.file_manager: FileManager = self.server.lookup_component('file_manager')
self.request_lock = Lock()
FieldTracker.class_init(self)
self.auxiliary_fields: List[HistoryFieldData] = []
database: DBComp = self.server.lookup_component("database")
self.job_totals: Dict[str, float] = database.get_item(
"moonraker", "history.job_totals",
{
'total_jobs': 0,
'total_time': 0.,
'total_print_time': 0.,
'total_filament_used': 0.,
'longest_job': 0.,
'longest_print': 0.
}).result()
self.history_table = database.register_table(HistorySqlDefinition())
self.totals_table = database.register_table(TotalsSqlDefinition())
self.job_totals: Totals = dict(BASE_TOTALS)
self.aux_totals: AuxTotals = []
self.server.register_event_handler(
"server:klippy_disconnect", self._handle_disconnect)
self.server.register_event_handler(
"server:klippy_shutdown", self._handle_shutdown)
self.server.register_event_handler(
"job_state:started", self._on_job_started)
"job_state:state_changed", self._on_job_state_changed)
self.server.register_event_handler(
"job_state:complete", self._on_job_complete)
self.server.register_event_handler(
"job_state:cancelled", self._on_job_cancelled)
self.server.register_event_handler(
"job_state:standby", self._on_job_standby)
self.server.register_event_handler(
"job_state:error", self._on_job_error)
"klippy_apis:job_start_complete", self._on_job_requested)
self.server.register_notification("history:history_changed")
self.server.register_endpoint(
"/server/history/job", ['GET', 'DELETE'], self._handle_job_request)
"/server/history/job", RequestType.GET | RequestType.DELETE,
self._handle_job_request
)
self.server.register_endpoint(
"/server/history/list", ['GET'], self._handle_jobs_list)
"/server/history/list", RequestType.GET, self._handle_jobs_list
)
self.server.register_endpoint(
"/server/history/totals", ['GET'], self._handle_job_totals)
"/server/history/totals", RequestType.GET, self._handle_job_totals
)
self.server.register_endpoint(
"/server/history/reset_totals", ['POST'],
self._handle_job_total_reset)
database.register_local_namespace(HIST_NAMESPACE)
self.history_ns = database.wrap_namespace(HIST_NAMESPACE,
parse_keys=False)
"/server/history/reset_totals", RequestType.POST,
self._handle_job_total_reset
)
self.current_job: Optional[PrinterJob] = None
self.current_job_id: Optional[str] = None
self.next_job_id: int = 0
self.cached_job_ids = self.history_ns.keys().result()
if self.cached_job_ids:
self.next_job_id = int(self.cached_job_ids[-1], 16) + 1
self.current_job_id: Optional[int] = None
self.job_user: str = "No User"
self.job_paused: bool = False
async def component_init(self) -> None:
# Populate totals
valid_aux_totals = [
(item.provider, item.name) for item in self.auxiliary_fields
if item.has_totals()
]
cursor = await self.totals_table.execute(f"SELECT * from {TOTALS_TABLE}")
await cursor.set_arraysize(200)
for row in await cursor.fetchall():
provider, field, maximum, total, _ = tuple(row)
if provider == "history":
self.job_totals[field] = total if maximum is None else maximum
elif (provider, field) in valid_aux_totals:
item = dict(row)
item.pop("instance_id", None)
self.aux_totals.append(item)
# Check for interrupted jobs
cursor = await self.history_table.execute(
f"SELECT job_id FROM {HIST_TABLE} WHERE status = 'in_progress'"
)
interrupted_jobs: List[int] = [row[0] for row in await cursor.fetchall()]
if interrupted_jobs:
async with self.history_table as tx:
await tx.execute(
f"UPDATE {HIST_TABLE} SET status = 'interrupted' "
"WHERE status = 'in_progress'"
)
self.server.add_log_rollover_item(
"interrupted_history",
"The following jobs were detected as interrupted: "
f"{interrupted_jobs}"
)
async def _handle_job_request(self,
web_request: WebRequest
) -> Dict[str, Any]:
async with self.request_lock:
action = web_request.get_action()
if action == "GET":
req_type = web_request.get_request_type()
if req_type == RequestType.GET:
job_id = web_request.get_str("uid")
if job_id not in self.cached_job_ids:
cursor = await self.history_table.execute(
f"SELECT * FROM {HIST_TABLE} WHERE job_id = ?", (int(job_id, 16),)
)
result = await cursor.fetchone()
if result is None:
raise self.server.error(f"Invalid job uid: {job_id}", 404)
job = await self.history_ns[job_id]
job = dict(result)
return {"job": self._prep_requested_job(job, job_id)}
if action == "DELETE":
if req_type == RequestType.DELETE:
all = web_request.get_boolean("all", False)
if all:
deljobs = self.cached_job_ids
self.history_ns.clear()
self.cached_job_ids = []
self.next_job_id = 0
cursor = await self.history_table.execute(
f"SELECT job_id FROM {HIST_TABLE} WHERE instance_id = ?",
("default",)
)
await cursor.set_arraysize(1000)
deljobs = [f"{row[0]:06X}" for row in await cursor.fetchall()]
async with self.history_table as tx:
await tx.execute(
f"DELETE FROM {HIST_TABLE} WHERE instance_id = ?",
("default",)
)
return {'deleted_jobs': deljobs}
job_id = web_request.get_str("uid")
if job_id not in self.cached_job_ids:
async with self.history_table as tx:
cursor = await tx.execute(
f"DELETE FROM {HIST_TABLE} WHERE job_id = ?", (int(job_id, 16),)
)
if cursor.rowcount < 1:
raise self.server.error(f"Invalid job uid: {job_id}", 404)
self.delete_job(job_id)
return {'deleted_jobs': [job_id]}
raise self.server.error("Invalid Request Method")
@@ -113,199 +309,205 @@ class History:
web_request: WebRequest
) -> Dict[str, Any]:
async with self.request_lock:
i = 0
count = 0
end_num = len(self.cached_job_ids)
jobs: List[Dict[str, Any]] = []
start_num = 0
before = web_request.get_float("before", -1)
since = web_request.get_float("since", -1)
limit = web_request.get_int("limit", 50)
start = web_request.get_int("start", 0)
order = web_request.get_str("order", "desc")
order = web_request.get_str("order", "desc").upper()
if order not in ["asc", "desc"]:
if order not in ["ASC", "DESC"]:
raise self.server.error(f"Invalid `order` value: {order}", 400)
reverse_order = (order == "desc")
# cached jobs is asc order, find lower and upper boundary
if since != -1:
while start_num < end_num:
job_id = self.cached_job_ids[start_num]
job: Dict[str, Any] = await self.history_ns[job_id]
if job['start_time'] > since:
break
start_num += 1
# Build SQL Select Statement
values: List[Any] = ["default"]
sql_statement = f"SELECT * FROM {HIST_TABLE} WHERE instance_id = ?"
if before != -1:
while end_num > 0:
job_id = self.cached_job_ids[end_num-1]
job = await self.history_ns[job_id]
if job['end_time'] < before:
break
end_num -= 1
if start_num >= end_num or end_num == 0:
return {"count": 0, "jobs": []}
i = start
count = end_num - start_num
if limit == 0:
limit = MAX_JOBS
while i < count and len(jobs) < limit:
if reverse_order:
job_id = self.cached_job_ids[end_num - i - 1]
else:
job_id = self.cached_job_ids[start_num + i]
job = await self.history_ns[job_id]
sql_statement += " and end_time < ?"
values.append(before)
if since != -1:
sql_statement += " and start_time > ?"
values.append(since)
sql_statement += f" ORDER BY job_id {order}"
if limit > 0:
sql_statement += " LIMIT ? OFFSET ?"
values.append(limit)
values.append(start)
cursor = await self.history_table.execute(sql_statement, values)
await cursor.set_arraysize(1000)
jobs: List[Dict[str, Any]] = []
for row in await cursor.fetchall():
job = dict(row)
job_id = f"{row['job_id']:06X}"
jobs.append(self._prep_requested_job(job, job_id))
i += 1
return {"count": len(jobs), "jobs": jobs}
return {"count": count, "jobs": jobs}
async def _handle_job_totals(self,
web_request: WebRequest
) -> Dict[str, Dict[str, float]]:
return {'job_totals': self.job_totals}
async def _handle_job_total_reset(self,
web_request: WebRequest,
) -> Dict[str, Dict[str, float]]:
if self.current_job is not None:
raise self.server.error(
"Job in progress, cannot reset totals")
last_totals = dict(self.job_totals)
self.job_totals = {
'total_jobs': 0,
'total_time': 0.,
'total_print_time': 0.,
'total_filament_used': 0.,
'longest_job': 0.,
'longest_print': 0.
async def _handle_job_totals(
self, web_request: WebRequest
) -> Dict[str, Union[Totals, AuxTotals]]:
return {
"job_totals": self.job_totals,
"auxiliary_totals": self.aux_totals
}
database: DBComp = self.server.lookup_component("database")
await database.insert_item(
"moonraker", "history.job_totals", self.job_totals)
return {'last_totals': last_totals}
def _on_job_started(self,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
async def _handle_job_total_reset(
self, web_request: WebRequest
) -> Dict[str, Union[Totals, AuxTotals]]:
if self.current_job is not None:
# Finish with the previous state
self.finish_job("cancelled", prev_stats)
self.add_job(PrinterJob(new_stats))
raise self.server.error("Job in progress, cannot reset totals")
last_totals = self.job_totals
self.job_totals = dict(BASE_TOTALS)
last_aux_totals = self.aux_totals
self._update_aux_totals(reset=True)
totals_list = _create_totals_list(self.job_totals, self.aux_totals)
async with self.totals_table as tx:
await tx.execute(
f"DELETE FROM {TOTALS_TABLE} WHERE instance_id = ?", ("default",)
)
await tx.executemany(
f"INSERT INTO {TOTALS_TABLE} VALUES(?, ?, ?, ?, ?)", totals_list
)
return {
"last_totals": last_totals,
"last_auxiliary_totals": last_aux_totals
}
def _on_job_complete(self,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
self.finish_job("completed", new_stats)
async def _on_job_state_changed(
self,
job_event: JobEvent,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
self.job_paused = job_event == JobEvent.PAUSED
if job_event == JobEvent.STARTED:
if self.current_job is not None:
# Finish with the previous state
await self.finish_job("cancelled", prev_stats)
await self.add_job(PrinterJob(new_stats))
elif job_event == JobEvent.COMPLETE:
await self.finish_job("completed", new_stats)
elif job_event == JobEvent.ERROR:
await self.finish_job("error", new_stats)
elif job_event in (JobEvent.CANCELLED, JobEvent.STANDBY):
# Cancel on "standby" for backward compatibility with
# `CLEAR_PAUSE/SDCARD_RESET_FILE` workflow
await self.finish_job("cancelled", prev_stats)
def _on_job_cancelled(self,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
self.finish_job("cancelled", new_stats)
def _on_job_requested(self, user: Optional[UserInfo]) -> None:
username = user.username if user is not None else "No User"
self.job_user = username
if self.current_job is not None:
self.current_job.user = username
def _on_job_error(self,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
self.finish_job("error", new_stats)
def _on_job_standby(self,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
# Backward compatibility with
# `CLEAR_PAUSE/SDCARD_RESET_FILE` workflow
self.finish_job("cancelled", prev_stats)
def _handle_shutdown(self) -> None:
async def _handle_shutdown(self) -> None:
jstate: JobState = self.server.lookup_component("job_state")
last_ps = jstate.get_last_stats()
self.finish_job("klippy_shutdown", last_ps)
await self.finish_job("klippy_shutdown", last_ps)
def _handle_disconnect(self) -> None:
async def _handle_disconnect(self) -> None:
jstate: JobState = self.server.lookup_component("job_state")
last_ps = jstate.get_last_stats()
self.finish_job("klippy_disconnect", last_ps)
await self.finish_job("klippy_disconnect", last_ps)
def add_job(self, job: PrinterJob) -> None:
if len(self.cached_job_ids) >= MAX_JOBS:
self.delete_job(self.cached_job_ids[0])
job_id = f"{self.next_job_id:06X}"
self.current_job = job
self.current_job_id = job_id
self.grab_job_metadata()
self.history_ns[job_id] = job.get_stats()
self.cached_job_ids.append(job_id)
self.next_job_id += 1
logging.debug(
f"History Job Added - Id: {job_id}, File: {job.filename}"
)
self.send_history_event("added")
async def add_job(self, job: PrinterJob) -> None:
async with self.request_lock:
self.current_job = job
self.current_job_id = None
self.current_job.user = self.job_user
self.grab_job_metadata()
for field in self.auxiliary_fields:
field.tracker.reset()
self.current_job.set_aux_data(self.auxiliary_fields)
new_id = await self.save_job(job, None)
if new_id is None:
logging.info(f"Error saving job, filename '{job.filename}'")
return
self.current_job_id = new_id
job_id = f"{new_id:06X}"
self.update_metadata(job_id)
logging.debug(
f"History Job Added - Id: {job_id}, File: {job.filename}"
)
self.send_history_event("added")
def delete_job(self, job_id: Union[int, str]) -> None:
if isinstance(job_id, int):
job_id = f"{job_id:06X}"
async def save_job(self, job: PrinterJob, job_id: Optional[int]) -> Optional[int]:
values: List[Any] = [
job_id,
job.user,
job.filename,
job.status,
job.start_time,
job.end_time,
job.print_duration,
job.total_duration,
job.filament_used,
job.metadata,
job.auxiliary_data,
"default"
]
placeholders = ",".join("?" * len(values))
async with self.history_table as tx:
cursor = await tx.execute(
f"REPLACE INTO {HIST_TABLE} VALUES({placeholders})", values
)
return cursor.lastrowid
if job_id in self.cached_job_ids:
del self.history_ns[job_id]
self.cached_job_ids.remove(job_id)
async def delete_job(self, job_id: Union[int, str]) -> None:
if isinstance(job_id, str):
job_id = int(job_id, 16)
async with self.history_table as tx:
await tx.execute(
f"DELETE FROM {HIST_TABLE} WHERE job_id = ?", (job_id,)
)
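Job ids are integer primary keys in SQL but are exposed to API clients as zero-padded hex strings, so conversions appear in both directions throughout this module:

```python
job_id = 26
uid = f"{job_id:06X}"       # -> "00001A", the form returned to clients
assert int(uid, 16) == 26   # parsed back for SQL lookups
```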
def finish_job(self, status: str, pstats: Dict[str, Any]) -> None:
if self.current_job is None:
return
cj = self.current_job
if (
pstats.get('filename') != cj.get('filename') or
pstats.get('total_duration', 0.) < cj.get('total_duration')
):
# Print stats have been reset, do not update this job with them
pstats = {}
async def finish_job(self, status: str, pstats: Dict[str, Any]) -> None:
async with self.request_lock:
if self.current_job is None or self.current_job_id is None:
self._reset_current_job()
return
if (
pstats.get('filename') != self.current_job.filename or
pstats.get('total_duration', 0.) < self.current_job.total_duration
):
# Print stats have been reset, do not update this job with them
pstats = {}
self.current_job.user = self.job_user
self.current_job.finish(status, pstats)
# Regrab metadata in case metadata wasn't parsed yet due to file upload
self.grab_job_metadata()
self.current_job.set_aux_data(self.auxiliary_fields)
job_id = f"{self.current_job_id:06X}"
await self.save_job(self.current_job, self.current_job_id)
self.update_metadata(job_id)
await self._update_job_totals()
logging.debug(
f"History Job Finished - Id: {job_id}, "
f"File: {self.current_job.filename}, "
f"Status: {status}"
)
self.send_history_event("finished")
self._reset_current_job()
self.current_job.finish(status, pstats)
# Regrab metadata in case metadata wasn't parsed yet due to file upload
self.grab_job_metadata()
self.save_current_job()
self._update_job_totals()
logging.debug(
f"History Job Finished - Id: {self.current_job_id}, "
f"File: {self.current_job.filename}, "
f"Status: {status}"
)
self.send_history_event("finished")
def _reset_current_job(self) -> None:
self.current_job = None
self.current_job_id = None
self.job_user = "No User"
async def get_job(self,
job_id: Union[int, str]
) -> Optional[Dict[str, Any]]:
if isinstance(job_id, int):
job_id = f"{job_id:06X}"
return await self.history_ns.get(job_id, None)
async def get_job(
self, job_id: Union[int, str]
) -> Optional[Dict[str, Any]]:
if isinstance(job_id, str):
job_id = int(job_id, 16)
cursor = await self.history_table.execute(
f"SELECT * FROM {HIST_TABLE} WHERE job_id = ?", (job_id,)
)
result = await cursor.fetchone()
return dict(result) if result is not None else result
def grab_job_metadata(self) -> None:
if self.current_job is None:
return
filename: str = self.current_job.get("filename")
filename: str = self.current_job.filename
mdst = self.file_manager.get_metadata_storage()
metadata: Dict[str, Any] = mdst.get(filename, {})
if metadata:
# Add the start time and job id to the
# persistent metadata storage
metadata.update({
'print_start_time': self.current_job.get('start_time'),
'job_id': self.current_job_id
})
mdst.insert(filename, metadata.copy())
# We don't need to store these fields in the
# job metadata, as they are redundant
metadata.pop('print_start_time', None)
@@ -314,61 +516,108 @@ class History:
thumb: Dict[str, Any]
for thumb in metadata['thumbnails']:
thumb.pop('data', None)
self.current_job.set("metadata", metadata)
self.current_job.metadata = metadata
def save_current_job(self) -> None:
if self.current_job is None or self.current_job_id is None:
def update_metadata(self, job_id: str) -> None:
if self.current_job is None:
return
self.history_ns[self.current_job_id] = self.current_job.get_stats()
mdst = self.file_manager.get_metadata_storage()
filename: str = self.current_job.filename
metadata: Dict[str, Any] = mdst.get(filename, {})
if metadata:
# Add the start time and job id to the
# persistent metadata storage
metadata.update({
'print_start_time': self.current_job.get('start_time'),
'job_id': job_id
})
mdst.insert(filename, metadata)
def _update_job_totals(self) -> None:
async def _update_job_totals(self) -> None:
if self.current_job is None:
return
job = self.current_job
self.job_totals['total_jobs'] += 1
self.job_totals['total_time'] += job.get('total_duration')
self.job_totals['total_print_time'] += job.get('print_duration')
self.job_totals['total_filament_used'] += job.get('filament_used')
self.job_totals['longest_job'] = max(
self.job_totals['longest_job'], job.get('total_duration'))
self.job_totals['longest_print'] = max(
self.job_totals['longest_print'], job.get('print_duration'))
database: DBComp = self.server.lookup_component("database")
database.insert_item(
"moonraker", "history.job_totals", self.job_totals)
self._accumulate_total("total_jobs", 1)
self._accumulate_total("total_time", job.total_duration)
self._accumulate_total("total_print_time", job.print_duration)
self._accumulate_total("total_filament_used", job.filament_used)
self._maximize_total("longest_job", job.total_duration)
self._maximize_total("longest_print", job.print_duration)
self._update_aux_totals()
totals_list = _create_totals_list(self.job_totals, self.aux_totals)
async with self.totals_table as tx:
await tx.executemany(
f"REPLACE INTO {TOTALS_TABLE} VALUES(?, ?, ?, ?, ?)", totals_list
)
def _accumulate_total(self, field: str, val: Union[int, float]) -> None:
self.job_totals[field] += val
def _maximize_total(self, field: str, val: Union[int, float]) -> None:
self.job_totals[field] = max(self.job_totals[field], val)
def _update_aux_totals(self, reset: bool = False) -> None:
last_totals = self.aux_totals
self.aux_totals = [
field.get_totals(last_totals, reset)
for field in self.auxiliary_fields
if field.has_totals()
]
def send_history_event(self, evt_action: str) -> None:
if self.current_job is None or self.current_job_id is None:
return
job = self._prep_requested_job(
self.current_job.get_stats(), self.current_job_id)
self.server.send_event("history:history_changed",
{'action': evt_action, 'job': job})
job_id = f"{self.current_job_id:06X}"
job = self._prep_requested_job(self.current_job.get_stats(), job_id)
self.server.send_event(
"history:history_changed", {'action': evt_action, 'job': job}
)
def _prep_requested_job(self,
job: Dict[str, Any],
job_id: str
) -> Dict[str, Any]:
job['job_id'] = job_id
job['exists'] = self.file_manager.check_file_exists(
"gcodes", job['filename'])
def _prep_requested_job(
self, job: Dict[str, Any], job_id: str
) -> Dict[str, Any]:
fm = self.file_manager
mtime = job.get("metadata", {}).get("modified", None)
job["exists"] = fm.check_file_exists("gcodes", job['filename'], mtime)
job["job_id"] = job_id
job.pop("instance_id", None)
return job
def on_exit(self) -> None:
def register_auxiliary_field(self, new_field: HistoryFieldData) -> None:
if new_field.provider == "history":
raise self.server.error("Provider name 'history' is reserved")
for field in self.auxiliary_fields:
if field == new_field:
raise self.server.error(
f"Field {field.name} already registered by "
f"provider {field.provider}."
)
self.auxiliary_fields.append(new_field)
def tracking_enabled(self, check_paused: bool) -> bool:
if self.current_job is None:
return False
return not self.job_paused if check_paused else True
async def on_exit(self) -> None:
if self.current_job is None:
return
jstate: JobState = self.server.lookup_component("job_state")
last_ps = jstate.get_last_stats()
self.finish_job("server_exit", last_ps)
await self.finish_job("server_exit", last_ps)
class PrinterJob:
def __init__(self, data: Dict[str, Any] = {}) -> None:
self.end_time: Optional[float] = None
self.filament_used: float = 0
self.filename: Optional[str] = None
self.metadata: Optional[Dict[str, Any]] = None
self.filename: str = ""
self.metadata: Dict[str, Any] = {}
self.print_duration: float = 0.
self.status: str = "in_progress"
self.start_time = time.time()
self.total_duration: float = 0.
self.auxiliary_data: List[Dict[str, Any]] = []
self.user: str = "No User"
self.update_from_ps(data)
def finish(self,
@@ -376,7 +625,7 @@ class PrinterJob:
print_stats: Dict[str, Any] = {}
) -> None:
self.end_time = time.time()
self.status = status
self.status = status if status is not None else "error"
self.update_from_ps(print_stats)
def get(self, name: str) -> Any:
@@ -392,10 +641,14 @@ class PrinterJob:
return
setattr(self, name, val)
def set_aux_data(self, fields: List[HistoryFieldData]) -> None:
self.auxiliary_data = [field.as_dict() for field in fields]
def update_from_ps(self, data: Dict[str, Any]) -> None:
for i in data:
if hasattr(self, i):
if hasattr(self, i) and data[i] is not None:
setattr(self, i, data[i])
def load_component(config: ConfigHelper) -> History:
return History(config)
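From a client's point of view the SQL-backed store keeps the same REST surface; a minimal sketch of fetching and deleting a job by uid, assuming Moonraker's usual `{"result": ...}` response envelope (host and uid are illustrative):

```python
import requests

base = "http://printer.local"
job = requests.get(f"{base}/server/history/job", params={"uid": "00001A"}).json()
print(job["result"]["job"]["filename"])

resp = requests.delete(f"{base}/server/history/job", params={"uid": "00001A"})
print(resp.json()["result"]["deleted_jobs"])   # -> ["00001A"]
```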

View File

@@ -6,14 +6,15 @@
from __future__ import annotations
import re
import json
import time
import asyncio
import pathlib
import tempfile
import logging
from utils import ServerError
from tornado.escape import url_escape, url_unescape
import copy
from ..utils import ServerError
from ..utils import json_wrapper as jsonw
from tornado.escape import url_unescape
from tornado.httpclient import AsyncHTTPClient, HTTPRequest, HTTPError
from tornado.httputil import HTTPHeaders
from typing import (
@@ -27,8 +28,8 @@ from typing import (
Any
)
if TYPE_CHECKING:
from moonraker import Server
from confighelper import ConfigHelper
from ..server import Server
from ..confighelper import ConfigHelper
from io import BufferedWriter
StrOrPath = Union[str, pathlib.Path]
@@ -40,18 +41,6 @@ AsyncHTTPClient.configure(
GITHUB_PREFIX = "https://api.github.com/"
def escape_query_string(qs: str) -> str:
parts = qs.split("&")
escaped: List[str] = []
for p in parts:
item = p.split("=", 1)
key = url_escape(item[0])
if len(item) == 2:
escaped.append(f"{key}={url_escape(item[1])}")
else:
escaped.append(key)
return "&".join(escaped)
class HttpClient:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
@@ -76,29 +65,14 @@ class HttpClient:
if len(headers) == 0:
raise self.server.error(
"Either an Etag or Last Modified Date must be specified")
empty_resp = HttpResponse(url, 200, b"", headers, None)
empty_resp = HttpResponse(url, url, 200, b"", headers, None)
self.response_cache[url] = empty_resp
def escape_url(self, url: str) -> str:
# escape the url
match = re.match(r"(https?://[^/?#]+)([^?#]+)?(\?[^#]+)?(#.+)?", url)
if match is not None:
uri, path, qs, fragment = match.groups()
if path is not None:
uri += "/".join([url_escape(p, plus=False)
for p in path.split("/")])
if qs is not None:
uri += "?" + escape_query_string(qs[1:])
if fragment is not None:
uri += "#" + url_escape(fragment[1:], plus=False)
url = uri
return url
async def request(
self,
method: str,
url: str,
body: Optional[Union[str, List[Any], Dict[str, Any]]] = None,
body: Optional[Union[bytes, str, List[Any], Dict[str, Any]]] = None,
headers: Optional[Dict[str, Any]] = None,
connect_timeout: float = 5.,
request_timeout: float = 10.,
@@ -113,7 +87,7 @@ class HttpClient:
# prepare the body if required
req_headers: Dict[str, Any] = {}
if isinstance(body, (list, dict)):
body = json.dumps(body)
body = jsonw.dumps(body)
req_headers["Content-Type"] = "application/json"
cached: Optional[HttpResponse] = None
if enable_cache:
@@ -160,10 +134,13 @@ class HttpClient:
continue
else:
result = resp.body
ret = HttpResponse(url, resp.code, result, resp.headers, err)
ret = HttpResponse(
url, resp.effective_url, resp.code, result,
resp.headers, err
)
break
else:
ret = HttpResponse(url, 500, b"", HTTPHeaders(), err)
ret = HttpResponse(url, url, 500, b"", HTTPHeaders(), err)
if enable_cache and ret.is_cachable():
logging.debug(f"Caching HTTP Response: {url}")
self.response_cache[cache_key] = ret
@@ -291,18 +268,70 @@ class HttpClient:
return dl.dest_file
raise self.server.error(f"Retries exceeded for request: {url}")
def wrap_request(self, default_url: str, **kwargs) -> HttpRequestWrapper:
return HttpRequestWrapper(self, default_url, **kwargs)
def close(self):
self.client.close()
class HttpRequestWrapper:
def __init__(
self, client: HttpClient, default_url: str, **kwargs
) -> None:
self._do_request = client.request
self._last_response: Optional[HttpResponse] = None
self.default_request_args: Dict[str, Any] = {
"method": "GET",
"url": default_url,
}
self.default_request_args.update(kwargs)
self.request_args = copy.deepcopy(self.default_request_args)
self.reset()
async def send(self, **kwargs) -> HttpResponse:
req_args = copy.deepcopy(self.request_args)
req_args.update(kwargs)
method = req_args.pop("method", self.default_request_args["method"])
url = req_args.pop("url", self.default_request_args["url"])
self._last_response = await self._do_request(method, url, **req_args)
return self._last_response
def set_method(self, method: str) -> None:
self.request_args["method"] = method
def set_url(self, url: str) -> None:
self.request_args["url"] = url
def set_body(
self, body: Optional[Union[str, List[Any], Dict[str, Any]]]
) -> None:
self.request_args["body"] = body
def add_header(self, name: str, value: str) -> None:
headers = self.request_args.get("headers", {})
headers[name] = value
self.request_args["headers"] = headers
def set_headers(self, headers: Dict[str, str]) -> None:
self.request_args["headers"] = headers
def reset(self) -> None:
self.request_args = copy.deepcopy(self.default_request_args)
def last_response(self) -> Optional[HttpResponse]:
return self._last_response
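The wrapper lets a caller bake in defaults once and override them per send; a usage sketch inside a coroutine, assuming `http_client` is the looked-up component (URL and header are illustrative):

```python
wrapper = http_client.wrap_request(
    "https://api.github.com/repos/Arksine/moonraker",
    headers={"Accept": "application/vnd.github.v3+json"}
)
resp = await wrapper.send()              # GET the default url
if resp.status_code == 200:
    data = resp.json()
# Per-call overrides do not mutate the stored defaults
zen = await wrapper.send(url="https://api.github.com/zen")
```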
class HttpResponse:
def __init__(self,
url: str,
final_url: str,
code: int,
result: bytes,
response_headers: HTTPHeaders,
error: Optional[BaseException]
) -> None:
self._url = url
self._final_url = final_url
self._code = code
self._result: bytes = result
self._encoding: str = "utf-8"
@@ -312,8 +341,8 @@ class HttpResponse:
self._last_modified: Optional[str] = response_headers.get(
"last-modified", None)
def json(self, **kwargs) -> Union[List[Any], Dict[str, Any]]:
return json.loads(self._result, **kwargs)
def json(self) -> Union[List[Any], Dict[str, Any]]:
return jsonw.loads(self._result)
def is_cachable(self) -> bool:
return self._last_modified is not None or self._etag is not None
@@ -353,6 +382,10 @@ class HttpResponse:
def url(self) -> str:
return self._url
@property
def final_url(self) -> str:
return self._final_url
@property
def status_code(self) -> int:
return self._code

View File

@@ -8,6 +8,7 @@ from __future__ import annotations
import asyncio
import time
import logging
from ..common import JobEvent, RequestType
# Annotation imports
from typing import (
@@ -19,8 +20,8 @@ from typing import (
Union,
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from ..confighelper import ConfigHelper
from ..common import WebRequest, UserInfo
from .klippy_apis import KlippyAPI
from .file_manager.file_manager import FileManager
@@ -46,11 +47,8 @@ class JobQueue:
self.server.register_event_handler(
"server:klippy_shutdown", self._handle_shutdown)
self.server.register_event_handler(
"job_state:complete", self._on_job_complete)
self.server.register_event_handler(
"job_state:error", self._on_job_abort)
self.server.register_event_handler(
"job_state:cancelled", self._on_job_abort)
"job_state:state_changed", self._on_job_state_changed
)
self.server.register_notification("job_queue:job_queue_changed")
self.server.register_remote_method("pause_job_queue", self.pause_queue)
@@ -58,14 +56,21 @@ class JobQueue:
self.start_queue)
self.server.register_endpoint(
"/server/job_queue/job", ['POST', 'DELETE'],
self._handle_job_request)
"/server/job_queue/job", RequestType.POST | RequestType.DELETE,
self._handle_job_request
)
self.server.register_endpoint(
"/server/job_queue/pause", ['POST'], self._handle_pause_queue)
"/server/job_queue/pause", RequestType.POST, self._handle_pause_queue
)
self.server.register_endpoint(
"/server/job_queue/start", ['POST'], self._handle_start_queue)
"/server/job_queue/start", RequestType.POST, self._handle_start_queue
)
self.server.register_endpoint(
"/server/job_queue/status", ['GET'], self._handle_queue_status)
"/server/job_queue/status", RequestType.GET, self._handle_queue_status
)
self.server.register_endpoint(
"/server/job_queue/jump", RequestType.POST, self._handle_jump
)
async def _handle_ready(self) -> None:
async with self.lock:
@@ -83,10 +88,15 @@ class JobQueue:
if not self.queued_jobs and self.automatic:
self._set_queue_state("ready")
async def _on_job_complete(self,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
async def _on_job_state_changed(self, job_event: JobEvent, *args) -> None:
if job_event == JobEvent.COMPLETE:
await self._on_job_complete()
elif job_event.aborted:
await self._on_job_abort()
async def _on_job_complete(self) -> None:
if not self.automatic:
return
async with self.lock:
# Transition to the next job in the queue
if self.queue_state == "ready" and self.queued_jobs:
@@ -95,10 +105,7 @@ class JobQueue:
self.pop_queue_handle = event_loop.delay_callback(
self.job_delay, self._pop_job)
async def _on_job_abort(self,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
async def _on_job_abort(self) -> None:
async with self.lock:
if self.queued_jobs:
self._set_queue_state("paused")
@@ -128,7 +135,9 @@ class JobQueue:
raise self.server.error(
"Queue State Changed during Transition Gcode")
self._set_queue_state("starting")
await kapis.start_print(filename)
await kapis.start_print(
filename, wait_klippy_started=True, user=job.user
)
except self.server.error:
logging.exception(f"Error Loading print: {filename}")
self._set_queue_state("paused")
@@ -157,7 +166,9 @@ class JobQueue:
async def queue_job(self,
filenames: Union[str, List[str]],
check_exists: bool = True
check_exists: bool = True,
reset: bool = False,
user: Optional[UserInfo] = None
) -> None:
async with self.lock:
# Make sure that the file exists
@@ -167,8 +178,10 @@ class JobQueue:
# Make sure all files exist before adding them to the queue
for fname in filenames:
self._check_job_file(fname)
if reset:
self.queued_jobs.clear()
for fname in filenames:
queued_job = QueuedJob(fname)
queued_job = QueuedJob(fname, user)
self.queued_jobs[queued_job.job_id] = queued_job
self._send_queue_event(action="jobs_added")
@@ -209,9 +222,12 @@ class JobQueue:
self._set_queue_state("loading")
event_loop = self.server.get_event_loop()
self.pop_queue_handle = event_loop.delay_callback(
0.01, self._pop_job)
0.01, self._pop_job, False
)
else:
self._set_queue_state("ready")
qs = "ready" if self.automatic else "paused"
self._set_queue_state(qs)
def _job_map_to_list(self) -> List[Dict[str, Any]]:
cur_time = time.time()
return [job.as_dict(cur_time) for
@@ -241,27 +257,24 @@ class JobQueue:
'queue_state': self.queue_state
})
async def _handle_job_request(self,
web_request: WebRequest
) -> Dict[str, Any]:
action = web_request.get_action()
if action == "POST":
files: Union[List[str], str] = web_request.get('filenames')
if isinstance(files, str):
files = [f.strip() for f in files.split(',') if f.strip()]
async def _handle_job_request(
self, web_request: WebRequest
) -> Dict[str, Any]:
req_type = web_request.get_request_type()
if req_type == RequestType.POST:
files = web_request.get_list('filenames')
reset = web_request.get_boolean("reset", False)
# Validate that all files exist before queueing
await self.queue_job(files)
elif action == "DELETE":
user = web_request.get_current_user()
await self.queue_job(files, reset=reset, user=user)
elif req_type == RequestType.DELETE:
if web_request.get_boolean("all", False):
await self.delete_job([], all=True)
else:
job_ids: Union[List[str], str] = web_request.get('job_ids')
if isinstance(job_ids, str):
job_ids = [f.strip() for f in job_ids.split(',')
if f.strip()]
job_ids = web_request.get_list('job_ids')
await self.delete_job(job_ids)
else:
raise self.server.error(f"Invalid action: {action}")
raise self.server.error(f"Invalid request type: {req_type}")
return {
'queued_jobs': self._job_map_to_list(),
'queue_state': self.queue_state
@@ -293,18 +306,37 @@ class JobQueue:
'queue_state': self.queue_state
}
async def _handle_jump(self, web_request: WebRequest) -> Dict[str, Any]:
job_id: str = web_request.get("job_id")
async with self.lock:
job = self.queued_jobs.pop(job_id, None)
if job is None:
raise self.server.error(f"Invalid job id: {job_id}")
new_queue = {job_id: job}
new_queue.update(self.queued_jobs)
self.queued_jobs = new_queue
return {
'queued_jobs': self._job_map_to_list(),
'queue_state': self.queue_state
}
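`_handle_jump` leans on dict insertion order (guaranteed since Python 3.7) to move a job to the front without copying the job objects; the same trick in isolation:

```python
queued = {"job_a": "a.gcode", "job_b": "b.gcode", "job_c": "c.gcode"}
job = queued.pop("job_c")
queued = {"job_c": job, **queued}
print(list(queued))   # ['job_c', 'job_a', 'job_b']
```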
async def close(self):
await self.pause_queue()
class QueuedJob:
def __init__(self, filename: str) -> None:
def __init__(self, filename: str, user: Optional[UserInfo] = None) -> None:
self.filename = filename
self.job_id = f"{id(self):016X}"
self.time_added = time.time()
self._user = user
def __str__(self) -> str:
return self.filename
@property
def user(self) -> Optional[UserInfo]:
return self._user
def as_dict(self, cur_time: float) -> Dict[str, Any]:
return {
'filename': self.filename,

View File

@@ -15,34 +15,45 @@ from typing import (
Dict,
List,
)
from ..common import JobEvent, KlippyState
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ..confighelper import ConfigHelper
from .klippy_apis import KlippyAPI
class JobState:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.last_print_stats: Dict[str, Any] = {}
self.last_event: JobEvent = JobEvent.STANDBY
self.server.register_event_handler(
"server:klippy_started", self._handle_started)
"server:klippy_started", self._handle_started
)
self.server.register_event_handler(
"server:status_update", self._status_update)
"server:klippy_disconnect", self._handle_disconnect
)
async def _handle_started(self, state: str) -> None:
if state != "ready":
def _handle_disconnect(self):
state = self.last_print_stats.get("state", "")
if state in ("printing", "paused"):
# set error state
self.last_print_stats["state"] = "error"
self.last_event = JobEvent.ERROR
async def _handle_started(self, state: KlippyState) -> None:
if state != KlippyState.READY:
return
kapis: KlippyAPI = self.server.lookup_component('klippy_apis')
sub: Dict[str, Optional[List[str]]] = {"print_stats": None}
try:
result = await kapis.subscribe_objects(sub)
except self.server.error as e:
logging.info(f"Error subscribing to print_stats")
result = await kapis.subscribe_objects(sub, self._status_update)
except self.server.error:
logging.info("Error subscribing to print_stats")
self.last_print_stats = result.get("print_stats", {})
if "state" in self.last_print_stats:
state = self.last_print_stats["state"]
logging.info(f"Job state initialized: {state}")
async def _status_update(self, data: Dict[str, Any]) -> None:
async def _status_update(self, data: Dict[str, Any], _: float) -> None:
if 'print_stats' not in data:
return
ps = data['print_stats']
@@ -67,8 +78,24 @@ class JobState:
f"Job State Changed - Prev State: {old_state}, "
f"New State: {new_state}"
)
# NOTE: Individual job_state events are DEPRECATED. New modules
# should register handlers for "job_state:state_changed" and
# match against the JobEvent object provided.
self.server.send_event(f"job_state:{new_state}", prev_ps, new_ps)
self.last_event = JobEvent.from_string(new_state)
self.server.send_event(
f"job_state:{new_state}", prev_ps, new_ps)
"job_state:state_changed",
self.last_event,
prev_ps,
new_ps
)
if "info" in ps:
cur_layer: Optional[int] = ps["info"].get("current_layer")
if cur_layer is not None:
total: int = ps["info"].get("total_layer", 0)
self.server.send_event(
"job_state:layer_changed", cur_layer, total
)
self.last_print_stats.update(ps)
def _check_resumed(self,
@@ -84,5 +111,8 @@ class JobState:
def get_last_stats(self) -> Dict[str, Any]:
return dict(self.last_print_stats)
def get_last_job_event(self) -> JobEvent:
return self.last_event
def load_component(config: ConfigHelper) -> JobState:
return JobState(config)
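Modules migrating off the deprecated per-state events register a single handler and match on the JobEvent, as the job_queue component above already does; a hypothetical component would look roughly like:

```python
from ..common import JobEvent

class MyComponent:
    def __init__(self, config):
        self.server = config.get_server()
        self.server.register_event_handler(
            "job_state:state_changed", self._on_job_state_changed
        )

    async def _on_job_state_changed(self, job_event, prev_stats, new_stats):
        if job_event == JobEvent.COMPLETE:
            pass  # handle a finished print
        elif job_event.aborted:  # cancelled or error
            pass
```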

View File

@@ -5,8 +5,12 @@
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
from utils import SentinelClass
from websockets import WebRequest, Subscribable
import logging
from ..utils import Sentinel
from ..common import WebRequest, APITransport, RequestType
import os
import shutil
import json
# Annotation imports
from typing import (
@@ -18,12 +22,16 @@ from typing import (
List,
TypeVar,
Mapping,
Callable,
Coroutine
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from klippy_connection import KlippyConnection as Klippy
from ..confighelper import ConfigHelper
from ..common import UserInfo
from .klippy_connection import KlippyConnection as Klippy
from .file_manager.file_manager import FileManager
Subscription = Dict[str, Optional[List[Any]]]
SubCallback = Callable[[Dict[str, Dict[str, Any]], float], Optional[Coroutine]]
_T = TypeVar("_T")
INFO_ENDPOINT = "info"
@@ -35,31 +43,55 @@ SUBSCRIPTION_ENDPOINT = "objects/subscribe"
STATUS_ENDPOINT = "objects/query"
OBJ_LIST_ENDPOINT = "objects/list"
REG_METHOD_ENDPOINT = "register_remote_method"
SENTINEL = SentinelClass.get_instance()
class KlippyAPI(Subscribable):
class KlippyAPI(APITransport):
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.klippy: Klippy = self.server.lookup_component("klippy_connection")
self.fm: FileManager = self.server.lookup_component("file_manager")
self.eventloop = self.server.get_event_loop()
app_args = self.server.get_app_args()
self.version = app_args.get('software_version')
# Maintain a subscription for all moonraker requests, as
# we do not want to overwrite them
self.host_subscription: Subscription = {}
self.subscription_callbacks: List[SubCallback] = []
# Register GCode Aliases
self.server.register_endpoint(
"/printer/print/pause", ['POST'], self._gcode_pause)
"/printer/print/pause", RequestType.POST, self._gcode_pause
)
self.server.register_endpoint(
"/printer/print/resume", ['POST'], self._gcode_resume)
"/printer/print/resume", RequestType.POST, self._gcode_resume
)
self.server.register_endpoint(
"/printer/print/cancel", ['POST'], self._gcode_cancel)
"/printer/print/cancel", RequestType.POST, self._gcode_cancel
)
self.server.register_endpoint(
"/printer/print/start", ['POST'], self._gcode_start_print)
"/printer/print/start", RequestType.POST, self._gcode_start_print
)
self.server.register_endpoint(
"/printer/restart", ['POST'], self._gcode_restart)
"/printer/restart", RequestType.POST, self._gcode_restart
)
self.server.register_endpoint(
"/printer/firmware_restart", ['POST'], self._gcode_firmware_restart)
"/printer/firmware_restart", RequestType.POST, self._gcode_firmware_restart
)
self.server.register_event_handler(
"server:klippy_disconnect", self._on_klippy_disconnect
)
self.server.register_endpoint(
"/printer/list_endpoints", RequestType.GET, self.list_endpoints
)
self.server.register_endpoint(
"/printer/breakheater", RequestType.POST, self.breakheater
)
self.server.register_endpoint(
"/printer/breakmacro", RequestType.POST, self.breakmacro
)
def _on_klippy_disconnect(self) -> None:
self.host_subscription.clear()
self.subscription_callbacks.clear()
async def _gcode_pause(self, web_request: WebRequest) -> str:
return await self.pause_print()
@@ -72,7 +104,8 @@ class KlippyAPI(Subscribable):
async def _gcode_start_print(self, web_request: WebRequest) -> str:
filename: str = web_request.get_str('filename')
return await self.start_print(filename)
user = web_request.get_current_user()
return await self.start_print(filename, user=user)
async def _gcode_restart(self, web_request: WebRequest) -> str:
return await self.do_restart("RESTART")
@@ -80,32 +113,39 @@ class KlippyAPI(Subscribable):
async def _gcode_firmware_restart(self, web_request: WebRequest) -> str:
return await self.do_restart("FIRMWARE_RESTART")
async def _send_klippy_request(self,
method: str,
params: Dict[str, Any],
default: Any = SENTINEL
) -> Any:
async def _send_klippy_request(
self,
method: str,
params: Dict[str, Any],
default: Any = Sentinel.MISSING,
transport: Optional[APITransport] = None
) -> Any:
try:
result = await self.klippy.request(
WebRequest(method, params, conn=self))
req = WebRequest(method, params, transport=transport or self)
result = await self.klippy.request(req)
except self.server.error:
if isinstance(default, SentinelClass):
if default is Sentinel.MISSING:
raise
result = default
return result
async def run_gcode(self,
script: str,
default: Any = SENTINEL
default: Any = Sentinel.MISSING
) -> str:
params = {'script': script}
result = await self._send_klippy_request(
GCODE_ENDPOINT, params, default)
return result
async def start_print(self, filename: str) -> str:
async def start_print(
self,
filename: str,
wait_klippy_started: bool = False,
user: Optional[UserInfo] = None
) -> str:
# WARNING: Do not call this method from within the following
# event handlers:
# event handlers when "wait_klippy_started" is set to True:
# klippy_identified, klippy_started, klippy_ready, klippy_disconnect
# Doing so will result in "wait_started" blocking for the specifed
# timeout (default 20s) and returning False.
@@ -114,38 +154,78 @@ class KlippyAPI(Subscribable):
filename = filename[1:]
# Escape existing double quotes in the file name
filename = filename.replace("\"", "\\\"")
homedir = os.path.expanduser("~")
if os.path.split(filename)[0].split(os.path.sep)[0] != ".cache":
base_path = os.path.join(homedir, "printer_data/gcodes")
target = os.path.join(".cache", os.path.basename(filename))
cache_path = os.path.join(base_path, ".cache")
if not os.path.exists(cache_path):
os.makedirs(cache_path)
shutil.rmtree(cache_path)
os.makedirs(cache_path)
metadata = self.fm.gcode_metadata.metadata.get(filename, None)
self.copy_file_to_cache(os.path.join(base_path, filename), os.path.join(base_path, target))
msg = "// metadata=" + json.dumps(metadata)
self.server.send_event("server:gcode_response", msg)
filename = target
script = f'SDCARD_PRINT_FILE FILENAME="{filename}"'
await self.klippy.wait_started()
return await self.run_gcode(script)
if wait_klippy_started:
await self.klippy.wait_started()
logging.info(f"Requesting Job Start, filename = {filename}")
ret = await self.run_gcode(script)
self.server.send_event("klippy_apis:job_start_complete", user)
return ret
async def pause_print(
self, default: Union[SentinelClass, _T] = SENTINEL
self, default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, str]:
self.server.send_event("klippy_apis:pause_requested")
logging.info("Requesting job pause...")
return await self._send_klippy_request(
"pause_resume/pause", {}, default)
async def resume_print(
self, default: Union[SentinelClass, _T] = SENTINEL
self, default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, str]:
self.server.send_event("klippy_apis:resume_requested")
logging.info("Requesting job resume...")
return await self._send_klippy_request(
"pause_resume/resume", {}, default)
async def cancel_print(
self, default: Union[SentinelClass, _T] = SENTINEL
self, default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, str]:
self.server.send_event("klippy_apis:cancel_requested")
logging.info("Requesting job cancel...")
# Abort any running macro and any heater wait first so that the
# cancel request is not blocked by a long-running command
await self._send_klippy_request(
"breakmacro", {}, default)
await self._send_klippy_request(
"breakheater", {}, default)
return await self._send_klippy_request(
"pause_resume/cancel", {}, default)
async def breakheater(
self, default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, str]:
return await self._send_klippy_request(
"breakheater", {}, default)
async def breakmacro(
self, default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, str]:
return await self._send_klippy_request(
"breakmacro", {}, default)
async def do_restart(self, gc: str) -> str:
async def do_restart(
self, gc: str, wait_klippy_started: bool = False
) -> str:
# WARNING: Do not call this method from within the following
# event handlers:
# event handlers when "wait_klippy_started" is set to True:
# klippy_identified, klippy_started, klippy_ready, klippy_disconnect
# Doing so will result in "wait_started" blocking for the specifed
# timeout (default 20s) and returning False.
await self.klippy.wait_started()
if wait_klippy_started:
await self.klippy.wait_started()
try:
result = await self.run_gcode(gc)
except self.server.error as e:
@@ -156,7 +236,7 @@ class KlippyAPI(Subscribable):
return result
async def list_endpoints(self,
default: Union[SentinelClass, _T] = SENTINEL
default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, Dict[str, List[str]]]:
return await self._send_klippy_request(
LIST_EPS_ENDPOINT, {}, default)
@@ -166,7 +246,7 @@ class KlippyAPI(Subscribable):
async def get_klippy_info(self,
send_id: bool = False,
default: Union[SentinelClass, _T] = SENTINEL
default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, Dict[str, Any]]:
params = {}
if send_id:
@@ -175,29 +255,36 @@ class KlippyAPI(Subscribable):
return await self._send_klippy_request(INFO_ENDPOINT, params, default)
async def get_object_list(self,
default: Union[SentinelClass, _T] = SENTINEL
default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, List[str]]:
result = await self._send_klippy_request(
OBJ_LIST_ENDPOINT, {}, default)
if isinstance(result, dict) and 'objects' in result:
return result['objects']
return result
if default is not Sentinel.MISSING:
return default
raise self.server.error("Invalid response received from Klippy", 500)
async def query_objects(self,
objects: Mapping[str, Optional[List[str]]],
default: Union[SentinelClass, _T] = SENTINEL
default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, Dict[str, Any]]:
params = {'objects': objects}
result = await self._send_klippy_request(
STATUS_ENDPOINT, params, default)
if isinstance(result, dict) and 'status' in result:
return result['status']
return result
if isinstance(result, dict) and "status" in result:
return result["status"]
if default is not Sentinel.MISSING:
return default
raise self.server.error("Invalid response received from Klippy", 500)
async def subscribe_objects(self,
objects: Mapping[str, Optional[List[str]]],
default: Union[SentinelClass, _T] = SENTINEL
) -> Union[_T, Dict[str, Any]]:
async def subscribe_objects(
self,
objects: Mapping[str, Optional[List[str]]],
callback: Optional[SubCallback] = None,
default: Union[Sentinel, _T] = Sentinel.MISSING
) -> Union[_T, Dict[str, Any]]:
# The host transport shares subscriptions amongst all components
for obj, items in objects.items():
if obj in self.host_subscription:
prev = self.host_subscription[obj]
@@ -208,12 +295,31 @@ class KlippyAPI(Subscribable):
self.host_subscription[obj] = uitems
else:
self.host_subscription[obj] = items
params = {'objects': self.host_subscription}
params = {"objects": dict(self.host_subscription)}
result = await self._send_klippy_request(SUBSCRIPTION_ENDPOINT, params, default)
if isinstance(result, dict) and "status" in result:
if callback is not None:
self.subscription_callbacks.append(callback)
return result["status"]
if default is not Sentinel.MISSING:
return default
raise self.server.error("Invalid response received from Klippy", 500)
async def subscribe_from_transport(
self,
objects: Mapping[str, Optional[List[str]]],
transport: APITransport,
default: Union[Sentinel, _T] = Sentinel.MISSING,
) -> Union[_T, Dict[str, Any]]:
params = {"objects": dict(objects)}
result = await self._send_klippy_request(
SUBSCRIPTION_ENDPOINT, params, default)
if isinstance(result, dict) and 'status' in result:
return result['status']
return result
SUBSCRIPTION_ENDPOINT, params, default, transport
)
if isinstance(result, dict) and "status" in result:
return result["status"]
if default is not Sentinel.MISSING:
return default
raise self.server.error("Invalid response received from Klippy", 500)
async def subscribe_gcode_output(self) -> str:
template = {'response_template':
@@ -226,11 +332,23 @@ class KlippyAPI(Subscribable):
{'response_template': {"method": method_name},
'remote_method': method_name})
def send_status(self,
status: Dict[str, Any],
eventtime: float
) -> None:
def send_status(
self, status: Dict[str, Any], eventtime: float
) -> None:
for cb in self.subscription_callbacks:
self.eventloop.register_callback(cb, status, eventtime)
self.server.send_event("server:status_update", status)
def copy_file_to_cache(self, origin, target):
# Verify the filesystem has enough free space for a copy of the
# file before duplicating it into the cache
stat = os.statvfs("/")
free_space = stat.f_frsize * stat.f_bfree
filesize = os.path.getsize(origin)
if filesize < free_space:
shutil.copy(origin, target)
else:
msg = "!! Insufficient disk space, unable to read the file."
self.server.send_event("server:gcode_response", msg)
raise self.server.error("Insufficient disk space, unable to read the file.", 500)
def load_component(config: ConfigHelper) -> KlippyAPI:
return KlippyAPI(config)
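The reworked subscribe_objects() now accepts an optional per-component callback alongside the shared host subscription. A usage sketch from another component's init (the surrounding component is an assumption; the callback signature matches the SubCallback alias defined above):

import logging

async def component_init(self):
    kapi = self.server.lookup_component("klippy_apis")

    async def _on_status(status, eventtime):
        # Receives only the changed fields, plus the Klippy eventtime
        if "print_stats" in status:
            logging.info(f"print_stats update: {status['print_stats']}")

    await kapi.subscribe_objects(
        {"print_stats": ["state", "filename"]}, callback=_on_status
    )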


@@ -0,0 +1,816 @@
# KlippyConnection - manage unix socket connection to Klipper
#
# Copyright (C) 2022 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import os
import time
import logging
import getpass
import asyncio
import pathlib
from ..utils import ServerError, get_unix_peer_credentials
from ..utils import json_wrapper as jsonw
from ..common import KlippyState, RequestType
# Annotation imports
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Optional,
Callable,
Coroutine,
Dict,
List,
Set,
Tuple,
Union
)
if TYPE_CHECKING:
from ..common import WebRequest, APITransport, BaseRemoteConnection
from ..confighelper import ConfigHelper
from .klippy_apis import KlippyAPI
from .file_manager.file_manager import FileManager
from .machine import Machine
from .job_state import JobState
from .database import MoonrakerDatabase as Database
FlexCallback = Callable[..., Optional[Coroutine]]
Subscription = Dict[str, Optional[List[str]]]
# These endpoints are reserved for klippy/moonraker communication only and are
# not exposed via http or the websocket
RESERVED_ENDPOINTS = [
"list_endpoints",
"gcode/subscribe_output",
"register_remote_method",
]
# Items to exclude from the subscription cache. They never change and can be
# quite large.
CACHE_EXCLUSIONS = {
"configfile": ["config", "settings"]
}
INIT_TIME = .25
LOG_ATTEMPT_INTERVAL = int(2. / INIT_TIME + .5)
MAX_LOG_ATTEMPTS = 10 * LOG_ATTEMPT_INTERVAL
UNIX_BUFFER_LIMIT = 20 * 1024 * 1024
SVC_INFO_KEY = "klippy_connection.service_info"
class KlippyConnection:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.uds_address = config.getpath(
"klippy_uds_address", pathlib.Path("/tmp/klippy_uds")
)
self.writer: Optional[asyncio.StreamWriter] = None
self.connection_mutex: asyncio.Lock = asyncio.Lock()
self.event_loop = self.server.get_event_loop()
self.log_no_access = True
# Connection State
self.connection_task: Optional[asyncio.Task] = None
self.closing: bool = False
self.subscription_lock = asyncio.Lock()
self._klippy_info: Dict[str, Any] = {}
self._klippy_identified: bool = False
self._klippy_initializing: bool = False
self._klippy_started: bool = False
self._methods_registered: bool = False
self._klipper_version: str = ""
self._missing_reqs: Set[str] = set()
self._peer_cred: Dict[str, int] = {}
self._service_info: Dict[str, Any] = {}
self.init_attempts: int = 0
self._state: KlippyState = KlippyState.DISCONNECTED
self._state.set_message("Klippy Disconnected")
self.subscriptions: Dict[APITransport, Subscription] = {}
self.subscription_cache: Dict[str, Dict[str, Any]] = {}
# Set up remote methods accessible to Klippy. Note that all
# registered remote methods should be of the notification type,
# they do not return a response to Klippy after execution
self.pending_requests: Dict[int, KlippyRequest] = {}
self.remote_methods: Dict[str, FlexCallback] = {}
self.klippy_reg_methods: List[str] = []
self.register_remote_method(
'process_gcode_response', self._process_gcode_response,
need_klippy_reg=False)
self.register_remote_method(
'process_status_update', self._process_status_update,
need_klippy_reg=False)
@property
def klippy_apis(self) -> KlippyAPI:
return self.server.lookup_component("klippy_apis")
@property
def state(self) -> KlippyState:
if self.is_connected() and not self._klippy_started:
return KlippyState.STARTUP
return self._state
@property
def state_message(self) -> str:
return self._state.message
@property
def klippy_info(self) -> Dict[str, Any]:
return self._klippy_info
@property
def missing_requirements(self) -> List[str]:
return list(self._missing_reqs)
@property
def peer_credentials(self) -> Dict[str, int]:
return dict(self._peer_cred)
@property
def service_info(self) -> Dict[str, Any]:
return self._service_info
@property
def unit_name(self) -> str:
svc_info = self._service_info
unit_name = svc_info.get("unit_name", "klipper.service")
return unit_name.split(".", 1)[0]
async def component_init(self) -> None:
db: Database = self.server.lookup_component('database')
machine: Machine = self.server.lookup_component("machine")
self._service_info = await db.get_item("moonraker", SVC_INFO_KEY, {})
if self._service_info:
machine.log_service_info(self._service_info)
async def wait_connected(self) -> bool:
if (
self.connection_task is None or
self.connection_task.done()
):
return self.is_connected()
try:
await self.connection_task
except Exception:
pass
return self.is_connected()
async def wait_started(self, timeout: float = 20.) -> bool:
if self.connection_task is None or not self.is_connected():
return False
if not self.connection_task.done():
await asyncio.wait_for(
asyncio.shield(self.connection_task), timeout=timeout)
return self.is_connected()
async def _read_stream(self, reader: asyncio.StreamReader) -> None:
errors_remaining: int = 10
while not reader.at_eof():
try:
data = await reader.readuntil(b'\x03')
except (ConnectionError, asyncio.IncompleteReadError):
break
except asyncio.CancelledError:
logging.exception("Klippy Stream Read Cancelled")
raise
except Exception:
logging.exception("Klippy Stream Read Error")
errors_remaining -= 1
if not errors_remaining or not self.is_connected():
break
continue
errors_remaining = 10
try:
decoded_cmd = jsonw.loads(data[:-1])
self._process_command(decoded_cmd)
except Exception:
logging.exception(
f"Error processing Klippy Host Response: {data.decode()}")
if not self.closing:
logging.debug("Klippy Disconnection From _read_stream()")
await self.close()
async def _write_request(self, request: KlippyRequest) -> None:
if self.writer is None or self.closing:
request.set_exception(ServerError("Klippy Host not connected", 503))
return
data = jsonw.dumps(request.to_dict()) + b"\x03"
try:
self.writer.write(data)
await self.writer.drain()
except asyncio.CancelledError:
request.set_exception(ServerError("Klippy Write Request Cancelled", 503))
raise
except Exception:
request.set_exception(ServerError("Klippy Write Request Error", 503))
if not self.closing:
logging.debug("Klippy Disconnection From _write_request()")
await self.close()
def register_remote_method(self,
method_name: str,
cb: FlexCallback,
need_klippy_reg: bool = True
) -> None:
if method_name in self.remote_methods:
raise self.server.error(
f"Remote method ({method_name}) already registered")
if self.server.is_running():
raise self.server.error(
f"Failed to register remote method {method_name}, "
"methods must be registered during initialization")
self.remote_methods[method_name] = cb
if need_klippy_reg:
# These methods need to be registered with Klippy
self.klippy_reg_methods.append(method_name)
def register_method_from_agent(
self, connection: BaseRemoteConnection, method_name: str
) -> Optional[Awaitable]:
if connection.client_data["type"] != "agent":
raise self.server.error(
"Only connections of the 'agent' type can register methods"
)
if method_name in self.remote_methods:
raise self.server.error(
f"Remote method ({method_name}) already registered"
)
def _on_agent_method_received(**kwargs) -> None:
connection.call_method(method_name, kwargs)
self.remote_methods[method_name] = _on_agent_method_received
self.klippy_reg_methods.append(method_name)
if self._methods_registered and self._state != KlippyState.DISCONNECTED:
coro = self.klippy_apis.register_method(method_name)
return self.event_loop.create_task(coro)
return None
def unregister_method(self, method_name: str):
self.remote_methods.pop(method_name, None)
try:
self.klippy_reg_methods.remove(method_name)
except ValueError:
pass
def connect(self) -> Awaitable[bool]:
if (
self.is_connected() or
not self.server.is_running() or
(self.connection_task is not None and
not self.connection_task.done())
):
# already connecting
fut = self.event_loop.create_future()
fut.set_result(self.is_connected())
return fut
self.connection_task = self.event_loop.create_task(self._do_connect())
return self.connection_task
async def _do_connect(self) -> bool:
async with self.connection_mutex:
while self.writer is None:
await asyncio.sleep(INIT_TIME)
if self.closing or not self.server.is_running():
return False
if not self.uds_address.exists():
continue
if not os.access(str(self.uds_address), os.R_OK | os.W_OK):
if self.log_no_access:
user = getpass.getuser()
logging.info(
f"Cannot connect to Klippy, Linux user '{user}' "
"lacks permission to open Unix Domain Socket: "
f"{self.uds_address}")
self.log_no_access = False
continue
self.log_no_access = True
try:
reader, writer = await self.open_klippy_connection(True)
except asyncio.CancelledError:
raise
except Exception:
continue
logging.info("Klippy Connection Established")
self.writer = writer
if self._get_peer_credentials(writer):
await self._get_service_info(self._peer_cred["process_id"])
self.event_loop.create_task(self._read_stream(reader))
return await self._init_klippy_connection()
async def open_klippy_connection(
self, primary: bool = False
) -> Tuple[asyncio.StreamReader, asyncio.StreamWriter]:
if not primary and not self.is_connected():
raise ServerError("Klippy Unix Connection Not Available", 503)
return await asyncio.open_unix_connection(
str(self.uds_address), limit=UNIX_BUFFER_LIMIT)
def _get_peer_credentials(self, writer: asyncio.StreamWriter) -> bool:
peer_cred = get_unix_peer_credentials(writer, "Klippy")
if not peer_cred:
return False
if peer_cred.get("process_id") == 1:
logging.debug("Klipper Unix Socket created via Systemd Socket Activation")
return False
self._peer_cred = peer_cred
logging.debug(
f"Klippy Connection: Received Peer Credentials: {self._peer_cred}"
)
return True
async def _get_service_info(self, process_id: int) -> None:
machine: Machine = self.server.lookup_component("machine")
provider = machine.get_system_provider()
svc_info = await provider.extract_service_info("klipper", process_id)
if svc_info != self._service_info:
db: Database = self.server.lookup_component('database')
db.insert_item("moonraker", SVC_INFO_KEY, svc_info)
self._service_info = svc_info
machine.log_service_info(svc_info)
async def _init_klippy_connection(self) -> bool:
self._klippy_identified = False
self._klippy_started = False
self._klippy_initializing = True
self._methods_registered = False
self._missing_reqs.clear()
self.init_attempts = 0
self._state = KlippyState.STARTUP
while self.server.is_running():
await asyncio.sleep(INIT_TIME)
await self._check_ready()
if not self._klippy_initializing:
logging.debug("Klippy Connection Initialized")
return True
if not self.is_connected():
self._klippy_initializing = False
break
else:
self.init_attempts += 1
logging.debug("Klippy Connection Failed to Init")
return False
async def _request_endpoints(self) -> None:
result = await self.klippy_apis.list_endpoints(default=None)
if result is None:
return
endpoints = result.get('endpoints', [])
for ep in endpoints:
if ep not in RESERVED_ENDPOINTS:
self.server.register_endpoint(
ep, RequestType.GET | RequestType.POST, self.request,
is_remote=True
)
async def _request_initial_subscriptions(self) -> None:
try:
await self.klippy_apis.subscribe_objects({'webhooks': None})
except ServerError:
logging.exception("Unable to subscribe to webhooks object")
else:
logging.info("Webhooks Subscribed")
try:
await self.klippy_apis.subscribe_gcode_output()
except ServerError:
logging.exception(
"Unable to register gcode output subscription"
)
else:
logging.info("GCode Output Subscribed")
async def _check_ready(self) -> None:
send_id = not self._klippy_identified
result: Dict[str, Any]
try:
result = await self.klippy_apis.get_klippy_info(send_id)
except ServerError as e:
if self.init_attempts % LOG_ATTEMPT_INTERVAL == 0 and \
self.init_attempts <= MAX_LOG_ATTEMPTS:
logging.info(
f"{e}\nKlippy info request error. This indicates that\n"
f"Klippy may have experienced an error during startup.\n"
f"Please check klippy.log for more information")
return
version = result.get("software_version", "")
if version != self._klipper_version:
self._klipper_version = version
msg = f"Klipper Version: {version}"
self.server.add_log_rollover_item("klipper_version", msg)
klipper_pid: Optional[int] = result.get("process_id")
if klipper_pid is not None:
cur_pid: Optional[int] = self._peer_cred.get("process_id")
if cur_pid is None or klipper_pid != cur_pid:
self._peer_cred = dict(
process_id=klipper_pid,
group_id=result.get("group_id", -1),
user_id=result.get("user_id", -1)
)
await self._get_service_info(klipper_pid)
self._klippy_info = dict(result)
state_message: str = self._state.message
if "state_message" in self._klippy_info:
state_message = self._klippy_info["state_message"]
self._state.set_message(state_message)
if "state" not in result:
return
if send_id:
self._klippy_identified = True
await self.server.send_event("server:klippy_identified")
# Request initial endpoints to register info, emergency stop APIs
await self._request_endpoints()
self._state = KlippyState.from_string(result["state"], state_message)
if self._state != KlippyState.STARTUP:
await self._request_initial_subscriptions()
# Register remaining endpoints available
await self._request_endpoints()
startup_state = self._state
await self.server.send_event("server:klippy_started", startup_state)
self._klippy_started = True
if self._state != KlippyState.READY:
logging.info("\n" + self._state.message)
if (
self._state == KlippyState.SHUTDOWN and
startup_state != KlippyState.SHUTDOWN
):
# Klippy shutdown during startup event
self.server.send_event("server:klippy_shutdown")
else:
await self._verify_klippy_requirements()
# register methods with klippy
for method in self.klippy_reg_methods:
try:
await self.klippy_apis.register_method(method)
except ServerError:
logging.exception(
f"Unable to register method '{method}'")
self._methods_registered = True
if self._state == KlippyState.READY:
logging.info("Klippy ready")
await self.server.send_event("server:klippy_ready")
if self._state == KlippyState.SHUTDOWN:
# Klippy shutdown during ready event
self.server.send_event("server:klippy_shutdown")
else:
logging.info(
"Klippy state transition from ready during init, "
f"new state: {self._state}"
)
self._klippy_initializing = False
async def _verify_klippy_requirements(self) -> None:
result = await self.klippy_apis.get_object_list(default=None)
if result is None:
logging.info("Unable to retrieve Klipper Object List")
return
req_objs = set(["virtual_sdcard", "display_status", "pause_resume"])
self._missing_reqs = req_objs - set(result)
if self._missing_reqs:
err_str = ", ".join([f"[{o}]" for o in self._missing_reqs])
logging.info(
f"\nWarning, unable to detect the following printer "
f"objects:\n{err_str}\nPlease add the the above sections "
f"to printer.cfg for full Moonraker functionality.")
if "virtual_sdcard" not in self._missing_reqs:
# Update the gcode path
query_res = await self.klippy_apis.query_objects(
{'configfile': None}, default=None)
if query_res is None:
logging.info("Unable to set SD Card path")
else:
config = query_res.get('configfile', {}).get('config', {})
vsd_config = config.get('virtual_sdcard', {})
vsd_path = vsd_config.get('path', None)
if vsd_path is not None:
file_manager: FileManager = self.server.lookup_component(
'file_manager')
file_manager.validate_gcode_path(vsd_path)
else:
logging.info(
"Configuration for [virtual_sdcard] not found,"
" unable to set SD Card path")
def _process_command(self, cmd: Dict[str, Any]) -> None:
method = cmd.get('method', None)
if method is not None:
# This is a remote method called from klippy
if method in self.remote_methods:
params = cmd.get('params', {})
self.event_loop.register_callback(
self._execute_method, method, **params)
else:
logging.info(f"Unknown method received: {method}")
return
# This is a response to a request, process
req_id = cmd.get('id', None)
request: Optional[KlippyRequest]
request = self.pending_requests.pop(req_id, None)
if request is None:
logging.info(
f"No request matching request ID: {req_id}, "
f"response: {cmd}")
return
if 'result' in cmd:
result = cmd['result']
if not result:
result = "ok"
request.set_result(result)
else:
err: Union[str, Dict[str, str]]
err = cmd.get('error', "Malformed Klippy Response")
if isinstance(err, dict):
err = err.get("message", "Malformed Klippy Response")
request.set_exception(ServerError(err, 400))
async def _execute_method(self, method_name: str, **kwargs) -> None:
try:
ret = self.remote_methods[method_name](**kwargs)
if ret is not None:
await ret
except Exception:
logging.exception(f"Error running remote method: {method_name}")
def _process_gcode_response(self, response: str) -> None:
self.server.send_event("server:gcode_response", response)
def _process_status_update(
self, eventtime: float, status: Dict[str, Dict[str, Any]]
) -> None:
for field, item in status.items():
self.subscription_cache.setdefault(field, {}).update(item)
if 'webhooks' in status:
wh: Dict[str, str] = status['webhooks']
state_message: str = self._state.message
if "state_message" in wh:
state_message = wh["state_message"]
self._state.set_message(state_message)
# XXX - process other states (startup, ready, error, etc)?
if "state" in wh:
new_state = KlippyState.from_string(wh["state"], state_message)
if (
new_state == KlippyState.SHUTDOWN and
not self._klippy_initializing and
self._state != KlippyState.SHUTDOWN
):
# If the shutdown state is received during initialization
# defer the event, the init routine will handle it.
logging.info("Klippy has shutdown")
self.server.send_event("server:klippy_shutdown")
self._state = new_state
for conn, sub in self.subscriptions.items():
conn_status: Dict[str, Any] = {}
for name, fields in sub.items():
if name in status:
val: Dict[str, Any] = dict(status[name])
if fields is not None:
val = {k: v for k, v in val.items() if k in fields}
if val:
conn_status[name] = val
conn.send_status(conn_status, eventtime)
async def request(self, web_request: WebRequest) -> Any:
if not self.is_connected():
raise ServerError("Klippy Host not connected", 503)
rpc_method = web_request.get_endpoint()
if rpc_method == "objects/subscribe":
return await self._request_subscription(web_request)
else:
if rpc_method == "gcode/script":
script = web_request.get_str('script', "")
if script:
self.server.send_event(
"klippy_connection:gcode_received", script)
return await self._request_standard(web_request)
async def _request_subscription(self, web_request: WebRequest) -> Dict[str, Any]:
async with self.subscription_lock:
args = web_request.get_args()
conn = web_request.get_subscribable()
if conn is None:
raise self.server.error(
"No connection associated with subscription request"
)
requested_sub: Subscription = args.get('objects', {})
all_subs: Subscription = dict(requested_sub)
# Build the subscription request from a superset of all client subscriptions
for sub in self.subscriptions.values():
for obj, items in sub.items():
if obj in all_subs:
prev_items = all_subs[obj]
if items is None or prev_items is None:
all_subs[obj] = None
else:
uitems = list(set(prev_items) | set(items))
all_subs[obj] = uitems
else:
all_subs[obj] = items
args['objects'] = all_subs
args['response_template'] = {'method': "process_status_update"}
result = await self._request_standard(web_request, 20.0)
# prune the status response
pruned_status: Dict[str, Dict[str, Any]] = {}
status_diff: Dict[str, Dict[str, Any]] = {}
all_status: Dict[str, Dict[str, Any]] = result['status']
for obj, fields in all_status.items():
# Diff the current cache, then update the cache
if obj in self.subscription_cache:
cached_status = self.subscription_cache[obj]
for field_name, value in fields.items():
if field_name not in cached_status:
continue
if value != cached_status[field_name]:
status_diff.setdefault(obj, {})[field_name] = value
if obj in CACHE_EXCLUSIONS:
# Make a shallow copy so we can pop off fields we want to
# exclude from the cache without modifying the return value
fields_to_cache = dict(fields)
removed: List[str] = []
for excluded_field in CACHE_EXCLUSIONS[obj]:
if excluded_field in fields_to_cache:
removed.append(excluded_field)
del fields_to_cache[excluded_field]
if removed:
logging.debug(
"Removed excluded fields from subscription cache: "
f"{obj}: {removed}"
)
self.subscription_cache[obj] = fields_to_cache
else:
self.subscription_cache[obj] = fields
# Prune Response
if obj in requested_sub:
valid_fields = requested_sub[obj]
if valid_fields is None:
pruned_status[obj] = fields
else:
pruned_status[obj] = {
k: v for k, v in fields.items() if k in valid_fields
}
if status_diff:
# The response to the status request contains changed data, so it
# is necessary to manually push the status update to existing
# subscribers
logging.debug(
f"Detected status difference during subscription: {status_diff}"
)
self._process_status_update(result["eventtime"], status_diff)
for obj_name in list(self.subscription_cache.keys()):
# Prune the cache to match the current status response
if obj_name not in all_status:
del self.subscription_cache[obj_name]
result['status'] = pruned_status
self.subscriptions[conn] = requested_sub
return result
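# Illustration of the superset merge above (values assumed): if one client
# subscribes to {"print_stats": ["state"]} and another to
# {"print_stats": None} (all fields), the request forwarded to Klippy is
# {"print_stats": None}; each client's query response and subsequent
# status updates are then pruned back to the fields it requested.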
async def _request_standard(
self, web_request: WebRequest, timeout: Optional[float] = None
) -> Any:
rpc_method = web_request.get_endpoint()
args = web_request.get_args()
# Create a base klippy request
base_request = KlippyRequest(rpc_method, args)
self.pending_requests[base_request.id] = base_request
self.event_loop.register_callback(self._write_request, base_request)
try:
return await base_request.wait(timeout)
finally:
self.pending_requests.pop(base_request.id, None)
def remove_subscription(self, conn: APITransport) -> None:
self.subscriptions.pop(conn, None)
def is_connected(self) -> bool:
return self.writer is not None and not self.closing
def is_ready(self) -> bool:
return self._state == KlippyState.READY
def is_printing(self) -> bool:
if not self.is_ready():
return False
job_state: JobState = self.server.lookup_component("job_state")
stats = job_state.get_last_stats()
return stats.get("state", "") == "printing"
def get_subscription_cache(self) -> Dict[str, Dict[str, Any]]:
return self.subscription_cache
async def rollover_log(self) -> None:
if "unit_name" not in self._service_info:
raise self.server.error(
"Unable to detect Klipper Service, cannot perform "
"manual rollover"
)
logfile: Optional[str] = self._klippy_info.get("log_file", None)
if logfile is None:
raise self.server.error(
"Unable to detect path to Klipper log file"
)
if self.is_printing():
raise self.server.error("Cannot rollover log while printing")
logpath = pathlib.Path(logfile).expanduser().resolve()
if not logpath.is_file():
raise self.server.error(
f"No file at {logpath} exists, cannot perform rollover"
)
machine: Machine = self.server.lookup_component("machine")
await machine.do_service_action("stop", self.unit_name)
suffix = time.strftime("%Y-%m-%d_%H-%M-%S", time.localtime())
new_path = pathlib.Path(f"{logpath}.{suffix}")
def _do_file_op() -> None:
if new_path.exists():
new_path.unlink()
logpath.rename(new_path)
await self.event_loop.run_in_thread(_do_file_op)
await machine.do_service_action("start", self.unit_name)
async def _on_connection_closed(self) -> None:
self._klippy_identified = False
self._klippy_initializing = False
self._klippy_started = False
self._methods_registered = False
self._state = KlippyState.DISCONNECTED
self._state.set_message("Klippy Disconnected")
for request in self.pending_requests.values():
request.set_exception(ServerError("Klippy Disconnected", 503))
self.pending_requests = {}
self.subscriptions = {}
self.subscription_cache.clear()
self._peer_cred = {}
self._missing_reqs.clear()
logging.info("Klippy Connection Removed")
await self.server.send_event("server:klippy_disconnect")
if self.server.is_running():
# Reconnect if server is running
loop = self.event_loop
self.connection_task = loop.create_task(self._do_connect())
async def close(self, wait_closed: bool = False) -> None:
if self.closing:
if wait_closed:
await self.connection_mutex.acquire()
self.connection_mutex.release()
return
self.closing = True
if (
self.connection_task is not None and
not self.connection_task.done()
):
self.connection_task.cancel()
async with self.connection_mutex:
if self.writer is not None:
try:
self.writer.close()
await self.writer.wait_closed()
except Exception:
logging.exception("Error closing Klippy Unix Socket")
self.writer = None
await self._on_connection_closed()
self.closing = False
# Basic KlippyRequest class, easily converted to dict for json encoding
class KlippyRequest:
def __init__(self, rpc_method: str, params: Dict[str, Any]) -> None:
self.id = id(self)
self.rpc_method = rpc_method
self.params = params
self._fut = asyncio.get_running_loop().create_future()
async def wait(self, timeout: Optional[float] = None) -> Any:
start_time = time.time()
to = timeout or 60.
while True:
try:
return await asyncio.wait_for(asyncio.shield(self._fut), to)
except asyncio.TimeoutError:
if timeout is not None:
self._fut.cancel()
raise ServerError("Klippy request timed out", 500) from None
pending_time = time.time() - start_time
logging.info(
f"Request '{self.rpc_method}' pending: "
f"{pending_time:.2f} seconds"
)
def set_exception(self, exc: Exception) -> None:
if not self._fut.done():
self._fut.set_exception(exc)
def set_result(self, result: Any) -> None:
if not self._fut.done():
self._fut.set_result(result)
def to_dict(self) -> Dict[str, Any]:
return {
'id': self.id,
'method': self.rpc_method,
'params': self.params
}
def load_component(config: ConfigHelper) -> KlippyConnection:
return KlippyConnection(config)
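For reference, the wire protocol handled by _read_stream() and _write_request() above is plain JSON framed by a trailing 0x03 byte. A standalone sketch querying Klippy's "info" endpoint over the socket; the path shown is the configurable default, and Moonraker itself raises the stream buffer limit (see UNIX_BUFFER_LIMIT):

import asyncio
import json

async def query_klippy_info(uds_path="/tmp/klippy_uds"):
    reader, writer = await asyncio.open_unix_connection(uds_path)
    req = {"id": 1, "method": "info", "params": {}}
    # Each request and response is a JSON blob terminated by 0x03
    writer.write(json.dumps(req).encode() + b"\x03")
    await writer.drain()
    data = await reader.readuntil(b"\x03")
    writer.close()
    return json.loads(data[:-1])

# asyncio.run(query_klippy_info())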


@@ -18,7 +18,7 @@ from typing import (
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ..confighelper import ConfigHelper
from ldap3.abstract.entry import Entry
class MoonrakerLDAP:
@@ -46,6 +46,15 @@ class MoonrakerLDAP:
"required when 'bind_dn' is provided"
)
self.bind_password = bind_pass_template.render()
self.user_filter: Optional[str] = None
user_filter_template = config.gettemplate('user_filter', None)
if user_filter_template is not None:
self.user_filter = user_filter_template.render()
if "USERNAME" not in self.user_filter:
raise config.error(
"Section [ldap]: Option 'user_filter' is "
"is missing required token USERNAME"
)
self.lock = asyncio.Lock()
async def authenticate_ldap_user(self, username, password) -> None:
@@ -67,6 +76,8 @@ class MoonrakerLDAP:
}
attr_name = "sAMAccountName" if self.active_directory else "uid"
ldfilt = f"(&(objectClass=Person)({attr_name}={username}))"
if self.user_filter:
ldfilt = self.user_filter.replace("USERNAME", username)
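# Example configuration for the new option (values assumed):
#
#   [ldap]
#   ldap_host: ldap.example.local
#   base_dn: DC=example,DC=local
#   user_filter: (&(objectClass=Person)(uid=USERNAME))
#
# The literal USERNAME token is substituted with the login name above.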
try:
with ldap3.Connection(server, **conn_args) as conn:
ret = conn.search(

File diff suppressed because it is too large.


@@ -8,12 +8,19 @@ from __future__ import annotations
import socket
import asyncio
import logging
import json
import pathlib
import ssl
from collections import deque
import paho.mqtt.client as paho_mqtt
from websockets import Subscribable, WebRequest, JsonRPC, APITransport
import paho.mqtt
from ..common import (
TransportType,
RequestType,
WebRequest,
APITransport,
KlippyState
)
from ..utils import json_wrapper as jsonw
# Annotation imports
from typing import (
@@ -30,12 +37,14 @@ from typing import (
Deque,
)
if TYPE_CHECKING:
from app import APIDefinition
from confighelper import ConfigHelper
from klippy_connection import KlippyConnection as Klippy
from ..confighelper import ConfigHelper
from ..common import JsonRPC, APIDefinition
from ..eventloop import FlexTimer
from .klippy_apis import KlippyAPI
FlexCallback = Callable[[bytes], Optional[Coroutine]]
RPCCallback = Callable[..., Coroutine]
PAHO_MQTT_VERSION = tuple([int(p) for p in paho.mqtt.__version__.split(".")])
DUP_API_REQ_CODE = -10000
MQTT_PROTOCOLS = {
'v3.1': paho_mqtt.MQTTv31,
@@ -54,22 +63,38 @@ class ExtPahoClient(paho_mqtt.Client):
if self._port <= 0:
raise ValueError('Invalid port number.')
self._in_packet = {
"command": 0,
"have_remaining": 0,
"remaining_count": [],
"remaining_mult": 1,
"remaining_length": 0,
"packet": b"",
"to_process": 0,
"pos": 0}
if PAHO_MQTT_VERSION >= (2, 0):
return self._v2_reconnect(sock)
if PAHO_MQTT_VERSION < (1, 6):
# Paho MQTT Version < 1.6.x
self._in_packet = {
"command": 0,
"have_remaining": 0,
"remaining_count": [],
"remaining_mult": 1,
"remaining_length": 0,
"packet": b"",
"to_process": 0,
"pos": 0
}
with self._out_packet_mutex:
self._out_packet = deque() # type: ignore
with self._out_packet_mutex:
with self._current_out_packet_mutex:
self._current_out_packet = None
else:
self._in_packet = {
"command": 0,
"have_remaining": 0,
"remaining_count": [],
"remaining_mult": 1,
"remaining_length": 0,
"packet": bytearray(b""),
"to_process": 0,
"pos": 0
}
self._out_packet = deque() # type: ignore
with self._current_out_packet_mutex:
self._current_out_packet = None
with self._msgtime_mutex:
self._last_msg_in = paho_mqtt.time_func()
self._last_msg_out = paho_mqtt.time_func()
@@ -120,7 +145,7 @@ class ExtPahoClient(paho_mqtt.Client):
sock.do_handshake()
if verify_host:
ssl.match_hostname(sock.getpeercert(), self._host)
ssl.match_hostname(sock.getpeercert(), self._host) # type: ignore
if self._transport == "websockets":
sock.settimeout(self._keepalive)
@@ -137,6 +162,65 @@ class ExtPahoClient(paho_mqtt.Client):
return self._send_connect(self._keepalive)
def _v2_reconnect(self, sock: Optional[socket.socket] = None):
self._in_packet = {
"command": 0,
"have_remaining": 0,
"remaining_count": [],
"remaining_mult": 1,
"remaining_length": 0,
"packet": bytearray(b""),
"to_process": 0,
"pos": 0,
}
self._ping_t = 0.0 # type: ignore
self._state = paho_mqtt._ConnectionState.MQTT_CS_CONNECTING
self._sock_close()
# Mark all currently outgoing QoS = 0 packets as lost,
# or `wait_for_publish()` could hang forever
for pkt in self._out_packet:
if (
pkt["command"] & 0xF0 == paho_mqtt.PUBLISH and
pkt["qos"] == 0 and pkt["info"] is not None
):
pkt["info"].rc = paho_mqtt.MQTT_ERR_CONN_LOST
pkt["info"]._set_as_published()
self._out_packet.clear()
with self._msgtime_mutex:
self._last_msg_in = paho_mqtt.time_func()
self._last_msg_out = paho_mqtt.time_func()
# Put messages in progress in a valid state.
self._messages_reconnect_reset()
with self._callback_mutex:
on_pre_connect = self.on_pre_connect
if on_pre_connect:
try:
on_pre_connect(self, self._userdata)
except Exception as err:
self._easy_log(
paho_mqtt.MQTT_LOG_ERR,
'Caught exception in on_pre_connect: %s', err
)
if not self.suppress_exceptions:
raise
self._sock = sock or self._create_socket()
self._sock.setblocking(False) # type: ignore[attr-defined]
self._registered_write = False
self._call_socket_open(self._sock)
return self._send_connect(self._keepalive)
class SubscriptionHandle:
def __init__(self, topic: str, callback: FlexCallback) -> None:
self.callback = callback
@@ -227,13 +311,13 @@ class AIOHelper:
logging.info("MQTT Misc Loop Complete")
class MQTTClient(APITransport, Subscribable):
class MQTTClient(APITransport):
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.event_loop = self.server.get_event_loop()
self.klippy: Klippy = self.server.lookup_component("klippy_connection")
self.eventloop = self.server.get_event_loop()
self.address: str = config.get('address')
self.port: int = config.getint('port', 1883)
self.tls_enabled: bool = config.getboolean("enable_tls", False)
user = config.gettemplate('username', None)
self.user_name: Optional[str] = None
if user:
@@ -266,7 +350,14 @@ class MQTTClient(APITransport, Subscribable):
raise config.error(
"Option 'default_qos' in section [mqtt] must be "
"between 0 and 2")
self.client = ExtPahoClient(protocol=self.protocol)
self.publish_split_status = \
config.getboolean("publish_split_status", False)
if PAHO_MQTT_VERSION < (2, 0):
self.client = ExtPahoClient(protocol=self.protocol)
else:
self.client = ExtPahoClient(
paho_mqtt.CallbackAPIVersion.VERSION1, protocol=self.protocol
)
self.client.on_connect = self._on_connect
self.client.on_message = self._on_message
self.client.on_disconnect = self._on_disconnect
@@ -280,42 +371,54 @@ class MQTTClient(APITransport, Subscribable):
self.pending_responses: List[asyncio.Future] = []
self.pending_acks: Dict[int, asyncio.Future] = {}
# We don't need to register these endpoints over the MQTT transport as they
# are redundant. MQTT clients can already publish and subscribe.
ep_transports = TransportType.all() & ~TransportType.MQTT
self.server.register_endpoint(
"/server/mqtt/publish", ["POST"],
self._handle_publish_request,
transports=["http", "websocket", "internal"])
"/server/mqtt/publish", RequestType.POST, self._handle_publish_request,
transports=ep_transports
)
self.server.register_endpoint(
"/server/mqtt/subscribe", ["POST"],
"/server/mqtt/subscribe", RequestType.POST,
self._handle_subscription_request,
transports=["http", "websocket", "internal"])
transports=ep_transports
)
# Subscribe to API requests
self.json_rpc = JsonRPC(transport="MQTT")
self.api_request_topic = f"{self.instance_name}/moonraker/api/request"
self.api_resp_topic = f"{self.instance_name}/moonraker/api/response"
self.klipper_status_topic = f"{self.instance_name}/klipper/status"
self.klipper_state_prefix = f"{self.instance_name}/klipper/state"
self.moonraker_status_topic = f"{self.instance_name}/moonraker/status"
status_cfg: Dict[str, Any] = config.getdict("status_objects", {},
allow_empty_fields=True)
self.status_objs: Dict[str, Any] = {}
status_cfg: Dict[str, str] = config.getdict(
"status_objects", {}, allow_empty_fields=True
)
self.status_interval = config.getfloat("status_interval", 0, above=.25)
self.status_cache: Dict[str, Dict[str, Any]] = {}
self.status_update_timer: Optional[FlexTimer] = None
self.last_status_time = 0.
self.status_objs: Dict[str, Optional[List[str]]] = {}
for key, val in status_cfg.items():
if val is not None:
self.status_objs[key] = [v.strip() for v in val.split(',')
if v.strip()]
self.status_objs[key] = [v.strip() for v in val.split(',') if v.strip()]
else:
self.status_objs[key] = None
if status_cfg:
logging.debug(f"MQTT: Status Objects Set: {self.status_objs}")
self.server.register_event_handler("server:klippy_identified",
self._handle_klippy_identified)
self.server.register_event_handler(
"server:klippy_started", self._handle_klippy_started
)
self.server.register_event_handler(
"server:klippy_disconnect", self._handle_klippy_disconnect
)
if self.status_interval:
self.status_update_timer = self.eventloop.register_timer(
self._handle_timed_status_update
)
self.timestamp_deque: Deque = deque(maxlen=20)
self.api_qos = config.getint('api_qos', self.qos)
if config.getboolean("enable_moonraker_api", True):
api_cache = self.server.register_api_transport("mqtt", self)
for api_def in api_cache.values():
if "mqtt" in api_def.supported_transports:
self.register_api_handler(api_def)
self.subscribe_topic(self.api_request_topic,
self._process_api_request,
self.api_qos)
@@ -336,21 +439,31 @@ class MQTTClient(APITransport, Subscribable):
if self.user_name is not None:
self.client.username_pw_set(self.user_name, self.password)
self.client.will_set(self.moonraker_status_topic,
payload=json.dumps({'server': 'offline'}),
payload=jsonw.dumps({'server': 'offline'}),
qos=self.qos, retain=True)
if self.tls_enabled:
self.client.tls_set()
self.client.connect_async(self.address, self.port)
self.connect_task = self.event_loop.create_task(
self.connect_task = self.eventloop.create_task(
self._do_reconnect(first=True)
)
async def _handle_klippy_identified(self) -> None:
async def _handle_klippy_started(self, state: KlippyState) -> None:
if self.status_objs:
args = {'objects': self.status_objs}
try:
await self.klippy.request(
WebRequest("objects/subscribe", args, conn=self))
except self.server.error:
pass
kapi: KlippyAPI = self.server.lookup_component("klippy_apis")
await kapi.subscribe_from_transport(
self.status_objs, self, default=None,
)
if self.status_update_timer is not None:
self.status_update_timer.start(delay=self.status_interval)
def _handle_klippy_disconnect(self):
if self.status_update_timer is not None:
self.status_update_timer.stop()
if self.status_cache:
payload = self.status_cache
self.status_cache = {}
self._publish_status_update(payload, self.last_status_time)
def _on_message(self,
client: str,
@@ -361,7 +474,7 @@ class MQTTClient(APITransport, Subscribable):
if topic in self.subscribed_topics:
cb_hdls = self.subscribed_topics[topic][1]
for hdl in cb_hdls:
self.event_loop.register_callback(
self.eventloop.register_callback(
hdl.callback, message.payload)
else:
logging.debug(
@@ -383,7 +496,7 @@ class MQTTClient(APITransport, Subscribable):
if subs:
res, msg_id = client.subscribe(subs)
if msg_id is not None:
sub_fut: asyncio.Future = asyncio.Future()
sub_fut: asyncio.Future = self.eventloop.create_future()
topics = list(self.subscribed_topics.keys())
sub_fut.add_done_callback(
BrokerAckLogger(topics, "subscribe"))
@@ -457,14 +570,14 @@ class MQTTClient(APITransport, Subscribable):
raise
first = False
try:
sock = await self.event_loop.create_socket_connection(
sock = await self.eventloop.create_socket_connection(
(self.address, self.port), timeout=10
)
self.client.reconnect(sock)
except asyncio.CancelledError:
raise
except Exception as e:
if type(last_err) != type(e) or last_err.args != e.args:
if type(last_err) is not type(e) or last_err.args != e.args:
logging.exception("MQTT Connection Error")
last_err = e
continue
@@ -505,7 +618,7 @@ class MQTTClient(APITransport, Subscribable):
if self.is_connected() and need_sub:
res, msg_id = self.client.subscribe(topic, qos)
if msg_id is not None:
sub_fut: asyncio.Future = asyncio.Future()
sub_fut: asyncio.Future = self.eventloop.create_future()
sub_fut.add_done_callback(
BrokerAckLogger([topic], "subscribe"))
self.pending_acks[msg_id] = sub_fut
@@ -523,7 +636,7 @@ class MQTTClient(APITransport, Subscribable):
del self.subscribed_topics[topic]
res, msg_id = self.client.unsubscribe(topic)
if msg_id is not None:
unsub_fut: asyncio.Future = asyncio.Future()
unsub_fut: asyncio.Future = self.eventloop.create_future()
unsub_fut.add_done_callback(
BrokerAckLogger([topic], "unsubscribe"))
self.pending_acks[msg_id] = unsub_fut
@@ -537,11 +650,11 @@ class MQTTClient(APITransport, Subscribable):
qos = qos or self.qos
if qos > 2 or qos < 0:
raise self.server.error("QOS must be between 0 and 2")
pub_fut: asyncio.Future = asyncio.Future()
pub_fut: asyncio.Future = self.eventloop.create_future()
if isinstance(payload, (dict, list)):
try:
payload = json.dumps(payload)
except json.JSONDecodeError:
payload = jsonw.dumps(payload)
except jsonw.JSONDecodeError:
raise self.server.error(
"Dict or List is not json encodable") from None
elif isinstance(payload, bool):
@@ -584,7 +697,7 @@ class MQTTClient(APITransport, Subscribable):
qos = qos or self.qos
if qos > 2 or qos < 0:
raise self.server.error("QOS must be between 0 and 2")
resp_fut: asyncio.Future = asyncio.Future()
resp_fut: asyncio.Future = self.eventloop.create_future()
resp_hdl = self.subscribe_topic(
response_topic, resp_fut.set_result, qos)
self.pending_responses.append(resp_fut)
@@ -626,7 +739,7 @@ class MQTTClient(APITransport, Subscribable):
topic: str = web_request.get_str("topic")
qos: int = web_request.get_int("qos", self.qos)
timeout: Optional[float] = web_request.get_float('timeout', None)
resp: asyncio.Future = asyncio.Future()
resp: asyncio.Future = self.eventloop.create_future()
hdl: Optional[SubscriptionHandle] = None
try:
hdl = self.subscribe_topic(topic, resp.set_result, qos)
@@ -643,8 +756,8 @@ class MQTTClient(APITransport, Subscribable):
if hdl is not None:
self.unsubscribe(hdl)
try:
payload = json.loads(ret)
except json.JSONDecodeError:
payload = jsonw.loads(ret)
except jsonw.JSONDecodeError:
payload = ret.decode()
return {
'topic': topic,
@@ -652,51 +765,19 @@ class MQTTClient(APITransport, Subscribable):
}
async def _process_api_request(self, payload: bytes) -> None:
response = await self.json_rpc.dispatch(payload.decode())
rpc: JsonRPC = self.server.lookup_component("jsonrpc")
response = await rpc.dispatch(payload, self)
if response is not None:
await self.publish_topic(self.api_resp_topic, response,
self.api_qos)
def register_api_handler(self, api_def: APIDefinition) -> None:
if api_def.callback is None:
# Remote API, uses RPC to reach out to Klippy
mqtt_method = api_def.jrpc_methods[0]
rpc_cb = self._generate_remote_callback(api_def.endpoint)
self.json_rpc.register_method(mqtt_method, rpc_cb)
else:
# Local API, uses local callback
for mqtt_method, req_method in \
zip(api_def.jrpc_methods, api_def.request_methods):
rpc_cb = self._generate_local_callback(
api_def.endpoint, req_method, api_def.callback)
self.json_rpc.register_method(mqtt_method, rpc_cb)
logging.info(
"Registering MQTT JSON-RPC methods: "
f"{', '.join(api_def.jrpc_methods)}")
@property
def transport_type(self) -> TransportType:
return TransportType.MQTT
def remove_api_handler(self, api_def: APIDefinition) -> None:
for jrpc_method in api_def.jrpc_methods:
self.json_rpc.remove_method(jrpc_method)
def _generate_local_callback(self,
endpoint: str,
request_method: str,
callback: Callable[[WebRequest], Coroutine]
) -> RPCCallback:
async def func(args: Dict[str, Any]) -> Any:
self._check_timestamp(args)
result = await callback(WebRequest(endpoint, args, request_method))
return result
return func
def _generate_remote_callback(self, endpoint: str) -> RPCCallback:
async def func(args: Dict[str, Any]) -> Any:
self._check_timestamp(args)
result = await self.klippy.request(WebRequest(endpoint, args))
return result
return func
def _check_timestamp(self, args: Dict[str, Any]) -> None:
def screen_rpc_request(
self, api_def: APIDefinition, req_type: RequestType, args: Dict[str, Any]
) -> None:
ts = args.pop("mqtt_timestamp", None)
if ts is not None:
if ts in self.timestamp_deque:
@@ -706,19 +787,43 @@ class MQTTClient(APITransport, Subscribable):
else:
self.timestamp_deque.append(ts)
def send_status(self,
status: Dict[str, Any],
eventtime: float
) -> None:
def send_status(self, status: Dict[str, Any], eventtime: float) -> None:
if not status or not self.is_connected():
return
payload = {'eventtime': eventtime, 'status': status}
self.publish_topic(self.klipper_status_topic, payload)
if not self.status_interval:
self._publish_status_update(status, eventtime)
else:
for key, val in status.items():
self.status_cache.setdefault(key, {}).update(val)
self.last_status_time = eventtime
def _handle_timed_status_update(self, eventtime: float) -> float:
if self.status_cache:
payload = self.status_cache
self.status_cache = {}
self._publish_status_update(payload, self.last_status_time)
return eventtime + self.status_interval
def _publish_status_update(self, status: Dict[str, Any], eventtime: float) -> None:
if self.publish_split_status:
for objkey in status:
objval = status[objkey]
for statekey in objval:
payload = {'eventtime': eventtime, 'value': objval[statekey]}
self.publish_topic(
f"{self.klipper_state_prefix}/{objkey}/{statekey}",
payload, retain=True)
else:
payload = {'eventtime': eventtime, 'status': status}
self.publish_topic(self.klipper_status_topic, payload)
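# Topic layout illustration (instance name assumed to be "mainsail"):
# with publish_split_status enabled, each changed field is retained at
#   mainsail/klipper/state/print_stats/state
#     -> {"eventtime": <float>, "value": "printing"}
# otherwise the combined payload goes to the single status topic
#   mainsail/klipper/status
#     -> {"eventtime": <float>, "status": {"print_stats": {...}}}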
def get_instance_name(self) -> str:
return self.instance_name
async def close(self) -> None:
if self.status_update_timer is not None:
self.status_update_timer.stop()
if self.connect_task is not None:
self.connect_task.cancel()
self.connect_task = None
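A sketch of driving the JSON-RPC-over-MQTT transport described above with paho-mqtt directly; the broker host and instance name are assumptions, and the mqtt_timestamp parameter feeds the duplicate-request screening in screen_rpc_request():

import json
import paho.mqtt.publish as publish

request = {
    "jsonrpc": "2.0",
    "method": "printer.info",
    "params": {"mqtt_timestamp": 1714000000.0},
    "id": 4564
}
# Request topic: {instance_name}/moonraker/api/request
publish.single(
    "mainsail/moonraker/api/request",
    payload=json.dumps(request),
    hostname="mqtt.local"
)
# The response is published to {instance_name}/moonraker/api/response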


@@ -8,114 +8,112 @@ from __future__ import annotations
import apprise
import logging
import pathlib
import re
from ..common import JobEvent, RequestType
# Annotation imports
from typing import (
TYPE_CHECKING,
Type,
Optional,
Dict,
Any,
List,
Union,
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from . import klippy_apis
APIComp = klippy_apis.KlippyAPI
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .file_manager.file_manager import FileManager
from .klippy_apis import KlippyAPI as APIComp
class Notifier:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.notifiers: Dict[str, NotifierInstance] = {}
self.events: Dict[str, NotifierEvent] = {}
self.events: Dict[str, List[NotifierInstance]] = {}
prefix_sections = config.get_prefix_sections("notifier")
self.register_events(config)
self.register_remote_actions()
for section in prefix_sections:
cfg = config[section]
try:
notifier = NotifierInstance(cfg)
for event in self.events:
if event in notifier.events or "*" in notifier.events:
self.events[event].register_notifier(notifier)
for job_event in list(JobEvent):
if job_event == JobEvent.STANDBY:
continue
evt_name = str(job_event)
if "*" in notifier.events or evt_name in notifier.events:
self.events.setdefault(evt_name, []).append(notifier)
logging.info(f"Registered notifier: '{notifier.get_name()}'")
except Exception as e:
msg = f"Failed to load notifier[{cfg.get_name()}]\n{e}"
self.server.add_warning(msg)
continue
self.notifiers[notifier.get_name()] = notifier
def register_events(self, config: ConfigHelper):
self.register_endpoints(config)
self.server.register_event_handler(
"job_state:state_changed", self._on_job_state_changed
)
self.events["started"] = NotifierEvent(
"started",
"job_state:started",
config)
def register_remote_actions(self):
self.server.register_remote_method("notify", self.notify_action)
self.events["complete"] = NotifierEvent(
"complete",
"job_state:complete",
config)
async def notify_action(self, name: str, message: str = ""):
if name not in self.notifiers:
raise self.server.error(f"Notifier '{name}' not found", 404)
notifier = self.notifiers[name]
await notifier.notify("remote_action", [], message)
self.events["error"] = NotifierEvent(
"error",
"job_state:error",
config)
async def _on_job_state_changed(
self,
job_event: JobEvent,
prev_stats: Dict[str, Any],
new_stats: Dict[str, Any]
) -> None:
evt_name = str(job_event)
for notifier in self.events.get(evt_name, []):
await notifier.notify(evt_name, [prev_stats, new_stats])
self.events["cancelled"] = NotifierEvent(
"cancelled",
"job_state:cancelled",
config)
def register_endpoints(self, config: ConfigHelper):
self.server.register_endpoint(
"/server/notifiers/list", RequestType.GET, self._handle_notifier_list
)
self.server.register_debug_endpoint(
"/debug/notifiers/test", RequestType.POST, self._handle_notifier_test
)
self.events["paused"] = NotifierEvent(
"paused",
"job_state:paused",
config)
async def _handle_notifier_list(
self, web_request: WebRequest
) -> Dict[str, Any]:
return {"notifiers": self._list_notifiers()}
self.events["resumed"] = NotifierEvent(
"resumed",
"job_state:resumed",
config)
def _list_notifiers(self) -> List[Dict[str, Any]]:
return [notifier.as_dict() for notifier in self.notifiers.values()]
async def _handle_notifier_test(
self, web_request: WebRequest
) -> Dict[str, Any]:
class NotifierEvent:
def __init__(self, identifier: str, event_name: str, config: ConfigHelper):
self.identifier = identifier
self.event_name = event_name
self.server = config.get_server()
self.notifiers: Dict[str, NotifierInstance] = {}
self.config = config
name = web_request.get_str("name")
if name not in self.notifiers:
raise self.server.error(f"Notifier '{name}' not found", 404)
notifier = self.notifiers[name]
self.server.register_event_handler(self.event_name, self._handle)
kapis: APIComp = self.server.lookup_component('klippy_apis')
result: Dict[str, Any] = await kapis.query_objects(
{'print_stats': None}, default={})
print_stats = result.get('print_stats', {})
print_stats["filename"] = "notifier_test.gcode" # Mock the filename
def register_notifier(self, notifier: NotifierInstance):
self.notifiers[notifier.get_name()] = notifier
async def _handle(self, *args) -> None:
logging.info(f"'{self.identifier}' notifier event triggered'")
await self.invoke_notifiers(args)
async def invoke_notifiers(self, args):
for notifier_name in self.notifiers:
try:
notifier = self.notifiers[notifier_name]
await notifier.notify(self.identifier, args)
except Exception as e:
logging.info(f"Failed to notify [{notifier_name}]\n{e}")
continue
await notifier.notify(notifier.events[0], [print_stats, print_stats])
return {
"status": "success",
"stats": print_stats
}
class NotifierInstance:
def __init__(self, config: ConfigHelper) -> None:
self.config = config
name_parts = config.get_name().split(maxsplit=1)
if len(name_parts) != 2:
@@ -123,32 +121,40 @@ class NotifierInstance:
self.server = config.get_server()
self.name = name_parts[1]
self.apprise = apprise.Apprise()
self.warned = False
self.attach_requires_file_system_check = True
self.attach = config.get("attach", None)
if self.attach is None or \
(self.attach.startswith("http://") or
self.attach.startswith("https://")):
self.attach_requires_file_system_check = False
url_template = config.gettemplate('url')
self.attach = config.gettemplate("attach", None)
url_template = config.gettemplate("url")
self.url = url_template.render()
if len(self.url) < 2:
if re.match(r"\w+?://", self.url) is None:
raise config.error(f"Invalid url for: {config.get_name()}")
self.title = config.gettemplate('title', None)
self.title = config.gettemplate("title", None)
self.body = config.gettemplate("body", None)
upper_body_format = config.get("body_format", 'text').upper()
if not hasattr(apprise.NotifyFormat, upper_body_format):
raise config.error(f"Invalid body_format for {config.get_name()}")
self.body_format = getattr(apprise.NotifyFormat, upper_body_format)
self.events: List[str] = config.getlist("events", separator=",")
self.apprise.add(self.url)
async def notify(self, event_name: str, event_args: List) -> None:
def as_dict(self):
return {
"name": self.name,
"url": self.config.get("url"),
"title": self.config.get("title", None),
"body": self.config.get("body", None),
"body_format": self.config.get("body_format", None),
"events": self.events,
"attach": self.attach
}
async def notify(
self, event_name: str, event_args: List, message: str = ""
) -> None:
context = {
"event_name": event_name,
"event_args": event_args
"event_args": event_args,
"event_message": message
}
rendered_title = (
@@ -159,22 +165,47 @@ class NotifierInstance:
)
# Verify the attachment
if self.attach_requires_file_system_check and self.attach is not None:
fm = self.server.lookup_component("file_manager")
if not fm.can_access_path(self.attach):
if not self.warned:
self.server.add_warning(
f"Attachment of notifier '{self.name}' is not "
"valid. The location of the "
"attachment is not "
"accessible.")
self.warned = True
attachments: List[str] = []
if self.attach is not None:
fm: FileManager = self.server.lookup_component("file_manager")
try:
rendered = self.attach.render(context)
except self.server.error:
logging.exception(f"notifier {self.name}: Failed to render attachment")
self.server.add_warning(
f"[notifier {self.name}]: The attachment is not valid. The "
"template failed to render.",
f"notifier {self.name}"
)
self.attach = None
else:
for item in rendered.splitlines():
item = item.strip()
if not item:
continue
if re.match(r"https?://", item) is not None:
# Attachment is a url, system check not necessary
attachments.append(item)
continue
attach_path = pathlib.Path(item).expanduser().resolve()
if not attach_path.is_file():
self.server.add_warning(
f"[notifier {self.name}]: Invalid attachment detected, "
f"file does not exist: {attach_path}.",
f"notifier {self.name}"
)
elif not fm.can_access_path(attach_path):
self.server.add_warning(
f"[notifier {self.name}]: Invalid attachment detected, "
f"no read permission for the file {attach_path}.",
f"notifier {self.name}"
)
else:
attachments.append(str(attach_path))
await self.apprise.async_notify(
rendered_body.strip(),
rendered_title.strip(),
attach=self.attach
rendered_body.strip(), rendered_title.strip(),
body_format=self.body_format,
attach=None if not attachments else attachments
)
def get_name(self) -> str:
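The rewritten notify() above renders the attach option as a template and accepts one attachment per line, passing http(s) URLs through untouched and checking local paths for existence and read permission. A minimal config sketch of that behavior (the apprise URL scheme, secret names, and file paths below are illustrative placeholders):

[notifier print_updates]
url: tgram://{secrets.telegram.token}/{secrets.telegram.chat_id}
events: complete, error
body: Job {event_args[1].filename} -> {event_name}
attach:
    https://printer.local/webcam/snapshot.jpg
    ~/printer_data/thumbnails/last_print.png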

View File

@@ -6,6 +6,7 @@
from __future__ import annotations
import logging
from ..common import RequestType, TransportType, KlippyState
# Annotation imports
from typing import (
@@ -15,8 +16,9 @@ from typing import (
List,
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from .klippy_connection import KlippyConnection
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .klippy_apis import KlippyAPI as APIComp
from .file_manager.file_manager import FileManager
from .job_queue import JobQueue
@@ -65,22 +67,27 @@ class OctoPrintCompat:
# Version & Server information
self.server.register_endpoint(
'/api/version', ['GET'], self._get_version,
transports=['http'], wrap_result=False)
'/api/version', RequestType.GET, self._get_version,
transports=TransportType.HTTP, wrap_result=False
)
self.server.register_endpoint(
'/api/server', ['GET'], self._get_server,
transports=['http'], wrap_result=False)
'/api/server', RequestType.GET, self._get_server,
transports=TransportType.HTTP, wrap_result=False
)
# Login, User & Settings
self.server.register_endpoint(
'/api/login', ['POST'], self._post_login_user,
transports=['http'], wrap_result=False)
'/api/login', RequestType.POST, self._post_login_user,
transports=TransportType.HTTP, wrap_result=False
)
self.server.register_endpoint(
'/api/currentuser', ['GET'], self._post_login_user,
transports=['http'], wrap_result=False)
'/api/currentuser', RequestType.GET, self._post_login_user,
transports=TransportType.HTTP, wrap_result=False
)
self.server.register_endpoint(
'/api/settings', ['GET'], self._get_settings,
transports=['http'], wrap_result=False)
'/api/settings', RequestType.GET, self._get_settings,
transports=TransportType.HTTP, wrap_result=False
)
# File operations
# Note that file upload is handled in file_manager.py
@@ -88,30 +95,34 @@ class OctoPrintCompat:
# Job operations
self.server.register_endpoint(
'/api/job', ['GET'], self._get_job,
transports=['http'], wrap_result=False)
'/api/job', RequestType.GET, self._get_job,
transports=TransportType.HTTP, wrap_result=False
)
# TODO: start/cancel/restart/pause jobs
# Printer operations
self.server.register_endpoint(
'/api/printer', ['GET'], self._get_printer,
transports=['http'], wrap_result=False)
'/api/printer', RequestType.GET, self._get_printer,
transports=TransportType.HTTP, wrap_result=False)
self.server.register_endpoint(
'/api/printer/command', ['POST'], self._post_command,
transports=['http'], wrap_result=False)
'/api/printer/command', RequestType.POST, self._post_command,
transports=TransportType.HTTP, wrap_result=False
)
# TODO: head/tool/bed/chamber specific read/issue
# Printer profiles
self.server.register_endpoint(
'/api/printerprofiles', ['GET'], self._get_printerprofiles,
transports=['http'], wrap_result=False)
'/api/printerprofiles', RequestType.GET, self._get_printerprofiles,
transports=TransportType.HTTP, wrap_result=False
)
# Upload Handlers
self.server.register_upload_handler(
"/api/files/local", location_prefix="api/files/moonraker")
self.server.register_endpoint(
"/api/files/moonraker/(?P<relative_path>.+)", ['POST'],
self._select_file, transports=['http'], wrap_result=False)
"/api/files/moonraker/(?P<relative_path>.+)", RequestType.POST,
self._select_file, transports=TransportType.HTTP, wrap_result=False
)
# System
# TODO: shutdown/reboot/restart operations
@@ -143,10 +154,11 @@ class OctoPrintCompat:
data.update(status[heater_name])
def printer_state(self) -> str:
klippy_state = self.server.get_klippy_state()
if klippy_state in ["disconnected", "startup"]:
kconn: KlippyConnection = self.server.lookup_component("klippy_connection")
klippy_state = kconn.state
if not klippy_state.startup_complete():
return 'Offline'
elif klippy_state != 'ready':
elif klippy_state != KlippyState.READY:
return 'Error'
return {
'standby': 'Operational',
@@ -192,11 +204,11 @@ class OctoPrintCompat:
"""
Server status
"""
klippy_state = self.server.get_klippy_state()
kconn: KlippyConnection = self.server.lookup_component("klippy_connection")
klippy_state = kconn.state
return {
'server': OCTO_VERSION,
'safemode': (
None if klippy_state == 'ready' else 'settings')
'safemode': None if klippy_state == KlippyState.READY else 'settings'
}
async def _post_login_user(self,
@@ -355,12 +367,12 @@ class OctoPrintCompat:
async def _select_file(self,
web_request: WebRequest
) -> None:
command: str = web_request.get('command')
rel_path: str = web_request.get('relative_path')
command: str = web_request.get_str('command')
rel_path: str = web_request.get_str('relative_path')
root, filename = rel_path.strip("/").split("/", 1)
fmgr: FileManager = self.server.lookup_component('file_manager')
if command == "select":
start_print: bool = web_request.get('print', False)
start_print: bool = web_request.get_boolean('print', False)
if not start_print:
# No-op, selecting a file has no meaning in Moonraker
return
@@ -376,9 +388,10 @@ class OctoPrintCompat:
except self.server.error:
pstate = "not_avail"
started: bool = False
user = web_request.get_current_user()
if pstate not in ["printing", "paused", "not_avail"]:
try:
await self.klippy_apis.start_print(filename)
await self.klippy_apis.start_print(filename, user=user)
except self.server.error:
started = False
else:
@@ -388,7 +401,7 @@ class OctoPrintCompat:
if fmgr.upload_queue_enabled():
job_queue: JobQueue = self.server.lookup_component(
'job_queue')
await job_queue.queue_job(filename, check_exists=False)
await job_queue.queue_job(filename, check_exists=False, user=user)
logging.debug(f"Job '{filename}' queued via OctoPrint API")
else:
raise self.server.error("Conflict", 409)
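This commit replaces string method lists such as ['GET'] with the RequestType and TransportType flag enums throughout. Because RequestType is a flag enum, a single handler can be registered for combined types and branch on the incoming request, as the spoolman component later in this diff does. A sketch of the pattern (the endpoint path and attribute are illustrative):

self.server.register_endpoint(
    "/server/example/value", RequestType.GET | RequestType.POST,
    self._handle_value
)

async def _handle_value(self, web_request: WebRequest) -> Dict[str, Any]:
    # POST updates the stored value; GET simply reports it
    if web_request.get_request_type() == RequestType.POST:
        self.value = web_request.get_int("value")
    return {"value": self.value}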

View File

@@ -8,12 +8,12 @@ from __future__ import annotations
import serial
import os
import time
import json
import errno
import logging
import asyncio
from collections import deque
from utils import ServerError
from ..utils import ServerError
from ..utils import json_wrapper as jsonw
# Annotation imports
from typing import (
@@ -28,11 +28,10 @@ from typing import (
Coroutine,
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from . import klippy_apis
from .file_manager import file_manager
APIComp = klippy_apis.KlippyAPI
FMComp = file_manager.FileManager
from ..confighelper import ConfigHelper
from .klippy_connection import KlippyConnection
from .klippy_apis import KlippyAPI as APIComp
from .file_manager.file_manager import FileManager as FMComp
FlexCallback = Callable[..., Optional[Coroutine]]
MIN_EST_TIME = 10.
@@ -169,10 +168,8 @@ class PanelDue:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.event_loop = self.server.get_event_loop()
self.file_manager: FMComp = \
self.server.lookup_component('file_manager')
self.klippy_apis: APIComp = \
self.server.lookup_component('klippy_apis')
self.file_manager: FMComp = self.server.lookup_component('file_manager')
self.klippy_apis: APIComp = self.server.lookup_component('klippy_apis')
self.kinematics: str = "none"
self.machine_name = config.get('machine_name', "Klipper")
self.firmware_name: str = "Repetier | Klipper"
@@ -184,10 +181,8 @@ class PanelDue:
self.debug_queue: Deque[str] = deque(maxlen=100)
# Initialize tracked state.
self.printer_state: Dict[str, Dict[str, Any]] = {
'gcode_move': {}, 'toolhead': {}, 'virtual_sdcard': {},
'fan': {}, 'display_status': {}, 'print_stats': {},
'idle_timeout': {}, 'gcode_macro PANELDUE_BEEP': {}}
kconn: KlippyConnection = self.server.lookup_component("klippy_connection")
self.printer_state: Dict[str, Dict[str, Any]] = kconn.get_subscription_cache()
self.extruder_count: int = 0
self.heaters: List[str] = []
self.is_ready: bool = False
@@ -218,26 +213,24 @@ class PanelDue:
# command is the value
self.confirmed_macros = {m.split()[0]: m for m in conf_macros}
self.available_macros.update(self.confirmed_macros)
self.non_trivial_keys = config.getlist('non_trivial_keys',
["Klipper state"])
self.non_trivial_keys = config.getlist('non_trivial_keys', ["Klipper state"])
self.ser_conn = SerialConnection(config, self)
logging.info("PanelDue Configured")
# Register server events
self.server.register_event_handler(
"server:klippy_ready", self._process_klippy_ready)
"server:klippy_ready", self._process_klippy_ready
)
self.server.register_event_handler(
"server:klippy_shutdown", self._process_klippy_shutdown)
"server:klippy_shutdown", self._process_klippy_shutdown
)
self.server.register_event_handler(
"server:klippy_disconnect", self._process_klippy_disconnect)
"server:klippy_disconnect", self._process_klippy_disconnect
)
self.server.register_event_handler(
"server:status_update", self.handle_status_update)
self.server.register_event_handler(
"server:gcode_response", self.handle_gcode_response)
self.server.register_remote_method(
"paneldue_beep", self.paneldue_beep)
"server:gcode_response", self.handle_gcode_response
)
self.server.register_remote_method("paneldue_beep", self.paneldue_beep)
# These commands are directly executed on the server and do not need to
# make a request to Klippy
@@ -270,12 +263,12 @@ class PanelDue:
async def _process_klippy_ready(self) -> None:
# Request "info" and "configfile" status
retries = 10
printer_info = cfg_status = {}
printer_info: Dict[str, Any] = {}
cfg_status: Dict[str, Any] = {}
while retries:
try:
printer_info = await self.klippy_apis.get_klippy_info()
cfg_status = await self.klippy_apis.query_objects(
{'configfile': None})
cfg_status = await self.klippy_apis.query_objects({'configfile': None})
except self.server.error:
logging.exception("PanelDue initialization request failed")
retries -= 1
@@ -285,10 +278,8 @@ class PanelDue:
continue
break
self.firmware_name = "Repetier | Klipper " + \
printer_info['software_version']
config: Dict[str, Any] = cfg_status.get(
'configfile', {}).get('config', {})
self.firmware_name = "Repetier | Klipper " + printer_info['software_version']
config: Dict[str, Any] = cfg_status.get('configfile', {}).get('config', {})
printer_cfg: Dict[str, Any] = config.get('printer', {})
self.kinematics = printer_cfg.get('kinematics', "none")
@@ -298,34 +289,35 @@ class PanelDue:
f"Kinematics: {self.kinematics}\n"
f"Printer Config: {config}\n")
# Initialize printer state and make subscription request
self.printer_state = {
'gcode_move': {}, 'toolhead': {}, 'virtual_sdcard': {},
'fan': {}, 'display_status': {}, 'print_stats': {},
'idle_timeout': {}, 'gcode_macro PANELDUE_BEEP': {}}
sub_args = {k: None for k in self.printer_state.keys()}
# Make subscription request
sub_args: Dict[str, Optional[List[str]]] = {
"motion_report": None,
"gcode_move": None,
"toolhead": None,
"virtual_sdcard": None,
"fan": None,
"display_status": None,
"print_stats": None,
"idle_timeout": None,
"gcode_macro PANELDUE_BEEP": None
}
self.extruder_count = 0
self.heaters = []
extruders = []
for cfg in config:
if cfg.startswith("extruder"):
self.extruder_count += 1
self.printer_state[cfg] = {}
extruders.append(cfg)
sub_args[cfg] = None
elif cfg == "heater_bed":
self.printer_state[cfg] = {}
self.heaters.append(cfg)
sub_args[cfg] = None
extruders.sort()
self.heaters.extend(extruders)
try:
status: Dict[str, Any]
status = await self.klippy_apis.subscribe_objects(sub_args)
await self.klippy_apis.subscribe_objects(sub_args)
except self.server.error:
logging.exception("Unable to complete subscription request")
else:
self.printer_state.update(status)
self.is_shutdown = False
self.is_ready = True
@@ -336,15 +328,9 @@ class PanelDue:
# Tell the PD that the printer is "off"
self.write_response({'status': 'O'})
self.last_printer_state = 'O'
self.is_ready = False
self.is_shutdown = False
def handle_status_update(self, status: Dict[str, Any]) -> None:
for obj, items in status.items():
if obj in self.printer_state:
self.printer_state[obj].update(items)
else:
self.printer_state[obj] = items
def paneldue_beep(self, frequency: int, duration: float) -> None:
duration = int(duration * 1000.)
self.write_response(
@@ -550,8 +536,8 @@ class PanelDue:
return
def write_response(self, response: Dict[str, Any]) -> None:
byte_resp = json.dumps(response) + "\r\n"
self.ser_conn.send(byte_resp.encode())
byte_resp = jsonw.dumps(response) + b"\r\n"
self.ser_conn.send(byte_resp)
def _get_printer_status(self) -> str:
# PanelDue States applicable to Klipper:
@@ -561,9 +547,9 @@ class PanelDue:
if self.is_shutdown:
return 'S'
printer_state = self.printer_state
p_state = self.printer_state
sd_state: str
sd_state = printer_state['print_stats'].get('state', "standby")
sd_state = p_state.get("print_stats", {}).get("state", "standby")
if sd_state == "printing":
if self.last_printer_state == 'A':
# Resuming
@@ -571,8 +557,9 @@ class PanelDue:
# Printing
return 'P'
elif sd_state == "paused":
p_active = printer_state['idle_timeout'].get(
'state', 'Idle') == "Printing"
p_active = (
p_state.get("idle_timeout", {}).get("state", 'Idle') == "Printing"
)
if p_active and self.last_printer_state != 'A':
# Pausing
return 'D'
@@ -618,25 +605,28 @@ class PanelDue:
response['axes'] = 3
p_state = self.printer_state
toolhead = p_state.get("toolhead", {})
gcode_move = p_state.get("gcode_move", {})
self.last_printer_state = self._get_printer_status()
response['status'] = self.last_printer_state
response['babystep'] = round(p_state['gcode_move'].get(
'homing_origin', [0., 0., 0., 0.])[2], 3)
response['babystep'] = round(
gcode_move.get('homing_origin', [0., 0., 0., 0.])[2], 3
)
# Current position
pos: List[float]
homed_pos: str
sfactor: float
pos = p_state['toolhead'].get('position', [0., 0., 0., 0.])
pos = p_state.get("motion_report", {}).get('live_position', [0., 0., 0., 0.])
response['pos'] = [round(p, 2) for p in pos[:3]]
homed_pos = p_state['toolhead'].get('homed_axes', "")
homed_pos = toolhead.get('homed_axes', "")
response['homed'] = [int(a in homed_pos) for a in "xyz"]
sfactor = round(p_state['gcode_move'].get('speed_factor', 1.) * 100, 2)
sfactor = round(gcode_move.get('speed_factor', 1.) * 100, 2)
response['sfactor'] = sfactor
# Print Progress Tracking
sd_status = p_state['virtual_sdcard']
print_stats = p_state['print_stats']
sd_status = p_state.get('virtual_sdcard', {})
print_stats = p_state.get('print_stats', {})
fname: str = print_stats.get('filename', "")
sd_print_state: Optional[str] = print_stats.get('state')
if sd_print_state in ['printing', 'paused']:
@@ -664,8 +654,9 @@ class PanelDue:
obj_height: Optional[float]
obj_height = self.file_metadata.get('object_height')
if obj_height:
cur_height: float = p_state['gcode_move'].get(
'gcode_position', [0., 0., 0., 0.])[2]
cur_height: float = gcode_move.get(
'gcode_position', [0., 0., 0., 0.]
)[2]
hpct = min(1., cur_height / obj_height)
times_left.append(int(est_time - est_time * hpct))
else:
@@ -679,13 +670,13 @@ class PanelDue:
self.current_file = ""
self.file_metadata = {}
fan_speed: Optional[float] = p_state['fan'].get('speed')
fan_speed: Optional[float] = p_state.get('fan', {}).get('speed')
if fan_speed is not None:
response['fanPercent'] = [round(fan_speed * 100, 1)]
extruder_name: str = ""
if self.extruder_count > 0:
extruder_name = p_state['toolhead'].get('extruder', "")
extruder_name = toolhead.get('extruder', "")
if extruder_name:
tool = 0
if extruder_name != "extruder":
@@ -693,12 +684,12 @@ class PanelDue:
response['tool'] = tool
# Report Heater Status
efactor: float = round(p_state['gcode_move'].get(
'extrude_factor', 1.) * 100., 2)
efactor: float = round(gcode_move.get('extrude_factor', 1.) * 100., 2)
for name in self.heaters:
temp: float = round(p_state[name].get('temperature', 0.0), 1)
target: float = round(p_state[name].get('target', 0.0), 1)
htr_state = p_state.get(name, {})
temp: float = round(htr_state.get('temperature', 0.0), 1)
target: float = round(htr_state.get('target', 0.0), 1)
response.setdefault('heaters', []).append(temp)
response.setdefault('active', []).append(target)
response.setdefault('standby', []).append(target)
@@ -711,7 +702,7 @@ class PanelDue:
response.setdefault('hstat', []).append(2 if target else 0)
# Display message (via M117)
msg: str = p_state['display_status'].get('message', "")
msg: str = p_state.get('display_status', {}).get('message', "")
if msg and msg != self.last_message:
response['message'] = msg
# reset the message so it only shows once. The paneldue
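The status handlers above consistently swap direct indexing for chained .get() lookups because printer_state is now a shared subscription cache that may not yet contain a given object before Klippy is ready. A one-line sketch of the failure mode this avoids:

p_state: Dict[str, Dict[str, Any]] = {}  # cache before the first status update
# old style raised KeyError here: p_state['print_stats']['state']
sd_state = p_state.get("print_stats", {}).get("state", "standby")  # -> "standby"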

File diff suppressed because it is too large

View File

@@ -6,12 +6,16 @@
from __future__ import annotations
import asyncio
import struct
import fcntl
import time
import re
import os
import pathlib
import logging
from collections import deque
from ..utils import ioctl_macros
from ..common import RequestType
# Annotation imports
from typing import (
@@ -26,12 +30,13 @@ from typing import (
Dict,
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest, WebsocketManager
from . import shell_command
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .websockets import WebsocketManager
STAT_CALLBACK = Callable[[int], Optional[Awaitable]]
VC_GEN_CMD_FILE = "/usr/bin/vcgencmd"
VCIO_PATH = "/dev/vcio"
STATM_FILE_PATH = "/proc/self/smaps_rollup"
NET_DEV_PATH = "/proc/net/dev"
TEMPERATURE_PATH = "/sys/class/thermal/thermal_zone0/temp"
@@ -61,13 +66,10 @@ class ProcStats:
self.watchdog = Watchdog(self)
self.stat_update_timer = self.event_loop.register_timer(
self._handle_stat_update)
self.vcgencmd: Optional[shell_command.ShellCommand] = None
if os.path.exists(VC_GEN_CMD_FILE):
self.vcgencmd: Optional[VCGenCmd] = None
if os.path.exists(VC_GEN_CMD_FILE) and os.path.exists(VCIO_PATH):
logging.info("Detected 'vcgencmd', throttle checking enabled")
shell_cmd: shell_command.ShellCommandFactory
shell_cmd = self.server.load_component(config, "shell_command")
self.vcgencmd = shell_cmd.build_shell_command(
"vcgencmd get_throttled")
self.vcgencmd = VCGenCmd()
self.server.register_notification("proc_stats:cpu_throttled")
else:
logging.info("Unable to find 'vcgencmd', throttle checking "
@@ -78,9 +80,11 @@ class ProcStats:
self.cpu_stats_file = pathlib.Path(CPU_STAT_PATH)
self.meminfo_file = pathlib.Path(MEM_AVAIL_PATH)
self.server.register_endpoint(
"/machine/proc_stats", ["GET"], self._handle_stat_request)
"/machine/proc_stats", RequestType.GET, self._handle_stat_request
)
self.server.register_event_handler(
"server:klippy_shutdown", self._handle_shutdown)
"server:klippy_shutdown", self._handle_shutdown
)
self.server.register_notification("proc_stats:proc_stat_update")
self.proc_stat_queue: Deque[Dict[str, Any]] = deque(maxlen=30)
self.last_update_time = time.time()
@@ -170,17 +174,19 @@ class ProcStats:
'system_memory': self.memory_usage,
'websocket_connections': websocket_count
})
if not self.update_sequence % THROTTLE_CHECK_INTERVAL:
if self.vcgencmd is not None:
ts = await self._check_throttled_state()
cur_throttled = ts['bits']
if cur_throttled & ~self.total_throttled:
self.server.add_log_rollover_item(
'throttled', f"CPU Throttled Flags: {ts['flags']}")
if cur_throttled != self.last_throttled:
self.server.send_event("proc_stats:cpu_throttled", ts)
self.last_throttled = cur_throttled
self.total_throttled |= cur_throttled
if (
not self.update_sequence % THROTTLE_CHECK_INTERVAL
and self.vcgencmd is not None
):
ts = await self._check_throttled_state()
cur_throttled = ts['bits']
if cur_throttled & ~self.total_throttled:
self.server.add_log_rollover_item(
'throttled', f"CPU Throttled Flags: {ts['flags']}")
if cur_throttled != self.last_throttled:
self.server.send_event("proc_stats:cpu_throttled", ts)
self.last_throttled = cur_throttled
self.total_throttled |= cur_throttled
for cb in self.stat_callbacks:
ret = cb(self.update_sequence)
if ret is not None:
@@ -191,19 +197,18 @@ class ProcStats:
return eventtime + STAT_UPDATE_TIME
async def _check_throttled_state(self) -> Dict[str, Any]:
async with self.throttle_check_lock:
assert self.vcgencmd is not None
try:
resp = await self.vcgencmd.run_with_response(
timeout=.5, log_complete=False)
ts = int(resp.strip().split("=")[-1], 16)
except Exception:
return {'bits': 0, 'flags': ["?"]}
flags = []
for flag, desc in THROTTLED_FLAGS.items():
if flag & ts:
flags.append(desc)
return {'bits': ts, 'flags': flags}
ret = {'bits': 0, 'flags': ["?"]}
if self.vcgencmd is not None:
async with self.throttle_check_lock:
try:
resp = await self.event_loop.run_in_thread(self.vcgencmd.run)
ret["bits"] = tstate = int(resp.strip().split("=")[-1], 16)
ret["flags"] = [
desc for flag, desc in THROTTLED_FLAGS.items() if flag & tstate
]
except Exception:
pass
return ret
def _read_system_files(self) -> Tuple:
mem, units = self._get_memory_usage()
@@ -242,7 +247,13 @@ class ProcStats:
parsed_stats = stats.strip().split()
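# /proc/net/dev layout after the interface name: rx bytes, packets, errs,
# drop, fifo, frame, compressed, multicast, then the tx fields starting
# with tx bytes at index 8.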
net_stats[dev_name] = {
'rx_bytes': int(parsed_stats[0]),
'tx_bytes': int(parsed_stats[8])
'tx_bytes': int(parsed_stats[8]),
'rx_packets': int(parsed_stats[1]),
'tx_packets': int(parsed_stats[9]),
'rx_errs': int(parsed_stats[2]),
'tx_errs': int(parsed_stats[10]),
'rx_drop': int(parsed_stats[3]),
'tx_drop': int(parsed_stats[11])
}
return net_stats
except Exception:
@@ -332,5 +343,52 @@ class Watchdog:
def stop(self):
self.watchdog_timer.stop()
class VCGenCmd:
"""
This class uses the BCM2835 Mailbox to directly query the throttled
state. This should be less resource intensive than calling "vcgencmd"
in a subprocess.
"""
MAX_STRING_SIZE = 1024
GET_RESULT_CMD = 0x00030080
UINT_SIZE = struct.calcsize("@I")
def __init__(self) -> None:
self.cmd_struct = struct.Struct(f"@6I{self.MAX_STRING_SIZE}sI")
self.cmd_buf = bytearray(self.cmd_struct.size)
self.mailbox_req = ioctl_macros.IOWR(100, 0, "c_char_p")
self.err_logged: bool = False
def run(self, cmd: str = "get_throttled") -> str:
# Track the fd so the finally clause can safely close it even if os.open fails
fd = -1
try:
fd = os.open(VCIO_PATH, os.O_RDWR)
self.cmd_struct.pack_into(
self.cmd_buf, 0,
self.cmd_struct.size,
0x00000000,
self.GET_RESULT_CMD,
self.MAX_STRING_SIZE,
0,
0,
cmd.encode("utf-8"),
0x00000000
)
fcntl.ioctl(fd, self.mailbox_req, self.cmd_buf)
except OSError:
if not self.err_logged:
logging.exception("VCIO vcgencmd failed")
self.err_logged = True
return ""
finally:
if fd >= 0:
os.close(fd)
result = self.cmd_struct.unpack_from(self.cmd_buf)
ret: int = result[5]
if ret:
logging.info(f"vcgencmd returned {ret}")
resp: bytes = result[6]
null_index = resp.find(b'\x00')
if null_index <= 0:
return ""
return resp[:null_index].decode()
def load_component(config: ConfigHelper) -> ProcStats:
return ProcStats(config)
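A standalone sketch of querying the throttled state via the class above; this only works on a Raspberry Pi where /dev/vcio exists (the response value is an example):

vc = VCGenCmd()
resp = vc.run("get_throttled")  # e.g. "throttled=0x50000"
if resp:
    # Decode the hex bitfield, same as _check_throttled_state() does
    bits = int(resp.strip().split("=")[-1], 16)
    print(f"throttled bits: {bits:#x}")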

View File

@@ -7,7 +7,7 @@ from __future__ import annotations
import pathlib
import logging
import configparser
import json
from ..utils import json_wrapper as jsonw
from typing import (
TYPE_CHECKING,
Dict,
@@ -15,22 +15,21 @@ from typing import (
Any
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ..confighelper import ConfigHelper
class Secrets:
def __init__(self, config: ConfigHelper) -> None:
server = config.get_server()
self.secrets_file: Optional[pathlib.Path] = None
path: Optional[str] = config.get('secrets_path', None)
path: Optional[str] = config.get("secrets_path", None, deprecate=True)
app_args = server.get_app_args()
data_path = app_args["data_path"]
fpath = pathlib.Path(data_path).joinpath("moonraker.secrets")
if not fpath.is_file() and path is not None:
fpath = pathlib.Path(path).expanduser().resolve()
self.type = "invalid"
self.values: Dict[str, Any] = {}
if path is not None:
self.secrets_file = pathlib.Path(path).expanduser().resolve()
if not self.secrets_file.is_file():
server.add_warning(
"[secrets]: option 'secrets_path', file does not exist: "
f"'{self.secrets_file}'")
return
self.secrets_file = fpath
if fpath.is_file():
data = self.secrets_file.read_text()
vals = self._parse_json(data)
if vals is not None:
@@ -52,10 +51,17 @@ class Secrets:
self.type = "ini"
logging.debug(f"[secrets]: Loaded {self.type} file: "
f"{self.secrets_file}")
elif path is not None:
server.add_warning(
"[secrets]: option 'secrets_path', file does not exist: "
f"'{self.secrets_file}'")
else:
logging.debug(
"[secrets]: Option `secrets_path` not supplied")
def get_secrets_file(self) -> pathlib.Path:
return self.secrets_file
def _parse_ini(self, data: str) -> Optional[Dict[str, Any]]:
try:
cfg = configparser.ConfigParser(interpolation=None)
@@ -66,8 +72,8 @@ class Secrets:
def _parse_json(self, data: str) -> Optional[Dict[str, Any]]:
try:
return json.loads(data)
except json.JSONDecodeError:
return jsonw.loads(data)
except jsonw.JSONDecodeError:
return None
def get_type(self) -> str:
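With this change the secrets file defaults to <data_path>/moonraker.secrets, parsed first as JSON and then as ini, while the secrets_path option is deprecated and only consulted when the default file is absent. A sketch of the two accepted forms (all values are placeholders):

JSON form of moonraker.secrets:
{"telegram": {"token": "example-token", "chat_id": "12345"}}

ini form of moonraker.secrets:
[telegram]
token: example-token
chat_id: 12345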

View File

@@ -0,0 +1,346 @@
# Generic sensor support
#
# Copyright (C) 2022 Morton Jonuschat <mjonuschat+moonraker@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
# Component to read additional generic sensor data and make it
# available to clients
from __future__ import annotations
import logging
from collections import defaultdict, deque
from functools import partial
from ..common import RequestType, HistoryFieldData
# Annotation imports
from typing import (
Any,
DefaultDict,
Deque,
Dict,
List,
Optional,
Type,
TYPE_CHECKING,
Union,
Callable
)
if TYPE_CHECKING:
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .mqtt import MQTTClient
from .history import History
SENSOR_UPDATE_TIME = 1.0
SENSOR_EVENT_NAME = "sensors:sensor_update"
def _set_result(
name: str, value: Union[int, float], store: Dict[str, Union[int, float]]
) -> None:
if not isinstance(value, (int, float)):
store[name] = float(value)
else:
store[name] = value
class BaseSensor:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.error_state: Optional[str] = None
self.id = config.get_name().split(maxsplit=1)[-1]
self.type = config.get("type")
self.name = config.get("name", self.id)
self.last_measurements: Dict[str, Union[int, float]] = {}
self.last_value: Dict[str, Union[int, float]] = {}
store_size = config.getint("sensor_store_size", 1200)
self.values: DefaultDict[str, Deque[Union[int, float]]] = defaultdict(
lambda: deque(maxlen=store_size)
)
self.param_info: List[Dict[str, str]] = []
history: History = self.server.lookup_component("history")
self.field_info: Dict[str, List[HistoryFieldData]] = {}
all_opts = list(config.get_options().keys())
cfg_name = config.get_name()
param_prefix = "parameter_"
hist_field_prefix = "history_field_"
for opt in all_opts:
if opt.startswith(param_prefix):
name = opt[len(param_prefix):]
data = config.getdict(opt)
data["name"] = opt[len(param_prefix):]
self.param_info.append(data)
continue
if not opt.startswith(hist_field_prefix):
continue
name = opt[len(hist_field_prefix):]
field_cfg: Dict[str, str] = config.getdict(opt)
ident: Optional[str] = field_cfg.pop("parameter", None)
if ident is None:
raise config.error(
f"[{cfg_name}]: option '{opt}', key 'parameter' must be"
f"specified"
)
do_init: str = field_cfg.pop("init_tracker", "false").lower()
reset_cb = self._gen_reset_callback(ident) if do_init == "true" else None
excl_paused: str = field_cfg.pop("exclude_paused", "false").lower()
report_total: str = field_cfg.pop("report_total", "false").lower()
report_max: str = field_cfg.pop("report_maximum", "false").lower()
precision: Optional[str] = field_cfg.pop("precision", None)
try:
fdata = HistoryFieldData(
name,
cfg_name,
field_cfg.pop("desc", f"{ident} tracker"),
field_cfg.pop("strategy", "basic"),
units=field_cfg.pop("units", None),
reset_callback=reset_cb,
exclude_paused=excl_paused == "true",
report_total=report_total == "true",
report_maximum=report_max == "true",
precision=int(precision) if precision is not None else None,
)
except Exception as e:
raise config.error(
f"[{cfg_name}]: option '{opt}', error encountered during "
f"sensor field configuration: {e}"
) from e
for key in field_cfg.keys():
self.server.add_warning(
f"[{cfg_name}]: Option '{opt}' contains invalid key '{key}'"
)
self.field_info.setdefault(ident, []).append(fdata)
history.register_auxiliary_field(fdata)
def _gen_reset_callback(self, param_name: str) -> Callable[[], float]:
def on_reset() -> float:
return self.last_measurements.get(param_name, 0)
return on_reset
def _update_sensor_value(self, eventtime: float) -> None:
"""
Append the last updated value to the store.
"""
for key, value in self.last_measurements.items():
self.values[key].append(value)
# Copy the last measurements data
self.last_value = {**self.last_measurements}
async def initialize(self) -> bool:
"""
Sensor initialization executed on Moonraker startup.
"""
logging.info("Registered sensor '%s'", self.name)
return True
def get_sensor_info(self, extended: bool = False) -> Dict[str, Any]:
ret: Dict[str, Any] = {
"id": self.id,
"friendly_name": self.name,
"type": self.type,
"values": self.last_measurements,
}
if extended:
ret["parameter_info"] = self.param_info
history_fields: List[Dict[str, Any]] = []
for parameter, field_list in self.field_info.items():
for field_data in field_list:
field_config = field_data.get_configuration()
field_config["parameter"] = parameter
history_fields.append(field_config)
ret["history_fields"] = history_fields
return ret
def get_sensor_measurements(self) -> Dict[str, List[Union[int, float]]]:
return {key: list(values) for key, values in self.values.items()}
def get_name(self) -> str:
return self.name
def close(self) -> None:
pass
class MQTTSensor(BaseSensor):
def __init__(self, config: ConfigHelper) -> None:
super().__init__(config=config)
self.mqtt: MQTTClient = self.server.load_component(config, "mqtt")
self.state_topic: str = config.get("state_topic")
self.state_response = config.gettemplate("state_response_template")
self.qos: Optional[int] = config.getint("qos", None, minval=0, maxval=2)
self.server.register_event_handler(
"mqtt:disconnected", self._on_mqtt_disconnected
)
def _on_state_update(self, payload: bytes) -> None:
measurements: Dict[str, Union[int, float]] = {}
context = {
"payload": payload.decode(),
"set_result": partial(_set_result, store=measurements),
"log_debug": logging.debug
}
try:
self.state_response.render(context)
except Exception as e:
logging.error("Error updating sensor results: %s", e)
self.error_state = str(e)
else:
self.error_state = None
self.last_measurements = measurements
for name, value in measurements.items():
fdata_list = self.field_info.get(name)
if fdata_list is None:
continue
for fdata in fdata_list:
fdata.tracker.update(value)
async def _on_mqtt_disconnected(self):
self.error_state = "MQTT Disconnected"
self.last_measurements = {}
async def initialize(self) -> bool:
await super().initialize()
try:
self.mqtt.subscribe_topic(
self.state_topic,
self._on_state_update,
self.qos,
)
self.error_state = None
return True
except Exception as e:
self.error_state = str(e)
return False
class Sensors:
__sensor_types: Dict[str, Type[BaseSensor]] = {"MQTT": MQTTSensor}
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.sensors: Dict[str, BaseSensor] = {}
# Register timer to update sensor values in store
self.sensors_update_timer = self.server.get_event_loop().register_timer(
self._update_sensor_values
)
# Register endpoints
self.server.register_endpoint(
"/server/sensors/list",
RequestType.GET,
self._handle_sensor_list_request,
)
self.server.register_endpoint(
"/server/sensors/info",
RequestType.GET,
self._handle_sensor_info_request,
)
self.server.register_endpoint(
"/server/sensors/measurements",
RequestType.GET,
self._handle_sensor_measurements_request,
)
# Register notifications
self.server.register_notification(SENSOR_EVENT_NAME)
prefix_sections = config.get_prefix_sections("sensor ")
for section in prefix_sections:
cfg = config[section]
try:
try:
_, name = cfg.get_name().split(maxsplit=1)
except ValueError:
raise cfg.error(f"Invalid section name: {cfg.get_name()}")
logging.info(f"Configuring sensor: {name}")
sensor_type: str = cfg.get("type")
sensor_class: Optional[Type[BaseSensor]] = self.__sensor_types.get(
sensor_type.upper(), None
)
if sensor_class is None:
raise config.error(f"Unsupported sensor type: {sensor_type}")
self.sensors[name] = sensor_class(cfg)
except Exception as e:
# Ensures that configuration errors are shown to the user
self.server.add_warning(
f"Failed to configure sensor [{cfg.get_name()}]\n{e}"
)
continue
def _update_sensor_values(self, eventtime: float) -> float:
"""
Iterate through the sensors and store the last updated value.
"""
changed_data: Dict[str, Dict[str, Union[int, float]]] = {}
for sensor_name, sensor in self.sensors.items():
base_value = sensor.last_value
sensor._update_sensor_value(eventtime=eventtime)
# Notify if a change in sensor values was detected
if base_value != sensor.last_value:
changed_data[sensor_name] = sensor.last_value
if changed_data:
self.server.send_event(SENSOR_EVENT_NAME, changed_data)
return eventtime + SENSOR_UPDATE_TIME
async def component_init(self) -> None:
try:
logging.debug("Initializing sensor component")
for sensor in self.sensors.values():
if not await sensor.initialize():
self.server.add_warning(
f"Sensor '{sensor.get_name()}' failed to initialize"
)
self.sensors_update_timer.start()
except Exception as e:
logging.exception(e)
async def _handle_sensor_list_request(
self, web_request: WebRequest
) -> Dict[str, Dict[str, Any]]:
extended = web_request.get_boolean("extended", False)
return {
"sensors": {
key: sensor.get_sensor_info(extended)
for key, sensor in self.sensors.items()
}
}
async def _handle_sensor_info_request(
self, web_request: WebRequest
) -> Dict[str, Any]:
sensor_name: str = web_request.get_str("sensor")
extended = web_request.get_boolean("extended", False)
if sensor_name not in self.sensors:
raise self.server.error(f"No valid sensor named {sensor_name}")
sensor = self.sensors[sensor_name]
return sensor.get_sensor_info(extended)
async def _handle_sensor_measurements_request(
self, web_request: WebRequest
) -> Dict[str, Dict[str, Any]]:
sensor_name: str = web_request.get_str("sensor", "")
if sensor_name:
sensor = self.sensors.get(sensor_name, None)
if sensor is None:
raise self.server.error(f"No valid sensor named {sensor_name}")
return {sensor_name: sensor.get_sensor_measurements()}
else:
return {
key: sensor.get_sensor_measurements()
for key, sensor in self.sensors.items()
}
def close(self) -> None:
self.sensors_update_timer.stop()
for sensor in self.sensors.values():
sensor.close()
def load_component(config: ConfigHelper) -> Sensors:
return Sensors(config)
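The MQTTSensor above renders state_response_template with the decoded payload and a set_result(name, value) helper bound to its measurement store. A config sketch assuming the broker publishes JSON such as {"temp": 21.5, "rh": 40} (the topic and field names are placeholders, and the fromjson filter is assumed available in Moonraker templates):

[sensor chamber]
type: mqtt
name: Chamber Sensor
state_topic: sensors/chamber/state
state_response_template:
    {% set data = payload|fromjson %}
    {set_result("temperature", data["temp"]|float)}
    {set_result("humidity", data["rh"]|float)}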

View File

@@ -10,7 +10,7 @@ import shlex
import logging
import signal
import asyncio
from utils import ServerError
from ..utils import ServerError
# Annotation imports
from typing import (
@@ -22,47 +22,48 @@ from typing import (
Coroutine,
Dict,
Set,
cast
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ..confighelper import ConfigHelper
OutputCallback = Optional[Callable[[bytes], None]]
class ShellCommandError(ServerError):
def __init__(self,
message: str,
return_code: Optional[int],
stdout: Optional[bytes] = b"",
stderr: Optional[bytes] = b"",
status_code: int = 500
) -> None:
def __init__(
self,
message: str,
return_code: Optional[int],
stdout: Optional[bytes] = b"",
stderr: Optional[bytes] = b"",
status_code: int = 500
) -> None:
super().__init__(message, status_code=status_code)
self.stdout = stdout or b""
self.stderr = stderr or b""
self.return_code = return_code
class ShellCommandProtocol(asyncio.subprocess.SubprocessStreamProtocol):
def __init__(self,
limit: int,
loop: asyncio.events.AbstractEventLoop,
program_name: str = "",
std_out_cb: OutputCallback = None,
std_err_cb: OutputCallback = None,
log_stderr: bool = False
) -> None:
def __init__(
self,
limit: int,
loop: asyncio.events.AbstractEventLoop,
std_out_cb: OutputCallback = None,
std_err_cb: OutputCallback = None,
log_stderr: bool = False
) -> None:
self._loop = loop
self._pipe_fds: List[int] = []
super().__init__(limit, loop)
self.program_name = program_name
self.std_out_cb = std_out_cb
self.std_err_cb = std_err_cb
self.log_stderr = log_stderr
self.pending_data: List[bytes] = [b"", b""]
def connection_made(self,
transport: asyncio.transports.BaseTransport
) -> None:
def connection_made(
self, transport: asyncio.transports.BaseTransport
) -> None:
transport = cast(asyncio.SubprocessTransport, transport)
self._transport = transport
assert isinstance(transport, asyncio.SubprocessTransport)
stdout_transport = transport.get_pipe_transport(1)
if stdout_transport is not None:
self._pipe_fds.append(1)
@@ -74,10 +75,11 @@ class ShellCommandProtocol(asyncio.subprocess.SubprocessStreamProtocol):
stdin_transport = transport.get_pipe_transport(0)
if stdin_transport is not None:
self.stdin = asyncio.streams.StreamWriter(
stdin_transport,
stdin_transport, # type: ignore
protocol=self,
reader=None,
loop=self._loop)
loop=self._loop
)
def pipe_data_received(self, fd: int, data: bytes | str) -> None:
cb = None
@@ -91,7 +93,7 @@ class ShellCommandProtocol(asyncio.subprocess.SubprocessStreamProtocol):
msg = data.decode(errors='ignore')
else:
msg = data
logging.info(f"{self.program_name}: {msg}")
logging.info(msg)
if cb is not None:
if isinstance(data, str):
data = data.encode()
@@ -103,10 +105,9 @@ class ShellCommandProtocol(asyncio.subprocess.SubprocessStreamProtocol):
continue
cb(line)
def pipe_connection_lost(self,
fd: int,
exc: Exception | None
) -> None:
def pipe_connection_lost(
self, fd: int, exc: Exception | None
) -> None:
cb = None
pending = b""
if fd == 1:
@@ -124,15 +125,16 @@ class ShellCommand:
IDX_SIGINT = 0
IDX_SIGTERM = 1
IDX_SIGKILL = 2
def __init__(self,
factory: ShellCommandFactory,
cmd: str,
std_out_callback: OutputCallback,
std_err_callback: OutputCallback,
env: Optional[Dict[str, str]] = None,
log_stderr: bool = False,
cwd: Optional[str] = None
) -> None:
def __init__(
self,
factory: ShellCommandFactory,
cmd: str,
std_out_callback: OutputCallback,
std_err_callback: OutputCallback,
env: Optional[Dict[str, str]] = None,
log_stderr: bool = False,
cwd: Optional[str] = None
) -> None:
self.factory = factory
self.name = cmd
self.std_out_cb = std_out_callback
@@ -178,13 +180,15 @@ class ShellCommand:
self.return_code = self.proc = None
self.cancelled = False
async def run(self,
timeout: float = 2.,
verbose: bool = True,
log_complete: bool = True,
sig_idx: int = 1,
proc_input: Optional[str] = None
) -> bool:
async def run(
self,
timeout: float = 2.,
verbose: bool = True,
log_complete: bool = True,
sig_idx: int = 1,
proc_input: Optional[str] = None,
success_codes: Optional[List[int]] = None
) -> bool:
async with self.run_lock:
self.factory.add_running_command(self)
self._reset_command_data()
@@ -217,22 +221,26 @@ class ShellCommand:
else:
complete = not self.cancelled
self.factory.remove_running_command(self)
return self._check_proc_success(complete, log_complete)
return self._check_proc_success(
complete, log_complete, success_codes
)
async def run_with_response(self,
timeout: float = 2.,
retries: int = 1,
log_complete: bool = True,
sig_idx: int = 1,
proc_input: Optional[str] = None
) -> str:
async def run_with_response(
self,
timeout: float = 2.,
attempts: int = 1,
log_complete: bool = True,
sig_idx: int = 1,
proc_input: Optional[str] = None,
success_codes: Optional[List[int]] = None
) -> str:
async with self.run_lock:
self.factory.add_running_command(self)
retries = max(1, retries)
attempts = max(1, attempts)
stdin: Optional[bytes] = None
if proc_input is not None:
stdin = proc_input.encode()
while retries > 0:
while attempts > 0:
self._reset_command_data()
timed_out = False
stdout = stderr = b""
@@ -252,7 +260,9 @@ class ShellCommand:
logging.info(
f"{self.command[0]}: "
f"{stderr.decode(errors='ignore')}")
if self._check_proc_success(complete, log_complete):
if self._check_proc_success(
complete, log_complete, success_codes
):
self.factory.remove_running_command(self)
return stdout.decode(errors='ignore').rstrip("\n")
if stdout:
@@ -261,24 +271,25 @@ class ShellCommand:
f"\n{stdout.decode(errors='ignore')}")
if self.cancelled and not timed_out:
break
retries -= 1
attempts -= 1
await asyncio.sleep(.5)
self.factory.remove_running_command(self)
raise ShellCommandError(
f"Error running shell command: '{self.command}'",
f"Error running shell command: '{self.name}'",
self.return_code, stdout, stderr)
async def _create_subprocess(self,
use_callbacks: bool = False,
has_input: bool = False
) -> bool:
async def _create_subprocess(
self,
use_callbacks: bool = False,
has_input: bool = False
) -> bool:
loop = asyncio.get_running_loop()
def protocol_factory():
return ShellCommandProtocol(
limit=2**20, loop=loop, program_name=self.command[0],
std_out_cb=self.std_out_cb, std_err_cb=self.std_err_cb,
log_stderr=self.log_stderr)
limit=2**20, loop=loop, std_out_cb=self.std_out_cb,
std_err_cb=self.std_err_cb, log_stderr=self.log_stderr
)
try:
stdpipe: Optional[int] = None
if has_input:
@@ -299,19 +310,25 @@ class ShellCommand:
*self.command, stdin=stdpipe,
stdout=asyncio.subprocess.PIPE,
stderr=errpipe, env=self.env, cwd=self.cwd)
except asyncio.CancelledError:
raise
except Exception:
logging.exception(
f"shell_command: Command ({self.name}) failed")
return False
return True
def _check_proc_success(self,
complete: bool,
log_complete: bool
) -> bool:
def _check_proc_success(
self,
complete: bool,
log_complete: bool,
success_codes: Optional[List[int]] = None
) -> bool:
assert self.proc is not None
if success_codes is None:
success_codes = [0]
self.return_code = self.proc.returncode
success = self.return_code == 0 and complete
success = self.return_code in success_codes and complete
if success:
msg = f"Command ({self.name}) successfully finished"
elif self.cancelled:
@@ -339,32 +356,77 @@ class ShellCommandFactory:
except KeyError:
pass
def build_shell_command(self,
cmd: str,
callback: OutputCallback = None,
std_err_callback: OutputCallback = None,
env: Optional[Dict[str, str]] = None,
log_stderr: bool = False,
cwd: Optional[str] = None
) -> ShellCommand:
return ShellCommand(self, cmd, callback, std_err_callback, env,
log_stderr, cwd)
def build_shell_command(
self,
cmd: str,
callback: OutputCallback = None,
std_err_callback: OutputCallback = None,
env: Optional[Dict[str, str]] = None,
log_stderr: bool = False,
cwd: Optional[str] = None
) -> ShellCommand:
return ShellCommand(
self, cmd, callback, std_err_callback, env, log_stderr, cwd
)
def exec_cmd(self,
cmd: str,
timeout: float = 2.,
retries: int = 1,
sig_idx: int = 1,
proc_input: Optional[str] = None,
log_complete: bool = True,
log_stderr: bool = False,
env: Optional[Dict[str, str]] = None,
cwd: Optional[str] = None
) -> Awaitable:
def run_cmd_async(
self,
cmd: str,
callback: OutputCallback = None,
std_err_callback: OutputCallback = None,
timeout: float = 2.,
attempts: int = 1,
verbose: bool = True,
sig_idx: int = 1,
proc_input: Optional[str] = None,
log_complete: bool = True,
log_stderr: bool = False,
env: Optional[Dict[str, str]] = None,
cwd: Optional[str] = None,
success_codes: Optional[List[int]] = None
) -> Awaitable[None]:
"""
Runs a command and processes responses as they are received. Optional
callbacks may be provided to handle stdout and stderr.
"""
scmd = ShellCommand(
self, cmd, callback, std_err_callback, env, log_stderr, cwd
)
attempts = max(1, attempts)
async def _wrapper() -> None:
for _ in range(attempts):
if await scmd.run(
timeout, verbose, log_complete, sig_idx,
proc_input, success_codes
):
break
else:
ret_code = scmd.get_return_code()
raise ShellCommandError(f"Error running command {cmd}", ret_code)
return asyncio.create_task(_wrapper())
def exec_cmd(
self,
cmd: str,
timeout: float = 2.,
attempts: int = 1,
sig_idx: int = 1,
proc_input: Optional[str] = None,
log_complete: bool = True,
log_stderr: bool = False,
env: Optional[Dict[str, str]] = None,
cwd: Optional[str] = None,
success_codes: Optional[List[int]] = None
) -> Awaitable[str]:
"""
Executes a command and returns UTF-8 decoded stdout upon completion.
"""
scmd = ShellCommand(self, cmd, None, None, env,
log_stderr, cwd)
coro = scmd.run_with_response(timeout, retries, log_complete,
sig_idx, proc_input)
coro = scmd.run_with_response(
timeout, attempts, log_complete, sig_idx,
proc_input, success_codes
)
return asyncio.create_task(coro)
async def close(self) -> None:
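With retries renamed to attempts and the new success_codes parameter, callers can accept specific non-zero exit codes. A usage sketch from within a component (the command is illustrative; grep exits 1 on no match, which is treated as success here):

shell_cmd: ShellCommandFactory = self.server.load_component(config, "shell_command")
output = await shell_cmd.exec_cmd(
    "grep -c error /tmp/test.log", timeout=5., attempts=2, success_codes=[0, 1]
)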

File diff suppressed because it is too large

View File

@@ -0,0 +1,424 @@
# Integration with Spoolman
#
# Copyright (C) 2023 Daniel Hultgren <daniel.cf.hultgren@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import asyncio
import logging
import re
import contextlib
import tornado.websocket as tornado_ws
from ..common import RequestType, HistoryFieldData
from ..utils import json_wrapper as jsonw
from typing import (
TYPE_CHECKING,
List,
Dict,
Any,
Optional,
Union,
cast
)
if TYPE_CHECKING:
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .http_client import HttpClient, HttpResponse
from .database import MoonrakerDatabase
from .announcements import Announcements
from .klippy_apis import KlippyAPI as APIComp
from .history import History
from tornado.websocket import WebSocketClientConnection
DB_NAMESPACE = "moonraker"
ACTIVE_SPOOL_KEY = "spoolman.spool_id"
class SpoolManager:
def __init__(self, config: ConfigHelper):
self.server = config.get_server()
self.eventloop = self.server.get_event_loop()
self._get_spoolman_urls(config)
self.sync_rate_seconds = config.getint("sync_rate", default=5, minval=1)
self.report_timer = self.eventloop.register_timer(self.report_extrusion)
self.pending_reports: Dict[int, float] = {}
self.spoolman_ws: Optional[WebSocketClientConnection] = None
self.connection_task: Optional[asyncio.Task] = None
self.spool_check_task: Optional[asyncio.Task] = None
self.ws_connected: bool = False
self.reconnect_delay: float = 2.
self.is_closing: bool = False
self.spool_id: Optional[int] = None
self._error_logged: bool = False
self._highest_epos: float = 0
self._current_extruder: str = "extruder"
self.spool_history = HistoryFieldData(
"spool_ids", "spoolman", "Spool IDs used", "collect",
reset_callback=self._on_history_reset
)
history: History = self.server.lookup_component("history")
history.register_auxiliary_field(self.spool_history)
self.klippy_apis: APIComp = self.server.lookup_component("klippy_apis")
self.http_client: HttpClient = self.server.lookup_component("http_client")
self.database: MoonrakerDatabase = self.server.lookup_component("database")
announcements: Announcements = self.server.lookup_component("announcements")
announcements.register_feed("spoolman")
self._register_notifications()
self._register_listeners()
self._register_endpoints()
self.server.register_remote_method(
"spoolman_set_active_spool", self.set_active_spool
)
def _get_spoolman_urls(self, config: ConfigHelper) -> None:
orig_url = config.get('server')
url_match = re.match(r"(?i:(?P<scheme>https?)://)?(?P<host>.+)", orig_url)
if url_match is None:
raise config.error(
f"Section [spoolman], Option server: {orig_url}: Invalid URL format"
)
scheme = url_match["scheme"] or "http"
host = url_match["host"].rstrip("/")
ws_scheme = "wss" if scheme == "https" else "ws"
self.spoolman_url = f"{scheme}://{host}/api"
self.ws_url = f"{ws_scheme}://{host}/api/v1/spool"
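# Example of the derivation above for a hypothetical instance:
#   server: https://spool.local:7912
#   -> spoolman_url = "https://spool.local:7912/api"
#   -> ws_url       = "wss://spool.local:7912/api/v1/spool"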
def _register_notifications(self):
self.server.register_notification("spoolman:active_spool_set")
self.server.register_notification("spoolman:spoolman_status_changed")
def _register_listeners(self):
self.server.register_event_handler(
"server:klippy_ready", self._handle_klippy_ready
)
def _register_endpoints(self):
self.server.register_endpoint(
"/server/spoolman/spool_id",
RequestType.GET | RequestType.POST,
self._handle_spool_id_request,
)
self.server.register_endpoint(
"/server/spoolman/proxy",
RequestType.POST,
self._proxy_spoolman_request,
)
self.server.register_endpoint(
"/server/spoolman/status",
RequestType.GET,
self._handle_status_request,
)
def _on_history_reset(self) -> List[int]:
if self.spool_id is None:
return []
return [self.spool_id]
async def component_init(self) -> None:
self.spool_id = await self.database.get_item(
DB_NAMESPACE, ACTIVE_SPOOL_KEY, None
)
self.connection_task = self.eventloop.create_task(self._connect_websocket())
async def _connect_websocket(self) -> None:
log_connect: bool = True
err_list: List[Exception] = []
while not self.is_closing:
if log_connect:
logging.info(f"Connecting To Spoolman: {self.ws_url}")
log_connect = False
try:
self.spoolman_ws = await tornado_ws.websocket_connect(
self.ws_url,
connect_timeout=5.,
ping_interval=20.,
ping_timeout=60.
)
setattr(self.spoolman_ws, "on_ping", self._on_ws_ping)
cur_time = self.eventloop.get_loop_time()
self._last_ping_received = cur_time
except asyncio.CancelledError:
raise
except Exception as e:
if len(err_list) < 10:
# Allow up to 10 unique errors.
for err in err_list:
if type(err) is type(e) and err.args == e.args:
break
else:
err_list.append(e)
verbose = self.server.is_verbose_enabled()
if verbose:
logging.exception("Failed to connect to Spoolman")
self.server.add_log_rollover_item(
"spoolman_connect", f"Failed to Connect to spoolman: {e}",
not verbose
)
else:
err_list = []
self.ws_connected = True
self._error_logged = False
self.report_timer.start()
self.server.add_log_rollover_item(
"spoolman_connect", "Connected to Spoolman Spool Manager"
)
if self.spool_id is not None:
self._cancel_spool_check_task()
coro = self._check_spool_deleted()
self.spool_check_task = self.eventloop.create_task(coro)
self._send_status_notification()
await self._read_messages()
log_connect = True
if not self.is_closing:
await asyncio.sleep(self.reconnect_delay)
async def _read_messages(self) -> None:
message: Union[str, bytes, None]
while self.spoolman_ws is not None:
message = await self.spoolman_ws.read_message()
if isinstance(message, str):
self._decode_message(message)
elif message is None:
self.report_timer.stop()
self.ws_connected = False
cur_time = self.eventloop.get_loop_time()
ping_time: float = cur_time - self._last_ping_received
reason = code = None
if self.spoolman_ws is not None:
reason = self.spoolman_ws.close_reason
code = self.spoolman_ws.close_code
logging.info(
f"Spoolman Disconnected - Code: {code}, Reason: {reason}, "
f"Server Ping Time Elapsed: {ping_time}"
)
self.spoolman_ws = None
if not self.is_closing:
self._send_status_notification()
break
def _decode_message(self, message: str) -> None:
event: Dict[str, Any] = jsonw.loads(message)
if event.get("resource") != "spool":
return
if self.spool_id is not None and event.get("type") == "deleted":
payload: Dict[str, Any] = event.get("payload", {})
if payload.get("id") == self.spool_id:
self.pending_reports.pop(self.spool_id, None)
self.set_active_spool(None)
def _cancel_spool_check_task(self) -> None:
if self.spool_check_task is None or self.spool_check_task.done():
return
self.spool_check_task.cancel()
async def _check_spool_deleted(self) -> None:
if self.spool_id is not None:
response = await self.http_client.get(
f"{self.spoolman_url}/v1/spool/{self.spool_id}",
connect_timeout=1., request_timeout=2.
)
if response.status_code == 404:
logging.info(f"Spool ID {self.spool_id} not found, setting to None")
self.pending_reports.pop(self.spool_id, None)
self.set_active_spool(None)
elif response.has_error():
err_msg = self._get_response_error(response)
logging.info(f"Attempt to check spool status failed: {err_msg}")
else:
logging.info(f"Found Spool ID {self.spool_id} on spoolman instance")
self.spool_check_task = None
def connected(self) -> bool:
return self.ws_connected
def _on_ws_ping(self, data: bytes = b"") -> None:
self._last_ping_received = self.eventloop.get_loop_time()
async def _handle_klippy_ready(self) -> None:
result: Dict[str, Dict[str, Any]]
result = await self.klippy_apis.subscribe_objects(
{"toolhead": ["position", "extruder"]}, self._handle_status_update, {}
)
toolhead = result.get("toolhead", {})
self._current_extruder = toolhead.get("extruder", "extruder")
initial_e_pos = toolhead.get("position", [None]*4)[3]
logging.debug(f"Initial epos: {initial_e_pos}")
if initial_e_pos is not None:
self._highest_epos = initial_e_pos
else:
logging.error("Spoolman integration unable to subscribe to epos")
raise self.server.error("Unable to subscribe to e position")
def _get_response_error(self, response: HttpResponse) -> str:
err_msg = f"HTTP error: {response.status_code} {response.error}"
with contextlib.suppress(Exception):
msg: Optional[str] = cast(dict, response.json())["message"]
err_msg += f", Spoolman message: {msg}"
return err_msg
def _handle_status_update(self, status: Dict[str, Any], _: float) -> None:
toolhead: Optional[Dict[str, Any]] = status.get("toolhead")
if toolhead is None:
return
epos: float = toolhead.get("position", [0, 0, 0, self._highest_epos])[3]
extr = toolhead.get("extruder", self._current_extruder)
if extr != self._current_extruder:
self._highest_epos = epos
self._current_extruder = extr
elif epos > self._highest_epos:
if self.spool_id is not None:
self._add_extrusion(self.spool_id, epos - self._highest_epos)
self._highest_epos = epos
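# Worked example (illustrative, not from the original source): with spool 3
# active and the extruder unchanged, an E position advance of 100.0 -> 105.5
# queues _add_extrusion(3, 5.5); a retraction to 104.0 reports nothing until
# the position again exceeds 105.5, and a tool change resets the tracked
# high-water mark instead of reporting usage.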
def _add_extrusion(self, spool_id: int, used_length: float) -> None:
if spool_id in self.pending_reports:
self.pending_reports[spool_id] += used_length
else:
self.pending_reports[spool_id] = used_length
def set_active_spool(self, spool_id: Union[int, None]) -> None:
assert spool_id is None or isinstance(spool_id, int)
if self.spool_id == spool_id:
logging.info(f"Spool ID already set to: {spool_id}")
return
self.spool_history.tracker.update(spool_id)
self.spool_id = spool_id
self.database.insert_item(DB_NAMESPACE, ACTIVE_SPOOL_KEY, spool_id)
self.server.send_event(
"spoolman:active_spool_set", {"spool_id": spool_id}
)
logging.info(f"Setting active spool to: {spool_id}")
async def report_extrusion(self, eventtime: float) -> float:
if not self.ws_connected:
return eventtime + self.sync_rate_seconds
pending_reports = self.pending_reports
self.pending_reports = {}
for spool_id, used_length in pending_reports.items():
if not self.ws_connected:
self._add_extrusion(spool_id, used_length)
continue
logging.debug(
f"Sending spool usage: ID: {spool_id}, Length: {used_length:.3f}mm"
)
response = await self.http_client.request(
method="PUT",
url=f"{self.spoolman_url}/v1/spool/{spool_id}/use",
body={"use_length": used_length}
)
if response.has_error():
if response.status_code == 404:
# Since the spool is deleted we can remove any pending reports
# added while waiting for the request
self.pending_reports.pop(spool_id, None)
if spool_id == self.spool_id:
logging.info(f"Spool ID {spool_id} not found, setting to None")
self.set_active_spool(None)
else:
if not self._error_logged:
error_msg = self._get_response_error(response)
self._error_logged = True
logging.info(
f"Failed to update extrusion for spool id {spool_id}, "
f"received {error_msg}"
)
# Add missed reports back to pending reports for the next cycle
self._add_extrusion(spool_id, used_length)
continue
self._error_logged = False
return self.eventloop.get_loop_time() + self.sync_rate_seconds
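# Illustrative sketch (spool id and length assumed): the report above is
# equivalent to issuing
#   PUT {spoolman_url}/v1/spool/3/use
#   {"use_length": 5.5}
# against the configured Spoolman instance. A 404 clears the active spool,
# while any other error re-queues the length for the next sync cycle.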
async def _handle_spool_id_request(self, web_request: WebRequest):
if web_request.get_request_type() == RequestType.POST:
spool_id = web_request.get_int("spool_id", None)
self.set_active_spool(spool_id)
# For GET requests we will simply return the spool_id
return {"spool_id": self.spool_id}
async def _proxy_spoolman_request(self, web_request: WebRequest):
method = web_request.get_str("request_method")
path = web_request.get_str("path")
query = web_request.get_str("query", None)
body = web_request.get("body", None)
use_v2_response = web_request.get_boolean("use_v2_response", False)
if method not in {"GET", "POST", "PUT", "PATCH", "DELETE"}:
raise self.server.error(f"Invalid HTTP method: {method}")
if body is not None and method == "GET":
raise self.server.error("GET requests cannot have a body")
if len(path) < 4 or path[:4] != "/v1/":
raise self.server.error(
"Invalid path, must start with the API version, e.g. /v1"
)
query = f"?{query}" if query is not None else ""
full_url = f"{self.spoolman_url}{path}{query}"
if not self.ws_connected:
if not use_v2_response:
raise self.server.error("Spoolman server not available", 503)
return {
"response": None,
"error": {
"status_code": 503,
"message": "Spoolman server not available"
}
}
logging.debug(f"Proxying {method} request to {full_url}")
response = await self.http_client.request(
method=method,
url=full_url,
body=body,
)
if not use_v2_response:
response.raise_for_status()
return response.json()
if response.has_error():
msg: str = str(response.error or "")
with contextlib.suppress(Exception):
spoolman_msg = cast(dict, response.json()).get("message", msg)
msg = spoolman_msg
return {
"response": None,
"error": {
"status_code": response.status_code,
"message": msg
}
}
else:
return {
"response": response.json(),
"response_headers": dict(response.headers.items()),
"error": None
}
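# Illustrative example (values assumed): with "use_v2_response": true the
# proxy wraps results instead of raising, e.g. a successful request yields
#   {"response": [...], "response_headers": {...}, "error": null}
# while a failed one yields
#   {"response": null, "error": {"status_code": 404, "message": "..."}}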
async def _handle_status_request(self, web_request: WebRequest) -> Dict[str, Any]:
pending: List[Dict[str, Any]] = [
{"spool_id": sid, "filament_used": used} for sid, used in
self.pending_reports.items()
]
return {
"spoolman_connected": self.ws_connected,
"pending_reports": pending,
"spool_id": self.spool_id
}
def _send_status_notification(self) -> None:
self.server.send_event(
"spoolman:spoolman_status_changed",
{"spoolman_connected": self.ws_connected}
)
async def close(self):
self.is_closing = True
self.report_timer.stop()
if self.spoolman_ws is not None:
self.spoolman_ws.close(1001, "Moonraker Shutdown")
self._cancel_spool_check_task()
if self.connection_task is None or self.connection_task.done():
return
try:
await asyncio.wait_for(self.connection_task, 2.)
except asyncio.TimeoutError:
pass
def load_component(config: ConfigHelper) -> SpoolManager:
return SpoolManager(config)
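# Example client call (a sketch; host, port, and spool id are assumed, with
# the endpoint path as documented for Moonraker's spoolman component): the
# spool_id handler above can be exercised with a plain HTTP request:
#
#   import requests
#   resp = requests.post(
#       "http://localhost:7125/server/spoolman/spool_id",
#       json={"spool_id": 2}
#   )
#   print(resp.json())  # expected: {"result": {"spool_id": 2}}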

View File

@@ -5,8 +5,10 @@
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import logging
import asyncio
import jinja2
import json
from ..utils import json_wrapper as jsonw
from ..common import RenderableTemplate
# Annotation imports
from typing import (
@@ -16,8 +18,8 @@ from typing import (
)
if TYPE_CHECKING:
from moonraker import Server
from confighelper import ConfigHelper
from ..server import Server
from ..confighelper import ConfigHelper
from .secrets import Secrets
class TemplateFactory:
@@ -25,12 +27,16 @@ class TemplateFactory:
self.server = config.get_server()
secrets: Secrets = self.server.load_component(config, 'secrets')
self.jenv = jinja2.Environment('{%', '%}', '{', '}')
self.async_env = jinja2.Environment('{%', '%}', '{', '}',
enable_async=True)
self.async_env = jinja2.Environment(
'{%', '%}', '{', '}', enable_async=True
)
self.ui_env = jinja2.Environment(enable_async=True)
self.jenv.add_extension("jinja2.ext.do")
self.jenv.filters['fromjson'] = json.loads
self.jenv.filters['fromjson'] = jsonw.loads
self.async_env.add_extension("jinja2.ext.do")
self.async_env.filters['fromjson'] = json.loads
self.async_env.filters['fromjson'] = jsonw.loads
self.ui_env.add_extension("jinja2.ext.do")
self.ui_env.filters['fromjson'] = jsonw.loads
self.add_environment_global('raise_error', self._raise_error)
self.add_environment_global('secrets', secrets)
@@ -56,8 +62,16 @@ class TemplateFactory:
raise
return JinjaTemplate(source, self.server, template, is_async)
def create_ui_template(self, source: str) -> JinjaTemplate:
try:
template = self.ui_env.from_string(source)
except Exception:
logging.exception(f"Error creating template from source:\n{source}")
raise
return JinjaTemplate(source, self.server, template, True)
class JinjaTemplate:
class JinjaTemplate(RenderableTemplate):
def __init__(self,
source: str,
server: Server,
@@ -77,10 +91,21 @@ class JinjaTemplate:
raise self.server.error(
"Cannot render async templates with the render() method"
", use render_async()")
return self.template.render(context).strip()
try:
return self.template.render(context).strip()
except Exception as e:
msg = "Error rending Jinja2 Template"
if self.server.is_configured():
raise self.server.error(msg, 500) from e
raise self.server.config_error(msg) from e
async def render_async(self, context: Dict[str, Any] = {}) -> str:
ret = await self.template.render_async(context)
try:
ret = await self.template.render_async(context)
except asyncio.CancelledError:
raise
except Exception as e:
raise self.server.error("Error rending Jinja2 Template", 500) from e
return ret.strip()
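# Minimal standalone sketch (assuming only the jinja2 package) of the custom
# delimiters configured above, where "{" / "}" mark variables and the
# 'fromjson' filter decodes JSON strings:
#
#   import json
#   import jinja2
#   env = jinja2.Environment('{%', '%}', '{', '}')
#   env.filters['fromjson'] = json.loads
#   tmpl = env.from_string("{% set d = data|fromjson %}{d.x}")
#   print(tmpl.render(data='{"x": 1}'))  # -> 1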
def load_component(config: ConfigHelper) -> TemplateFactory:

View File

@@ -9,7 +9,7 @@ from . import update_manager as um
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ...confighelper import ConfigHelper
def load_component(config: ConfigHelper) -> um.UpdateManager:
return um.load_component(config)

View File

@@ -5,11 +5,18 @@
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import os
import pathlib
import shutil
import hashlib
import logging
import re
import distro
import asyncio
import importlib
from .common import AppType, Channel
from .base_deploy import BaseDeploy
from ...utils import pip_utils
from ...utils import json_wrapper as jsonw
# Annotation imports
from typing import (
@@ -19,67 +26,145 @@ from typing import (
Union,
Dict,
List,
Tuple
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ...confighelper import ConfigHelper
from ..klippy_connection import KlippyConnection as Klippy
from .update_manager import CommandHelper
from ..machine import Machine
from ..file_manager.file_manager import FileManager
MIN_PIP_VERSION = (23, 3, 2)
SUPPORTED_CHANNELS = {
"zip": ["stable", "beta"],
"git_repo": ["dev", "beta"]
AppType.WEB: [Channel.STABLE, Channel.BETA],
AppType.ZIP: [Channel.STABLE, Channel.BETA],
AppType.GIT_REPO: list(Channel)
}
TYPE_TO_CHANNEL = {
"zip": "stable",
"zip_beta": "beta",
"git_repo": "dev"
AppType.WEB: Channel.STABLE,
AppType.ZIP: Channel.STABLE,
AppType.GIT_REPO: Channel.DEV
}
class AppDeploy(BaseDeploy):
def __init__(self, config: ConfigHelper, cmd_helper: CommandHelper) -> None:
super().__init__(config, cmd_helper, prefix="Application")
self.config = config
self.debug = self.cmd_helper.is_debug_enabled()
type_choices = list(TYPE_TO_CHANNEL.keys())
self.type = config.get('type').lower()
if self.type not in type_choices:
raise config.error(
f"Config Error: Section [{config.get_name()}], Option "
f"'type: {self.type}': value must be one "
f"of the following choices: {type_choices}"
)
self.channel = config.get(
"channel", TYPE_TO_CHANNEL[self.type]
)
if self.type == "zip_beta":
self.server.add_warning(
f"Config Section [{config.get_name()}], Option 'type: "
"zip_beta', value 'zip_beta' is deprecated. Set 'type' "
"to zip and 'channel' to 'beta'")
self.type = "zip"
self.path = pathlib.Path(
config.get('path')).expanduser().resolve()
executable = config.get('env', None)
if self.channel not in SUPPORTED_CHANNELS[self.type]:
raise config.error(
f"Invalid Channel '{self.channel}' for config "
f"section [{config.get_name()}], type: {self.type}")
self._verify_path(config, 'path', self.path)
self.executable: Optional[pathlib.Path] = None
self.pip_exe: Optional[pathlib.Path] = None
self.venv_args: Optional[str] = None
if executable is not None:
self.executable = pathlib.Path(executable).expanduser()
self.pip_exe = self.executable.parent.joinpath("pip")
if not self.pip_exe.exists():
self.server.add_warning(
f"Update Manger {self.name}: Unable to locate pip "
"executable")
self._verify_path(config, 'env', self.executable)
self.venv_args = config.get('venv_args', None)
DISTRO_ALIASES = [distro.id()]
DISTRO_ALIASES.extend(distro.like().split())
class AppDeploy(BaseDeploy):
def __init__(
self, config: ConfigHelper, cmd_helper: CommandHelper, prefix: str
) -> None:
super().__init__(config, cmd_helper, prefix=prefix)
self.config = config
type_choices = list(TYPE_TO_CHANNEL.keys())
self.type = AppType.from_string(config.get('type'))
if self.type not in type_choices:
str_types = [str(t) for t in type_choices]
raise config.error(
f"Section [{config.get_name()}], Option 'type: {self.type}': "
f"value must be one of the following choices: {str_types}"
)
self.channel = Channel.from_string(
config.get("channel", str(TYPE_TO_CHANNEL[self.type]))
)
self.channel_invalid: bool = False
if self.channel not in SUPPORTED_CHANNELS[self.type]:
str_channels = [str(c) for c in SUPPORTED_CHANNELS[self.type]]
self.channel_invalid = True
invalid_channel = self.channel
self.channel = TYPE_TO_CHANNEL[self.type]
self.server.add_warning(
f"[{config.get_name()}]: Invalid value '{invalid_channel}' for "
f"option 'channel'. Type '{self.type}' supports the following "
f"channels: {str_channels}. Falling back to channel '{self.channel}'"
)
self._is_valid: bool = False
self.virtualenv: Optional[pathlib.Path] = None
self.py_exec: Optional[pathlib.Path] = None
self.pip_cmd: Optional[str] = None
self.pip_version: Tuple[int, ...] = tuple()
self.venv_args: Optional[str] = None
self.npm_pkg_json: Optional[pathlib.Path] = None
self.python_reqs: Optional[pathlib.Path] = None
self.install_script: Optional[pathlib.Path] = None
self.system_deps_json: Optional[pathlib.Path] = None
self.info_tags: List[str] = config.getlist("info_tags", [])
self.managed_services: List[str] = []
def _configure_path(self, config: ConfigHelper, reserve: bool = True) -> None:
self.path = pathlib.Path(config.get('path')).expanduser().resolve()
self._verify_path(config, 'path', self.path, check_file=False)
if (
reserve and self.name not in ["moonraker", "klipper"]
and not self.path.joinpath(".writeable").is_file()
):
fm: FileManager = self.server.lookup_component("file_manager")
fm.add_reserved_path(f"update_manager {self.name}", self.path)
def _configure_virtualenv(self, config: ConfigHelper) -> None:
venv_path: Optional[pathlib.Path] = None
if config.has_option("virtualenv"):
venv_path = pathlib.Path(config.get("virtualenv")).expanduser()
if not venv_path.is_absolute():
venv_path = self.path.joinpath(venv_path)
self._verify_path(config, 'virtualenv', venv_path, check_file=False)
elif config.has_option("env"):
# Deprecated
if self.name != "klipper":
self.log_info("Option 'env' is deprecated, use 'virtualenv' instead.")
py_exec = pathlib.Path(config.get("env")).expanduser()
self._verify_path(config, 'env', py_exec, check_exe=True)
venv_path = py_exec.expanduser().parent.parent.resolve()
if venv_path is not None:
act_path = venv_path.joinpath("bin/activate")
if not act_path.is_file():
raise config.error(
f"[{config.get_name()}]: Invalid virtualenv at path {venv_path}. "
f"Verify that the 'virtualenv' option is set to a valid "
"virtualenv path."
)
self.py_exec = venv_path.joinpath("bin/python")
if not (self.py_exec.is_file() and os.access(self.py_exec, os.X_OK)):
raise config.error(
f"[{config.get_name()}]: Invalid python executable at "
f"{self.py_exec}. Verify that the 'virtualenv' option is set "
"to a valid virtualenv path."
)
self.log_info(f"Detected virtualenv: {venv_path}")
self.virtualenv = venv_path
pip_exe = self.virtualenv.joinpath("bin/pip")
if pip_exe.is_file():
self.pip_cmd = f"{self.py_exec} -m pip"
else:
self.log_info("Unable to locate pip executable")
self.venv_args = config.get('venv_args', None)
self.pip_env_vars = config.getdict("pip_environment_variables", None)
def _configure_dependencies(
self, config: ConfigHelper, node_only: bool = False
) -> None:
if config.getboolean("enable_node_updates", False):
self.npm_pkg_json = self.path.joinpath("package-lock.json")
self._verify_path(config, 'enable_node_updates', self.npm_pkg_json)
if node_only:
return
if self.py_exec is not None:
self.python_reqs = self.path.joinpath(config.get("requirements"))
self._verify_path(config, 'requirements', self.python_reqs)
deps = config.get("system_dependencies", None)
if deps is not None:
self.system_deps_json = self.path.joinpath(deps).resolve()
self._verify_path(config, 'system_dependencies', self.system_deps_json)
else:
# Fall back on deprecated "install_script" option if dependencies file
# not present
install_script = config.get('install_script', None)
if install_script is not None:
self.install_script = self.path.joinpath(install_script).resolve()
self._verify_path(config, 'install_script', self.install_script)
def _configure_managed_services(self, config: ConfigHelper) -> None:
svc_default = []
if config.getboolean("is_system_service", True):
svc_default.append(self.name)
@@ -87,11 +172,25 @@ class AppDeploy(BaseDeploy):
services: List[str] = config.getlist(
"managed_services", svc_default, separator=None
)
if self.name in services:
machine: Machine = self.server.lookup_component("machine")
data_path: str = self.server.get_app_args()["data_path"]
asvc = pathlib.Path(data_path).joinpath("moonraker.asvc")
if not machine.is_service_allowed(self.name):
self.server.add_warning(
f"[{config.get_name()}]: Moonraker is not permitted to "
f"restart service '{self.name}'. To enable management "
f"of this service add {self.name} to the bottom of the "
f"file {asvc}. To disable management for this service "
"set 'is_system_service: False' in the configuration "
"for this section."
)
services.clear()
for svc in services:
if svc not in svc_choices:
raw = " ".join(services)
self.server.add_warning(
f"[{config.get_name()}]: Option 'restart_action: {raw}' "
f"[{config.get_name()}]: Option 'managed_services: {raw}' "
f"contains an invalid value '{svc}'. All values must be "
f"one of the following choices: {svc_choices}"
)
@@ -99,56 +198,38 @@ class AppDeploy(BaseDeploy):
for svc in svc_choices:
if svc in services and svc not in self.managed_services:
self.managed_services.append(svc)
logging.debug(
f"Extension {self.name} managed services: {self.managed_services}"
self.log_debug(
f"Managed services: {self.managed_services}"
)
# We need to fetch all potential options for an Application. Not
# all options apply to each subtype, however we can't limit the
# options in children if we want to switch between channels and
# satisfy the confighelper's requirements.
self.moved_origin: Optional[str] = config.get('moved_origin', None)
self.origin: str = config.get('origin')
self.primary_branch = config.get("primary_branch", "master")
self.npm_pkg_json: Optional[pathlib.Path] = None
if config.getboolean("enable_node_updates", False):
self.npm_pkg_json = self.path.joinpath("package-lock.json")
self._verify_path(config, 'enable_node_updates', self.npm_pkg_json)
self.python_reqs: Optional[pathlib.Path] = None
if self.executable is not None:
self.python_reqs = self.path.joinpath(config.get("requirements"))
self._verify_path(config, 'requirements', self.python_reqs)
self.install_script: Optional[pathlib.Path] = None
install_script = config.get('install_script', None)
if install_script is not None:
self.install_script = self.path.joinpath(install_script).resolve()
self._verify_path(config, 'install_script', self.install_script)
@staticmethod
def _is_git_repo(app_path: Union[str, pathlib.Path]) -> bool:
if isinstance(app_path, str):
app_path = pathlib.Path(app_path).expanduser()
return app_path.joinpath('.git').exists()
def _verify_path(
self,
config: ConfigHelper,
option: str,
path: pathlib.Path,
check_file: bool = True,
check_exe: bool = False
) -> None:
base_msg = (
f"Invalid path for option `{option}` in section "
f"[{config.get_name()}]: Path `{path}`"
)
if not path.exists():
raise config.error(f"{base_msg} does not exist")
if check_file and not path.is_file():
raise config.error(f"{base_msg} is not a file")
if check_exe and not os.access(path, os.X_OK):
raise config.error(f"{base_msg} is not executable")
async def initialize(self) -> Dict[str, Any]:
storage = await super().initialize()
self.need_channel_update = storage.get('need_channel_update', False)
self._is_valid = storage.get('is_valid', False)
self.pip_version = tuple(storage.get("pip_version", []))
if self.pip_version:
ver_str = ".".join([str(part) for part in self.pip_version])
self.log_info(f"Stored pip version: {ver_str}")
return storage
def _verify_path(self,
config: ConfigHelper,
option: str,
file_path: pathlib.Path
) -> None:
if not file_path.exists():
raise config.error(
f"Invalid path for option `{option}` in section "
f"[{config.get_name()}]: Path `{file_path}` does not exist")
def check_need_channel_swap(self) -> bool:
return self.need_channel_update
def get_configured_type(self) -> str:
def get_configured_type(self) -> AppType:
return self.type
def check_same_paths(self,
@@ -161,11 +242,13 @@ class AppDeploy(BaseDeploy):
executable = pathlib.Path(executable)
app_path = app_path.expanduser()
executable = executable.expanduser()
if self.executable is None:
if self.py_exec is None:
return False
try:
return self.path.samefile(app_path) and \
self.executable.samefile(executable)
return (
self.path.samefile(app_path) and
self.py_exec.samefile(executable)
)
except Exception:
return False
@@ -175,12 +258,10 @@ class AppDeploy(BaseDeploy):
) -> None:
raise NotImplementedError
async def reinstall(self):
raise NotImplementedError
async def restart_service(self):
async def restart_service(self) -> None:
if not self.managed_services:
return
machine: Machine = self.server.lookup_component("machine")
is_full = self.cmd_helper.is_full_update()
for svc in self.managed_services:
if is_full and svc != self.name:
@@ -192,36 +273,89 @@ class AppDeploy(BaseDeploy):
if svc == "moonraker":
# Launch restart async so the request can return
# before the server restarts
event_loop = self.server.get_event_loop()
event_loop.delay_callback(.1, self._do_restart, svc)
machine.restart_moonraker_service()
else:
await self._do_restart(svc)
if svc == "klipper":
kconn: Klippy = self.server.lookup_component("klippy_connection")
svc = kconn.unit_name
await machine.do_service_action("restart", svc)
async def _do_restart(self, svc_name: str) -> None:
machine: Machine = self.server.lookup_component("machine")
async def _read_system_dependencies(self) -> List[str]:
eventloop = self.server.get_event_loop()
if self.system_deps_json is not None:
deps_json = self.system_deps_json
try:
ret = await eventloop.run_in_thread(deps_json.read_bytes)
dep_info: Dict[str, List[str]] = jsonw.loads(ret)
except asyncio.CancelledError:
raise
except Exception:
logging.exception(f"Error reading system deps: {deps_json}")
return []
for distro_id in DISTRO_ALIASES:
if distro_id in dep_info:
if not dep_info[distro_id]:
self.log_info(
f"Dependency file '{deps_json.name}' contains an empty "
f"package definition for linux distro '{distro_id}'"
)
return dep_info[distro_id]
else:
self.log_info(
f"Dependency file '{deps_json.name}' has no package definition "
f" for linux distro '{DISTRO_ALIASES[0]}'"
)
return []
# Fall back on install script if configured
if self.install_script is None:
return []
# Open the install script file and read its contents
inst_path: pathlib.Path = self.install_script
if not inst_path.is_file():
self.log_info(f"Failed to open install script: {inst_path}")
return []
try:
await machine.do_service_action("restart", svc_name)
data = await eventloop.run_in_thread(inst_path.read_text)
except asyncio.CancelledError:
raise
except Exception:
if svc_name == "moonraker":
# We will always get an error when restarting moonraker
# from within the child process, so ignore it
return
raise self.log_exc("Error restarting service")
logging.exception(f"Error reading install script: {deps_json}")
return []
plines: List[str] = re.findall(r'PKGLIST="(.*)"', data)
plines = [p.lstrip("${PKGLIST}").strip() for p in plines]
packages: List[str] = []
for line in plines:
packages.extend(line.split())
if not packages:
self.log_info(f"No packages found in script: {inst_path}")
return packages
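# Illustrative inputs (package names assumed): a system dependencies JSON
# file maps distro ids to package lists, e.g.
#   {"debian": ["libopenjp2-7", "python3-libgpiod"]}
# while the legacy install-script fallback extracts entries such as
#   PKGLIST="python3-virtualenv python3-dev"
#   PKGLIST="${PKGLIST} libopenjp2-7"
# yielding ["python3-virtualenv", "python3-dev", "libopenjp2-7"].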
async def _read_python_reqs(self) -> List[str]:
if self.python_reqs is None:
return []
pyreqs = self.python_reqs
if not pyreqs.is_file():
self.log_info(f"Failed to open python requirements file: {pyreqs}")
return []
eventloop = self.server.get_event_loop()
return await eventloop.run_in_thread(
pip_utils.read_requirements_file, self.python_reqs
)
def get_update_status(self) -> Dict[str, Any]:
return {
'channel': self.channel,
'debug_enabled': self.debug,
'need_channel_update': self.need_channel_update,
'channel': str(self.channel),
'debug_enabled': self.server.is_debug_enabled(),
'channel_invalid': self.channel_invalid,
'is_valid': self._is_valid,
'configured_type': self.type,
'configured_type': str(self.type),
'info_tags': self.info_tags
}
def get_persistent_data(self) -> Dict[str, Any]:
storage = super().get_persistent_data()
storage['is_valid'] = self._is_valid
storage['need_channel_update'] = self.need_channel_update
storage['pip_version'] = list(self.pip_version)
return storage
async def _get_file_hash(self,
@@ -257,45 +391,88 @@ class AppDeploy(BaseDeploy):
self.log_exc("Error updating packages")
return
async def _update_virtualenv(self,
requirements: Union[pathlib.Path, List[str]]
) -> None:
if self.pip_exe is None:
async def _update_python_requirements(
self, requirements: Union[pathlib.Path, List[str]]
) -> None:
if self.pip_cmd is None:
return
# Update python dependencies
if isinstance(requirements, pathlib.Path):
if not requirements.is_file():
self.log_info(
f"Invalid path to requirements_file '{requirements}'")
return
args = f"-r {requirements}"
else:
args = " ".join(requirements)
if self.name == "moonraker":
importlib.reload(pip_utils)
pip_exec = pip_utils.AsyncPipExecutor(
self.pip_cmd, self.server, self.cmd_helper.notify_update_response
)
# Check the current pip version
self.notify_status("Checking pip version...")
try:
pip_ver = await pip_exec.get_pip_version()
if pip_utils.check_pip_needs_update(pip_ver):
cur_ver = pip_ver.pip_version_string
update_ver = ".".join([str(part) for part in pip_utils.MIN_PIP_VERSION])
self.notify_status(
f"Updating pip from version {cur_ver} to {update_ver}..."
)
await pip_exec.update_pip()
self.pip_version = pip_utils.MIN_PIP_VERSION
except asyncio.CancelledError:
raise
except Exception as e:
self.notify_status(f"Pip Version Check Error: {e}")
self.log_exc("Pip Version Check Error")
self.notify_status("Updating python packages...")
try:
# First attempt to update pip
# await self.cmd_helper.run_cmd(
# f"{self.pip_exe} install -U pip", timeout=1200., notify=True,
# retries=3)
await self.cmd_helper.run_cmd(
f"{self.pip_exe} install {args}", timeout=1200., notify=True,
retries=3)
await pip_exec.install_packages(requirements, self.pip_env_vars)
except asyncio.CancelledError:
raise
except Exception:
self.log_exc("Error updating python requirements")
async def _build_virtualenv(self) -> None:
if self.pip_exe is None or self.venv_args is None:
return
bin_dir = self.pip_exe.parent
env_path = bin_dir.parent.resolve()
self.notify_status(f"Creating virtualenv at: {env_path}...")
if env_path.exists():
shutil.rmtree(env_path)
try:
await self.cmd_helper.run_cmd(
f"virtualenv {self.venv_args} {env_path}", timeout=300.)
except Exception:
self.log_exc(f"Error creating virtualenv")
return
if not self.pip_exe.exists():
raise self.log_exc("Failed to create new virtualenv", False)
async def _collect_dependency_info(self) -> Dict[str, Any]:
pkg_deps = await self._read_system_dependencies()
pyreqs = await self._read_python_reqs()
npm_hash = await self._get_file_hash(self.npm_pkg_json)
logging.debug(
f"\nApplication {self.name}: Pre-update dependencies:\n"
f"Packages: {pkg_deps}\n"
f"Python Requirements: {pyreqs}"
)
return {
"system_packages": pkg_deps,
"python_modules": pyreqs,
"npm_hash": npm_hash
}
async def _update_dependencies(
self, dep_info: Dict[str, Any], force: bool = False
) -> None:
packages = await self._read_system_dependencies()
modules = await self._read_python_reqs()
logging.debug(
f"\nApplication {self.name}: Post-update dependencies:\n"
f"Packages: {packages}\n"
f"Python Requirements: {modules}"
)
if not force:
packages = list(set(packages) - set(dep_info["system_packages"]))
modules = list(set(modules) - set(dep_info["python_modules"]))
logging.debug(
f"\nApplication {self.name}: Dependencies to install:\n"
f"Packages: {packages}\n"
f"Python Requirements: {modules}\n"
f"Force All: {force}"
)
if packages:
await self._install_packages(packages)
if modules:
await self._update_python_requirements(self.python_reqs or modules)
npm_hash: Optional[str] = dep_info["npm_hash"]
ret = await self._check_need_update(npm_hash, self.npm_pkg_json)
if force or ret:
if self.npm_pkg_json is not None:
self.notify_status("Updating Node Packages...")
try:
await self.cmd_helper.run_cmd(
"npm ci --only=prod", notify=True, timeout=600.,
cwd=str(self.path)
)
except Exception:
self.notify_status("Node Package Update failed")

View File

@@ -7,11 +7,12 @@
from __future__ import annotations
import logging
import time
from ...utils import pretty_print_time
from typing import TYPE_CHECKING, Dict, Any, Optional
from typing import TYPE_CHECKING, Dict, Any, Optional, Coroutine
if TYPE_CHECKING:
from confighelper import ConfigHelper
from utils import ServerError
from ...confighelper import ConfigHelper
from ...utils import ServerError
from .update_manager import CommandHelper
class BaseDeploy:
@@ -23,7 +24,7 @@ class BaseDeploy:
cfg_hash: Optional[str] = None
) -> None:
if name is None:
name = config.get_name().split()[-1]
name = self.parse_name(config)
self.name = name
if prefix:
prefix = f"{prefix} {self.name}: "
@@ -38,6 +39,14 @@ class BaseDeploy:
cfg_hash = config.get_hash().hexdigest()
self.cfg_hash = cfg_hash
@staticmethod
def parse_name(config: ConfigHelper) -> str:
name = config.get_name().split(maxsplit=1)[-1]
if name.startswith("client "):
# allow deprecated [update_manager client app] style names
name = name[7:]
return name
async def initialize(self) -> Dict[str, Any]:
umdb = self.cmd_helper.get_umdb()
storage: Dict[str, Any] = await umdb.get(self.name, {})
@@ -45,12 +54,14 @@ class BaseDeploy:
self.last_cfg_hash: str = storage.get('last_config_hash', "")
return storage
def needs_refresh(self) -> bool:
def needs_refresh(self, log_remaining_time: bool = False) -> bool:
next_refresh_time = self.last_refresh_time + self.refresh_interval
return (
self.cfg_hash != self.last_cfg_hash or
time.time() > next_refresh_time
)
remaining_time = int(next_refresh_time - time.time() + .5)
if self.cfg_hash != self.last_cfg_hash or remaining_time <= 0:
return True
if log_remaining_time:
self.log_info(f"Next refresh in: {pretty_print_time(remaining_time)}")
return False
def get_last_refresh_time(self) -> float:
return self.last_refresh_time
@@ -61,6 +72,9 @@ class BaseDeploy:
async def update(self) -> bool:
return False
async def rollback(self) -> bool:
raise self.server.error(f"Rollback not available for {self.name}")
def get_update_status(self) -> Dict[str, Any]:
return {}
@@ -88,7 +102,14 @@ class BaseDeploy:
log_msg = f"{self.prefix}{msg}"
logging.info(log_msg)
def log_debug(self, msg: str) -> None:
log_msg = f"{self.prefix}{msg}"
logging.debug(log_msg)
def notify_status(self, msg: str, is_complete: bool = False) -> None:
log_msg = f"{self.prefix}{msg}"
logging.debug(log_msg)
self.cmd_helper.notify_update_response(log_msg, is_complete)
def close(self) -> Optional[Coroutine]:
return None
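# Worked example (illustrative numbers): with refresh_interval = 7200s and a
# last refresh 3600s ago, remaining_time = int(7200 - 3600 + .5) = 3600, so
# needs_refresh() returns False and, when log_remaining_time is set, logs
# "Next refresh in: ..." via pretty_print_time.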

View File

@@ -0,0 +1,97 @@
# Moonraker/Klipper update configuration
#
# Copyright (C) 2022 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import os
import sys
import copy
import pathlib
from ...common import ExtendedEnum
from ...utils import source_info
from typing import (
TYPE_CHECKING,
Dict,
Union
)
if TYPE_CHECKING:
from ...confighelper import ConfigHelper
from ..database import MoonrakerDatabase
KLIPPER_DEFAULT_PATH = os.path.expanduser("~/klipper")
KLIPPER_DEFAULT_EXEC = os.path.expanduser("~/klippy-env/bin/python")
BASE_CONFIG: Dict[str, Dict[str, str]] = {
"moonraker": {
"origin": "https://github.com/arksine/moonraker.git",
"requirements": "scripts/moonraker-requirements.txt",
"venv_args": "-p python3",
"system_dependencies": "scripts/system-dependencies.json",
"host_repo": "arksine/moonraker",
"virtualenv": sys.exec_prefix,
"pip_environment_variables": "SKIP_CYTHON=Y",
"path": str(source_info.source_path()),
"managed_services": "moonraker"
},
"klipper": {
"moved_origin": "https://github.com/kevinoconnor/klipper.git",
"origin": "https://github.com/Klipper3d/klipper.git",
"requirements": "scripts/klippy-requirements.txt",
"venv_args": "-p python2",
"install_script": "scripts/install-octopi.sh",
"host_repo": "arksine/moonraker",
"managed_services": "klipper"
}
}
class AppType(ExtendedEnum):
NONE = 1
WEB = 2
GIT_REPO = 3
ZIP = 4
class Channel(ExtendedEnum):
STABLE = 1
BETA = 2
DEV = 3
def get_app_type(app_path: Union[str, pathlib.Path]) -> AppType:
if isinstance(app_path, str):
app_path = pathlib.Path(app_path).expanduser()
# None type will perform checks on Moonraker
if source_info.is_git_repo(app_path):
return AppType.GIT_REPO
else:
return AppType.NONE
def get_base_configuration(config: ConfigHelper) -> ConfigHelper:
server = config.get_server()
base_cfg = copy.deepcopy(BASE_CONFIG)
base_cfg["moonraker"]["type"] = str(get_app_type(source_info.source_path()))
db: MoonrakerDatabase = server.lookup_component('database')
base_cfg["klipper"]["path"] = db.get_item(
"moonraker", "update_manager.klipper_path", KLIPPER_DEFAULT_PATH
).result()
base_cfg["klipper"]["env"] = db.get_item(
"moonraker", "update_manager.klipper_exec", KLIPPER_DEFAULT_EXEC
).result()
base_cfg["klipper"]["type"] = str(get_app_type(base_cfg["klipper"]["path"]))
channel = config.get("channel", "dev")
base_cfg["moonraker"]["channel"] = channel
base_cfg["klipper"]["channel"] = channel
if config.has_section("update_manager moonraker"):
mcfg = config["update_manager moonraker"]
base_cfg["moonraker"]["channel"] = mcfg.get("channel", channel)
commit = mcfg.get("pinned_commit", None)
if commit is not None:
base_cfg["moonraker"]["pinned_commit"] = commit
if config.has_section("update_manager klipper"):
kcfg = config["update_manager klipper"]
base_cfg["klipper"]["channel"] = kcfg.get("channel", channel)
commit = kcfg.get("pinned_commit", None)
if commit is not None:
base_cfg["klipper"]["pinned_commit"] = commit
return config.read_supplemental_dict(base_cfg)
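# Illustrative moonraker.conf override (commit hash is a placeholder) read by
# the code above when building the supplemental defaults:
#
#   [update_manager klipper]
#   channel: beta
#   pinned_commit: 0123abc   # assumed value for illustration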

File diff suppressed because it is too large

View File

@@ -0,0 +1,556 @@
# Provides System Package Updates
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import asyncio
import logging
import time
import re
from ...thirdparty.packagekit import enums as PkEnum
from .base_deploy import BaseDeploy
# Annotation imports
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Optional,
Union,
Dict,
List,
)
if TYPE_CHECKING:
from ...confighelper import ConfigHelper
from ..shell_command import ShellCommandFactory as SCMDComp
from ..dbus_manager import DbusManager
from ..machine import Machine
from .update_manager import CommandHelper
from dbus_next import Variant
from dbus_next.aio import ProxyInterface
JsonType = Union[List[Any], Dict[str, Any]]
class PackageDeploy(BaseDeploy):
def __init__(self,
config: ConfigHelper,
cmd_helper: CommandHelper
) -> None:
super().__init__(config, cmd_helper, "system", "", "")
cmd_helper.set_package_updater(self)
self.use_packagekit = config.getboolean("enable_packagekit", True)
self.available_packages: List[str] = []
async def initialize(self) -> Dict[str, Any]:
storage = await super().initialize()
self.available_packages = storage.get('packages', [])
provider: BasePackageProvider
try_fallback = True
if self.use_packagekit:
try:
provider = PackageKitProvider(self.cmd_helper)
await provider.initialize()
except Exception:
pass
else:
self.log_info("PackageDeploy: PackageKit Provider Configured")
self.prefix = "PackageKit: "
try_fallback = False
if try_fallback:
# Check to see if the apt command is available
fallback = await self._get_fallback_provider()
if fallback is None:
provider = BasePackageProvider(self.cmd_helper)
machine: Machine = self.server.lookup_component("machine")
dist_info = machine.get_system_info()['distribution']
dist_id: str = dist_info['id'].lower()
self.server.add_warning(
"Unable to initialize System Update Provider for "
f"distribution: {dist_id}")
else:
self.log_info("PackageDeploy: Using APT CLI Provider")
self.prefix = "Package Manager APT: "
provider = fallback
self.provider = provider
return storage
async def _get_fallback_provider(self) -> Optional[BasePackageProvider]:
# Currently only the APT CLI fallback provider is available
shell_cmd: SCMDComp
shell_cmd = self.server.lookup_component("shell_command")
cmd = shell_cmd.build_shell_command("sh -c 'command -v apt'")
try:
ret = await cmd.run_with_response()
except shell_cmd.error:
return None
# APT command found, so the provider should be available
self.log_debug(f"APT package manager detected: {ret}")
provider = AptCliProvider(self.cmd_helper)
try:
await provider.initialize()
except Exception:
return None
return provider
async def refresh(self) -> None:
try:
# Do not force a refresh until the server has started
if self.server.is_running():
await self._update_package_cache(force=True)
self.available_packages = await self.provider.get_packages()
pkg_msg = "\n".join(self.available_packages)
self.log_info(
f"Detected {len(self.available_packages)} package updates:"
f"\n{pkg_msg}"
)
except Exception:
self.log_exc("Error Refreshing System Packages")
# Update Persistent Storage
self._save_state()
def get_persistent_data(self) -> Dict[str, Any]:
storage = super().get_persistent_data()
storage['packages'] = self.available_packages
return storage
async def update(self) -> bool:
if not self.available_packages:
return False
self.cmd_helper.notify_update_response("Updating packages...")
try:
await self._update_package_cache(force=True, notify=True)
await self.provider.upgrade_system()
except Exception:
raise self.server.error("Error updating system packages")
self.available_packages = []
self._save_state()
self.cmd_helper.notify_update_response(
"Package update finished...", is_complete=True)
return True
async def _update_package_cache(self,
force: bool = False,
notify: bool = False
) -> None:
curtime = time.time()
if force or curtime > self.last_refresh_time + 3600.:
# Don't update if a request was done within the last hour
await self.provider.refresh_packages(notify)
async def install_packages(self,
package_list: List[str],
**kwargs
) -> None:
await self.provider.install_packages(package_list, **kwargs)
def get_update_status(self) -> Dict[str, Any]:
return {
'package_count': len(self.available_packages),
'package_list': self.available_packages
}
class BasePackageProvider:
def __init__(self, cmd_helper: CommandHelper) -> None:
self.server = cmd_helper.get_server()
self.cmd_helper = cmd_helper
async def initialize(self) -> None:
pass
async def refresh_packages(self, notify: bool = False) -> None:
raise self.server.error("Cannot refresh packages, no provider set")
async def get_packages(self) -> List[str]:
raise self.server.error("Cannot retrieve packages, no provider set")
async def install_packages(self,
package_list: List[str],
**kwargs
) -> None:
raise self.server.error("Cannot install packages, no provider set")
async def upgrade_system(self) -> None:
raise self.server.error("Cannot upgrade packages, no provider set")
class AptCliProvider(BasePackageProvider):
APT_CMD = "sudo DEBIAN_FRONTEND=noninteractive apt-get"
async def refresh_packages(self, notify: bool = False) -> None:
await self.cmd_helper.run_cmd(
f"{self.APT_CMD} update", timeout=600., notify=notify)
async def get_packages(self) -> List[str]:
shell_cmd = self.cmd_helper.get_shell_command()
res = await shell_cmd.exec_cmd("apt list --upgradable", timeout=60.)
pkg_list = [p.strip() for p in res.split("\n") if p.strip()]
if pkg_list:
pkg_list = pkg_list[2:]
return [p.split("/", maxsplit=1)[0] for p in pkg_list]
return []
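# Illustrative parse (sample output; apt typically emits a CLI warning plus a
# "Listing..." header, which the [2:] slice above discards): given
#   WARNING: apt does not have a stable CLI interface. ...
#   Listing... Done
#   curl/stable 7.88.1-10 arm64 [upgradable from: 7.88.1-9]
# the parser keeps the package name before the first "/", yielding ["curl"].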
async def resolve_packages(self, package_list: List[str]) -> List[str]:
self.cmd_helper.notify_update_response("Resolving packages...")
search_regex = "|".join([f"^{pkg}$" for pkg in package_list])
cmd = f"apt-cache search --names-only \"{search_regex}\""
shell_cmd = self.cmd_helper.get_shell_command()
ret = await shell_cmd.exec_cmd(cmd, timeout=600.)
resolved = [
pkg.strip().split()[0] for pkg in ret.split("\n") if pkg.strip()
]
return [avail for avail in package_list if avail in resolved]
async def install_packages(self,
package_list: List[str],
**kwargs
) -> None:
timeout: float = kwargs.get('timeout', 300.)
retries: int = kwargs.get('retries', 3)
notify: bool = kwargs.get('notify', False)
await self.refresh_packages(notify=notify)
resolved = await self.resolve_packages(package_list)
if not resolved:
self.cmd_helper.notify_update_response("No packages detected")
return
logging.debug(f"Resolved packages: {resolved}")
pkgs = " ".join(resolved)
await self.cmd_helper.run_cmd(
f"{self.APT_CMD} install --yes {pkgs}", timeout=timeout,
attempts=retries, notify=notify)
async def upgrade_system(self) -> None:
await self.cmd_helper.run_cmd(
f"{self.APT_CMD} upgrade --yes", timeout=3600.,
notify=True)
class PackageKitProvider(BasePackageProvider):
def __init__(self, cmd_helper: CommandHelper) -> None:
super().__init__(cmd_helper)
dbus_mgr: DbusManager = self.server.lookup_component("dbus_manager")
self.dbus_mgr = dbus_mgr
self.pkgkit: Optional[ProxyInterface] = None
async def initialize(self) -> None:
if not self.dbus_mgr.is_connected():
raise self.server.error("DBus Connection Not available")
# Check for PolicyKit permissions
await self.dbus_mgr.check_permission(
"org.freedesktop.packagekit.system-sources-refresh",
"The Update Manager will fail to fetch package updates")
await self.dbus_mgr.check_permission(
"org.freedesktop.packagekit.package-install",
"The Update Manager will fail to install packages")
await self.dbus_mgr.check_permission(
"org.freedesktop.packagekit.system-update",
"The Update Manager will fail to update packages"
)
# Fetch the PackageKit DBus Interface
self.pkgkit = await self.dbus_mgr.get_interface(
"org.freedesktop.PackageKit",
"/org/freedesktop/PackageKit",
"org.freedesktop.PackageKit")
async def refresh_packages(self, notify: bool = False) -> None:
await self.run_transaction("refresh_cache", False, notify=notify)
async def get_packages(self) -> List[str]:
flags = PkEnum.Filter.NONE
pkgs = await self.run_transaction("get_updates", flags.value)
pkg_ids = [info['package_id'] for info in pkgs if 'package_id' in info]
return [pkg_id.split(";")[0] for pkg_id in pkg_ids]
async def install_packages(self,
package_list: List[str],
**kwargs
) -> None:
notify: bool = kwargs.get('notify', False)
await self.refresh_packages(notify=notify)
flags = (
PkEnum.Filter.NEWEST | PkEnum.Filter.NOT_INSTALLED |
PkEnum.Filter.BASENAME | PkEnum.Filter.ARCH
)
pkgs = await self.run_transaction("resolve", flags.value, package_list)
pkg_ids = [info['package_id'] for info in pkgs if 'package_id' in info]
if pkg_ids:
logging.debug(f"Installing Packages: {pkg_ids}")
tflag = PkEnum.TransactionFlag.ONLY_TRUSTED
await self.run_transaction("install_packages", tflag.value,
pkg_ids, notify=notify)
async def upgrade_system(self) -> None:
# Get Updates, Install Packages
flags = PkEnum.Filter.NONE
pkgs = await self.run_transaction("get_updates", flags.value)
pkg_ids = [info['package_id'] for info in pkgs if 'package_id' in info]
if pkg_ids:
logging.debug(f"Upgrading Packages: {pkg_ids}")
tflag = PkEnum.TransactionFlag.ONLY_TRUSTED
await self.run_transaction("update_packages", tflag.value,
pkg_ids, notify=True)
def create_transaction(self) -> PackageKitTransaction:
if self.pkgkit is None:
raise self.server.error("PackageKit Interface Not Available")
return PackageKitTransaction(self.dbus_mgr, self.pkgkit,
self.cmd_helper)
async def run_transaction(self,
method: str,
*args,
notify: bool = False
) -> Any:
transaction = self.create_transaction()
return await transaction.run(method, *args, notify=notify)
class PackageKitTransaction:
GET_PKG_ROLES = (
PkEnum.Role.RESOLVE | PkEnum.Role.GET_PACKAGES |
PkEnum.Role.GET_UPDATES
)
QUERY_ROLES = GET_PKG_ROLES | PkEnum.Role.GET_REPO_LIST
PROGRESS_STATUS = (
PkEnum.Status.RUNNING | PkEnum.Status.INSTALL |
PkEnum.Status.UPDATE
)
def __init__(self,
dbus_mgr: DbusManager,
pkgkit: ProxyInterface,
cmd_helper: CommandHelper
) -> None:
self.server = cmd_helper.get_server()
self.eventloop = self.server.get_event_loop()
self.cmd_helper = cmd_helper
self.dbus_mgr = dbus_mgr
self.pkgkit = pkgkit
# Transaction Properties
self.notify = False
self._status = PkEnum.Status.UNKNOWN
self._role = PkEnum.Role.UNKNOWN
self._tflags = PkEnum.TransactionFlag.NONE
self._percentage = 101
self._dl_remaining = 0
self.speed = 0
self.elapsed_time = 0
self.remaining_time = 0
self.caller_active = False
self.allow_cancel = True
self.uid = 0
# Transaction data tracking
self.tfut: Optional[asyncio.Future] = None
self.last_progress_notify_time: float = 0.
self.result: List[Dict[str, Any]] = []
self.err_msg: str = ""
def run(self,
method: str,
*args,
notify: bool = False
) -> Awaitable:
if self.tfut is not None:
raise self.server.error(
"PackageKit transaction can only be used once")
self.notify = notify
self.tfut = self.eventloop.create_future()
coro = self._start_transaction(method, *args)
self.eventloop.create_task(coro)
return self.tfut
async def _start_transaction(self,
method: str,
*args
) -> None:
assert self.tfut is not None
try:
# Create Transaction
tid = await self.pkgkit.call_create_transaction() # type: ignore
transaction, props = await self.dbus_mgr.get_interfaces(
"org.freedesktop.PackageKit", tid,
["org.freedesktop.PackageKit.Transaction",
"org.freedesktop.DBus.Properties"])
# Set interface callbacks
transaction.on_package(self._on_package_signal) # type: ignore
transaction.on_repo_detail( # type: ignore
self._on_repo_detail_signal)
transaction.on_item_progress( # type: ignore
self._on_item_progress_signal)
transaction.on_error_code(self._on_error_signal) # type: ignore
transaction.on_finished(self._on_finished_signal) # type: ignore
props.on_properties_changed( # type: ignore
self._on_properties_changed)
# Run method
logging.debug(f"PackageKit: Running transaction call_{method}")
func = getattr(transaction, f"call_{method}")
await func(*args)
except Exception as e:
self.tfut.set_exception(e)
def _on_package_signal(self,
info_code: int,
package_id: str,
summary: str
) -> None:
info = PkEnum.Info.from_index(info_code)
if self._role in self.GET_PKG_ROLES:
pkg_data = {
'package_id': package_id,
'info': info.desc,
'summary': summary
}
self.result.append(pkg_data)
else:
self._notify_package(info, package_id)
def _on_repo_detail_signal(self,
repo_id: str,
description: str,
enabled: bool
) -> None:
if self._role == PkEnum.Role.GET_REPO_LIST:
repo_data = {
"repo_id": repo_id,
"description": description,
"enabled": enabled
}
self.result.append(repo_data)
else:
self._notify_repo(repo_id, description)
def _on_item_progress_signal(self,
item_id: str,
status_code: int,
percent_complete: int
) -> None:
status = PkEnum.Status.from_index(status_code) # noqa: F841
# NOTE: This signal doesn't seem to fire predictably,
# nor does it seem to provide a consistent "percent complete"
# parameter.
# logging.debug(
# f"Role {self._role.name}: Item Progress Signal Received\n"
# f"Item ID: {item_id}\n"
# f"Percent Complete: {percent_complete}\n"
# f"Status: {status.desc}")
def _on_error_signal(self,
error_code: int,
details: str
) -> None:
err = PkEnum.Error.from_index(error_code)
self.err_msg = f"{err.name}: {details}"
def _on_finished_signal(self, exit_code: int, run_time: int) -> None:
if self.tfut is None:
return
ext = PkEnum.Exit.from_index(exit_code)
secs = run_time / 1000.
if ext == PkEnum.Exit.SUCCESS:
self.tfut.set_result(self.result)
else:
err = self.err_msg or ext.desc
server = self.cmd_helper.get_server()
self.tfut.set_exception(server.error(err))
msg = f"Transaction {self._role.desc}: Exit {ext.desc}, " \
f"Run time: {secs:.2f} seconds"
if self.notify:
self.cmd_helper.notify_update_response(msg)
logging.debug(msg)
def _on_properties_changed(self,
iface_name: str,
changed_props: Dict[str, Variant],
invalid_props: Dict[str, Variant]
) -> None:
for name, var in changed_props.items():
formatted = re.sub(r"(\w)([A-Z])", r"\g<1>_\g<2>", name).lower()
setattr(self, formatted, var.value)
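# Worked example (illustrative): the regex above converts PackageKit's
# CamelCase DBus property names to this class's snake_case attributes:
#
#   import re
#   name = "DownloadSizeRemaining"
#   re.sub(r"(\w)([A-Z])", r"\g<1>_\g<2>", name).lower()
#   # -> "download_size_remaining"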
def _notify_package(self, info: PkEnum.Info, package_id: str) -> None:
if self.notify:
if info == PkEnum.Info.FINISHED:
return
pkg_parts = package_id.split(";")
msg = f"{info.desc}: {pkg_parts[0]} ({pkg_parts[1]})"
self.cmd_helper.notify_update_response(msg)
def _notify_repo(self, repo_id: str, description: str) -> None:
if self.notify:
if not repo_id.strip():
repo_id = description
# TODO: May want to eliminate dups
msg = f"GET: {repo_id}"
self.cmd_helper.notify_update_response(msg)
def _notify_progress(self) -> None:
if self.notify and self._percentage <= 100:
msg = f"{self._status.desc}...{self._percentage}%"
if self._status == PkEnum.Status.DOWNLOAD and self._dl_remaining:
if self._dl_remaining < 1024:
msg += f", Remaining: {self._dl_remaining} B"
elif self._dl_remaining < 1048576:
msg += f", Remaining: {self._dl_remaining // 1024} KiB"
else:
msg += f", Remaining: {self._dl_remaining // 1048576} MiB"
if self.speed:
speed = self.speed // 8
if speed < 1024:
msg += f", Speed: {speed} B/s"
elif speed < 1048576:
msg += f", Speed: {speed // 1024} KiB/s"
else:
msg += f", Speed: {speed // 1048576} MiB/s"
self.cmd_helper.notify_update_response(msg)
@property
def role(self) -> PkEnum.Role:
return self._role
@role.setter
def role(self, role_code: int) -> None:
self._role = PkEnum.Role.from_index(role_code)
if self._role in self.QUERY_ROLES:
# Never Notify Queries
self.notify = False
if self.notify:
msg = f"Transaction {self._role.desc} started..."
self.cmd_helper.notify_update_response(msg)
logging.debug(f"PackageKit: Current Role: {self._role.desc}")
@property
def status(self) -> PkEnum.Status:
return self._status
@status.setter
def status(self, status_code: int) -> None:
self._status = PkEnum.Status.from_index(status_code)
self._percentage = 101
self.speed = 0
logging.debug(f"PackageKit: Current Status: {self._status.desc}")
@property
def transaction_flags(self) -> PkEnum.TransactionFlag:
return self._tflags
@transaction_flags.setter
def transaction_flags(self, bits: int) -> None:
self._tflags = PkEnum.TransactionFlag(bits)
@property
def percentage(self) -> int:
return self._percentage
@percentage.setter
def percentage(self, percent: int) -> None:
self._percentage = percent
if self._status in self.PROGRESS_STATUS:
self._notify_progress()
@property
def download_size_remaining(self) -> int:
return self._dl_remaining
@download_size_remaining.setter
def download_size_remaining(self, bytes_remaining: int) -> None:
self._dl_remaining = bytes_remaining
self._notify_progress()

File diff suppressed because it is too large

View File

@@ -1,19 +1,18 @@
# Zip Application Deployment implementation
#
# Copyright (C) 2021 Eric Callahan <arksine.code@gmail.com>
# Copyright (C) 2024 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import os
import pathlib
import json
import shutil
import re
import time
import zipfile
import logging
from .app_deploy import AppDeploy
from utils import verify_source
from .common import Channel, AppType
from ...utils import source_info
from ...utils import json_wrapper as jsonw
# Annotation imports
from typing import (
@@ -23,409 +22,402 @@ from typing import (
Optional,
Dict,
List,
Union,
cast
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ...confighelper import ConfigHelper
from .update_manager import CommandHelper
RINFO_KEYS = [
"git_version", "long_version", "commit_hash", "source_checksum",
"ignored_exts", "ignored_dirs", "build_date", "channel",
"owner_repo", "host_repo", "release_tag"
]
from ..file_manager.file_manager import FileManager
class ZipDeploy(AppDeploy):
def __init__(self, config: ConfigHelper, cmd_helper: CommandHelper) -> None:
super().__init__(config, cmd_helper)
self.need_channel_update = self.type != "zip"
self.official_repo: str = "?"
self.owner: str = "?"
# Extract repo from origin for validation
match = re.match(r"https?://(?:www\.)?github.com/([^/]+/[^.]+)",
self.origin)
if match is not None:
self.official_repo = match.group(1)
self.owner = self.official_repo.split('/')[0]
else:
raise config.error(
"Invalid url set for 'origin' option in section "
f"[{config.get_name()}]. Unable to extract owner/repo.")
self.host_repo: str = config.get('host_repo', self.official_repo)
self.package_list: List[str] = []
self.python_pkg_list: List[str] = []
self.release_download_info: Tuple[str, str, int] = ("?", "?", 0)
def __init__(
self,
config: ConfigHelper,
cmd_helper: CommandHelper
) -> None:
super().__init__(config, cmd_helper, "Zip Application")
self._configure_path(config, False)
if self.type == AppType.ZIP:
self._configure_virtualenv(config)
self._configure_dependencies(config)
self._configure_managed_services(config)
elif self.type == AppType.WEB:
self.prefix = f"Web Client {self.name}: "
self.repo = config.get('repo').strip().strip("/")
self.owner, self.project_name = self.repo.split("/", 1)
self.persistent_files: List[str] = []
self.warnings: List[str] = []
self.anomalies: List[str] = []
self.version: str = "?"
self.remote_version: str = "?"
self.rollback_version: str = "?"
self.rollback_repo: str = "?"
self.last_error: str = "?"
self._dl_info: Tuple[str, str, int] = ("?", "?", 0)
self._is_fallback: bool = False
self._is_prerelease: bool = False
self._path_writable: bool = False
self._configure_persistent_files(config)
@staticmethod
async def from_application(app: AppDeploy) -> ZipDeploy:
new_app = ZipDeploy(app.config, app.cmd_helper)
await new_app.reinstall()
return new_app
def _configure_persistent_files(self, config: ConfigHelper) -> None:
pfiles = config.getlist('persistent_files', None)
if pfiles is not None:
self.persistent_files = [pf.strip("/") for pf in pfiles]
for fname in (".version", "release_info.json"):
if fname in self.persistent_files:
raise config.error(
"Invalid value for option 'persistent_files': "
f"'{fname}' can not be persistent."
)
if (
self.type == AppType.ZIP and
self.virtualenv is not None and
self.virtualenv in self.path.parents
):
rel_path = str(self.virtualenv.relative_to(self.path))
if rel_path not in self.persistent_files:
self.persistent_files.append(rel_path)
if self.persistent_files:
self.log_info(f"Configured persistent files: {self.persistent_files}")
async def _validate_release_info(self) -> None:
self._is_valid = False
self._is_fallback = False
eventloop = self.server.get_event_loop()
self.warnings.clear()
repo_parent = source_info.find_git_repo(self.path)
homedir = pathlib.Path("~").expanduser()
if not self._path_writable:
self.warnings.append(
f"Location at option 'path: {self.path}' is not writable."
)
elif not self.path.is_dir():
self.warnings.append(
f"Location at option 'path: {self.path}' is not a directory."
)
elif repo_parent is not None and repo_parent != homedir:
self.warnings.append(
f"Location at option 'path: {self.path}' is within a git repo. Found "
f".git folder at '{repo_parent.joinpath('.git')}'"
)
else:
rinfo = self.path.joinpath("release_info.json")
if rinfo.is_file():
try:
data = await eventloop.run_in_thread(rinfo.read_text)
uinfo: Dict[str, str] = jsonw.loads(data)
project_name = uinfo["project_name"]
owner = uinfo["project_owner"]
self.version = uinfo["version"]
except Exception:
logging.exception("Failed to load release_info.json.")
else:
self._is_valid = True
detected_repo = f"{owner}/{project_name}"
if self.repo.lower() != detected_repo.lower():
self.anomalies.append(
f"Value at option 'repo: {self.repo}' does not match "
f"detected repo '{detected_repo}', falling back to "
"detected version."
)
self.repo = detected_repo
self.owner = owner
self.project_name = project_name
elif self.type == AppType.WEB:
version_path = self.path.joinpath(".version")
if version_path.is_file():
version = await eventloop.run_in_thread(version_path.read_text)
self.version = version.strip()
self._is_valid = await self._detect_fallback()
if not self._is_valid:
self.warnings.append("Failed to validate installation")
if self.server.is_debug_enabled():
self.log_info("Debug Enabled, overriding validity checks")
async def _detect_fallback(self) -> bool:
# Only used by "web" app types to fallback on the previous version info
fallback_defs = {
"mainsail": "mainsail-crew",
"fluidd": "fluidd-core"
}
for fname in ("manifest.json", "manifest.webmanifest"):
manifest = self.path.joinpath(fname)
eventloop = self.server.get_event_loop()
if manifest.is_file():
try:
mtext = await eventloop.run_in_thread(manifest.read_text)
mdata: Dict[str, Any] = jsonw.loads(mtext)
proj_name: str = mdata["name"].lower()
except Exception:
self.log_exc(f"Failed to load json from {manifest}")
continue
if proj_name in fallback_defs:
owner = fallback_defs[proj_name]
detected_repo = f"{owner}/{proj_name}"
if detected_repo != self.repo.lower():
self.anomalies.append(
f"Value at option 'repo: {self.repo}' does not match "
f"detected repo '{detected_repo}', falling back to "
"detected version."
)
self.repo = detected_repo
self.owner = owner
self.project_name = proj_name
self._is_fallback = True
return True
return False
async def initialize(self) -> Dict[str, Any]:
storage = await super().initialize()
self.source_checksum: str = storage.get("source_checksum", "?")
self.pristine = storage.get('pristine', False)
self.verified = storage.get('verified', False)
self.build_date: int = storage.get('build_date', 0)
self.full_version: str = storage.get('full_version', "?")
self.short_version: str = storage.get('short_version', "?")
self.commit_hash: str = storage.get('commit_hash', "?")
self.latest_hash: str = storage.get('latest_hash', "?")
self.latest_version: str = storage.get('latest_version', "?")
self.latest_checksum: str = storage.get('latest_checksum', "?")
self.latest_build_date: int = storage.get('latest_build_date', 0)
self.errors: List[str] = storage.get('errors', [])
self.commit_log: List[Dict[str, Any]] = storage.get('commit_log', [])
fm: FileManager = self.server.lookup_component("file_manager")
self._path_writable = not fm.check_reserved_path(
self.path, need_write=True, raise_error=False
)
if self._path_writable and not self.path.joinpath(".writeable").is_file():
fm.add_reserved_path(f"update_manager {self.name}", self.path)
await self._validate_release_info()
if self.version == "?":
self.version = storage.get("version", "?")
self.remote_version = storage.get('remote_version', "?")
self.rollback_version = storage.get('rollback_version', self.version)
self.rollback_repo = storage.get(
'rollback_repo', self.repo if self._is_valid else "?"
)
self.last_error = storage.get('last_error', "")
dl_info: List[Any] = storage.get('dl_info', ["?", "?", 0])
self.dl_info = cast(Tuple[str, str, int], tuple(dl_info))
if not self.needs_refresh():
self._log_zipapp_info()
return storage
def get_persistent_data(self) -> Dict[str, Any]:
storage = super().get_persistent_data()
storage.update({
'source_checksum': self.source_checksum,
'pristine': self.pristine,
'verified': self.verified,
'build_date': self.build_date,
'full_version': self.full_version,
'short_version': self.short_version,
'commit_hash': self.commit_hash,
'latest_hash': self.latest_hash,
'latest_version': self.latest_version,
'latest_checksum': self.latest_checksum,
'latest_build_date': self.latest_build_date,
'commit_log': self.commit_log,
'errors': self.errors,
"version": self.version,
"remote_version": self.remote_version,
"rollback_version": self.rollback_version,
"rollback_repo": self.rollback_repo,
"dl_info": list(self.dl_info),
"last_error": self.last_error
})
return storage
async def _parse_info_file(self, file_name: str) -> Dict[str, Any]:
info_file = self.path.joinpath(file_name)
if not info_file.exists():
self.log_info(f"Unable to locate file '{info_file}'")
return {}
try:
event_loop = self.server.get_event_loop()
info_bytes = await event_loop.run_in_thread(info_file.read_text)
info: Dict[str, Any] = json.loads(info_bytes)
except Exception:
self.log_exc(f"Unable to parse info file {file_name}")
info = {}
return info
def _get_tag_version(self, version_string: str) -> str:
tag_version: str = "?"
ver_match = re.match(r"v\d+\.\d+\.\d+-\d+", version_string)
if ver_match:
tag_version = ver_match.group()
return tag_version
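A quick check of the tag pattern used above; the version strings are invented examples:

import re

TAG_RE = re.compile(r"v\d+\.\d+\.\d+-\d+")

for ver in ("v0.8.0-64-g1234abc", "v1.2.10-3", "not-a-version"):
    match = TAG_RE.match(ver)
    print(ver, "->", match.group() if match else "?")
# v0.8.0-64-g1234abc -> v0.8.0-64
# v1.2.10-3 -> v1.2.10-3
# not-a-version -> ?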
async def refresh(self) -> None:
try:
await self._update_repo_state()
await self._validate_release_info()
await self._get_remote_version()
except Exception:
self.verified = False
self.log_exc("Error refreshing application state")
async def _update_repo_state(self) -> None:
self.errors = []
self._is_valid = False
self.verified = False
release_info = await self._parse_info_file(".release_info")
dep_info = await self._parse_info_file(".dependencies")
for key in RINFO_KEYS:
if key not in release_info:
self._add_error(f"Missing release info item: {key}")
if 'channel' in release_info:
local_channel = release_info['channel']
if self.channel == "stable" and local_channel == "beta":
self.need_channel_update = True
self.full_version = release_info.get('long_version', "?")
self.short_version = self._get_tag_version(
release_info.get('git_version', ""))
self.commit_hash = release_info.get('commit_hash', "?")
self.build_date = release_info.get('build_date', 0)
owner_repo = release_info.get('owner_repo', "?")
if self.official_repo != owner_repo:
self._add_error(
f"Owner repo mismatch. Received {owner_repo}, "
f"official: {self.official_repo}")
# validate the local source code
event_loop = self.server.get_event_loop()
res = await event_loop.run_in_thread(verify_source, self.path)
if res is not None:
self.source_checksum, self.pristine = res
if self.name in ["moonraker", "klipper"]:
self.server.add_log_rollover_item(
f"{self.name}_validation",
f"{self.name} checksum: {self.source_checksum}, "
f"pristine: {self.pristine}")
else:
self._add_error("Unable to validate source checksum")
self.source_checksum = ""
self.pristine = False
self.package_list = sorted(dep_info.get(
'debian', {}).get('packages', []))
self.python_pkg_list = sorted(dep_info.get('python', []))
# Retrieve version info from github to check for updates and
# validate local release info
host_repo = release_info.get('host_repo', "?")
release_tag = release_info.get('release_tag', "?")
if host_repo != self.host_repo:
self._add_error(
f"Host repo mismatch, received: {host_repo}, "
f"expected: {self.host_repo}. This could result in "
" a failed update.")
resource = f"repos/{self.host_repo}/releases"
current_release, latest_release = await self._fetch_github_releases(
resource, release_tag)
await self._validate_current_release(release_info, current_release)
if not self.errors:
self.verified = True
await self._process_latest_release(latest_release)
self._log_zipapp_info()
self._save_state()
async def _fetch_github_releases(self,
resource: str,
current_tag: Optional[str] = None
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
try:
client = self.cmd_helper.get_http_client()
resp = await client.github_api_request(resource, attempts=3)
resp.raise_for_status()
releases = resp.json()
assert isinstance(releases, list)
except Exception:
self.log_exc("Error fetching releases from GitHub")
return {}, {}
release: Dict[str, Any]
latest_release: Dict[str, Any] = {}
current_release: Dict[str, Any] = {}
for release in releases:
if not latest_release:
if self.channel != "stable":
# Allow the beta channel to update regardless
latest_release = release
elif not release['prerelease']:
# This is a stable release on the stable channel
latest_release = release
if current_tag is not None:
if not current_release and release['tag_name'] == current_tag:
current_release = release
if latest_release and current_release:
break
elif latest_release:
break
return current_release, latest_release
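The loop above reduces to a simple rule: non-stable channels take the newest release of any kind, while stable takes the newest release that is not a prerelease. A standalone sketch with invented release data:

# GitHub returns releases newest first; these dicts are invented examples.
releases = [
    {"tag_name": "v1.3.0-beta", "prerelease": True},
    {"tag_name": "v1.2.0", "prerelease": False},
]

def pick_latest(channel: str) -> dict:
    for release in releases:
        if channel != "stable" or not release["prerelease"]:
            return release
    return {}

assert pick_latest("beta")["tag_name"] == "v1.3.0-beta"
assert pick_latest("stable")["tag_name"] == "v1.2.0"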
async def _validate_current_release(self,
release_info: Dict[str, Any],
release: Dict[str, Any]
) -> None:
if not release:
self._add_error("Unable to find current release on GitHub")
return
asset_info = self._get_asset_urls(release, ["RELEASE_INFO"])
if "RELEASE_INFO" not in asset_info:
self._add_error(
"RELEASE_INFO not found in current release assets")
info_url, content_type, size = asset_info['RELEASE_INFO']
async def _fetch_github_version(
self, repo: Optional[str] = None, tag: Optional[str] = None
) -> Dict[str, Any]:
if repo is None:
if not self._is_valid:
self.log_info("Invalid Installation, aborting remote refresh")
return {}
repo = self.repo
if tag is not None:
resource = f"repos/{repo}/releases/tags/{tag}"
elif self.channel == Channel.STABLE:
resource = f"repos/{repo}/releases/latest"
else:
resource = f"repos/{repo}/releases?per_page=1"
client = self.cmd_helper.get_http_client()
resp = await client.github_api_request(
resource, attempts=3, retry_pause_time=.5
)
release: Union[List[Any], Dict[str, Any]] = {}
if resp.status_code == 304:
if resp.content:
# Not modified, however we need to restore state from
# cached content
release = resp.json()
else:
# Either not necessary or not possible to restore from cache
return {}
elif resp.has_error():
self.log_info(f"Github Request Error - {resp.error}")
self.last_error = str(resp.error)
return {}
else:
self.log_info("Current Release Info Validated")
release = resp.json()
result: Dict[str, Any] = {}
if isinstance(release, list):
if release:
result = release[0]
else:
result = release
self.last_error = ""
return result
async def _process_latest_release(self, release: Dict[str, Any]):
if not release:
self._add_error("Unable to find latest release on GitHub")
async def _get_remote_version(self) -> None:
result = await self._fetch_github_version()
if not result:
return
zip_file_name = f"{self.name}.zip"
asset_names = ["RELEASE_INFO", "COMMIT_LOG", zip_file_name]
asset_info = self._get_asset_urls(release, asset_names)
if "RELEASE_INFO" in asset_info:
asset_url, content_type, size = asset_info['RELEASE_INFO']
client = self.cmd_helper.get_http_client()
rinfo_bytes = await client.get_file(asset_url, content_type)
update_release_info: Dict[str, Any] = json.loads(rinfo_bytes)
update_info = update_release_info.get(self.name, {})
self.latest_hash = update_info.get('commit_hash', "?")
self.latest_checksum = update_info.get('source_checksum', "?")
self.latest_version = self._get_tag_version(
update_info.get('git_version', "?"))
self.latest_build_date = update_info.get('build_date', 0)
else:
self._add_error(
"RELEASE_INFO not found in latest release assets")
self.commit_log = []
if self.short_version != self.latest_version:
# Only report commit log if versions change
if "COMMIT_LOG" in asset_info:
asset_url, content_type, size = asset_info['COMMIT_LOG']
client = self.cmd_helper.get_http_client()
commit_bytes = await client.get_file(asset_url, content_type)
commit_info: Dict[str, Any] = json.loads(commit_bytes)
self.commit_log = commit_info.get(self.name, [])
if zip_file_name in asset_info:
self.release_download_info = asset_info[zip_file_name]
self._is_valid = True
else:
self.release_download_info = ("?", "?", 0)
self._add_error(f"Release asset {zip_file_name} not found")
def _get_asset_urls(self,
release: Dict[str, Any],
filenames: List[str]
) -> Dict[str, Tuple[str, str, int]]:
asset_info: Dict[str, Tuple[str, str, int]] = {}
asset: Dict[str, Any]
for asset in release.get('assets', []):
name = asset['name']
if name in filenames:
rinfo_url = asset['browser_download_url']
content_type = asset['content_type']
size = asset['size']
asset_info[name] = (rinfo_url, content_type, size)
filenames.remove(name)
if not filenames:
break
return asset_info
def _add_error(self, warning: str):
self.log_info(warning)
self.errors.append(warning)
async def _get_remote_version(self) -> None:
result = await self._fetch_github_version()
if not result:
return
self.remote_version = result.get('name', "?")
release_asset: Dict[str, Any] = result.get('assets', [{}])[0]
dl_url: str = release_asset.get('browser_download_url', "?")
content_type: str = release_asset.get('content_type', "?")
size: int = release_asset.get('size', 0)
self.dl_info = (dl_url, content_type, size)
self._is_prerelease = result.get('prerelease', False)
def _log_zipapp_info(self):
warn_str = ""
if self.warnings or self.anomalies:
warn_str = "\nWarnings:\n"
warn_str += "\n".join(
[f" {item}" for item in self.warnings + self.anomalies]
)
dl_url, content_type, size = self.dl_info
self.log_info(
"\nZip Application Distribution Detected\n"
f" Valid: {self._is_valid}\n"
f" Verified: {self.verified}\n"
f" Channel: {self.channel}\n"
f" Repo: {self.official_repo}\n"
f" Path: {self.path}\n"
f" Pristine: {self.pristine}\n"
f" Need Channel Update: {self.need_channel_update}\n"
f" Commits Behind: {len(self.commit_log)}\n"
f"Current Release Info:\n"
f" Source Checksum: {self.source_checksum}\n"
f" Commit SHA: {self.commit_hash}\n"
f" Long Version: {self.full_version}\n"
f" Short Version: {self.short_version}\n"
f" Build Date: {time.ctime(self.build_date)}\n"
f"Latest Available Release Info:\n"
f" Source Checksum: {self.latest_checksum}\n"
f" Commit SHA: {self.lastest_hash}\n"
f" Version: {self.latest_version}\n"
f" Build Date: {time.ctime(self.latest_build_date)}\n"
f" Download URL: {self.release_download_info[0]}\n"
f" Content Type: {self.release_download_info[1]}\n"
f" Download Size: {self.release_download_info[2]}"
f"Detected\n"
f"Repo: {self.repo}\n"
f"Channel: {self.channel}\n"
f"Path: {self.path}\n"
f"Local Version: {self.version}\n"
f"Remote Version: {self.remote_version}\n"
f"Valid: {self._is_valid}\n"
f"Fallback Detected: {self._is_fallback}\n"
f"Pre-release: {self._is_prerelease}\n"
f"Download Url: {dl_url}\n"
f"Download Size: {size}\n"
f"Content Type: {content_type}\n"
f"Rollback Version: {self.rollback_version}\n"
f"Rollback Repo: {self.rollback_repo}"
f"{warn_str}"
)
async def _update_dependencies(self,
npm_hash,
force: bool = False
) -> None:
new_deps = await self._parse_info_file('.dependencies')
system_pkgs = sorted(
new_deps.get('debian', {}).get('packages', []))
python_pkgs = sorted(new_deps.get('python', []))
if system_pkgs:
if force or system_pkgs != self.package_list:
await self._install_packages(system_pkgs)
if python_pkgs:
if force or python_pkgs != self.python_pkg_list:
await self._update_virtualenv(python_pkgs)
ret = await self._check_need_update(npm_hash, self.npm_pkg_json)
if force or ret:
if self.npm_pkg_json is not None:
self.notify_status("Updating Node Packages...")
try:
await self.cmd_helper.run_cmd(
"npm ci --only=prod", notify=True, timeout=600.,
cwd=str(self.path))
except Exception:
self.notify_status("Node Package Update failed")
def _extract_release(
self, persist_dir: pathlib.Path, release_file: pathlib.Path
) -> None:
if not persist_dir.exists():
persist_dir.mkdir()
if self.path.is_dir():
# find and move persistent files
for src_path in self.path.iterdir():
fname = src_path.name
if fname in self.persistent_files:
dest_path = persist_dir.joinpath(fname)
dest_dir = dest_path.parent
dest_dir.mkdir(parents=True, exist_ok=True)
shutil.move(str(src_path), str(dest_path))
shutil.rmtree(self.path)
self.path.mkdir()
with zipfile.ZipFile(release_file) as zf:
for zip_entry in zf.filelist:
dest = pathlib.Path(zf.extract(zip_entry, str(self.path)))
dest.chmod((zip_entry.external_attr >> 16) & 0o777)
# Move persistent files back into the updated directory
for src_path in persist_dir.iterdir():
dest_path = self.path.joinpath(src_path.name)
dest_dir = dest_path.parent
dest_dir.mkdir(parents=True, exist_ok=True)
shutil.move(str(src_path), str(dest_path))
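The chmod() during extraction works because the upper 16 bits of a zip entry's external_attr carry the Unix st_mode on archives built on Unix hosts. A small sketch that inspects those bits (the archive path is a placeholder):

import pathlib
import zipfile

def show_zip_modes(archive: pathlib.Path) -> None:
    # Print the permission bits stored for each entry, i.e. the same
    # value _extract_release restores with chmod() above.
    with zipfile.ZipFile(archive) as zf:
        for entry in zf.filelist:
            mode = (entry.external_attr >> 16) & 0o777
            print(f"{entry.filename}: {oct(mode)}")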
async def update(
self,
rollback_info: Optional[Tuple[str, str, int]] = None,
is_recover: bool = False,
force_dep_update: bool = False
) -> bool:
if not self._is_valid:
raise self.server.error(
f"{self.prefix}Invalid install detected, aborting update"
)
if rollback_info is not None:
dl_url, content_type, size = rollback_info
start_msg = "Rolling Back..." if not is_recover else "Recovering..."
else:
if self.remote_version == "?":
await self._get_remote_version()
if self.remote_version == "?":
raise self.server.error(
f"{self.prefix}Unable to locate update"
)
dl_url, content_type, size = self.dl_info
if self.version == self.remote_version:
# Already up to date
return False
start_msg = "Updating..."
if dl_url == "?":
raise self.server.error(f"{self.prefix}Invalid download url")
current_version = self.version
event_loop = self.server.get_event_loop()
self.notify_status(start_msg)
self.notify_status("Downloading Release...")
dep_info: Optional[Dict[str, Any]] = None
if self.type == AppType.ZIP:
dep_info = await self._collect_dependency_info()
td = await self.cmd_helper.create_tempdir(self.name, "app")
try:
tempdir = pathlib.Path(td.name)
temp_download_file = tempdir.joinpath(f"{self.name}.zip")
temp_persist_dir = tempdir.joinpath(self.name)
client = self.cmd_helper.get_http_client()
await client.download_file(
dl_url, content_type, temp_download_file, size,
self.cmd_helper.on_download_progress
)
self.notify_status(
f"Download Complete, extracting release to '{self.path}'"
)
await event_loop.run_in_thread(
self._extract_release, temp_persist_dir, temp_download_file
)
finally:
await event_loop.run_in_thread(td.cleanup)
if dep_info is not None:
await self._update_dependencies(dep_info, force_dep_update)
self.version = self.remote_version
await self._validate_release_info()
if self._is_valid and rollback_info is None:
self.rollback_version = current_version
self.rollback_repo = self.repo
self._log_zipapp_info()
self._save_state()
await self.restart_service()
self.notify_status("Update Finished...", is_complete=True)
msg = "Update Finished..." if rollback_info is None else "Rollback Complete"
self.notify_status(msg, is_complete=True)
return True
async def recover(
self, hard: bool = False, force_dep_update: bool = False
) -> None:
await self.update(self.dl_info, True, force_dep_update)
async def reinstall(self) -> None:
# Clear the persistent storage prior to a channel swap.
# After the next update is complete new data will be
# restored.
umdb = self.cmd_helper.get_umdb()
await umdb.pop(self.name, None)
await self.initialize()
await self.recover(force_dep_update=True)
async def rollback(self) -> bool:
if self.rollback_version == "?" or self.rollback_repo == "?":
raise self.server.error("Incomplete Rollback Data", False)
if self.rollback_version == self.version:
return False
result = await self._fetch_github_version(
self.rollback_repo, self.rollback_version
)
if not result:
raise self.server.error("Failed to retrieve release asset data")
release_asset: Dict[str, Any] = result.get('assets', [{}])[0]
dl_url: str = release_asset.get('browser_download_url', "?")
content_type: str = release_asset.get('content_type', "?")
size: int = release_asset.get('size', 0)
dl_info = (dl_url, content_type, size)
return await self.update(dl_info)
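All three entry points funnel into the same download path: a plain update() uses the freshly fetched dl_info, recover() replays the stored dl_info, and rollback() fetches the asset for the recorded rollback version. Each hands update() the same (url, content_type, size) triple; an invented example of its shape:

dl_info = (
    "https://github.com/example-owner/example-client/releases/download/v1.2.3/example-client.zip",
    "application/zip",
    4 * 1024 * 1024,
)
dl_url, content_type, size = dl_info
assert dl_url != "?" and size > 0  # mirrors the guards in update()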
def get_update_status(self) -> Dict[str, Any]:
status = super().get_update_status()
# XXX - Currently this reports status matching that
# of a git repo so as to not break existing client
# functionality. In the future it would be good to
# report values that are specific to a zip distribution.
status.update({
'detected_type': "zip",
'remote_alias': "origin",
'branch': "master",
'name': self.name,
'repo_name': self.project_name,
'owner': self.owner,
'version': self.version,
'remote_version': self.remote_version,
'rollback_version': self.rollback_version,
'current_hash': self.commit_hash,
'remote_hash': self.latest_hash,
'is_dirty': False,
'detached': not self.verified,
'commits_behind': self.commit_log,
'git_messages': self.errors,
'full_version_string': self.full_version,
'pristine': self.pristine,
'last_error': self.last_error,
'warnings': self.warnings,
'anomalies': self.anomalies
})
return status

View File

@@ -10,19 +10,20 @@ import ipaddress
import socket
import uuid
import logging
from ..common import RequestType
from typing import (
TYPE_CHECKING,
Optional,
Dict,
List,
Any,
Tuple
)
if TYPE_CHECKING:
from moonraker import Server
from confighelper import ConfigHelper
from websockets import WebRequest
from asyncio import Future
from ..server import Server
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .database import MoonrakerDatabase
from .machine import Machine
from .shell_command import ShellCommandFactory
@@ -33,7 +34,9 @@ if TYPE_CHECKING:
CAM_FIELDS = {
"name": "name", "service": "service", "target_fps": "targetFps",
"stream_url": "urlStream", "snapshot_url": "urlSnapshot",
"flip_horizontal": "flipX", "flip_vertical": "flipY"
"flip_horizontal": "flipX", "flip_vertical": "flipY",
"enabled": "enabled", "target_fps_idle": "targetFpsIdle",
"aspect_ratio": "aspectRatio", "icon": "icon"
}
class WebcamManager:
@@ -48,36 +51,68 @@ class WebcamManager:
self.webcams[webcam.name] = webcam
self.server.register_endpoint(
"/server/webcams/list", ["GET"], self._handle_webcam_list
"/server/webcams/list", RequestType.GET, self._handle_webcam_list
)
self.server.register_endpoint(
"/server/webcams/item", ["GET", "POST", "DELETE"],
"/server/webcams/item", RequestType.all(),
self._handle_webcam_request
)
self.server.register_endpoint(
"/server/webcams/test", ["POST"], self._handle_webcam_test
"/server/webcams/test", RequestType.POST, self._handle_webcam_test
)
self.server.register_notification("webcam:webcams_changed")
self.server.register_event_handler(
"machine:public_ip_changed", self._set_default_host_ip
)
async def component_init(self) -> None:
machine: Machine = self.server.lookup_component("machine")
pubnet = await machine.get_public_network()
ip: Optional[str] = pubnet.get("address")
default_host = f"http://{pubnet['hostname']}"
if ip is not None:
default_host = f"http://{ip}"
WebCam.set_default_host(default_host)
if machine.public_ip:
self._set_default_host_ip(machine.public_ip)
all_uids = [wc.uid for wc in self.webcams.values()]
db: MoonrakerDatabase = self.server.lookup_component("database")
saved_cams: Dict[str, Any] = await db.get_item("webcams", default={})
for cam_data in saved_cams.values():
db_cams: Dict[str, Dict[str, Any]] = await db.get_item("webcams", default={})
ro_info: List[str] = []
# Process configured cams
for uid, cam_data in db_cams.items():
try:
cam_data["uid"] = uid
webcam = WebCam.from_database(self.server, cam_data)
if uid in all_uids:
# Unlikely but possible collision between random UUID4
# and UUID5 generated from a configured webcam.
await db.delete_item("webcams", uid)
webcam.uid = self._get_guaranteed_uuid()
await self._save_cam(webcam, False)
ro_info.append(f"Detected webcam UID collision: {uid}")
all_uids.append(webcam.uid)
if webcam.name in self.webcams:
ro_info.append(
f"Detected webcam name collision: {webcam.name}, uuid: "
f"{uid}. This camera will be ignored."
)
continue
self.webcams[webcam.name] = webcam
except Exception:
logging.exception("Failed to process webcam from db")
continue
if ro_info:
self.server.add_log_rollover_item("webcam", "\n".join(ro_info))
def _set_default_host_ip(self, ip: str) -> None:
default_host = "http://127.0.0.1"
if ip:
try:
addr = ipaddress.ip_address(ip)
except Exception:
logging.debug(f"Invalid IP Recd: {ip}")
else:
if addr.version == 6:
default_host = f"http://[{addr}]"
else:
default_host = f"http://{addr}"
WebCam.set_default_host(default_host)
logging.info(f"Default public webcam address set: {default_host}")
def get_webcams(self) -> Dict[str, WebCam]:
return self.webcams
@@ -85,103 +120,113 @@ class WebcamManager:
def _list_webcams(self) -> List[Dict[str, Any]]:
return [wc.as_dict() for wc in self.webcams.values()]
async def _find_dbcam_by_uuid(
self, name: str
) -> Tuple[str, Dict[str, Any]]:
db: MoonrakerDatabase = self.server.lookup_component("database")
saved_cams: Dict[str, Dict[str, Any]]
saved_cams = await db.get_item("webcams", default={})
for uid, cam_data in saved_cams.items():
if name == cam_data["name"]:
return uid, cam_data
return "", {}
async def _save_cam(self, webcam: WebCam) -> None:
uid, cam_data = await self._find_dbcam_by_uuid(webcam.name)
if not uid:
uid = str(uuid.uuid4())
def _save_cam(self, webcam: WebCam, save_local: bool = True) -> Future:
if save_local:
self.webcams[webcam.name] = webcam
cam_data: Dict[str, Any] = {}
for mfield, dbfield in CAM_FIELDS.items():
cam_data[dbfield] = getattr(webcam, mfield)
cam_data["location"] = webcam.location
cam_data["rotation"] = webcam.rotation
if "icon" not in cam_data:
cam_data["icon"] = "mdi-webcam"
cam_data["extra_data"] = webcam.extra_data
db: MoonrakerDatabase = self.server.lookup_component("database")
db.insert_item("webcams", uid, cam_data)
return db.insert_item("webcams", webcam.uid, cam_data)
async def _delete_cam(self, webcam: WebCam) -> None:
uid, cam = await self._find_dbcam_by_uuid(webcam.name)
if not uid:
return
def _delete_cam(self, webcam: WebCam) -> Future:
db: MoonrakerDatabase = self.server.lookup_component("database")
db.delete_item("webcams", uid)
self.webcams.pop(webcam.name, None)
return db.delete_item("webcams", webcam.uid)
async def _handle_webcam_request(
self, web_request: WebRequest
) -> Dict[str, Any]:
action = web_request.get_action()
def _get_guaranteed_uuid(self) -> str:
cur_uids = [wc.uid for wc in self.webcams.values()]
while True:
uid = str(uuid.uuid4())
if uid not in cur_uids:
break
return uid
def get_cam_by_uid(self, uid: str) -> WebCam:
for cam in self.webcams.values():
if cam.uid == uid:
return cam
raise self.server.error(f"Webcam with UID {uid} not found", 404)
def _lookup_camera(
self, web_request: WebRequest, required: bool = True
) -> Optional[WebCam]:
args = web_request.get_args()
if "uid" in args:
return self.get_cam_by_uid(web_request.get_str("uid"))
name = web_request.get_str("name")
webcam = self.webcams.get(name, None)
if required and webcam is None:
raise self.server.error(f"Webcam {name} not found", 404)
return webcam
async def _handle_webcam_request(self, web_request: WebRequest) -> Dict[str, Any]:
req_type = web_request.get_request_type()
webcam = self._lookup_camera(web_request, req_type != RequestType.POST)
webcam_data: Dict[str, Any] = {}
if action == "GET":
if name not in self.webcams:
raise self.server.error(f"Webcam {name} not found", 404)
webcam_data = self.webcams[name].as_dict()
elif action == "POST":
if (
name in self.webcams and
self.webcams[name].source == "config"
):
raise self.server.error(
f"Cannot overwrite webcam '{name}' sourced from "
"Moonraker configuration"
)
webcam = WebCam.from_web_request(self.server, web_request)
self.webcams[name] = webcam
if req_type == RequestType.GET:
assert webcam is not None
webcam_data = webcam.as_dict()
elif req_type == RequestType.POST:
if webcam is not None:
if webcam.source == "config":
raise self.server.error(
f"Cannot overwrite webcam '{webcam.name}' sourced from "
"Moonraker configuration"
)
new_name = web_request.get_str("name", None)
if new_name is not None and webcam.name != new_name:
if new_name in self.webcams:
raise self.server.error(
f"Cannot rename webcam from '{webcam.name}' to "
f"'{new_name}'. Webcam with requested name '{new_name}' "
"already exists."
)
self.webcams.pop(webcam.name, None)
webcam.update(web_request)
else:
uid = self._get_guaranteed_uuid()
webcam = WebCam.from_web_request(self.server, web_request, uid)
await self._save_cam(webcam)
elif action == "DELETE":
if name not in self.webcams:
raise self.server.error(f"Webcam {name} not found", 404)
elif self.webcams[name].source == "config":
webcam_data = webcam.as_dict()
elif req_type == RequestType.DELETE:
assert webcam is not None
if webcam.source == "config":
raise self.server.error(
f"Cannot delete webcam '{name}' sourced from "
f"Cannot delete webcam '{webcam.name}' sourced from "
"Moonraker configuration"
)
webcam = self.webcams.pop(name)
webcam_data = webcam.as_dict()
await self._delete_cam(webcam)
if action != "GET":
self._delete_cam(webcam)
if req_type != RequestType.GET:
self.server.send_event(
"webcam:webcams_changed", {"webcams": self._list_webcams()}
)
return {"webcam": webcam_data}
async def _handle_webcam_list(
self, web_request: WebRequest
) -> Dict[str, Any]:
async def _handle_webcam_list(self, web_request: WebRequest) -> Dict[str, Any]:
return {"webcams": self._list_webcams()}
async def _handle_webcam_test(
self, web_request: WebRequest
) -> Dict[str, Any]:
name = web_request.get_str("name")
if name not in self.webcams:
raise self.server.error(f"Webcam '{name}' not found", 404)
async def _handle_webcam_test(self, web_request: WebRequest) -> Dict[str, Any]:
client: HttpClient = self.server.lookup_component("http_client")
cam = self.webcams[name]
webcam = self._lookup_camera(web_request)
assert webcam is not None
result: Dict[str, Any] = {
"name": name,
"name": webcam.name,
"snapshot_reachable": False
}
for img_type in ["snapshot", "stream"]:
try:
func = getattr(cam, f"get_{img_type}_url")
func = getattr(webcam, f"get_{img_type}_url")
result[f"{img_type}_url"] = await func(True)
except Exception:
logging.exception(f"Error Processing {img_type} url")
result[f"{img_type}_url"] = ""
if result.get("snapshot_url", "").startswith("http"):
url = client.escape_url(result["snapshot_url"])
url: str = result["snapshot_url"]
if url.startswith("http"):
ret = await client.get(url, connect_timeout=1., request_timeout=1.)
result["snapshot_reachable"] = not ret.has_error()
return result
@@ -189,18 +234,32 @@ class WebcamManager:
class WebCam:
_default_host: str = "http://127.0.0.1"
_protected_fields: List[str] = ["source", "uid"]
def __init__(self, server: Server, **kwargs) -> None:
self._server = server
self.name: str = kwargs["name"]
self.enabled: bool = kwargs["enabled"]
self.icon: str = kwargs["icon"]
self.aspect_ratio: str = kwargs["aspect_ratio"]
self.target_fps: int = kwargs["target_fps"]
self.target_fps_idle: int = kwargs["target_fps_idle"]
self.location: str = kwargs["location"]
self.service: str = kwargs["service"]
self.target_fps: int = kwargs["target_fps"]
self.stream_url: str = kwargs["stream_url"]
self.snapshot_url: str = kwargs["snapshot_url"]
self.flip_horizontal: bool = kwargs["flip_horizontal"]
self.flip_vertical: bool = kwargs["flip_vertical"]
self.rotation: int = kwargs["rotation"]
self.source: str = kwargs["source"]
self.extra_data: Dict[str, Any] = kwargs.get("extra_data", {})
self.uid: str = kwargs["uid"]
if self.rotation not in [0, 90, 180, 270]:
raise server.error(f"Invalid value for 'rotation': {self.rotation}")
prefix, sep, postfix = self.aspect_ratio.partition(":")
if not (prefix.isdigit() and sep == ":" and postfix.isdigit()):
raise server.error(
f"Invalid value for 'aspect_ratio': {self.aspect_ratio}"
)
def as_dict(self):
return {k: v for k, v in self.__dict__.items() if k[0] != "_"}
@@ -301,59 +360,107 @@ class WebCam:
pass
return url
def update(self, web_request: WebRequest) -> None:
valid_fields = [
f for f in self.__dict__.keys() if f[0] != "_"
and f not in self._protected_fields
]
for field in web_request.get_args().keys():
if field not in valid_fields:
continue
try:
attr_type = type(getattr(self, field))
except AttributeError:
continue
if attr_type is bool:
val: Any = web_request.get_boolean(field)
elif attr_type is int:
val = web_request.get_int(field)
elif attr_type is float:
val = web_request.get_float(field)
elif attr_type is str:
val = web_request.get_str(field)
else:
val = web_request.get(field)
setattr(self, field, val)
@staticmethod
def set_default_host(host: str) -> None:
WebCam._default_host = host
@classmethod
def from_config(cls, config: ConfigHelper) -> WebCam:
webcam: Dict[str, Any] = {}
webcam["name"] = config.get_name().split(maxsplit=1)[-1]
webcam["location"] = config.get("location", "printer")
webcam["service"] = config.get("service", "mjpegstreamer")
webcam["target_fps"] = config.getint("target_fps", 15)
webcam["stream_url"] = config.get("stream_url")
webcam["snapshot_url"] = config.get("snapshot_url")
webcam["flip_horizontal"] = config.getboolean("flip_horizontal", False)
webcam["flip_vertical"] = config.getboolean("flip_vertical", False)
webcam["rotation"] = config.getint("rotation", 0)
if webcam["rotation"] not in [0, 90, 180, 270]:
raise config.error("Invalid value for option 'rotation'")
webcam["source"] = "config"
return cls(config.get_server(), **webcam)
server = config.get_server()
name = config.get_name().split(maxsplit=1)[-1]
ns = uuid.UUID(server.get_app_args()["instance_uuid"])
try:
return cls(
server,
name=name,
enabled=config.getboolean("enabled", True),
icon=config.get("icon", "mdiWebcam"),
aspect_ratio=config.get("aspect_ratio", "4:3"),
target_fps=config.getint("target_fps", 15),
target_fps_idle=config.getint("target_fps_idle", 5),
location=config.get("location", "printer"),
service=config.get("service", "mjpegstreamer"),
stream_url=config.get("stream_url"),
snapshot_url=config.get("snapshot_url", ""),
flip_horizontal=config.getboolean("flip_horizontal", False),
flip_vertical=config.getboolean("flip_vertical", False),
rotation=config.getint("rotation", 0),
source="config",
uid=str(uuid.uuid5(ns, f"moonraker.webcam.{name}"))
)
except server.error as err:
raise config.error(str(err)) from err
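Because the UID is derived with uuid5 from the instance UUID and the section name, a configured [webcam ...] section keeps the same UID across restarts, unlike the random uuid4 used for webcams created over the API. A sketch with an invented namespace value:

import uuid

ns = uuid.UUID("12345678-1234-5678-1234-567812345678")  # stand-in instance_uuid
uid_a = uuid.uuid5(ns, "moonraker.webcam.printer_cam")
uid_b = uuid.uuid5(ns, "moonraker.webcam.printer_cam")
assert uid_a == uid_b                 # deterministic across restarts
assert uuid.uuid4() != uuid.uuid4()   # random per call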
@classmethod
def from_web_request(
cls, server: Server, web_request: WebRequest
cls, server: Server, web_request: WebRequest, uid: str
) -> WebCam:
webcam: Dict[str, Any] = {}
webcam["name"] = web_request.get_str("name")
webcam["location"] = web_request.get_str("location", "printer")
webcam["service"] = web_request.get_str("service", "mjpegstreamer")
webcam["target_fps"] = web_request.get_int("target_fps", 15)
webcam["stream_url"] = web_request.get_str("stream_url")
webcam["snapshot_url"] = web_request.get_str("snapshot_url")
webcam["flip_horizontal"] = web_request.get_boolean(
"flip_horizontal", False
name = web_request.get_str("name")
return cls(
server,
name=name,
enabled=web_request.get_boolean("enabled", True),
icon=web_request.get_str("icon", "mdiWebcam"),
aspect_ratio=web_request.get_str("aspect_ratio", "4:3"),
target_fps=web_request.get_int("target_fps", 15),
target_fps_idle=web_request.get_int("target_fps_idle", 5),
location=web_request.get_str("location", "printer"),
service=web_request.get_str("service", "mjpegstreamer"),
stream_url=web_request.get_str("stream_url"),
snapshot_url=web_request.get_str("snapshot_url", ""),
flip_horizontal=web_request.get_boolean("flip_horizontal", False),
flip_vertical=web_request.get_boolean("flip_vertical", False),
rotation=web_request.get_int("rotation", 0),
source="database",
extra_data=web_request.get("extra_data", {}),
uid=uid
)
webcam["flip_vertical"] = web_request.get_boolean(
"flip_vertical", False
)
webcam["rotation"] = web_request.get_str("rotation", 0)
if webcam["rotation"] not in [0, 90, 180, 270]:
raise server.error("Invalid value for parameter 'rotate'")
webcam["source"] = "database"
return cls(server, **webcam)
@classmethod
def from_database(cls, server: Server, cam_data: Dict[str, Any]) -> WebCam:
webcam: Dict[str, Any] = {}
for mfield, dbfield in CAM_FIELDS.items():
webcam[mfield] = cam_data[dbfield]
webcam["location"] = webcam.get("location", "printer")
webcam["rotation"] = webcam.get("rotation", 0)
webcam["source"] = "database"
return cls(server, **webcam)
return cls(
server,
name=str(cam_data["name"]),
enabled=bool(cam_data.get("enabled", True)),
icon=str(cam_data.get("icon", "mdiWebcam")),
aspect_ratio=str(cam_data.get("aspectRatio", "4:3")),
target_fps=int(cam_data.get("targetFps", 15)),
target_fps_idle=int(cam_data.get("targetFpsIdle", 5)),
location=str(cam_data.get("location", "printer")),
service=str(cam_data.get("service", "mjpegstreamer")),
stream_url=str(cam_data.get("urlStream", "")),
snapshot_url=str(cam_data.get("urlSnapshot", "")),
flip_horizontal=bool(cam_data.get("flipX", False)),
flip_vertical=bool(cam_data.get("flipY", False)),
rotation=int(cam_data.get("rotation", cam_data.get("rotate", 0))),
source="database",
extra_data=cam_data.get("extra_data", {}),
uid=cam_data["uid"]
)
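The database keys above (urlStream, flipX, targetFps, and so on) mirror the CAM_FIELDS table at the top of the file: _save_cam writes attributes out through that mapping and from_database reads them back. A round-trip sketch over a trimmed copy of the mapping:

# Subset of CAM_FIELDS: attribute name -> database field name.
FIELDS = {"stream_url": "urlStream", "flip_horizontal": "flipX"}

attrs = {"stream_url": "/webcam/?action=stream", "flip_horizontal": True}
record = {db_key: attrs[attr] for attr, db_key in FIELDS.items()}     # save
restored = {attr: record[db_key] for attr, db_key in FIELDS.items()}  # load
assert restored == attrs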
def load_component(config: ConfigHelper) -> WebcamManager:
return WebcamManager(config)

View File

@@ -0,0 +1,498 @@
# Websocket Request/Response Handler
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import logging
import asyncio
from tornado.websocket import WebSocketHandler, WebSocketClosedError
from tornado.web import HTTPError
from ..common import (
RequestType,
WebRequest,
BaseRemoteConnection,
TransportType,
)
from ..utils import ServerError, parse_ip_address
# Annotation imports
from typing import (
TYPE_CHECKING,
Any,
Optional,
Callable,
Coroutine,
Tuple,
Union,
Dict,
List,
)
if TYPE_CHECKING:
from ..server import Server
from .klippy_connection import KlippyConnection as Klippy
from ..confighelper import ConfigHelper
from .application import MoonrakerApp
from .extensions import ExtensionManager
from .authorization import Authorization
from ..utils import IPAddress
ConvType = Union[str, bool, float, int]
ArgVal = Union[None, int, float, bool, str]
RPCCallback = Callable[..., Coroutine]
AuthComp = Optional[Authorization]
CLIENT_TYPES = ["web", "mobile", "desktop", "display", "bot", "agent", "other"]
class WebsocketManager:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.clients: Dict[int, BaseRemoteConnection] = {}
self.bridge_connections: Dict[int, BridgeSocket] = {}
self.closed_event: Optional[asyncio.Event] = None
app: MoonrakerApp = self.server.lookup_component("application")
app.register_websocket_handler("/websocket", WebSocket)
app.register_websocket_handler("/klippysocket", BridgeSocket)
self.server.register_endpoint(
"/server/websocket/id", RequestType.GET, self._handle_id_request,
TransportType.WEBSOCKET
)
self.server.register_endpoint(
"/server/connection/identify", RequestType.POST, self._handle_identify,
TransportType.WEBSOCKET, auth_required=False
)
def register_notification(
self,
event_name: str,
notify_name: Optional[str] = None,
event_type: Optional[str] = None
) -> None:
if notify_name is None:
notify_name = event_name.split(':')[-1]
if event_type == "logout":
def notify_handler(*args):
self.notify_clients(notify_name, args)
self._process_logout(*args)
else:
def notify_handler(*args):
self.notify_clients(notify_name, args)
self.server.register_event_handler(event_name, notify_handler)
async def _handle_id_request(self, web_request: WebRequest) -> Dict[str, int]:
sc = web_request.get_client_connection()
assert sc is not None
return {'websocket_id': sc.uid}
async def _handle_identify(self, web_request: WebRequest) -> Dict[str, int]:
sc = web_request.get_client_connection()
assert sc is not None
if sc.identified:
raise self.server.error(
f"Connection already identified: {sc.client_data}"
)
name = web_request.get_str("client_name")
version = web_request.get_str("version")
client_type: str = web_request.get_str("type").lower()
url = web_request.get_str("url")
sc.authenticate(
token=web_request.get_str("access_token", None),
api_key=web_request.get_str("api_key", None)
)
if client_type not in CLIENT_TYPES:
raise self.server.error(f"Invalid Client Type: {client_type}")
sc.client_data = {
"name": name,
"version": version,
"type": client_type,
"url": url
}
if client_type == "agent":
extensions: ExtensionManager
extensions = self.server.lookup_component("extensions")
try:
extensions.register_agent(sc)
except ServerError:
sc.client_data["type"] = ""
raise
logging.info(
f"Websocket {sc.uid} Client Identified - "
f"Name: {name}, Version: {version}, Type: {client_type}"
)
self.server.send_event("websockets:client_identified", sc)
return {'connection_id': sc.uid}
def _process_logout(self, user: Dict[str, Any]) -> None:
if "username" not in user:
return
name = user["username"]
for sc in self.clients.values():
sc.on_user_logout(name)
def has_socket(self, ws_id: int) -> bool:
return ws_id in self.clients
def get_client(self, uid: int) -> Optional[BaseRemoteConnection]:
return self.clients.get(uid, None)
def get_client_ws(self, ws_id: int) -> Optional[WebSocket]:
sc = self.clients.get(ws_id, None)
if sc is None or not isinstance(sc, WebSocket):
return None
return sc
def get_clients_by_type(
self, client_type: str
) -> List[BaseRemoteConnection]:
if not client_type:
return []
ret: List[BaseRemoteConnection] = []
for sc in self.clients.values():
if sc.client_data.get("type", "") == client_type.lower():
ret.append(sc)
return ret
def get_clients_by_name(self, name: str) -> List[BaseRemoteConnection]:
if not name:
return []
ret: List[BaseRemoteConnection] = []
for sc in self.clients.values():
if sc.client_data.get("name", "").lower() == name.lower():
ret.append(sc)
return ret
def get_unidentified_clients(self) -> List[BaseRemoteConnection]:
ret: List[BaseRemoteConnection] = []
for sc in self.clients.values():
if not sc.client_data:
ret.append(sc)
return ret
def add_client(self, sc: BaseRemoteConnection) -> None:
self.clients[sc.uid] = sc
self.server.send_event("websockets:client_added", sc)
logging.debug(f"New Websocket Added: {sc.uid}")
def remove_client(self, sc: BaseRemoteConnection) -> None:
old_sc = self.clients.pop(sc.uid, None)
if old_sc is not None:
self.server.send_event("websockets:client_removed", sc)
logging.debug(f"Websocket Removed: {sc.uid}")
self._check_closed_event()
def add_bridge_connection(self, bc: BridgeSocket) -> None:
self.bridge_connections[bc.uid] = bc
logging.debug(f"New Bridge Connection Added: {bc.uid}")
def remove_bridge_connection(self, bc: BridgeSocket) -> None:
old_bc = self.bridge_connections.pop(bc.uid, None)
if old_bc is not None:
logging.debug(f"Bridge Connection Removed: {bc.uid}")
self._check_closed_event()
def _check_closed_event(self) -> None:
if (
self.closed_event is not None and
not self.clients and
not self.bridge_connections
):
self.closed_event.set()
def notify_clients(
self,
name: str,
data: Union[List, Tuple] = [],
mask: List[int] = []
) -> None:
msg: Dict[str, Any] = {'jsonrpc': "2.0", 'method': "notify_" + name}
if data:
msg['params'] = data
for sc in list(self.clients.values()):
if sc.uid in mask or sc.need_auth:
continue
sc.queue_message(msg)
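Every event forwarded by notify_clients reaches the socket as a JSON-RPC 2.0 notification: no id field, so clients must not reply. A sketch of the wire format, using an invented payload:

import json

msg = {"jsonrpc": "2.0", "method": "notify_proc_stat_update"}
data = [{"cpu_usage": 12.5}]  # invented event payload
if data:  # params is omitted entirely when the event carries no data
    msg["params"] = data
print(json.dumps(msg))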
def get_count(self) -> int:
return len(self.clients)
async def close(self) -> None:
if not self.clients:
return
self.closed_event = asyncio.Event()
for bc in list(self.bridge_connections.values()):
bc.close_socket(1001, "Server Shutdown")
for sc in list(self.clients.values()):
sc.close_socket(1001, "Server Shutdown")
try:
await asyncio.wait_for(self.closed_event.wait(), 2.)
except asyncio.TimeoutError:
pass
self.closed_event = None
class WebSocket(WebSocketHandler, BaseRemoteConnection):
connection_count: int = 0
def initialize(self) -> None:
self.on_create(self.settings['server'])
self._ip_addr = parse_ip_address(self.request.remote_ip or "")
self.last_pong_time: float = self.eventloop.get_loop_time()
self.cors_allowed: bool = False
@property
def ip_addr(self) -> Optional[IPAddress]:
return self._ip_addr
@property
def hostname(self) -> str:
return self.request.host_name
def get_current_user(self) -> Any:
return self._user_info
def open(self, *args, **kwargs) -> None:
self.__class__.connection_count += 1
self.set_nodelay(True)
self._connected_time = self.eventloop.get_loop_time()
agent = self.request.headers.get("User-Agent", "")
is_proxy = False
if (
"X-Forwarded-For" in self.request.headers or
"X-Real-Ip" in self.request.headers
):
is_proxy = True
logging.info(f"Websocket Opened: ID: {self.uid}, "
f"Proxied: {is_proxy}, "
f"User Agent: {agent}, "
f"Host Name: {self.hostname}")
self.wsm.add_client(self)
def on_message(self, message: Union[bytes, str]) -> None:
self.eventloop.register_callback(self._process_message, message)
def on_pong(self, data: bytes) -> None:
self.last_pong_time = self.eventloop.get_loop_time()
def on_close(self) -> None:
self.is_closed = True
self.__class__.connection_count -= 1
kconn: Klippy = self.server.lookup_component("klippy_connection")
kconn.remove_subscription(self)
self.message_buf = []
now = self.eventloop.get_loop_time()
pong_elapsed = now - self.last_pong_time
for resp in self.pending_responses.values():
resp.set_exception(ServerError("Client Socket Disconnected", 500))
self.pending_responses = {}
logging.info(f"Websocket Closed: ID: {self.uid} "
f"Close Code: {self.close_code}, "
f"Close Reason: {self.close_reason}, "
f"Pong Time Elapsed: {pong_elapsed:.2f}")
if self._client_data["type"] == "agent":
extensions: ExtensionManager
extensions = self.server.lookup_component("extensions")
extensions.remove_agent(self)
self.wsm.remove_client(self)
async def write_to_socket(self, message: Union[bytes, str]) -> None:
try:
await self.write_message(message)
except WebSocketClosedError:
self.is_closed = True
self.message_buf.clear()
logging.info(
f"Websocket closed while writing: {self.uid}")
except Exception:
logging.exception(
f"Error sending data over websocket: {self.uid}")
def check_origin(self, origin: str) -> bool:
if not super(WebSocket, self).check_origin(origin):
return self.cors_allowed
return True
def on_user_logout(self, user: str) -> bool:
if super().on_user_logout(user):
self._need_auth = True
return True
return False
# Check Authorized User
async def prepare(self) -> None:
max_conns = self.settings["max_websocket_connections"]
if self.__class__.connection_count >= max_conns:
raise self.server.error(
"Maximum Number of Websocket Connections Reached"
)
auth: AuthComp = self.server.lookup_component('authorization', None)
if auth is not None:
try:
self._user_info = await auth.authenticate_request(self.request)
except Exception as e:
logging.info(f"Websocket Failed Authentication: {e}")
self._user_info = None
self._need_auth = True
if "Origin" in self.request.headers:
origin = self.request.headers.get("Origin")
else:
origin = self.request.headers.get("Sec-Websocket-Origin", None)
self.cors_allowed = await auth.check_cors(origin)
def close_socket(self, code: int, reason: str) -> None:
self.close(code, reason)
class BridgeSocket(WebSocketHandler):
def initialize(self) -> None:
self.server: Server = self.settings['server']
self.wsm: WebsocketManager = self.server.lookup_component("websockets")
self.eventloop = self.server.get_event_loop()
self.uid = id(self)
self._ip_addr = parse_ip_address(self.request.remote_ip or "")
self.last_pong_time: float = self.eventloop.get_loop_time()
self.is_closed = False
self.klippy_writer: Optional[asyncio.StreamWriter] = None
self.klippy_write_buf: List[bytes] = []
self.klippy_queue_busy: bool = False
self.cors_allowed: bool = False
@property
def ip_addr(self) -> Optional[IPAddress]:
return self._ip_addr
@property
def hostname(self) -> str:
return self.request.host_name
def open(self, *args, **kwargs) -> None:
WebSocket.connection_count += 1
self.set_nodelay(True)
self._connected_time = self.eventloop.get_loop_time()
agent = self.request.headers.get("User-Agent", "")
is_proxy = False
if (
"X-Forwarded-For" in self.request.headers or
"X-Real-Ip" in self.request.headers
):
is_proxy = True
logging.info(f"Bridge Socket Opened: ID: {self.uid}, "
f"Proxied: {is_proxy}, "
f"User Agent: {agent}, "
f"Host Name: {self.hostname}")
self.wsm.add_bridge_connection(self)
def on_message(self, message: Union[bytes, str]) -> None:
if isinstance(message, str):
message = message.encode(encoding="utf-8")
self.klippy_write_buf.append(message)
if self.klippy_queue_busy:
return
self.klippy_queue_busy = True
self.eventloop.register_callback(self._write_klippy_messages)
async def _write_klippy_messages(self) -> None:
while self.klippy_write_buf:
if self.klippy_writer is None or self.is_closed:
break
msg = self.klippy_write_buf.pop(0)
try:
self.klippy_writer.write(msg + b"\x03")
await self.klippy_writer.drain()
except asyncio.CancelledError:
raise
except Exception:
if not self.is_closed:
logging.debug("Klippy Disconnection From _write_request()")
self.close(1001, "Klippy Disconnected")
break
self.klippy_queue_busy = False
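Messages crossing the bridge are framed with a trailing 0x03 byte: the writer appends it here, and _read_unix_stream below splits on it with readuntil. A minimal framing sketch:

frames = [b'{"id": 1}', b'{"id": 2}']
stream = b"".join(f + b"\x03" for f in frames)  # writer side
parsed = stream.split(b"\x03")[:-1]             # reader side
assert parsed == frames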
def on_pong(self, data: bytes) -> None:
self.last_pong_time = self.eventloop.get_loop_time()
def on_close(self) -> None:
WebSocket.connection_count -= 1
self.is_closed = True
self.klippy_write_buf.clear()
if self.klippy_writer is not None:
self.klippy_writer.close()
self.klippy_writer = None
now = self.eventloop.get_loop_time()
pong_elapsed = now - self.last_pong_time
logging.info(f"Bridge Socket Closed: ID: {self.uid} "
f"Close Code: {self.close_code}, "
f"Close Reason: {self.close_reason}, "
f"Pong Time Elapsed: {pong_elapsed:.2f}")
self.wsm.remove_bridge_connection(self)
async def _read_unix_stream(self, reader: asyncio.StreamReader) -> None:
errors_remaining: int = 10
while not reader.at_eof():
try:
data = memoryview(await reader.readuntil(b'\x03'))
except (ConnectionError, asyncio.IncompleteReadError):
break
except asyncio.CancelledError:
logging.exception("Klippy Stream Read Cancelled")
raise
except Exception:
logging.exception("Klippy Stream Read Error")
errors_remaining -= 1
if not errors_remaining or self.is_closed:
break
continue
try:
await self.write_message(data[:-1].tobytes())
except WebSocketClosedError:
logging.info(
f"Bridge closed while writing: {self.uid}")
break
except asyncio.CancelledError:
raise
except Exception:
logging.exception(
f"Error sending data over Bridge: {self.uid}")
errors_remaining -= 1
if not errors_remaining or self.is_closed:
break
continue
errors_remaining = 10
if not self.is_closed:
logging.debug("Bridge Disconnection From _read_unix_stream()")
self.close_socket(1001, "Klippy Disconnected")
def check_origin(self, origin: str) -> bool:
if not super().check_origin(origin):
return self.cors_allowed
return True
# Check Authorized User
async def prepare(self) -> None:
max_conns = self.settings["max_websocket_connections"]
if WebSocket.connection_count >= max_conns:
raise self.server.error(
"Maximum Number of Bridge Connections Reached"
)
auth: AuthComp = self.server.lookup_component("authorization", None)
if auth is not None:
self.current_user = await auth.authenticate_request(self.request)
if "Origin" in self.request.headers:
origin = self.request.headers.get("Origin")
else:
origin = self.request.headers.get("Sec-Websocket-Origin", None)
self.cors_allowed = await auth.check_cors(origin)
kconn: Klippy = self.server.lookup_component("klippy_connection")
try:
reader, writer = await kconn.open_klippy_connection()
except ServerError as err:
raise HTTPError(err.status_code, str(err)) from None
except Exception as e:
raise HTTPError(503, "Failed to open connection to Klippy") from e
self.klippy_writer = writer
self.eventloop.register_callback(self._read_unix_stream, reader)
def close_socket(self, code: int, reason: str) -> None:
self.close(code, reason)
def load_component(config: ConfigHelper) -> WebsocketManager:
return WebsocketManager(config)

View File

@@ -11,11 +11,12 @@
from __future__ import annotations
from enum import Enum
import logging
import json
import asyncio
import serial_asyncio
from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest
from ..utils import json_wrapper as jsonw
from ..common import RequestType
# Annotation imports
from typing import (
@@ -28,10 +29,8 @@ from typing import (
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from websockets import WebRequest
from . import klippy_apis
APIComp = klippy_apis.KlippyAPI
from ..confighelper import ConfigHelper
from ..common import WebRequest
class OnOff(str, Enum):
on: str = "on"
@@ -295,7 +294,7 @@ class StripHttp(Strip):
request = HTTPRequest(url=self.url,
method="POST",
headers=headers,
body=json.dumps(state),
body=jsonw.dumps(state),
connect_timeout=self.timeout,
request_timeout=self.timeout)
for i in range(retries):
@@ -331,7 +330,7 @@ class StripSerial(Strip):
logging.debug(f"WLED: serial:{self.serialport} json:{state}")
self.ser.write(json.dumps(state).encode())
self.ser.write(jsonw.dumps(state))
def close(self: StripSerial):
if hasattr(self, 'ser'):
@@ -390,23 +389,24 @@ class WLED:
# As moonraker is about making things a web api, let's try it
# Yes, this is largely a cut-n-paste from power.py
self.server.register_endpoint(
"/machine/wled/strips", ["GET"],
self._handle_list_strips)
"/machine/wled/strips", RequestType.GET, self._handle_list_strips
)
self.server.register_endpoint(
"/machine/wled/status", ["GET"],
self._handle_batch_wled_request)
"/machine/wled/status", RequestType.GET, self._handle_batch_wled_request
)
self.server.register_endpoint(
"/machine/wled/on", ["POST"],
self._handle_batch_wled_request)
"/machine/wled/on", RequestType.POST, self._handle_batch_wled_request
)
self.server.register_endpoint(
"/machine/wled/off", ["POST"],
self._handle_batch_wled_request)
"/machine/wled/off", RequestType.POST, self._handle_batch_wled_request
)
self.server.register_endpoint(
"/machine/wled/toggle", ["POST"],
self._handle_batch_wled_request)
"/machine/wled/toggle", RequestType.POST, self._handle_batch_wled_request
)
self.server.register_endpoint(
"/machine/wled/strip", ["GET", "POST"],
self._handle_single_wled_request)
"/machine/wled/strip", RequestType.GET | RequestType.POST,
self._handle_single_wled_request
)
async def component_init(self) -> None:
try:
@@ -447,9 +447,15 @@ class WLED:
# Full control of wled
# state: True, False, "on", "off"
# preset: wled preset (int) to use (ignored if state False or "Off")
async def set_wled_state(self: WLED, strip: str, state: str = None,
preset: int = -1, brightness: int = -1,
intensity: int = -1, speed: int = -1) -> None:
async def set_wled_state(
self: WLED,
strip: str,
state: Optional[str] = None,
preset: int = -1,
brightness: int = -1,
intensity: int = -1,
speed: int = -1
) -> None:
status = None
if isinstance(state, bool):
@@ -462,7 +468,8 @@ class WLED:
if status is None and preset == -1 and brightness == -1 and \
intensity == -1 and speed == -1:
logging.info(
f"Invalid state received but no control or preset data passed")
"Invalid state received but no control or preset data passed"
)
return
if strip not in self.strips:
@@ -516,19 +523,19 @@ class WLED:
intensity: int = web_request.get_int('intensity', -1)
speed: int = web_request.get_int('speed', -1)
req_action = web_request.get_action()
req_type = web_request.get_request_type()
if strip_name not in self.strips:
raise self.server.error(f"No valid strip named {strip_name}")
strip = self.strips[strip_name]
if req_action == 'GET':
if req_type == RequestType.GET:
return {strip_name: strip.get_strip_info()}
elif req_action == "POST":
elif req_type == RequestType.POST:
action = web_request.get_str('action').lower()
if action not in ["on", "off", "toggle", "control"]:
raise self.server.error(
f"Invalid requested action '{action}'")
result = await self._process_request(strip, action, preset,
brightness, intensity, speed)
raise self.server.error(f"Invalid requested action '{action}'")
result = await self._process_request(
strip, action, preset, brightness, intensity, speed
)
return {strip_name: result}
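For reference, a hedged sketch of driving the single-strip endpoint registered above from outside Moonraker. The host, port, strip name, and the `strip` query parameter name are assumptions; the action and preset parameters come from the handler code:

import requests  # assumed available

base = "http://printer.local:7125"  # invented Moonraker address
# Read back a strip's state
state = requests.get(f"{base}/machine/wled/strip", params={"strip": "case"}).json()
# Turn the strip on with a preset, mirroring the parameters parsed above
requests.post(
    f"{base}/machine/wled/strip",
    params={"strip": "case", "action": "on", "preset": 3},
)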
async def _handle_batch_wled_request(self: WLED,

View File

@@ -7,15 +7,32 @@ from __future__ import annotations
import socket
import asyncio
import logging
import ipaddress
import random
import uuid
from itertools import cycle
from email.utils import formatdate
from zeroconf import IPVersion
from zeroconf.asyncio import AsyncServiceInfo, AsyncZeroconf
from ..common import RequestType, TransportType
from typing import TYPE_CHECKING, Any, Dict, Iterator, List, Optional
from typing import (
TYPE_CHECKING,
Any,
Dict,
Iterator,
List,
Optional,
Tuple
)
if TYPE_CHECKING:
from confighelper import ConfigHelper
from ..confighelper import ConfigHelper
from ..common import WebRequest
from .application import MoonrakerApp
from .machine import Machine
ZC_SERVICE_TYPE = "_moonraker._tcp.local."
class AsyncRunner:
def __init__(self, ip_version: IPVersion) -> None:
@@ -48,54 +65,355 @@ class AsyncRunner:
class ZeroconfRegistrar:
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.runner = AsyncRunner(IPVersion.All)
hi = self.server.get_host_info()
addresses: Optional[List[bytes]] = [socket.inet_aton(hi["address"])]
self.bound_all = hi["address"] == "0.0.0.0"
self.service_info = self._build_service_info(addresses)
self.mdns_name = config.get("mdns_hostname", hi["hostname"])
addr: str = hi["address"]
self.ip_version = IPVersion.All
if addr.lower() == "all":
addr = "::"
else:
addr_obj = ipaddress.ip_address(addr)
self.ip_version = (
IPVersion.V4Only if addr_obj.version == 4 else IPVersion.V6Only
)
self.runner = AsyncRunner(self.ip_version)
self.cfg_addr = addr
self.bound_all = addr in ["0.0.0.0", "::"]
if self.bound_all:
self.server.register_event_handler(
"machine:net_state_changed", self._update_service)
self.ssdp_server: Optional[SSDPServer] = None
if config.getboolean("enable_ssdp", False):
self.ssdp_server = SSDPServer(config)
async def component_init(self) -> None:
logging.info("Starting Zeroconf services")
app: MoonrakerApp = self.server.lookup_component("application")
machine: Machine = self.server.lookup_component("machine")
app_args = self.server.get_app_args()
instance_uuid: str = app_args["instance_uuid"]
if (
machine.get_provider_type().startswith("systemd") and
"unit_name" in machine.get_moonraker_service_info()
):
# Use the name of the systemd service unit to identify service
instance_name = machine.unit_name.capitalize()
else:
# Use the UUID. First 8 hex digits should be unique enough
instance_name = f"Moonraker-{instance_uuid[:8]}"
hi = self.server.get_host_info()
host = self.mdns_name
zc_service_props = {
"uuid": instance_uuid,
"https_port": hi["ssl_port"] if app.https_enabled() else "",
"version": app_args["software_version"],
"route_prefix": app.route_prefix
}
if self.bound_all:
machine: Machine = self.server.lookup_component("machine")
if not host:
host = machine.public_ip
network = machine.get_system_info()["network"]
addresses = [x for x in self._extract_ip_addresses(network)]
self.service_info = self._build_service_info(addresses)
addresses: List[bytes] = [x for x in self._extract_ip_addresses(network)]
else:
if not host:
host = self.cfg_addr
host_addr = ipaddress.ip_address(self.cfg_addr)
addresses = [host_addr.packed]
zc_service_name = f"{instance_name} @ {host}.{ZC_SERVICE_TYPE}"
server_name = self.mdns_name or instance_name.lower()
self.service_info = AsyncServiceInfo(
ZC_SERVICE_TYPE,
zc_service_name,
addresses=addresses,
port=hi["port"],
properties=zc_service_props,
server=f"{server_name}.local.",
)
await self.runner.register_services([self.service_info])
if self.ssdp_server is not None:
addr = self.cfg_addr if not self.bound_all else machine.public_ip
if not addr:
addr = f"{self.mdns_name}.local"
name = f"{instance_name} ({host})"
if len(name) > 64:
name = instance_name
await self.ssdp_server.start()
self.ssdp_server.register_service(name, addr, hi["port"])
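On the client side, the registered service can be found by browsing the `_moonraker._tcp.local.` type; a minimal sketch using the same zeroconf package (the listener body is illustrative):
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

class MoonrakerListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info is not None:
            # properties carry the uuid, version and route_prefix set above
            print(name, info.parsed_addresses(), info.port, info.properties)
    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass
    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_moonraker._tcp.local.", MoonrakerListener())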
async def close(self) -> None:
await self.runner.unregister_services([self.service_info])
if self.ssdp_server is not None:
await self.ssdp_server.stop()
async def _update_service(self, network: Dict[str, Any]) -> None:
if self.bound_all:
addresses = [x for x in self._extract_ip_addresses(network)]
self.service_info = self._build_service_info(addresses)
self.service_info.addresses = addresses
await self.runner.update_services([self.service_info])
def _build_service_info(self,
addresses: Optional[List[bytes]] = None
) -> AsyncServiceInfo:
hi = self.server.get_host_info()
return AsyncServiceInfo(
"_moonraker._tcp.local.",
f"Moonraker Instance on {hi['hostname']}._moonraker._tcp.local.",
addresses=addresses,
port=hi["port"],
properties={"path": "/"},
server=f"{hi['hostname']}.local.",
)
def _extract_ip_addresses(self, network: Dict[str, Any]) -> Iterator[bytes]:
for ifname, ifinfo in network.items():
for addr_info in ifinfo["ip_addresses"]:
if addr_info["is_link_local"]:
continue
is_ipv6 = addr_info['family'] == "ipv6"
family = socket.AF_INET6 if is_ipv6 else socket.AF_INET
yield socket.inet_pton(family, addr_info["address"])
addr_obj = ipaddress.ip_address(addr_info["address"])
ver = addr_obj.version
if (
(self.ip_version == IPVersion.V4Only and ver == 6) or
(self.ip_version == IPVersion.V6Only and ver == 4)
):
continue
yield addr_obj.packed
SSDP_ADDR = ("239.255.255.250", 1900)
SSDP_SERVER_ID = "Moonraker SSDP/UPNP Server"
SSDP_MAX_AGE = 1800
SSDP_DEVICE_TYPE = "urn:arksine.github.io:device:Moonraker:1"
SSDP_DEVICE_XML = """
<?xml version="1.0"?>
<root xmlns="urn:schemas-upnp-org:device-1-0" configId="{config_id}">
<specVersion>
<major>2</major>
<minor>0</minor>
</specVersion>
<device>
<deviceType>{device_type}</deviceType>
<friendlyName>{friendly_name}</friendlyName>
<manufacturer>Arksine</manufacturer>
<manufacturerURL>https://github.com/Arksine/moonraker</manufacturerURL>
<modelDescription>API Server for Klipper</modelDescription>
<modelName>Moonraker</modelName>
<modelNumber>{model_number}</modelNumber>
<modelURL>https://github.com/Arksine/moonraker</modelURL>
<serialNumber>{serial_number}</serialNumber>
<UDN>uuid:{device_uuid}</UDN>
<presentationURL>{presentation_url}</presentationURL>
</device>
</root>
""".strip()
class SSDPServer(asyncio.protocols.DatagramProtocol):
def __init__(self, config: ConfigHelper) -> None:
self.server = config.get_server()
self.unique_id = uuid.UUID(self.server.get_app_args()["instance_uuid"])
self.name: str = "Moonraker"
self.base_url: str = ""
self.response_headers: List[str] = []
self.registered: bool = False
self.running: bool = False
self.close_fut: Optional[asyncio.Future] = None
self.response_handle: Optional[asyncio.TimerHandle] = None
eventloop = self.server.get_event_loop()
self.boot_id = int(eventloop.get_loop_time())
self.config_id = 1
self.ad_timer = eventloop.register_timer(self._advertise_presence)
self.server.register_endpoint(
"/server/zeroconf/ssdp",
RequestType.GET,
self._handle_xml_request,
transports=TransportType.HTTP,
wrap_result=False,
content_type="application/xml",
auth_required=False
)
def _create_ssdp_socket(
self,
source_addr: Tuple[str, int] = ("0.0.0.0", 0),
target_addr: Tuple[str, int] = SSDP_ADDR
) -> socket.socket:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
except AttributeError:
pass
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
source_ip = socket.inet_aton(source_addr[0])
target_ip = socket.inet_aton(target_addr[0])
ip_combo = target_ip + source_ip
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, source_ip)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, ip_combo)
return sock
async def start(self) -> None:
if self.running:
return
try:
sock = self._create_ssdp_socket()
sock.settimeout(0)
sock.setblocking(False)
sock.bind(("", SSDP_ADDR[1]))
_loop = asyncio.get_running_loop()
ret = await _loop.create_datagram_endpoint(lambda: self, sock=sock)
self.transport, _ = ret
except (socket.error, OSError):
return
self.running = True
async def stop(self) -> None:
if not self.running:
return
self.running = False
self.ad_timer.stop()
if self.response_handle is not None:
self.response_handle.cancel()
self.response_handle = None
if self.transport.is_closing():
logging.info("Transport already closing")
return
for notification in self._build_notifications("ssdp:byebye"):
self.transport.sendto(notification, SSDP_ADDR)
self.close_fut = self.server.get_event_loop().create_future()
self.transport.close()
try:
await asyncio.wait_for(self.close_fut, 2.)
except asyncio.TimeoutError:
pass
self.close_fut = None
def register_service(
self, name: str, host_name_or_ip: str, port: int
) -> None:
if len(name) > 64:
name = name[:64]
self.name = name
app: MoonrakerApp = self.server.lookup_component("application")
self.base_url = f"http://{host_name_or_ip}:{port}{app.route_prefix}"
self.response_headers = [
f"USN: uuid:{self.unique_id}::upnp:rootdevice",
f"LOCATION: {self.base_url}/server/zeroconf/ssdp",
"ST: upnp:rootdevice",
"EXT:",
f"SERVER: {SSDP_SERVER_ID}",
f"CACHE-CONTROL: max-age={SSDP_MAX_AGE}",
f"BOOTID.UPNP.ORG: {self.boot_id}",
f"CONFIGID.UPNP.ORG: {self.config_id}",
]
self.registered = True
advertisements = self._build_notifications("ssdp:alive")
if self.running:
for ad in advertisements:
self.transport.sendto(ad, SSDP_ADDR)
self.advertisements = cycle(advertisements)
self.ad_timer.start()
async def _handle_xml_request(self, web_request: WebRequest) -> str:
if not self.registered:
raise self.server.error("Moonraker SSDP Device not registered", 404)
app_args = self.server.get_app_args()
return SSDP_DEVICE_XML.format(
device_type=SSDP_DEVICE_TYPE,
config_id=str(self.config_id),
friendly_name=self.name,
model_number=app_args["software_version"],
serial_number=self.unique_id.hex,
device_uuid=str(self.unique_id),
presentation_url=self.base_url
)
def _advertise_presence(self, eventtime: float) -> float:
if self.running and self.registered:
cur_ad = next(self.advertisements)
self.transport.sendto(cur_ad, SSDP_ADDR)
delay = random.uniform(SSDP_MAX_AGE / 6., SSDP_MAX_AGE / 3.)
return eventtime + delay
def connection_made(
self, transport: asyncio.transports.BaseTransport
) -> None:
logging.debug("SSDP Server Connected")
def connection_lost(self, exc: Exception | None) -> None:
logging.debug("SSDP Server Disconnected")
if self.close_fut is not None:
self.close_fut.set_result(None)
def pause_writing(self) -> None:
logging.debug("SSDP Pause Writing Requested")
def resume_writing(self) -> None:
logging.debug("SSDP Resume Writing Requested")
def datagram_received(self, data: bytes, addr: tuple[str | Any, int]) -> None:
if not self.registered:
return
try:
parts = data.decode().split("\r\n\r\n", maxsplit=1)
header = parts[0]
except ValueError:
logging.exception("Data Decode Error")
return
hlines = header.splitlines()
ssdp_command = hlines[0].strip()
headers = {}
for line in hlines[1:]:
parts = line.strip().split(":", maxsplit=1)
if len(parts) < 2:
continue
headers[parts[0].upper()] = parts[1].strip()
if (
ssdp_command != "M-SEARCH * HTTP/1.1" or
headers.get("MAN") != '"ssdp:discover"'
):
# Not a discovery request
return
if headers.get("ST") not in ["upnp:rootdevice", "ssdp:all"]:
# Service Type doesn't apply
return
if self.response_handle is not None:
# response in progress
return
if "MX" in headers:
delay_time = random.uniform(0, float(headers["MX"]))
eventloop = self.server.get_event_loop()
self.response_handle = eventloop.delay_callback(
delay_time, self._respond_to_discovery, addr
)
else:
self._respond_to_discovery(addr)
def _respond_to_discovery(self, addr: tuple[str | Any, int]) -> None:
if not self.running:
return
self.response_handle = None
response: List[str] = ["HTTP/1.1 200 OK"]
response.extend(self.response_headers)
response.append(f"DATE: {formatdate(usegmt=True)}")
response.extend(["", ""])
self.transport.sendto("\r\n".join(response).encode(), addr)
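The checks in `datagram_received` mean only well-formed discovery probes are answered; a minimal sketch of a matching M-SEARCH request (plain blocking sockets for brevity):
import socket

msg = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: upnp:rootdevice",
    "", ""
]).encode()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(msg, ("239.255.255.250", 1900))
data, addr = sock.recvfrom(4096)  # the HTTP/1.1 200 OK built in _respond_to_discovery
print(data.decode())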
def _build_notifications(self, nts: str) -> List[bytes]:
notifications: List[bytes] = []
notify_types = [
("upnp:rootdevice", f"uuid:{self.unique_id}::upnp:rootdevice"),
(f"uuid:{self.unique_id}", f"uuid:{self.unique_id}"),
(SSDP_DEVICE_TYPE, f"uuid:{self.unique_id}::{SSDP_DEVICE_TYPE}")
]
for (nt, usn) in notify_types:
notifications.append(
"\r\n".join([
"NOTIFY * HTTP/1.1",
f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}",
f"NTS: {nts}",
f"NT: {nt}",
f"USN: {usn}",
f"LOCATION: {self.base_url}/server/zeroconf/ssdp",
"EXT:",
f"SERVER: {SSDP_SERVER_ID}",
f"CACHE-CONTROL: max-age={SSDP_MAX_AGE}",
f"BOOTID.UPNP.ORG: {self.boot_id}",
f"CONFIGID.UPNP.ORG: {self.config_id}",
"",
""
]).encode()
)
return notifications
def error_received(self, exc: Exception) -> None:
logging.info(f"SSDP Server Error: {exc}")
def load_component(config: ConfigHelper) -> ZeroconfRegistrar:

File diff suppressed because it is too large
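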

View File

@@ -5,6 +5,8 @@
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import os
import contextlib
import asyncio
import inspect
import functools
@@ -15,23 +17,36 @@ from typing import (
TYPE_CHECKING,
Awaitable,
Callable,
Coroutine,
Optional,
Tuple,
TypeVar,
Union
)
_uvl_var = os.getenv("MOONRAKER_ENABLE_UVLOOP", "y").lower()
_uvl_enabled = False
if _uvl_var in ["y", "yes", "true"]:
with contextlib.suppress(ImportError):
import uvloop
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
_uvl_enabled = True
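Because the check runs at import time, uvloop can be opted out by setting the variable before the module is imported; a minimal sketch:
import os
# Any value outside "y"/"yes"/"true" keeps the default asyncio policy
os.environ["MOONRAKER_ENABLE_UVLOOP"] = "n"
from moonraker.eventloop import EventLoop  # import after the override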
if TYPE_CHECKING:
from asyncio import AbstractEventLoop
_T = TypeVar("_T")
FlexCallback = Callable[..., Optional[Awaitable]]
TimerCallback = Callable[[float], Union[float, Awaitable[float]]]
class EventLoop:
UVLOOP_ENABLED = _uvl_enabled
TimeoutError = asyncio.TimeoutError
def __init__(self) -> None:
self.reset()
@property
def asyncio_loop(self) -> AbstractEventLoop:
return self.aioloop
def reset(self) -> None:
self.aioloop = self._create_new_loop()
self.add_signal_handler = self.aioloop.add_signal_handler
@@ -67,11 +82,16 @@ class EventLoop:
*args,
**kwargs
) -> None:
if inspect.iscoroutinefunction(callback):
self.aioloop.create_task(callback(*args, **kwargs)) # type: ignore
else:
self.aioloop.call_soon(
functools.partial(callback, *args, **kwargs))
async def _wrapper():
try:
ret = callback(*args, **kwargs)
if inspect.isawaitable(ret):
await ret
except asyncio.CancelledError:
raise
except Exception:
logging.exception("Error Running Callback")
self.aioloop.create_task(_wrapper())
def delay_callback(self,
delay: float,
@@ -79,23 +99,14 @@ class EventLoop:
*args,
**kwargs
) -> asyncio.TimerHandle:
if inspect.iscoroutinefunction(callback):
return self.aioloop.call_later(
delay, self._async_callback,
functools.partial(callback, *args, **kwargs))
else:
return self.aioloop.call_later(
delay, functools.partial(callback, *args, **kwargs))
return self.aioloop.call_later(
delay, self.register_callback,
functools.partial(callback, *args, **kwargs)
)
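After this change both plain callables and coroutine functions go through the same wrapper, and exceptions are logged rather than lost; a minimal usage sketch (the callbacks are illustrative):
def on_sync(msg: str) -> None:
    logging.info(msg)

async def on_async(msg: str) -> None:
    logging.info(msg)

eventloop.register_callback(on_sync, "runs on the next loop iteration")
eventloop.register_callback(on_async, "awaited inside the wrapper task")
handle = eventloop.delay_callback(5., on_async, "runs in five seconds")
handle.cancel()  # the returned TimerHandle can still be cancelled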
def register_timer(self, callback: TimerCallback):
return FlexTimer(self, callback)
def _async_callback(self, callback: Callable[[], Coroutine]) -> None:
# This wrapper delays creation of the coroutine object. In the
# event that a callback is cancelled this prevents "coroutine
# was never awaited" warnings in asyncio
self.aioloop.create_task(callback())
def run_in_thread(self,
callback: Callable[..., _T],
*args
@@ -158,12 +169,18 @@ class FlexTimer:
self.eventloop = eventloop
self.callback = callback
self.timer_handle: Optional[asyncio.TimerHandle] = None
self.timer_task: Optional[asyncio.Task] = None
self.running: bool = False
def in_callback(self) -> bool:
return self.timer_task is not None and not self.timer_task.done()
def start(self, delay: float = 0.):
if self.running:
return
self.running = True
if self.in_callback():
return
call_time = self.eventloop.get_loop_time() + delay
self.timer_handle = self.eventloop.call_at(
call_time, self._schedule_task)
@@ -176,9 +193,14 @@ class FlexTimer:
self.timer_handle.cancel()
self.timer_handle = None
async def wait_timer_done(self) -> None:
if self.timer_task is None:
return
await self.timer_task
def _schedule_task(self):
self.timer_handle = None
self.eventloop.create_task(self._call_wrapper())
self.timer_task = self.eventloop.create_task(self._call_wrapper())
def is_running(self) -> bool:
return self.running
@@ -186,8 +208,14 @@ class FlexTimer:
async def _call_wrapper(self):
if not self.running:
return
ret = self.callback(self.eventloop.get_loop_time())
if isinstance(ret, Awaitable):
ret = await ret
try:
ret = self.callback(self.eventloop.get_loop_time())
if isinstance(ret, Awaitable):
ret = await ret
except Exception:
self.running = False
raise
finally:
self.timer_task = None
if self.running:
self.timer_handle = self.eventloop.call_at(ret, self._schedule_task)
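A timer callback receives the current loop time and returns the absolute time of its next invocation; a minimal sketch of a two-second poller (an `eventloop` instance is assumed):
def _poll(eventtime: float) -> float:
    logging.debug("polling")
    return eventtime + 2.

timer = eventloop.register_timer(_poll)
timer.start()
# later: timer.stop(); with this change an exception raised in _poll also stops it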

165
moonraker/loghelper.py Normal file
View File

@@ -0,0 +1,165 @@
# Log Management
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import logging
import logging.handlers
import time
import os
import sys
import asyncio
import platform
from queue import SimpleQueue as Queue
from .common import RequestType
# Annotation imports
from typing import (
TYPE_CHECKING,
Optional,
Awaitable,
Dict,
List,
Any,
)
if TYPE_CHECKING:
from .server import Server
from .common import WebRequest
from .components.klippy_connection import KlippyConnection
# Coroutine friendly QueueHandler courtesy of Martijn Pieters:
# https://www.zopatista.com/python/2019/05/11/asyncio-logging/
class LocalQueueHandler(logging.handlers.QueueHandler):
def emit(self, record: logging.LogRecord) -> None:
# Removed the call to self.prepare(), handle task cancellation
try:
self.enqueue(record)
except asyncio.CancelledError:
raise
except Exception:
self.handleError(record)
# Timed Rotating File Handler, based on Klipper's implementation
class MoonrakerLoggingHandler(logging.handlers.TimedRotatingFileHandler):
def __init__(self, app_args: Dict[str, Any], **kwargs) -> None:
super().__init__(app_args['log_file'], **kwargs)
self.app_args = app_args
self.rollover_info: Dict[str, str] = {}
def set_rollover_info(self, name: str, item: str) -> None:
self.rollover_info[name] = item
def doRollover(self) -> None:
super().doRollover()
self.write_header()
def write_header(self) -> None:
if self.stream is None:
return
strtime = time.asctime(time.gmtime())
header = f"{'-'*20} Log Start | {strtime} {'-'*20}\n"
self.stream.write(header)
self.stream.write(f"platform: {platform.platform(terse=True)}\n")
app_section = "\n".join([f"{k}: {v}" for k, v in self.app_args.items()])
self.stream.write(app_section + "\n")
if self.rollover_info:
lines = [line for line in self.rollover_info.values() if line]
self.stream.write("\n".join(lines) + "\n")
class LogManager:
def __init__(
self, app_args: Dict[str, Any], startup_warnings: List[str]
) -> None:
root_logger = logging.getLogger()
while root_logger.hasHandlers():
root_logger.removeHandler(root_logger.handlers[0])
queue: Queue = Queue()
queue_handler = LocalQueueHandler(queue)
root_logger.addHandler(queue_handler)
root_logger.setLevel(logging.INFO)
stdout_hdlr = logging.StreamHandler(sys.stdout)
stdout_fmt = logging.Formatter(
'[%(filename)s:%(funcName)s()] - %(message)s')
stdout_hdlr.setFormatter(stdout_fmt)
app_args_str = f"platform: {platform.platform(terse=True)}\n"
app_args_str += "\n".join([f"{k}: {v}" for k, v in app_args.items()])
sys.stdout.write(f"\nApplication Info:\n{app_args_str}\n")
self.file_hdlr: Optional[MoonrakerLoggingHandler] = None
self.listener: Optional[logging.handlers.QueueListener] = None
log_file: str = app_args.get('log_file', "")
if log_file:
try:
self.file_hdlr = MoonrakerLoggingHandler(
app_args, when='midnight', backupCount=2)
formatter = logging.Formatter(
'%(asctime)s [%(filename)s:%(funcName)s()] - %(message)s')
self.file_hdlr.setFormatter(formatter)
self.listener = logging.handlers.QueueListener(
queue, self.file_hdlr, stdout_hdlr)
self.file_hdlr.write_header()
except Exception:
log_file = os.path.normpath(log_file)
dir_name = os.path.dirname(log_file)
startup_warnings.append(
f"Unable to create log file at '{log_file}'. "
f"Make sure that the folder '{dir_name}' exists "
"and Moonraker has Read/Write access to the folder. "
)
if self.listener is None:
self.listener = logging.handlers.QueueListener(
queue, stdout_hdlr)
self.listener.start()
def set_server(self, server: Server) -> None:
self.server = server
self.server.register_endpoint(
"/server/logs/rollover", RequestType.POST, self._handle_log_rollover
)
def set_rollover_info(self, name: str, item: str) -> None:
if self.file_hdlr is not None:
self.file_hdlr.set_rollover_info(name, item)
def rollover_log(self) -> Awaitable[None]:
if self.file_hdlr is None:
raise self.server.error("File Logging Disabled")
eventloop = self.server.get_event_loop()
return eventloop.run_in_thread(self.file_hdlr.doRollover)
def stop_logging(self):
self.listener.stop()
async def _handle_log_rollover(
self, web_request: WebRequest
) -> Dict[str, Any]:
log_apps = ["moonraker", "klipper"]
app = web_request.get_str("application", None)
result: Dict[str, Any] = {"rolled_over": [], "failed": {}}
if app is not None:
if app not in log_apps:
raise self.server.error(f"Unknown application {app}")
log_apps = [app]
if "moonraker" in log_apps:
try:
ret = self.rollover_log()
if ret is not None:
await ret
except asyncio.CancelledError:
raise
except Exception as e:
result["failed"]["moonraker"] = str(e)
else:
result["rolled_over"].append("moonraker")
if "klipper" in log_apps:
kconn: KlippyConnection
kconn = self.server.lookup_component("klippy_connection")
try:
await kconn.rollover_log()
except self.server.error as e:
result["failed"]["klipper"] = str(e)
else:
result["rolled_over"].append("klipper")
return result
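The endpoint takes an optional `application` argument restricting the rollover to one log; a minimal HTTP sketch (host and port are placeholders):
import requests

resp = requests.post(
    "http://printer.local:7125/server/logs/rollover",
    json={"application": "moonraker"}  # omit to roll over moonraker and klipper
)
print(resp.json())  # wraps the {"rolled_over": [...], "failed": {...}} dict built above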

View File

@@ -1,517 +1,17 @@
#!/usr/bin/env python3
# Moonraker - HTTP/Websocket API Server for Klipper
# Legacy entry point for Moonraker
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
# Copyright (C) 2022 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import pathlib
import sys
import argparse
import importlib
import os
import io
import time
import socket
import logging
import signal
import confighelper
import utils
import asyncio
from eventloop import EventLoop
from app import MoonrakerApp
from klippy_connection import KlippyConnection
from utils import ServerError, SentinelClass
# Annotation imports
from typing import (
TYPE_CHECKING,
Any,
Optional,
Callable,
Coroutine,
Dict,
List,
Tuple,
Union,
TypeVar,
)
if TYPE_CHECKING:
from websockets import WebRequest, WebsocketManager
from components.file_manager.file_manager import FileManager
FlexCallback = Callable[..., Optional[Coroutine]]
_T = TypeVar("_T")
API_VERSION = (1, 0, 5)
CORE_COMPONENTS = [
'dbus_manager', 'database', 'file_manager', 'klippy_apis',
'machine', 'data_store', 'shell_command', 'proc_stats',
'job_state', 'job_queue', 'http_client', 'announcements',
'webcam', 'extensions',
]
SENTINEL = SentinelClass.get_instance()
class Server:
error = ServerError
def __init__(self,
args: Dict[str, Any],
file_logger: Optional[utils.MoonrakerLoggingHandler],
event_loop: EventLoop
) -> None:
self.event_loop = event_loop
self.file_logger = file_logger
self.app_args = args
self.config = config = self._parse_config()
self.host: str = config.get('host', "0.0.0.0")
self.port: int = config.getint('port', 7125)
self.ssl_port: int = config.getint('ssl_port', 7130)
self.exit_reason: str = ""
self.server_running: bool = False
# Configure Debug Logging
self.debug = config.getboolean('enable_debug_logging', False)
asyncio_debug = config.getboolean('enable_asyncio_debug', False)
log_level = logging.DEBUG if self.debug else logging.INFO
logging.getLogger().setLevel(log_level)
self.event_loop.set_debug(asyncio_debug)
# Event initialization
self.events: Dict[str, List[FlexCallback]] = {}
self.components: Dict[str, Any] = {}
self.failed_components: List[str] = []
self.warnings: List[str] = []
self.klippy_connection = KlippyConnection(config)
# Tornado Application/Server
self.moonraker_app = app = MoonrakerApp(config)
self.register_endpoint = app.register_local_handler
self.register_static_file_handler = app.register_static_file_handler
self.register_upload_handler = app.register_upload_handler
self.register_api_transport = app.register_api_transport
log_warn = args.get('log_warning', "")
if log_warn:
self.add_warning(log_warn)
cfg_warn = args.get("config_warning", "")
if cfg_warn:
self.add_warning(cfg_warn)
self.register_endpoint(
"/server/info", ['GET'], self._handle_info_request)
self.register_endpoint(
"/server/config", ['GET'], self._handle_config_request)
self.register_endpoint(
"/server/restart", ['POST'], self._handle_server_restart)
self.register_notification("server:klippy_ready")
self.register_notification("server:klippy_shutdown")
self.register_notification("server:klippy_disconnect",
"klippy_disconnected")
self.register_notification("server:gcode_response")
def get_app_args(self) -> Dict[str, Any]:
return dict(self.app_args)
def get_event_loop(self) -> EventLoop:
return self.event_loop
def get_api_version(self) -> Tuple[int, int, int]:
return API_VERSION
def get_warnings(self) -> List[str]:
return self.warnings
def is_running(self) -> bool:
return self.server_running
def is_debug_enabled(self) -> bool:
return self.debug
def _parse_config(self) -> confighelper.ConfigHelper:
config = confighelper.get_configuration(self, self.app_args)
# log config file
cfg_files = "\n".join(config.get_config_files())
strio = io.StringIO()
config.write_config(strio)
cfg_item = f"\n{'#'*20} Moonraker Configuration {'#'*20}\n\n"
cfg_item += strio.getvalue()
cfg_item += "#"*65
cfg_item += f"\nAll Configuration Files:\n{cfg_files}\n"
cfg_item += "#"*65
strio.close()
self.add_log_rollover_item('config', cfg_item)
return config
async def server_init(self, start_server: bool = True) -> None:
self.event_loop.add_signal_handler(
signal.SIGTERM, self._handle_term_signal)
# Perform asynchronous init after the event loop starts
optional_comps: List[Coroutine] = []
for name, component in self.components.items():
if not hasattr(component, "component_init"):
continue
if name in CORE_COMPONENTS:
# Process core components in order synchronously
await self._initialize_component(name, component)
else:
optional_comps.append(
self._initialize_component(name, component))
# Asynchronous Optional Component Initialization
if optional_comps:
await asyncio.gather(*optional_comps)
if not self.warnings:
await self.event_loop.run_in_thread(self.config.create_backup)
if start_server:
await self.start_server()
async def start_server(self, connect_to_klippy: bool = True) -> None:
# Start HTTP Server
logging.info(
f"Starting Moonraker on ({self.host}, {self.port}), "
f"Hostname: {socket.gethostname()}")
self.moonraker_app.listen(self.host, self.port, self.ssl_port)
self.server_running = True
if connect_to_klippy:
self.klippy_connection.connect()
def add_log_rollover_item(self, name: str, item: str,
log: bool = True) -> None:
if self.file_logger is not None:
self.file_logger.set_rollover_info(name, item)
if log and item is not None:
logging.info(item)
def add_warning(self, warning: str, log: bool = True) -> None:
self.warnings.append(warning)
if log:
logging.warning(warning)
# ***** Component Management *****
async def _initialize_component(self, name: str, component: Any) -> None:
logging.info(f"Performing Component Post Init: [{name}]")
try:
ret = component.component_init()
if ret is not None:
await ret
except Exception as e:
logging.exception(f"Component [{name}] failed post init")
self.add_warning(f"Component '{name}' failed to load with "
f"error: {e}")
self.set_failed_component(name)
def load_components(self) -> None:
config = self.config
cfg_sections = [s.split()[0] for s in config.sections()]
cfg_sections.remove('server')
# load core components
for component in CORE_COMPONENTS:
self.load_component(config, component)
if component in cfg_sections:
cfg_sections.remove(component)
# load remaining optional components
for section in cfg_sections:
self.load_component(config, section, None)
config.validate_config()
def load_component(self,
config: confighelper.ConfigHelper,
component_name: str,
default: Union[SentinelClass, _T] = SENTINEL
) -> Union[_T, Any]:
if component_name in self.components:
return self.components[component_name]
try:
module = importlib.import_module("components." + component_name)
is_core = component_name in CORE_COMPONENTS
fallback: Optional[str] = "server" if is_core else None
config = config.getsection(component_name, fallback)
load_func = getattr(module, "load_component")
component = load_func(config)
except Exception:
msg = f"Unable to load component: ({component_name})"
logging.exception(msg)
if component_name not in self.failed_components:
self.failed_components.append(component_name)
if isinstance(default, SentinelClass):
raise ServerError(msg)
return default
self.components[component_name] = component
logging.info(f"Component ({component_name}) loaded")
return component
def lookup_component(self,
component_name: str,
default: Union[SentinelClass, _T] = SENTINEL
) -> Union[_T, Any]:
component = self.components.get(component_name, default)
if isinstance(component, SentinelClass):
raise ServerError(f"Component ({component_name}) not found")
return component
def set_failed_component(self, component_name: str) -> None:
if component_name not in self.failed_components:
self.failed_components.append(component_name)
def register_component(self, component_name: str, component: Any) -> None:
if component_name in self.components:
raise self.error(
f"Component '{component_name}' already registered")
self.components[component_name] = component
def register_notification(self,
event_name: str,
notify_name: Optional[str] = None
) -> None:
wsm: WebsocketManager = self.lookup_component("websockets")
wsm.register_notification(event_name, notify_name)
def register_event_handler(self,
event: str,
callback: FlexCallback
) -> None:
self.events.setdefault(event, []).append(callback)
def send_event(self, event: str, *args) -> asyncio.Future:
fut = self.event_loop.create_future()
self.event_loop.register_callback(
self._process_event, fut, event, *args)
return fut
async def _process_event(self,
fut: asyncio.Future,
event: str,
*args
) -> None:
events = self.events.get(event, [])
coroutines: List[Coroutine] = []
try:
for func in events:
ret = func(*args)
if ret is not None:
coroutines.append(ret)
if coroutines:
await asyncio.gather(*coroutines)
except ServerError as e:
logging.exception(f"Error Processing Event: {fut}")
if not fut.done():
fut.set_result(None)
def register_remote_method(self,
method_name: str,
cb: FlexCallback
) -> None:
self.klippy_connection.register_remote_method(method_name, cb)
def get_host_info(self) -> Dict[str, Any]:
return {
'hostname': socket.gethostname(),
'address': self.host,
'port': self.port,
'ssl_port': self.ssl_port
}
def get_klippy_info(self) -> Dict[str, Any]:
return self.klippy_connection.klippy_info
def get_klippy_state(self) -> str:
return self.klippy_connection.state
def _handle_term_signal(self) -> None:
logging.info(f"Exiting with signal SIGTERM")
self.event_loop.register_callback(self._stop_server, "terminate")
async def _stop_server(self, exit_reason: str = "restart") -> None:
self.server_running = False
# Call each component's "on_exit" method
for name, component in self.components.items():
if hasattr(component, "on_exit"):
func: FlexCallback = getattr(component, "on_exit")
try:
ret = func()
if ret is not None:
await ret
except Exception:
logging.exception(
f"Error executing 'on_exit()' for component: {name}")
# Sleep for 100ms to allow connected websockets to write out
# remaining data
await asyncio.sleep(.1)
try:
await self.moonraker_app.close()
except Exception:
logging.exception("Error Closing App")
# Disconnect from Klippy
try:
await asyncio.wait_for(
asyncio.shield(self.klippy_connection.close(
wait_closed=True)), 2.)
except Exception:
logging.exception("Klippy Disconnect Error")
# Close all components
for name, component in self.components.items():
if name in ["application", "websockets", "klippy_connection"]:
# These components have already been closed
continue
if hasattr(component, "close"):
func = getattr(component, "close")
try:
ret = func()
if ret is not None:
await ret
except Exception:
logging.exception(
f"Error executing 'close()' for component: {name}")
# Allow cancelled tasks a chance to run in the eventloop
await asyncio.sleep(.001)
self.exit_reason = exit_reason
self.event_loop.remove_signal_handler(signal.SIGTERM)
self.event_loop.stop()
async def _handle_server_restart(self, web_request: WebRequest) -> str:
self.event_loop.register_callback(self._stop_server)
return "ok"
async def _handle_info_request(self,
web_request: WebRequest
) -> Dict[str, Any]:
file_manager: Optional[FileManager] = self.lookup_component(
'file_manager', None)
reg_dirs = []
if file_manager is not None:
reg_dirs = file_manager.get_registered_dirs()
wsm: WebsocketManager = self.lookup_component('websockets')
mreqs = self.klippy_connection.missing_requirements
return {
'klippy_connected': self.klippy_connection.is_connected(),
'klippy_state': self.klippy_connection.state,
'components': list(self.components.keys()),
'failed_components': self.failed_components,
'registered_directories': reg_dirs,
'warnings': self.warnings,
'websocket_count': wsm.get_count(),
'moonraker_version': self.app_args['software_version'],
'missing_klippy_requirements': mreqs,
'api_version': API_VERSION,
'api_version_string': ".".join([str(v) for v in API_VERSION])
}
async def _handle_config_request(self,
web_request: WebRequest
) -> Dict[str, Any]:
cfg_file_list: List[Dict[str, Any]] = []
cfg_parent = pathlib.Path(
self.app_args["config_file"]
).expanduser().resolve().parent
for fname, sections in self.config.get_file_sections().items():
path = pathlib.Path(fname)
try:
rel_path = str(path.relative_to(str(cfg_parent)))
except ValueError:
rel_path = fname
cfg_file_list.append({"filename": rel_path, "sections": sections})
return {
'config': self.config.get_parsed_config(),
'orig': self.config.get_orig_config(),
'files': cfg_file_list
}
def main(cmd_line_args: argparse.Namespace) -> None:
cfg_file = cmd_line_args.configfile
app_args = {'config_file': cfg_file}
# Setup Logging
version = utils.get_software_version()
if cmd_line_args.nologfile:
app_args['log_file'] = ""
else:
app_args['log_file'] = os.path.normpath(
os.path.expanduser(cmd_line_args.logfile))
app_args['software_version'] = version
app_args['python_version'] = sys.version.replace("\n", " ")
ql, file_logger, warning = utils.setup_logging(app_args)
if warning is not None:
app_args['log_warning'] = warning
# Start asyncio event loop and server
event_loop = EventLoop()
alt_config_loaded = False
estatus = 0
while True:
try:
server = Server(app_args, file_logger, event_loop)
server.load_components()
except confighelper.ConfigError as e:
backup_cfg = confighelper.find_config_backup(cfg_file)
logging.exception("Server Config Error")
if alt_config_loaded or backup_cfg is None:
estatus = 1
break
app_args['config_file'] = backup_cfg
app_args['config_warning'] = (
f"Server configuration error: {e}\n"
f"Loaded server from most recent working configuration:"
f" '{app_args['config_file']}'\n"
f"Please fix the issue in moonraker.conf and restart "
f"the server."
)
alt_config_loaded = True
continue
except Exception:
logging.exception("Moonraker Error")
estatus = 1
break
try:
event_loop.register_callback(server.server_init)
event_loop.start()
except Exception:
logging.exception("Server Running Error")
estatus = 1
break
if server.exit_reason == "terminate":
break
# Restore the original config and clear the warning
# before the server restarts
if alt_config_loaded:
app_args['config_file'] = cfg_file
app_args.pop('config_warning', None)
alt_config_loaded = False
event_loop.close()
# Since we are running outside of the server
# it is ok to use a blocking sleep here
time.sleep(.5)
logging.info("Attempting Server Restart...")
event_loop.reset()
event_loop.close()
logging.info("Server Shutdown")
ql.stop()
exit(estatus)
if __name__ == '__main__':
# Parse start arguments
parser = argparse.ArgumentParser(
description="Moonraker - Klipper API Server")
parser.add_argument(
"-c", "--configfile", default="~/moonraker.conf",
metavar='<configfile>',
help="Location of moonraker configuration file")
parser.add_argument(
"-l", "--logfile", default="/tmp/moonraker.log", metavar='<logfile>',
help="log file name and location")
parser.add_argument(
"-n", "--nologfile", action='store_true',
help="disable logging to a file")
main(parser.parse_args())
if __name__ == "__main__":
import sys
import importlib
import pathlib
pkg_parent = pathlib.Path(__file__).parent.parent
sys.path.pop(0)
sys.path.insert(0, str(pkg_parent))
svr = importlib.import_module(".server", "moonraker")
svr.main(False) # type: ignore

712
moonraker/server.py Normal file
View File

@@ -0,0 +1,712 @@
#!/usr/bin/env python3
# Moonraker - HTTP/Websocket API Server for Klipper
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import pathlib
import sys
import argparse
import importlib
import os
import io
import time
import socket
import logging
import signal
import asyncio
import uuid
import traceback
from . import confighelper
from .eventloop import EventLoop
from .utils import (
ServerError,
Sentinel,
get_software_info,
json_wrapper,
pip_utils,
source_info
)
from .loghelper import LogManager
from .common import RequestType
# Annotation imports
from typing import (
TYPE_CHECKING,
Any,
Optional,
Callable,
Coroutine,
Dict,
List,
Tuple,
Union,
TypeVar,
)
if TYPE_CHECKING:
from .common import WebRequest
from .components.application import MoonrakerApp
from .components.websockets import WebsocketManager
from .components.klippy_connection import KlippyConnection
from .components.file_manager.file_manager import FileManager
from .components.machine import Machine
from .components.extensions import ExtensionManager
FlexCallback = Callable[..., Optional[Coroutine]]
_T = TypeVar("_T", Sentinel, Any)
API_VERSION = (1, 4, 0)
SERVER_COMPONENTS = ['application', 'websockets', 'klippy_connection']
CORE_COMPONENTS = [
'dbus_manager', 'database', 'file_manager', 'authorization',
'klippy_apis', 'machine', 'data_store', 'shell_command',
'proc_stats', 'job_state', 'job_queue', 'history',
'http_client', 'announcements', 'webcam', 'extensions'
]
class Server:
error = ServerError
config_error = confighelper.ConfigError
def __init__(self,
args: Dict[str, Any],
log_manager: LogManager,
event_loop: EventLoop
) -> None:
self.event_loop = event_loop
self.log_manager = log_manager
self.app_args = args
self.events: Dict[str, List[FlexCallback]] = {}
self.components: Dict[str, Any] = {}
self.failed_components: List[str] = []
self.warnings: Dict[str, str] = {}
self._is_configured: bool = False
self.config = config = self._parse_config()
self.host: str = config.get('host', "0.0.0.0")
self.port: int = config.getint('port', 7125)
self.ssl_port: int = config.getint('ssl_port', 7130)
self.exit_reason: str = ""
self.server_running: bool = False
self.pip_recovery_attempted: bool = False
# Configure Debug Logging
config.getboolean('enable_debug_logging', False, deprecate=True)
self.debug = args["debug"]
log_level = logging.DEBUG if args["verbose"] else logging.INFO
logging.getLogger().setLevel(log_level)
self.event_loop.set_debug(args["asyncio_debug"])
self.klippy_connection: KlippyConnection
self.klippy_connection = self.load_component(config, "klippy_connection")
# Tornado Application/Server
self.moonraker_app: MoonrakerApp = self.load_component(config, "application")
app = self.moonraker_app
self.register_endpoint = app.register_endpoint
self.register_debug_endpoint = app.register_debug_endpoint
self.register_static_file_handler = app.register_static_file_handler
self.register_upload_handler = app.register_upload_handler
self.log_manager.set_server(self)
self.websocket_manager: WebsocketManager
self.websocket_manager = self.load_component(config, "websockets")
for warning in args.get("startup_warnings", []):
self.add_warning(warning)
self.register_endpoint(
"/server/info", RequestType.GET, self._handle_info_request
)
self.register_endpoint(
"/server/config", RequestType.GET, self._handle_config_request
)
self.register_endpoint(
"/server/restart", RequestType.POST, self._handle_server_restart
)
self.register_notification("server:klippy_ready")
self.register_notification("server:klippy_shutdown")
self.register_notification("server:klippy_disconnect",
"klippy_disconnected")
self.register_notification("server:gcode_response")
def get_app_args(self) -> Dict[str, Any]:
return dict(self.app_args)
def get_app_arg(self, key: str, default=Sentinel.MISSING) -> Any:
val = self.app_args.get(key, default)
if val is Sentinel.MISSING:
raise KeyError(f"No key '{key}' in Application Arguments")
return val
def get_event_loop(self) -> EventLoop:
return self.event_loop
def get_api_version(self) -> Tuple[int, int, int]:
return API_VERSION
def get_warnings(self) -> List[str]:
return list(self.warnings.values())
def is_running(self) -> bool:
return self.server_running
def is_configured(self) -> bool:
return self._is_configured
def is_debug_enabled(self) -> bool:
return self.debug
def is_verbose_enabled(self) -> bool:
return self.app_args["verbose"]
def _parse_config(self) -> confighelper.ConfigHelper:
config = confighelper.get_configuration(self, self.app_args)
# log config file
cfg_files = "\n".join(config.get_config_files())
strio = io.StringIO()
config.write_config(strio)
cfg_item = f"\n{'#'*20} Moonraker Configuration {'#'*20}\n\n"
cfg_item += strio.getvalue()
cfg_item += "#"*65
cfg_item += f"\nAll Configuration Files:\n{cfg_files}\n"
cfg_item += "#"*65
strio.close()
self.add_log_rollover_item('config', cfg_item)
return config
async def server_init(self, start_server: bool = True) -> None:
self.event_loop.add_signal_handler(
signal.SIGTERM, self._handle_term_signal)
# Perform asynchronous init after the event loop starts
optional_comps: List[Coroutine] = []
for name, component in self.components.items():
if not hasattr(component, "component_init"):
continue
if name in CORE_COMPONENTS:
# Process core components in order synchronously
await self._initialize_component(name, component)
else:
optional_comps.append(
self._initialize_component(name, component))
# Asynchronous Optional Component Initialization
if optional_comps:
await asyncio.gather(*optional_comps)
if not self.warnings:
await self.event_loop.run_in_thread(self.config.create_backup)
machine: Machine = self.lookup_component("machine")
if await machine.validate_installation():
return
if start_server:
await self.start_server()
async def start_server(self, connect_to_klippy: bool = True) -> None:
# Open Unix Socket Server
extm: ExtensionManager = self.lookup_component("extensions")
await extm.start_unix_server()
# Start HTTP Server
logging.info(
f"Starting Moonraker on ({self.host}, {self.port}), "
f"Hostname: {socket.gethostname()}")
self.moonraker_app.listen(self.host, self.port, self.ssl_port)
self.server_running = True
if connect_to_klippy:
self.klippy_connection.connect()
def add_log_rollover_item(
self, name: str, item: str, log: bool = True
) -> None:
self.log_manager.set_rollover_info(name, item)
if log and item is not None:
logging.info(item)
def add_warning(
self, warning: str, warn_id: Optional[str] = None, log: bool = True
) -> str:
if warn_id is None:
warn_id = str(id(warning))
self.warnings[warn_id] = warning
if log:
logging.warning(warning)
return warn_id
def remove_warning(self, warn_id: str) -> None:
self.warnings.pop(warn_id, None)
# ***** Component Management *****
async def _initialize_component(self, name: str, component: Any) -> None:
logging.info(f"Performing Component Post Init: [{name}]")
try:
ret = component.component_init()
if ret is not None:
await ret
except Exception as e:
logging.exception(f"Component [{name}] failed post init")
self.add_warning(f"Component '{name}' failed to load with "
f"error: {e}")
self.set_failed_component(name)
def load_components(self) -> None:
config = self.config
cfg_sections = set([s.split()[0] for s in config.sections()])
cfg_sections.remove('server')
# load core components
for component in CORE_COMPONENTS:
self.load_component(config, component)
if component in cfg_sections:
cfg_sections.remove(component)
# load remaining optional components
for section in cfg_sections:
self.load_component(config, section, None)
config.validate_config()
self._is_configured = True
def load_component(
self,
config: confighelper.ConfigHelper,
component_name: str,
default: _T = Sentinel.MISSING
) -> Union[_T, Any]:
if component_name in self.components:
return self.components[component_name]
if self.is_configured():
raise self.error(
"Cannot load components after configuration", 500
)
if component_name in self.failed_components:
raise self.error(
f"Component {component_name} previously failed to load", 500
)
try:
full_name = f"moonraker.components.{component_name}"
module = importlib.import_module(full_name)
# Server components use the [server] section for configuration
if component_name not in SERVER_COMPONENTS:
is_core = component_name in CORE_COMPONENTS
fallback: Optional[str] = "server" if is_core else None
config = config.getsection(component_name, fallback)
load_func = getattr(module, "load_component")
component = load_func(config)
except Exception as e:
ucomps: List[str] = self.app_args.get("unofficial_components", [])
if isinstance(e, ModuleNotFoundError) and component_name not in ucomps:
if self.try_pip_recovery(e.name or "unknown"):
return self.load_component(config, component_name, default)
msg = f"Unable to load component: ({component_name})"
logging.exception(msg)
if component_name not in self.failed_components:
self.failed_components.append(component_name)
if default is Sentinel.MISSING:
raise
return default
self.components[component_name] = component
logging.info(f"Component ({component_name}) loaded")
return component
def try_pip_recovery(self, missing_module: str) -> bool:
if self.pip_recovery_attempted:
return False
self.pip_recovery_attempted = True
src_dir = source_info.source_path()
req_file = src_dir.joinpath("scripts/moonraker-requirements.txt")
if not req_file.is_file():
return False
pip_cmd = f"{sys.executable} -m pip"
pip_exec = pip_utils.PipExecutor(pip_cmd, logging.info)
logging.info(f"Module '{missing_module}' not found. Attempting Pip Update...")
logging.info("Checking Pip Version...")
try:
pipver = pip_exec.get_pip_version()
if pip_utils.check_pip_needs_update(pipver):
cur_ver = pipver.pip_version_string
new_ver = ".".join([str(part) for part in pip_utils.MIN_PIP_VERSION])
logging.info(f"Updating Pip from {cur_ver} to {new_ver}...")
pip_exec.update_pip()
except Exception:
logging.exception("Pip version check failed")
return False
logging.info("Installing Moonraker python dependencies...")
try:
pip_exec.install_packages(req_file, {"SKIP_CYTHON": "Y"})
except Exception:
logging.exception("Failed to install python packages")
return False
return True
def lookup_component(
self, component_name: str, default: _T = Sentinel.MISSING
) -> Union[_T, Any]:
component = self.components.get(component_name, default)
if component is Sentinel.MISSING:
raise ServerError(f"Component ({component_name}) not found")
return component
def set_failed_component(self, component_name: str) -> None:
if component_name not in self.failed_components:
self.failed_components.append(component_name)
def register_component(self, component_name: str, component: Any) -> None:
if component_name in self.components:
raise self.error(
f"Component '{component_name}' already registered")
self.components[component_name] = component
def register_notification(
self, event_name: str, notify_name: Optional[str] = None
) -> None:
self.websocket_manager.register_notification(event_name, notify_name)
def register_event_handler(
self, event: str, callback: FlexCallback
) -> None:
self.events.setdefault(event, []).append(callback)
def send_event(self, event: str, *args) -> asyncio.Future:
fut = self.event_loop.create_future()
self.event_loop.register_callback(
self._process_event, fut, event, *args)
return fut
async def _process_event(
self, fut: asyncio.Future, event: str, *args
) -> None:
events = self.events.get(event, [])
coroutines: List[Coroutine] = []
for func in events:
try:
ret = func(*args)
except Exception:
logging.exception(f"Error processing callback in event {event}")
else:
if ret is not None:
coroutines.append(ret)
if coroutines:
results = await asyncio.gather(*coroutines, return_exceptions=True)
for val in results:
if isinstance(val, Exception):
if sys.version_info < (3, 10):
exc_info = "".join(traceback.format_exception(
type(val), val, val.__traceback__
))
else:
exc_info = "".join(traceback.format_exception(val))
logging.info(
f"\nError processing callback in event {event}\n{exc_info}"
)
if not fut.done():
fut.set_result(None)
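Handlers subscribe by event name and producers fire them without blocking; awaiting the returned future waits for every handler to finish. A minimal sketch with a hypothetical event name and payload:
async def _on_job_started(job_info):
    logging.info(f"Job started: {job_info}")

server.register_event_handler("job_state:started", _on_job_started)
# producer side (event name and payload are illustrative)
await server.send_event("job_state:started", {"filename": "test.gcode"})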
def register_remote_method(
self, method_name: str, cb: FlexCallback
) -> None:
self.klippy_connection.register_remote_method(method_name, cb)
def get_host_info(self) -> Dict[str, Any]:
return {
'hostname': socket.gethostname(),
'address': self.host,
'port': self.port,
'ssl_port': self.ssl_port
}
def get_klippy_info(self) -> Dict[str, Any]:
return self.klippy_connection.klippy_info
def _handle_term_signal(self) -> None:
logging.info("Exiting with signal SIGTERM")
self.event_loop.register_callback(self._stop_server, "terminate")
def restart(self, delay: Optional[float] = None) -> None:
if delay is None:
self.event_loop.register_callback(self._stop_server)
else:
self.event_loop.delay_callback(delay, self._stop_server)
async def _stop_server(self, exit_reason: str = "restart") -> None:
self.server_running = False
# Call each component's "on_exit" method
for name, component in self.components.items():
if hasattr(component, "on_exit"):
func: FlexCallback = getattr(component, "on_exit")
try:
ret = func()
if ret is not None:
await ret
except Exception:
logging.exception(
f"Error executing 'on_exit()' for component: {name}")
# Sleep for 100ms to allow connected websockets to write out
# remaining data
await asyncio.sleep(.1)
try:
await self.moonraker_app.close()
await self.websocket_manager.close()
except Exception:
logging.exception("Error Closing App")
# Disconnect from Klippy
try:
await asyncio.wait_for(
asyncio.shield(self.klippy_connection.close(
wait_closed=True)), 2.)
except Exception:
logging.exception("Klippy Disconnect Error")
# Close all components
for name, component in self.components.items():
if name in ["application", "websockets", "klippy_connection"]:
# These components have already been closed
continue
if hasattr(component, "close"):
func = getattr(component, "close")
try:
ret = func()
if ret is not None:
await ret
except Exception:
logging.exception(
f"Error executing 'close()' for component: {name}")
# Allow cancelled tasks a chance to run in the eventloop
await asyncio.sleep(.001)
self.exit_reason = exit_reason
self.event_loop.remove_signal_handler(signal.SIGTERM)
self.event_loop.stop()
async def _handle_server_restart(self, web_request: WebRequest) -> str:
self.event_loop.register_callback(self._stop_server)
return "ok"
async def _handle_info_request(self, web_request: WebRequest) -> Dict[str, Any]:
raw = web_request.get_boolean("raw", False)
file_manager: Optional[FileManager] = self.lookup_component(
'file_manager', None)
reg_dirs = []
if file_manager is not None:
reg_dirs = file_manager.get_registered_dirs()
mreqs = self.klippy_connection.missing_requirements
if raw:
warnings = list(self.warnings.values())
else:
warnings = [
w.replace("\n", "<br/>") for w in self.warnings.values()
]
return {
'klippy_connected': self.klippy_connection.is_connected(),
'klippy_state': str(self.klippy_connection.state),
'components': list(self.components.keys()),
'failed_components': self.failed_components,
'registered_directories': reg_dirs,
'warnings': warnings,
'websocket_count': self.websocket_manager.get_count(),
'moonraker_version': self.app_args['software_version'],
'missing_klippy_requirements': mreqs,
'api_version': API_VERSION,
'api_version_string': ".".join([str(v) for v in API_VERSION])
}
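This handler backs the `/server/info` endpoint registered in the constructor; a minimal query sketch (the host is a placeholder):
import requests

info = requests.get("http://printer.local:7125/server/info").json()["result"]
print(info["moonraker_version"], info["api_version_string"])
print(info["warnings"])  # pass ?raw=true to keep newlines instead of <br/>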
async def _handle_config_request(self, web_request: WebRequest) -> Dict[str, Any]:
cfg_file_list: List[Dict[str, Any]] = []
cfg_parent = pathlib.Path(
self.app_args["config_file"]
).expanduser().resolve().parent
for fname, sections in self.config.get_file_sections().items():
path = pathlib.Path(fname)
try:
rel_path = str(path.relative_to(str(cfg_parent)))
except ValueError:
rel_path = fname
cfg_file_list.append({"filename": rel_path, "sections": sections})
return {
'config': self.config.get_parsed_config(),
'orig': self.config.get_orig_config(),
'files': cfg_file_list
}
def main(from_package: bool = True) -> None:
def get_env_bool(key: str) -> bool:
return os.getenv(key, "").lower() in ["y", "yes", "true"]
# Parse start arguments
parser = argparse.ArgumentParser(
description="Moonraker - Klipper API Server")
parser.add_argument(
"-d", "--datapath",
default=os.getenv("MOONRAKER_DATA_PATH"),
metavar='<data path>',
help="Location of Moonraker Data File Path"
)
parser.add_argument(
"-c", "--configfile",
default=os.getenv("MOONRAKER_CONFIG_PATH"),
metavar='<configfile>',
help="Path to Moonraker's configuration file"
)
parser.add_argument(
"-l", "--logfile",
default=os.getenv("MOONRAKER_LOG_PATH"),
metavar='<logfile>',
help="Path to Moonraker's log file"
)
parser.add_argument(
"-u", "--unixsocket",
default=os.getenv("MOONRAKER_UDS_PATH"),
metavar="<unixsocket>",
help="Path to Moonraker's unix domain socket"
)
parser.add_argument(
"-n", "--nologfile",
action='store_const',
const=True,
default=get_env_bool("MOONRAKER_DISABLE_FILE_LOG"),
help="disable logging to a file"
)
parser.add_argument(
"-v", "--verbose",
action='store_const',
const=True,
default=get_env_bool("MOONRAKER_VERBOSE_LOGGING"),
help="Enable verbose logging"
)
parser.add_argument(
"-g", "--debug",
action='store_const',
const=True,
default=get_env_bool("MOONRAKER_ENABLE_DEBUG"),
help="Enable Moonraker debug features"
)
parser.add_argument(
"-o", "--asyncio-debug",
action='store_const',
const=True,
default=get_env_bool("MOONRAKER_ASYNCIO_DEBUG"),
help="Enable asyncio debug flag"
)
cmd_line_args = parser.parse_args()
startup_warnings: List[str] = []
dp: str = cmd_line_args.datapath or "~/printer_data"
data_path = pathlib.Path(dp).expanduser().resolve()
if not data_path.exists():
try:
data_path.mkdir()
except Exception:
startup_warnings.append(
f"Unable to create data path folder at {data_path}"
)
uuid_path = data_path.joinpath(".moonraker.uuid")
if not uuid_path.is_file():
instance_uuid = uuid.uuid4().hex
uuid_path.write_text(instance_uuid)
else:
instance_uuid = uuid_path.read_text().strip()
if cmd_line_args.configfile is not None:
cfg_file: str = cmd_line_args.configfile
else:
cfg_file = str(data_path.joinpath("config/moonraker.conf"))
if cmd_line_args.unixsocket is not None:
unix_sock: str = cmd_line_args.unixsocket
else:
comms_dir = data_path.joinpath("comms")
if not comms_dir.exists():
comms_dir.mkdir()
unix_sock = str(comms_dir.joinpath("moonraker.sock"))
misc_dir = data_path.joinpath("misc")
if not misc_dir.exists():
misc_dir.mkdir()
app_args = {
"data_path": str(data_path),
"is_default_data_path": cmd_line_args.datapath is None,
"config_file": cfg_file,
"startup_warnings": startup_warnings,
"verbose": cmd_line_args.verbose,
"debug": cmd_line_args.debug,
"asyncio_debug": cmd_line_args.asyncio_debug,
"is_backup_config": False,
"is_python_package": from_package,
"instance_uuid": instance_uuid,
"unix_socket_path": unix_sock
}
# Setup Logging
app_args.update(get_software_info())
if cmd_line_args.nologfile:
app_args["log_file"] = ""
elif cmd_line_args.logfile:
app_args["log_file"] = os.path.normpath(
os.path.expanduser(cmd_line_args.logfile))
else:
app_args["log_file"] = str(data_path.joinpath("logs/moonraker.log"))
app_args["python_version"] = sys.version.replace("\n", " ")
app_args["launch_args"] = " ".join([sys.executable] + sys.argv).strip()
app_args["msgspec_enabled"] = json_wrapper.MSGSPEC_ENABLED
app_args["uvloop_enabled"] = EventLoop.UVLOOP_ENABLED
log_manager = LogManager(app_args, startup_warnings)
# Start asyncio event loop and server
event_loop = EventLoop()
alt_config_loaded = False
estatus = 0
while True:
try:
server = Server(app_args, log_manager, event_loop)
server.load_components()
except confighelper.ConfigError as e:
backup_cfg = confighelper.find_config_backup(cfg_file)
logging.exception("Server Config Error")
if alt_config_loaded or backup_cfg is None:
estatus = 1
break
app_args["config_file"] = backup_cfg
app_args["is_backup_config"] = True
warn_list = list(startup_warnings)
app_args["startup_warnings"] = warn_list
warn_list.append(
f"Server configuration error: {e}\n"
f"Loaded server from most recent working configuration:"
f" '{app_args['config_file']}'\n"
f"Please fix the issue in moonraker.conf and restart "
f"the server."
)
alt_config_loaded = True
continue
except Exception:
logging.exception("Moonraker Error")
estatus = 1
break
try:
event_loop.register_callback(server.server_init)
event_loop.start()
except Exception:
logging.exception("Server Running Error")
estatus = 1
break
if server.exit_reason == "terminate":
break
# Restore the original config and clear the warning
# before the server restarts
if alt_config_loaded:
app_args["config_file"] = cfg_file
app_args["startup_warnings"] = startup_warnings
app_args["is_backup_config"] = False
alt_config_loaded = False
event_loop.close()
# Since we are running outside of the server
# it is ok to use a blocking sleep here
time.sleep(.5)
logging.info("Attempting Server Restart...")
del server
event_loop.reset()
event_loop.close()
logging.info("Server Shutdown")
log_manager.stop_logging()
exit(estatus)

281
moonraker/utils/__init__.py Normal file
View File

@@ -0,0 +1,281 @@
# General Server Utilities
#
# Copyright (C) 2020 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import logging
import os
import glob
import importlib
import pathlib
import sys
import subprocess
import asyncio
import hashlib
import shlex
import re
import struct
import socket
import enum
import ipaddress
import platform
from . import source_info
from . import json_wrapper
# Annotation imports
from typing import (
TYPE_CHECKING,
List,
Optional,
Any,
Tuple,
Dict,
Union
)
if TYPE_CHECKING:
from types import ModuleType
from asyncio.trsock import TransportSocket
SYS_MOD_PATHS = glob.glob("/usr/lib/python3*/dist-packages")
SYS_MOD_PATHS += glob.glob("/usr/lib/python3*/site-packages")
SYS_MOD_PATHS += glob.glob("/usr/lib/*-linux-gnu/python3*/site-packages")
IPAddress = Union[ipaddress.IPv4Address, ipaddress.IPv6Address]
try:
KERNEL_VERSION = tuple([int(part) for part in platform.release().split(".")[:2]])
except Exception:
KERNEL_VERSION = (0, 0)
class ServerError(Exception):
def __init__(self, message: str, status_code: int = 400) -> None:
Exception.__init__(self, message)
self.status_code = status_code
class Sentinel(enum.Enum):
MISSING = object()
def _run_git_command(cmd: str) -> str:
prog = shlex.split(cmd)
process = subprocess.Popen(prog, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
ret, err = process.communicate()
retcode = process.wait()
if retcode == 0:
return ret.strip().decode()
raise Exception(
f"Failed to run git command '{cmd}': {err.decode(errors='ignore')}"
)
def _retrieve_git_tag(source_path: str) -> str:
cmd = f"git -C {source_path} rev-list --tags --max-count=1"
hash = _run_git_command(cmd)
cmd = f"git -C {source_path} describe --tags {hash}"
tag = _run_git_command(cmd)
cmd = f"git -C {source_path} rev-list {tag}..HEAD --count"
count = _run_git_command(cmd)
return f"{tag}-{count}"
# Parse the git version from the command line. This code
# is borrowed from Klipper.
def retrieve_git_version(source_path: str) -> str:
# Obtain version info from "git" program
cmd = f"git -C {source_path} describe --always --tags --long --dirty"
ver = _run_git_command(cmd)
tag_match = re.match(r"v\d+\.\d+\.\d+", ver)
if tag_match is not None:
return ver
# This is likely a shallow clone. Resolve the tag and manually create
# the version string
tag = _retrieve_git_tag(source_path)
return f"t{tag}-g{ver}-shallow"
def get_repo_info(source_path: str) -> Dict[str, Any]:
repo_info: Dict[str, Any] = {
"software_version": "?",
"git_branch": "?",
"git_remote": "?",
"git_repo_url": "?",
"modified_files": [],
"unofficial_components": []
}
try:
repo_info["software_version"] = retrieve_git_version(source_path)
cmd = f"git -C {source_path} branch --no-color"
branch_list = _run_git_command(cmd)
for line in branch_list.split("\n"):
if line[0] == "*":
repo_info["git_branch"] = line[1:].strip()
break
else:
return repo_info
if repo_info["git_branch"].startswith("(HEAD detached"):
parts = repo_info["git_branch"].strip("()").split()[-1]
remote, _, _ = parts.partition("/")
if not remote:
return repo_info
repo_info["git_remote"] = remote
else:
branch = repo_info["git_branch"]
cmd = f"git -C {source_path} config --get branch.{branch}.remote"
repo_info["git_remote"] = _run_git_command(cmd)
cmd = f"git -C {source_path} remote get-url {repo_info['git_remote']}"
repo_info["git_repo_url"] = _run_git_command(cmd)
cmd = f"git -C {source_path} status --porcelain --ignored"
status = _run_git_command(cmd)
for line in status.split("\n"):
parts = line.strip().split(maxsplit=1)
if len(parts) != 2:
continue
if parts[0] == "M":
repo_info["modified_files"].append(parts[1])
elif (
parts[0] in ("??", "!!")
and parts[1].endswith(".py")
and parts[1].startswith("components")
):
comp = parts[1].split("/", maxsplit=1)[-1]
repo_info["unofficial_components"].append(comp)
except Exception:
logging.exception("Error Retrieving Git Repo Info")
return repo_info
def get_software_info() -> Dict[str, Any]:
src_path = source_info.source_path()
if source_info.is_git_repo():
return get_repo_info(str(src_path))
pkg_ver = source_info.package_version()
if pkg_ver is not None:
return {"software_version": pkg_ver}
version: str = "?"
vfile = src_path.joinpath("moonraker/.version")
if vfile.exists():
try:
version = vfile.read_text().strip()
except Exception:
logging.exception("Unable to extract version from file")
version = "?"
return {"software_version": version}
def hash_directory(
dir_path: Union[str, pathlib.Path],
ignore_exts: List[str],
ignore_dirs: List[str]
) -> str:
if isinstance(dir_path, str):
dir_path = pathlib.Path(dir_path)
checksum = hashlib.blake2s()
if not dir_path.exists():
return ""
for dpath, dnames, fnames in os.walk(dir_path):
valid_dirs: List[str] = []
for dname in sorted(dnames):
if dname[0] == '.' or dname in ignore_dirs:
continue
valid_dirs.append(dname)
dnames[:] = valid_dirs
for fname in sorted(fnames):
ext = os.path.splitext(fname)[-1].lower()
if fname[0] == '.' or ext in ignore_exts:
continue
fpath = pathlib.Path(os.path.join(dpath, fname))
try:
checksum.update(fpath.read_bytes())
except Exception:
pass
return checksum.hexdigest()
def verify_source(
path: Optional[Union[str, pathlib.Path]] = None
) -> Optional[Tuple[str, bool]]:
if path is None:
path = source_info.source_path()
elif isinstance(path, str):
path = pathlib.Path(path)
rfile = path.joinpath(".release_info")
if not rfile.exists():
return None
try:
rinfo = json_wrapper.loads(rfile.read_text())
except Exception:
return None
orig_chksum = rinfo['source_checksum']
ign_dirs = rinfo['ignored_dirs']
ign_exts = rinfo['ignored_exts']
checksum = hash_directory(path, ign_exts, ign_dirs)
return checksum, checksum == orig_chksum
def load_system_module(name: str) -> ModuleType:
if not SYS_MOD_PATHS:
# no dist path detected, fall back to direct import attempt
try:
return importlib.import_module(name)
except ImportError as e:
raise ServerError(f"Unable to import module {name}") from e
for module_path in SYS_MOD_PATHS:
sys.path.insert(0, module_path)
try:
module = importlib.import_module(name)
except ImportError as e:
if not isinstance(e, ModuleNotFoundError):
logging.exception(f"Failed to load {name} module")
else:
break
finally:
sys.path.pop(0)
else:
raise ServerError(f"Unable to import module {name}")
return module
def get_unix_peer_credentials(
writer: asyncio.StreamWriter, name: str
) -> Dict[str, int]:
sock: TransportSocket
sock = writer.get_extra_info("socket", None)
if sock is None:
logging.debug(
f"Unable to get underlying Unix Socket for {name}, "
"cant fetch peer credentials"
)
return {}
data: bytes = b""
try:
size = struct.calcsize("3I")
data = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED, size)
pid, uid, gid = struct.unpack("3I", data)
except asyncio.CancelledError:
raise
except Exception:
logging.exception(
f"Failed to get Unix Socket Peer Credentials for {name}"
f", raw: 0x{data.hex()}"
)
return {}
return {
"process_id": pid,
"user_id": uid,
"group_id": gid
}
def pretty_print_time(seconds: int) -> str:
if seconds == 0:
return "0 Seconds"
fmt_list: List[str] = []
times: Dict[str, int] = {}
times["Day"], seconds = divmod(seconds, 86400)
times["Hour"], seconds = divmod(seconds, 3600)
times["Minute"], times["Second"] = divmod(seconds, 60)
for ident, val in times.items():
if val == 0:
continue
fmt_list.append(f"{val} {ident}" if val == 1 else f"{val} {ident}s")
return ", ".join(fmt_list)
def parse_ip_address(address: str) -> Optional[IPAddress]:
try:
return ipaddress.ip_address(address)
except Exception:
return None
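For reference, pretty_print_time produces output like the following (a minimal check, assuming the helpers above are importable as moonraker.utils):

from moonraker.utils import pretty_print_time

assert pretty_print_time(90061) == "1 Day, 1 Hour, 1 Minute, 1 Second"
assert pretty_print_time(3600) == "1 Hour"  # zero-count units are skipped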


@@ -0,0 +1,199 @@
# Async CAN Socket utility
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import socket
import asyncio
import errno
import struct
import logging
from . import ServerError
from typing import List, Dict, Optional, Union
CAN_FMT = "<IB3x8s"
CAN_READER_LIMIT = 1024 * 1024
KLIPPER_ADMIN_ID = 0x3f0
KLIPPER_SET_NODE_CMD = 0x01
KATAPULT_SET_NODE_CMD = 0x11
CMD_QUERY_UNASSIGNED = 0x00
CANBUS_RESP_NEED_NODEID = 0x20
class CanNode:
def __init__(self, node_id: int, cansocket: CanSocket) -> None:
self.node_id = node_id
self._reader = asyncio.StreamReader(CAN_READER_LIMIT)
self._cansocket = cansocket
async def read(
self, n: int = -1, timeout: Optional[float] = 2
) -> bytes:
return await asyncio.wait_for(self._reader.read(n), timeout)
async def readexactly(
self, n: int, timeout: Optional[float] = 2
) -> bytes:
return await asyncio.wait_for(self._reader.readexactly(n), timeout)
async def readuntil(
self, sep: bytes = b"\x03", timeout: Optional[float] = 2
) -> bytes:
return await asyncio.wait_for(self._reader.readuntil(sep), timeout)
def write(self, payload: Union[bytes, bytearray]) -> None:
if isinstance(payload, bytearray):
payload = bytes(payload)
self._cansocket.send(self.node_id, payload)
async def write_with_response(
self,
payload: Union[bytearray, bytes],
resp_length: int,
timeout: Optional[float] = 2.
) -> bytes:
self.write(payload)
return await self.readexactly(resp_length, timeout)
def feed_data(self, data: bytes) -> None:
self._reader.feed_data(data)
def close(self) -> None:
self._reader.feed_eof()
class CanSocket:
def __init__(self, interface: str):
self._loop = asyncio.get_running_loop()
self.nodes: Dict[int, CanNode] = {}
self.cansock = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
self.input_buffer = b""
self.output_packets: List[bytes] = []
self.input_busy = False
self.output_busy = False
self.closed = True
try:
self.cansock.bind((interface,))
except Exception:
raise ServerError(f"Unable to bind socket to interface '{interface}'", 500)
self.closed = False
self.cansock.setblocking(False)
self._loop.add_reader(self.cansock.fileno(), self._handle_can_response)
def register_node(self, node_id: int) -> CanNode:
if node_id in self.nodes:
return self.nodes[node_id]
node = CanNode(node_id, self)
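# A node transmits on its receive id + 1, so inbound data is keyed on node_id + 1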
self.nodes[node_id + 1] = node
return node
def remove_node(self, node_id: int) -> None:
node = self.nodes.pop(node_id + 1, None)
if node is not None:
node.close()
def _handle_can_response(self) -> None:
try:
data = self.cansock.recv(4096)
except socket.error as e:
# If bad file descriptor allow connection to be
# closed by the data check
if e.errno == errno.EBADF:
logging.exception("Can Socket Read Error, closing")
data = b''
else:
return
if not data:
# socket closed
self.close()
return
self.input_buffer += data
if self.input_busy:
return
self.input_busy = True
while len(self.input_buffer) >= 16:
packet = self.input_buffer[:16]
self._process_packet(packet)
self.input_buffer = self.input_buffer[16:]
self.input_busy = False
def _process_packet(self, packet: bytes) -> None:
can_id, length, data = struct.unpack(CAN_FMT, packet)
can_id &= socket.CAN_EFF_MASK
payload = data[:length]
node = self.nodes.get(can_id)
if node is not None:
node.feed_data(payload)
def send(self, can_id: int, payload: bytes = b"") -> None:
if can_id > 0x7FF:
can_id |= socket.CAN_EFF_FLAG
if not payload:
packet = struct.pack(CAN_FMT, can_id, 0, b"")
self.output_packets.append(packet)
else:
while payload:
length = min(len(payload), 8)
pkt_data = payload[:length]
payload = payload[length:]
packet = struct.pack(
CAN_FMT, can_id, length, pkt_data)
self.output_packets.append(packet)
if self.output_busy:
return
self.output_busy = True
asyncio.create_task(self._do_can_send())
async def _do_can_send(self):
while self.output_packets:
packet = self.output_packets.pop(0)
try:
await self._loop.sock_sendall(self.cansock, packet)
except socket.error:
logging.info("Socket Write Error, closing")
self.close()
break
self.output_busy = False
def close(self):
if self.closed:
return
self.closed = True
for node in self.nodes.values():
node.close()
self._loop.remove_reader(self.cansock.fileno())
self.cansock.close()
async def query_klipper_uuids(can_socket: CanSocket) -> List[Dict[str, str]]:
loop = asyncio.get_running_loop()
admin_node = can_socket.register_node(KLIPPER_ADMIN_ID)
payload = bytes([CMD_QUERY_UNASSIGNED])
admin_node.write(payload)
curtime = loop.time()
endtime = curtime + 2.
uuids: List[Dict[str, str]] = []
while curtime < endtime:
timeout = max(.1, endtime - curtime)
try:
resp = await admin_node.read(8, timeout)
except asyncio.TimeoutError:
continue
finally:
curtime = loop.time()
if len(resp) < 7 or resp[0] != CANBUS_RESP_NEED_NODEID:
continue
app_names = {
KLIPPER_SET_NODE_CMD: "Klipper",
KATAPULT_SET_NODE_CMD: "Katapult"
}
app = "Unknown"
if len(resp) > 7:
app = app_names.get(resp[7], "Unknown")
data = resp[1:7]
uuids.append(
{
"uuid": data.hex(),
"application": app
}
)
return uuids
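A minimal sketch of driving these helpers, assuming a SocketCAN interface named can0 is configured and up:

import asyncio

async def list_unassigned_nodes() -> None:
    # CanSocket requires a running event loop at construction time
    can_sock = CanSocket("can0")
    try:
        for dev in await query_klipper_uuids(can_sock):
            print(f"{dev['application']}: canbus_uuid={dev['uuid']}")
    finally:
        can_sock.close()

asyncio.run(list_unassigned_nodes())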

moonraker/utils/filelock.py Normal file

@@ -0,0 +1,111 @@
# Async file locking using flock
#
# Copyright (C) 2024 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import os
import fcntl
import errno
import logging
import pathlib
import contextlib
import asyncio
from . import ServerError
from typing import Optional, Type, Union
from types import TracebackType
class LockTimeout(ServerError):
pass
class AsyncExclusiveFileLock(contextlib.AbstractAsyncContextManager):
def __init__(
self, file_path: pathlib.Path, timeout: Union[int, float] = 0
) -> None:
self.lock_path = file_path.parent.joinpath(f".{file_path.name}.lock")
self.timeout = timeout
self.fd: int = -1
self.locked: bool = False
self.required_wait: bool = False
async def __aenter__(self) -> AsyncExclusiveFileLock:
await self.acquire()
return self
async def __aexit__(
self,
__exc_type: Optional[Type[BaseException]],
__exc_value: Optional[BaseException],
__traceback: Optional[TracebackType]
) -> None:
await self.release()
def _get_lock(self) -> bool:
flags = os.O_RDWR | os.O_CREAT | os.O_TRUNC
fd = os.open(str(self.lock_path), flags, 0o644)
with contextlib.suppress(PermissionError):
os.chmod(fd, 0o644)
try:
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError as err:
os.close(fd)
if err.errno == errno.ENOSYS:
raise
return False
stat = os.fstat(fd)
if stat.st_nlink == 0:
# File was deleted between opening and acquiring the
# lock; close and retry with a new one
os.close(fd)
return False
self.fd = fd
return True
async def acquire(self) -> None:
self.required_wait = False
if self.timeout < 0:
return
loop = asyncio.get_running_loop()
endtime = loop.time() + self.timeout
logged: bool = False
while True:
try:
self.locked = await loop.run_in_executor(None, self._get_lock)
except OSError as err:
logging.info(
"Failed to aquire advisory lock, allowing unlocked entry."
f"Error: {err}"
)
self.locked = False
return
if self.locked:
return
self.required_wait = True
await asyncio.sleep(.25)
if not logged:
logged = True
logging.info(
f"File lock {self.lock_path} is currently acquired by another "
"process, waiting for release."
)
if self.timeout > 0 and loop.time() >= endtime:
raise LockTimeout(
f"Attempt to acquire lock '{self.lock_path}' timed out"
)
def _release_file(self) -> None:
with contextlib.suppress(OSError, PermissionError):
if self.lock_path.is_file():
self.lock_path.unlink()
with contextlib.suppress(OSError, PermissionError):
fcntl.flock(self.fd, fcntl.LOCK_UN)
with contextlib.suppress(OSError, PermissionError):
os.close(self.fd)
async def release(self) -> None:
if not self.locked:
return
loop = asyncio.get_running_loop()
await loop.run_in_executor(None, self._release_file)
self.locked = False
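Typical usage is as an async context manager; the file name and timeout below are illustrative:

import asyncio
import pathlib

async def update_file() -> None:
    target = pathlib.Path("moonraker.conf")
    # Locks ".moonraker.conf.lock" beside the target; raises LockTimeout
    # if another process holds it for more than 5 seconds
    async with AsyncExclusiveFileLock(target, timeout=5.):
        target.touch()

asyncio.run(update_file())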


@@ -0,0 +1,77 @@
# Methods to create IOCTL requests
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import ctypes
from typing import Union, Type, TYPE_CHECKING
"""
This module contains a Python port of the macros available in
"include/uapi/asm-generic/ioctl.h" from the Linux kernel.
"""
if TYPE_CHECKING:
IOCParamSize = Union[int, str, Type[ctypes._CData]]
_IOC_NRBITS = 8
_IOC_TYPEBITS = 8
# NOTE: The following could be platform specific.
_IOC_SIZEBITS = 14
_IOC_DIRBITS = 2
_IOC_NRMASK = (1 << _IOC_NRBITS) - 1
_IOC_TYPEMASK = (1 << _IOC_TYPEBITS) - 1
_IOC_SIZEMASK = (1 << _IOC_SIZEBITS) - 1
_IOC_DIRMASK = (1 << _IOC_DIRBITS) - 1
_IOC_NRSHIFT = 0
_IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS
_IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS
_IOC_DIRSHIFT = _IOC_SIZESHIFT + _IOC_SIZEBITS
# The constants below may also be platform specific
IOC_NONE = 0
IOC_WRITE = 1
IOC_READ = 2
def _check_value(val: int, name: str, maximum: int):
if val > maximum:
raise ValueError(f"Value '{val}' for '{name}' exceeds max of {maximum}")
def _IOC_TYPECHECK(param_size: IOCParamSize) -> int:
if isinstance(param_size, int):
return param_size
elif isinstance(param_size, bytearray):
return len(param_size)
elif isinstance(param_size, str):
ctcls = getattr(ctypes, param_size)
return ctypes.sizeof(ctcls)
return ctypes.sizeof(param_size)
def IOC(direction: int, cmd_type: int, cmd_number: int, param_size: int) -> int:
_check_value(direction, "direction", _IOC_DIRMASK)
_check_value(cmd_type, "cmd_type", _IOC_TYPEMASK)
_check_value(cmd_number, "cmd_number", _IOC_NRMASK)
_check_value(param_size, "ioc_size", _IOC_SIZEMASK)
return (
(direction << _IOC_DIRSHIFT) |
(param_size << _IOC_SIZESHIFT) |
(cmd_type << _IOC_TYPESHIFT) |
(cmd_number << _IOC_NRSHIFT)
)
def IO(cmd_type: int, cmd_number: int) -> int:
return IOC(IOC_NONE, cmd_type, cmd_number, 0)
def IOR(cmd_type: int, cmd_number: int, param_size: IOCParamSize) -> int:
return IOC(IOC_READ, cmd_type, cmd_number, _IOC_TYPECHECK(param_size))
def IOW(cmd_type: int, cmd_number: int, param_size: IOCParamSize) -> int:
return IOC(IOC_WRITE, cmd_type, cmd_number, _IOC_TYPECHECK(param_size))
def IOWR(cmd_type: int, cmd_number: int, param_size: IOCParamSize) -> int:
return IOC(IOC_READ | IOC_WRITE, cmd_type, cmd_number, _IOC_TYPECHECK(param_size))
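As a sanity check against a known kernel constant: RNDGETENTCNT from linux/random.h is defined as _IOR('R', 0x00, int), which evaluates to 0x80045200 on common Linux platforms (per the notes above, the encoding can vary by platform):

assert IOR(ord("R"), 0x00, "c_int") == 0x80045200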


@@ -0,0 +1,33 @@
# Wrapper for msgspec with stdlib fallback
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import os
import contextlib
from typing import Any, Union, TYPE_CHECKING
if TYPE_CHECKING:
def dumps(obj: Any) -> bytes: ... # type: ignore # noqa: E704
def loads(data: Union[str, bytes, bytearray]) -> Any: ... # noqa: E704
MSGSPEC_ENABLED = False
_msgspc_var = os.getenv("MOONRAKER_ENABLE_MSGSPEC", "y").lower()
if _msgspc_var in ["y", "yes", "true"]:
with contextlib.suppress(ImportError):
import msgspec
from msgspec import DecodeError as JSONDecodeError
encoder = msgspec.json.Encoder()
decoder = msgspec.json.Decoder()
dumps = encoder.encode # noqa: F811
loads = decoder.decode # noqa: F811
MSGSPEC_ENABLED = True
if not MSGSPEC_ENABLED:
import json
from json import JSONDecodeError # type: ignore # noqa: F401,F811
loads = json.loads # type: ignore
def dumps(obj) -> bytes: # type: ignore # noqa: F811
return json.dumps(obj).encode("utf-8")
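Either backend presents the same interface: dumps returns bytes and loads accepts str or bytes, so callers never need to know which implementation was selected. A short round trip:

payload = dumps({"jsonrpc": "2.0", "method": "server.info", "id": 4})
assert isinstance(payload, bytes)
assert loads(payload)["method"] == "server.info"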


@@ -0,0 +1,247 @@
# Utilities for managing python packages using Pip
#
# Copyright (C) 2024 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import os
import re
import shlex
import subprocess
import pathlib
import shutil
import threading
from dataclasses import dataclass
# Annotation imports
from typing import (
TYPE_CHECKING,
Any,
Optional,
Union,
Dict,
List,
Tuple,
Callable,
IO
)
if TYPE_CHECKING:
from ..server import Server
from ..components.shell_command import ShellCommandFactory
MIN_PIP_VERSION = (23, 3, 2)
MIN_PYTHON_VERSION = (3, 7)
# Synchronous Subprocess Helpers
def _run_subprocess_with_response(
cmd: str,
timeout: Optional[float] = None,
env: Optional[Dict[str, str]] = None
) -> str:
prog = shlex.split(cmd)
proc = subprocess.run(
prog, capture_output=True, timeout=timeout, env=env,
check=True, text=True, errors="ignore", encoding="utf-8"
)
if proc.returncode == 0:
return proc.stdout.strip()
err = proc.stderr
raise Exception(f"Failed to run pip command '{cmd}': {err}")
def _process_subproc_output(
stdout: IO[str],
callback: Callable[[str], None]
) -> None:
for line in stdout:
callback(line.rstrip("\n"))
def _run_subprocess(
cmd: str,
timeout: Optional[float] = None,
env: Optional[Dict[str, str]] = None,
response_cb: Optional[Callable[[str], None]] = None
) -> None:
prog = shlex.split(cmd)
params: Dict[str, Any] = {"errors": "ignore", "encoding": "utf-8"}
if response_cb is not None:
params = {"stdout": subprocess.PIPE, "stderr": subprocess.STDOUT}
with subprocess.Popen(prog, text=True, env=env, **params) as process:
if process.stdout is not None and response_cb is not None:
reader_thread = threading.Thread(
target=_process_subproc_output, args=(process.stdout, response_cb)
)
reader_thread.start()
reader_thread.join(timeout)
if reader_thread.is_alive():
process.kill()
elif timeout is not None:
process.wait(timeout)
ret = process.poll()
if ret != 0:
raise Exception(f"Failed to run pip command '{cmd}'")
@dataclass(frozen=True)
class PipVersionInfo:
pip_version_string: str
python_version_string: str
@property
def pip_version(self) -> Tuple[int, ...]:
return tuple(int(part) for part in self.pip_version_string.split("."))
@property
def python_version(self) -> Tuple[int, ...]:
return tuple(int(part) for part in self.python_version_string.split("."))
class PipExecutor:
def __init__(
self, pip_cmd: str, response_handler: Optional[Callable[[str], None]] = None
) -> None:
self.pip_cmd = pip_cmd
self.response_hdlr = response_handler
def call_pip_with_response(
self,
args: str,
timeout: Optional[float] = None,
env: Optional[Dict[str, str]] = None
) -> str:
return _run_subprocess_with_response(f"{self.pip_cmd} {args}", timeout, env)
def call_pip(
self,
args: str,
timeout: Optional[float] = None,
env: Optional[Dict[str, str]] = None
) -> None:
_run_subprocess(f"{self.pip_cmd} {args}", timeout, env, self.response_hdlr)
def get_pip_version(self) -> PipVersionInfo:
resp = self.call_pip_with_response("--version", 10.)
return parse_pip_version(resp)
def update_pip(self) -> None:
pip_ver = ".".join([str(part) for part in MIN_PIP_VERSION])
self.call_pip(f"install pip=={pip_ver}", 120.)
def install_packages(
self,
packages: Union[pathlib.Path, List[str]],
sys_env_vars: Optional[Dict[str, Any]] = None
) -> None:
args = prepare_install_args(packages)
env: Optional[Dict[str, str]] = None
if sys_env_vars is not None:
env = dict(os.environ)
env.update(sys_env_vars)
self.call_pip(f"install {args}", timeout=1200., env=env)
def build_virtualenv(self, py_exec: pathlib.Path, args: str) -> None:
bin_dir = py_exec.parent
env_path = bin_dir.parent.resolve()
if env_path.exists():
shutil.rmtree(env_path)
_run_subprocess(
f"virtualenv {args} {env_path}",
timeout=600.,
response_cb=self.response_hdlr
)
if not py_exec.exists():
raise Exception("Failed to create new virtualenv", 500)
class AsyncPipExecutor:
def __init__(
self,
pip_cmd: str,
server: Server,
notify_callback: Optional[Callable[[bytes], None]] = None
) -> None:
self.pip_cmd = pip_cmd
self.server = server
self.notify_callback = notify_callback
def get_shell_cmd(self) -> ShellCommandFactory:
return self.server.lookup_component("shell_command")
async def get_pip_version(self) -> PipVersionInfo:
resp: str = await self.get_shell_cmd().exec_cmd(
f"{self.pip_cmd} --version", timeout=30., attempts=3, log_stderr=True
)
return parse_pip_version(resp)
async def update_pip(self) -> None:
pip_ver = ".".join([str(part) for part in MIN_PIP_VERSION])
shell_cmd = self.get_shell_cmd()
await shell_cmd.run_cmd_async(
f"{self.pip_cmd} install pip=={pip_ver}",
self.notify_callback, timeout=1200., attempts=3, log_stderr=True
)
async def install_packages(
self,
packages: Union[pathlib.Path, List[str]],
sys_env_vars: Optional[Dict[str, Any]] = None
) -> None:
# Update python dependencies
args = prepare_install_args(packages)
env: Optional[Dict[str, str]] = None
if sys_env_vars is not None:
env = dict(os.environ)
env.update(sys_env_vars)
shell_cmd = self.get_shell_cmd()
await shell_cmd.run_cmd_async(
f"{self.pip_cmd} install {args}", self.notify_callback,
timeout=1200., attempts=3, env=env, log_stderr=True
)
async def build_virtualenv(self, py_exec: pathlib.Path, args: str) -> None:
bin_dir = py_exec.parent
env_path = bin_dir.parent.resolve()
if env_path.exists():
shutil.rmtree(env_path)
shell_cmd = self.get_shell_cmd()
await shell_cmd.exec_cmd(f"virtualenv {args} {env_path}", timeout=600.)
if not py_exec.exists():
raise self.server.error("Failed to create new virtualenv", 500)
def read_requirements_file(requirements_path: pathlib.Path) -> List[str]:
if not requirements_path.is_file():
raise FileNotFoundError(f"Requirements file {requirements_path} not found")
data = requirements_path.read_text()
modules: List[str] = []
for line in data.split("\n"):
line = line.strip()
if not line or line[0] in "#-":
continue
match = re.search(r"\s#", line)
if match is not None:
line = line[:match.start()].strip()
modules.append(line)
return modules
def parse_pip_version(pip_response: str) -> PipVersionInfo:
match = re.match(
r"^pip ([0-9.]+) from .+? \(python ([0-9.]+)\)$", pip_response.strip()
)
if match is None:
raise ValueError("Unable to parse pip version from response")
pipver_str: str = match.group(1).strip()
pyver_str: str = match.group(2).strip()
return PipVersionInfo(pipver_str, pyver_str)
def check_pip_needs_update(version_info: PipVersionInfo) -> bool:
if version_info.python_version < MIN_PYTHON_VERSION:
return False
return version_info.pip_version < MIN_PIP_VERSION
def prepare_install_args(packages: Union[pathlib.Path, List[str]]) -> str:
if isinstance(packages, pathlib.Path):
if not packages.is_file():
raise FileNotFoundError(
f"Invalid path to requirements_file '{packages}'"
)
return f"-r {packages}"
reqs = [req.replace("\"", "'") for req in packages]
return " ".join([f"\"{req}\"" for req in reqs])


@@ -0,0 +1,88 @@
# General Server Utilities
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import importlib.resources as ilr
import pathlib
import sys
import site
# Annotation imports
from typing import (
Optional,
)
def package_path() -> pathlib.Path:
return pathlib.Path(__file__).parent.parent
def source_path() -> pathlib.Path:
return package_path().parent
def is_git_repo(src_path: Optional[pathlib.Path] = None) -> bool:
if src_path is None:
src_path = source_path()
return src_path.joinpath(".git").is_dir()
def find_git_repo(src_path: Optional[pathlib.Path] = None) -> Optional[pathlib.Path]:
if src_path is None:
src_path = source_path()
if src_path.joinpath(".git").is_dir():
return src_path
for parent in src_path.parents:
if parent.joinpath(".git").is_dir():
return parent
return None
def is_dist_package(src_path: Optional[pathlib.Path] = None) -> bool:
if src_path is None:
# Check Moonraker's source path
src_path = source_path()
if hasattr(site, "getsitepackages"):
# The site module is present, get site packages for Moonraker's venv.
# This is more "correct" than the fallback method.
site_dirs = site.getsitepackages()
return str(src_path) in site_dirs
# Make an assumption based on the source path. If its name is
# site-packages or dist-packages then presumably it is an
# installed package
return src_path.name in ["dist-packages", "site-packages"]
def package_version() -> Optional[str]:
try:
import moonraker.__version__ as ver # type: ignore
version = ver.__version__
except Exception:
pass
else:
if version:
return version
return None
def read_asset(asset_name: str) -> Optional[str]:
if sys.version_info < (3, 10):
with ilr.path("moonraker.assets", asset_name) as p:
if not p.is_file():
return None
return p.read_text()
else:
files = ilr.files("moonraker.assets")
with ilr.as_file(files.joinpath(asset_name)) as p:
if not p.is_file():
return None
return p.read_text()
def get_asset_path() -> Optional[pathlib.Path]:
if sys.version_info < (3, 10):
with ilr.path("moonraker.assets", "__init__.py") as p:
asset_path = p.parent
else:
files = ilr.files("moonraker.assets")
with ilr.as_file(files.joinpath("__init__.py")) as p:
asset_path = p.parent
if not asset_path.is_dir():
# Somehow running in a zipapp. This is an error.
return None
return asset_path
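These helpers let startup code distinguish a git checkout from an installed package; a minimal illustration:

src = source_path()
if is_git_repo(src):
    print(f"running from a git checkout at {src}")
elif is_dist_package():
    print(f"running as an installed package, version {package_version()}")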


@@ -0,0 +1,467 @@
# Utilities for enumerating devices using sysfs
#
# Copyright (C) 2024 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import os
import fcntl
import ctypes
import pathlib
import enum
from ..common import ExtendedFlag
from . import ioctl_macros
from typing import (
Dict,
List,
Any,
Union,
Optional
)
DEFAULT_USB_IDS_PATH = "/usr/share/misc/usb.ids"
USB_DEVICE_PATH = "/sys/bus/usb/devices"
TTY_PATH = "/sys/class/tty"
SER_BYPTH_PATH = "/dev/serial/by-path"
SER_BYID_PATH = "/dev/serial/by-id"
V4L_DEVICE_PATH = "/sys/class/video4linux"
V4L_BYPTH_PATH = "/dev/v4l/by-path"
V4L_BYID_PATH = "/dev/v4l/by-id"
OPTIONAL_USB_INFO = ["manufacturer", "product", "serial"]
NULL_DESCRIPTIONS = [
"?", "none", "undefined", "reserved/undefined", "unused", "no subclass"
]
def read_item(parent: pathlib.Path, filename: str) -> str:
return parent.joinpath(filename).read_text().strip()
def find_usb_folder(usb_path: pathlib.Path) -> Optional[str]:
# Find the sysfs usb folder from a child folder
while usb_path.is_dir() and usb_path.name:
dnum_file = usb_path.joinpath("devnum")
bnum_file = usb_path.joinpath("busnum")
if not dnum_file.is_file() or not bnum_file.is_file():
usb_path = usb_path.parent
continue
devnum = int(dnum_file.read_text().strip())
busnum = int(bnum_file.read_text().strip())
return f"{busnum}:{devnum}"
return None
class UsbIdData:
_usb_info_cache: Dict[str, str] = {
"DI:1d50": "OpenMoko, Inc",
"DI:1d50:614e": "Klipper 3d-Printer Firmware",
"DI:1d50:6177": "Katapult Bootloader (CDC_ACM)"
}
def __init__(self, usb_id_path: Union[str, pathlib.Path]) -> None:
if isinstance(usb_id_path, str):
usb_id_path = pathlib.Path(usb_id_path)
self.usb_id_path = usb_id_path.expanduser().resolve()
self.parsed: bool = False
self.usb_info: Dict[str, str] = {}
def _is_hex(self, item: str) -> bool:
try:
int(item, 16)
except ValueError:
return False
return True
def get_item(self, key: str, check_null: bool = False) -> Optional[str]:
item = self.usb_info.get(key, self._usb_info_cache.get(key))
if item is None:
if self.parsed:
return None
self.parse_usb_ids()
item = self.usb_info.get(key)
if item is None:
return None
self._usb_info_cache[key] = item
if check_null and item.lower() in NULL_DESCRIPTIONS:
return None
return item
def parse_usb_ids(self) -> None:
self.parsed = True
if not self.usb_id_path.is_file():
return
top_key: str = ""
sub_key: str = ""
with self.usb_id_path.open(encoding="latin-1") as f:
while True:
line = f.readline()
if not line:
break
stripped_line = line.strip()
if not stripped_line or stripped_line[0] == "#":
continue
if line[:2] == "\t\t":
if not sub_key:
continue
tertiary_id, desc = stripped_line.split(maxsplit=1)
self.usb_info[f"{sub_key}:{tertiary_id.lower()}"] = desc
elif line[0] == "\t":
if not top_key:
continue
sub_id, desc = stripped_line.split(maxsplit=1)
sub_key = f"{top_key}:{sub_id.lower()}"
self.usb_info[sub_key] = desc
else:
id_type, data = line.rstrip().split(maxsplit=1)
if len(id_type) == 4 and self._is_hex(id_type):
# This is a vendor ID
top_key = f"DI:{id_type.lower()}"
self.usb_info[top_key] = data
elif id_type:
# This is a subtype
num_id, desc = data.split(maxsplit=1)
top_key = f"{id_type}:{num_id.lower()}"
self.usb_info[top_key] = desc
else:
break
def get_product_info(self, vendor_id: str, product_id: str) -> Dict[str, Any]:
vendor_name = self.get_item(f"DI:{vendor_id}")
if vendor_name is None:
return {
"description": None,
"manufacturer": None,
"product": None,
}
product_name = self.get_item(f"DI:{vendor_id}:{product_id}")
return {
"description": f"{vendor_name} {product_name or ''}".strip(),
"manufacturer": vendor_name,
"product": product_name,
}
def get_class_info(
self, cls_id: str, subcls_id: str, proto_id: str
) -> Dict[str, Any]:
cls_desc = self.get_item(f"C:{cls_id}")
if cls_desc is None or cls_id == "00":
return {
"class": None,
"subclass": None,
"protocol": None
}
return {
"class": cls_desc,
"subclass": self.get_item(f"C:{cls_id}:{subcls_id}", True),
"protocol": self.get_item(f"C:{cls_id}:{subcls_id}:{proto_id}", True)
}
def find_usb_devices() -> List[Dict[str, Any]]:
dev_folder = pathlib.Path(USB_DEVICE_PATH)
if not dev_folder.is_dir():
return []
usb_devs: List[Dict[str, Any]] = []
# Find sysfs usb device descriptors
for dev_cfg_path in dev_folder.glob("*/bDeviceClass"):
dev_folder = dev_cfg_path.parent
device_info: Dict[str, Any] = {}
try:
device_info["device_num"] = int(read_item(dev_folder, "devnum"))
device_info["bus_num"] = int(read_item(dev_folder, "busnum"))
device_info["vendor_id"] = read_item(dev_folder, "idVendor").lower()
device_info["product_id"] = read_item(dev_folder, "idProduct").lower()
usb_location = f"{device_info['bus_num']}:{device_info['device_num']}"
device_info["usb_location"] = usb_location
dev_cls = read_item(dev_folder, "bDeviceClass").lower()
dev_subcls = read_item(dev_folder, "bDeviceSubClass").lower()
dev_proto = read_item(dev_folder, "bDeviceProtocol").lower()
device_info["class_ids"] = [dev_cls, dev_subcls, dev_proto]
for field in OPTIONAL_USB_INFO:
if dev_folder.joinpath(field).is_file():
device_info[field] = read_item(dev_folder, field)
elif field not in device_info:
device_info[field] = None
except Exception:
continue
usb_devs.append(device_info)
return usb_devs
def find_serial_devices() -> List[Dict[str, Any]]:
serial_devs: List[Dict[str, Any]] = []
devs_by_path: Dict[str, str] = {}
devs_by_id: Dict[str, str] = {}
by_path_dir = pathlib.Path(SER_BYPTH_PATH)
by_id_dir = pathlib.Path(SER_BYID_PATH)
dev_root_folder = pathlib.Path("/dev")
if by_path_dir.is_dir():
devs_by_path = {
dev.resolve().name: str(dev) for dev in by_path_dir.iterdir()
}
if by_id_dir.is_dir():
devs_by_id = {
dev.resolve().name: str(dev) for dev in by_id_dir.iterdir()
}
tty_dir = pathlib.Path(TTY_PATH)
for tty_path in tty_dir.iterdir():
device_folder = tty_path.joinpath("device")
if not device_folder.is_dir():
continue
uartclk_file = tty_path.joinpath("uartclk")
port_file = tty_path.joinpath("port")
device_name = tty_path.name
driver_name = device_folder.joinpath("driver").resolve().name
device_info: Dict[str, Any] = {
"device_type": "unknown",
"device_path": str(dev_root_folder.joinpath(device_name)),
"device_name": device_name,
"driver_name": driver_name,
"path_by_hardware": devs_by_path.get(device_name),
"path_by_id": devs_by_id.get(device_name),
"usb_location": None
}
if uartclk_file.is_file() and port_file.is_file():
# This is a potential hardware uart. Need to
# validate that "serial8250" devices have a port
# number of zero
if driver_name == "serial8250":
portnum = int(port_file.read_text().strip(), 16)
if portnum != 0:
# Not a usable UART
continue
device_info["device_type"] = "hardware_uart"
else:
usb_path = device_folder.resolve()
usb_location: Optional[str] = find_usb_folder(usb_path)
if usb_location is not None:
device_info["device_type"] = "usb"
device_info["usb_location"] = usb_location
serial_devs.append(device_info)
return serial_devs
class struct_v4l2_capability(ctypes.Structure):
_fields_ = [
("driver", ctypes.c_char * 16),
("card", ctypes.c_char * 32),
("bus_info", ctypes.c_char * 32),
("version", ctypes.c_uint32),
("capabilities", ctypes.c_uint32),
("device_caps", ctypes.c_uint32),
("reserved", ctypes.c_uint32 * 3),
]
class struct_v4l2_fmtdesc(ctypes.Structure):
_fields_ = [
("index", ctypes.c_uint32),
("type", ctypes.c_uint32),
("flags", ctypes.c_uint32),
("description", ctypes.c_char * 32),
("pixelformat", ctypes.c_uint32),
("reserved", ctypes.c_uint32 * 4)
]
class struct_v4l2_frmsize_discrete(ctypes.Structure):
_fields_ = [
("width", ctypes.c_uint32),
("height", ctypes.c_uint32),
]
class struct_v4l2_frmsize_stepwise(ctypes.Structure):
_fields_ = [
("min_width", ctypes.c_uint32),
("max_width", ctypes.c_uint32),
("step_width", ctypes.c_uint32),
("min_height", ctypes.c_uint32),
("max_height", ctypes.c_uint32),
("step_height", ctypes.c_uint32),
]
class struct_v4l2_frmsize_union(ctypes.Union):
_fields_ = [
("discrete", struct_v4l2_frmsize_discrete),
("stepwise", struct_v4l2_frmsize_stepwise)
]
class struct_v4l2_frmsizeenum(ctypes.Structure):
_anonymous_ = ("size",)
_fields_ = [
("index", ctypes.c_uint32),
("pixel_format", ctypes.c_uint32),
("type", ctypes.c_uint32),
("size", struct_v4l2_frmsize_union),
("reserved", ctypes.c_uint32 * 2)
]
class V4L2Capability(ExtendedFlag):
VIDEO_CAPTURE = 0x00000001 # noqa: E221
VIDEO_OUTPUT = 0x00000002 # noqa: E221
VIDEO_OVERLAY = 0x00000004 # noqa: E221
VBI_CAPTURE = 0x00000010 # noqa: E221
VBI_OUTPUT = 0x00000020 # noqa: E221
SLICED_VBI_CAPTURE = 0x00000040 # noqa: E221
SLICED_VBI_OUTPUT = 0x00000080 # noqa: E221
RDS_CAPTURE = 0x00000100 # noqa: E221
VIDEO_OUTPUT_OVERLAY = 0x00000200
HW_FREQ_SEEK = 0x00000400 # noqa: E221
RDS_OUTPUT = 0x00000800 # noqa: E221
VIDEO_CAPTURE_MPLANE = 0x00001000
VIDEO_OUTPUT_MPLANE = 0x00002000 # noqa: E221
VIDEO_M2M_MPLANE = 0x00004000 # noqa: E221
VIDEO_M2M = 0x00008000 # noqa: E221
TUNER = 0x00010000 # noqa: E221
AUDIO = 0x00020000 # noqa: E221
RADIO = 0x00040000 # noqa: E221
MODULATOR = 0x00080000 # noqa: E221
SDR_CAPTURE = 0x00100000 # noqa: E221
EXT_PIX_FORMAT = 0x00200000 # noqa: E221
SDR_OUTPUT = 0x00400000 # noqa: E221
META_CAPTURE = 0x00800000 # noqa: E221
READWRITE = 0x01000000 # noqa: E221
STREAMING = 0x04000000 # noqa: E221
META_OUTPUT = 0x08000000 # noqa: E221
TOUCH = 0x10000000 # noqa: E221
IO_MC = 0x20000000 # noqa: E221
SET_DEVICE_CAPS = 0x80000000 # noqa: E221
class V4L2FrameSizeTypes(enum.IntEnum):
DISCRETE = 1
CONTINUOUS = 2
STEPWISE = 3
class V4L2FormatFlags(ExtendedFlag):
COMPRESSED = 0x0001
EMULATED = 0x0002
V4L2_BUF_TYPE_VIDEO_CAPTURE = 1
V4L2_QUERYCAP = ioctl_macros.IOR(ord("V"), 0, struct_v4l2_capability)
V4L2_ENUM_FMT = ioctl_macros.IOWR(ord("V"), 2, struct_v4l2_fmtdesc)
V4L2_ENUM_FRAMESIZES = ioctl_macros.IOWR(ord("V"), 74, struct_v4l2_frmsizeenum)
def v4l2_fourcc_from_fmt(pixelformat: int) -> str:
fmt = bytes([((pixelformat >> (8 * i)) & 0xFF) for i in range(4)])
return fmt.decode(encoding="ascii", errors="ignore")
def v4l2_fourcc(format: str) -> int:
assert len(format) == 4
result: int = 0
for idx, val in enumerate(format.encode()):
result |= (val & 0xFF) << (8 * idx)
return result
def _get_resolutions(fd: int, pixel_format: int) -> List[str]:
res_info = struct_v4l2_frmsizeenum()
result: List[str] = []
for idx in range(128):
res_info.index = idx
res_info.pixel_format = pixel_format
try:
fcntl.ioctl(fd, V4L2_ENUM_FRAMESIZES, res_info)
except OSError:
break
if res_info.type != V4L2FrameSizeTypes.DISCRETE:
break
width = res_info.discrete.width
height = res_info.discrete.height
result.append(f"{width}x{height}")
return result
def _get_modes(fd: int) -> List[Dict[str, Any]]:
pix_info = struct_v4l2_fmtdesc()
result: List[Dict[str, Any]] = []
for idx in range(128):
pix_info.index = idx
pix_info.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
try:
fcntl.ioctl(fd, V4L2_ENUM_FMT, pix_info)
except OSError:
break
desc: str = pix_info.description.decode()
pixel_format: int = pix_info.pixelformat
flags = V4L2FormatFlags(pix_info.flags)
resolutions = _get_resolutions(fd, pixel_format)
if not resolutions:
continue
result.append(
{
"format": v4l2_fourcc_from_fmt(pixel_format),
"description": desc,
"flags": [f.name for f in flags],
"resolutions": resolutions
}
)
return result
def find_video_devices() -> List[Dict[str, Any]]:
v4lpath = pathlib.Path(V4L_DEVICE_PATH)
if not v4lpath.is_dir():
return []
v4l_by_path_dir = pathlib.Path(V4L_BYPTH_PATH)
v4l_by_id_dir = pathlib.Path(V4L_BYID_PATH)
dev_root_folder = pathlib.Path("/dev")
v4l_devs_by_path: Dict[str, str] = {}
v4l_devs_by_id: Dict[str, str] = {}
if v4l_by_path_dir.is_dir():
v4l_devs_by_path = {
dev.resolve().name: str(dev) for dev in v4l_by_path_dir.iterdir()
}
if v4l_by_id_dir.is_dir():
v4l_devs_by_id = {
dev.resolve().name: str(dev) for dev in v4l_by_id_dir.iterdir()
}
v4l_devices: List[Dict[str, Any]] = []
for v4ldev_path in v4lpath.iterdir():
devfs_name = v4ldev_path.name
devfs_path = dev_root_folder.joinpath(devfs_name)
# The video4linux sysfs implementation provides limited device
# info. Use the VIDIOC_QUERYCAP ioctl to retrieve extended
# information about the v4l2 device.
fd: int = -1
try:
fd = os.open(str(devfs_path), os.O_RDONLY | os.O_NONBLOCK)
cap_info = struct_v4l2_capability()
fcntl.ioctl(fd, V4L2_QUERYCAP, cap_info)
capabilities = V4L2Capability(cap_info.device_caps)
if not capabilities & V4L2Capability.VIDEO_CAPTURE:
# Skip devices that do not capture video
continue
modes = _get_modes(fd)
except Exception:
continue
finally:
if fd != -1:
os.close(fd)
ver_tuple = tuple(
[str((cap_info.version >> (i)) & 0xFF) for i in range(16, -1, -8)]
)
video_device: Dict[str, Any] = {
"device_name": devfs_name,
"device_path": str(devfs_path),
"camera_name": cap_info.card.decode(),
"driver_name": cap_info.driver.decode(),
"hardware_bus": cap_info.bus_info.decode(),
"capabilities": [cap.name for cap in capabilities],
"version": ".".join(ver_tuple),
"path_by_hardware": v4l_devs_by_path.get(devfs_name),
"path_by_id": v4l_devs_by_id.get(devfs_name),
"alt_name": None,
"usb_location": None,
"modes": modes
}
name_file = v4ldev_path.joinpath("name")
if name_file.is_file():
video_device["alt_name"] = read_item(v4ldev_path, "name")
device_path = v4ldev_path.joinpath("device")
if device_path.is_dir():
usb_location = find_usb_folder(device_path.resolve())
if usb_location is not None:
video_device["usb_location"] = usb_location
v4l_devices.append(video_device)
def idx_sorter(item: Dict[str, Any]) -> int:
try:
return int(item["device_name"][5:])
except ValueError:
return -1
# Sort by string first, then index
v4l_devices.sort(key=lambda item: item["device_name"])
v4l_devices.sort(key=idx_sorter)
return v4l_devices
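On a Linux host the enumerators can be joined on usb_location to label serial ports with their USB descriptor data; a sketch:

usb_devs = {dev["usb_location"]: dev for dev in find_usb_devices()}
for port in find_serial_devices():
    if port["device_type"] != "usb":
        continue
    usb = usb_devs.get(port["usb_location"], {})
    print(port["device_path"], usb.get("product"), port["path_by_id"])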

moonraker/utils/versions.py Normal file

@@ -0,0 +1,383 @@
# Semantic Version Parsing and Comparison
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license.
from __future__ import annotations
import re
from enum import Flag, auto
from typing import Tuple, Optional, Dict, List
# Python regex for parsing version strings from PEP 440
# https://peps.python.org/pep-0440/#appendix-b-parsing-version-strings-with-regular-expressions
VERSION_PATTERN = r"""
v?
(?:
(?:(?P<epoch>[0-9]+)!)? # epoch
(?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
(?P<pre> # pre-release
[-_\.]?
(?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
[-_\.]?
(?P<pre_n>[0-9]+)?
)?
(?P<post> # post release
(?:-(?P<post_n1>[0-9]+))
|
(?:
[-_\.]?
(?P<post_l>post|rev|r)
[-_\.]?
(?P<post_n2>[0-9]+)?
)
)?
(?P<dev> # dev release
[-_\.]?
(?P<dev_l>dev)
[-_\.]?
(?P<dev_n>[0-9]+)?
)?
)
(?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version
"""
GIT_VERSION_PATTERN = r"""
(?P<tag>
v?
(?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
(?P<pre> # pre-release
[-_\.]?
(?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
[-_\.]?
(?P<pre_n>[0-9]+)?
)?
)
(?:
(?:-(?P<dev_n>[0-9]+)) # dev count
(?:-g(?P<hash>[a-fA-F0-9]+))? # abbrev hash
)?
(?P<dirty>-dirty)?
(?P<inferred>-(?:inferred|shallow))?
"""
_py_version_regex = re.compile(
r"^\s*" + VERSION_PATTERN + r"\s*$",
re.VERBOSE | re.IGNORECASE,
)
_git_version_regex = re.compile(
r"^\s*" + GIT_VERSION_PATTERN + r"\s*$",
re.VERBOSE | re.IGNORECASE,
)
class ReleaseType(Flag):
FINAL = auto()
ALPHA = auto()
BETA = auto()
RELEASE_CANDIDATE = auto()
POST = auto()
DEV = auto()
class BaseVersion:
def __init__(self, version: str) -> None:
self._release: str = "?"
self._release_type = ReleaseType(0)
self._tag: str = "?"
self._orig: str = version.strip()
self._release_tup: Tuple[int, ...] = tuple()
self._extra_tup: Tuple[int, ...] = tuple()
self._has_dev_part: bool = False
self._dev_count: int = 0
self._valid_version: bool = False
@property
def full_version(self) -> str:
return self._orig
@property
def release(self) -> str:
return self._release
@property
def tag(self) -> str:
return self._tag
@property
def release_type(self) -> ReleaseType:
return self._release_type
@property
def dev_count(self) -> int:
return self._dev_count
def is_pre_release(self) -> bool:
for pr_idx in (1, 2, 3):
if ReleaseType(1 << pr_idx) in self._release_type:
return True
return False
def is_post_release(self) -> bool:
return ReleaseType.POST in self._release_type
def is_dev_release(self) -> bool:
return ReleaseType.DEV in self._release_type
def is_alpha_release(self) -> bool:
return ReleaseType.ALPHA in self._release_type
def is_beta_release(self) -> bool:
return ReleaseType.BETA in self._release_type
def is_release_candidate(self) -> bool:
return ReleaseType.RELEASE_CANDIDATE in self._release_type
def is_final_release(self) -> bool:
return ReleaseType.FINAL in self._release_type
def is_valid_version(self) -> bool:
return self._valid_version
def __str__(self) -> str:
return self._orig
def _validate(self, other: BaseVersion) -> None:
if not self._valid_version:
raise ValueError(
f"Version {self._orig} is not a valid version string "
f"for type {type(self).__name__}"
)
if not other._valid_version:
raise ValueError(
f"Version {other._orig} is not a valid version string "
f"for type {type(self).__name__}"
)
def __eq__(self, __value: object) -> bool:
if not isinstance(__value, type(self)):
raise ValueError("Invalid type for comparison")
self._validate(__value)
if self._release_tup != __value._release_tup:
return False
if self._extra_tup != __value._extra_tup:
return False
if self._has_dev_part != __value._has_dev_part:
return False
if self._dev_count != __value._dev_count:
return False
return True
def __lt__(self, __value: object) -> bool:
if not isinstance(__value, type(self)):
raise ValueError("Invalid type for comparison")
self._validate(__value)
if self._release_tup != __value._release_tup:
return self._release_tup < __value._release_tup
if self._extra_tup != __value._extra_tup:
return self._extra_tup < __value._extra_tup
if self._has_dev_part != __value._has_dev_part:
return self._has_dev_part
return self._dev_count < __value._dev_count
def __le__(self, __value: object) -> bool:
if not isinstance(__value, type(self)):
raise ValueError("Invalid type for comparison")
self._validate(__value)
if self._release_tup > __value._release_tup:
return False
if self._extra_tup > __value._extra_tup:
return False
if self._has_dev_part != __value._has_dev_part:
return self._has_dev_part
return self._dev_count <= __value._dev_count
def __ne__(self, __value: object) -> bool:
if not isinstance(__value, type(self)):
raise ValueError("Invalid type for comparison")
self._validate(__value)
if self._release_tup != __value._release_tup:
return True
if self._extra_tup != __value._extra_tup:
return True
if self._has_dev_part != __value._has_dev_part:
return True
if self._dev_count != __value._dev_count:
return True
return False
def __gt__(self, __value: object) -> bool:
if not isinstance(__value, type(self)):
raise ValueError("Invalid type for comparison")
self._validate(__value)
if self._release_tup != __value._release_tup:
return self._release_tup > __value._release_tup
if self._extra_tup != __value._extra_tup:
return self._extra_tup > __value._extra_tup
if self._has_dev_part != __value._has_dev_part:
return __value._has_dev_part
return self._dev_count > __value._dev_count
def __ge__(self, __value: object) -> bool:
if not isinstance(__value, type(self)):
raise ValueError("Invalid type for comparison")
self._validate(__value)
if self._release_tup < __value._release_tup:
return False
if self._extra_tup < __value._extra_tup:
return False
if self._has_dev_part != __value._has_dev_part:
return __value._has_dev_part
return self._dev_count >= __value._dev_count
class PyVersion(BaseVersion):
def __init__(self, version: str) -> None:
super().__init__(version)
ver_match = _py_version_regex.match(version)
if ver_match is None:
return
version_info = ver_match.groupdict()
release: Optional[str] = version_info["release"]
if release is None:
return
self._valid_version = True
self._release = release
self._tag = f"v{release}" if self._orig[0].lower() == "v" else release
self._release_tup = tuple(int(part) for part in release.split("."))
self._extra_tup = (1, 0, 0)
if version_info["pre"] is not None:
pre_conv = dict([("a", 1), ("b", 2), ("c", 3), ("r", 3), ("p", 3)])
lbl = version_info["pre_l"][0].lower()
self._extra_tup = (0, pre_conv.get(lbl, 0), int(version_info["pre_n"] or 0))
self._tag += version_info["pre"]
self._release_type |= ReleaseType(1 << pre_conv.get(lbl, 1))
if version_info["post"] is not None:
# strange combination of a "post" pre-release.
num = version_info["post_n1"] or version_info["post_n2"]
self._extra_tup += (int(num or 0),)
self._tag += version_info["post"]
self._release_type |= ReleaseType.POST
elif version_info["post"] is not None:
num = version_info["post_n1"] or version_info["post_n2"]
self._extra_tup = (2, int(num or 0), 0)
self._tag += version_info["post"]
self._release_type |= ReleaseType.POST
self._has_dev_part = version_info["dev"] is not None
if self._has_dev_part:
self._release_type |= ReleaseType.DEV
elif self._release_type.value == 0:
self._release_type = ReleaseType.FINAL
elif self._release_type.value == ReleaseType.POST.value:
self._release_type |= ReleaseType.FINAL
self._dev_count = int(version_info["dev_n"] or 0)
self.local: Optional[str] = version_info["local"]
def convert_to_git(self, version_info: Dict[str, Optional[str]]) -> GitVersion:
git_version: Optional[str] = version_info["release"]
if git_version is None:
raise ValueError("Invalid version string")
if self._orig[0].lower() == "v":
git_version = f"v{git_version}"
local: str = version_info["local"] or ""
# Assume semantic versioning, convert the version string.
if version_info["dev_n"] is not None:
major, _, minor = git_version.rpartition(".")
if major:
git_version = f"v{major}.{max(int(minor) - 1, 0)}"
if version_info["pre"] is not None:
git_version = f"{git_version}{version_info['pre']}"
dev_num = version_info["dev_n"] or 0
git_version = f"{git_version}-{dev_num}"
local_parts = local.split(".", 1)
if local_parts[0]:
git_version = f"{git_version}-{local_parts[0]}"
if len(local_parts) > 1:
git_version = f"{git_version}-dirty"
return GitVersion(git_version)
class GitVersion(BaseVersion):
def __init__(self, version: str) -> None:
super().__init__(version)
self._is_dirty: bool = False
self._is_inferred: bool = False
ver_match = _git_version_regex.match(version)
if ver_match is None:
# Check Fallback
fb_match = re.match(r"(?P<hash>[a-fA-F0-9]+)(?P<dirty>-dirty)?", self._orig)
if fb_match is None:
return
self._tag = ""
self._release = fb_match["hash"]
self._is_dirty = fb_match["dirty"] is not None
self._is_inferred = True
return
version_info = ver_match.groupdict()
release: Optional[str] = version_info["release"]
if release is None:
return
self._valid_version = True
self._release = release
self._tag = version_info["tag"] or "?"
self._release_tup = tuple(int(part) for part in release.split("."))
self._extra_tup = (1, 0, 0)
if version_info["pre"] is not None:
pre_conv = dict([("a", 1), ("b", 2), ("c", 3), ("r", 3), ("p", 3)])
lbl = version_info["pre_l"][0].lower()
self._extra_tup = (0, pre_conv.get(lbl, 0), int(version_info["pre_n"] or 0))
self._release_type = ReleaseType(1 << pre_conv.get(lbl, 1))
# All git versions are considered to have a dev part. Contrary to python
# versioning, a version with a dev number is greater than the same version
# without one.
self._has_dev_part = True
self._dev_count = int(version_info["dev_n"] or 0)
if self._dev_count > 0:
self._release_type |= ReleaseType.DEV
if self._release_type.value == 0:
self._release_type = ReleaseType.FINAL
self._is_inferred = version_info["inferred"] is not None
self._is_dirty = version_info["dirty"] is not None
@property
def short_version(self) -> str:
if not self._valid_version:
return "?"
return f"{self._tag}-{self._dev_count}"
@property
def dirty(self) -> bool:
return self._is_dirty
@property
def inferred(self) -> bool:
return self._is_inferred
def is_fallback(self) -> bool:
return self._is_inferred and not self._valid_version
def infer_last_tag(self) -> str:
if self._valid_version:
if self._is_inferred:
# We can't infer a previous release from another inferred release
return self._tag
type_choices = dict([(1, "a"), (2, "b"), (3, "rc")])
if self.is_pre_release() and self._extra_tup > (0, 1, 0):
type_idx = self._extra_tup[1]
type_count = self._extra_tup[2]
if type_count == 0:
type_idx -= 1
else:
type_count -= 1
pretype = type_choices.get(type_idx, "rc")
return f"{self._release}.{pretype}{type_count}"
else:
parts = [int(ver) for ver in self._release.split(".")]
new_ver: List[str] = []
need_decrement = True
for part in reversed(parts):
if part > 0 and need_decrement:
need_decrement = False
part -= 1
new_ver.insert(0, str(part))
return "v" + ".".join(new_ver)
return "v0.0.0"

pyproject.toml Normal file

@@ -0,0 +1,71 @@
[project]
name = "moonraker"
dynamic = ["version"]
description = "API Server for Klipper"
authors = [
{name = "Eric Callahan", email = "arksine.code@gmail.com"},
]
dependencies = [
"tornado==6.2.0 ; python_version=='3.7'",
"tornado==6.4.0 ; python_version>='3.8'",
"pyserial==3.4",
"pyserial-asyncio==0.6",
"pillow==9.5.0 ; python_version=='3.7'",
"pillow==10.3.0 ; python_version>='3.8'",
"streaming-form-data==1.11.0 ; python_version=='3.7'",
"streaming-form-data==1.15.0 ; python_version>='3.8'",
"distro==1.9.0",
"inotify-simple==1.3.5",
"libnacl==2.1.0",
"paho-mqtt==1.6.1",
"zeroconf==0.131.0",
"preprocess-cancellation==0.2.1",
"jinja2==3.1.4",
"dbus-next==0.2.3",
"apprise==1.8.0",
"ldap3==2.9.1",
"python-periphery==2.4.1"
]
requires-python = ">=3.7"
readme = "README.md"
license = {text = "GPL-3.0-only"}
keywords = ["klipper", "3D printing", "server", "moonraker"]
classifiers = [
"Development Status :: 4 - Beta",
"Topic :: Internet :: WWW/HTTP :: HTTP Servers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
]
[project.urls]
homepage = "https://github.com/Arksine/moonraker"
repository = "https://github.com/Arksine/moonraker"
documentation = "https://moonraker.readthedocs.io"
changelog = "https://moonraker.readthedocs.io/en/latest/changelog/"
[project.optional-dependencies]
msgspec=["msgspec>=0.18.4 ; python_version>='3.8'"]
uvloop=["uvloop>=0.17.0"]
speedups = ["moonraker[msgspec,uvloop]"]
[tool.pdm.version]
source = "scm"
write_to = "moonraker/__version__.py"
write_template = "__version__ = '{}'\n"
[tool.pdm.build]
excludes = ["./**/.git", "moonraker/moonraker.py"]
includes = ["moonraker"]
editable-backend = "path"
custom-hook = "scripts/pdm_build_dist.py"
[project.scripts]
moonraker = "moonraker.server:main"
[build-system]
requires = ["pdm-backend"]
build-backend = "pdm.backend"


@@ -0,0 +1,50 @@
#!/bin/bash
# LMDB Database backup utility
DATABASE_PATH="${HOME}/printer_data/database"
MOONRAKER_ENV="${HOME}/moonraker-env"
OUTPUT_FILE="${HOME}/database.backup"
print_help()
{
echo "Moonraker Database Backup Utility"
echo
echo "usage: backup-database.sh [-h] [-e <python env path>] [-d <database path>] [-o <output file>]"
echo
echo "optional arguments:"
echo " -h show this message"
echo " -e <env path> Moonraker Python Environment"
echo " -d <database path> Moonraker LMDB database to backup"
echo " -o <output file> backup file to save to"
exit 0
}
# Parse command line arguments
while getopts "he:d:o:" arg; do
case $arg in
h) print_help;;
e) MOONRAKER_ENV=$OPTARG;;
d) DATABASE_PATH=$OPTARG;;
o) OUTPUT_FILE=$OPTARG;;
esac
done
PYTHON_BIN="${MOONRAKER_ENV}/bin/python"
DB_TOOL="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/dbtool.py"
if [ ! -f "$PYTHON_BIN" ]; then
echo "No Python binary found at '${PYTHON_BIN}'"
exit 1
fi
if [ ! -f "$DATABASE_PATH/data.mdb" ]; then
echo "No Moonraker database found at '${DATABASE_PATH}'"
exit 1
fi
if [ ! -f "$DB_TOOL" ]; then
echo "Unable to locate dbtool.py at '${DB_TOOL}'"
exit 1
fi
"${PYTHON_BIN}" "${DB_TOOL}" backup "${DATABASE_PATH}" "${OUTPUT_FILE}"

scripts/data-path-fix.sh Normal file

@@ -0,0 +1,65 @@
#!/bin/bash
# Data Path Fix for legacy MainsailOS and FluiddPi installations running
# a single instance of Moonraker with a default configuration
DATA_PATH="${HOME}/printer_data"
DATA_PATH_BKP="${HOME}/.broken_printer_data"
DB_PATH="${HOME}/.moonraker_database"
CONFIG_PATH="${HOME}/klipper_config"
LOG_PATH="${HOME}/klipper_logs"
GCODE_PATH="${HOME}/gcode_files"
MOONRAKER_CONF="${CONFIG_PATH}/moonraker.conf"
MOONRAKER_LOG="${LOG_PATH}/moonraker.log"
ALIAS="moonraker"

# Parse command line arguments
while getopts "c:l:d:a:m:g:" arg; do
    case $arg in
        c)
            MOONRAKER_CONF=$OPTARG
            CONFIG_PATH="$( dirname $OPTARG )"
            ;;
        l)
            MOONRAKER_LOG=$OPTARG
            LOG_PATH="$( dirname $OPTARG )"
            ;;
        d)
            DATA_PATH=$OPTARG
            dpbase="$( basename $OPTARG )"
            DATA_PATH_BKP="${HOME}/.broken_${dpbase}"
            ;;
        a)
            ALIAS=$OPTARG
            ;;
        m)
            DB_PATH=$OPTARG
            [ ! -f "${DB_PATH}/data.mdb" ] && echo "No valid database found at ${DB_PATH}" && exit 1
            ;;
        g)
            GCODE_PATH=$OPTARG
            [ ! -d "${GCODE_PATH}" ] && echo "No GCode Path found at ${GCODE_PATH}" && exit 1
            ;;
    esac
done

[ ! -f "${MOONRAKER_CONF}" ] && echo "Error: unable to find config: ${MOONRAKER_CONF}" && exit 1
[ ! -d "${LOG_PATH}" ] && echo "Error: unable to find log path: ${LOG_PATH}" && exit 1

sudo systemctl stop "${ALIAS}"
[ -d "${DATA_PATH_BKP}" ] && rm -rf "${DATA_PATH_BKP}"
[ -d "${DATA_PATH}" ] && echo "Moving broken datapath to ${DATA_PATH_BKP}" && mv "${DATA_PATH}" "${DATA_PATH_BKP}"
mkdir "${DATA_PATH}"

echo "Creating symbolic links..."
[ -f "${DB_PATH}/data.mdb" ] && ln -s "${DB_PATH}" "$DATA_PATH/database"
[ -d "${GCODE_PATH}" ] && ln -s "${GCODE_PATH}" "$DATA_PATH/gcodes"
ln -s "${LOG_PATH}" "$DATA_PATH/logs"
ln -s "${CONFIG_PATH}" "$DATA_PATH/config"
[ -f "${DB_PATH}/data.mdb" ] && ~/moonraker-env/bin/python -mlmdb -e "${DB_PATH}" -d moonraker edit --delete=validate_install

echo "Running Moonraker install script..."
~/moonraker/scripts/install-moonraker.sh -f -a "${ALIAS}" -d "${DATA_PATH}" -c "${MOONRAKER_CONF}" -l "${MOONRAKER_LOG}"
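
A hypothetical run against a legacy FluiddPi layout that keeps an existing gcode folder; the alias and path are illustrative:

    # Rebuild ~/printer_data from the legacy folders and re-run the installer
    ./data-path-fix.sh -a moonraker -g "${HOME}/gcode_files"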


@@ -9,6 +9,7 @@ import pathlib
import base64
import tempfile
import re
import time
from typing import Any, Dict, Optional, TextIO, Tuple
import lmdb
@@ -16,7 +17,9 @@ MAX_NAMESPACES = 100
MAX_DB_SIZE = 200 * 2**20
HEADER_KEY = b"MOONRAKER_DATABASE_START"
LINE_MATCH = re.compile(r"\+(\d+),(\d+):(.+?)->(.+)")
LINE_MATCH = re.compile(
    r"^\+(\d+),(\d+):([A-Za-z0-9+/]+={0,2})->([A-Za-z0-9+/]+={0,2})$"
)

class DBToolError(Exception):
    pass
@@ -157,10 +160,13 @@ def restore(args: Dict[str, Any]):
    print(f"Restoring backup from '{input_db}' to '{dest_path}'...")
    bkp_dir: Optional[pathlib.Path] = None
    if dest_path.joinpath("data.mdb").exists():
        tmp_dir = pathlib.Path(tempfile.gettempdir())
        bkp_dir = tmp_dir.joinpath("moonrakerdb_backup")
        bkp_dir = dest_path.parent.joinpath("backup")
        if not bkp_dir.exists():
            bkp_dir = pathlib.Path(tempfile.gettempdir())
        str_time = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())
        bkp_dir = bkp_dir.joinpath(f"{str_time}/database")
        if not bkp_dir.is_dir():
            bkp_dir.mkdir()
            bkp_dir.mkdir(parents=True)
        print(f"Warning: database file found in '{dest_path}', "
              "all data will be overwritten. Copying existing DB "
              f"to '{bkp_dir}'")
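
For context, the tightened LINE_MATCH pattern anchors each backup line and requires base64 key/value fields. A quick sanity check of a backup file against the same pattern, using GNU grep's -P flag; the sample entry in the comment is illustrative:

    # A valid backup entry looks like: +3,8:a2V5->dmFsdWU=
    grep -cP '^\+(\d+),(\d+):([A-Za-z0-9+/]+={0,2})->([A-Za-z0-9+/]+={0,2})$' "${HOME}/database.backup"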


@@ -1,6 +1,6 @@
#!/bin/bash
# Helper Script for fetching the API Key from a moonraker database
DATABASE_PATH="${HOME}/.moonraker_database"
DATABASE_PATH="${HOME}/printer_data/database"
MOONRAKER_ENV="${HOME}/moonraker-env"
DB_ARGS="--read=READ --db=authorized_users get _API_KEY_USER_"
API_REGEX='(?<="api_key": ")([^"]+)'
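
Assuming the helper prints the extracted key to stdout, it can feed an authorized request; host and port are illustrative:

    # Fetch the API key and use it for an authorized request
    API_KEY="$(~/moonraker/scripts/fetch-apikey.sh)"
    curl -H "X-Api-Key: ${API_KEY}" http://localhost:7125/server/info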

104
scripts/finish-upgrade.sh Normal file

@@ -0,0 +1,104 @@
#!/bin/bash
# Helper script for completing service upgrades via ssh
ADDRESS="localhost"
PORT="7125"
API_KEY=""

# Python Helper Scripts
check_sudo_request=$( cat << EOF
import sys
import json
try:
    ret = json.load(sys.stdin)
except Exception:
    exit(0)
entries = ret.get('result', {}).get('entries', [])
for item in entries:
    if item['dismissed'] is False and item['title'] == 'Sudo Password Required':
        sys.stdout.write('true')
        exit(0)
sys.stdout.write('false')
EOF
)

check_pw_response=$( cat << EOF
import sys
import json
try:
    ret = json.load(sys.stdin)
except Exception:
    exit(0)
responses = ret.get('result', {}).get('sudo_responses', [])
if responses:
    sys.stdout.write('\n'.join(responses))
EOF
)

print_help_message()
{
    echo "Utility to complete privileged upgrades for Moonraker"
    echo
    echo "usage: finish-upgrade.sh [-h] [-a <address>] [-p <port>] [-k <api_key>]"
    echo
    echo "optional arguments:"
    echo "  -h             show this message"
    echo "  -a <address>   address for Moonraker instance"
    echo "  -p <port>      port for Moonraker instance"
    echo "  -k <api_key>   API Key for authorization"
}

while getopts "a:p:k:h" arg; do
    case $arg in
        a) ADDRESS=${OPTARG};;
        p) PORT=${OPTARG};;
        k) API_KEY=${OPTARG};;
        h)
            print_help_message
            exit 0
            ;;
    esac
done

base_url="http://${ADDRESS}:${PORT}"
echo "Completing Upgrade for Moonraker at ${base_url}"
echo "Requesting Announcements..."
ann_url="${base_url}/server/announcements/list"
curl_cmd=(curl -f -s -S "${ann_url}")
[ -n "${API_KEY}" ] && curl_cmd+=(-H "X-Api-Key: ${API_KEY}")
result="$( "${curl_cmd[@]}" 2>&1 )"
if [ $? -ne 0 ]; then
    echo "Moonraker announcement request failed with error: ${result}"
    echo "Make sure the address and port are correct. If authorization"
    echo "is required supply the API Key with the -k option."
    exit 1
fi

has_req="$( echo "$result" | python3 -c "${check_sudo_request}" )"
if [ "$has_req" != "true" ]; then
    echo "No sudo request detected, aborting"
    exit 1
fi

# Request Password, send to Moonraker
echo "Sudo request announcement found, please enter your password"
read -sp "Password: " passvar
echo -e "\n"
sudo_url="${base_url}/machine/sudo/password"
curl_cmd=(curl -f -s -S -X POST "${sudo_url}")
curl_cmd+=(-d "{\"password\": \"${passvar}\"}")
curl_cmd+=(-H "Content-Type: application/json")
[ -n "$API_KEY" ] && curl_cmd+=(-H "X-Api-Key: ${API_KEY}")
result="$( "${curl_cmd[@]}" 2>&1)"
if [ $? -ne 0 ]; then
    echo "Moonraker password request failed with error: ${result}"
    echo "Make sure you entered the correct password."
    exit 1
fi

response="$( echo "$result" | python3 -c "${check_pw_response}" )"
if [ -n "${response}" ]; then
    echo "${response}"
else
    echo "Invalid response received from Moonraker. Raw result: ${result}"
fi
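
A sketch of completing an upgrade against a remote instance; the address is a placeholder and the key is only needed when authorization is enforced:

    # Complete a pending privileged upgrade over the network
    ./finish-upgrade.sh -a mainsailos.local -p 7125 -k "${API_KEY}"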


@@ -7,8 +7,24 @@ SYSTEMDDIR="/etc/systemd/system"
REBUILD_ENV="${MOONRAKER_REBUILD_ENV:-n}"
FORCE_DEFAULTS="${MOONRAKER_FORCE_DEFAULTS:-n}"
DISABLE_SYSTEMCTL="${MOONRAKER_DISABLE_SYSTEMCTL:-n}"
CONFIG_PATH="${MOONRAKER_CONFIG_PATH:-${HOME}/moonraker.conf}"
LOG_PATH="${MOONRAKER_LOG_PATH:-/tmp/moonraker.log}"
SKIP_POLKIT="${MOONRAKER_SKIP_POLKIT:-n}"
CONFIG_PATH="${MOONRAKER_CONFIG_PATH}"
LOG_PATH="${MOONRAKER_LOG_PATH}"
DATA_PATH="${MOONRAKER_DATA_PATH}"
INSTANCE_ALIAS="${MOONRAKER_ALIAS:-moonraker}"
SPEEDUPS="${MOONRAKER_SPEEDUPS:-n}"
SERVICE_VERSION="1"
package_decode_script=$( cat << EOF
import sys
import json
try:
    ret = json.load(sys.stdin)
except Exception:
    exit(0)
sys.stdout.write(' '.join(ret['debian']))
EOF
)
# Step 2: Clean up legacy installation
cleanup_legacy() {
@@ -25,17 +41,30 @@ cleanup_legacy() {
# Step 3: Install packages
install_packages()
{
    PKGLIST="python3-virtualenv python3-dev libopenjp2-7 python3-libgpiod"
    PKGLIST="${PKGLIST} curl libcurl4-openssl-dev libssl-dev liblmdb-dev"
    PKGLIST="${PKGLIST} libsodium-dev zlib1g-dev libjpeg-dev packagekit"
    # Update system package info
    report_status "Running apt-get update..."
    sudo apt-get update --allow-releaseinfo-change
    system_deps="${SRCDIR}/scripts/system-dependencies.json"
    if [ -f "${system_deps}" ]; then
        if [ ! -x "$(command -v python3)" ]; then
            report_status "Installing python3 base package..."
            sudo apt-get install --yes python3
        fi
        PKGS="$( cat ${system_deps} | python3 -c "${package_decode_script}" )"
    else
        echo "Error: system-dependencies.json not found, falling back to legacy package list"
        PKGLIST="${PKGLIST} python3-virtualenv python3-dev"
        PKGLIST="${PKGLIST} libopenjp2-7 libsodium-dev zlib1g-dev libjpeg-dev"
        PKGLIST="${PKGLIST} packagekit wireless-tools curl"
        PKGS=${PKGLIST}
    fi
    # Install desired packages
    report_status "Installing packages..."
    sudo apt-get install --yes ${PKGLIST}
    report_status "Installing Moonraker Dependencies:"
    report_status "${PKGS}"
    sudo apt-get install --yes ${PKGS}
}
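
# To preview the decoded package list by hand (mirrors the helper above):
#   python3 -c 'import json,sys; print(" ".join(json.load(sys.stdin)["debian"]))' \
#       < "${SRCDIR}/scripts/system-dependencies.json"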
# Step 4: Create python virtual environment
@@ -50,29 +79,84 @@ create_virtualenv()
    fi
    if [ ! -d ${PYTHONDIR} ]; then
        GET_PIP="${HOME}/get-pip.py"
        virtualenv --no-pip -p /usr/bin/python3 ${PYTHONDIR}
        curl https://bootstrap.pypa.io/pip/3.6/get-pip.py -o ${GET_PIP}
        ${PYTHONDIR}/bin/python ${GET_PIP}
        rm ${GET_PIP}
        virtualenv -p /usr/bin/python3 ${PYTHONDIR}
        #GET_PIP="${HOME}/get-pip.py"
        #curl https://bootstrap.pypa.io/pip/3.6/get-pip.py -o ${GET_PIP}
        #${PYTHONDIR}/bin/python ${GET_PIP}
        #rm ${GET_PIP}
    fi
    # Install/update dependencies
    export SKIP_CYTHON=1
    ${PYTHONDIR}/bin/pip install -r ${SRCDIR}/scripts/moonraker-requirements.txt
    if [ ${SPEEDUPS} = "y" ]; then
        report_status "Installing Speedups..."
        ${PYTHONDIR}/bin/pip install -r ${SRCDIR}/scripts/moonraker-speedups.txt
    fi
}
# Step 5: Install startup script
# Step 5: Initialize data folder
init_data_path()
{
    report_status "Initializing Moonraker Data Path at ${DATA_PATH}"
    config_dir="${DATA_PATH}/config"
    logs_dir="${DATA_PATH}/logs"
    env_dir="${DATA_PATH}/systemd"
    config_file="${DATA_PATH}/config/moonraker.conf"
    [ ! -e "${DATA_PATH}" ] && mkdir ${DATA_PATH}
    [ ! -e "${config_dir}" ] && mkdir ${config_dir}
    [ ! -e "${logs_dir}" ] && mkdir ${logs_dir}
    [ ! -e "${env_dir}" ] && mkdir ${env_dir}
    [ -n "${CONFIG_PATH}" ] && config_file=${CONFIG_PATH}
    # Write initial configuration for first time installs
    if [ ! -f $SERVICE_FILE ] && [ ! -e "${config_file}" ]; then
        # detect machine provider
        if [ "$( systemctl is-active dbus )" = "active" ]; then
            provider="systemd_dbus"
        else
            provider="systemd_cli"
        fi
        report_status "Writing Config File ${config_file}:\n"
        /bin/sh -c "cat > ${config_file}" << EOF
# Moonraker Configuration File
[server]
host: 0.0.0.0
port: 7125
# Make sure the klippy_uds_address is correct. It is initialized
# to the default address.
klippy_uds_address: /tmp/klippy_uds
[machine]
provider: ${provider}
EOF
        cat ${config_file}
    fi
}
# Step 6: Install startup script
install_script()
{
    # Create systemd service file
    SERVICE_FILE="${SYSTEMDDIR}/moonraker.service"
    ENV_FILE="${DATA_PATH}/systemd/moonraker.env"
    if [ ! -f $ENV_FILE ] || [ $FORCE_DEFAULTS = "y" ]; then
        rm -f $ENV_FILE
        env_vars="MOONRAKER_DATA_PATH=\"${DATA_PATH}\""
        [ -n "${CONFIG_PATH}" ] && env_vars="${env_vars}\nMOONRAKER_CONFIG_PATH=\"${CONFIG_PATH}\""
        [ -n "${LOG_PATH}" ] && env_vars="${env_vars}\nMOONRAKER_LOG_PATH=\"${LOG_PATH}\""
        env_vars="${env_vars}\nMOONRAKER_ARGS=\"-m moonraker\""
        env_vars="${env_vars}\nPYTHONPATH=\"${SRCDIR}\"\n"
        echo -e $env_vars > $ENV_FILE
    fi
    [ -f $SERVICE_FILE ] && [ $FORCE_DEFAULTS = "n" ] && return
    report_status "Installing system start script..."
    sudo groupadd -f moonraker-admin
    sudo /bin/sh -c "cat > ${SERVICE_FILE}" << EOF
#Systemd service file for moonraker
# systemd service file for moonraker
[Unit]
Description=API Server for Klipper
Description=API Server for Klipper SV${SERVICE_VERSION}
Requires=network-online.target
After=network-online.target
@@ -84,50 +168,57 @@ Type=simple
User=$USER
SupplementaryGroups=moonraker-admin
RemainAfterExit=yes
WorkingDirectory=${SRCDIR}
ExecStart=${LAUNCH_CMD} -c ${CONFIG_PATH} -l ${LOG_PATH}
EnvironmentFile=${ENV_FILE}
ExecStart=${PYTHONDIR}/bin/python \$MOONRAKER_ARGS
Restart=always
RestartSec=10
EOF
    # Use systemctl to enable the klipper systemd service script
    if [ $DISABLE_SYSTEMCTL = "n" ]; then
        sudo systemctl enable moonraker.service
        sudo systemctl enable "${INSTANCE_ALIAS}.service"
        sudo systemctl daemon-reload
    fi
}
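
# For reference, a default single-instance install writes an env file
# similar to the following (values illustrative):
#   MOONRAKER_DATA_PATH="/home/pi/printer_data"
#   MOONRAKER_ARGS="-m moonraker"
#   PYTHONPATH="/home/pi/moonraker"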
# Step 7: Validate/Install polkit rules
check_polkit_rules()
{
    if [ ! -x "$(command -v pkaction)" ]; then
    if [ ! -x "$(command -v pkaction || true)" ]; then
        return
    fi
    POLKIT_VERSION="$( pkaction --version | grep -Po "(\d?\.\d+)" )"
    POLKIT_VERSION="$( pkaction --version | grep -Po "(\d+\.?\d*)" )"
    NEED_POLKIT_INSTALL="n"
    if [ "$POLKIT_VERSION" = "0.105" ]; then
        POLKIT_LEGACY_FILE="/etc/polkit-1/localauthority/50-local.d/10-moonraker.pkla"
        # legacy policykit rules don't give users other than root read access
        if sudo [ ! -f $POLKIT_LEGACY_FILE ]; then
            echo -e "\n*** No PolicyKit Rules detected, run 'set-policykit-rules.sh'"
            echo "*** if you wish to grant Moonraker authorization to manage"
            echo "*** system services, reboot/shutdown the system, and update"
            echo "*** packages."
            NEED_POLKIT_INSTALL="y"
        fi
    else
        POLKIT_FILE="/etc/polkit-1/rules.d/moonraker.rules"
        POLKIT_USR_FILE="/usr/share/polkit-1/rules.d/moonraker.rules"
        if [ ! -f $POLKIT_FILE ] && [ ! -f $POLKIT_USR_FILE ]; then
            NEED_POLKIT_INSTALL="y"
        fi
    fi
    if [ "${NEED_POLKIT_INSTALL}" = "y" ]; then
        if [ "${SKIP_POLKIT}" = "y" ]; then
            echo -e "\n*** No PolicyKit Rules detected, run 'set-policykit-rules.sh'"
            echo "*** if you wish to grant Moonraker authorization to manage"
            echo "*** system services, reboot/shutdown the system, and update"
            echo "*** packages."
        else
            report_status "Installing PolKit Rules"
            ${SRCDIR}/scripts/set-policykit-rules.sh -z
        fi
    fi
}
# Step 6: Start server
# Step 8: Start server
start_software()
{
    report_status "Launching Moonraker API Server..."
    sudo systemctl restart moonraker
    sudo systemctl restart ${INSTANCE_ALIAS}
}
# Helper functions
@@ -149,24 +240,43 @@ set -e
# Find SRCDIR from the pathname of this script
SRCDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )"/.. && pwd )"
LAUNCH_CMD="${PYTHONDIR}/bin/python ${SRCDIR}/moonraker/moonraker.py"
# Parse command line arguments
while getopts "rfzc:l:" arg; do
while getopts "rfzxsc:l:d:a:" arg; do
    case $arg in
        r) REBUILD_ENV="y";;
        f) FORCE_DEFAULTS="y";;
        z) DISABLE_SYSTEMCTL="y";;
        x) SKIP_POLKIT="y";;
        s) SPEEDUPS="y";;
        c) CONFIG_PATH=$OPTARG;;
        l) LOG_PATH=$OPTARG;;
        d) DATA_PATH=$OPTARG;;
        a) INSTANCE_ALIAS=$OPTARG;;
    esac
done

if [ -z "${DATA_PATH}" ]; then
    if [ "${INSTANCE_ALIAS}" = "moonraker" ]; then
        DATA_PATH="${HOME}/printer_data"
    else
        num="$( echo ${INSTANCE_ALIAS} | grep -Po "moonraker[-_]?\K\d+" || true )"
        if [ -n "${num}" ]; then
            DATA_PATH="${HOME}/printer_${num}_data"
        else
            DATA_PATH="${HOME}/${INSTANCE_ALIAS}_data"
        fi
    fi
fi
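
# Examples of the alias -> data path fallback above (hypothetical aliases):
#   moonraker       -> ${HOME}/printer_data
#   moonraker-2     -> ${HOME}/printer_2_data
#   voron_moonraker -> ${HOME}/voron_moonraker_data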
SERVICE_FILE="${SYSTEMDDIR}/${INSTANCE_ALIAS}.service"
# Run installation steps defined above
verify_ready
cleanup_legacy
install_packages
create_virtualenv
init_data_path
install_script
check_polkit_rules
if [ $DISABLE_SYSTEMCTL = "n" ]; then

57
scripts/make_sysdeps.py Normal file

@@ -0,0 +1,57 @@
#! /usr/bin/python3
# Create system dependencies json file from the install script
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import argparse
import pathlib
import json
import re
from typing import List, Dict

def make_sysdeps(input: str, output: str, distro: str, truncate: bool) -> None:
    sysdeps: Dict[str, List[str]] = {}
    outpath = pathlib.Path(output).expanduser().resolve()
    if outpath.is_file() and not truncate:
        sysdeps = json.loads(outpath.read_bytes())
    inst_path: pathlib.Path = pathlib.Path(input).expanduser().resolve()
    if not inst_path.is_file():
        raise Exception(f"Unable to locate install script: {inst_path}")
    data = inst_path.read_text()
    plines: List[str] = re.findall(r'PKGLIST="(.*)"', data)
    plines = [p.lstrip("${PKGLIST}").strip() for p in plines]
    packages: List[str] = []
    for line in plines:
        packages.extend(line.split())
    sysdeps[distro] = packages
    outpath.write_text(json.dumps(sysdeps, indent=4))

if __name__ == "__main__":
    def_path = pathlib.Path(__file__).parent
    desc = (
        "make_sysdeps - generate system dependency json file from an install script"
    )
    parser = argparse.ArgumentParser(description=desc)
    parser.add_argument(
        "-i", "--input", metavar="<install script>",
        help="path of the install script to read",
        default=f"{def_path}/install-moonraker.sh"
    )
    parser.add_argument(
        "-o", "--output", metavar="<output file>",
        help="path of the system dependency file to write",
        default=f"{def_path}/system-dependencies.json"
    )
    parser.add_argument(
        "-d", "--distro", metavar="<linux distro>",
        help="linux distro for dependencies", default="debian"
    )
    parser.add_argument(
        "-t", "--truncate", action="store_true",
        help="truncate output file"
    )
    args = parser.parse_args()
    make_sysdeps(args.input, args.output, args.distro, args.truncate)
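
A hypothetical regeneration run from the repository root; the defaults already point at install-moonraker.sh and system-dependencies.json next to this script:

    # Rebuild the debian entry of system-dependencies.json
    python3 scripts/make_sysdeps.py -d debian -t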


@@ -1,18 +1,21 @@
# Python dependencies for Moonraker
tornado==6.1.0
--find-links=python_wheels
tornado==6.2.0 ; python_version=='3.7'
tornado==6.4.0 ; python_version>='3.8'
pyserial==3.4
pyserial-asyncio==0.6
pillow==9.0.1
lmdb==1.2.1
streaming-form-data==1.8.1
distro==1.5.0
pillow==9.5.0 ; python_version=='3.7'
pillow==10.3.0 ; python_version>='3.8'
streaming-form-data==1.11.0 ; python_version=='3.7'
streaming-form-data==1.15.0 ; python_version>='3.8'
distro==1.9.0
inotify-simple==1.3.5
libnacl==1.7.2
paho-mqtt==1.5.1
pycurl==7.44.1
zeroconf==0.37.0
preprocess-cancellation==0.2.0
jinja2==3.0.3
libnacl==2.1.0
paho-mqtt==1.6.1
zeroconf==0.131.0
preprocess-cancellation==0.2.1
jinja2==3.1.4
dbus-next==0.2.3
apprise==0.9.7
apprise==1.8.0
ldap3==2.9.1
python-periphery==2.4.1
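
The environment markers above resolve per interpreter, so one requirements file serves Python 3.7 and newer alike. Assuming the standard virtualenv location:

    # On Python 3.8+ this resolves tornado 6.4.0, pillow 10.3.0, etc.
    "${HOME}/moonraker-env/bin/pip" install -r scripts/moonraker-requirements.txt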


@@ -0,0 +1,2 @@
msgspec>=0.18.4 ; python_version>='3.8'
uvloop>=0.17.0

80
scripts/pdm_build_dist.py Normal file

@@ -0,0 +1,80 @@
# Wheel Setup Script for generating metadata
#
# Copyright (C) 2023 Eric Callahan <arksine.code@gmail.com>
#
# This file may be distributed under the terms of the GNU GPLv3 license
from __future__ import annotations
import pathlib
import subprocess
import shlex
import json
import shutil
from datetime import datetime, timezone
from typing import Dict, Any, TYPE_CHECKING

if TYPE_CHECKING:
    from pdm.backend.hooks.base import Context

__package_name__ = "moonraker"
__dependencies__ = "scripts/system-dependencies.json"

def _run_git_command(cmd: str) -> str:
    prog = shlex.split(cmd)
    process = subprocess.Popen(
        prog, stdout=subprocess.PIPE, stderr=subprocess.PIPE
    )
    ret, err = process.communicate()
    retcode = process.wait()
    if retcode == 0:
        return ret.strip().decode()
    return ""

def get_commit_sha(source_path: pathlib.Path) -> str:
    cmd = f"git -C {source_path} rev-parse HEAD"
    return _run_git_command(cmd)

def retrieve_git_version(source_path: pathlib.Path) -> str:
    cmd = f"git -C {source_path} describe --always --tags --long --dirty"
    return _run_git_command(cmd)

def pdm_build_initialize(context: Context) -> None:
    context.ensure_build_dir()
    build_ver: str = context.config.metadata['version']
    proj_name: str = context.config.metadata['name']
    urls: Dict[str, str] = context.config.metadata['urls']
    build_dir = pathlib.Path(context.build_dir)
    rel_dpath = f"{__package_name__}-{build_ver}.data/data/share/{proj_name}"
    data_path = build_dir.joinpath(rel_dpath)
    pkg_path = build_dir.joinpath(__package_name__)
    build_time = datetime.now(timezone.utc)
    release_info: Dict[str, Any] = {
        "project_name": proj_name,
        "package_name": __package_name__,
        "urls": {key.lower(): val for key, val in urls.items()},
        "package_version": build_ver,
        "git_version": retrieve_git_version(context.root),
        "commit_sha": get_commit_sha(context.root),
        "build_time": datetime.isoformat(build_time, timespec="seconds")
    }
    if __dependencies__:
        deps = pathlib.Path(context.root).joinpath(__dependencies__)
        if deps.is_file():
            dep_info: Dict[str, Any] = json.loads(deps.read_bytes())
            release_info["system_dependencies"] = dep_info
    # Write the release info to both the package and the data path
    rinfo_data = json.dumps(release_info, indent=4)
    data_path.mkdir(parents=True, exist_ok=True)
    pkg_path.mkdir(parents=True, exist_ok=True)
    data_path.joinpath("release_info").write_text(rinfo_data)
    pkg_path.joinpath("release_info").write_text(rinfo_data)
    scripts_path = context.root.joinpath("scripts")
    scripts_dest = data_path.joinpath("scripts")
    scripts_dest.mkdir()
    for item in scripts_path.iterdir():
        if item.name == "__pycache__":
            continue
        if item.is_dir():
            shutil.copytree(str(item), str(scripts_dest.joinpath(item.name)))
        else:
            shutil.copy2(str(item), str(scripts_dest))
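
Assuming PDM is available, a build from a git checkout exercises the hook above, embedding release_info into both the package and the shared data directory:

    # Build sdist and wheel; pdm_build_initialize injects release_info
    pdm build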


@@ -0,0 +1,55 @@
#!/bin/bash
# LMDB Database restore utility
DATABASE_PATH="${HOME}/printer_data/database"
MOONRAKER_ENV="${HOME}/moonraker-env"
INPUT_FILE="${HOME}/database.backup"

print_help()
{
    echo "Moonraker Database Restore Utility"
    echo
    echo "usage: restore-database.sh [-h] [-e <python env path>] [-d <database path>] [-i <input file>]"
    echo
    echo "optional arguments:"
    echo "  -h                  show this message"
    echo "  -e <env path>       Moonraker Python Environment"
    echo "  -d <database path>  Moonraker LMDB database path to restore to"
    echo "  -i <input file>     backup file to restore from"
    exit 0
}

# Parse command line arguments
while getopts "he:d:i:" arg; do
    case $arg in
        h) print_help;;
        e) MOONRAKER_ENV=$OPTARG;;
        d) DATABASE_PATH=$OPTARG;;
        i) INPUT_FILE=$OPTARG;;
    esac
done

PYTHON_BIN="${MOONRAKER_ENV}/bin/python"
DB_TOOL="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/dbtool.py"

if [ ! -f "$PYTHON_BIN" ]; then
    echo "No Python binary found at '${PYTHON_BIN}'"
    exit 1
fi

if [ ! -d "$DATABASE_PATH" ]; then
    echo "No database folder found at '${DATABASE_PATH}'"
    exit 1
fi

if [ ! -f "$INPUT_FILE" ]; then
    echo "No Database Backup File found at '${INPUT_FILE}'"
    exit 1
fi

if [ ! -f "$DB_TOOL" ]; then
    echo "Unable to locate dbtool.py at '${DB_TOOL}'"
    exit 1
fi

"${PYTHON_BIN}" "${DB_TOOL}" restore "${DATABASE_PATH}" "${INPUT_FILE}"
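
Restoring is symmetric with the backup flow; assuming default paths:

    # Restore the default database from a backup file
    ./restore-database.sh -i "${HOME}/database.backup"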


@@ -30,6 +30,8 @@ add_polkit_legacy_rules()
    ACTIONS="${ACTIONS};org.freedesktop.login1.power-off-multiple-sessions"
    ACTIONS="${ACTIONS};org.freedesktop.login1.reboot"
    ACTIONS="${ACTIONS};org.freedesktop.login1.reboot-multiple-sessions"
    ACTIONS="${ACTIONS};org.freedesktop.login1.halt"
    ACTIONS="${ACTIONS};org.freedesktop.login1.halt-multiple-sessions"
    ACTIONS="${ACTIONS};org.freedesktop.packagekit.*"
    sudo /bin/sh -c "cat > ${RULE_FILE}" << EOF
[moonraker permissions]
@@ -72,6 +74,8 @@ polkit.addRule(function(action, subject) {
        action.id == "org.freedesktop.login1.power-off-multiple-sessions" ||
        action.id == "org.freedesktop.login1.reboot" ||
        action.id == "org.freedesktop.login1.reboot-multiple-sessions" ||
        action.id == "org.freedesktop.login1.halt" ||
        action.id == "org.freedesktop.login1.halt-multiple-sessions" ||
        action.id.startsWith("org.freedesktop.packagekit.")) &&
        subject.user == "$USER") {
        // Only allow processes with the "moonraker-admin" supplementary group


@@ -0,0 +1,13 @@
{
    "debian": [
        "python3-virtualenv",
        "python3-dev",
        "libopenjp2-7",
        "libsodium-dev",
        "zlib1g-dev",
        "libjpeg-dev",
        "packagekit",
        "wireless-tools",
        "curl"
    ]
}


@@ -10,9 +10,9 @@ import shlex
import tempfile
import subprocess
from typing import Iterator, Dict, AsyncIterator, Any
from moonraker import Server
from eventloop import EventLoop
import utils
from moonraker.server import Server
from moonraker.eventloop import EventLoop
from moonraker import utils
import dbtool
from fixtures import KlippyProcess, HttpClient, WebsocketClient


@@ -5,10 +5,10 @@ import hashlib
import confighelper
import shutil
import time
from confighelper import ConfigError
from moonraker import Server
from utils import ServerError
from components import gpio
from moonraker.confighelper import ConfigError
from moonraker.server import Server
from moonraker.utils import ServerError
from moonraker.components import gpio
from mocks import MockGpiod
from typing import TYPE_CHECKING, Dict
if TYPE_CHECKING:


@@ -5,8 +5,8 @@ import pytest_asyncio
import asyncio
import copy
from inspect import isawaitable
from moonraker import Server
from utils import ServerError
from moonraker.server import Server
from moonraker.utils import ServerError
from typing import TYPE_CHECKING, AsyncIterator, Dict, Any, Iterator
if TYPE_CHECKING:


@@ -3,12 +3,12 @@ import pytest
import asyncio
import pathlib
from typing import TYPE_CHECKING, Dict
from moonraker import ServerError
from klippy_connection import KlippyRequest
from moonraker.server import ServerError
from moonraker.klippy_connection import KlippyRequest
from mocks import MockReader, MockWriter
if TYPE_CHECKING:
    from moonraker import Server
    from server import Server
    from conftest import KlippyProcess
@pytest.mark.usefixtures("klippy")
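
With the imports now package-qualified, the suite can run from the repository root without sys.path manipulation; assuming pytest and pytest-asyncio are installed in the environment:

    # Run the test suite against the installed moonraker package
    "${HOME}/moonraker-env/bin/python" -m pytest tests -q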

Some files were not shown because too many files have changed in this diff.