* Migrate build system to uv
Closes #3787, and is a step towards #3081 and #4022
This change breaks our PyOxidizer bundling process. While we probably
could update it to work with the new venvs & lockfile, my intention
is to use this as a base to try out a uv-based packager/installer.
Some notes about the changes:
- Use uv for python download + venv installation
- Drop python/requirements* in favour of pyproject files / uv.lock
- Bumped to latest Python 3.9 version. The move to 3.13 should be
a fairly trivial change when we're ready.
- Dropped the old write_wheel.py in favour of uv/hatchling. This has
the unfortunate side-effect of dropping leading zeros in our wheels,
which we could try to hack around in the future.
- Switch to Qt 6.7 for the dev repo, as it's the first PyQt version
with a Linux/ARM WebEngine wheel.
- Unified our macOS deployment target with minimum required for ARM.
- Dropped unused fluent python files
- Dropped unused python license generation
- Dropped helpers to run under Qt 5, as our wheels were already
requiring Qt 6 to install.
* Build action to create universal uv binary
* Drop some PyOxidizer-related files
* Use Windows ARM64 cargo/node binaries during build
We can't provide ARM64 wheels to users yet due to #4079, but we can
at least speed up the build.
The rustls -> native-tls change on Windows is because ring requires
clang to compile for ARM64, and I figured it's best to keep our Windows
deps consistent. We already built the wheels with native-tls.
* Make libankihelper a universal library
We were shipping a single arch library in a purelib, leading to
breakages when running on a different platform.
* Use Python wheel for mpv/lame on Windows/Mac
This is convenient, but suboptimal on a Mac at the moment. The first
run of mpv will take a number of seconds for security checks to run,
and our mpv code ends up timing out, repeating the process each time.
Our installer stub will need to invoke mpv once first to get it validated.
We could address this by distributing the audio with the installer/stub,
or perhaps by putting the binaries in a .pkg file that's notarized+stapled
and then included in the wheel.
* Add some helper scripts to build a fully-locked wheel
* Initial macOS launcher prototype
* Add a hidden env var to preload our libs and audio helpers on macOS
* qt/bundle -> qt/launcher
- remove more of the old bundling code
- handle app icon
* Fat binary, notarization & dmg
* Publish wheels on testpypi for testing
* Use our Python pin for the launcher too
* Python cleanups
* Extend launcher to other platforms + more
- Switch to Qt 6.8 for repo default, as 6.7 depends on an older
libwebp/tiff which is unavailable on newer installs
- Drop tools/mac-x86, as we no longer need to test against Qt 5
- Add flags to cross compile wheels on Mac and Linux
- Bump glibc target to 2_36, building on Debian Stable
- Increase mpv timeout on macOS to allow for initial gatekeeper checks
- Ship both arm64 and amd64 uv on Linux, with a bash stub to pick
the appropriate arch.
* Fix pylint on Linux
* Fix failure to run from /usr/local/bin
* Remove remaining pyoxidizer refs, and clean up duplicate release folder
* Rust dep updates
- Rust 1.87 for now (1.88 due out in around a week)
- Nom looks involved, so I left it for now
- prost-reflect depends on a new prost version that got yanked
* Python 3.13 + dep updates
Updated protoc binaries + added a helper in order to try to fix the build breakage.
Ended up being due to an AI-generated update to pip-system-certs that
was not reviewed carefully enough:
https://gitlab.com/alelec/pip-system-certs/-/issues/36
The updated mypy/black needed some tweaks to our files.
* Windows compilation fixes
* Automatically run Anki after installing on Windows
* Touch pyproject.toml upon install, so we check for updates
* Update Python deps
- urllib3 for CVE
- pip-system-certs got fixed
- markdown/pytest also updated
* chore: add myself to CONTRIBUTORS file
* refactor: use newer type hints for Union/Optional
* refactor: fix deprecated type annotations
use collections.abc rather than typing
* refactor: use lowercase type annotations
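To illustrate what the three type-hint refactors above amount to (a minimal sketch; the function itself is hypothetical, not from the codebase):
```python
from __future__ import annotations

from collections.abc import Sequence  # instead of typing.Sequence


# Before: Optional[str], Union[...], List[str] etc. from typing
# After: the PEP 604 / PEP 585 / collections.abc spellings below
def find_notes(query: str | None, note_ids: Sequence[int]) -> list[str]:
    """Illustrative signature only."""
    if query is None:
        return []
    return [f"{nid}:{query}" for nid in note_ids]
```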
* style: reformat with black
* refactor: remove unused imports
* refactor: add missing imports for type hints
* fixup! refactor: use newer type hints for Union/Optional
* fix: add missing imports for type annotations
* fixup! refactor: use newer type hints for Union/Optional
* fixup! style: reformat with black
* refactor: fix remaining imports re: type hints
* Update JS deps
* Update semver-compat Rust deps
* Update some semver-incompat Rust deps
- hyper/axum held back because reqwest is not ready
- rusqlite held back due to an incompatible burn-rs version
- wiremock held back due to compile issue
* pylint wants changes to our _rsbridge.pyi
* Update Python deps
Also solves a security warning in orjson
Reformat with latest black
Ideally this would have been in beta 6 :-) No add-ons appear to be
using customstudy.py/taglimit.py though, so it should hopefully not be
disruptive.
In the earlier custom study changes, we didn't get around to addressing
issue #1136. Now instead of trying to determine the maximum increase
to allow (which doesn't work correctly with nested decks), we just
present the total available to the user again, and let them decide. There's
plenty of room for improvement here still, but further work here might
be better done once we look into decoupling deck limits from deck presets.
Tags and available cards are fetched prior to showing the dialog now,
and will show a progress dialog if things take a while.
Tags are stored in an aux var now, so they don't inflate the deck
object size.
* Add forget prompt with options
- Restore original position
- Reset reps and lapses
* Restore position when resetting for export
* Add config context to avoid passing keys
* Add routine to fetch defaults; use method-specific enum (dae)
* Keep original position by default (dae)
* Fix code completion for forget dialog (dae)
Needs to be a symbolic link to the generated file
* Use submodule imports in aqt
* Use submodule imports in pylib
* More submodule imports in pylib
These required removing some direct imports to get rid of import cycles.
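The pattern looks roughly like this (a sketch; the specific modules changed are in the diffs, not here):
```python
# Before: a direct import that can participate in an import cycle
# from anki.collection import Collection

# After: import the submodule and reference members through it, so the
# attribute is only looked up once both modules have finished importing
import anki.collection


def open_collection(path: str) -> "anki.collection.Collection":
    return anki.collection.Collection(path)
```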
* Implement custom study on backend
* Switch frontend to backend custom study
* Skip typecheck for new pb classes
* Build tag search string on backend
Also fixes escaping of special characters in tag names.
* `cram.cards` -> `cram.card_limit`
* Assign more meaningful names in `TagLimit`
* Broaden rustfmt glob
* Use `invalid_input()` helper
* Assign `FilteredDeckForUpdate` to temp var
* Implement `SearchBuilder`
* Rewrite `custom_study()` with `SearchBuilder`
* Replace match macros with `SearchBuilder`
* Remove `into_nodes_list` & `concatenate_searches`
* PEP8 dbproxy.py
* PEP8 errors.py
* PEP8 httpclient.py
* PEP8 lang.py
* PEP8 latex.py
* Add decorator to deprecate keywords
* Make replacement for deprecated attribute optional
* Use new helper `_print_replacement_warning()`
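The actual helpers aren't reproduced here; the following is only a minimal sketch of the general pattern (decorator and function names are hypothetical):
```python
import functools
import warnings
from typing import Any, Callable


def deprecated_keywords(**replacements: str) -> Callable:
    """Map old keyword argument names to new ones, warning on use."""

    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            for old, new in replacements.items():
                if old in kwargs:
                    warnings.warn(
                        f"'{old}' is deprecated; use '{new}' instead",
                        DeprecationWarning,
                        stacklevel=2,  # point at the caller, not this wrapper
                    )
                    kwargs[new] = kwargs.pop(old)
            return func(*args, **kwargs)

        return wrapper

    return decorator


@deprecated_keywords(did="deck_id")
def set_deck(card_ids: list[int], deck_id: int) -> None: ...
```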
* PEP8 media.py
* PEP8 rsbackend.py
* PEP8 sound.py
* PEP8 stdmodels.py
* PEP8 storage.py
* PEP8 sync.py
* PEP8 tags.py
* PEP8 template.py
* PEP8 types.py
* Fix DeprecatedNamesMixinForModule
The class methods need to be overridden with instance methods, so every
module has its own dicts.
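A rough sketch of what that means in practice (structure is an approximation, not the actual implementation): the dicts live on a per-module instance rather than on the shared class, so modules no longer clobber each other's registrations.
```python
from __future__ import annotations

import warnings
from typing import Any


class DeprecatedNamesMixinForModule:
    def __init__(self, module_globals: dict[str, Any]) -> None:
        self._module_globals = module_globals
        # instance-level dict: one per module, not shared across the class
        self._deprecated_names: dict[str, str] = {}

    def register_deprecated_aliases(self, **aliases: str) -> None:
        self._deprecated_names.update(aliases)

    def __getattr__(self, name: str) -> Any:
        if name in self._deprecated_names:
            new_name = self._deprecated_names[name]
            warnings.warn(
                f"{name} is deprecated; use {new_name}",
                DeprecationWarning,
                stacklevel=2,
            )
            return self._module_globals[new_name]
        raise AttributeError(name)


# In a module that wants deprecated aliases (hypothetical usage):
# _deprecated = DeprecatedNamesMixinForModule(globals())
# _deprecated.register_deprecated_aliases(oldName="new_name")
# __getattr__ = _deprecated.__getattr__
```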
* Use `# pylint: disable=invalid-name` instead of id
* PEP8 utils.py
* Only decorate `__getattr__` with `@no_type_check`
* Fix mypy issue with snakecase
Importing it from `anki._vendor` raises attribute errors.
* Format
* Remove inheritance of DeprecatedNamesMixin
There's almost no shared code now and overriding classmethods with
instance methods raises mypy issues.
* Fix traceback frames of deprecation warnings
* remove fn/TimedLog (dae)
Neither Anki nor add-ons appear to have been using it
* fix some issues with stringcase use (dae)
- the wheel was depending on the PyPI version instead of our vendored
version
- _vendor:stringcase should not have been listed in the anki py_library.
We already include the sources in py_srcs, and need to refer to them
directly. By listing _vendor:stringcase as well, we were making a
top-level stringcase library available, which would have only worked for
distributing because the wheel definition was also incorrect.
- mypy errors are what caused me to mistakenly add the above - they
were because the type: ignore at the top of stringcase.py was causing
mypy to completely ignore the file, so it was not aware of any attributes
it contained.
This adds Python 3.9 and 3.10 typing syntax to files that import
annotations from __future__. Python 3.9 should be able to cope with
the 3.10 syntax, but Python 3.8 will no longer work.
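For instance (a minimal, hypothetical module, not one from the codebase):
```python
from __future__ import annotations


def first_or_none(items: list[int | str]) -> int | str | None:
    # With the future import, annotations stay strings at runtime, so
    # Python 3.9 never has to evaluate the 3.10-only `int | str` syntax.
    return items[0] if items else None
```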
On Windows/Mac, install the latest Python 3.9 version from python.org.
There are currently no orjson wheels for Python 3.10 on Windows/Mac,
which will break the build unless you have Rust installed separately.
On Linux, modern distros should have Python 3.9 available already. If
you're on an older distro, you'll need to build Python from source first.
In order to split backend.proto into a more manageable size, the protobuf
handling needed to be updated. This took more time than I would have
liked, as each language handles protobuf differently:
- The Python Protobuf code ignores "package" directives, and relies
solely on how the files are laid out on disk. While it would have been
nice to keep the generated files in a private subpackage, Protobuf gets
confused if the files are located in a location that does not match
their original .proto layout (the path-based imports protoc generates are
sketched after this list), so the old approach of storing them in
_backend/ will not work. They now clutter up pylib/anki instead. I'm
rather annoyed by that, but the alternatives seem to be adding an extra
level to the Protobuf path (making the other languages suffer), or trying
to hack around the issue by munging sys.modules.
- Protobufjs fails to expose packages if they don't start with a capital
letter, despite the fact that lowercase packages are the norm in most
languages :-( This required a patch to fix.
- Rust was the easiest, as Prost is relatively straightforward compared
to Google's tools.
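To make the first point above concrete (file names are purely illustrative, not the actual split):
```python
# Given proto/anki/decks.proto importing proto/anki/collection.proto,
# protoc's Python output in decks_pb2.py contains a path-based import
# along these lines:
#
#     from anki import collection_pb2 as anki_dot_collection__pb2
#
# which only resolves if collection_pb2.py really sits at
# pylib/anki/collection_pb2.py, i.e. mirroring the .proto layout on disk.
```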
The Protobuf files are now stored in /proto/anki, with a separate package
for each file. I've split backend.proto into a few files as a test, but
the majority of that work is still to come.
The Python Protobuf building is a bit of a hack at the moment, hard-coding
"proto" as the top level folder, but it seems to get the job done for now.
Also changed the workspace name, as a number of Bazel repos seem to be
moving away from the more awkward reverse DNS naming style.
- make sure we set flag in changes when config var changed
- move current deck get/set into backend
- set_config() now returns a bool indicating whether a change was
made, so other operations can be gated on it (see the sketch after
this list)
- active decks generation is deferred until sched.reset()
- use strum to generate an iterator for the protobuf enum so we don't
forget to add new labels if extending in the future
- no add-ons appear to be using dynOrderLabels(), so it has been removed
@RumovZ perhaps a similar approach might work for listing the available
browser columns as well?
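A minimal sketch of gating follow-up work on the new boolean return (the wrapper function is hypothetical; set_config()'s behaviour and sched.reset() are taken from the notes above):
```python
def set_and_refresh(col, key: str, value) -> None:
    # set_config() reports whether the stored value actually changed;
    # skip the (potentially expensive) follow-up when it did not.
    if col.set_config(key, value):
        col.sched.reset()  # e.g. regenerate active decks, per the note above
```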
- Filtered deck creation now happens as an atomic operation, and is
undoable.
- The logic for initial search text, normalizing searches and so on
has been pushed into the backend.
- Use protobuf to pass the filtered deck to the updated dialog, so
we don't need to deal with untyped JSON.
- Change the "revise your search?" prompt to be a simple info box -
user has access to cancel and build buttons, and doesn't need a separate
prompt. Tweak the wording so the 'show excluded' button should be more
obvious.
- Filtered decks have a time appended to them instead of a number,
primarily because it's easier to implement. No objections going back to
the old behaviour if someone wants to contribute a clean patch.
The standard de-duplication will happen if two decks are created in the
same minute with the same name.
- Tweak the default sort order, and start with two searches. The UI
will still hide the second search by default, but by starting with two,
the frontend doesn't need logic for creating the starting text.
- Search errors now have their own error type, instead of using
InvalidInput, as that was intended mainly for bad API calls. The markdown
conversion is done when the error is converted from the backend, allowing
errors to be printed as a string without any special handling by the calling
code.
TODO: when building a new filtered deck, update_active() is clobbering
the undo log when the overview is refreshed
- QueueConfig is only used by the scheduler
- DeckConfig was being used in places that Config should have been used
- Add "Dict" to the name so that the bare name is free for use with a
stronger type.
- Introduced a new transact() method that wraps the return value
in a separate struct that describes the changes that were made.
- Changes are now gathered from the undo log, so we don't need to
guess at what was changed - eg if update_note() is called with identical
note contents, no changes are returned. Card changes will only be set
if cards were actually generated by the update_note() call, and the tag
flag will only be set if a new tag was added.
- mw.perform_op() has been updated to expect the op to return the changes,
or a structure with the changes in it, and it will use them to fire the
change hook, instead of fetching the changes from undo_status(), so there
is no risk of race conditions (a rough sketch of the pattern follows
below).
- the various calls to mw.perform_op() have been split into separate
files like card_ops.py. Aside from making the code cleaner, this works
around a rather annoying issue with mypy. Because we run it with
no_strict_optional, mypy is happy to accept an operation that returns None,
despite the type signature saying it requires changes to be returned.
Turning no_strict_optional on for the whole codebase is not practical
at the moment, but we can enable it for individual files.
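A rough sketch of the resulting call pattern (function and argument names are approximations based on the description above, not the exact API):
```python
# e.g. in a hypothetical card_ops.py
from collections.abc import Sequence


def set_card_deck(mw, card_ids: Sequence[int], deck_id: int) -> None:
    # The op itself returns the changes gathered from the undo log;
    # perform_op() fires the change hook with them, rather than calling
    # undo_status() afterwards and risking a race.
    mw.perform_op(lambda: mw.col.set_deck(card_ids, deck_id))
```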
Still todo:
- The cursor keeps moving back to the start of a field when typing -
we need to ignore the refresh hook when we are the initiator.
- The busy cursor icon should probably be delayed a few hundred ms.
- Still need to think about a nicer way of handling saveNow()
- op_made_changes(), op_affects_study_queue() might be better embedded
as properties in the object instead