* Migrate build system to uv
Closes #3787, and is a step towards #3081 and #4022
This change breaks our PyOxidizer bundling process. While we probably
could update it to work with the new venvs & lockfile, my intention
is to use this as a base to try out a uv-based packager/installer.
Some notes about the changes:
- Use uv for python download + venv installation
- Drop python/requirements* in favour of pyproject files / uv.lock
- Bumped to latest Python 3.9 version. The move to 3.13 should be
a fairly trivial change when we're ready.
- Dropped the old write_wheel.py in favour of uv/hatchling. This has
the unfortunate side-effect of dropping leading zeros in our wheels,
which we could try to hack around in the future.
- Switch to Qt 6.7 for the dev repo, as it's the first PyQt version
with a Linux/ARM WebEngine wheel.
- Unified our macOS deployment target with the minimum required for ARM.
- Dropped unused fluent python files
- Dropped unused python license generation
- Dropped helpers to run under Qt 5, as our wheels were already
requiring Qt 6 to install.
* Build action to create universal uv binary
* Drop some PyOxidizer-related files
* Use Windows ARM64 cargo/node binaries during build
We can't provide ARM64 wheels to users yet due to #4079, but we can
at least speed up the build.
The rustls -> native-tls change on Windows is because ring requires
clang to compile for ARM64, and I figured it's best to keep our Windows
deps consistent. We already built the wheels with native-tls.
* Make libankihelper a universal library
We were shipping a single-arch library in a purelib, leading to
breakages when running on a different platform.
* Use Python wheel for mpv/lame on Windows/Mac
This is convenient, but suboptimal on a Mac at the moment. The first
run of mpv will take a number of seconds for security checks to run,
and our mpv code ends up timing out, repeating the process each time.
Our installer stub will need to invoke mpv once first to get it validated.
We could address this by distributing the audio with the installer/stub,
or perhaps by putting the binaries in a .pkg file that's notarized+stapled
and then included in the wheel.
* Add some helper scripts to build a fully-locked wheel
* Initial macOS launcher prototype
* Add a hidden env var to preload our libs and audio helpers on macOS
* qt/bundle -> qt/launcher
- remove more of the old bundling code
- handle app icon
* Fat binary, notarization & dmg
* Publish wheels on testpypi for testing
* Use our Python pin for the launcher too
* Python cleanups
* Extend launcher to other platforms + more
- Switch to Qt 6.8 for the repo default, as 6.7 depends on an older
libwebp/tiff which is unavailable on newer installs
- Drop tools/mac-x86, as we no longer need to test against Qt 5
- Add flags to cross compile wheels on Mac and Linux
- Bump glibc target to 2_36, building on Debian Stable
- Increase mpv timeout on macOS to allow for initial gatekeeper checks
- Ship both arm64 and amd64 uv on Linux, with a bash stub to pick
the appropriate arch.
* Fix pylint on Linux
* Fix failure to run from /usr/local/bin
* Remove remaining pyoxidizer refs, and clean up duplicate release folder
* Rust dep updates
- Rust 1.87 for now (1.88 due out in around a week)
- Nom looks involved, so I left it for now
- prost-reflect depends on a new prost version that got yanked
* Python 3.13 + dep updates
Updated protoc binaries and added a helper in order to try to fix the build breakage.
Ended up being due to an AI-generated update to pip-system-certs that
was not reviewed carefully enough:
https://gitlab.com/alelec/pip-system-certs/-/issues/36
The updated mypy/black needed some tweaks to our files.
* Windows compilation fixes
* Automatically run Anki after installing on Windows
* Touch pyproject.toml upon install, so we check for updates
* Update Python deps
- urllib3 for CVE
- pip-system-certs got fixed
- markdown/pytest also updated
Make sure to run tools/install-n2 after updating to this commit.
n2 has merged in some changes we were previously hosting in a fork,
but the flag parsing was altered.
* Update to latest Node LTS
* Add sveltekit
* Split tslib into separate @generated and @tslib components
SvelteKit's path aliases don't support multiple locations, so our old
approach of using @tslib to refer to both ts/lib and out/ts/lib will no
longer work. Instead, all generated sources and their includes are
placed in a separate out/ts/generated folder and imported via
@generated. This also allows us to generate .ts files, instead of needing
to output separate .d.ts and .js files.
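As a rough sketch of what this looks like in the SvelteKit config (the
file name and paths here are illustrative, not necessarily our exact
layout):

    // svelte.config.js -- sketch only; each alias maps to one location
    /** @type {import("@sveltejs/kit").Config} */
    const config = {
        kit: {
            alias: {
                // hand-written sources
                "@tslib": "./lib",
                // generated sources and their includes
                "@generated": "../out/ts/generated",
            },
        },
    };
    export default config;

Since each alias can only point at one location, splitting the generated
sources out under their own prefix sidesteps the dual-location problem.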
* Switch package.json to module type
* Avoid usage of baseUrl
Incompatible with SvelteKit
* Move sass into ts; use relative links
SvelteKit's default sass support doesn't allow overriding loadPaths
* jest -> vitest, graphs example working with yarn dev
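A minimal vitest config along these lines might look as follows (a
sketch under assumed plugins and paths, not our actual file):

    // vitest.config.ts -- sketch only
    import { defineConfig } from "vitest/config";
    import { svelte } from "@sveltejs/vite-plugin-svelte";

    export default defineConfig({
        plugins: [svelte()],
        test: {
            // jsdom supplies the DOM APIs that component tests expect
            environment: "jsdom",
            include: ["**/*.test.ts"],
        },
    });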
* most pages working in dev mode
* Some fixes after rebasing
* Fix/silence some svelte-check errors
* Get image-occlusion working with Fabric types
* Post-rebase lock changes
* Editor is now checked
* SvelteKit build integrated into ninja
* Use the new SvelteKit entrypoint for pages like congrats/deck options/etc
* Run eslint once for ts/**; fix some tests
* Fix a bunch of issues introduced when rebasing over latest main
* Run eslint fix
* Fix remaining eslint+pylint issues; tests now all pass
* Fix some issues with a clean build
* Latest bufbuild no longer requires @__PURE__ hack
* Add a few missed dependencies
* Add yarn.bat to fix Windows build
* Fix pages failing to show when ANKI_API_PORT not defined
* Fix svelte-check and vitest on Windows
* Set node path in ./yarn
* Move svelte-kit output to ts/.svelte-kit
Sadly, I couldn't figure out a way to store it in out/ if out/ is
a symlink, as it breaks module resolution when SvelteKit is run.
* Allow HMR inside Anki
* Skip SvelteKit build when HMR is defined
* Fix some post-rebase issues
I should have done a normal merge instead.
* Simplify the offline build
The two environment variables OFFLINE_BUILD and NO_VENV jointly provide
the ability to build Anki fully offline. This commit boils them down
into just one, namely OFFLINE_BUILD.
The rationale: first, OFFLINE_BUILD implies the use of a custom,
non-networked Python environment. Second, building Anki with a custom
Python environment in a networked setting is a use case that we
currently do not support.
Developers in need of such a solution may want to give containerized
development environments a try. Users could also look into building
Anki fully offline instead.
* Add documentation for offline builds.
* Add support for offline generation of Sphinx documentation.
Control whether Sphinx dependencies may be installed over the network
through the OFFLINE_BUILD environment variable.
* Add documentation for offline generation of Sphinx documentation.
* CONTRIBUTORS: Add myself to the contributors list
* Add support for offline builds
Downloading files during build time is a non-starter for FreeBSD ports
(and presumably for other *BSD ports and some Linux distros as well).
In order to still be able to build Anki successfully, two new
environment variables have been added that can be set accordingly:
* NO_VENV: If set, the Python system environment is used instead of
a venv. This is necessary if there are no usable Python wheels for a
platform, e.g. PyQt6.
* OFFLINE_BUILD: If set, the git repository synchronization (translation
files, build hash, etc.) is skipped.
To successfully build Anki offline, the following conditions must be met:
1. All required dependencies (node, Python, rust, yarn, etc.) must be
present in the build environment.
2. The offline repositories for the translation files must be
copied/linked to ftl/qt-repo and ftl/core-repo.
3. The Python pseudo venv needs to be set up:
$ mkdir out/pyenv/bin
$ ln -s /path/to/python out/pyenv/bin/python
$ ln -s /path/to/protoc-gen-mypy out/pyenv/bin/protoc-gen-mypy
4. Create the offline cache for yarn and point its YARN_CACHE_FOLDER
environment variable at it:
$ export YARN_CACHE_FOLDER=/path/to/the/yarn/cache
$ /path/to/yarn install --ignore-scripts
5. Build Anki:
$ /path/to/cargo build --package runner --release --verbose --verbose
$ OFFLINE_BUILD=1 \
NO_VENV=1 \
${WRKSRC}/out/rust/release/runner build wheels
- Fixes an issue where tasks would continue to appear active for a while
after they had finished on Unix platforms
- The latest n2 now behaves the same way as ninja when substituting
variables, so we no longer need to do the substitution ourselves.
Combined with the previous changes, this allows the mobile clients to
build the web components without having to set up a Python environment,
and should speed up AnkiDroid CI.
- Don't bake the modified PATH into build.ninja; instead, set it at run time.
- Add win audio in build.rs, as doing it in run.bat results in it being
added again each time you run.
I'd introduced maybe_reconfigure_build() after running into issues where
configure was not being invoked, but have discovered why that was happening:
the out folder path must be identical to the canonical path listed in
build.ninja, which it wasn't when a symlink was used. With this change,
we avoid having to invoke ninja twice, and get visibility into the
configure step.
This also makes rsbridge only depend on out/env, which prevents it from
being rebuilt when any reconfigure happens.
* eslint-plugin-svelte3 -> eslint-plugin-svelte
The former is deprecated, and blocks an update to Svelte 4.
Also drop unused svelte2tsx and types package.
* Drop unused symbols code for now
It may be added back in the future, but for now dropping it will save
200k from our editor bundle.
* Remove sass and caniuse-lite pins
The latter no longer seems to be required. The former was added to
suppress deprecation warnings when compiling the old Bootstrap version
we have pinned. Those are hidden by the build tool now (though we really
need to address them at some point: https://github.com/ankitects/anki/issues/1385).
Also removed the unused files section.
* Prevent proto compile from looking in node_modules/@types/sass
When deps are updated, tsc aborts because @types/sass is a dummy package
without an index.d.ts file.
* Filter Svelte warnings out of ./run
* Update to latest Bootstrap
This fixes the deprecation warnings we were getting during build:
bootstrap doesn't accept runtime CSS variables being set in Sass, as
it wants to apply transforms to the colors.
Closes #1385
* Start port to Svelte 4
- svelte-check tests have a bunch of failures; ./run works
- Svelte no longer exposes internals, so we can't use create_in_transition
- Also update esbuild and related components like esbuild-svelte
* Fix test failures
Had to add some more a11y warning ignores - have added
https://github.com/ankitects/anki/issues/2564 to address that in the
future.
* Remove some dependency pins
+ Remove sass, as we don't need it directly
* Bump remaining JS deps that have a current semver
* Upgrade dprint/license-checker/marked
The new helper method avoids marked printing deprecation warnings to
the console.
Also remove unused lodash/long types, and move lodash-es to devdeps
* Upgrade eslint and fluent packages
* Update @floating-ui/dom
The only dependencies remaining are currently blocked:
- Jest 29 gives some error about require vs import; may not be worth
investigating if we switch to Deno for the tests
- CodeMirror 6 is a big API change and will need work.
* Roll dprint back to an earlier version
GitHub dropped support for Ubuntu 18 runners, causing dprint's artifacts
to require a glibc version greater than what Anki CI currently has.
Also fix minilints declaring a stamp it wasn't creating. The same
approach is necessary with archives now too, as it no longer executes
under a standard "runner run".
For now, rustls is hard-coded - we could pass the desired TLS impl in
from the ./ninja script, but the runner is not recompiled frequently
anyway.
Workspace deps were introduced in Rust 1.64. Unfortunately they don't
cover all the cases that Hakari did, but they are simpler to maintain,
and they avoid a couple of issues that Hakari had:
- It sometimes made updating dependencies harder due to the locked versions,
so you had to disable Hakari, do the updates, and then re-generate
(e.g. 943dddf28f)
- The current Hakari config was breaking AnkiDroid's build, as it was
stopping a cross-compile from functioning correctly.
Provides better visibility into what the build is currently doing.
Motivated by slow node.js downloads making the build appear stuck.
You can test this out by running ./tools/install-n2 then building
normally. Please report any problems, and run 'cargo uninstall n2' to
get back to the old behaviour. It works on Windows, but prints a new line
each second instead of redrawing the same area.
A couple of changes were required for compatibility:
- n2 doesn't resolve $variable names inside other variables, so the
resolution needs to be done by our build generator (sketched below).
- Our inputs and outputs in build.ninja need to be listed in a deterministic
order to avoid unwanted rebuilds. I've made a few other tweaks so the
build file should now be fully-deterministic.
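For illustration, the kind of nested expansion this requires (the real
logic lives in our Rust build generator; this TypeScript sketch just
shows the algorithm):

    // Expand $name / ${name} references inside variable values; n2,
    // unlike ninja, does not resolve variables nested in other variables.
    function expand(value: string, vars: Map<string, string>): string {
        return value.replace(/\$\{?(\w+)\}?/g, (_match, name: string) => {
            const inner = vars.get(name);
            // Recurse so $a -> "$b/c" -> "out/c" works; assumes no cycles.
            return inner !== undefined ? expand(inner, vars) : "";
        });
    }

    const vars = new Map([
        ["builddir", "out"],
        ["bindir", "$builddir/bin"],
    ]);
    console.log(expand("$bindir/anki", vars)); // "out/bin/anki"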
* Fix .no-reduce-motion missing from graphs spinner, and not being honored
* Begin migration from protobuf.js -> protobuf-es
Motivation:
- Protobuf-es has a nicer API: messages are represented as classes, and
fields which should exist are not marked as nullable.
- As it uses modules, only the proto messages we actually use get included
in our bundle output. Protobuf.js put everything in a namespace, which
prevented tree-shaking, and made it awkward to access inner messages.
- ./run after touching a proto file drops from about 8s to 6s on my machine. The tradeoff
is slower decoding/encoding (#2043), but that was mainly a concern for the
graphs page, and was unblocked by
37151213cd
Approach/notes:
- We generate the new protobuf-es interface in addition to existing
protobuf.js interface, so we can migrate a module at a time, starting
with the graphs module.
- rslib:proto now generates RPC methods for TS in addition to the Python
interface. The input-arg-unrolling behaviour of the Python generation is
not required here, as we declare the input arg as a PlainMessage<T>, which
marks it as requiring all fields to be provided.
- i64 is represented as bigint in protobuf-es. We were using a patch to
protobuf.js to get it to output Javascript numbers instead of long.js
types, but now that our supported browser versions support bigint, it's
probably worth biting the bullet and migrating to bigint use. Our IDs
fit comfortably within MAX_SAFE_INTEGER, but that may not hold for future
fields we add.
- Oneofs are handled differently in protobuf-es, and are going to need
some refactoring; a sketch of the new idioms follows.
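A sketch of the new idioms, using Timestamp from the @bufbuild/protobuf
runtime so it runs standalone (our generated types follow the same
pattern; the oneof names in the final comment are made up):

    import { Timestamp, type PlainMessage } from "@bufbuild/protobuf";

    // Messages are classes, and fields that should exist are not nullable.
    const ts = new Timestamp({ seconds: 1700000000n, nanos: 0 });

    // i64 fields surface as bigint rather than long.js types or numbers.
    const secs: bigint = ts.seconds;

    // An RPC input declared as PlainMessage<T> requires all fields:
    const input: PlainMessage<Timestamp> = { seconds: 0n, nanos: 0 };

    // Oneofs become a tagged union, accessed via case/value:
    //   if (state.kind.case === "newCard") { use(state.kind.value); }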
Other notable changes:
- Added a --mkdir arg to our build runner, so we can create a dir easily
during the build on Windows.
- Simplified the preference handling code, by wrapping the preferences
in an outer store, instead of a separate store for each individual
preference. This means a change to one preference will trigger a redraw
of all components that depend on the preference store, but the redrawing
is cheap after moving the data processing to Rust, and it makes the code
easier to follow (see the sketch after this list).
- Drop async(Reactive).ts in favour of more explicit handling with await
blocks/updating.
- Renamed add_inputs_to_group() -> add_dependency(), and fixed it not adding
dependencies to parent groups. Renamed add() -> add_action() for clarity.
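The preference wrapping looks roughly like this (a sketch with
illustrative names; the real code differs):

    import { writable, type Writable } from "svelte/store";

    // Illustrative shape; the real preferences come from the backend.
    interface GraphPrefs {
        browserLinksSupported: boolean;
        futureDueShowBacklog: boolean;
    }

    // One outer store wraps the whole object, instead of a store per
    // preference. A change to any preference redraws all subscribers,
    // which is cheap now that data processing happens in Rust.
    const prefs: Writable<GraphPrefs> = writable({
        browserLinksSupported: true,
        futureDueShowBacklog: true,
    });

    // Updating a single preference goes through the one store:
    prefs.update((p) => ({ ...p, futureDueShowBacklog: false }));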
* Remove a couple of unused proto imports
* Migrate card info
* Migrate congrats, image occlusion, and tag editor
+ Fix imports for multi-word proto files.
* Migrate change-notetype
* Migrate deck options
* Bump target to es2020; simplify ts lib list
Used caniuse.com to confirm that Chromium 77, iOS 14.5 and Chrome on
Android support the full es2017-es2020 feature set.
* Migrate import-csv
* Migrate i18n and fix missing output types in .js
* Migrate custom scheduling, and remove protobuf.js
To mostly maintain our old API contract, we make use of protobuf-es's
ability to convert to JSON, which follows the same format as protobuf.js
did. It doesn't cover all cases: users who were previously changing the
variant of a type will need to update their code, as assigning to a new
variant no longer automatically removes the old one, which will cause an
error when we try to convert back from JSON. But I suspect the large majority
of users are adjusting the current variant rather than creating a new one,
and this saves us having to write proxy wrappers, so it seems like a
reasonable compromise.
One other change I made at the same time was to rename value->kind for
the oneofs in our custom study protos, as 'value' was easily confused
with the 'case/value' output that protobuf-es has.
With protobuf.js codegen removed, touching a proto file and invoking
./run drops from about 8s to 6s.
This closes #2043.
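Concretely, the caveat for custom scheduling code looks something like
this (field names are illustrative, not the exact proto schema):

    // 'current' stands in for the JSON object handed to custom
    // scheduling code, now produced via protobuf-es's toJson().
    type StateJson = { normal?: unknown; filtered?: unknown };
    const current: StateJson = { normal: { review: { scheduledDays: 3 } } };

    // Adjusting the active variant keeps working as before:
    (current.normal as any).review.scheduledDays = 4;

    // Switching variants now requires removing the old key first; if
    // both remain, converting back with fromJson() raises a oneof error:
    delete current.normal;
    current.filtered = {};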
* Allow tree-shaking on protobuf types
* Display backend error messages in our ts alert()
* Make sourcemap generation opt-in for ts-run
Considerably slows down build, and not used most of the time.
* bump walkdir from 2.3.2 to 2.3.3
Build was failing on Windows due to walkdir matching '.' as any folder starting with '.' instead of as the cwd.
* change name in CONTRIBUTORS
* change name back in CONTRIBUTORS
* Don't download nodejs if NODE_BINARY is set
Some build environments, such as nixpkgs, restrict network access and
thus would prefer not to download anything at all. Setting PROTOC_BINARY
and friends already stops the build system from downloading those tools,
and the same should be true for nodejs.
* Allow setting YARN_BINARY for the build system
* Add myself to CONTRIBUTORS
As required by CI