Reviews and operations on the backend that support undoing can now be
committed immediately, so they will not be lost in the event of a crash.
This required tweaks to a few places:
- don't set collection mtime on save() unless changes were made in
Python, as otherwise we end up accidentally clearing the backend undo
queue (a sketch of this follows the list)
- autosave() is now run on every reset()
- garbage collection now runs on a timer, instead of relying on
autosave() being run periodically
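As a rough illustration of the mtime point, here is a minimal sketch
(not Anki's actual code; all names are stand-ins) of a save() that only
bumps the modtime when Python-side changes were made:

    import time

    class Collection:
        def __init__(self) -> None:
            self.mtime = 0
            self._python_side_changes = False

        def set_modified(self) -> None:
            # Called by Python code that mutates the collection directly.
            self._python_side_changes = True

        def save(self) -> None:
            if self._python_side_changes:
                # Only bump mtime for changes made in Python; bumping it
                # after a backend-managed op would clear the backend undo
                # queue.
                self.mtime = int(time.time() * 1000)
                self._python_side_changes = False
            self._commit()  # flush to disk either way

        def reset(self) -> None:
            self.autosave()  # reset() now autosaves as well

        def autosave(self) -> None:
            self.save()

        def _commit(self) -> None:
            pass  # stand-in for the SQLite commit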
- anki._backend stores the protobuf files and rsbackend.py code
- pylib modules import protobuf messages directly from the
_pb2 files, and explicitly export any that will be returned or consumed
by public pylib functions, so that calling code can import from pylib
(see the sketch after this list)
- the "rsbackend" no longer imports and re-exports protobuf messages
- pylib can just consume them directly.
- move errors to errors.py
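The resulting import layout might look roughly like this (module and
message names are illustrative, and the sketch assumes generated _pb2
files living under anki._backend):

    # anki/notes.py (illustrative)
    # Import the generated message directly from the _pb2 file, and
    # re-export it so that callers import from pylib, not anki._backend.
    from anki._backend.backend_pb2 import Note as NoteProto

    __all__ = ["NoteProto"]

    def field_count(note: NoteProto) -> int:
        # A public pylib function consuming the re-exported message type.
        return len(note.fields)

    # aqt/add-on code then imports from pylib:
    # from anki.notes import NoteProto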
Still todo:
- rsbridge
- finishing the work on rsbackend, and checking what we need to add
back to the original file location to avoid breaking add-ons
If the client's clock is behind AnkiWeb's, even by a few seconds,
we can end up with a situation where last_begin_at is updated after
the sync to a value less than the mtime we received from AnkiWeb,
causing the collection to be saved, which bumps the modtime.
Work around this by recording mtime at begin() time, and seeing if it
has changed in either direction.
Thanks to Rumo, who did the hard work looking into it:
https://forums.ankiweb.net/t/why-is-my-sync-button-blue/2078/21
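A rough sketch of the workaround (class and attribute names are
hypothetical):

    class Syncer:
        def __init__(self, col) -> None:
            self.col = col
            self.mtime_at_begin = 0

        def begin(self) -> None:
            # Record the collection mtime when the sync starts.
            self.mtime_at_begin = self.col.mtime

        def collection_changed(self) -> bool:
            # Compare for inequality rather than ordering: a clock behind
            # AnkiWeb's can produce a local timestamp *smaller* than the
            # mtime we received, which an ordered comparison would miss.
            return self.col.mtime != self.mtime_at_begin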
This code was an awful hack to provide some semblance of UI
responsiveness while executing DB statements on the main thread.
Instead, we can just run DB statements in a background thread now,
keeping the UI responsive.
We no longer need to worry about pysqlite implicitly beginning
transactions, and can be more explicit about beginning/ending
transactions.
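As an illustration of the approach (not aqt's actual task manager), DB
statements can be handed to a worker thread that owns the connection,
with results delivered via a callback:

    import queue
    import sqlite3
    import threading

    class DBWorker:
        """Runs SQL on a dedicated thread so the UI thread never blocks."""

        def __init__(self, path: str) -> None:
            self._jobs: queue.Queue = queue.Queue()
            self._thread = threading.Thread(
                target=self._run, args=(path,), daemon=True
            )
            self._thread.start()

        def execute(self, sql: str, on_done) -> None:
            # Queue the statement; on_done receives the fetched rows later.
            self._jobs.put((sql, on_done))

        def _run(self, path: str) -> None:
            # The connection is created and used only on this thread.
            db = sqlite3.connect(path)
            while True:
                sql, on_done = self._jobs.get()
                on_done(db.execute(sql).fetchall())

    # e.g. DBWorker("collection.anki2").execute(
    #     "select count(*) from notes", print)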
save() now also has a trx argument controlling whether a
transaction should be started/left open.
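A minimal sketch of the idea (the signature is illustrative beyond the
trx flag itself), with the connection opened in autocommit mode so that
transactions are fully explicit:

    import sqlite3

    # isolation_level=None disables pysqlite's implicit BEGIN handling.
    db = sqlite3.connect(":memory:", isolation_level=None)

    def save(trx: bool = True) -> None:
        # Flush any open transaction to disk.
        db.commit()
        if trx:
            # Leave a new transaction open so following statements are
            # batched until the next save().
            db.execute("begin")

    db.execute("create table notes (id integer)")
    save(trx=True)
    db.execute("insert into notes values (1)")
    save(trx=False)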
Some initial testing with orjson indicates performance varies from
slightly better than pysqlite to about 2x slower depending on the type
of query.
Performance could be improved by building the Python list in rspy
instead of sending back JSON that needs to be decoded, but it may make
more sense to rewrite the hotspots in Rust instead. More testing is
required in any case.
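For context, the decode path being measured looks roughly like this
(illustrative only; requires the third-party orjson package):

    import orjson

    def rows_from_backend(payload: bytes) -> list:
        # orjson accepts bytes directly and returns plain Python objects,
        # so each query result arrives as a list of row lists.
        return orjson.loads(payload)

    print(rows_from_backend(b'[[1, "front"], [2, "back"]]'))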