Reviews and operations on the backend that support undoing can now be
committed immediately, so they will not be lost in the event of a crash.
This required tweaks to a few places:
- don't set collection mtime on save() unless changes were made in
Python, as otherwise we end up accidentally clearing the backend undo
queue
- autosave() is now run on every reset()
- garbage collection now runs in a timer, instead of relying on
autosave() to be run periodically
- use dataclasses for the review/checkpoint undo cases, instead of the
nasty ad-hoc list structure (see the sketch after this list)
- expose backend review undo to Python, and hook it into the GUI
- redo is not currently exposed in the GUI, and the backend can only
cope with reviews done by the new scheduler at the moment
- the initial undo prototype code was bumping mtime/usn on undo, but
that was not ideal, as it broke the queue handling, which expected
the mtime to match. The original rationale for bumping mtime/usn was
to avoid problems with syncing, but various operations like removing
a revlog entry can't be synced anyway - so we just need to ensure we
clear the undo queue prior to syncing
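As a rough illustration of the dataclass change, the two undo cases
might be modelled like this (field names are guesses for the sake of
example, not the exact pylib definitions):

    from dataclasses import dataclass
    from typing import Union

    from anki.cards import Card

    @dataclass
    class ReviewUndo:
        card: Card       # the card's state before it was answered
        was_leech: bool  # whether answering marked the card as a leech

    @dataclass
    class Checkpoint:
        name: str  # label shown in the undo menu, eg "Add Note"

    # The queue can then hold a typed entry instead of an ad-hoc list
    # whose meaning depended on positional convention.
    UndoEntry = Union[ReviewUndo, Checkpoint]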
- Rework V2 upgrade so that it no longer resets cards in learning,
or empties filtered decks.
- V1 users will receive a message at the top of the deck list
encouraging them to upgrade, and they can upgrade directly from that
screen.
- The setting in the preferences screen has been removed, so users
will need to use an older Anki version if they wish to switch back to
V1.
- Prevent V2 exports with scheduling from being importable into a V1
collection - the code was previously allowing this when it shouldn't
have been.
- New collections still default to V1 at the moment.
Also add a helper to get a map of decks and deck configs, as a few
places in the codebase required it.
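A sketch of what such a helper could look like from Python (the real
helper may differ in naming and return types):

    from __future__ import annotations

    def decks_and_configs(col) -> dict[int, tuple[dict, dict]]:
        """Map each deck id to a (deck, config) pair - hypothetical."""
        configs = {conf["id"]: conf for conf in col.decks.all_config()}
        out = {}
        for deck in col.decks.all():
            # Filtered decks have no config entry; fall back to None.
            conf = configs.get(deck.get("conf"))
            out[deck["id"]] = (deck, conf)
        return out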
- SearchTerm -> SearchNode
- Operator -> Joiner; share between messages
- build_search_string() supports specifying AND/OR as a convenience
- group_searches() makes it easier to negate a group of searches
(see the sketch below)
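A sketch of how these conveniences might be used from Python (argument
names are assumptions based on the description above):

    from anki.collection import SearchNode

    # col is an open Collection. Combine searches with OR instead of
    # the default AND.
    search = col.build_search_string(
        SearchNode(deck="Japanese"),
        SearchNode(tag="verb"),
        joiner="OR",
    )

    # group_searches() parenthesises its arguments, so the combined
    # node can be negated or joined with further terms as a unit.
    group = col.group_searches(
        SearchNode(tag="verb"),
        SearchNode(tag="adjective"),
        joiner="OR",
    )
    search = col.build_search_string(SearchNode(deck="Japanese"), group)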
While implementing the overdue search, I realised it would be nice to
be able to construct a search string with OR and NOT searches without
having to construct each part individually with build_search_string().
Changes:
- Extends SearchTerm to support a text search, which will be parsed
by the backend. This allows us to do things like wrap text in a group
or NOT node (see the example after this list).
- Because SearchTerm->Node conversion can now fail with a parsing error,
it's switched over to TryFrom
- Switch concatenate_searches and replace_search_term to use SearchTerms,
so that they too don't require separate string building steps.
- Remove the unused normalize_search()
- Remove negate_search, as this is now an operation on a Node, and
users can wrap their search in SearchTerm(negated=...)
- Remove the match_any and negate args from build_search_string
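For example, with these changes an add-on could mix raw text with
structured nodes and negate the result without manual string juggling
(the text-search field name here is a guess):

    from anki.collection import SearchTerm  # location may differ

    # col is an open Collection.
    text = SearchTerm(parsable_text='deck:"My Deck" tag:leech')

    # Because the raw text is now a node, it can be wrapped in a
    # negated node directly; the backend parses and validates it.
    search = col.build_search_string(SearchTerm(negated=text))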
Having done all this work, I've just realised that perhaps the original
JSON idea was more feasible than I first thought - if we wrote it out
to a string and re-parsed it, we would be able to leverage the existing
checks that occur at parsing stage.
The progress messages are only really intended to be consumed by Anki.
If consumption by add-ons were expected, we'd be better off keeping the
wrapper, as the API for oneofs in Python is quite awkward to use.
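For reference, this is the sort of dispatch that consuming a oneof from
Python requires (message and field names are illustrative):

    # progress is a protobuf message with a oneof named "value".
    kind = progress.WhichOneof("value")
    if kind == "media_sync":
        handle_media_sync(progress.media_sync)
    elif kind == "database_check":
        handle_database_check(progress.database_check)
    # ...one branch per variant, with no help from the type checker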
While mypy can understand nested references like ConfigBool.Key.COLLAPSE_RECENT,
PyCharm doesn't understand the metaclass syntax, and shows the definitions
as invalid.
- anki._backend stores the protobuf files and rsbackend.py code
- pylib modules import protobuf messages directly from the
_pb2 files, and explicitly export any that will be returned or consumed
by public pylib functions, so that calling code can import them from
pylib (as sketched below)
- the "rsbackend" no longer imports and re-exports protobuf messages
- pylib can just consume them directly.
- move errors to errors.py
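The import/re-export pattern might look like the following in a pylib
module (module and message names are illustrative):

    # anki/decks.py
    import anki._backend.backend_pb2 as _pb

    # Re-export messages that appear in this module's public API, so
    # add-ons can write `from anki.decks import DeckTreeNode`.
    DeckTreeNode = _pb.DeckTreeNode

    def deck_tree(col) -> DeckTreeNode:
        # delegate to the backend method (signature illustrative)
        return col._backend.deck_tree()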
Still todo:
- rsbridge
- finishing the work on rsbackend, and checking what we need to add
back to the original file location to avoid breaking add-ons
- IdList could be re-used for a cids: search in the future if required.
- Embedding the message means it's easy to access from Python as
an attribute of SearchTerm (see the example below).
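For example (the field name is hypothetical):

    # Search for specific notes via the embedded id list message.
    node = SearchTerm(nids=SearchTerm.IdList(ids=[1405528404419]))
    search = col.build_search_string(node)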
- use the TimestampSecs newtype instead of raw i64s
- use FixedOffset instead of a minutes_west offset
- check localOffset each time the timing is calculated, and set it
if it's stale - even for v1.
- check for and fix missing rollover when calculating timing
- stop explicitly passing localOffset in the sync/start call
If the client's clock is behind AnkiWeb's, even by a few seconds,
we can end up with a situation where last_begin_at is updated after
the sync to a value less than the mtime we received from AnkiWeb,
causing the collection to be saved, which bumps the modtime.
Work around this by recording the mtime at begin() time, and checking
whether it has changed in either direction.
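A minimal sketch of that check (the real logic lives in the Rust sync
code; names here are illustrative):

    def collection_changed_since_begin(col, mtime_at_begin: int) -> bool:
        # Comparing for inequality in either direction avoids being
        # fooled when the server clock is ahead of the client's, which
        # could previously make the collection look newer than the
        # timestamp recorded at begin().
        return col.mod != mtime_at_begin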
Thanks to Rumo, who did the hard work looking into it:
https://forums.ankiweb.net/t/why-is-my-sync-button-blue/2078/21