Instead of using a separate undo queue, the code now defers checking for
newly-due learning cards until the answering stage, and logs the updated
cutoff time as an undoable change, so that any newly-due learning cards
won't appear instead of a new/review card that was just undone.
Queue redo now uses a similar approach to undo, instead of rebuilding the
queues.
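
A rough sketch of the cutoff approach, with illustrative names rather
than the actual types:

    struct Collection {
        // timestamp below which learning cards are considered due
        learn_cutoff: i64,
        undo_log: Vec<Change>,
    }

    enum Change {
        UpdatedLearnCutoff { previous: i64 },
        // card/revlog changes elided
    }

    impl Collection {
        /// Called when answering, instead of when building the queues.
        fn update_learn_cutoff(&mut self, now: i64) {
            let previous = self.learn_cutoff;
            self.learn_cutoff = now;
            // logging the change makes it undoable like any other mutation
            self.undo_log.push(Change::UpdatedLearnCutoff { previous });
        }

        fn undo_last_change(&mut self) {
            if let Some(Change::UpdatedLearnCutoff { previous }) = self.undo_log.pop() {
                // restoring the old cutoff keeps newly-due learning cards
                // from displacing the card that was just undone
                self.learn_cutoff = previous;
            }
        }
    }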
So, this is fun. Apparently "DeckId" is considered preferable to the
"DeckID" were were using until now, and the latest clippy will start
warning about it. We could of course disable the warning, but probably
better to bite the bullet and switch to the naming that's generally
considered best.
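
The lint in question is clippy's upper_case_acronyms; a minimal
illustration:

    // flagged by clippy::upper_case_acronyms:
    pub struct DeckID(pub i64);
    // lint-clean spelling:
    pub struct DeckId(pub i64);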
Fixes the following issue:
- some code directly modifies the database, causing modified_in_python
to be set to true
- an undoable operation is run, which calls autosave() at the end
- autosave() notices there's an undoable operation, and commits immediately
- because modified_in_python was true, col.mtime was bumped in Python
- that invalidated the undo queue, preventing the operation from being
undone
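
A minimal sketch of the invalidation check that bites here (hypothetical
names; the real code differs):

    struct UndoStep;

    struct UndoQueue {
        steps: Vec<UndoStep>,
        // collection mtime recorded when the last step was pushed
        mtime_at_last_step: i64,
    }

    impl UndoQueue {
        fn can_undo(&self, current_col_mtime: i64) -> bool {
            // if something else bumped the collection mtime (eg the Python
            // autosave path above), the stored steps no longer match the
            // collection state, so they can't be applied
            !self.steps.is_empty() && current_col_mtime == self.mtime_at_last_step
        }
    }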
- transact() now automatically clears card queues unless an op
opts out (and currently only AnswerCard does). This means there's no
risk of forgetting to clear the queues in an operation, or when
undoing/redoing (see the sketch after this list).
- CollectionOp->UndoableOp
- clear queues when redoing "answer card", instead of clearing redo
when clearing queues
- use dataclasses for the review/checkpoint undo cases, instead of the
nasty ad-hoc list structure
- expose backend review undo to Python, and hook it into GUI
- redo is not currently exposed on the GUI, and the backend can only
cope with reviews done by the new scheduler at the moment
- the initial undo prototype code was bumping mtime/usn on undo, but
that was not ideal, as it broke the queue handling, which expected
the mtime to match. The original rationale for bumping mtime/usn was
to avoid problems with syncing, but various operations like removing
a revlog entry can't be synced anyway - so we just need to ensure we
clear the undo queue prior to syncing
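
A sketch of the transact() behaviour from the first bullet above, with
approximate rather than exact names:

    struct CardQueues;

    struct Collection {
        queues: Option<CardQueues>,
    }

    enum UndoableOp {
        AnswerCard,
        UpdateDeck,
        // ...
    }

    impl UndoableOp {
        /// AnswerCard updates the queues incrementally itself, so it is
        /// currently the only op that opts out of the automatic reset.
        fn needs_queue_reset(&self) -> bool {
            !matches!(self, UndoableOp::AnswerCard)
        }
    }

    impl Collection {
        fn transact(&mut self, op: UndoableOp, func: impl FnOnce(&mut Collection)) {
            func(self);
            // clearing here (and again when undoing/redoing) means an
            // individual op can't forget to do it
            if op.needs_queue_reset() {
                self.queues = None;
            }
        }
    }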
Some initial testing with orjson indicates performance varies from
slightly better than pysqlite to about 2x slower depending on the type
of query.
Performance could be improved by building the Python list in rspy
instead of sending back json that needs to be decoded, but it may make
more sense to rewrite the hotspots in Rust instead. More testing is
required in any case.
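
As a rough illustration of the first option - assuming pyo3, which rspy
is built on; the exact PyList API varies between pyo3 versions, and rows
are simplified to strings here:

    use pyo3::prelude::*;
    use pyo3::types::PyList;

    /// Hypothetical alternative to returning a JSON string: build the
    /// nested Python list on the Rust side, leaving nothing for Python
    /// to decode.
    fn rows_to_pylist(py: Python<'_>, rows: Vec<Vec<String>>) -> PyObject {
        let outer: Vec<PyObject> = rows
            .into_iter()
            .map(|row| PyList::new(py, row).to_object(py))
            .collect();
        PyList::new(py, outer).to_object(py)
    }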
Committing the Protobuf implementation for posterity, but will replace
it with json, as Protobuf measures about 6x slower for some workloads
like 'select * from notes'