* Automatically elide empty inputs and outputs to backend methods
* Refactor service generation
Although most of our Protobuf service methods operate on an open collection,
they were not accessible with just a Collection object. To call them (e.g.
because we hadn't yet gotten around to exposing the correct API on
Collection), you had to wrap the collection in a Backend object, and pay a
mutex-acquisition cost on each call, even when you had exclusive access to
the object.
This commit migrates the majority of service methods to the Collection, so
they can now be used directly, and improves the ergonomics a bit at the
same time.
The approach taken:
- The service generation now happens in rslib instead of anki_proto, which
avoids the need for trait constraints and associated types.
- Service methods are assumed to be collection-based by default. Instead of
implementing the service on Backend, we now implement it on Collection, which
means our methods no longer need to use self.with_col(...).
- We automatically generate methods in Backend which use self.with_col() to
delegate to the Collection method (see the sketch after this list).
- For methods that are only appropriate for the backend, we add a flag in
the .proto file. The codegen uses this flag to write the method into a
BackendFooService instead of FooService, which the backend implements.
- The flag also allows us to define separate implementations for collection
and backend, so we can e.g. skip the collection mutex in the i18n service
while still providing the service on a collection.
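
As a rough illustration of the resulting shape, a self-contained sketch; all
names here are invented, and the real codegen emits much more plumbing (proto
input/output messages, error conversion, etc.):

```rust
use std::sync::Mutex;

struct Collection {
    decks: Vec<String>,
}

struct Backend {
    col: Mutex<Collection>,
}

impl Backend {
    // Pays the mutex-acquisition cost on every call.
    fn with_col<T>(&self, f: impl FnOnce(&mut Collection) -> T) -> T {
        let mut guard = self.col.lock().unwrap();
        f(&mut *guard)
    }
}

// Collection-based service: the handwritten implementation lives here, so code
// that already holds a &mut Collection can call it directly, with no locking.
trait FooService {
    fn deck_count(&mut self) -> usize;
}

impl FooService for Collection {
    fn deck_count(&mut self) -> usize {
        self.decks.len()
    }
}

// Generated delegation: Backend forwards to the Collection method via
// with_col(), keeping the method reachable over the Protobuf layer.
impl Backend {
    fn deck_count(&self) -> usize {
        self.with_col(|col| col.deck_count())
    }
}

// Methods flagged as backend-only in the .proto file are grouped into a
// separate BackendFooService trait that only Backend implements; that also
// lets e.g. the i18n service answer without touching the collection mutex.
trait BackendFooService {
    fn translate_string(&self, key: &str) -> String;
}

impl BackendFooService for Backend {
    fn translate_string(&self, key: &str) -> String {
        format!("translated:{key}")
    }
}

fn main() {
    let mut col = Collection { decks: vec!["Default".to_string()] };
    assert_eq!(col.deck_count(), 1); // direct call on Collection

    let backend = Backend { col: Mutex::new(col) };
    assert_eq!(backend.deck_count(), 1); // delegated through with_col()
    assert!(!backend.translate_string("deck-count").is_empty());
}
```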
Due to the orphan rule, this meant removing our uses of impl ProtoStruct,
or converting them to a trait when they were commonly used.
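
A sketch of the trait workaround, with invented names; the proto module below
stands in for the separate anki_proto crate:

```rust
mod proto {
    pub struct Deck {
        pub id: i64,
        pub name: String,
    }
}

// rslib can no longer write `impl Deck { ... }` for a struct it doesn't own,
// but it can declare a local trait and implement that for the foreign type.
pub trait DeckNameExt {
    fn human_name(&self) -> String;
}

impl DeckNameExt for proto::Deck {
    fn human_name(&self) -> String {
        // Purely illustrative transformation.
        self.name.replace('\u{1f}', "::")
    }
}

fn main() {
    let deck = proto::Deck {
        id: 1,
        name: "Parent\u{1f}Child".to_string(),
    };
    println!("{}: {}", deck.id, deck.human_name()); // 1: Parent::Child
}
```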
rslib now directly references anki_proto and anki_i18n, instead of
'pub use'-ing them, and we can put the generated files back in OUT_DIR.
* Relax chrono specification for AnkiDroid
https://github.com/ankidroid/Anki-Android-Backend/pull/251
* Add AnkiDroid service and AnkiDroid customizations
Most of the work here was done by David in the Backend repo; this commit
integrates it into this repo for ease of future maintenance.
Based on 5d9f262f4c
with some tweaks:
- Protobuf imports have been fixed to match the recent refactor
- FatalError has been renamed to AnkidroidPanicError
- Tweaks to the desktop code to deal with the extra arg to open_collection,
and exclude AnkiDroid service methods from our Python code.
* Refactor AnkiDroid's DB code to avoid uses of unsafe
Instead of using a separate undo queue, the code now defers checking for
newly-due learning cards until the answering stage, and logs the updated
cutoff time as an undoable change, so that any newly-due learning cards
won't appear instead of a new/review card that was just undone.
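
A minimal sketch of the cutoff-logging idea; the names are invented and the
real queue/undo code does considerably more:

```rust
/// Undoable record of the learning-cutoff bump made at answer time.
enum UndoableQueueChange {
    CutoffChanged { previous: i64 },
}

struct Queues {
    /// Learning cards due before this timestamp may be shown.
    learning_cutoff: i64,
    undo_log: Vec<UndoableQueueChange>,
}

impl Queues {
    /// Called at the answering stage rather than eagerly: advance the cutoff
    /// and log the old value, so undo can restore it and newly-due learning
    /// cards can't displace the card that was just undone.
    fn update_cutoff(&mut self, now: i64) {
        self.undo_log.push(UndoableQueueChange::CutoffChanged {
            previous: self.learning_cutoff,
        });
        self.learning_cutoff = now;
    }

    fn undo_last(&mut self) {
        if let Some(UndoableQueueChange::CutoffChanged { previous }) = self.undo_log.pop() {
            self.learning_cutoff = previous;
        }
    }
}

fn main() {
    let mut queues = Queues { learning_cutoff: 1_000, undo_log: vec![] };
    queues.update_cutoff(1_060);
    assert_eq!(queues.learning_cutoff, 1_060);
    queues.undo_last();
    assert_eq!(queues.learning_cutoff, 1_000);
}
```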
Queue redo now uses a similar approach to undo, instead of rebuilding the
queues.
So, this is fun. Apparently "DeckId" is considered preferable to the
"DeckID" we were using until now, and the latest clippy will start
warning about it. We could of course disable the warning, but probably
better to bite the bullet and switch to the naming that's generally
considered best.
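
For reference, the shape of the rename; the lint name is our assumption about
which clippy check fires, and the real DeckId newtype in rslib carries more
derives and helpers:

```rust
// Presumably flagged by clippy's upper_case_acronyms lint:
// pub struct DeckID(pub i64);

// The spelling clippy prefers:
pub struct DeckId(pub i64);
```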
Fixes the following issue:
- some code directly modifies the database, causing modified_in_python
to be set to true
- an undoable operation is run, which calls autosave() at the end
- autosave() notices there's an undoable operation, and commits immediately
- because modified_in_python was true, col.mtime was bumped in Python
- that invalidated the undo queue, preventing the operation from being
undone
- transact() now automatically clears card queues unless an op opts out
(and currently only AnswerCard does); see the sketch after this list. This
means there's no risk of forgetting to clear the queues in an operation, or
when undoing/redoing
- CollectionOp has been renamed to UndoableOp
- clear queues when redoing "answer card", instead of clearing redo
when clearing queues
- use dataclasses for the review/checkpoint undo cases, instead of the
nasty ad-hoc list structure
- expose backend review undo to Python, and hook it into GUI
- redo is not currently exposed on the GUI, and the backend can only
cope with reviews done by the new scheduler at the moment
- the initial undo prototype code was bumping mtime/usn on undo, but
that was not ideal, as it was breaking the queue handling which expected
the mtime to match. The original rationale for bumping mtime/usn was
to avoid problems with syncing, but various operations like removing
a revlog can't be synced anyway - so we just need to ensure we clear the
undo queue prior to syncing
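
A minimal sketch of the opt-out behaviour mentioned above; names are invented
and the real transact()/undo plumbing does considerably more (autosave, undo
log, change notifications):

```rust
#[derive(PartialEq)]
enum UndoableOp {
    AnswerCard,
    UpdateDeck,
}

impl UndoableOp {
    /// Only card answering keeps the queues; every other op clears them, so a
    /// stale queue can't survive an arbitrary operation or an undo/redo.
    fn needs_queue_clear(&self) -> bool {
        *self != UndoableOp::AnswerCard
    }
}

struct Collection {
    queues_built: bool,
}

impl Collection {
    fn clear_study_queues(&mut self) {
        self.queues_built = false;
    }

    fn transact<T>(&mut self, op: UndoableOp, func: impl FnOnce(&mut Self) -> T) -> T {
        let out = func(self);
        if op.needs_queue_clear() {
            self.clear_study_queues();
        }
        // ... commit/autosave and undo bookkeeping would happen here ...
        out
    }
}

fn main() {
    let mut col = Collection { queues_built: true };
    col.transact(UndoableOp::AnswerCard, |_col| ());
    assert!(col.queues_built); // answering opts out of the clear
    col.transact(UndoableOp::UpdateDeck, |_col| ());
    assert!(!col.queues_built); // other ops clear the queues automatically
}
```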
Some initial testing with orjson indicates performance varies from
slightly better than pysqlite to about 2x slower depending on the type
of query.
Performance could be improved by building the Python list in rspy
instead of sending back json that needs to be decoded, but it may make
more sense to rewrite the hotspots in Rust instead. More testing is
required in any case.
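
One possible shape for building the list directly, sketched against a pyo3
0.20-style API; the function name, module name, and row layout are invented
for illustration:

```rust
use pyo3::prelude::*;
use pyo3::types::PyList;
use pyo3::ToPyObject;

#[pyfunction]
fn all_notes_rows(py: Python<'_>) -> PyResult<PyObject> {
    // Stand-in for rows fetched from the collection DB inside rslib.
    let rows = vec![(1i64, "front"), (2i64, "back")];

    // Build the Python list directly, instead of serialising to json in Rust
    // and decoding it with orjson on the Python side.
    let list = PyList::empty(py);
    for row in rows {
        list.append(row)?;
    }
    Ok(list.to_object(py))
}

#[pymodule]
fn rsbridge(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(all_notes_rows, m)?)?;
    Ok(())
}
```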
Committing the Protobuf implementation for posterity, but it will be replaced
with json, as Protobuf measures about 6x slower for some workloads like
'select * from notes'