While implementing the overdue search, I realised it would be nice to
be able to construct a search string with OR and NOT searches without
having to construct each part individually with build_search_string().
Changes:
- Extends SearchTerm to support a text search, which will be parsed
by the backend. This allows us to do things like wrap text in a group
or NOT node.
- Because SearchTerm->Node conversion can now fail with a parsing error,
it's switched over to TryFrom.
- Switch concatenate_searches and replace_search_term to use SearchTerms,
so that they too don't require separate string building steps.
- Remove the unused normalize_search()
- Remove negate_search, as this is now an operation on a Node, and
users can wrap their search in SearchTerm(negated=...) (see the sketch
after this list)
- Remove the match_any and negate args from build_search_string
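As a rough sketch of the intended usage (the exact field names and
import paths below are assumptions, not taken from this change):

    from anki.collection import Collection, SearchTerm  # assumed import location

    col = Collection("collection.anki2")

    # Raw text that the backend will parse, wrapped in a NOT node,
    # replacing the removed negate_search()/build_search_string(negate=...) path.
    negated = SearchTerm(negated=SearchTerm(parsable_text="deck:French is:due"))
    query = col.build_search_string(negated)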
Having done all this work, I've just realised that perhaps the original
JSON idea was more feasible than I first thought: if we wrote it out
to a string and re-parsed it, we would be able to leverage the existing
checks that occur at the parsing stage.
The progress messages are only really intended to be consumed by Anki.
If consumption by add-ons was expected, we'd be better off keeping the
wrapper, as the API for oneofs in Python is quite awkward to use.
While mypy can understand nested references like ConfigBool.Key.COLLAPSE_RECENT,
PyCharm doesn't understand the metaclass syntax, and shows the definitions
as invalid.
- anki._backend stores the protobuf files and rsbackend.py code
- pylib modules import protobuf messages directly from the
_pb2 files, and explicitly export any that will be returned or consumed
by public pylib functions, so that calling code can import from pylib
(see the sketch after this list)
- the "rsbackend" no longer imports and re-exports protobuf messages
- pylib can just consume them directly.
- move errors to errors.py
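For example (module and message names below are only illustrative),
a pylib module re-exports the messages that appear in its public API:

    # e.g. in a pylib module such as anki/config.py (illustrative path)
    from anki._backend.backend_pb2 import Config  # assumed _pb2 module name

    # Re-exported so add-ons and aqt can write `from anki.config import Config`
    # rather than importing from the generated _pb2 files directly.
    __all__ = ["Config"]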
Still todo:
- rsbridge
- finishing the work on rsbackend, and checking what we need to add
back to the original file location to avoid breaking add-ons
- IdList could be re-used for a cids: search in the future if required.
- Embedding the message means it's easy to access from Python as
an attribute of SearchTerm.
- use the TimestampSecs newtype instead of raw i64s
- use FixedOffset instead of a minutes_west offset
- check localOffset each time the timing is calculated, and set it
if it's stale - even for v1.
- check for and fix missing rollover when calculating timing
- stop explicitly passing localOffset in the sync/start call
If the client's clock is behind AnkiWeb's, even by a few seconds,
we can end up with a situation where last_begin_at is updated after
the sync to a value less than the mtime we received from AnkiWeb,
causing the collection to be saved, which bumps the modtime.
Work around this by recording mtime at begin() time, and seeing if it
has changed in either direction.
Thanks to Rumo, who did the hard work looking into it:
https://forums.ankiweb.net/t/why-is-my-sync-button-blue/2078/21
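A minimal sketch of the workaround (illustrative only; not the actual
sync code):

    # At begin() time, remember the collection's current modtime.
    mtime_at_begin = col.mod

    # ... normal sync steps run here ...

    # Afterwards, only save if the modtime moved in either direction since
    # begin(), rather than comparing against a timestamp that a client
    # clock running behind AnkiWeb's could make look stale.
    if col.mod != mtime_at_begin:
        col.save()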
The change that caused it:
https://github.com/dropbox/mypy-protobuf/issues/118
This is more awkward to handle now, as the types are only available
at type-checking time. Python's static typing is such a mess :-(
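One way to keep annotations working is a type-checking-only import (the
module and message names below are assumptions for illustration):

    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        # available to mypy/PyCharm, but not at runtime
        from anki._backend.backend_pb2 import Progress

    def on_progress(progress: "Progress") -> None:
        ...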
Saves having to serialize the note fields and q/a templates, which
is a particular win when rendering the question/answer in the browse
screen.
Also some work towards being able to preview notes without having to
commit them to the database.
- notes with wrong field count are now recovered instead of
being deleted
- notes with missing note types are now recovered
- notes with missing cards are now recovered
- recover_missing_deck() still needs implementing
- checks required
- notetypes are fetched from the DB as needed, and cached in Python
- handle note type changes in the backend. Multiple operations can now
be performed in one go, but this is not currently exposed in the GUI.
- extra methods to grab sorted note type names quickly, and fetch by
name
- col.models.save() without a provided notetype is now a no-op
- note loading/saving handled in the backend
- notes with no valid cards can now be added
- templates can now be deleted even if they would previously
have orphaned notes
A number of FIXMEs have been left in notes.py and models.py.
- mtime is tracked on each key individually, which will allow
merging of config changes when syncing in the future
- added col.(get|set|remove)_config()
- in order to support existing code that was mutating returned
values (e.g. col.conf["something"]["another"] = 5), the returned list/dict
will be automatically wrapped so that when the value is dropped, it
will save the mutated item back to the DB if it's changed. Code that
is fetching lists/dicts from the config like so:
col.conf["foo"]["bar"] = baz
col.setMod()
will continue to work in most cases, but should be gradually updated to:
conf = col.get_config("foo")
conf["bar"] = baz
col.set_config("foo", conf)
Disabled for now; when enabled it will allow faster collection
open and close in the normal case, while continuing to downgrade
when exporting or doing a full sync.
Also, when downgrading is disabled, the journal mode is no longer
changed back to delete.
- tag list stored in a separate DB table
- non-wildcard searches now do full unicode case folding
(e.g. tag:masse matches 'Maße'; see the Python illustration after this list)
- wildcard matches do simple unicode case folding
- some functions haven't been updated yet, so ASCII folding will
continue to be used in some operations
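The difference can be illustrated in Python (the actual matching happens
in Rust; this is only an analogy):

    # Full case folding maps the German eszett to "ss", so tag:masse matches:
    "Maße".casefold()  # -> "masse"
    # Simple lowercasing keeps it, so it would not:
    "Maße".lower()     # -> "maße"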
This is safer than just dropping the backend, as .close() will
block if something else is holding the mutex. Also means we can
drop the extra I18nBackend code.
Media syncing still needs fixing.
We no longer need to worry about pysqlite implicitly beginning
transactions, and can be more explicit about beginning/ending
transactions.
save() now also has a trx argument controlling whether a
transaction should be started / left open.
Some initial testing with orjson indicates performance varies from
slightly better than pysqlite to about 2x slower depending on the type
of query.
Performance could be improved by building the Python list in rspy
instead of sending back json that needs to be decoded, but it may make
more sense to rewrite the hotspots in Rust instead. More testing is
required in any case.
- _renderQA() has moved to template.py:render_card()
- dropped QAData in favour of a properly typed dict
- render_card() returns a TemplateRenderOutput struct instead of a dict
- card_did_render now takes that output as the first arg, and can
mutate it
- TemplateRenderContext now stores the original card, so it can return
a card even in the add screen case
The old mungeFields and mungeQA hooks have been removed as part of this
change. mungeQA can be replaced with the card_did_render hook.
In the mungeFields case, please switch to using field_filter instead.
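A rough sketch of the replacements (the signatures follow the
description above, but treat the details as assumptions and check
hooks.py for the real ones):

    from anki import hooks

    # mungeQA replacement: mutate the rendered output in place.
    def on_card_did_render(output, context):
        output.question_text += "<div class='my-addon-marker'></div>"

    hooks.card_did_render.append(on_card_did_render)

    # mungeFields replacement: transform one field's text as it is rendered.
    def on_field_filter(field_text, field_name, filter_name, context):
        if filter_name != "myfilter":
            return field_text
        return field_text.upper()

    hooks.field_filter.append(on_field_filter)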
We can now show replay buttons for the audio contained in {{FrontSide}}
without having to play it again when the answer is shown.
The template code now always defers FrontSide rendering, as rendering
it eagerly wasn't a big saving, and meant the logic had to be
implemented twice.
- the context exists for the lifecycle of one card's render,
and caches calls to things like .card() to avoid add-ons needing to
do their own cache management.
- add-ons can optionally add extra data to the context if they need
it across multiple filters (see the sketch after this list)
- removed card_will_render. The legacy hook is still available for
now
- card_did_render is now called only once, with both front and back
text
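A sketch of stashing add-on data on the context (the extra_state
attribute used below is an assumption for illustration):

    from anki import hooks

    def on_field_filter(field_text, field_name, filter_name, context):
        # Remember per-render state on the context instead of keeping an
        # add-on level cache keyed by card id.
        seen = context.extra_state.setdefault("my_addon_seen_fields", [])
        seen.append(field_name)
        return field_text

    hooks.field_filter.append(on_field_filter)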
The type arg is no longer used, as neither type 0 nor 1 appears to
have been used in the codebase.
By using the existing card ids, it allows add-ons that gather
information about a card to work properly in the card template screen
without extra hacks.
This allows us to add a docstring to .append() so users can see
the names of the arguments that are being passed, and means we
don't have to remember to prepend run_ when calling a hook.
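For example (the hook name is chosen for illustration):

    from anki import hooks

    def on_note_will_flush(note):
        # runs just before a note is written out
        ...

    # .append() has a docstring listing the argument names, so an IDE can
    # show them; compare the old string-based addHook("...", func) style.
    hooks.note_will_flush.append(on_note_will_flush)

    # Internal code fires the hook by calling the object directly
    # (no run_ prefix to remember), e.g. hooks.note_will_flush(note)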
Still todo:
- Add separate module for GUI hooks
- Update the remaining runHook/runFilter() calls
- Document the changes, including defensive registration
- The front and back are rendered in one call now. If the front
side contains no custom filters, we can bake {{FrontSide}} into the
rear side. If it did contain custom filters, we return the partially
complete rear template instead, and the calling code can inject
the FrontSide in after it has been fully rendered.
- Instead of modifying "cloze" into something like "cq-2", the card
ordinal and whether we're rendering the question or answer are now
passed in to the rendering filters as context.
- The Rust code doesn't need to support filter names split on '-'
anymore.
- Drop the "Show" part of hint descriptions so i18n support can be
deferred.
- Ignore blank filter names caused by the user typing two colons instead
of one.
- Fixed hint field and text transposition.
This extends the existing Rust code to handle conditional
replacement. The replacement of field names and filters to text
remains in Python, so that add-ons can still define their own
field modifiers.
The code is currently running the old Pystache rendering and the
new implementation in parallel, and will print a message to the
console if they don't match. If you notice any problems, please
let me know.
In the process of factoring out the field filtering, the "extra"
and "fullname" args are now just passed in as blank strings.
Extra was functionality that allowed a field modifier to be defined
as "filtername(arg1,arg2):field", and fullname was the name of the
field including any provided field modifiers. From grepping through
the add-ons on AnkiWeb, neither appears to have been used.