* Enable nc: to only search in a specific field
* Add FieldSearchMode enum to replace boolean fields
* Avoid magic numbers in enum
* Use standard naming so Prost can remove redundant text
---------
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
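A hedged sketch of the prost output implied by the last two commits: with standard SCREAMING_SNAKE prefixes in the .proto file, prost strips the redundant `FIELD_SEARCH_MODE_` text from variant names. The variant set below is an assumption (nc: suggests a no-combining mode), not the actual definition.
```rust
// Illustrative only: a prost-style rendering of the enum described above.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(i32)]
pub enum FieldSearchMode {
    Normal = 0, // explicit discriminants instead of magic numbers
    Regex = 1,
    NoCombining = 2,
}
```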
* Feat/use current_retrievability_seconds in SQL
* replace `days_since_last_review` with `seconds_since_last_review`
* Update is_due_in_days logic to include original or current due date check
* Expose original position to columns and card info
* Fix contributors
* Change routine name and return, fix SQL file, utilize coalesce inline
* Handle cards without original position
* Remove unnecessary conversion
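A minimal sketch of the fallback the last few commits describe, assuming the original position is surfaced as an optional value; the names here are illustrative, not the actual fields.
```rust
// Hedged sketch: prefer the original position when present, mirroring the
// inline coalesce mentioned above, and fall back for cards without one.
fn display_position(original_position: Option<u32>, current_position: u32) -> u32 {
    original_position.unwrap_or(current_position)
}
```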
* Fix sorting of due timestamps in the browser
* Fix due sorting in notes mode
* Drop initial ctype sorting
* Fix new card positions being treated as due days
- We must use interval, not stability to infer days_elapsed
- We must use original due date in a filtered deck
- Use retrievability in filtered deck sorting, not just regular sorting
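A sketch of the first two rules, under the assumption that due dates and intervals are day counts; field names are illustrative.
```rust
// Hedged sketch of the fix described above: infer elapsed days from the
// interval (not stability), and use the original due date in a filtered deck.
fn days_elapsed(days_today: i32, due: i32, original_due: Option<i32>, interval_days: i32) -> i32 {
    // in a filtered deck, `due` is repurposed, so fall back to original_due
    let real_due = original_due.unwrap_or(due);
    // the last review happened `interval_days` before the card came due
    days_today - (real_due - interval_days)
}
```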
* Roll back if toggling state fails
Previously, if the search triggered by a state toggle failed, the switch
and the model would move to the new state, while the table would remain
in the previous state.
* Fix reversed sort orders of FSRS columns
* Add separate default sort orders for notes and cards
* Add test for consistent default sort orders
* Add launch config for debugging in VSC
* Extend launch config for macOS and Linux
* Pack FSRS data into card.data
* Update FSRS card data when preset or weights change
* Show FSRS stats in card stats
* Show a warning when there's a limited review history
* Add some translations; tweak UI
* Fix default requested retention
* Add browser columns, fix calculation of R
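For context, one published form of the FSRS retrievability formula; the exact formula depends on the FSRS version bundled, so treat this as a sketch rather than what the code ships.
```rust
// FSRS-4-style retrievability: probability of recall after `t` days for a
// card with stability `s` (days until R drops to 90%). Illustrative only;
// the bundled fsrs-rs crate is the source of truth.
fn retrievability(t: f64, s: f64) -> f64 {
    (1.0 + t / (9.0 * s)).powi(-1)
}
```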
* Property searches
e.g. `prop:d>0.1`
* Integrate FSRS into reviewer
* Warn about long learning steps
* Hide minimum interval when FSRS is on
* Don't apply interval multiplier to FSRS intervals
* Expose memory state to Python
* Don't set memory state on new cards
* Port Jarrett's new tests; add some helpers to make tests more compact
https://github.com/open-spaced-repetition/fsrs-rs/pull/64
* Fix learning cards not being given memory state
* Require update to v3 scheduler
* Don't exclude single learning step when calculating memory state
* Use relearning step when learning steps unavailable
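A toy sketch of the fallback in the two commits above; the real logic in rslib works from the full review history.
```rust
// Hedged sketch: when the deck has no learning steps, fall back to the first
// relearning step so memory-state calculation still has a delay to work with.
fn first_step_minutes(learning: &[f32], relearning: &[f32]) -> Option<f32> {
    learning.first().or_else(|| relearning.first()).copied()
}
```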
* Update docstring
* Fix `single_card_revlog_to_items` (#2656)
* No need to check the review_kind for unique_dates
* Add email address to CONTRIBUTORS
* Fix last first learn & keep early review
* cargo fmt
* cargo clippy --fix
* Add Jarrett to about screen
* Fix fsrs_memory_state being initialized to default in get_card()
* Set initial memory state on graduate
* Update to latest FSRS
* Fix experiment.log being empty
* Fix broken colpkg imports
Introduced by "Update FSRS card data when preset or weights change"
* Update memory state during (re)learning; use FSRS for graduating intervals
* Reset memory state when cards are manually rescheduled as new
* Add difficulty graph; hide eases when FSRS enabled
* Add retrievability graph
* Derive memory_state from revlog when it's missing and shouldn't be
---------
Co-authored-by: Jarrett Ye <jarrett.ye@outlook.com>
* Automatically elide empty inputs and outputs to backend methods
* Refactor service generation
Although most of our Protobuf service methods require an open collection,
they were not accessible with just a Collection object. To call them
(e.g. because we haven't gotten around to exposing the correct API on
Collection yet), you had to wrap the collection in a Backend object, and
pay a mutex-acquisition cost for each call, even if you had exclusive
access to the object.
This commit migrates the majority of service methods to the Collection, so
they can now be used directly, and improves the ergonomics a bit at the
same time.
The approach taken:
- The service generation now happens in rslib instead of anki_proto, which
avoids the need for trait constraints and associated types.
- Service methods are assumed to be collection-based by default. Instead of
implementing the service on Backend, we now implement it on Collection, which
means our methods no longer need to use self.with_col(...).
- We automatically generate methods in Backend which use self.with_col() to
delegate to the Collection method.
- For methods that are only appropriate for the backend, we add a flag in
the .proto file. The codegen uses this flag to write the method into a
BackendFooService instead of FooService, which the backend implements.
- The flag also allows us to define separate implementations for collection
and backend, so we can e.g. skip the collection mutex in the i18n service
while still providing the service on a collection.
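A self-contained toy version of that pattern, assuming a `with_col` helper as described; the service trait and types below are made up for illustration:
```rust
use std::sync::{Arc, Mutex};

struct Collection;

struct Backend {
    col: Arc<Mutex<Collection>>,
}

trait ExampleService {
    fn example_method(&mut self) -> String;
}

// Collection-based by default: callers with exclusive access pay no mutex cost.
impl ExampleService for Collection {
    fn example_method(&mut self) -> String {
        "did real work".into()
    }
}

impl Backend {
    // Delegation helper: lock the collection, then run the closure against it.
    fn with_col<T>(&self, f: impl FnOnce(&mut Collection) -> T) -> T {
        let mut col = self.col.lock().unwrap();
        f(&mut col)
    }
}

// The generated Backend impl simply delegates through with_col().
impl ExampleService for Backend {
    fn example_method(&mut self) -> String {
        self.with_col(|col| col.example_method())
    }
}
```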
* Add crate snafu
* Replace all inline structs in AnkiError
* Derive Snafu on AnkiError
* Use snafu for card type errors
* Use snafu's Whatever error for InvalidInput
* Use snafu for NotFoundError and improve message
* Use snafu for FileIoError to attach context
Remove IoError.
Add some context-attaching helpers to replace code returning bare
io::Errors.
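A minimal sketch of that pattern with snafu; the error shape and helper names here are illustrative, and the real rslib type carries more context.
```rust
// Illustrative snafu usage: attach the path as context instead of returning
// a bare io::Error.
use snafu::{ResultExt, Snafu};
use std::path::{Path, PathBuf};

#[derive(Debug, Snafu)]
#[snafu(display("failed to read {}: {source}", path.display()))]
struct FileIoError {
    source: std::io::Error,
    path: PathBuf,
}

fn read_file(path: &Path) -> Result<Vec<u8>, FileIoError> {
    // .context() converts the io::Error and records which file was involved
    std::fs::read(path).context(FileIoSnafu { path })
}
```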
* Add more context-attaching io helpers
* Add message, context and backtrace to new snafus
* Utilize error context and backtrace on frontend
* Rename LocalizedError -> BackendError.
* Remove DocumentedError.
* Have all backend exceptions inherit BackendError.
* Rename localized(_description) -> message
* Remove accidentally committed experimental trait
* invalid_input_context -> ok_or_invalid
* ensure_valid_input! -> require!
* Always return `Err` from `invalid_input!`
Instead of returning a Result to unwrap, the macro now accepts a source error.
* new_tempfile_in_parent -> new_tempfile_in_parent_of
* ok_or_not_found -> or_not_found
* ok_or_invalid -> or_invalid
* Add crate convert_case
* Use unqualified lowercase type name
* Remove uses of snafu::ensure
* Allow public construction of InvalidInputErrors (dae)
Needed to port the AnkiDroid changes.
* Make into_protobuf() public (dae)
Also required for AnkiDroid. Not sure why it worked previously - possible
bug in older Rust version?
* Adjust remaining steps after config update
* Handle relearning steps separately
Also refactor a lot.
* Also adjust remaining steps after deck change
* Test step adjustment after config update
* Fix `SearchBuilder::(re)learning_cards()`
* Fix step adjustment after deck change
* Test step adjustment after deck change
* Fix test name
* Readjust remaining steps according to last delay
Also atomize tests and add some tooling.
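A toy sketch of the idea, assuming remaining steps are tracked as a simple count; the actual logic also accounts for the last delay, per the commit above.
```rust
// Hedged sketch: when the preset's step list changes, keep the card's
// progress through the steps rather than its raw remaining count.
fn adjusted_remaining(old_remaining: u32, old_total: u32, new_total: u32) -> u32 {
    let completed = old_total.saturating_sub(old_remaining);
    // at least one step remains, so the card still graduates normally
    new_total.saturating_sub(completed).max(1)
}
```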
* Implicitly group when joining searches
* Allow joining search types directly
* Test search joining
* Add comment for future selves (dae)
* Add one more assert that shows nested grouping (dae)
* Join user searches without grouping again
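The grouping issue in a sketch (illustrative, not the actual joining code): in Anki's search syntax AND binds tighter than OR, so joining `a OR b` with `c` without grouping would parse as `a OR (b AND c)`.
```rust
// Wrapping each side in a group preserves the intended precedence; adjacent
// groups are implicitly ANDed in Anki's syntax.
fn join_with_and(a: &str, b: &str) -> String {
    format!("({a}) ({b})")
}
```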
* Flatten a few clauses in custom study (dae)
Ideally this would have been in beta 6 :-) No add-ons appear to be
using customstudy.py/taglimit.py though, so it should hopefully not be
disruptive.
In the earlier custom study changes, we didn't get around to addressing
issue #1136. Now instead of trying to determine the maximum increase
to allow (which doesn't work correctly with nested decks), we just
present the total available to the user again, and let them decide. There's
plenty of room for improvement here still, but further work here might
be better done once we look into decoupling deck limits from deck presets.
Tags and available cards are now fetched prior to showing the dialog,
and a progress dialog is shown if this takes a while.
Tags are stored in an aux var now, so they don't inflate the deck
object size.
* Implement custom study on backend
* Switch frontend to backend custom study
* Skip typecheck for new pb classes
* Build tag search string on backend
Also fixes escaping of special characters in tag names.
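A hedged sketch of the escaping concern, assuming Anki's usual quoting and backslash-escaping rules; the real implementation lives in the backend.
```rust
// Illustrative only: quote the tag and escape characters that are special in
// Anki's search syntax, so e.g. a tag containing '"' cannot break the query.
fn tag_search(tag: &str) -> String {
    let escaped = tag.replace('\\', "\\\\").replace('"', "\\\"");
    format!("\"tag:{escaped}\"")
}
```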
* `cram.cards` -> `cram.card_limit`
* Assign more meaningful names in `TagLimit`
* Broaden rustfmt glob
* Use `invalid_input()` helper
* Assign `FilteredDeckForUpdate` to temp var
* Implement `SearchBuilder`
* Rewrite `custom_study()` with `SearchBuilder`
* Replace match macros with `SearchBuilder`
* Remove `into_nodes_list` & `concatenate_searches`
Instead, fetch the config order on the frontend and pass a builtin
variant into the backend.
That makes the following unnecessary:
- Resolving the config sort in search/mod.rs
- Deserializing the Column enum
- Config accessors for the sort columns
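A toy illustration of the builder idea; the real `SearchBuilder` API in rslib differs.
```rust
// Hedged sketch: build a search from typed parts instead of ad-hoc string
// concatenation, then write it out once at the end.
struct SearchBuilder(Vec<String>);

impl SearchBuilder {
    fn new() -> Self {
        Self(Vec::new())
    }
    fn and(mut self, node: impl Into<String>) -> Self {
        self.0.push(node.into());
        self
    }
    fn write(&self) -> String {
        // each node is grouped so joining cannot change its meaning
        self.0
            .iter()
            .map(|n| format!("({n})"))
            .collect::<Vec<_>>()
            .join(" ")
    }
}
```
For example, `SearchBuilder::new().and("deck:current").and("is:due").write()` would yield `(deck:current) (is:due)`.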
- Filtered deck creation now happens as an atomic operation, and is
undoable.
- The logic for initial search text, normalizing searches and so on
has been pushed into the backend.
- Use protobuf to pass the filtered deck to the updated dialog, so
we don't need to deal with untyped JSON.
- Change the "revise your search?" prompt to be a simple info box -
user has access to cancel and build buttons, and doesn't need a separate
prompt. Tweak the wording so the 'show excluded' button should be more
obvious.
- Filtered decks have a time appended to them instead of a number,
primarily because it's easier to implement. No objections to going back to
the old behaviour if someone wants to contribute a clean patch.
The standard de-duplication will happen if two decks are created in the
same minute with the same name.
- Tweak the default sort order, and start with two searches. The UI
will still hide the second search by default, but by starting with two,
the frontend doesn't need logic for creating the starting text.
- Search errors now have their own error type, instead of using
InvalidInput, as that was intended mainly for bad API calls. The markdown
conversion is done when the error is converted from the backend, allowing
errors to be printed as a string without any special handling by the calling
code (see the sketch after this list).
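A sketch of that boundary conversion; the error shape and the `render_markdown` helper are hypothetical stand-ins, not the real types.
```rust
// Hedged sketch: markdown in a search error is rendered once, at the backend
// boundary, so calling code can print the error as a plain string.
enum BackendErrorKind {
    SearchError { markdown: String }, // stand-in for the real error type
}

fn error_message(err: BackendErrorKind) -> String {
    match err {
        BackendErrorKind::SearchError { markdown } => render_markdown(&markdown),
    }
}

// Hypothetical converter; a real one would produce displayable text.
fn render_markdown(md: &str) -> String {
    md.to_string()
}
```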
TODO: when building a new filtered deck, update_active() is clobbering
the undo log when the overview is refreshed
- SearchTerm -> SearchNode
- Operator -> Joiner; share between messages
- build_search_string() supports specifying AND/OR as a convenience
- group_searches() makes it easier to negate
While implementing the overdue search, I realised it would be nice to
be able to construct a search string with OR and NOT searches without
having to construct each part individually with build_search_string().
Changes:
- Extends SearchTerm to support a text search, which will be parsed
by the backend. This allows us to do things like wrap text in a group
or NOT node.
- Because SearchTerm->Node conversion can now fail with a parsing error,
it's switched over to TryFrom
- Switch concatenate_searches and replace_search_term to use SearchTerms,
so that they too don't require separate string building steps.
- Remove the unused normalize_search()
- Remove negate_search, as this is now an operation on a Node, and
users can wrap their search in SearchTerm(negated=...)
- Remove the match_any and negate args from build_search_string
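A toy version of the TryFrom change; the types here are stand-ins for the real SearchTerm/Node machinery.
```rust
// Hedged sketch: converting a text SearchTerm into a Node can now fail,
// because the text portion is parsed by the backend.
struct SearchTerm {
    text: String,
    negated: bool,
}

enum Node {
    Text(String),
    Not(Box<Node>),
}

struct ParseError;

impl TryFrom<SearchTerm> for Node {
    type Error = ParseError;

    fn try_from(term: SearchTerm) -> Result<Self, ParseError> {
        if term.text.is_empty() {
            return Err(ParseError); // real parsing has many more failure modes
        }
        let node = Node::Text(term.text);
        Ok(if term.negated { Node::Not(Box::new(node)) } else { node })
    }
}
```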
Having done all this work, I've just realised that perhaps the original
JSON idea was more feasible than I first thought - if we wrote it out
to a string and re-parsed it, we would be able to leverage the existing
checks that occur at the parsing stage.