If we want to be able to factor the desired retention into a sort based
on relative overdueness, having the values accessible on the card makes
things easier.
Allowing some decks to be FSRS and some SM-2 will lead to confusing
behavior when sorting on SM-2 or FSRS-specific fields, or when moving
cards between decks.
This will allow the user to see a record of difficulty changes, and allows
us to identify reviews that were done with FSRS vs SM-2, since the
valid range is different.
* Pack FSRS data into card.data
* Update FSRS card data when preset or weights change
* Show FSRS stats in card stats
* Show a warning when there's a limited review history
* Add some translations; tweak UI
* Fix default requested retention
* Add browser columns, fix calculation of R
* Property searches
e.g. prop:d>0.1
* Integrate FSRS into reviewer
* Warn about long learning steps
* Hide minimum interval when FSRS is on
* Don't apply interval multiplier to FSRS intervals
* Expose memory state to Python
* Don't set memory state on new cards
* Port Jarrett's new tests; add some helpers to make tests more compact
https://github.com/open-spaced-repetition/fsrs-rs/pull/64
* Fix learning cards not being given memory state
* Require update to v3 scheduler
* Don't exclude single learning step when calculating memory state
* Use relearning step when learning steps unavailable
* Update docstring
* fix single_card_revlog_to_items (#2656)
* no need to check the review_kind for unique_dates
* add email address to CONTRIBUTORS
* fix last first learn & keep early review
* cargo fmt
* cargo clippy --fix
* Add Jarrett to about screen
* Fix fsrs_memory_state being initialized to default in get_card()
* Set initial memory state on graduate
* Update to latest FSRS
* Fix experiment.log being empty
* Fix broken colpkg imports
Introduced by "Update FSRS card data when preset or weights change"
* Update memory state during (re)learning; use FSRS for graduating intervals
* Reset memory state when cards are manually rescheduled as new
* Add difficulty graph; hide eases when FSRS enabled
* Add retrievability graph
* Derive memory_state from revlog when it's missing and shouldn't be
---------
Co-authored-by: Jarrett Ye <jarrett.ye@outlook.com>
* Support searching for deck configs by name
* Integrate FSRS optimizer into Anki
* Hack in a rough implementation of evaluate_weights()
* Interrupt calculation if user closes dialog
* Fix interrupted error check
* log_loss/rmse
* Update to latest fsrs commit; add progress info to weight evaluation
* Fix progress not appearing when pretrain takes a while
* Update to latest commit
* Implement import log screen in Svelte
* Show filename in import log screen title
* Remove unused NoteRow property
* Show number of imported notes
* Use a single nid expression
* Use 'count' as variable name for consistency
* Import from @tslib/backend instead
* Fix summary_template typing
* Fix clippy warning
* Apply suggestions from code review
* Fix imports
* Contents -> Fields
* Increase max length of browser search bar
https://github.com/ankitects/anki/pull/2568/files#r1255227035
* Fix race condition in Bootstrap tooltip destruction
https://github.com/twbs/bootstrap/issues/37474
* summary_template -> summaryTemplate
* Make show link a button
* Run import ops on Svelte side
* Fix geometry not being restored in CSV Import page
* Make VirtualTable fill available height
* Keep CSV dialog modal
* Reword importing-existing-notes-skipped
* Avoid mentioning matching based on first field
* Change tick and cross icons
* List skipped notes last
* Pure CSS spinner
* Move set_wants_abort() call to relevant dialogs
* Show number of imported cards
* Remove bold from first sentence and indent summaries
* Update UI after import operations
* Add close button to import log page
Also make virtual table react to resize event.
* Fix typing
* Make CSV dialog non-modal again
Otherwise user can't interact with browser window.
* Update window modality after import
* Commit DB and update undo actions after import op
* Split frontend proto into separate file, so backend can ignore it
Currently the automatically-generated frontend RPC methods get placed in
'backend.js' with all the backend methods; we could optionally split them
into a separate 'frontend.js' file in the future.
* Migrate import_done from a bridgecmd to a HTTP request
* Update plural form of importing-notes-added
* Move import response handling to mediasrv
* Move task callback to script section
* Avoid unnecessary :global()
* .log cannot be missing if result exists
* Move import log search handling to mediasrv
* Type common params of ImportLogDialog
* Use else if
* Remove console.log()
* Add way to test apkg imports in new log screen
* Remove unused import
* Get actual card count for CSV imports
* Use import type
* Fix typing error
* Ignore import log when checking for changes in Python layer
* Apply suggestions from code review
* Remove imported card count for now
* Avoid non-null assertion in assignment
* Change showInBrowser to take an array of notes
* Use dataclasses for import log args
* Simplify ResultWithChanges in TS
* Only abort import when window is modal
* Fix ResultWithChanges typing
* Fix Rust warnings
* Only log one duplicate per incoming note
* Update wording about note updates
* Remove caveat about found_notes
* Reduce font size
* Remove redundant map
* Give credit to loading.io
* Remove unused line
---------
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Automatically elide empty inputs and outputs to backend methods
* Refactor service generation
Despite the fact that the majority of our Protobuf service methods require
an open collection, they were not accessible with just a Collection
object. To access the methods (e.g. because we haven't gotten around to
exposing the correct API in Collection yet), you had to wrap the collection
in a Backend object, and pay a mutex-acquisition cost for each call, even
if you have exclusive access to the object.
This commit migrates the majority of service methods to the Collection, so
they can now be used directly, and improves the ergonomics a bit at the
same time.
The approach taken:
- The service generation now happens in rslib instead of anki_proto, which
avoids the need for trait constraints and associated types.
- Service methods are assumed to be collection-based by default. Instead of
implementing the service on Backend, we now implement it on Collection, which
means our methods no longer need to use self.with_col(...).
- We automatically generate methods in Backend which use self.with_col() to
delegate to the Collection method.
- For methods that are only appropriate for the backend, we add a flag in
the .proto file. The codegen uses this flag to write the method into a
BackendFooService instead of FooService, which the backend implements.
- The flag also allows us to define separate implementations for collection
and backend, so we can e.g. skip the collection mutex in the i18n service
while also providing the service on a collection.
Due to the orphan rule, this meant removing our usages of impl ProtoStruct,
or converting them to a trait when they were used commonly.
rslib now directly references anki_proto and anki_i18n, instead of
'pub use'-ing them, and we can put the generated files back in OUT_DIR.
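Roughly, the generated delegation looks like this sketch (all names here are illustrative placeholders, not the generated code verbatim):

```rust
struct Deck;
struct Collection;
struct Backend {
    col: std::sync::Mutex<Collection>,
}

impl Collection {
    // service methods are implemented on Collection by default,
    // so they no longer need self.with_col(...)
    fn get_deck(&mut self, _id: i64) -> Result<Deck, String> {
        Ok(Deck)
    }
}

impl Backend {
    // the codegen emits a wrapper that acquires the mutex once
    // and delegates to the Collection method
    fn get_deck(&self, id: i64) -> Result<Deck, String> {
        let mut col = self.col.lock().map_err(|e| e.to_string())?;
        col.get_deck(id)
    }
}
```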
* Move open_test_collection into Collection test impl
* Fix invalid ids when checking database
* Report fixed invalid ids
* Improve message when trying to export invalid ids
Also move ImportError due to namespace conflicts with snafu macro.
* Take a human name in DeckAdder::new
* Mention timestamps in the db check message (dae)
Will help to correlate the fix with the message shown when importing/
exporting.
* Ensure state mutator runs after card is rendered
* Ensure ease buttons only show when states are ready
* Pass context into states mutator
* Revert queuing of state mutator hook
Now that context data is exposed users shouldn't rely on the question
having been rendered anymore.
* Use callbacks instead of signals and timeout
... to track whether the states mutator ran or failed.
* Make mutator async
* Remove State enum
* Reduce requests and compute seed on backend
* Remove outdated comment.
* Revert removal of independent bury rules
* Revert 'hierarchical bury modes'
It's now again possible to bury e.g. new cards but not review cards;
however, siblings of previously gathered card queues will not be buried.
* Tweak docs (dae)
* Add missing Learn and PreviewRepeat queues
* Add CardAdder test helper
* Add option to have new cards ignore the review limit
Also entails a lot of refactoring because the old code was deeply
coupled to the previous behaviour.
* Add global option to ignore review limit
* Refactor decrementation
* Unify testing
* Enforce hierarchical bury modes
Interday learning burying is only allowed if review burying is enabled
and review burying is only allowed if new burying is enabled.
Closes #2352.
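A minimal sketch of that constraint, assuming simple boolean config flags (names are illustrative, not the actual config fields):

```rust
// downgrade inconsistent combinations to the nearest valid ones
fn effective_bury_modes(new: bool, review: bool, interday: bool) -> (bool, bool, bool) {
    let review = review && new;        // review burying requires new burying
    let interday = interday && review; // interday burying requires review burying
    (new, review, interday)
}
```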
* Switch front end to new bury modes
* Wording tweaks (dae)
* Hide interday option if using v2 scheduler (dae)
This PR replaces the existing Python-driven sync server with a new one in Rust.
The new server supports both collection and media syncing, and is compatible
with both the new protocol mentioned below, and older clients. A setting has
been added to the preferences screen to point Anki to a local server, and a
similar setting is likely to come to AnkiMobile soon.
Documentation is available here: <https://docs.ankiweb.net/sync-server.html>
In addition to the new server and refactoring, this PR also makes changes to the
sync protocol. The existing sync protocol places payloads and metadata inside a
multipart POST body, which causes a few headaches:
- Legacy clients build the request in a non-deterministic order, meaning the
entire request needs to be scanned to extract the metadata.
- Reqwest's multipart API directly writes the multipart body, without exposing
the resulting stream to us, making it harder to track the progress of the
transfer. We've been relying on a patched version of reqwest for timeouts,
which is a pain to keep up to date.
To address these issues, the metadata is now sent in an HTTP header, with the
data payload sent directly in the body. Instead of the slower gzip, we now
use zstd. The old timeout handling code has been replaced with a new implementation
that wraps the request and response body streams to track progress, allowing us
to drop the git dependencies for reqwest, hyper-timeout and tokio-io-timeout.
The main other change to the protocol is that one-way syncs no longer need to
downgrade the collection to schema 11 prior to sending.
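As a rough sketch, the new request shape might look like the following; the header name and metadata format here are assumptions, not the exact protocol:

```rust
// Hypothetical sketch of a sync request: metadata in a header,
// zstd-compressed payload directly in the body.
async fn sync_request(
    client: &reqwest::Client,
    url: &str,
    meta_json: String, // metadata travels in a header now
    payload: &[u8],    // payload goes straight into the body...
) -> Result<reqwest::Response, Box<dyn std::error::Error>> {
    let body = zstd::encode_all(payload, 0)?; // ...compressed with zstd
    Ok(client
        .post(url)
        .header("anki-sync", meta_json) // hypothetical header name
        .body(body)
        .send()
        .await?)
}
```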
* Run cargo +nightly fmt
* Latest prost-build includes clippy workaround
* Tweak Rust protobuf imports
- Avoid use of stringify!(), as JetBrains editors get confused by it
- Stop merging all protobuf symbols into a single namespace
* Remove some unnecessary qualifications
Found via IntelliJ lint
* Migrate some asserts to assert_eq/ne
* Remove mention of node_modules exclusion
This no longer seems to be necessary after migrating away from Bazel,
and excluding it means TS/Svelte files can't be edited properly.
* Remove deprecated `and_hms()`
* Update chrono
* Update licenses and fix script
* Remove deprecated Date struct
* Remove chrono pin
* Skip format check on .vscode
Was failing for no reason.
* Replace deprecated chrono functions
* Add cargo-deny to update-licenses & pin versions (dae)
* Remove time 0.1 dependency (dae)
We don't need to wait for chrono 0.5; it was provided behind a legacy
feature flag.
* Add crate snafu
* Replace all inline structs in AnkiError
* Derive Snafu on AnkiError
* Use snafu for card type errors
* Use snafu whatever error for InvalidInput
* Use snafu for NotFoundError and improve message
* Use snafu for FileIoError to attach context
Remove IoError.
Add some context-attaching helpers to replace code returning bare
io::Errors.
* Add more context-attaching io helpers
* Add message, context and backtrace to new snafus
* Utilize error context and backtrace on frontend
* Rename LocalizedError -> BackendError.
* Remove DocumentedError.
* Have all backend exceptions inherit BackendError.
* Rename localized(_description) -> message
* Remove accidentally committed experimental trait
* invalid_input_context -> ok_or_invalid
* ensure_valid_input! -> require!
* Always return `Err` from `invalid_input!`
Instead of a Result to unwrap, the macro accepts a source error now.
* new_tempfile_in_parent -> new_tempfile_in_parent_of
* ok_or_not_found -> or_not_found
* ok_or_invalid -> or_invalid
* Add crate convert_case
* Use unqualified lowercase type name
* Remove uses of snafu::ensure
* Allow public construction of InvalidInputErrors (dae)
Needed to port the AnkiDroid changes.
* Make into_protobuf() public (dae)
Also required for AnkiDroid. Not sure why it worked previously - possible
bug in older Rust version?
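A minimal sketch of the snafu pattern these commits describe; the variant and field names are illustrative, not Anki's actual AnkiError:

```rust
use snafu::{ResultExt, Snafu};

#[derive(Debug, Snafu)]
enum Error {
    #[snafu(display("failed to read {path}: {source}"))]
    FileIo {
        path: String,
        source: std::io::Error,
    },
}

// .context() attaches the offending path to the bare io::Error,
// replacing code that previously returned io::Error directly
fn read_config(path: &str) -> Result<String, Error> {
    std::fs::read_to_string(path).context(FileIoSnafu { path })
}
```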
* Enable state-dependent custom scheduling data
* Next(Card)States -> SchedulingStates
The fact that `current` was included in `next` always bothered me,
and custom data is part of the card state, so that was a bit confusing
too.
* Store custom_data in SchedulingState
* Make custom_data optional when answering
Avoids having to send it 4 extra times to the frontend, and avoids the
legacy answerCard() API clobbering the stored data.
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
* Add card meta for persisting custom scheduling state
* Rename meta -> custom_data
* Enforce limits on size of custom data
Large values will slow down table scans of the cards table, and it's
easier to be strict now and possibly relax things in the future than
the opposite.
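Illustratively, the check could look like this; the exact limit and error handling here are assumptions:

```rust
// hypothetical cap on serialized custom data, to keep card table rows small
const MAX_CUSTOM_DATA_BYTES: usize = 100;

fn validate_custom_data(json: &str) -> Result<(), String> {
    if json.len() > MAX_CUSTOM_DATA_BYTES {
        return Err(format!("custom data exceeds {MAX_CUSTOM_DATA_BYTES} bytes"));
    }
    Ok(())
}
```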
* Pack card states and customData into a single message
+ default customData to empty if it can't be parsed
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
In v3, it's more informative to show the count of child decks separately,
since increasing the limit of the current deck does not increase the limits
of child decks. When we rework the decks list in the future, a tooltip
will hopefully provide an easier way for users to see where cards are
available, and where limits are being applied.
Closes #1868
* Implicitly group when joining searches
* Allow joining search types directly
* Test search joining
* Add comment for future selves (dae)
* Add one more assert that shows nested grouping (dae)
* Join user searches without grouping again
* Flatten a few clauses in custom study (dae)
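The implicit-grouping idea behind the first commit, as a tiny sketch (hypothetical helper, not the real parser; per the later commit, user-entered searches are joined without grouping again):

```rust
// parenthesize each side so operator precedence can't change the meaning
// of the joined query, e.g. join("a b", "c", "OR") -> "(a b) OR (c)"
fn join_searches(a: &str, b: &str, op: &str) -> String {
    format!("({a}) {op} ({b})")
}
```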
* Fix legacy colpkg import; disable v3 import/export; add roundtrip test
The test has revealed we weren't decompressing the media files on v3
import. That's easy to fix, but means all files need decompressing
even when they already exist, which is not ideal - it would be better
to store size/checksum in the metadata instead.
* Switch media and meta to protobuf; re-enable v3 import/export
- Fixed media not being decompressed on import
- The uncompressed size and checksum are now included for each media
entry, so that we can quickly check if a given file needs to be extracted.
We're still just doing a naive size comparison on colpkg import at the
moment, but we may want to use a checksum in the future, and will need
a checksum for apkg imports.
- Checksums can't be efficiently encoded in JSON, so the media list
has been switched to protobuf to reduce the space requirements.
- The meta file has been switched to protobuf as well, for consistency.
This will mean any colpkg files exported with beta7 will be
unreadable.
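The Rust-side shape of the per-file metadata might look roughly like this; the real definition lives in a .proto file, and these field names are assumptions:

```rust
// one entry per media file in the archive
struct MediaEntry {
    name: String,
    size: u32,     // uncompressed size, for a quick existence/size check
    sha1: Vec<u8>, // checksum, needed later for apkg imports
}
```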
* Avoid integer version comparisons
* Re-enable v3 test
* Apply suggestions from code review
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Add export_colpkg() method to Collection
More discoverable, and easier to call from unit tests
* Split import/export code out into separate folders
Currently colpkg/*.rs contain some routines that will be useful for
apkg import/export as well; in the future we can refactor them into a
separate file in the parent module.
* Return a proper error when media import fails
This tripped me up when writing the earlier unit test - I had called
the equivalent of import_colpkg()?, and it was returning a string error
that I didn't notice. In practice this should result in the same text
being shown in the UI, but just skips the tooltip.
* Automatically create media folder on import
* Move roundtrip test into separate file; check collection too
* Remove zstd version suffix
Prevents a warning shown each time Rust Analyzer is used to check the
code.
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
Ideally this would have been in beta 6 :-) No add-ons appear to be
using customstudy.py/taglimit.py though, so it should hopefully not be
disruptive.
In the earlier custom study changes, we didn't get around to addressing
issue #1136. Now instead of trying to determine the maximum increase
to allow (which doesn't work correctly with nested decks), we just
present the total available to the user again, and let them decide. There's
plenty of room for improvement here still, but further work here might
be better done once we look into decoupling deck limits from deck presets.
Tags and available cards are fetched prior to showing the dialog now,
and will show a progress dialog if things take a while.
Tags are stored in an aux var now, so they don't inflate the deck
object size.
* Add forget prompt with options
- Restore original position
- Reset reps and lapses
* Restore position when resetting for export
* Add config context to avoid passing keys
* Add routine to fetch defaults; use method-specific enum (dae)
* Keep original position by default (dae)
* Fix code completion for forget dialog (dae)
Needs to be a symbolic link to the generated file
* Replace Card.data with .original_position
* Use and update original position in v3
* Show original position in card info
* Revert restoring original position for now
* Fix pb card to/from pylib card
* Try original_position as the last pb field
* minor wording tweaks (dae)
* avoid repinning Rust deps by default
* add id_tree dependency
* Respect intermediate child limits in v3
* Test new behaviour of v3 counts
* Rework v3 queue building to respect parent limits
* Add missing did field to SQL query
* Fix `LimitTreeMap::is_exhausted()`
* Rework tree building logic
https://github.com/ankitects/anki/pull/1638#discussion_r798328734
* Add timer for build_queues()
* `is_exhausted()` -> `limit_reached()`
* Move context and limits into `QueueBuilder`
This allows for moving more logic into QueueBuilder, so less passing
around of arguments. Unfortunately, some tests will require additional
work to set up.
* Fix stop condition in new_cards_by_position
* Fix gather order of new cards by deck
* Add scheduler/queue/builder/burying.rs
* Fix bad tree due to unsorted child decks
* Fix comment
* Fix `cap_new_to_review_rec()`
* Add test for new card gathering
* Always sort `child_decks()`
* Fix deck removal in `cap_new_to_review_rec()`
* Fix sibling ordering in new card gathering
* Remove limits for deck total count with children
* Add random gather order
* Remove bad sibling order handling
All routines ensure ascending order now.
Also do some other minor refactoring.
* Remove queue truncating
All routines stop now as soon as the root limit is reached.
* Move deck fetching into `QueueBuilder::new()`
* Rework new card gather and sort options
https://github.com/ankitects/anki/pull/1638#issuecomment-1032173013
* Disable new sort order choices ...
depending on set gather order.
* Use enum instead of numbers
* Ensure valid sort order setting
* Update new gather and sort order tooltips
* Warn about random insertion order with v3
* Revert "Add timer for build_queues()"
This reverts commit c9f5fc6ebe.
* Update rslib/src/storage/card/mod.rs (dae)
* minor wording tweaks to the tooltips (dae)
+ move legacy strings to bottom
+ consistent capitalization (our leech action still needs fixing,
but that will require introducing a new 'suspend card' string as the
existing one is used elsewhere as well)
* Implement custom study on backend
* Switch frontend to backend custom study
* Skip typecheck for new pb classes
* Build tag search string on backend
Also fixes escaping of special characters in tag names.
* `cram.cards` -> `cram.card_limit`
* Assign more meaningful names in `TagLimit`
* Broaden rustfmt glob
* Use `invalid_input()` helper
* Assign `FilteredDeckForUpdate` to temp var
* Implement `SearchBuilder`
* Rewrite `custom_study()` with `SearchBuilder`
* Replace match macros with `SearchBuilder`
* Remove `into_nodes_list` & `concatenate_searches`
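A hypothetical sketch of the builder idea; node and method names are assumptions, not Anki's actual `SearchBuilder` API:

```rust
enum Node {
    Deck(String),
    Tag(String),
}

#[derive(Default)]
struct SearchBuilder {
    nodes: Vec<Node>,
}

impl SearchBuilder {
    fn and(mut self, node: Node) -> Self {
        self.nodes.push(node);
        self
    }

    // write the joined search, quoting each component so special
    // characters in e.g. tag names are escaped properly
    fn write(&self) -> String {
        self.nodes
            .iter()
            .map(|n| match n {
                Node::Deck(d) => format!("deck:\"{d}\""),
                Node::Tag(t) => format!("tag:\"{t}\""),
            })
            .collect::<Vec<_>>()
            .join(" ")
    }
}
```

Usage would be along the lines of `SearchBuilder::default().and(Node::Deck("French".into())).and(Node::Tag("verb".into())).write()`, replacing the string-concatenation match macros.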
* Make hard repeat the current step's interval in v3
Except for the first step, to avoid an interval identical to Again's.
* Make Hard repeat the current step's interval in v2
* Adjust test to new Hard behaviour
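An illustrative sketch of the Hard behaviour described; the exact first-step handling is an assumption, not the scheduler's code:

```rust
fn hard_delay_secs(steps_secs: &[u32], current_idx: usize) -> u32 {
    match (steps_secs, current_idx) {
        // no steps configured shouldn't happen; fall back to a minute
        ([], _) => 60,
        // first of several steps: split the difference with the next
        // step, so Hard doesn't collapse into Again
        ([first, second, ..], 0) => (first + second) / 2,
        // a single step: pad it out instead
        ([only], 0) => only * 3 / 2,
        // later steps: simply repeat the current step's delay
        (steps, idx) => steps[idx],
    }
}
```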
* Fix steps being mistaken for seconds
* Cap steps at `u32::MAX` seconds
* Fix overflow of steps in Rust
* Prevent overflow of `IntervalKind`
* Prevent overflow in `revlog/mod.rs`
Also replace some `as` with `from` and `try_from` as is recommended to
highlight potential issues.
* Ensure v2 doesn't store overflowing revlog ivls
* Lower steps cap in deck options
Whereas large card intervals are converted to days, revlog intervals use
i32s to store large numbers of seconds.
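A sketch of the saturating conversion the cap implies (helper name is illustrative):

```rust
fn step_seconds(minutes: f32) -> u32 {
    // `as` saturates float-to-int casts in Rust, so absurdly large
    // step values clamp to u32::MAX rather than wrapping around
    (minutes * 60.0) as u32
}
```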
* Format
e5e47a31fe (r748827327)
+ switched assert_lower_middle_upper to a macro, so that when it fails,
the reported line number is the original call site, instead of one inside
the helper function
* Remove flooring in v3 scheduler code
It is no longer supposed to be an exact port of the old Python code.
* Rework v3 fuzzing
https://github.com/ankitects/anki/issues/1416#issuecomment-958208149
* Ensure length of fuzz range is larger than 1
Only for new intervals larger than 1, and while respecting the maximum review interval.
* add the beginnings of a unit test
* Clarify `fuzz_factor` doc string
* Fix Python tests for 2021 scheduler
* Fix fuzz test
1.0 is not a valid fuzz factor.
* Add tests for fuzzing in Rust
* Use range notation in fuzz factor doc
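A sketch of applying a precomputed `fuzz_factor` in [0.0, 1.0); the ±5% bounds here are illustrative, not the scheduler's exact fuzz table:

```rust
fn fuzz_bounds(interval: u32) -> (u32, u32) {
    // illustrative ±5% range with a minimum width of one day
    let delta = ((interval as f32) * 0.05).max(1.0) as u32;
    (interval.saturating_sub(delta).max(1), interval + delta)
}

fn fuzzed_interval(interval: u32, fuzz_factor: f32, maximum: u32) -> u32 {
    let (lower, upper) = fuzz_bounds(interval);
    let upper = upper.min(maximum).max(lower); // keep the range non-empty
    // map the factor onto the inclusive range; 1.0 is deliberately
    // not a valid fuzz factor, so the result never exceeds `upper`
    lower + (fuzz_factor * (upper - lower + 1) as f32) as u32
}
```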
* Strip redundant tests
`counts.learning` includes interday learning cards, so it is not
suitable to determine how many cards from the (intraday!) learning queue
are already included in the learning count when updating it.
The 'avoid showing learning card twice' logic is now only applied
when the next learning card was already due to be shown. This'll mean
there will be cases where a learning card does get shown twice near
the end, but it makes the behaviour easier to reason about, for both
us and end users.
There were a few issues going on here:
- If some operation had invalidated the queues, they were subsequently
recreated with a call to .get_queues() in the undo handling code. This
could happen after the changes to the card had already been reverted,
leading to a queue state that didn't match our expectations.
- More generally, it's not safe to assume our mutations will apply
cleanly after the queue has been rebuilt. The next card will vary
depending on the number of remaining cards when interspersing cards of
different types, and a queue-invalidating operation will have changed
the learning cutoff.
So rather than rebuilding the queues on demand, we now check that they
already exist, and were created at the time we expect. If not, we
invalidate them and skip applying the mutations, and a subsequent
refresh of the UI should rebuild the queues correctly.
As part of this change, the cutoff snapshot has been moved into the
normal answer update object.
One possible downside here is that adding a note during review may cause
a newly due learning card to appear when undoing a different review.
If this proves to be a problem, we could potentially note down the
learning cutoff and apply it when queues are rebuilt later.
Context: https://forums.ankiweb.net/t/more-cards-today-question-about-v3/12400/10
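A minimal sketch of the timestamp check described above; the types and field names are assumptions:

```rust
struct Queues {
    built_at: i64, // when these queues were built
}

struct Scheduler {
    queues: Option<Queues>,
}

impl Scheduler {
    fn maybe_apply_undo(&mut self, snapshot_time: i64, mutate: impl FnOnce(&mut Queues)) {
        if let Some(q) = &mut self.queues {
            if q.built_at == snapshot_time {
                // queues still match the snapshot we took; safe to mutate
                mutate(q);
                return;
            }
        }
        // stale or missing queues: invalidate them and skip the mutation;
        // the next UI refresh rebuilds them from scratch
        self.queues = None;
    }
}
```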
Previously, interday learning cards and reviews were gathered at the
same time in v3, with the review limit being applied to both of them. The
order cards were gathered in would change the ratio of gathered learning
cards and reviews, but as they were displayed together in a single count,
a changing ratio was not apparent, and no special handling was required
by the deck tree code.
Showing interday learning cards in the learning count, while still
applying a review limit to them, makes things more complicated, as
a changing ratio will result in different counts. The deck tree code
is not able to know which order cards will appear in, so without changes,
we would have had a situation where the deck list may show different counts
to those seen when clicking on a deck.
One way to solve this would have been to introduce a separate limit for
interday learning cards. But this would have meant users needed to
juggle two different limits, instead of having a single one that controls
total number of (non-intraday) cards shown.
Instead, the scheduler now fetches interday cards prior to reviews -
the rationale for that order is that learning cards tend to be more
fragile/urgent than reviews. The option to show learning cards
before/after/mixed with reviews still exists, but it applies only after
cards have been capped to the daily limit.
To ensure the deck tree code matches the counts the scheduler gives,
it too applies limits to interday learning cards first, and reviews
afterwards.
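A sketch of that capping order (names are illustrative):

```rust
fn cap_to_review_limit(interday_learning: u32, reviews: u32, review_limit: u32) -> (u32, u32) {
    // interday learning cards are gathered first and consume the limit...
    let interday = interday_learning.min(review_limit);
    // ...and reviews are capped to whatever remains
    let reviews = reviews.min(review_limit - interday);
    (interday, reviews)
}
```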
Interday learning cards are now counted in the learning count again,
and are no longer subject to the daily review limit.
The thinking behind the original change was that interday learning cards
are scheduled more like reviews, and counting them in the review count
would allow the learning count to focus on intraday learning - the red
number reflecting the fact that they are the most fragile memories. And
counting them together made it practical to apply the review limit
to both at once.
Since the release, there have been a number of users expecting to see
interday learning cards included in the learning count (the latest being
https://forums.ankiweb.net/t/feedback-and-a-feature-adjustment-request-for-2-1-45/12308),
and a good argument can be made for that too - they are, after all, listed
in the learning steps, and do tend to be harder than reviews. Short of
introducing another count to keep track of interday and intraday learning
separately, moving back to the old behaviour seems like the best move.
This also means it is not really practical to apply the review limit to
interday learning cards anymore, as the limit would be split between two
different numbers, and how much each number is capped would depend on
the order cards are introduced. The scheduler could figure this out, but
the deck list code does not know card order, and would need significant
changes to be able to produce numbers that matched the scheduler. And
even if we ignore implementation complexities, I think it would be more
difficult for users to reason about - the influence of the review limit
on new cards is confusing enough as it is.
The v3 scheduler will delay the final card from being shown twice in
a row, but the overdue case was being treated the same as the no-learning
case, leading to the message being hidden.
Previously we would just use 250% ease for any new card that had no
pre-configured ease, but this resulted in decks with non-standard ease
values containing "set due date" cards that don't match. To make this
somewhat more efficient, we cache deckid->ease lookups during this
operation.
Ref: <https://forums.ankiweb.net/t/set-due-date-doesnt-obey-default-ease-factor/9184>
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
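A sketch of the deckid->ease cache mentioned above; `fetch_deck_ease` stands in for the real deck-config lookup:

```rust
use std::collections::HashMap;

fn ease_for_deck(
    cache: &mut HashMap<i64, u32>,
    deck_id: i64,
    fetch_deck_ease: impl Fn(i64) -> u32,
) -> u32 {
    // only hit the config the first time we see each deck
    *cache
        .entry(deck_id)
        .or_insert_with(|| fetch_deck_ease(deck_id))
}
```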