* Fix legacy colpkg import; disable v3 import/export; add roundtrip test
The test has revealed we weren't decompressing the media files on v3
import. That's easy to fix, but means all files need decompressing
even when they already exist, which is not ideal - it would be better
to store size/checksum in the metadata instead.
* Switch media and meta to protobuf; re-enable v3 import/export
- Fixed media not being decompressed on import
- The uncompressed size and checksum are now included for each media
entry, so that we can quickly check whether a given file needs to be
extracted. We're still just doing a naive size comparison on colpkg
import at the moment, but we may want to use a checksum in the future,
and will need a checksum for apkg imports (see the sketch after this
list).
- Checksums can't be efficiently encoded in JSON, so the media list
has been switched to protobuf to reduce the space requirements.
- The meta file has been switched to protobuf as well, for consistency.
This will mean any colpkg files exported with beta7 will be
unreadable.
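As a rough sketch of the check this enables (hypothetical type and
field names - the real message lives in the backend's .proto files):

    use std::fs;
    use std::path::Path;

    // Hypothetical mirror of the protobuf media entry.
    struct MediaEntry {
        name: String,
        size: u64,     // uncompressed size
        sha1: Vec<u8>, // reserved for a future checksum comparison
    }

    // The naive size comparison described above; a checksum check
    // could be slotted in here later for apkg imports.
    fn needs_extraction(entry: &MediaEntry, media_folder: &Path) -> bool {
        match fs::metadata(media_folder.join(&entry.name)) {
            Ok(meta) => meta.len() != entry.size,
            Err(_) => true, // missing or unreadable: extract it
        }
    }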
* Avoid integer version comparisons
* Re-enable v3 test
* Apply suggestions from code review
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
* Add export_colpkg() method to Collection
More discoverable, and easier to call from unit tests
* Split import/export code out into separate folders
Currently colpkg/*.rs contain some routines that will be useful for
apkg import/export as well; in the future we can refactor them into a
separate file in the parent module.
* Return a proper error when media import fails
This tripped me up when writing the earlier unit test - I had called
the equivalent of import_colpkg()?, and it was returning a string error
that I didn't notice. In practice this should result in the same text
being shown in the UI, but just skips the tooltip.
* Automatically create media folder on import
* Move roundtrip test into separate file; check collection too
* Remove zstd version suffix
Prevents a warning shown each time Rust Analyzer is used to check the
code.
Co-authored-by: RumovZ <gp5glkw78@relay.firefox.com>
Ideally this would have been in beta 6 :-) No add-ons appear to be
using customstudy.py/taglimit.py though, so it should hopefully not be
disruptive.
In the earlier custom study changes, we didn't get around to addressing
issue #1136. Now instead of trying to determine the maximum increase
to allow (which doesn't work correctly with nested decks), we just
present the total available to the user again, and let them decide. There's
plenty of room for improvement still, but further work might be better
done once we look into decoupling deck limits from deck presets.
Tags and available cards are fetched prior to showing the dialog now,
and will show a progress dialog if things take a while.
Tags are stored in an aux var now, so they don't inflate the deck
object size.
* Add forget prompt with options
- Restore original position
- Reset reps and lapses
* Restore position when resetting for export
* Add config context to avoid passing keys
* Add routine to fetch defaults; use method-specific enum (dae)
* Keep original position by default (dae)
* Fix code completion for forget dialog (dae)
Needs to be a symbolic link to the generated file
* Replace Card.data with .original_position
* Use and update original position in v3
* Show original position in card info
* Revert restoring original position for now
* Fix pb card to/from pylib card
* Try original_position as the last pb field
* minor wording tweaks (dae)
* avoid repinning Rust deps by default
* add id_tree dependency
* Respect intermediate child limits in v3
* Test new behaviour of v3 counts
* Rework v3 queue building to respect parent limits
* Add missing did field to SQL query
* Fix `LimitTreeMap::is_exhausted()`
* Rework tree building logic
https://github.com/ankitects/anki/pull/1638#discussion_r798328734
* Add timer for build_queues()
* `is_exhausted()` -> `limit_reached()`
* Move context and limits into `QueueBuilder`
This allows for moving more logic into QueueBuilder, so less passing
around of arguments. Unfortunately, some tests will require additional
work to set up.
* Fix stop condition in new_cards_by_position
* Fix gather order of new cards by deck
* Add scheduler/queue/builder/burying.rs
* Fix bad tree due to unsorted child decks
* Fix comment
* Fix `cap_new_to_review_rec()`
* Add test for new card gathering
* Always sort `child_decks()`
* Fix deck removal in `cap_new_to_review_rec()`
* Fix sibling ordering in new card gathering
* Remove limits for deck total count with children
* Add random gather order
* Remove bad sibling order handling
All routines ensure ascending order now.
Also do some other minor refactoring.
* Remove queue truncating
All routines stop now as soon as the root limit is reached.
* Move deck fetching into `QueueBuilder::new()`
* Rework new card gather and sort options
https://github.com/ankitects/anki/pull/1638#issuecomment-1032173013
* Disable new sort order choices ...
depending on set gather order.
* Use enum instead of numbers
* Ensure valid sort order setting
* Update new gather and sort order tooltips
* Warn about random insertion order with v3
* Revert "Add timer for build_queues()"
This reverts commit c9f5fc6ebe.
* Update rslib/src/storage/card/mod.rs (dae)
* minor wording tweaks to the tooltips (dae)
+ move legacy strings to bottom
+ consistent capitalization (our leech action still needs fixing,
but that will require introducing a new 'suspend card' string as the
existing one is used elsewhere as well)
* Implement custom study on backend
* Switch frontend to backend custom study
* Skip typecheck for new pb classes
* Build tag search string on backend
Also fixes escaping of special characters in tag names.
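Roughly what the backend has to do when quoting a tag for search (a
sketch, not the actual routine; the exact set of characters with
special meaning is defined by Anki's search syntax):

    fn escape_tag(tag: &str) -> String {
        let mut escaped = String::with_capacity(tag.len() + 8);
        for c in tag.chars() {
            // assumed special characters; see the search grammar for
            // the authoritative list
            if matches!(c, '\\' | '"' | '*' | '_') {
                escaped.push('\\');
            }
            escaped.push(c);
        }
        format!("tag:\"{}\"", escaped)
    }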
* `cram.cards` -> `cram.card_limit`
* Assign more meaningful names in `TagLimit`
* Broaden rustfmt glob
* Use `invalid_input()` helper
* Assign `FilteredDeckForUpdate` to temp var
* Implement `SearchBuilder`
* Rewrite `custom_study()` with `SearchBuilder`
* Replace match macros with `SearchBuilder`
* Remove `into_nodes_list` & `concatenate_searches`
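The idea, in hypothetical form (method and type names here are
illustrative, not the actual rslib definitions): build typed search
nodes and render them in one place, rather than concatenating strings
at each call site.

    enum Node {
        Tag(String),
        DeckWithChildren(String),
    }

    struct SearchBuilder(Vec<Node>);

    impl SearchBuilder {
        fn new() -> Self {
            SearchBuilder(Vec::new())
        }
        fn and(mut self, node: Node) -> Self {
            self.0.push(node);
            self
        }
        fn write(&self) -> String {
            // quoting/escaping lives here, in one place
            self.0
                .iter()
                .map(|n| match n {
                    Node::Tag(t) => format!("tag:\"{}\"", t),
                    Node::DeckWithChildren(d) => format!("deck:\"{}\"", d),
                })
                .collect::<Vec<_>>()
                .join(" ")
        }
    }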
* Make hard repeat the current step's interval in v3
Except for the first step, to avoid an interval identical to Again's.
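A sketch of the resulting logic, assuming a simplified list of learning
steps in seconds; the first-step fallback (averaging the first two
steps, or scaling a lone step) is an assumption of this sketch:

    fn hard_interval_secs(steps: &[u32], current_idx: usize) -> u32 {
        match (steps, current_idx) {
            // no steps configured: fall back to a minute (assumption)
            ([], _) => 60,
            // on the first step, repeating it would match Again
            ([single], 0) => single.saturating_mul(3) / 2,
            ([first, second, ..], 0) => {
                ((*first as u64 + *second as u64) / 2) as u32
            }
            // otherwise Hard repeats the current step's interval
            _ => steps.get(current_idx).copied().unwrap_or(60),
        }
    }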
* Make Hard repeat the current step's interval in v2
* Adjust test to new Hard behaviour
* Fix steps being mistaken for seconds
* Cap steps at `u32::MAX` seconds
* Fix overflow of steps in Rust
* Prevent overflow of `IntervalKind`
* Prevent overflow in `revlog/mod.rs`
Also replace some `as` with `from` and `try_from` as is recommended to
highlight potential issues.
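In practice that means replacing silent truncation with checked
conversions, e.g.:

    use std::convert::TryFrom;

    fn secs_to_stored_ivl(secs: i64) -> u32 {
        // `as` would silently wrap; try_from surfaces the out-of-range
        // case so we can clamp explicitly
        u32::try_from(secs.max(0)).unwrap_or(u32::MAX)
    }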
* Ensure v2 doesn't store overflowing revlog ivls
* Lower steps cap in deck options
Whereas large card intervals are converted to days, revlog intervals use
i32s to store large numbers of seconds.
* Format
e5e47a31fe (r748827327)
+ switched assert_lower_middle_upper to a macro, so that when it fails,
the reported line number is the original call site, instead of one inside
the helper function
* Remove flooring in v3 scheduler code
It is no longer supposed to be an exact port of the old Python code.
* Rework v3 fuzzing
https://github.com/ankitects/anki/issues/1416#issuecomment-958208149
* Ensure length of fuzz range is larger than 1
Only for new intervals larger than 1, and while respecting the maximum
review interval.
* add the beginnings of a unit test
* Clarify `fuzz_factor` doc string
* Fix Python tests for 2021 scheduler
* Fix fuzz test
1.0 is not a valid fuzz factor.
* Add tests for fuzzing in Rust
* Use range notation in fuzz factor doc
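The constraints, as a sketch (the delta calculation here is
illustrative; only the constraints match the real code):

    fn with_fuzz(interval: u32, fuzz_factor: f32, maximum_interval: u32) -> u32 {
        let interval = interval.min(maximum_interval);
        if interval <= 1 {
            return interval; // no fuzz for 0/1-day intervals
        }
        let delta = (interval as f32 * 0.05).max(1.0); // illustrative
        let mut lower = (interval as f32 - delta).max(2.0);
        let upper = (interval as f32 + delta).min(maximum_interval as f32);
        // keep the range longer than one day where the cap allows it
        if upper - lower < 1.0 {
            lower = (upper - 1.0).max(2.0);
        }
        // fuzz_factor is in [0.0, 1.0); 1.0 itself is invalid
        (lower + fuzz_factor * (upper - lower)).round() as u32
    }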
* Strip redundant tests
`counts.learning` includes interday learning cards, so it is not
suitable to determine how many cards from the (intraday!) learning queue
are already included in the learning count when updating it.
The 'avoid showing learning card twice' logic is now only applied
when the next learning card was already due to be shown. This'll mean
there will be cases where a learning card does get shown twice near
the end, but it makes the behaviour easier to reason about, for both
us and end users.
There were a few issues going on here:
- If some operation had invalidated the queues, they were subsequently
recreated with a call to .get_queues() in the undo handling code. This
could happen after the changes to the card had already been reverted,
leading to a queue state that didn't match our expectations.
- More generally, it's not safe to assume our mutations will apply
cleanly after the queue has been rebuilt. The next card will vary
depending on the number of remaining cards when interspersing cards of
different types, and a queue-invalidating operation will have changed
the learning cutoff.
So rather than rebuilding the queues on demand, we now check that they
already exist, and were created at the time we expect. If not, we
invalidate them and skip applying the mutations, and a subsequent
refresh of the UI should rebuild the queues correctly.
As part of this change, the cutoff snapshot has been moved into the
normal answer update object.
One possible downside here is that adding a note during review may cause
a newly due learning card to appear when undoing a different review.
If this proves to be a problem, we could potentially note down the
learning cutoff and apply it when queues are rebuilt later.
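A sketch of the check (struct and field names are assumptions; the real
state lives in the collection's queue handling):

    struct Queues {
        built_at: u64, // snapshot taken when the queues were built
    }

    struct Collection {
        queues: Option<Queues>,
    }

    impl Collection {
        fn queues_valid_for_undo(&mut self, expected_built_at: u64) -> bool {
            let valid = matches!(
                &self.queues,
                Some(q) if q.built_at == expected_built_at
            );
            if !valid {
                // stale or missing: drop them and skip the mutations;
                // a later UI refresh rebuilds from scratch
                self.queues = None;
            }
            valid
        }
    }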
Context: https://forums.ankiweb.net/t/more-cards-today-question-about-v3/12400/10
Previously, interday learning cards and reviews were gathered at the
same time in v3, with the review limit being applied to both of them. The
order cards were gathered in would change the ratio of gathered learning
cards and reviews, but as they were displayed together in a single count,
a changing ratio was not apparent, and no special handling was required
by the deck tree code.
Showing interday learning cards in the learning count, while still
applying a review limit to them, makes things more complicated, as
a changing ratio will result in different counts. The deck tree code
is not able to know which order cards will appear in, so without changes,
we would have had a situation where the deck list may show different counts
to those seen when clicking on a deck.
One way to solve this would have been to introduce a separate limit for
interday learning cards. But this would have meant users needed to
juggle two different limits, instead of having a single one that controls
total number of (non-intraday) cards shown.
Instead, the scheduler now fetches interday cards prior to reviews -
the rationale for that order is that learning cards tend to be more
fragile/urgent than reviews. The option to show learning cards
before/after/mixed with reviews still exists, but it applies only after
cards have been capped to the daily limit.
To ensure the deck tree code matches the counts the scheduler gives,
it too applies limits to interday learning cards first, and reviews
afterwards.
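The capping order, reduced to its core (a simplified sketch; because
the queue builder and the deck tree code apply the same sequence, their
counts agree):

    /// Interday learning cards consume the review limit first; reviews
    /// get whatever remains.
    fn split_review_limit(
        review_limit: u32,
        interday_learning: u32,
        reviews: u32,
    ) -> (u32, u32) {
        let learning_shown = interday_learning.min(review_limit);
        let reviews_shown = reviews.min(review_limit - learning_shown);
        (learning_shown, reviews_shown)
    }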
Interday learning cards are now counted in the learning count again,
and are no longer subject to the daily review limit.
The thinking behind the original change was that interday learning cards
are scheduled more like reviews, and counting them in the review count
would allow the learning count to focus on intraday learning - the red
number reflecting the fact that they are the most fragile memories. And
counting them together made it practical to apply the review limit
to both at once.
Since the release, there have been a number of users expecting to see
interday learning cards included in the learning count (the latest being
https://forums.ankiweb.net/t/feedback-and-a-feature-adjustment-request-for-2-1-45/12308),
and a good argument can be made for that too - they are, after all, listed
in the learning steps, and do tend to be harder than reviews. Short of
introducing another count to keep track of interday and intraday learning
separately, moving back to the old behaviour seems like the best move.
This also means it is not really practical to apply the review limit to
interday learning cards anymore, as the limit would be split between two
different numbers, and how much each number is capped would depend on
the order cards are introduced. The scheduler could figure this out, but
the deck list code does not know card order, and would need significant
changes to be able to produce numbers that matched the scheduler. And
even if we ignore implementation complexities, I think it would be more
difficult for users to reason about - the influence of the review limit
on new cards is confusing enough as it is.
The v3 scheduler will delay the final card from being shown twice in
a row, but the overdue case was being treated the same as the no-learning
case, leading to the message being hidden.
Previously we would just use 250% ease for any new card that had no
pre-configured ease, but this meant decks with non-standard ease values
would end up containing "set due date" cards that don't match. In order
to make this somewhat more efficient, we cache deckid->ease lookups
during this operation.
Ref: <https://forums.ankiweb.net/t/set-due-date-doesnt-obey-default-ease-factor/9184>
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
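The caching amounts to memoizing the per-deck ease lookup (a sketch
with assumed types; the real code reads the deck's options):

    use std::collections::HashMap;

    type DeckId = i64;

    fn ease_for_deck(
        cache: &mut HashMap<DeckId, u32>,
        deck_id: DeckId,
        fetch_from_config: impl Fn(DeckId) -> u32,
    ) -> u32 {
        // repeated cards from the same deck skip the config lookup
        *cache.entry(deck_id).or_insert_with(|| fetch_from_config(deck_id))
    }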
This makes the review backlog case more expensive, since we end up
shuffling items outside the daily limit, but for the common case it's
about the same speed, and it means we don't need two separate sorting
steps. New cards remain handled the same way, since a backlog
is common there.
Also ensures that interday learning cards honor the deck sorting, and
that the non-default sort orders shuffle at the end.
- The "unbury deck" option was broken, as it was ignoring child
decks. It would be nice if we could use active_decks instead, but
plugging that into the old scheduler without breaking undo seems a bit
tricky.
- Remove the implicit From impl for decks, so we're forced to think
about whether we want child decks or not.
- Daily limits are no longer inherited - each deck limits its own
cards, and the selected deck enforces a maximum limit (see the sketch
after this list).
- Fetching of review cards now uses a single query, and sorts in advance.
In collections with a large number of overdue cards and decks, this is
faster than iterating over each deck in turn.
- Include interday learning count in review count & review limit, and
allow them to be buried.
- Warn when parent review limit is lower than child deck in deck options.
- Cap the new card limit to the review limit.
- Add option to control whether new card fetching short-circuits.
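A sketch of the limit change mentioned above (names are illustrative):

    /// Each deck caps its own cards; the selected deck's limit acts as
    /// an overall maximum across the whole selection.
    fn allowed_from_deck(
        deck_limit: u32,
        deck_done_today: u32,
        selected_limit: u32,
        total_done_today: u32,
    ) -> u32 {
        let deck_remaining = deck_limit.saturating_sub(deck_done_today);
        let total_remaining = selected_limit.saturating_sub(total_done_today);
        deck_remaining.min(total_remaining)
    }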
Instead of using a separate undo queue, the code now defers checking for
newly-due learning cards until the answering stage, and logs the updated
cutoff time as an undoable change, so that any newly-due learning cards
won't appear instead of a new/review card that was just undone.
Queue redo now uses a similar approach to undo, instead of rebuilding the
queues.
The original rationale was avoiding a possible O(n) insertion if
the learning card was due outside the cutoff, but the increased code
complexity doesn't seem worth it, given that learning cards will
rarely grow above 1000.
Also added a currently-disabled test that demonstrates the current undo
handling behaviour is yielding incorrect counts; that will be reworked
in the next commit, and this change will make that easier.
- split new card fetch order and subsequent sort order; use latter
when building queues
- default to spacing siblings when burying is off, with options to
show each sibling in turn, and shuffle the fetched cards
The bury new/review flags are now pulled from each card's home deck,
instead of using a global setting that had not been hooked up. This
unfortunately means we need to fetch the map of all decks up front, as
we need to be able to look up a deck configuration for cards that are
in filtered decks.
Fixes a "card was modified" error caused by cards being buried during
review, when they weren't removed up-front.
Avoids duplicate work, and is a step towards allowing the next
states to be modified by third-party code.
Also:
- fixed incorrect underlined count, due to reviews being labeled as
learning cards
- fixed reviewer not refreshing when undoing a test review, by splitting
up backend queue rebuilding from frontend reviewer refresh
- moved answering into a CollectionOp
Allows add-on authors to define their own label for a group of undoable
operations. For example:
    def mark_and_bury(
        *,
        parent: QWidget,
        card_id: CardId,
    ) -> CollectionOp[OpChanges]:
        def op(col: Collection) -> OpChanges:
            target = col.add_custom_undo_entry("Mark and Bury")
            col.sched.bury_cards([card_id])
            card = col.get_card(card_id)
            col.tags.bulk_add(note_ids=[card.nid], tags="marked")
            return col.merge_undo_entries(target)

        return CollectionOp(parent, op)
The .add_custom_undo_entry() is for adding your own custom actions.
When extending a standard Anki action, instead store `target =
col.undo_status().last_step` after executing the standard operation.
This started out as a bigger refactor that required a separate
.commit_undoable() call to be run after each operation, instead of
having each operation return changes directly. But that proved to be
somewhat cumbersome in unit tests, and ran the risk of unexpected
behaviour if the caller invoked an operation without remembering to
finalize it.
The deck name must be constructed by calling associated functions of
NativeDeckName, unless the name is guaranteed to be a valid machine
name (like "Default").
NativeDeckName exposes methods to mutate the deck name and return
the human name.
The storage routines take &strs, but those should be slices of
NativeDeckNames to ensure machine form and normalization.
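Illustratively (a minimal sketch of the wrapper's shape; normalization
and the other helpers are omitted):

    /// Machine form separates components with '\x1f'; the human form
    /// uses "::".
    struct NativeDeckName(String);

    impl NativeDeckName {
        fn from_human_name(name: &str) -> Self {
            NativeDeckName(name.replace("::", "\u{1f}"))
        }

        fn human_name(&self) -> String {
            self.0.replace('\u{1f}', "::")
        }
    }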
The backend knows exactly which op has executed, and it saves us having
to re-implement this logic on each client.
Fixes the browser table refreshing when toggling decks.
So, this is fun. Apparently "DeckId" is considered preferable to the
"DeckID" were were using until now, and the latest clippy will start
warning about it. We could of course disable the warning, but probably
better to bite the bullet and switch to the naming that's generally
considered best.
Instead of generating a fluent.proto file with a giant enum, create
a .json file representing the translations that downstream consumers
can use for code generation.
This enables the generation of a separate method for each translation,
with a docstring that shows the actual text, and any required arguments
listed in the function signature.
The codebase is still using the old enum for now; updating it will need
to come in future commits, and the old enum will need to be kept
around, as add-ons are referencing it.
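For illustration, a generated method might look roughly like this
(hypothetical key, text and signature):

    struct I18n;

    impl I18n {
        /// studied-cards-today: "Studied {$count} cards today."
        fn studied_cards_today(&self, count: u32) -> String {
            // the real generated code dispatches through Fluent; this
            // stand-in just shows the shape of the API
            format!("Studied {} cards today.", count)
        }
    }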
Other changes:
- move translation code into a separate crate
- store the translations on a per-file/module basis, which will allow
us to avoid sending 1000+ strings on each JS page load in the future
- drop the undocumented support for external .ftl files, which we
weren't using
- duplicate strings in translation files are now checked for at build
time
- fix i18n test failing when run outside Bazel
- drop slog dependency in i18n module
- Filtered deck creation now happens as an atomic operation, and is
undoable.
- The logic for initial search text, normalizing searches and so on
has been pushed into the backend.
- Use protobuf to pass the filtered deck to the updated dialog, so
we don't need to deal with untyped JSON.
- Change the "revise your search?" prompt to be a simple info box -
user has access to cancel and build buttons, and doesn't need a separate
prompt. Tweak the wording so the 'show excluded' button should be more
obvious.
- Filtered decks have a time appended to them instead of a number,
primarily because it's easier to implement. No objections going back to
the old behaviour if someone wants to contribute a clean patch.
The standard de-duplication will happen if two decks are created in the
same minute with the same name.
- Tweak the default sort order, and start with two searches. The UI
will still hide the second search by default, but by starting with two,
the frontend doesn't need logic for creating the starting text.
- Search errors now have their own error type, instead of using
InvalidInput, as that was intended mainly for bad API calls. The markdown
conversion is done when the error is converted from the backend, allowing
errors to be printed as a string without any special handling by the calling
code.
TODO: when building a new filtered deck, update_active() is clobbering
the undo log when the overview is refreshed