* Add crate snafu
* Replace all inline structs in AnkiError
* Derive Snafu on AnkiError
* Use snafu for card type errors
* Use snafu's Whatever error for InvalidInput
* Use snafu for NotFoundError and improve message
* Use snafu for FileIoError to attach context
Remove IoError.
Add some context-attaching helpers to replace code returning bare
io::Errors.
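A minimal sketch of the pattern, with illustrative names (rslib's actual FileIoError and helpers differ in detail):
```rust
use std::{fs, io, path::{Path, PathBuf}};
use snafu::{ResultExt, Snafu};

// Illustrative error with attached path context; field and selector names assumed.
#[derive(Debug, Snafu)]
#[snafu(display("failed to read {}: {source}", path.display()))]
pub struct ReadFileError {
    path: PathBuf,
    source: io::Error,
}

// Context-attaching helper replacing code that returned bare io::Errors.
pub fn read_file(path: &Path) -> Result<Vec<u8>, ReadFileError> {
    fs::read(path).context(ReadFileSnafu { path })
}
```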
* Add more context-attaching io helpers
* Add message, context and backtrace to new snafus
* Utilize error context and backtrace on frontend
* Rename LocalizedError -> BackendError.
* Remove DocumentedError.
* Have all backend exceptions inherit BackendError.
* Rename localized(_description) -> message
* Remove accidentally committed experimental trait
* invalid_input_context -> ok_or_invalid
* ensure_valid_input! -> require!
* Always return `Err` from `invalid_input!`
Instead of a Result to unwrap, the macro accepts a source error now.
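Roughly, the macros now behave like this sketch (`InvalidInputError` is a stand-in type; the real versions also capture context and backtrace):
```rust
#[derive(Debug)]
pub struct InvalidInputError {
    message: String,
    source: Option<std::io::Error>, // stand-in for a boxed source error
}

macro_rules! invalid_input {
    // always returns Err, rather than yielding a Result for the caller to unwrap
    ($msg:expr) => {
        return Err(InvalidInputError { message: $msg.into(), source: None })
    };
    // the macro can accept a source error now
    ($source:expr, $msg:expr) => {
        return Err(InvalidInputError { message: $msg.into(), source: Some($source) })
    };
}

macro_rules! require {
    ($cond:expr, $msg:expr) => {
        if !$cond {
            invalid_input!($msg);
        }
    };
}

fn set_limit(n: i32) -> Result<i32, InvalidInputError> {
    require!(n >= 0, "limit must be non-negative");
    Ok(n)
}
```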
* new_tempfile_in_parent -> new_tempfile_in_parent_of
* ok_or_not_found -> or_not_found
* ok_or_invalid -> or_invalid
* Add crate convert_case
* Use unqualified lowercase type name
* Remove uses of snafu::ensure
* Allow public construction of InvalidInputErrors (dae)
Needed to port the AnkiDroid changes.
* Make into_protobuf() public (dae)
Also required for AnkiDroid. Not sure why it worked previously - possible
bug in older Rust version?
* Keep filtered decks when importing apkg
If all original decks exist and scheduling is included.
* Create missing decks from csv
* Export original decks if with_scheduling
* Also remap original deck ids on import
* Update imported filtered decks
* Fix meta column being mapped to tags
* Fix ids in csv deck and notetype columns
Note: this implies names that parse as an i64 will be treated as ids,
likely resulting in the intended deck/notetype not being found.
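A sketch of the ambiguity (names hypothetical):
```rust
// A column value that parses as an i64 is treated as an id, so a deck literally
// named "12345" would be looked up by id instead of by name.
enum DeckSpec<'a> {
    Id(i64),
    Name(&'a str),
}

fn parse_deck_column(value: &str) -> DeckSpec<'_> {
    match value.parse::<i64>() {
        Ok(id) => DeckSpec::Id(id),
        Err(_) => DeckSpec::Name(value),
    }
}
```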
* Check for scheduling with revlog and deck configs
Might help with cases in which scheduling was included, but all cards
are new. In such a case, the filtered deck should not be converted.
* Fix duplicate with same GUID being created
* Remove redundant `distinct`s from sql query
* Match notes by _either_ guid _or_ first field
* Refactor to emphasise GUID/first field distinction
* Export default deck and config if with scheduling
* Fix default deck being exported if it's a parent
* Add card meta for persisting custom scheduling state
* Rename meta -> custom_data
* Enforce limits on size of custom data
Large values will slow down table scans of the cards table, and it's
easier to be strict now and possibly relax things in the future than
the opposite.
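A sketch of such a guard (the exact limit and error type here are assumptions):
```rust
// Reject oversized custom data up front; the limit value is illustrative only.
const CUSTOM_DATA_MAX_BYTES: usize = 100;

fn validate_custom_data(json: &str) -> Result<(), String> {
    if json.len() > CUSTOM_DATA_MAX_BYTES {
        return Err(format!("custom data must not exceed {CUSTOM_DATA_MAX_BYTES} bytes"));
    }
    Ok(())
}
```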
* Pack card states and customData into a single message
+ default customData to empty if it can't be parsed
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
* Adjust remaining steps after config update
* Handle relearning steps separately
Also refactor a lot.
* Also adjust remaining steps after deck change
* Test step adjustment after config update
* Fix `SearchBuilder::(re)learning_cards()`
* Fix step adjustment after deck change
* Test step adjustment after deck change
* Fix test name
* Readjust remaining steps according to last delay
Also atomize tests and add some tooling.
* Fix footer moving upwards
* Fix column detection
Was broken because escaped line breaks were not considered.
Also removes delimiter detection on `#columns:` line. User must use tabs
or set delimiter beforehand.
* Add CSV preview
* Parse `#tags column:`
* Optionally export deck and notetype with CSV
* Avoid clones in CSV export
* Prevent bottom of page appearing under footer (dae)
* Increase padding to 1em (dae)
With 0.5em, when a vertical scrollbar is shown, it sits right next to
the right edge of the content, making it look like there's no right
margin.
* Experimental changes to make table fit+scroll (dae)
- limit individual cells to 15em, and show ellipses when truncated
- limit total table width to body width, so that inner table is shown
with scrollbar
- use class rather than id - ids are bad practice in Svelte components,
as more than one may be displayed on a single page
* Skip importing foreign notes with filtered decks
Were implicitly imported into the default deck before.
Also some refactoring to fetch deck ids and names beforehand.
* Hide spacer below hidden field mapping
* Fix guid being replaced when updating note
* Fix dupe identity check
Canonify tags before checking if dupe is identical, but only add update
tags later if appropriate.
* Fix deck export for notes with missing card 1
* Fix note lines starting with `#`
csv crate doesn't support escaping a leading comment char. :(
* Support import/export of guids
* Strip HTML from preview rows
* Fix initially set deck if current is filtered
* Make isHtml toggle reactive
* Fix `html_to_text_line()` stripping sound names
* Tweak export option labels
* Switch to patched rust-csv fork
Fixes writing lines starting with `#`, so revert 5ece10ad05.
* List column options with first column field
* Fix flag for exports with HTML stripped
* Add crate csv
* Add start of csv importing on backend
* Add Mnemosyne serializer
* Add csv and json importing on backend
* Add plaintext importing on frontend
* Add csv metadata extraction on backend
* Add csv importing with GUI
* Fix missing dfa file in build
Added compile_data_attr, then re-ran cargo/update.py.
* Don't use doubly buffered reader in csv
* Escape HTML entities if CSV is not HTML
Also use name 'is_html' consistently.
* Use decimal number as foreign ease (like '2.5')
* ForeignCard.ivl → ForeignCard.interval
* Only allow fixed set of CSV delimiters
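Something along these lines; the exact variant set is an assumption:
```rust
/// Only a fixed set of delimiters is accepted, rather than arbitrary bytes.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Delimiter {
    Comma,
    Semicolon,
    Tab,
    Space,
    Pipe,
    Colon,
}

impl Delimiter {
    fn byte(self) -> u8 {
        match self {
            Delimiter::Comma => b',',
            Delimiter::Semicolon => b';',
            Delimiter::Tab => b'\t',
            Delimiter::Space => b' ',
            Delimiter::Pipe => b'|',
            Delimiter::Colon => b':',
        }
    }
}
```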
* Map timestamp of ForeignCard to native due time
* Don't trim CSV records
* Document use of empty strings for defaults
* Avoid creating CardGenContexts for every note
This requires CardGenContext to be generic, so it works with both an
owned and a borrowed notetype.
* Show all accepted file types in import file picker
* Add import_json_file()
* factor → ease_factor
* delimter_from_value → delimiter_from_value
* Map columns to fields, not the other way around
* Fallback to current config for csv metadata
* Add start of new import csv screen
* Temporary fix for compilation issue on Linux/Mac
* Disable jest bazel action for import-csv
Jest fails with an error code if no tests are available, but this would
not be noticeable on Windows as Jest is not run there.
* Fix field mapping issue
* Revert "Temporary fix for compilation issue on Linux/Mac"
This reverts commit 21f8a26140.
* Add HtmlSwitch and move Switch to components
* Fix spacing and make selectors consistent
* Fix shortcut tooltip
* Place import button at the top with path
* Fix meta column indices
* Remove NotetypeForString
* Fix queue and type of foreign cards
* Support different dupe resolution strategies
* Allow dupe resolution selection when importing CSV
* Test import of unnormalized text
Close #1863.
* Fix logging of foreign notes
* Implement CSV exports
* Use db_scalar() in notes_table_len()
* Rework CSV metadata
- Notetypes and decks are either defined by a global id or by a column.
- If a notetype id is provided, its field map must also be specified.
- If a notetype column is provided, fields are now mapped by index
instead of name at import time. So the first non-meta column is used for
the first field of every note, regardless of notetype. This makes
importing easier and should improve compatibility with files without a
notetype column.
- Ensure first field can be mapped to a column.
- Meta columns must be defined as `#[meta name]:[column index]` instead
of in the `#columns` tag.
- Column labels contain the raw names defined by the file and must be
prettified by the frontend.
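A sketch of parsing the meta line format described above (helper name assumed):
```rust
/// Parse a line like "#deck column:3" into ("deck column", 3).
/// Column indices are 1-based (as of a later commit in this list).
fn parse_meta_line(line: &str) -> Option<(&str, usize)> {
    let (name, index) = line.strip_prefix('#')?.rsplit_once(':')?;
    Some((name.trim(), index.trim().parse().ok()?))
}
```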
* Adjust frontend to new backend column mapping
* Add force flags for is_html and delimiter
* Detect if CSV is HTML by field content
* Update dupe resolution labels
* Simplify selectors
* Fix coalescence of oneofs in TS
* Disable meta columns from selection
Plus a lot of refactoring.
* Make import button stick to the bottom
* Write delimiter and html flag into csv
* Refetch field map after notetype change
* Fix log labels for csv import
* Log notes whose deck/notetype was missing
* Fix hiding of empty log queues
* Implement adding tags to all notes of a csv
* Fix dupe resolution not being set in log
* Implement adding tags to updated notes of a csv
* Check first note field is not empty
* Temporary fix for build on Linux/Mac
* Fix inverted html check (dae)
* Remove unused ftl string
* Delimiter → Separator
* Remove commented-out line
* Don't accept .json files
* Tweak tag ftl strings
* Remove redundant blur call
* Strip sound and add spaces in csv export
* Export HTML by default
* Fix unset deck in Mnemosyne import
Also accept both numbers and strings for notetypes and decks in JSON.
* Make DupeResolution::Update the default
* Fix missing dot in extension
* Make column indices 1-based
* Remove StickContainer from TagEditor
Fixes line breaking, border and z index on ImportCsvPage.
* Assign different key combos to tag editors
* Log all updated duplicates
Add a log field for the true number of found notes.
* Show identical notes as skipped
* Split tag-editor into separate ts module (dae)
* Add progress for CSV export
* Add progress for text import
* Tidy-ups after tag-editor split (dae)
- import-csv no longer depends on editor
- remove some commented lines
* Add apkg export on backend
* Filter out missing media-paths at write time
* Make TagMatcher::new() infallible
* Gather export data instead of copying directly
* Revert changes to rslib/src/tags/
* Reuse filename_is_safe/check_filename_safe()
* Accept func to produce MediaIter in export_apkg()
* Only store file folder once in MediaIter
* Use temporary tables for gathering
export_apkg() now accepts a search instead of a deck id. Decks are
gathered according to the matched notes' cards.
* Use schedule_as_new() to reset cards
* ExportData → ExchangeData
* Ignore ascii case when filtering system tags
* search_notes_cards_into_table → search_cards_of_notes_into_table
* Start on apkg importing on backend
* Fix due dates in days for apkg export
* Refactor import-export/package
- Move media and meta code into appropriate modules.
- Normalize/check for normalization when deserializing media entries.
* Add SafeMediaEntry for deserialized MediaEntries
* Prepare media based on checksums
- Ensure all existing media files are hashed.
- Hash incoming files during preparation to detect conflicts.
- Uniquify names of conflicting files with hash (not notetype id).
- Mark media files as used while importing notes.
- Finally copy used media.
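A sketch of the hash-based uniquification (helper name assumed; a later commit aliases `[u8; 20]` as `Sha1Hash`):
```rust
type Sha1Hash = [u8; 20];

/// Append the content hash before the extension so conflicting names stay unique.
fn uniquified_name(fname: &str, hash: &Sha1Hash) -> String {
    let hex: String = hash.iter().map(|b| format!("{b:02x}")).collect();
    match fname.rsplit_once('.') {
        Some((stem, ext)) => format!("{stem}-{hex}.{ext}"),
        None => format!("{fname}-{hex}"),
    }
}
```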
* Handle encoding in `replace_media_refs()`
* Add trait to keep down cow boilerplate
* Add notetypes immediately instead of preparing
* Move target_col into Context
* Add notes immediately instead of preparing
* Note id, not guid of conflicting notes
* Add import_decks()
* decks_configs → deck_configs
* Add import_deck_configs()
* Add import_cards(), import_revlog()
* Use dyn instead of generic for media_fn
Otherwise, would have to pass None with type annotation in the default
case.
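The point in miniature:
```rust
// With a generic parameter, passing None needs a turbofish to pick a type;
// with a trait object, a plain None suffices.
fn with_generic<F: FnMut(&str)>(media_fn: Option<F>) {
    let _ = media_fn;
}

fn with_dyn(media_fn: Option<&mut dyn FnMut(&str)>) {
    let _ = media_fn;
}

fn main() {
    with_generic(None::<fn(&str)>); // type annotation required
    with_dyn(None); // just works
}
```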
* Fix signature of import_apkg()
* Fix search_cards_of_notes_into_table()
* Test new functions in text.rs
* Add roundtrip test for apkg (stub)
* Keep source id of imported cards (or skip)
* Keep source ids of imported revlog (or skip)
* Try to keep source ids of imported notes
* Make adding notetype with id undoable
* Wrap apkg import in transaction
* Keep source ids of imported deck configs (or skip)
* Handle card due dates and original due/did
* Fix importing cards/revlog
Card ids are manually uniquified.
* Factor out card importing
* Refactor card and revlog importing
* Factor out card importing
Also handle missing parents.
* Factor out note importing
* Factor out media importing
* Maybe upgrade scheduler of apkg
* Fix parent deck gathering
* Unconditionally import static media
* Fix deck importing edge cases
Test those edge cases, and add some global test helpers.
* Test note importing
* Let import_apkg() take a progress func
* Expand roundtrip apkg test
* Use fat pointer to avoid propagating generics
* Fix progress_fn type
* Expose apkg export/import on backend
* Return note log when importing apkg
* Fix archived collection name on apkg import
* Add CollectionOpWithBackendProgress
* Fix wrong Interrupted Exception being checked
* Add ClosedCollectionOp
* Add note ids to log and strip HTML
* Update progress when checking incoming media too
* Conditionally enable new importing in GUI
* Fix all_checksums() for media import
Entries of deleted files are nulled, not removed.
* Make apkg exporting on backend abortable
* Return number of notes imported from apkg
* Fix exception printing for QueryOp as well
* Add QueryOpWithBackendProgress
Also support backend exporting progress.
* Expose new apkg and colpkg exporting
* Open transaction in insert_data()
Was slowing down exporting by several orders of magnitude.
* Handle zstd-compressed apkg
* Add legacy arg to ExportAnkiPackage
Currently not exposed on the frontend
* Remove unused import in proto file
* Add symlink for typechecking of import_export_pb2
* Avoid kwargs in pb message creation, so typechecking is not lost
Protobuf's behaviour is rather subtle and I had to dig through the docs
to figure it out: set a field on a submessage to automatically assign
the submessage to the parent, or call SetInParent() to persist a default
version of the field you specified.
* Avoid re-exporting protobuf msgs we only use internally
* Stop after one test failure
mypy often fails much faster than pylint
* Avoid an extra allocation when extracting media checksums
* Update progress after prepare_media() finishes
Otherwise the bulk of the import ends up being shown as "Checked: 0"
in the progress window.
* Show progress of note imports
Note import is the slowest part, so showing progress here makes the UI
feel more responsive.
* Reset filtered decks at import time
Before this change, filtered decks exported with scheduling remained
filtered on import, and maybe_remove_from_filtered_deck() moved cards
into them as their home deck, leading to errors during review.
We may still want to provide a way to preserve filtered decks on import,
but to do that we'll need to ensure we don't rewrite the home decks of
cards, and we'll need to ensure the home decks are included as part of
the import (or give an error if they're not).
https://github.com/ankitects/anki/pull/1743/files#r839346423
* Fix a corner-case where due dates were shifted by a day
This issue existed in the old Python code as well. We need to include
the user's UTC offset in the exported file, or days_elapsed falls back
on the v1 cutoff calculation, which may be a day earlier or later than
the v2 calculation.
* Log conflicting note in remapped nt case
* take_fields() → into_fields()
* Alias `[u8; 20]` with `Sha1Hash`
* Truncate logged fields
* Rework apkg note import tests
- Use macros for more helpful errors.
- Split monolith into unit tests.
- Fix some unknown error with the previous test along the way.
(Was failing after 969484de4388d225c9f17d94534b3ba0094c3568.)
* Fix sorting of imported decks
Also adjust the test, so it fails without the patch. It was only passing
before, because the parent deck happened to come before the
inconsistently capitalised child alphabetically. But we want all parent
decks to be imported before their child decks, so their children can
adopt their capitalisation.
* target[_id]s → existing_card[_id]s
* export_collection_extracting_media() → export_into_collection_file()
* target_already_exists → card_ordinal_already_exists
* Add search_cards_of_notes_into_table.sql
* Improve type of apkg export selector/limit
* Remove redundant call to mod_schema()
* Parent tooltips to mw
* Fix a crash when truncating note text
String::truncate() is a bit of a footgun, and I've hit this before
too :-)
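The footgun: `String::truncate()` panics when the byte index is not on a char boundary. A safe sketch:
```rust
/// `String::truncate()` panics if `max` is not on a UTF-8 character boundary,
/// so walk back to the nearest boundary first.
fn truncate_to_char_boundary(s: &mut String, mut max: usize) {
    if max >= s.len() {
        return;
    }
    while !s.is_char_boundary(max) {
        max -= 1;
    }
    s.truncate(max);
}
```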
* Remove ExportLimit in favour of separate classes
* Remove OpWithBackendProgress and ClosedCollectionOp
Backend progress logic is now in ProgressManager. QueryOp can be used
for running on closed collection.
Also fix aborting of colpkg exports, which slipped through in #1817.
* Tidy up import log
* Avoid QDialog.exec()
* Default to excluding scheduling when exporting from the deck list
* Use IncrementalProgress in whole import_export code
* Compare checksums when importing colpkgs
* Avoid registering changes if hashes are not needed
* ImportProgress::Collection → ImportProgress::File
* Make downgrading apkgs depend on meta version
* Generalise IncrementableProgress
And use it in entire import_export code instead.
* Fix type complexity lint
* Take count_map for IncrementableProgress::get_inner
* Replace import/export env with Shift click
* Accept all args from update() for backend progress
* Pass fields of ProgressUpdate explicitly
* Move update_interval into IncrementableProgress
* Outsource incrementing into Incrementor
* Mutate ProgressUpdate in progress_update callback
* Switch import/export legacy toggle to profile setting
Shift would have been nice, but the existing shortcuts complicate things.
If the user triggers an import with ctrl+shift+i, shift is unlikely to
have been released by the time our code runs, meaning the user accidentally
triggers the new code. We could potentially wait a while before bringing
up the dialog, but then we're forced to guess at how long it will take the
user to release the key.
One alternative would be to use alt instead of shift, but then we need to
trigger our shortcut when that key is pressed as well, and it could
potentially cause a conflict with an add-on that already uses that
combination.
* Show extension in export dialog
* Continue to provide separate options for schema 11+18 colpkg export
* Default to colpkg export when using File>Export
* Improve appearance of combo boxes when switching between apkg/colpkg
+ Deal with long deck names
* Convert newlines to spaces when showing fields from import
Ensures each imported note appears on a separate line
* Don't separate total note count from the other summary lines
This may come down to personal preference, but I feel the other counts
are equally important, and separating them feels like it makes it
a bit easier to ignore them.
* Fix 'deck not normal' error when importing a filtered deck for the 2nd time
* Fix [Identical] being shown on first import
* Revert "Continue to provide separate options for schema 11+18 colpkg export"
This reverts commit 8f0b2c175f.
Will use a different approach
* Move legacy support into a separate exporter option; add to apkg export
* Adjust 'too new' message to also apply to .apkg import case
* Show a better message when attempting to import new apkg into old code
Previously the user could end up seeing a message like:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 1: invalid start byte
Unfortunately we can't retroactively fix this for older clients.
* Hide legacy support option in older exporting screen
* Reflect change from paths to fnames in type & name
* Make imported decks normal at once
Then skip special casing in update_deck(). Also skip updating
description if the new one is empty.
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
* Collection needs to be closed prior to backup even when not downgrading
* Backups -> BackupLimits
* Some improvements to backup_task
- backup_inner now returns the error instead of logging it, so that
the frontend can discover the issue when they await a backup (or create
another one)
- start_backup() was acquiring backup_task twice, and if another thread
started a backup between the two locks, the task could have been accidentally
overwritten without awaiting it
* Backups no longer require a collection close
- Instead of closing the collection, we ensure there is no active
transaction, and flush the WAL to disk. This means the undo history
is no longer lost on backup, which will be particularly useful if we
add a periodic backup in the future.
- Because a close is no longer required, backups are now achieved with
a separate command, instead of being included in CloseCollection().
- Full sync no longer requires an extra close+reopen step, and we now
wait for the backup to complete before proceeding.
- Create a backup before 'check db'
* Add File>Create Backup
https://forums.ankiweb.net/t/anki-mac-os-no-backup-on-sync/6157
* Defer checkpoint until we know we need it
When running periodic backups on a timer, we don't want to be fsync()ing
unnecessarily.
* Skip backup if modification time has not changed
We don't want the user leaving Anki open overnight, and coming back
to lots of identical backups.
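A sketch of the guard (how the previous backup's mtime is persisted is left open here):
```rust
use std::{fs, io, path::Path, time::SystemTime};

/// Skip the backup when the collection file has not been modified since the
/// last one.
fn backup_needed(col_path: &Path, last_backup_mtime: Option<SystemTime>) -> io::Result<bool> {
    let mtime = fs::metadata(col_path)?.modified()?;
    Ok(Some(mtime) != last_backup_mtime)
}
```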
* Periodic backups
Creates an automatic backup every 30 minutes if the collection has been
modified.
If there's a legacy checkpoint active, tries again 5 minutes later.
* Switch to a user-configurable backup duration
CreateBackup() now uses a simple force argument to determine whether
the user's limits should be respected or not, and only potentially
destructive ops (full download, check DB) override the user's configured
limit.
I considered having a separate limit for collection close and automatic
backups (eg keeping the previous 5 minute limit for collection close),
but that had two downsides:
- When the user closes their collection at the end of the day, they'd
get a recent backup. When they open the collection the next day, it
would get backed up again within 5 minutes, even though not much had
changed.
- Multiple limits are harder to communicate to users in the UI
Some remaining decisions I wasn't 100% sure about:
- If force is true but the collection has not been modified, the backup
will be skipped. If the user manually deleted their backups without
closing Anki, they wouldn't get a new one if the mtime hadn't changed.
- Force takes preference over the configured backup interval - should
we be ignoring the user here, or taking no backups at all?
Did a sneaky edit of the existing ftl string, as it hasn't been live
long.
* Move maybe_backup() into Collection
* Use a single method for manual and periodic backups
When manually creating a backup via the File menu, we no longer make
the user wait until the backup completes. As we continue waiting for
the backup in the background, if any errors occur, the user will get
notified about it fairly quickly.
* Show message to user if backup was skipped due to no changes
+ Don't incorrectly assert a backup will be created on force
* Add "automatic" to description
* Ensure we backup prior to importing colpkg if collection open
The backup doesn't happen when invoked from 'open backup' in the profile
screen, which matches Anki's previous behaviour. The user could
potentially clobber up to 30 minutes of their work if they exited to
the profile screen and restored a backup, but the alternative is we
create backups every time a backup is restored, which may happen a number
of times if the user is trying various ones. Or we could go back to a
separate throttle amount for this case, at the cost of more complexity.
* Remove the 0 special case on backup interval; minimum of 5 minutes
https://github.com/ankitects/anki/pull/1728#discussion_r830876833
* Write media files in chunks
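A sketch of chunked writing combined with hashing, roughly what a `MediaCopier` might do (assumes the RustCrypto `sha1` crate):
```rust
use std::io::{self, Read, Write};

use sha1::{Digest, Sha1};

/// Copy in fixed-size chunks, hashing as we go, and return the content hash.
fn copy_in_chunks(reader: &mut impl Read, writer: &mut impl Write) -> io::Result<[u8; 20]> {
    let mut hasher = Sha1::new();
    let mut buf = [0u8; 64 * 1024];
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break;
        }
        writer.write_all(&buf[..n])?;
        hasher.update(&buf[..n]);
    }
    Ok(hasher.finalize().into())
}
```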
* Test media file writing
* Add iter `ReadDirFiles`
* Remove ImportMediaError, fail fatally instead
Partially reverts commit f8ed4d89ba.
* Compare hashes of media files to be restored
* Improve `MediaCopier::copy()`
* Restore media files atomically with tempfile
* Make downgrade flag an enum
* Remove SchemaVersion::Latest in favour of Option
* Remove sha1 comparison again
* Remove unnecessary repr(u8) (dae)
Ideally this would have been in beta 6 :-) No add-ons appear to be
using customstudy.py/taglimit.py though, so it should hopefully not be
disruptive.
In the earlier custom study changes, we didn't get around to addressing
issue #1136. Now instead of trying to determine the maximum increase
to allow (which doesn't work correctly with nested decks), we just
present the total available to the user again, and let them decide. There's
plenty of room for improvement here still, but further work here might
be better done once we look into decoupling deck limits from deck presets.
Tags and available cards are fetched prior to showing the dialog now,
and will show a progress dialog if things take a while.
Tags are stored in an aux var now, so they don't inflate the deck
object size.
* Replace Card.data with .original_position
* Use and update original position in v3
* Show original position in card info
* Revert restoring original position for now
* Fix pb card to/from pylib card
* Try original_position as the last pb field
* minor wording tweaks (dae)
* avoid repinning Rust deps by default
* add id_tree dependency
* Respect intermediate child limits in v3
* Test new behaviour of v3 counts
* Rework v3 queue building to respect parent limits
* Add missing did field to SQL query
* Fix `LimitTreeMap::is_exhausted()`
* Rework tree building logic
https://github.com/ankitects/anki/pull/1638#discussion_r798328734
* Add timer for build_queues()
* `is_exhausted()` -> `limit_reached()`
* Move context and limits into `QueueBuilder`
This allows for moving more logic into QueueBuilder, so less passing
around of arguments. Unfortunately, some tests will require additional
work to set up.
* Fix stop condition in new_cards_by_position
* Fix gather order of new cards by deck
* Add scheduler/queue/builder/burying.rs
* Fix bad tree due to unsorted child decks
* Fix comment
* Fix `cap_new_to_review_rec()`
* Add test for new card gathering
* Always sort `child_decks()`
* Fix deck removal in `cap_new_to_review_rec()`
* Fix sibling ordering in new card gathering
* Remove limits for deck total count with children
* Add random gather order
* Remove bad sibling order handling
All routines ensure ascending order now.
Also do some other minor refactoring.
* Remove queue truncating
All routines stop now as soon as the root limit is reached.
* Move deck fetching into `QueueBuilder::new()`
* Rework new card gather and sort options
https://github.com/ankitects/anki/pull/1638#issuecomment-1032173013
* Disable new sort order choices depending on the set gather order
* Use enum instead of numbers
* Ensure valid sort order setting
* Update new gather and sort order tooltips
* Warn about random insertion order with v3
* Revert "Add timer for build_queues()"
This reverts commit c9f5fc6ebe.
* Update rslib/src/storage/card/mod.rs (dae)
* minor wording tweaks to the tooltips (dae)
+ move legacy strings to bottom
+ consistent capitalization (our leech action still needs fixing,
but that will require introducing a new 'suspend card' string as the
existing one is used elsewhere as well)
* Avoid rebuilding regex in field search
* Special case search in all fields
* Don't repeat mid nodes in field search sql
Small speed gain for searches like `*:re:foo` and reduces the sql tree
depth if a lot of field names of the same notetype match.
* Add sql function to match fields with regex
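The registration can roughly follow rusqlite's documented pattern (requires the `functions` feature); the compiled regex is cached as aux data so it isn't rebuilt per row. rslib's real implementation differs in detail:
```rust
use std::sync::Arc;

use regex::Regex;
use rusqlite::functions::{Context, FunctionFlags};
use rusqlite::{Connection, Error, Result};

type BoxError = Box<dyn std::error::Error + Send + Sync + 'static>;

fn add_regexp_function(db: &Connection) -> Result<()> {
    db.create_scalar_function(
        "regexp",
        2,
        FunctionFlags::SQLITE_UTF8 | FunctionFlags::SQLITE_DETERMINISTIC,
        move |ctx: &Context| {
            // Compile the pattern once per statement and cache it as aux data.
            let re: Arc<Regex> = ctx.get_or_create_aux(0, |vr| -> Result<_, BoxError> {
                Ok(Regex::new(vr.as_str()?)?)
            })?;
            let text = ctx
                .get_raw(1)
                .as_str()
                .map_err(|e| Error::UserFunctionError(e.into()))?;
            Ok(re.is_match(text))
        },
    )
}
```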
* Optimise used field search algorithm
- Searching in all fields is a special case.
- Using native SQL comparison is preferred.
- For Regex, use newly added SQL function.
* Please clippy
* Avoid pyramid of doom
* nt_fields -> matched_fields
* Add tests for regex and all field searches
* minor tweaks for readability (dae)
This brings the behaviour a bit closer to the default ordering of new
cards when they are reset, and is better than an undefined template
order. But it's a stopgap solution, and in the long run, filtered decks
need a bit of a rethink with the improved ordering that v3 has brought.
This was broken by an SQLite upgrade - previously we received the rows
in ix_cards_sched order, but recent versions use a table scan for that
query when the order is unspecified. Solved by being explicit about the
order we expect results to arrive.
https://forums.ankiweb.net/t/skipping-new-cards/15410
Matches should arrive in alphabetical order. Currently results are not
capped (JS should be able to handle ~1k tags without too much hassle),
and no reordering based on match location is done. Matches are substring
based, and multiple can be provided, eg "foo::bar" will match
"foof::baz::abbar".
This is not hooked up properly on the frontend at the moment -
updateSuggestions() seems to be missing the most recently typed character,
and is not updating the list of completions half the time.
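A rough sketch of that matching rule (case handling and other details assumed):
```rust
/// Each "::"-separated component of the query must match some tag component,
/// in order; matching is by substring.
fn tag_matches(tag: &str, query: &str) -> bool {
    let mut parts = tag.split("::");
    query
        .split("::")
        .all(|needle| parts.by_ref().any(|part| part.contains(needle)))
}

fn main() {
    assert!(tag_matches("foof::baz::abbar", "foo::bar"));
    assert!(!tag_matches("abbar::foof", "foo::bar")); // components must be in order
}
```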
Context: https://forums.ankiweb.net/t/more-cards-today-question-about-v3/12400/10
Previously, interday learning cards and reviews were gathered at the
same time in v3, with the review limit being applied to both of them. The
order cards were gathered in would change the ratio of gathered learning
cards and reviews, but as they were displayed together in a single count,
a changing ratio was not apparent, and no special handling was required
by the deck tree code.
Showing interday learning cards in the learning count, while still
applying a review limit to them, makes things more complicated, as
a changing ratio will result in different counts. The deck tree code
is not able to know which order cards will appear in, so without changes,
we would have had a situation where the deck list may show different counts
to those seen when clicking on a deck.
One way to solve this would have been to introduce a separate limit for
interday learning cards. But this would have meant users needed to
juggle two different limits, instead of having a single one that controls
total number of (non-intraday) cards shown.
Instead, the scheduler now fetches interday cards prior to reviews -
the rationale for that order is that learning cards tend to be more
fragile/urgent than reviews. The option to show learning cards
before/after/mixed with reviews still exists, but it applies only after
cards have been capped to the daily limit.
To ensure the deck tree code matches the counts the scheduler gives,
it too applies limits to interday learning cards first, and reviews
afterwards.
Interday learning cards are now counted in the learning count again,
and are no longer subject to the daily review limit.
The thinking behind the original change was that interday learning cards
are scheduled more like reviews, and counting them in the review count
would allow the learning count to focus on intraday learning - the red
number reflecting the fact that they are the most fragile memories. And
counting them together made it practical to apply the review limit
to both at once.
Since the release, there have been a number of users expecting to see
interday learning cards included in the learning count (the latest being
https://forums.ankiweb.net/t/feedback-and-a-feature-adjustment-request-for-2-1-45/12308),
and a good argument can be made for that too - they are, after all, listed
in the learning steps, and do tend to be harder than reviews. Short of
introducing another count to keep track of interday and intraday learning
separately, moving back to the old behaviour seems like the best move.
This also means it is not really practical to apply the review limit to
interday learning cards anymore, as the limit would be split between two
different numbers, and how much each number is capped would depend on
the order cards are introduced. The scheduler could figure this out, but
the deck list code does not know card order, and would need significant
changes to be able to produce numbers that matched the scheduler. And
even if we ignore implementation complexities, I think it would be more
difficult for users to reason about - the influence of the review limit
on new cards is confusing enough as it is.
Multiple configs with the same inner id would lead to errors like the
following when trying to open the collection:
DeckConfigInner.interval_multiplier: invalid wire type: StartGroup (expected ThirtyTwoBit)
This makes the review backlog case more expensive, since we end up
shuffling items outside the daily limit, but for the common case it's
about the same speed, and it means we don't need two separate sorting
steps. New cards remain handled the same way, since a backlog
is common there.
Also ensures that interday learning cards honor the deck sorting, and
that the non-default sort orders shuffle at the end.
- The "unbury deck" option was broken, as it was ignoring child
decks. It would be nice if we could use active_decks instead, but
plugging that into the old scheduler without breaking undo seems a bit
tricky.
- Remove the implicit From impl for decks, so we need to be forced to
think about whether we want child decks or not.
- Daily limits are no longer inherited - each deck limits its own
cards, and the selected deck enforces a maximum limit.
- Fetching of review cards now uses a single query, and sorts in advance.
In collections with a large number of overdue cards and decks, this is
faster than iterating over each deck in turn.
- Include interday learning count in review count & review limit, and
allow them to be buried.
- Warn when parent review limit is lower than child deck in deck options.
- Cap the new card limit to the review limit.
- Add option to control whether new card fetching short-circuits.
- split new card fetch order and subsequent sort order; use latter
when building queues
- default to spacing siblings when burying is off, with options to
show each sibling in turn, and shuffle the fetched cards
The bury new/review flags are now pulled from each card's home deck,
instead of using a global setting that had not been hooked up. This
unfortunately means we need to fetch the map of all decks up front, as
we need to be able to look up a deck configuration for cards that are
in filtered decks.
Fixes a "card was modified" error caused by cards being buried during
review, when they weren't removed up-front.