Anki used random 64-bit IDs for cards, facts and fields. This had some nice
properties:
- merging data in syncs and imports was simply a matter of copying each way,
as conflicts were astronomically unlikely
- it made it easy to identify identical cards and prevent them from being
reimported
But there were some negatives too:
- they're more expensive to store
- JavaScript can't exactly represent integers above 2**53, which means
AnkiMobile, iAnki and so on have to treat the ids as strings, which is slow
- simply copying data in a sync or import can lead to corruption, as while a
duplicate id indicates the data was originally the same, it may have
diverged. A more intelligent approach is necessary.
- sqlite was sorting the fields table based on the id, which meant the fields
were spread across the table, and costly to fetch
So instead, we'll move to incremental ids. In the case of model changes we'll
treat that as a schema change and force a full sync to avoid having to deal
with conflicts, and in the case of cards and facts, we'll need to update the
ids on one end to merge. Identical cards can be detected by checking to see if
their id is the same and their creation time is the same.
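For illustration, a minimal sketch of that identity check during a merge,
assuming a plain sqlite connection and that the other end's cards arrive as
(id, created) pairs; the function name is hypothetical:

    # Sketch only: find cards that exist on both ends with the same id and
    # the same creation time, so they can be treated as identical rather
    # than as conflicts that need merging.
    def identicalCards(db, remoteCards):
        idents = []
        for (cid, crt) in remoteCards:
            if db.execute("select 1 from cards where id = ? and created = ?",
                          (cid, crt)).fetchone():
                idents.append(cid)
        return idents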
Creation time has been added back to cards and facts because it's necessary
for sync conflict merging. That means facts.pos is not required.
The graves table has been removed. It's not necessary for schema related
changes, and dead cards/facts can be represented as a card with queue=-4 and
created=0. Because we will record schema modification time and can ensure a
full sync propagates to all endpoints, it means we can remove the dead
cards/facts on schema change.
Tags have been removed from the facts table and are represented as a field
with ord=-1 and fmid=0. Combined with the locality improvement for fields, it
means that fetching fields is not much more expensive than using the q/a
cache.
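For illustration, a hedged sketch of fetching a fact's fields and tags in one
pass under this layout; the fdata column names used here are assumptions:

    # Sketch, assuming the renamed fdata table has factId, ord and value
    # columns: one query returns the ordinary fields (ord >= 0) and the
    # tags pseudo-field (ord = -1, fmid = 0) for a fact.
    def factFieldsAndTags(db, factId):
        rows = db.execute(
            "select ord, value from fdata where factId = ? order by ord",
            (factId,)).fetchall()
        fields = [value for (ord_, value) in rows if ord_ >= 0]
        tags = [value for (ord_, value) in rows if ord_ == -1]
        return fields, tags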
Because of the above, removing the q/a cache is a possibility now. The q and a
columns on cards have been dropped. It will still be necessary to render the
q/a on fact add/edit, since we need to record media references. It would be
nice to avoid this in the future. Perhaps one way would be the ability to
assign a type to fields, like "image", "audio", or "latex". LaTeX needs
special consideration anyway, as it was being rendered into the q/a cache.
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
sizable performance hit due to introspection. Operations like fetching 50k
records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
not a big burden.
In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
moved into JSON objects. This simplifies serializing, and means we won't
need DB schema changes to store extra options in the future. This change
obsoletes the deckVars table. A sketch of the approach follows this list.
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- A number of attribute names have been shortened
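As mentioned in the configuration item above, a rough sketch of the
JSON-backed approach; the deck table's conf column and the option name are
illustrative, not the final schema:

    import json

    # Sketch: configuration lives in a single text column as a JSON object,
    # so adding an option is a new dictionary key rather than a schema
    # change.
    def setDeckOption(db, key, value):
        conf = json.loads(db.execute("select conf from deck").fetchone()[0])
        conf[key] = value
        db.execute("update deck set conf = ?", (json.dumps(conf),))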
Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().
Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.
Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.
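A rough usage sketch of the behaviour described above; apart from flush() and
close(save=False), the attribute names are illustrative:

    # Sketch of the new flow: mutate an object, flush it, and let the flush
    # bump the modification time; close without saving when we only wanted
    # to read data (e.g. due counts).
    card = deck.getCard(cardId)
    card.queue = 1
    card.flush()                 # writes state and updates the card's mod time

    deck.close(save=False)       # no save, no modification bump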
Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.
The q/a cache and field cache generating has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().
Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new value every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when it's updated. The
download function will be expanded to ask the user if they want to continue
after a certain number of files have failed to download, which should be an
adequate alternative. And we now add the file into the media DB when it's
copied to the media directory, not when the card is committed. This fixes
duplicates a user would get if they added the same media to a card twice
without adding the card.
The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is import anki; d
= anki.Deck(path).
deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.
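Put together, a minimal review-loop sketch under the new entry points; the
path and ease value are illustrative:

    import anki

    # Sketch of a minimal review loop with the relocated scheduler methods.
    deck = anki.Deck("/path/to/deck.anki")
    card = deck.sched.getCard()
    while card:
        # ... show the question and answer, then grade the card ...
        deck.sched.answerCard(card, 3)
        card = deck.sched.getCard()
    deck.close()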
And the DB wrapper has had a few changes:
- SQL statements use a more standard DBAPI style:
- statement() -> execute()
- statements() -> executemany()
- called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
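Some illustrative calls against the updated wrapper, assuming it is reachable
as deck.db; the SQL and values are made up:

    # positional or named parameters, DBAPI-style
    deck.db.execute("update cards set queue = ? where id = ?", 0, cardId)
    deck.db.executemany("insert into tags (name, mod) values (?, ?)", tagRows)

    # column0() is now list(): return the first column of each matching row
    ids = deck.db.list("select id from cards where queue = 0")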
The tags tables were initially added to speed up the loading of the browser by
speeding up two operations: gathering a list of all tags to show in the
dropdown box, and finding cards with a given tag. The former functionality is
provided by the tags table, and the latter functionality by the cardTags
table.
Selective study is handled by groups now, which perform better since they
don't require a join or subselect, and can be embedded in the index. So the
only remaining benefit of cardTags is for the browser.
Performance testing indicates that cardTags is not saving us a large amount.
It only takes us 30ms to search a 50k card table for matches with a hot cache.
On a cold cache it means the facts table has to be loaded into memory, which
roughly doubles the load time with the default settings (we need to load the
cards table too, as we're sorting the cards), but that startup time was
necessary with certain settings in the past too (sorting by fact created for
example). With groups implemented, the cost of maintaining a cache just for
initial browser load time is hard to justify.
Other changes:
- the tags table has any missing tags added to it when facts are added/edited.
This means old tags will stick around even when no cards reference them, but
is much cheaper than reference counting or a separate table, and simplifies
updates and syncing.
- the tags table has a modified field now so we can sync it instead of
having to scan all facts coming across in a sync
- priority field removed
- we no longer put model names or card templates into the tags table. There
were two reasons we did this in the past: so we could cram/selective study
them, and for plugins. Selective study uses groups now, and plugins can
check the model's name instead (and most already do). This also does away
with the somewhat confusing behaviour of names also being tags.
- facts have their tags as _tags now. You can get a list with tags(), but
editing operations should use add/deleteTags() instead of manually editing
the string.
- removed 'created' column from various tables. We don't care when things like
models are created, and card creation time didn't reflect the actual time a
card was created
- facts were previously ordered by their creation date. The code would
manually offset the creation time of subsequent facts on import by 0.0001
seconds, and then card due times were set by adding the fact time to the
ordinal number*0.000001. This was prone to error, and the number of zeros used
was actually different in different parts of the code. Instead of this, we
replace it with a 'pos' column on facts, which increments for each new fact.
- importing should add new facts with a higher pos, but concurrent updates in
a synced deck can have multiple facts with the same pos
- due times are completely different now, and depend on the card type (a
sketch follows at the end of this list)
- new cards have due=fact.pos or random(0, 10000)
- reviews have due set to an integer representing days since deck
creation/download
- cards in the learn queue use an integer timestamp in seconds
- many columns like modified, lastSync, factor, interval, etc have been converted to
integer columns. They are cheaper to store (large decks can save 10s of
megabytes) and faster to search for.
- cards have their group assigned on fact creation. In the future we'll add a
per-template option for a default group.
- switch to due/random order for the review queue on upgrade. Users can still
switch to the old behaviour if they want, but many people don't care what
it's set to, and due is considerably faster, which may result in a better
user experience
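As referenced in the due-times item above, a sketch of how the three
representations might be computed; the helper names and constants are
assumptions, not the real scheduler code:

    import time, random

    def newCardDue(fact, randomOrder=False):
        # new cards: the fact's position, or a random slot
        return random.randint(0, 10000) if randomOrder else fact.pos

    def reviewDue(deckCreated, daysFromNow):
        # reviews: whole days since the deck was created/downloaded
        return int((time.time() - deckCreated) // 86400) + daysFromNow

    def learnDue(delaySeconds):
        # learn queue: an absolute timestamp in seconds
        return int(time.time()) + delaySeconds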
Users who want to study small subsections at one time (eg, "lesson 14") are
currently best served by creating lots of little decks. This is because:
- selective study is a bit cumbersome to switch between
- the graphs and statistics are for the entire deck
- selective study can be slow on mobile devices - when the list of cards to
hide/show is big, or when there are many due cards, performance can suffer
- scheduling can only be configured per deck
Groups are intended to address the above problems. All cards start off in the
same group, but they can have their group changed. Unlike tags, cards can only
be a member of a single group at one time. This allows us to divide the deck
up into a non-overlapping set of cards, which will make things like showing
due counts for a single category considerably cheaper. The user interface
might want to show something like a deck browser for decks that have more than
one group, showing due counts and allowing people to study each group
individually, or to study all at once.
Instead of storing the scheduling config in the deck or the model, we move the
scheduling into a separate config table, and link that to the groups table.
That way a user can have multiple groups that all share the same scheduling
information if they want.
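A hedged sketch of the shape this might take; the table and column names are
illustrative rather than the final schema:

    # Sketch: groups reference a shared scheduling config row, so several
    # groups can point at the same configuration.
    def addGroupTables(db):
        db.executescript("""
    create table gconf (
        id    integer primary key,
        mod   integer not null,
        conf  text not null           -- json-serialized scheduling options
    );
    create table groups (
        id    integer primary key,
        mod   integer not null,
        name  text not null,
        gcid  integer not null        -- references gconf.id, shareable
    );
    """)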
And deletion tracking is now in a single table.
- model config is now stored as a json-serialized dict, which allows us to
quickly gather the info and allows for adding extra options more easily in
the future
- denormalize modelId into the cards table, so we can get the model scheduling
information without having to hit the facts table
- remove position - since we will handle spacing differently we don't need a
separate variable to define sort order
- remove lastInterval from cards; the new cram mode and review early shouldn't
need it
- successive->streak
- add new columns for learn mode
- move cram mode into new file; learn more and review early need more thought
- initial work on learn mode
- initial unit tests
- move most scheduling parameters from deck to models
- remove obsolete fields in deck and models
- decks->deck
- remove deck id reference in models
- move some deckVars into the deck table
- simplify deckstorage
- lock sessionhelper by default
- add models/currentModel as properties instead of ORM mappings
- remove models.tags
- remove remaining support for memory-backed databases
- use a blank string for syncName instead of null
- remove backup code; will handle in gui
- bump version to 100
- update unit tests
Cards had developed quite a lot of cruft from incremental changes, and a
number of important attributes were stored in names that had no bearing on
their actual use.
Added:
- position, which new cards will be sorted on in the future
- flags, which is reserved for future use
Renamed:
- type to queue
- relativeDelay to type
- noCount to lapses
Removed:
- all new/young/matureEase counts; the information is in the revlog
- firstAnswered, lastDue, lastFactor, averageTime and totalTime for the same
reason
- isDue, spaceUntil and combinedDue, because they are no longer used. Spaced
cards will be implemented differently in a coming commit.
- priority
- yesCount, because it can be inferred from reps & lapses
- tags; they've been stored in facts for a long time now
Also compatibility with deck versions less than 65 has been dropped, so decks
will need to be upgraded to 1.2 before they can be upgraded by the dev code.
All shared decks are on 1.2, so this should hopefully not be a problem.
- media is no longer hashed, and instead stored in the db using its original
name
- when adding media, its checksum is calculated and used to look for
duplicates (a rough sketch of this check follows at the end of this list)
- duplicate filenames will result in a number tacked onto the filename
- the size column is used to count card references to media. If media is
referenced in a fact but not the question or answer, the count will be zero.
- there is no guarantee media will be listed in the media db if it is unused
on the question & answer
- if rebuildMediaDir(delete=True), then entries with zero references are
deleted, along with any unused files in the media dir.
- rebuildMediaDir() will update the internal checksums, and set the checksum
to "" if a file can't be found
- rebuildMediaDir() is a lot less destructive now, and will leave alone
directories it finds in the media folder (but not look in them either)
- rebuildMediaDir() returns more information about the state of media now
- the online and mobile clients will need to make sure that when
downloading media, entries with no checksum are non-fatal and should not
abort the download process.
- the ref count is updated every time the q/a is updated - so the db should be
up to date after every add/edit/import
- since we look for media on the q/a now, card templates like '<img
src="{{{field}}}">' will work now
- the option to export original files is gone, as it is not needed anymore
- move from per-model media URL to deckVar. downloadMissingMedia() uses this
now. Deck subscriptions will have to be updated to share media another way.
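As noted in the checksum item above, a rough sketch of the duplicate check
when a file is copied into the media directory; the function, table and
column names are assumptions:

    import os, shutil, hashlib

    def addMedia(db, mediaDir, path):
        # Sketch: files keep their original name; a checksum catches true
        # duplicates, and a clashing name gets a number appended instead.
        csum = hashlib.md5(open(path, "rb").read()).hexdigest()
        existing = db.execute("select file from media where csum = ?",
                              (csum,)).fetchone()
        if existing:
            return existing[0]        # identical content already registered
        base, ext = os.path.splitext(os.path.basename(path))
        name = base + ext
        n = 1
        while os.path.exists(os.path.join(mediaDir, name)):
            name = "%s (%d)%s" % (base, n, ext)   # tack a number onto the name
            n += 1
        shutil.copy2(path, os.path.join(mediaDir, name))
        db.execute("insert into media (file, csum, size) values (?, ?, 0)",
                   (name, csum))
        return name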
- pass deck in formatQA, as latex support is going to change
- obsolete spaceUntil - it serves no useful purpose
- the old per-model spacing variables are obsolete, as the new approach
requires uniform spacing across all models for new cards
- introduce a new per-deck variable: newSpacing
- don't fill new queue if we've done today's cards
- still need to check cramming / review early
newSpacing is a time in seconds to delay introduction of sibling new cards.
It can be applied as many times as necessary as there is no harm in new cards
being delayed repeatedly. Because the default queue length is 200 and it can
take quite some time for the spaced cards to be placed in the queue again, we
use a separate array to track spaced new cards provided the configured delay
is less than 20 minutes. For delays under 20 minutes, the spacing is not a
guaranteed minimum - if the new card queue is empty the spaced cards
will be flushed before checking the new queue again, as otherwise we end up
trying to fill on every repetition. The due counts no longer decrease by more
than one if the spacing is less than the due cutoff, since that confused some
users.
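A simplified sketch of the queue behaviour described above; the class and
attribute names are illustrative, and newSiblings() is assumed to return the
other new cards generated from the same fact:

    class NewQueueSketch(object):
        # only track spaced siblings when the delay is under 20 minutes
        MAX_TRACKED_DELAY = 20 * 60

        def __init__(self, newSpacing):
            self.newSpacing = newSpacing
            self.newQueue = []
            self.spacedNewCards = []

        def spaceSiblings(self, card):
            # delay the just-answered card's sibling new cards
            if self.newSpacing < self.MAX_TRACKED_DELAY:
                self.spacedNewCards.extend(self.newSiblings(card))

        def nextNewCard(self):
            if not self.newQueue:
                # flush spaced cards back before refilling from the DB, so
                # we don't end up trying to fill on every repetition
                self.newQueue.extend(self.spacedNewCards)
                self.spacedNewCards = []
            if self.newQueue:
                return self.newQueue.pop()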
Review cards are now placed at the end of the current review queue, and will
never be rescheduled to a different day. The old approach had a number of
problems:
- the more card models you had, the more likely a card would be spaced
multiple times, resulting in you forgetting the card before you get a chance
to review it
- spacing was applied even if the due card was already late
- repeatedly failing one card over a period of days or weeks would also starve
the other cards of attention
In various parts of the code we need to get all cards of a given category
(new, failed, etc) regardless of whether they're suspended, buried, etc. So we
store the true type in the obsolete relativeDelay column and add an index for
it, because it's cheaper than putting indices on reps & successive.
* Adjust type to remove cards from the queues, so we don't have to rebuild
priorities to restore them:
Type -= 3 when suspending
Type += 3 when burying
Type += 6 when cramming / reviewing early
We still need to adjust priorities for backwards compatibility, but this can
be removed in the future.
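In code form, the adjustments above amount to fixed offsets on the stored
type; this is a sketch rather than the exact implementation, and the helper
below only exists to build the IN clause:

    def ids2str(ids):
        # build an "(id1,id2,...)" string for use in an IN clause
        return "(%s)" % ",".join(str(i) for i in ids)

    def suspendCards(db, ids):
        db.execute("update cards set type = type - 3 where id in " + ids2str(ids))

    def buryCards(db, ids):
        db.execute("update cards set type = type + 3 where id in " + ids2str(ids))

    def cramCards(db, ids):
        db.execute("update cards set type = type + 6 where id in " + ids2str(ids))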
* Factor out scheduler-specific code in answerCard(), so the different
schedulers are now fully modular
* Differentiate between a card's current queue and its type
* Make sure dueCutoff cuts off at the chosen offset instead of midnight
- reimplement reviewEarly and newEarly by replacing parts of the scheduler,
instead of adding special conditions
- remove references to isDue and priority (1,2,3,4) which is not necessary
anymore
- add option to switch between per-day scheduling and due now scheduling
- newCardsToday() -> newCardsDoneToday()
- don't decrement counts for suspended cards
- make sure to update type when suspending/unsuspending
- fix findCards()
- set hardInterval = 1-1.1 on upgrade, or the default per day scheduling doesn't
make sense
Previously we used getCard() to fetch one card at a time. This required a
number of indices to perform efficiently, and the indices were expensive in
terms of disk space and time required to keep them up to date. Instead we now
gather a bunch of cards at once.
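A sketch of the batched approach; db.list() is the wrapper method mentioned
earlier, while the queue/due values and the class structure are assumptions:

    class BatchFetchSketch(object):
        # Sketch only: queue = 2 for reviews and integer due days follow
        # the scheme described above, but are not the final values.
        def __init__(self, deck, today):
            self.deck = deck
            self.today = today          # days since deck creation
            self.revQueue = []

        def fillRevQueue(self, limit=50):
            # pull a chunk of due review cards in one query, then serve the
            # queue from memory instead of hitting the DB per card
            self.revQueue = self.deck.db.list(
                "select id from cards where queue = 2 and due <= ?"
                " order by due limit %d" % limit, self.today)

        def getCard(self):
            if not self.revQueue:
                self.fillRevQueue()
            if self.revQueue:
                return self.deck.getCard(self.revQueue.pop())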
- Drop checkDue()/isDue so writes are not necessary to the DB when checking
for due cards
- Due counts checked on deck load, and only updated once a day or at the end
of a session. This prevents cards from expiring during reviews, leading to
confusing undo behaviour and due counts that go up instead of down as you
review. The default will be to only expire cards once a day, which represents
a change from the way things were done previously.
- Set deck var defaults on deck load/create instead of upgrade, which should
fix upgrade issues
- The scheduling code can now have bits and pieces switched out, which should
make review early / cram etc easier to integrate
- Cards with priority <= 0 now have their type incremented by three, so we can
get access to schedulable cards with a single column.
- rebuildQueue() -> reset()
- refresh() -> refreshSession()
- Views and many of the indices on the cards table are now obsolete and will
be removed in the future. I won't remove them straight away, so as to not
break backward compatibility.
- Use bigger intervals between successive card templates, as the previous
intervals were too small to represent in doubles in some circumstances
Still to do:
- review early
- learn more
- failing mature cards where delay1 > delay0