The old template handling was too complicated, and generated frequent
questions on the forums. By dropping non-active templates we can do away with
the generate cards function, and advanced users can simulate the old behaviour
by using conditional field templates.
For importing and the deck creation wizard, we need to be able to generate
thousands of cards efficiently. So instead of requiring the creation of a fact
and rendering it, we cache the required fields and cloze references in the
model.
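Roughly, the cached data lets bulk operations decide which templates would
produce a card without building and rendering a fact. A sketch, with the
shape of the cached requirements assumed:

    def avail_ords(model, field_values):
        # Return ordinals of templates that would produce a non-empty
        # card. Assumes the model caches, per template, which fields
        # must be non-empty: (ordinal, kind, field_indices) tuples,
        # where kind is "all" or "any".
        nonempty = {i for i, v in enumerate(field_values) if v.strip()}
        ords = []
        for ordinal, kind, indices in model["req"]:
            if kind == "all" and all(i in nonempty for i in indices):
                ords.append(ordinal)
            elif kind == "any" and any(i in nonempty for i in indices):
                ords.append(ordinal)
        return ords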
Also, emptyAns is dropped, as people can achieve the same behaviour by adding
the required answer fields as conditional to the question.
Todo: refactor genCards() to work in bulk, handle cloze edits intelligently
(prompt to delete invalid references, create new cards as necessary)
- keep track of rep/time counts per group, instead of just at the top level
- sort by due after retrieving learn cards
- ensure activeGroups is sorted alphabetically
- ensure new cards come in alphabetical group order
- ensure queues are refilled when empty
Because group deletions are likely to be a semi-common operation (especially
for new users trying out shared material), deleting a group will no longer
cause a full sync. To avoid syncing issues, we now allow cards/facts/etc. to
point to an invalid group; in that case, we just treat them as if they were
in the default group.
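A minimal sketch of the fallback (names are illustrative):

    DEFAULT_GROUP_ID = 1

    def effective_group(groups, gid):
        # cards/facts may reference a group that has since been deleted;
        # treat them as belonging to the default group in that case
        return groups.get(gid, groups[DEFAULT_GROUP_ID])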
Decks now have an "update sequence number". All objects also have a USN, which
is set to the deck USN each time they are modified. When syncing, each side
sends any objects with a USN >= clientUSN. When objects are copied via sync,
they have their USNs bumped to the current serverUSN. After a sync, the USN on
both sides is set to serverUSN + 1.
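In rough pseudocode (a sketch of the exchange, not the real sync API):

    from dataclasses import dataclass

    @dataclass
    class Obj:
        id: int
        usn: int  # set to the deck USN whenever the object is modified

    def changed_since(objects, client_usn):
        # each side sends everything modified since its last sync
        return [o for o in objects if o.usn >= client_usn]

    def apply_remote(local, remote, server_usn):
        # objects copied via sync get their USN bumped to the server USN
        by_id = {o.id: o for o in local}
        for o in remote:
            o.usn = server_usn
            by_id[o.id] = o
        return list(by_id.values())

    # After the sync completes, both sides set their deck USN to
    # serverUSN + 1, so the next sync only sees later modifications.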
This solves the failing three way test, ensures we receive all changes
regardless of clock drift, and as the revlog also has a USN now, ensures that
old revlog entries are imported properly too.
Objects retain a separate modification time, which is used for conflict
resolution, deck subscriptions/importing, and info for the user.
Note that if the clock is too far off, it will still cause confusion for
users, as the due counts may be different depending on the time. For this
reason it's probably a good idea to keep a limit on how far the clock can
deviate.
We still keep track of the last sync time, but only so we can determine if the
schema has changed since the last sync.
The media code needs to be updated to use USNs too.
Ported the sync code to the latest libanki structure. Key points:
No summary:
The old style got each side to fetch ids+mod times and required the client to
diff them and then request or bundle up the appropriate objects. Instead, we now
get each side to send all changed objects, and it's the responsibility of the
other side to decide what needs to be merged and what needs to be discarded.
This allows us to skip a separate summary step, which saves scanning tables
twice, and allows us to reduce server requests from 4 to 3.
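A sketch of the receiving side's decision, assuming modification times are
used for conflict resolution as described above:

    def merge_decision(local_obj, remote_obj):
        # decide whether an incoming object is kept or discarded
        if local_obj is None:
            return "add"       # never seen this object before
        if remote_obj.mod > local_obj.mod:
            return "replace"   # the remote copy is newer
        return "discard"       # our copy is as new or newer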
Schema changes:
Certain operations that are difficult to merge (such as changing the number of
fields in a model, or deleting models or groups) result in a full sync. The
user is warned about it in the GUI before such schema-changing operations
execute.
Sync size:
For now, we don't try to deal with large incremental syncs. Because the cards,
facts and revlog can be large in memory (hundreds of megabytes in some cases),
they would have to be chunked for the benefit of devices with a low amount of
memory.
Currently findChanges() uses the full fact/card objects which we're planning to
send to the server. It could be rewritten to fetch a summary (just the id, mod
& rep columns) which would save some memory, and then compare against blocks
of a few hundred remote objects at a time. However, it's a bit more
complicated than that:
- If the local summary is huge it could exceed memory limits. Without a local
summary we'd have to query the db for each record, which could be a lot
slower.
- We currently accumulate a list of remote records we need to add locally.
This list also has the potential to get too big. We would need to
periodically commit the changes as we accumulate them.
- Merging a large amount of changes is also potentially slow on mobile
devices.
Given that certain schema-changing operations require a full sync anyway, I
think it's probably best to concentrate on a chunked full sync for now
instead; provided the user syncs periodically, they should rarely hit the
sync size limits except after bulk editing operations.
Chunked partial syncing should be possible to add in the future without any
changes to the deck format.
Still to do:
- deck conf merging
- full syncing
- new http proxy
As discussed on the forums, moving to a single collection requires moving some
deck-level configuration into groups so users can have different settings like
new cards/day for each top level item.
Also:
- store id in groups
- add mod time to gconf updates
- move the limiting code that's not specific to scheduling into groups.py
- store the current model id per top level group
- moved tags into json like previous changes, and dropped the unnecessary id
- added tags.py for a tag manager
- moved the tag utilities from utils into tags.py
Like the previous change, models have been moved from a separate DB table to
an entry in the deck. We need them for many operations including reviewing,
and it's easier to keep them in memory than half on disk with a cache that
gets cleared every time we .reset(). This means they are easily serialized as
well - previously they were part Python and part JSON, which made access
confusing.
Because the data is all pulled from JSON now, the instance methods have been
moved to the model registry. Eg:
model.addField(...) -> deck.models.addField(model, ...).
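A sketch of the registry pattern (dict keys and save mechanics are
assumptions):

    class ModelManager:
        # operates on plain JSON-serializable model dicts held by the deck

        def __init__(self, deck):
            self.deck = deck

        def addField(self, model, field):
            model["fields"].append(field)
            self.save(model)

        def save(self, model):
            # models live inside the deck as JSON, so saving just
            # reserializes them; there is no separate DB table
            self.deck.flushModels()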
- IDs are now timestamped as with groups et al.
- The data field for plugins was also removed. Config info can be added to
deck.conf; larger data should be stored externally.
- Upgrading needs to be updated for the new model structure.
- HexifyID() now accepts strings as well, as our IDs get converted to strings
in the serialization process.
The approach of using incrementing id numbers works for syncing if we assume
the server is canonical and all other clients rewrite their ids as necessary,
but upon reflection it is not sufficient for merging decks in general, as we
have no way of knowing whether objects with the same id are actually the same
or not. So we need some way of uniquely identifying the object.
One approach would be to go back to Anki 1.0's random 64bit numbers, but as
outlined in a previous commit, such large numbers can't be handled easily in
some languages like Javascript, and they tend to be fragmented on disk, which
impacts performance. It's much better if we can keep content added at the same
time in the same place on disk, so that operations like syncing which are mainly
interested in newly added content can run faster.
Another approach is to add a separate column containing the unique id, which
is what Mnemosyne 2.0 will be doing. Unfortunately it means adding an index
for that column, leading to slower inserts and larger deck files. And if the
current sequential ids are kept, a bunch of code needs to be retained to
ensure ids don't conflict when merging.
To address the above, the plan is to use a millisecond timestamp as the id.
This ensures disk order reflects creation order, allows us to merge the id and
crt columns, avoids the need for a separate index, and saves us from worrying
about rewriting ids. There is of course a small chance that the objects to be
merged were created at exactly the same time, but this is extremely unlikely.
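A sketch of id assignment under this scheme, assuming a plain sqlite cursor
and that same-millisecond collisions are resolved by incrementing:

    import time

    def timestamp_id(db, table):
        # use the creation time in milliseconds as the primary key;
        # if that id is already taken, bump until it is unique
        t = int(time.time() * 1000)
        while db.execute("select id from %s where id = ?" % table,
                         (t,)).fetchone():
            t += 1
        return t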
This commit changes models. Other objects will follow.
When adding facts, you can now pass in a group id which the GUI should support
editing. Templates will have an optional group id which overrides the provided
id, so users can automatically put certain card types in a different group (or
all of them, if desired). Greying out the group box in the GUI in that case
would be a good idea.
Our goal is to allow decks created on one platform to use similar fonts (even
if named differently) when moving to another platform. The old solution wasn't
useful for the web version or mobile versions. Instead, we store a mapping in
the deck, and when generating the CSS, we list all possible fonts. An option
in the interface for the user to add extra fonts might be nice.
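For illustration (the mapping contents are assumptions, not the shipped
list):

    # per-family substitutions stored in the deck
    FONT_MAP = {"Mincho": ["MS Mincho", "Hiragino Mincho Pro",
                           "Kochi Mincho"]}

    def font_stack(family):
        # list every possible name so each platform finds one it knows
        return ", ".join([family] + FONT_MAP.get(family, []))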
The model now has a css column, and when it's flushed, it generates the css
for the fields and templates. This means we don't have to generate the CSS on
deck load anymore.
The hex cache has also been removed. Javascript couldn't handle big ints, but
since our ids are small numbers now, we no longer need a cache to efficiently
convert an id to hex.
Since Anki first moved to an SQL backend, it has stored fields in a fields
table, with one field per line. This is a natural layout in a relational
database, and it had some nice properties. It meant we could retrieve an
individual field of a fact, which we used for limiting searches to a
particular field, for sorting, and for determining if a field was unique, by
adding an index on the field value.
The index was very expensive, so as part of the early work towards 2.0 I added
a checksum field instead, and added an index to that. This was a lot cheaper
than storing the entire value twice for the purpose of fast searches, but it
only partly solved the problem. We still needed an index on factId so that we
could retrieve a given fact's fields quickly. For simple models this was
fairly cheap, but as the number of fields grows, the table grows very big:
with 25k facts of 30 fields each, the fields table reaches 750k entries. This
makes the factId and checksum indexes really expensive - with the q/a cache
removed, they account for about 30% of the deck in such a situation.
Equally problematic was sorting on those fields. Short of adding another
expensive index, a sort involves a table scan of the entire table.
We solve these problems by moving all fields into the facts table. For this to
work, we need to address some issues:
Sorting: we'll add an option to the model to specify the sort field. When
facts are modified, that field is written to a separate sort column. It can be
HTML stripped, and possibly truncated to a maximum number of letters. This
means that switching sort to a different field involves an expensive rewrite
of the sort column, but people tend to leave their sort field set to the same
value, and we don't need to clear the field if the user switches temporarily
to a non-field sort like due order. And it has the nice properties of allowing
different models to be sorted on different columns at the same time, and
makes it impossible for models to be hidden because the user has sorted on a
field which doesn't appear in some models.
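A sketch of the rewrite when a fact is modified (the column name and
truncation length are assumptions):

    import re

    def update_sort_column(db, fact_id, sort_value):
        # cache an HTML-stripped, truncated copy of the model's sort
        # field in a dedicated column
        stripped = re.sub(r"<[^>]+>", "", sort_value)[:128]
        db.execute("update facts set sfld = ? where id = ?",
                   (stripped, fact_id))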
Searching for words with embedded HTML: 1.2 introduced an HTML-stripped cache
of the fields content, which both sped up searches (since we didn't have to
search the possibly large fields table), and meant we could find "bob" in
"b<b>ob</b>" quickly. The ability to quickly search for words peppered with
HTML was nice, but it meant doubling the cost of storing text in many cases,
and meant that after any edit more data had to be written to the DB. Instead, we'll
do it on the fly. On this i7 computer, stripping HTML from all fields takes
1-2.6 seconds on 25-50k decks. We could possibly skip the stripping for people
who don't require it - the number of people who bold parts of words is
actually pretty small.
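A minimal sketch of the on-the-fly approach:

    import re

    HTML_RE = re.compile(r"<[^>]+>")

    def search(fields, query):
        # strip tags at search time so "bob" is found in "b<b>ob</b>"
        q = query.lower()
        return [f for f in fields if q in HTML_RE.sub("", f).lower()]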
Duplicate detection: one option would be to fetch all fields when the add
cards dialog or editor are opened. But this will be expensive on mobile
devices. Instead, we'll create a separate table of (fid, csum), with an index
on both columns. When we edit a fact, we delete all the existing checksums for
that fact, and add checksums for any fields that must be checked as unique. We
could optionally skip the index on csum - some benchmarking is required.
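A sketch of the upkeep on edit (table and column names are assumptions):

    import hashlib

    def update_checksums(db, fact_id, unique_field_values):
        # replace a fact's checksum rows after an edit
        db.execute("delete from fsums where fid = ?", (fact_id,))
        rows = [(fact_id, hashlib.md5(v.encode("utf8")).hexdigest()[:8])
                for v in unique_field_values]
        db.executemany("insert into fsums values (?, ?)", rows)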
As for the new table layout, creating separate columns for each field won't
scale. Instead, we store the fields in a single column, separated by an ascii
record separator. We split on that character when extracting from
the database, and join on it when writing to the DB.
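For example, assuming 0x1f (the ASCII unit separator) as the delimiter:

    SEP = "\x1f"  # a control character that never appears in field text

    def join_fields(fields):
        return SEP.join(fields)

    def split_fields(data):
        return data.split(SEP)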
Searching on a particular field in the browser will be accomplished by finding
all facts that match, and then unpacking to see if the relevant field matched.
Tags have been moved back to a separate column. Now that fields are on the
facts table, there is no need to pack them in as a field simply to avoid
another table hit.
Anki used random 64bit IDs for cards, facts and fields. This had some nice
properties:
- merging data in syncs and imports was simply a matter of copying each way,
as conflicts were astronomically unlikely
- it made it easy to identify identical cards and prevent them from being
reimported
But there were some negatives too:
- they're more expensive to store
- javascript can't handle numbers > 2**53, which means AnkiMobile, iAnki and
so on have to treat the ids as strings, which is slow
- simply copying data in a sync or import can lead to corruption, as while a
duplicate id indicates the data was originally the same, it may have
diverged. A more intelligent approach is necessary.
- sqlite was sorting the fields table based on the id, which meant the fields
were spread across the table, and costly to fetch
So instead, we'll move to incremental ids. In the case of model changes we'll
declare that a schema change and force a full sync to avoid having to deal
with conflicts, and in the case of cards and facts, we'll need to update the
ids on one end to merge. Identical cards can be detected by checking to see if
their id is the same and their creation time is the same.
Creation time has been added back to cards and facts because it's necessary
for sync conflict merging. That means facts.pos is not required.
The graves table has been removed. It's not necessary for schema related
changes, and dead cards/facts can be represented as a card with queue=-4 and
created=0. Because we will record schema modification time and can ensure a
full sync propagates to all endpoints, it means we can remove the dead
cards/facts on schema change.
Tags have been removed from the facts table and are represented as a field
with ord=-1 and fmid=0. Combined with the locality improvement for fields, it
means that fetching fields is not much more expensive than using the q/a
cache.
Because of the above, removing the q/a cache is a possibility now. The q and a
columns on cards have been dropped. It will still be necessary to render the
q/a on fact add/edit, since we need to record media references. It would be
nice to avoid this in the future. Perhaps one way would be the ability to
assign a type to fields, like "image", "audio", or "latex". LaTeX needs
special consideration anyway, as it was being rendered into the q/a cache.
- make sure we're actually stripping text in the field cache
- make sure a default group is added on upgrade
- make sure old style field references are upgraded
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
sizable performance hit due to introspection. Operations like fetching 50k
records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
not a big burden.
In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
moved into JSON objects. This simplifies serializing, and means we won't
need DB schema changes to store extra options in the future. This change
obsoletes the deckVars table.
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- a number of attribute names have been shortened
Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().
Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.
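E.g. (a sketch; the field values are illustrative):

    card = deck.getCard(cid)   # returns a Card object
    card.queue = 2             # mutate freely; mod time is untouched
    card.flush()               # writes to the DB and bumps the mod time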
Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.
Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.
The q/a cache and field cache generation has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().
Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new value every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when the q/a is updated. The
download function will be expanded to ask the user if they want to continue
after a certain number of files have failed to download, which should be an
adequate alternative. And we now add the file into the media DB when it's
copied to the media directory, not when the card is committed. This fixes
duplicates a user would get if they added the same media to a card twice
without adding the card.
The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is import anki; d
= anki.Deck(path).
deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.
And the DB wrapper has had a few changes:
- sql statements use a more standard DBAPI style:
- statement() -> execute()
- statements() -> executemany()
- called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
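A rough sketch of the new wrapper surface (not the actual implementation):

    import sqlite3

    class DB:
        def __init__(self, path):
            self._db = sqlite3.connect(path)

        def execute(self, sql, *args, **kwargs):
            # positional: execute(sql, 1, 2, 3)
            # named:      execute(sql, a=1, b=2, c=3)
            return self._db.execute(sql, kwargs or args)

        def executemany(self, sql, rows):
            return self._db.executemany(sql, rows)

        def list(self, sql, *args):
            # replaces column0(): first column of each row, as a list
            return [r[0] for r in self._db.execute(sql, args)]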
- removed 'created' column from various tables. We don't care when things like
models are created, and card creation time didn't reflect the actual time a
card was created
- facts were previously ordered by their creation date. The code would
manually offset the creation time of subsequent facts on import by 0.0001
seconds, and card due times were then set by adding ordinal*0.000001 to the
fact time. This was prone to error, and the number of zeros used was actually
different in different parts of the code. Instead of this, we
replace it with a 'pos' column on facts, which increments for each new fact.
- importing should add new facts with a higher pos, but concurrent updates in
a synced deck can have multiple facts with the same pos
- due times are completely different now, and depend on the card type
- new cards have due=fact.pos or random(0, 10000)
- reviews have due set to an integer representing days since deck
creation/download
- cards in the learn queue use an integer timestamp in seconds
- many columns like modified, lastSync, factor, interval, etc have been converted to
integer columns. They are cheaper to store (large decks can save 10s of
megabytes) and faster to search for.
- cards have their group assigned on fact creation. In the future we'll add a
per-template option for a default group.
- switch to due/random order for the review queue on upgrade. Users can still
switch to the old behaviour if they want, but many people don't care what
it's set to, and due is considerably faster, which may result in a better
user experience
Users who want to study small subsections at one time (eg, "lesson 14") are
currently best served by creating lots of little decks. This is because:
- selective study is a bit cumbersome to switch between
- the graphs and statistics are for the entire deck
- selective study can be slow on mobile devices - when the list of cards to
hide/show is big, or when there are many due cards, performance can suffer
- scheduling can only be configured per deck
Groups are intended to address the above problems. All cards start off in the
same group, but they can have their group changed. Unlike tags, cards can only
be a member of a single group at a time. This allows us to divide the deck
up into a non-overlapping set of cards, which will make things like showing
due counts for a single category considerably cheaper. The user interface
might want to show something like a deck browser for decks that have more than
one group, showing due counts and allowing people to study each group
individually, or to study all at once.
Instead of storing the scheduling config in the deck or the model, we move the
scheduling into a separate config table, and link that to the groups table.
That way a user can have multiple groups that all share the same scheduling
information if they want.
And deletion tracking is now in a single table.
- model config is now stored as a json-serialized dict, which allows us to
quickly gather the info and allows for adding extra options more easily in
the future
- denormalize modelId into the cards table, so we can get the model scheduling
information without having to hit the facts table
- remove position - since we will handle spacing differently, we don't need a
separate variable in addition to due to define sort order
- remove lastInterval from cards; the new cram mode and review early shouldn't
need it
- successive->streak
- add new columns for learn mode
- move cram mode into new file; learn more and review early need more thought
- initial work on learn mode
- initial unit tests
- move most scheduling parameters from deck to models
- remove obsolete fields in deck and models
- decks->deck
- remove deck id reference in models
- move some deckVars into the deck table
- simplify deckstorage
- lock sessionhelper by default
- add models/currentModel as properties instead of ORM mappings
- remove models.tags
- remove remaining support for memory-backed databases
- use a blank string for syncName instead of null
- remove backup code; will handle in gui
- bump version to 100
- update unit tests
Previously we had an index on the value field, which was very expensive for
long fields. Instead we use a separate column and take the first 8 characters
of the field value's md5sum, and index that. In decks with lots of text in
fields, it can cut the deck size by 30% or more, and many decks improve by
10-20%. Decks with only a few characters in fields may increase in size
slightly, but this is offset by the fact that we only generate a checksum for
fields that have uniqueness checking on.
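A sketch of the checksum as described:

    import hashlib

    def field_checksum(value):
        # first 8 characters of the md5 of the field value; only
        # computed for fields with uniqueness checking enabled
        return hashlib.md5(value.encode("utf8")).hexdigest()[:8]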
Also, fixed import->update reporting the total # of available facts instead of
the number of facts that were imported.
- media is no longer hashed, and instead stored in the db using its original
name
- when adding media, its checksum is calculated and used to look for
duplicates
- duplicate filenames will result in a number being tacked onto the filename
- the size column is used to count card references to media. If media is
referenced in a fact but not the question or answer, the count will be zero.
- there is no guarantee media will be listed in the media db if it is unused
on the question & answer
- if rebuildMediaDir(delete=True), then entries with zero references are
deleted, along with any unused files in the media dir.
- rebuildMediaDir() will update the internal checksums, and set the checksum
to "" if a file can't be found
- rebuildMediaDir() is a lot less destructive now, and will leave alone
directories it finds in the media folder (but not look in them either)
- rebuildMediaDir() returns more information about the state of media now
- the online and mobile clients will need to make sure that when downloading
media, entries with no checksum are non-fatal and do not abort the download
process.
- the ref count is updated every time the q/a is updated - so the db should be
up to date after every add/edit/import
- since we look for media on the q/a now, card templates like '<img
src="{{{field}}}">' will work now
- the 'export original files' option is gone, as it is not needed anymore
- move from per-model media URL to deckVar. downloadMissingMedia() uses this
now. Deck subscriptions will have to be updated to share media another way.
- pass deck in formatQA, as latex support is going to change
- obsolete spaceUntil - it serves no useful purpose
- the old per-model spacing variables are obsolete, as the new approach
requires uniform spacing across all models for new cards
- introduce a new per-deck variable: newSpacing
- don't fill new queue if we've done today's cards
- still need to check cramming / review early
newSpacing is a time in seconds to delay introduction of sibling new cards.
It can be applied as many times as necessary as there is no harm in new cards
being delayed repeatedly. Because the default queue length is 200 and it can
take quite some time for the spaced cards to be placed in the queue again, we
use a separate array to track spaced new cards, provided the configured delay
is less than 20 minutes. Even under 20 minutes this number is not a
guaranteed minimum spacing - if the new card queue is empty, the spaced cards
will be flushed before checking the new queue again, as otherwise we end up
trying to fill on every repetition. The due counts no longer decrease by more
than one if the spacing is less than the due cutoff, since that confused some
users.
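Roughly sketched (class and method names are assumptions):

    import time

    class NewQueue:
        def __init__(self, new_spacing):
            self.new_spacing = new_spacing  # seconds between siblings
            self.queue = []                 # normal new card queue
            self.spaced = []                # (ready_at, card) pairs

        def space_siblings(self, cards):
            # only tracked separately when the delay is under 20 minutes
            if self.new_spacing < 20 * 60:
                ready = time.time() + self.new_spacing
                self.spaced.extend((ready, c) for c in cards)

        def get_card(self):
            now = time.time()
            # move spaced cards back once their delay has elapsed; if the
            # normal queue is empty, flush them early rather than trying
            # to refill on every repetition
            while self.spaced and (self.spaced[0][0] <= now
                                   or not self.queue):
                self.queue.append(self.spaced.pop(0)[1])
            return self.queue.pop(0) if self.queue else None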
Review cards are now placed at the end of the current review queue, and will
never be rescheduled to a different day. The old approach had a number of
problems:
- the more card models you had, the more likely a card would be spaced
multiple times, resulting in you forgetting the card before you get a chance
to review it
- spacing was applied even if the due card was already late
- repeatedly failing one card over a period of days or weeks would also starve
the other cards of attention