The old template handling was too complicated, and generated frequent
questions on the forums. By dropping non-active templates we can do away with
the generate cards function, and advanced users can simulate the old behaviour
by using conditional field templates.
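For example (the field names here are illustrative), wrapping a card's front
template in a conditional section means the card is only generated when that
field is non-empty:

    {{#Translation}}
    {{Expression}}
    {{/Translation}}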
- if we store it inside the media folder, we inadvertently bump the folder mod
time every time sqlite creates a journal file
- close/reopen the media db as the deck is closed/opened
I removed the media database in an earlier commit, but it's now necessary
again as I decided to add native media syncing to AnkiWeb.
This time, the DB is stored in the media folder rather than with the deck.
This means we avoid sending it in a full sync, and makes deck backups faster.
The DB is a cache of file modtimes and checksums. When findChanges() is
called, the code checks to see which files were added, changed or deleted
since the last time, and updates the log of changes. Because the scanning step
and log retrieval are separate, it's possible to do the scanning in the
background if the need arises.
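Roughly, the scanning step looks like the sketch below; the table layout and
function names are illustrative, not the actual code:

    import os, hashlib

    def _md5(path):
        return hashlib.md5(open(path, "rb").read()).hexdigest()

    def scanChanges(mediaDir, db):
        # previous state from the cache: fname -> (mtime, csum)
        cache = dict((fname, (mtime, csum)) for fname, mtime, csum in
                     db.execute("select fname, mtime, csum from media"))
        added, changed = [], []
        for fname in os.listdir(mediaDir):
            if fname == "media.db":  # skip our own cache (name assumed)
                continue
            path = os.path.join(mediaDir, fname)
            mtime = int(os.stat(path).st_mtime)
            if fname not in cache:
                added.append(fname)
            elif mtime != cache[fname][0] and _md5(path) != cache[fname][1]:
                # a differing mtime is a cheap hint; confirm with a checksum
                changed.append(fname)
            cache.pop(fname, None)
        # anything left in the cache is no longer on disk
        deleted = list(cache)
        return added, changed, deleted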
If the DB is deleted by the user, Anki will forget any deletions, and add all
the files back to the DB the next time it's accessed.
File changes are recorded as a delete + add.
media.addFile() could be optimized in the future to log media added manually
by the user, allowing us to skip the full directory scan in cases where the
only changes were manually added media.
- moved tags into JSON, as with previous changes, and dropped the unnecessary id
- added tags.py for a tag manager
- moved the tag utilities from utils into tags.py
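The utilities are simple string helpers, something like the following sketch
(the signatures are assumptions):

    import re

    def parseTags(tags):
        # split a space/comma separated string into a unique, sorted list
        return sorted(set(t for t in re.split("[ ,]+", tags) if t))

    def joinTags(tagList):
        # pad with spaces so a single tag can be matched with like '% tag %'
        return (" %s " % " ".join(tagList)) if tagList else ""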
The media table was originally introduced when Anki hashed media filenames,
and needed a way to remember the original filename. It also helped with:
1) getting a quick list of all media used in the deck, or the media added
since the last sync, for mobile clients
2) merging identical files with different names
But it had some drawbacks:
- every operation that modifies templates, models or facts meant generating
the q/a and checking if any new media had appeared
- each entry is about 70 bytes, and some decks have 100k+ media files
So we remove the media table. We address 1) by being more intelligent about
media downloads on the mobile platform. We ask the user after a full sync if
they want to look for missing media, and they can choose not to if they know
they haven't added any. And on a partial sync, we can scan the contents of the
incoming facts for media references, and download any references we find. This
also avoids all the issues people had with media not downloading because it
was in their media folder but not in the media database.
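A sketch of that reference scan; the patterns are illustrative, and the real
code needs to cope with more quoting variants:

    import re

    # <img src=...> in field HTML, plus [sound:...] references
    regexps = (r'<img[^>]+src=["\']?([^"\'>]+)["\']?',
               r'\[sound:([^]]+)\]')

    def filesInFields(text):
        found = []
        for pat in regexps:
            found.extend(re.findall(pat, text))
        return found

On a partial sync we can run this over each incoming fact's fields and fetch
any referenced file we don't already have.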
For 2), when copying media to the media folder, if we have a duplicate
filename, we check if that file has the same md5, and avoid copying if so.
This won't merge identical content that has separate names, but instances
where users need that are rare.
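The copy logic then becomes something like this sketch; the renaming scheme
for same-name/different-content files is an assumption:

    import os, shutil, hashlib

    def _md5(path):
        return hashlib.md5(open(path, "rb").read()).hexdigest()

    def addFile(mediaDir, srcPath):
        fname = os.path.basename(srcPath)
        dstPath = os.path.join(mediaDir, fname)
        if not os.path.exists(dstPath):
            shutil.copy2(srcPath, dstPath)
            return fname
        if _md5(dstPath) == _md5(srcPath):
            # same name and same content: reuse the existing file
            return fname
        # same name but different content: pick a fresh name
        root, ext = os.path.splitext(fname)
        n = 1
        while os.path.exists(os.path.join(
                mediaDir, "%s-%d%s" % (root, n, ext))):
            n += 1
        fname = "%s-%d%s" % (root, n, ext)
        shutil.copy2(srcPath, os.path.join(mediaDir, fname))
        return fname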
Since Anki first moved to an SQL backend, it has stored fields in a fields
table, with one field per row. This is a natural layout in a relational
database, and it had some nice properties. It meant we could retrieve an
individual field of a fact, which we used for limiting searches to a
particular field, for sorting, and for determining if a field was unique, by
adding an index on the field value.
The index was very expensive, so as part of the early work towards 2.0 I added
a checksum field instead, and added an index to that. This was a lot cheaper
than storing the entire value twice for the purpose of fast searches, but it
only partly solved the problem. We still needed an index on factId so that we
could retrieve a given fact's fields quickly. For simple models this was
fairly cheap, but as the number of fields grows, the table grows very big:
with 25k facts of 30 fields each, the fields table holds 750k entries. This
makes the factId and checksum indexes really expensive - with the q/a cache
removed, they can account for about 30% of the deck size in such a situation.
Equally problematic was sorting on those fields. Short of adding another
expensive index, a sort requires a scan of the entire table.
We solve these problems by moving all fields into the facts table. For this to
work, we need to address some issues:
Sorting: we'll add an option to the model to specify the sort field. When
facts are modified, that field is written to a separate sort column. It can be
HTML stripped, and possibly truncated to a maximum number of letters. This
means that switching sort to a different field involves an expensive rewrite
of the sort column, but people tend to leave their sort field set to the same
value, and we don't need to clear the field if the user switches temporarily
to a non-field sort like due order. It also has the nice properties of
allowing different models to be sorted on different fields at the same time,
and of making it impossible for facts to be hidden because the user has
sorted on a field which doesn't appear in their model.
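The flush-time update might look like this sketch; the column name, the
truncation limit and the helper are assumptions:

    import re

    SORT_MAX = 64  # assumed truncation limit

    def stripHTML(text):
        return re.sub("<[^>]+>", "", text)

    def updateSortField(db, factId, fields, sortIdx):
        # sortIdx comes from the model's new sort field option
        val = stripHTML(fields[sortIdx])[:SORT_MAX]
        db.execute("update facts set sfld = ? where id = ?",
                   (val, factId))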
Searching for words with embedded HTML: 1.2 introduced an HTML-stripped cache
of the fields content, which both sped up searches (since we didn't have to
search the possibly large fields table), and meant we could find "bob" in
"b<b>ob</b>" quickly. The ability to quickly search for words peppered with
HTML was nice, but it meant doubling the cost of storing text in many cases,
and meant that after any edit more data had to be written to the DB. Instead,
we'll
do it on the fly. On this i7 computer, stripping HTML from all fields takes
1-2.6 seconds on decks of 25-50k facts. We could possibly skip the stripping for people
who don't require it - the number of people who bold parts of words is
actually pretty small.
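Searching then strips as it scans, along these lines (again a sketch, with an
assumed column name):

    import re

    def stripHTML(text):
        return re.sub("<[^>]+>", "", text)

    def findFacts(db, query):
        # no stripped cache: strip each fact's fields on the fly,
        # so "bob" still matches "b<b>ob</b>"
        q = query.lower()
        return [fid for fid, flds in
                db.execute("select id, flds from facts")
                if q in stripHTML(flds).lower()]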
Duplicate detection: one option would be to fetch all fields when the add
cards dialog or editor is opened. But this would be expensive on mobile
devices. Instead, we'll create a separate table of (fid, csum), with an index
on both columns. When we edit a fact, we delete all the existing checksums for
that fact, and add checksums for any fields that must be checked as unique. We
could optionally skip the index on csum - some benchmarking is required.
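In rough terms (the table and column names, and the checksum choice, are
assumptions):

    import hashlib

    def fieldChecksum(text):
        # first 8 hex digits of the hash keeps the integer small
        return int(hashlib.md5(text.encode("utf8")).hexdigest()[:8], 16)

    def setupFsums(db):
        db.executescript("""
    create table if not exists fsums (fid integer, csum integer);
    create index if not exists ix_fsums_fid on fsums (fid);
    create index if not exists ix_fsums_csum on fsums (csum);
    """)

    def updateChecksums(db, factId, uniqueFields):
        # on edit, replace this fact's checksums wholesale
        db.execute("delete from fsums where fid = ?", (factId,))
        db.executemany("insert into fsums values (?, ?)",
                       [(factId, fieldChecksum(f)) for f in uniqueFields])

A duplicate check is then a single indexed lookup: select fid from fsums
where csum = ? and fid != ?.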
As for the new table layout, creating separate columns for each field won't
scale. Instead, we store the fields in a single column, separated by an ASCII
record separator. We split on that character when extracting from
the database, and join on it when writing to the DB.
Searching on a particular field in the browser will be accomplished by finding
all facts that match, and then unpacking to see if the relevant field matched.
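A sketch of the packing and the per-field search, assuming the 0x1f control
character as the separator byte:

    FSEP = "\x1f"

    def joinFields(fieldList):
        return FSEP.join(fieldList)

    def splitFields(flds):
        return flds.split(FSEP)

    def searchField(db, fieldIdx, query):
        # cheap LIKE to narrow the candidates, then unpack to confirm the
        # match is in the right field; assumes the candidates have already
        # been limited to facts of a single model
        matches = []
        for fid, flds in db.execute(
                "select id, flds from facts where flds like ?",
                ("%" + query + "%",)):
            if query in splitFields(flds)[fieldIdx]:
                matches.append(fid)
        return matches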
Tags have been moved back to a separate column. Now that fields are on the
facts table, there is no need to pack tags in as a field simply to avoid
another table hit.