Commit graph

15 commits

Author SHA1 Message Date
Damien Elmes
ddcf83bc7f add [] to cloze 2011-04-29 11:43:06 +09:00
Damien Elmes
0fdb966766 wrap clozes in a classed span instead of bold 2011-04-28 09:24:03 +09:00
Damien Elmes
03d00228aa support stripping context of cloze 2011-04-28 09:24:03 +09:00
Damien Elmes
98a63285e1 port changeModel() 2011-04-28 09:24:02 +09:00
Damien Elmes
93d80678f9 _fields -> fields 2011-04-28 09:24:02 +09:00
Damien Elmes
de81f0238a template moving 2011-04-28 09:24:01 +09:00
Damien Elmes
1d91e459ec wrap the q/a in the card template div 2011-04-28 09:23:57 +09:00
Damien Elmes
ed75e4bee2 refactor cloze deletions and text:field, add forecast
Previously cloze deletions were handled by copying the contents of one field
into another and applying transforms to it. This had a number of problems:
- after you add a card, you can't undo the cloze deletion
- if you spot a mistake, you have to edit it twice (or more if you have more
  than one cloze for a sentence)
- making multiple clozes requires copying & pasting the sentence multiple
  times
- this also led to much bigger decks if the sentences being cloze-deleted are
  large
- related clozes can't be spaced apart as siblings

To address these issues, we introduce the idea of cloze tags in the card
template and fields. If the template has the text:

{{cloze:1:field}}

And a field has the following contents:

{{c1::hello}}

Then the template will automatically replace that part of the text with either
occluded text, or a highlighted answer. All other clozes in the field are
displayed normally.
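
A minimal sketch of how such a cloze filter might behave (the function name,
regex and markup below are illustrative assumptions, not the shipped code):

    import re

    # illustrative only: swap the requested cloze ordinal for occluded text
    # (question side) or a highlighted answer, and show every other cloze in
    # the field normally.
    CLOZE = re.compile(r"\{\{c(\d+)::(.+?)\}\}")

    def render_cloze(text, ord, question=True):
        def repl(m):
            if int(m.group(1)) == ord:
                content = "[...]" if question else m.group(2)
                return "<span class=cloze>%s</span>" % content
            return m.group(2)  # other clozes: markers stripped, text kept
        return CLOZE.sub(repl, text)

    # render_cloze("{{c1::hello}} world", 1)
    #   -> "<span class=cloze>[...]</span> world"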

At the same time, we add support for text: to the template library itself,
instead of manually creating a text: entry in the dict for every field.

Finally, add a forecast routine to get the due counts for the next week, which
is used in the GUI.
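
A rough sketch of what such a forecast query could look like (the cards,
queue and due names are schema assumptions for illustration):

    def forecast(db, today, days=7):
        # count the review cards coming due on each of the next `days` days;
        # the table/column names here are assumptions, not necessarily real
        return [db.scalar(
                    "select count() from cards where queue = 2 and due = ?",
                    today + i)
                for i in range(days)]
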
2011-04-28 09:23:56 +09:00
Damien Elmes
e728d49232 delete -> del for consistency 2011-04-28 09:23:55 +09:00
Damien Elmes
0db95ded74 add template deletion & unit test 2011-04-28 09:23:54 +09:00
Damien Elmes
4638d3de46 implement field add/delete/rename/move
- now in model.py instead of deck.py
- added unit tests
2011-04-28 09:23:54 +09:00
Damien Elmes
93dcfceffe convert templates to a json object, and replace tid with ord
it's faster for us to parse another json string than pull a record from a
separate db table, and this makes templates and fields consistent
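
Roughly, the idea is something like this (column and key names are
assumptions):

    import json

    def templates_for_model(db, mid):
        # one json string on the model row instead of a separate table;
        # 'ord' takes over from the old per-template tid
        blob = db.scalar("select tmpls from models where id = ?", mid)
        return sorted(json.loads(blob), key=lambda t: t["ord"])
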
2011-04-28 09:23:54 +09:00
Damien Elmes
1078285f0f change field storage format, improve upgrade speed
Since Anki first moved to an SQL backend, it has stored fields in a fields
table, with one field per line. This is a natural layout in a relational
database, and it had some nice properties. It meant we could retrieve an
individual field of a fact, which we used for limiting searches to a
particular field, for sorting, and for determining if a field was unique, by
adding an index on the field value.

The index was very expensive, so as part of the early work towards 2.0 I added
a checksum field instead, and added an index to that. This was a lot cheaper
than storing the entire value twice for the purpose of fast searches, but it
only partly solved the problem. We still needed an index on factId so that we
could retrieve a given fact's fields quickly. For simple models this was
fairly cheap, but as the number of fields grows, the table gets very big: with
25k facts of 30 fields each, the fields table reaches 750k entries. This makes
the factId and checksum indices really expensive - with the q/a cache removed,
they account for about 30% of the deck in such a situation.

Equally problematic was sorting on those fields. Short of adding another
expensive index, a sort involves scanning the entire table.

We solve these problems by moving all fields into the facts table. For this to
work, we need to address some issues:

Sorting: we'll add an option to the model to specify the sort field. When
facts are modified, that field is written to a separate sort column. It can be
HTML stripped, and possibly truncated to a maximum number of letters. This
means that switching sort to a different field involves an expensive rewrite
of the sort column, but people tend to leave their sort field set to the same
value, and we don't need to clear the field if the user switches temporarily
to a non-field sort like due order. And it has the nice properties of allowing
different models to be sorted on different columns at the same time, and of
making it impossible for models to be hidden because the user has sorted on a
field which doesn't appear in some models.
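
A sketch of the sort-column update described above (function and column names
are illustrative assumptions):

    import re

    def update_sort_column(db, fact_id, fields, sort_ord, max_len=64):
        # write a stripped (and optionally truncated) copy of the chosen sort
        # field into its own column, so sorting never has to unpack all fields
        val = re.sub(r"<[^>]+>", "", fields[sort_ord])[:max_len]
        db.execute("update facts set sfld = ? where id = ?", val, fact_id)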

Searching for words with embedded HTML: 1.2 introduced an HTML-stripped cache
of the fields content, which both sped up searches (since we didn't have to
search the possibly large fields table), and meant we could find "bob" in
"b<b>ob</b>" quickly. The ability to quickly search for words peppered with
HTML was nice, but it meant doubling the cost of storing text in many cases,
and meant that after any edit, more data had to be written to the DB. Instead,
we'll do it on the fly. On this i7 computer, stripping HTML from all fields
do it on the fly. On this i7 computer, stripping HTML from all fields takes
1-2.6 seconds on 25-50k decks. We could possibly skip the stripping for people
who don't require it - the number of people who bold parts of words is
actually pretty small.
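
For example, stripping at search time rather than keeping a cached copy might
look like this simplified sketch:

    import re

    def strip_html(txt):
        # enough to match words split by tags; not a full HTML parser
        return re.sub(r"<[^>]+>", "", txt)

    def field_matches(field_text, term):
        return term.lower() in strip_html(field_text).lower()

    assert field_matches("b<b>ob</b>", "bob")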

Duplicate detection: one option would be to fetch all fields when the add
cards dialog or editor is opened. But this would be expensive on mobile
devices. Instead, we'll create a separate table of (fid, csum), with an index
on both columns. When we edit a fact, we delete all the existing checksums for
that fact, and add checksums for any fields that must be checked as unique. We
could optionally skip the index on csum - some benchmarking is required.
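
A sketch of the checksum bookkeeping (the fsums table name and the exact
checksum scheme are assumptions here):

    import hashlib

    def field_checksum(text):
        # 32-bit checksum: first 8 hex digits of the SHA1 of the field text
        return int(hashlib.sha1(text.encode("utf-8")).hexdigest()[:8], 16)

    def update_checksums(db, fid, unique_fields):
        # drop the fact's old rows, then add one (fid, csum) per unique field
        db.execute("delete from fsums where fid = ?", fid)
        for txt in unique_fields:
            db.execute("insert into fsums values (?, ?)",
                       fid, field_checksum(txt))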

As for the new table layout, creating separate columns for each field won't
scale. Instead, we store the fields in a single column, separated by an ASCII
record separator. We split on that character when extracting from
the database, and join on it when writing to the DB.

Searching on a particular field in the browser will be accomplished by finding
all facts that match, and then unpacking to see if the relevant field matched.
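
Packing and field-limited search could then look roughly like this (the 0x1f
separator and the flds column name are assumptions):

    SEP = "\x1f"  # an ASCII separator character; 0x1f assumed for illustration

    def join_fields(fields):
        return SEP.join(fields)

    def split_fields(packed):
        return packed.split(SEP)

    def facts_matching_field(db, term, field_ord):
        # cheap SQL prefilter, then unpack to check the specific field
        out = []
        for fid, flds in db.execute(
                "select id, flds from facts where flds like ?",
                "%" + term + "%"):
            if term in split_fields(flds)[field_ord]:
                out.append(fid)
        return out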

Tags have been moved back to a separate column. Now that fields are on the
facts table, there is no need to pack them in as a field simply to avoid
another table hit.
2011-04-28 09:23:53 +09:00
Damien Elmes
9c247f45bd remove q/a cache, tags in fields, rewrite remaining ids, more
Anki used random 64bit IDs for cards, facts and fields. This had some nice
properties:
- merging data in syncs and imports was simply a matter of copying each way,
  as conflicts were astronomically unlikely
- it made it easy to identify identical cards and prevent them from being
  reimported
But there were some negatives too:
- they're more expensive to store
- javascript can't handle numbers > 2**53, which means AnkiMobile, iAnki and
  so on have to treat the ids as strings, which is slow
- simply copying data in a sync or import can lead to corruption, as while a
  duplicate id indicates the data was originally the same, it may have
  diverged. A more intelligent approach is necessary.
- sqlite was sorting the fields table based on the id, which meant the fields
  were spread across the table, and costly to fetch

So instead, we'll move to incremental ids. In the case of model changes we'll
treat that as a schema change and force a full sync to avoid having to deal
with conflicts, and in the case of cards and facts, we'll need to update the
ids on one end to merge. Identical cards can be detected by checking to see if
their id is the same and their creation time is the same.
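
In other words, something along these lines (attribute names are assumptions):

    def is_same_card(local, remote):
        # with incremental ids an id match alone no longer proves identity,
        # so require the creation time to match as well
        return local.id == remote.id and local.crt == remote.crt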

Creation time has been added back to cards and facts because it's necessary
for sync conflict merging. That means facts.pos is not required.

The graves table has been removed. It's not necessary for schema-related
changes, and dead cards/facts can be represented as a card with queue=-4 and
created=0. Because we will record the schema modification time and can ensure a
full sync propagates to all endpoints, we can remove the dead cards/facts on
schema change.

Tags have been removed from the facts table and are represented as a field
with ord=-1 and fmid=0. Combined with the locality improvement for fields, it
means that fetching fields is not much more expensive than using the q/a
cache.

Because of the above, removing the q/a cache is a possibility now. The q and a
columns on cards have been dropped. It will still be necessary to render the
q/a on fact add/edit, since we need to record media references. It would be
nice to avoid this in the future. Perhaps one way would be the ability to
assign a type to fields, like "image", "audio", or "latex". LaTeX needs special
consideration anyway, as it was being rendered into the q/a cache.
2011-04-28 09:23:53 +09:00
Damien Elmes
668a58b65a update unit tests for sqlalchemy drop 2011-04-28 09:23:53 +09:00