Commit graph

43 commits

Author SHA1 Message Date
Damien Elmes
74518264e0 raise an exception when an invalid sort type is passed 2011-11-30 12:34:49 +09:00
Damien Elmes
be3258b0a8 fix field limiting 2011-11-28 20:51:47 +09:00
Damien Elmes
73b80211c2 add current deck limit 2011-11-28 20:48:14 +09:00
Damien Elmes
8c1266ab2a less terse card state searches 2011-11-28 20:18:16 +09:00
Damien Elmes
67e4f0d1cc tweak find code 2011-11-28 20:04:39 +09:00
Damien Elmes
f7790275ce groups -> decks 2011-11-23 19:28:09 +09:00
Damien Elmes
279a942642 deck -> collection 2011-11-23 17:47:44 +09:00
Damien Elmes
6e4e8249fb facts -> notes 2011-11-23 12:37:21 +09:00
Damien Elmes
bc9f6e6a24 add USNs
Decks now have an "update sequence number". All objects also have a USN, which
is set to the deck USN each time they are modified. When syncing, each side
sends any objects with a USN >= clientUSN. When objects are copied via sync,
they have their USNs bumped to the current serverUSN. After a sync, the USN on
both sides is set to serverUSN + 1.

This solves the failing three way test, ensures we receive all changes
regardless of clock drift, and as the revlog also has a USN now, ensures that
old revlog entries are imported properly too.

Objects retain a separate modification time, which is used for conflict
resolution, deck subscriptions/importing, and info for the user.

Note that if the clock is too far off, it will still cause confusion for
users, as the due counts may be different depending on the time. For this
reason it's probably a good idea to keep a limit on how far the clock can
deviate.

We still keep track of the last sync time, but only so we can determine if the
schema has changed since the last sync.

The media code needs to be updated to use USNs too.
2011-09-13 21:10:21 +09:00
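[A rough sketch of the USN scheme described above, assuming a plain sqlite3
connection and a facts table with a usn column; the function names are
illustrative, not libanki's actual sync API.]

    import sqlite3

    def objects_to_send(db: sqlite3.Connection, client_usn: int):
        # each side sends any objects with a USN >= the client's USN
        return [r[0] for r in db.execute(
            "select id from facts where usn >= ?", (client_usn,))]

    def apply_received(db: sqlite3.Connection, ids, server_usn: int):
        # objects copied across during the sync have their USNs bumped to
        # the current serverUSN
        db.executemany("update facts set usn = ? where id = ?",
                       [(server_usn, i) for i in ids])

    def next_usn(server_usn: int) -> int:
        # after the sync, both ends store serverUSN + 1
        return server_usn + 1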
Damien Elmes
78600e8ed6 move group code into a registry like models 2011-08-27 23:45:55 +09:00
Damien Elmes
d3a3edb707 move models into the deck table
Like the previous change, models have been moved from a separate DB table to
an entry in the deck. We need them for many operations including reviewing,
and it's easier to keep them in memory than half on disk with a cache that
gets cleared every time we .reset(). This means they are easily serialized as
well - previously they were part Python and part JSON, which made access
confusing.

Because the data is all pulled from JSON now, the instance methods have been
moved to the model registry. Eg:
  model.addField(...) -> deck.models.addField(model, ...).

- IDs are now timestamped as with groups et al.

- The data field for plugins was also removed. Config info can be added to
  deck.conf; larger data should be stored externally.

- Upgrading needs to be updated for the new model structure.

- HexifyID() now accepts strings as well, as our IDs get converted to strings
  in the serialization process.
2011-08-27 22:27:09 +09:00
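[A minimal sketch of the registry pattern described above, with models kept as
JSON in the deck row; the key and column names here are assumptions for
illustration.]

    import json

    class ModelRegistry:
        def __init__(self, deck, models_json):
            self.deck = deck
            # models are plain dicts keyed by their (stringified) id
            self.models = json.loads(models_json)

        def addField(self, model, field):
            # operates on the model dict passed in, replacing the old
            # model.addField() instance method; 'flds' is an assumed key name
            model['flds'].append(field)
            self.save()

        def save(self):
            # everything is JSON, so serializing is a single dumps() call
            self.deck.db.execute("update deck set models = ?",
                                 (json.dumps(self.models),))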
Damien Elmes
7afe6a9a7d convert groups to json; use timestamp ids for all but default
Rather than use a combination of id lookups on the groups table and a group
configuration cache in the scheduler, I've moved the groups and group config
into json objects on the deck table. This results in a net saving of code and
saves one or more DB lookups on each card answer, in exchange for a small
increase in deck load/save work.

I did a quick survey of AnkiWeb; the vast majority of decks use fewer than
100 tags, and it's safe to assume groups will follow a similar pattern.

All groups and group configs except the default one will use integer
timestamps now, to simplify merging when syncing and importing.

defaultGroup() has been removed in favour of keeping the models up to date
(not yet done).
2011-08-27 17:13:04 +09:00
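[A sketch of the timestamp-id scheme mentioned above; the millisecond
precision and the collision handling are assumptions.]

    import time

    def timestamp_id(existing_ids):
        # new groups and group configs get an integer timestamp as their id,
        # nudged forward on collision; only the default group keeps a fixed id
        t = int(time.time() * 1000)
        while t in existing_ids:
            t += 1
        return t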
Damien Elmes
644a885a07 update fact ids, graves
- we should never skip recording graves, for the sake of merging
- the 1.0 upgrade will fail on decks where facts share the same creation date;
  we need to work around this in the future
2011-08-26 21:23:16 +09:00
Damien Elmes
a65f241258 gracefully handle invalid queries 2011-04-29 11:46:26 +09:00
Damien Elmes
2243b691cc add full search to ignore formatting 2011-04-28 09:24:03 +09:00
Damien Elmes
8b747c3aac allow the user to choose if case should be folded 2011-04-28 09:24:03 +09:00
Damien Elmes
bee14e1a0b make fieldNames() available for gui code too 2011-04-28 09:24:03 +09:00
Damien Elmes
d51cd5a433 find & replace 2011-04-28 09:24:03 +09:00
Damien Elmes
8c1c729544 fix card state negation 2011-04-28 09:24:02 +09:00
Damien Elmes
437b4780c0 add missing cardIvl sort 2011-04-28 09:24:02 +09:00
Damien Elmes
1b3548f164 fix sorting by ease 2011-04-28 09:24:02 +09:00
Damien Elmes
2a225b1fae use the sort property set in the deck 2011-04-28 09:24:02 +09:00
Damien Elmes
d089deae5a add the ability to reverse sort order 2011-04-28 09:24:02 +09:00
Damien Elmes
6617214708 add recent list; fix groups 2011-04-28 09:24:02 +09:00
Damien Elmes
7ffff24790 fix case where multiple models have the same template 2011-04-28 09:24:02 +09:00
Damien Elmes
d6116a5377 model and group searching 2011-04-28 09:24:02 +09:00
Damien Elmes
9ebbdc2be1 sort by card ordinal after fact sort 2011-04-28 09:24:02 +09:00
Damien Elmes
efe6177c7a refactor ordering 2011-04-28 09:24:02 +09:00
Damien Elmes
291bd399b7 field searching
dropped support for field:foo, as you can type 'foo:' instead to accomplish
the same thing
2011-04-28 09:24:01 +09:00
Damien Elmes
57938927e7 users can pass a number for template ordinal; makes show:one obsolete 2011-04-28 09:24:01 +09:00
Damien Elmes
94d4e319ae fids and template searches 2011-04-28 09:24:01 +09:00
Damien Elmes
a13c18cf4b start refactoring find code 2011-04-28 09:24:01 +09:00
Damien Elmes
2343222245 remove rest of q/a searching 2011-04-28 09:24:01 +09:00
Damien Elmes
69a9d665e3 question/answer searching is obsolete 2011-04-28 09:24:01 +09:00
Damien Elmes
2dfdfad6f2 update license link 2011-04-28 09:24:01 +09:00
Damien Elmes
8fcc6b3085 gpl3->agpl 2011-04-28 09:24:01 +09:00
Damien Elmes
368bdf8d05 set the new types on upgrade 2011-04-28 09:23:56 +09:00
Damien Elmes
1078285f0f change field storage format, improve upgrade speed
Since Anki first moved to an SQL backend, it has stored fields in a fields
table, with one field per line. This is a natural layout in a relational
database, and it had some nice properties. It meant we could retrieve an
individual field of a fact, which we used for limiting searches to a
particular field, for sorting, and for determining if a field was unique, by
adding an index on the field value.

The index was very expensive, so as part of the early work towards 2.0 I added
a checksum field instead, and added an index to that. This was a lot cheaper
than storing the entire value twice for the purpose of fast searches, but it
only partly solved the problem. We still needed an index on factId so that we
could retrieve a given fact's fields quickly. For simple models this was
fairly cheap, but as the number of fields grows the table grows very big: with
25k facts of 30 fields each, the fields table swells to 750k entries. This
makes the factId index and checksum index really expensive - with the q/a
cache removed, the two indices account for about 30% of the deck in such a
situation.

Equally problematic was sorting on those fields. Short of adding another
expensive index, a sort involves a table scan of the entire table.

We solve these problems by moving all fields into the facts table. For this to
work, we need to address some issues:

Sorting: we'll add an option to the model to specify the sort field. When
facts are modified, that field is written to a separate sort column. It can be
HTML stripped, and possibly truncated to a maximum number of letters. This
means that switching sort to a different field involves an expensive rewrite
of the sort column, but people tend to leave their sort field set to the same
value, and we don't need to clear the field if the user switches temporarily
to a non-field sort like due order. And it has the nice properties of allowing
different models to be sorted on different columns at the same time, and
makes it impossible for models to be hidden because the user has sorted on a
field which doesn't appear in some models.

Searching for words with embedded HTML: 1.2 introduced an HTML-stripped cache
of the fields content, which both sped up searches (since we didn't have to
search the possibly large fields table) and meant we could find "bob" in
"b<b>ob</b>" quickly. The ability to quickly search for words peppered with
HTML was nice, but it meant doubling the cost of storing text in many cases,
and meant more data had to be written to the DB after any edit. Instead, we'll
do it on the fly. On this i7 computer, stripping HTML from all fields takes
1-2.6 seconds on 25-50k decks. We could possibly skip the stripping for people
who don't require it - the number of people who bold parts of words is
actually pretty small.

Duplicate detection: one option would be to fetch all fields when the add
cards dialog or editor are opened. But this will be expensive on mobile
devices. Instead, we'll create a separate table of (fid, csum), with an index
on both columns. When we edit a fact, we delete all the existing checksums for
that fact, and add checksums for any fields that must be checked as unique. We
could optionally skip the index on csum - some benchmarking is required.

As for the new table layout, creating separate columns for each field won't
scale. Instead, we store the fields in a single column, separated by an ascii
record separator. We split on that character when extracting from
the database, and join on it when writing to the DB.

Searching on a particular field in the browser will be accomplished by finding
all facts that match, and then unpacking to see if the relevant field matched.

Tags have been moved back to a separate column. Now that fields are on the
facts table, there is no need to pack them in as a field simply to avoid
another table hit.
2011-04-28 09:23:53 +09:00
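[A minimal sketch of the field packing and duplicate-detection pieces
described above; the 0x1f separator character and the sha1-based checksum are
assumptions used for illustration.]

    import hashlib

    SEP = "\x1f"  # assumed control character used to pack fields into one column

    def join_fields(fields):
        # written to the facts table as a single string
        return SEP.join(fields)

    def split_fields(data):
        # unpacked when reading from the DB, e.g. to check whether a
        # particular field matched a browser search
        return data.split(SEP)

    def field_checksum(text):
        # duplicate detection: a small integer checksum of the field text,
        # stored in a separate (fid, csum) table with an index on both columns
        return int(hashlib.sha1(text.encode("utf-8")).hexdigest()[:8], 16)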
Damien Elmes
9c247f45bd remove q/a cache, tags in fields, rewrite remaining ids, more
Anki used random 64bit IDs for cards, facts and fields. This had some nice
properties:
- merging data in syncs and imports was simply a matter of copying each way,
  as conflicts were astronomically unlikely
- it made it easy to identify identical cards and prevent them from being
  reimported
But there were some negatives too:
- they're more expensive to store
- javascript can't handle numbers > 2**53, which means AnkiMobile, iAnki and
  so on have to treat the ids as strings, which is slow
- simply copying data in a sync or import can lead to corruption, as while a
  duplicate id indicates the data was originally the same, it may have
  diverged. A more intelligent approach is necessary.
- sqlite was sorting the fields table based on the id, which meant the fields
  were spread across the table, and costly to fetch

So instead, we'll move to incremental ids. In the case of model changes we'll
declare that a schema change and force a full sync to avoid having to deal
with conflicts, and in the case of cards and facts, we'll need to update the
ids on one end to merge. Identical cards can be detected by checking to see if
their id is the same and their creation time is the same.

Creation time has been added back to cards and facts because it's necessary
for sync conflict merging. That means facts.pos is not required.

The graves table has been removed. It's not necessary for schema related
changes, and dead cards/facts can be represented as a card with queue=-4 and
created=0. Because we will record schema modification time and can ensure a
full sync propagates to all endpoints, it means we can remove the dead
cards/facts on schema change.

Tags have been removed from the facts table and are represented as a field
with ord=-1 and fmid=0. Combined with the locality improvement for fields, it
means that fetching fields is not much more expensive than using the q/a
cache.

Because of the above, removing the q/a cache is a possibility now. The q and a
columns on cards have been dropped. It will still be necessary to render the
q/a on fact add/edit, since we need to record media references. It would be
nice to avoid this in the future. Perhaps one way would be the ability to
assign a type to fields, like "image", "audio", or "latex". LaTeX needs
special consideration anyway, as it was being rendered into the q/a cache.
2011-04-28 09:23:53 +09:00
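[A small sketch of the identical-card check described above under the new
incremental-id scheme; the attribute names are illustrative.]

    def same_object(local, remote):
        # with incremental ids, a matching id alone no longer proves two
        # cards/facts are the same thing; creation time must match too
        return local.id == remote.id and local.created == remote.created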
Damien Elmes
2f27133705 drop sqlalchemy; massive refactor
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
  sizable performance hit due to introspection. Operations like fetching 50k
  records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
  is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
  not a big burden.

In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
  moved into JSON objects. This simplifies serializing, and means we won't
  need DB schema changes to store extra options in the future. This change
  obsoletes the deckVars table.
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- a number of attribute names have been shortened

Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().

Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.

Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.

Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.

The q/a cache and field cache generation has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().

Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new value every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when it's updated. The
download function will be expanded to ask the user if they want to continue
after a certain number of files have failed to download, which should be an
adequate alternative. And we now add the file into the media DB when it's
copied to the media directory, not when the card is committed. This fixes
duplicates a user would get if they added the same media to a card twice
without adding the card.

The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is:
  import anki; d = anki.Deck(path)

deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.

And the DB wrapper has had a few changes:
- sql statements are a more standard DBAPI:
  - statement() -> execute()
  - statements() -> executemany()
  - called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
2011-04-28 09:23:53 +09:00
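[An illustrative sketch of the thinner DB wrapper described above; only the
call shapes listed in the message are taken from the commit, the rest is
assumed.]

    import sqlite3

    class DB:
        def __init__(self, path):
            self._db = sqlite3.connect(path)

        def execute(self, sql, *args, **kwargs):
            # callable as execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
            return self._db.execute(sql, kwargs or args)

        def executemany(self, sql, rows):
            return self._db.executemany(sql, rows)

        def list(self, sql, *args, **kwargs):
            # replaces the old column0(): the first column of every result row
            return [r[0] for r in self.execute(sql, *args, **kwargs)]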
Damien Elmes
1d6dbf9900 rework tag handling and remove cardTags
The tags tables were initially added to speed up the loading of the browser by
speeding up two operations: gathering a list of all tags to show in the
dropdown box, and finding cards with a given tag. The former functionality is
provided by the tags table, and the latter functionality by the cardTags
table.

Selective study is handled by groups now, which perform better since they
don't require a join or subselect, and can be embedded in the index. So the
only remaining benefit of cardTags is for the browser.

Performance testing indicates that cardTags is not saving us a large amount.
It only takes us 30ms to search a 50k card table for matches with a hot cache.
On a cold cache it means the facts table has to be loaded into memory, which
roughly doubles the load time with the default settings (we need to load the
cards table too, as we're sorting the cards), but that startup time was
necessary with certain settings in the past too (sorting by fact created for
example). With groups implemented, the cost of maintaining a cache just for
initial browser load time is hard to justify.

Other changes:

- the tags table has any missing tags added to it when facts are added/edited.
  This means old tags will stick around even when no cards reference them, but
  it's much cheaper than reference counting or a separate table, and simplifies
  updates and syncing.
- the tags table has a modified field now so we can sync it instead of
  having to scan all facts coming across in a sync
- priority field removed
- we no longer put model names or card templates into the tags table. There
  were two reasons we did this in the past: so we could cram/selective study
  them, and for plugins. Selective study uses groups now, and plugins can
  check the model's name instead (and most already do). This also does away
  with the somewhat confusing behaviour of names also being tags.
- facts have their tags as _tags now. You can get a list with tags(), but
  editing operations should use add/deleteTags() instead of manually editing
  the string.
2011-04-28 09:23:29 +09:00
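[A sketch of the tag accessors described above; the space-separated storage
format and the method bodies are assumptions.]

    class Fact:
        def __init__(self, tag_str=""):
            self._tags = tag_str  # raw, space-separated string

        def tags(self):
            return self._tags.split()

        def addTags(self, extra):
            # editing code should go through helpers like this rather than
            # touching _tags directly
            self._tags = " ".join(sorted(set(self.tags()) | set(extra.split())))

        def deleteTags(self, gone):
            self._tags = " ".join(t for t in self.tags()
                                  if t not in gone.split())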
Damien Elmes
7c76058f4b add find+search code from ankiqt 2011-04-28 09:23:29 +09:00
Damien Elmes
a902ad0b5f move finding code into separate file 2011-04-28 09:23:28 +09:00