In preparation for cramming:
- add odid for storing the old deck id on a per-card basis
- rename edue to odue
- at the moment note.did still exists, but in the future we may ignore it and
use model.did instead
see the following for background discussion:
http://groups.google.com/group/ankisrs-users/browse_thread/thread/4db5e82f7dff74fb
- change sched index to the more efficient gid, queue, due (see the sketch
after this list)
- drop the dynamic index support. as there's no q/a cache anymore, it's
cheap enough to hit the cards table directly, and we can't use the index in
its new form.
- drop order by clauses (see todo)
- ensure there's always an active group. if users want to study all groups at
once, they need to create a top level group. we do this because otherwise
it's not clear which 'top level group' is active when everything is
selected.
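as a rough sketch, the schema changes above might look like the following
(hypothetical DDL only, not the actual upgrade code):

    import sqlite3

    db = sqlite3.connect("collection.anki")  # path is illustrative
    db.execute("alter table cards add column odid integer not null default 0")
    # renaming edue to odue requires rebuilding the table in sqlite
    db.execute("drop index if exists ix_cards_sched")
    db.execute("create index ix_cards_sched on cards (gid, queue, due)")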
to do:
- new cards will appear in gid order, but the gid numbers don't reflect
alphabetical sorting. we need to change the scheduling code so that it steps
through each group in turn
- likewise for the learn queue
because group deletions are likely to be a semi-common operation (esp. for
new users trying out shared material), deleting groups will no longer cause
a full sync. to avoid syncing issues, we now allow cards/facts/etc to point
to an invalid group; in that case, we simply treat them as if they were in
the default group
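a minimal sketch of that fallback, assuming a hypothetical groups() accessor
and id 1 for the default group:

    def effectiveGroup(deck, gid):
        # after a sync, a card may point to a group deleted elsewhere;
        # fall back to the default group instead of raising an error
        if gid in deck.groups():  # groups() is an assumed accessor
            return gid
        return 1  # assumed id of the default group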
As per the forum thread, the current due counts are really demotivating when
there's a backlog of cards. In an attempt to solve this, I'm trying out a new
behaviour as the default: instead of reporting all the due cards including the
backlog, the status bar will show an increasing count of cards studied that
day. Theoretically this should allow users to focus on what they've done
rather than what they have to do. The old behaviour is still there as an option.
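roughly speaking, the new figure is "cards studied since the day rolled
over" rather than a due count; something like the following, with table and
column names assumed rather than taken from the real schema:

    # cards studied today (revlog/time/dayCutoff are assumptions)
    studiedToday = deck.db.scalar(
        "select count(*) from revlog where time > ?", dayCutoff)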
the initial plan was to zero the creation time and leave the cards/facts there
until we have a chance to garbage collect them on a schema change, but such an
approach won't work with deck subscriptions
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
sizable performance hit due to introspection. Operations like fetching 50k
records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
not a big burden.
In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
moved into JSON objects (see the sketch after this list). This simplifies
serializing, and means we won't need DB schema changes to store extra
options in the future. This change obsoletes the deckVars table.
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- a number of attribute names have been shortened
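for illustration, option storage might then look like this (column and key
names are assumptions, not the real schema):

    import json

    # read, tweak, and write back the JSON config blob; adding a new
    # option needs no schema change
    conf = json.loads(deck.db.scalar("select conf from deck"))
    conf["exampleOption"] = True
    deck.db.execute("update deck set conf = ?", json.dumps(conf))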
Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().
Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.
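a minimal usage sketch (the gid attribute is illustrative):

    card = deck.getCard(cardId)  # returns a Card object
    card.gid = newGid
    card.flush()  # writes to the DB and bumps the modification time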
Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.
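for example, a due-count check might look like this (sched.counts() is an
assumed name, not a confirmed API):

    import anki

    d = anki.Deck(path)
    counts = d.sched.counts()  # hypothetical call to read due counts
    d.close(save=False)        # no save, no modification bump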
Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.
The q/a cache and field cache generation has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().
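in practice, something like the following (field access is illustrative;
model.updateCache() is named above):

    fact["Front"] = "new question text"
    fact.flush()         # q/a and field caches rebuilt automatically
    model.updateCache()  # explicit rebuild after template changes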
Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new value every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when it's updated. The
download function will be expanded to ask the user if they want to continue
after a certain number of files have failed to download, which should be an
adequate alternative. And we now add the file to the media DB when it's
copied into the media directory, not when the card is committed. This fixes
the duplicates a user would get if they added the same media to a card twice
without adding the card.
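the refcount-free flow then reduces to something like this (attribute path
assumed):

    # when a fact's rendered q/a changes, just re-register the text;
    # no old/new comparison is needed anymore
    deck.media.registerText(q + a)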
The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is:

    import anki
    d = anki.Deck(path)
deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.
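a minimal review loop against the new scheduler API (card.q()/card.a() and
the ease value are assumptions):

    card = deck.sched.getCard()
    while card:
        # show card.q(), then card.a(), and collect an ease rating
        deck.sched.answerCard(card, ease)
        card = deck.sched.getCard()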
And the DB wrapper has had a few changes (usage sketches follow the list):
- sql statements now follow a more standard DBAPI style:
- statement() -> execute()
- statements() -> executemany()
- called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
- skip updating buried cards on startup; it's expensive and we'll do that on
deck close in the future
- add an index for groupId. Initial profiling indicates that groupId-based
selective study is considerably faster in certain scenarios
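usage sketches for the renamed calls and the new index (queries are
illustrative; the index definition is an assumption):

    deck.db.execute("update cards set gid = ? where id = ?", gid, cid)
    deck.db.executemany(
        "update cards set gid = ? where id = ?", [(gid, c) for c in cids])
    ids = deck.db.list("select id from cards where gid = ?", gid)
    # the groupId index mentioned above, roughly:
    deck.db.execute("create index if not exists ix_cards_gid on cards (gid)")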
The 50k element deck I'm testing with now opens and builds the queue in 40ms
on a cold cache, of which 34ms is the initial deck startup and 6ms the queue
build. Adding back the undo log and backups will of course increase this, but
this is a big improvement for checking due times in the deck browser.