The approach of using incrementing id numbers works for syncing if we assume
the server is canonical and all other clients rewrite their ids as necessary,
but upon reflection it is not sufficient for merging decks in general, as we
have no way of knowing whether objects with the same id are actually the same
or not. So we need some way of uniquely identifying the object.
One approach would be to go back to Anki 1.0's random 64-bit numbers, but as
outlined in a previous commit, such large numbers can't be handled easily in
some languages like JavaScript, and they tend to be fragmented on disk, which
impacts performance. It's much better if we can keep content added at the same
time in the same place on disk, so that operations like syncing which are mainly
interested in newly added content can run faster.
Another approach is to add a separate column containing the unique id, which
is what Mnemosyne 2.0 will be doing. Unfortunately that means adding an index
for the column, leading to slower inserts and larger deck files. And if the
current sequential ids are kept, a bunch of code has to be retained to ensure
ids don't conflict when merging.
To address the above, the plan is to use a millisecond timestamp as the id.
This ensures disk order reflects creation order, allows us to merge the id and
crt columns, avoids the need for a separate index, and saves us from worrying
about rewriting ids. There is of course a small chance that the objects to be
merged were created at exactly the same time, but this is extremely unlikely.
This commit changes models. Other objects will follow.
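A rough sketch of the idea follows; the helper name and the DB wrapper's
scalar() method are illustrative, not the actual API. The id is taken from the
current time in milliseconds and bumped until it is unique in the target table,
so creation order and disk order stay in step:

    import time

    def timestamp_id(db, table):
        # Use the current time in milliseconds as the primary key. If
        # another object landed in the same millisecond, bump the value
        # until it is unique, so ids still reflect creation order.
        t = int(time.time() * 1000)
        while db.scalar("select id from %s where id = ?" % table, t):
            t += 1
        return t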
The initial plan was to zero the creation time and leave the cards/facts there
until we have a chance to garbage collect them on a schema change, but such an
approach won't work with deck subscriptions.
This means that content added earlier has a higher chance of appearing, but it
keeps the behaviour consistent with sortCards(), and the user can sort manually
if need be.
Instead of completely resetting a card as resetCards() did in the past,
forgetCards() just puts the card back in the new queue and leaves the factor
and revlog alone. If users want to completely reset a card, they'll need to
export it.
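A minimal sketch of that behaviour, assuming the usual encoding where type=0
and queue=0 denote a new card (the function signature is illustrative):

    def forget_cards(db, card_ids):
        # Put the cards back in the new queue. The ease factor and the
        # revlog entries are deliberately left untouched; only the
        # scheduling state is reset.
        ids = ",".join(str(int(i)) for i in card_ids)
        db.execute(
            "update cards set type = 0, queue = 0, due = 0 "
            "where id in (%s)" % ids)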
The reps count should now be equal to the number of entries in the revlog, and
only exists so that we can order by review count in the browser efficiently.
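To illustrate the trade-off (assuming the revlog references cards via a cid
column), compare recomputing the count from the revlog with sorting on the
cached column:

    # Without the cached count, the browser would have to aggregate the
    # revlog for every card just to sort:
    slow = ("select cards.id from cards "
            "left join revlog on revlog.cid = cards.id "
            "group by cards.id order by count(revlog.id)")

    # With cards.reps kept in sync with the revlog, it's a plain sort:
    fast = "select id from cards order by reps"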
Streak is no longer necessary now that we have a learn queue.
The undo code was using triggers and a temporary table to write out all changed rows before making a change. This made for powerful undo/redo support, but had some problems:
- creating the tables and triggers wasn't cheap, especially on mobile devices
- likewise, every data modification required writing into two separate databases, almost doubling the number of writes
- it was possible to leave the DB in an inconsistent state if an undoable operation was followed by a non-undoable operation that referenced it, and the user then rolled back the undoable operation.
To address these issues, we simplify undo by integrating it with autosaving:
- .save() can be passed a name to mark a rollback point. If the user undoes the change, any changes since the last save are lost
- autosaves happen every 5 minutes, and are pushed back on a .save(), so the maximum work a user can lose is 5 minutes.
- reviews are handled separately, so we can let the user undo multiple reviews at once
- if necessary, special cases could be added for other operations like marking
This means that if a user does two damaging operations in a row they won't be able to restore the first one, but such an event is both unlikely and covered by the backups made each time a deck is opened.
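A rough sketch of the save-point approach, assuming a collection object
wrapping an SQLite connection; the attribute and method names here are
illustrative rather than the real interface:

    import time

    class Collection(object):
        def __init__(self, db):
            self.db = db
            self._undo_name = None
            self._last_save = time.time()

        def save(self, name=None):
            # Commit any pending work; if a name is given, it labels the
            # point an undo() will roll back to.
            self.db.commit()
            self._undo_name = name
            self._last_save = time.time()

        def autosave(self):
            # Called periodically; commits if more than 5 minutes have
            # passed, and is pushed back whenever save() runs.
            if time.time() - self._last_save > 300:
                self.save()

        def undo(self):
            # Discard everything since the last commit by rolling back
            # the open transaction.
            self.db.rollback()
            self._undo_name = None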