- don't send server graves back on the next sync
- make sure we update usns of models/tags/decks as well on upload
- don't die when updating decks after current deck deleted
- report counts when sanity check fails
Instead of having required and unique flags for every field, enforce both
requirements on the first field, and neither on the rest. This mirrors the
subject/body format people are used to in note-taking apps. The subject
defines the object being learnt, and the remaining fields represent properties
of that object.
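A minimal sketch of the new check, assuming a note is just a list of field
values (the function and names are illustrative, not the actual libanki API):

    def first_field_problem(fields, existing_first_fields):
        "Return a problem string, or None if the note can be added."
        first = fields[0].strip()
        if not first:
            return "missing first field"
        if first in existing_first_fields:
            return "duplicate first field"
        # the remaining fields are neither required nor checked for uniqueness
        return None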
In the past, duplicate checking served two purposes: it quickly notified the
user that they're entering the same fact twice, and it notified the user if
they'd accidentally mistyped a secondary field. The former behaviour is
important for avoiding wasted effort, and so it should be done in real time.
The latter behaviour is not essential, however - a typo is not wasted effort,
and it could be fixed in a periodic 'find duplicates' function. Given that
some users ended up with sluggish decks due to the overhead caused by a large
number of facts multiplied by a large number of unique fields, this seems like
a change for the better.
This also means Anki will let you add notes as long as the first field has
been filled out. Again, this is not a big deal: Anki is still checking to make
sure one or more cards will be generated, and the user can easily add any
missing fields later.
As a bonus, this change simplifies field configuration somewhat. As the card
layout and field dialogs are a popular point of confusion, the more they can
be simplified, the better.
Instead of a separate option to hide the question, embed the question into the
answer format by default. Users who don't want to see the question can remove
the question fields, and users who want a separator between the question and
answer (or not) can control it in HTML now.
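For example, a default template might end up looking something like this (the
field and key names are illustrative; the exact defaults may differ):

    template = {
        "name": "Forward",
        # question format
        "qfmt": "{{Front}}",
        # the answer format now embeds the question; users can delete it or
        # change the separator in plain HTML
        "afmt": "{{Front}}\n<hr>\n{{Back}}",
    }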
Also, remove obsolete field CSS, and don't accidentally chomp a character on
upgrade.
The old template handling was too complicated, and generated frequent
questions on the forums. By dropping non-active templates we can do away with
the generate cards function, and advanced users can simulate the old behaviour
by using conditional field templates.
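For instance, the old behaviour of only generating a card when a given field
is filled in can be approximated with a conditional section in the question
template (the field names here are made up):

    # only produces a card when the Translation field is non-empty
    qfmt = (
        "{{#Translation}}"
        "What does {{Expression}} mean?"
        "{{/Translation}}"
    )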
For importing and the deck creation wizard, we need to be able to generate
thousands of cards efficiently. So instead of requiring the creation of a fact
and rendering it, we cache the required fields and cloze references in the
model.
Also, emptyAns is dropped, as people can achieve the same behaviour by adding
the required answer fields as conditional to the question.
Todo: refactor genCards() to work in bulk, handle cloze edits intelligently
(prompt to delete invalid references, create new cards as necessary)
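A rough sketch of how the cached data might be consulted when generating cards
in bulk (the 'req' key and its layout are assumptions, not the final schema):

    # assume model["req"] lists, per template, which field indices must be
    # non-empty for that template to produce a card
    def templates_to_generate(model, field_values):
        "Return the ordinals of templates that would yield a non-empty card."
        active = []
        for tmpl_ord, required in enumerate(model["req"]):
            if all(field_values[idx].strip() for idx in required):
                active.append(tmpl_ord)
        return active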
- keep track of rep/time counts per group, instead of just at the top level
- sort by due after retrieving learn cards
- ensure activeGroups is sorted alphabetically
- ensure new cards come in alphabetical group order
- ensure queues are refilled when empty
Because group deletions are likely to be a semi-common operation (esp. for new
users trying out shared material), deleting groups will no longer cause a full
sync. In order to avoid syncing issues, we now allow cards/facts/etc to point
to an invalid group, and in that case we just treat them as if they're in the
default group.
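A minimal sketch of the fallback, assuming groups live in a dict keyed by id
and that group 1 is the default:

    def effective_group(groups, gid):
        "Cards/facts pointing at a deleted group behave as if in the default."
        return groups.get(gid, groups[1])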
Decks now have an "update sequence number". All objects also have a USN, which
is set to the deck USN each time they are modified. When syncing, each side
sends any objects with a USN >= clientUSN. When objects are copied via sync,
they have their USNs bumped to the current serverUSN. After a sync, the USN on
both sides is set to serverUSN + 1.
This solves the failing three way test, ensures we receive all changes
regardless of clock drift, and as the revlog also has a USN now, ensures that
old revlog entries are imported properly too.
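A simplified sketch of the bookkeeping described above (not the real sync
code; objects are plain dicts here):

    def changed_since(objects, client_usn):
        "Each side sends everything modified since the last sync."
        return [o for o in objects if o["usn"] >= client_usn]

    def apply_incoming(local, incoming, server_usn):
        "Objects copied via sync get their USN bumped to the current serverUSN."
        for obj in incoming:
            obj["usn"] = server_usn
            local[obj["id"]] = obj

    def finish_sync(deck, server_usn):
        # after a sync, both sides continue from serverUSN + 1
        deck["usn"] = server_usn + 1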
Objects retain a separate modification time, which is used for conflict
resolution, deck subscriptions/importing, and info for the user.
Note that if the clock is too far off, it will still cause confusion for
users, as the due counts may be different depending on the time. For this
reason it's probably a good idea to keep a limit on how far the clock can
deviate.
We still keep track of the last sync time, but only so we can determine if the
schema has changed since the last sync.
The media code needs to be updated to use USNs too.
Ported the sync code to the latest libanki structure. Key points:
No summary:
The old style got each side to fetch ids+mod times and required the client to
diff them and then request or bundle up the appropriate objects. Instead, we now
get each side to send all changed objects, and it's the responsibility of the
other side to decide what needs to be merged and what needs to be discarded.
This allows us to skip a separate summary step, which saves scanning tables
twice, and allows us to reduce server requests from 4 to 3.
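Schematically, the receiving side's job might look like this (a sketch with
plain dicts, resolving conflicts by modification time; not the actual protocol
code):

    def merge_changes(mine, incoming, new_usn):
        "Keep whichever copy of each changed object was modified more recently."
        for obj in incoming:
            current = mine.get(obj["id"])
            if current is None or obj["mod"] > current["mod"]:
                mine[obj["id"]] = dict(obj, usn=new_usn)

    # client and server each call merge_changes() on the other side's payload,
    # so no separate summary/diff step is needed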
Schema changes:
Certain operations that are difficult to merge (such as changing the number of
fields in a model, or deleting models or groups) result in a full sync. The
user is warned about it in the GUI before such schema-changing operations
execute.
Sync size:
For now, we don't try to deal with large incremental syncs. Because the cards,
facts and revlog can be large in memory (hundreds of megabytes in some cases),
they would have to be chunked for the benefit of devices with a low amount of
memory.
Currently findChanges() uses the full fact/card objects which we're planning to
send to the server. It could be rewritten to fetch a summary (just the id, mod
& rep columns) which would save some memory, and then compare against blocks
of a few hundred remote objects at a time. However, it's a bit more
complicated than that:
- If the local summary is huge it could exceed memory limits. Without a local
summary we'd have to query the db for each record, which could be a lot
slower.
- We currently accumulate a list of remote records we need to add locally.
This list also has the potential to get too big. We would need to
periodically commit the changes as we accumulate them.
- Merging a large amount of changes is also potentially slow on mobile
devices.
Given that certain schema-changing operations require a full sync anyway, I
think it's probably best to concentrate on a chunked full sync for now
instead; provided the user syncs periodically, it should be hard to hit the
full sync limits except after bulk editing operations.
Chunked partial syncing should be possible to add in the future without any
changes to the deck format.
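A chunked full sync could be as simple as streaming each table in fixed-size
blocks (a sketch assuming a sqlite3 connection; not the real implementation):

    def stream_table(db, table, chunk_size=5000):
        "Yield rows in chunks so low-memory devices never load a whole table."
        cur = db.execute("select * from %s" % table)
        while True:
            rows = cur.fetchmany(chunk_size)
            if not rows:
                break
            yield rows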
Still to do:
- deck conf merging
- full syncing
- new http proxy
As discussed on the forums, moving to a single collection requires moving some
deck-level configuration into groups so users can have different settings like
new cards/day for each top level item.
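For example, looking up a per-group limit might end up like this (the 'gconf'
layout and key names are assumptions):

    def new_per_day(deck, group):
        "Each group points at a shared group config; fall back to the default."
        conf = deck["gconf"].get(group.get("conf", 1), deck["gconf"][1])
        return conf["new"]["perDay"]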
Also:
- store id in groups
- add mod time to gconf updates
- move the limiting code that's not specific to scheduling into groups.py
- store the current model id per top level group
- moved tags into json like previous changes, and dropped the unnecessary id
- added tags.py for a tag manager
- moved the tag utilities from utils into tags.py
Like the previous change, models have been moved from a separate DB table to
an entry in the deck. We need them for many operations including reviewing,
and it's easier to keep them in memory than half on disk with a cache that
gets cleared every time we .reset(). This means they are easily serialized as
well - previously they were part Python and part JSON, which made access
confusing.
Because the data is all pulled from JSON now, the instance methods have been
moved to the model registry. Eg:
model.addField(...) -> deck.models.addField(model, ...).
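In code, the shift looks roughly like this (a sketch; the real ModelManager
differs in detail):

    import time

    class ModelManager(object):
        "Registry owning all models as plain JSON-serializable dicts."
        def __init__(self, deck):
            self.deck = deck
            self.models = deck["models"]

        def addField(self, model, name):
            # what used to be model.addField(name)
            model["flds"].append({"name": name, "ord": len(model["flds"])})
            self.save(model)

        def save(self, model):
            model["mod"] = int(time.time())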
- IDs are now timestamped as with groups et al.
- The data field for plugins was also removed. Config info can be added to
deck.conf; larger data should be stored externally.
- Upgrading needs to be updated for the new model structure.
- HexifyID() now accepts strings as well, as our IDs get converted to strings
in the serialization process.
The approach of using incrementing id numbers works for syncing if we assume
the server is canonical and all other clients rewrite their ids as necessary,
but upon reflection it is not sufficient for merging decks in general, as we
have no way of knowing whether objects with the same id are actually the same
or not. So we need some way of uniquely identifying the object.
One approach would be to go back to Anki 1.0's random 64bit numbers, but as
outlined in a previous commit, such large numbers can't be handled easily in some
languages like Javascript, and they tend to be fragmented on disk which
impacts performance. It's much better if we can keep content added at the same
time in the same place on disk, so that operations like syncing which are mainly
interested in newly added content can run faster.
Another approach is to add a separate column containing the unique id, which
is what Mnemosyne 2.0 will be doing. Unfortunately it means adding an index
for that column, leading to slower inserts and larger deck files. And if the
current sequential ids are kept, extra code is needed to ensure ids don't
conflict when merging.
To address the above, the plan is to use a millisecond timestamp as the id.
This ensures disk order reflects creation order, allows us to merge the id and
crt columns, avoids the need for a separate index, and saves us from worrying
about rewriting ids. There is of course a small chance that the objects to be
merged were created at exactly the same time, but this is extremely unlikely.
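One way to generate such ids (a sketch using an in-memory set of existing ids;
a real implementation would check the table instead):

    import time

    def timestamp_id(existing_ids):
        "Millisecond timestamp as the primary key; nudge forward on collision."
        t = int(time.time() * 1000)
        while t in existing_ids:
            t += 1
        existing_ids.add(t)
        return t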
This commit changes models. Other objects will follow.