Commit graph

106 commits

Damien Elmes
abf6e3fa13 don't fall over if cwd doesn't exist 2012-02-13 12:16:52 +09:00
Damien Elmes
3dc9454cd7 revert to the previous cwd rather than assuming a particular path layout 2012-01-30 05:32:18 +09:00
mikem
f3847daae4 fix for FileIO exception when deleting a profile from windows 2012-01-29 19:15:12 +09:00
Damien Elmes
ba0f6f36cc don't create media db on server 2011-12-13 12:15:13 +09:00
Damien Elmes
d94f6d2011 block illegal filename characters 2011-12-08 19:23:11 +09:00
Damien Elmes
9b76a4669c compress zip, add media sanity check 2011-12-06 23:36:01 +09:00
Damien Elmes
fabec6e920 more sync resuming changes
The server now returns the next usn after every addFiles(), so an interrupted
upload doesn't cause the uploaded material to be sent back down.
2011-12-06 23:04:26 +09:00
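
A minimal sketch of the resume-friendly upload loop this enables, assuming a server whose addFiles() acknowledges with the next usn as described above (the media_db helpers are illustrative names, not the real sync API):

    # Hypothetical client-side loop: because the server returns the next
    # usn after every addFiles() call, each successful call is a safe
    # resume point, and already-uploaded material is never echoed back.
    def upload_media(server, media_db):
        while True:
            zip_data, names = media_db.files_to_upload()  # hypothetical helper
            if not names:
                break
            next_usn = server.addFiles(zip_data)       # server acks with next usn
            media_db.mark_synced(names, next_usn)      # hypothetical: record progress
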
Damien Elmes
be66c960a9 rework media syncing so we can resume 2011-12-06 22:18:41 +09:00
Damien Elmes
b33c2e99c0 return false, not 'continue', when not finished 2011-12-05 14:43:32 +09:00
Damien Elmes
0ff59b87e9 make sure we commit after media operations 2011-12-04 13:00:38 +09:00
Damien Elmes
1db4b41e23 use WAL mode if available; don't delete media in check 2011-12-02 22:44:00 +09:00
Damien Elmes
67e4f0d1cc tweak find code 2011-11-28 20:04:39 +09:00
Damien Elmes
0032dd39c9 return 0 count when media not copied 2011-11-28 16:05:48 +09:00
Damien Elmes
faf2f061a8 log imported card/media counts 2011-11-25 14:13:14 +09:00
Damien Elmes
279a942642 deck -> collection 2011-11-23 17:47:44 +09:00
Damien Elmes
6e4e8249fb facts -> notes 2011-11-23 12:37:21 +09:00
Damien Elmes
357e5e7036 media.check() takes a list of files for ankiweb integration 2011-11-08 22:32:24 +09:00
Damien Elmes
795cdd7d3f remove the concept of non-active templates
The old template handling was too complicated, and generated frequent
questions on the forums. By dropping non-active templates we can do away with
the generate cards function, and advanced users can simulate the old behaviour
by using conditional field templates.
2011-11-08 18:06:19 +09:00
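
For reference, conditional replacement in a template looks like the sample below: the wrapped markup renders only when the named field is non-empty, and a card whose question renders empty is not generated, which approximates an inactive template (the field names are illustrative):

    {{#AddReverse}}
    {{Back}}
    {{/AddReverse}}
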
Damien Elmes
7a849fccc5 fix allMedia() and unit tests 2011-10-29 15:31:27 +09:00
Damien Elmes
ed7367d67f fix reporting of latex errors; catch some bad commands 2011-10-29 15:12:57 +09:00
Damien Elmes
5ac6cc7a36 don't die if deck has no media folder 2011-10-22 04:33:49 +09:00
Damien Elmes
119217290e implement anki1 importer 2011-10-21 23:45:42 +09:00
Damien Elmes
b242b06052 import media too 2011-10-21 07:53:22 +09:00
Damien Elmes
cf4abcb403 split upgrade code into separate file; use .anki2 now 2011-10-20 05:26:50 +09:00
Damien Elmes
0c85acf3f7 refactor to support early exit when no media changes
- regular sync now receives a media USN as well
- server bumps media usn not only on sync but on any other media change
- added local media.hasChanged()
2011-10-06 15:34:22 +09:00
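
A rough sketch of the early exit this refactor allows (hasChanged() is from the commit above; the other names are illustrative):

    # Hypothetical: skip the media step entirely when the server's media
    # usn matches ours and the local media folder is unchanged.
    def sync_media_if_needed(server_media_usn, local_media_usn, media):
        if server_media_usn == local_media_usn and not media.hasChanged():
            return False                  # early exit: no media work to do
        run_full_media_sync(media)        # hypothetical full pass
        return True
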
Damien Elmes
afe1ad2b0b add resync test, fix zip meta 2011-10-06 14:37:07 +09:00
Damien Elmes
8c1f397459 ensure successive calls work 2011-10-06 13:25:01 +09:00
Damien Elmes
866fe8a283 don't sync mod time; media conflicts are very unlikely 2011-10-03 13:33:55 +09:00
Damien Elmes
49181ee738 fix media zipping and addFiles call 2011-10-03 12:59:35 +09:00
Damien Elmes
5da3bba1df initial work on media syncing 2011-10-03 12:45:08 +09:00
Damien Elmes
22df2790f9 refactor media change logging 2011-09-25 06:33:57 +09:00
Damien Elmes
87bfb38e2b move db
- if we store it inside the media folder, we inadvertently bump the folder mod
   time every time sqlite creates a journal file

- close/reopen the media db as the deck is closed/opened
2011-09-12 05:03:31 +09:00
Damien Elmes
c59dd854fb add change detection
I removed the media database in an earlier commit, but it's now necessary
again as I decided to add native media syncing to AnkiWeb.

This time, the DB is stored in the media folder rather than with the deck.
This means we avoid sending it in a full sync, and makes deck backups faster.
The DB is a cache of file modtimes and checksums. When findChanges() is
called, the code checks to see which files were added, changed or deleted
since the last time, and updates the log of changes. Because the scanning step
and log retrieval are separate, it's possible to do the scanning in the
background if the need arises.

If the DB is deleted by the user, Anki will forget any deletions, and add all
the files back to the DB the next time it's accessed.

File changes are recorded as a delete + add.

media.addFile() could be optimized in the future to log media added manually
by the user, allowing us to skip the full directory scan in cases where the
only changes were manually added media.
2011-09-12 03:11:06 +09:00
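
A simplified sketch of the scanning step described above, assuming the cache maps filename -> (mtime, md5) (the real media DB schema may differ):

    import hashlib
    import os

    def scan_folder(media_dir, cache):
        """Return (added, deleted) relative to the cached state.
        An edited file is recorded as a delete plus an add, as noted above."""
        added, deleted = [], []
        seen = set()
        for fname in os.listdir(media_dir):
            seen.add(fname)
            path = os.path.join(media_dir, fname)
            mtime = int(os.path.getmtime(path))
            if fname not in cache:
                added.append(fname)
            elif cache[fname][0] != mtime:
                # mtime differs; only log a change if the content differs too
                with open(path, "rb") as f:
                    csum = hashlib.md5(f.read()).hexdigest()
                if cache[fname][1] != csum:
                    deleted.append(fname)   # old version logged as deleted...
                    added.append(fname)     # ...new version logged as added
        deleted.extend(f for f in cache if f not in seen)
        return added, deleted
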
Damien Elmes
7e1df75cc2 simplify media.py
- drop mediaPrefix & the mediaURL-based downloading
- always create the media folder
- remove move() in preparation for a single collection approach
2011-09-11 00:25:22 +09:00
Damien Elmes
be5c5a2018 move tags into deck; code into separate file
- moved tags into json like previous changes, and dropped the unnecessary id
- added tags.py for a tag manager
- moved the tag utilities from utils into tags.py
2011-08-28 13:44:29 +09:00
Damien Elmes
0d6064b933 rename() 2011-04-28 09:24:05 +09:00
Damien Elmes
344b111b80 centralize all tmp dir access 2011-04-28 09:24:03 +09:00
Damien Elmes
2dfdfad6f2 update license link 2011-04-28 09:24:01 +09:00
Damien Elmes
8fcc6b3085 gpl3->agpl 2011-04-28 09:24:01 +09:00
Damien Elmes
c682080890 make it easier to get media dir; remove tidyHTML() 2011-04-28 09:24:01 +09:00
Damien Elmes
cc9f5b8d86 stripMedia->strip 2011-04-28 09:23:58 +09:00
Damien Elmes
4d0e4836fe add the ability to add extra classes to q/a 2011-04-28 09:23:57 +09:00
Damien Elmes
8705085200 update latex support 2011-04-28 09:23:56 +09:00
Damien Elmes
9b70af678c add rescheduling and interval reset to cram; don't include already due cards 2011-04-28 09:23:56 +09:00
Damien Elmes
be045d451c remove the media table
The media table was originally introduced when Anki hashed media filenames,
and needed a way to remember the original filename. It also helped with:
1) getting a quick list of all media used in the deck, or the media added
   since the last sync, for mobile clients
2) merging identical files with different names

But it had some drawbacks:
- every operation that modifies templates, models or facts meant generating
  the q/a and checking if any new media had appeared
- each entry is about 70 bytes, and some decks have 100k+ media files

So we remove the media table. We address 1) by being more intelligent about
media downloads on the mobile platform. We ask the user after a full sync if
they want to look for missing media, and they can choose not to if they know
they haven't added any. And on a partial sync, we can scan the contents of the
incoming facts for media references, and download any references we find. This
also avoids all the issues people had with media not downloading because it
was in their media folder but not in the media database.

For 2), when copying media to the media folder, if we have a duplicate
filename, we check if that file has the same md5, and avoid copying if so.
This won't merge identical content that has separate names, but instances
where users need that are rare.
2011-04-28 09:23:56 +09:00
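
A sketch of the duplicate handling for point 2 above (a hypothetical helper, not the actual addFile() code):

    import hashlib
    import os
    import shutil

    def copy_to_media(media_dir, src):
        """Copy src into the media folder, skipping the copy when a file
        of the same name already has the same md5."""
        md5 = lambda p: hashlib.md5(open(p, "rb").read()).hexdigest()
        dst = os.path.join(media_dir, os.path.basename(src))
        if os.path.exists(dst) and md5(dst) == md5(src):
            return dst                # same name, same content: nothing to do
        shutil.copy2(src, dst)        # new file, or same name with new content
        return dst
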
Damien Elmes
511d6e89a1 remove progress handling code; we'll do it in the GUI or provide cb 2011-04-28 09:23:55 +09:00
Damien Elmes
ad68500494 updateCache -> renderQA, move some functions around 2011-04-28 09:23:54 +09:00
Damien Elmes
1078285f0f change field storage format, improve upgrade speed
Since Anki first moved to an SQL backend, it has stored fields in a fields
table, with one field per line. This is a natural layout in a relational
database, and it had some nice properties. It meant we could retrieve an
individual field of a fact, which we used for limiting searches to a
particular field, for sorting, and for determining if a field was unique, by
adding an index on the field value.

The index was very expensive, so as part of the early work towards 2.0 I added
a checksum field instead, and added an index to that. This was a lot cheaper
than storing the entire value twice for the purpose of fast searches, but it
only partly solved the problem. We still needed an index on factId so that we
could retrieve a given fact's fields quickly. For simple models this was
fairly cheap, but as the number of fields grows, the table grows very big:
with 25k facts of 30 fields each, the fields table holds 750k entries. This
makes the factId and checksum indexes really expensive - with the q/a cache
removed, they account for about 30% of the deck in such a situation.

Equally problematic was sorting on those fields. Short of adding another
expensive index, a sort involves scanning the entire table.

We solve these problems by moving all fields into the facts table. For this to
work, we need to address some issues:

Sorting: we'll add an option to the model to specify the sort field. When
facts are modified, that field is written to a separate sort column. It can be
HTML stripped, and possibly truncated to a maximum number of letters. This
means that switching sort to a different field involves an expensive rewrite
of the sort column, but people tend to leave their sort field set to the same
value, and we don't need to clear the field if the user switches temporarily
to a non-field sort like due order. And it has the nice properties of allowing
different models to be sorted on different columns at the same time, and
makes it impossible for models to be hidden because the user has sorted on a
field which doesn't appear in some models.

Searching for words with embedded HTML: 1.2 introduced an HTML-stripped cache
of the fields content, which both sped up searches (since we didn't have to
search the possibly large fields table), and meant we could find "bob" in
"b<b>ob</b>" quickly. The ability to quickly search for words peppered with
HTML was nice, but it meant doubling the cost of storing text in many cases,
and meant that more data had to be written to the DB after any edit. Instead, we'll
do it on the fly. On this i7 computer, stripping HTML from all fields takes
1-2.6 seconds on 25-50k decks. We could possibly skip the stripping for people
who don't require it - the number of people who bold parts of words is
actually pretty small.

Duplicate detection: one option would be to fetch all fields when the add
cards dialog or editor is opened. But this would be expensive on mobile
devices. Instead, we'll create a separate table of (fid, csum), with an index
on both columns. When we edit a fact, we delete all the existing checksums for
that fact, and add checksums for any fields that must be checked as unique. We
could optionally skip the index on csum - some benchmarking is required.

As for the new table layout, creating separate columns for each field won't
scale. Instead, we store the fields in a single column, separated by an ASCII
record separator. We split on that character when extracting from
the database, and join on it when writing to the DB.

Searching on a particular field in the browser will be accomplished by finding
all facts that match, and then unpacking to see if the relevant field matched.

Tags have been moved back to a separate column. Now that fields are on the
facts table, there is no need to pack them in as a field simply to avoid
another table hit.
2011-04-28 09:23:53 +09:00
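
A sketch of the packing scheme and the uniqueness checksum described above (the separator byte and checksum details are assumptions; any non-printing ASCII separator that never appears in field text would do):

    import hashlib

    SEP = "\x1f"  # assumed ASCII unit/record separator

    def join_fields(fields):
        # all of a fact's fields live in one column on the facts table
        return SEP.join(fields)

    def split_fields(data):
        return data.split(SEP)

    def field_checksum(text):
        # sketch of an entry for the (fid, csum) duplicate-detection table
        return hashlib.md5(text.encode("utf8")).hexdigest()[:8]
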
Damien Elmes
9c247f45bd remove q/a cache, tags in fields, rewrite remaining ids, more
Anki used random 64-bit IDs for cards, facts and fields. This had some nice
properties:
- merging data in syncs and imports was simply a matter of copying each way,
  as conflicts were astronomically unlikely
- it made it easy to identify identical cards and prevent them from being
  reimported
But there were some negatives too:
- they're more expensive to store
- JavaScript can't handle numbers > 2**53, which means AnkiMobile, iAnki and
  so on have to treat the ids as strings, which is slow
- simply copying data in a sync or import can lead to corruption, as while a
  duplicate id indicates the data was originally the same, it may have
  diverged. A more intelligent approach is necessary.
- sqlite was sorting the fields table based on the id, which meant the fields
  were spread across the table, and costly to fetch

So instead, we'll move to incremental ids. In the case of model changes we'll
treat that as a schema change and force a full sync to avoid having to deal
with conflicts, and in the case of cards and facts, we'll need to update the
ids on one end to merge. Identical cards can be detected by checking to see if
their id is the same and their creation time is the same.

Creation time has been added back to cards and facts because it's necessary
for sync conflict merging. That means facts.pos is not required.

The graves table has been removed. It's not necessary for schema related
changes, and dead cards/facts can be represented as a card with queue=-4 and
created=0. Because we record the schema modification time and can ensure a
full sync propagates to all endpoints, we can remove the dead cards/facts on
schema change.

Tags have been removed from the facts table and are represented as a field
with ord=-1 and fmid=0. Combined with the locality improvement for fields, it
means that fetching fields is not much more expensive than using the q/a
cache.

Because of the above, removing the q/a cache is a possibility now. The q and a
columns on cards have been dropped. It will still be necessary to render the
q/a on fact add/edit, since we need to record media references. It would be
nice to avoid this in the future. Perhaps one way would be the ability to
assign a type to fields, like "image", "audio", or "latex". LaTeX needs
special consideration anyway, as it was being rendered into the q/a cache.
2011-04-28 09:23:53 +09:00
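
A sketch of the identity rule described above, as it might apply when merging incoming cards or facts (the merge scaffolding is illustrative):

    import itertools

    _new_id = itertools.count(1).__next__  # hypothetical id allocator

    def merge_facts(local_by_id, incoming):
        """With incremental ids, a matching id alone no longer proves
        identity; the creation time must match as well."""
        for fact in incoming:
            existing = local_by_id.get(fact["id"])
            if existing is None:
                local_by_id[fact["id"]] = fact       # genuinely new
            elif existing["created"] == fact["created"]:
                continue                             # identical fact: skip it
            else:
                fact["id"] = _new_id()               # divergent data: reassign id
                local_by_id[fact["id"]] = fact
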
Damien Elmes
2f27133705 drop sqlalchemy; massive refactor
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
  sizable performance hit due to introspection. Operations like fetching 50k
  records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
  is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
  not a big burden.

In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
  moved into JSON objects. This simplifies serializing, and means we won't
  need DB schema changes to store extra options in the future. This change
  obsoletes the deckVars table.
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- a number of attribute names have been shortened

Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().

Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.

Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.

Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.

The q/a cache and field cache generating has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().

Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new values every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when it's updated. The
download function will be expanded to ask the user if they want to continue
after a certain number of files have failed to download, which should be an
adequate alternative. And we now add the file into the media DB when it's
copied to the media directory, not when the card is committed. This fixes
duplicates a user would get if they added the same media to a card twice
without adding the card.

The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is:
import anki; d = anki.Deck(path).

deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.

And the DB wrapper has had a few changes:
- sql statements use a more standard DBAPI style:
 - statement() -> execute()
 - statements() -> executemany()
- called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
2011-04-28 09:23:53 +09:00
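
A sketch of a wrapper with the call style listed above (illustrative, not the actual anki DB module):

    import sqlite3

    class DB:
        def __init__(self, path):
            self._db = sqlite3.connect(path)

        def execute(self, sql, *args, **kwargs):
            # called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
            return self._db.execute(sql, kwargs or args)

        def executemany(self, sql, seq):
            return self._db.executemany(sql, seq)

        def list(self, sql, *args):
            # column0 -> list: the first column of every matching row
            return [row[0] for row in self.execute(sql, *args)]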