Commit graph

24 commits

Damien Elmes
593e45a9bb decrease timeout to 30; httplib2 retries for us 2011-12-07 18:08:15 +09:00
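A minimal sketch of what this change amounts to (the variable name is hypothetical; httplib2.Http does accept a timeout argument):

    import httplib2

    # hypothetical sketch: 30 seconds is enough, since httplib2 retries
    # a failed request for us
    con = httplib2.Http(timeout=30)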
Damien Elmes
5a3d65ac61 send zips in 2.5MB increments 2011-12-05 22:06:23 +09:00
Damien Elmes
41636f41f8 tweak sync error handling 2011-12-05 17:44:54 +09:00
Damien Elmes
e85603dae6 fix certs bundling 2011-12-04 19:09:47 +09:00
Damien Elmes
d148a6cf1b bundle ssl certs; share con across all sync types 2011-12-03 16:38:45 +09:00
Damien Elmes
f7790275ce groups -> decks 2011-11-23 19:28:09 +09:00
Damien Elmes
6e4e8249fb facts -> notes 2011-11-23 12:37:21 +09:00
Damien Elmes
76960abd75 fix upgrading; drop old mnemosyne 1 importer 2011-10-20 22:05:34 +09:00
Damien Elmes
5da3bba1df initial work on media syncing 2011-10-03 12:45:08 +09:00
Damien Elmes
22df2790f9 refactor media change logging 2011-09-25 06:33:57 +09:00
Damien Elmes
024c42fef8 group scheduling refactor
see the following for background discussion:
http://groups.google.com/group/ankisrs-users/browse_thread/thread/4db5e82f7dff74fb

- change sched index to the more efficient (gid, queue, due); see the sketch
  below
- drop the dynamic index support. as there's no q/a cache anymore, it's
  cheap enough to hit the cards table directly, and we can't use the index in
  its new form.
- drop order by clauses (see todo)
- ensure there's always an active group. if users want to study all groups at
  once, they need to create a top level group. we do this because otherwise
  it's not clear which group should act as the 'top level group' when
  everything is selected.
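
A minimal sketch of the index change in the first point above (the exact DDL
is an assumption; the column names come from the message):

    # hypothetical sketch: key the sched index the way the scheduler
    # fetches cards: by group, then queue, then due time
    deck.db.execute("drop index if exists ix_cards_sched")
    deck.db.execute("create index ix_cards_sched on cards (gid, queue, due)")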

to do:

- new cards will appear in gid order, but the gid numbers don't reflect
  alphabetical sorting. we need to change the scheduling code so that it steps
  through each group in turn
- likewise for the learn queue
2011-09-22 11:54:01 +09:00
Damien Elmes
ee767ff132 refactor to allow group deletions without schema mod
because group deletions are likely to be a semi-common operation (esp. for new users trying out shared material), deleting groups will no longer cause a full sync. in order to avoid syncing issues, we now allow cards/facts/etc to point to an invalid group; in that case, we just treat them as if they're in the default group.
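
A minimal sketch of the fallback; every name here is an assumption (including
the default group having id 1 and the DB wrapper having a scalar() helper):

    DEFAULT_GROUP_ID = 1  # assumed id of the default group

    # hypothetical sketch: cards may point at a deleted group; resolve
    # them to the default group rather than forcing a full sync
    def effectiveGroup(deck, gid):
        if deck.db.scalar("select 1 from groups where id = ?", gid):
            return gid
        return DEFAULT_GROUP_ID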
2011-09-15 01:37:30 +09:00
Damien Elmes
751cb7df67 add a new default for counts()
As per the forum thread, the current due counts are really demotivating when
there's a backlog of cards. In an attempt to solve this, I'm trying out a new
behaviour as the default: instead of reporting all the due cards including the
backlog, the status bar will show an increasing count of cards studied that
day. Theoretically this should allow users to focus on what they've done
rather than what they have to do. The old behaviour is still there as an option.
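
A minimal sketch of the new default, under loud assumptions: that reviews are
logged one row per answered card in a revlog-style table, that the day rolls
over at a sched.dayCutoff timestamp, and that the wrapper has a scalar()
helper. None of these names are confirmed by this log:

    # hypothetical sketch: count what was studied today instead of
    # reporting the full backlog of due cards
    def studiedToday(deck):
        cutoff = deck.sched.dayCutoff - 86400  # start of the current day
        return deck.db.scalar(
            "select count() from revlog where time > ?", cutoff)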
2011-09-07 19:11:37 +09:00
Damien Elmes
a9b4285959 rename a few methods for consistency 2011-08-28 13:48:17 +09:00
Damien Elmes
3d370f675b restore the deletion log
the initial plan was to zero the creation time and leave the cards/facts there
until we have a chance to garbage collect them on a schema change, but such an
approach won't work with deck subscriptions
2011-05-04 19:00:38 +09:00
Damien Elmes
84d59e4b01 don't truncate sort field anymore, since we display it in the gui 2011-04-28 09:24:03 +09:00
Damien Elmes
2dfdfad6f2 update license link 2011-04-28 09:24:01 +09:00
Damien Elmes
8fcc6b3085 gpl3->agpl 2011-04-28 09:24:01 +09:00
Damien Elmes
01eb8d98a5 update order consts 2011-04-28 09:23:59 +09:00
Damien Elmes
d95cc6c44b implement sort fields; make sure they're updated on upgrade 2011-04-28 09:23:54 +09:00
Damien Elmes
2f27133705 drop sqlalchemy; massive refactor
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
  sizable performance hit due to introspection. Operations like fetching 50k
  records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
  is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
  not a big burden.

In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
  moved into JSON objects. This simplifies serialization, and means we won't
  need DB schema changes to store extra options in the future. This change
  obsoletes the deckVars table (see the sketch after this list).
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- a number of attribute names have been shortened
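
A minimal sketch of the JSON-object approach (the deck table, the conf
column, and the scalar() helper are all assumptions):

    import json

    # hypothetical sketch: options live in one JSON-encoded column, so
    # adding an option means adding a key, not altering the schema
    def getConf(deck, key, default=None):
        conf = json.loads(deck.db.scalar("select conf from deck"))
        return conf.get(key, default)

    def setConf(deck, key, val):
        conf = json.loads(deck.db.scalar("select conf from deck"))
        conf[key] = val
        deck.db.execute("update deck set conf = ?", json.dumps(conf))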

Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().

Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.
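
A minimal sketch of the flush behaviour (the column names and constructor
are assumptions):

    import time

    # hypothetical sketch: flush() bumps the modification time itself,
    # so callers no longer maintain it by hand
    class Card(object):
        def __init__(self, deck, id):
            self.deck = deck
            self.id = id
            self.mod = 0

        def flush(self):
            self.mod = int(time.time())
            self.deck.db.execute(
                "update cards set mod = ? where id = ?", self.mod, self.id)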

Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.
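
For instance, a due-count check might look like this (the path is
hypothetical, and counts() living on the scheduler is an assumption):

    import anki

    # hypothetical sketch: open, read the counts, and close without a
    # save or a modification bump
    d = anki.Deck("/path/to/deck.anki")
    due = d.sched.counts()
    d.close(save=False)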

Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.

The q/a cache and field cache generation has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().

Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new values every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when it's updated. The
download function will be expanded to ask the user if they want to continue
after a certain number of files have failed to download, which should be an
adequate alternative. And we now add the file to the media DB when it's
copied to the media directory, not when the card is committed. This fixes
the duplicates a user would get if they added the same media to a card twice
without adding the card.
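
A minimal sketch of the registry flow (the media table and the src-attribute
scan are assumptions):

    import re

    # hypothetical sketch: rather than refcounting, note every file the
    # question & answer reference whenever a fact changes
    class MediaRegistry(object):
        def __init__(self, deck):
            self.deck = deck

        def registerText(self, text):
            for fname in re.findall(r'src="([^"]+)"', text):
                self.deck.db.execute(
                    "insert or ignore into media (file) values (?)", fname)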

The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is:

    import anki
    d = anki.Deck(path)

deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.

And the DB wrapper has had a few changes:
- sql statements now use more standard DBAPI names:
 - statement() -> execute()
 - statements() -> executemany()
- called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
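
A hypothetical usage sketch of the renamed calls, assuming d is an open deck
as above; the table contents are illustrative:

    # execute() replaces statement(); arguments are positional or named
    d.db.execute("update cards set mod = ? where id = ?", 1322000000, 44)
    # executemany() replaces statements()
    d.db.executemany("insert into fdata values (?, ?, ?)",
                     [(1, 44, u"front"), (2, 44, u"back")])
    # list() replaces column0(): the first column of each returned row
    ids = d.db.list("select id from cards where gid = ?", 1)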
2011-04-28 09:23:53 +09:00
Damien Elmes
a902ad0b5f move finding code into separate file 2011-04-28 09:23:28 +09:00
Damien Elmes
b3ee91a9d5 add index for groupId, improve startup speed
- skip updating buried cards on startup; it's expensive and we'll do that on
  deck close in the future
- add an index for groupId. Initial profiling indicates that groupId-based
  selective study is considerably faster in certain scenarios

The 50k element deck I'm testing with now opens and builds the queue in 40ms
on a cold cache, of which 34ms is the initial deck startup and 6ms the queue
build. Adding back the undo log and backups will of course increase this, but
this is a big improvement for checking due times in the deck browser.
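
A minimal sketch of the new index (the exact DDL is an assumption; the
column name comes from the message above):

    # hypothetical sketch: let selective study filter on groupId without
    # scanning the whole cards table
    deck.db.execute("create index ix_cards_groupId on cards (groupId)")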
2011-04-28 09:23:28 +09:00
Damien Elmes
2613143fe9 improve dynamic indices, implement new queue 2011-04-28 09:23:28 +09:00