Commit graph

217 commits

Author SHA1 Message Date
Damien Elmes
5da3bba1df initial work on media syncing 2011-10-03 12:45:08 +09:00
Damien Elmes
a8d2578be5 move common auth+upload code into base class 2011-10-01 22:16:56 +09:00
Damien Elmes
bf3bb9dd32 if sync server offline, abort sync tests 2011-10-01 20:44:35 +09:00
Damien Elmes
f131021c7e remote partial syncing working; fixed mod time on finish() 2011-10-01 14:24:39 +09:00
Damien Elmes
dd5b2056fb factor out file uploads; use for incremental sync too 2011-09-30 21:09:10 +09:00
Damien Elmes
20d753591d add timestamp & common error checks to meta(); kill old code 2011-09-29 22:18:36 +09:00
Damien Elmes
aabc884341 start work on remote syncing; full up/down implemented 2011-09-29 20:58:42 +09:00
Damien Elmes
22df2790f9 refactor media change logging 2011-09-25 06:33:57 +09:00
Damien Elmes
9fdfac722d fixed bug in bundling 2011-09-24 14:46:42 +09:00
Damien Elmes
667b89ecc5 support partial syncs of arbitrary size
The full sync threshold was a hack to ensure we synced the deck in a
memory-efficient way if there was a lot of data to send. The problem is that
it's not easy for the user to predict how many changes there are, and so it
might come as a surprise to them when a sync suddenly switches to a full sync.

In order to be able to send changes in chunks rather than all at once, some
changes had to be made:

- Clients now set usn=-1 when they modify an object, which allows us to
  distinguish between objects that have been modified on the server, and ones
  that have been modified on the client. If we don't do this, we would have to
  buffer the local changes in a temporary location before adding the server
  changes.
- Before a client sends the objects to the server, it changes the usn to
  maxUsn both in the payload and the local storage.
- We do deletions at the start
- To determine which card or fact is newer, we have to fetch the modification
  time of the local version. We do this in batches rather than try to load the
  entire list in memory.
2011-09-24 12:42:02 +09:00
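
A rough sketch in Python of the marking-and-chunking scheme described in the commit above. The table layout, chunk size, and the server.apply_chunk() call are illustrative assumptions, not the actual libanki code.

```python
import time

CHUNK = 250  # arbitrary chunk size for this sketch

def mark_modified(db, card_id):
    # clients flag local changes with usn=-1 so they can be told apart
    # from objects that were modified on (or received from) the server
    db.execute("update cards set usn = -1, mod = ? where id = ?",
               (int(time.time()), card_id))

def send_changes(db, server, max_usn):
    # before a chunk is sent, both the payload and local storage are
    # stamped with maxUsn so the objects won't be picked up again
    while True:
        rows = db.execute(
            "select id, mod, data from cards where usn = -1 limit ?",
            (CHUNK,)).fetchall()
        if not rows:
            break
        db.executemany("update cards set usn = ? where id = ?",
                       [(max_usn, r[0]) for r in rows])
        server.apply_chunk([(r[0], r[1], r[2], max_usn) for r in rows])
        db.commit()
```
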
Damien Elmes
e7f416406d refactor learning
Rather than showing the user how many cards are in the learning queue, we want
to be able to show them the number of reps they have to do to clear the queue,
so they can better estimate the required time. Previously we counted up with
the grade column, but that meant we couldn't quickly sum up the number of reps
left, so we invert it and count down instead.

I also dropped the 'first time bonus' for now. If there's enough demand for
it, it can be added back by using the flags column, instead of a dedicated
cycles column.
2011-09-23 10:29:49 +09:00
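
With the count inverted, the remaining workload becomes a plain sum. A minimal sketch, assuming a hypothetical 'left' column holding the reps still needed and queue=1 marking learning cards:

```python
def learn_reps_left(db):
    # counting down lets us get the remaining reps with a single sum,
    # which wasn't possible when counting up with the grade column
    return db.execute(
        "select coalesce(sum(left), 0) from cards where queue = 1"
    ).fetchone()[0]
```
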
Damien Elmes
3e5041f337 reduce default cache size
Use a more conservative 40MB for systems with a smaller amount of memory.
Ideally we should bump this up if we detect the running system has a decent
amount of memory.
2011-09-18 05:07:30 +09:00
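
For reference, SQLite's page cache is sized in pages via PRAGMA cache_size, so a 40MB cap works out roughly like this. A hedged example; the file name and exact logic are illustrative:

```python
import sqlite3

db = sqlite3.connect("collection.anki")        # path is illustrative
page_size = db.execute("pragma page_size").fetchone()[0]
cache_mb = 40                                  # the more conservative default
db.execute("pragma cache_size = %d" % (cache_mb * 1024 * 1024 // page_size))
```
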
Damien Elmes
2e628996a3 add flags back to cards, and fix syncing 2011-09-18 04:37:48 +09:00
Damien Elmes
ee767ff132 refactor to allow group deletions without schema mod
Because group deletions are likely to be a semi-common operation (esp. for new users trying out shared material), deleting groups will no longer cause a full sync. In order to avoid syncing issues, we now allow cards/facts/etc to point to an invalid group; in that case, we just treat them as if they're in the default group.
2011-09-15 01:37:30 +09:00
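
A minimal sketch of the fallback described above, assuming groups are kept in a dict keyed by id; DEFAULT_GID and the attribute names are illustrative:

```python
DEFAULT_GID = 1  # id of the default group (assumed)

def group_for(deck, gid):
    # cards/facts may point at a group that has since been deleted;
    # rather than forcing a full sync, fall back to the default group
    return deck.groups.get(gid) or deck.groups[DEFAULT_GID]
```
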
Damien Elmes
bc9f6e6a24 add USNs
Decks now have an "update sequence number". All objects also have a USN, which
is set to the deck USN each time they are modified. When syncing, each side
sends any objects with a USN >= clientUSN. When objects are copied via sync,
they have their USNs bumped to the current serverUSN. After a sync, the USN on
both sides is set to serverUSN + 1.

This solves the failing three way test, ensures we receive all changes
regardless of clock drift, and as the revlog also has a USN now, ensures that
old revlog entries are imported properly too.

Objects retain a separate modification time, which is used for conflict
resolution, deck subscriptions/importing, and info for the user.

Note that if the clock is too far off, it will still cause confusion for
users, as the due counts may be different depending on the time. For this
reason it's probably a good idea to keep a limit on how far the clock can
deviate.

We still keep track of the last sync time, but only so we can determine if the
schema has changed since the last sync.

The media code needs to be updated to use USNs too.
2011-09-13 21:10:21 +09:00
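
A condensed sketch of the exchange described above. The object and merge interfaces are assumptions made for illustration; only the USN arithmetic follows the commit text.

```python
def sync(client, server):
    client_usn = client.usn
    server_usn = server.usn

    # each side sends any objects with a USN >= clientUSN
    for obj in server.changed_since(client_usn):
        obj["usn"] = server_usn          # copied objects get the server USN
        client.merge(obj)
    for obj in client.changed_since(client_usn):
        obj["usn"] = server_usn
        server.merge(obj)

    # after the sync, both sides move past the server USN
    client.usn = server.usn = server_usn + 1
```
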
Damien Elmes
cf8288a5e0 todo 2011-09-09 23:06:12 +09:00
Damien Elmes
14b642f633 mod schema when rev order updated 2011-09-09 22:38:00 +09:00
Damien Elmes
9aad5c1166 more unit tests, fix bugs
- make sure gconf has an id
- merge deck conf
2011-09-09 22:34:50 +09:00
Damien Elmes
f15cb23c41 skip the 600 second pad during testing 2011-09-09 21:17:42 +09:00
Damien Elmes
85a2bb6193 revlog timestamp is ms based; should fetch facts/cards by mod not id 2011-09-09 21:11:11 +09:00
Damien Elmes
362ae3eee2 initial work on sync refactor
Ported the sync code to the latest libanki structure. Key points:

No summary:

The old style got each side to fetch ids+mod times and required the client to
diff them and then request or bundle up the appropriate objects. Instead, we now
get each side to send all changed objects, and it's the responsibility of the
other side to decide what needs to be merged and what needs to be discarded.
This allows us to skip a separate summary step, which saves scanning tables
twice, and allows us to reduce server requests from 4 to 3.

Schema changes:

Certain operations that are difficult to merge (such as changing the number of
fields in a model, or deleting models or groups) result in a full sync. The
user is warned about it in the GUI before such schema-changing operations
execute.

Sync size:

For now, we don't try to deal with large incremental syncs. Because the cards,
facts and revlog can be large in memory (hundreds of megabytes in some cases),
they would have to be chunked for the benefit of devices with a low amount of
memory.

Currently findChanges() uses the full fact/card objects which we're planning to
send to the server. It could be rewritten to fetch a summary (just the id, mod
& rep columns) which would save some memory, and then compare against blocks
of a few hundred remote objects at a time. However, it's a bit more
complicated than that:

- If the local summary is huge it could exceed memory limits. Without a local
  summary we'd have to query the db for each record, which could be a lot
  slower.

- We currently accumulate a list of remote records we need to add locally.
  This list also has the potential to get too big. We would need to
  periodically commit the changes as we accumulate them.

- Merging a large amount of changes is also potentially slow on mobile
  devices.

Given that certain schema-changing operations require a full sync anyway, I
think it's best to concentrate on a chunked full sync for now; provided the
user syncs periodically, they should rarely hit the full sync limits except
after bulk editing operations.

Chunked partial syncing should be possible to add in the future without any
changes to the deck format.

Still to do:
- deck conf merging
- full syncing
- new http proxy
2011-09-08 12:50:42 +09:00
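
The "no summary" approach makes the merge a receiver-side decision: each side sends everything it changed, and the receiver keeps whichever copy is newer. A hedged sketch with an assumed cards schema:

```python
def merge_cards(db, remote_cards):
    # the other side sends every changed card; we decide per card whether
    # the remote copy supersedes ours by comparing modification times
    for card in remote_cards:
        row = db.execute("select mod from cards where id = ?",
                         (card["id"],)).fetchone()
        if row is None or row[0] < card["mod"]:
            db.execute(
                "insert or replace into cards (id, mod, data) values (?, ?, ?)",
                (card["id"], card["mod"], card["data"]))
        # otherwise the local copy is newer and the remote one is discarded
```
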
Damien Elmes
9130e09b3e rename some columns for consistency
- revlog's 'time' is now 'id', like the other tables
- 'taken' is now 'time'
- also dropped the eta code
2011-09-06 21:33:19 +09:00
Damien Elmes
a9b4285959 rename a few methods for consistency 2011-08-28 13:48:17 +09:00
Damien Elmes
91efb8f30b some initial sync work 2011-05-29 08:13:54 +09:00
Damien Elmes
344b111b80 centralize all tmp dir access 2011-04-28 09:24:03 +09:00
Damien Elmes
2dfdfad6f2 update license link 2011-04-28 09:24:01 +09:00
Damien Elmes
8fcc6b3085 gpl3->agpl 2011-04-28 09:24:01 +09:00
Damien Elmes
9c247f45bd remove q/a cache, tags in fields, rewrite remaining ids, more
Anki used random 64bit IDs for cards, facts and fields. This had some nice
properties:
- merging data in syncs and imports was simply a matter of copying each way,
  as conflicts were astronomically unlikely
- it made it easy to identify identical cards and prevent them from being
  reimported
But there were some negatives too:
- they're more expensive to store
- javascript can't handle numbers > 2**53, which means AnkiMobile, iAnki and
  so on have to treat the ids as strings, which is slow
- simply copying data in a sync or import can lead to corruption, as while a
  duplicate id indicates the data was originally the same, it may have
  diverged. A more intelligent approach is necessary.
- sqlite was sorting the fields table based on the id, which meant the fields
  were spread across the table, and costly to fetch

So instead, we'll move to incremental ids. Model changes will be declared a
schema change and force a full sync, to avoid having to deal with conflicts;
for cards and facts, we'll need to update the ids on one end in order to
merge. Identical cards can be detected by checking that both their id and
their creation time are the same.

Creation time has been added back to cards and facts because it's necessary
for sync conflict merging. That means facts.pos is not required.

The graves table has been removed. It's not necessary for schema-related
changes, and dead cards/facts can be represented as a card with queue=-4 and
created=0. Because we record the schema modification time and can ensure a
full sync propagates to all endpoints, we can remove the dead cards/facts on
schema change.

Tags have been removed from the facts table and are represented as a field
with ord=-1 and fmid=0. Combined with the locality improvement for fields, it
means that fetching fields is not much more expensive than using the q/a
cache.

Because of the above, removing the q/a cache is now possible. The q and a
columns on cards have been dropped. It will still be necessary to render the
q/a on fact add/edit, since we need to record media references; it would be
nice to avoid this in the future. Perhaps one way would be the ability to
assign a type to fields, like "image", "audio", or "latex". LaTeX needs
special consideration anyway, as it was being rendered into the q/a cache.
2011-04-28 09:23:53 +09:00
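
Under incremental ids, "identical" is decided by id plus creation time rather than by id alone; a small sketch (field names assumed):

```python
def is_same_card(local, remote):
    # a matching id no longer proves identity on its own; the creation
    # time must match too, since ids are now handed out incrementally
    return local["id"] == remote["id"] and local["created"] == remote["created"]
```
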
Damien Elmes
2f27133705 drop sqlalchemy; massive refactor
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
  sizable performance hit due to introspection. Operations like fetching 50k
  records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
  is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
  not a big burden.

In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
  moved into JSON objects. This simplifies serializing, and means we won't
  need DB schema changes to store extra options in the future. This change
  obsoletes the deckVars table.
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- a number of attribute names have been shortened

Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().

Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.

Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried card are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.

Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.

The q/a cache and field cache generating has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().

Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new value every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just media.registerText(q+a) when it's updated. The
download function will be expanded to ask the user if they want to continue
after a certain number of files have failed to download, which should be an
adequate alternative. And we now add the file into the media DB when it's
copied to the media directory, not when the card is committed. This fixes the
duplicates a user would get if they added the same media to a card twice
without adding the card.

The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is import anki; d
= anki.Deck(path).

deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.

And the DB wrapper has had a few changes:
- SQL statements use a more standard DBAPI style:
 - statement() -> execute()
 - statements() -> executemany()
- calls look like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
2011-04-28 09:23:53 +09:00
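
A toy sketch of the call style described above, wrapping sqlite3 directly; this is not the real wrapper, just an illustration of execute()/executemany()/list():

```python
import sqlite3

class DB:
    def __init__(self, path):
        self._db = sqlite3.connect(path)

    def execute(self, sql, *args, **kwargs):
        # called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
        return self._db.execute(sql, kwargs or args)

    def executemany(self, sql, seq):
        # replaces the old statements()
        return self._db.executemany(sql, seq)

    def list(self, sql, *args, **kwargs):
        # replaces column0: the first column of every result row
        return [r[0] for r in self.execute(sql, *args, **kwargs)]
```
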
Damien Elmes
1d6dbf9900 rework tag handling and remove cardTags
The tags tables were initially added to speed up the loading of the browser by
speeding up two operations: gathering a list of all tags to show in the
dropdown box, and finding cards with a given tag. The former functionality is
provided by the tags table, and the latter functionality by the cardTags
table.

Selective study is handled by groups now, which perform better since they
don't require a join or subselect, and can be embedded in the index. So the
only remaining benefit of cardTags is for the browser.

Performance testing indicates that cardTags is not saving us a large amount.
It only takes us 30ms to search a 50k card table for matches with a hot cache.
On a cold cache it means the facts table has to be loaded into memory, which
roughly doubles the load time with the default settings (we need to load the
cards table too, as we're sorting the cards), but that startup time was
necessary with certain settings in the past too (sorting by fact created for
example). With groups implemented, the cost of maintaining a cache just for
initial browser load time is hard to justify.

Other changes:

- the tags table has any missing tags added to it when facts are added/edited.
  This means old tags will stick around even when no cards reference them, but
  is much cheaper than reference counting or a separate table, and simplifies
  updates and syncing.
- the tags table has a modified field now so we can sync it instead of
  having to scan all facts coming across in a sync
- priority field removed
- we no longer put model names or card templates into the tags table. There
  were two reasons we did this in the past: so we could cram/selective study
  them, and for plugins. Selective study uses groups now, and plugins can
  check the model's name instead (and most already do). This also does away
  with the somewhat confusing behaviour of names also being tags.
- facts have their tags as _tags now. You can get a list with tags(), but
  editing operations should use add/deleteTags() instead of manually editing
  the string.
2011-04-28 09:23:29 +09:00
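
A hedged sketch of the access pattern in the last point: the raw string lives in _tags, reads go through tags(), and edits go through add/deleteTags(). The implementation here is illustrative, not the real Fact class.

```python
class Fact:
    def __init__(self, tags=""):
        self._tags = tags                      # canonical storage: one string

    def tags(self):
        return self._tags.split()              # read access as a list

    def addTags(self, tags):
        merged = set(self.tags()) | set(tags.split())
        self._tags = " ".join(sorted(merged))

    def deleteTags(self, tags):
        gone = set(tags.split())
        self._tags = " ".join(t for t in self.tags() if t not in gone)
```
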
Damien Elmes
f828393de3 rename deck.s to a more understandable deck.db; keep s for compat 2011-04-28 09:21:07 +09:00
Damien Elmes
b9cf5ad85d fix timeForNewCard(), revlog syncing, priority index del 2011-04-28 09:21:07 +09:00
Damien Elmes
b6bb03025f new history table
- rename to revlog
- change the pk to time, as we want an index on time, and the old multi-column
  index was expensive and not useful
- remove yes/no count; they can be inferred from the ease
- remove lastFactor, as it's in the previous entry
- remove delay, it can be inferred from last entry
- remove 'next' from nextInterval and nextFactor
- rename 'thinkingTime' to 'userTime'
- rename reps to rep
- migrate old data to new table, and fix some problems in the process: ease0
  -> ease1, and limit thinking time to 60 seconds as it should have been
  previously
2011-04-28 09:21:07 +09:00
Damien Elmes
855de47ffe remove the stats table
The stats table was how the early non-SQL versions of Anki kept track of
statistics, before there was a revision log. It is being removed because:

- it's not possible to show the statistics for a subset of the deck
- it can't meaningfully be copied on import/export
- it makes it harder to implement sync merging

Implications:

- graphs and deck stats take roughly 1.5-3x longer than before, but we'll
  have the ability to generate stats for subsections of the deck, and it's
  not time-critical code
- people who've been using anki since the very early days may notice a drop in
  statistics, as early repetitions were recorded in the stats table but the
  revlog didn't exist at that point.
- due to bugs in old syncs and imports/exports, the stats and revlog numbers
  may not match exactly

To remove it, the following changes have been made:

- the graphs and deck stats now generate their data entirely from the revlog
- there are no stats to keep track of how many cards we've answered, so we
  pull that information from the revlog in reset()
- we remove _globalStats and _dailyStats from the deck
- we check if a day rollover has occurred using failedCutoff instead
- we remove the getStats() routine
- the ETA code is currently disabled
- timeboxing routines use repsToday instead of stats
- remove stats delete from export
- remove stats table and index in upgrade
- remove stats syncing and globalStats refresh pre-sync
- remove stats count check in fullSync check, which was redundant anyway
- update unit tests

Also:

- newCountToday -> newCount, to bring it in line with revCount & failedCount,
  which also reflect the currently due count
- newCount -> newAvail
- timeboxing routines renamed since the old names were confusingly similar to
  refreshSession() which does something different

Todo:

- update newSeenToday & repsToday when answering a card
- reimplement eta
2011-04-28 09:21:07 +09:00
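
With the stats table gone, per-day counts have to be derived from the revlog on reset(). A minimal sketch, assuming ms-based revlog ids and a dayCutoff-style timestamp in seconds:

```python
def reps_today(db, day_cutoff):
    # revlog ids are millisecond timestamps, so today's reps are simply
    # the entries logged since the start of the current day
    day_start_ms = (day_cutoff - 86400) * 1000
    return db.execute("select count() from revlog where id > ?",
                      (day_start_ms,)).fetchone()[0]
```
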
Damien Elmes
9421a037f6 remove self explanatory module docstrings; strip trailing whitespace 2011-04-28 09:21:07 +09:00
Damien Elmes
218f823037 default to dev server 2011-04-28 09:21:07 +09:00
Damien Elmes
4302306fe9 use a checksum for field values; fixed import->update number
Previously we had an index on the value field, which was very expensive for
long fields. Instead we use a separate column and take the first 8 characters
of the field value's md5sum, and index that. In decks with lots of text in
fields, it can cut the deck size by 30% or more, and many decks improve by
10-20%. Decks with only a few characters in fields may increase in size
slightly, but this is offset by the fact that we only generate a checksum for
fields that have uniqueness checking on.

Also, fixed import->update reporting the total # of available facts instead of
the number of facts that were imported.
2011-04-28 09:21:06 +09:00
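
The checksum is cheap to compute and short enough to index; a sketch (the real code only checksums fields with uniqueness checking enabled, and may normalise the text first):

```python
import hashlib

def field_checksum(value):
    # first 8 hex characters of the md5 of the field's value,
    # indexed in place of the full text
    return hashlib.md5(value.encode("utf-8")).hexdigest()[:8]

assert field_checksum("dog") == hashlib.md5(b"dog").hexdigest()[:8]
```
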
Damien Elmes
da48eb1e55 remove old relativeDelay compat fix in sync 2011-04-28 09:21:06 +09:00
Damien Elmes
1ddf1be747 abort the summary early if we're over the full sync threshold 2011-04-28 09:21:06 +09:00
Damien Elmes
28604b9d29 remove priorities 2011-04-28 09:21:06 +09:00
Damien Elmes
1b7ac91a2a force a full sync if there have been schema changes on either side 2011-04-28 09:21:06 +09:00
Damien Elmes
4bf334c6b3 strip some old code 2011-04-28 09:21:06 +09:00
Damien Elmes
1aafbd02f3 use env vars for host/port 2011-04-28 09:19:57 +09:00
Damien Elmes
c4e045463b set rd=2 in subscriptions 2011-03-04 14:32:17 +09:00
Damien Elmes
07db17be88 off by one in relativeDelay sync code 2011-02-07 00:04:39 +09:00
Damien Elmes
c1d15b8a9e clearer message when facts missing after sync 2011-01-21 11:02:07 +09:00
Damien Elmes
0b07707e68 make sure we don't try to send the queues when bundling deck 2011-01-13 07:55:39 +09:00
Damien Elmes
2ca27d389f fix local syncing 2011-01-07 13:35:15 +09:00
Damien Elmes
1f34abc003 more fixes for skewed clocks
If a client whose clock was ahead of the server time synced a deck, the
modified time ended up higher than lastSync when the deck was later modified
on the server. Instead we force the modified time to be <= the server time,
which is known to be correct.
2011-01-04 15:51:24 +09:00
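
The fix amounts to clamping the client-supplied modification time to the server clock, which is the trusted one; a one-function sketch:

```python
def clamp_modified(client_mod, server_time):
    # never let a fast client clock push the deck's modified time past
    # the server's idea of "now", or later server edits would look older
    # than lastSync and get skipped
    return min(client_mod, server_time)
```
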
Damien Elmes
ff5bc72121 pass in a 0 timediff if using stock sync() 2010-12-25 12:43:35 +09:00
Damien Elmes
ee7da2bd65 update comment 2010-12-21 06:41:42 +09:00
Damien Elmes
7c45bab35a rate-limit sync progress messages for win32 installs with huge net bufs 2010-12-17 21:04:47 +09:00
Damien Elmes
400ca9a8a2 factor in time difference when determining common point 2010-12-17 03:55:04 +09:00
Damien Elmes
0c9672e7b8 rewrite media support
- media is no longer hashed, and instead stored in the db using its original
  name
- when adding media, its checksum is calculated and used to look for
  duplicates
- duplicate filenames will result in a number being tacked onto the filename
- the size column is used to count card references to media. If media is
  referenced in a fact but not the question or answer, the count will be zero.
- there is no guarantee media will be listed in the media db if it is unused
  on the question & answer
- if rebuildMediaDir(delete=True), then entries with zero references are
  deleted, along with any unused files in the media dir.
- rebuildMediaDir() will update the internal checksums, and set the checksum
  to "" if a file can't be found
- rebuildMediaDir() is a lot less destructive now, and will leave alone
  directories it finds in the media folder (but not look in them either)
- rebuildMediaDir() returns more information about the state of media now
- the online and mobile clients will need to make sure that when downloading
  media, entries with no checksum are non-fatal and do not abort the download
  process.
- the ref count is updated every time the q/a is updated - so the db should be
  up to date after every add/edit/import
- since we look for media on the q/a now, card templates like '<img
  src="{{{field}}}">' will work now
- exporting original files has gone, as it is not needed anymore
- move from per-model media URL to deckVar. downloadMissingMedia() uses this
  now. Deck subscriptions will have to be updated to share media another way.
- pass deck in formatQA, as latex support is going to change
2010-12-11 01:19:31 +09:00
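
A hedged sketch of the add-media path described above: files keep their original names, duplicates are detected by checksum, and a clashing name from a different file gets a number tacked onto it. The function name and numbering format are illustrative.

```python
import hashlib, os, shutil

def add_media(media_dir, src):
    with open(src, "rb") as f:
        csum = hashlib.md5(f.read()).hexdigest()
    base, ext = os.path.splitext(os.path.basename(src))
    name, n = base + ext, 1
    while os.path.exists(os.path.join(media_dir, name)):
        with open(os.path.join(media_dir, name), "rb") as f:
            if hashlib.md5(f.read()).hexdigest() == csum:
                return name                      # same content already present
        name = "%s (%d)%s" % (base, n, ext)      # tack a number onto the name
        n += 1
    shutil.copyfile(src, os.path.join(media_dir, name))
    return name
```
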
Damien Elmes
a383223e02 provide more info in sync error messages; catch zlib decode errors 2010-12-07 16:48:49 +09:00
Damien Elmes
2013e7e4ff conditional delete of css 2010-12-07 14:35:16 +09:00
Damien Elmes
15763f8f3c make sure we don't commit during a sync
updateDynamicIndices() is done on next deck load anyway
2010-12-07 11:59:02 +09:00
Damien Elmes
039af66a9d don't rebuild counts in applyPayload(), as the deck will be reopened 2010-12-07 11:56:43 +09:00
Damien Elmes
4c8f2d3b47 add finish() command to sync protocol 2010-12-07 11:02:23 +09:00
Damien Elmes
0af8da9cb8 sync updates
- set lastSync on successful upload, not before it
- make sure source file is closed
- use v2 sync protocol
2010-12-07 09:20:31 +09:00
Damien Elmes
da97701b2d disable lastSync fudging again 2010-12-07 09:19:09 +09:00
Damien Elmes
9259718fd5 set syncName after full download 2010-12-02 07:23:54 +09:00
Damien Elmes
6ec898ca4b Require explicit reset for most queue-modifying functions
When you call operations like deleteCards(), suspendCards() and so on, it is
now necessary to call deck.reset() afterwards. This allows the calling code to
delay a reset if necessary. If the calling code calls a function that says the
caller must reset, the caller should be sure to call .reset() and fetch the
current card again. Failure to do the latter will result in answerCard()
attempting to remove the card from a queue that has already been cleared.
2010-11-23 17:41:36 +09:00
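
In calling-code terms, the new contract looks roughly like this; the deck methods are named as in the commit, but the helper itself is illustrative:

```python
def delete_and_continue(deck, card_ids):
    # deleteCards() no longer resets the queues itself
    deck.deleteCards(card_ids)
    # the caller must reset explicitly and then re-fetch the current card;
    # answering a stale card would make answerCard() try to remove it
    # from a queue that has just been cleared
    deck.reset()
    return deck.getCard()
```
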
Damien Elmes
b69fd48768 more type handling updates; don't munge counts on sync
In various parts of the code we need to get all cards of a given category
(new, failed, etc) regardless of whether they're suspended, buried, etc. So we
store the true type in the obsolete relativeDelay column and add an index for
it, because it's cheaper than putting indices on reps & successive.
2010-11-13 18:39:24 +09:00
Damien Elmes
6ed0bc91bb update sync url 2010-11-08 09:22:36 +09:00
Damien Elmes
e0d46f0f12 be resilient if spaceUntil sent wrong 2010-11-06 07:50:05 +09:00
Damien Elmes
46790f2e92 remove (incorrect) code in sync, is covered on deck load anyway 2010-11-02 22:34:10 +09:00
Damien Elmes
2c5ac66083 type/priority changes, cram/rev early refactor, more
* Adjust type to remove cards from the queues, so we don't have to rebuild
  priorities to restore them:

Type -= 3 when suspending
Type += 3 when burying
Type += 6 when cramming / reviewing early

We still need to adjust priorities for backwards compatibility, but this can
be removed in the future.

* Factor out scheduler-specific code in answerCard(), so the different
  schedulers are now fully modular

* Differentiate between a card's current queue and its type

* Make sure dueCutoff cuts off at the chosen offset instead of midnight
2010-11-02 01:59:20 +09:00
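
The offsets above can be read as a small reversible encoding; a sketch using the constants exactly as listed in the commit, with everything else illustrative:

```python
def suspend(card):
    card.type -= 3      # Type -= 3 when suspending

def bury(card):
    card.type += 3      # Type += 3 when burying

def cram(card):
    card.type += 6      # Type += 6 when cramming / reviewing early

def restore(card, offset):
    # undoing an operation just reverses its offset, so there is no need
    # to rebuild priorities to put the card back in its queue
    card.type -= offset
```
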
Damien Elmes
ad743d850d start work on scheduling refactor
Previously we used getCard() to fetch one card at a time. This required a
number of indices to perform efficiently, and the indices were expensive in
terms of disk space and the time required to keep them up to date. Instead we
now gather a batch of cards at once.

- Drop checkDue()/isDue so writes are not necessary to the DB when checking
for due cards
- Due counts are checked on deck load, and only updated once a day or at the
end of a session. This prevents cards from expiring during reviews, which
previously led to confusing undo behaviour and due counts that went up
instead of down as you reviewed. The default will be to only expire cards
once a day, which is a change from previous behaviour.
- Set deck var defaults on deck load/create instead of upgrade, which should
fix upgrade issues
- The scheduling code can now have bits and pieces switched out, which should
make review early / cram etc easier to integrate
- Cards with priority <= 0 now have their type incremented by three, so we can
get access to schedulable cards with a single column.
- rebuildQueue() -> reset()
- refresh() -> refreshSession()
- Views and many of the indices on the cards table are now obsolete and will
  be removed in the future. I won't remove them straight away, so as to not
  break backward compatibility.
- Use bigger intervals between successive card templates, as the previous
intervals were too small to represent in doubles in some circumstances

Still to do:

- review early
- learn more
- failing mature cards where delay1 > delay0
2010-10-18 14:35:11 +09:00
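
A rough sketch of the batched fetch that replaces per-card getCard() calls; queue values, limits, and column names here are assumptions for illustration:

```python
def fill_rev_queue(db, day_cutoff, limit=50):
    # gather a batch of due review cards in one query instead of
    # fetching them one at a time with getCard()
    return [r[0] for r in db.execute(
        "select id from cards where queue = 2 and due <= ? "
        "order by due limit ?", (day_cutoff, limit))]
```
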
Damien Elmes
8df9111b50 only copy used media on import/export 2010-10-16 10:29:39 +09:00
Damien Elmes
e010ef8062 add clock skew compensation again
This has the negative effect of forcing another full sync if the user syncs
again within five minutes of the previous full sync, but it makes it much
less likely that people's due counts will fall out of sync.
2010-10-02 14:47:33 +09:00
Damien Elmes
99ba3f09c8 catch missing facts at end of sync 2010-09-12 12:21:39 +09:00
Damien Elmes
5e7c62bca5 don't compensate for clock skew 2010-07-30 18:15:24 +09:00
Damien Elmes
e956aa9afb remove obsolete function 2010-07-27 22:54:51 +09:00
Damien Elmes
d3fb189a72 improved lastSync/modified handling
- never bump deck mod while syncing
- set lastSync to current time, not deck modified time
- don't update lastSync until the final part of the sync
- lower clock skew allowance to ~5 minutes
- bump full sync threshold to 1000 modified items
2010-07-27 22:46:04 +09:00
Damien Elmes
55194f8aa7 ensure cardmodel/fieldmodels work when given a string too 2010-07-26 17:15:05 +09:00
Damien Elmes
61a7d6d79e make sure we match a given model even when given a string 2010-07-24 14:28:24 +09:00
Damien Elmes
6852b0acda bump mod time on full sync to server, ensure lastSync matches 2010-07-21 18:37:05 +09:00
Damien Elmes
f03000d27b remove string exceptions for python2.6 2010-06-10 13:24:46 +09:00
Damien Elmes
5616e679f5 cache the css as a deck var, don't accidentally send it in sync 2010-05-10 21:32:36 +09:00
Damien Elmes
fd1953bfb5 convert to a list, not tuples, so we can modify on the fly 2010-05-07 16:05:35 +09:00
Damien Elmes
c8d9bac5df clarify one way sync error 2010-03-05 09:27:11 +09:00
Damien Elmes
7c8e612704 use a constant for chunk size 2010-02-18 17:36:54 +09:00
Damien Elmes
33ec7ce133 clarify comment 2009-12-02 03:38:46 +09:00
Damien Elmes
093395b9e0 Revert "add 30 second timeout to all sync ops"
This reverts commit cbc23e5231.
2009-11-24 23:35:34 +09:00
Damien Elmes
cbc23e5231 add 30 second timeout to all sync ops 2009-10-04 19:53:12 +09:00
Damien Elmes
164b0583c3 unlink tmp file after full up 2009-10-04 19:33:18 +09:00
Damien Elmes
75f56d13e2 decrease chunk size to 32k due to crappy win32 network cards 2009-09-26 05:51:22 +09:00
Damien Elmes
27732e3553 catch large # of reviews in full sync, reduce limit to 500 2009-06-25 05:47:14 +09:00
Damien Elmes
6b7c0d7997 fix missing media problem, fix json decode float 2009-06-21 04:36:36 +09:00
Damien Elmes
1f0a8edfa4 strip out mediaSupported 2009-06-20 01:32:20 +09:00
Damien Elmes
3d81181323 bulk media support -> local media copy, always send media table 2009-06-19 11:50:31 +09:00
Damien Elmes
f96a7a7c5b Reverting "suspend/unsuspend noweb on full sync"; obsolete 2009-06-19 10:10:37 +09:00
Damien Elmes
093fb4695b suspend/unsuspend noweb on full sync 2009-06-18 04:04:50 +09:00
Damien Elmes
03369658ee remove sleep debugging 2009-06-16 04:01:13 +09:00
Damien Elmes
717044dcad add progress handler back to full sync upload 2009-06-16 03:59:54 +09:00
Damien Elmes
5b8832402a fix close post sync 2009-06-16 00:02:26 +09:00
Damien Elmes
3b99232f7a switch to urllib2 to pick up proxy, monkey-patch httplib to incrementally send 2009-06-15 23:01:43 +09:00
Damien Elmes
efb71c754c bump protocol version 2009-06-13 01:14:54 +09:00
Damien Elmes
28f6df93cb assert response ok 2009-06-10 22:33:20 +09:00