Anki/tests/test_deck.py
Damien Elmes 2f27133705 drop sqlalchemy; massive refactor
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
  sizable performance hit due to introspection. Operations like fetching 50k
  records from a hot cache were taking more than twice as long to complete.
- We take advantage of SQLite-specific features, so SQL language abstraction
  is useless to us.
- The Anki schema is quite small, so manually saving and loading objects is
  not a big burden.

In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
  moved into JSON objects. This simplifies serializing, and means we won't
  need DB schema changes to store extra options in the future. This change
  obsoletes the deckVars table. (A sketch of the idea follows this list.)
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- A number of attribute names have been shortened.
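As a rough illustration of the JSON-blob approach - the table and column
names here are invented for the sketch, not taken from the actual schema:

    import json
    # any number of options can live in one serialized text column
    conf = {"collapseTime": 600, "newSpread": 0}
    deck.db.execute("update deck set conf = ?", json.dumps(conf))
    # and they come back without a schema change when new keys are added
    conf = json.loads(deck.db.scalar("select conf from deck"))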

Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().
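For example, given an open deck (a sketch only; the Front/Back field names
match the model used in the tests below):

    f = deck.newFact()
    f['Front'] = u"question"; f['Back'] = u"answer"
    deck.addFact(f)
    # later edits stay in memory until the object is flushed
    f['Front'] = u"revised question"
    f.flush()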

Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.

Decks will now save on close, because various operations that were previously
done at deck load are being moved into deck close instead. Operations like
undoing buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.
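For instance, a read-only pass over a deck might look like this (a sketch;
cardCount() is the same call the tests below use):

    import anki
    d = anki.Deck(path)
    print d.cardCount()
    d.close(save=False)   # no save, no modification bump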

Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.

Generation of the q/a cache and the field cache has been centralized. Facts
automatically rebuild the cache on flush; models can do so with
model.updateCache().
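A sketch, using the same accessors the tests below exercise:

    f['Front'] = u"new question"
    f.flush()              # q/a and field caches are rebuilt here
    print f.cards()[0].q   # reflects the edited field

    m = f.model
    m.fields[1].conf['unique'] = True
    m.flush()
    m.updateCache()        # regenerate the caches after a model change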

Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new values every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when the text is
updated. The download function will be expanded to ask the user if they want
to continue after a certain number of files have failed to download, which
should be an adequate alternative. And we now add the file to the media DB
when it is copied to the media directory, not when the card is committed.
This fixes the duplicates a user would get if they added the same media to a
card twice without adding the card.
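A sketch of the new flow - it assumes the registry is reachable as deck.media
and that cards expose .a alongside the .q used in the tests below:

    c = f.cards()[0]
    # register any media referenced in the rendered question/answer
    deck.media.registerText(c.q + c.a)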

The old DeckStorage object had its upgrade code split out in a previous
commit; the opening and upgrading code has been merged back together, and put
in a separate storage.py file. The correct way to open a deck is now:

    import anki
    d = anki.Deck(path)

deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.
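A sketch of the new review loop (the ease argument to answerCard is an
assumption here, not something this commit documents):

    c = deck.sched.getCard()          # next card from the scheduler
    if c:
        deck.sched.answerCard(c, 3)   # answer with an assumed ease grade
        same = deck.getCard(c.id)     # returns a Card object, not a raw row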

And the DB wrapper has had a few changes (a short sketch follows the list):
- SQL statements go through a more standard DBAPI:
 - statement() -> execute()
 - statements() -> executemany()
- called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
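For example, a sketch against the fdata table the tests below use:

    # positional and keyword parameter styles
    deck.db.execute("update fdata set csum = ? where ord = ?", u"", 0)
    deck.db.execute("update fdata set csum = :c where ord = :o", c=u"", o=0)
    # bulk statements go through executemany()
    deck.db.executemany("update fdata set csum = ? where ord = ?",
                        [(u"", 0), (u"", 1)])
    # column0 -> list: first column of the result as a plain Python list
    ords = deck.db.list("select ord from fdata")
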
2011-04-28 09:23:53 +09:00

# coding: utf-8
import os, re
from tests.shared import assertException, getEmptyDeck, testDir
from anki import Deck
newPath = None
newMod = None
def test_create():
    global newPath, newMod
    path = "/tmp/test_attachNew.anki"
    try:
        os.unlink(path)
    except OSError:
        pass
    deck = Deck(path)
    # for open()
    newPath = deck.path
    deck.save()
    newMod = deck.mod
    deck.close()
    del deck
def test_open():
    deck = Deck(newPath)
    assert deck.mod == newMod
    deck.close()
def test_openReadOnly():
    # non-writeable dir
    assertException(Exception,
                    lambda: Deck("/attachroot"))
    # reuse tmp file from before, test non-writeable file
    os.chmod(newPath, 0)
    assertException(Exception,
                    lambda: Deck(newPath))
    os.chmod(newPath, 0666)
    os.unlink(newPath)
def test_factAddDelete():
    deck = getEmptyDeck()
    # add a fact
    f = deck.newFact()
    f['Front'] = u"one"; f['Back'] = u"two"
    n = deck.addFact(f)
    assert n == 1
    deck.rollback()
    assert deck.cardCount() == 0
    # try with two cards
    f = deck.newFact()
    f['Front'] = u"one"; f['Back'] = u"two"
    m = f.model
    m.templates[1].active = True
    m.flush()
    n = deck.addFact(f)
    assert n == 2
    # check q/a generation
    c0 = f.cards()[0]
    assert re.sub("</?.+?>", "", c0.q) == u"one"
    # it should not be a duplicate
    for p in f.problems():
        assert not p
    # now let's make a duplicate and test uniqueness
    f2 = deck.newFact()
    f2.model.fields[1].conf['required'] = True
    f2['Front'] = u"one"; f2['Back'] = u""
    p = f2.problems()
    assert p[0] == "unique"
    assert p[1] == "required"
    # try deleting the first card
    cards = f.cards()
    id1 = cards[0].id; id2 = cards[1].id
    assert deck.cardCount() == 2
    assert deck.factCount() == 1
    deck.deleteCard(id1)
    assert deck.cardCount() == 1
    assert deck.factCount() == 1
    # and the second should clear the fact
    deck.deleteCard(id2)
    assert deck.cardCount() == 0
    assert deck.factCount() == 0
def test_fieldChecksum():
    deck = getEmptyDeck()
    f = deck.newFact()
    f['Front'] = u"new"; f['Back'] = u"new2"
    deck.addFact(f)
    assert deck.db.scalar(
        "select csum from fdata where ord = 0") == "22af645d"
    # empty field should have no checksum
    f['Front'] = u""
    f.flush()
    assert deck.db.scalar(
        "select csum from fdata where ord = 0") == ""
    # changing the val should change the checksum
    f['Front'] = u"newx"
    f.flush()
    assert deck.db.scalar(
        "select csum from fdata where ord = 0") == "4b0e5a4c"
    # back should have no checksum, because it's not set to be unique
    assert deck.db.scalar(
        "select csum from fdata where ord = 1") == ""
    # if we turn on unique, it should get a checksum
    f.model.fields[1].conf['unique'] = True
    f.model.flush()
    f.model.updateCache()
    print deck.db.scalar(
        "select csum from fdata where ord = 1")
    assert deck.db.scalar(
        "select csum from fdata where ord = 1") == "82f2ec5f"
    # turning it off doesn't currently zero the checksum for efficiency reasons
    # f.model.fields[1].conf['unique'] = False
    # f.model.flush()
    # f.model.updateCache()
    # assert deck.db.scalar(
    #     "select csum from fdata where ord = 1") == ""
def test_upgrade():
    import tempfile, shutil
    src = os.path.join(testDir, "support", "anki12.anki")
    (fd, dst) = tempfile.mkstemp(suffix=".anki")
    print "upgrade to", dst
    shutil.copy(src, dst)
    deck = Deck(dst)