Anki/tests/test_media.py
Damien Elmes 2f27133705 drop sqlalchemy; massive refactor
SQLAlchemy is a great tool, but it wasn't a great fit for Anki:
- We often had to drop down to raw SQL for performance reasons.
- The DB cursors and results were wrapped, which incurred a
  sizable performance hit due to introspection. Operations like fetching 50k
  records from a hot cache were taking more than twice as long to complete.
- We take advantage of sqlite-specific features, so SQL language abstraction
  is useless to us.
- The anki schema is quite small, so manually saving and loading objects is
  not a big burden.

In the process of porting to DBAPI, I've refactored the database schema:
- App configuration data that we don't need in joins or bulk updates has been
  moved into JSON objects. This simplifies serializing, and means we won't
  need DB schema changes to store extra options in the future. This change
obsoletes the deckVars table; a rough sketch of the idea follows this list.
- Renamed tables:
-- fieldModels -> fields
-- cardModels -> templates
-- fields -> fdata
- A number of attribute names have been shortened.
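
As a rough sketch of the configuration change from the first bullet, using
plain sqlite3 (the table, column, and key names are invented for the example,
not the real schema):

    import json, sqlite3

    db = sqlite3.connect(":memory:")
    # hypothetical single-row table holding deck-wide options as JSON text
    db.execute("create table deck (id integer primary key, conf text)")
    db.execute("insert into deck values (1, ?)",
               (json.dumps({"newPerDay": 20}),))
    # adding an option later needs no schema change, only a new JSON key
    row = db.execute("select conf from deck where id = 1").fetchone()
    conf = json.loads(row[0])
    conf["leechAction"] = "suspend"
    db.execute("update deck set conf = ? where id = 1", (json.dumps(conf),))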

Classes like Card, Fact & Model remain. They maintain a reference to the deck.
To write their state to the DB, call .flush().

Objects no longer have their modification time manually updated. Instead, the
modification time is updated when they are flushed. This also applies to the
deck.
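
As a rough usage sketch (the calls mirror those in the test file below; the
deck path is a placeholder):

    import anki

    d = anki.Deck("/path/to/deck.anki")   # placeholder path
    f = d.newFact()
    f['Front'] = u"question"
    f['Back'] = u"answer"
    d.addFact(f)
    # later edits are written out explicitly; flush() also bumps the
    # modification time, so callers no longer set it by hand
    f['Back'] = u"revised answer"
    f.flush()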

Decks will now save on close, because various operations that were done at
deck load will be moved into deck close instead. Operations like undoing
buried cards are cheap on a hot cache, but expensive on startup.
Programmatically you can call .close(save=False) to avoid a save and a
modification bump. This will be useful for generating due counts.
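
For example (a sketch; only the save keyword itself is taken from the
description above):

    import anki

    # normal use: closing saves the deck and bumps its modification time
    d = anki.Deck("/path/to/deck.anki")
    d.close()

    # read-only use, e.g. when only gathering due counts
    d = anki.Deck("/path/to/deck.anki")
    # ... read whatever is needed ...
    d.close(save=False)   # no save, no modification bump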

Because of the new saving behaviour, the save and save as options will be
removed from the GUI in the future.

The q/a cache and field cache generation has been centralized. Facts will
automatically rebuild the cache on flush; models can do so with
model.updateCache().
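
Concretely, this is the pattern test_db() in the file below exercises (d and
f as in the sketch further up):

    # editing a fact: the q/a and field caches are rebuilt automatically
    f['Back'] = u"new text"
    f.flush()
    # editing a template: rebuild the cache explicitly afterwards
    m = d.currentModel()
    m.templates[0].afmt = u'<img src="{{{Back}}}">'
    m.flush()
    m.updateCache()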

Media handling has also been reworked. It has moved into a MediaRegistry
object, which the deck holds. Refcounting has been dropped - it meant we had
to compare old and new values every time facts or models were changed, and
existed for the sole purpose of not showing errors on a missing media
download. Instead we just call media.registerText(q+a) when the text is
updated. The download function will be expanded to ask the user if they want
to continue after a certain number of files have failed to download, which
should be an adequate alternative. And we now add the file to the media DB
when it's copied to the media directory, not when the card is committed. This
fixes the duplicates a user would get if they added the same media to a card
twice without adding the card.
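
A sketch of the new flow (d and f as above; registerText() is normally called
by the deck itself when a fact's rendered q+a changes, and the concatenated
field text below merely stands in for that):

    # copying a file registers it in the media DB right away, even if no
    # card referencing it is ever committed; the returned name is what
    # should be embedded in the fact
    fname = d.media.addFile("/tmp/foo.jpg")   # placeholder source path
    f['Front'] = u"<img src='%s'>" % fname
    f.flush()
    # referenced files are simply (re)registered instead of refcounted
    d.media.registerText(f['Front'] + f['Back'])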

The old DeckStorage object had its upgrade code split in a previous commit;
the opening and upgrading code has been merged back together, and put in a
separate storage.py file. The correct way to open a deck now is
import anki; d = anki.Deck(path).

deck.getCard() -> deck.sched.getCard()
same with answerCard
deck.getCard(id) returns a Card object now.
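
For example (a sketch; the answerCard() arguments are an assumption, not
something stated above):

    # the review loop now lives on the scheduler
    card = d.sched.getCard()
    if card:
        # ... render the card, collect the user's answer ...
        d.sched.answerCard(card, 3)   # assumed signature: (card, ease)
    # fetching an arbitrary card by id returns a Card object
    c = d.getCard(12345)              # placeholder id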

And the DB wrapper has had a few changes:
- sql statements now use more standard DBAPI names:
 - statement() -> execute()
 - statements() -> executemany()
- called like execute(sql, 1, 2, 3) or execute(sql, a=1, b=2, c=3)
- column0 -> list
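
For example (d as above; the table and column names in the SQL are
illustrative only):

    # positional parameters
    d.db.execute("update cards set flags = ? where id = ?", 0, 12345)
    # keyword parameters
    d.db.execute("update cards set flags = :f where id = :id", f=0, id=12345)
    # bulk statements
    d.db.executemany("insert into fdata values (?, ?, ?)",
                     [(1, 2, u"text"), (3, 4, u"more text")])
    # fetch a single column as a list (formerly column0), or a single value
    ids = d.db.list("select id from cards")
    n = d.db.scalar("select count() from cards")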
2011-04-28 09:23:53 +09:00

# coding: utf-8
import tempfile, os, time
from anki import Deck
from anki.utils import checksum
from shared import getEmptyDeck, testDir

# uniqueness check
def test_unique():
    d = getEmptyDeck()
    dir = tempfile.mkdtemp(prefix="anki")
    # new file
    n = "foo.jpg"
    new = os.path.basename(d.media.uniquePath(dir, n))
    assert new == n
    # duplicate file
    open(os.path.join(dir, n), "w").write("hello")
    n = "foo.jpg"
    new = os.path.basename(d.media.uniquePath(dir, n))
    assert new == "foo (1).jpg"
    # another duplicate
    open(os.path.join(dir, "foo (1).jpg"), "w").write("hello")
    n = "foo.jpg"
    new = os.path.basename(d.media.uniquePath(dir, n))
    assert new == "foo (2).jpg"

# copying files to media folder
def test_copy():
    d = getEmptyDeck()
    dir = tempfile.mkdtemp(prefix="anki")
    path = os.path.join(dir, "foo.jpg")
    open(path, "w").write("hello")
    # new file
    assert d.media.addFile(path) == "foo.jpg"
    # dupe md5
    path = os.path.join(dir, "bar.jpg")
    open(path, "w").write("hello")
    assert d.media.addFile(path) == "foo.jpg"

# media db
def test_db():
    deck = getEmptyDeck()
    dir = tempfile.mkdtemp(prefix="anki")
    path = os.path.join(dir, "foo.jpg")
    open(path, "w").write("hello")
    # add a new fact that references it twice
    f = deck.newFact()
    f['Front'] = u"<img src='foo.jpg'>"
    f['Back'] = u"back [sound:foo.jpg]"
    deck.addFact(f)
    # 1 entry in the media db, and no checksum
    assert deck.db.scalar("select count() from media") == 1
    assert not deck.db.scalar("select group_concat(csum, '') from media")
    # copy to media folder
    path = deck.media.addFile(path)
    # md5 should be set now
    assert deck.db.scalar("select count() from media") == 1
    assert deck.db.scalar("select group_concat(csum, '') from media")
    # detect file modifications
    oldsum = deck.db.scalar("select csum from media")
    open(path, "w").write("world")
    deck.media.rebuildMediaDir()
    newsum = deck.db.scalar("select csum from media")
    assert newsum and newsum != oldsum
    # delete underlying file and check db
    os.unlink(path)
    deck.media.rebuildMediaDir()
    # md5 should be gone again
    assert deck.db.scalar("select count() from media") == 1
    assert deck.db.scalar("select not csum from media")
    # media db should pick up media defined via templates & bulk update
    f['Back'] = u"bar.jpg"
    f.flush()
    # modify template & regenerate
    assert deck.db.scalar("select count() from media") == 1
    m = deck.currentModel()
    m.templates[0].afmt = u'<img src="{{{Back}}}">'
    m.flush()
    m.updateCache()
    assert deck.db.scalar("select count() from media") == 2

def test_deckIntegration():
    deck = getEmptyDeck()
    # create a media dir
    deck.media.mediaDir(create=True)
    # put a file into it
    file = unicode(os.path.join(testDir, "deck/fake.png"))
    deck.media.addFile(file)
    print "todo: check media copied on rename"