
* Add apkg export on backend
* Filter out missing media-paths at write time
* Make TagMatcher::new() infallible
* Gather export data instead of copying directly
* Revert changes to rslib/src/tags/
* Reuse filename_is_safe/check_filename_safe()
* Accept func to produce MediaIter in export_apkg()
* Only store file folder once in MediaIter
* Use temporary tables for gathering
export_apkg() now accepts a search instead of a deck id. Decks are
gathered according to the matched notes' cards.
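
As a rough illustration (not the exact API surface added here), the search
can be built with the existing Python search helpers; the exporter then
gathers the matching notes, their cards, and those cards' decks. Deck and
tag names below are made up for the example:

    from anki.collection import Collection, SearchNode

    col = Collection("collection.anki2")  # path is illustrative
    # Any browser-style query can now define the exported subset, e.g. the
    # notes in the "German" deck that are tagged "verbs":
    search = col.build_search_string(SearchNode(deck="German"), SearchNode(tag="verbs"))
    # `search` is what gets handed to the backend exporter in place of the
    # old deck id.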
* Use schedule_as_new() to reset cards
* ExportData → ExchangeData
* Ignore ascii case when filtering system tags
* search_notes_cards_into_table → search_cards_of_notes_into_table
* Start on apkg importing on backend
* Fix due dates in days for apkg export
* Refactor import-export/package
- Move media and meta code into appropriate modules.
- Normalize/check for normalization when deserializing media entries.
* Add SafeMediaEntry for deserialized MediaEntries
* Prepare media based on checksums
- Ensure all existing media files are hashed.
- Hash incoming files during preparation to detect conflicts.
- Uniquify names of conflicting files with hash (not notetype id); see the sketch after this list.
- Mark media files as used while importing notes.
- Finally copy used media.
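
A minimal Python sketch of that conflict rule (the real implementation is
in Rust; the names here are illustrative only):

    import hashlib
    import os

    def uniquified_name(fname: str, data: bytes, existing_checksums: dict[str, str]) -> str:
        # `existing_checksums` maps names already in the target media folder
        # to their SHA1 hex digests.
        incoming = hashlib.sha1(data).hexdigest()
        current = existing_checksums.get(fname)
        if current is None or current == incoming:
            # new file, or identical contents: keep the original name
            return fname
        # same name, different contents: disambiguate with the content hash
        stem, ext = os.path.splitext(fname)
        return f"{stem}-{incoming}{ext}"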
* Handle encoding in `replace_media_refs()`
* Add trait to keep down cow boilerplate
* Add notetypes immediately instead of preparing
* Move target_col into Context
* Add notes immediately instead of preparing
* Note id, not guid of conflicting notes
* Add import_decks()
* decks_configs → deck_configs
* Add import_deck_configs()
* Add import_cards(), import_revlog()
* Use dyn instead of generic for media_fn
Otherwise, we would have to pass None with a type annotation in the
default case.
* Fix signature of import_apkg()
* Fix search_cards_of_notes_into_table()
* Test new functions in text.rs
* Add roundtrip test for apkg (stub)
* Keep source id of imported cards (or skip)
* Keep source ids of imported revlog (or skip)
* Try to keep source ids of imported notes
* Make adding notetype with id undoable
* Wrap apkg import in transaction
* Keep source ids of imported deck configs (or skip)
* Handle card due dates and original due/did
* Fix importing cards/revlog
Card ids are manually uniquified.
* Factor out card importing
* Refactor card and revlog importing
* Factor out card importing
Also handle missing parents.
* Factor out note importing
* Factor out media importing
* Maybe upgrade scheduler of apkg
* Fix parent deck gathering
* Unconditionally import static media
* Fix deck importing edge cases
Test those edge cases, and add some global test helpers.
* Test note importing
* Let import_apkg() take a progress func
* Expand roundtrip apkg test
* Use fat pointer to avoid propagating generics
* Fix progress_fn type
* Expose apkg export/import on backend
* Return note log when importing apkg
* Fix archived collection name on apkg import
* Add CollectionOpWithBackendProgress
* Fix wrong Interrupted Exception being checked
* Add ClosedCollectionOp
* Add note ids to log and strip HTML
* Update progress when checking incoming media too
* Conditionally enable new importing in GUI
* Fix all_checksums() for media import
Entries of deleted files are nulled, not removed.
* Make apkg exporting on backend abortable
* Return number of notes imported from apkg
* Fix exception printing for QueryOp as well
* Add QueryOpWithBackendProgress
Also support backend exporting progress.
* Expose new apkg and colpkg exporting
* Open transaction in insert_data()
Was slowing down exporting by several orders of magnitude.
* Handle zstd-compressed apkg
* Add legacy arg to ExportAnkiPackage
Currently not exposed on the frontend
* Remove unused import in proto file
* Add symlink for typechecking of import_export_pb2
* Avoid kwargs in pb message creation, so typechecking is not lost
Protobuf's behaviour is rather subtle and I had to dig through the docs
to figure it out: set a field on a submessage to automatically assign
the submessage to the parent, or call SetInParent() to persist a default
version of the field you specified.
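
A standalone illustration of that behaviour, using a stock protobuf
message rather than Anki's own:

    from google.protobuf import descriptor_pb2

    msg = descriptor_pb2.FileDescriptorProto()
    _ = msg.options.java_package          # reading does not set the submessage
    assert not msg.HasField("options")

    msg.options.java_package = "com.example"  # assigning a field does set it
    assert msg.HasField("options")

    msg2 = descriptor_pb2.FileDescriptorProto()
    msg2.options.SetInParent()            # persist a default submessage
    assert msg2.HasField("options")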
* Avoid re-exporting protobuf msgs we only use internally
* Stop after one test failure
mypy often fails much faster than pylint
* Avoid an extra allocation when extracting media checksums
* Update progress after prepare_media() finishes
Otherwise the bulk of the import ends up being shown as "Checked: 0"
in the progress window.
* Show progress of note imports
Note import is the slowest part, so showing progress here makes the UI
feel more responsive.
* Reset filtered decks at import time
Before this change, filtered decks exported with scheduling remained
filtered on import, and maybe_remove_from_filtered_deck() moved cards
into them as their home deck, leading to errors during review.
We may still want to provide a way to preserve filtered decks on import,
but to do that we'll need to ensure we don't rewrite the home decks of
cards, and we'll need to ensure the home decks are included as part of
the import (or give an error if they're not).
https://github.com/ankitects/anki/pull/1743/files#r839346423
* Fix a corner-case where due dates were shifted by a day
This issue existed in the old Python code as well. We need to include
the user's UTC offset in the exported file, or days_elapsed falls back
on the v1 cutoff calculation, which may be a day earlier or later than
the v2 calculation.
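
Roughly, the day count depends on which UTC offset the cutoff is computed
in; a simplified sketch (not Anki's exact v1/v2 code, and the rollover
parameter is made up for the example):

    from datetime import datetime, timedelta, timezone

    def days_elapsed(created_secs: int, now_secs: int, utc_offset_mins: int,
                     rollover_hour: int = 4) -> int:
        # Count daily rollovers between collection creation and now, in the
        # creator's local time (utc_offset_mins = minutes east of UTC).
        tz = timezone(timedelta(minutes=utc_offset_mins))
        created = datetime.fromtimestamp(created_secs, tz) - timedelta(hours=rollover_hour)
        now = datetime.fromtimestamp(now_secs, tz) - timedelta(hours=rollover_hour)
        return (now.date() - created.date()).days

    # Evaluating the same two timestamps with the exporter's offset versus a
    # guessed one can differ by a day, shifting every due date - hence
    # storing the offset in the exported file.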
* Log conflicting note in remapped nt case
* take_fields() → into_fields()
* Alias `[u8; 20]` with `Sha1Hash`
* Truncate logged fields
* Rework apkg note import tests
- Use macros for more helpful errors.
- Split monolith into unit tests.
- Fix some unknown error with the previous test along the way.
(Was failing after 969484de4388d225c9f17d94534b3ba0094c3568.)
* Fix sorting of imported decks
Also adjust the test, so it fails without the patch. It was only passing
before, because the parent deck happened to come before the
inconsistently capitalised child alphabetically. But we want all parent
decks to be imported before their child decks, so their children can
adopt their capitalisation.
* target[_id]s → existing_card[_id]s
* export_collection_extracting_media() → export_into_collection_file()
* target_already_exists → card_ordinal_already_exists
* Add search_cards_of_notes_into_table.sql
* Improve type of apkg export selector/limit
* Remove redundant call to mod_schema()
* Parent tooltips to mw
* Fix a crash when truncating note text
String::truncate() is a bit of a footgun, and I've hit this before
too :-)
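
Not the Rust change itself, but the same boundary issue can be sketched in
Python by truncating a string's UTF-8 bytes mid-character:

    def truncate_utf8(text: str, max_bytes: int) -> str:
        # Cutting the encoded bytes can land inside a multi-byte character;
        # decoding with errors="ignore" drops the partial character instead
        # of blowing up (String::truncate() in Rust panics off a char boundary).
        return text.encode("utf-8")[:max_bytes].decode("utf-8", errors="ignore")

    assert truncate_utf8("naïve", 3) == "na"  # byte 3 is inside the two-byte "ï"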
* Remove ExportLimit in favour of separate classes
* Remove OpWithBackendProgress and ClosedCollectionOp
Backend progress logic is now in ProgressManager. QueryOp can be used
for running on closed collection.
Also fix aborting of colpkg exports, which slipped through in #1817.
* Tidy up import log
* Avoid QDialog.exec()
* Default to excluding scheduling for deck list deck
* Use IncrementalProgress in whole import_export code
* Compare checksums when importing colpkgs
* Avoid registering changes if hashes are not needed
* ImportProgress::Collection → ImportProgress::File
* Make downgrading apkgs depend on meta version
* Generalise IncrementableProgress
And use it in the entire import_export code instead.
* Fix type complexity lint
* Take count_map for IncrementableProgress::get_inner
* Replace import/export env with Shift click
* Accept all args from update() for backend progress
* Pass fields of ProgressUpdate explicitly
* Move update_interval into IncrementableProgress
* Outsource incrementing into Incrementor
* Mutate ProgressUpdate in progress_update callback
* Switch import/export legacy toggle to profile setting
Shift would have been nice, but the existing shortcuts complicate things.
If the user triggers an import with ctrl+shift+i, shift is unlikely to
have been released by the time our code runs, meaning the user accidentally
triggers the new code. We could potentially wait a while before bringing
up the dialog, but then we're forced to guess at how long it will take the
user to release the key.
One alternative would be to use alt instead of shift, but then we need to
trigger our shortcut when that key is pressed as well, and it could
potentially cause a conflict with an add-on that already uses that
combination.
* Show extension in export dialog
* Continue to provide separate options for schema 11+18 colpkg export
* Default to colpkg export when using File>Export
* Improve appearance of combo boxes when switching between apkg/colpkg
+ Deal with long deck names
* Convert newlines to spaces when showing fields from import
Ensures each imported note appears on a separate line
* Don't separate total note count from the other summary lines
This may come down to personal preference, but I feel the other counts
are equally important, and separating them makes them a bit easier to
ignore.
* Fix 'deck not normal' error when importing a filtered deck for the 2nd time
* Fix [Identical] being shown on first import
* Revert "Continue to provide separate options for schema 11+18 colpkg export"
This reverts commit 8f0b2c175f.
Will use a different approach
* Move legacy support into a separate exporter option; add to apkg export
* Adjust 'too new' message to also apply to .apkg import case
* Show a better message when attempting to import new apkg into old code
Previously the user could end up seeing a message like:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 1: invalid start byte
Unfortunately we can't retroactively fix this for older clients.
* Hide legacy support option in older exporting screen
* Reflect change from paths to fnames in type & name
* Make imported decks normal at once
Then skip special casing in update_deck(). Also skip updating the
description if the new one is empty.
Co-authored-by: Damien Elmes <gpg@ankiweb.net>
# Copyright: Ankitects Pty Ltd and contributors
# License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html

# pylint: disable=invalid-name

import os
import unicodedata
from typing import Any, Optional

from anki.cards import CardId
from anki.collection import Collection
from anki.consts import *
from anki.decks import DeckId, DeckManager
from anki.importing.base import Importer
from anki.models import NotetypeId
from anki.notes import NoteId
from anki.utils import int_time, join_fields, split_fields, strip_html_media

GUID = 1
MID = 2
MOD = 3


class V2ImportIntoV1(Exception):
    pass


class MediaMapInvalid(Exception):
    pass


class Anki2Importer(Importer):

    needMapper = False
    deckPrefix: Optional[str] = None
    allowUpdate = True
    src: Collection
    dst: Collection

    def __init__(self, col: Collection, file: str) -> None:
        super().__init__(col, file)

        # set later, defined here for typechecking
        self._decks: dict[DeckId, DeckId] = {}
        self.source_needs_upgrade = False

    def run(self, media: None = None, importing_v2: bool = True) -> None:
        self._importing_v2 = importing_v2
        self._prepareFiles()
        if media is not None:
            # Anki1 importer has provided us with a custom media folder
            self.src.media._dir = media
        try:
            self._import()
        finally:
            self.src.close(save=False, downgrade=False)

    def _prepareFiles(self) -> None:
        self.source_needs_upgrade = False

        self.dst = self.col
        self.src = Collection(self.file)

        if not self._importing_v2 and self.col.sched_ver() != 1:
            # any scheduling included?
            if self.src.db.scalar("select 1 from cards where queue != 0 limit 1"):
                self.source_needs_upgrade = True
        elif self._importing_v2 and self.col.sched_ver() == 1:
            raise V2ImportIntoV1()

    def _import(self) -> None:
        self._decks = {}
        if self.deckPrefix:
            id = self.dst.decks.id(self.deckPrefix)
            self.dst.decks.select(id)
        self._prepareTS()
        self._prepareModels()
        self._importNotes()
        self._importCards()
        self._importStaticMedia()
        self._postImport()
        self.dst.optimize()

    # Notes
    ######################################################################

    def _logNoteRow(self, action: str, noteRow: list[str]) -> None:
        self.log.append(
            "[{}] {}".format(action, strip_html_media(noteRow[6].replace("\x1f", ", ")))
        )

    def _importNotes(self) -> None:
        # build guid -> (id,mod,mid) hash & map of existing note ids
        self._notes: dict[str, tuple[NoteId, int, NotetypeId]] = {}
        existing = {}
        for id, guid, mod, mid in self.dst.db.execute(
            "select id, guid, mod, mid from notes"
        ):
            self._notes[guid] = (id, mod, mid)
            existing[id] = True
        # we ignore updates to changed schemas. we need to note the ignored
        # guids, so we avoid importing invalid cards
        self._ignoredGuids: dict[str, bool] = {}
        # iterate over source collection
        add = []
        update = []
        dirty = []
        usn = self.dst.usn()
        dupesIdentical = []
        dupesIgnored = []
        total = 0
        for note in self.src.db.execute("select * from notes"):
            total += 1
            # turn the db result into a mutable list
            note = list(note)
            shouldAdd = self._uniquifyNote(note)
            if shouldAdd:
                # ensure id is unique
                while note[0] in existing:
                    note[0] += 999
                existing[note[0]] = True
                # bump usn
                note[4] = usn
                # update media references in case of dupes
                note[6] = self._mungeMedia(note[MID], note[6])
                add.append(note)
                dirty.append(note[0])
                # note that we have added the guid
                self._notes[note[GUID]] = (note[0], note[3], note[MID])
            else:
                # a duplicate or changed schema - safe to update?
                if self.allowUpdate:
                    oldNid, oldMod, oldMid = self._notes[note[GUID]]
                    # will update if incoming note more recent
                    if oldMod < note[MOD]:
                        # safe if note types identical
                        if oldMid == note[MID]:
                            # incoming note should use existing id
                            note[0] = oldNid
                            note[4] = usn
                            note[6] = self._mungeMedia(note[MID], note[6])
                            update.append(note)
                            dirty.append(note[0])
                        else:
                            dupesIgnored.append(note)
                            self._ignoredGuids[note[GUID]] = True
                    else:
                        dupesIdentical.append(note)

        self.log.append(self.dst.tr.importing_notes_found_in_file(val=total))

        if dupesIgnored:
            self.log.append(
                self.dst.tr.importing_notes_that_could_not_be_imported(
                    val=len(dupesIgnored)
                )
            )
        if update:
            self.log.append(
                self.dst.tr.importing_notes_updated_as_file_had_newer(val=len(update))
            )
        if add:
            self.log.append(self.dst.tr.importing_notes_added_from_file(val=len(add)))
        if dupesIdentical:
            self.log.append(
                self.dst.tr.importing_notes_skipped_as_theyre_already_in(
                    val=len(dupesIdentical),
                )
            )

        self.log.append("")

        if dupesIgnored:
            for row in dupesIgnored:
                self._logNoteRow(self.dst.tr.importing_skipped(), row)
        if update:
            for row in update:
                self._logNoteRow(self.dst.tr.importing_updated(), row)
        if add:
            for row in add:
                self._logNoteRow(self.dst.tr.adding_added(), row)
        if dupesIdentical:
            for row in dupesIdentical:
                self._logNoteRow(self.dst.tr.importing_identical(), row)

        # export info for calling code
        self.dupes = len(dupesIdentical)
        self.added = len(add)
        self.updated = len(update)
        # add to col
        self.dst.db.executemany(
            "insert or replace into notes values (?,?,?,?,?,?,?,?,?,?,?)", add
        )
        self.dst.db.executemany(
            "insert or replace into notes values (?,?,?,?,?,?,?,?,?,?,?)", update
        )
        self.dst.after_note_updates(dirty, mark_modified=False, generate_cards=False)

    # determine if note is a duplicate, and adjust mid and/or guid as required
    # returns true if note should be added
    def _uniquifyNote(self, note: list[Any]) -> bool:
        origGuid = note[GUID]
        srcMid = note[MID]
        dstMid = self._mid(srcMid)
        # duplicate schemas?
        if srcMid == dstMid:
            return origGuid not in self._notes
        # differing schemas and note doesn't exist?
        note[MID] = dstMid
        if origGuid not in self._notes:
            return True
        # schema changed; don't import
        self._ignoredGuids[origGuid] = True
        return False

    # Models
    ######################################################################
    # Models in the two decks may share an ID but not a schema, so we need to
    # compare the field & template signature rather than just rely on ID. If
    # the schemas don't match, we increment the mid and try again, creating a
    # new model if necessary.

    def _prepareModels(self) -> None:
        "Prepare index of schema hashes."
        self._modelMap: dict[NotetypeId, NotetypeId] = {}

    def _mid(self, srcMid: NotetypeId) -> Any:
        "Return local id for remote MID."
        # already processed this mid?
        if srcMid in self._modelMap:
            return self._modelMap[srcMid]
        mid = srcMid
        srcModel = self.src.models.get(srcMid)
        srcScm = self.src.models.scmhash(srcModel)
        while True:
            # missing from target col?
            if not self.dst.models.have(mid):
                # copy it over
                model = srcModel.copy()
                model["id"] = mid
                model["usn"] = self.col.usn()
                self.dst.models.update(model, skip_checks=True)
                break
            # there's an existing model; do the schemas match?
            dstModel = self.dst.models.get(mid)
            dstScm = self.dst.models.scmhash(dstModel)
            if srcScm == dstScm:
                # copy styling changes over if newer
                if srcModel["mod"] > dstModel["mod"]:
                    model = srcModel.copy()
                    model["id"] = mid
                    model["usn"] = self.col.usn()
                    self.dst.models.update(model, skip_checks=True)
                break
            # as they don't match, try next id
            mid = NotetypeId(mid + 1)
        # save map and return new mid
        self._modelMap[srcMid] = mid
        return mid

    # Decks
    ######################################################################

    def _did(self, did: DeckId) -> Any:
        "Given did in src col, return local id."
        # already converted?
        if did in self._decks:
            return self._decks[did]
        # get the name in src
        g = self.src.decks.get(did)
        name = g["name"]
        # if there's a prefix, replace the top level deck
        if self.deckPrefix:
            tmpname = "::".join(DeckManager.path(name)[1:])
            name = self.deckPrefix
            if tmpname:
                name += f"::{tmpname}"
        # manually create any parents so we can pull in descriptions
        head = ""
        for parent in DeckManager.immediate_parent_path(name):
            if head:
                head += "::"
            head += parent
            idInSrc = self.src.decks.id(head)
            self._did(idInSrc)
        # if target is a filtered deck, we'll need a new deck name
        deck = self.dst.decks.by_name(name)
        if deck and deck["dyn"]:
            name = "%s %d" % (name, int_time())
        # create in local
        newid = self.dst.decks.id(name)
        # pull conf over
        if "conf" in g and g["conf"] != 1:
            conf = self.src.decks.get_config(g["conf"])
            self.dst.decks.save(conf)
            self.dst.decks.update_config(conf)
            g2 = self.dst.decks.get(newid)
            g2["conf"] = g["conf"]
            self.dst.decks.save(g2)
        # save desc
        deck = self.dst.decks.get(newid)
        deck["desc"] = g["desc"]
        self.dst.decks.save(deck)
        # add to deck map and return
        self._decks[did] = newid
        return newid

    # Cards
    ######################################################################

    def _importCards(self) -> None:
        if self.source_needs_upgrade:
            self.src.upgrade_to_v2_scheduler()
        # build map of (guid, ord) -> cid and used id cache
        self._cards: dict[tuple[str, int], CardId] = {}
        existing = {}
        for guid, ord, cid in self.dst.db.execute(
            "select f.guid, c.ord, c.id from cards c, notes f " "where c.nid = f.id"
        ):
            existing[cid] = True
            self._cards[(guid, ord)] = cid
        # loop through src
        cards = []
        revlog = []
        cnt = 0
        usn = self.dst.usn()
        aheadBy = self.src.sched.today - self.dst.sched.today
        for card in self.src.db.execute(
            "select f.guid, f.mid, c.* from cards c, notes f " "where c.nid = f.id"
        ):
            guid = card[0]
            if guid in self._ignoredGuids:
                continue
            # does the card's note exist in dst col?
            if guid not in self._notes:
                continue
            # does the card already exist in the dst col?
            ord = card[5]
            if (guid, ord) in self._cards:
                # fixme: in future, could update if newer mod time
                continue
            # doesn't exist. strip off note info, and save src id for later
            card = list(card[2:])
            scid = card[0]
            # ensure the card id is unique
            while card[0] in existing:
                card[0] += 999
            existing[card[0]] = True
            # update cid, nid, etc
            card[1] = self._notes[guid][0]
            card[2] = self._did(card[2])
            card[4] = int_time()
            card[5] = usn
            # review cards have a due date relative to collection
            if (
                card[7] in (QUEUE_TYPE_REV, QUEUE_TYPE_DAY_LEARN_RELEARN)
                or card[6] == CARD_TYPE_REV
            ):
                card[8] -= aheadBy
            # odue needs updating too
            if card[14]:
                card[14] -= aheadBy
            # if odid true, convert card from filtered to normal
            if card[15]:
                # odid
                card[15] = 0
                # odue
                card[8] = card[14]
                card[14] = 0
                # queue
                if card[6] == CARD_TYPE_LRN:  # type
                    card[7] = QUEUE_TYPE_NEW
                else:
                    card[7] = card[6]
                # type
                if card[6] == CARD_TYPE_LRN:
                    card[6] = CARD_TYPE_NEW
            cards.append(card)
            # we need to import revlog, rewriting card ids and bumping usn
            for rev in self.src.db.execute("select * from revlog where cid = ?", scid):
                rev = list(rev)
                rev[1] = card[0]
                rev[2] = self.dst.usn()
                revlog.append(rev)
            cnt += 1
        # apply
        self.dst.db.executemany(
            """
insert or ignore into cards values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)""",
            cards,
        )
        self.dst.db.executemany(
            """
insert or ignore into revlog values (?,?,?,?,?,?,?,?,?)""",
            revlog,
        )

    # Media
    ######################################################################

    # note: this func only applies to imports of .anki2. for .apkg files, the
    # apkg importer does the copying
    def _importStaticMedia(self) -> None:
        # Import any '_foo' prefixed media files regardless of whether
        # they're used on notes or not
        dir = self.src.media.dir()
        if not os.path.exists(dir):
            return
        for fname in os.listdir(dir):
            if fname.startswith("_") and not self.dst.media.have(fname):
                self._writeDstMedia(fname, self._srcMediaData(fname))

    def _mediaData(self, fname: str, dir: Optional[str] = None) -> bytes:
        if not dir:
            dir = self.src.media.dir()
        path = os.path.join(dir, fname)
        try:
            with open(path, "rb") as f:
                return f.read()
        except OSError:
            return b""

    def _srcMediaData(self, fname: str) -> bytes:
        "Data for FNAME in src collection."
        return self._mediaData(fname, self.src.media.dir())

    def _dstMediaData(self, fname: str) -> bytes:
        "Data for FNAME in dst collection."
        return self._mediaData(fname, self.dst.media.dir())

    def _writeDstMedia(self, fname: str, data: bytes) -> None:
        path = os.path.join(self.dst.media.dir(), unicodedata.normalize("NFC", fname))
        try:
            with open(path, "wb") as f:
                f.write(data)
        except OSError:
            # the user likely used subdirectories
            pass

    def _mungeMedia(self, mid: NotetypeId, fieldsStr: str) -> str:
        fields = split_fields(fieldsStr)

        def repl(match):
            fname = match.group("fname")
            srcData = self._srcMediaData(fname)
            dstData = self._dstMediaData(fname)
            if not srcData:
                # file was not in source, ignore
                return match.group(0)
            # if model-local file exists from a previous import, use that
            name, ext = os.path.splitext(fname)
            lname = f"{name}_{mid}{ext}"
            if self.dst.media.have(lname):
                return match.group(0).replace(fname, lname)
            # if missing or the same, pass unmodified
            elif not dstData or srcData == dstData:
                # need to copy?
                if not dstData:
                    self._writeDstMedia(fname, srcData)
                return match.group(0)
            # exists but does not match, so we need to dedupe
            self._writeDstMedia(lname, srcData)
            return match.group(0).replace(fname, lname)

        for idx, field in enumerate(fields):
            fields[idx] = self.dst.media.transform_names(field, repl)
        return join_fields(fields)

    # Post-import cleanup
    ######################################################################

    def _postImport(self) -> None:
        for did in list(self._decks.values()):
            self.col.sched.maybe_randomize_deck(did)
        # make sure new position is correct
        self.dst.conf["nextPos"] = (
            self.dst.db.scalar("select max(due)+1 from cards where type = 0") or 0
        )
        self.dst.save()