mirror of https://github.com/ankitects/anki.git, synced 2025-11-10 22:57:11 -05:00
* Collection needs to be closed prior to backup even when not downgrading

* Backups -> BackupLimits

* Some improvements to backup_task
  - backup_inner now returns the error instead of logging it, so that the
    frontend can discover the issue when they await a backup (or create
    another one)
  - start_backup() was acquiring backup_task twice, and if another thread
    started a backup between the two locks, the task could have been
    accidentally overwritten without awaiting it

* Backups no longer require a collection close
  - Instead of closing the collection, we ensure there is no active
    transaction, and flush the WAL to disk. This means the undo history is
    no longer lost on backup, which will be particularly useful if we add a
    periodic backup in the future.
  - Because a close is no longer required, backups are now achieved with a
    separate command, instead of being included in CloseCollection().
  - Full sync no longer requires an extra close+reopen step, and we now wait
    for the backup to complete before proceeding.
  - Create a backup before 'check db'

* Add File>Create Backup
  https://forums.ankiweb.net/t/anki-mac-os-no-backup-on-sync/6157

* Defer checkpoint until we know we need it
  When running periodic backups on a timer, we don't want to be fsync()ing
  unnecessarily.

* Skip backup if modification time has not changed
  We don't want the user leaving Anki open overnight, and coming back to
  lots of identical backups.

* Periodic backups
  Creates an automatic backup every 30 minutes if the collection has been
  modified. If there's a legacy checkpoint active, tries again 5 minutes
  later.

* Switch to a user-configurable backup duration
  CreateBackup() now uses a simple force argument to determine whether the
  user's limits should be respected or not, and only potentially destructive
  ops (full download, check DB) override the user's configured limit.
  I considered having a separate limit for collection close and automatic
  backups (eg keeping the previous 5 minute limit for collection close), but
  that had two downsides:
  - When the user closes their collection at the end of the day, they'd get
    a recent backup. When they open the collection the next day, it would
    get backed up again within 5 minutes, even though not much had changed.
  - Multiple limits are harder to communicate to users in the UI
  Some remaining decisions I wasn't 100% sure about:
  - If force is true but the collection has not been modified, the backup
    will be skipped. If the user manually deleted their backups without
    closing Anki, they wouldn't get a new one if the mtime hadn't changed.
  - Force takes preference over the configured backup interval - should we
    be ignoring the user here, or take no backups at all?
  (A simplified sketch of this skip/force decision follows after this
  commit summary.)
  Did a sneaky edit of the existing ftl string, as it hasn't been live long.

* Move maybe_backup() into Collection

* Use a single method for manual and periodic backups
  When manually creating a backup via the File menu, we no longer make the
  user wait until the backup completes. As we continue waiting for the
  backup in the background, if any errors occur, the user will get notified
  about it fairly quickly.

* Show message to user if backup was skipped due to no changes
  + Don't incorrectly assert a backup will be created on force

* Add "automatic" to description

* Ensure we backup prior to importing colpkg if collection open
  The backup doesn't happen when invoked from 'open backup' in the profile
  screen, which matches Anki's previous behaviour.
  The user could potentially clobber up to 30 minutes of their work if they
  exited to the profile screen and restored a backup, but the alternative is
  creating a backup every time one is restored, which may happen a number of
  times if the user is trying various ones. Or we could go back to a
  separate throttle amount for this case, at the cost of more complexity.

* Remove the 0 special case on backup interval; minimum of 5 minutes
  https://github.com/ankitects/anki/pull/1728#discussion_r830876833
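As a rough illustration of the skip/force behaviour described above, the decision could look something like the sketch below. The names (BackupState, should_backup, last_backup_mtime, last_backup_at, min_interval) are hypothetical and are not the real implementation in the backup module; the sketch only captures the rules: an unchanged collection is never backed up (even when forced), forced backups ignore the configured interval, and automatic backups respect it.

use std::time::{Duration, SystemTime};

// Hypothetical bookkeeping; the real code tracks `last_backup_modified`
// as a TimestampMillis on CollectionState instead.
struct BackupState {
    last_backup_mtime: Option<SystemTime>,
    last_backup_at: Option<SystemTime>,
}

fn should_backup(
    state: &BackupState,
    col_mtime: SystemTime,
    min_interval: Duration,
    force: bool,
) -> bool {
    // Never back up an unchanged collection, even when forced, so an idle
    // session doesn't pile up identical backups.
    if state.last_backup_mtime == Some(col_mtime) {
        return false;
    }
    // Potentially destructive operations (full download, check DB) force a
    // backup regardless of the user's configured interval.
    if force {
        return true;
    }
    // Otherwise honour the user's minimum interval between automatic backups.
    match state.last_backup_at {
        Some(at) => at.elapsed().map_or(true, |elapsed| elapsed >= min_interval),
        None => true,
    }
}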
180 lines
5.7 KiB
Rust
// Copyright: Ankitects Pty Ltd and contributors
// License: GNU AGPL, version 3 or later; http://www.gnu.org/licenses/agpl.html

pub mod backup;
pub(crate) mod timestamps;
mod transact;
pub(crate) mod undo;

use std::{collections::HashMap, path::PathBuf, sync::Arc};

use crate::{
    browser_table,
    decks::{Deck, DeckId},
    error::Result,
    i18n::I18n,
    log::{default_logger, Logger},
    notetype::{Notetype, NotetypeId},
    scheduler::{queue::CardQueues, SchedulerInfo},
    storage::{SchemaVersion, SqliteStorage},
    timestamp::TimestampMillis,
    types::Usn,
    undo::UndoManager,
};

#[derive(Default)]
pub struct CollectionBuilder {
    collection_path: Option<PathBuf>,
    media_folder: Option<PathBuf>,
    media_db: Option<PathBuf>,
    server: Option<bool>,
    tr: Option<I18n>,
    log: Option<Logger>,
}

impl CollectionBuilder {
    /// Create a new builder with the provided collection path.
    /// If an in-memory database is desired, use ::default() instead.
    pub fn new(col_path: impl Into<PathBuf>) -> Self {
        let mut builder = Self::default();
        builder.set_collection_path(col_path);
        builder
    }
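    // Illustrative usage only (not part of the original file); the paths
    // below are hypothetical:
    //
    //     let col = CollectionBuilder::new("/path/to/collection.anki2")
    //         .set_media_paths("/path/to/collection.media", "/path/to/collection.media.db")
    //         .build()?;
    //
    //     // Or, for an in-memory collection (e.g. in tests):
    //     let col = CollectionBuilder::default().build()?;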

    pub fn build(&self) -> Result<Collection> {
        let col_path = self
            .collection_path
            .clone()
            .unwrap_or_else(|| PathBuf::from(":memory:"));
        let tr = self.tr.clone().unwrap_or_else(I18n::template_only);
        let server = self.server.unwrap_or_default();
        let media_folder = self.media_folder.clone().unwrap_or_default();
        let media_db = self.media_db.clone().unwrap_or_default();
        let log = self.log.clone().unwrap_or_else(crate::log::terminal);

        let storage = SqliteStorage::open_or_create(&col_path, &tr, server)?;
        let col = Collection {
            storage,
            col_path,
            media_folder,
            media_db,
            tr,
            log,
            server,
            state: CollectionState::default(),
        };

        Ok(col)
    }

    pub fn set_collection_path<P: Into<PathBuf>>(&mut self, collection: P) -> &mut Self {
        self.collection_path = Some(collection.into());
        self
    }

    pub fn set_media_paths<P: Into<PathBuf>>(&mut self, media_folder: P, media_db: P) -> &mut Self {
        self.media_folder = Some(media_folder.into());
        self.media_db = Some(media_db.into());
        self
    }

    pub fn set_server(&mut self, server: bool) -> &mut Self {
        self.server = Some(server);
        self
    }

    pub fn set_tr(&mut self, tr: I18n) -> &mut Self {
        self.tr = Some(tr);
        self
    }

    /// Directly set the logger.
    pub fn set_logger(&mut self, log: Logger) -> &mut Self {
        self.log = Some(log);
        self
    }

    /// Log to the provided file.
    pub fn set_log_file(&mut self, log_file: &str) -> Result<&mut Self, std::io::Error> {
        self.set_logger(default_logger(Some(log_file))?);
        Ok(self)
    }
}

#[cfg(test)]
pub fn open_test_collection() -> Collection {
    CollectionBuilder::default().build().unwrap()
}

#[derive(Debug, Default)]
pub struct CollectionState {
    pub(crate) undo: UndoManager,
    pub(crate) notetype_cache: HashMap<NotetypeId, Arc<Notetype>>,
    pub(crate) deck_cache: HashMap<DeckId, Arc<Deck>>,
    pub(crate) scheduler_info: Option<SchedulerInfo>,
    pub(crate) card_queues: Option<CardQueues>,
    pub(crate) active_browser_columns: Option<Arc<Vec<browser_table::Column>>>,
    /// True if legacy Python code has executed SQL that has modified the
    /// database, requiring modification time to be bumped.
    pub(crate) modified_by_dbproxy: bool,
    /// The modification time at the last backup, so we don't create multiple
    /// identical backups.
    pub(crate) last_backup_modified: Option<TimestampMillis>,
}

pub struct Collection {
    pub(crate) storage: SqliteStorage,
    #[allow(dead_code)]
    pub(crate) col_path: PathBuf,
    pub(crate) media_folder: PathBuf,
    pub(crate) media_db: PathBuf,
    pub(crate) tr: I18n,
    pub(crate) log: Logger,
    pub(crate) server: bool,
    pub(crate) state: CollectionState,
}

impl Collection {
    pub fn as_builder(&self) -> CollectionBuilder {
        let mut builder = CollectionBuilder::new(&self.col_path);
        builder
            .set_media_paths(self.media_folder.clone(), self.media_db.clone())
            .set_server(self.server)
            .set_tr(self.tr.clone())
            .set_logger(self.log.clone());
        builder
    }

    pub(crate) fn close(self, desired_version: Option<SchemaVersion>) -> Result<()> {
        self.storage.close(desired_version)
    }

    pub(crate) fn usn(&self) -> Result<Usn> {
        // if we cache this in the future, must make sure to invalidate cache when usn bumped in sync.finish()
        self.storage.usn(self.server)
    }

    /// Prepare for upload. Caller should not create transaction.
    pub(crate) fn before_upload(&mut self) -> Result<()> {
        self.transact_no_undo(|col| {
            col.storage.clear_all_graves()?;
            col.storage.clear_pending_note_usns()?;
            col.storage.clear_pending_card_usns()?;
            col.storage.clear_pending_revlog_usns()?;
            col.storage.clear_tag_usns()?;
            col.storage.clear_deck_conf_usns()?;
            col.storage.clear_deck_usns()?;
            col.storage.clear_notetype_usns()?;
            col.storage.increment_usn()?;
            col.set_schema_modified()?;
            col.storage
                .set_last_sync(col.storage.get_collection_timestamps()?.schema_change)
        })?;
        self.storage.optimize()
    }

    pub(crate) fn clear_caches(&mut self) {
        self.state.deck_cache.clear();
        self.state.notetype_cache.clear();
    }
}