In protobuf "...enum values use C++ scoping rules, meaning that
enum values are siblings of their type, not children of it.
Therefore, [an enum variant] must be unique within [a message],
not just within [the enum.]"
So we must prefix enum variants with their enum's name, but can
also refer to them directly from the message namespace.
The protobuf crate is smart, though, and strips the prefixes.
(Simultaneously change some SearchTerm variant names.)
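As a concrete illustration (the message and variant names here are
hypothetical, and the exact output depends on the generator; prost
behaves this way for Rust):

    // .proto input: variants carry the enum name as a prefix, since
    // they must be unique across the whole enclosing message:
    //
    //   message SearchTerm {
    //     enum Flag {
    //       FLAG_NONE = 0;
    //       FLAG_RED = 1;
    //     }
    //   }
    //
    // Generated Rust (sketch): the nested enum lands in a module named
    // after the message, and the shared FLAG_ prefix is stripped.
    pub mod search_term {
        #[repr(i32)]
        pub enum Flag {
            None = 0,
            Red = 1,
        }
    }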
- IdList could be re-used for a cids: search in the future if required.
- Embedding the message means it's easy to access from Python as
an attribute of SearchTerm.
- backend routines should contain minimal logic, and should call
into a routine on the collection
- instead of copying the giant-string approach the Python code was taking,
  we use a HashSet to keep track of seen tags as we loop through the
  notes, which should be more efficient (sketched below)
See https://github.com/ankitects/anki/pull/900#issuecomment-758284016
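A minimal sketch of the HashSet approach; the iterator input is an
illustrative stand-in for looping over the collection's notes:

    use std::collections::HashSet;

    fn registered_tags(note_tags: impl Iterator<Item = String>) -> Vec<String> {
        let mut seen = HashSet::new();
        let mut out = vec![];
        for tag in note_tags {
            // record each tag the first time we see it, instead of
            // accumulating everything into one giant string
            if seen.insert(tag.clone()) {
                out.push(tag);
            }
        }
        out
    }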
- Leave tag names alone and add the collapsed and config columns to the
  tags table (see the sketch after this list).
- Update the DB check code to preserve the collapse state of used tags.
- Add a simple test for clearing tags and their children
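A sketch of the schema change, assuming rusqlite; the real migration
code may differ in its details:

    fn add_tag_columns(db: &rusqlite::Connection) -> rusqlite::Result<()> {
        // existing tag names are left untouched; we only add columns
        db.execute_batch(
            "alter table tags add column collapsed integer not null default 0;
             alter table tags add column config blob;",
        )
    }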
- use the TimestampSecs newtype instead of raw i64s
- use FixedOffset instead of a minutes_west offset (see the sketch
  after this list)
- check localOffset each time the timing is calculated, and set it
if it's stale - even for v1.
- check for and fix missing rollover when calculating timing
- stop explicitly passing localOffset in the sync/start call
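A sketch of the two type changes, using chrono; TimestampSecs here is
a simplified stand-in for the rslib newtype:

    use chrono::{DateTime, FixedOffset, TimeZone};

    struct TimestampSecs(i64); // newtype instead of passing raw i64s around

    fn local_datetime(
        stamp: TimestampSecs,
        minutes_west: i32,
    ) -> Option<DateTime<FixedOffset>> {
        // carry a proper FixedOffset instead of a bare minutes_west int
        let offset = FixedOffset::west_opt(minutes_west * 60)?;
        offset.timestamp_opt(stamp.0, 0).single()
    }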
Running and testing should be working on the three platforms, but
there's still a fair bit that needs to be done:
- Wheel building + testing in a venv still needs to be implemented.
- Python requirements still need to be compiled with piptool and pinned;
  they need to be compiled on all platforms and then merged
- Cargo deps in cargo/ and rslib/ need to be cleaned up, and ideally
unified into one place
- Currently using rustls to work around openssl compilation issues
  on Linux, but this will break corporate proxies with custom SSL
  authorities; we need to conditionally use openssl or use
  https://github.com/seanmonstar/reqwest/pull/1058 (a sketch of the
  conditional approach follows this list)
- Makefiles and docs still need cleaning up
- It may make sense to reparent ts/* to the top level, as we don't
nest the other modules under a specific language.
- rspy and pylib must always be updated in lock-step, so merging
rspy into pylib as a private module would simplify things.
- Merging desktop-ftl and mobile-ftl into the core ftl would make
managing and updating translations easier.
- Obsolete scripts need removing.
- And probably more.
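For the rustls/openssl point above, a sketch of the conditional
approach with reqwest; the feature names are assumptions and would
need matching Cargo.toml entries:

    fn http_client() -> reqwest::Result<reqwest::Client> {
        let builder = reqwest::Client::builder();
        // pick the TLS backend at compile time
        #[cfg(feature = "rustls")]
        let builder = builder.use_rustls_tls();
        #[cfg(not(feature = "rustls"))]
        let builder = builder.use_native_tls();
        builder.build()
    }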
The previous implementation had some slightly questionable memory safety
properties (older versions of PyO3 didn't uphold the Rust aliasing rules
and would thus create multiple &mut references to #[pyclass] objects).
This explains why Backend has internal Mutex<T>s even though all of its
methods took &mut self.
The solution is to simply make all methods take &self, which luckily
doesn't pose too many issues -- most of the code inside Backend already
has sufficient locking. The only two things which needed to be
explicitly handled were (both sketched after this list):
1. "self.runtime" which was fairly easy to handle. All usages of
the Runtime only require an immutable reference to create a new
Handle, so we could switch to OnceCell which provides
lazy-initialisation semantics without needing a more heavy-handed
Mutex<tokio::runtime::Handle>.
2. "self.sync_abort" was simply wrapped in a Mutex<>, though some of the
odd semantics of sync_abort (not being able to handle multiple
processes synchronising at the same time) become pretty obvious with
this change (for now we just log a warning in that case). In
addition, switch to an RAII-style guard to make sure we don't forget
to clear the abort_handle.
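Both changes, sketched; the field names mirror the description above,
but the surrounding types are illustrative:

    use std::sync::Mutex;
    use once_cell::sync::OnceCell;
    use futures::future::AbortHandle;

    struct Backend {
        runtime: OnceCell<tokio::runtime::Runtime>,
        sync_abort: Mutex<Option<AbortHandle>>,
    }

    impl Backend {
        // 1. lazily create the runtime behind &self; no Mutex required
        fn runtime_handle(&self) -> tokio::runtime::Handle {
            self.runtime
                .get_or_init(|| tokio::runtime::Runtime::new().unwrap())
                .handle()
                .clone()
        }

        // 2. store the abort handle behind a Mutex, returning an RAII
        // guard so clearing it can't be forgotten
        fn set_sync_abort(&self, handle: AbortHandle) -> AbortGuard<'_> {
            let mut abort = self.sync_abort.lock().unwrap();
            if abort.is_some() {
                // the real code logs a warning about concurrent syncs
                eprintln!("warning: overlapping sync attempted");
            }
            *abort = Some(handle);
            AbortGuard(&self.sync_abort)
        }
    }

    struct AbortGuard<'a>(&'a Mutex<Option<AbortHandle>>);

    impl Drop for AbortGuard<'_> {
        fn drop(&mut self) {
            *self.0.lock().unwrap() = None;
        }
    }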
As a result, we now no longer break Rust's aliasing rules and we can
build with newer versions of PyO3 which have runtime checks for these
things (and build on stable Rust).
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
- blue for normal sync, red for full sync required
- refactor status fetching code so we don't hold a collection lock
  during the network request, which slows things down (see the sketch
  after this list)
- fix sync spinner restarting when returning to deck list
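A sketch of the locking refactor; Collection and the helper names are
illustrative, not the real rslib types:

    use std::sync::Mutex;

    struct Collection;

    impl Collection {
        fn local_sync_state(&self) -> u64 {
            0 // placeholder for whatever local info the check needs
        }
    }

    async fn sync_status(col: &Mutex<Collection>) -> bool {
        // grab only what we need while holding the lock...
        let local = {
            let col = col.lock().unwrap();
            col.local_sync_state()
        }; // ...and release it before the slow network round-trip
        remote_requires_full_sync(local).await
    }

    async fn remote_requires_full_sync(_local: u64) -> bool {
        false // stand-in for the real server request
    }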
- Use a separate abort handle, as the media sync runs in the
  background and we need to be able to abort the two syncs independently.
The current progress handling is going to need a rethink if we introduce
any other background tasks in the future.
- Roll back the transaction when interrupting.
Also:
- provide a way for the progress handler to skip the throttling so that
  we can ensure progress is updated at the end of a stage (sketched below)
- show 'checking' at the end of full sync
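A sketch of the throttling bypass; the real progress plumbing in rslib
is more involved:

    use std::time::{Duration, Instant};

    struct ProgressReporter {
        last_sent: Instant,
        min_gap: Duration,
    }

    impl ProgressReporter {
        fn update(&mut self, label: &str, throttle: bool) {
            // stage boundaries pass throttle=false so the final state
            // (e.g. "checking") is always shown
            if throttle && self.last_sent.elapsed() < self.min_gap {
                return;
            }
            self.last_sent = Instant::now();
            println!("progress: {label}");
        }
    }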
- .update() should update a single deck and preserve usn by default,
  as that's what existing code expects (see the sketch below)
- decks are automatically renamed when they conflict with an existing
name
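A sketch of the update semantics; Deck and the renaming scheme are
illustrative stand-ins, not the exact rslib behaviour:

    struct Deck {
        name: String,
        usn: i32,
    }

    fn update_deck(deck: &mut Deck, existing_names: &[String], preserve_usn: bool) {
        if !preserve_usn {
            deck.usn = -1; // mark as requiring sync; preserving is the default
        }
        // rename rather than fail when the name is already taken
        while existing_names.contains(&deck.name) {
            deck.name.push('+');
        }
        // ...persist the deck here...
    }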