mirror of
https://github.com/vrtmrz/obsidian-livesync.git
synced 2026-02-25 21:48:50 +00:00
Compare commits

8 Commits:

- 1073ee9e30
- f94653e60e
- 3dccf2076f
- 3e78fe03e1
- 4aa8fc3519
- ba3d2220e1
- 8057b516af
- f2b4431182
docs/design_docs/chunk_aggregation_by_prefix.md (new file, 122 lines)

@@ -0,0 +1,122 @@
# [WITHDRAWN] Chunk Aggregation by Prefix

## Goal

To address the "document explosion" and storage bloat issues caused by the current chunking mechanism, while preserving the benefits of content-addressable storage and efficient delta synchronisation. This design aims to significantly reduce the number of documents in the database and simplify Garbage Collection (GC).

## Motivation

Our current synchronisation solution splits files into content-defined chunks, with each chunk stored as a separate document in CouchDB, identified by its hash. This architecture effectively leverages CouchDB's replication for automatic deduplication and efficient transfer.

However, this approach faces significant challenges as the number of files and edits increases:

1. **Document Explosion:** A large vault can generate millions of chunk documents, severely degrading CouchDB's performance, particularly during view building and replication.
2. **Storage Bloat & GC Difficulty:** Obsolete chunks generated during edits are difficult to identify and remove. Since CouchDB's deletion (`_deleted: true`) is a soft delete, and compaction is a heavy, space-intensive operation, unused chunks perpetually consume storage, making GC impractical for many users.
3. **The "Eden" Problem:** A previous attempt, "Keep newborn chunks in Eden", aimed to mitigate this by embedding volatile chunks within the parent document. While it reduced the number of standalone chunks, it introduced a new issue: the parent document's history (`_revs_info`) became excessively large, causing its own form of database bloat and making compaction equally necessary but difficult to manage.

This new design addresses the root cause—the sheer number of documents—by aggregating chunks into sets.

## Prerequisites

- The new implementation must maintain the core benefit of deduplication to ensure efficient synchronisation.
- The solution must not introduce a single point of bottleneck and should handle concurrent writes from multiple clients gracefully.
- The system must provide a clear and feasible strategy for Garbage Collection.
- The design should be forward-compatible, allowing for a smooth migration path for existing users.

## Outlined Methods and Implementation Plans

### Abstract

This design introduces a two-tiered document structure to manage chunks: **Index Documents** and **Data Documents**. Chunks are no longer stored as individual documents. Instead, they are grouped into `Data Documents` based on a common hash prefix. The existence and location of each chunk are tracked by `Index Documents`, which are also grouped by the same prefix. This approach dramatically reduces the total document count.

### Detailed Implementation

**1. Document Structure:**

- **Index Document:** Maps chunk hashes to their corresponding Data Document ID. Identified by a prefix of the chunk hash.
    - `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
    - Content:

        ```json
        {
            "_id": "idx:a9f1b",
            "_rev": "...",
            "chunks": {
                "a9f1b12...": "dat:a9f1b-001",
                "a9f1b34...": "dat:a9f1b-001",
                "a9f1b56...": "dat:a9f1b-002"
            }
        }
        ```

- **Data Document:** Contains the actual chunk data as base64-encoded strings. Identified by a prefix and a sequential number.
    - `_id`: `dat:{prefix}-{sequence}` (e.g., `dat:a9f1b-001`)
    - Content:

        ```json
        {
            "_id": "dat:a9f1b-001",
            "_rev": "...",
            "chunks": {
                "a9f1b12...": "...", // base64 data
                "a9f1b34...": "..." // base64 data
            }
        }
        ```

**2. Configuration:**

- `chunk_prefix_length`: The number of characters from the start of a chunk hash to use as a prefix (e.g., `5`). This determines the granularity of aggregation.
- `data_doc_size_limit`: The maximum size for a single Data Document, to prevent it from becoming too large (e.g., 1 MB). When this limit is reached, a new Data Document with an incremented sequence number is created.
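These two options can be captured in a small configuration type. A minimal sketch, using the illustrative values above (the field names come from this document; the helper `prefixOf` is a hypothetical name):

```ts
// Configuration for prefix-based chunk aggregation (defaults are illustrative).
interface AggregationConfig {
    /** Characters of the chunk hash used as the aggregation prefix. */
    chunk_prefix_length: number;
    /** Maximum size of a single Data Document, in bytes. */
    data_doc_size_limit: number;
}

const defaultConfig: AggregationConfig = {
    chunk_prefix_length: 5,
    data_doc_size_limit: 1024 * 1024, // 1 MB
};

/** Derive the Index/Data document prefix for a chunk hash. */
function prefixOf(hash: string, cfg: AggregationConfig = defaultConfig): string {
    return hash.slice(0, cfg.chunk_prefix_length);
}
```

With hex hashes and a prefix length of 5, chunks spread over 16^5 (about one million) possible prefixes; a longer prefix means more, smaller aggregates.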
**3. Write/Save Operation Flow:**

When a client creates new chunks:

1. For each new chunk, determine its hash prefix.
2. Read the corresponding `Index Document` (e.g., `idx:a9f1b`).
3. From the index, determine which of the new chunks already exist in the database.
4. For the **truly new chunks only**:
    a. Read the last `Data Document` for that prefix (e.g., `dat:a9f1b-005`).
    b. If it is nearing its size limit, create a new one (`dat:a9f1b-006`).
    c. Add the new chunk data to the Data Document and save it.
5. Update the `Index Document` with the locations of the newly added chunks.
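The steps above can be sketched against an in-memory stand-in for the database. This is a minimal illustration, not the actual implementation: `DocStore`, `dataId`, and `saveChunks` are assumed names, and revision handling is omitted (conflicts are discussed in the next section):

```ts
// Minimal in-memory stand-in for the document store (PouchDB/CouchDB).
type Doc = { _id: string; chunks: Record<string, string> };

class DocStore {
    private docs = new Map<string, Doc>();
    get(id: string): Doc | undefined { return this.docs.get(id); }
    put(doc: Doc): void { this.docs.set(doc._id, doc); }
}

const PREFIX_LEN = 5;               // chunk_prefix_length
const SIZE_LIMIT = 1024 * 1024;     // data_doc_size_limit (bytes, approximated by string length)

const dataId = (p: string, seq: number) => `dat:${p}-${String(seq).padStart(3, "0")}`;

function saveChunks(db: DocStore, newChunks: Record<string, string>): void {
    for (const [hash, data] of Object.entries(newChunks)) {
        const p = hash.slice(0, PREFIX_LEN);                              // step 1: hash prefix
        const idx = db.get(`idx:${p}`) ?? { _id: `idx:${p}`, chunks: {} }; // step 2: read index
        if (hash in idx.chunks) continue;                                  // step 3: already stored
        // step 4a: locate the newest Data Document for this prefix
        let seq = 1;
        while (db.get(dataId(p, seq + 1)) !== undefined) seq++;
        let dat = db.get(dataId(p, seq)) ?? { _id: dataId(p, seq), chunks: {} };
        // step 4b: roll over to a new Data Document when the size limit would be exceeded
        const used = Object.values(dat.chunks).reduce((n, d) => n + d.length, 0);
        if (used + data.length > SIZE_LIMIT) dat = { _id: dataId(p, seq + 1), chunks: {} };
        dat.chunks[hash] = data;                                           // step 4c: add and save
        db.put(dat);
        idx.chunks[hash] = dat._id;                                        // step 5: record location
        db.put(idx);
    }
}
```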
**4. Handling Write Conflicts:**

Concurrent writes to the same `Index Document` or `Data Document` from multiple clients will cause conflicts (409 Conflict). This is expected and must be handled gracefully. Since additions are incremental, the client application must implement a **retry-and-merge loop**:

1. Attempt to save the document.
2. On a conflict, re-fetch the latest version of the document from the server.
3. Merge its own changes into the latest version.
4. Attempt to save again.
5. Repeat until successful or a retry limit is reached.
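A minimal sketch of that loop, with an in-memory `RevStore` imitating CouchDB's revision check (a mismatched `_rev` raises a 409-style conflict); all names here are illustrative, not the plugin's API:

```ts
type IdxDoc = { _id: string; _rev: number; chunks: Record<string, string> };

class ConflictError extends Error {}

// Stand-in for a CouchDB document endpoint: put() rejects stale revisions.
class RevStore {
    private doc: IdxDoc = { _id: "idx:a9f1b", _rev: 0, chunks: {} };
    get(): IdxDoc { return JSON.parse(JSON.stringify(this.doc)); }
    put(doc: IdxDoc): void {
        if (doc._rev !== this.doc._rev) throw new ConflictError("409 Conflict");
        this.doc = JSON.parse(JSON.stringify({ ...doc, _rev: doc._rev + 1 }));
    }
}

function saveWithRetry(store: RevStore, additions: Record<string, string>, maxRetries = 5): void {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        const latest = store.get();              // steps 2/4: (re-)fetch the latest revision
        Object.assign(latest.chunks, additions); // step 3: merge our incremental additions
        try {
            store.put(latest);                   // step 1: attempt to save
            return;
        } catch (e) {
            if (!(e instanceof ConflictError)) throw e; // only 409s are retried
        }
    }
    throw new Error("retry limit reached");      // step 5
}
```

Because each client only ever *adds* entries, merging is a simple union, which is what makes this loop safe.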
**5. Garbage Collection (GC):**

GC becomes a manageable, periodic batch process:

1. Scan all file metadata documents to build a master set of all *currently referenced* chunk hashes.
2. Iterate through all `Index Documents`. For each chunk listed:
    a. If the chunk hash is not in the master reference set, it is garbage.
    b. Remove the garbage entry from the `Index Document`.
    c. Remove the corresponding data from its `Data Document`.
3. If a `Data Document` becomes empty after this process, it can be deleted.
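This GC pass is essentially a mark-and-sweep; a sketch over illustrative in-memory types (field names follow the metadata design, everything else is assumed):

```ts
type Index = { _id: string; chunks: Record<string, string> };   // hash -> data doc id
type DataDoc = { _id: string; chunks: Record<string, string> }; // hash -> data
type FileMeta = { children: string[] };                         // chunk hashes a file references

function collectGarbage(files: FileMeta[], indices: Index[], dataDocs: Map<string, DataDoc>): string[] {
    // Step 1: master set of currently referenced chunk hashes.
    const referenced = new Set(files.flatMap((f) => f.children));
    // Step 2: sweep every Index Document.
    for (const idx of indices) {
        for (const [hash, datId] of Object.entries(idx.chunks)) {
            if (referenced.has(hash)) continue;        // 2a: still in use
            delete idx.chunks[hash];                   // 2b: drop the index entry
            const dat = dataDocs.get(datId);
            if (dat) delete dat.chunks[hash];          // 2c: drop the data
        }
    }
    // Step 3: Data Documents that became empty can be deleted; report them.
    const emptied: string[] = [];
    for (const [id, dat] of dataDocs) {
        if (Object.keys(dat.chunks).length === 0) emptied.push(id);
    }
    return emptied;
}
```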
## Test Strategy

1. **Unit Tests:** Implement tests for the conflict resolution logic (retry-and-merge loop) to ensure robustness.
2. **Integration Tests:**
    - Verify that concurrent writes from multiple simulated clients result in a consistent, merged state without data loss.
    - Run a full synchronisation scenario and confirm the resulting database has a significantly lower document count compared to the previous implementation.
3. **GC Test:** Simulate a scenario where files are deleted, run the GC process, and verify that orphaned chunks are correctly removed from both Index and Data documents, and that storage is reclaimed after compaction.
4. **Migration Test:** Develop and test a "rebuild" process for existing users, which migrates their chunk data into the new aggregated structure.

## Documentation Strategy

- This design document will be published to explain the new architecture.
- The configuration options (`chunk_prefix_length`, etc.) will be documented for advanced users.
- A guide for the migration/rebuild process will be provided.

## Future Work

The separation of index and data opens up a powerful possibility. While this design initially implements both within CouchDB, the `Data Documents` could be offloaded to a dedicated object storage service such as **S3, MinIO, or Cloudflare R2**.

In such a hybrid model, CouchDB would handle only the lightweight `Index Documents` and file metadata, serving as a high-speed synchronisation and coordination layer. The bulky chunk data would reside in a more cost-effective and scalable blob store. This would represent the ultimate evolution of this architecture, combining the best of both worlds.

## Consideration and Conclusion

This design directly addresses the scalability limitations of the original chunk-per-document model. By aggregating chunks into sets, it significantly reduces the document count, which in turn improves database performance and makes maintenance feasible. The explicit handling of write conflicts and a clear strategy for garbage collection make this a robust and sustainable long-term solution. It effectively resolves the problems identified in previous approaches, including the "Eden" experiment, by tackling the root cause of database bloat. This architecture provides a solid foundation for future growth and scalability.
docs/design_docs/intention_of_chunks.md (new file, 127 lines)

@@ -0,0 +1,127 @@
# [WIP] The design intent explanation for using metadata and chunks

## Abstract

## Goal

- To explain the following:
    - What metadata and chunks are
    - The design intent of using metadata and chunks

## Background and Motivation

We are using PouchDB and CouchDB for storing files and synchronising them. PouchDB is a JavaScript database that stores data on the device (browser, and of course, Obsidian), while CouchDB is a NoSQL database that stores data on the server. The two databases can be synchronised to keep data consistent across devices via the CouchDB replication protocol. This is a powerful and flexible way to store and synchronise data, including conflict management, but it is not well suited for files. Therefore, we needed to decide how to store files and synchronise them.

## Terminology

- Password:
    - A string used to authenticate the user.

- Passphrase:
    - A string used to encrypt and decrypt data.
    - This is not a password.

- Encrypt:
    - To convert data into a format that is unreadable to anyone.
    - Can be decrypted by the user who has the passphrase.
    - Should be 1:n, containing random data to ensure that even the same data, when encrypted, results in different outputs.
- Obfuscate:
    - To convert data into a format that is not easily readable.
    - Can be decrypted by the user who has the passphrase.
    - Should be 1:1, containing no random data, so the same input is always obfuscated to the same output. It is not necessarily unreadable.

- Hash:
    - To convert data into a fixed-length string that is not easily readable.
    - Cannot be decrypted.
    - Should be 1:1, containing no random data, so the same input always hashes to the same output.

## Designs

### Principles

- To synchronise and handle conflicts, we should keep the history of modifications.
- No data should be lost. Even if some extra data is stored, it should be removable later, safely.
- Each stored data item should be as small as possible to transfer efficiently, but not so small as to be inefficient.
- Any type of file should be supported, including binary files.
- Encryption should be supported efficiently.
- This method should not depart too far from the PouchDB/CouchDB philosophy. It needs to leave room for other `remote`s, to benefit from custom replicators.

As a result, we have adopted the following design.

- Files are stored as one metadata entry and multiple chunks.
- Chunks are content-addressable, and the metadata contains the ids of the chunks.
- Chunks may be referenced from multiple metadata entries. They should be efficiently managed to avoid redundancy.

### Metadata Design

The metadata contains the following information:

| Field    | Type                 | Description                  | Note                                                                                                   |
| -------- | -------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------ |
| _id      | string               | The id of the metadata       | Derived from the file path                                                                              |
| _rev     | string               | The revision of the metadata | Created by PouchDB                                                                                      |
| children | [string]             | The ids of the chunks        |                                                                                                         |
| path     | string               | The path of the file         | If path obfuscation is enabled, this field is stored obfuscated                                         |
| size     | number               | The size of the file         | Not authoritative; kept for troubleshooting                                                             |
| ctime    | string               | The creation timestamp       | Not used to compare files, but applied when writing to storage                                          |
| mtime    | string               | The modification timestamp   | Used to compare files, and written to storage                                                           |
| type     | `plain` \| `newnote` | The type of the file         | Chunks of a `plain` file are not base64 encoded, while those of a `newnote` are                         |
| e_       | boolean              | The file is encrypted        | Encryption is applied during transfer to the remote; in local storage, this property does not exist     |
#### Decision Rule for `_id` of Metadata

```ts
// Note: This is pseudo code. OBFUSCATE_PATH_ENABLED is the setting flag,
// obfuscatePath is the obfuscation function (the original draft used one
// name for both).
let _id = PATH;
if (!HANDLE_FILES_AS_CASE_SENSITIVE) {
    _id = _id.toLowerCase();
}
if (_id.startsWith("_")) {
    _id = "/" + _id;
}
if (OBFUSCATE_PATH_ENABLED) {
    _id = `f:${obfuscatePath(_id, E2EE_PASSPHRASE)}`;
}
return _id;
```
#### Expected Questions

- Why is there a setting to handle files as case-sensitive?
    - Some filesystems are case-sensitive, while others are not: Windows is case-insensitive, while Linux is case-sensitive. Therefore, by default, paths are treated case-insensitively to avoid conflicts between such platforms.
    - The trade-off is that you cannot keep two files whose paths differ only in case, so this behaviour can be disabled if all of your devices use case-sensitive filesystems.
- Why obfuscate the path?
    - E2EE only encrypts the content of the file, not its metadata. Hence, E2EE alone is not enough to protect the vault completely: the path is also part of the metadata, so it should be obfuscated. This is a trade-off between security and performance; if you title notes with sensitive information, you should obfuscate the path.
- What is `f:`?
    - It is a prefix indicating that the path is obfuscated, used to distinguish normal paths from obfuscated ones. During file enumeration, Self-hosted LiveSync must scan documents to find metadata while excluding chunks and other records, and the prefix makes this distinction possible.
- Why does an unobfuscated path not start with `f:`?
    - For compatibility. Self-hosted LiveSync, by its nature, must also be able to handle files created with newer versions as far as possible.
### Chunk Design

#### Chunk Structure

The chunk contains the following information:

| Field | Type         | Description               | Note                                                                                                |
| ----- | ------------ | ------------------------- | --------------------------------------------------------------------------------------------------- |
| _id   | `h:{string}` | The id of the chunk       | Derived from the hash of the chunk content                                                          |
| _rev  | string       | The revision of the chunk | Created by PouchDB                                                                                  |
| data  | string       | The content of the chunk  |                                                                                                     |
| type  | `leaf`       | Fixed                     |                                                                                                     |
| e_    | boolean      | The chunk is encrypted    | Encryption is applied during transfer to the remote; in local storage, this property does not exist |

**TO BE WRITTEN:** a `v2` chunk format has also been implemented, which carries additional information.
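The v1 structure above can be sketched as follows. This is a minimal illustration assuming a SHA-256 content hash; the actual hash function and id derivation in Self-hosted LiveSync may differ:

```ts
import { createHash } from "node:crypto";

// Sketch of building a v1-style chunk document from content.
type ChunkDoc = { _id: `h:${string}`; data: string; type: "leaf" };

function makeChunk(data: string): ChunkDoc {
    // Content-addressing: the id is derived purely from the data,
    // so identical content always produces the identical document.
    const hash = createHash("sha256").update(data).digest("hex");
    return { _id: `h:${hash}`, data, type: "leaf" };
}
```

Because equal content yields an equal `_id`, writing the "same" chunk twice is a no-op, which is what gives deduplication for free through replication.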
### How they are unified

## Deduplication and Optimisation

## Synchronisation Strategy

## Performance Considerations

## Security and Privacy

## Edge Cases
docs/design_docs/tired_chunk_pack.md (new file, 117 lines)

@@ -0,0 +1,117 @@
# [IN DESIGN] Tiered Chunk Storage with Live Compaction

**VERY IMPORTANT NOTE: This design must be used with the new journal synchronisation method. Otherwise, we risk introducing the bloat of changes from the Hot-Pack into the Bucket. (CouchDB/PouchDB can synchronise only the most recent changes and resolve conflicts; the previous Journal Sync *cannot*.) Please proceed with caution.**

## Goal

To establish a highly efficient, robust, and scalable synchronisation architecture by introducing a tiered storage system inspired by Log-Structured Merge-Trees (LSM-Trees). This design aims to address the challenges of real-time synchronisation, specifically the massive generation of transient data, while minimising storage bloat and ensuring high performance.

## Motivation

Our previous designs, including "Chunk Aggregation by Prefix", successfully addressed the "document explosion" problem. However, the introduction of real-time editor synchronisation exposed a new, critical challenge: the constant generation of short-lived "garbage" chunks during user input. This "garbage storm" places immense pressure on storage, I/O, and the Garbage Collection (GC) process.

A simple aggregation strategy is insufficient because it treats all data equally, mixing valuable, stable chunks with transient, garbage chunks in permanent storage. This leads to storage bloat and inefficient compaction. We require a system that can intelligently distinguish between "hot" (volatile) and "cold" (stable) data, processing them in the most efficient manner possible.
## Outlined Methods and Implementation Plans

### Abstract

This design implements a two-tiered storage system within CouchDB.

1. **Level 0 – Hot Storage:** A set of "Hot-Packs", one for each active client. These act as fast, append-only logs for all newly created chunks. They serve as a temporary staging area, absorbing the "garbage storm" of real-time editing.
2. **Level 1 – Cold Storage:** The permanent, immutable storage for stable chunks, consisting of **Index Documents** for fast lookups and **Data Documents (Cold-Packs)** for storing chunk data.

A background "Compaction" process continuously promotes stable chunks from Hot Storage to Cold Storage, while automatically discarding garbage. This keeps the permanent storage clean and highly optimised.

### Detailed Implementation

**1. Document Structure:**

- **Hot-Pack Document (Level 0):** A per-client, append-only log.
    - `_id`: `hotpack:{client_id}` (the `client_id` could be the same as the `deviceNodeID` used in `accepted_nodes` of the MILESTONE_DOC; this enables a database "lockout" for safe synchronisation)
    - Content: A log of chunk creation events.

        ```json
        {
            "_id": "hotpack:a9f1b12...",
            "_rev": "...",
            "log": [
                { "hash": "abc...", "data": "...", "ts": ..., "file_id": "file1" },
                { "hash": "def...", "data": "...", "ts": ..., "file_id": "file2" }
            ]
        }
        ```
- **Index Document (Level 1):** A fast, prefix-based lookup table for stable chunks.
    - `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
    - Content: Maps a chunk hash to the ID of the Cold-Pack it resides in.

        ```json
        {
            "_id": "idx:a9f1b",
            "chunks": { "a9f1b12...": "dat:1678886400" }
        }
        ```

- **Cold-Pack Document (Level 1):** An immutable data block created by the compaction process.
    - `_id`: `dat:{timestamp_or_uuid}` (e.g., `dat:1678886400123`)
    - Content: A collection of stable chunks.

        ```json
        {
            "_id": "dat:1678886400123",
            "chunks": { "a9f1b12...": "...", "c3d4e5f...": "..." }
        }
        ```
- **Hot-Pack List Document:** A central registry of all active Hot-Packs. This might be a computed document that clients maintain in memory on startup.
    - `_id`: `hotpack_list`
    - Content: `{"active_clients": ["hotpack:a9f1b12...", "hotpack:c3d4e5f..."]}`

**2. Write/Save Operation Flow (Real-time Editing):**

1. A client generates a new chunk.
2. It **immediately appends** the chunk object (`{hash, data, ts, file_id}`) to its **own** Hot-Pack document's `log` array within its local PouchDB. This operation is extremely fast.
3. The PouchDB synchronisation process replicates this change to the remote CouchDB and other clients in the background. No other Hot-Packs are consulted during this write operation.
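The write path above amounts to a plain append; a minimal sketch (field names follow the document structure, function names are illustrative):

```ts
// One chunk-creation event in a Hot-Pack's log.
type HotEntry = { hash: string; data: string; ts: number; file_id: string };
type HotPack = { _id: string; log: HotEntry[] };

function ownHotPack(clientId: string): HotPack {
    return { _id: `hotpack:${clientId}`, log: [] };
}

function appendChunk(pack: HotPack, hash: string, data: string, fileId: string, now = Date.now()): void {
    // Append-only: no other document is read or locked for this write, which is
    // why it stays fast; replication ships the change in the background (step 3).
    pack.log.push({ hash, data, ts: now, file_id: fileId });
}
```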
**3. Read/Load Operation Flow:**

To find a chunk's data:

1. The client first consults its in-memory list of active Hot-Pack IDs (see section 5).
2. It searches for the chunk hash in all **Hot-Pack documents**, starting from its own, then others. It reads them in reverse log order (newest first).
3. If not found, it consults the appropriate **Index Document (`idx:...`)** to get the ID of the Cold-Pack.
4. It then reads the chunk data from the corresponding **Cold-Pack document (`dat:...`)**.
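The tiered lookup can be sketched as follows; the stores are in-memory stand-ins and all names are illustrative:

```ts
type HotPack = { _id: string; log: { hash: string; data: string }[] };
type ColdIndex = Record<string, string>;              // chunk hash -> cold-pack id
type ColdPacks = Map<string, Record<string, string>>; // cold-pack id -> (hash -> data)

function readChunk(hash: string, hotPacks: HotPack[], index: ColdIndex, cold: ColdPacks): string | undefined {
    // Steps 1–2: scan the known Hot-Packs, newest log entries first.
    for (const pack of hotPacks) {
        for (let i = pack.log.length - 1; i >= 0; i--) {
            if (pack.log[i].hash === hash) return pack.log[i].data;
        }
    }
    // Step 3: consult the prefix index for the Cold-Pack id.
    const packId = index[hash];
    if (packId === undefined) return undefined;
    // Step 4: read the data out of the Cold-Pack.
    return cold.get(packId)?.[hash];
}
```

Note that hot entries shadow cold ones, so a freshly rewritten chunk is found before an older promoted copy.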
**4. Compaction & Promotion Process (The "GC"):**

This is a background task run periodically by clients, or triggered when the number of unprocessed log entries exceeds a threshold (to maintain the ability to synchronise with the remote database, which has a limited document size).

1. The client takes its own Hot-Pack (`hotpack:{client_id}`) and scans its `log` array from the beginning (oldest first).
2. For each chunk in the log, it checks if the chunk is still referenced in the latest revision of any file.
    - **If not referenced (Garbage):** The log entry is simply discarded.
    - **If referenced (Stable):** The chunk is added to a "promotion batch".
3. After scanning a certain number of log entries, the client takes the "promotion batch".
4. It creates one or more new, immutable **Cold-Pack (`dat:...`)** documents to store the chunk data from the batch.
5. It updates the corresponding **Index (`idx:...`)** documents to point to the new Cold-Pack(s).
6. Once the promotion is successfully saved to the database, it **removes the processed entries from its Hot-Pack's `log` array**. This is a critical step to prevent reprocessing and keep the Hot-Pack small.
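One compaction pass can be sketched as below; this is a simplified single-batch illustration (no batching threshold, a flat index instead of per-prefix documents), with illustrative names throughout:

```ts
type LogEntry = { hash: string; data: string };
type Hot = { _id: string; log: LogEntry[] };

function compact(
    hot: Hot,
    isReferenced: (hash: string) => boolean,        // checked against latest file revisions
    index: Record<string, string>,                  // hash -> cold-pack id
    coldPacks: Map<string, Record<string, string>>, // cold-pack id -> (hash -> data)
    now = Date.now()
): void {
    // Steps 1–2: scan oldest-first; garbage is simply dropped, stable chunks batched.
    const batch = hot.log.filter((entry) => isReferenced(entry.hash));
    if (batch.length > 0) {
        const coldId = `dat:${now}`;                // step 4: a new immutable Cold-Pack
        const pack: Record<string, string> = {};
        for (const e of batch) pack[e.hash] = e.data;
        coldPacks.set(coldId, pack);
        for (const e of batch) index[e.hash] = coldId; // step 5: repoint the index
    }
    hot.log = [];                                   // step 6: clear processed entries
}
```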
**5. Hot-Pack List Management:**

To know which Hot-Packs to read, clients will:

1. On startup, load the `hotpack_list` document into memory.
2. Use PouchDB's live `changes` feed to monitor the creation of new `hotpack:*` documents.
3. Upon detecting an unknown Hot-Pack, the client updates its in-memory list and attempts to update the central `hotpack_list` document (on a best-effort basis, with conflict resolution).

## Planned Test Strategy

1. **Unit Tests:** Test the Compaction/Promotion logic extensively. Ensure garbage is correctly identified and stable chunks are promoted correctly.
2. **Integration Tests:** Simulate a multi-client real-time editing session.
    - Verify that writes are fast and responsive.
    - Confirm that transient garbage chunks do not pollute the Cold Storage.
    - Confirm that after a period of inactivity, compaction runs and the Hot-Packs shrink.
3. **Stress Tests:** Simulate many clients joining and leaving to test the robustness of the `hotpack_list` management.

## Documentation Strategy

- This design document will serve as the core architectural reference.
- The roles of each document type (Hot-Pack, Index, Cold-Pack, List) will be clearly explained for future developers.
- The logic of the Compaction/Promotion process will be detailed.

## Consideration and Conclusion

This tiered storage design is a direct evolution, born from the lessons of previous architectures. It embraces the ephemeral nature of data in real-time applications. By creating a "staging area" (Hot-Packs) for volatile data, it protects the integrity and performance of the permanent "cold" storage. The Compaction process acts as a self-cleaning mechanism, ensuring that only valuable, stable data is retained long-term. This is not just an optimisation; it is a fundamental shift that enables robust, high-performance, and scalable real-time synchronisation on top of CouchDB.
docs/design_docs/tired_chunk_pack_bucket.md (new file, 97 lines)

@@ -0,0 +1,97 @@
# [IN DESIGN] Tiered Chunk Storage for Bucket Sync

## Goal

To evolve the "Journal Sync" mechanism by integrating the Tiered Storage architecture. This design aims to drastically reduce the size and number of sync packs, minimise storage consumption on the backend bucket, and establish a clear, efficient process for Garbage Collection, all while remaining protocol-agnostic.

## Motivation

The original "Journal Sync" liberates us from CouchDB's protocol, but it still packages and transfers entire document changes, including bulky and often transient chunk data. In a real-time or frequent-editing scenario, this results in:

1. **Bloated Sync Packs:** Packs become large with redundant or short-lived chunk data, increasing upload and download times.
2. **Inefficient Storage:** The backend bucket stores numerous packs containing overlapping and obsolete chunk data, wasting space.
3. **Impractical Garbage Collection:** Identifying and purging obsolete *chunk data* from within the pack-based journal history is extremely difficult.

This new design addresses these problems by fundamentally changing *what* is synchronised in the journal packs. We will synchronise lightweight metadata and logs, while handling bulk data separately.

## Outlined methods and implementation plans

### Abstract

This design adapts the Tiered Storage model for a bucket-based backend. The backend bucket is partitioned into distinct areas for different data types. The "Journal Sync" process is now responsible for synchronising only the "hot" volatile data and lightweight metadata. A separate, asynchronous "Compaction" process, which can be run by any client, is responsible for migrating stable data into permanent, deduplicated "cold" storage.

### Detailed Implementation

**1. Bucket Structure:**

The backend bucket will have four distinct logical areas (prefixes):

- `packs/`: For "Journal Sync" packs, containing the journal of metadata and Hot-Log changes.
- `hot_logs/`: A dedicated area for each client's "Hot-Log," containing newly created, volatile chunks.
- `indices/`: For prefix-based Index files, mapping chunk hashes to their permanent location in Cold Storage.
- `cold_chunks/`: For deduplicated, stable chunk data, stored by content hash.
**2. Data Structures (Client-side PouchDB & Backend Bucket):**

- **Client Metadata:** Standard file metadata documents, kept in the client's PouchDB.
- **Hot-Log (in `hot_logs/`):** A per-client, append-only log file on the bucket.
    - Path: `hot_logs/{client_id}.jsonlog`
    - Content: A sequence of JSON objects, one per line, representing chunk creation events. `{"hash": "...", "data": "...", "ts": ..., "file_id": "..."}`
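The `.jsonlog` format above is JSON Lines: one event object per line. A minimal encode/parse sketch (field names follow the document; function names are illustrative):

```ts
type LogEvent = { hash: string; data: string; ts: number; file_id: string };

// Append one event to a Hot-Log held as a string (one JSON object per line).
function appendLine(log: string, ev: LogEvent): string {
    return log + JSON.stringify(ev) + "\n";
}

// Parse a Hot-Log back into its events, skipping blank lines.
function parseLog(log: string): LogEvent[] {
    return log
        .split("\n")
        .filter((line) => line.length > 0)
        .map((line) => JSON.parse(line) as LogEvent);
}
```

The append-only, line-oriented layout is what lets a client extend its log with a simple upload and lets readers process it incrementally.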
- **Index File (in `indices/`):** A JSON file for a given hash prefix.
    - Path: `indices/{prefix}.json`
    - Content: Records which chunk hashes are present in Cold Storage (each hash is its own key in `cold_chunks/`). `{"hash_abc...": true, "hash_def...": true}`

- **Cold Chunk (in `cold_chunks/`):** The raw, immutable, deduplicated chunk data.
    - Path: `cold_chunks/{chunk_hash}`

**3. "Journal Sync" - Send/Receive Operation (Not Live):**

This process is now extremely lightweight.

1. **Send:**
    a. The client takes all newly generated chunks and **appends them to its own Hot-Log file (`hot_logs/{client_id}.jsonlog`)** on the bucket.
    b. The client updates its local file metadata in PouchDB.
    c. It then creates a "Journal Sync" pack containing **only the PouchDB journal of the file metadata changes.** This pack is very small as it contains no chunk data.
    d. The pack is uploaded to `packs/`.

2. **Receive:**
    a. The client downloads new packs from `packs/` and applies the metadata journal to its local PouchDB.
    b. It downloads the latest versions of all **other clients' Hot-Log files** from `hot_logs/`.
    c. Now the client has a complete, up-to-date view of all metadata and all "hot" chunks.

**4. Read/Load Operation Flow:**

To find a chunk's data:

1. The client searches for the chunk hash in its local copy of all **Hot-Logs**.
2. If not found, it downloads and consults the appropriate **Index file (`indices/{prefix}.json`)**.
3. If the index confirms existence, it downloads the data from **`cold_chunks/{chunk_hash}`**.
**5. Compaction & Promotion Process (Asynchronous "GC"):**

This is a deliberate, offline-capable process that any client can choose to run.

1. The client "leases" its own Hot-Log for compaction.
2. It reads its entire `hot_logs/{client_id}.jsonlog`.
3. For each chunk in the log, it checks if the chunk is referenced in the *current, latest state* of the file metadata.
    - **If not referenced (Garbage):** The log entry is discarded.
    - **If referenced (Stable):** The chunk is added to a "promotion batch."
4. For each chunk in the promotion batch:
    a. It checks the corresponding `indices/{prefix}.json` to see if the chunk already exists in Cold Storage.
    b. If it does not exist, it **uploads the chunk data to `cold_chunks/{chunk_hash}`** and updates the `indices/{prefix}.json` file.
5. Once the entire Hot-Log has been processed, the client **deletes its `hot_logs/{client_id}.jsonlog` file** (or truncates it to empty), effectively completing the cycle.
## Test strategy

1. **Component Tests:** Test the Compaction process independently. Ensure it correctly identifies stable versus garbage chunks and populates the `cold_chunks/` and `indices/` areas correctly.
2. **Integration Tests:**
    - Simulate a multi-client sync cycle. Verify that sync packs in `packs/` are small.
    - Confirm that `hot_logs/` are correctly created and updated.
    - Run the Compaction process and verify that data migrates correctly to cold storage and the hot log is cleared.
3. **Conflict Tests:** Simulate two clients trying to compact the same index file simultaneously and ensure the outcome is consistent (for example, via a locking mechanism or last-write-wins).

## Documentation strategy

- This design document will be the primary reference for the bucket-based architecture.
- The structure of the backend bucket (`packs/`, `hot_logs/`, etc.) will be clearly defined.
- A detailed description of how to run the Compaction process will be provided to users.

## Consideration and Conclusion

By applying the Tiered Storage model to "Journal Sync", we transform it into a remarkably efficient system. The synchronisation of everyday changes becomes extremely fast and lightweight, as only metadata journals are exchanged. The heavy lifting of data deduplication and permanent storage is offloaded to a separate, asynchronous Compaction process. This clear separation of concerns makes the system highly scalable, minimises storage costs, and finally provides a practical, robust solution for Garbage Collection in a protocol-agnostic, bucket-based environment.
@@ -1,7 +1,7 @@
 {
     "id": "obsidian-livesync",
     "name": "Self-hosted LiveSync",
-    "version": "0.25.0",
+    "version": "0.25.2",
     "minAppVersion": "0.9.12",
     "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
    "author": "vorotamoroz",
@@ -1,7 +1,7 @@
 {
     "id": "obsidian-livesync",
     "name": "Self-hosted LiveSync",
-    "version": "0.25.0",
+    "version": "0.25.4",
     "minAppVersion": "0.9.12",
     "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
    "author": "vorotamoroz",
30
package-lock.json
generated
@@ -1,12 +1,12 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.25.0",
+    "version": "0.25.4",
     "lockfileVersion": 2,
     "requires": true,
     "packages": {
         "": {
             "name": "obsidian-livesync",
-            "version": "0.25.0",
+            "version": "0.25.4",
             "license": "MIT",
             "dependencies": {
                 "@aws-sdk/client-s3": "^3.808.0",
@@ -20,7 +20,7 @@
                 "fflate": "^0.8.2",
                 "idb": "^8.0.3",
                 "minimatch": "^10.0.1",
-                "octagonal-wheels": "^0.1.35",
+                "octagonal-wheels": "^0.1.37",
                 "qrcode-generator": "^1.4.4",
                 "svelte-check": "^4.1.7",
                 "trystero": "^0.21.5",
@@ -1666,9 +1666,9 @@
             }
         },
         "node_modules/@eslint/plugin-kit": {
-            "version": "0.3.3",
-            "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.3.3.tgz",
-            "integrity": "sha512-1+WqvgNMhmlAambTvT3KPtCl/Ibr68VldY2XY40SL1CE0ZXiakFR/cbTspaF5HsnpDMvcYYoJHfl4980NBjGag==",
+            "version": "0.3.4",
+            "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.3.4.tgz",
+            "integrity": "sha512-Ul5l+lHEcw3L5+k8POx6r74mxEYKG5kOb6Xpy2gCRW6zweT6TEhAf8vhxGgjhqrd/VO/Dirhsb+1hNpD1ue9hw==",
             "dev": true,
             "license": "Apache-2.0",
             "dependencies": {
@@ -8281,9 +8281,9 @@
             }
         },
         "node_modules/octagonal-wheels": {
-            "version": "0.1.35",
-            "resolved": "https://registry.npmjs.org/octagonal-wheels/-/octagonal-wheels-0.1.35.tgz",
-            "integrity": "sha512-fjyvgg1+aG4SnpPjdZp6SPA/N6CseTgTLWnYWFN0mdH6qVAZzxNkDxKACtPxuxRQfgjj83yiPhQBMT3yA0XUnw==",
+            "version": "0.1.37",
+            "resolved": "https://registry.npmjs.org/octagonal-wheels/-/octagonal-wheels-0.1.37.tgz",
+            "integrity": "sha512-+kDdbN5h74ulo3JkQbR00DlJrCpwbhFfx9WJ0bX33H5PoJeiOqgZ1DvH8mH2ajkCVsNDMSwdhtk17pfu3N8jZg==",
             "license": "MIT",
             "dependencies": {
                 "idb": "^8.0.3"
@@ -11719,9 +11719,9 @@
             "dev": true
         },
         "@eslint/plugin-kit": {
-            "version": "0.3.3",
-            "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.3.3.tgz",
-            "integrity": "sha512-1+WqvgNMhmlAambTvT3KPtCl/Ibr68VldY2XY40SL1CE0ZXiakFR/cbTspaF5HsnpDMvcYYoJHfl4980NBjGag==",
+            "version": "0.3.4",
+            "resolved": "https://registry.npmjs.org/@eslint/plugin-kit/-/plugin-kit-0.3.4.tgz",
+            "integrity": "sha512-Ul5l+lHEcw3L5+k8POx6r74mxEYKG5kOb6Xpy2gCRW6zweT6TEhAf8vhxGgjhqrd/VO/Dirhsb+1hNpD1ue9hw==",
             "dev": true,
             "requires": {
                 "@eslint/core": "^0.15.1",
@@ -16491,9 +16491,9 @@
             }
         },
         "octagonal-wheels": {
-            "version": "0.1.35",
-            "resolved": "https://registry.npmjs.org/octagonal-wheels/-/octagonal-wheels-0.1.35.tgz",
-            "integrity": "sha512-fjyvgg1+aG4SnpPjdZp6SPA/N6CseTgTLWnYWFN0mdH6qVAZzxNkDxKACtPxuxRQfgjj83yiPhQBMT3yA0XUnw==",
+            "version": "0.1.37",
+            "resolved": "https://registry.npmjs.org/octagonal-wheels/-/octagonal-wheels-0.1.37.tgz",
+            "integrity": "sha512-+kDdbN5h74ulo3JkQbR00DlJrCpwbhFfx9WJ0bX33H5PoJeiOqgZ1DvH8mH2ajkCVsNDMSwdhtk17pfu3N8jZg==",
             "requires": {
                 "idb": "^8.0.3"
             }
@@ -1,6 +1,6 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.25.0",
+    "version": "0.25.4",
     "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "main": "main.js",
     "type": "module",
@@ -89,7 +89,7 @@
         "fflate": "^0.8.2",
         "idb": "^8.0.3",
         "minimatch": "^10.0.1",
-        "octagonal-wheels": "^0.1.35",
+        "octagonal-wheels": "^0.1.37",
         "qrcode-generator": "^1.4.4",
         "svelte-check": "^4.1.7",
         "trystero": "^0.21.5",
2
src/lib
Submodule src/lib updated: ab02f72aa5...a5ac735c6f
12
src/main.ts
@@ -472,6 +472,14 @@ export default class ObsidianLiveSyncPlugin
     $$clearUsedPassphrase(): void {
         throwShouldBeOverridden();
     }

+    $$decryptSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
+        throwShouldBeOverridden();
+    }
+
+    $$adjustSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
+        throwShouldBeOverridden();
+    }
+
     $$loadSettings(): Promise<void> {
         throwShouldBeOverridden();
     }
@@ -546,6 +554,10 @@ export default class ObsidianLiveSyncPlugin
     $everyAfterResumeProcess(): Promise<boolean> {
         return InterceptiveEvery;
     }

+    $$fetchRemotePreferredTweakValues(trialSetting: RemoteDBSettings): Promise<TweakValues | false> {
+        throwShouldBeOverridden();
+    }
+
     $$checkAndAskResolvingMismatchedTweaks(preferred: Partial<TweakValues>): Promise<[TweakValues | boolean, boolean]> {
         throwShouldBeOverridden();
     }
@@ -32,6 +32,7 @@ import { EVENT_FILE_SAVED, EVENT_SETTING_SAVED, eventHub } from "../../common/ev
 import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
 import { globalSlipBoard } from "../../lib/src/bureau/bureau";
 import { $msg } from "../../lib/src/common/i18n";
+import { clearHandlers } from "../../lib/src/replication/SyncParamsHandler";

 const KEY_REPLICATION_ON_EVENT = "replicationOnEvent";
 const REPLICATION_ON_EVENT_FORECASTED_TIME = 5000;
@@ -66,6 +67,8 @@ export class ModuleReplicator extends AbstractModule implements ICoreModule {
         this.core.replicator = replicator;
         this._replicatorType = this.settings.remoteType;
         await yieldMicrotask();
+        // Clear any existing sync parameter handlers (means clearing key-deriving salt).
+        clearHandlers();
         return true;
     }
@@ -88,8 +91,9 @@ export class ModuleReplicator extends AbstractModule implements ICoreModule {

     async $everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
-        // Checking salt
-        if (!(await this.ensureReplicatorPBKDF2Salt(showMessage))) {
-            Logger("Failed to ensure PBKDF2 salt for replication.", LOG_LEVEL_NOTICE);
+        // Showing message is false: that because be shown here. (And it is a fatal error, no way to hide it).
+        if (!(await this.ensureReplicatorPBKDF2Salt(false))) {
+            Logger("Failed to initialise the encryption key, preventing replication.", LOG_LEVEL_NOTICE);
             return false;
         }
         await this.loadQueuedFiles();
@@ -6,6 +6,7 @@ import {
     FLAGMD_REDFLAG2_HR,
     FLAGMD_REDFLAG3,
     FLAGMD_REDFLAG3_HR,
+    type ObsidianLiveSyncSettings,
 } from "../../lib/src/common/types.ts";
 import { AbstractModule } from "../AbstractModule.ts";
 import type { ICoreModule } from "../ModuleTypes.ts";
@@ -133,6 +134,34 @@ export class ModuleRedFlag extends AbstractModule implements ICoreModule {
             return false;
         }

+        const optionFetchRemoteConf = $msg("RedFlag.FetchRemoteConfig.Buttons.Fetch");
+        const optionCancel = $msg("RedFlag.FetchRemoteConfig.Buttons.Cancel");
+        const fetchRemote = await this.core.confirm.askSelectStringDialogue(
+            $msg("RedFlag.FetchRemoteConfig.Message"),
+            [optionFetchRemoteConf, optionCancel],
+            {
+                defaultAction: optionFetchRemoteConf,
+                timeout: 0,
+                title: $msg("RedFlag.FetchRemoteConfig.Title"),
+            }
+        );
+        if (fetchRemote === optionFetchRemoteConf) {
+            this._log("Fetching remote configuration", LOG_LEVEL_NOTICE);
+            const newSettings = JSON.parse(JSON.stringify(this.core.settings)) as ObsidianLiveSyncSettings;
+            const remoteConfig = await this.core.$$fetchRemotePreferredTweakValues(newSettings);
+            if (remoteConfig) {
+                this._log("Remote configuration found.", LOG_LEVEL_NOTICE);
+                const mergedSettings = {
+                    ...this.core.settings,
+                    ...remoteConfig,
+                } satisfies ObsidianLiveSyncSettings;
+                this._log("Remote configuration applied.", LOG_LEVEL_NOTICE);
+                this.core.settings = mergedSettings;
+            } else {
+                this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
+            }
+        }
+
         await this.core.rebuilder.$fetchLocal(makeLocalChunkBeforeSync, !makeLocalFilesBeforeSync);

         await this.deleteRedFlag3();
@@ -164,22 +164,28 @@ export class ModuleResolvingMismatchedTweaks extends AbstractModule implements I
         return "IGNORE";
     }

-    async $$checkAndAskUseRemoteConfiguration(
-        trialSetting: RemoteDBSettings
-    ): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
-        const replicator = await this.core.$anyNewReplicator(trialSetting);
+    async $$fetchRemotePreferredTweakValues(trialSetting: RemoteDBSettings): Promise<TweakValues | false> {
+        const replicator = await this.core.$anyNewReplicator();
         if (await replicator.tryConnectRemote(trialSetting)) {
             const preferred = await replicator.getRemotePreferredTweakValues(trialSetting);
             if (preferred) {
-                return await this.$$askUseRemoteConfiguration(trialSetting, preferred);
-            } else {
-                this._log("Failed to get the preferred tweak values from the remote server.", LOG_LEVEL_NOTICE);
+                return preferred;
             }
-            return { result: false, requireFetch: false };
-        } else {
-            this._log("Failed to connect to the remote server.", LOG_LEVEL_NOTICE);
-            return { result: false, requireFetch: false };
+            this._log("Failed to get the preferred tweak values from the remote server.", LOG_LEVEL_NOTICE);
+            return false;
         }
+        this._log("Failed to connect to the remote server.", LOG_LEVEL_NOTICE);
+        return false;
     }

+    async $$checkAndAskUseRemoteConfiguration(
+        trialSetting: RemoteDBSettings
+    ): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
+        const preferred = await this.core.$$fetchRemotePreferredTweakValues(trialSetting);
+        if (preferred) {
+            return await this.$$askUseRemoteConfiguration(trialSetting, preferred);
+        }
+        return { result: false, requireFetch: false };
+    }
+
     async $$askUseRemoteConfiguration(
@@ -1,5 +1,4 @@
-import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
-import { type ObsidianLiveSyncSettings } from "../../lib/src/common/types.js";
+import { LOG_LEVEL_NOTICE } from "octagonal-wheels/common/logger";
 import {
     EVENT_REQUEST_OPEN_P2P,
     EVENT_REQUEST_OPEN_SETTING_WIZARD,
@@ -11,131 +10,31 @@ import {
 import { AbstractModule } from "../AbstractModule.ts";
 import type { ICoreModule } from "../ModuleTypes.ts";
 import { $msg } from "src/lib/src/common/i18n.ts";
-import { checkUnsuitableValues, RuleLevel, type RuleForType } from "../../lib/src/common/configForDoc.ts";
-import { getConfName, type AllSettingItemKey } from "../features/SettingDialogue/settingConstants.ts";
+import { performDoctorConsultation, RebuildOptions } from "../../lib/src/common/configForDoc.ts";

 export class ModuleMigration extends AbstractModule implements ICoreModule {
     async migrateUsingDoctor(skipRebuild: boolean = false, activateReason = "updated", forceRescan = false) {
-        const r = checkUnsuitableValues(this.core.settings);
-        if (!forceRescan && r.version == this.settings.doctorProcessedVersion) {
-            const isIssueFound = Object.keys(r.rules).length > 0;
-            const msg = isIssueFound ? "Issues found" : "No issues found";
-            this._log(`${msg} but marked as to be silent`, LOG_LEVEL_VERBOSE);
-            return;
-        }
-        const issues = Object.entries(r.rules);
-        if (issues.length == 0) {
-            this._log(
-                $msg("Doctor.Message.NoIssues"),
-                activateReason !== "updated" ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO
-            );
-            return;
-        } else {
-            const OPT_YES = `${$msg("Doctor.Button.Yes")}` as const;
-            const OPT_NO = `${$msg("Doctor.Button.No")}` as const;
-            const OPT_DISMISS = `${$msg("Doctor.Button.DismissThisVersion")}` as const;
-            // this._log(`Issues found in ${key}`, LOG_LEVEL_VERBOSE);
-            const issues = Object.keys(r.rules)
-                .map((key) => `- ${getConfName(key as AllSettingItemKey)}`)
-                .join("\n");
-            const msg = await this.core.confirm.askSelectStringDialogue(
-                $msg("Doctor.Dialogue.Main", { activateReason, issues }),
-                [OPT_YES, OPT_NO, OPT_DISMISS],
-                {
-                    title: $msg("Doctor.Dialogue.Title"),
-                    defaultAction: OPT_YES,
-                }
-            );
-            if (msg == OPT_DISMISS) {
-                this.settings.doctorProcessedVersion = r.version;
-                await this.core.saveSettings();
-                this._log("Marked as to be silent", LOG_LEVEL_VERBOSE);
-                return;
-            }
-            if (msg != OPT_YES) return;
-            let shouldRebuild = false;
-            let shouldRebuildLocal = false;
-            const issueItems = Object.entries(r.rules) as [keyof ObsidianLiveSyncSettings, RuleForType<any>][];
-            this._log(`${issueItems.length} Issue(s) found `, LOG_LEVEL_VERBOSE);
-            let idx = 0;
-            const applySettings = {} as Partial<ObsidianLiveSyncSettings>;
-            const OPT_FIX = `${$msg("Doctor.Button.Fix")}` as const;
-            const OPT_SKIP = `${$msg("Doctor.Button.Skip")}` as const;
-            const OPT_FIXBUTNOREBUILD = `${$msg("Doctor.Button.FixButNoRebuild")}` as const;
-            let skipped = 0;
-            for (const [key, value] of issueItems) {
-                const levelMap = {
-                    [RuleLevel.Necessary]: $msg("Doctor.Level.Necessary"),
-                    [RuleLevel.Recommended]: $msg("Doctor.Level.Recommended"),
-                    [RuleLevel.Optional]: $msg("Doctor.Level.Optional"),
-                    [RuleLevel.Must]: $msg("Doctor.Level.Must"),
-                };
-                const level = value.level ? levelMap[value.level] : "Unknown";
-                const options = [OPT_FIX] as [typeof OPT_FIX | typeof OPT_SKIP | typeof OPT_FIXBUTNOREBUILD];
-                if ((!skipRebuild && value.requireRebuild) || value.requireRebuildLocal) {
-                    options.push(OPT_FIXBUTNOREBUILD);
-                }
-                options.push(OPT_SKIP);
-                const note = skipRebuild
-                    ? ""
-                    : `${value.requireRebuild ? $msg("Doctor.Message.RebuildRequired") : ""}${value.requireRebuildLocal ? $msg("Doctor.Message.RebuildLocalRequired") : ""}`;
-
-                const ret = await this.core.confirm.askSelectStringDialogue(
-                    $msg("Doctor.Dialogue.MainFix", {
-                        name: getConfName(key as AllSettingItemKey),
-                        current: `${this.settings[key]}`,
-                        reason: value.reasonFunc?.(this.settings) ?? value.reason ?? " N/A ",
-                        ideal: `${value.valueDisplayFunc ? value.valueDisplayFunc(this.settings) : value.value}`,
-                        //@ts-ignore
-                        level: `${level}`,
-                        note: note,
-                    }),
-                    options,
-                    {
-                        title: $msg("Doctor.Dialogue.TitleFix", { current: `${++idx}`, total: `${issueItems.length}` }),
-                        defaultAction: OPT_FIX,
-                    }
-                );
-
-                if (ret == OPT_FIX || ret == OPT_FIXBUTNOREBUILD) {
-                    //@ts-ignore
-                    applySettings[key] = value.value;
-                    if (ret == OPT_FIX) {
-                        shouldRebuild = shouldRebuild || value.requireRebuild || false;
-                        shouldRebuildLocal = shouldRebuildLocal || value.requireRebuildLocal || false;
-                    }
-                } else {
-                    skipped++;
-                }
-            }
-            if (Object.keys(applySettings).length > 0) {
-                this.settings = {
-                    ...this.settings,
-                    ...applySettings,
-                };
-            }
-            if (skipped == 0) {
-                this.settings.doctorProcessedVersion = r.version;
-            } else {
-                if (
-                    (await this.core.confirm.askYesNoDialog($msg("Doctor.Message.SomeSkipped"), {
-                        title: $msg("Doctor.Dialogue.TitleAlmostDone"),
-                        defaultOption: "No",
-                    })) == "no"
-                ) {
-                    // Some skipped, and user wants
-                    this.settings.doctorProcessedVersion = r.version;
-                }
+        const { shouldRebuild, shouldRebuildLocal, isModified, settings } = await performDoctorConsultation(
+            this.core,
+            this.settings,
+            {
+                localRebuild: skipRebuild ? RebuildOptions.SkipEvenIfRequired : RebuildOptions.AutomaticAcceptable,
+                remoteRebuild: skipRebuild ? RebuildOptions.SkipEvenIfRequired : RebuildOptions.AutomaticAcceptable,
+                activateReason,
+                forceRescan,
+            }
+        );
+        if (isModified) {
+            this.settings = settings;
+            await this.core.saveSettings();
+            if (!skipRebuild) {
+                if (shouldRebuild) {
+                    await this.core.rebuilder.scheduleRebuild();
+                    await this.core.$$performRestart();
+                } else if (shouldRebuildLocal) {
+                    await this.core.rebuilder.scheduleFetch();
+                    await this.core.$$performRestart();
+                }
+            }
+        }
-            if (!skipRebuild) {
-                if (shouldRebuild) {
-                    await this.core.rebuilder.scheduleRebuild();
-                    await this.core.$$performRestart();
-                } else if (shouldRebuildLocal) {
-                    await this.core.rebuilder.scheduleFetch();
-                    await this.core.$$performRestart();
-                }
-            }
-        }
     }
@@ -11,11 +11,11 @@ import {
     SALT_OF_PASSPHRASE,
 } from "../../lib/src/common/types";
 import { LOG_LEVEL_NOTICE, LOG_LEVEL_URGENT } from "octagonal-wheels/common/logger";
-import { encrypt, tryDecrypt } from "octagonal-wheels/encryption";
 import { $msg, setLang } from "../../lib/src/common/i18n";
 import { isCloudantURI } from "../../lib/src/pouchdb/utils_couchdb";
 import { getLanguage } from "obsidian";
 import { SUPPORTED_I18N_LANGS, type I18N_LANGS } from "../../lib/src/common/rosetta.ts";
+import { decryptString, encryptString } from "@/lib/src/encryption/stringEncryption.ts";

 export class ModuleObsidianSettings extends AbstractObsidianModule implements IObsidianModule {
     async $everyOnLayoutReady(): Promise<boolean> {
         let isChanged = false;
@@ -73,7 +73,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
     }

     async decryptConfigurationItem(encrypted: string, passphrase: string) {
-        const dec = await tryDecrypt(encrypted, passphrase + SALT_OF_PASSPHRASE, false);
+        const dec = await decryptString(encrypted, passphrase + SALT_OF_PASSPHRASE);
         if (dec) {
             this.usedPassphrase = passphrase;
             return dec;
@@ -83,7 +83,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO

     async encryptConfigurationItem(src: string, settings: ObsidianLiveSyncSettings) {
         if (this.usedPassphrase != "") {
-            return await encrypt(src, this.usedPassphrase + SALT_OF_PASSPHRASE, false);
+            return await encryptString(src, this.usedPassphrase + SALT_OF_PASSPHRASE);
         }

         const passphrase = await this.getPassphrase(settings);
@@ -94,7 +94,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
             );
             return "";
         }
-        const dec = await encrypt(src, passphrase + SALT_OF_PASSPHRASE, false);
+        const dec = await encryptString(src, passphrase + SALT_OF_PASSPHRASE);
         if (dec) {
             this.usedPassphrase = passphrase;
             return dec;
@@ -174,18 +174,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
         }
     }

-    async $$loadSettings(): Promise<void> {
-        const settings = Object.assign({}, DEFAULT_SETTINGS, await this.core.loadData()) as ObsidianLiveSyncSettings;
-
-        if (typeof settings.isConfigured == "undefined") {
-            // If migrated, mark true
-            if (JSON.stringify(settings) !== JSON.stringify(DEFAULT_SETTINGS)) {
-                settings.isConfigured = true;
-            } else {
-                settings.additionalSuffixOfDatabaseName = this.appId;
-                settings.isConfigured = false;
-            }
-        }
+    async $$decryptSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
         const passphrase = await this.getPassphrase(settings);
         if (passphrase === false) {
             this._log("No passphrase found for data.json! Verify configuration before syncing.", LOG_LEVEL_URGENT);
@@ -237,20 +226,62 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
             }
         }
-        this.settings = settings;
+        return settings;
     }

+    /**
+     * This method mutates the settings object.
+     * @param settings
+     * @returns
+     */
+    $$adjustSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
+        // Adjust settings as needed
+
+        // Delete this feature to avoid problems on mobile.
+        settings.disableRequestURI = true;
+
+        // GC is disabled.
+        settings.gcDelay = 0;
+        // So, use history is always enabled.
+        settings.useHistory = true;
+
+        if ("workingEncrypt" in settings) delete settings.workingEncrypt;
+        if ("workingPassphrase" in settings) delete settings.workingPassphrase;
+        // Splitter configurations have been replaced with chunkSplitterVersion.
+        if (settings.chunkSplitterVersion == "") {
+            if (settings.enableChunkSplitterV2) {
+                if (settings.useSegmenter) {
+                    settings.chunkSplitterVersion = "v2-segmenter";
+                } else {
+                    settings.chunkSplitterVersion = "v2";
+                }
+            } else {
+                settings.chunkSplitterVersion = "";
+            }
+        } else if (!(settings.chunkSplitterVersion in ChunkAlgorithmNames)) {
+            settings.chunkSplitterVersion = "";
+        }
+        return Promise.resolve(settings);
+    }
+
+    async $$loadSettings(): Promise<void> {
+        const settings = Object.assign({}, DEFAULT_SETTINGS, await this.core.loadData()) as ObsidianLiveSyncSettings;
+
+        if (typeof settings.isConfigured == "undefined") {
+            // If migrated, mark true
+            if (JSON.stringify(settings) !== JSON.stringify(DEFAULT_SETTINGS)) {
+                settings.isConfigured = true;
+            } else {
+                settings.additionalSuffixOfDatabaseName = this.appId;
+                settings.isConfigured = false;
+            }
+        }
+
+        this.settings = await this.core.$$decryptSettings(settings);

         setLang(this.settings.displayLanguage);

-        if ("workingEncrypt" in this.settings) delete this.settings.workingEncrypt;
-        if ("workingPassphrase" in this.settings) delete this.settings.workingPassphrase;
-
-        // Delete this feature to avoid problems on mobile.
-        this.settings.disableRequestURI = true;
-
-        // GC is disabled.
-        this.settings.gcDelay = 0;
-        // So, use history is always enabled.
-        this.settings.useHistory = true;
+        await this.core.$$adjustSettings(this.settings);

         const lsKey = "obsidian-live-sync-vaultanddevicename-" + this.core.$$getVaultName();
         if (this.settings.deviceAndVaultName != "") {
@@ -275,21 +306,6 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
         }
     }

-        // Splitter configurations have been replaced with chunkSplitterVersion.
-        if (this.settings.chunkSplitterVersion == "") {
-            if (this.settings.enableChunkSplitterV2) {
-                if (this.settings.useSegmenter) {
-                    this.settings.chunkSplitterVersion = "v2-segmenter";
-                } else {
-                    this.settings.chunkSplitterVersion = "v2";
-                }
-            } else {
-                this.settings.chunkSplitterVersion = "";
-            }
-        } else if (!(this.settings.chunkSplitterVersion in ChunkAlgorithmNames)) {
-            this.settings.chunkSplitterVersion = "";
-        }
-
         // this.core.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim());
         eventHub.emitEvent(EVENT_REQUEST_RELOAD_SETTING_TAB);
     }
@@ -7,7 +7,6 @@ import {
|
||||
} from "../../lib/src/common/types.ts";
|
||||
import { configURIBase, configURIBaseQR } from "../../common/types.ts";
|
||||
// import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser.js";
|
||||
import { decrypt, encrypt } from "../../lib/src/encryption/e2ee_v2.ts";
|
||||
import { fireAndForget } from "../../lib/src/common/utils.ts";
|
||||
import {
|
||||
EVENT_REQUEST_COPY_SETUP_URI,
|
||||
@@ -19,6 +18,8 @@ import { AbstractObsidianModule, type IObsidianModule } from "../AbstractObsidia
|
||||
import { decodeAnyArray, encodeAnyArray } from "../../common/utils.ts";
|
||||
import qrcode from "qrcode-generator";
|
||||
import { $msg } from "../../lib/src/common/i18n.ts";
|
||||
import { performDoctorConsultation, RebuildOptions } from "@/lib/src/common/configForDoc.ts";
|
||||
import { encryptString, decryptString } from "@/lib/src/encryption/stringEncryption.ts";
|
||||
|
||||
export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsidianModule {
|
||||
$everyOnload(): Promise<boolean> {
|
||||
@@ -129,9 +130,7 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
|
||||
delete setting[k];
|
||||
}
|
||||
}
|
||||
const encryptedSetting = encodeURIComponent(
|
||||
await encrypt(JSON.stringify(setting), encryptingPassphrase, false)
|
||||
);
|
||||
const encryptedSetting = encodeURIComponent(await encryptString(JSON.stringify(setting), encryptingPassphrase));
|
||||
const uri = `${configURIBase}${encryptedSetting} `;
|
||||
await navigator.clipboard.writeText(uri);
|
||||
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
|
||||
@@ -150,9 +149,7 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
|
||||
encryptedCouchDBConnection: "",
|
||||
encryptedPassphrase: "",
|
||||
};
|
||||
const encryptedSetting = encodeURIComponent(
|
||||
await encrypt(JSON.stringify(setting), encryptingPassphrase, false)
|
||||
);
|
||||
const encryptedSetting = encodeURIComponent(await encryptString(JSON.stringify(setting), encryptingPassphrase));
|
||||
const uri = `${configURIBase}${encryptedSetting} `;
|
||||
await navigator.clipboard.writeText(uri);
|
||||
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
|
||||
@@ -170,6 +167,73 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
|
||||
const config = decodeURIComponent(setupURI.substring(configURIBase.length));
|
||||
await this.setupWizard(config);
|
||||
}
|
||||
async askSyncWithRemoteConfig(tryingSettings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
|
||||
const buttons = {
|
||||
fetch: $msg("Setup.FetchRemoteConf.Buttons.Fetch"),
|
||||
no: $msg("Setup.FetchRemoteConf.Buttons.Skip"),
|
||||
} as const;
|
||||
const fetchRemoteConf = await this.core.confirm.askSelectStringDialogue(
|
||||
$msg("Setup.FetchRemoteConf.Message"),
|
||||
Object.values(buttons),
|
||||
{ defaultAction: buttons.fetch, timeout: 0, title: $msg("Setup.FetchRemoteConf.Title") }
|
||||
);
|
||||
if (fetchRemoteConf == buttons.no) {
|
||||
return tryingSettings;
|
||||
}
|
||||
|
||||
const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
|
||||
const remoteConfig = await this.core.$$fetchRemotePreferredTweakValues(newSettings);
|
||||
if (remoteConfig) {
|
||||
this._log("Remote configuration found.", LOG_LEVEL_NOTICE);
|
||||
const resultSettings = {
|
||||
...DEFAULT_SETTINGS,
|
||||
...tryingSettings,
|
||||
...remoteConfig,
|
||||
} satisfies ObsidianLiveSyncSettings;
|
||||
return resultSettings;
|
||||
} else {
|
||||
this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
|
||||
return {
|
||||
...DEFAULT_SETTINGS,
|
||||
...tryingSettings,
|
||||
} satisfies ObsidianLiveSyncSettings;
|
||||
}
|
||||
}
|
||||
async askPerformDoctor(
|
||||
tryingSettings: ObsidianLiveSyncSettings
|
||||
): Promise<{ settings: ObsidianLiveSyncSettings; shouldRebuild: boolean; isModified: boolean }> {
|
||||
const buttons = {
|
||||
yes: $msg("Setup.Doctor.Buttons.Yes"),
|
||||
no: $msg("Setup.Doctor.Buttons.No"),
|
||||
} as const;
|
||||
const performDoctor = await this.core.confirm.askSelectStringDialogue(
|
||||
$msg("Setup.Doctor.Message"),
|
||||
Object.values(buttons),
|
||||
{ defaultAction: buttons.yes, timeout: 0, title: $msg("Setup.Doctor.Title") }
|
||||
);
|
||||
if (performDoctor == buttons.no) {
|
||||
return { settings: tryingSettings, shouldRebuild: false, isModified: false };
|
||||
}
|
||||
|
||||
const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
|
||||
const { settings, shouldRebuild, isModified } = await performDoctorConsultation(this.core, newSettings, {
|
||||
localRebuild: RebuildOptions.AutomaticAcceptable, // Because we are in the setup wizard, we can skip the confirmation.
|
||||
remoteRebuild: RebuildOptions.SkipEvenIfRequired,
|
||||
activateReason: "New settings from URI",
|
});
if (isModified) {
this._log("Doctor has fixed some issues!", LOG_LEVEL_NOTICE);
return {
settings,
shouldRebuild,
isModified,
};
} else {
this._log("Doctor detected no issues!", LOG_LEVEL_NOTICE);
return { settings: tryingSettings, shouldRebuild: false, isModified: false };
}
}

async applySettingWizard(
oldConf: ObsidianLiveSyncSettings,
newConf: ObsidianLiveSyncSettings,
@@ -180,20 +244,24 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
{}
);
if (result == "yes") {
const newSettingW = Object.assign({}, DEFAULT_SETTINGS, newConf) as ObsidianLiveSyncSettings;
let newSettingW = Object.assign({}, DEFAULT_SETTINGS, newConf) as ObsidianLiveSyncSettings;
this.core.replicator.closeReplication();
this.settings.suspendFileWatching = true;
console.dir(newSettingW);
newSettingW = await this.askSyncWithRemoteConfig(newSettingW);
const { settings, shouldRebuild, isModified } = await this.askPerformDoctor(newSettingW);
if (isModified) {
newSettingW = settings;
}
// Back into the default method once.
newSettingW.configPassphraseStore = "";
newSettingW.encryptedPassphrase = "";
newSettingW.encryptedCouchDBConnection = "";
newSettingW.additionalSuffixOfDatabaseName = `${"appId" in this.app ? this.app.appId : ""} `;
const setupJustImport = "Don't sync anything, just apply the settings.";
const setupAsNew = "This is a new client - sync everything from the remote server.";
const setupAsMerge = "This is an existing client - merge existing files with the server.";
const setupAgain = "Initialise new server data - ideal for new or broken servers.";
const setupManually = "Continue and configure manually.";
const setupJustImport = $msg("Setup.Apply.Buttons.OnlyApply");
const setupAsNew = $msg("Setup.Apply.Buttons.ApplyAndFetch");
const setupAsMerge = $msg("Setup.Apply.Buttons.ApplyAndMerge");
const setupAgain = $msg("Setup.Apply.Buttons.ApplyAndRebuild");
const setupCancel = $msg("Setup.Apply.Buttons.Cancel");
newSettingW.syncInternalFiles = false;
newSettingW.usePluginSync = false;
newSettingW.isConfigured = true;
@@ -201,11 +269,16 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
if (!newSettingW.useIndexedDBAdapter) {
newSettingW.useIndexedDBAdapter = true;
}
const warn = shouldRebuild ? $msg("Setup.Apply.WarningRebuildRecommended") : "";
const message = $msg("Setup.Apply.Message", {
method,
warn,
});

const setupType = await this.core.confirm.askSelectStringDialogue(
"How would you like to set it up?",
[setupAsNew, setupAgain, setupAsMerge, setupJustImport, setupManually],
{ defaultAction: setupAsNew }
message,
[setupAsNew, setupAsMerge, setupAgain, setupJustImport, setupCancel],
{ defaultAction: setupAsNew, title: $msg("Setup.Apply.Title", { method }), timeout: 0 }
);
if (setupType == setupJustImport) {
this.core.settings = newSettingW;
@@ -237,71 +310,11 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
await this.core.saveSettings();
this.core.$$clearUsedPassphrase();
await this.core.rebuilder.$rebuildEverything();
} else if (setupType == setupManually) {
const keepLocalDB = await this.core.confirm.askYesNoDialog("Keep local DB?", {
defaultOption: "No",
});
const keepRemoteDB = await this.core.confirm.askYesNoDialog("Keep remote DB?", {
defaultOption: "No",
});
if (keepLocalDB == "yes" && keepRemoteDB == "yes") {
// nothing to do. so peaceful.
this.core.settings = newSettingW;
this.core.$$clearUsedPassphrase();
await this.core.$allSuspendAllSync();
await this.core.$allSuspendExtraSync();
await this.core.saveSettings();
const replicate = await this.core.confirm.askYesNoDialog("Unlock and replicate?", {
defaultOption: "Yes",
});
if (replicate == "yes") {
await this.core.$$replicate(true);
await this.core.$$markRemoteUnlocked();
}
this._log("Configuration loaded.", LOG_LEVEL_NOTICE);
return;
}
if (keepLocalDB == "no" && keepRemoteDB == "no") {
const reset = await this.core.confirm.askYesNoDialog("Drop everything?", {
defaultOption: "No",
});
if (reset != "yes") {
this._log("Cancelled", LOG_LEVEL_NOTICE);
this.core.settings = oldConf;
return;
}
}
let initDB;
this.core.settings = newSettingW;
this.core.$$clearUsedPassphrase();
await this.core.saveSettings();
if (keepLocalDB == "no") {
await this.core.$$resetLocalDatabase();
await this.core.localDatabase.initializeDatabase();
const rebuild = await this.core.confirm.askYesNoDialog("Rebuild the database?", {
defaultOption: "Yes",
});
if (rebuild == "yes") {
initDB = this.core.$$initializeDatabase(true);
} else {
await this.core.$$markRemoteResolved();
}
}
if (keepRemoteDB == "no") {
await this.core.$$tryResetRemoteDatabase();
await this.core.$$markRemoteLocked();
}
if (keepLocalDB == "no" || keepRemoteDB == "no") {
const replicate = await this.core.confirm.askYesNoDialog("Replicate once?", {
defaultOption: "Yes",
});
if (replicate == "yes") {
if (initDB != null) {
await initDB;
}
await this.core.$$replicate(true);
}
}
} else {
// Explicitly cancel the operation or the dialog was closed.
this._log("Cancelled", LOG_LEVEL_NOTICE);
this.core.settings = oldConf;
return;
}
this._log("Configuration loaded.", LOG_LEVEL_NOTICE);
} else {
@@ -320,7 +333,7 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
true
);
if (encryptingPassphrase === false) return;
const newConf = await JSON.parse(await decrypt(confString, encryptingPassphrase, false));
const newConf = await JSON.parse(await decryptString(confString, encryptingPassphrase));
if (newConf) {
await this.applySettingWizard(oldConf, newConf);
this._log("Configuration loaded.", LOG_LEVEL_NOTICE);

@@ -1,422 +1 @@
import { $t } from "../../../lib/src/common/i18n.ts";
import {
    DEFAULT_SETTINGS,
    configurationNames,
    type ConfigurationItem,
    type FilterBooleanKeys,
    type FilterNumberKeys,
    type FilterStringKeys,
    type ObsidianLiveSyncSettings,
} from "../../../lib/src/common/types.ts";

export type OnDialogSettings = {
    configPassphrase: string;
    preset: "" | "PERIODIC" | "LIVESYNC" | "DISABLE";
    syncMode: "ONEVENTS" | "PERIODIC" | "LIVESYNC";
    dummy: number;
    deviceAndVaultName: string;
};

export const OnDialogSettingsDefault: OnDialogSettings = {
    configPassphrase: "",
    preset: "",
    syncMode: "ONEVENTS",
    dummy: 0,
    deviceAndVaultName: "",
};
export const AllSettingDefault = { ...DEFAULT_SETTINGS, ...OnDialogSettingsDefault };

export type AllSettings = ObsidianLiveSyncSettings & OnDialogSettings;
export type AllStringItemKey = FilterStringKeys<AllSettings>;
export type AllNumericItemKey = FilterNumberKeys<AllSettings>;
export type AllBooleanItemKey = FilterBooleanKeys<AllSettings>;
export type AllSettingItemKey = AllStringItemKey | AllNumericItemKey | AllBooleanItemKey;

export type ValueOf<T extends AllSettingItemKey> = T extends AllStringItemKey
    ? string
    : T extends AllNumericItemKey
    ? number
    : T extends AllBooleanItemKey
    ? boolean
    : AllSettings[T];

export const SettingInformation: Partial<Record<keyof AllSettings, ConfigurationItem>> = {
    liveSync: {
        name: "Sync Mode",
    },
    couchDB_URI: {
        name: "Server URI",
        placeHolder: "https://........",
    },
    couchDB_USER: {
        name: "Username",
        desc: "username",
    },
    couchDB_PASSWORD: {
        name: "Password",
        desc: "password",
    },
    couchDB_DBNAME: {
        name: "Database Name",
    },
    passphrase: {
        name: "Passphrase",
        desc: "Encryption passphrase. If changed, you should overwrite the server's database with the new (encrypted) files.",
    },
    showStatusOnEditor: {
        name: "Show status inside the editor",
        desc: "Requires restart of Obsidian.",
    },
    showOnlyIconsOnEditor: {
        name: "Show status as icons only",
    },
    showStatusOnStatusbar: {
        name: "Show status on the status bar",
        desc: "Requires restart of Obsidian.",
    },
    lessInformationInLog: {
        name: "Show only notifications",
        desc: "Disables logging, only shows notifications. Please disable if you report an issue.",
    },
    showVerboseLog: {
        name: "Verbose Log",
        desc: "Show verbose log. Please enable if you report an issue.",
    },
    hashCacheMaxCount: {
        name: "Memory cache size (by total items)",
    },
    hashCacheMaxAmount: {
        name: "Memory cache size (by total characters)",
        desc: "(Mega chars)",
    },
    writeCredentialsForSettingSync: {
        name: "Write credentials in the file",
        desc: "(Not recommended) If set, credentials will be stored in the file.",
    },
    notifyAllSettingSyncFile: {
        name: "Notify all setting files",
    },
    configPassphrase: {
        name: "Passphrase of sensitive configuration items",
        desc: "This passphrase will not be copied to another device. It will be set to `Default` until you configure it again.",
    },
    configPassphraseStore: {
        name: "Encrypting sensitive configuration items",
    },
    syncOnSave: {
        name: "Sync on Save",
        desc: "Starts synchronisation when a file is saved.",
    },
    syncOnEditorSave: {
        name: "Sync on Editor Save",
        desc: "When you save a file in the editor, start a sync automatically",
    },
    syncOnFileOpen: {
        name: "Sync on File Open",
        desc: "Forces the file to be synced when opened.",
    },
    syncOnStart: {
        name: "Sync on Startup",
        desc: "Automatically Sync all files when opening Obsidian.",
    },
    syncAfterMerge: {
        name: "Sync after merging file",
        desc: "Sync automatically after merging files",
    },
    trashInsteadDelete: {
        name: "Use the trash bin",
        desc: "Move remotely deleted files to the trash, instead of deleting.",
    },
    doNotDeleteFolder: {
        name: "Keep empty folder",
        desc: "Should we keep folders that don't have any files inside?",
    },
    resolveConflictsByNewerFile: {
        name: "(BETA) Always overwrite with a newer file",
        desc: "Testing only - Resolves file conflicts by syncing the newer copy of the file; this can overwrite modified files. Be warned.",
    },
    checkConflictOnlyOnOpen: {
        name: "Delay conflict resolution of inactive files",
        desc: "Should we only check for conflicts when a file is opened?",
    },
    showMergeDialogOnlyOnActive: {
        name: "Delay merge conflict prompt for inactive files.",
        desc: "Should we prompt you about conflicting files when a file is opened?",
    },
    disableMarkdownAutoMerge: {
        name: "Always prompt merge conflicts",
        desc: "Should we prompt you for every single merge, even if we can safely merge automatically?",
    },
    writeDocumentsIfConflicted: {
        name: "Apply Latest Change if Conflicting",
        desc: "Enable this option to automatically apply the most recent change to documents even when it conflicts.",
    },
    syncInternalFilesInterval: {
        name: "Scan hidden files periodically",
        desc: "Seconds, 0 to disable",
    },
    batchSave: {
        name: "Batch database update",
        desc: "Reducing the frequency with which on-disk changes are reflected into the DB",
    },
    readChunksOnline: {
        name: "Fetch chunks on demand",
        desc: "(ex. Read chunks online) If this option is enabled, LiveSync reads chunks online directly instead of replicating them locally. Increasing Custom chunk size is recommended.",
    },
    syncMaxSizeInMB: {
        name: "Maximum file size",
        desc: "(MB) If this is set, changes to local and remote files that are larger than this will be skipped. If the file becomes smaller again, a newer one will be used.",
    },
    useIgnoreFiles: {
        name: "(Beta) Use ignore files",
        desc: "If this is set, changes to local files which are matched by the ignore files will be skipped. Remote changes are determined using local ignore files.",
    },
    ignoreFiles: {
        name: "Ignore files",
        desc: "Comma separated `.gitignore, .dockerignore`",
    },
    batch_size: {
        name: "Batch size",
        desc: "Number of changes to sync at a time. Defaults to 50. Minimum is 2.",
    },
    batches_limit: {
        name: "Batch limit",
        desc: "Number of batches to process at a time. Defaults to 40. Minimum is 2. This along with batch size controls how many docs are kept in memory at a time.",
    },
    useTimeouts: {
        name: "Use timeouts instead of heartbeats",
        desc: "If this option is enabled, PouchDB will hold the connection open for 60 seconds, and if no change arrives in that time, close and reopen the socket, instead of holding it open indefinitely. Useful when a proxy limits request duration but can increase resource usage.",
    },
    concurrencyOfReadChunksOnline: {
        name: "Batch size of on-demand fetching",
    },
    minimumIntervalOfReadChunksOnline: {
        name: "The delay for consecutive on-demand fetches",
    },
    suspendFileWatching: {
        name: "Suspend file watching",
        desc: "Stop watching for file changes.",
    },
    suspendParseReplicationResult: {
        name: "Suspend database reflecting",
        desc: "Stop reflecting database changes to storage files.",
    },
    writeLogToTheFile: {
        name: "Write logs into the file",
        desc: "Warning! This will have a serious impact on performance. And the logs will not be synchronised under the default name. Please be careful with logs; they often contain your confidential information.",
    },
    deleteMetadataOfDeletedFiles: {
        name: "Do not keep metadata of deleted files.",
    },
    useIndexedDBAdapter: {
        name: "(Obsolete) Use an old adapter for compatibility",
        desc: "Before v0.17.16, we used an old adapter for the local database. Now the new adapter is preferred. However, it needs local database rebuilding. Please disable this toggle when you have enough time. If left enabled, you will also be asked to disable this while fetching from the remote database.",
        obsolete: true,
    },
    watchInternalFileChanges: {
        name: "Scan changes on customization sync",
        desc: "Do not use internal API",
    },
    doNotSuspendOnFetching: {
        name: "Fetch database with previous behaviour",
    },
    disableCheckingConfigMismatch: {
        name: "Do not check configuration mismatch before replication",
    },
    usePluginSync: {
        name: "Enable customization sync",
    },
    autoSweepPlugins: {
        name: "Scan customization automatically",
        desc: "Scan customization before replicating.",
    },
    autoSweepPluginsPeriodic: {
        name: "Scan customization periodically",
        desc: "Scan customization every 1 minute.",
    },
    notifyPluginOrSettingUpdated: {
        name: "Notify customized",
        desc: "Notify when another device has new customizations.",
    },
    remoteType: {
        name: "Remote Type",
        desc: "Remote server type",
    },
    endpoint: {
        name: "Endpoint URL",
        placeHolder: "https://........",
    },
    accessKey: {
        name: "Access Key",
    },
    secretKey: {
        name: "Secret Key",
    },
    region: {
        name: "Region",
        placeHolder: "auto",
    },
    bucket: {
        name: "Bucket Name",
    },
    useCustomRequestHandler: {
        name: "Use Custom HTTP Handler",
        desc: "Enable this if your Object Storage doesn't support CORS",
    },
    maxChunksInEden: {
        name: "Maximum Incubating Chunks",
        desc: "The maximum number of chunks that can be incubated within the document. Chunks exceeding this number will immediately graduate to independent chunks.",
    },
    maxTotalLengthInEden: {
        name: "Maximum Incubating Chunk Size",
        desc: "The maximum total size of chunks that can be incubated within the document. Chunks exceeding this size will immediately graduate to independent chunks.",
    },
    maxAgeInEden: {
        name: "Maximum Incubation Period",
        desc: "The maximum duration for which chunks can be incubated within the document. Chunks exceeding this period will graduate to independent chunks.",
    },
    settingSyncFile: {
        name: "Filename",
        desc: "Save settings to a markdown file. You will be notified when new settings arrive. You can set a different file for each platform.",
    },
    preset: {
        name: "Presets",
        desc: "Apply preset configuration",
    },
    syncMode: {
        name: "Sync Mode",
    },
    periodicReplicationInterval: {
        name: "Periodic Sync interval",
        desc: "Interval (sec)",
    },
    syncInternalFilesBeforeReplication: {
        name: "Scan for hidden files before replication",
    },
    automaticallyDeleteMetadataOfDeletedFiles: {
        name: "Delete old metadata of deleted files on start-up",
        desc: "(Days passed, 0 to disable automatic-deletion)",
    },
    additionalSuffixOfDatabaseName: {
        name: "Database suffix",
        desc: "LiveSync cannot handle multiple vaults with the same name without a different suffix. This should be automatically configured.",
    },
    hashAlg: {
        name: configurationNames["hashAlg"]?.name || "",
        desc: "xxhash64 is the current default.",
    },
    deviceAndVaultName: {
        name: "Device name",
        desc: "Unique name between all synchronized devices. To edit this setting, please disable customization sync once.",
    },
    displayLanguage: {
        name: "Display Language",
        desc: 'Not all messages have been translated. And, please revert to "Default" when reporting errors.',
    },
    enableChunkSplitterV2: {
        name: "Use splitting-limit-capped chunk splitter",
        desc: "If enabled, chunks will be split into no more than 100 items. However, dedupe is slightly weaker.",
    },
    disableWorkerForGeneratingChunks: {
        name: "Do not split chunks in the background",
        desc: "If disabled (toggled), chunks will be split on the UI thread (previous behaviour).",
    },
    processSmallFilesInUIThread: {
        name: "Process small files in the foreground",
        desc: "If enabled, files under 1 KB will be processed in the UI thread.",
    },
    batchSaveMinimumDelay: {
        name: "Minimum delay for batch database updating",
        desc: "Seconds. Saving to the local database will be delayed until this value after we stop typing or saving.",
    },
    batchSaveMaximumDelay: {
        name: "Maximum delay for batch database updating",
        desc: "Saving will be performed forcefully after this number of seconds.",
    },
    notifyThresholdOfRemoteStorageSize: {
        name: "Notify on start-up when the estimated remote storage size exceeds this value",
        desc: "MB (0 to disable).",
    },
    usePluginSyncV2: {
        name: "Enable per-file customization sync",
        desc: "If enabled, efficient per-file customization sync will be used. A minor migration is required when enabling this feature, and all devices must be updated to v0.23.18. Enabling this feature will result in losing compatibility with older versions.",
    },
    handleFilenameCaseSensitive: {
        name: "Handle files as Case-Sensitive",
        desc: "If this is enabled, all files are handled as case-sensitive (previous behaviour).",
    },
    doNotUseFixedRevisionForChunks: {
        name: "Compute revisions for chunks",
        desc: "If this is enabled, all chunks will be stored with a revision made from their content.",
    },
    sendChunksBulkMaxSize: {
        name: "Maximum size of chunks to send in one request",
        desc: "MB",
    },
    useAdvancedMode: {
        name: "Enable advanced features",
        // desc: "Enable advanced mode"
    },
    usePowerUserMode: {
        name: "Enable poweruser features",
        // desc: "Enable power user mode",
        // level: LEVEL_ADVANCED
    },
    useEdgeCaseMode: {
        name: "Enable edge case treatment features",
    },
    enableDebugTools: {
        name: "Enable Developers' Debug Tools.",
        desc: "While enabled, it has a serious performance impact, but debugging of replication testing and other features will be enabled. Please disable this if you have not read the source code. Requires restart of Obsidian.",
    },
    suppressNotifyHiddenFilesChange: {
        name: "Suppress notification of hidden files change",
        desc: "If enabled, the notification of hidden files change will be suppressed.",
    },
    syncMinimumInterval: {
        name: "Minimum interval for syncing",
        desc: "The minimum interval for automatic synchronisation on event.",
    },
    useRequestAPI: {
        name: "Use Request API to avoid `inevitable` CORS problem",
        desc: "If enabled, the request API will be used to avoid `inevitable` CORS problems. This is a workaround and may not work in all cases. PLEASE READ THE DOCUMENTATION BEFORE USING THIS OPTION. This is a less-secure option.",
    },
    hideFileWarningNotice: {
        name: "Show status icon instead of file warnings banner",
        desc: "If enabled, the ⛔ icon will be shown inside the status instead of the file warnings banner. No details will be shown.",
    },
    bucketPrefix: {
        name: "File prefix on the bucket",
        desc: "Effectively a directory. Should end with `/`. e.g., `vault-name/`.",
    },
    chunkSplitterVersion: {
        name: "Chunk Splitter",
        desc: "Now we can choose how to split the chunks; V3 is the most efficient. If you have trouble, please set this to Default or Legacy.",
    },
};
function translateInfo(infoSrc: ConfigurationItem | undefined | false) {
    if (!infoSrc) return false;
    const info = { ...infoSrc };
    info.name = $t(info.name);
    if (info.desc) {
        info.desc = $t(info.desc);
    }
    return info;
}
function _getConfig(key: AllSettingItemKey) {
    if (key in configurationNames) {
        return configurationNames[key as keyof ObsidianLiveSyncSettings];
    }
    if (key in SettingInformation) {
        return SettingInformation[key as keyof ObsidianLiveSyncSettings];
    }
    return false;
}
export function getConfig(key: AllSettingItemKey) {
    return translateInfo(_getConfig(key));
}
export function getConfName(key: AllSettingItemKey) {
    const conf = getConfig(key);
    if (!conf) return `${key} (No info)`;
    return conf.name;
}
export * from "@lib/common/settingConstants.ts";

updates.md
@@ -1,4 +1,54 @@
## 0.25.4

29th July, 2025

### Fixed

- The PBKDF2Salt is no longer corrupted when attempting replication while the device is offline. (#686)
    - If this issue has already occurred, please use `Maintenance` -> `Rebuilding Operations (Remote Only)` -> `Overwrite Remote` and `Send` to resolve it.
    - Please perform this operation on the most reliable device.
    - I am so sorry for the inconvenience; there are no patching workarounds. The rebuilding operation is the only solution.
    - This issue only affects the encryption of the remote database and does not impact the local databases on any devices.
    - (Preventing synchronisation is by design and expected behaviour, even if it is sometimes inconvenient. This is also why we should avoid using workarounds; it is, admittedly, an excuse).
    - In any case, we can unlock the remote from the warning dialogue on the receiving devices. We are performing replication, rather than simple synchronisation, at the expense of a little complexity (thank you again for all your efforts to manage and maintain the settings! Your understanding saves our notes).
    - This process may require considerable time and bandwidth (as usual), so please wait patiently and ensure a stable network connection.

### Side note

The PBKDF2Salt will be referred to as the `Security Seed`; it is used to derive the encryption key for replication, and therefore must be stored on the server prior to synchronisation. We apologise for the lack of explanation in previous updates!
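The role of the shared salt described above can be sketched as follows. This is a minimal illustration, not the plugin's actual code: the key length, iteration count, and function names are assumptions. The point is that every device derives the same key only when it uses the same server-stored seed.

```typescript
import { pbkdf2Sync, randomBytes } from "node:crypto";

// Hypothetical sketch: the "Security Seed" (a PBKDF2 salt) is generated once,
// stored remotely, and every device derives its key from passphrase + that seed.
function deriveKey(passphrase: string, salt: Buffer, iterations = 100_000): Buffer {
    return pbkdf2Sync(passphrase, salt, iterations, 32, "sha256");
}

const sharedSeed = randomBytes(16); // stored on the server before the first sync

const keyA = deriveKey("my-passphrase", sharedSeed); // device A
const keyB = deriveKey("my-passphrase", sharedSeed); // device B
console.log(keyA.equals(keyB)); // true: same seed + passphrase => same key
```

This also illustrates why a corrupted or regenerated seed is unrecoverable by patching: the derived keys diverge, so data encrypted under the old seed can no longer be decrypted, and rebuilding the remote is the only fix.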

## 0.25.3

22nd July, 2025

### Fixed

- The `Doctor` now saves the configuration at migration.

## 0.25.2 ~~0.25.1~~

(0.25.1 was missed due to a mistake in the versioning process).
19th July, 2025

### Refined and New Features

- Fetching the remote database on `RedFlag` now also optionally retrieves remote configurations.
    - This is beneficial if we have already set up another device and wish to use the same configuration. We will see the `Unmatched` dialogue much less frequently.
- The setup wizard using Set-up URI and QR code has been improved.
    - The message is now more user-friendly.
    - The obsolete method (manual setting application) has been removed.
    - The `Cancel` button has been added to the setup wizard.
- We can now fetch the remote configuration from the server if it exists, which is useful for adding new devices.
    - Mostly the same as a `RedFlag` fetch of the remote configuration.
    - We can also use the `Doctor` to check and fix the imported (and fetched) configuration before applying it.

### Changes

- The Set-up URI is now encrypted with a new encryption algorithm (mostly the same as `V2`).
    - The new Set-up URI is not compatible with version 0.24.x or earlier.

## 0.25.0

19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)

After reading Issue #668, I conducted another self-review of the E2EE-related code. In retrospect, it was clearly written by someone inexperienced, which is understandable, but it is still rather embarrassing. Three years is certainly enough time for growth.
@@ -8,31 +58,36 @@ I have now rewritten the E2EE code to be more robust and easier to understand. I
As a result, this is the first time in a while that forward compatibility has been broken. We have also taken the opportunity to change all metadata to use encryption rather than obfuscation. Furthermore, the `Dynamic Iteration Count` setting is now redundant and has been moved to the `Patches` pane in the settings. Thanks to Rabin-Karp, the eden setting is also no longer necessary and has been relocated accordingly. Therefore, v0.25.0 represents a legitimate and correct evolution.

### Fixed

- The encryption algorithm now uses HKDF with a master key.
    - This is more robust and faster than the previous implementation.
    - It is now more secure against rainbow table attacks.
    - The previous implementation can still be used via `Patches` -> `End-to-end encryption algorithm` -> `Force V1`.
        - Note that `V1: Legacy` can decrypt V2, but produces V1 output.
- `Fetch everything from the remote` now works correctly.
    - It no longer creates local database entries before synchronisation.
- Extra log messages during QR code decoding have been removed.
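The HKDF change noted above can be sketched roughly as follows. This is illustrative only; the digest, key sizes, and `info` labels are assumptions, not the plugin's actual parameters. The idea is that one expensive passphrase-derived master key can cheaply expand into independent sub-keys, and a random extraction salt defeats precomputed (rainbow) tables.

```typescript
import { hkdfSync, randomBytes } from "node:crypto";

// Hypothetical sketch: derive independent sub-keys from one master key via HKDF.
const masterKey = randomBytes(32); // in practice, derived once from the passphrase
const salt = randomBytes(16); // random extraction salt

function subKey(info: string): Buffer {
    // HKDF-Extract + HKDF-Expand in one call; `info` separates key purposes.
    return Buffer.from(hkdfSync("sha256", masterKey, salt, info, 32));
}

const contentKey = subKey("content-encryption");
const pathKey = subKey("path-encryption");
console.log(contentKey.equals(pathKey)); // false: distinct "info" => distinct keys
```

Because HKDF expansion is a couple of HMAC invocations rather than a full password-stretching run, deriving many per-purpose keys stays fast while each key remains cryptographically independent.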

### Changed

- The following settings have been moved to the `Patches` pane:
    - `Remote Database Tweak`
    - `Incubate Chunks in Document`
    - `Data Compression`

### Behavioural and API Changes

- `DirectFileManipulatorV2` now requires new settings (as you may already know, E2EEAlgorithm).
- The database version has been increased to `12` from `10`.
    - If an older version is detected, we will be notified and synchronisation will be paused until the update is acknowledged. (It has been a long time since this behaviour was last encountered; we always err on the side of caution, even if it is less convenient.)

### Refactored

- `couchdb_utils.ts` has been separated into several explicitly named files.
- Some missing functions in `bgWorker.mock.ts` have been added.

## 0.24.31

10th July, 2025

### Fixed
@@ -44,56 +99,5 @@ As a result, this is the first time in a while that forward compatibility has be
- The conflict-resolution dialogue is no longer shown for multiple files at once.
    - It is shown for each file, one by one.

## 0.24.30

9th July, 2025

### New Feature

- New chunking algorithm `V3: Fine deduplication` has been added, and will be recommended after updates.
    - The Rabin-Karp algorithm is used for efficient chunking.
    - This will be the default in new installations.
    - It is more robust and faster than the previous one.
    - We can change it in the `Advanced` pane of the settings.
- New language `ko` (Korean) has been added.
    - Thank you for your contribution, [@ellixspace](https://x.com/ellixspace)!
    - Any contributions are welcome, from any route. Please let me know if I seem to be unaware of this. It is often the case that I am not really aware of it.
- Chinese (Simplified) translation has been updated.
    - Thank you for your contribution, [@52sanmao](https://github.com/52sanmao)!
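A content-defined chunker in the Rabin-Karp style mentioned above can be sketched as follows. This is illustrative only; the window size, base, and boundary mask are assumptions, not the plugin's actual parameters. A rolling hash over a small window is updated in O(1) per byte, and a chunk boundary is cut wherever the hash matches a mask, so boundaries depend on content rather than on absolute positions.

```typescript
// Hypothetical sketch of Rabin-Karp style content-defined chunking.
// Returns the end offsets of each chunk; mask 0x1fff targets ~8 KiB chunks.
function chunkBoundaries(data: Uint8Array, win = 16, mask = 0x1fff): number[] {
    const BASE = 257;
    // Precompute BASE^(win-1) mod 2^32, used to drop the byte leaving the window.
    let pow = 1;
    for (let i = 0; i < win - 1; i++) pow = (pow * BASE) >>> 0;
    const boundaries: number[] = [];
    let hash = 0;
    for (let i = 0; i < data.length; i++) {
        if (i >= win) hash = (hash - data[i - win] * pow) >>> 0; // remove oldest byte
        hash = (hash * BASE + data[i]) >>> 0; // add the incoming byte
        if (i >= win - 1 && (hash & mask) === 0) boundaries.push(i + 1); // cut here
    }
    if (boundaries[boundaries.length - 1] !== data.length) boundaries.push(data.length);
    return boundaries;
}
```

Because each boundary depends only on the bytes inside its window, inserting text early in a note shifts only the nearby boundaries; later chunks re-align and deduplicate against their previously stored versions.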

### Fixed

- Numeric settings no longer lose focus while the value is being changed.
- Doctor now redacts more sensitive information in error reports.

### Improved

- All translations have been rewritten in YAML format, to be easier to manage and contribute to.
    - We can write them with comments, newlines, and other YAML features.
- Doctor recommendations are now shown in a user-friendly notation.
    - We can now see the recommendation as `V3: Fine deduplication` instead of `v3-rabin-karp`.

### Refactored

- The never-ending `ObsidianLiveSyncSettingTab.ts` has finally been separated into per-pane files.
- Some commented-out code has been removed.

### Acknowledgement

- Jun Murakami, Shun Ishiguro, and Yoshihiro Oyama. 2012. Implementation and Evaluation of a Cache Deduplication Mechanism with Content-Defined Chunking. In _IPSJ SIG Technical Report_, Vol.2012-ARC-202, No.4. Information Processing Society of Japan, 1-7.

## 0.24.29

20th June, 2025

### Fixed

- Synchronisation with buckets now works correctly, regardless of whether a prefix is set or the bucket has been (re-)initialised (#664).
- An information message is now displayed again while any automatic synchronisation is enabled (#662).

### Tidied up

- Importing paths have been tidied up.

Older notes are in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md).

@@ -14,6 +14,57 @@ Thank you, and I hope your troubles will be resolved!

---

## 0.24.30

9th July, 2025

### New Feature

- New chunking algorithm `V3: Fine deduplication` has been added, and will be recommended after updates.
    - The Rabin-Karp algorithm is used for efficient chunking.
    - This will be the default in new installations.
    - It is more robust and faster than the previous one.
    - We can change it in the `Advanced` pane of the settings.
- New language `ko` (Korean) has been added.
    - Thank you for your contribution, [@ellixspace](https://x.com/ellixspace)!
    - Any contributions are welcome, from any route. Please let me know if I seem to be unaware of this. It is often the case that I am not really aware of it.
- Chinese (Simplified) translation has been updated.
    - Thank you for your contribution, [@52sanmao](https://github.com/52sanmao)!

### Fixed

- Numeric settings no longer lose focus while the value is being changed.
- Doctor now redacts more sensitive information in error reports.

### Improved

- All translations have been rewritten in YAML format, to be easier to manage and contribute to.
    - We can write them with comments, newlines, and other YAML features.
- Doctor recommendations are now shown in a user-friendly notation.
    - We can now see the recommendation as `V3: Fine deduplication` instead of `v3-rabin-karp`.

### Refactored

- The never-ending `ObsidianLiveSyncSettingTab.ts` has finally been separated into per-pane files.
- Some commented-out code has been removed.

### Acknowledgement

- Jun Murakami, Shun Ishiguro, and Yoshihiro Oyama. 2012. Implementation and Evaluation of a Cache Deduplication Mechanism with Content-Defined Chunking. In _IPSJ SIG Technical Report_, Vol.2012-ARC-202, No.4. Information Processing Society of Japan, 1-7.

## 0.24.29

20th June, 2025

### Fixed

- Synchronisation with buckets now works correctly, regardless of whether a prefix is set or the bucket has been (re-)initialised (#664).
- An information message is now displayed again while any automatic synchronisation is enabled (#662).

### Tidied up

- Importing paths have been tidied up.

## 0.24.28

15th June, 2025
