Compare commits

...

43 Commits

Author SHA1 Message Date
vorotamoroz
a9c3f60fe7 bump 2025-08-09 01:51:17 +09:00
vorotamoroz
f996e056af ### Fixed
- Storage scanning no longer occurs when `Suspend file watching` is enabled (including during the boot sequence).

### Improved
- Saving notes and files now consumes less memory.
- Chunk caching is now more efficient.
- Both of these changes may help with #692, #680, and some other issues.

### Changed
- `Incubate Chunks in Document` (also known as `Eden`) is now fully sunset.
- The `Compute revisions for chunks` setting has also been removed.
- As mentioned, `Memory cache size (by total characters)` has been removed.

### Refactored
- A significant refactoring of the core codebase is underway (please refer to the release notes).
2025-08-09 01:45:41 +09:00
vorotamoroz
1073ee9e30 bump 2025-07-29 12:50:25 +01:00
vorotamoroz
f94653e60e ## 0.25.4
- The PBKDF2Salt is no longer corrupted when attempting replication while the device is offline (#686)
2025-07-29 12:45:28 +01:00
vorotamoroz
3dccf2076f Add draft design documents 2025-07-28 19:14:22 +09:00
vorotamoroz
3e78fe03e1 bump 2025-07-22 04:36:22 +01:00
vorotamoroz
4aa8fc3519 ## 0.25.3
### Fixed
- The `Doctor` now saves the configuration during migration.
2025-07-22 04:35:40 +01:00
vorotamoroz
ba3d2220e1 bump again 2025-07-19 18:08:27 +09:00
vorotamoroz
8057b516af bump 2025-07-19 17:51:53 +09:00
vorotamoroz
f2b4431182 ## 0.25.1
19th July, 2025

### Refined and New Features
- Fetching the remote database on `RedFlag` can now optionally retrieve remote configurations as well.
- The setup wizard using Set-up URI and QR code has been improved.

### Changes
- The Set-up URI is now encrypted with a new encryption algorithm (mostly the same as `V2`).
2025-07-19 17:26:52 +09:00
vorotamoroz
badec46d9a 0.25.0 released 2025-07-19 15:21:36 +09:00
vorotamoroz
355e41f488 bump for beta 2025-07-14 00:36:09 +09:00
vorotamoroz
e0e7e1b5ca ### Fixed
- The encryption algorithm now uses HKDF with a master key.
- `Fetch everything from the remote` now works correctly.
- Extra log messages during QR code decoding have been removed.

### Changed
- Some settings have been moved to the `Patches` pane.

### Behavioural and API Changes
- `DirectFileManipulatorV2` now requires new settings (as you may already know, E2EEAlgorithm).
- The database version has been increased to `12` from `10`.
2025-07-14 00:33:40 +09:00
vorotamoroz
ce4b61557a bump 2025-07-10 11:24:59 +01:00
vorotamoroz
52b02f3888 ## 0.24.31
### Fixed

- The description of `Enable Developers' Debug Tools.` has been refined.
- Automatic conflict checking and resolution has been improved.
- The conflict-resolution dialogue is no longer shown for multiple files at once.
2025-07-10 11:12:44 +01:00
vorotamoroz
7535999388 Update updates.md 2025-07-09 22:28:50 +09:00
vorotamoroz
dccf8580b8 Update updates.md 2025-07-09 22:27:52 +09:00
vorotamoroz
e3964f3c5d bump 2025-07-09 12:48:37 +01:00
vorotamoroz
375e7bde31 ### New Feature
- New chunking algorithm `V3: Fine deduplication` has been added, and will be recommended after updates.
- New language `ko` (Korean) has been added.
- Chinese (Simplified) translation has been updated.

### Fixed

- Numeric settings no longer lose focus while their value is being changed.

### Improved
- All translations have been rewritten in YAML format for easier management and contribution.
- Doctor recommendations are now shown in a user-friendly notation.

### Refactored

- The ever-growing `ObsidianLiveSyncSettingTag.ts` has finally been split into a separate file per settings pane.
- Some commented-out code has been removed.
2025-07-09 12:15:59 +01:00
vorotamoroz
1179438df8 bump 2025-06-20 12:43:39 +01:00
vorotamoroz
47ea8f6859 ## 0.24.29
### Fixed

- Synchronisation with buckets now works correctly, regardless of whether a prefix is set or the bucket has been (re-) initialised (#664).
- An information message is displayed again while any automatic synchronisation is enabled (#662).

### Tidied up

- Import paths have been tidied up.
2025-06-20 12:43:15 +01:00
vorotamoroz
670fe16486 Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-06-16 02:53:30 +01:00
vorotamoroz
3f0093916c Add some note 2025-06-16 02:52:59 +01:00
vorotamoroz
9503474d06 bump 2025-06-15 18:49:31 +09:00
vorotamoroz
ddf7b243e4 ## 0.24.28
### Fixed

- Batch Update is no longer available in LiveSync mode to avoid unexpected behaviour. (#653)
- Now compatible with Cloudflare R2 again for bucket synchronisation.
- Added prevention of broken behaviour caused by database connection failures (#649).
2025-06-15 18:49:16 +09:00
vorotamoroz
f37561c3c1 Update Library 2025-06-15 18:24:19 +09:00
vorotamoroz
f01429decc Merge pull request #633 from jmarmstrong1207/patch-1
Add simple docker compose to the setup guide
2025-06-10 11:27:22 +09:00
vorotamoroz
c0fcb66924 bump 2025-06-10 02:52:45 +01:00
vorotamoroz
5f76b9809b ## 0.24.27
### Improved

- A path prefix can now be used for bucket synchronisation.
- The "Use Request API to avoid `inevitable` CORS problem" option has been promoted to a normal setting rather than a niche patch.

### Fixed

- Switching replicators is now applied immediately, without restarting Obsidian.

### Tidied up

- Some dependencies have been updated to the latest version.
2025-06-10 02:51:47 +01:00
vorotamoroz
d61d6fec37 bump manifest 2025-05-14 13:55:42 +01:00
vorotamoroz
9fdd622824 bump 2025-05-14 13:55:17 +01:00
vorotamoroz
3b8d03a189 ## 0.24.26
### New Features

- The display language now changes automatically according to the Obsidian
  language setting.
- Files to be synchronised can now be limited, even among hidden files.
- "Use Request API to avoid `inevitable` CORS problem" has been implemented.
- `Show status icon instead of file warnings banner` has been implemented.

### Improved

- All regular expressions can be inverted by prefixing `!!` now.

### Fixed

- Unexpected files are no longer gathered during hidden file sync.
- `\n` and other new-line characters are no longer broken during the bucket
  synchronisation.
- The remote bucket can be purged again when using MinIO instead of AWS S3 or
  Cloudflare R2.
- Purging the remote bucket is now more reliable.
- Some wrong messages have been fixed.

### Behaviour changed

- Descending into deeper directories to gather hidden files is now limited by
  ignore filters prefixed with `/` or `\/`.

### Etcetera

- Some code has been tidied up.
- Less warning suppression and safer coding practices are being adopted.
- Dependent libraries have been updated to the latest version.
- Some build processes have been separated to `pre` and `post` processes.
2025-05-14 13:11:03 +01:00
James Armstrong
1f1a39e5a0 Add simple docker compose 2025-04-28 06:10:29 -07:00
vorotamoroz
d0e92cff7a refine readme 2025-04-23 06:45:03 +01:00
vorotamoroz
5addddc792 Update doc 2025-04-23 05:51:51 +01:00
vorotamoroz
d978892661 Update docs 2025-04-23 05:11:59 +01:00
vorotamoroz
cfb061a6a2 add note to troubleshooting 2025-04-23 05:10:40 +01:00
vorotamoroz
381055fc93 bump 2025-04-22 11:29:42 +01:00
vorotamoroz
37d12916fc ## 0.24.25
### Improved

- Peer-to-peer synchronisation has become more robust.

### Fixed

- Falsy values in settings are no longer broken during set-up via QR code generation.

### Refactored

- Some `window` references now point to `globalThis`.
- Some sloppy imports have been fixed.
- The server-side implementation `Synchromesh` is now suffixed with `deno` instead of `server`.
2025-04-22 11:28:55 +01:00
vorotamoroz
944aa846c4 bump 2025-04-15 11:10:37 +01:00
vorotamoroz
abca808e29 ### Fixed
- JSON files containing `\n` are no longer broken during bucket synchronisation (#623).
- Custom headers and JWT tokens are now correctly sent to the server during configuration checking. (#624)

### Improved

- Bucket synchronisation has been enhanced for better performance and reliability.
    - Fewer duplicated chunks are now sent to the server.
    - Fetching conflicted files from the server is now more reliable.
    - Dependent libraries have been updated to the latest version.
2025-04-15 11:09:49 +01:00
vorotamoroz
90bb610133 Update submodule 2025-04-14 03:23:24 +01:00
vorotamoroz
9c5e9fe63b Conclusion stated. 2025-04-11 14:13:22 +01:00
58 changed files with 10351 additions and 10491 deletions

View File

@@ -1,35 +1,39 @@
<!-- For translation: 20240227r0 -->
# Self-hosted LiveSync
[Japanese docs](./README_ja.md) - [Chinese docs](./README_cn.md).
Self-hosted LiveSync is a community-implemented synchronization plugin, available on every obsidian-compatible platform and using CouchDB or Object Storage (e.g., MinIO, S3, R2, etc.) as the server.
Self-hosted LiveSync is a community-developed synchronisation plug-in available on all Obsidian-compatible platforms. It leverages robust server solutions such as CouchDB or object storage systems (e.g., MinIO, S3, R2, etc.) to ensure reliable data synchronisation.
Additionally, it supports peer-to-peer synchronisation using WebRTC now (experimental), enabling you to synchronise your notes directly between devices without relying on a server.
![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
Note: This plugin cannot synchronise with the official "Obsidian Sync".
>[!IMPORTANT]
> This plug-in is not compatible with the official "Obsidian Sync" and cannot synchronise with it.
## Features
- Synchronise vaults efficiently with minimal traffic.
- Handle conflicting modifications effectively.
- Automatically merge simple conflicts.
- Use open-source solutions for the server.
- Compatible solutions are supported.
- Support end-to-end encryption.
- Synchronise settings, snippets, themes, and plug-ins via [Customisation Sync (Beta)](#customization-sync) or [Hidden File Sync](#hiddenfilesync).
- Enable WebRTC peer-to-peer synchronisation without requiring a `host` (Experimental).
- This feature is still in the experimental stage. Please exercise caution when using it.
- WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
- Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
- [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
- [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
- A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
- For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).
- Synchronize vaults very efficiently with less traffic.
- Good at conflicted modification.
- Automatic merging for simple conflicts.
- Using OSS solution for the server.
- Compatible solutions can be used.
- Supporting End-to-end encryption.
- Synchronisation of settings, snippets, themes, and plug-ins, via [Customization sync(Beta)](#customization-sync) or [Hidden File Sync](#hiddenfilesync)
- WebClip from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
- WebRTC peer-to-peer synchronisation without the need for any `host` is now possible. (Experimental)
- This feature is still in the experimental stage. Please be careful when using it.
- Instead of using public servers, you can use [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer) the pseudo client for receiving and sending between devices.
- A pre-built instance is served at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (in the vrtmrz blog site). This is of course also peer-to-peer. Feel free to use it.
- There is an [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html), and [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).
This plug-in might be useful for researchers, engineers, and developers with a need to keep their notes fully self-hosted for security reasons. Or just anyone who would like the peace of mind of knowing that their notes are fully private.
This plug-in may be particularly useful for researchers, engineers, and developers who need to keep their notes fully self-hosted for security reasons. It is also suitable for anyone seeking the peace of mind that comes with knowing their notes remain entirely private.
>[!IMPORTANT]
> - Before installing or upgrading this plug-in, please back your vault up.
> - Do not enable this plugin with another synchronization solution at the same time (including iCloud and Obsidian Sync).
> - This is a synchronization plugin. Not a backup solution. Do not rely on this for backup.
> - Before installing or upgrading this plug-in, please back up your vault.
> - Do not enable this plug-in alongside another synchronisation solution at the same time (including iCloud and Obsidian Sync).
> - For backups, we also provide a plug-in called [Differential ZIP Backup](https://github.com/vrtmrz/diffzip).
## How to use
@@ -48,9 +52,11 @@ This plug-in might be useful for researchers, engineers, and developers with a n
1. [Setup CouchDB on fly.io](docs/setup_flyio.md)
2. [Setup your CouchDB](docs/setup_own_server.md)
2. Configure plug-in in [Quick Setup](docs/quick_setup.md)
> [!TIP]
> Now, fly.io has become not free. Fortunately, even though there are some issues, we are still able to use IBM Cloudant. Here is [Setup IBM Cloudant](docs/setup_cloudant.md). It will be updated soon!
> Fly.io is no longer free. Fortunately, despite some issues, we can still use IBM Cloudant. Refer to [Setup IBM Cloudant](docs/setup_cloudant.md).
> And also, we can use peer-to-peer synchronisation without a server. Or very cheap Object Storage -- Cloudflare R2 can be used for free.
> HOWEVER, most importantly, we can use the server that we trust. Therefore, please set up your own server.
> CouchDB can be run on a Raspberry Pi. (But please be careful about the security of your server).
## Information in StatusBar
@@ -80,17 +86,14 @@ Synchronization status is shown in the status bar with the following icons.
To prevent file and database corruption, please wait until all progress indicators have disappeared before closing Obsidian, whenever possible (the plugin will also try to resume, though). This is especially important if you have deleted or renamed files.
## Tips and Troubleshooting
If you are having problems getting the plugin working see: [Tips and Troubleshooting](docs/troubleshooting.md)
If you are having problems getting the plugin working see: [Tips and Troubleshooting](docs/troubleshooting.md).
## Acknowledgements
The project has been in continual progress and harmony because of
- Many [Contributors](https://github.com/vrtmrz/obsidian-livesync/graphs/contributors)
- Many [GitHub Sponsors](https://github.com/sponsors/vrtmrz#sponsors)
- JetBrains Community Programs / Support for Open-Source Projects <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.png" alt="JetBrains logo." height="24">
The project has been in continual progress and harmony thanks to:
- Many [Contributors](https://github.com/vrtmrz/obsidian-livesync/graphs/contributors).
- Many [GitHub Sponsors](https://github.com/sponsors/vrtmrz#sponsors).
- JetBrains Community Programs / Support for Open-Source Projects. <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.png" alt="JetBrains logo" height="24">
May those who have contributed be honoured and remembered for their kindness and generosity.

BIN docs/all_toggles.png (new binary file, 7.3 KiB; image not shown)

View File

@@ -0,0 +1,122 @@
# [WITHDRAWN] Chunk Aggregation by Prefix
## Goal
To address the "document explosion" and storage bloat issues caused by the current chunking mechanism, while preserving the benefits of content-addressable storage and efficient delta synchronisation. This design aims to significantly reduce the number of documents in the database and simplify Garbage Collection (GC).
## Motivation
Our current synchronisation solution splits files into content-defined chunks, with each chunk stored as a separate document in CouchDB, identified by its hash. This architecture effectively leverages CouchDB's replication for automatic deduplication and efficient transfer.
However, this approach faces significant challenges as the number of files and edits increases:
1. **Document Explosion:** A large vault can generate millions of chunk documents, severely degrading CouchDB's performance, particularly during view building and replication.
2. **Storage Bloat & GC Difficulty:** Obsolete chunks generated during edits are difficult to identify and remove. Since CouchDB's deletion (`_deleted: true`) is a soft delete, and compaction is a heavy, space-intensive operation, unused chunks perpetually consume storage, making GC impractical for many users.
3. **The "Eden" Problem:** A previous attempt, "Keep newborn chunks in Eden", aimed to mitigate this by embedding volatile chunks within the parent document. While it reduced the number of standalone chunks, it introduced a new issue: the parent document's history (`_revs_info`) became excessively large, causing its own form of database bloat and making compaction equally necessary but difficult to manage.
This new design addresses the root cause—the sheer number of documents—by aggregating chunks into sets.
## Prerequisites
- The new implementation must maintain the core benefit of deduplication to ensure efficient synchronisation.
- The solution must not introduce a single point of bottleneck and should handle concurrent writes from multiple clients gracefully.
- The system must provide a clear and feasible strategy for Garbage Collection.
- The design should be forward-compatible, allowing for a smooth migration path for existing users.
## Outlined Methods and Implementation Plans
### Abstract
This design introduces a two-tiered document structure to manage chunks: **Index Documents** and **Data Documents**. Chunks are no longer stored as individual documents. Instead, they are grouped into `Data Documents` based on a common hash prefix. The existence and location of each chunk are tracked by `Index Documents`, which are also grouped by the same prefix. This approach dramatically reduces the total document count.
### Detailed Implementation
**1. Document Structure:**
- **Index Document:** Maps chunk hashes to their corresponding Data Document ID. Identified by a prefix of the chunk hash.
- `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
- Content:
```json
{
"_id": "idx:a9f1b",
"_rev": "...",
"chunks": {
"a9f1b12...": "dat:a9f1b-001",
"a9f1b34...": "dat:a9f1b-001",
"a9f1b56...": "dat:a9f1b-002"
}
}
```
- **Data Document:** Contains the actual chunk data as base64-encoded strings. Identified by a prefix and a sequential number.
- `_id`: `dat:{prefix}-{sequence}` (e.g., `dat:a9f1b-001`)
- Content:
```json
{
"_id": "dat:a9f1b-001",
"_rev": "...",
"chunks": {
"a9f1b12...": "...", // base64 data
"a9f1b34...": "..." // base64 data
}
}
```
**2. Configuration:**
- `chunk_prefix_length`: The number of characters from the start of a chunk hash to use as a prefix (e.g., `5`). This determines the granularity of aggregation.
- `data_doc_size_limit`: The maximum size for a single Data Document to prevent it from becoming too large (e.g., 1MB). When this limit is reached, a new Data Document with an incremented sequence number is created.
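To make the relationship between these two options concrete, here is a minimal TypeScript sketch of how they might be grouped in a settings object. The option names come from this document; the interface name and default values are illustrative only.
```ts
// Illustrative grouping of the aggregation options described above.
interface ChunkAggregationSettings {
    /** Leading characters of the chunk hash used as the grouping prefix. */
    chunk_prefix_length: number;
    /** Soft upper bound (in bytes) for a single Data Document. */
    data_doc_size_limit: number;
}

const defaultAggregationSettings: ChunkAggregationSettings = {
    chunk_prefix_length: 5,            // e.g. "a9f1b" from "a9f1b12..."
    data_doc_size_limit: 1024 * 1024,  // 1 MB before a new sequence number is started
};
```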
**3. Write/Save Operation Flow:**
When a client creates new chunks:
1. For each new chunk, determine its hash prefix.
2. Read the corresponding `Index Document` (e.g., `idx:a9f1b`).
3. From the index, determine which of the new chunks already exist in the database.
4. For the **truly new chunks only**:
a. Read the last `Data Document` for that prefix (e.g., `dat:a9f1b-005`).
b. If it is nearing its size limit, create a new one (`dat:a9f1b-006`).
c. Add the new chunk data to the Data Document and save it.
5. Update the `Index Document` with the locations of the newly added chunks.
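The flow above can be sketched as follows, assuming a PouchDB-style `get`/`put` API and a five-character prefix. The `nextSequence` helper (which picks the newest Data Document sequence for a prefix) is assumed, the size-limit check is omitted, and conflict handling is deferred to the retry-and-merge loop described in the next section.
```ts
// Sketch of steps 1–5 above; not the plug-in's actual implementation.
import PouchDB from "pouchdb";

type Chunk = { hash: string; data: string };

// Assumed helper: returns the sequence number of the newest Data Document for an index.
declare function nextSequence(idx: { chunks: Record<string, string> }): number;

async function saveChunks(db: PouchDB.Database, chunks: Chunk[]) {
    // 1. Group the new chunks by their hash prefix.
    const byPrefix = new Map<string, Chunk[]>();
    for (const c of chunks) {
        const prefix = c.hash.slice(0, 5);
        byPrefix.set(prefix, [...(byPrefix.get(prefix) ?? []), c]);
    }
    for (const [prefix, group] of byPrefix) {
        // 2–3. Read the Index Document and keep only chunks not yet stored.
        let idx: any;
        try { idx = await db.get(`idx:${prefix}`); } catch { idx = { _id: `idx:${prefix}`, chunks: {} }; }
        const fresh = group.filter((c) => !(c.hash in idx.chunks));
        if (fresh.length === 0) continue;
        // 4. Append to the latest Data Document for this prefix (size-limit check omitted).
        const datId = `dat:${prefix}-${String(nextSequence(idx)).padStart(3, "0")}`;
        let dat: any;
        try { dat = await db.get(datId); } catch { dat = { _id: datId, chunks: {} }; }
        for (const c of fresh) dat.chunks[c.hash] = c.data;
        await db.put(dat);
        // 5. Record the new locations in the Index Document.
        for (const c of fresh) idx.chunks[c.hash] = datId;
        await db.put(idx);
    }
}
```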
**4. Handling Write Conflicts:**
Concurrent writes to the same `Index Document` or `Data Document` from multiple clients will cause conflicts (409 Conflict). This is expected and must be handled gracefully. Since additions are incremental, the client application must implement a **retry-and-merge loop**:
1. Attempt to save the document.
2. On a conflict, re-fetch the latest version of the document from the server.
3. Merge its own changes into the latest version.
4. Attempt to save again.
5. Repeat until successful or a retry limit is reached.
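A minimal sketch of such a retry-and-merge helper, assuming PouchDB (which reports conflicts as errors with `status === 409`); the `merge` callback is where this client's additions are re-applied on top of the latest revision.
```ts
// Generic retry-and-merge loop for 409 conflicts, as described above.
import PouchDB from "pouchdb";

async function putWithMerge<T extends { _id: string; _rev?: string }>(
    db: PouchDB.Database,
    doc: T,
    merge: (latest: T, mine: T) => T,
    maxRetries = 5
): Promise<void> {
    let candidate = doc;
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
            await db.put(candidate);                 // 1. attempt to save
            return;
        } catch (e: any) {
            if (e.status !== 409) throw e;           // only conflicts are retried
            const latest = (await db.get(candidate._id)) as unknown as T; // 2. re-fetch
            candidate = merge(latest, candidate);    // 3. merge our changes into the latest
            candidate._rev = (latest as any)._rev;   //    and adopt its revision
        }                                            // 4–5. loop until success or give up
    }
    throw new Error("Could not save document after retries");
}
```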
**5. Garbage Collection (GC):**
GC becomes a manageable, periodic batch process:
1. Scan all file metadata documents to build a master set of all *currently referenced* chunk hashes.
2. Iterate through all `Index Documents`. For each chunk listed:
a. If the chunk hash is not in the master reference set, it is garbage.
b. Remove the garbage entry from the `Index Document`.
c. Remove the corresponding data from its `Data Document`.
3. If a `Data Document` becomes empty after this process, it can be deleted.
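The batch process might look like the following sketch, where `referenced` is the master set built in step 1 (building that set from the file metadata documents is omitted here).
```ts
// Sketch of steps 2–3 above, assuming PouchDB and the id scheme from this document.
import PouchDB from "pouchdb";

async function collectGarbage(db: PouchDB.Database, referenced: Set<string>) {
    const indices = await db.allDocs({ startkey: "idx:", endkey: "idx:\ufff0", include_docs: true });
    for (const row of indices.rows) {
        const idx: any = row.doc;
        const garbage = Object.keys(idx.chunks).filter((hash) => !referenced.has(hash));
        if (garbage.length === 0) continue;
        const touchedDataDocs = new Set<string>(garbage.map((hash) => idx.chunks[hash]));
        // 2b. Remove garbage entries from the Index Document.
        for (const hash of garbage) delete idx.chunks[hash];
        await db.put(idx);
        // 2c. Remove the corresponding data from the Data Documents.
        for (const datId of touchedDataDocs) {
            const dat: any = await db.get(datId);
            for (const hash of garbage) delete dat.chunks[hash];
            if (Object.keys(dat.chunks).length === 0) {
                await db.remove(dat);                // 3. empty Data Documents can be deleted
            } else {
                await db.put(dat);
            }
        }
    }
}
```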
## Test Strategy
1. **Unit Tests:** Implement tests for the conflict resolution logic (retry-and-merge loop) to ensure robustness.
2. **Integration Tests:**
- Verify that concurrent writes from multiple simulated clients result in a consistent, merged state without data loss.
- Run a full synchronisation scenario and confirm the resulting database has a significantly lower document count compared to the previous implementation.
3. **GC Test:** Simulate a scenario where files are deleted, run the GC process, and verify that orphaned chunks are correctly removed from both Index and Data documents, and that storage is reclaimed after compaction.
4. **Migration Test:** Develop and test a "rebuild" process for existing users, which migrates their chunk data into the new aggregated structure.
## Documentation Strategy
- This design document will be published to explain the new architecture.
- The configuration options (`chunk_prefix_length`, etc.) will be documented for advanced users.
- A guide for the migration/rebuild process will be provided.
## Future Work
The separation of index and data opens up a powerful possibility. While this design initially implements both within CouchDB, the `Data Documents` could be offloaded to a dedicated object storage service such as **S3, MinIO, or Cloudflare R2**.
In such a hybrid model, CouchDB would handle only the lightweight `Index Documents` and file metadata, serving as a high-speed synchronisation and coordination layer. The bulky chunk data would reside in a more cost-effective and scalable blob store. This would represent the ultimate evolution of this architecture, combining the best of both worlds.
## Consideration and Conclusion
This design directly addresses the scalability limitations of the original chunk-per-document model. By aggregating chunks into sets, it significantly reduces the document count, which in turn improves database performance and makes maintenance feasible. The explicit handling of write conflicts and a clear strategy for garbage collection make this a robust and sustainable long-term solution. It effectively resolves the problems identified in previous approaches, including the "Eden" experiment, by tackling the root cause of database bloat. This architecture provides a solid foundation for future growth and scalability.

View File

@@ -0,0 +1,127 @@
# [WIP] The design intent explanation for using metadata and chunks
## Abstract
## Goal
- To explain the following:
- What metadata and chunks are
- The design intent of using metadata and chunks
## Background and Motivation
We are using PouchDB and CouchDB for storing files and synchronising them. PouchDB is a JavaScript database that stores data on the device (browser, and of course, Obsidian), while CouchDB is a NoSQL database that stores data on the server. The two databases can be synchronised to keep data consistent across devices via the CouchDB replication protocol. This is a powerful and flexible way to store and synchronise data, including conflict management, but it is not well suited for files. Therefore, we needed to manage how to store files and synchronise them.
## Terminology
- Password:
- A string used to authenticate the user.
- Passphrase:
- A string used to encrypt and decrypt data.
- This is not a password.
- Encrypt:
- To convert data into a format that is unreadable to anyone.
- Can be decrypted by the user who has the passphrase.
- Should be 1:n, containing random data to ensure that even the same data, when encrypted, results in different outputs.
- Obfuscate:
- To convert data into a format that is not easily readable.
- Can be decrypted by the user who has the passphrase.
- Should be 1:1, containing no random data, and the same data is always obfuscated to the same result. It is necessarily unreadable.
- Hash:
- To convert data into a fixed-length string that is not easily readable.
- Cannot be decrypted.
- Should be 1:1, containing no random data, and the same data is always hashed to the same result.
## Designs
### Principles
- To synchronise and handle conflicts, we should keep the history of modifications.
- No data should be lost. Even though some extra data may be stored, it should be removed later, safely.
- Each stored data item should be as small as possible to transfer efficiently, but not so small as to be inefficient.
- Any type of file should be supported, including binary files.
- Encryption should be supported efficiently.
- This method should not depart too far from the PouchDB/CouchDB philosophy. It needs to leave room for other `remote`s, to benefit from custom replicators.
As a result, we have adopted the following design.
- Files are stored as one metadata entry and multiple chunks.
- Chunks are content-addressable, and the metadata contains the ids of the chunks.
- Chunks may be referenced from multiple metadata entries. They should be efficiently managed to avoid redundancy.
### Metadata Design
The metadata contains the following information:
| Field | Type | Description | Note |
| -------- | -------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------- |
| _id | string | The id of the metadata | It is created from the file path |
| _rev | string | The revision of the metadata | It is created by PouchDB |
| children | [string] | The ids of the chunks | |
| path | string | The path of the file | If Obfuscate path has been enabled, it has been encrypted |
| size | number | The size of the metadata | Not respected; for troubleshooting |
| ctime | string | The creation timestamp | This is not used to compare files, but when writing to storage, it will be used |
| mtime | string | The modification timestamp | This will be used to compare files, and will be written to storage |
| type | `plain` \| `newnote` | The type of the file | Children of type `plain` will not be base64 encoded, while `newnote` will be |
| e_ | boolean | The file is encrypted | Encryption is processed during transfer to the remote. In local storage, this property does not exist |
#### Decision Rule for `_id` of Metadata
```ts
// Note: This is pseudo-code. UPPER_CASE names stand for settings and helpers;
// OBFUSCATE_PATH is used both as the setting flag and as the obfuscation routine.
function metadataIdFromPath(PATH: string): string {
    let _id = PATH;
    if (!HANDLE_FILES_AS_CASE_SENSITIVE) {
        _id = _id.toLowerCase();
    }
    if (_id.startsWith("_")) {
        // Leading underscores are reserved by CouchDB/PouchDB, so prefix with "/".
        _id = "/" + _id;
    }
    if (OBFUSCATE_PATH) {
        _id = `f:${OBFUSCATE_PATH(_id, E2EE_PASSPHRASE)}`;
    }
    return _id;
}
```
#### Expected Questions
- Why do we need to handle files as case-sensitive?
- Some filesystems are case-sensitive, while others are not. For example, Windows is not case-sensitive, while Linux is. Therefore, we need to handle files as case-sensitive to manage conflicts.
- The trade-off is that you will not be able to manage files with different cases, so this can be disabled if you only have case-sensitive terminals.
- Why obfuscate the path?
- E2EE only encrypts the content of the file, not metadata. Hence, E2EE alone is not enough to protect the vault completely. The path is also part of the metadata, so it should be obfuscated. This is a trade-off between security and performance. However, if you title a note with sensitive information, you should obfuscate the path.
- What is `f:`?
- It is a prefix indicating that the path is obfuscated, used to distinguish obfuscated paths from normal ones. When enumerating files, Self-hosted LiveSync has to scan the database for metadata entries while excluding chunks and other documents, and this prefix makes that distinction possible.
- Why does an unobfuscated path not start with `f:`?
- For compatibility. Self-hosted LiveSync, by its nature, must also be able to handle files created with newer versions as far as possible.
### Chunk Design
#### Chunk Structure
The chunk contains the following information:
| Field | Type | Description | Note |
| ----- | ------------ | ------------------------- | ----------------------------------------------------------------------------------------------------- |
| _id | `h:{string}` | The id of the chunk | It is created from the hash of the chunk content |
| _rev | string | The revision of the chunk | It is created by PouchDB |
| data | string | The content of the chunk | |
| type | `leaf` | Fixed | |
| e_ | boolean | The chunk is encrypted | Encryption is processed during transfer to the remote. In local storage, this property does not exist |
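For illustration, a chunk document as described in the table might be constructed like this; the hash algorithm (SHA-256 via the Web Crypto API here) and the exact id encoding are assumptions for the sketch, not necessarily what the plug-in uses.
```ts
// Illustrative construction of a chunk document from its content.
type ChunkDocument = {
    _id: `h:${string}`; // id derived from the hash of the chunk content
    data: string;       // chunk content (base64-encoded for `newnote` files)
    type: "leaf";       // fixed
};

async function makeChunkDocument(content: string): Promise<ChunkDocument> {
    const bytes = new TextEncoder().encode(content);
    const digest = await crypto.subtle.digest("SHA-256", bytes); // assumed hash function
    const hash = Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    return { _id: `h:${hash}`, data: content, type: "leaf" };
}
```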
**SORRY, TO BE WRITTEN, BUT WE HAVE IMPLEMENTED `v2`, WHICH REQUIRES MORE INFORMATION.**
### How they are unified
## Deduplication and Optimisation
## Synchronisation Strategy
## Performance Considerations
## Security and Privacy
## Edge Cases

View File

@@ -0,0 +1,117 @@
# [IN DESIGN] Tiered Chunk Storage with Live Compaction
**VERY IMPORTANT NOTE: This design must be used with the new journal synchronisation method. Otherwise, we risk introducing the bloat of changes from the hot-pack into the Bucket. (CouchDB/PouchDB can synchronise only the most recent changes, or resolve conflicts; the previous Journal Sync IS NOT able to do this.) Please proceed with caution.**
## Goal
To establish a highly efficient, robust, and scalable synchronisation architecture by introducing a tiered storage system inspired by Log-Structured Merge-Trees (LSM-Trees). This design aims to address the challenges of real-time synchronisation, specifically the massive generation of transient data, while minimising storage bloat and ensuring high performance.
## Motivation
Our previous designs, including "Chunk Aggregation by Prefix", successfully addressed the "document explosion" problem. However, the introduction of real-time editor synchronisation exposed a new, critical challenge: the constant generation of short-lived "garbage" chunks during user input. This "garbage storm" places immense pressure on storage, I/O, and the Garbage Collection (GC) process.
A simple aggregation strategy is insufficient because it treats all data equally, mixing valuable, stable chunks with transient, garbage chunks in permanent storage. This leads to storage bloat and inefficient compaction. We require a system that can intelligently distinguish between "hot" (volatile) and "cold" (stable) data, processing them in the most efficient manner possible.
## Outlined Methods and Implementation Plans
### Abstract
This design implements a two-tiered storage system within CouchDB.
1. **Level 0 Hot Storage:** A set of "Hot-Packs", one for each active client. These act as fast, append-only logs for all newly created chunks. They serve as a temporary staging area, absorbing the "garbage storm" of real-time editing.
2. **Level 1 Cold Storage:** The permanent, immutable storage for stable chunks, consisting of **Index Documents** for fast lookups and **Data Documents (Cold-Packs)** for storing chunk data.
A background "Compaction" process continuously promotes stable chunks from Hot Storage to Cold Storage, while automatically discarding garbage. This keeps the permanent storage clean and highly optimised.
### Detailed Implementation
**1. Document Structure:**
- **Hot-Pack Document (Level 0):** A per-client, append-only log.
- `_id`: `hotpack:{client_id}` (`client_id` could be the same as the `deviceNodeID` used in the `accepted_nodes` in MILESTONE_DOC; enables database 'lockout' for safe synchronisation)
- Content: A log of chunk creation events.
```json
{
"_id": "hotpack:a9f1b12...",
"_rev": "...",
"log": [
{ "hash": "abc...", "data": "...", "ts": ..., "file_id": "file1" },
{ "hash": "def...", "data": "...", "ts": ..., "file_id": "file2" }
]
}
```
- **Index Document (Level 1):** A fast, prefix-based lookup table for stable chunks.
- `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
- Content: Maps a chunk hash to the ID of the Cold-Pack it resides in.
```json
{
"_id": "idx:a9f1b",
"chunks": { "a9f1b12...": "dat:1678886400" }
}
```
- **Cold-Pack Document (Level 1):** An immutable data block created by the compaction process.
- `_id`: `dat:{timestamp_or_uuid}` (e.g., `dat:1678886400123`)
- Content: A collection of stable chunks.
```json
{
"_id": "dat:1678886400123",
"chunks": { "a9f1b12...": "...", "c3d4e5f...": "..." }
}
```
- **Hot-Pack List Document:** A central registry of all active Hot-Packs. This might be a computed document that clients maintain in memory on startup.
- `_id`: `hotpack_list`
- Content: `{"active_clients": ["hotpack:a9f1b12...", "hotpack:c3d4e5f..."]}`
**2. Write/Save Operation Flow (Real-time Editing):**
1. A client generates a new chunk.
2. It **immediately appends** the chunk object (`{hash, data, ts, file_id}`) to its **own** Hot-Pack document's `log` array within its local PouchDB. This operation is extremely fast.
3. The PouchDB synchronisation process replicates this change to the remote CouchDB and other clients in the background. No other Hot-Packs are consulted during this write operation.
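A sketch of this append, assuming PouchDB and the document shape shown above; `clientId` stands for this device's identifier.
```ts
// Append-only write into this client's own Hot-Pack (steps 1–3 above).
import PouchDB from "pouchdb";

type HotLogEntry = { hash: string; data: string; ts: number; file_id: string };

async function appendToHotPack(localDb: PouchDB.Database, clientId: string, entry: HotLogEntry) {
    const id = `hotpack:${clientId}`;
    let doc: { _id: string; _rev?: string; log: HotLogEntry[] };
    try {
        doc = (await localDb.get(id)) as any;
    } catch (e: any) {
        if (e.status !== 404) throw e;
        doc = { _id: id, log: [] };   // first chunk ever written by this client
    }
    doc.log.push(entry);              // append-only; earlier entries are never rewritten
    await localDb.put(doc);           // replication to the remote happens in the background
}
```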
**3. Read/Load Operation Flow:**
To find a chunk's data:
1. The client first consults its in-memory list of active Hot-Pack IDs (see section 5).
2. It searches for the chunk hash in all **Hot-Pack documents**, starting from its own, then others. It reads them in reverse log order (newest first).
3. If not found, it consults the appropriate **Index Document (`idx:...`)** to get the ID of the Cold-Pack.
4. It then reads the chunk data from the corresponding **Cold-Pack document (`dat:...`)**.
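The lookup order might be sketched like this, where `hotPackIds` is the in-memory list of active Hot-Packs (see section 5) and the five-character prefix is illustrative.
```ts
// Tiered chunk lookup following steps 1–4 above.
import PouchDB from "pouchdb";

async function readChunk(db: PouchDB.Database, hotPackIds: string[], hash: string): Promise<string | undefined> {
    // 1–2. Search the Hot-Packs, newest log entries first.
    for (const id of hotPackIds) {
        try {
            const hot: any = await db.get(id);
            const hit = [...hot.log].reverse().find((e: any) => e.hash === hash);
            if (hit) return hit.data;
        } catch (e: any) {
            if (e.status !== 404) throw e;
        }
    }
    // 3. Fall back to the Index Document for this hash prefix.
    let idx: any;
    try { idx = await db.get(`idx:${hash.slice(0, 5)}`); } catch { return undefined; }
    const coldPackId = idx.chunks[hash];
    if (!coldPackId) return undefined;
    // 4. Read the chunk from the Cold-Pack it points to.
    const cold: any = await db.get(coldPackId);
    return cold.chunks[hash];
}
```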
**4. Compaction & Promotion Process (The "GC"):**
This is a background task run periodically by clients, or triggered when the number of unprocessed log entries exceeds a threshold (to maintain the ability to synchronise with the remote database, which has a limited document size).
1. The client takes its own Hot-Pack (`hotpack:{client_id}`) and scans its `log` array from the beginning (oldest first).
2. For each chunk in the log, it checks if the chunk is still referenced in the latest revision of any file.
- **If not referenced (Garbage):** The log entry is simply discarded.
- **If referenced (Stable):** The chunk is added to a "promotion batch".
3. After scanning a certain number of log entries, the client takes the "promotion batch".
4. It creates one or more new, immutable **Cold-Pack (`dat:...`)** documents to store the chunk data from the batch.
5. It updates the corresponding **Index (`idx:...`)** documents to point to the new Cold-Pack(s).
6. Once the promotion is successfully saved to the database, it **removes the processed entries from its Hot-Pack's `log` array**. This is a critical step to prevent reprocessing and keep the Hot-Pack small.
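A condensed sketch of this loop; `isReferenced` (checking whether a hash is still used by the latest revision of any file) and the conflict handling on the Index Documents are assumed to exist elsewhere.
```ts
// Sketch of the compaction/promotion pass (steps 1–6 above).
import PouchDB from "pouchdb";

async function compactHotPack(
    db: PouchDB.Database,
    clientId: string,
    isReferenced: (hash: string) => Promise<boolean>,
    batchSize = 100
) {
    const hot: any = await db.get(`hotpack:${clientId}`);
    const entries = hot.log.slice(0, batchSize);          // 1. oldest entries first
    const promotion: Record<string, string> = {};
    for (const entry of entries) {
        if (await isReferenced(entry.hash)) promotion[entry.hash] = entry.data; // 2. stable
        // otherwise it is garbage and is simply dropped
    }
    if (Object.keys(promotion).length > 0) {
        const coldId = `dat:${Date.now()}`;
        await db.put({ _id: coldId, chunks: promotion }); // 4. new immutable Cold-Pack
        for (const hash of Object.keys(promotion)) {      // 5. update the Index Documents
            let idx: any;
            try { idx = await db.get(`idx:${hash.slice(0, 5)}`); } catch { idx = { _id: `idx:${hash.slice(0, 5)}`, chunks: {} }; }
            idx.chunks[hash] = coldId;
            await db.put(idx);                            // retry-and-merge omitted for brevity
        }
    }
    hot.log = hot.log.slice(entries.length);              // 6. drop the processed entries
    await db.put(hot);
}
```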
**5. Hot-Pack List Management:**
To know which Hot-Packs to read, clients will:
1. On startup, load the `hotpack_list` document into memory.
2. Use PouchDB's live `changes` feed to monitor the creation of new `hotpack:*` documents.
3. Upon detecting an unknown Hot-Pack, the client updates its in-memory list and attempts to update the central `hotpack_list` document (on a best-effort basis, with conflict resolution).
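Watching for new Hot-Packs can lean on PouchDB's live `changes` feed; a minimal sketch follows (updating the central `hotpack_list` document is left as the best-effort step described above).
```ts
// Keep the in-memory set of known Hot-Pack ids up to date (steps 2–3 above).
import PouchDB from "pouchdb";

function watchHotPacks(db: PouchDB.Database, knownHotPacks: Set<string>) {
    db.changes({ live: true, since: "now" }).on("change", (change) => {
        if (change.id.startsWith("hotpack:") && !knownHotPacks.has(change.id)) {
            knownHotPacks.add(change.id);
            // Best effort: also merge the new id into the `hotpack_list` document here.
        }
    });
}
```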
## Planned Test Strategy
1. **Unit Tests:** Test the Compaction/Promotion logic extensively. Ensure garbage is correctly identified and stable chunks are promoted correctly.
2. **Integration Tests:** Simulate a multi-client real-time editing session.
- Verify that writes are fast and responsive.
- Confirm that transient garbage chunks do not pollute the Cold Storage.
- Confirm that after a period of inactivity, compaction runs and the Hot-Packs shrink.
3. **Stress Tests:** Simulate many clients joining and leaving to test the robustness of the `hotpack_list` management.
## Documentation Strategy
- This design document will serve as the core architectural reference.
- The roles of each document type (Hot-Pack, Index, Cold-Pack, List) will be clearly explained for future developers.
- The logic of the Compaction/Promotion process will be detailed.
## Consideration and Conclusion
This tiered storage design is a direct evolution, born from the lessons of previous architectures. It embraces the ephemeral nature of data in real-time applications. By creating a "staging area" (Hot-Packs) for volatile data, it protects the integrity and performance of the permanent "cold" storage. The Compaction process acts as a self-cleaning mechanism, ensuring that only valuable, stable data is retained long-term. This is not just an optimisation; it is a fundamental shift that enables robust, high-performance, and scalable real-time synchronisation on top of CouchDB.

View File

@@ -0,0 +1,97 @@
# [IN DESIGN] Tiered Chunk Storage for Bucket Sync
## Goal
To evolve the "Journal Sync" mechanism by integrating the Tiered Storage architecture. This design aims to drastically reduce the size and number of sync packs, minimise storage consumption on the backend bucket, and establish a clear, efficient process for Garbage Collection, all while remaining protocol-agnostic.
## Motivation
The original "Journal Sync" liberates us from CouchDB's protocol, but it still packages and transfers entire document changes, including bulky and often transient chunk data. In a real-time or frequent-editing scenario, this results in:
1. **Bloated Sync Packs:** Packs become large with redundant or short-lived chunk data, increasing upload and download times.
2. **Inefficient Storage:** The backend bucket stores numerous packs containing overlapping and obsolete chunk data, wasting space.
3. **Impractical Garbage Collection:** Identifying and purging obsolete *chunk data* from within the pack-based journal history is extremely difficult.
This new design addresses these problems by fundamentally changing *what* is synchronised in the journal packs. We will synchronise lightweight metadata and logs, while handling bulk data separately.
## Outlined methods and implementation plans
### Abstract
This design adapts the Tiered Storage model for a bucket-based backend. The backend bucket is partitioned into distinct areas for different data types. The "Journal Sync" process is now responsible for synchronising only the "hot" volatile data and lightweight metadata. A separate, asynchronous "Compaction" process, which can be run by any client, is responsible for migrating stable data into permanent, deduplicated "cold" storage.
### Detailed Implementation
**1. Bucket Structure:**
The backend bucket will have four distinct logical areas (prefixes):
- `packs/`: For "Journal Sync" packs, containing the journal of metadata and Hot-Log changes.
- `hot_logs/`: A dedicated area for each client's "Hot-Log," containing newly created, volatile chunks.
- `indices/`: For prefix-based Index files, mapping chunk hashes to their permanent location in Cold Storage.
- `cold_chunks/`: For deduplicated, stable chunk data, stored by content hash.
**2. Data Structures (Client-side PouchDB & Backend Bucket):**
- **Client Metadata:** Standard file metadata documents, kept in the client's PouchDB.
- **Hot-Log (in `hot_logs/`):** A per-client, append-only log file on the bucket.
- Path: `hot_logs/{client_id}.jsonlog`
- Content: A sequence of JSON objects, one per line, representing chunk creation events. `{"hash": "...", "data": "...", "ts": ..., "file_id": "..."}`
- **Index File (in `indices/`):** A JSON file for a given hash prefix.
- Path: `indices/{prefix}.json`
- Content: Records which chunk hashes already exist in Cold Storage; the hash itself is the key under `cold_chunks/`. `{"hash_abc...": true, "hash_def...": true}`
- **Cold Chunk (in `cold_chunks/`):** The raw, immutable, deduplicated chunk data.
- Path: `cold_chunks/{chunk_hash}`
**3. "Journal Sync" - Send/Receive Operation (Not Live):**
This process is now extremely lightweight.
1. **Send:**
a. The client takes all newly generated chunks and **appends them to its own Hot-Log file (`hot_logs/{client_id}.jsonlog`)** on the bucket.
b. The client updates its local file metadata in PouchDB.
c. It then creates a "Journal Sync" pack containing **only the PouchDB journal of the file metadata changes.** This pack is very small as it contains no chunk data.
d. The pack is uploaded to `packs/`.
2. **Receive:**
a. The client downloads new packs from `packs/` and applies the metadata journal to its local PouchDB.
b. It downloads the latest versions of all **other clients' Hot-Log files** from `hot_logs/`.
c. Now the client has a complete, up-to-date view of all metadata and all "hot" chunks.
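The send half of this flow might look like the following sketch. The `Bucket` interface is a deliberately generic abstraction (its `getObject`/`putObject` methods are not any particular SDK's API), and because plain object storage usually has no append operation, the Hot-Log append is shown as a read-modify-write. Step 1b (updating the local PouchDB metadata) is outside this sketch.
```ts
// Sketch of the send steps 1a–1d above; not tied to a specific storage SDK.
interface Bucket {
    getObject(key: string): Promise<string | undefined>;
    putObject(key: string, body: string): Promise<void>;
}

type HotLogEntry = { hash: string; data: string; ts: number; file_id: string };

async function sendChanges(
    bucket: Bucket,
    clientId: string,
    newChunks: HotLogEntry[],
    metadataJournal: unknown
) {
    // 1a. Append the new chunks to this client's Hot-Log (one JSON object per line).
    const logKey = `hot_logs/${clientId}.jsonlog`;
    const existing = (await bucket.getObject(logKey)) ?? "";
    const lines = newChunks.map((entry) => JSON.stringify(entry)).join("\n");
    await bucket.putObject(logKey, existing + lines + "\n");
    // 1c–1d. Upload a small pack containing only the metadata journal.
    const packKey = `packs/${clientId}-${Date.now()}.pack.json`;
    await bucket.putObject(packKey, JSON.stringify(metadataJournal));
}
```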
**4. Read/Load Operation Flow:**
To find a chunk's data:
1. The client searches for the chunk hash in its local copy of all **Hot-Logs**.
2. If not found, it downloads and consults the appropriate **Index file (`indices/{prefix}.json`)**.
3. If the index confirms existence, it downloads the data from **`cold_chunks/{chunk_hash}`**.
**5. Compaction & Promotion Process (Asynchronous "GC"):**
This is a deliberate, offline-capable process that any client can choose to run.
1. The client "leases" its own Hot-Log for compaction.
2. It reads its entire `hot_logs/{client_id}.jsonlog`.
3. For each chunk in the log, it checks if the chunk is referenced in the *current, latest state* of the file metadata.
- **If not referenced (Garbage):** The log entry is discarded.
- **If referenced (Stable):** The chunk is added to a "promotion batch."
4. For each chunk in the promotion batch:
a. It checks the corresponding `indices/{prefix}.json` to see if the chunk already exists in Cold Storage.
b. If it does not exist, it **uploads the chunk data to `cold_chunks/{chunk_hash}`** and updates the `indices/{prefix}.json` file.
5. Once the entire Hot-Log has been processed, the client **deletes its `hot_logs/{client_id}.jsonlog` file** (or truncates it to empty), effectively completing the cycle.
## Test strategy
1. **Component Tests:** Test the Compaction process independently. Ensure it correctly identifies stable versus garbage chunks and populates the `cold_chunks/` and `indices/` areas correctly.
2. **Integration Tests:**
- Simulate a multi-client sync cycle. Verify that sync packs in `packs/` are small.
- Confirm that `hot_logs/` are correctly created and updated.
- Run the Compaction process and verify that data migrates correctly to cold storage and the hot log is cleared.
3. **Conflict Tests:** Simulate two clients trying to compact the same index file simultaneously and ensure the outcome is consistent (for example, via a locking mechanism or last-write-wins).
## Documentation strategy
- This design document will be the primary reference for the bucket-based architecture.
- The structure of the backend bucket (`packs/`, `hot_logs/`, etc.) will be clearly defined.
- A detailed description of how to run the Compaction process will be provided to users.
## Consideration and Conclusion
By applying the Tiered Storage model to "Journal Sync", we transform it into a remarkably efficient system. The synchronisation of everyday changes becomes extremely fast and lightweight, as only metadata journals are exchanged. The heavy lifting of data deduplication and permanent storage is offloaded to a separate, asynchronous Compaction process. This clear separation of concerns makes the system highly scalable, minimises storage costs, and finally provides a practical, robust solution for Garbage Collection in a protocol-agnostic, bucket-based environment.

View File

@@ -1,6 +1,6 @@
# Keep newborn chunks in Eden.
# Keep newborn chunks in Eden
NOTE: This is the planned feature design document. This is planned, but not be implemented now (v0.23.3). This has not reached the design freeze and will be added to from time to time.
Notice: deprecated. Please refer to the Results section of this document.
## Goal
@@ -19,15 +19,18 @@ Reduce the number of chunks which in volatile, and reduce the usage of storage o
- The problem is that this unnecessary chunking slows down both local and remote operations.
## Prerequisite
- The implementation must be able to control the size of the document appropriately so that it does not become non-transferable (1).
- The implementation must be such that data corruption can be avoided even if forward compatibility is not maintained; due to the nature of Self-hosted LiveSync, backward version connexions are expected.
- The implementation must be such that data corruption can be avoided even if forward compatibility is not maintained; due to the nature of Self-hosted LiveSync, backward version connexions are expected.
- Viewed as a feature:
- This feature should be disabled for migration users.
- This feature should be enabled for new users and after rebuilds of migrated users.
- Therefore, back into the implementation view, Ideally, the implementation should be such that data recovery can be achieved by immediately upgrading after replication.
## Outlined methods and implementation plans
### Abstract
To store and transfer only stable chunks independently and share them from multiple documents after stabilisation, new chunks, i.e. chunks that are considered non-stable, are modified to be stored in the document and transferred with the document. In this case, care should be taken not to exceed prerequisite (1).
If this is achieved, the non-leaf document will not be transferred, and even if it is, the chunk will be stored in the document, so that the size can be reduced by the compaction.
@@ -40,11 +43,11 @@ Details are given below.
type EntryWithEden = {
eden: {
[key: DocumentID]: {
data: string,
epoch: number, // The document revision which this chunk has been born.
}
}
}
data: string;
epoch: number; // The document revision which this chunk has been born.
};
};
};
```
2. The following configuration items are added:
Note: These configurations should be shared as `Tweaks value` between each client.
@@ -63,6 +66,7 @@ Details are given below.
5. In End-to-End Encryption, property `eden` of documents will also be encrypted.
### Note
- When this feature has been enabled, forward compatibility is temporarily lost. However, it is detected as missing chunks, and this data is not reflected in the storage in the old version. Therefore, no data loss will occur.
## Test strategy
@@ -77,5 +81,26 @@ Details are given below.
- Indeed, we lack a fulfilled configuration table. Efforts will be made and, if they can be produced, this document will then be referenced. But not required while in the experimental or beta feature.
- However, this might be an essential feature. Further efforts are desired.
## Results from actual operation
After implementing this feature, we have been using it for a while. The following results were obtained.
- Drawbacks that were thought not to be a problem turned out to be real problems:
    - A document with `Eden` has a considerably larger history than a document without `Eden`.
    - Self-hosted LiveSync does not perform compaction aggressively, which results in the remote database becoming partially bloated.
    - Compaction of the remote database (CouchDB) requires as much free space as the size of the database itself. Therefore, compaction is no longer possible once the database has reached its maximum size; by the time we detect the problem, it is too late.
    - We have mentioned `We need compaction` in previous sections. However, it was very hard to determine whether compaction was required until the database had already bloated. (Compacting the database also takes time and, literally, documents lose their history, so it is not a good idea to do it frequently and without reason. A manual decision is needed, but that is genuinely difficult for ordinary users.)
### Consideration and Conclusion
To be described after implemented, tested, and, released.
This feature has two aspects:
- For users who are familiar with CouchDB, this feature is somewhat useful. They can watch and manage the database by themselves.
- For users who are not familiar with CouchDB, i.e., ordinary users, this feature is not very useful. They are not familiar with the database and do not know how to handle it, so they cannot decide whether compaction is required.
Hence, this feature will be kept as an experimental feature, but it is not enabled by default; in addition, it is marked as deprecated. A detailed notice would only be noise for users who are not familiar with CouchDB, so the details are kept in this document for the future.
Using this feature is not recommended unless you are familiar with CouchDB and database management.
Vorotamoroz has written this document. Bias: I am the first author of this plug-in and am familiar with CouchDB.
Research and development was frozen on 2025-04-11, but bugs will be fixed if they are found. Please feel free to report them.

View File

@@ -49,6 +49,23 @@ Once CouchDB run, these directories will be owned by uid:`5984`. Please chown it
```
$ docker run --name couchdb-for-ols -d --restart always -e COUCHDB_USER=${username} -e COUCHDB_PASSWORD=${password} -v ${PWD}/couchdb-data:/opt/couchdb/data -v ${PWD}/couchdb-etc:/opt/couchdb/etc/local.d -p 5984:5984 couchdb
```
If you prefer a Compose file to `docker run`, here is the equivalent:
```
services:
couchdb:
image: couchdb:latest
container_name: couchdb-for-ols
user: 1000:1000
environment:
- COUCHDB_USER=${username}
- COUCHDB_PASSWORD=${password}
volumes:
- ./couchdb-data:/opt/couchdb/data
- ./couchdb-etc:/opt/couchdb/etc/local.d
ports:
- 5984:5984
restart: unless-stopped
```
### B. Install CouchDB directly
Please refer to the [official document](https://docs.couchdb.org/en/stable/install/index.html). However, we do not have to configure it fully. Just the administrator needs to be configured.

View File

@@ -1,8 +1,16 @@
<!-- 2025-02-18 -->
# Tips and Troubleshooting
- [Tips and Troubleshooting](#tips-and-troubleshooting)
- [Tips](#tips)
- [CORS avoidance](#cors-avoidance)
- [CORS configuration with reverse proxy](#cors-configuration-with-reverse-proxy)
- [Nginx](#nginx)
- [Nginx and subdirectory](#nginx-and-subdirectory)
- [Caddy](#caddy)
- [Caddy and subdirectory](#caddy-and-subdirectory)
- [Apache](#apache)
- [Show all setting panes](#show-all-setting-panes)
- [How to resolve `Tweaks Mismatched of Changed`](#how-to-resolve-tweaks-mismatched-of-changed)
- [Notable bugs and fixes](#notable-bugs-and-fixes)
- [Binary files get bigger on iOS](#binary-files-get-bigger-on-ios)
- [Some setting name has been changed](#some-setting-name-has-been-changed)
@@ -14,24 +22,142 @@
- [Why are the logs volatile and ephemeral?](#why-are-the-logs-volatile-and-ephemeral)
- [Some network logs are not written into the file.](#some-network-logs-are-not-written-into-the-file)
- [If a file were deleted or trimmed, the capacity of the database should be reduced, right?](#if-a-file-were-deleted-or-trimmed-the-capacity-of-the-database-should-be-reduced-right)
- [How to launch the DevTools](#how-to-launch-the-devtools)
- [On Desktop Devices](#on-desktop-devices)
- [On Android](#on-android)
- [On iOS, iPadOS devices](#on-ios-ipados-devices)
- [How can I use the DevTools?](#how-can-i-use-the-devtools)
- [Checking the network log](#checking-the-network-log)
- [Troubleshooting](#troubleshooting)
- [While using Cloudflare Tunnels, often Obsidian API fallback and `524` error occurs.](#while-using-cloudflare-tunnels-often-obsidian-api-fallback-and-524-error-occurs)
- [On the mobile device, cannot synchronise on the local network!](#on-the-mobile-device-cannot-synchronise-on-the-local-network)
- [I think that something bad happening on the vault...](#i-think-that-something-bad-happening-on-the-vault)
- [Tips](#tips)
- [How to resolve `Tweaks Mismatched of Changed`](#how-to-resolve-tweaks-mismatched-of-changed)
- [Old tips](#old-tips)
<!-- - -->
## Tips
### CORS avoidance
If we are unable to configure CORS properly for any reason (for example, if we cannot configure non-administered network devices), we may choose to ignore CORS.
To use the Obsidian API (also known as the Non-Native API) to bypass CORS, we can enable the toggle ``Use Request API to avoid `inevitable` CORS problem``.
<!-- Add **Long explanation of CORS** here for integrity -->
### CORS configuration with reverse proxy
- IMPORTANT: CouchDB handles CORS by itself. Do not process CORS on the reverse
proxy.
- Do not process `OPTIONS` requests on the reverse proxy!
- Make sure `host` and `X-Forwarded-For` headers are forwarded to the CouchDB.
- If you are using a subdirectory, make sure to handle it properly. More
detailed information is in the
[CouchDB documentation](https://docs.couchdb.org/en/stable/best-practices/reverse-proxies.html).
Minimal configurations are as follows:
#### Nginx
```nginx
location / {
proxy_pass http://localhost:5984;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
#### Nginx and subdirectory
```nginx
location /couchdb {
rewrite ^ $request_uri;
rewrite ^/couchdb/(.*) /$1 break;
proxy_pass http://localhost:5984$uri;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /_session {
proxy_pass http://localhost:5984/_session;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
#### Caddy
```caddyfile
domain.com {
reverse_proxy localhost:5984
}
```
#### Caddy and subdirectory
```caddyfile
domain.com {
reverse_proxy /couchdb/* localhost:5984
reverse_proxy /_session/* localhost:5984/_session
}
```
#### Apache
Sorry, Apache is not recommended for CouchDB. Omit the configuration from here.
Please refer to the
[Official documentation](https://docs.couchdb.org/en/stable/best-practices/reverse-proxies.html#reverse-proxying-with-apache-http-server).
### Show all setting panes
Not all setting panes are shown by default. To show all panes, please toggle
everything in `🧙‍♂️ Wizard` -> `Enable extra and advanced features`.
For your information, all the panes are as follows:
![All Panes](all_toggles.png)
### How to resolve `Tweaks Mismatched of Changed`
(Since v0.23.17)
If you have changed some configurations or tweaks which should be unified
between devices, you will be asked at the next synchronisation whether (and how)
to reflect them on other devices. This also occurs on the device where the
changes were made, to prevent unexpected configuration changes from propagating
unintentionally.\
(We may be thankful for this behaviour if we have synchronised, or backed up and
restored, Self-hosted LiveSync. At least, it has been so for me.)
The following dialogue will be shown: ![Dialogue](tweak_mismatch_dialogue.png)
- If we want to propagate the setting of the device, we should choose
`Update with mine`.
- On other devices, we should choose `Use configured` to accept and use the
configured configuration.
- `Dismiss` can postpone a decision. However, we cannot synchronise until we
have decided.
Rest assured that in most cases we can choose `Use configured`. (Unless you are
certain that you have not changed the configuration).
If we see it for the first time, it reflects the settings of the device that has
been synchronised with the remote for the first time since the upgrade.
Probably, we can accept that.
<!-- Add here -->
## Notable bugs and fixes
### Binary files get bigger on iOS
- Reported at: v0.20.x
- Fixed at: v0.21.2 (Fixed but not reviewed)
- Required action: larger files will not be fixed automatically; please perform
  `Verify and repair all files`. If our local database and storage do not match, we
  will be asked which one to apply.
### Some setting names have been changed
@@ -48,39 +174,50 @@
### Why is `Use an old adapter for compatibility` somehow enabled in my vault?
Because you are a compassionate and experienced user. Before v0.17.16, we used an
old adapter for the local database; at that time, the current default adapter was
not yet stable. The new adapter has better performance and new features such as
purging. Therefore, we should use the new adapter, and it is now the default.
However, switching from the old adapter to the new one requires some conversion or
a local database rebuild, which takes some time. It was a long time ago now, but we
once inconvenienced everyone in a hurry when we changed the format of our database.
For these reasons, this toggle is automatically enabled if you have upgraded from a
vault which used the old adapter.
When you rebuild everything or fetch from the remote again, you will be asked to
switch this.
Therefore, experienced users (especially those stable enough not to have to rebuild
the database) may have this toggle enabled in their vault. Please disable it when
you have enough time.
### ZIP (or other extension) files were not synchronised. Why?
It depends on what Obsidian detects. Toggling `Detect all extensions` in
`Files and links` (Obsidian's settings) may help.
### I want to report an issue, but you said you need a `Report`. How do I make it?
We can copy the report to the clipboard by pressing the `Make report` button on the
`Hatch` pane. ![Screenshot](../images/hatch.png)
### Where can I check the log?
We can open the log pane with `Show log` from the command palette. If you have run
into trouble, please also enable `Verbose Log` on the `General Setting` pane.
However, the logs are not kept for long and are cleared when Obsidian restarts. If
you want to keep the logs, please enable `Write logs into the file` temporarily.
![ScreenShot](../images/write_logs_into_the_file.png)
> [!IMPORTANT]
>
> - Writing logs into the file will impact the performance.
> - Please make sure that you have erased all your confidential information
>   before reporting an issue.
### Why are the logs volatile and ephemeral?
@@ -88,41 +225,89 @@ To avoid unexpected exposure to our confidential things.
### Some network logs are not written into the file.
In particular, CORS errors are reported to the plug-in as a generic error for
security reasons, so we cannot detect and log them. We can only investigate them by
[Checking the network log](#checking-the-network-log).
### If a file were deleted or trimmed, the capacity of the database should be reduced, right?
No. Even if files are deleted, their chunks are not deleted. Self-hosted LiveSync
splits files into multiple chunks and transfers only newly created ones. This
behaviour reduces traffic, and the chunks are shared between files to reduce the
total usage of the database.
One more thing: we can handle conflicts on any device, even if they happened on
other devices. This means that conflicts may be resolved later, against revisions
from the past, after we have already synchronised. Hence we cannot collect and
delete unused chunks, even if they are not currently referenced.
To shrink the database size, only `Rebuild everything` is reliable and effective.
But do not worry: if we have synchronised well, we still have the actual, real
files. It only takes a bit of time and traffic.
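To illustrate why deleting a file does not immediately free space, here is a
deliberately simplified sketch (not the plugin's actual code) of content-addressed
chunks being shared between notes:
```ts
// Simplified illustration only: the real plugin hashes chunk contents and stores
// them as database documents, but the idea is the same.
const chunkStore = new Map<string, string>(); // chunk id -> content

function saveNote(content: string, chunkSize = 4): string[] {
    const chunkIds: string[] = [];
    for (let i = 0; i < content.length; i += chunkSize) {
        const piece = content.slice(i, i + chunkSize);
        const id = `h:${piece}`; // stands in for a content hash
        chunkStore.set(id, piece); // storing an identical piece again costs nothing
        chunkIds.push(id);
    }
    return chunkIds; // the note's own document keeps only this list of ids
}

const noteA = saveNote("hello world");
const noteB = saveNote("hello there");
// noteA and noteB share the "hell" chunk, so less data is stored and transferred.
// Deleting noteA removes only its id list; the chunks stay, because another note -
// or a conflict still to be resolved on another device - may still need them.
console.log(noteA, noteB, chunkStore.size);
```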
### How to launch the DevTools
#### On Desktop Devices
We can launch the DevTools by pressing `Ctrl`+`Shift`+`i` (`Cmd`+`Option`+`i` on Mac).
#### On Android
Please refer to [Remote debug Android devices](https://developer.chrome.com/docs/devtools/remote-debugging/).
Once the DevTools have been launched, everything operates the same as on a PC.
#### On iOS, iPadOS devices
If we have a Mac, we can inspect from Safari on the Mac. Please refer to [Inspecting iOS and iPadOS](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios).
### How can I use the DevTools?
#### Checking the network log
1. Open the network pane.
2. Find the requests marked in red.\
![Errored](../images/devtools1.png)
3. Capture the `Headers`, `Payload`, and `Response`. **Please be sure to keep
   important information confidential.** If the `Response` contains secrets, you can
   omit it. Note: the headers contain some credentials. **The path of the request
   URL, the Remote Address, authority, and authorization must be concealed.**\
![Concealed sample](../images/devtools2.png)
## Troubleshooting
<!-- Add here -->
### While using Cloudflare Tunnels, the Obsidian API fallback and `524` errors often occur.
A `524` error occurs when a request to the server is not completed within a
`specified time`; it is a timeout error from Cloudflare. From the reported issue,
the limit seems to be 100 seconds (#627).
Therefore, this error is returned by Cloudflare, not by the server. Hence, the
response contains no CORS headers, which is what makes the Obsidian API fall back.
However, even after the Obsidian API fallback, the request is still not completed
within the `specified time` of 100 seconds.
To solve this issue, we need to configure the timeout settings.
Please enable the toggle in `💪 Power users` -> `CouchDB Connection Tweak` ->
`Use timeouts instead of heartbeats`.
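Conceptually, the toggle swaps one long-held `_changes` connection for shorter
requests that are re-issued before the tunnel's limit. A rough sketch using
PouchDB's replication options (illustrative only; the plugin configures this
internally, and the URL is a placeholder):
```ts
import PouchDB from "pouchdb";

const local = new PouchDB("vault");
const remote = new PouchDB("https://your-tunnel.example.com/couchdb/vault");

// Heartbeat style: one long-polling request stays open and is kept alive with
// periodic pings - Cloudflare terminates it at about 100 seconds with a 524.
// local.replicate.from(remote, { live: true, retry: true, heartbeat: 10_000 });

// Timeout style (`Use timeouts instead of heartbeats`): each request is closed and
// re-issued within `timeout` milliseconds, staying under the tunnel's limit.
local.replicate.from(remote, { live: true, retry: true, heartbeat: false, timeout: 60_000 });
```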
### On a mobile device, I cannot synchronise over the local network!
Obsidian mobile cannot connect to a non-secure endpoint, such as one starting with
`http://`. Double-check the URI of your CouchDB. A self-signed certificate cannot be
used either.
### I think that something bad is happening in the vault...
Place `redflag.md` at the top of the vault and restart Obsidian. The simplest way
is to create a new note and rename it to `redflag`. Of course, we can also put it
there without Obsidian.
If `redflag.md` exists, Self-hosted LiveSync suspends all database and storage
processes.
There are some options for using `redflag.md`.
@@ -132,44 +317,48 @@ There are some options to use `redflag.md`.
| `redflag2.md` | `flag_rebuild.md` | Suspends all processes and rebuilds both the local and remote databases from local files. |
| `redflag3.md` | `flag_fetch.md` | Suspends all processes, discards the local database, and fetches from the remote again. |
When fetching everything from the remote or performing a rebuild, Obsidian is
restarted once for safety reasons. At that time, Self-hosted LiveSync uses these
files to determine whether the process should be carried out. (The use of normal
markdown files is a trick to externally force cancellation in the event of faults
in the rebuild or fetch function itself, especially on mobile devices.) This
mechanism is also used for set-up. For your information, these files are also not
subject to synchronisation.
However, occasionally the deletion of files may fail. This should generally work normally after restarting Obsidian. (As far as I can observe).
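For reference, a rough sketch of the flag-file mechanism described above (not the
plugin's actual code); the flag files are ordinary markdown files whose presence is
checked during the boot sequence:
```ts
import { existsSync } from "node:fs";
import { join } from "node:path";

type BootMode = "suspend" | "rebuild" | "fetch" | "normal";

// Decide what the boot sequence should do, based on which flag file exists.
function decideBootMode(vaultRoot: string): BootMode {
    const has = (name: string) => existsSync(join(vaultRoot, name));
    if (has("redflag.md")) return "suspend"; // stop all database and storage processes
    if (has("redflag2.md") || has("flag_rebuild.md")) return "rebuild"; // rebuild from local files
    if (has("redflag3.md") || has("flag_fetch.md")) return "fetch"; // discard the local DB and fetch again
    return "normal";
}

console.log(decideBootMode("/path/to/vault"));
```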
## Tips
<!-- Add here -->
### Old tips
- Rarely, a file in the database could be corrupted. The plugin will not write a
  file to local storage when it looks corrupted. If a local version of the file is
  on your device, the corruption can be fixed by editing the local file and
  synchronizing it. But if the file does not exist on any of your devices, it
  cannot be rescued. In this case, you can delete these items from the settings
  dialog.
- To stop the boot-up sequence (e.g. for fixing problems with the databases), you
  can put a `redflag.md` file (or directory) at the root of your vault. Tip for
  iOS: a redflag directory can be created at the root of the vault using the Files
  application.
- Also, with `redflag2.md` placed, we can automatically rebuild both the local and
  the remote databases during the boot-up sequence. With `redflag3.md`, we can
  discard only the local database and fetch from the remote again.
- Q: The database is growing; how can I shrink it down?
  A: Each document is saved with its past 100 revisions for detecting and resolving
  conflicts. Picture one device that has been offline for a while and then comes
  online again. The device has to compare its notes with the remotely saved ones.
  If there is a historic revision in which the note used to be identical, it can be
  updated safely (like a git fast-forward). Even if there is no such revision, we
  only have to check the differences after the last revision that both devices have
  in common. This is like git's conflict-resolving method. So, we have to make the
  database again, like an enlarged git repo, if we want to solve the root of the
  problem.
- More technical information is in the [Technical Information](tech_info.md) document.
- If you want to synchronize files without Obsidian, you can use
  [filesystem-livesync](https://github.com/vrtmrz/filesystem-livesync).
- A WebClipper is also available on the Chrome Web Store:
  [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
  Repo is here:
  [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip).
  (Docs are a work in progress.)

View File

@@ -40,8 +40,7 @@ export default [
"src/lib/test",
"src/lib/src/cli",
"**/main.js",
"src/lib/apps/webpeer/dist",
"src/lib/apps/webpeer/svelte.config.js",
"src/lib/apps/webpeer/*"
],
},
...compat.extends(

View File

@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.24.0",
"version": "0.25.2",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

View File

@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.24.23",
"version": "0.25.5",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

11473
package-lock.json generated

File diff suppressed because it is too large Load Diff

View File

@@ -1,13 +1,21 @@
{
"name": "obsidian-livesync",
"version": "0.24.23",
"version": "0.25.5",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
"scripts": {
"bakei18n": "npx tsx ./src/lib/_tools/bakei18n.ts",
"i18n:bakejson": "npx tsx ./src/lib/_tools/bakei18n.ts",
"i18n:yaml2json": "npx tsx ./src/lib/_tools/yaml2json.ts",
"i18n:json2yaml": "npx tsx ./src/lib/_tools/json2yaml.ts",
"prettyjson": "prettier --config ./.prettierrc ./src/lib/src/common/messagesJson/*.json --write --log-level error",
"postbakei18n": "prettier --config ./.prettierrc ./src/lib/src/common/messages/*.ts --write --log-level error",
"posti18n:yaml2json": "npm run prettyjson",
"predev": "npm run bakei18n",
"dev": "node esbuild.config.mjs",
"build": "npm run bakei18n && node esbuild.config.mjs production",
"prebuild": "npm run bakei18n",
"build": "node esbuild.config.mjs production",
"buildDev": "node esbuild.config.mjs dev",
"lint": "eslint src",
"svelte-check": "svelte-check --tsconfig ./tsconfig.json",
@@ -15,19 +23,20 @@
"pretty": "npm run prettyNoWrite -- --write --log-level error",
"prettyCheck": "npm run prettyNoWrite -- --check",
"prettyNoWrite": "prettier --config ./.prettierrc \"**/*.js\" \"**/*.ts\" \"**/*.json\" ",
"check": "npm run lint && npm run svelte-check && npm run tsc-check"
"check": "npm run lint && npm run svelte-check && npm run tsc-check",
"unittest": "deno test -A --no-check --coverage=cov_profile --v8-flags=--expose-gc --trace-leaks ./src/"
},
"keywords": [],
"author": "vorotamoroz",
"license": "MIT",
"devDependencies": {
"@chialab/esbuild-plugin-worker": "^0.18.1",
"@eslint/compat": "^1.2.6",
"@eslint/eslintrc": "^3.2.0",
"@eslint/js": "^9.20.0",
"@eslint/compat": "^1.2.7",
"@eslint/eslintrc": "^3.3.0",
"@eslint/js": "^9.21.0",
"@tsconfig/svelte": "^5.0.4",
"@types/diff-match-patch": "^1.0.36",
"@types/node": "^22.5.4",
"@types/node": "^22.13.8",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
"@types/pouchdb-adapter-idb": "^6.1.7",
@@ -36,21 +45,23 @@
"@types/pouchdb-mapreduce": "^6.1.10",
"@types/pouchdb-replication": "^6.4.7",
"@types/transform-pouch": "^1.0.6",
"@typescript-eslint/eslint-plugin": "^8.24.1",
"@typescript-eslint/parser": "^8.24.1",
"builtin-modules": "^4.0.0",
"esbuild": "0.24.2",
"@typescript-eslint/eslint-plugin": "8.25.0",
"@typescript-eslint/parser": "8.25.0",
"builtin-modules": "5.0.0",
"esbuild": "0.25.0",
"esbuild-svelte": "^0.9.0",
"eslint": "^9.20.1",
"eslint": "^9.21.0",
"eslint-plugin-import": "^2.31.0",
"eslint-plugin-svelte": "^2.46.1",
"eslint-plugin-svelte": "^3.0.2",
"events": "^3.3.0",
"obsidian": "^1.7.2",
"postcss": "^8.5.2",
"glob": "^11.0.3",
"obsidian": "^1.8.7",
"postcss": "^8.5.3",
"postcss-load-config": "^6.0.1",
"pouchdb-adapter-http": "^9.0.0",
"pouchdb-adapter-idb": "^9.0.0",
"pouchdb-adapter-indexeddb": "^9.0.0",
"pouchdb-adapter-memory": "^9.0.0",
"pouchdb-core": "^9.0.0",
"pouchdb-errors": "^9.0.0",
"pouchdb-find": "^9.0.0",
@@ -58,29 +69,33 @@
"pouchdb-merge": "^9.0.0",
"pouchdb-replication": "^9.0.0",
"pouchdb-utils": "^9.0.0",
"prettier": "^3.5.1",
"svelte": "^5.20.1",
"prettier": "3.5.2",
"svelte": "5.28.6",
"svelte-preprocess": "^6.0.3",
"terser": "^5.39.0",
"transform-pouch": "^2.0.0",
"tslib": "^2.8.1",
"tsx": "^4.19.2",
"typescript": "^5.7.3"
"tsx": "^4.19.4",
"typescript": "5.7.3",
"yaml": "^2.8.0",
"@types/deno": "^2.3.0"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.645.0",
"@smithy/fetch-http-handler": "^3.2.4",
"@smithy/protocol-http": "^4.1.0",
"@smithy/querystring-builder": "^3.0.3",
"@aws-sdk/client-s3": "^3.808.0",
"@smithy/fetch-http-handler": "^5.0.2",
"@smithy/md5-js": "^4.0.2",
"@smithy/middleware-apply-body-checksum": "^4.1.0",
"@smithy/protocol-http": "^5.1.0",
"@smithy/querystring-builder": "^4.0.2",
"diff-match-patch": "^1.0.5",
"esbuild-plugin-inline-worker": "^0.1.1",
"fflate": "^0.8.2",
"idb": "^8.0.2",
"idb": "^8.0.3",
"minimatch": "^10.0.1",
"octagonal-wheels": "^0.1.25",
"octagonal-wheels": "^0.1.37",
"qrcode-generator": "^1.4.4",
"svelte-check": "^4.1.4",
"trystero": "^0.20.1",
"svelte-check": "^4.1.7",
"trystero": "^0.21.5",
"xxhash-wasm-102": "npm:xxhash-wasm@^1.0.2"
}
}

View File

@@ -1,13 +1,5 @@
import { deleteDB, type IDBPDatabase, openDB } from "idb";
export interface KeyValueDatabase {
get<T>(key: IDBValidKey): Promise<T>;
set<T>(key: IDBValidKey, value: T): Promise<IDBValidKey>;
del(key: IDBValidKey): Promise<void>;
clear(): Promise<void>;
keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>;
close(): void;
destroy(): Promise<void>;
}
import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
const databaseCache: { [key: string]: IDBPDatabase<any> } = {};
export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
if (dbKey in databaseCache) {

View File

@@ -15,6 +15,7 @@ import {
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type AnyEntry,
type CouchDBCredentials,
type DocumentID,
type EntryHasPath,
type FilePath,
@@ -27,10 +28,12 @@ import type ObsidianLiveSyncPlugin from "../main.ts";
import { writeString } from "../lib/src/string_and_binary/convert.ts";
import { fireAndForget } from "../lib/src/common/utils.ts";
import { sameChangePairs } from "./stores.ts";
import type { KeyValueDatabase } from "./KeyValueDB.ts";
import { scheduleTask } from "octagonal-wheels/concurrency/task";
import { EVENT_PLUGIN_UNLOADED, eventHub } from "./events.ts";
import { promiseWithResolver, type PromiseWithResolvers } from "octagonal-wheels/promises";
import { AuthorizationHeaderGenerator } from "../lib/src/replication/httplib.ts";
import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
export { scheduleTask, cancelTask, cancelAllTasks } from "../lib/src/concurrency/task.ts";
@@ -230,17 +233,17 @@ export const _requestToCouchDBFetch = async (
export const _requestToCouchDB = async (
baseUri: string,
username: string,
password: string,
credentials: CouchDBCredentials,
origin: string,
path?: string,
body?: any,
method?: string
method?: string,
customHeaders?: Record<string, string>
) => {
const utf8str = String.fromCharCode.apply(null, [...writeString(`${username}:${password}`)]);
const encoded = window.btoa(utf8str);
const authHeader = "Basic " + encoded;
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin };
// Create each time to avoid caching.
const authHeaderGen = new AuthorizationHeaderGenerator();
const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
const uri = `${baseUri}/${path}`;
const requestParam: RequestUrlParam = {
url: uri,
@@ -251,6 +254,9 @@ export const _requestToCouchDB = async (
};
return await requestUrl(requestParam);
};
/**
* @deprecated Use requestToCouchDBWithCredentials instead.
*/
export const requestToCouchDB = async (
baseUri: string,
username: string,
@@ -258,12 +264,34 @@ export const requestToCouchDB = async (
origin: string = "",
key?: string,
body?: string,
method?: string
method?: string,
customHeaders?: Record<string, string>
) => {
const uri = `_node/_local/_config${key ? "/" + key : ""}`;
return await _requestToCouchDB(baseUri, username, password, origin, uri, body, method);
return await _requestToCouchDB(
baseUri,
{ username, password, type: "basic" },
origin,
uri,
body,
method,
customHeaders
);
};
export function requestToCouchDBWithCredentials(
baseUri: string,
credentials: CouchDBCredentials,
origin: string = "",
key?: string,
body?: string,
method?: string,
customHeaders?: Record<string, string>
) {
const uri = `_node/_local/_config${key ? "/" + key : ""}`;
return _requestToCouchDB(baseUri, credentials, origin, uri, body, method, customHeaders);
}
export const BASE_IS_NEW = Symbol("base");
export const TARGET_IS_NEW = Symbol("target");
export const EVEN = Symbol("even");
@@ -586,10 +614,10 @@ const decodePrefixMapNumber = Object.fromEntries(
);
export function encodeAnyArray(obj: any[]): string {
const tempArray = obj.map((v) => {
if (v == null) return "n";
if (v == false) return "f";
if (v == true) return "t";
if (v == undefined) return "u";
if (v === null) return "n";
if (v === false) return "f";
if (v === true) return "t";
if (v === undefined) return "u";
if (typeof v == "number") {
const b36 = v.toString(36);
const strNum = v.toString();

View File

@@ -25,6 +25,8 @@ import {
readContent,
createBlob,
fireAndForget,
type CustomRegExp,
getFileRegExp,
} from "../../lib/src/common/utils.ts";
import {
compareMTime,
@@ -164,12 +166,11 @@ export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule
return Promise.resolve(true);
}
updateSettingCache() {
const ignorePatterns = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
this.ignorePatterns = ignorePatterns;
const targetFilter = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
this.targetPatterns = targetFilter;
this.shouldSkipFile = [] as FilePathWithPrefixLC[];
// Exclude files handled by customization sync
const configDir = normalizePath(this.app.vault.configDir);
@@ -219,12 +220,10 @@ export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule
? this.settings.syncInternalFilesInterval * 1000
: 0
);
const ignorePatterns = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
this.ignorePatterns = ignorePatterns;
const targetFilter = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
this.targetPatterns = targetFilter;
return Promise.resolve(true);
}
@@ -1683,14 +1682,12 @@ ${messageFetch}${messageOverwrite}${messageMerge}
// <-- Configuration handling
// --> Local Storage SubFunctions
ignorePatterns: RegExp[] = [];
ignorePatterns: CustomRegExp[] = [];
targetPatterns: CustomRegExp[] = [];
async scanInternalFileNames() {
const configDir = normalizePath(this.app.vault.configDir);
const ignoreFilter = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
const ignoreFilter = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
const targetFilter = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
const synchronisedInConfigSync = !this.settings.usePluginSync
? []
: Object.values(this.settings.pluginSyncExtendedSetting)
@@ -1701,7 +1698,7 @@ ${messageFetch}${messageOverwrite}${messageMerge}
const root = this.app.vault.getRoot();
const findRoot = root.path;
const filenames = (await this.getFiles(findRoot, [], undefined, ignoreFilter))
const filenames = (await this.getFiles(findRoot, [], targetFilter, ignoreFilter))
.filter((e) => e.startsWith("."))
.filter((e) => !e.startsWith(".trash"));
const files = filenames.filter((path) =>
@@ -1737,7 +1734,7 @@ ${messageFetch}${messageOverwrite}${messageMerge}
return result;
}
async getFiles(path: string, ignoreList: string[], filter?: RegExp[], ignoreFilter?: RegExp[]) {
async getFiles(path: string, ignoreList: string[], filter?: CustomRegExp[], ignoreFilter?: CustomRegExp[]) {
let w: ListedFiles;
try {
w = await this.app.vault.adapter.list(path);
@@ -1746,26 +1743,32 @@ ${messageFetch}${messageOverwrite}${messageMerge}
this._log(ex, LOG_LEVEL_VERBOSE);
return [];
}
const filesSrc = [
...w.files
.filter((e) => !ignoreList.some((ee) => e.endsWith(ee)))
.filter((e) => !filter || filter.some((ee) => e.match(ee)))
.filter((e) => !ignoreFilter || ignoreFilter.every((ee) => !e.match(ee))),
];
let files = [] as string[];
for (const file of filesSrc) {
if (!(await this.plugin.$$isIgnoredByIgnoreFiles(file))) {
files.push(file);
for (const file of w.files) {
if (ignoreList && ignoreList.length > 0) {
if (ignoreList.some((e) => file.endsWith(e))) continue;
}
if (filter && filter.length > 0) {
if (!filter.some((e) => e.test(file))) {
continue;
}
}
if (ignoreFilter && ignoreFilter.some((ee) => ee.test(file))) {
continue;
}
if (await this.plugin.$$isIgnoredByIgnoreFiles(file)) continue;
files.push(file);
}
L1: for (const v of w.folders) {
for (const ignore of ignoreList) {
if (v.endsWith(ignore)) {
continue L1;
}
}
if (ignoreFilter && ignoreFilter.some((e) => v.match(e))) {
if (
ignoreFilter &&
ignoreFilter.some((e) => (e.pattern.startsWith("/") || e.pattern.startsWith("\\/")) && e.test(v))
) {
continue L1;
}
if (await this.plugin.$$isIgnoredByIgnoreFiles(v)) {

View File

@@ -28,7 +28,7 @@ export class LocalDatabaseMaintenance extends LiveSyncCommands implements IObsid
return this.localDatabase.localDatabase;
}
clearHash() {
this.localDatabase.hashCaches.clear();
this.localDatabase.clearCaches();
}
async confirm(title: string, message: string, affirmative = "Yes", negative = "No") {

Submodule src/lib updated: ad3f7ee995...7d1597edcf

View File

@@ -27,7 +27,7 @@ import {
LiveSyncAbstractReplicator,
type LiveSyncReplicatorEnv,
} from "./lib/src/replication/LiveSyncAbstractReplicator.js";
import { type KeyValueDatabase } from "./common/KeyValueDB.ts";
import { type KeyValueDatabase } from "./lib/src/interfaces/KeyValueDatabase.ts";
import { LiveSyncCommands } from "./features/LiveSyncCommands.ts";
import { HiddenFileSync } from "./features/HiddenFileSync/CmdHiddenFileSync.ts";
import { ConfigSync } from "./features/ConfigSync/CmdConfigSync.ts";
@@ -291,7 +291,9 @@ export default class ObsidianLiveSyncPlugin
performSetup: boolean,
skipInfo: boolean,
compression: boolean,
customHeaders: Record<string, string>
customHeaders: Record<string, string>,
useRequestAPI: boolean,
getPBKDF2Salt: () => Promise<Uint8Array>
): Promise<
| string
| {
@@ -470,6 +472,14 @@ export default class ObsidianLiveSyncPlugin
$$clearUsedPassphrase(): void {
throwShouldBeOverridden();
}
$$decryptSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
throwShouldBeOverridden();
}
$$adjustSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
throwShouldBeOverridden();
}
$$loadSettings(): Promise<void> {
throwShouldBeOverridden();
}
@@ -544,6 +554,10 @@ export default class ObsidianLiveSyncPlugin
$everyAfterResumeProcess(): Promise<boolean> {
return InterceptiveEvery;
}
$$fetchRemotePreferredTweakValues(trialSetting: RemoteDBSettings): Promise<TweakValues | false> {
throwShouldBeOverridden();
}
$$checkAndAskResolvingMismatchedTweaks(preferred: Partial<TweakValues>): Promise<[TweakValues | boolean, boolean]> {
throwShouldBeOverridden();
}
@@ -577,7 +591,11 @@ export default class ObsidianLiveSyncPlugin
throwShouldBeOverridden();
}
$$initializeDatabase(showingNotice: boolean = false, reopenDatabase = true): Promise<boolean> {
$$initializeDatabase(
showingNotice: boolean = false,
reopenDatabase = true,
ignoreSuspending: boolean = false
): Promise<boolean> {
throwShouldBeOverridden();
}
@@ -614,7 +632,7 @@ export default class ObsidianLiveSyncPlugin
throwShouldBeOverridden();
}
$$performFullScan(showingNotice?: boolean): Promise<void> {
$$performFullScan(showingNotice?: boolean, ignoreSuspending?: boolean): Promise<void> {
throwShouldBeOverridden();
}

View File

@@ -53,7 +53,7 @@ export class ModuleDatabaseFileAccess extends AbstractModule implements IObsidia
async () => await this.storeContent("autoTest.md" as FilePathWithPrefix, testString)
);
// For test, we need to clear the caches.
await this.localDatabase.hashCaches.clear();
this.localDatabase.clearCaches();
await this._test("readContent", async () => {
const content = await this.fetch("autoTest.md" as FilePathWithPrefix);
if (!content) return "File not found";

View File

@@ -11,7 +11,7 @@ import { AbstractModule } from "../AbstractModule.ts";
import type { Rebuilder } from "../interfaces/DatabaseRebuilder.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
import type { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator.ts";
import { fetchAllUsedChunks } from "../../lib/src/pouchdb/utils_couchdb.ts";
import { fetchAllUsedChunks } from "@/lib/src/pouchdb/chunks.ts";
import { EVENT_DATABASE_REBUILT, eventHub } from "src/common/events.ts";
export class ModuleRebuilder extends AbstractModule implements ICoreModule, Rebuilder {
@@ -73,7 +73,7 @@ export class ModuleRebuilder extends AbstractModule implements ICoreModule, Rebu
await this.core.$$realizeSettingSyncMode();
await this.resetLocalDatabase();
await delay(1000);
await this.core.$$initializeDatabase(true);
await this.core.$$initializeDatabase(true, true, true);
await this.core.$$markRemoteLocked();
await this.core.$$tryResetRemoteDatabase();
await this.core.$$markRemoteLocked();
@@ -90,8 +90,8 @@ export class ModuleRebuilder extends AbstractModule implements ICoreModule, Rebu
return this.rebuildEverything();
}
$fetchLocal(makeLocalChunkBeforeSync?: boolean): Promise<void> {
return this.fetchLocal(makeLocalChunkBeforeSync);
$fetchLocal(makeLocalChunkBeforeSync?: boolean, preventMakeLocalFilesBeforeSync?: boolean): Promise<void> {
return this.fetchLocal(makeLocalChunkBeforeSync, preventMakeLocalFilesBeforeSync);
}
async scheduleRebuild(): Promise<void> {
@@ -190,7 +190,7 @@ export class ModuleRebuilder extends AbstractModule implements ICoreModule, Rebu
if (makeLocalChunkBeforeSync) {
await this.core.fileHandler.createAllChunks(true);
} else if (!preventMakeLocalFilesBeforeSync) {
await this.core.$$initializeDatabase(true);
await this.core.$$initializeDatabase(true, true, true);
} else {
// Do not create local file entries before sync (Means use remote information)
}

View File

@@ -4,7 +4,8 @@ import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
import { Logger, LOG_LEVEL_NOTICE, LOG_LEVEL_INFO, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { isLockAcquired, shareRunningResult, skipIfDuplicated } from "octagonal-wheels/concurrency/lock";
import { purgeUnreferencedChunks, balanceChunkPurgedDBs } from "../../lib/src/pouchdb/utils_couchdb";
import { balanceChunkPurgedDBs } from "@/lib/src/pouchdb/chunks";
import { purgeUnreferencedChunks } from "@/lib/src/pouchdb/chunks";
import { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator";
import { throttle } from "octagonal-wheels/function";
import { arrayToChunkedArray } from "octagonal-wheels/collection";
@@ -27,20 +28,29 @@ import {
updatePreviousExecutionTime,
} from "../../common/utils";
import { isAnyNote } from "../../lib/src/common/utils";
import { EVENT_FILE_SAVED, eventHub } from "../../common/events";
import { EVENT_FILE_SAVED, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import { globalSlipBoard } from "../../lib/src/bureau/bureau";
import { $msg } from "../../lib/src/common/i18n";
import { clearHandlers } from "../../lib/src/replication/SyncParamsHandler";
const KEY_REPLICATION_ON_EVENT = "replicationOnEvent";
const REPLICATION_ON_EVENT_FORECASTED_TIME = 5000;
export class ModuleReplicator extends AbstractModule implements ICoreModule {
_replicatorType?: string;
$everyOnloadAfterLoadSettings(): Promise<boolean> {
eventHub.onEvent(EVENT_FILE_SAVED, () => {
if (this.settings.syncOnSave && !this.core.$$isSuspended()) {
scheduleTask("perform-replicate-after-save", 250, () => this.core.$$replicateByEvent());
}
});
eventHub.onEvent(EVENT_SETTING_SAVED, (setting) => {
if (this._replicatorType !== setting.remoteType) {
void this.setReplicator();
}
});
return Promise.resolve(true);
}
@@ -50,8 +60,15 @@ export class ModuleReplicator extends AbstractModule implements ICoreModule {
this._log($msg("Replicator.Message.InitialiseFatalError"), LOG_LEVEL_NOTICE);
return false;
}
if (this.core.replicator) {
await this.core.replicator.closeReplication();
this._log("Replicator closed for changing", LOG_LEVEL_VERBOSE);
}
this.core.replicator = replicator;
this._replicatorType = this.settings.remoteType;
await yieldMicrotask();
// Clear any existing sync parameter handlers (means clearing key-deriving salt).
clearHandlers();
return true;
}
@@ -66,8 +83,19 @@ export class ModuleReplicator extends AbstractModule implements ICoreModule {
$everyOnResetDatabase(db: LiveSyncLocalDB): Promise<boolean> {
return this.setReplicator();
}
async ensureReplicatorPBKDF2Salt(showMessage: boolean = false): Promise<boolean> {
// Checking salt
const replicator = this.core.$$getReplicator();
return await replicator.ensurePBKDF2Salt(this.settings, showMessage, true);
}
async $everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
// Checking salt
// Showing message is false: that because be shown here. (And it is a fatal error, no way to hide it).
if (!(await this.ensureReplicatorPBKDF2Salt(false))) {
Logger("Failed to initialise the encryption key, preventing replication.", LOG_LEVEL_NOTICE);
return false;
}
await this.loadQueuedFiles();
return true;
}
@@ -122,12 +150,12 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
}
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.hashCaches.clear();
this.localDatabase.clearCaches();
// Perform the synchronisation once.
if (await this.core.replicator.openReplication(this.settings, false, showMessage, true)) {
await balanceChunkPurgedDBs(this.localDatabase.localDatabase, remoteDB.db);
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.hashCaches.clear();
this.localDatabase.clearCaches();
await this.core.$$getReplicator().markRemoteResolved(this.settings);
Logger("The local database has been cleaned up.", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
} else {
@@ -231,34 +259,49 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
async loadQueuedFiles() {
if (this.settings.suspendParseReplicationResult) return;
if (!this.settings.isConfigured) return;
const kvDBKey = "queued-files";
// const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[];
const ids = [...new Set((await this.core.kvDB.get<string[]>(kvDBKey)) ?? [])];
const batchSize = 100;
const chunkedIds = arrayToChunkedArray(ids, batchSize);
try {
const kvDBKey = "queued-files";
// const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[];
const ids = [...new Set((await this.core.kvDB.get<string[]>(kvDBKey)) ?? [])];
const batchSize = 100;
const chunkedIds = arrayToChunkedArray(ids, batchSize);
// suspendParseReplicationResult is true, so we have to resume it if it is suspended.
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
for await (const idsBatch of chunkedIds) {
const ret = await this.localDatabase.allDocsRaw<EntryDoc>({
keys: idsBatch,
include_docs: true,
limit: 100,
});
const docs = ret.rows.filter((e) => e.doc).map((e) => e.doc) as PouchDB.Core.ExistingDocument<EntryDoc>[];
const errors = ret.rows.filter((e) => !e.doc && !e.value.deleted);
if (errors.length > 0) {
Logger("Some queued processes were not resurrected");
Logger(JSON.stringify(errors), LOG_LEVEL_VERBOSE);
// suspendParseReplicationResult is true, so we have to resume it if it is suspended.
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
for await (const idsBatch of chunkedIds) {
const ret = await this.localDatabase.allDocsRaw<EntryDoc>({
keys: idsBatch,
include_docs: true,
limit: 100,
});
const docs = ret.rows
.filter((e) => e.doc)
.map((e) => e.doc) as PouchDB.Core.ExistingDocument<EntryDoc>[];
const errors = ret.rows.filter((e) => !e.doc && !e.value.deleted);
if (errors.length > 0) {
Logger("Some queued processes were not resurrected");
Logger(JSON.stringify(errors), LOG_LEVEL_VERBOSE);
}
this.replicationResultProcessor.enqueueAll(docs);
}
} catch (e) {
Logger(`Failed to load queued files.`, LOG_LEVEL_NOTICE);
Logger(e, LOG_LEVEL_VERBOSE);
} finally {
// Check again before awaiting,
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
this.replicationResultProcessor.enqueueAll(docs);
}
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
// Wait for all queued files to be processed.
try {
await this.replicationResultProcessor.waitForAllProcessed();
} catch (e) {
Logger(`Failed to wait for all queued files to be processed.`, LOG_LEVEL_NOTICE);
Logger(e, LOG_LEVEL_VERBOSE);
}
await this.replicationResultProcessor.waitForAllProcessed();
}
replicationResultProcessor = new QueueProcessor(
@@ -267,7 +310,7 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
const change = docs[0];
if (!change) return;
if (isChunk(change._id)) {
globalSlipBoard.submit("read-chunk", change._id, change as EntryLeaf);
this.localDatabase.onNewLeaf(change as EntryLeaf);
return;
}
if (await this.core.$anyModuleParsedReplicationResultItem(change)) return;

View File

@@ -37,13 +37,17 @@ export class ModuleConflictChecker extends AbstractModule implements ICoreModule
// TODO-> Move to ModuleConflictResolver?
conflictResolveQueue = new QueueProcessor(
async (filenames: FilePathWithPrefix[]) => {
await this.core.$$resolveConflict(filenames[0]);
const filename = filenames[0];
return await this.core.$$resolveConflict(filename);
},
{
suspended: false,
batchSize: 1,
concurrentLimit: 1,
delay: 10,
// No need to limit concurrency to `1` here, subsequent process will handle it,
// And, some cases, we do not need to synchronised. (e.g., auto-merge available).
// Therefore, limiting global concurrency is performed on resolver with the UI.
concurrentLimit: 10,
delay: 0,
keepResultUntilDownstreamConnected: false,
}
).replaceEnqueueProcessor((queue, newEntity) => {
@@ -57,19 +61,13 @@ export class ModuleConflictChecker extends AbstractModule implements ICoreModule
new QueueProcessor(
(files: FilePathWithPrefix[]) => {
const filename = files[0];
// const file = await this.core.storageAccess.isExists(filename);
// if (!file) return [];
// if (!(file instanceof TFile)) return;
// if ((file instanceof TFolder)) return [];
// Check again?
return Promise.resolve([filename]);
// this.conflictResolveQueue.enqueueWithKey(filename, { filename, file });
},
{
suspended: false,
batchSize: 1,
concurrentLimit: 5,
delay: 10,
concurrentLimit: 10,
delay: 0,
keepResultUntilDownstreamConnected: true,
pipeTo: this.conflictResolveQueue,
totalRemainingReactiveSource: this.core.conflictProcessQueueCount,

View File

@@ -6,6 +6,7 @@ import {
FLAGMD_REDFLAG2_HR,
FLAGMD_REDFLAG3,
FLAGMD_REDFLAG3_HR,
type ObsidianLiveSyncSettings,
} from "../../lib/src/common/types.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
@@ -121,9 +122,9 @@ export class ModuleRedFlag extends AbstractModule implements ICoreModule {
}
);
let makeLocalChunkBeforeSync = false;
let preventMakeLocalFilesBeforeSync = false;
let makeLocalFilesBeforeSync = false;
if (chunkMode === method1) {
preventMakeLocalFilesBeforeSync = true;
makeLocalFilesBeforeSync = true;
} else if (chunkMode === method2) {
makeLocalChunkBeforeSync = true;
} else if (chunkMode === method3) {
@@ -133,7 +134,35 @@ export class ModuleRedFlag extends AbstractModule implements ICoreModule {
return false;
}
await this.core.rebuilder.$fetchLocal(makeLocalChunkBeforeSync, preventMakeLocalFilesBeforeSync);
const optionFetchRemoteConf = $msg("RedFlag.FetchRemoteConfig.Buttons.Fetch");
const optionCancel = $msg("RedFlag.FetchRemoteConfig.Buttons.Cancel");
const fetchRemote = await this.core.confirm.askSelectStringDialogue(
$msg("RedFlag.FetchRemoteConfig.Message"),
[optionFetchRemoteConf, optionCancel],
{
defaultAction: optionFetchRemoteConf,
timeout: 0,
title: $msg("RedFlag.FetchRemoteConfig.Title"),
}
);
if (fetchRemote === optionFetchRemoteConf) {
this._log("Fetching remote configuration", LOG_LEVEL_NOTICE);
const newSettings = JSON.parse(JSON.stringify(this.core.settings)) as ObsidianLiveSyncSettings;
const remoteConfig = await this.core.$$fetchRemotePreferredTweakValues(newSettings);
if (remoteConfig) {
this._log("Remote configuration found.", LOG_LEVEL_NOTICE);
const mergedSettings = {
...this.core.settings,
...remoteConfig,
} satisfies ObsidianLiveSyncSettings;
this._log("Remote configuration applied.", LOG_LEVEL_NOTICE);
this.core.settings = mergedSettings;
} else {
this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
}
}
await this.core.rebuilder.$fetchLocal(makeLocalChunkBeforeSync, !makeLocalFilesBeforeSync);
await this.deleteRedFlag3();
if (this.settings.suspendFileWatching) {

View File

@@ -164,22 +164,28 @@ export class ModuleResolvingMismatchedTweaks extends AbstractModule implements I
return "IGNORE";
}
async $$checkAndAskUseRemoteConfiguration(
trialSetting: RemoteDBSettings
): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
const replicator = await this.core.$anyNewReplicator(trialSetting);
async $$fetchRemotePreferredTweakValues(trialSetting: RemoteDBSettings): Promise<TweakValues | false> {
const replicator = await this.core.$anyNewReplicator();
if (await replicator.tryConnectRemote(trialSetting)) {
const preferred = await replicator.getRemotePreferredTweakValues(trialSetting);
if (preferred) {
return await this.$$askUseRemoteConfiguration(trialSetting, preferred);
} else {
this._log("Failed to get the preferred tweak values from the remote server.", LOG_LEVEL_NOTICE);
return preferred;
}
return { result: false, requireFetch: false };
} else {
this._log("Failed to connect to the remote server.", LOG_LEVEL_NOTICE);
return { result: false, requireFetch: false };
this._log("Failed to get the preferred tweak values from the remote server.", LOG_LEVEL_NOTICE);
return false;
}
this._log("Failed to connect to the remote server.", LOG_LEVEL_NOTICE);
return false;
}
async $$checkAndAskUseRemoteConfiguration(
trialSetting: RemoteDBSettings
): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
const preferred = await this.core.$$fetchRemotePreferredTweakValues(trialSetting);
if (preferred) {
return await this.$$askUseRemoteConfiguration(trialSetting, preferred);
}
return { result: false, requireFetch: false };
}
async $$askUseRemoteConfiguration(

View File

@@ -14,7 +14,7 @@ import type {
import { TFileToUXFileInfoStub, TFolderToUXFileInfoStub } from "./storageLib/utilObsidian.ts";
import { StorageEventManagerObsidian, type StorageEventManager } from "./storageLib/StorageEventManager";
import type { StorageAccess } from "../interfaces/StorageAccess";
import { createBlob } from "../../lib/src/common/utils";
import { createBlob, type CustomRegExp } from "../../lib/src/common/utils";
export class ModuleFileAccessObsidian extends AbstractObsidianModule implements IObsidianModule, StorageAccess {
vaultAccess!: SerializedFileAccess;
@@ -223,8 +223,8 @@ export class ModuleFileAccessObsidian extends AbstractObsidianModule implements
async getFilesIncludeHidden(
basePath: string,
includeFilter?: RegExp[],
excludeFilter?: RegExp[],
includeFilter?: CustomRegExp[],
excludeFilter?: CustomRegExp[],
skipFolder: string[] = [".git", ".trash", "node_modules"]
): Promise<FilePath[]> {
let w: ListedFiles;
@@ -239,16 +239,11 @@ export class ModuleFileAccessObsidian extends AbstractObsidianModule implements
let files = [] as string[];
for (const file of w.files) {
if (excludeFilter && excludeFilter.some((ee) => file.match(ee))) {
// If excludeFilter and includeFilter are both set, the file will be included in the list.
if (includeFilter) {
if (!includeFilter.some((e) => file.match(e))) continue;
} else {
continue;
}
if (includeFilter && includeFilter.length > 0) {
if (!includeFilter.some((e) => e.test(file))) continue;
}
if (includeFilter) {
if (!includeFilter.some((e) => file.match(e))) continue;
if (excludeFilter && excludeFilter.some((ee) => ee.test(file))) {
continue;
}
if (await this.plugin.$$isIgnoredByIgnoreFiles(file)) continue;
files.push(file);
@@ -259,16 +254,12 @@ export class ModuleFileAccessObsidian extends AbstractObsidianModule implements
if (skipFolder.some((e) => folderName === e)) {
continue;
}
if (excludeFilter && excludeFilter.some((e) => v.match(e))) {
if (includeFilter) {
if (!includeFilter.some((e) => v.match(e))) {
continue;
}
}
}
if (includeFilter) {
if (!includeFilter.some((e) => v.match(e))) continue;
if (excludeFilter && excludeFilter.some((e) => e.test(v))) {
continue;
}
if (await this.plugin.$$isIgnoredByIgnoreFiles(v)) {
continue;
}
// OK, deep dive!
files = files.concat(await this.getFilesIncludeHidden(v, includeFilter, excludeFilter, skipFolder));

View File

@@ -12,7 +12,7 @@ import {
type UXFileInfoStub,
type UXInternalFileInfoStub,
} from "../../../lib/src/common/types.ts";
import { delay, fireAndForget } from "../../../lib/src/common/utils.ts";
import { delay, fireAndForget, getFileRegExp } from "../../../lib/src/common/utils.ts";
import { type FileEventItem, type FileEventType } from "../../../common/types.ts";
import { serialized, skipIfDuplicated } from "../../../lib/src/concurrency/lock.ts";
import {
@@ -161,12 +161,10 @@ export class StorageEventManagerObsidian extends StorageEventManager {
if (!this.plugin.settings.syncInternalFiles && !this.plugin.settings.usePluginSync) return;
if (!this.plugin.settings.watchInternalFileChanges) return;
if (!path.startsWith(this.plugin.app.vault.configDir)) return;
const ignorePatterns = this.plugin.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
if (ignorePatterns.some((e) => path.match(e))) return;
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
const targetPatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
if (ignorePatterns.some((e) => e.test(path))) return;
if (!targetPatterns.some((e) => e.test(path))) return;
if (path.endsWith("/")) {
// Folder
return;

View File

@@ -20,7 +20,7 @@ import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
import { withConcurrency } from "octagonal-wheels/iterable/map";
export class ModuleInitializerFile extends AbstractModule implements ICoreModule {
async $$performFullScan(showingNotice?: boolean): Promise<void> {
async $$performFullScan(showingNotice?: boolean, ignoreSuspending: boolean = false): Promise<void> {
this._log("Opening the key-value database", LOG_LEVEL_VERBOSE);
const isInitialized = (await this.core.kvDB.get<boolean>("initialized")) || false;
// synchronize all files between database and storage.
@@ -34,6 +34,16 @@ export class ModuleInitializerFile extends AbstractModule implements ICoreModule
}
return;
}
if (!ignoreSuspending && this.settings.suspendFileWatching) {
if (showingNotice) {
this._log(
"Now suspending file watching. Synchronising between the storage and the local database is now prevented.",
LOG_LEVEL_NOTICE,
"syncAll"
);
}
return;
}
if (showingNotice) {
this._log("Initializing", LOG_LEVEL_NOTICE, "syncAll");
@@ -355,11 +365,15 @@ export class ModuleInitializerFile extends AbstractModule implements ICoreModule
this._log(`Checking expired file history done`);
}
async $$initializeDatabase(showingNotice: boolean = false, reopenDatabase = true): Promise<boolean> {
async $$initializeDatabase(
showingNotice: boolean = false,
reopenDatabase = true,
ignoreSuspending: boolean = false
): Promise<boolean> {
this.core.$$resetIsReady();
if (!reopenDatabase || (await this.core.$$openDatabase())) {
if (this.localDatabase.isReady) {
await this.core.$$performFullScan(showingNotice);
await this.core.$$performFullScan(showingNotice, ignoreSuspending);
}
if (!(await this.core.$everyOnDatabaseInitialized(showingNotice))) {
this._log(`Initializing database has been failed on some module`, LOG_LEVEL_NOTICE);

View File

@@ -1,5 +1,4 @@
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger.js";
import { type ObsidianLiveSyncSettings } from "../../lib/src/common/types.js";
import { LOG_LEVEL_NOTICE } from "octagonal-wheels/common/logger";
import {
EVENT_REQUEST_OPEN_P2P,
EVENT_REQUEST_OPEN_SETTING_WIZARD,
@@ -11,130 +10,31 @@ import {
import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
import { $msg } from "src/lib/src/common/i18n.ts";
import { checkUnsuitableValues, RuleLevel, type RuleForType } from "../../lib/src/common/configForDoc.ts";
import { getConfName, type AllSettingItemKey } from "../features/SettingDialogue/settingConstants.ts";
import { performDoctorConsultation, RebuildOptions } from "../../lib/src/common/configForDoc.ts";
export class ModuleMigration extends AbstractModule implements ICoreModule {
async migrateUsingDoctor(skipRebuild: boolean = false, activateReason = "updated", forceRescan = false) {
const r = checkUnsuitableValues(this.core.settings);
if (!forceRescan && r.version == this.settings.doctorProcessedVersion) {
const isIssueFound = Object.keys(r.rules).length > 0;
const msg = isIssueFound ? "Issues found" : "No issues found";
this._log(`${msg} but marked as to be silent`, LOG_LEVEL_VERBOSE);
return;
}
const issues = Object.entries(r.rules);
if (issues.length == 0) {
this._log(
$msg("Doctor.Message.NoIssues"),
activateReason !== "updated" ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO
);
return;
} else {
const OPT_YES = `${$msg("Doctor.Button.Yes")}` as const;
const OPT_NO = `${$msg("Doctor.Button.No")}` as const;
const OPT_DISMISS = `${$msg("Doctor.Button.DismissThisVersion")}` as const;
// this._log(`Issues found in ${key}`, LOG_LEVEL_VERBOSE);
const issues = Object.keys(r.rules)
.map((key) => `- ${getConfName(key as AllSettingItemKey)}`)
.join("\n");
const msg = await this.core.confirm.askSelectStringDialogue(
$msg("Doctor.Dialogue.Main", { activateReason, issues }),
[OPT_YES, OPT_NO, OPT_DISMISS],
{
title: $msg("Doctor.Dialogue.Title"),
defaultAction: OPT_YES,
}
);
if (msg == OPT_DISMISS) {
this.settings.doctorProcessedVersion = r.version;
await this.core.saveSettings();
this._log("Marked as to be silent", LOG_LEVEL_VERBOSE);
return;
}
if (msg != OPT_YES) return;
let shouldRebuild = false;
let shouldRebuildLocal = false;
const issueItems = Object.entries(r.rules) as [keyof ObsidianLiveSyncSettings, RuleForType<any>][];
this._log(`${issueItems.length} Issue(s) found `, LOG_LEVEL_VERBOSE);
let idx = 0;
const applySettings = {} as Partial<ObsidianLiveSyncSettings>;
const OPT_FIX = `${$msg("Doctor.Button.Fix")}` as const;
const OPT_SKIP = `${$msg("Doctor.Button.Skip")}` as const;
const OPT_FIXBUTNOREBUILD = `${$msg("Doctor.Button.FixButNoRebuild")}` as const;
let skipped = 0;
for (const [key, value] of issueItems) {
const levelMap = {
[RuleLevel.Necessary]: $msg("Doctor.Level.Necessary"),
[RuleLevel.Recommended]: $msg("Doctor.Level.Recommended"),
[RuleLevel.Optional]: $msg("Doctor.Level.Optional"),
[RuleLevel.Must]: $msg("Doctor.Level.Must"),
};
const level = value.level ? levelMap[value.level] : "Unknown";
const options = [OPT_FIX] as [typeof OPT_FIX | typeof OPT_SKIP | typeof OPT_FIXBUTNOREBUILD];
if ((!skipRebuild && value.requireRebuild) || value.requireRebuildLocal) {
options.push(OPT_FIXBUTNOREBUILD);
}
options.push(OPT_SKIP);
const note = skipRebuild
? ""
: `${value.requireRebuild ? $msg("Doctor.Message.RebuildRequired") : ""}${value.requireRebuildLocal ? $msg("Doctor.Message.RebuildLocalRequired") : ""}`;
const ret = await this.core.confirm.askSelectStringDialogue(
$msg("Doctor.Dialogue.MainFix", {
name: getConfName(key as AllSettingItemKey),
current: `${this.settings[key]}`,
reason: value.reason ?? " N/A ",
ideal: `${value.value}`,
level: `${level}`,
note: note,
}),
options,
{
title: $msg("Doctor.Dialogue.TitleFix", { current: `${++idx}`, total: `${issueItems.length}` }),
defaultAction: OPT_FIX,
}
);
if (ret == OPT_FIX || ret == OPT_FIXBUTNOREBUILD) {
//@ts-ignore
applySettings[key] = value.value;
if (ret == OPT_FIX) {
shouldRebuild = shouldRebuild || value.requireRebuild || false;
shouldRebuildLocal = shouldRebuildLocal || value.requireRebuildLocal || false;
}
} else {
skipped++;
}
}
if (Object.keys(applySettings).length > 0) {
this.settings = {
...this.settings,
...applySettings,
};
}
if (skipped == 0) {
this.settings.doctorProcessedVersion = r.version;
} else {
if (
(await this.core.confirm.askYesNoDialog($msg("Doctor.Message.SomeSkipped"), {
title: $msg("Doctor.Dialogue.TitleAlmostDone"),
defaultOption: "No",
})) == "no"
) {
// Some skipped, and user wants
this.settings.doctorProcessedVersion = r.version;
}
const { shouldRebuild, shouldRebuildLocal, isModified, settings } = await performDoctorConsultation(
this.core,
this.settings,
{
localRebuild: skipRebuild ? RebuildOptions.SkipEvenIfRequired : RebuildOptions.AutomaticAcceptable,
remoteRebuild: skipRebuild ? RebuildOptions.SkipEvenIfRequired : RebuildOptions.AutomaticAcceptable,
activateReason,
forceRescan,
}
);
if (isModified) {
this.settings = settings;
await this.core.saveSettings();
if (!skipRebuild) {
if (shouldRebuild) {
await this.core.rebuilder.scheduleRebuild();
await this.core.$$performRestart();
} else if (shouldRebuildLocal) {
await this.core.rebuilder.scheduleFetch();
await this.core.$$performRestart();
}
}
if (!skipRebuild) {
if (shouldRebuild) {
await this.core.rebuilder.scheduleRebuild();
await this.core.$$performRestart();
} else if (shouldRebuildLocal) {
await this.core.rebuilder.scheduleFetch();
await this.core.$$performRestart();
}
}
}
@@ -147,157 +47,6 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
await this.saveSettings();
}
}
// async migrationCheck() {
// const old = this.settings.settingVersion;
// const current = SETTING_VERSION_SUPPORT_CASE_INSENSITIVE;
// // Check each migrations(old -> current)
// if (!(await this.migrateToCaseInsensitive(old, current))) {
// this._log(
// $msg("moduleMigration.logMigrationFailed", {
// old: old.toString(),
// current: current.toString(),
// }),
// LOG_LEVEL_NOTICE
// );
// return;
// }
// }
// async migrateToCaseInsensitive(old: number, current: number) {
// if (
// this.settings.handleFilenameCaseSensitive !== undefined &&
// this.settings.doNotUseFixedRevisionForChunks !== undefined
// ) {
// if (current < SETTING_VERSION_SUPPORT_CASE_INSENSITIVE) {
// this.settings.settingVersion = SETTING_VERSION_SUPPORT_CASE_INSENSITIVE;
// await this.saveSettings();
// }
// return true;
// }
// if (
// old >= SETTING_VERSION_SUPPORT_CASE_INSENSITIVE &&
// this.settings.handleFilenameCaseSensitive !== undefined &&
// this.settings.doNotUseFixedRevisionForChunks !== undefined
// ) {
// return true;
// }
// let remoteHandleFilenameCaseSensitive: undefined | boolean = undefined;
// let remoteDoNotUseFixedRevisionForChunks: undefined | boolean = undefined;
// let remoteChecked = false;
// try {
// const remoteInfo = await this.core.replicator.getRemotePreferredTweakValues(this.settings);
// if (remoteInfo) {
// remoteHandleFilenameCaseSensitive =
// "handleFilenameCaseSensitive" in remoteInfo ? remoteInfo.handleFilenameCaseSensitive : false;
// remoteDoNotUseFixedRevisionForChunks =
// "doNotUseFixedRevisionForChunks" in remoteInfo ? remoteInfo.doNotUseFixedRevisionForChunks : false;
// if (
// remoteHandleFilenameCaseSensitive !== undefined ||
// remoteDoNotUseFixedRevisionForChunks !== undefined
// ) {
// remoteChecked = true;
// }
// } else {
// this._log($msg("moduleMigration.logFetchRemoteTweakFailed"), LOG_LEVEL_INFO);
// }
// } catch (ex) {
// this._log($msg("moduleMigration.logRemoteTweakUnavailable"), LOG_LEVEL_INFO);
// this._log(ex, LOG_LEVEL_VERBOSE);
// }
// if (remoteChecked) {
// // The case that the remote could be checked.
// if (remoteHandleFilenameCaseSensitive && remoteDoNotUseFixedRevisionForChunks) {
// // Migrated, but configured as same as old behaviour.
// this.settings.handleFilenameCaseSensitive = true;
// this.settings.doNotUseFixedRevisionForChunks = true;
// this.settings.settingVersion = SETTING_VERSION_SUPPORT_CASE_INSENSITIVE;
// this._log(
// $msg("moduleMigration.logMigratedSameBehaviour", {
// current: current.toString(),
// }),
// LOG_LEVEL_INFO
// );
// await this.saveSettings();
// return true;
// }
// const message = $msg("moduleMigration.msgFetchRemoteAgain");
// const OPTION_FETCH = $msg("moduleMigration.optionYesFetchAgain");
// const DISMISS = $msg("moduleMigration.optionNoAskAgain");
// const options = [OPTION_FETCH, DISMISS];
// const ret = await this.core.confirm.confirmWithMessage(
// $msg("moduleMigration.titleCaseSensitivity"),
// message,
// options,
// DISMISS,
// 40
// );
// if (ret == OPTION_FETCH) {
// this.settings.handleFilenameCaseSensitive = remoteHandleFilenameCaseSensitive || false;
// this.settings.doNotUseFixedRevisionForChunks = remoteDoNotUseFixedRevisionForChunks || false;
// this.settings.settingVersion = SETTING_VERSION_SUPPORT_CASE_INSENSITIVE;
// await this.saveSettings();
// try {
// await this.core.rebuilder.scheduleFetch();
// return;
// } catch (ex) {
// this._log($msg("moduleMigration.logRedflag2CreationFail"), LOG_LEVEL_VERBOSE);
// this._log(ex, LOG_LEVEL_VERBOSE);
// }
// return false;
// } else {
// return false;
// }
// }
// const ENABLE_BOTH = $msg("moduleMigration.optionEnableBoth");
// const ENABLE_FILENAME_CASE_INSENSITIVE = $msg("moduleMigration.optionEnableFilenameCaseInsensitive");
// const ENABLE_FIXED_REVISION_FOR_CHUNKS = $msg("moduleMigration.optionEnableFixedRevisionForChunks");
// const ADJUST_TO_REMOTE = $msg("moduleMigration.optionAdjustRemote");
// const KEEP = $msg("moduleMigration.optionKeepPreviousBehaviour");
// const DISMISS = $msg("moduleMigration.optionDecideLater");
// const message = $msg("moduleMigration.msgSinceV02321");
// const options = [ENABLE_BOTH, ENABLE_FILENAME_CASE_INSENSITIVE, ENABLE_FIXED_REVISION_FOR_CHUNKS];
// if (remoteChecked) {
// options.push(ADJUST_TO_REMOTE);
// }
// options.push(KEEP, DISMISS);
// const ret = await this.core.confirm.confirmWithMessage(
// $msg("moduleMigration.titleCaseSensitivity"),
// message,
// options,
// DISMISS,
// 40
// );
// console.dir(ret);
// switch (ret) {
// case ENABLE_BOTH:
// this.settings.handleFilenameCaseSensitive = false;
// this.settings.doNotUseFixedRevisionForChunks = false;
// break;
// case ENABLE_FILENAME_CASE_INSENSITIVE:
// this.settings.handleFilenameCaseSensitive = false;
// this.settings.doNotUseFixedRevisionForChunks = true;
// break;
// case ENABLE_FIXED_REVISION_FOR_CHUNKS:
// this.settings.doNotUseFixedRevisionForChunks = false;
// this.settings.handleFilenameCaseSensitive = true;
// break;
// case KEEP:
// this.settings.handleFilenameCaseSensitive = true;
// this.settings.doNotUseFixedRevisionForChunks = true;
// this.settings.settingVersion = SETTING_VERSION_SUPPORT_CASE_INSENSITIVE;
// await this.saveSettings();
// return true;
// case DISMISS:
// default:
// return false;
// }
// this.settings.settingVersion = SETTING_VERSION_SUPPORT_CASE_INSENSITIVE;
// await this.saveSettings();
// await this.core.rebuilder.scheduleRebuild();
// await this.core.$$performRestart();
// }
async initialMessage() {
const message = $msg("moduleMigration.msgInitialSetup", {

View File

@@ -1,53 +1,28 @@
import { AbstractObsidianModule, type IObsidianModule } from "../AbstractObsidianModule.ts";
import { LOG_LEVEL_DEBUG, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { Notice, requestUrl, type RequestUrlParam, type RequestUrlResponse } from "../../deps.ts";
import {
type CouchDBCredentials,
type EntryDoc,
type FilePathWithPrefix,
type JWTCredentials,
type JWTHeader,
type JWTParams,
type JWTPayload,
type PreparedJWT,
} from "../../lib/src/common/types.ts";
import { type CouchDBCredentials, type EntryDoc, type FilePathWithPrefix } from "../../lib/src/common/types.ts";
import { getPathFromTFile } from "../../common/utils.ts";
import {
disableEncryption,
enableEncryption,
isCloudantURI,
isValidRemoteCouchDBURI,
replicationFilter,
} from "../../lib/src/pouchdb/utils_couchdb.ts";
import { isCloudantURI, isValidRemoteCouchDBURI } from "../../lib/src/pouchdb/utils_couchdb.ts";
import { replicationFilter } from "@/lib/src/pouchdb/compress.ts";
import { disableEncryption } from "@/lib/src/pouchdb/encryption.ts";
import { enableEncryption } from "@/lib/src/pouchdb/encryption.ts";
import { setNoticeClass } from "../../lib/src/mock_and_interop/wrapper.ts";
import { ObsHttpHandler } from "./APILib/ObsHttpHandler.ts";
import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser.ts";
import { reactive, reactiveSource } from "octagonal-wheels/dataobject/reactive.js";
import { arrayBufferToBase64Single, writeString } from "../../lib/src/string_and_binary/convert.ts";
import { Refiner } from "octagonal-wheels/dataobject/Refiner";
import { AuthorizationHeaderGenerator } from "../../lib/src/replication/httplib.ts";
setNoticeClass(Notice);
async function fetchByAPI(request: RequestUrlParam): Promise<RequestUrlResponse> {
const ret = await requestUrl(request);
if (ret.status - (ret.status % 100) !== 200) {
const er: Error & { status?: number } = new Error(`Request Error:${ret.status}`);
if (ret.json) {
er.message = ret.json.reason;
er.name = `${ret.json.error ?? ""}:${ret.json.message ?? ""}`;
}
er.status = ret.status;
throw er;
}
async function fetchByAPI(request: RequestUrlParam, errorAsResult = false): Promise<RequestUrlResponse> {
const ret = await requestUrl({ ...request, throw: !errorAsResult });
return ret;
}
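// A minimal usage sketch (not part of the plugin): with `errorAsResult = true`,
// Obsidian's requestUrl is called with `throw: false`, so non-2xx responses
// resolve normally and the caller can branch on `status` instead of catching.
// `probeEndpoint` is a hypothetical helper, shown purely for illustration.
async function probeEndpoint(url: string): Promise<boolean> {
    const res = await fetchByAPI({ url, method: "GET" }, true);
    if (res.status >= 200 && res.status < 300) return true;
    console.debug(`Probe returned HTTP ${res.status}`);
    return false;
}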
export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidianModule {
_customHandler!: ObsHttpHandler;
authHeaderSource = reactiveSource<string>("");
authHeader = reactive(() =>
this.authHeaderSource.value == "" ? "" : "Basic " + window.btoa(this.authHeaderSource.value)
);
_authHeader = new AuthorizationHeaderGenerator();
last_successful_post = false;
$$customFetchHandler(): ObsHttpHandler {
@@ -58,13 +33,7 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
return !this.last_successful_post;
}
async fetchByAPI(
url: string,
localURL: string,
method: string,
authHeader: string,
opts?: RequestInit
): Promise<Response> {
async _fetchByAPI(url: string, authHeader: string, opts?: RequestInit): Promise<Response> {
const body = opts?.body as string;
const transformedHeaders = { ...(opts?.headers as Record<string, string>) };
@@ -78,25 +47,36 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
method: opts?.method,
body: body,
headers: transformedHeaders,
contentType: "application/json",
// contentType: opts.headers,
contentType:
transformedHeaders?.["content-type"] ?? transformedHeaders?.["Content-Type"] ?? "application/json",
};
const r = await fetchByAPI(requestParam, true);
return new Response(r.arrayBuffer, {
headers: r.headers,
status: r.status,
statusText: `${r.status}`,
});
}
async fetchByAPI(
url: string,
localURL: string,
method: string,
authHeader: string,
opts?: RequestInit
): Promise<Response> {
const body = opts?.body as string;
const size = body ? ` (${body.length})` : "";
try {
const r = await this._fetchByAPI(url, authHeader, opts);
this.plugin.requestCount.value = this.plugin.requestCount.value + 1;
const r = await fetchByAPI(requestParam);
if (method == "POST" || method == "PUT") {
this.last_successful_post = r.status - (r.status % 100) == 200;
} else {
this.last_successful_post = true;
}
this._log(`HTTP:${method}${size} to:${localURL} -> ${r.status}`, LOG_LEVEL_DEBUG);
return new Response(r.arrayBuffer, {
headers: r.headers,
status: r.status,
statusText: `${r.status}`,
});
return r;
} catch (ex) {
this._log(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL_VERBOSE);
// limit only in bulk_docs.
@@ -110,135 +90,6 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
}
}
_importKey(auth: JWTCredentials) {
if (auth.jwtAlgorithm == "HS256" || auth.jwtAlgorithm == "HS512") {
const key = (auth.jwtKey || "").trim();
if (key == "") {
throw new Error("JWT key is empty");
}
const binaryDerString = window.atob(key);
const binaryDer = new Uint8Array(binaryDerString.length);
for (let i = 0; i < binaryDerString.length; i++) {
binaryDer[i] = binaryDerString.charCodeAt(i);
}
const hashName = auth.jwtAlgorithm == "HS256" ? "SHA-256" : "SHA-512";
return crypto.subtle.importKey("raw", binaryDer, { name: "HMAC", hash: { name: hashName } }, true, [
"sign",
]);
} else if (auth.jwtAlgorithm == "ES256" || auth.jwtAlgorithm == "ES512") {
const pem = auth.jwtKey
.replace(/-----BEGIN [^-]+-----/, "")
.replace(/-----END [^-]+-----/, "")
.replace(/\s+/g, "");
// const pem = key.replace(/\s/g, "");
const binaryDerString = window.atob(pem);
const binaryDer = new Uint8Array(binaryDerString.length);
for (let i = 0; i < binaryDerString.length; i++) {
binaryDer[i] = binaryDerString.charCodeAt(i);
}
// const binaryDer = base64ToArrayBuffer(pem);
const namedCurve = auth.jwtAlgorithm == "ES256" ? "P-256" : "P-521";
const param = { name: "ECDSA", namedCurve };
return crypto.subtle.importKey("pkcs8", binaryDer, param, true, ["sign"]);
} else {
throw new Error("Supplied JWT algorithm is not supported.");
}
}
_currentCryptoKey = new Refiner<JWTCredentials, CryptoKey>({
evaluation: async (auth, previous) => {
return await this._importKey(auth);
},
});
_jwt = new Refiner<JWTParams, PreparedJWT>({
evaluation: async (params, previous) => {
const encodedHeader = btoa(JSON.stringify(params.header));
const encodedPayload = btoa(JSON.stringify(params.payload));
const buff = `${encodedHeader}.${encodedPayload}`.replace(/\+/g, "-").replace(/\//g, "_").replace(/=/g, "");
const key = await this._currentCryptoKey.update(params.credentials).value;
let token = "";
if (params.header.alg == "ES256" || params.header.alg == "ES512") {
const jwt = await crypto.subtle.sign(
{ name: "ECDSA", hash: { name: "SHA-256" } },
key,
writeString(buff)
);
token = (await arrayBufferToBase64Single(jwt))
.replace(/\+/g, "-")
.replace(/\//g, "_")
.replace(/=/g, "");
} else if (params.header.alg == "HS256" || params.header.alg == "HS512") {
const jwt = await crypto.subtle.sign(
{ name: "HMAC", hash: { name: params.header.alg } },
key,
writeString(buff)
);
token = (await arrayBufferToBase64Single(jwt))
.replace(/\+/g, "-")
.replace(/\//g, "_")
.replace(/=/g, "");
} else {
throw new Error("JWT algorithm is not supported.");
}
return {
...params,
token: `${buff}.${token}`,
} as PreparedJWT;
},
});
_jwtParams = new Refiner<JWTCredentials, JWTParams>({
evaluation(source, previous) {
const kid = source.jwtKid || undefined;
const sub = (source.jwtSub || "").trim();
if (sub == "") {
throw new Error("JWT sub is empty");
}
const algorithm = source.jwtAlgorithm || "";
if (!algorithm) {
throw new Error("JWT algorithm is not configured.");
}
if (algorithm != "HS256" && algorithm != "HS512" && algorithm != "ES256" && algorithm != "ES512") {
throw new Error("JWT algorithm is not supported.");
}
const header: JWTHeader = {
alg: source.jwtAlgorithm || "HS256",
typ: "JWT",
kid,
};
const iat = ~~(new Date().getTime() / 1000);
const exp = iat + (source.jwtExpDuration || 5) * 60; // 5 minutes
const payload = {
exp,
iat,
sub: source.jwtSub || "",
"_couchdb.roles": ["_admin"],
} satisfies JWTPayload;
return {
header,
payload,
credentials: source,
};
},
shouldUpdate(isDifferent, source, previous) {
if (isDifferent) {
return true;
}
if (!previous) {
return true;
}
// if expired.
const d = ~~(new Date().getTime() / 1000);
if (previous.payload.exp < d) {
// console.warn(`jwt expired ${previous.payload.exp} < ${d}`);
return true;
}
return false;
},
});
async $$connectRemoteCouchDB(
uri: string,
auth: CouchDBCredentials,
@@ -248,30 +99,21 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
performSetup: boolean,
skipInfo: boolean,
compression: boolean,
customHeaders: Record<string, string>
customHeaders: Record<string, string>,
useRequestAPI: boolean,
getPBKDF2Salt: () => Promise<Uint8Array>
): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> {
if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
if (uri.toLowerCase() != uri) return "Remote URI and database name cannot contain capital letters.";
if (uri.indexOf(" ") !== -1) return "Remote URI and database name cannot contain spaces.";
let authHeader = "";
if ("username" in auth) {
const userNameAndPassword = auth.username && auth.password ? `${auth.username}:${auth.password}` : "";
if (this.authHeaderSource.value != userNameAndPassword) {
this.authHeaderSource.value = userNameAndPassword;
}
authHeader = this.authHeader.value;
} else if ("jwtAlgorithm" in auth) {
const params = await this._jwtParams.update(auth).value;
const jwt = await this._jwt.update(params).value;
const token = jwt.token;
authHeader = `Bearer ${token}`;
}
// let authHeader = await this._authHeader.getAuthorizationHeader(auth);
const conf: PouchDB.HttpAdapter.HttpAdapterConfiguration = {
adapter: "http",
auth: "username" in auth ? auth : undefined,
skip_setup: !performSetup,
fetch: async (url: string | Request, opts?: RequestInit) => {
const authHeader = await this._authHeader.getAuthorizationHeader(auth);
let size = "";
const localURL = url.toString().substring(uri.length);
const method = opts?.method ?? "GET";
@@ -288,27 +130,23 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
size = ` (${opts_length})`;
}
try {
if (!disableRequestURI && typeof url == "string" && typeof (opts?.body ?? "") == "string") {
return await this.fetchByAPI(url, localURL, method, authHeader, opts);
const headers = new Headers(opts?.headers);
if (customHeaders) {
for (const [key, value] of Object.entries(customHeaders)) {
if (key && value) {
headers.append(key, value);
}
}
}
if (!("username" in auth)) {
headers.append("authorization", authHeader);
}
// --> native Fetch API.
try {
if (customHeaders) {
for (const [key, value] of Object.entries(customHeaders)) {
if (key && value) {
(opts!.headers as Headers).append(key, value);
}
}
// // Issue #407
// (opts!.headers as Headers).append("ngrok-skip-browser-warning", "123");
}
// debugger;
if (!("username" in auth)) {
(opts!.headers as Headers).append("authorization", authHeader);
}
this.plugin.requestCount.value = this.plugin.requestCount.value + 1;
const response: Response = await fetch(url, opts);
const response: Response = await (useRequestAPI
? this._fetchByAPI(url.toString(), authHeader, { ...opts, headers })
: fetch(url, { ...opts, headers }));
if (method == "POST" || method == "PUT") {
this.last_successful_post = response.ok;
} else {
@@ -341,10 +179,17 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
return response;
} catch (ex) {
if (ex instanceof TypeError) {
if (useRequestAPI) {
this._log("Failed to request by API.");
throw ex;
}
this._log(
"Failed to fetch by native fetch API. Trying to fetch by API to get more information."
);
const resp2 = await this.fetchByAPI(url.toString(), localURL, method, authHeader, opts);
const resp2 = await this.fetchByAPI(url.toString(), localURL, method, authHeader, {
...opts,
headers,
});
if (resp2.status / 100 == 2) {
this._log(
"The request was successful by API. But the native fetch API failed! Please check CORS settings on the remote database!. While this condition, you cannot enable LiveSync",
@@ -382,7 +227,14 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
replicationFilter(db, compression);
disableEncryption();
if (passphrase !== "false" && typeof passphrase === "string") {
enableEncryption(db, passphrase, useDynamicIterationCount, false);
enableEncryption(
db,
passphrase,
useDynamicIterationCount,
false,
getPBKDF2Salt,
this.settings.E2EEAlgorithm
);
}
if (skipInfo) {
return { db: db, info: { db_name: "", doc_count: 0, update_seq: "" } };

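// A minimal sketch (an assumption, not the plugin's implementation) of the kind
// of provider the new `getPBKDF2Salt` parameter expects: an async function that
// resolves the salt bytes, memoised here so the salt is fetched once and every
// enableEncryption() consumer reuses the same value.
// `loadSalt` is a hypothetical loader used purely for illustration.
function makeSaltProvider(loadSalt: () => Promise<Uint8Array>): () => Promise<Uint8Array> {
    let cached: Promise<Uint8Array> | undefined;
    return () => {
        if (!cached) cached = loadSalt();
        return cached;
    };
}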
View File

@@ -7,6 +7,7 @@ import { uint8ArrayToHexString } from "octagonal-wheels/binary/hex";
import { parseYaml, requestUrl, stringifyYaml } from "obsidian";
import type { FilePath } from "../../lib/src/common/types.ts";
import { scheduleTask } from "octagonal-wheels/concurrency/task";
import { getFileRegExp } from "../../lib/src/common/utils.ts";
declare global {
interface LSEvents {
@@ -200,13 +201,10 @@ export class ModuleReplicateTest extends AbstractObsidianModule implements IObsi
}
async _dumpFileListIncludeHidden(outFile?: string) {
const ignorePatterns = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
const targetPatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
const out = [] as any[];
const files = await this.core.storageAccess.getFilesIncludeHidden("", undefined, ignorePatterns);
const files = await this.core.storageAccess.getFilesIncludeHidden("", targetPatterns, ignorePatterns);
// console.dir(files);
const webcrypto = await getWebCrypto();
for (const file of files) {

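// Rough sketch of the behaviour getFileRegExp centralises, inferred from the
// inline code it replaces; the real helper in lib/src/common/utils.ts may
// differ. A comma-separated pattern setting becomes case-insensitive RegExps.
function toFileRegExps(patternSetting: string): RegExp[] {
    return patternSetting
        .replace(/\n| /g, "")
        .split(",")
        .filter((e) => e)
        .map((e) => new RegExp(e, "i"));
}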
View File

@@ -13,6 +13,7 @@ import { ConflictResolveModal } from "./InteractiveConflictResolving/ConflictRes
import { AbstractObsidianModule, type IObsidianModule } from "../AbstractObsidianModule.ts";
import { displayRev, getPath, getPathWithoutPrefix } from "../../common/utils.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import { serialized } from "../../lib/src/concurrency/lock.ts";
export class ModuleInteractiveConflictResolver extends AbstractObsidianModule implements IObsidianModule {
$everyOnloadStart(): Promise<boolean> {
@@ -34,67 +35,71 @@ export class ModuleInteractiveConflictResolver extends AbstractObsidianModule im
}
async $anyResolveConflictByUI(filename: FilePathWithPrefix, conflictCheckResult: diff_result): Promise<boolean> {
this._log("Merge:open conflict dialog", LOG_LEVEL_VERBOSE);
const dialog = new ConflictResolveModal(this.app, filename, conflictCheckResult);
dialog.open();
const selected = await dialog.waitForResult();
if (selected === CANCELLED) {
// Cancelled by UI, or another conflict.
this._log(`Merge: Cancelled ${filename}`, LOG_LEVEL_INFO);
return false;
}
const testDoc = await this.localDatabase.getDBEntry(filename, { conflicts: true }, false, true, true);
if (testDoc === false) {
this._log(`Merge: Could not read ${filename} from the local database`, LOG_LEVEL_VERBOSE);
return false;
}
if (!testDoc._conflicts) {
this._log(`Merge: Nothing to do ${filename}`, LOG_LEVEL_VERBOSE);
return false;
}
const toDelete = selected;
// const toKeep = conflictCheckResult.left.rev != toDelete ? conflictCheckResult.left.rev : conflictCheckResult.right.rev;
if (toDelete === LEAVE_TO_SUBSEQUENT) {
// Concatenate both conflicted revisions.
// Create a new file by concatenating both conflicted revisions.
const p = conflictCheckResult.diff.map((e) => e[1]).join("");
const delRev = testDoc._conflicts[0];
if (!(await this.core.databaseFileAccess.storeContent(filename, p))) {
this._log(`Concatenated content cannot be stored:${filename}`, LOG_LEVEL_NOTICE);
// UI for resolving conflicts should be shown one-by-one.
return await serialized(`conflict-resolve-ui`, async () => {
this._log("Merge:open conflict dialog", LOG_LEVEL_VERBOSE);
const dialog = new ConflictResolveModal(this.app, filename, conflictCheckResult);
dialog.open();
const selected = await dialog.waitForResult();
if (selected === CANCELLED) {
// Cancelled by UI, or another conflict.
this._log(`Merge: Cancelled ${filename}`, LOG_LEVEL_INFO);
return false;
}
// 2. As usual, delete the conflicted revision and if there are no conflicts, write the resolved content to the storage.
if (
(await this.core.$$resolveConflictByDeletingRev(filename, delRev, "UI Concatenated")) ==
MISSING_OR_ERROR
) {
this._log(
`Concatenated saved, but cannot delete conflicted revisions: ${filename}, (${displayRev(delRev)})`,
LOG_LEVEL_NOTICE
);
const testDoc = await this.localDatabase.getDBEntry(filename, { conflicts: true }, false, true, true);
if (testDoc === false) {
this._log(`Merge: Could not read ${filename} from the local database`, LOG_LEVEL_VERBOSE);
return false;
}
} else if (typeof toDelete === "string") {
// Select one of the conflicted revisions to delete.
if (
(await this.core.$$resolveConflictByDeletingRev(filename, toDelete, "UI Selected")) == MISSING_OR_ERROR
) {
if (!testDoc._conflicts) {
this._log(`Merge: Nothing to do ${filename}`, LOG_LEVEL_VERBOSE);
return false;
}
const toDelete = selected;
// const toKeep = conflictCheckResult.left.rev != toDelete ? conflictCheckResult.left.rev : conflictCheckResult.right.rev;
if (toDelete === LEAVE_TO_SUBSEQUENT) {
// Concatenate both conflicted revisions.
// Create a new file by concatenating both conflicted revisions.
const p = conflictCheckResult.diff.map((e) => e[1]).join("");
const delRev = testDoc._conflicts[0];
if (!(await this.core.databaseFileAccess.storeContent(filename, p))) {
this._log(`Concatenated content cannot be stored:${filename}`, LOG_LEVEL_NOTICE);
return false;
}
// 2. As usual, delete the conflicted revision and if there are no conflicts, write the resolved content to the storage.
if (
(await this.core.$$resolveConflictByDeletingRev(filename, delRev, "UI Concatenated")) ==
MISSING_OR_ERROR
) {
this._log(
`Concatenated saved, but cannot delete conflicted revisions: ${filename}, (${displayRev(delRev)})`,
LOG_LEVEL_NOTICE
);
return false;
}
} else if (typeof toDelete === "string") {
// Select one of the conflicted revisions to delete.
if (
(await this.core.$$resolveConflictByDeletingRev(filename, toDelete, "UI Selected")) ==
MISSING_OR_ERROR
) {
this._log(`Merge: Something went wrong: ${filename}, (${toDelete})`, LOG_LEVEL_NOTICE);
return false;
}
} else {
this._log(`Merge: Something went wrong: ${filename}, (${toDelete})`, LOG_LEVEL_NOTICE);
return false;
}
} else {
this._log(`Merge: Something went wrong: ${filename}, (${toDelete})`, LOG_LEVEL_NOTICE);
// At this point, some merge has been processed.
// So we have to run replication if configured.
// TODO: Make this an event request
if (this.settings.syncAfterMerge && !this.core.$$isSuspended()) {
await this.core.$$replicateByEvent();
}
// And, check it again.
await this.core.$$queueConflictCheck(filename);
return false;
}
// At this point, some merge has been processed.
// So we have to run replication if configured.
// TODO: Make this an event request
if (this.settings.syncAfterMerge && !this.core.$$isSuspended()) {
await this.core.$$replicateByEvent();
}
// And, check it again.
await this.core.$$queueConflictCheck(filename);
return false;
});
}
async allConflictCheck() {
while (await this.pickFileForResolve());

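// Illustrative sketch of the keyed-serialisation technique used above: tasks
// registered under the same key run strictly one after another, so at most one
// conflict dialog is on screen at a time. This is a generic re-implementation
// for explanation only; the module uses `serialized` from lib/src/concurrency/lock.ts.
const tails = new Map<string, Promise<unknown>>();
function runSerialized<T>(key: string, task: () => Promise<T>): Promise<T> {
    const prev = tails.get(key) ?? Promise.resolve();
    const next = prev.then(task, task); // run regardless of the previous outcome
    tails.set(key, next.catch(() => undefined)); // keep the chain alive on errors
    return next;
}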
View File

@@ -63,6 +63,7 @@ export class ModuleLog extends AbstractObsidianModule implements IObsidianModule
statusBarLabels!: ReactiveValue<{ message: string; status: string }>;
statusLog = reactiveSource("");
activeFileStatus = reactiveSource("");
notifies: { [key: string]: { notice: Notice; count: number } } = {};
p2pLogCollector = new P2PLogCollector();
@@ -181,10 +182,11 @@ export class ModuleLog extends AbstractObsidianModule implements IObsidianModule
? `WARNING! RESTARTING OBSIDIAN IS SCHEDULED!\n`
: "";
const { message } = statusLineLabel();
const fileStatus = this.activeFileStatus.value;
const status = scheduleMessage + this.statusLog.value;
const fileStatusIcon = `${fileStatus && this.settings.hideFileWarningNotice ? " ⛔ SKIP" : ""}`;
return {
message,
message: `${message}${fileStatusIcon}`,
status,
};
});
@@ -229,7 +231,9 @@ export class ModuleLog extends AbstractObsidianModule implements IObsidianModule
return "";
}
async setFileStatus() {
this.messageArea!.innerText = await this.getActiveFileStatus();
const fileStatus = await this.getActiveFileStatus();
this.activeFileStatus.value = fileStatus;
this.messageArea!.innerText = this.settings.hideFileWarningNotice ? "" : fileStatus;
}
onActiveLeafChange() {
fireAndForget(async () => {

View File

@@ -3,6 +3,7 @@ import { type IObsidianModule, AbstractObsidianModule } from "../AbstractObsidia
import { EVENT_REQUEST_RELOAD_SETTING_TAB, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
import {
type BucketSyncSetting,
ChunkAlgorithmNames,
type ConfigPassphraseStore,
type CouchDBConnection,
DEFAULT_SETTINGS,
@@ -10,10 +11,46 @@ import {
SALT_OF_PASSPHRASE,
} from "../../lib/src/common/types";
import { LOG_LEVEL_NOTICE, LOG_LEVEL_URGENT } from "octagonal-wheels/common/logger";
import { encrypt, tryDecrypt } from "octagonal-wheels/encryption";
import { setLang } from "../../lib/src/common/i18n";
import { $msg, setLang } from "../../lib/src/common/i18n";
import { isCloudantURI } from "../../lib/src/pouchdb/utils_couchdb";
import { getLanguage } from "obsidian";
import { SUPPORTED_I18N_LANGS, type I18N_LANGS } from "../../lib/src/common/rosetta.ts";
import { decryptString, encryptString } from "@/lib/src/encryption/stringEncryption.ts";
export class ModuleObsidianSettings extends AbstractObsidianModule implements IObsidianModule {
async $everyOnLayoutReady(): Promise<boolean> {
let isChanged = false;
if (this.settings.displayLanguage == "") {
const obsidianLanguage = getLanguage();
if (
SUPPORTED_I18N_LANGS.indexOf(obsidianLanguage) !== -1 && // Check if the language is supported
obsidianLanguage != this.settings.displayLanguage && // Check if the language is different from the current setting
this.settings.displayLanguage != ""
) {
// Check if the current setting is not empty (Means migrated or installed).
this.settings.displayLanguage = obsidianLanguage as I18N_LANGS;
isChanged = true;
setLang(this.settings.displayLanguage);
} else if (this.settings.displayLanguage == "") {
this.settings.displayLanguage = "def";
setLang(this.settings.displayLanguage);
await this.core.$$saveSettingData();
}
}
if (isChanged) {
const revert = $msg("dialog.yourLanguageAvailable.btnRevertToDefault");
if (
(await this.core.confirm.askSelectStringDialogue($msg(`dialog.yourLanguageAvailable`), ["OK", revert], {
defaultAction: "OK",
title: $msg(`dialog.yourLanguageAvailable.Title`),
})) == revert
) {
this.settings.displayLanguage = "def";
setLang(this.settings.displayLanguage);
}
await this.core.$$saveSettingData();
}
return true;
}
getPassphrase(settings: ObsidianLiveSyncSettings) {
const methods: Record<ConfigPassphraseStore, () => Promise<string | false>> = {
"": () => Promise.resolve("*"),
@@ -36,7 +73,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
}
async decryptConfigurationItem(encrypted: string, passphrase: string) {
const dec = await tryDecrypt(encrypted, passphrase + SALT_OF_PASSPHRASE, false);
const dec = await decryptString(encrypted, passphrase + SALT_OF_PASSPHRASE);
if (dec) {
this.usedPassphrase = passphrase;
return dec;
@@ -46,7 +83,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
async encryptConfigurationItem(src: string, settings: ObsidianLiveSyncSettings) {
if (this.usedPassphrase != "") {
return await encrypt(src, this.usedPassphrase + SALT_OF_PASSPHRASE, false);
return await encryptString(src, this.usedPassphrase + SALT_OF_PASSPHRASE);
}
const passphrase = await this.getPassphrase(settings);
@@ -57,7 +94,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
);
return "";
}
const dec = await encrypt(src, passphrase + SALT_OF_PASSPHRASE, false);
const dec = await encryptString(src, passphrase + SALT_OF_PASSPHRASE);
if (dec) {
this.usedPassphrase = passphrase;
return dec;
@@ -102,6 +139,8 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
jwtKid: settings.jwtKid,
jwtExpDuration: settings.jwtExpDuration,
jwtSub: settings.jwtSub,
useRequestAPI: settings.useRequestAPI,
bucketPrefix: settings.bucketPrefix,
};
settings.encryptedCouchDBConnection = await this.encryptConfigurationItem(
JSON.stringify(connectionSetting),
@@ -135,18 +174,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
}
}
async $$loadSettings(): Promise<void> {
const settings = Object.assign({}, DEFAULT_SETTINGS, await this.core.loadData()) as ObsidianLiveSyncSettings;
if (typeof settings.isConfigured == "undefined") {
// If migrated, mark true
if (JSON.stringify(settings) !== JSON.stringify(DEFAULT_SETTINGS)) {
settings.isConfigured = true;
} else {
settings.additionalSuffixOfDatabaseName = this.appId;
settings.isConfigured = false;
}
}
async $$decryptSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
const passphrase = await this.getPassphrase(settings);
if (passphrase === false) {
this._log("No passphrase found for data.json! Verify configuration before syncing.", LOG_LEVEL_URGENT);
@@ -198,19 +226,62 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
}
}
}
this.settings = settings;
setLang(this.settings.displayLanguage);
return settings;
}
if ("workingEncrypt" in this.settings) delete this.settings.workingEncrypt;
if ("workingPassphrase" in this.settings) delete this.settings.workingPassphrase;
/**
* This method mutates the settings object.
* @param settings
* @returns
*/
$$adjustSettings(settings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
// Adjust settings as needed
// Delete this feature to avoid problems on mobile.
this.settings.disableRequestURI = true;
settings.disableRequestURI = true;
// GC is disabled.
this.settings.gcDelay = 0;
settings.gcDelay = 0;
// So, use history is always enabled.
this.settings.useHistory = true;
settings.useHistory = true;
if ("workingEncrypt" in settings) delete settings.workingEncrypt;
if ("workingPassphrase" in settings) delete settings.workingPassphrase;
// Splitter configurations have been replaced with chunkSplitterVersion.
if (settings.chunkSplitterVersion == "") {
if (settings.enableChunkSplitterV2) {
if (settings.useSegmenter) {
settings.chunkSplitterVersion = "v2-segmenter";
} else {
settings.chunkSplitterVersion = "v2";
}
} else {
settings.chunkSplitterVersion = "";
}
} else if (!(settings.chunkSplitterVersion in ChunkAlgorithmNames)) {
settings.chunkSplitterVersion = "";
}
return Promise.resolve(settings);
}
async $$loadSettings(): Promise<void> {
const settings = Object.assign({}, DEFAULT_SETTINGS, await this.core.loadData()) as ObsidianLiveSyncSettings;
if (typeof settings.isConfigured == "undefined") {
// If migrated, mark true
if (JSON.stringify(settings) !== JSON.stringify(DEFAULT_SETTINGS)) {
settings.isConfigured = true;
} else {
settings.additionalSuffixOfDatabaseName = this.appId;
settings.isConfigured = false;
}
}
this.settings = await this.core.$$decryptSettings(settings);
setLang(this.settings.displayLanguage);
await this.core.$$adjustSettings(this.settings);
const lsKey = "obsidian-live-sync-vaultanddevicename-" + this.core.$$getVaultName();
if (this.settings.deviceAndVaultName != "") {
@@ -234,6 +305,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
this.settings.usePluginSync = false;
}
}
// this.core.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim());
eventHub.emitEvent(EVENT_REQUEST_RELOAD_SETTING_TAB);
}

View File

@@ -7,7 +7,6 @@ import {
} from "../../lib/src/common/types.ts";
import { configURIBase, configURIBaseQR } from "../../common/types.ts";
// import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser.js";
import { decrypt, encrypt } from "../../lib/src/encryption/e2ee_v2.ts";
import { fireAndForget } from "../../lib/src/common/utils.ts";
import {
EVENT_REQUEST_COPY_SETUP_URI,
@@ -19,6 +18,8 @@ import { AbstractObsidianModule, type IObsidianModule } from "../AbstractObsidia
import { decodeAnyArray, encodeAnyArray } from "../../common/utils.ts";
import qrcode from "qrcode-generator";
import { $msg } from "../../lib/src/common/i18n.ts";
import { performDoctorConsultation, RebuildOptions } from "@/lib/src/common/configForDoc.ts";
import { encryptString, decryptString } from "@/lib/src/encryption/stringEncryption.ts";
export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsidianModule {
$everyOnload(): Promise<boolean> {
@@ -67,14 +68,13 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
const fullIndexes = Object.entries(KeyIndexOfSettings) as [keyof ObsidianLiveSyncSettings, number][];
for (const [settingKey, index] of fullIndexes) {
const settingValue = this.settings[settingKey];
if (index < 0) {
// This setting should be ignored.
continue;
}
settingArr[index] = settingValue;
}
const w = encodeAnyArray(settingArr);
// console.warn(w.length)
// console.warn(w);
// const j = decodeAnyArray(w);
// console.warn(j);
// console.warn(`is equal: ${isObjectDifferent(settingArr, j)}`);
const qr = qrcode(0, "L");
const uri = `${configURIBaseQR}${encodeURIComponent(w)}`;
qr.addData(uri);
@@ -90,6 +90,10 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
const fullIndexes = Object.entries(KeyIndexOfSettings) as [keyof ObsidianLiveSyncSettings, number][];
const newSettings = { ...DEFAULT_SETTINGS } as ObsidianLiveSyncSettings;
for (const [settingKey, index] of fullIndexes) {
if (index < 0) {
// This setting should be ignored.
continue;
}
if (index >= settingArr.length) {
// Possibly a new setting added.
continue;
@@ -98,7 +102,6 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
//@ts-ignore
newSettings[settingKey] = settingValue;
}
console.warn(newSettings);
await this.applySettingWizard(this.settings, newSettings, "QR Code");
}
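// Hypothetical miniature of the positional-index scheme used for the QR payload
// (the names below are illustrative, not the real KeyIndexOfSettings): every key
// maps to a fixed array slot, a negative index means "never exported", and
// unknown slots are skipped on decode so newer clients can add settings without
// breaking QR codes produced by older ones.
const exampleIndexMap: Record<string, number> = { syncOnStart: 0, remoteUri: 1, secretToken: -1 };
function encodeExample(settings: Record<string, unknown>): unknown[] {
    const arr: unknown[] = [];
    for (const [key, index] of Object.entries(exampleIndexMap)) {
        if (index < 0) continue; // excluded from the payload
        arr[index] = settings[key];
    }
    return arr;
}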
async command_copySetupURI(stripExtra = true) {
@@ -127,9 +130,7 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
delete setting[k];
}
}
const encryptedSetting = encodeURIComponent(
await encrypt(JSON.stringify(setting), encryptingPassphrase, false)
);
const encryptedSetting = encodeURIComponent(await encryptString(JSON.stringify(setting), encryptingPassphrase));
const uri = `${configURIBase}${encryptedSetting} `;
await navigator.clipboard.writeText(uri);
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
@@ -148,9 +149,7 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
encryptedCouchDBConnection: "",
encryptedPassphrase: "",
};
const encryptedSetting = encodeURIComponent(
await encrypt(JSON.stringify(setting), encryptingPassphrase, false)
);
const encryptedSetting = encodeURIComponent(await encryptString(JSON.stringify(setting), encryptingPassphrase));
const uri = `${configURIBase}${encryptedSetting} `;
await navigator.clipboard.writeText(uri);
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
@@ -168,6 +167,73 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
const config = decodeURIComponent(setupURI.substring(configURIBase.length));
await this.setupWizard(config);
}
async askSyncWithRemoteConfig(tryingSettings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
const buttons = {
fetch: $msg("Setup.FetchRemoteConf.Buttons.Fetch"),
no: $msg("Setup.FetchRemoteConf.Buttons.Skip"),
} as const;
const fetchRemoteConf = await this.core.confirm.askSelectStringDialogue(
$msg("Setup.FetchRemoteConf.Message"),
Object.values(buttons),
{ defaultAction: buttons.fetch, timeout: 0, title: $msg("Setup.FetchRemoteConf.Title") }
);
if (fetchRemoteConf == buttons.no) {
return tryingSettings;
}
const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
const remoteConfig = await this.core.$$fetchRemotePreferredTweakValues(newSettings);
if (remoteConfig) {
this._log("Remote configuration found.", LOG_LEVEL_NOTICE);
const resultSettings = {
...DEFAULT_SETTINGS,
...tryingSettings,
...remoteConfig,
} satisfies ObsidianLiveSyncSettings;
return resultSettings;
} else {
this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
return {
...DEFAULT_SETTINGS,
...tryingSettings,
} satisfies ObsidianLiveSyncSettings;
}
}
async askPerformDoctor(
tryingSettings: ObsidianLiveSyncSettings
): Promise<{ settings: ObsidianLiveSyncSettings; shouldRebuild: boolean; isModified: boolean }> {
const buttons = {
yes: $msg("Setup.Doctor.Buttons.Yes"),
no: $msg("Setup.Doctor.Buttons.No"),
} as const;
const performDoctor = await this.core.confirm.askSelectStringDialogue(
$msg("Setup.Doctor.Message"),
Object.values(buttons),
{ defaultAction: buttons.yes, timeout: 0, title: $msg("Setup.Doctor.Title") }
);
if (performDoctor == buttons.no) {
return { settings: tryingSettings, shouldRebuild: false, isModified: false };
}
const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
const { settings, shouldRebuild, isModified } = await performDoctorConsultation(this.core, newSettings, {
localRebuild: RebuildOptions.AutomaticAcceptable, // Because we are in the setup wizard, we can skip the confirmation.
remoteRebuild: RebuildOptions.SkipEvenIfRequired,
activateReason: "New settings from URI",
});
if (isModified) {
this._log("Doctor has fixed some issues!", LOG_LEVEL_NOTICE);
return {
settings,
shouldRebuild,
isModified,
};
} else {
this._log("Doctor detected no issues!", LOG_LEVEL_NOTICE);
return { settings: tryingSettings, shouldRebuild: false, isModified: false };
}
}
async applySettingWizard(
oldConf: ObsidianLiveSyncSettings,
newConf: ObsidianLiveSyncSettings,
@@ -178,20 +244,24 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
{}
);
if (result == "yes") {
const newSettingW = Object.assign({}, DEFAULT_SETTINGS, newConf) as ObsidianLiveSyncSettings;
let newSettingW = Object.assign({}, DEFAULT_SETTINGS, newConf) as ObsidianLiveSyncSettings;
this.core.replicator.closeReplication();
this.settings.suspendFileWatching = true;
console.dir(newSettingW);
newSettingW = await this.askSyncWithRemoteConfig(newSettingW);
const { settings, shouldRebuild, isModified } = await this.askPerformDoctor(newSettingW);
if (isModified) {
newSettingW = settings;
}
// Back into the default method once.
newSettingW.configPassphraseStore = "";
newSettingW.encryptedPassphrase = "";
newSettingW.encryptedCouchDBConnection = "";
newSettingW.additionalSuffixOfDatabaseName = `${"appId" in this.app ? this.app.appId : ""} `;
const setupJustImport = "Don't sync anything, just apply the settings.";
const setupAsNew = "This is a new client - sync everything from the remote server.";
const setupAsMerge = "This is an existing client - merge existing files with the server.";
const setupAgain = "Initialise new server data - ideal for new or broken servers.";
const setupManually = "Continue and configure manually.";
const setupJustImport = $msg("Setup.Apply.Buttons.OnlyApply");
const setupAsNew = $msg("Setup.Apply.Buttons.ApplyAndFetch");
const setupAsMerge = $msg("Setup.Apply.Buttons.ApplyAndMerge");
const setupAgain = $msg("Setup.Apply.Buttons.ApplyAndRebuild");
const setupCancel = $msg("Setup.Apply.Buttons.Cancel");
newSettingW.syncInternalFiles = false;
newSettingW.usePluginSync = false;
newSettingW.isConfigured = true;
@@ -199,11 +269,16 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
if (!newSettingW.useIndexedDBAdapter) {
newSettingW.useIndexedDBAdapter = true;
}
const warn = shouldRebuild ? $msg("Setup.Apply.WarningRebuildRecommended") : "";
const message = $msg("Setup.Apply.Message", {
method,
warn,
});
const setupType = await this.core.confirm.askSelectStringDialogue(
"How would you like to set it up?",
[setupAsNew, setupAgain, setupAsMerge, setupJustImport, setupManually],
{ defaultAction: setupAsNew }
message,
[setupAsNew, setupAsMerge, setupAgain, setupJustImport, setupCancel],
{ defaultAction: setupAsNew, title: $msg("Setup.Apply.Title", { method }), timeout: 0 }
);
if (setupType == setupJustImport) {
this.core.settings = newSettingW;
@@ -235,71 +310,11 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
await this.core.saveSettings();
this.core.$$clearUsedPassphrase();
await this.core.rebuilder.$rebuildEverything();
} else if (setupType == setupManually) {
const keepLocalDB = await this.core.confirm.askYesNoDialog("Keep local DB?", {
defaultOption: "No",
});
const keepRemoteDB = await this.core.confirm.askYesNoDialog("Keep remote DB?", {
defaultOption: "No",
});
if (keepLocalDB == "yes" && keepRemoteDB == "yes") {
// nothing to do. so peaceful.
this.core.settings = newSettingW;
this.core.$$clearUsedPassphrase();
await this.core.$allSuspendAllSync();
await this.core.$allSuspendExtraSync();
await this.core.saveSettings();
const replicate = await this.core.confirm.askYesNoDialog("Unlock and replicate?", {
defaultOption: "Yes",
});
if (replicate == "yes") {
await this.core.$$replicate(true);
await this.core.$$markRemoteUnlocked();
}
this._log("Configuration loaded.", LOG_LEVEL_NOTICE);
return;
}
if (keepLocalDB == "no" && keepRemoteDB == "no") {
const reset = await this.core.confirm.askYesNoDialog("Drop everything?", {
defaultOption: "No",
});
if (reset != "yes") {
this._log("Cancelled", LOG_LEVEL_NOTICE);
this.core.settings = oldConf;
return;
}
}
let initDB;
this.core.settings = newSettingW;
this.core.$$clearUsedPassphrase();
await this.core.saveSettings();
if (keepLocalDB == "no") {
await this.core.$$resetLocalDatabase();
await this.core.localDatabase.initializeDatabase();
const rebuild = await this.core.confirm.askYesNoDialog("Rebuild the database?", {
defaultOption: "Yes",
});
if (rebuild == "yes") {
initDB = this.core.$$initializeDatabase(true);
} else {
await this.core.$$markRemoteResolved();
}
}
if (keepRemoteDB == "no") {
await this.core.$$tryResetRemoteDatabase();
await this.core.$$markRemoteLocked();
}
if (keepLocalDB == "no" || keepRemoteDB == "no") {
const replicate = await this.core.confirm.askYesNoDialog("Replicate once?", {
defaultOption: "Yes",
});
if (replicate == "yes") {
if (initDB != null) {
await initDB;
}
await this.core.$$replicate(true);
}
}
} else {
// Explicitly cancel the operation or the dialog was closed.
this._log("Cancelled", LOG_LEVEL_NOTICE);
this.core.settings = oldConf;
return;
}
this._log("Configuration loaded.", LOG_LEVEL_NOTICE);
} else {
@@ -318,7 +333,7 @@ export class ModuleSetupObsidian extends AbstractObsidianModule implements IObsi
true
);
if (encryptingPassphrase === false) return;
const newConf = await JSON.parse(await decrypt(confString, encryptingPassphrase, false));
const newConf = await JSON.parse(await decryptString(confString, encryptingPassphrase));
if (newConf) {
await this.applySettingWizard(oldConf, newConf);
this._log("Configuration loaded.", LOG_LEVEL_NOTICE);

View File

@@ -14,14 +14,7 @@ import {
statusDisplay,
type ConfigurationItem,
} from "../../../lib/src/common/types.ts";
import {
type ObsidianLiveSyncSettingTab,
type AutoWireOption,
wrapMemo,
type OnUpdateResult,
createStub,
findAttrFromParent,
} from "./ObsidianLiveSyncSettingTab.ts";
import { createStub, type ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import {
type AllSettingItemKey,
getConfig,
@@ -31,6 +24,7 @@ import {
type AllBooleanItemKey,
} from "./settingConstants.ts";
import { $msg } from "src/lib/src/common/i18n.ts";
import { findAttrFromParent, wrapMemo, type AutoWireOption, type OnUpdateResult } from "./SettingPane.ts";
export class LiveSyncSetting extends Setting {
autoWiredComponent?: TextComponent | ToggleComponent | DropdownComponent | ButtonComponent | TextAreaComponent;
@@ -184,10 +178,10 @@ export class LiveSyncSetting extends Setting {
const conf = this.autoWireSetting(key, opt);
this.addText((text) => {
this.autoWiredComponent = text;
if (opt.clampMin) {
if (opt.clampMin !== undefined) {
text.inputEl.setAttribute("min", `${opt.clampMin}`);
}
if (opt.clampMax) {
if (opt.clampMax !== undefined) {
text.inputEl.setAttribute("max", `${opt.clampMax}`);
}
let lastError = false;
@@ -203,8 +197,8 @@ export class LiveSyncSetting extends Setting {
const value = parsedValue;
let hasError = false;
if (isNaN(value)) hasError = true;
if (opt.clampMax && opt.clampMax < value) hasError = true;
if (opt.clampMin && opt.clampMin > value) {
if (opt.clampMax !== undefined && opt.clampMax < value) hasError = true;
if (opt.clampMin !== undefined && opt.clampMin > value) {
if (opt.acceptZero && value == 0) {
// This is ok.
} else {

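// Why the checks above switched to `!== undefined`: a configured bound of 0 is
// falsy, so the old truthiness test silently ignored it. A minimal sketch:
const clampMin: number | undefined = 0;
if (clampMin) {
    // never reached: 0 is falsy, so the minimum was not applied
}
if (clampMin !== undefined) {
    // reached: the minimum is applied even when it is 0
}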
View File

@@ -1,31 +1,28 @@
<script lang="ts">
export let patterns = [] as string[];
export let originals = [] as string[];
import type { CustomRegExpSource } from "../../../lib/src/common/types";
import { isInvertedRegExp, isValidRegExp } from "../../../lib/src/common/utils";
export let apply: (args: string[]) => Promise<void> = (_: string[]) => Promise.resolve();
export let patterns = [] as CustomRegExpSource[];
export let originals = [] as CustomRegExpSource[];
export let apply: (args: CustomRegExpSource[]) => Promise<void> = (_: CustomRegExpSource[]) => Promise.resolve();
function revert() {
patterns = [...originals];
}
const CHECK_OK = "✔";
const CHECK_NG = "⚠";
const MARK_MODIFIED = "✏ ";
function checkRegExp(pattern: string) {
if (pattern.trim() == "") return "";
try {
new RegExp(pattern);
return CHECK_OK;
} catch (ex) {
return CHECK_NG;
}
function checkRegExp(pattern: CustomRegExpSource) {
return isValidRegExp(pattern) ? CHECK_OK : CHECK_NG;
}
$: statusName = patterns.map((e) => checkRegExp(e));
$: modified = patterns.map((e, i) => (e != (originals?.[i] ?? "") ? MARK_MODIFIED : ""));
$: isInvertedExp = patterns.map((e) => isInvertedRegExp(e));
function remove(idx: number) {
patterns[idx] = "";
patterns[idx] = "" as CustomRegExpSource;
}
function add() {
patterns = [...patterns, ""];
patterns = [...patterns, "" as CustomRegExpSource];
}
</script>
@@ -33,7 +30,9 @@
{#each patterns as pattern, idx}
<!-- svelte-ignore a11y-label-has-associated-control -->
<li>
<label>{modified[idx]}{statusName[idx]}</label><input type="text" bind:value={pattern} class={modified[idx]} />
<label>{modified[idx]}{statusName[idx]}</label>
<span class="chip">{isInvertedExp[idx] ? "INVERTED" : ""}</span>
<input type="text" bind:value={pattern} class={modified[idx]} />
<button class="iconbutton" on:click={() => remove(idx)}>🗑</button>
</li>
{/each}
@@ -43,8 +42,16 @@
</label>
</li>
<li class="buttons">
<button on:click={() => apply(patterns)} disabled={statusName.some((e) => e === CHECK_NG) || modified.every((e) => e === "")}>Apply </button>
<button on:click={() => revert()} disabled={statusName.some((e) => e === CHECK_NG) || modified.every((e) => e === "")}>Revert </button>
<button
on:click={() => apply(patterns)}
disabled={statusName.some((e) => e === CHECK_NG) || modified.every((e) => e === "")}
>Apply
</button>
<button
on:click={() => revert()}
disabled={statusName.some((e) => e === CHECK_NG) || modified.every((e) => e === "")}
>Revert
</button>
</li>
</ul>
@@ -85,4 +92,14 @@
button.iconbutton {
max-width: 4em;
}
.chip {
background-color: var(--tag-background);
color: var(--tag-color);
padding: var(--size-2-1) var(--size-4-1);
border-radius: 0.5em;
font-size: 0.8em;
}
.chip:empty {
display: none;
}
</style>

View File

@@ -0,0 +1,44 @@
import { ChunkAlgorithmNames } from "../../../lib/src/common/types.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
export function paneAdvanced(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement, { addPanel }: PageFunctions): void {
void addPanel(paneEl, "Memory cache").then((paneEl) => {
new Setting(paneEl).autoWireNumeric("hashCacheMaxCount", { clampMin: 10 });
// new Setting(paneEl).autoWireNumeric("hashCacheMaxAmount", { clampMin: 1 });
});
void addPanel(paneEl, "Local Database Tweak").then((paneEl) => {
paneEl.addClass("wizardHidden");
const items = ChunkAlgorithmNames;
new Setting(paneEl).autoWireDropDown("chunkSplitterVersion", {
options: items,
});
new Setting(paneEl).autoWireNumeric("customChunkSize", { clampMin: 0, acceptZero: true });
});
void addPanel(paneEl, "Transfer Tweak").then((paneEl) => {
new Setting(paneEl)
.setClass("wizardHidden")
.autoWireToggle("readChunksOnline", { onUpdate: this.onlyOnCouchDB });
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("concurrencyOfReadChunksOnline", {
clampMin: 10,
onUpdate: this.onlyOnCouchDB,
});
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("minimumIntervalOfReadChunksOnline", {
clampMin: 10,
onUpdate: this.onlyOnCouchDB,
});
// new Setting(paneEl)
// .setClass("wizardHidden")
// .autoWireToggle("sendChunksBulk", { onUpdate: onlyOnCouchDB })
// new Setting(paneEl)
// .setClass("wizardHidden")
// .autoWireNumeric("sendChunksBulkMaxSize", {
// clampMax: 100, clampMin: 1, onUpdate: onlyOnCouchDB
// })
});
}

View File

@@ -0,0 +1,33 @@
import { MarkdownRenderer } from "../../../deps.ts";
import { versionNumberString2Number } from "../../../lib/src/string_and_binary/convert.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
//@ts-ignore
const manifestVersion: string = MANIFEST_VERSION || "-";
//@ts-ignore
const updateInformation: string = UPDATE_INFO || "";
const lastVersion = ~~(versionNumberString2Number(manifestVersion) / 1000);
export function paneChangeLog(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement): void {
const informationDivEl = this.createEl(paneEl, "div", { text: "" });
const tmpDiv = createDiv();
// tmpDiv.addClass("sls-header-button");
tmpDiv.addClass("op-warn-info");
tmpDiv.innerHTML = `<p>${$msg("obsidianLiveSyncSettingTab.msgNewVersionNote")}</p><button>${$msg("obsidianLiveSyncSettingTab.optionOkReadEverything")}</button>`;
if (lastVersion > (this.editingSettings?.lastReadUpdates || 0)) {
const informationButtonDiv = informationDivEl.appendChild(tmpDiv);
informationButtonDiv.querySelector("button")?.addEventListener("click", () => {
fireAndForget(async () => {
this.editingSettings.lastReadUpdates = lastVersion;
await this.saveAllDirtySettings();
informationButtonDiv.remove();
});
});
}
fireAndForget(() =>
MarkdownRenderer.render(this.plugin.app, updateInformation, informationDivEl, "/", this.plugin)
);
}

View File

@@ -0,0 +1,77 @@
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG, eventHub } from "../../../common/events.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { enableOnly, visibleOnly } from "./SettingPane.ts";
export function paneCustomisationSync(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel }: PageFunctions
): void {
// With great respect, thank you TfTHacker!
// Refer: https://github.com/TfTHacker/obsidian42-brat/blob/main/src/features/BetaPlugins.ts
void addPanel(paneEl, "Customization Sync").then((paneEl) => {
const enableOnlyOnPluginSyncIsNotEnabled = enableOnly(() => this.isConfiguredAs("usePluginSync", false));
const visibleOnlyOnPluginSyncEnabled = visibleOnly(() => this.isConfiguredAs("usePluginSync", true));
this.createEl(
paneEl,
"div",
{
text: "Please set device name to identify this device. This name should be unique among your devices. While not configured, we cannot enable this feature.",
cls: "op-warn",
},
(c) => {},
visibleOnly(() => this.isConfiguredAs("deviceAndVaultName", ""))
);
this.createEl(
paneEl,
"div",
{
text: "We cannot change the device name while this feature is enabled. Please disable this feature to change the device name.",
cls: "op-warn-info",
},
(c) => {},
visibleOnly(() => this.isConfiguredAs("usePluginSync", true))
);
new Setting(paneEl).autoWireText("deviceAndVaultName", {
placeHolder: "desktop",
onUpdate: enableOnlyOnPluginSyncIsNotEnabled,
});
new Setting(paneEl).autoWireToggle("usePluginSyncV2");
new Setting(paneEl).autoWireToggle("usePluginSync", {
onUpdate: enableOnly(() => !this.isConfiguredAs("deviceAndVaultName", "")),
});
new Setting(paneEl).autoWireToggle("autoSweepPlugins", {
onUpdate: visibleOnlyOnPluginSyncEnabled,
});
new Setting(paneEl).autoWireToggle("autoSweepPluginsPeriodic", {
onUpdate: visibleOnly(
() => this.isConfiguredAs("usePluginSync", true) && this.isConfiguredAs("autoSweepPlugins", true)
),
});
new Setting(paneEl).autoWireToggle("notifyPluginOrSettingUpdated", {
onUpdate: visibleOnlyOnPluginSyncEnabled,
});
new Setting(paneEl)
.setName("Open")
.setDesc("Open the dialog")
.addButton((button) => {
button
.setButtonText("Open")
.setDisabled(false)
.onClick(() => {
// this.plugin.getAddOn<ConfigSync>(ConfigSync.name)?.showPluginSyncModal();
// this.plugin.addOnConfigSync.showPluginSyncModal();
eventHub.emitEvent(EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG);
});
})
.addOnUpdate(visibleOnlyOnPluginSyncEnabled);
});
}

View File

@@ -0,0 +1,45 @@
import { $msg, $t } from "../../../lib/src/common/i18n.ts";
import { SUPPORTED_I18N_LANGS, type I18N_LANGS } from "../../../lib/src/common/rosetta.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { visibleOnly } from "./SettingPane.ts";
export function paneGeneral(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel, addPane }: PageFunctions
): void {
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleAppearance")).then((paneEl) => {
const languages = Object.fromEntries([
// ["", $msg("obsidianLiveSyncSettingTab.defaultLanguage")],
...SUPPORTED_I18N_LANGS.map((e) => [e, $t(`lang-${e}`)]),
]) as Record<I18N_LANGS, string>;
new Setting(paneEl).autoWireDropDown("displayLanguage", {
options: languages,
});
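// Re-render the settings tab so the labels switch to the newly selected language immediately.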
this.addOnSaved("displayLanguage", () => this.display());
new Setting(paneEl).autoWireToggle("showStatusOnEditor");
new Setting(paneEl).autoWireToggle("showOnlyIconsOnEditor", {
onUpdate: visibleOnly(() => this.isConfiguredAs("showStatusOnEditor", true)),
});
new Setting(paneEl).autoWireToggle("showStatusOnStatusbar");
new Setting(paneEl).autoWireToggle("hideFileWarningNotice");
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleLogging")).then((paneEl) => {
paneEl.addClass("wizardHidden");
new Setting(paneEl).autoWireToggle("lessInformationInLog");
new Setting(paneEl).autoWireToggle("showVerboseLog", {
onUpdate: visibleOnly(() => this.isConfiguredAs("lessInformationInLog", false)),
});
});
new Setting(paneEl).setClass("wizardOnly").addButton((button) =>
button
.setButtonText($msg("obsidianLiveSyncSettingTab.btnNext"))
.setCta()
.onClick(() => {
this.changeDisplay("0");
})
);
}

View File

@@ -0,0 +1,531 @@
import { stringifyYaml } from "../../../deps.ts";
import {
type ObsidianLiveSyncSettings,
type FilePathWithPrefix,
type DocumentID,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type LoadedEntry,
REMOTE_COUCHDB,
REMOTE_MINIO,
type MetaEntry,
type FilePath,
DEFAULT_SETTINGS,
} from "../../../lib/src/common/types.ts";
import {
createBlob,
getFileRegExp,
isDocContentSame,
parseHeaderValues,
readAsBlob,
} from "../../../lib/src/common/utils.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { isCloudantURI } from "../../../lib/src/pouchdb/utils_couchdb.ts";
import { getPath, requestToCouchDBWithCredentials } from "../../../common/utils.ts";
import { addPrefix, shouldBeIgnored, stripAllPrefixes } from "../../../lib/src/string_and_binary/path.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { EVENT_REQUEST_RUN_DOCTOR, eventHub } from "../../../common/events.ts";
import { ICHeader, ICXHeader, PSCHeader } from "../../../common/types.ts";
import { HiddenFileSync } from "../../../features/HiddenFileSync/CmdHiddenFileSync.ts";
import { EVENT_REQUEST_SHOW_HISTORY } from "../../../common/obsidianEvents.ts";
import { generateCredentialObject } from "../../../lib/src/replication/httplib.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
export function paneHatch(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement, { addPanel }: PageFunctions): void {
// const hatchWarn = this.createEl(paneEl, "div", { text: `To stop the boot up sequence for fixing problems on databases, you can put redflag.md on top of your vault (Rebooting obsidian is required).` });
// hatchWarn.addClass("op-warn-info");
void addPanel(paneEl, $msg("Setting.TroubleShooting")).then((paneEl) => {
new Setting(paneEl)
.setName($msg("Setting.TroubleShooting.Doctor"))
.setDesc($msg("Setting.TroubleShooting.Doctor.Desc"))
.addButton((button) =>
button
.setButtonText("Run Doctor")
.setCta()
.setDisabled(false)
.onClick(() => {
this.closeSetting();
eventHub.emitEvent(EVENT_REQUEST_RUN_DOCTOR, "you wanted(Thank you)!");
})
);
new Setting(paneEl).setName("Prepare the 'report' to create an issue").addButton((button) =>
button
.setButtonText("Copy Report to clipboard")
.setCta()
.setDisabled(false)
.onClick(async () => {
let responseConfig: any = {};
const REDACTED = "𝑅𝐸𝐷𝐴𝐶𝑇𝐸𝐷";
if (this.editingSettings.remoteType == REMOTE_COUCHDB) {
try {
const credential = generateCredentialObject(this.editingSettings);
const customHeaders = parseHeaderValues(this.editingSettings.couchDB_CustomHeaders);
const r = await requestToCouchDBWithCredentials(
this.editingSettings.couchDB_URI,
credential,
window.origin,
undefined,
undefined,
undefined,
customHeaders
);
Logger(JSON.stringify(r.json, null, 2));
responseConfig = r.json;
responseConfig["couch_httpd_auth"].secret = REDACTED;
responseConfig["couch_httpd_auth"].authentication_db = REDACTED;
responseConfig["couch_httpd_auth"].authentication_redirect = REDACTED;
responseConfig["couchdb"].uuid = REDACTED;
responseConfig["admins"] = REDACTED;
delete responseConfig["jwt_keys"];
if ("secret" in responseConfig["chttpd_auth"])
responseConfig["chttpd_auth"].secret = REDACTED;
} catch (ex) {
Logger(ex, LOG_LEVEL_VERBOSE);
responseConfig = {
error: "Requesting information from the remote CouchDB has failed. If you are using IBM Cloudant, this is normal behaviour.",
};
}
} else if (this.editingSettings.remoteType == REMOTE_MINIO) {
responseConfig = { error: "Object Storage Synchronisation" };
//
}
const defaultKeys = Object.keys(DEFAULT_SETTINGS) as (keyof ObsidianLiveSyncSettings)[];
const pluginConfig = JSON.parse(JSON.stringify(this.editingSettings)) as ObsidianLiveSyncSettings;
const pluginKeys = Object.keys(pluginConfig);
for (const key of pluginKeys) {
if (defaultKeys.includes(key as any)) continue;
delete pluginConfig[key as keyof ObsidianLiveSyncSettings];
}
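// Redact credentials and other identifying values before the report leaves the device.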
pluginConfig.couchDB_DBNAME = REDACTED;
pluginConfig.couchDB_PASSWORD = REDACTED;
const scheme = pluginConfig.couchDB_URI.startsWith("http:")
? "(HTTP)"
: pluginConfig.couchDB_URI.startsWith("https:")
? "(HTTPS)"
: "";
pluginConfig.couchDB_URI = isCloudantURI(pluginConfig.couchDB_URI)
? "cloudant"
: `self-hosted${scheme}`;
pluginConfig.couchDB_USER = REDACTED;
pluginConfig.passphrase = REDACTED;
pluginConfig.encryptedPassphrase = REDACTED;
pluginConfig.encryptedCouchDBConnection = REDACTED;
pluginConfig.accessKey = REDACTED;
pluginConfig.secretKey = REDACTED;
const redact = (source: string) => `${REDACTED}(${source.length} letters)`;
pluginConfig.region = redact(pluginConfig.region);
pluginConfig.bucket = redact(pluginConfig.bucket);
pluginConfig.pluginSyncExtendedSetting = {};
pluginConfig.P2P_AppID = redact(pluginConfig.P2P_AppID);
pluginConfig.P2P_passphrase = redact(pluginConfig.P2P_passphrase);
pluginConfig.P2P_roomID = redact(pluginConfig.P2P_roomID);
pluginConfig.P2P_relays = redact(pluginConfig.P2P_relays);
pluginConfig.jwtKey = redact(pluginConfig.jwtKey);
pluginConfig.jwtSub = redact(pluginConfig.jwtSub);
pluginConfig.jwtKid = redact(pluginConfig.jwtKid);
pluginConfig.bucketCustomHeaders = redact(pluginConfig.bucketCustomHeaders);
pluginConfig.couchDB_CustomHeaders = redact(pluginConfig.couchDB_CustomHeaders);
const endpoint = pluginConfig.endpoint;
if (endpoint == "") {
pluginConfig.endpoint = "Not configured or AWS";
} else {
const endpointScheme = pluginConfig.endpoint.startsWith("http:")
? "(HTTP)"
: pluginConfig.endpoint.startsWith("https:")
? "(HTTPS)"
: "";
pluginConfig.endpoint = `${endpoint.indexOf(".r2.cloudflarestorage.") !== -1 ? "R2" : "self-hosted?"}(${endpointScheme})`;
}
const obsidianInfo = {
navigator: navigator.userAgent,
fileSystem: this.plugin.$$isStorageInsensitive() ? "insensitive" : "sensitive",
};
const msgConfig = `# ---- Obsidian info ----
${stringifyYaml(obsidianInfo)}
---
# ---- remote config ----
${stringifyYaml(responseConfig)}
---
# ---- Plug-in config ----
${stringifyYaml({
version: this.manifestVersion,
...pluginConfig,
})}`;
console.log(msgConfig);
await navigator.clipboard.writeText(msgConfig);
Logger(
`Generated report has been copied to clipboard. Please report the issue with this! Thank you for your cooperation!`,
LOG_LEVEL_NOTICE
);
})
);
new Setting(paneEl).autoWireToggle("writeLogToTheFile");
});
void addPanel(paneEl, "Scram Switches").then((paneEl) => {
new Setting(paneEl).autoWireToggle("suspendFileWatching");
this.addOnSaved("suspendFileWatching", () => this.plugin.$$askReload());
new Setting(paneEl).autoWireToggle("suspendParseReplicationResult");
this.addOnSaved("suspendParseReplicationResult", () => this.plugin.$$askReload());
});
void addPanel(paneEl, "Recovery and Repair").then((paneEl) => {
const addResult = async (path: string, file: FilePathWithPrefix | false, fileOnDB: LoadedEntry | false) => {
const storageFileStat = file ? await this.plugin.storageAccess.statHidden(file) : null;
resultArea.appendChild(
this.createEl(resultArea, "div", {}, (el) => {
el.appendChild(this.createEl(el, "h6", { text: path }));
el.appendChild(
this.createEl(el, "div", {}, (infoGroupEl) => {
infoGroupEl.appendChild(
this.createEl(infoGroupEl, "div", {
text: `Storage : Modified: ${!storageFileStat ? `Missing:` : `${new Date(storageFileStat.mtime).toLocaleString()}, Size:${storageFileStat.size}`}`,
})
);
infoGroupEl.appendChild(
this.createEl(infoGroupEl, "div", {
text: `Database: Modified: ${!fileOnDB ? `Missing:` : `${new Date(fileOnDB.mtime).toLocaleString()}, Size:${fileOnDB.size}`}`,
})
);
})
);
if (fileOnDB && file) {
el.appendChild(
this.createEl(el, "button", { text: "Show history" }, (buttonEl) => {
buttonEl.onClickEvent(() => {
eventHub.emitEvent(EVENT_REQUEST_SHOW_HISTORY, {
file: file,
fileOnDB: fileOnDB,
});
});
})
);
}
if (file) {
el.appendChild(
this.createEl(el, "button", { text: "Storage -> Database" }, (buttonEl) => {
buttonEl.onClickEvent(async () => {
if (file.startsWith(".")) {
const addOn = this.plugin.getAddOn<HiddenFileSync>(HiddenFileSync.name);
if (addOn) {
const file = (await addOn.scanInternalFiles()).find((e) => e.path == path);
if (!file) {
Logger(
`Failed to find the file in the internal files: ${path}`,
LOG_LEVEL_NOTICE
);
return;
}
if (!(await addOn.storeInternalFileToDatabase(file, true))) {
Logger(
`Failed to store the file to the database (Hidden file): ${path}`,
LOG_LEVEL_NOTICE
);
return;
}
}
} else {
if (!(await this.plugin.fileHandler.storeFileToDB(file as FilePath, true))) {
Logger(
`Failed to store the file to the database: ${file}`,
LOG_LEVEL_NOTICE
);
return;
}
}
el.remove();
});
})
);
}
if (fileOnDB) {
el.appendChild(
this.createEl(el, "button", { text: "Database -> Storage" }, (buttonEl) => {
buttonEl.onClickEvent(async () => {
if (fileOnDB.path.startsWith(ICHeader)) {
const addOn = this.plugin.getAddOn<HiddenFileSync>(HiddenFileSync.name);
if (addOn) {
if (
!(await addOn.extractInternalFileFromDatabase(path as FilePath, true))
) {
Logger(
`Failed to extract the file from the database (Hidden file): ${path}`,
LOG_LEVEL_NOTICE
);
return;
}
}
} else {
if (
!(await this.plugin.fileHandler.dbToStorage(
fileOnDB as MetaEntry,
null,
true
))
) {
Logger(
`Failed to store the file to the storage: ${fileOnDB.path}`,
LOG_LEVEL_NOTICE
);
return;
}
}
el.remove();
});
})
);
}
return el;
})
);
};
const checkBetweenStorageAndDatabase = async (file: FilePathWithPrefix, fileOnDB: LoadedEntry) => {
const dataContent = readAsBlob(fileOnDB);
const content = createBlob(await this.plugin.storageAccess.readHiddenFileBinary(file));
if (await isDocContentSame(content, dataContent)) {
Logger(`Compare: SAME: ${file}`);
} else {
Logger(`Compare: CONTENT IS NOT MATCHED! ${file}`, LOG_LEVEL_NOTICE);
void addResult(file, file, fileOnDB);
}
};
new Setting(paneEl)
.setName("Recreate missing chunks for all files")
.setDesc("This will recreate chunks for all files. If there were missing chunks, this may fix the errors.")
.addButton((button) =>
button
.setButtonText("Recreate all")
.setCta()
.onClick(async () => {
await this.plugin.fileHandler.createAllChunks(true);
})
);
new Setting(paneEl)
.setName("Resolve All conflicted files by the newer one")
.setDesc(
"Resolve all conflicted files by the newer one. Caution: This will overwrite the older one, and cannot resurrect the overwritten one."
)
.addButton((button) =>
button
.setButtonText("Resolve All")
.setCta()
.onClick(async () => {
await this.plugin.rebuilder.resolveAllConflictedFilesByNewerOnes();
})
);
new Setting(paneEl)
.setName("Verify and repair all files")
.setDesc(
"Compare the content of files between on local database and storage. If not matched, you will be asked which one you want to keep."
)
.addButton((button) =>
button
.setButtonText("Verify all")
.setDisabled(false)
.setCta()
.onClick(async () => {
Logger("Start verifying all files", LOG_LEVEL_NOTICE, "verify");
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
const targetPatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
this.plugin.localDatabase.clearCaches();
Logger("Start verifying all files", LOG_LEVEL_NOTICE, "verify");
const files = this.plugin.settings.syncInternalFiles
? await this.plugin.storageAccess.getFilesIncludeHidden("/", targetPatterns, ignorePatterns)
: await this.plugin.storageAccess.getFileNames();
const documents = [] as FilePath[];
const adn = this.plugin.localDatabase.findAllDocs();
for await (const i of adn) {
const path = getPath(i);
if (path.startsWith(ICXHeader)) continue;
if (path.startsWith(PSCHeader)) continue;
if (!this.plugin.settings.syncInternalFiles && path.startsWith(ICHeader)) continue;
documents.push(stripAllPrefixes(path));
}
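// Union of the paths found in storage and in the local database; a path present on only one side is reported below.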
const allPaths = [...new Set([...documents, ...files])];
let i = 0;
const incProc = () => {
i++;
if (i % 25 == 0)
Logger(
`Checking ${i}/${allPaths.length} files \n`,
LOG_LEVEL_NOTICE,
"verify-processed"
);
};
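// Compare up to 10 files concurrently.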
const semaphore = Semaphore(10);
const processes = allPaths.map(async (path) => {
try {
if (shouldBeIgnored(path)) {
return incProc();
}
const stat = (await this.plugin.storageAccess.isExistsIncludeHidden(path))
? await this.plugin.storageAccess.statHidden(path)
: false;
const fileOnStorage = stat != null ? stat : false;
if (!(await this.plugin.$$isTargetFile(path))) return incProc();
if (fileOnStorage && this.plugin.$$isFileSizeExceeded(fileOnStorage.size))
return incProc();
// Acquire the semaphore slot only after the cheap checks, so an early return cannot leak it.
const releaser = await semaphore.acquire(1);
try {
const isHiddenFile = path.startsWith(".");
const dbPath = isHiddenFile ? addPrefix(path, ICHeader) : path;
const fileOnDB = await this.plugin.localDatabase.getDBEntry(dbPath);
if (fileOnDB && this.plugin.$$isFileSizeExceeded(fileOnDB.size)) return incProc();
if (!fileOnDB && fileOnStorage) {
Logger(`Compare: Not found on the local database: ${path}`, LOG_LEVEL_NOTICE);
void addResult(path, path, false);
return incProc();
}
if (fileOnDB && !fileOnStorage) {
Logger(`Compare: Not found on the storage: ${path}`, LOG_LEVEL_NOTICE);
void addResult(path, false, fileOnDB);
return incProc();
}
if (fileOnStorage && fileOnDB) {
await checkBetweenStorageAndDatabase(path, fileOnDB);
}
} catch (ex) {
Logger(`Error while processing ${path}`, LOG_LEVEL_NOTICE);
Logger(ex, LOG_LEVEL_VERBOSE);
} finally {
releaser();
incProc();
}
} catch (ex) {
Logger(`Error while processing without semaphore ${path}`, LOG_LEVEL_NOTICE);
Logger(ex, LOG_LEVEL_VERBOSE);
}
});
await Promise.all(processes);
Logger("done", LOG_LEVEL_NOTICE, "verify");
// Logger(`${i}/${files.length}\n`, LOG_LEVEL_NOTICE, "verify-processed");
})
);
const resultArea = paneEl.createDiv({ text: "" });
new Setting(paneEl)
.setName("Check and convert non-path-obfuscated files")
.setDesc("")
.addButton((button) =>
button
.setButtonText("Perform")
.setDisabled(false)
.setWarning()
.onClick(async () => {
for await (const docName of this.plugin.localDatabase.findAllDocNames()) {
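// Document IDs that are still plain paths (no "f:" prefix) presumably predate path obfuscation; convert them to the obfuscated ID.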
if (!docName.startsWith("f:")) {
const idEncoded = await this.plugin.$$path2id(docName as FilePathWithPrefix);
const doc = await this.plugin.localDatabase.getRaw(docName as DocumentID);
if (!doc) continue;
if (doc.type != "newnote" && doc.type != "plain") {
continue;
}
if (doc?.deleted ?? false) continue;
const newDoc = { ...doc };
//Prepare converted data
newDoc._id = idEncoded;
newDoc.path = docName as FilePathWithPrefix;
// @ts-ignore
delete newDoc._rev;
try {
const obfuscatedDoc = await this.plugin.localDatabase.getRaw(idEncoded, {
revs_info: true,
});
// Unfortunately we have to delete one of them.
// Just now, save it as a conflicted document.
obfuscatedDoc._revs_info?.shift(); // Drop latest revision.
const previousRev = obfuscatedDoc._revs_info?.shift(); // Use second revision.
if (previousRev) {
newDoc._rev = previousRev.rev;
} else {
//If there are no revisions, set the possibly unique one
newDoc._rev =
"1-" +
`00000000000000000000000000000000${~~(Math.random() * 1e9)}${~~(Math.random() * 1e9)}${~~(Math.random() * 1e9)}${~~(Math.random() * 1e9)}`.slice(
-32
);
}
const ret = await this.plugin.localDatabase.putRaw(newDoc, { force: true });
if (ret.ok) {
Logger(
`${docName} has been converted into a conflicted document`,
LOG_LEVEL_NOTICE
);
doc._deleted = true;
if ((await this.plugin.localDatabase.putRaw(doc)).ok) {
Logger(`Old ${docName} has been deleted`, LOG_LEVEL_NOTICE);
}
await this.plugin.$$queueConflictCheckIfOpen(docName as FilePathWithPrefix);
} else {
Logger(`Converting ${docName} Failed!`, LOG_LEVEL_NOTICE);
Logger(ret, LOG_LEVEL_VERBOSE);
}
} catch (ex: any) {
if (ex?.status == 404) {
// We can perform this safely
if ((await this.plugin.localDatabase.putRaw(newDoc)).ok) {
Logger(`${docName} has been converted`, LOG_LEVEL_NOTICE);
doc._deleted = true;
if ((await this.plugin.localDatabase.putRaw(doc)).ok) {
Logger(`Old ${docName} has been deleted`, LOG_LEVEL_NOTICE);
}
}
} else {
Logger(`Something went wrong while converting ${docName}`, LOG_LEVEL_NOTICE);
Logger(ex, LOG_LEVEL_VERBOSE);
// Something wrong.
}
}
}
}
Logger(`Converting finished`, LOG_LEVEL_NOTICE);
})
);
});
void addPanel(paneEl, "Reset").then((paneEl) => {
new Setting(paneEl).setName("Back to non-configured").addButton((button) =>
button
.setButtonText("Back")
.setDisabled(false)
.onClick(async () => {
this.editingSettings.isConfigured = false;
await this.saveAllDirtySettings();
this.plugin.$$askReload();
})
);
new Setting(paneEl).setName("Delete all customization sync data").addButton((button) =>
button
.setButtonText("Delete")
.setDisabled(false)
.setWarning()
.onClick(async () => {
Logger(`Deleting customization sync data`, LOG_LEVEL_NOTICE);
const entriesToDelete = await this.plugin.localDatabase.allDocsRaw({
startkey: "ix:",
endkey: "ix:\u{10ffff}",
include_docs: true,
});
const newData = entriesToDelete.rows.map((e) => ({
...e.doc,
_deleted: true,
}));
const r = await this.plugin.localDatabase.bulkDocsRaw(newData as any[]);
// Do not care about the result.
Logger(
`${r.length} items have been removed. To confirm how many items remain, please perform this again.`,
LOG_LEVEL_NOTICE
);
})
);
});
}

View File

@@ -0,0 +1,374 @@
import { LocalDatabaseMaintenance } from "../../../features/LocalDatabaseMainte/CmdLocalDatabaseMainte.ts";
import { LOG_LEVEL_NOTICE, Logger } from "../../../lib/src/common/logger.ts";
import { FLAGMD_REDFLAG, FLAGMD_REDFLAG2_HR, FLAGMD_REDFLAG3_HR } from "../../../lib/src/common/types.ts";
import { fireAndForget } from "../../../lib/src/common/utils.ts";
import { LiveSyncCouchDBReplicator } from "../../../lib/src/replication/couchdb/LiveSyncReplicator.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab";
import { visibleOnly, type PageFunctions } from "./SettingPane";
export function paneMaintenance(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel }: PageFunctions
): void {
const isRemoteLockedAndDeviceNotAccepted = () => this.plugin?.replicator?.remoteLockedAndDeviceNotAccepted;
const isRemoteLocked = () => this.plugin?.replicator?.remoteLocked;
// if (this.plugin?.replicator?.remoteLockedAndDeviceNotAccepted) {
this.createEl(
paneEl,
"div",
{
text: "The remote database is locked for synchronization to prevent vault corruption because this device isn't marked as 'resolved'. Please backup your vault, reset the local database, and select 'Mark this device as resolved'. This warning will persist until the device is confirmed as resolved by replication.",
cls: "op-warn",
},
(c) => {
this.createEl(
c,
"button",
{
text: "I've made a backup, mark this device 'resolved'",
cls: "mod-warning",
},
(e) => {
e.addEventListener("click", () => {
fireAndForget(async () => {
await this.plugin.$$markRemoteResolved();
this.display();
});
});
}
);
},
visibleOnly(isRemoteLockedAndDeviceNotAccepted)
);
this.createEl(
paneEl,
"div",
{
text: "To prevent unwanted vault corruption, the remote database has been locked for synchronization. (This device is marked 'resolved') When all your devices are marked 'resolved', unlock the database. This warning kept showing until confirming the device is resolved by the replication",
cls: "op-warn",
},
(c) =>
this.createEl(
c,
"button",
{
text: "I'm ready, unlock the database",
cls: "mod-warning",
},
(e) => {
e.addEventListener("click", () => {
fireAndForget(async () => {
await this.plugin.$$markRemoteUnlocked();
this.display();
});
});
}
),
visibleOnly(isRemoteLocked)
);
void addPanel(paneEl, "Scram!").then((paneEl) => {
new Setting(paneEl)
.setName("Lock Server")
.setDesc("Lock the remote server to prevent synchronization with other devices.")
.addButton((button) =>
button
.setButtonText("Lock")
.setDisabled(false)
.setWarning()
.onClick(async () => {
await this.plugin.$$markRemoteLocked();
})
)
.addOnUpdate(this.onlyOnCouchDBOrMinIO);
new Setting(paneEl)
.setName("Emergency restart")
.setDesc("Disables all synchronization and restart.")
.addButton((button) =>
button
.setButtonText("Flag and restart")
.setDisabled(false)
.setWarning()
.onClick(async () => {
await this.plugin.storageAccess.writeFileAuto(FLAGMD_REDFLAG, "");
this.plugin.$$performRestart();
})
);
});
void addPanel(paneEl, "Syncing", () => {}, this.onlyOnCouchDBOrMinIO).then((paneEl) => {
new Setting(paneEl)
.setName("Resend")
.setDesc("Resend all chunks to the remote.")
.addButton((button) =>
button
.setButtonText("Send chunks")
.setWarning()
.setDisabled(false)
.onClick(async () => {
if (this.plugin.replicator instanceof LiveSyncCouchDBReplicator) {
await this.plugin.replicator.sendChunks(this.plugin.settings, undefined, true, 0);
}
})
)
.addOnUpdate(this.onlyOnCouchDB);
new Setting(paneEl)
.setName("Reset journal received history")
.setDesc(
"Initialise journal received history. On the next sync, every item except this device sent will be downloaded again."
)
.addButton((button) =>
button
.setButtonText("Reset received")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.getMinioJournalSyncClient().updateCheckPointInfo((info) => ({
...info,
receivedFiles: new Set(),
knownIDs: new Set(),
}));
Logger(`Journal received history has been cleared.`, LOG_LEVEL_NOTICE);
})
)
.addOnUpdate(this.onlyOnMinIO);
new Setting(paneEl)
.setName("Reset journal sent history")
.setDesc(
"Initialise journal sent history. On the next sync, every item except this device received will be sent again."
)
.addButton((button) =>
button
.setButtonText("Reset sent history")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.getMinioJournalSyncClient().updateCheckPointInfo((info) => ({
...info,
lastLocalSeq: 0,
sentIDs: new Set(),
sentFiles: new Set(),
}));
Logger(`Journal sent history has been cleared.`, LOG_LEVEL_NOTICE);
})
)
.addOnUpdate(this.onlyOnMinIO);
});
void addPanel(paneEl, "Garbage Collection (Beta)", (e) => e, this.onlyOnP2POrCouchDB).then((paneEl) => {
new Setting(paneEl)
.setName("Remove all orphaned chunks")
.setDesc("Remove all orphaned chunks from the local database.")
.addButton((button) =>
button
.setButtonText("Remove")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.removeUnusedChunks();
})
);
new Setting(paneEl)
.setName("Resurrect deleted chunks")
.setDesc(
"If you have deleted chunks before fully synchronised and missed some chunks, you possibly can resurrect them."
)
.addButton((button) =>
button
.setButtonText("Try resurrect")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.resurrectChunks();
})
);
new Setting(paneEl)
.setName("Commit File Deletion")
.setDesc("Completely delete all deleted documents from the local database.")
.addButton((button) =>
button
.setButtonText("Delete")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.commitFileDeletion();
})
);
});
void addPanel(paneEl, "Rebuilding Operations (Local)").then((paneEl) => {
new Setting(paneEl)
.setName("Fetch from remote")
.setDesc("Restore or reconstruct local database from remote.")
.addButton((button) =>
button
.setButtonText("Fetch")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin.storageAccess.writeFileAuto(FLAGMD_REDFLAG3_HR, "");
this.plugin.$$performRestart();
})
)
.addButton((button) =>
button
.setButtonText("Fetch w/o restarting")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.rebuildDB("localOnly");
})
);
new Setting(paneEl)
.setName("Fetch rebuilt DB (Save local documents before)")
.setDesc("Restore or reconstruct local database from remote database but use local chunks.")
.addButton((button) =>
button
.setButtonText("Save and Fetch")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.rebuildDB("localOnlyWithChunks");
})
)
.addOnUpdate(this.onlyOnCouchDB);
});
void addPanel(paneEl, "Total Overhaul", () => {}, this.onlyOnCouchDBOrMinIO).then((paneEl) => {
new Setting(paneEl)
.setName("Rebuild everything")
.setDesc("Rebuild local and remote database with local files.")
.addButton((button) =>
button
.setButtonText("Rebuild")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin.storageAccess.writeFileAuto(FLAGMD_REDFLAG2_HR, "");
this.plugin.$$performRestart();
})
)
.addButton((button) =>
button
.setButtonText("Rebuild w/o restarting")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.rebuildDB("rebuildBothByThisDevice");
})
);
});
void addPanel(paneEl, "Rebuilding Operations (Remote Only)", () => {}, this.onlyOnCouchDBOrMinIO).then((paneEl) => {
new Setting(paneEl)
.setName("Perform cleanup")
.setDesc(
"Reduces storage space by discarding all non-latest revisions. This requires the same amount of free space on the remote server and the local client."
)
.addButton((button) =>
button
.setButtonText("Perform")
.setDisabled(false)
.onClick(async () => {
const replicator = this.plugin.replicator as LiveSyncCouchDBReplicator;
Logger(`Cleanup has begun`, LOG_LEVEL_NOTICE, "compaction");
if (await replicator.compactRemote(this.editingSettings)) {
Logger(`Cleanup has been completed!`, LOG_LEVEL_NOTICE, "compaction");
} else {
Logger(`Cleanup has failed!`, LOG_LEVEL_NOTICE, "compaction");
}
})
)
.addOnUpdate(this.onlyOnCouchDB);
new Setting(paneEl)
.setName("Overwrite remote")
.setDesc("Overwrite remote with local DB and passphrase.")
.addButton((button) =>
button
.setButtonText("Send")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.rebuildDB("remoteOnly");
})
);
new Setting(paneEl)
.setName("Reset all journal counter")
.setDesc("Initialise all journal history, On the next sync, every item will be received and sent.")
.addButton((button) =>
button
.setButtonText("Reset all")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.getMinioJournalSyncClient().resetCheckpointInfo();
Logger(`Journal exchange history has been cleared.`, LOG_LEVEL_NOTICE);
})
)
.addOnUpdate(this.onlyOnMinIO);
new Setting(paneEl)
.setName("Purge all journal counter")
.setDesc("Purge all download/upload cache.")
.addButton((button) =>
button
.setButtonText("Reset all")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.getMinioJournalSyncClient().resetAllCaches();
Logger(`Journal download/upload cache has been cleared.`, LOG_LEVEL_NOTICE);
})
)
.addOnUpdate(this.onlyOnMinIO);
new Setting(paneEl)
.setName("Fresh Start Wipe")
.setDesc("Delete all data on the remote server.")
.addButton((button) =>
button
.setButtonText("Delete")
.setWarning()
.setDisabled(false)
.onClick(async () => {
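// Clear every journal checkpoint first, then wipe the remote bucket itself.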
await this.getMinioJournalSyncClient().updateCheckPointInfo((info) => ({
...info,
receivedFiles: new Set(),
knownIDs: new Set(),
lastLocalSeq: 0,
sentIDs: new Set(),
sentFiles: new Set(),
}));
await this.resetRemoteBucket();
Logger(`Deleted all data on remote server`, LOG_LEVEL_NOTICE);
})
)
.addOnUpdate(this.onlyOnMinIO);
});
void addPanel(paneEl, "Reset").then((paneEl) => {
new Setting(paneEl)
.setName("Delete local database to reset or uninstall Self-hosted LiveSync")
.addButton((button) =>
button
.setButtonText("Delete")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin.$$resetLocalDatabase();
await this.plugin.$$initializeDatabase();
})
);
});
}

View File

@@ -0,0 +1,112 @@
import {
E2EEAlgorithmNames,
E2EEAlgorithms,
type HashAlgorithm,
LOG_LEVEL_NOTICE,
} from "../../../lib/src/common/types.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { visibleOnly } from "./SettingPane.ts";
export function panePatches(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement, { addPanel }: PageFunctions): void {
void addPanel(paneEl, "Compatibility (Metadata)").then((paneEl) => {
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("deleteMetadataOfDeletedFiles");
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("automaticallyDeleteMetadataOfDeletedFiles", {
onUpdate: visibleOnly(() => this.isConfiguredAs("deleteMetadataOfDeletedFiles", true)),
});
});
void addPanel(paneEl, "Compatibility (Conflict Behaviour)").then((paneEl) => {
paneEl.addClass("wizardHidden");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("disableMarkdownAutoMerge");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("writeDocumentsIfConflicted");
});
void addPanel(paneEl, "Compatibility (Database structure)").then((paneEl) => {
new Setting(paneEl).autoWireToggle("useIndexedDBAdapter", { invert: true, holdValue: true });
// new Setting(paneEl)
// .autoWireToggle("doNotUseFixedRevisionForChunks", { holdValue: true })
// .setClass("wizardHidden");
new Setting(paneEl).autoWireToggle("handleFilenameCaseSensitive", { holdValue: true }).setClass("wizardHidden");
this.addOnSaved("useIndexedDBAdapter", async () => {
await this.saveAllDirtySettings();
await this.rebuildDB("localOnly");
});
});
void addPanel(paneEl, "Compatibility (Internal API Usage)").then((paneEl) => {
new Setting(paneEl).autoWireToggle("watchInternalFileChanges", { invert: true });
});
void addPanel(paneEl, "Compatibility (Remote Database)").then((paneEl) => {
new Setting(paneEl).autoWireDropDown("E2EEAlgorithm", {
options: E2EEAlgorithmNames,
});
});
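// The dynamic iteration count only applies to the legacy V1 encryption, so it is shown only when a V1 algorithm is selected.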
new Setting(paneEl).autoWireToggle("useDynamicIterationCount", {
holdValue: true,
onUpdate: visibleOnly(
() =>
this.isConfiguredAs("E2EEAlgorithm", E2EEAlgorithms.ForceV1) ||
this.isConfiguredAs("E2EEAlgorithm", E2EEAlgorithms.V1)
),
});
void addPanel(paneEl, "Edge case addressing (Database)").then((paneEl) => {
new Setting(paneEl)
.autoWireText("additionalSuffixOfDatabaseName", { holdValue: true })
.addApplyButton(["additionalSuffixOfDatabaseName"]);
this.addOnSaved("additionalSuffixOfDatabaseName", async (key) => {
Logger("Suffix has been changed. Reopening database...", LOG_LEVEL_NOTICE);
await this.plugin.$$initializeDatabase();
});
new Setting(paneEl).autoWireDropDown("hashAlg", {
options: {
"": "Old Algorithm",
xxhash32: "xxhash32 (Fast but less collision resistance)",
xxhash64: "xxhash64 (Fastest)",
"mixed-purejs": "PureJS fallback (Fast, W/O WebAssembly)",
sha1: "Older fallback (Slow, W/O WebAssembly)",
} as Record<HashAlgorithm, string>,
});
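// Re-prepare the hash functions immediately so the newly selected algorithm takes effect without a restart (assumed behaviour of _prepareHashFunctions).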
this.addOnSaved("hashAlg", async () => {
await this.plugin.localDatabase._prepareHashFunctions();
});
});
void addPanel(paneEl, "Edge case addressing (Behaviour)").then((paneEl) => {
new Setting(paneEl).autoWireToggle("doNotSuspendOnFetching");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("doNotDeleteFolder");
});
void addPanel(paneEl, "Edge case addressing (Processing)").then((paneEl) => {
new Setting(paneEl).autoWireToggle("disableWorkerForGeneratingChunks");
new Setting(paneEl).autoWireToggle("processSmallFilesInUIThread", {
onUpdate: visibleOnly(() => this.isConfiguredAs("disableWorkerForGeneratingChunks", false)),
});
});
// void addPanel(paneEl, "Edge case addressing (Networking)").then((paneEl) => {
// new Setting(paneEl).autoWireToggle("useRequestAPI");
// });
void addPanel(paneEl, "Compatibility (Trouble addressed)").then((paneEl) => {
new Setting(paneEl).autoWireToggle("disableCheckingConfigMismatch");
});
void addPanel(paneEl, "Remote Database Tweak (In sunset)").then((paneEl) => {
// new Setting(paneEl).autoWireToggle("useEden").setClass("wizardHidden");
// const onlyUsingEden = visibleOnly(() => this.isConfiguredAs("useEden", true));
// new Setting(paneEl).autoWireNumeric("maxChunksInEden", { onUpdate: onlyUsingEden }).setClass("wizardHidden");
// new Setting(paneEl)
// .autoWireNumeric("maxTotalLengthInEden", { onUpdate: onlyUsingEden })
// .setClass("wizardHidden");
// new Setting(paneEl).autoWireNumeric("maxAgeInEden", { onUpdate: onlyUsingEden }).setClass("wizardHidden");
new Setting(paneEl).autoWireToggle("enableCompression").setClass("wizardHidden");
});
}

View File

@@ -0,0 +1,59 @@
import { type ConfigPassphraseStore } from "../../../lib/src/common/types.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
export function panePowerUsers(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel }: PageFunctions
): void {
void addPanel(paneEl, "CouchDB Connection Tweak", undefined, this.onlyOnCouchDB).then((paneEl) => {
paneEl.addClass("wizardHidden");
this.createEl(
paneEl,
"div",
{
text: `If you have reached the payload size limit when using IBM Cloudant, please decrease the batch size and batch limit to lower values.`,
},
undefined,
this.onlyOnCouchDB
).addClass("wizardHidden");
new Setting(paneEl)
.setClass("wizardHidden")
.autoWireNumeric("batch_size", { clampMin: 2, onUpdate: this.onlyOnCouchDB });
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("batches_limit", {
clampMin: 2,
onUpdate: this.onlyOnCouchDB,
});
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("useTimeouts", { onUpdate: this.onlyOnCouchDB });
});
void addPanel(paneEl, "Configuration Encryption").then((paneEl) => {
const passphrase_options: Record<ConfigPassphraseStore, string> = {
"": "Default",
LOCALSTORAGE: "Use a custom passphrase",
ASK_AT_LAUNCH: "Ask an passphrase at every launch",
};
new Setting(paneEl)
.setName("Encrypting sensitive configuration items")
.autoWireDropDown("configPassphraseStore", {
options: passphrase_options,
holdValue: true,
})
.setClass("wizardHidden");
new Setting(paneEl)
.autoWireText("configPassphrase", { isPassword: true, holdValue: true })
.setClass("wizardHidden")
.addOnUpdate(() => ({
disabled: !this.isConfiguredAs("configPassphraseStore", "LOCALSTORAGE"),
}));
new Setting(paneEl).addApplyButton(["configPassphrase", "configPassphraseStore"]).setClass("wizardHidden");
});
void addPanel(paneEl, "Developer").then((paneEl) => {
new Setting(paneEl).autoWireToggle("enableDebugTools").setClass("wizardHidden");
});
}

View File

@@ -0,0 +1,702 @@
import { MarkdownRenderer } from "../../../deps.ts";
import {
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
PREFERRED_JOURNAL_SYNC,
PREFERRED_SETTING_CLOUDANT,
PREFERRED_SETTING_SELF_HOSTED,
REMOTE_COUCHDB,
REMOTE_MINIO,
REMOTE_P2P,
} from "../../../lib/src/common/types.ts";
import { parseHeaderValues } from "../../../lib/src/common/utils.ts";
import { LOG_LEVEL_INFO, Logger } from "../../../lib/src/common/logger.ts";
import { isCloudantURI } from "../../../lib/src/pouchdb/utils_couchdb.ts";
import { requestToCouchDBWithCredentials } from "../../../common/utils.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import { generateCredentialObject } from "../../../lib/src/replication/httplib.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { combineOnUpdate, visibleOnly } from "./SettingPane.ts";
import { getWebCrypto } from "../../../lib/src/mods.ts";
import { arrayBufferToBase64Single } from "../../../lib/src/string_and_binary/convert.ts";
export function paneRemoteConfig(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel, addPane }: PageFunctions
): void {
let checkResultDiv: HTMLDivElement;
const checkConfig = async (checkResultDiv: HTMLDivElement | undefined) => {
Logger($msg("obsidianLiveSyncSettingTab.logCheckingDbConfig"), LOG_LEVEL_INFO);
let isSuccessful = true;
const emptyDiv = createDiv();
emptyDiv.innerHTML = "<span></span>";
checkResultDiv?.replaceChildren(...[emptyDiv]);
const addResult = (msg: string, classes?: string[]) => {
const tmpDiv = createDiv();
tmpDiv.addClass("ob-btn-config-fix");
if (classes) {
tmpDiv.addClasses(classes);
}
tmpDiv.innerHTML = `${msg}`;
checkResultDiv?.appendChild(tmpDiv);
};
try {
if (isCloudantURI(this.editingSettings.couchDB_URI)) {
Logger($msg("obsidianLiveSyncSettingTab.logCannotUseCloudant"), LOG_LEVEL_NOTICE);
return;
}
// Tip: Add log for cloudant as Logger($msg("obsidianLiveSyncSettingTab.logServerConfigurationCheck"));
const customHeaders = parseHeaderValues(this.editingSettings.couchDB_CustomHeaders);
const credential = generateCredentialObject(this.editingSettings);
const r = await requestToCouchDBWithCredentials(
this.editingSettings.couchDB_URI,
credential,
window.origin,
undefined,
undefined,
undefined,
customHeaders
);
const responseConfig = r.json;
const addConfigFixButton = (title: string, key: string, value: string) => {
if (!checkResultDiv) return;
const tmpDiv = createDiv();
tmpDiv.addClass("ob-btn-config-fix");
tmpDiv.innerHTML = `<label>${title}</label><button>${$msg("obsidianLiveSyncSettingTab.btnFix")}</button>`;
const x = checkResultDiv.appendChild(tmpDiv);
x.querySelector("button")?.addEventListener("click", () => {
fireAndForget(async () => {
Logger($msg("obsidianLiveSyncSettingTab.logCouchDbConfigSet", { title, key, value }));
const res = await requestToCouchDBWithCredentials(
this.editingSettings.couchDB_URI,
credential,
undefined,
key,
value,
undefined,
customHeaders
);
if (res.status == 200) {
Logger(
$msg("obsidianLiveSyncSettingTab.logCouchDbConfigUpdated", { title }),
LOG_LEVEL_NOTICE
);
checkResultDiv.removeChild(x);
await checkConfig(checkResultDiv);
} else {
Logger(
$msg("obsidianLiveSyncSettingTab.logCouchDbConfigFail", { title }),
LOG_LEVEL_NOTICE
);
Logger(res.text, LOG_LEVEL_VERBOSE);
}
});
});
};
addResult($msg("obsidianLiveSyncSettingTab.msgNotice"), ["ob-btn-config-head"]);
addResult($msg("obsidianLiveSyncSettingTab.msgIfConfigNotPersistent"), ["ob-btn-config-info"]);
addResult($msg("obsidianLiveSyncSettingTab.msgConfigCheck"), ["ob-btn-config-head"]);
// Admin check
// for database creation and deletion
if (!(this.editingSettings.couchDB_USER in responseConfig.admins)) {
addResult($msg("obsidianLiveSyncSettingTab.warnNoAdmin"));
} else {
addResult($msg("obsidianLiveSyncSettingTab.okAdminPrivileges"));
}
// HTTP user-authorization check
if (responseConfig?.chttpd?.require_valid_user != "true") {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errRequireValidUser"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetRequireValidUser"),
"chttpd/require_valid_user",
"true"
);
} else {
addResult($msg("obsidianLiveSyncSettingTab.okRequireValidUser"));
}
if (responseConfig?.chttpd_auth?.require_valid_user != "true") {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errRequireValidUserAuth"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetRequireValidUserAuth"),
"chttpd_auth/require_valid_user",
"true"
);
} else {
addResult($msg("obsidianLiveSyncSettingTab.okRequireValidUserAuth"));
}
// HTTPD check
// Check Authentication header
if (!responseConfig?.httpd["WWW-Authenticate"]) {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errMissingWwwAuth"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetWwwAuth"),
"httpd/WWW-Authenticate",
'Basic realm="couchdb"'
);
} else {
addResult($msg("obsidianLiveSyncSettingTab.okWwwAuth"));
}
if (responseConfig?.httpd?.enable_cors != "true") {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errEnableCors"));
addConfigFixButton($msg("obsidianLiveSyncSettingTab.msgEnableCors"), "httpd/enable_cors", "true");
} else {
addResult($msg("obsidianLiveSyncSettingTab.okEnableCors"));
}
// If the server is not cloudant, configure request size
if (!isCloudantURI(this.editingSettings.couchDB_URI)) {
// REQUEST SIZE
if (Number(responseConfig?.chttpd?.max_http_request_size ?? 0) < 4294967296) {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errMaxRequestSize"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetMaxRequestSize"),
"chttpd/max_http_request_size",
"4294967296"
);
} else {
addResult($msg("obsidianLiveSyncSettingTab.okMaxRequestSize"));
}
if (Number(responseConfig?.couchdb?.max_document_size ?? 0) < 50000000) {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errMaxDocumentSize"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetMaxDocSize"),
"couchdb/max_document_size",
"50000000"
);
} else {
addResult($msg("obsidianLiveSyncSettingTab.okMaxDocumentSize"));
}
}
// CORS check
// checking connectivity for mobile
if (responseConfig?.cors?.credentials != "true") {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errCorsCredentials"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetCorsCredentials"),
"cors/credentials",
"true"
);
} else {
addResult($msg("obsidianLiveSyncSettingTab.okCorsCredentials"));
}
const ConfiguredOrigins = ((responseConfig?.cors?.origins ?? "") + "").split(",");
if (
responseConfig?.cors?.origins == "*" ||
(ConfiguredOrigins.indexOf("app://obsidian.md") !== -1 &&
ConfiguredOrigins.indexOf("capacitor://localhost") !== -1 &&
ConfiguredOrigins.indexOf("http://localhost") !== -1)
) {
addResult($msg("obsidianLiveSyncSettingTab.okCorsOrigins"));
} else {
addResult($msg("obsidianLiveSyncSettingTab.errCorsOrigins"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetCorsOrigins"),
"cors/origins",
"app://obsidian.md,capacitor://localhost,http://localhost"
);
isSuccessful = false;
}
addResult($msg("obsidianLiveSyncSettingTab.msgConnectionCheck"), ["ob-btn-config-head"]);
addResult($msg("obsidianLiveSyncSettingTab.msgCurrentOrigin", { origin: window.location.origin }));
// Request header check
const origins = ["app://obsidian.md", "capacitor://localhost", "http://localhost"];
for (const org of origins) {
const rr = await requestToCouchDBWithCredentials(
this.editingSettings.couchDB_URI,
credential,
org,
undefined,
undefined,
undefined,
customHeaders
);
const responseHeaders = Object.fromEntries(
Object.entries(rr.headers).map((e) => {
e[0] = `${e[0]}`.toLowerCase();
return e;
})
);
addResult($msg("obsidianLiveSyncSettingTab.msgOriginCheck", { org }));
if (responseHeaders["access-control-allow-credentials"] != "true") {
addResult($msg("obsidianLiveSyncSettingTab.errCorsNotAllowingCredentials"));
isSuccessful = false;
} else {
addResult($msg("obsidianLiveSyncSettingTab.okCorsCredentialsForOrigin"));
}
if (responseHeaders["access-control-allow-origin"] != org) {
addResult(
$msg("obsidianLiveSyncSettingTab.warnCorsOriginUnmatched", {
from: org,
to: responseHeaders["access-control-allow-origin"],
})
);
} else {
addResult($msg("obsidianLiveSyncSettingTab.okCorsOriginMatched"));
}
}
addResult($msg("obsidianLiveSyncSettingTab.msgDone"), ["ob-btn-config-head"]);
addResult($msg("obsidianLiveSyncSettingTab.msgConnectionProxyNote"), ["ob-btn-config-info"]);
Logger($msg("obsidianLiveSyncSettingTab.logCheckingConfigDone"), LOG_LEVEL_INFO);
} catch (ex: any) {
if (ex?.status == 401) {
isSuccessful = false;
addResult($msg("obsidianLiveSyncSettingTab.errAccessForbidden"));
addResult($msg("obsidianLiveSyncSettingTab.errCannotContinueTest"));
Logger($msg("obsidianLiveSyncSettingTab.logCheckingConfigDone"), LOG_LEVEL_INFO);
} else {
Logger($msg("obsidianLiveSyncSettingTab.logCheckingConfigFailed"), LOG_LEVEL_NOTICE);
Logger(ex);
isSuccessful = false;
}
}
return isSuccessful;
};
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleRemoteServer")).then((paneEl) => {
// const containerRemoteDatabaseEl = containerEl.createDiv();
this.createEl(
paneEl,
"div",
{
text: $msg("obsidianLiveSyncSettingTab.msgSettingsUnchangeableDuringSync"),
},
undefined,
visibleOnly(() => this.isAnySyncEnabled())
).addClass("op-warn-info");
new Setting(paneEl).autoWireDropDown("remoteType", {
holdValue: true,
options: {
[REMOTE_COUCHDB]: $msg("obsidianLiveSyncSettingTab.optionCouchDB"),
[REMOTE_MINIO]: $msg("obsidianLiveSyncSettingTab.optionMinioS3R2"),
[REMOTE_P2P]: "Only Peer-to-Peer",
},
onUpdate: this.enableOnlySyncDisabled,
});
void addPanel(paneEl, "Peer-to-Peer", undefined, this.onlyOnOnlyP2P).then((paneEl) => {
const syncWarnP2P = this.createEl(paneEl, "div", {
text: "",
});
const p2pMessage = `This feature is a work in progress, and it can be configured on the \`P2P Replicator\` pane.
The pane can also be launched with the \`P2P Replicator\` command from the Command Palette.
`;
void MarkdownRenderer.render(this.plugin.app, p2pMessage, syncWarnP2P, "/", this.plugin);
syncWarnP2P.addClass("op-warn-info");
new Setting(paneEl).setName("Apply Settings").setClass("wizardHidden").addApplyButton(["remoteType"]);
// .addOnUpdate(onlyOnMinIO);
// new Setting(paneEl).addButton((button) =>
// button
// .setButtonText("Open P2P Replicator")
// .onClick(() => {
// const addOn = this.plugin.getAddOn<P2PReplicator>(P2PReplicator.name);
// void addOn?.openPane();
// this.closeSetting();
// })
// );
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleMinioS3R2"), undefined, this.onlyOnMinIO).then(
(paneEl) => {
const syncWarnMinio = this.createEl(paneEl, "div", {
text: "",
});
const ObjectStorageMessage = $msg("obsidianLiveSyncSettingTab.msgObjectStorageWarning");
void MarkdownRenderer.render(this.plugin.app, ObjectStorageMessage, syncWarnMinio, "/", this.plugin);
syncWarnMinio.addClass("op-warn-info");
new Setting(paneEl).autoWireText("endpoint", { holdValue: true });
new Setting(paneEl).autoWireText("accessKey", { holdValue: true });
new Setting(paneEl).autoWireText("secretKey", {
holdValue: true,
isPassword: true,
});
new Setting(paneEl).autoWireText("region", { holdValue: true });
new Setting(paneEl).autoWireText("bucket", { holdValue: true });
new Setting(paneEl).autoWireText("bucketPrefix", {
holdValue: true,
placeHolder: "vaultname/",
});
new Setting(paneEl).autoWireToggle("useCustomRequestHandler", { holdValue: true });
new Setting(paneEl).autoWireTextArea("bucketCustomHeaders", {
holdValue: true,
placeHolder: "x-custom-header: value\n x-custom-header2: value2",
});
new Setting(paneEl).setName($msg("obsidianLiveSyncSettingTab.nameTestConnection")).addButton((button) =>
button
.setButtonText($msg("obsidianLiveSyncSettingTab.btnTest"))
.setDisabled(false)
.onClick(async () => {
await this.testConnection(this.editingSettings);
})
);
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameApplySettings"))
.setClass("wizardHidden")
.addApplyButton([
"remoteType",
"endpoint",
"region",
"accessKey",
"secretKey",
"bucket",
"useCustomRequestHandler",
"bucketCustomHeaders",
"bucketPrefix",
])
.addOnUpdate(this.onlyOnMinIO);
}
);
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleCouchDB"), undefined, this.onlyOnCouchDB).then(
(paneEl) => {
if (this.plugin.$$isMobile()) {
this.createEl(
paneEl,
"div",
{
text: $msg("obsidianLiveSyncSettingTab.msgNonHTTPSWarning"),
},
undefined,
visibleOnly(() => !this.editingSettings.couchDB_URI.startsWith("https://"))
).addClass("op-warn");
} else {
this.createEl(
paneEl,
"div",
{
text: $msg("obsidianLiveSyncSettingTab.msgNonHTTPSInfo"),
},
undefined,
visibleOnly(() => !this.editingSettings.couchDB_URI.startsWith("https://"))
).addClass("op-warn-info");
}
new Setting(paneEl).autoWireText("couchDB_URI", {
holdValue: true,
onUpdate: this.enableOnlySyncDisabled,
});
new Setting(paneEl).autoWireToggle("useJWT", {
holdValue: true,
onUpdate: this.enableOnlySyncDisabled,
});
new Setting(paneEl).autoWireText("couchDB_USER", {
holdValue: true,
onUpdate: combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => !this.editingSettings.useJWT)
),
});
new Setting(paneEl).autoWireText("couchDB_PASSWORD", {
holdValue: true,
isPassword: true,
onUpdate: combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => !this.editingSettings.useJWT)
),
});
const algorithms = {
["HS256"]: "HS256",
["HS512"]: "HS512",
["ES256"]: "ES256",
["ES512"]: "ES512",
} as const;
new Setting(paneEl).autoWireDropDown("jwtAlgorithm", {
options: algorithms,
onUpdate: combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => this.editingSettings.useJWT)
),
});
new Setting(paneEl).autoWireTextArea("jwtKey", {
holdValue: true,
onUpdate: combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => this.editingSettings.useJWT)
),
});
// eslint-disable-next-line prefer-const
let generatedKeyDivEl: HTMLDivElement;
new Setting(paneEl)
.setDesc("Generate ES256 Keypair for testing")
.addButton((button) =>
button.setButtonText("Generate").onClick(async () => {
const crypto = await getWebCrypto();
const keyPair = await crypto.subtle.generateKey(
{ name: "ECDSA", namedCurve: "P-256" },
true,
["sign", "verify"]
);
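// Export the pair as SPKI (public) and PKCS#8 (private), then base64-encode for PEM-style display.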
const pubKey = await crypto.subtle.exportKey("spki", keyPair.publicKey);
const privateKey = await crypto.subtle.exportKey("pkcs8", keyPair.privateKey);
const encodedPublicKey = await arrayBufferToBase64Single(pubKey);
const encodedPrivateKey = await arrayBufferToBase64Single(privateKey);
const privateKeyPem = `> -----BEGIN PRIVATE KEY-----\n> ${encodedPrivateKey}\n> -----END PRIVATE KEY-----`;
const publicKeyPem = `> -----BEGIN PUBLIC KEY-----\n> ${encodedPublicKey}\n> -----END PUBLIC KEY-----`;
const title = $msg("Setting.GenerateKeyPair.Title");
const msg = $msg("Setting.GenerateKeyPair.Desc", {
public_key: publicKeyPem,
private_key: privateKeyPem,
});
await MarkdownRenderer.render(
this.plugin.app,
"## " + title + "\n\n" + msg,
generatedKeyDivEl,
"/",
this.plugin
);
})
)
.addOnUpdate(
combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => this.editingSettings.useJWT)
)
);
generatedKeyDivEl = this.createEl(
paneEl,
"div",
{ text: "" },
(el) => {},
visibleOnly(() => this.editingSettings.useJWT)
);
new Setting(paneEl).autoWireText("jwtKid", {
holdValue: true,
onUpdate: combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => this.editingSettings.useJWT)
),
});
new Setting(paneEl).autoWireText("jwtSub", {
holdValue: true,
onUpdate: combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => this.editingSettings.useJWT)
),
});
new Setting(paneEl).autoWireNumeric("jwtExpDuration", {
holdValue: true,
onUpdate: combineOnUpdate(
this.enableOnlySyncDisabled,
visibleOnly(() => this.editingSettings.useJWT)
),
});
new Setting(paneEl).autoWireText("couchDB_DBNAME", {
holdValue: true,
onUpdate: this.enableOnlySyncDisabled,
});
new Setting(paneEl).autoWireTextArea("couchDB_CustomHeaders", { holdValue: true });
new Setting(paneEl).autoWireToggle("useRequestAPI", {
holdValue: true,
onUpdate: this.enableOnlySyncDisabled,
});
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameTestDatabaseConnection"))
.setClass("wizardHidden")
.setDesc($msg("obsidianLiveSyncSettingTab.descTestDatabaseConnection"))
.addButton((button) =>
button
.setButtonText($msg("obsidianLiveSyncSettingTab.btnTest"))
.setDisabled(false)
.onClick(async () => {
await this.testConnection();
})
);
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameValidateDatabaseConfig"))
.setDesc($msg("obsidianLiveSyncSettingTab.descValidateDatabaseConfig"))
.addButton((button) =>
button
.setButtonText($msg("obsidianLiveSyncSettingTab.btnCheck"))
.setDisabled(false)
.onClick(async () => {
await checkConfig(checkResultDiv);
})
);
checkResultDiv = this.createEl(paneEl, "div", {
text: "",
});
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameApplySettings"))
.setClass("wizardHidden")
.addApplyButton([
"remoteType",
"couchDB_URI",
"couchDB_USER",
"couchDB_PASSWORD",
"couchDB_DBNAME",
"jwtAlgorithm",
"jwtExpDuration",
"jwtKey",
"jwtSub",
"jwtKid",
"useJWT",
"couchDB_CustomHeaders",
"useRequestAPI",
])
.addOnUpdate(this.onlyOnCouchDB);
}
);
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleNotification"), () => {}, this.onlyOnCouchDB).then(
(paneEl) => {
paneEl.addClass("wizardHidden");
new Setting(paneEl).autoWireNumeric("notifyThresholdOfRemoteStorageSize", {}).setClass("wizardHidden");
}
);
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.panelPrivacyEncryption")).then((paneEl) => {
new Setting(paneEl).autoWireToggle("encrypt", { holdValue: true });
const isEncryptEnabled = visibleOnly(() => this.isConfiguredAs("encrypt", true));
new Setting(paneEl).autoWireText("passphrase", {
holdValue: true,
isPassword: true,
onUpdate: isEncryptEnabled,
});
new Setting(paneEl).autoWireToggle("usePathObfuscation", {
holdValue: true,
onUpdate: isEncryptEnabled,
});
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleFetchSettings")).then((paneEl) => {
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.titleFetchConfigFromRemote"))
.setDesc($msg("obsidianLiveSyncSettingTab.descFetchConfigFromRemote"))
.addButton((button) =>
button
.setButtonText($msg("obsidianLiveSyncSettingTab.buttonFetch"))
.setDisabled(false)
.onClick(async () => {
const trialSetting = { ...this.initialSettings, ...this.editingSettings };
const newTweaks = await this.plugin.$$checkAndAskUseRemoteConfiguration(trialSetting);
if (newTweaks.result !== false) {
if (this.inWizard) {
this.editingSettings = { ...this.editingSettings, ...newTweaks.result };
this.requestUpdate();
return;
} else {
this.closeSetting();
this.plugin.settings = { ...this.plugin.settings, ...newTweaks.result };
if (newTweaks.requireFetch) {
if (
(await this.plugin.confirm.askYesNoDialog(
$msg("SettingTab.Message.AskRebuild"),
{
defaultOption: "Yes",
}
)) == "no"
) {
await this.plugin.$$saveSettingData();
return;
}
await this.plugin.$$saveSettingData();
await this.plugin.rebuilder.scheduleFetch();
await this.plugin.$$scheduleAppReload();
return;
} else {
await this.plugin.$$saveSettingData();
}
}
}
})
);
});
new Setting(paneEl).setClass("wizardOnly").addButton((button) =>
button
.setButtonText($msg("obsidianLiveSyncSettingTab.buttonNext"))
.setCta()
.setDisabled(false)
.onClick(async () => {
if (!(await checkConfig(checkResultDiv))) {
if (
(await this.plugin.confirm.askYesNoDialog(
$msg("obsidianLiveSyncSettingTab.msgConfigCheckFailed"),
{
defaultOption: "No",
title: $msg("obsidianLiveSyncSettingTab.titleRemoteConfigCheckFailed"),
}
)) == "no"
) {
return;
}
}
const isEncryptionIncomplete =
!this.editingSettings.encrypt || !this.editingSettings.usePathObfuscation;
if (isEncryptionIncomplete) {
if (
(await this.plugin.confirm.askYesNoDialog(
$msg("obsidianLiveSyncSettingTab.msgEnableEncryptionRecommendation"),
{
defaultOption: "No",
title: $msg("obsidianLiveSyncSettingTab.titleEncryptionNotEnabled"),
}
)) == "no"
) {
return;
}
}
if (!this.editingSettings.encrypt) {
this.editingSettings.passphrase = "";
}
if (!(await this.isPassphraseValid())) {
if (
(await this.plugin.confirm.askYesNoDialog(
$msg("obsidianLiveSyncSettingTab.msgInvalidPassphrase"),
{
defaultOption: "No",
title: $msg("obsidianLiveSyncSettingTab.titleEncryptionPassphraseInvalid"),
}
)) == "no"
) {
return;
}
}
if (isCloudantURI(this.editingSettings.couchDB_URI)) {
this.editingSettings = { ...this.editingSettings, ...PREFERRED_SETTING_CLOUDANT };
} else if (this.editingSettings.remoteType == REMOTE_MINIO) {
this.editingSettings = { ...this.editingSettings, ...PREFERRED_JOURNAL_SYNC };
} else {
this.editingSettings = { ...this.editingSettings, ...PREFERRED_SETTING_SELF_HOSTED };
}
if (
(await this.plugin.confirm.askYesNoDialog(
$msg("obsidianLiveSyncSettingTab.msgFetchConfigFromRemote"),
{ defaultOption: "Yes", title: $msg("obsidianLiveSyncSettingTab.titleFetchConfig") }
)) == "yes"
) {
const trialSetting = { ...this.initialSettings, ...this.editingSettings };
const newTweaks = await this.plugin.$$checkAndAskUseRemoteConfiguration(trialSetting);
if (newTweaks.result !== false) {
this.editingSettings = { ...this.editingSettings, ...newTweaks.result };
this.requestUpdate();
} else {
// Messages should already have been shown.
}
}
this.changeDisplay("30");
})
);
}

View File

@@ -0,0 +1,121 @@
import { LEVEL_ADVANCED, type CustomRegExpSource } from "../../../lib/src/common/types.ts";
import { constructCustomRegExpList, splitCustomRegExpList } from "../../../lib/src/common/utils.ts";
import MultipleRegExpControl from "./MultipleRegExpControl.svelte";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { mount } from "svelte";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { visibleOnly } from "./SettingPane.ts";
export function paneSelector(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement, { addPanel }: PageFunctions): void {
void addPanel(paneEl, "Normal Files").then((paneEl) => {
paneEl.addClass("wizardHidden");
const syncFilesSetting = new Setting(paneEl)
.setName("Synchronising files")
.setDesc(
"(RegExp) Empty to sync all files. Set filter as a regular expression to limit synchronising files."
)
.setClass("wizardHidden");
mount(MultipleRegExpControl, {
target: syncFilesSetting.controlEl,
props: {
patterns: splitCustomRegExpList(this.editingSettings.syncOnlyRegEx, "|[]|"),
originals: splitCustomRegExpList(this.editingSettings.syncOnlyRegEx, "|[]|"),
apply: async (newPatterns: CustomRegExpSource[]) => {
this.editingSettings.syncOnlyRegEx = constructCustomRegExpList(newPatterns, "|[]|");
await this.saveAllDirtySettings();
this.display();
},
},
});
const nonSyncFilesSetting = new Setting(paneEl)
.setName("Non-Synchronising files")
.setDesc("(RegExp) If this is set, any changes to local and remote files that match this will be skipped.")
.setClass("wizardHidden");
mount(MultipleRegExpControl, {
target: nonSyncFilesSetting.controlEl,
props: {
patterns: splitCustomRegExpList(this.editingSettings.syncIgnoreRegEx, "|[]|"),
originals: splitCustomRegExpList(this.editingSettings.syncIgnoreRegEx, "|[]|"),
apply: async (newPatterns: CustomRegExpSource[]) => {
this.editingSettings.syncIgnoreRegEx = constructCustomRegExpList(newPatterns, "|[]|");
await this.saveAllDirtySettings();
this.display();
},
},
});
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("syncMaxSizeInMB", { clampMin: 0 });
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("useIgnoreFiles");
new Setting(paneEl).setClass("wizardHidden").autoWireTextArea("ignoreFiles", {
onUpdate: visibleOnly(() => this.isConfiguredAs("useIgnoreFiles", true)),
});
});
void addPanel(paneEl, "Hidden Files", undefined, undefined, LEVEL_ADVANCED).then((paneEl) => {
const targetPatternSetting = new Setting(paneEl)
.setName("Target patterns")
.setClass("wizardHidden")
.setDesc("Patterns to match files for syncing");
const patTarget = splitCustomRegExpList(this.editingSettings.syncInternalFilesTargetPatterns, ",");
mount(MultipleRegExpControl, {
target: targetPatternSetting.controlEl,
props: {
patterns: patTarget,
originals: [...patTarget],
apply: async (newPatterns: CustomRegExpSource[]) => {
this.editingSettings.syncInternalFilesTargetPatterns = constructCustomRegExpList(newPatterns, ",");
await this.saveAllDirtySettings();
this.display();
},
},
});
const defaultSkipPattern = "\\/node_modules\\/, \\/\\.git\\/, ^\\.git\\/, \\/obsidian-livesync\\/";
const defaultSkipPatternXPlat =
defaultSkipPattern + ",\\/workspace$ ,\\/workspace.json$,\\/workspace-mobile.json$";
const pat = splitCustomRegExpList(this.editingSettings.syncInternalFilesIgnorePatterns, ",");
const patSetting = new Setting(paneEl).setName("Ignore patterns").setClass("wizardHidden").setDesc("");
mount(MultipleRegExpControl, {
target: patSetting.controlEl,
props: {
patterns: pat,
originals: [...pat],
apply: async (newPatterns: CustomRegExpSource[]) => {
this.editingSettings.syncInternalFilesIgnorePatterns = constructCustomRegExpList(newPatterns, ",");
await this.saveAllDirtySettings();
this.display();
},
},
});
const addDefaultPatterns = async (patterns: string) => {
const oldList = splitCustomRegExpList(this.editingSettings.syncInternalFilesIgnorePatterns, ",");
const newList = splitCustomRegExpList(
patterns as unknown as typeof this.editingSettings.syncInternalFilesIgnorePatterns,
","
);
const allSet = new Set<CustomRegExpSource>([...oldList, ...newList]);
this.editingSettings.syncInternalFilesIgnorePatterns = constructCustomRegExpList([...allSet], ",");
await this.saveAllDirtySettings();
this.display();
};
new Setting(paneEl)
.setName("Add default patterns")
.setClass("wizardHidden")
.addButton((button) => {
button.setButtonText("Default").onClick(async () => {
await addDefaultPatterns(defaultSkipPattern);
});
})
.addButton((button) => {
button.setButtonText("Cross-platform").onClick(async () => {
await addDefaultPatterns(defaultSkipPatternXPlat);
});
});
});
}

View File

@@ -0,0 +1,205 @@
import { MarkdownRenderer } from "../../../deps.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import {
EVENT_REQUEST_COPY_SETUP_URI,
EVENT_REQUEST_OPEN_SETUP_URI,
EVENT_REQUEST_SHOW_SETUP_QR,
eventHub,
} from "../../../common/events.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { visibleOnly } from "./SettingPane.ts";
import { DEFAULT_SETTINGS } from "../../../lib/src/common/types.ts";
import { request } from "obsidian";
export function paneSetup(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel, addPane }: PageFunctions
): void {
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleQuickSetup")).then((paneEl) => {
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameConnectSetupURI"))
.setDesc($msg("obsidianLiveSyncSettingTab.descConnectSetupURI"))
.addButton((text) => {
text.setButtonText($msg("obsidianLiveSyncSettingTab.btnUse")).onClick(() => {
this.closeSetting();
eventHub.emitEvent(EVENT_REQUEST_OPEN_SETUP_URI);
});
});
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameManualSetup"))
.setDesc($msg("obsidianLiveSyncSettingTab.descManualSetup"))
.addButton((text) => {
text.setButtonText($msg("obsidianLiveSyncSettingTab.btnStart")).onClick(async () => {
await this.enableMinimalSetup();
});
});
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameEnableLiveSync"))
.setDesc($msg("obsidianLiveSyncSettingTab.descEnableLiveSync"))
.addOnUpdate(visibleOnly(() => !this.isConfiguredAs("isConfigured", true)))
.addButton((text) => {
text.setButtonText($msg("obsidianLiveSyncSettingTab.btnEnable")).onClick(async () => {
this.editingSettings.isConfigured = true;
await this.saveAllDirtySettings();
this.plugin.$$askReload();
});
});
});
void addPanel(
paneEl,
$msg("obsidianLiveSyncSettingTab.titleSetupOtherDevices"),
undefined,
visibleOnly(() => this.isConfiguredAs("isConfigured", true))
).then((paneEl) => {
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameCopySetupURI"))
.setDesc($msg("obsidianLiveSyncSettingTab.descCopySetupURI"))
.addButton((text) => {
text.setButtonText($msg("obsidianLiveSyncSettingTab.btnCopy")).onClick(() => {
// await this.plugin.addOnSetup.command_copySetupURI();
eventHub.emitEvent(EVENT_REQUEST_COPY_SETUP_URI);
});
});
new Setting(paneEl)
.setName($msg("Setup.ShowQRCode"))
.setDesc($msg("Setup.ShowQRCode.Desc"))
.addButton((text) => {
text.setButtonText($msg("Setup.ShowQRCode")).onClick(() => {
eventHub.emitEvent(EVENT_REQUEST_SHOW_SETUP_QR);
});
});
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleReset")).then((paneEl) => {
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameDiscardSettings"))
.addButton((text) => {
text.setButtonText($msg("obsidianLiveSyncSettingTab.btnDiscard"))
.onClick(async () => {
if (
(await this.plugin.confirm.askYesNoDialog(
$msg("obsidianLiveSyncSettingTab.msgDiscardConfirmation"),
{ defaultOption: "No" }
)) == "yes"
) {
this.editingSettings = { ...this.editingSettings, ...DEFAULT_SETTINGS };
await this.saveAllDirtySettings();
this.plugin.settings = { ...DEFAULT_SETTINGS };
await this.plugin.$$saveSettingData();
await this.plugin.$$resetLocalDatabase();
// await this.plugin.initializeDatabase();
this.plugin.$$askReload();
}
})
.setWarning();
})
.addOnUpdate(visibleOnly(() => this.isConfiguredAs("isConfigured", true)));
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleExtraFeatures")).then((paneEl) => {
new Setting(paneEl).autoWireToggle("useAdvancedMode");
new Setting(paneEl).autoWireToggle("usePowerUserMode");
new Setting(paneEl).autoWireToggle("useEdgeCaseMode");
this.addOnSaved("useAdvancedMode", () => this.display());
this.addOnSaved("usePowerUserMode", () => this.display());
this.addOnSaved("useEdgeCaseMode", () => this.display());
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleOnlineTips")).then((paneEl) => {
// this.createEl(paneEl, "h3", { text: $msg("obsidianLiveSyncSettingTab.titleOnlineTips") });
const repo = "vrtmrz/obsidian-livesync";
const topPath = $msg("obsidianLiveSyncSettingTab.linkTroubleshooting");
const rawRepoURI = `https://raw.githubusercontent.com/${repo}/main`;
this.createEl(
paneEl,
"div",
"",
(el) =>
(el.innerHTML = `<a href='https://github.com/${repo}/blob/main${topPath}' target="_blank">${$msg("obsidianLiveSyncSettingTab.linkOpenInBrowser")}</a>`)
);
const troubleShootEl = this.createEl(paneEl, "div", {
text: "",
cls: "sls-troubleshoot-preview",
});
const loadMarkdownPage = async (pathAll: string, basePathParam: string = "") => {
troubleShootEl.style.minHeight = troubleShootEl.clientHeight + "px";
troubleShootEl.empty();
const fullPath = pathAll.startsWith("/") ? pathAll : `${basePathParam}/${pathAll}`;
const directoryArr = fullPath.split("/");
const filename = directoryArr.pop();
const basePath = directoryArr.join("/");
let remoteTroubleShootMDSrc = "";
try {
remoteTroubleShootMDSrc = await request(`${rawRepoURI}${basePath}/${filename}`);
} catch (ex: any) {
remoteTroubleShootMDSrc = `${$msg("obsidianLiveSyncSettingTab.logErrorOccurred")}\n${ex.toString()}`;
}
const remoteTroubleShootMD = remoteTroubleShootMDSrc.replace(
/\((.*?(.png)|(.jpg))\)/g,
`(${rawRepoURI}${basePath}/$1)`
);
// Render markdown
await MarkdownRenderer.render(
this.plugin.app,
`<a class='sls-troubleshoot-anchor'></a> [${$msg("obsidianLiveSyncSettingTab.linkTipsAndTroubleshooting")}](${topPath}) [${$msg("obsidianLiveSyncSettingTab.linkPageTop")}](${filename})\n\n${remoteTroubleShootMD}`,
troubleShootEl,
`${rawRepoURI}`,
this.plugin
);
// Menu
troubleShootEl.querySelector<HTMLAnchorElement>(".sls-troubleshoot-anchor")?.parentElement?.setCssStyles({
position: "sticky",
top: "-1em",
backgroundColor: "var(--modal-background)",
});
// Trap internal links.
troubleShootEl.querySelectorAll<HTMLAnchorElement>("a.internal-link").forEach((anchorEl) => {
anchorEl.addEventListener("click", (evt) => {
fireAndForget(async () => {
const uri = anchorEl.getAttr("data-href");
if (!uri) return;
if (uri.startsWith("#")) {
evt.preventDefault();
const elements = Array.from(
troubleShootEl.querySelectorAll<HTMLHeadingElement>("[data-heading]")
);
const p = elements.find(
(e) =>
e.getAttr("data-heading")?.toLowerCase().split(" ").join("-") ==
uri.substring(1).toLowerCase()
);
if (p) {
p.setCssStyles({ scrollMargin: "3em" });
p.scrollIntoView({
behavior: "instant",
block: "start",
});
}
} else {
evt.preventDefault();
await loadMarkdownPage(uri, basePath);
troubleShootEl.setCssStyles({ scrollMargin: "1em" });
troubleShootEl.scrollIntoView({
behavior: "instant",
block: "start",
});
}
});
});
});
troubleShootEl.style.minHeight = "";
};
void loadMarkdownPage(topPath);
});
}

View File

@@ -0,0 +1,346 @@
import {
type ObsidianLiveSyncSettings,
LOG_LEVEL_NOTICE,
REMOTE_COUCHDB,
LEVEL_ADVANCED,
} from "../../../lib/src/common/types.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import { EVENT_REQUEST_COPY_SETUP_URI, eventHub } from "../../../common/events.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { visibleOnly } from "./SettingPane.ts";
export function paneSyncSettings(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel, addPane }: PageFunctions
): void {
if (this.editingSettings.versionUpFlash != "") {
const c = this.createEl(
paneEl,
"div",
{
text: this.editingSettings.versionUpFlash,
cls: "op-warn sls-setting-hidden",
},
(el) => {
this.createEl(el, "button", { text: $msg("obsidianLiveSyncSettingTab.btnGotItAndUpdated") }, (e) => {
e.addClass("mod-cta");
e.addEventListener("click", () => {
fireAndForget(async () => {
this.editingSettings.versionUpFlash = "";
await this.saveAllDirtySettings();
c.remove();
});
});
});
},
visibleOnly(() => !this.isConfiguredAs("versionUpFlash", ""))
);
}
this.createEl(paneEl, "div", {
text: $msg("obsidianLiveSyncSettingTab.msgSelectAndApplyPreset"),
cls: "wizardOnly",
}).addClasses(["op-warn-info"]);
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleSynchronizationPreset")).then((paneEl) => {
const options: Record<string, string> =
this.editingSettings.remoteType == REMOTE_COUCHDB
? {
NONE: "",
LIVESYNC: $msg("obsidianLiveSyncSettingTab.optionLiveSync"),
PERIODIC: $msg("obsidianLiveSyncSettingTab.optionPeriodicWithBatch"),
DISABLE: $msg("obsidianLiveSyncSettingTab.optionDisableAllAutomatic"),
}
: {
NONE: "",
PERIODIC: $msg("obsidianLiveSyncSettingTab.optionPeriodicWithBatch"),
DISABLE: $msg("obsidianLiveSyncSettingTab.optionDisableAllAutomatic"),
};
new Setting(paneEl)
.autoWireDropDown("preset", {
options: options,
holdValue: true,
})
.addButton((button) => {
button.setButtonText($msg("obsidianLiveSyncSettingTab.btnApply"));
button.onClick(async () => {
// await this.saveSettings(["preset"]);
await this.saveAllDirtySettings();
});
});
this.addOnSaved("preset", async (currentPreset) => {
if (currentPreset == "") {
Logger($msg("obsidianLiveSyncSettingTab.logSelectAnyPreset"), LOG_LEVEL_NOTICE);
return;
}
const presetAllDisabled = {
batchSave: false,
liveSync: false,
periodicReplication: false,
syncOnSave: false,
syncOnEditorSave: false,
syncOnStart: false,
syncOnFileOpen: false,
syncAfterMerge: false,
} as Partial<ObsidianLiveSyncSettings>;
const presetLiveSync = {
...presetAllDisabled,
liveSync: true,
} as Partial<ObsidianLiveSyncSettings>;
const presetPeriodic = {
...presetAllDisabled,
batchSave: true,
periodicReplication: true,
syncOnSave: false,
syncOnEditorSave: false,
syncOnStart: true,
syncOnFileOpen: true,
syncAfterMerge: true,
} as Partial<ObsidianLiveSyncSettings>;
if (currentPreset == "LIVESYNC") {
this.editingSettings = {
...this.editingSettings,
...presetLiveSync,
};
Logger($msg("obsidianLiveSyncSettingTab.logConfiguredLiveSync"), LOG_LEVEL_NOTICE);
} else if (currentPreset == "PERIODIC") {
this.editingSettings = {
...this.editingSettings,
...presetPeriodic,
};
Logger($msg("obsidianLiveSyncSettingTab.logConfiguredPeriodic"), LOG_LEVEL_NOTICE);
} else {
Logger($msg("obsidianLiveSyncSettingTab.logConfiguredDisabled"), LOG_LEVEL_NOTICE);
this.editingSettings = {
...this.editingSettings,
...presetAllDisabled,
};
}
if (this.inWizard) {
this.closeSetting();
this.inWizard = false;
if (!this.editingSettings.isConfigured) {
this.editingSettings.isConfigured = true;
await this.saveAllDirtySettings();
await this.plugin.$$realizeSettingSyncMode();
await this.rebuildDB("localOnly");
// this.resetEditingSettings();
if (
(await this.plugin.confirm.askYesNoDialog(
$msg("obsidianLiveSyncSettingTab.msgGenerateSetupURI"),
{
defaultOption: "Yes",
title: $msg("obsidianLiveSyncSettingTab.titleCongratulations"),
}
)) == "yes"
) {
eventHub.emitEvent(EVENT_REQUEST_COPY_SETUP_URI);
}
} else {
if (this.isNeedRebuildLocal() || this.isNeedRebuildRemote()) {
await this.confirmRebuild();
} else {
await this.saveAllDirtySettings();
await this.plugin.$$realizeSettingSyncMode();
this.plugin.$$askReload();
}
}
} else {
await this.saveAllDirtySettings();
await this.plugin.$$realizeSettingSyncMode();
}
});
});
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleSynchronizationMethod")).then((paneEl) => {
paneEl.addClass("wizardHidden");
// const onlyOnLiveSync = visibleOnly(() => this.isConfiguredAs("syncMode", "LIVESYNC"));
const onlyOnNonLiveSync = visibleOnly(() => !this.isConfiguredAs("syncMode", "LIVESYNC"));
const onlyOnPeriodic = visibleOnly(() => this.isConfiguredAs("syncMode", "PERIODIC"));
const optionsSyncMode =
this.editingSettings.remoteType == REMOTE_COUCHDB
? {
ONEVENTS: $msg("obsidianLiveSyncSettingTab.optionOnEvents"),
PERIODIC: $msg("obsidianLiveSyncSettingTab.optionPeriodicAndEvents"),
LIVESYNC: $msg("obsidianLiveSyncSettingTab.optionLiveSync"),
}
: {
ONEVENTS: $msg("obsidianLiveSyncSettingTab.optionOnEvents"),
PERIODIC: $msg("obsidianLiveSyncSettingTab.optionPeriodicAndEvents"),
};
new Setting(paneEl)
.autoWireDropDown("syncMode", {
//@ts-ignore
options: optionsSyncMode,
})
.setClass("wizardHidden");
this.addOnSaved("syncMode", async (value) => {
this.editingSettings.liveSync = false;
this.editingSettings.periodicReplication = false;
if (value == "LIVESYNC") {
this.editingSettings.liveSync = true;
} else if (value == "PERIODIC") {
this.editingSettings.periodicReplication = true;
}
await this.saveSettings(["liveSync", "periodicReplication"]);
await this.plugin.$$realizeSettingSyncMode();
});
new Setting(paneEl)
.autoWireNumeric("periodicReplicationInterval", {
clampMax: 5000,
onUpdate: onlyOnPeriodic,
})
.setClass("wizardHidden");
new Setting(paneEl).autoWireNumeric("syncMinimumInterval", {
onUpdate: onlyOnNonLiveSync,
});
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncOnSave", { onUpdate: onlyOnNonLiveSync });
new Setting(paneEl)
.setClass("wizardHidden")
.autoWireToggle("syncOnEditorSave", { onUpdate: onlyOnNonLiveSync });
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncOnFileOpen", { onUpdate: onlyOnNonLiveSync });
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncOnStart", { onUpdate: onlyOnNonLiveSync });
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncAfterMerge", { onUpdate: onlyOnNonLiveSync });
});
void addPanel(
paneEl,
$msg("obsidianLiveSyncSettingTab.titleUpdateThinning"),
undefined,
visibleOnly(() => !this.isConfiguredAs("syncMode", "LIVESYNC"))
).then((paneEl) => {
paneEl.addClass("wizardHidden");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("batchSave");
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("batchSaveMinimumDelay", {
acceptZero: true,
onUpdate: visibleOnly(() => this.isConfiguredAs("batchSave", true)),
});
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("batchSaveMaximumDelay", {
acceptZero: true,
onUpdate: visibleOnly(() => this.isConfiguredAs("batchSave", true)),
});
});
void addPanel(
paneEl,
$msg("obsidianLiveSyncSettingTab.titleDeletionPropagation"),
undefined,
undefined,
LEVEL_ADVANCED
).then((paneEl) => {
paneEl.addClass("wizardHidden");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("trashInsteadDelete");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("doNotDeleteFolder");
});
void addPanel(
paneEl,
$msg("obsidianLiveSyncSettingTab.titleConflictResolution"),
undefined,
undefined,
LEVEL_ADVANCED
).then((paneEl) => {
paneEl.addClass("wizardHidden");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("resolveConflictsByNewerFile");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("checkConflictOnlyOnOpen");
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("showMergeDialogOnlyOnActive");
});
void addPanel(
paneEl,
$msg("obsidianLiveSyncSettingTab.titleSyncSettingsViaMarkdown"),
undefined,
undefined,
LEVEL_ADVANCED
).then((paneEl) => {
paneEl.addClass("wizardHidden");
new Setting(paneEl).autoWireText("settingSyncFile", { holdValue: true }).addApplyButton(["settingSyncFile"]);
new Setting(paneEl).autoWireToggle("writeCredentialsForSettingSync");
new Setting(paneEl).autoWireToggle("notifyAllSettingSyncFile");
});
void addPanel(
paneEl,
$msg("obsidianLiveSyncSettingTab.titleHiddenFiles"),
undefined,
undefined,
LEVEL_ADVANCED
).then((paneEl) => {
paneEl.addClass("wizardHidden");
const LABEL_ENABLED = $msg("obsidianLiveSyncSettingTab.labelEnabled");
const LABEL_DISABLED = $msg("obsidianLiveSyncSettingTab.labelDisabled");
const hiddenFileSyncSetting = new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameHiddenFileSynchronization"))
.setClass("wizardHidden");
const hiddenFileSyncSettingEl = hiddenFileSyncSetting.settingEl;
const hiddenFileSyncSettingDiv = hiddenFileSyncSettingEl.createDiv("");
hiddenFileSyncSettingDiv.innerText = this.editingSettings.syncInternalFiles ? LABEL_ENABLED : LABEL_DISABLED;
if (this.editingSettings.syncInternalFiles) {
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameDisableHiddenFileSync"))
.setClass("wizardHidden")
.addButton((button) => {
button.setButtonText($msg("obsidianLiveSyncSettingTab.btnDisable")).onClick(async () => {
this.editingSettings.syncInternalFiles = false;
await this.saveAllDirtySettings();
this.display();
});
});
} else {
new Setting(paneEl)
.setName($msg("obsidianLiveSyncSettingTab.nameEnableHiddenFileSync"))
.setClass("wizardHidden")
.addButton((button) => {
button.setButtonText("Merge").onClick(async () => {
this.closeSetting();
// this.resetEditingSettings();
await this.plugin.$anyConfigureOptionalSyncFeature("MERGE");
});
})
.addButton((button) => {
button.setButtonText("Fetch").onClick(async () => {
this.closeSetting();
// this.resetEditingSettings();
await this.plugin.$anyConfigureOptionalSyncFeature("FETCH");
});
})
.addButton((button) => {
button.setButtonText("Overwrite").onClick(async () => {
this.closeSetting();
// this.resetEditingSettings();
await this.plugin.$anyConfigureOptionalSyncFeature("OVERWRITE");
});
});
}
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("suppressNotifyHiddenFilesChange", {});
new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncInternalFilesBeforeReplication", {
onUpdate: visibleOnly(() => this.isConfiguredAs("watchInternalFileChanges", true)),
});
new Setting(paneEl).setClass("wizardHidden").autoWireNumeric("syncInternalFilesInterval", {
clampMin: 10,
acceptZero: true,
});
});
}

View File

@@ -0,0 +1,119 @@
import { $msg } from "../../../lib/src/common/i18n";
import { LEVEL_ADVANCED, LEVEL_EDGE_CASE, LEVEL_POWER_USER, type ConfigLevel } from "../../../lib/src/common/types";
import type { AllSettingItemKey, AllSettings } from "./settingConstants";
export const combineOnUpdate = (func1: OnUpdateFunc, func2: OnUpdateFunc): OnUpdateFunc => {
return () => ({
...func1(),
...func2(),
});
};
export const setLevelClass = (el: HTMLElement, level?: ConfigLevel) => {
switch (level) {
case LEVEL_POWER_USER:
el.addClass("sls-setting-poweruser");
break;
case LEVEL_ADVANCED:
el.addClass("sls-setting-advanced");
break;
case LEVEL_EDGE_CASE:
el.addClass("sls-setting-edgecase");
break;
default:
// NO OP.
}
};
export function setStyle(el: HTMLElement, styleHead: string, condition: () => boolean) {
if (condition()) {
el.addClass(`${styleHead}-enabled`);
el.removeClass(`${styleHead}-disabled`);
} else {
el.addClass(`${styleHead}-disabled`);
el.removeClass(`${styleHead}-enabled`);
}
}
export function visibleOnly(cond: () => boolean): OnUpdateFunc {
return () => ({
visibility: cond(),
});
}
export function enableOnly(cond: () => boolean): OnUpdateFunc {
return () => ({
disabled: !cond(),
});
}
export type OnUpdateResult = {
visibility?: boolean;
disabled?: boolean;
classes?: string[];
isCta?: boolean;
isWarning?: boolean;
};
export type OnUpdateFunc = () => OnUpdateResult;
export type UpdateFunction = () => void;
export type OnSavedHandlerFunc<T extends AllSettingItemKey> = (value: AllSettings[T]) => Promise<void> | void;
export type OnSavedHandler<T extends AllSettingItemKey> = {
key: T;
handler: OnSavedHandlerFunc<T>;
};
export function getLevelStr(level: ConfigLevel) {
return level == LEVEL_POWER_USER
? $msg("obsidianLiveSyncSettingTab.levelPowerUser")
: level == LEVEL_ADVANCED
? $msg("obsidianLiveSyncSettingTab.levelAdvanced")
: level == LEVEL_EDGE_CASE
? $msg("obsidianLiveSyncSettingTab.levelEdgeCase")
: "";
}
export type AutoWireOption = {
placeHolder?: string;
holdValue?: boolean;
isPassword?: boolean;
invert?: boolean;
onUpdate?: OnUpdateFunc;
obsolete?: boolean;
};
export function findAttrFromParent(el: HTMLElement, attr: string): string {
let current: HTMLElement | null = el;
while (current) {
const value = current.getAttribute(attr);
if (value) {
return value;
}
current = current.parentElement;
}
return "";
}
export function wrapMemo<T>(func: (arg: T) => void) {
let buf: T | undefined = undefined;
return (arg: T) => {
if (buf !== arg) {
func(arg);
buf = arg;
}
};
}
export type PageFunctions = {
addPane: (
parentEl: HTMLElement,
title: string,
icon: string,
order: number,
wizardHidden: boolean,
level?: ConfigLevel
) => Promise<HTMLDivElement>;
addPanel: (
parentEl: HTMLElement,
title: string,
callback?: (el: HTMLDivElement) => void,
func?: OnUpdateFunc,
level?: ConfigLevel
) => Promise<HTMLDivElement>;
};

View File

@@ -1,406 +1 @@
import { $t } from "../../../lib/src/common/i18n.ts";
import {
DEFAULT_SETTINGS,
configurationNames,
type ConfigurationItem,
type FilterBooleanKeys,
type FilterNumberKeys,
type FilterStringKeys,
type ObsidianLiveSyncSettings,
} from "../../../lib/src/common/types.ts";
export type OnDialogSettings = {
configPassphrase: string;
preset: "" | "PERIODIC" | "LIVESYNC" | "DISABLE";
syncMode: "ONEVENTS" | "PERIODIC" | "LIVESYNC";
dummy: number;
deviceAndVaultName: string;
};
export const OnDialogSettingsDefault: OnDialogSettings = {
configPassphrase: "",
preset: "",
syncMode: "ONEVENTS",
dummy: 0,
deviceAndVaultName: "",
};
export const AllSettingDefault = { ...DEFAULT_SETTINGS, ...OnDialogSettingsDefault };
export type AllSettings = ObsidianLiveSyncSettings & OnDialogSettings;
export type AllStringItemKey = FilterStringKeys<AllSettings>;
export type AllNumericItemKey = FilterNumberKeys<AllSettings>;
export type AllBooleanItemKey = FilterBooleanKeys<AllSettings>;
export type AllSettingItemKey = AllStringItemKey | AllNumericItemKey | AllBooleanItemKey;
export type ValueOf<T extends AllSettingItemKey> = T extends AllStringItemKey
? string
: T extends AllNumericItemKey
? number
: T extends AllBooleanItemKey
? boolean
: AllSettings[T];
export const SettingInformation: Partial<Record<keyof AllSettings, ConfigurationItem>> = {
liveSync: {
name: "Sync Mode",
},
couchDB_URI: {
name: "Server URI",
placeHolder: "https://........",
},
couchDB_USER: {
name: "Username",
desc: "username",
},
couchDB_PASSWORD: {
name: "Password",
desc: "password",
},
couchDB_DBNAME: {
name: "Database Name",
},
passphrase: {
name: "Passphrase",
desc: "Encryption phassphrase. If changed, you should overwrite the server's database with the new (encrypted) files.",
},
showStatusOnEditor: {
name: "Show status inside the editor",
desc: "Requires restart of Obsidian.",
},
showOnlyIconsOnEditor: {
name: "Show status as icons only",
},
showStatusOnStatusbar: {
name: "Show status on the status bar",
desc: "Requires restart of Obsidian.",
},
lessInformationInLog: {
name: "Show only notifications",
desc: "Disables logging, only shows notifications. Please disable if you report an issue.",
},
showVerboseLog: {
name: "Verbose Log",
desc: "Show verbose log. Please enable if you report an issue.",
},
hashCacheMaxCount: {
name: "Memory cache size (by total items)",
},
hashCacheMaxAmount: {
name: "Memory cache size (by total characters)",
desc: "(Mega chars)",
},
writeCredentialsForSettingSync: {
name: "Write credentials in the file",
desc: "(Not recommended) If set, credentials will be stored in the file.",
},
notifyAllSettingSyncFile: {
name: "Notify all setting files",
},
configPassphrase: {
name: "Passphrase of sensitive configuration items",
desc: "This passphrase will not be copied to another device. It will be set to `Default` until you configure it again.",
},
configPassphraseStore: {
name: "Encrypting sensitive configuration items",
},
syncOnSave: {
name: "Sync on Save",
desc: "Starts synchronisation when a file is saved.",
},
syncOnEditorSave: {
name: "Sync on Editor Save",
desc: "When you save a file in the editor, start a sync automatically",
},
syncOnFileOpen: {
name: "Sync on File Open",
desc: "Forces the file to be synced when opened.",
},
syncOnStart: {
name: "Sync on Startup",
desc: "Automatically Sync all files when opening Obsidian.",
},
syncAfterMerge: {
name: "Sync after merging file",
desc: "Sync automatically after merging files",
},
trashInsteadDelete: {
name: "Use the trash bin",
desc: "Move remotely deleted files to the trash, instead of deleting.",
},
doNotDeleteFolder: {
name: "Keep empty folder",
desc: "Should we keep folders that don't have any files inside?",
},
resolveConflictsByNewerFile: {
name: "(BETA) Always overwrite with a newer file",
desc: "Testing only - Resolve file conflicts by syncing newer copies of the file, this can overwrite modified files. Be Warned.",
},
checkConflictOnlyOnOpen: {
name: "Delay conflict resolution of inactive files",
desc: "Should we only check for conflicts when a file is opened?",
},
showMergeDialogOnlyOnActive: {
name: "Delay merge conflict prompt for inactive files.",
desc: "Should we prompt you about conflicting files when a file is opened?",
},
disableMarkdownAutoMerge: {
name: "Always prompt merge conflicts",
desc: "Should we prompt you for every single merge, even if we can safely merge automatcially?",
},
writeDocumentsIfConflicted: {
name: "Apply Latest Change if Conflicting",
desc: "Enable this option to automatically apply the most recent change to documents even when it conflicts",
},
syncInternalFilesInterval: {
name: "Scan hidden files periodically",
desc: "Seconds, 0 to disable",
},
batchSave: {
name: "Batch database update",
desc: "Reducing the frequency with which on-disk changes are reflected into the DB",
},
readChunksOnline: {
name: "Fetch chunks on demand",
desc: "(ex. Read chunks online) If this option is enabled, LiveSync reads chunks online directly instead of replicating them locally. Increasing Custom chunk size is recommended.",
},
syncMaxSizeInMB: {
name: "Maximum file size",
desc: "(MB) If this is set, changes to local and remote files that are larger than this will be skipped. If the file becomes smaller again, a newer one will be used.",
},
useIgnoreFiles: {
name: "(Beta) Use ignore files",
desc: "If this is set, changes to local files which are matched by the ignore files will be skipped. Remote changes are determined using local ignore files.",
},
ignoreFiles: {
name: "Ignore files",
desc: "Comma separated `.gitignore, .dockerignore`",
},
batch_size: {
name: "Batch size",
desc: "Number of changes to sync at a time. Defaults to 50. Minimum is 2.",
},
batches_limit: {
name: "Batch limit",
desc: "Number of batches to process at a time. Defaults to 40. Minimum is 2. This along with batch size controls how many docs are kept in memory at a time.",
},
useTimeouts: {
name: "Use timeouts instead of heartbeats",
desc: "If this option is enabled, PouchDB will hold the connection open for 60 seconds, and if no change arrives in that time, close and reopen the socket, instead of holding it open indefinitely. Useful when a proxy limits request duration but can increase resource usage.",
},
concurrencyOfReadChunksOnline: {
name: "Batch size of on-demand fetching",
},
minimumIntervalOfReadChunksOnline: {
name: "The delay for consecutive on-demand fetches",
},
suspendFileWatching: {
name: "Suspend file watching",
desc: "Stop watching for file changes.",
},
suspendParseReplicationResult: {
name: "Suspend database reflecting",
desc: "Stop reflecting database changes to storage files.",
},
writeLogToTheFile: {
name: "Write logs into the file",
desc: "Warning! This will have a serious impact on performance. And the logs will not be synchronised under the default name. Please be careful with logs; they often contain your confidential information.",
},
deleteMetadataOfDeletedFiles: {
name: "Do not keep metadata of deleted files.",
},
useIndexedDBAdapter: {
name: "(Obsolete) Use an old adapter for compatibility",
desc: "Before v0.17.16, we used an old adapter for the local database. Now the new adapter is preferred. However, it needs local database rebuilding. Please disable this toggle when you have enough time. If leave it enabled, also while fetching from the remote database, you will be asked to disable this.",
obsolete: true,
},
watchInternalFileChanges: {
name: "Scan changes on customization sync",
desc: "Do not use internal API",
},
doNotSuspendOnFetching: {
name: "Fetch database with previous behaviour",
},
disableCheckingConfigMismatch: {
name: "Do not check configuration mismatch before replication",
},
usePluginSync: {
name: "Enable customization sync",
},
autoSweepPlugins: {
name: "Scan customization automatically",
desc: "Scan customization before replicating.",
},
autoSweepPluginsPeriodic: {
name: "Scan customization periodically",
desc: "Scan customization every 1 minute.",
},
notifyPluginOrSettingUpdated: {
name: "Notify customized",
desc: "Notify when other device has newly customized.",
},
remoteType: {
name: "Remote Type",
desc: "Remote server type",
},
endpoint: {
name: "Endpoint URL",
placeHolder: "https://........",
},
accessKey: {
name: "Access Key",
},
secretKey: {
name: "Secret Key",
},
region: {
name: "Region",
placeHolder: "auto",
},
bucket: {
name: "Bucket Name",
},
useCustomRequestHandler: {
name: "Use Custom HTTP Handler",
desc: "Enable this if your Object Storage doesn't support CORS",
},
maxChunksInEden: {
name: "Maximum Incubating Chunks",
desc: "The maximum number of chunks that can be incubated within the document. Chunks exceeding this number will immediately graduate to independent chunks.",
},
maxTotalLengthInEden: {
name: "Maximum Incubating Chunk Size",
desc: "The maximum total size of chunks that can be incubated within the document. Chunks exceeding this size will immediately graduate to independent chunks.",
},
maxAgeInEden: {
name: "Maximum Incubation Period",
desc: "The maximum duration for which chunks can be incubated within the document. Chunks exceeding this period will graduate to independent chunks.",
},
settingSyncFile: {
name: "Filename",
desc: "Save settings to a markdown file. You will be notified when new settings arrive. You can set different files by the platform.",
},
preset: {
name: "Presets",
desc: "Apply preset configuration",
},
syncMode: {
name: "Sync Mode",
},
periodicReplicationInterval: {
name: "Periodic Sync interval",
desc: "Interval (sec)",
},
syncInternalFilesBeforeReplication: {
name: "Scan for hidden files before replication",
},
automaticallyDeleteMetadataOfDeletedFiles: {
name: "Delete old metadata of deleted files on start-up",
desc: "(Days passed, 0 to disable automatic-deletion)",
},
additionalSuffixOfDatabaseName: {
name: "Database suffix",
desc: "LiveSync could not handle multiple vaults which have same name without different prefix, This should be automatically configured.",
},
hashAlg: {
name: configurationNames["hashAlg"]?.name || "",
desc: "xxhash64 is the current default.",
},
deviceAndVaultName: {
name: "Device name",
desc: "Unique name between all synchronized devices. To edit this setting, please disable customization sync once.",
},
displayLanguage: {
name: "Display Language",
desc: 'Not all messages have been translated. And, please revert to "Default" when reporting errors.',
},
enableChunkSplitterV2: {
name: "Use splitting-limit-capped chunk splitter",
desc: "If enabled, chunks will be split into no more than 100 items. However, dedupe is slightly weaker.",
},
disableWorkerForGeneratingChunks: {
name: "Do not split chunks in the background",
desc: "If disabled(toggled), chunks will be split on the UI thread (Previous behaviour).",
},
processSmallFilesInUIThread: {
name: "Process small files in the foreground",
desc: "If enabled, the file under 1kb will be processed in the UI thread.",
},
batchSaveMinimumDelay: {
name: "Minimum delay for batch database updating",
desc: "Seconds. Saving to the local database will be delayed until this value after we stop typing or saving.",
},
batchSaveMaximumDelay: {
name: "Maximum delay for batch database updating",
desc: "Saving will be performed forcefully after this number of seconds.",
},
notifyThresholdOfRemoteStorageSize: {
name: "Notify when the estimated remote storage size exceeds on start up",
desc: "MB (0 to disable).",
},
usePluginSyncV2: {
name: "Enable per-file customization sync",
desc: "If enabled, efficient per-file customization sync will be used. A minor migration is required when enabling this feature, and all devices must be updated to v0.23.18. Enabling this feature will result in losing compatibility with older versions.",
},
handleFilenameCaseSensitive: {
name: "Handle files as Case-Sensitive",
desc: "If this enabled, All files are handled as case-Sensitive (Previous behaviour).",
},
doNotUseFixedRevisionForChunks: {
name: "Compute revisions for chunks",
desc: "If this enabled, all chunks will be stored with the revision made from its content.",
},
sendChunksBulkMaxSize: {
name: "Maximum size of chunks to send in one request",
desc: "MB",
},
useAdvancedMode: {
name: "Enable advanced features",
// desc: "Enable advanced mode"
},
usePowerUserMode: {
name: "Enable poweruser features",
// desc: "Enable power user mode",
// level: LEVEL_ADVANCED
},
useEdgeCaseMode: {
name: "Enable edge case treatment features",
},
enableDebugTools: {
name: "Enable Developers' Debug Tools.",
desc: "Requires restart of Obsidian",
},
suppressNotifyHiddenFilesChange: {
name: "Suppress notification of hidden files change",
desc: "If enabled, the notification of hidden files change will be suppressed.",
},
syncMinimumInterval: {
name: "Minimum interval for syncing",
desc: "The minimum interval for automatic synchronisation on event.",
},
};
function translateInfo(infoSrc: ConfigurationItem | undefined | false) {
if (!infoSrc) return false;
const info = { ...infoSrc };
info.name = $t(info.name);
if (info.desc) {
info.desc = $t(info.desc);
}
return info;
}
function _getConfig(key: AllSettingItemKey) {
if (key in configurationNames) {
return configurationNames[key as keyof ObsidianLiveSyncSettings];
}
if (key in SettingInformation) {
return SettingInformation[key as keyof ObsidianLiveSyncSettings];
}
return false;
}
export function getConfig(key: AllSettingItemKey) {
return translateInfo(_getConfig(key));
}
export function getConfName(key: AllSettingItemKey) {
const conf = getConfig(key);
if (!conf) return `${key} (No info)`;
return conf.name;
}
export * from "@lib/common/settingConstants.ts";

View File

@@ -7,6 +7,7 @@ import type {
UXFolderInfo,
UXStat,
} from "../../lib/src/common/types";
import type { CustomRegExp } from "../../lib/src/common/utils";
export interface StorageAccess {
deleteVaultItem(file: FilePathWithPrefix | UXFileInfoStub | UXFolderInfo): Promise<void>;
@@ -48,8 +49,8 @@ export interface StorageAccess {
getFilesIncludeHidden(
basePath: string,
includeFilter?: RegExp[],
excludeFilter?: RegExp[],
includeFilter?: CustomRegExp[],
excludeFilter?: CustomRegExp[],
skipFolder?: string[]
): Promise<FilePath[]>;
}

View File

@@ -23,5 +23,5 @@
}
},
"include": ["**/*.ts"],
"exclude": ["pouchdb-browser-webpack", "utils"]
"exclude": ["pouchdb-browser-webpack", "utils", "src/lib/apps", "**/*.test.ts"]
}

View File

@@ -1,97 +1,111 @@
## 0.24.11
## 0.25.5
Peer-to-peer synchronisation has been implemented!
Until now, I have not provided a synchronisation server; many people may not even know that I have shut down the test server. I confess this is a bit repetitive, but it is a cautionary tale: the decision came from a sense of self-discipline, because it meant that someone existed who could see your data, even if that 'someone' is me. I should not hold such a privileged position, however well-meaning I am and however much I consider myself a servant of all. (Half joking, but also serious).
However, now I can provide you with a signalling server, because, to the best of my knowledge, it is only connected to your device at the network level.
Also, this signalling server is just a Nostr relay, not my implementation. You can run an implementation you consider trustworthy, on a server you trust. You do not even have to trust me. Mate, it is great, isn't it? For your information, strfry is running on my signalling server.
That being said, to be honest, I still have not decided what to do with this signalling server if too much traffic comes in.
Note: You may already have noticed this, but let me mention it again: this is a significantly large update. If you notice anything, please let me know. I will try to fix it as soon as possible (some addresses are on my [profile](https://github.com/vrtmrz)).
## 0.24.23
### New Feature
- Now, we can send custom headers to the server.
- They can be sent to either CouchDB or Object Storage.
- Authentication with JWT in CouchDB is now supported.
- I will describe steps later, but please refer to the [CouchDB document](https://docs.couchdb.org/en/stable/config/auth.html#authentication-configuration).
- A JWT keypair for testing can be generated in the setting dialogue.
### Improved
- The QR Code for set-up can be shown also from the setting dialogue now.
- Conflict checking for preventing unexpected overwriting on the boot-up process has been quite faster.
9th August, 2025
### Fixed
- Some bugs on Dev and Testing modules have been fixed.
## 0.24.22 ~~0.24.21~~
(Really sorry for the confusion. I made a mistake while releasing...).
### Fixed
- Conflicted files are no longer handled in the boot-up process. No more unexpected overwriting.
- This ignores `Always overwrite with a newer file` and is always prevented for safety. Please resolve such files manually or open them.
- Some log messages on conflict resolution have been corrected.
- Automatic merge notifications, displayed on the grounds of `same`, have been demoted to log entries.
- Storage scanning no longer occurs when `Suspend file watching` is enabled (including boot-sequence).
- This change improves safety when troubleshooting or fetching the remote database.
### Improved
- We can now fetch the remote database while keeping local files completely intact.
- With the new option, all files are stored in the local database before fetching, and will then be merged automatically or detected as conflicts.
- The dialogue presenting options when performing `Fetch` is now more informative.
- Saving notes and files now consumes less memory.
- Data is no longer fully buffered in memory and written all at once; instead, it is now written in increments of just over 2 MB.
- Chunk caching is now more efficient.
- Chunks are now managed solely by their count (still maintained as LRU). If memory usage becomes excessive, they will be automatically released by the system runtime (a rough sketch of such a count-bounded cache follows this list).
- Reverse-indexing is also no longer used. It is now performed by scanning the caches, which also acts as WeakRef thinning.
- Both of these may help with #692, #680, and some other issues.
### Changed
- `Incubate Chunks in Document` (also known as `Eden`) is now fully sunset.
- Existing chunks can still be read, but new ones will no longer be created.
- The `Compute revisions for chunks` setting has also been removed.
- This feature is now always enabled and is no longer configurable (restoring the original behaviour).
- As mentioned, `Memory cache size (by total characters)` has been removed.
- The `Memory cache size (by total items)` setting is now the only option available (its value is now interpreted at a 10x ratio compared to the previous version).
### Refactored
- Some class methods have had their arguments fixed to be more consistent.
- Types have been defined for some conditional results.
- A significant refactoring of the core codebase is underway.
- This is part of our ongoing efforts to improve code maintainability, readability, and to unify interfaces.
- Previously, complex files posed a risk due to a low bus factor. Fortunately, as our devices have become faster and more capable, we can now write code that is clearer and more maintainable (and without much cost to performance).
- Hashing functions have been refactored into the `HashManager` class and its derived classes.
- Chunk splitting functions have been refactored into the `ContentSplitterCore` class and its derived classes.
- Change tracking functions have been refactored into the `ChangeManager` class.
- Chunk read/write functions have been refactored into the `ChunkManager` class.
- Fetching chunks on demand is now handled separately from the `ChunkManager` and chunk reading functions. Chunks are queued by the `ChunkManager` and then processed by the `ChunkFetcher`, simplifying the process and reducing unnecessary complexity.
- Then, local database access via `LiveSyncLocalDB` has been refactored to use the new classes.
- References to external sources from `commonlib` have been corrected.
- Type definitions in `types.ts` have been refined.
- Unit tests are being added incrementally.
- I am using `Deno` for testing, to simplify testing and coverage reporting.
- While this is not identical to the Obsidian environment, `jest` may also have limitations. It is certainly better than having no tests.
- In other words, recent manual scenario testing has highlighted some shortcomings.
- `pouchdb-test`, used for testing PouchDB with Deno, has been added, utilising the `memory` adapter.
## 0.24.20
Side note: Although class-oriented programming is sometimes considered an outdated style, I have come to re-evaluate it as valuable from the perspectives of maintainability and readability.
### Improved
- Now we can see the detail of `TypeError` using Obsidian API during remote database access.
## 0.25.4
## 0.24.19
### New Feature
- Now we can generate a QR Code for transferring the configuration to another device.
- This QR Code can be scanned with the camera app or a QR Code reader on another device, and the configuration will be transferred via an Obsidian URL.
- Note: This QR Code is not encrypted. So, please be careful when transferring the configuration.
## 0.24.18
29th July, 2025
### Fixed
- Now no chunk creation errors will be raised after switching `Compute revisions for chunks`.
- Some invisible files can now be handled correctly (e.g., `writing-goals-history.csv`).
- Fetching the configuration from the server now saves it immediately (if we are not in the wizard).
- The PBKDF2Salt is no longer corrupted when attempting replication while the device is offline. (#686)
- If this issue has already occurred, please use `Maintenance` -> `Rebuilding Operations (Remote Only)` -> `Overwrite Remote` and `Send` to resolve it.
- Please perform this operation on the device that is most reliable.
- I am so sorry for the inconvenience; there are no patching workarounds. The rebuilding operation is the only solution.
- This issue only affects the encryption of the remote database and does not impact the local databases on any devices.
- (Preventing synchronisation is by design and expected behaviour, even if it is sometimes inconvenient. This is also why we should avoid using workarounds; it is, admittedly, an excuse).
- In any case, we can unlock the remote from the warning dialogue on receiving devices. We are performing replication, rather than simple synchronisation, at the expense of a little complexity (I would like to thank you again for all your efforts to manage and maintain the settings! Your understanding saves our notes).
- This process may require considerable time and bandwidth (as usual), so please wait patiently and ensure a stable network connection.
### Improved
### Side note
- The mismatched-configuration dialogue is now more informative and has been rewritten to be more user-friendly.
- A mismatched configuration can now be applied without rebuilding (at our own risk).
- Rebuilding decisions are now more fine-grained.
The PBKDF2Salt will be referred to as the `Security Seed`, and it is used to derive the encryption key for replication. Therefore, it should be stored on the server prior to synchronisation. We apologise for the lack of explanation in previous updates!
### Improved internally
## 0.25.3
- Translations can be nested. For example, task: `Some procedure`, check: `%{task} checking`, checkfailed: `%{check} failed` produces `Some procedure checking failed`.
- Up to 10 levels of nesting are supported.
22nd July, 2025
## 0.24.17
### Fixed
Confession. I got the default values wrong. So scary and sorry.
- Now the `Doctor` at migration will save the configuration.
### Behaviour and default changed
## 0.25.2 ~~0.25.1~~
- **NOW INDEED AND ACTUALLY** `Compute revisions for chunks` is enabled again by default. It is necessary for garbage collection of chunks.
- As far as existing users are concerned, this will not automatically change, but the Doctor will inform us.
(0.25.1 was missed due to a mistake in the versioning process).
19th July, 2025
### Refined and New Features
- Fetching the remote database on `RedFlag` now also retrieves remote configurations optionally.
- This is beneficial if we have already set up another device and wish to use the same configuration. We will see the `Unmatched` dialogue much less frequently.
- The setup wizard using Set-up URI and QR code has been improved.
- The message is now more user-friendly.
- The obsolete method (manual setting application) has been removed.
- The `Cancel` button has been added to the setup wizard.
- We can now fetch the remote configuration from the server if it exists, which is useful for adding new devices.
- Mostly the same as fetching the remote configuration via `RedFlag`.
- We can also use the `Doctor` to check and fix the imported (and fetched) configuration before applying it.
### Changes
- The Set-up URI is now encrypted with a new encryption algorithm (mostly the same as `V2`).
- The new Set-up URI is not compatible with version 0.24.x or earlier.
## 0.25.0
19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
After reading Issue #668, I conducted another self-review of the E2EE-related code. In retrospect, it was clearly written by someone inexperienced, which is understandable, but it is still rather embarrassing. Three years is certainly enough time for growth.
I have now rewritten the E2EE code to be more robust and easier to understand. It is significantly more readable and should be easier to maintain in the future. The performance issue, previously considered a concern, has been addressed by introducing a master key and deriving keys using HKDF. This approach is both fast and robust, and it provides protection against rainbow table attacks. (In addition, this implementation has been [a dedicated package on the npm registry](https://github.com/vrtmrz/octagonal-wheels), and tested in 100% branch-coverage).
As a result, this is the first time in a while that forward compatibility has been broken. We have also taken the opportunity to change all metadata to use encryption rather than obfuscation. Furthermore, the `Dynamic Iteration Count` setting is now redundant and has been moved to the `Patches` pane in the settings. Thanks to Rabin-Karp, the eden setting is also no longer necessary and has been relocated accordingly. Therefore, v0.25.0 represents a legitimate and correct evolution.
Older notes are in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md).
Older notes are in
[updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md).

View File

@@ -1,3 +1,45 @@
## 0.25.0
19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
After reading Issue #668, I conducted another self-review of the E2EE-related code. In retrospect, it was clearly written by someone inexperienced, which is understandable, but it is still rather embarrassing. Three years is certainly enough time for growth.
I have now rewritten the E2EE code to be more robust and easier to understand. It is significantly more readable and should be easier to maintain in the future. The performance issue, previously considered a concern, has been addressed by introducing a master key and deriving keys using HKDF. This approach is both fast and robust, and it provides protection against rainbow table attacks. (In addition, this implementation has been [a dedicated package on the npm registry](https://github.com/vrtmrz/octagonal-wheels), and tested in 100% branch-coverage).
As a result, this is the first time in a while that forward compatibility has been broken. We have also taken the opportunity to change all metadata to use encryption rather than obfuscation. Furthermore, the `Dynamic Iteration Count` setting is now redundant and has been moved to the `Patches` pane in the settings. Thanks to Rabin-Karp, the eden setting is also no longer necessary and has been relocated accordingly. Therefore, v0.25.0 represents a legitimate and correct evolution.
### Fixed
- The encryption algorithm now uses HKDF with a master key.
- This is more robust and faster than the previous implementation.
- It is now more secure against rainbow table attacks.
- The previous implementation can still be used via `Patches` -> `End-to-end encryption algorithm` -> `Force V1`.
- Note that `V1: Legacy` can decrypt V2, but produces V1 output.
- `Fetch everything from the remote` now works correctly.
- It no longer creates local database entries before synchronisation.
- Extra log messages during QR code decoding have been removed.
### Changed
- The following settings have been moved to the `Patches` pane:
- `Remote Database Tweak`
- `Incubate Chunks in Document`
- `Data Compression`
### Behavioural and API Changes
- `DirectFileManipulatorV2` now requires new settings (as you may already know, E2EEAlgorithm).
- The database version has been increased to `12` from `10`.
- If an older version is detected, we will be notified and synchronisation will be paused until the update is acknowledged. (It has been a long time since this behaviour was last encountered; we always err on the side of caution, even if it is less convenient.)
### Refactored
- `couchdb_utils.ts` has been separated into several explicitly named files.
- Some missing functions in `bgWorker.mock.ts` have been added.
## 0.24.0
I know that we have been waiting for a long time. It is finally released!
@@ -14,9 +56,330 @@ Thank you, and I hope your troubles will be resolved!
---
## 0.24.31
10th July, 2025
### Fixed
- The description of `Enable Developers' Debug Tools.` has been refined.
- Now performance impact is more clearly stated.
- Automatic conflict checking and resolution has been improved.
- It now works in parallel for each file, instead of sequentially. This makes the first synchronisation significantly faster when local file information is available.
- The conflict-resolution dialogue will no longer be shown for multiple files at once.
- It will be shown for each file, one by one.
## 0.24.30
9th July, 2025
### New Feature
- A new chunking algorithm, `V3: Fine deduplication`, has been added and will be recommended after updating.
- The Rabin-Karp algorithm is used for efficient content-defined chunking (a rough sketch of the idea follows after this list).
- This will be the default in new installations.
- It is more robust and faster than the previous one.
- We can change it in the `Advanced` pane of the settings.
- New language `ko` (Korean) has been added.
- Thank you for your contribution, [@ellixspace](https://x.com/ellixspace)!
- Contributions are welcome via any route. Please let me know if I appear to have missed one; it is often the case that I am simply not aware of it.
- Chinese (Simplified) translation has been updated.
- Thank you for your contribution, [@52sanmao](https://github.com/52sanmao)!
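As mentioned above, `V3: Fine deduplication` uses a Rabin-Karp-style rolling hash for content-defined chunking. For illustration only, here is a rough sketch of the idea; the window size, modulus, minimum/maximum chunk sizes, and cut mask are arbitrary example values, not the plugin's actual parameters.

```typescript
// Sketch of content-defined chunking with a rolling (Rabin-Karp-style) hash.
// All constants below are illustrative assumptions.
function chunkByContent(data: Uint8Array, minSize = 256, maxSize = 4096, mask = 0x3ff): Uint8Array[] {
    const WINDOW = 48;          // rolling-hash window length in bytes
    const BASE = 257;           // polynomial base
    const MOD = 1_000_000_007;  // modulus keeping the hash small
    // BASE^(WINDOW-1) mod MOD, used to drop the byte leaving the window.
    let pow = 1;
    for (let i = 1; i < WINDOW; i++) pow = (pow * BASE) % MOD;

    const chunks: Uint8Array[] = [];
    let start = 0;
    let hash = 0;
    for (let i = 0; i < data.length; i++) {
        if (i - start >= WINDOW) {
            // Remove the byte that slides out of the window.
            hash = (hash - (data[i - WINDOW] * pow) % MOD + MOD) % MOD;
        }
        hash = (hash * BASE + data[i]) % MOD;
        const size = i - start + 1;
        // Cut where the content-defined condition fires, or when the chunk grows too large.
        if ((size >= minSize && (hash & mask) === mask) || size >= maxSize) {
            chunks.push(data.subarray(start, i + 1));
            start = i + 1;
            hash = 0;
        }
    }
    if (start < data.length) chunks.push(data.subarray(start));
    return chunks;
}
```

Because the cut points depend on the content itself rather than fixed offsets, inserting or deleting a few bytes only changes the chunks near the edit, which is what makes deduplication finer.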
### Fixed
- Numeric settings no longer lose focus while the value is being changed.
- Doctor now redacts more sensitive information on error reports.
### Improved
- All translations have been rewritten in YAML format, making them easier to manage and contribute to.
- We can write them with comments, newlines, and other YAML features.
- Doctor recommendations are now shown in a user-friendly notation.
- We can now see the recommendation as `V3: Fine deduplication` instead of `v3-rabin-karp`.
### Refactored
- Never-ending `ObsidianLiveSyncSettingTab.ts` has finally been separated into each pane's file.
- Some commented-out code has been removed.
### Acknowledgement
- Jun Murakami, Shun Ishiguro, and Yoshihiro Oyama. 2012. Implementation and Evaluation of a Cache Deduplication Mechanism with Content-Defined Chunking. In _IPSJ SIG Technical Report_, Vol.2012-ARC-202, No.4. Information Processing Society of Japan, 1-7.
## 0.24.29
20th June, 2025
### Fixed
- Synchronisation with buckets now works correctly, regardless of whether a prefix is set or the bucket has been (re-) initialised (#664).
- An information message is now displayed again while any automatic synchronisation is enabled (#662).
### Tidied up
- Import paths have been tidied up.
## 0.24.28
15th June, 2025
### Fixed
- Batch Update is no longer available in LiveSync mode to avoid unexpected behaviour. (#653)
- Now compatible with Cloudflare R2 again for bucket synchronisation.
- @edo-bari-ikutsu, thank you for [your contribution](https://github.com/vrtmrz/livesync-commonlib/pull/12)!
- Prevention of broken behaviour due to database connection failures has been added (#649).
## 0.24.27
10th June, 2025
### Improved
- We can now use a path prefix for bucket synchronisation.
- For example, if you set `vaultName/` as the prefix, all data will be transferred into the bucket under the `vaultName/` directory.
- The "Use Request API to avoid `inevitable` CORS problem" option is now promoted to the normal setting, not a niche patch.
### Fixed
- Switching replicators is now applied immediately, without the need to restart Obsidian.
### Tidied up
- Some dependencies have been updated to the latest version.
## 0.24.26
14th May, 2025
This update introduces an option to circumvent Cross-Origin Resource Sharing
(CORS) constraints for CouchDB requests, by leveraging Obsidian's native request
API. The implementation of such a feature had previously been deferred due to
significant security considerations.
CORS is a vital security mechanism, enabling servers like CouchDB -- which
functions as a sophisticated REST API -- to control access from different
origins, thereby ensuring secure communication across trust boundaries. I had
long hesitated to offer a CORS circumvention method, as it deviates from
security best practices; my preference was for users to configure CORS correctly
on the server side.
However, this policy has shifted due to specific reports of intractable
CORS-related configuration issues, particularly within enterprise proxy
environments where proxy servers can unpredictably alter or block
communications. Given that a primary objective of the "Self-hosted LiveSync"
plugin is to facilitate secure Obsidian usage within stringent corporate
settings, addressing these 'unavoidable' user-reported problems became
essential; it is, after all, largely the raison d'être of this plugin.
Consequently, the option "Use Request API to avoid `inevitable` CORS problem"
has been implemented. Users are strongly advised to enable this _only_ when
operating within a trusted environment. We can enable this option in the `Patches` pane.
However, just to whisper, this is tremendously fast.
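As a rough sketch of how such a request might look (not the plugin's actual code; the URL, credentials, and database name are placeholders), Obsidian's `requestUrl` issues the HTTP request natively rather than from the page context, so the browser's origin checks and CORS preflights do not apply:

```typescript
import { requestUrl } from "obsidian";

// Sketch only: fetch a document from CouchDB through Obsidian's native request API
// instead of fetch(). URL, credentials and database name are placeholders.
async function getDocumentViaRequestApi(id: string): Promise<unknown> {
    const response = await requestUrl({
        url: `https://couchdb.example.com/obsidian-notes/${encodeURIComponent(id)}`,
        method: "GET",
        headers: {
            Authorization: "Basic " + btoa("user:password"),
            Accept: "application/json",
        },
        throw: false, // handle the status ourselves instead of throwing on non-2xx
    });
    if (response.status !== 200) throw new Error(`Request failed: ${response.status}`);
    return response.json; // the parsed body is exposed as a property
}
```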
### New Features
- Automatic display-language switching according to the Obsidian language
  setting.
- We will be asked about this during migration or on first start-up.
- **Note: Please revert to the default language if you report any issues.**
- Not all messages are translated yet. We welcome your contribution!
- Now we can limit which files are synchronised, even among hidden files.
- "Use Request API to avoid `inevitable` CORS problem" has been implemented.
- This is less secure; please use it only if you are sure that you are in a trusted
  environment and can afford to ignore CORS. Using it together with `Web viewer` or
  similar tools is not recommended (to avoid forged-origin attacks). If you are able
  to configure the server settings, that is always the recommended approach.
- `Show status icon instead of file warnings banner` has been implemented.
- If enabled, the ⛔ icon will be shown inside the status instead of the file
warnings banner. No details will be shown.
### Improved
- All regular expressions can now be inverted by prefixing them with `!!`.
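A tiny sketch of how such an inverted filter could be interpreted (a hypothetical helper, not the plugin's actual implementation):

```typescript
// Sketch: a pattern prefixed with `!!` matches when the regular expression does NOT match.
function matchesFilter(path: string, pattern: string): boolean {
    const inverted = pattern.startsWith("!!");
    const source = inverted ? pattern.slice(2) : pattern;
    const hit = new RegExp(source).test(path);
    return inverted ? !hit : hit;
}

// e.g. "\\.md$" matches Markdown files; "!!\\.md$" matches everything except Markdown files.
```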
### Fixed
- Unexpected files are no longer gathered during hidden file sync.
- `\n` and other new-line characters are no longer broken during bucket
  synchronisation.
- We can purge the remote bucket again if we are using MinIO instead of AWS S3 or
  Cloudflare R2.
- Purging the remote bucket is now more reliable.
- 100 files are purged at a time.
- Some incorrect messages have been fixed.
### Behaviour changed
- Descending into deeper directories to gather hidden files is now only limited
  by ignore filters prefixed with `/` or `\/`. (This means that directories are
  scanned deeper than before.)
- However, inside these directories, the files are still limited by the
  ignore filters.
### Etcetera
- Some code has been tidied up.
- We are trying to suppress fewer warnings and to code more safely.
- Dependent libraries have been updated to the latest version.
- Some build processes have been separated to `pre` and `post` processes.
## 0.24.25
22nd April, 2025
### Improved
- Peer-to-peer synchronisation has become more robust.
### Fixed
- Falsy values in settings are no longer broken during set-up via QR code
  generation.
### Refactored
- Some `window` references now point to `globalThis`.
- Some sloppy imports have been fixed.
- The server-side implementation `Synchromesh` is now suffixed with `deno`
  instead of `server`.
## 0.24.24
15th April, 2025
### Fixed
- JSON files containing `\n` are no longer broken during bucket synchronisation.
  (#623)
- Custom headers and JWT tokens are now correctly sent to the server during
configuration checking. (#624)
### Improved
- Bucket synchronisation has been enhanced for better performance and
reliability.
- Fewer duplicated chunks are now sent to the server. Note: if you find that too
  few chunks have been sent, please let me know. You can still send them to the
  server with `Overwrite remote`.
- Fetching conflicted files from the server is now more reliable.
- Dependent libraries have been updated to the latest version.
- Also, let me know if you have encountered any issues with this update,
  especially if you are using a device that has been in use for a little
  longer.
## 0.24.23
10th April, 2025
### New Feature
- Now, we can send custom headers to the server.
- They can be sent to either CouchDB or Object Storage.
- Authentication with JWT in CouchDB is now supported.
- I will describe the steps later; for now, please refer to the
  [CouchDB documentation](https://docs.couchdb.org/en/stable/config/auth.html#authentication-configuration).
- A JWT keypair for testing can be generated in the setting dialogue.
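For orientation, a minimal sketch of what a JWT-authenticated CouchDB request with an additional custom header could look like; the endpoint, token, and custom header name are placeholders, and the server-side configuration is described in the CouchDB documentation linked above.

```typescript
// Sketch: CouchDB's JWT authentication handler accepts the token in the standard
// Authorization header. The URL, token and extra header are placeholders.
async function listDatabasesWithJwt(token: string): Promise<string[]> {
    const response = await fetch("https://couchdb.example.com/_all_dbs", {
        method: "GET",
        headers: {
            Authorization: `Bearer ${token}`,        // JWT authentication
            "X-Example-Custom-Header": "some-value", // an arbitrary custom header
            Accept: "application/json",
        },
    });
    if (!response.ok) throw new Error(`CouchDB returned ${response.status}`);
    return (await response.json()) as string[];
}
```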
### Improved
- The QR Code for set-up can now also be shown from the settings dialogue.
- Conflict checking, which prevents unexpected overwriting during the boot-up
  process, is now considerably faster.
### Fixed
- Some bugs in the Dev and Testing modules have been fixed.
## 0.24.22 ~~0.24.21~~
1st April, 2025
(Really sorry for the confusion; I made a mistake while releasing...).
### Fixed
- Conflicted files are no longer handled in the boot-up process, so there is no
  more unexpected overwriting.
- This ignores `Always overwrite with a newer file` and is always prevented for
  safety. Please resolve the conflict manually or open the file.
- Some log messages on conflict resolution have been corrected.
- Automatic merge notifications, displayed on the grounds of `same`, have been
  downgraded to logs.
### Improved
- Now we can fetch the remote database while keeping local files completely
  intact.
- With the new option, all files are stored in the local database before
  fetching, and will then be merged automatically or detected as conflicts.
- The dialogue presenting options when performing `Fetch` is now more
  informative.
### Refactored
- Some class methods have had their arguments fixed to be more consistent.
- Types have been defined for some conditional results.
## 0.24.20
24th March, 2025
### Improved
- Now we can see the details of a `TypeError` raised by the Obsidian API during
  remote database access.
### Behaviour and default changed
- **NOW INDEED AND ACTUALLY** `Compute revisions for chunks` has been switched
  back to enabled again; it is necessary for garbage collection of chunks.
- As far as existing users are concerned, this will not change automatically,
  but the Doctor will inform us.
## 0.24.19
5th March, 2025
### New Feature
- Now we can generate a QR Code for transferring the configuration to another device.
- This QR Code can be scanned with the camera app or any QR Code reader on another device, and the configuration will be transferred via an Obsidian URL.
- Note: This QR Code is not encrypted. So, please be careful when transferring the configuration.
## 0.24.18
28th February, 2025
### Fixed
- Chunk creation errors are no longer raised after switching `Compute revisions for chunks`.
- Some invisible files can now be handled correctly (e.g., `writing-goals-history.csv`).
- Fetching the configuration from the server now saves it immediately (if we are not in the wizard).
### Improved
- The mismatched-configuration dialogue is now more informative and has been rewritten to be more user-friendly.
- A mismatched configuration can now be applied without rebuilding (at our own risk).
- Whether rebuilding is required is now decided in a more fine-grained way.
### Improved internally
- Translations can now be nested, e.g., task: `Some procedure`, check: `%{task} checking`, checkfailed: `%{check} failed` produces `Some procedure checking failed`.
- Up to 10 levels of nesting are supported.
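A small sketch of how such nested placeholders might be resolved (a hypothetical helper for illustration only; the actual implementation may differ):

```typescript
// Sketch: resolve %{key} placeholders recursively, up to a fixed nesting depth.
const MAX_NESTING = 10;

function resolve(key: string, dict: Record<string, string>, depth = 0): string {
    const template = dict[key] ?? key;
    if (depth >= MAX_NESTING) return template;
    return template.replace(/%\{([^}]+)\}/g, (_, inner: string) => resolve(inner, dict, depth + 1));
}

// Example from the note above:
const dict = {
    task: "Some procedure",
    check: "%{task} checking",
    checkfailed: "%{check} failed",
};
// resolve("checkfailed", dict) === "Some procedure checking failed"
```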
## 0.24.17
27th February, 2025
Confession. I got the default values wrong. So scary and sorry.
## 0.24.16
### Improved
#### Peer-to-Peer
@@ -116,6 +479,29 @@ And, this is just a single web page, without any server-side code. It is a stati
## 0.24.11
Peer-to-peer synchronisation has been implemented!
Until now, I have not provided a synchronisation server. Many people may not
even know that I have shut down the test server. I confess that this is a bit
repetitive, but it is a cautionary tale: it was a matter of self-discipline,
because running such a server means that someone exists who could see your
data, even if that 'someone' is me. I should not be blind to the superiority
such a position grants, however well-meaning I am and however much I am a
servant of all. (Half joking, but also serious). However, now I can provide
you with a signalling server, because, to the best of my knowledge, it only
connects your devices at the network level. Also, this signalling server is
just a Nostr relay, not my own implementation. You can run an implementation
that you consider trustworthy, on a server that you trust; you do not even
have to trust me. Mate, it is great, isn't it? For your information, strfry
is running on my signalling server.
That being said, to be honest, I still have not decided what to do with this
signalling server if too much traffic comes in.
Note: You have probably already noticed this, but let me mention it again: this
is a significantly large update. If you notice anything, please let me know, and
I will try to fix it as soon as possible (some contact addresses are on my
[profile](https://github.com/vrtmrz)).
### Improved
- New Translation: `es` (Spanish) by @zeedif (Thank you so much)!

View File

@@ -1,4 +1,4 @@
import { encrypt } from "npm:octagonal-wheels@0.1.11/encryption/encryption.js";
import { encrypt } from "npm:octagonal-wheels@0.1.30/encryption/encryption";
const noun = [
"waterfall",