Compare commits

...

291 Commits

Author SHA1 Message Date
vorotamoroz
556ce471f8 ## 0.25.43-patched-8 2026-02-20 14:28:28 +00:00
vorotamoroz
32b6717114 keep a note 2026-02-19 10:38:42 +00:00
vorotamoroz
e0e72fae72 Fixed: saving device name 2026-02-19 10:37:19 +00:00
vorotamoroz
203dd17421 For 0.25.43-patched-7, please refer to updates.md 2026-02-19 10:23:45 +00:00
vorotamoroz
1bde2b2ff1 Fixed an issue where the StorageEventManager
The build by Vite is now being tested
2026-02-19 04:18:18 +00:00
vorotamoroz
2bf1c775ee ## 0.25.43-patched-6
### Fixed

- Unlocking the remote database after rebuilding has been fixed.

### Refactored
- Now `StorageEventManagerBase` is separated from `StorageEventManagerObsidian` according to their concerns.
- Now `FileAccessBase` is separated from `FileAccessObsidian` according to their concerns.
2026-02-18 12:13:05 +00:00
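The separation described above might look like the following hedged sketch; only the two class names come from the entry, everything else is illustrative:

```typescript
// Hedged sketch of the base/platform split: the base class owns the
// platform-neutral logic; the Obsidian subclass adapts platform events.
// All members except the class names are hypothetical.
abstract class StorageEventManagerBase {
    protected enqueue(path: string, type: "create" | "modify" | "delete"): void {
        // platform-neutral queueing/deduplication logic would live here
    }
    abstract watch(): void; // platform-specific wiring
}

class StorageEventManagerObsidian extends StorageEventManagerBase {
    watch(): void {
        // subscribe to Obsidian vault events and forward them, e.g.:
        // vault.on("modify", (file) => this.enqueue(file.path, "modify"));
    }
}
```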
vorotamoroz
4658e3735d Fix Shim 2026-02-17 10:56:05 +00:00
vorotamoroz
627edc96bf bump for beta 2026-02-17 10:14:13 +00:00
vorotamoroz
0a1917e83c Refactor for 0.25.43-patched-5 (very long, please refer to updates.md) 2026-02-17 10:14:04 +00:00
vorotamoroz
3201399bdf bump 2026-02-16 11:51:09 +00:00
vorotamoroz
2ae70e8f07 Refactor: DatabaseService and Replicator 2026-02-16 11:51:03 +00:00
vorotamoroz
2b9bb1ed06 beta bump 2026-02-16 06:51:52 +00:00
vorotamoroz
e63e3e6725 ### Refactor
- Module dependency refined. (For details, please refer to updates.md)
2026-02-16 06:50:31 +00:00
vorotamoroz
6e9ac6a9f9 - Application LifeCycle now starts in Main, not ServiceHub. 2026-02-14 15:21:00 +09:00
vorotamoroz
fb59c4a723 Refactored; please refer to updates.md 2026-02-13 12:02:31 +00:00
vorotamoroz
1b5ca9e52c Refactor (write notes later) 2026-02-12 08:56:30 +00:00
vorotamoroz
787627a156 Refactor: Move some functions from modules to services 2026-02-12 06:27:29 +00:00
vorotamoroz
b1bba7685e Add note. 2026-02-12 03:39:56 +00:00
vorotamoroz
cdfc0ccead wow, auditing... bump. 2026-02-05 12:15:35 +00:00
vorotamoroz
0635cad350 bake i18n 2026-02-05 12:13:12 +00:00
vorotamoroz
6fd1fa6313 Fix typos and make P2P sync not experimental 2026-02-05 12:07:29 +00:00
vorotamoroz
12b1f881dc ### Fixed
- Encryption/decryption issues when using Object Storage as remote have been fixed.
2026-02-05 11:47:01 +00:00
vorotamoroz
bf3efab1af Merge pull request #733 from dayne/patch-2
Update regions setup-flyio-on-the-fly-v2.ipynb
2026-02-02 13:51:48 +09:00
vorotamoroz
da72fda221 Merge pull request #777 from oenhu/patch-1
Update README_cn.md
2026-02-02 13:42:14 +09:00
vorotamoroz
665501f485 Merge pull request #778 from oenhu/patch-2
Create tech_info_cn.md
2026-02-02 13:40:17 +09:00
vorotamoroz
6ee332fff8 Merge pull request #779 from oenhu/main
refactor: replace hardcoded strings with i18n keys
2026-02-02 13:38:01 +09:00
vorotamoroz
f66447cb59 Fix task 2026-02-02 13:00:28 +09:00
vorotamoroz
eb3120a8fd Fix CI 2026-02-02 12:57:49 +09:00
vorotamoroz
5fa39b3c6e Fix typo 2026-02-02 12:28:26 +09:00
vorotamoroz
91c35a88dd separate CIs 2026-02-02 12:27:37 +09:00
vorotamoroz
49f4d79f4f Update dependencies (Mostly translations) 2026-02-02 12:27:15 +09:00
vorotamoroz
abfd010467 Add the detail 2026-02-02 11:39:28 +09:00
vorotamoroz
cde1013359 Reducing ambiguity 2026-02-02 11:32:52 +09:00
vorotamoroz
9c7f9e4316 Tidy 2026-02-02 11:31:40 +09:00
vorotamoroz
aceda16c64 Add beta policy note 2026-02-02 11:31:06 +09:00
vorotamoroz
3656e5c725 Merge branch 'beta' 2026-02-02 11:00:12 +09:00
vorotamoroz
6915b160a2 Bump to stable 2026-02-02 10:57:15 +09:00
vorotamoroz
f464623bf6 revert for release script 2026-01-30 09:36:39 +00:00
vorotamoroz
b1f518071c add note 2026-01-30 09:27:31 +00:00
vorotamoroz
0e903c3520 Change: enabling PATH_TEST_INSTALL in production build 2026-01-30 09:25:12 +00:00
vorotamoroz
edcdfd97c4 Reduce dynamic function binding from APIService 2026-01-30 09:23:54 +00:00
vorotamoroz
c2b7081215 Add some notes 2026-01-30 07:39:03 +00:00
vorotamoroz
cf1954b10e Bump for beta 2026-01-29 07:15:22 +00:00
vorotamoroz
46546e121f Refactor: Move webpeer from lib to main repository. 2026-01-26 09:17:01 +00:00
vorotamoroz
28146eec2c Refactor: Migrate the outdated, unstable platform abstraction layer to Services 2026-01-26 09:13:40 +00:00
vorotamoroz
3cd9b9e06d bump 2026-01-24 16:48:51 +09:00
vorotamoroz
7c43c61b85 ### Fixed
- `No available splitter for settings!!` errors no longer occur after fetching old remote settings while rebuilding the local database.

### Improved

- Boot sequence warning is now kept in the in-editor notification area. (#748)

### New feature

- We can now set the maximum modified time for reflect events in the settings. (for #754)

### Refactored
- Module to service refactoring has been started for better maintainability:
  - UI module has been moved to UI service.

### Behaviour change
- Default chunk splitter version has been changed to `Rabin-Karp` for new installations.
2026-01-24 16:25:04 +09:00
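For context on the new default splitter: Rabin-Karp-style (content-defined) chunking derives chunk boundaries from a rolling hash of the content, so an insertion shifts boundaries only locally and deduplication survives edits. A toy sketch, with illustrative constants rather than the plug-in's actual parameters:

```typescript
// Toy content-defined chunking in the spirit of a Rabin–Karp splitter.
// Real splitters use a proper rolling hash over a sliding window and
// minimum/maximum chunk sizes; the constants here are illustrative only.
function chunkBoundaries(data: Uint8Array, mask = 0x0fff): number[] {
    const boundaries: number[] = [];
    let hash = 0;
    for (let i = 0; i < data.length; i++) {
        hash = (hash * 31 + data[i]) >>> 0; // toy rolling hash
        if ((hash & mask) === mask) boundaries.push(i + 1); // boundary found
    }
    if (data.length > 0 && boundaries[boundaries.length - 1] !== data.length) {
        boundaries.push(data.length); // final partial chunk
    }
    return boundaries;
}
```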
vorotamoroz
465af4f3aa Urg: 0.25.40 for fix wrong eventType 2026-01-23 11:55:30 +00:00
vorotamoroz
0a1e3dcd51 bump 2026-01-23 06:27:06 +00:00
vorotamoroz
b97756d0cf add unittest 2026-01-23 05:45:53 +00:00
vorotamoroz
acf4bc3737 prettified 2026-01-23 05:45:19 +00:00
vorotamoroz
88838872e7 refactor_services
- Rewrote the services' binding/handler assignment system
- Removed loopholes that allowed traversal between services, to clarify dependencies.
- Consolidated the hidden state, its handler, and the addition of bindings to the handler into a single object.
  - Currently, functions that can have handlers added implement either addHandler or setHandler directly on the function itself.
I understand there are differing opinions on this, but for now, this is how it stands.
- Services now possess a Context. Please ensure each platform has a class that inherits from ServiceContext.
- To permit services to be dynamically bound, the services themselves are now defined by interfaces.
2026-01-23 05:44:14 +00:00
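A minimal sketch of the handler-carrying functions described above, with entirely hypothetical names (the entry only states that `addHandler`/`setHandler` live on the function itself):

```typescript
// Sketch: a callable that carries its own setHandler, so the binding
// lives on the function object itself. Names are illustrative.
type HandlerFn<A extends unknown[], R> = (...args: A) => R;
type Bindable<A extends unknown[], R> = HandlerFn<A, R> & {
    setHandler(h: HandlerFn<A, R>): void;
};

function makeBindable<A extends unknown[], R>(fallback: HandlerFn<A, R>): Bindable<A, R> {
    let handler: HandlerFn<A, R> = fallback;
    const fn = ((...args: A) => handler(...args)) as Bindable<A, R>;
    fn.setHandler = (h) => {
        handler = h;
    };
    return fn;
}

// Usage (illustrative): a service declares the binding point, and a
// platform layer assigns the concrete handler later.
const readFile = makeBindable<[string], Promise<string>>(async () => {
    throw new Error("not bound yet");
});
readFile.setHandler(async (path) => `contents of ${path}`);
```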
vorotamoroz
7d3827d335 bump 2026-01-17 14:36:47 +09:00
vorotamoroz
92d3a0cfa2 ### Fixed
- Fixed an issue where indexedDB would not close correctly on some environments, causing unexpected errors during database operations.
2026-01-17 14:32:44 +09:00
vorotamoroz
bba26624ad bump 2026-01-15 04:12:19 +00:00
vorotamoroz
b82f497cab Add development guide link 2026-01-15 04:04:49 +00:00
vorotamoroz
37f4d13e75 Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2026-01-15 03:59:53 +00:00
vorotamoroz
7965f5342c Fix type 2026-01-15 12:47:26 +09:00
vorotamoroz
9cdc14dda8 Fix: CI 2026-01-15 12:45:29 +09:00
vorotamoroz
4f46276ebf Fix for CI 2026-01-15 12:45:13 +09:00
vorotamoroz
931d360fb1 Add summary note 2026-01-14 09:41:16 +00:00
vorotamoroz
f68c1855da Remove obsoleted file. 2026-01-14 08:07:40 +00:00
vorotamoroz
dff654b6e5 Fixed:
- Fixed mishandling when removing duplicated queue items during processing
- Fixed dependency importing
2026-01-14 08:07:22 +00:00
vorotamoroz
7e85bcbf08 harness-ci.yml mod ci 2026-01-10 12:20:17 +09:00
vorotamoroz
38a695ea12 Fix actions 2026-01-09 12:18:38 +00:00
vorotamoroz
a502b0cd0c Split the tests 2026-01-09 12:15:57 +00:00
vorotamoroz
934f708753 Improve timing issue 2026-01-09 12:05:40 +00:00
vorotamoroz
0e574c6cb1 Flag executable 2026-01-09 11:50:23 +00:00
vorotamoroz
7375a85b07 Tests:
- More tests have been added.
2026-01-09 11:46:37 +00:00
vorotamoroz
4c3393d8b2 Fixed:
- Databases are now correctly closed after rebuilding.
2026-01-09 11:45:00 +00:00
oenhu
a9f1bbff9f refactor: replace hardcoded strings with i18n keys 2026-01-09 00:29:40 +08:00
oenhu
f86815e420 Create tech_info_cn.md
Create Chinese version
2026-01-08 15:07:06 +08:00
oenhu
fd16b166ef Update README_cn.md
Update the translation content based on the latest English README.md.
2026-01-08 01:41:07 +08:00
vorotamoroz
02aa9319c3 Remove chromiumSandbox for Actions 2026-01-07 09:15:31 +00:00
vorotamoroz
1a72e46d53 Fix excluded 2026-01-07 09:12:21 +00:00
vorotamoroz
d755579968 Add excluded 2026-01-07 09:08:14 +00:00
vorotamoroz
b74ee9df77 Fix to pulling submodules 2026-01-07 09:03:18 +00:00
vorotamoroz
daa04bcea8 Add actions 2026-01-07 09:01:05 +00:00
vorotamoroz
b96b2f24a6 Add some script and run npm install 2026-01-07 09:00:55 +00:00
vorotamoroz
5569ab62df Change Port 2026-01-07 09:00:16 +00:00
vorotamoroz
d84b6c4f15 Flag executable 2026-01-07 08:41:43 +00:00
vorotamoroz
336f2c8a4d Add Test 2026-01-07 08:38:33 +00:00
vorotamoroz
b52ceec36a Update submodule 2026-01-07 08:33:06 +00:00
vorotamoroz
1e6400cf79 Tidied:
- Add some note for corrupting notice
- Array function standardised.

Fixed:
- Remove obsoleted accessing to Obsidian's global.
- Avoiding errors in exceptional circumstances.
- Removal of several outdated and inefficient QueueProcessor implementations
  - Log output is now a bit efficient.
2026-01-07 04:45:19 +00:00
vorotamoroz
1ff1ac951b - Remove old comments
- Fix Importing paths
2026-01-07 04:37:30 +00:00
vorotamoroz
aa6d771d17 bump 2025-12-25 10:30:04 +00:00
vorotamoroz
512c238415 ### Improved
- Now the garbage collector (V3) has been implemented. (Beta)
- Now the plug-in and device information is stored in the remote database.
2025-12-25 10:29:19 +00:00
vorotamoroz
55ffeeda10 bump 2025-12-24 03:38:40 +00:00
vorotamoroz
65f18b4160 ### Fixed
- The conflict resolution dialogue now correctly shows which device has only older APIs (#764).
2025-12-24 03:37:41 +00:00
vorotamoroz
b0c1d6a1bf bump 2025-12-10 09:48:25 +00:00
vorotamoroz
ca19f2f2ed Modify build script and prettier config type (.prettierrc to .prettierrc.mjs) 2025-12-10 09:47:31 +00:00
vorotamoroz
ac0378ca4b ### Behaviour change
- The plug-in automatically fetches the missing chunks even if `Fetch chunks on demand` is disabled.
    - This change is to avoid loss of data when receiving a bulk of revisions.
    - This can be prevented by enabling `Use Only Local Chunks` in the settings.
- Storage application state is now saved during each event and restored on startup.
- Synchronisation result application state is also now saved during each event and restored on startup.
    - These may avoid some unexpected loss of data when the editor crashes.

### Fixed

- Now the plug-in waits for the application of pending batch changes before the synchronisation starts.
    - This may avoid some unexpected loss or unexpected conflicts.
- The plug-in now sends custom headers correctly when RequestAPI is used.
- Unexpected chunk creation no longer occurs during `Reset synchronisation on This Device` with bucket sync.

### Refactored

- Synchronisation result application process has been refactored.
- Storage application process has been refactored.
    - Please report if you find any unexpected behaviour after this update; this was a fairly large refactoring.
2025-12-10 09:45:57 +00:00
vorotamoroz
f06f8d1eb6 bump 2025-12-05 11:08:48 +00:00
vorotamoroz
77074cb92f ### New feature
- We can analyse the local database with the `Analyse database usage` command.
- We can reset the notification threshold and check the remote usage at once with the `Reset notification threshold and check the remote database usage` command.

### Fixed

- Now the plug-in resets the remote size notification threshold after rebuild.
2025-12-05 11:07:59 +00:00
vorotamoroz
1274b6f683 Grammatical Fix 2025-12-02 11:28:56 +00:00
vorotamoroz
c54ae58c0f bump 2025-12-02 11:24:31 +00:00
vorotamoroz
3f54921e90 ### Improved
- The plug-in now warns when we are in several file-related situations that may cause unexpected behaviour (#300).
2025-12-02 11:20:24 +00:00
vorotamoroz
1b070c2dd4 bump 2025-11-18 08:54:41 +00:00
vorotamoroz
0e5846b670 add BUILD_MODE switching by environment variables. 2025-11-18 08:52:25 +00:00
vorotamoroz
f81e71802b update dependency 2025-11-18 08:51:51 +00:00
vorotamoroz
4c761eebff ### Fixed
- Fetching the configuration from the server now handles an empty remote correctly (reported in #756), via common-lib.
- No longer asking to switch adapters during rebuilding.
2025-11-18 08:51:32 +00:00
vorotamoroz
bf754d6e07 Merge branch 'dev' 2025-11-17 23:37:39 +09:00
vorotamoroz
3cc70b985a bump 2025-11-17 23:30:53 +09:00
vorotamoroz
0e81ec2586 ### Fixed
- Now we can save settings correctly again (#756).
2025-11-17 23:30:16 +09:00
vorotamoroz
1c49acd5b5 Merge pull request #755 from vrtmrz/dev
v0.25.29
2025-11-17 13:32:32 +09:00
vorotamoroz
bab66a64d7 bump again for release.. 2025-11-17 13:27:26 +09:00
vorotamoroz
477913456f OK. 2025-11-17 13:24:45 +09:00
vorotamoroz
b0661cdbab bump 2025-11-17 13:20:16 +09:00
vorotamoroz
18f9a842b7 ### New feature
- We can now configure hidden file synchronisation to always overwrite with the latest version (#579).

### Fixed
- Timing dependency issues during initialisation have been mitigated (#714)

### Improved
- Error logs now contain stack-traces for better inspection.
2025-11-17 13:18:55 +09:00
vorotamoroz
5130bc5f2a I'm really sorry. Is there any way I can test this locally? 2025-11-17 13:16:50 +09:00
vorotamoroz
ca8af80a27 again. 2025-11-17 13:15:56 +09:00
vorotamoroz
df273d273b Sorry for breaking the existing release... 2025-11-17 13:14:56 +09:00
vorotamoroz
23aa0a82ca fix again.. 2025-11-17 13:09:59 +09:00
vorotamoroz
8f488b205b fix releasing 2025-11-17 13:07:47 +09:00
vorotamoroz
893eac5c92 Move from deprecated 2025-11-17 13:06:26 +09:00
vorotamoroz
cd6946bce2 node 22 to 24 2025-11-17 12:53:11 +09:00
vorotamoroz
174ca08954 Add production switch on environment vars 2025-11-17 12:52:52 +09:00
vorotamoroz
4af4d9c4bd bump 2025-11-12 09:23:49 +00:00
vorotamoroz
1b7a25598a ### Improved
- Now we can switch the database adapter between IndexedDB and IDB without rebuilding (#747).
- `Doctor` no longer checks the adapter.

### Changes

- The default adapter is reverted to IDB to avoid memory leaks (#747).

### Fixed (?)

- Reverted QR code library to v1.4.4 (To make sure #752).
2025-11-12 09:22:40 +00:00
vorotamoroz
e2a01c14cc Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-11-07 09:56:53 +00:00
vorotamoroz
a623b987c8 bump 2025-11-07 09:55:22 +00:00
vorotamoroz
db28b9ec11 ### Improved
- Some JWT notes have been added to the setting dialogue (#742).

### Fixed

- Wrong values are no longer encoded into the QR code.
- We can now see why a QR code has not been generated.

### Refactored

- Some dependencies have been updated.
- Internal functions have been modularised into `octagonal-wheels` packages and are well tested.
- Fixed importing from the parent project in library codes. (#729).
2025-11-07 09:54:12 +00:00
vorotamoroz
b2fbbb38f5 Fix tip formatting in jwt-on-couchdb.md
Updated tip formatting in JWT on CouchDB documentation.
2025-11-06 18:49:18 +09:00
vorotamoroz
33c01fdf1e bump 2025-11-06 09:43:58 +00:00
vorotamoroz
536c0426d6 Update lib for switch Refiner to Computed 2025-11-06 09:40:37 +00:00
vorotamoroz
2f848878c2 Add doc 2025-11-06 09:24:25 +00:00
vorotamoroz
c4f2baef5e ### Fixed
#### JWT Authentication

- Now we can use JWT Authentication ES512 correctly (#742).
- Several misdirections in the Setting dialogues have been fixed (i.e., seconds and minutes confusion...).
- The key area in the Setting dialogue has been enlarged and accepts newlines correctly.
- Caching of JWT tokens now works correctly
    - Tokens are now cached and reused until they expire.
    - They will be kept until 10% of the expiration duration remains or 10 seconds, whichever is longer (but at a maximum of 1 minute); see the sketch after this entry.
- JWT settings are now correctly displayed on the Setting dialogue.

#### Other fixes

- Receiving non-latest revisions no longer causes unexpected overwrites.
    - On receiving revisions that made conflicting changes, we are still able to handle them.

### Improved

- Duplicated message notifications are no longer shown when a connection to the remote server fails.
    - Instead, a single notification is shown, and it will be kept on the notification area inside the editor until the situation is resolved.
- The notification area is no longer imposing, distracting, and overwhelming.
    - With a pale background, but bordered and with icons.
2025-11-06 09:24:16 +00:00
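The token-reuse rule above amounts to: refresh once the remaining lifetime drops below max(10% of the lifetime, 10 s), capped at 60 s. A sketch with hypothetical names:

```typescript
// Sketch of the caching rule quoted above; function and parameter names
// are illustrative, not the plug-in's actual API. Times in milliseconds.
function shouldRefreshToken(issuedAt: number, expiresAt: number, now = Date.now()): boolean {
    const lifetime = expiresAt - issuedAt;
    // keep at least 10% of the lifetime or 10 s in reserve, at most 60 s
    const margin = Math.min(Math.max(lifetime * 0.1, 10_000), 60_000);
    return now >= expiresAt - margin;
}
```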
vorotamoroz
a5b88a8d47 Add tips 2025-11-06 02:10:12 +00:00
vorotamoroz
88e61fb41f Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-11-04 11:36:36 +00:00
vorotamoroz
9bf04332bb bump 2025-11-04 11:35:19 +00:00
vorotamoroz
5238dec3f2 fix: Now hidden file synchronisation respects the filters correctly (#631, #735) 2025-11-04 11:34:39 +00:00
vorotamoroz
2b7b411c52 Merge branch 'svelteui' 2025-11-04 11:09:22 +00:00
vorotamoroz
aab0f7f034 Fix Backporting mis 2025-11-04 10:58:46 +00:00
vorotamoroz
b3a0deb0e3 bump 2025-10-31 11:36:55 +01:00
vorotamoroz
b9138d1395 ### Fixed
- Fields in some dialogues can now be entered correctly on mobile devices.

### New features

- We can use TURN server for P2P connections now.
2025-10-31 11:33:06 +01:00
vorotamoroz
04997b84c0 Merge pull request #740 from shaielc/_local
Default to _local when node not supplied
2025-10-30 17:58:24 +09:00
vorotamoroz
7eb9807aa5 Fix import path 2025-10-30 09:35:17 +01:00
vorotamoroz
91a4f234f1 bump 2025-10-30 09:30:38 +01:00
vorotamoroz
82f2860938 ### Fixed
- P2P replication is now more robust and stable.

### Breaking changes

- Sending configuration via a peer-to-peer connection is not compatible with older versions.
2025-10-30 09:29:51 +01:00
shyelc
41a112cd8a Default to _local when node not supplied 2025-10-30 07:27:52 +00:00
vorotamoroz
294ebf0c31 Add automatic node detection 2025-10-30 13:24:01 +09:00
vorotamoroz
5443317157 merged 2025-10-30 02:38:30 +01:00
vorotamoroz
47fe9d2af3 Adjust method name to actual behaviour 2025-10-30 02:28:18 +01:00
Dayne Broderson
c76187c6d2 Update setup-flyio-on-the-fly-v2.ipynb
remove extraneous newline
2025-10-29 17:07:34 -08:00
vorotamoroz
4c260a7d2b Merge pull request #723 from GrzybowskiBYD/patch-1
Fix an issue with CouchDB while using couchdb-init.sh
2025-10-29 10:35:00 +09:00
vorotamoroz
82f6fefd35 bump 2025-10-26 19:51:20 +09:00
vorotamoroz
ada8001fcb ### Fixed
- We are now able to enable optional features correctly again (#732).
- Furthermore, oversized files are no longer processed.
  - Before creating a chunk, the file is verified as the target.
  - The behaviour upon receiving replication has been changed as follows:
    - If the remote file is oversized, it is ignored.
    - If not, but the local file is oversized, it is also ignored.
2025-10-26 19:38:45 +09:00
Dayne Broderson
b3b3ad843c Update regions setup-flyio-on-the-fly-v2.ipynb
Updating the regions list pulled from recent `fly platform regions`
2025-10-22 09:26:13 -08:00
vorotamoroz
8b81570035 Grammatical fixes 2025-10-22 14:02:52 +01:00
vorotamoroz
d3e50421e4 Merge branch 'svelteui' of https://github.com/vrtmrz/obsidian-livesync into svelteui 2025-10-22 14:02:20 +01:00
vorotamoroz
12605f4604 Add updates 2025-10-22 14:02:00 +01:00
vorotamoroz
2c0dd82886 Add updates 2025-10-22 13:56:24 +01:00
vorotamoroz
f5315aacb8 v0.25.23.beta1
### Fixed (This should be backported to 0.25.22 if the beta phase is prolonged)

- Large files no longer create chunks while preparing `Reset Synchronisation on This Device`.

### Behaviour changes

- Setup wizard is now more `goal-oriented`. Brand-new screens are introduced.
- `Fetch everything` and `Rebuild everything` are now `Reset Synchronisation on This Device` and `Overwrite Server Data with This Device's Files`.
- Remote configuration and E2EE settings are now separated into their own modal dialogues.
- Peer-to-Peer settings are also separated into their own modal dialogue.
- The Setup-URI and the issue report are no longer copied to the clipboard automatically. Instead, there is a copy dialogue with buttons to copy them explicitly.
- Optional features are no longer introduced during setup, `Reset Synchronisation on This Device`, or `Overwrite Server Data with This Device's Files`.
- We can no longer perform `Fetch everything` and `Rebuild everything` (removed, hence the old names) without restarting Obsidian.

### Miscellaneous

- Setup QR Code generation is separated into a src/lib/src/API/processSetting.ts file. Please use it as a subrepository if you want to generate QR codes in your own application.
- Setup-URI is also separated into a src/lib/src/API/processSetting.ts
- Some direct access to web APIs is now wrapped into the services layer.

### Dependency updates

- Many dependencies are updated. Please see `package.json`.
- While upgrading TypeScript, fixed many Uint8Array<ArrayBuffer> and Uint8Array type mismatches.
2025-10-22 13:56:15 +01:00
vorotamoroz
5a93066870 bump 2025-10-15 01:02:24 +09:00
vorotamoroz
3a73073505 ### Fixed
- Fixed a bug that caused wrong event bindings and flag inversion (#727)
  - This caused the following issues:
    - In some cases, settings changes were not applied or saved correctly.
    - Automatic synchronisation did not begin correctly.

### Improved
- Overly large diffs are not shown in the file comparison view, for performance reasons.
2025-10-15 01:00:24 +09:00
vorotamoroz
ee0c0ee611 Add a note so that I don't forget 2025-10-13 11:27:03 +09:00
vorotamoroz
d7ea30e304 Merge pull request #726 from vrtmrz/disenchant
Disenchant
2025-10-13 10:40:20 +09:00
vorotamoroz
2b9ded60f7 Add notes 2025-10-13 10:38:32 +09:00
vorotamoroz
40508822cf Bump and add readme 2025-10-13 10:13:45 +09:00
vorotamoroz
6f938d5f54 Update submodule commit reference 2025-10-13 09:48:25 +09:00
vorotamoroz
51dc44bfb0 bump 0.25.21.beta2 2025-10-08 05:01:07 +01:00
vorotamoroz
7c4f2bf78a ### Fixed
- Fixed wrong event type bindings (which caused some events not to be handled correctly).
- Fixed a timing issue detected in StorageEventManager
    - When multiple events for the same file were fired in quick succession, metadata kept older information, inducing unexpected wrong notifications and write prevention.
2025-10-08 05:00:42 +01:00
Marcin Grzybowski
d82122de24 Update couchdb-init.sh
Fixed the {"error":"nodedown","reason":"nonode@nohost is down"} issue
2025-10-06 15:11:59 +02:00
vorotamoroz
67c9b4cf06 bump for beta 2025-10-06 10:45:59 +01:00
vorotamoroz
4808876968 - Rename methods for automatic binding checking.
- Add automatic binding checks.
2025-10-06 10:40:25 +01:00
vorotamoroz
cccff21ecc Merge branch 'main' into disenchant and run prettier 2025-10-04 17:59:42 +09:00
vorotamoroz
d8415a97e5 Move some dependency to devDependency 2025-10-04 17:49:02 +09:00
vorotamoroz
85e9aa2978 For convenience 2025-10-04 17:20:59 +09:00
vorotamoroz
b4eb0e4868 Improved: copy a dev build to vault folder 2025-10-04 17:12:37 +09:00
vorotamoroz
3ea348f468 Merge pull request #716 from chriscross12324/bugfix/setting-panel-hierarchy
bugfix/setting-panel-hierarchy: Fixed visual inconsistencies
2025-10-04 17:07:25 +09:00
vorotamoroz
81362816d6 Disenchanting and dispelling the nightmarish implicit something
Indeed, even if this changeset is mostly another nightmare. It might be in beta for a while.
2025-10-03 14:59:14 +01:00
Chris Coulthard
d6efe4510f bugfix/setting-panel-hierarchy: Updated styling to improve consistency. Updated various setting panes (fixed hierarchy issues that caused some titles to overlap rather than bump, and whitespace formatting) 2025-09-28 19:27:14 -07:00
vorotamoroz
ca5a7ae18c bump 2025-09-26 11:42:06 +01:00
vorotamoroz
a27652ac34 ### Fixed
- Chunk fetching no longer reports errors when the fetched chunk could not be saved (#710).
    - The fetched chunk is just used temporarily.
- Chunk fetching now reports errors when the fetched chunk is surely corrupted (#710, #712).
- The plug-in no longer detects files that it has modified itself.
    - This may reduce unnecessary file comparisons and unexpected file states.

### Improved

- The remote database configuration is now checked with respect to the CouchDB version (#714).
2025-09-26 11:40:41 +01:00
vorotamoroz
29b89efc47 ## 0.25.19
### Improved
- Encoding/decoding of chunk data and encryption/decryption are now performed with native functions (if available).
2025-09-18 12:29:09 +01:00
vorotamoroz
ef3eef2d08 Bump 2025-09-17 09:07:49 +01:00
vorotamoroz
ffbbe32e36 ### Fixed
- Property encryption detection now works correctly (On Self-hosted LiveSync, it was not broken, but as a library, it was not working correctly).
- Initialising the chunk splitter is now reliably performed.
- DirectFileManipulator now works fine (as a library)
    - Old `DirectFileManipulatorV1` is now removed.

### Refactored

- Removed some unnecessary intermediate files.
2025-09-17 09:05:16 +01:00
vorotamoroz
0a5371cdee bump 2025-09-16 10:47:00 +01:00
vorotamoroz
466bb142e2 Refactored: removed some unnecessary intermediate file 2025-09-16 10:45:14 +01:00
vorotamoroz
d394a4ce7f ### Fixed
- Information-level logs are no longer produced when toggling `Show only notifications` in the settings (#708).
- Ignore filters for hidden file sync now work correctly (#709).
2025-09-16 10:30:48 +01:00
vorotamoroz
71ce76e502 Merge pull request #704 from Gron-HD/main
Add Docker Compose troubleshooting for own server setup
2025-09-16 15:33:14 +09:00
vorotamoroz
ae7a7dd456 Merge pull request #706 from abhith/patch-1
docs(README): update links for Customisation Sync and Hidden File Sync
2025-09-16 15:31:00 +09:00
vorotamoroz
4048186bb5 bump 2025-09-04 11:46:50 +01:00
vorotamoroz
2b94fd9139 ## Improved
- Improved connectivity for P2P connections
- The connection to the signalling server can now be dropped while in the background or on explicit disconnection.
  - These features use a patch that has not been incorporated upstream.
2025-09-04 11:44:49 +01:00
vorotamoroz
ec72ece86d bump 2025-09-03 10:12:51 +01:00
vorotamoroz
e394a994c5 Fix typo 2025-09-03 10:11:46 +01:00
vorotamoroz
aa23b6a39a ### Improved
- Now we can configure `forcePathStyle` for bucket synchronisation (#707).
2025-09-03 10:08:49 +01:00
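For context, `forcePathStyle` on an S3-compatible client switches requests from virtual-hosted style (`https://bucket.host/key`) to path style (`https://host/bucket/key`), which self-hosted endpoints such as MinIO typically require. A hedged example with the AWS SDK v3 client (endpoint and credentials are placeholders, matching the test values used elsewhere in this repository):

```typescript
import { S3Client } from "@aws-sdk/client-s3";

// Placeholder endpoint/credentials; only forcePathStyle is the point here.
const s3 = new S3Client({
    endpoint: "http://127.0.0.1:9000",
    region: "us-east-1",
    credentials: { accessKeyId: "minioadmin", secretAccessKey: "minioadmin" },
    forcePathStyle: true, // request http://host/bucket/key instead of http://bucket.host/key
});
```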
vorotamoroz
58e328a591 bump 2025-09-02 10:27:23 +01:00
vorotamoroz
1730c39d70 ### Fixed
- Handling of IndexedDB opening has been made reliable.
- The migration check for corrupted-file detection has been fixed.
    - It now reports conflicted files as non-recoverable, but notes them as such.
    - It no longer errors on not-found files.
2025-09-02 10:24:13 +01:00
Abhith Rajan
dfeac201a2 docs(README): update links for Customisation Sync and Hidden File Sync sections 2025-09-01 21:54:11 +04:00
vorotamoroz
b42152db5e bump 2025-09-01 12:28:01 +09:00
vorotamoroz
171cfc0a38 ### Fixed
- Conflict resolving dialogue now properly displays the changeset name instead of A or B (#691).
2025-09-01 12:23:38 +09:00
vorotamoroz
d2787bdb6a Update older dependencies 2025-09-01 12:21:12 +09:00
Ron Gerber
44b022f003 Add Docker Compose troubleshooting for own server setup
Added another option for the init command that passes the variables directly. When I tried the command above, I got the mentioned error.
2025-08-30 01:00:58 +02:00
vorotamoroz
58845276e7 bump 2025-08-29 11:48:33 +01:00
vorotamoroz
a2cc093a9e ### Fixed
- Fixed an issue with automatic synchronisation starting (#702).
2025-08-29 11:46:11 +01:00
vorotamoroz
fec203a751 bump 2025-08-28 10:27:31 +01:00
vorotamoroz
1a06837769 ### Fixed
- Automatic translation detection on the first launch now works correctly (#630).
- No errors are shown during synchronisation while offline (if not explicitly requested) (#699).
- Some missing checks during automatic synchronisation now work correctly.
2025-08-28 10:26:17 +01:00
vorotamoroz
18d1ce8ec8 bump 2025-08-26 11:17:09 +01:00
vorotamoroz
2221d8c4e8 Update dependency 2025-08-26 11:14:30 +01:00
vorotamoroz
08548f8630 ### New experimental feature
- We can perform Garbage Collection (Beta2) without rebuilding the entire database, and also fetch the database.

### Fixed

- Resetting the bucket now properly clears all uploaded files.

### Refactored

- Some files have been moved to better reflect their purpose and improve maintainability.
- The extensive LiveSyncLocalDB has been split into separate files for each role.
2025-08-26 11:09:33 +01:00
vorotamoroz
5d24c3b984 bump 2025-08-20 10:36:47 +01:00
vorotamoroz
de8fd43c8b ### Fixed
- CORS Checking messages now use replacements.
- Configuring CORS settings via the UI now respects the existing rules.
- Startup checking now works correctly again: it performs the migration check serially, and then also fixes starting LiveSync or start-up sync. (#696)
- The status line in the editor now supports 'Bases'.
2025-08-20 10:36:22 +01:00
vorotamoroz
ed88761eaa bump 2025-08-18 06:32:19 +01:00
vorotamoroz
4dcb37f5a2 ## 0.25.8
### New feature
- Insecure chunk detection has been implemented.

### Fixed
- Unexpected `Failed to obtain PBKDF2 salt` or similar errors during bucket-synchronisation no longer occur.
- Unexpected long delays for chunk-missing documents when using bucket-synchronisation have been resolved.
- Fetched remote chunks are now properly stored in the local database if `Fetch chunks on demand` is enabled.
- The 'fetch' dialogue's message has been refined.
- Corrupted documents are no longer written to the storage during the boot sequence.

### Refactored
- Type errors have been corrected.
2025-08-18 06:26:50 +01:00
vorotamoroz
db0562eda1 A minor note added 2025-08-15 11:06:55 +01:00
vorotamoroz
b610d5d959 bump 2025-08-15 11:04:34 +01:00
vorotamoroz
5abba74f3b ## 0.25.7
### Fixed

- Off-loaded chunking has been fixed to ensure proper functionality (#693).
- Chunk document ID assignment has been fixed.
- The replication-prevention message shown on version-up detection has been improved (#686).
- `Keep A` and `Keep B` on the conflict-resolving dialogue have been renamed to `Use Base` and `Use Conflicted` (#691).

### Improved

- Documents whose metadata and content size do not match are now detected and reported, and prevented from being applied to the storage.

### New Features

- `Scan for Broken files` has been implemented on `Hatch` -> `TroubleShooting`.

### Refactored

- Off-loaded processes have been refactored for better maintainability.
- Removed unused code.
2025-08-15 10:51:39 +01:00
vorotamoroz
021c1fccfe Merge pull request #667 from h-exx/main
Edited the Docker Compose documentation to make it work
2025-08-14 14:24:14 +09:00
vorotamoroz
0a30af479f Fix Fetch chunks on demand 2025-08-09 02:17:50 +09:00
vorotamoroz
a9c3f60fe7 bump 2025-08-09 01:51:17 +09:00
vorotamoroz
f996e056af ### Fixed
- Storage scanning no longer occurs when `Suspend file watching` is enabled (including boot-sequence).

### Improved
- Saving notes and files now consumes less memory.
- Chunk caching is now more efficient.
- Both of these may help with #692, #680, and more.

### Changed
- `Incubate Chunks in Document` (also known as `Eden`) is now fully sunset.
- The `Compute revisions for chunks` setting has also been removed.
- As mentioned, `Memory cache size (by total characters)` has been removed.

### Refactored
- A significant refactoring of the core codebase is underway (please refer to the release notes).
2025-08-09 01:45:41 +09:00
vorotamoroz
1073ee9e30 bump 2025-07-29 12:50:25 +01:00
vorotamoroz
f94653e60e ## 0.25.4
- The PBKDF2Salt is no longer corrupted when attempting replication while the device is offline (#686)
2025-07-29 12:45:28 +01:00
vorotamoroz
3dccf2076f Add draft design documents 2025-07-28 19:14:22 +09:00
vorotamoroz
3e78fe03e1 bump 2025-07-22 04:36:22 +01:00
vorotamoroz
4aa8fc3519 ## 0.25.3
### Fixed
- Now the `Doctor` at migration will save the configuration.
2025-07-22 04:35:40 +01:00
vorotamoroz
ba3d2220e1 bump again 2025-07-19 18:08:27 +09:00
vorotamoroz
8057b516af bump 2025-07-19 17:51:53 +09:00
vorotamoroz
f2b4431182 ## 0.25.1
19th July, 2025

### Refined and New Features
- Fetching the remote database on `RedFlag` now also retrieves remote configurations optionally.
- The setup wizard using Set-up URI and QR code has been improved.

### Changes
- The Set-up URI is now encrypted with a new encryption algorithm (mostly the same as `V2`).
2025-07-19 17:26:52 +09:00
vorotamoroz
badec46d9a 0.25.0 released 2025-07-19 15:21:36 +09:00
vorotamoroz
355e41f488 bump for beta 2025-07-14 00:36:09 +09:00
vorotamoroz
e0e7e1b5ca ### Fixed
- The encryption algorithm now uses HKDF with a master key.
- `Fetch everything from the remote` now works correctly.
- Extra log messages during QR code decoding have been removed.

### Changed
- Some settings have been moved to the `Patches` pane:

### Behavioural and API Changes
- `DirectFileManipulatorV2` now requires new settings (as you may already know, E2EEAlgorithm).
- The database version has been increased to `12` from `10`.
2025-07-14 00:33:40 +09:00
vorotamoroz
ce4b61557a bump 2025-07-10 11:24:59 +01:00
vorotamoroz
52b02f3888 ## 0.24.31
### Fixed

- The description of `Enable Developers' Debug Tools.` has been refined.
- Automatic conflict checking and resolution has been improved.
- The conflict-resolution dialogue will not be shown for multiple files at once.
2025-07-10 11:12:44 +01:00
vorotamoroz
7535999388 Update updates.md 2025-07-09 22:28:50 +09:00
vorotamoroz
dccf8580b8 Update updates.md 2025-07-09 22:27:52 +09:00
vorotamoroz
e3964f3c5d bump 2025-07-09 12:48:37 +01:00
vorotamoroz
375e7bde31 ### New Feature
- New chunking algorithm `V3: Fine deduplication` has been added, and will be recommended after updates.
- New language `ko` (Korean) has been added.
- Chinese (Simplified) translation has been updated.

### Fixed

- Numeric settings no longer lose focus while the value is being changed.

### Improved
- All translations have been rewritten into YAML format for easier management and contribution.
- Doctor recommendations are now shown in a user-friendly notation.

### Refactored

- The never-ending `ObsidianLiveSyncSettingTag.ts` has finally been separated into per-pane files.
- Some commented-out code has been removed.
2025-07-09 12:15:59 +01:00
Jacques Faulkner
341f0ab12d Updated Table of Contents 2025-06-27 10:39:13 +01:00
Jacques Faulkner
39340c1e1b Sorry 1 more, reworded the creating directories comments 2025-06-26 14:45:04 +01:00
Jacques Faulkner
55cdc58857 Edited warning to make a bit more sense 2025-06-26 14:42:23 +01:00
Jacques Faulkner
4f1a9dc4e8 Forgot a # 2025-06-25 18:54:22 +01:00
Jacques Faulkner
013818b7d0 Update setup_own_server.md 2025-06-25 18:53:31 +01:00
vorotamoroz
1179438df8 bump 2025-06-20 12:43:39 +01:00
vorotamoroz
47ea8f6859 ## 0.24.29
### Fixed

- Synchronisation with buckets now works correctly, regardless of whether a prefix is set or the bucket has been (re-) initialised (#664).
- An information message is now displayed again while any automatic synchronisation is enabled (#662).

### Tidied up

- Importing paths have been tidied up.
2025-06-20 12:43:15 +01:00
vorotamoroz
670fe16486 Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-06-16 02:53:30 +01:00
vorotamoroz
3f0093916c Add some note 2025-06-16 02:52:59 +01:00
vorotamoroz
9503474d06 bump 2025-06-15 18:49:31 +09:00
vorotamoroz
ddf7b243e4 ## 0.24.28
### Fixed

- Batch Update is no longer available in LiveSync mode to avoid unexpected behaviour. (#653)
- Now compatible with Cloudflare R2 again for bucket synchronisation.
- Added prevention of broken behaviour due to database connection failures (#649).
2025-06-15 18:49:16 +09:00
vorotamoroz
f37561c3c1 Update Library 2025-06-15 18:24:19 +09:00
vorotamoroz
f01429decc Merge pull request #633 from jmarmstrong1207/patch-1
Add simple docker compose to the setup guide
2025-06-10 11:27:22 +09:00
vorotamoroz
c0fcb66924 bump 2025-06-10 02:52:45 +01:00
vorotamoroz
5f76b9809b ## 0.24.27
### Improved

- We can now use a path prefix for bucket synchronisation.
- The "Use Request API to avoid `inevitable` CORS problem" option is now promoted to the normal setting, not a niche patch.

### Fixed

- Switching replicators is now applied immediately, without the need to restart Obsidian.

### Tidied up

- Some dependencies have been updated to the latest version.
2025-06-10 02:51:47 +01:00
vorotamoroz
d61d6fec37 bump manifest 2025-05-14 13:55:42 +01:00
vorotamoroz
9fdd622824 bump 2025-05-14 13:55:17 +01:00
vorotamoroz
3b8d03a189 ## 0.24.26
### New Features

- Automatic display-language changing according to the Obsidian language
  setting.
- We can now limit which files are synchronised, even among hidden files.
- "Use Request API to avoid `inevitable` CORS problem" has been implemented.
- `Show status icon instead of file warnings banner` has been implemented.

### Improved

- All regular expressions can now be inverted by prefixing `!!` (see the sketch after this entry).

### Fixed

- Unexpected files are no longer gathered during hidden file sync.
- `\n` and new-line characters are no longer broken during the bucket
  synchronisation.
- We can purge the remote bucket again if we are using MinIO instead of AWS S3
  or Cloudflare R2.
- Purging the remote bucket is now more reliable.
- Some wrong messages have been fixed.

### Behaviour changed

- Descending into deeper directories to gather hidden files is now limited
  by ignore filters prefixed with `/` or `\/`.

### Etcetera

- Some code has been tidied up.
- Less warning-suppression and safer coding.
- Dependent libraries have been updated to the latest version.
- Some build processes have been separated to `pre` and `post` processes.
2025-05-14 13:11:03 +01:00
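The `!!` inversion noted in the Improved section above could work roughly like this (a sketch; the function name and surrounding API are hypothetical):

```typescript
// Sketch of `!!`-prefix inversion for filter patterns: `!!` flips the
// match result of the remaining regular expression. Illustrative only.
function matchesFilter(pattern: string, path: string): boolean {
    const inverted = pattern.startsWith("!!");
    const body = inverted ? pattern.slice(2) : pattern;
    const hit = new RegExp(body).test(path);
    return inverted ? !hit : hit;
}

// matchesFilter("\\.md$", "note.md")   -> true
// matchesFilter("!!\\.md$", "note.md") -> false
```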
James Armstrong
1f1a39e5a0 Add simple docker compose 2025-04-28 06:10:29 -07:00
vorotamoroz
d0e92cff7a refine readme 2025-04-23 06:45:03 +01:00
vorotamoroz
5addddc792 Update doc 2025-04-23 05:51:51 +01:00
vorotamoroz
d978892661 Update docs 2025-04-23 05:11:59 +01:00
vorotamoroz
cfb061a6a2 add note to troubleshooting 2025-04-23 05:10:40 +01:00
vorotamoroz
381055fc93 bump 2025-04-22 11:29:42 +01:00
vorotamoroz
37d12916fc ## 0.24.25
### Improved

- Peer-to-peer synchronisation has become more robust.

### Fixed

- Falsy values in settings are no longer broken during set-up via QR code generation.

### Refactored

- Some `window` references now point to `globalThis`.
- Some sloppy imports have been fixed.
- The server-side implementation `Synchromesh` is now suffixed with `deno` instead of `server`.
2025-04-22 11:28:55 +01:00
vorotamoroz
944aa846c4 bump 2025-04-15 11:10:37 +01:00
vorotamoroz
abca808e29 ### Fixed
- JSON files containing `\n` are no longer broken during the bucket synchronisation. (#623)
- Custom headers and JWT tokens are now correctly sent to the server during configuration checking. (#624)

### Improved

- Bucket synchronisation has been enhanced for better performance and reliability.
    - Fewer duplicated chunks are now sent to the server.
    - Fetching conflicted files from the server is now more reliable.
    - Dependent libraries have been updated to the latest version.
2025-04-15 11:09:49 +01:00
vorotamoroz
90bb610133 Update submodule 2025-04-14 03:23:24 +01:00
vorotamoroz
9c5e9fe63b Conclusion stated. 2025-04-11 14:13:22 +01:00
vorotamoroz
00dfae24d7 bump 2025-04-10 14:28:03 +01:00
vorotamoroz
d8a41fe45d ### New Feature
- Now, we can send custom headers to the server.
- Authentication with JWT in CouchDB is now supported.

### Improved

- The QR code for set-up can now also be shown from the setting dialogue.
- Conflict checking to prevent unexpected overwriting during the boot-up process is now much faster.

### Fixed

- Some bugs on Dev and Testing modules have been fixed.
2025-04-10 14:24:33 +01:00
vorotamoroz
30467d1c25 Add link to P2P pseudo client. 2025-04-04 12:30:21 +01:00
vorotamoroz
f8351f1d45 Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-04-04 11:52:12 +01:00
vorotamoroz
5924af98ab Update Library (Probably no effect) 2025-04-04 11:52:05 +01:00
vorotamoroz
2769b61da4 Update setup_own_server.md
Add note; #609
2025-04-04 18:24:13 +09:00
vorotamoroz
bb4409221d Update README.md 2025-04-03 20:57:15 +09:00
vorotamoroz
f398c14200 Bump for release-mistake. 2025-04-01 10:42:29 +01:00
vorotamoroz
27d58508dc Missed 2025-04-01 10:38:12 +01:00
vorotamoroz
d4dea5b226 bump 2025-04-01 10:21:38 +01:00
vorotamoroz
c79dc30cba ## 0.24.21
### Fixed

- Conflicted files are no longer handled in the boot-up process. No more unexpected overwriting.
    - This ignores `Always overwrite with a newer file` and is always prevented for safety. Please resolve such files manually or open them.
- Some log messages on conflict resolution have been corrected.
- Automatic merge notifications, displayed on the grounds of `same`, have been degraded to logs.

### Improved

- We can now fetch the remote database while keeping local files completely intact.
    - With the new option, all files are stored into the local database before fetching, then merged automatically or detected as conflicts.
- The dialogue presenting options when performing `Fetch` is now more informative.

### Refactored

- Some class methods have had their arguments fixed to be more consistent.
- Types have been defined for some conditional results.
2025-04-01 10:20:21 +01:00
vorotamoroz
b3119ee8a9 bump and update dependencies 2025-03-24 18:55:02 +09:00
vorotamoroz
2a1d71da5c Improved: show details of TypeError using Obsidian API. 2025-03-24 12:04:58 +09:00
vorotamoroz
24f31ed19e Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-03-19 04:43:06 +01:00
vorotamoroz
a982629ae6 Update draft : possibly I should share cSpell dictionary. 2025-03-19 04:42:50 +01:00
vorotamoroz
85140aecab Update README.md 2025-03-05 20:40:04 +09:00
vorotamoroz
3f2e23ee88 bump 2025-03-05 11:13:58 +00:00
vorotamoroz
6049c19e8a ## 0.24.19
### New Feature

- Now we can generate a QR Code for transferring the configuration to another device.
2025-03-05 11:12:00 +00:00
vorotamoroz
65648683a3 bump 2025-02-28 12:02:45 +00:00
vorotamoroz
5d70f2c1e9 ## 0.24.18
### Fixed

- Chunk creation errors are no longer raised after switching `Compute revisions for chunks`.
- Some invisible files can now be handled correctly (e.g., `writing-goals-history.csv`).
- Fetching the configuration from the server now saves it immediately (if we are not in the wizard).

### Improved

- The mismatched-configuration dialogue is now more informative and has been rewritten to be more user-friendly.
- A mismatched configuration can now be applied without rebuilding (at our own risk).
- Rebuilding decisions are now more fine-grained.

### Improved internally

- Translations can be nested, e.g., task: `Some procedure`, check: `%{task} checking`, checkfailed: `%{check} failed` produces `Some procedure checking failed` (see the sketch after this entry).
2025-02-28 11:58:15 +00:00
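The nesting described in the entry above can be resolved recursively; a minimal sketch (only the `%{…}` placeholder syntax and the three sample keys come from the entry):

```typescript
// Recursive placeholder resolution for nested translation keys.
// The cycle guard is deliberately coarse; illustrative only.
const messages: Record<string, string> = {
    task: "Some procedure",
    check: "%{task} checking",
    checkfailed: "%{check} failed",
};

function resolve(key: string, seen = new Set<string>()): string {
    if (seen.has(key)) return key; // break cycles
    seen.add(key);
    const template = messages[key] ?? key;
    return template.replace(/%\{(\w+)\}/g, (_, inner: string) => resolve(inner, seen));
}

// resolve("checkfailed") === "Some procedure checking failed"
```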
vorotamoroz
cbcfdc453e update default vault and bump for release 2025-02-27 13:39:28 +00:00
vorotamoroz
a4eb21593c bump 2025-02-27 13:24:51 +00:00
vorotamoroz
05eb2c8262 ## 0.24.16
### Improved

#### Peer-to-Peer

- Peer-to-peer synchronisation now checks that the settings are compatible with each other.
- Peer-to-peer synchronisation now handles the platform and detects pseudo-clients.

#### General

- A new migration method, called `Doctor`, has been implemented.

- The minimum interval before replication is triggered by an event is now configurable.
- A more detailed note has been added, and the nuance of `Report` in the setting dialogue, which was less informative, has been changed.

### Behaviour and default changed

- `Compute revisions for chunks` is enabled again by default; it is necessary for garbage collection of chunks.

### Refactored

- Platform-specific code is further separated. `node` modules are no longer used in the browser or Obsidian.
2025-02-27 13:23:11 +00:00
vorotamoroz
fecefa3631 ### Fixed
- Even without WeakRef, a polyfill is now used and the whole thing works without error. However, if you can switch the WebView engine, it is recommended to switch to one that supports WeakRef.

And bumped.
2025-02-20 10:40:18 +00:00
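The fallback described above might be feature-detected like this (a sketch; note that a strong-reference stand-in defeats garbage collection, which is why a WeakRef-capable WebView engine is recommended):

```typescript
// Illustrative WeakRef fallback: prefer the native implementation; the
// stand-in keeps a strong reference, so it never frees its target.
class StrongRef<T extends object> {
    private readonly target: T;
    constructor(target: T) {
        this.target = target;
    }
    deref(): T | undefined {
        return this.target;
    }
}

const RefImpl = typeof WeakRef !== "undefined" ? WeakRef : StrongRef;
```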
vorotamoroz
f8c4d5ccb0 Add a bit more. 2025-02-18 13:48:41 +00:00
vorotamoroz
e63e79bc8e Add note of flag files. 2025-02-18 13:36:15 +00:00
vorotamoroz
ed76125f3d bump 2025-02-18 13:02:54 +00:00
vorotamoroz
70f4e23474 ## 0.24.14
### Fixed

- Resolving conflicts of JSON files (and sensibly merging them) is now working fine, again!
    - And, failure logs are more informative.
- Releasing event listeners when unwatching the local database is now more robust.

### Refactored

- JSON file conflict resolution dialogue has been rewritten into svelte v5.
- Upgrade eslint.
- Remove unnecessary pragma comments for eslint.
2025-02-18 12:59:18 +00:00
vorotamoroz
f6d5b78cc8 bump 2025-02-17 11:35:34 +00:00
vorotamoroz
405624b51b ## 0.24.13
### Fixed
#### General Replication
- Unexpected errors no longer occur when replication is stopped for some reason (e.g., network disconnection).
#### Peer-to-Peer Synchronisation
- The set-up process will not receive data from unexpected sources.
- No more resource leaks while `broadcasting changes` is enabled.
- Logs are less verbose.
- Received data is now correctly dispatched to other devices.
- The `Timeout` error is now more informative.
- Timeout errors no longer occur when reporting progress to other devices.
- Decision dialogues for the same matter are no longer shown multiple times at once.
- Disconnection of the peer-to-peer synchronisation is now more robust and less error-prone.
#### Webpeer
- Now we can toggle Peers' configuration.
### Refactored
- Cross-platform compatibility layer has been improved.
- Common events are moved to the common library.
- Displaying replication status of the peer-to-peer synchronisation is separated from the main-log-logic.
- Some file names have been changed to be more consistent.
2025-02-17 11:33:35 +00:00
vorotamoroz
90c0ff22b9 Add paths for future maintenance. 2025-02-17 11:30:42 +00:00
vorotamoroz
67568ea886 bump 2025-02-14 11:28:01 +00:00
vorotamoroz
cc29b4058d 0.24.12
Fixed
- Unnecessary acknowledgements are no longer sent when starting peer-to-peer synchronisation.

Refactored
- Platform impedance-matching-layer has been improved.
- Some UIs are now isomorphic between Obsidian and the web applications (for `webpeer`).
2025-02-14 11:15:22 +00:00
238 changed files with 36136 additions and 17156 deletions


@@ -1,9 +0,0 @@
node_modules
build
.eslintrc.js.bak
src/lib/src/patches/pouchdb-utils
esbuild.config.mjs
rollup.config.js
src/lib/test
src/lib/src/cli
main.js


@@ -1,13 +1,35 @@
 {
     "root": true,
     "parser": "@typescript-eslint/parser",
-    "plugins": ["@typescript-eslint"],
-    "extends": ["eslint:recommended", "plugin:@typescript-eslint/eslint-recommended", "plugin:@typescript-eslint/recommended"],
+    "plugins": [
+        "@typescript-eslint",
+        "eslint-plugin-svelte",
+        "eslint-plugin-import"
+    ],
+    "extends": [
+        "eslint:recommended",
+        "plugin:@typescript-eslint/eslint-recommended",
+        "plugin:@typescript-eslint/recommended"
+    ],
     "parserOptions": {
         "sourceType": "module",
-        "project": ["tsconfig.json"]
+        "project": [
+            "tsconfig.json"
+        ]
     },
-    "ignorePatterns": [],
+    "ignorePatterns": [
+        "**/node_modules/*",
+        "**/jest.config.js",
+        "src/lib/coverage",
+        "src/lib/browsertest",
+        "**/test.ts",
+        "**/tests.ts",
+        "**/**test.ts",
+        "**/**.test.ts",
+        "src/apps/**",
+        "esbuild.*.mjs",
+        "terser.*.mjs"
+    ],
     "rules": {
         "no-unused-vars": "off",
         "@typescript-eslint/no-unused-vars": [
@@ -34,4 +56,4 @@
             }
         ]
     }
-}
+}

.github/workflows/harness-ci.yml (new file)

@@ -0,0 +1,68 @@
# Run tests by Harnessed CI
name: harness-ci
on:
  workflow_dispatch:
    inputs:
      testsuite:
        description: 'Run specific test suite (leave empty to run all)'
        type: choice
        options:
          - ''
          - 'suite/'
          - 'suitep2p/'
        default: ''
permissions:
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24.x'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install test dependencies (Playwright Chromium)
        run: npm run test:install-dependencies
      - name: Start test services (CouchDB)
        run: npm run test:docker-couchdb:start
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suite/' }}
      - name: Start test services (MinIO)
        run: npm run test:docker-s3:start
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suite/' }}
      - name: Start test services (Nostr Relay + WebPeer)
        run: npm run test:docker-p2p:start
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suitep2p/' }}
      - name: Run tests suite
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suite/' }}
        env:
          CI: true
        run: npm run test suite/
      - name: Run P2P tests suite
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suitep2p/' }}
        env:
          CI: true
        run: npm run test suitep2p/
      - name: Stop test services (CouchDB)
        run: npm run test:docker-couchdb:stop
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suite/' }}
      - name: Stop test services (MinIO)
        run: npm run test:docker-s3:stop
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suite/' }}
      - name: Stop test services (Nostr Relay + WebPeer)
        run: npm run test:docker-p2p:stop
        if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suitep2p/' }}


@@ -10,19 +10,19 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
         with:
           fetch-depth: 0 # otherwise, you will failed to push refs to dest repo
           submodules: recursive
       - name: Use Node.js
-        uses: actions/setup-node@v1
+        uses: actions/setup-node@v4
         with:
-          node-version: '22.x' # You might need to adjust this value to your own version
+          node-version: '24.x' # You might need to adjust this value to your own version
       # Get the version number and put it in a variable
       - name: Get Version
         id: version
         run: |
-          echo "::set-output name=tag::$(git describe --abbrev=0 --tags)"
+          echo "tag=$(git describe --abbrev=0 --tags)" >> $GITHUB_OUTPUT
       # Build the plugin
       - name: Build
         id: build
@@ -36,59 +36,69 @@
           cp main.js manifest.json styles.css README.md ${{ github.event.repository.name }}
           zip -r ${{ github.event.repository.name }}.zip ${{ github.event.repository.name }}
       # Create the release on github
-      - name: Create Release
-        id: create_release
-        uses: actions/create-release@v1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          VERSION: ${{ github.ref }}
+      # - name: Create Release
+      #   id: create_release
+      #   uses: actions/create-release@v1
+      #   env:
+      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      #     VERSION: ${{ steps.version.outputs.tag }}
+      #   with:
+      #     tag_name: ${{ steps.version.outputs.tag }}
+      #     release_name: ${{ steps.version.outputs.tag }}
+      #     draft: true
+      #     prerelease: false
+      # # Upload the packaged release file
+      # - name: Upload zip file
+      #   id: upload-zip
+      #   uses: actions/upload-release-asset@v1
+      #   env:
+      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      #   with:
+      #     upload_url: ${{ steps.create_release.outputs.upload_url }}
+      #     asset_path: ./${{ github.event.repository.name }}.zip
+      #     asset_name: ${{ github.event.repository.name }}-${{ steps.version.outputs.tag }}.zip
+      #     asset_content_type: application/zip
+      # # Upload the main.js
+      # - name: Upload main.js
+      #   id: upload-main
+      #   uses: actions/upload-release-asset@v1
+      #   env:
+      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      #   with:
+      #     upload_url: ${{ steps.create_release.outputs.upload_url }}
+      #     asset_path: ./main.js
+      #     asset_name: main.js
+      #     asset_content_type: text/javascript
+      # # Upload the manifest.json
+      # - name: Upload manifest.json
+      #   id: upload-manifest
+      #   uses: actions/upload-release-asset@v1
+      #   env:
+      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      #   with:
+      #     upload_url: ${{ steps.create_release.outputs.upload_url }}
+      #     asset_path: ./manifest.json
+      #     asset_name: manifest.json
+      #     asset_content_type: application/json
+      # # Upload the style.css
+      # - name: Upload styles.css
+      #   id: upload-css
+      #   uses: actions/upload-release-asset@v1
+      #   env:
+      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      #   with:
+      #     upload_url: ${{ steps.create_release.outputs.upload_url }}
+      #     asset_path: ./styles.css
+      #     asset_name: styles.css
+      #     asset_content_type: text/css
+      - name: Create Release and Upload Assets
+        uses: softprops/action-gh-release@v2
         with:
-          tag_name: ${{ github.ref }}
-          release_name: ${{ github.ref }}
-          draft: true
-          prerelease: false
-      # Upload the packaged release file
-      - name: Upload zip file
-        id: upload-zip
-        uses: actions/upload-release-asset@v1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          upload_url: ${{ steps.create_release.outputs.upload_url }}
-          asset_path: ./${{ github.event.repository.name }}.zip
-          asset_name: ${{ github.event.repository.name }}-${{ steps.version.outputs.tag }}.zip
-          asset_content_type: application/zip
-      # Upload the main.js
-      - name: Upload main.js
-        id: upload-main
-        uses: actions/upload-release-asset@v1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          upload_url: ${{ steps.create_release.outputs.upload_url }}
-          asset_path: ./main.js
-          asset_name: main.js
-          asset_content_type: text/javascript
-      # Upload the manifest.json
-      - name: Upload manifest.json
-        id: upload-manifest
-        uses: actions/upload-release-asset@v1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          upload_url: ${{ steps.create_release.outputs.upload_url }}
-          asset_path: ./manifest.json
-          asset_name: manifest.json
-          asset_content_type: application/json
-      # Upload the style.css
-      - name: Upload styles.css
-        id: upload-css
-        uses: actions/upload-release-asset@v1
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        with:
-          upload_url: ${{ steps.create_release.outputs.upload_url }}
-          asset_path: ./styles.css
-          asset_name: styles.css
-          asset_content_type: text/css
+          # TODO: release notes???
+          files: |
+              ${{ github.event.repository.name }}.zip
+              main.js
+              manifest.json
+              styles.css
+          name: ${{ steps.version.outputs.tag }}
+          tag_name: ${{ steps.version.outputs.tag }}
+          draft: true

.github/workflows/unit-ci.yml (new file)

@@ -0,0 +1,33 @@
# Run Unit test without Harnesses
name: unit-ci
on:
  workflow_dispatch:
permissions:
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24.x'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install test dependencies (Playwright Chromium)
        run: npm run test:install-dependencies
      - name: Run unit tests suite
        run: npm run test:unit

.gitignore

@@ -9,6 +9,7 @@ package-lock.json
 # build
 main.js
 main_org.js
+main_org_*.js
 *.js.map
 meta.json
 meta-*.json
@@ -17,3 +18,13 @@ meta-*.json
 # obsidian
 data.json
 .vscode
+# environment variables
+.env
+# local config files
+*.local
+cov_profile/**
+coverage

.prettierrc (deleted)

@@ -1,7 +0,0 @@
{
    "trailingComma": "es5",
    "tabWidth": 4,
    "printWidth": 120,
    "semi": true,
    "endOfLine": "lf"
}

.prettierrc.mjs (new file)

@@ -0,0 +1,20 @@
import { readFileSync } from "fs";
let localPrettierConfig = {};
try {
    const localConfig = readFileSync(".prettierrc.local", "utf-8");
    localPrettierConfig = JSON.parse(localConfig);
    console.log("Using local Prettier config from .prettierrc.local");
} catch (e) {
    // no local config
}
const prettierConfig = {
    trailingComma: "es5",
    tabWidth: 4,
    printWidth: 120,
    semi: true,
    endOfLine: "cr",
    ...localPrettierConfig,
};
export default prettierConfig;

.test.env (new file)

@@ -0,0 +1,11 @@
hostname=http://localhost:5989/
dbname=livesync-test-db2
minioEndpoint=http://127.0.0.1:9000
username=admin
password=testpassword
accessKey=minioadmin
secretKey=minioadmin
bucketName=livesync-test-bucket
# ENABLE_DEBUGGER=true
# PRINT_LIVESYNC_LOGS=true
# ENABLE_UI=true

README.md

@@ -1,30 +1,39 @@
<!-- For translation: 20240227r0 -->
# Self-hosted LiveSync
[Japanese docs](./README_ja.md) - [Chinese docs](./README_cn.md).
Self-hosted LiveSync is a community-implemented synchronization plugin, available on every obsidian-compatible platform and using CouchDB or Object Storage (e.g., MinIO, S3, R2, etc.) as the server.
Self-hosted LiveSync is a community-developed synchronisation plug-in available on all Obsidian-compatible platforms. It leverages robust server solutions such as CouchDB or object storage systems (e.g., MinIO, S3, R2, etc.) to ensure reliable data synchronisation.
Additionally, it supports peer-to-peer synchronisation using WebRTC now (experimental), enabling you to synchronise your notes directly between devices without relying on a server.
![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
Note: This plugin cannot synchronise with the official "Obsidian Sync".
>[!IMPORTANT]
> This plug-in is not compatible with the official "Obsidian Sync" and cannot synchronise with it.
## Features
- Synchronise vaults efficiently with minimal traffic.
- Handle conflicting modifications effectively.
- Automatically merge simple conflicts.
- Use open-source solutions for the server.
- Compatible solutions are supported.
- Support end-to-end encryption.
- Synchronise settings, snippets, themes, and plug-ins via [Customisation Sync (Beta)](docs/settings.md#6-customization-sync-advanced) or [Hidden File Sync](docs/settings.md#7-hidden-files-advanced).
- Enable WebRTC peer-to-peer synchronisation without requiring a `host` (Experimental).
- This feature is still in the experimental stage. Please exercise caution when using it.
- WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
- Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
- [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
- [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
- A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
- For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).
- Synchronize vaults very efficiently with less traffic.
- Good at conflicted modification.
- Automatic merging for simple conflicts.
- Using OSS solution for the server.
- Compatible solutions can be used.
- Supporting End-to-end encryption.
- Synchronisation of settings, snippets, themes, and plug-ins, via [Customization sync(Beta)](#customization-sync) or [Hidden File Sync](#hiddenfilesync)
- WebClip from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
This plug-in might be useful for researchers, engineers, and developers with a need to keep their notes fully self-hosted for security reasons. Or just anyone who would like the peace of mind of knowing that their notes are fully private.
This plug-in may be particularly useful for researchers, engineers, and developers who need to keep their notes fully self-hosted for security reasons. It is also suitable for anyone seeking the peace of mind that comes with knowing their notes remain entirely private.
>[!IMPORTANT]
> - Before installing or upgrading this plug-in, please back your vault up.
> - Do not enable this plugin with another synchronization solution at the same time (including iCloud and Obsidian Sync).
> - This is a synchronization plugin. Not a backup solution. Do not rely on this for backup.
> - Before installing or upgrading this plug-in, please back up your vault.
> - Do not enable this plug-in alongside another synchronisation solution at the same time (including iCloud and Obsidian Sync).
> - For backups, we also provide a plug-in called [Differential ZIP Backup](https://github.com/vrtmrz/diffzip).
## How to use
@@ -43,9 +52,11 @@ This plug-in might be useful for researchers, engineers, and developers with a n
1. [Setup CouchDB on fly.io](docs/setup_flyio.md)
2. [Setup your CouchDB](docs/setup_own_server.md)
2. Configure plug-in in [Quick Setup](docs/quick_setup.md)
> [!TIP]
> Now, fly.io has become not free. Fortunately, even though there are some issues, we are still able to use IBM Cloudant. Here is [Setup IBM Cloudant](docs/setup_cloudant.md). It will be updated soon!
> Fly.io is no longer free. Fortunately, despite some issues, we can still use IBM Cloudant. Refer to [Setup IBM Cloudant](docs/setup_cloudant.md).
> We can also use peer-to-peer synchronisation without a server, or very cheap object storage; Cloudflare R2 can be used for free.
> HOWEVER, most importantly, we can use the server that we trust. Therefore, please set up your own server.
> CouchDB can be run on a Raspberry Pi. (But please be careful about the security of your server).
## Information in StatusBar
@@ -75,20 +86,20 @@ Synchronization status is shown in the status bar with the following icons.
To prevent file and database corruption, please wait before closing Obsidian until all progress indicators have disappeared, as far as possible (the plugin will also try to resume). This is especially important if you have deleted or renamed files.
## Tips and Troubleshooting
If you are having problems getting the plugin working see: [Tips and Troubleshooting](docs/troubleshooting.md)
If you are having problems getting the plugin working see: [Tips and Troubleshooting](docs/troubleshooting.md).
## Acknowledgements
The project has been in continual progress and harmony because of
- Many [Contributors](https://github.com/vrtmrz/obsidian-livesync/graphs/contributors)
- Many [GitHub Sponsors](https://github.com/sponsors/vrtmrz#sponsors)
- JetBrains Community Programs / Support for Open-Source Projects <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.png" alt="JetBrains logo." height="24">
The project has been in continual progress and harmony thanks to:
- Many [Contributors](https://github.com/vrtmrz/obsidian-livesync/graphs/contributors).
- Many [GitHub Sponsors](https://github.com/sponsors/vrtmrz#sponsors).
- JetBrains Community Programs / Support for Open-Source Projects. <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.png" alt="JetBrains logo" height="24">
May those who have contributed be honoured and remembered for their kindness and generosity.
## Development Guide
Please refer to [Development Guide](devs.md) for development setup, testing infrastructure, code conventions, and more.
## License
Licensed under the MIT License.


@@ -1,9 +1,12 @@
# Self-hosted LiveSync
Self-hosted LiveSync is a community-implemented synchronisation plugin.
It uses a self-hosted or purchased CouchDB as the relay server, and is compatible with all platforms that support Obsidian.
It leverages robust server solutions such as CouchDB or object storage systems (e.g., MinIO, S3, R2, etc.) to ensure reliable data synchronisation. Compatible with all platforms that support Obsidian.
Note: this plugin is not compatible with the official "Obsidian Sync" service.
Additionally, it now supports peer-to-peer synchronisation via WebRTC (experimental), enabling you to synchronise notes directly between devices without relying on a server.
>[!IMPORTANT]
> This plug-in is not compatible with the official "Obsidian Sync" service.
![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
@@ -11,119 +14,94 @@ Self-hosted LiveSync is a community-implemented synchronisation plugin
## Features
- Visual conflict resolver
- Near-real-time, bidirectional synchronisation across multiple devices
- Works with CouchDB and compatible services such as IBM Cloudant
- Supports end-to-end encryption
- Plugin synchronisation (Beta)
- Receive WebClips from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf) (end-to-end encryption does not apply to this feature)
- Synchronise vaults efficiently with minimal traffic
- Handle conflicting modifications effectively.
- Automatically merge simple conflicts.
- Use open-source solutions for the server
- Compatible solutions are supported.
- Support end-to-end encryption
- Synchronise settings, snippets, themes, and plug-ins via [Customisation Sync (Beta)](docs/settings.md#6-customization-sync-advanced) or [Hidden File Sync](docs/settings.md#7-hidden-files-advanced).
- Enable WebRTC peer-to-peer synchronisation without requiring a `host` (experimental).
  - This feature is still experimental. Please exercise caution when using it.
  - WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
  - Instead of keeping one of your devices online as a stable peer, you can use two pseudo-peers:
    - [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): a pseudo-client running on the server for receiving and sending data between devices.
    - [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): a pseudo-client for receiving and sending data between devices.
      - A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
  - For more information, see the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).
Suitable for researchers, engineers, or developers who need to keep their notes fully self-hosted for security reasons, and for anyone who enjoys the peace of mind that comes with fully private notes.
This plugin is suitable for researchers, engineers, or developers who need to keep their notes fully self-hosted for security reasons, and for anyone who enjoys the peace of mind that comes with fully private notes.
## Important Notices
- Do not use this plugin together with other synchronisation solutions (including iCloud and Obsidian Sync). Before enabling this plugin, make sure all other synchronisation methods are disabled to avoid content corruption or duplication. If you want to synchronise to multiple services, do so one at a time and never enable two synchronisation methods at once.
  This also means you must not place your vault inside a cloud-synced folder (e.g., an iCloud or Dropbox folder).
- This is a synchronisation plugin, not a backup solution. Do not rely on it for backups.
- If your device runs out of storage space, database corruption may occur.
- Hidden files and other invisible files are not stored in the database and therefore will not be synchronised (**and may be deleted**).
>[!IMPORTANT]
> - Before installing or upgrading this plugin, please back up your vault.
> - Do not enable this plugin alongside other synchronisation solutions (including iCloud and Obsidian Sync).
> - For backups, we also provide a plugin called [Differential ZIP Backup](https://github.com/vrtmrz/diffzip).
## How to Use
### Prepare Your Database
### Done in Three Minutes: Set Up CouchDB on fly.io
First, prepare your database. IBM Cloudant is the preferred choice for testing. Alternatively, you can install CouchDB on your own server. For more information, see the following:
1. [Setup IBM Cloudant](docs/setup_cloudant.md)
2. [Setup your CouchDB](docs/setup_own_server_cn.md)
**Recommended for beginners setting this up for the first time.**
[![LiveSync Setup onto Fly.io SpeedRun 2024 using Google Colab](https://img.youtube.com/vi/7sa_I1832Xc/0.jpg)](https://www.youtube.com/watch?v=7sa_I1832Xc)
Note: More setup methods are being collected! Currently under discussion: [using fly.io](https://github.com/vrtmrz/obsidian-livesync/discussions/85)
1. [Setup CouchDB on fly.io](docs/setup_flyio.md)
2. Configure the plugin in [Quick Setup](docs/quick_setup.md).
### First Device
### Manual Setup
1. Install the plugin on your device.
2. Configure the remote database information.
   1. Enter your server information in the `Remote Database configuration` settings pane.
   2. Enabling `End to End Encryption` is recommended. After entering a passphrase, click "Apply".
   3. Click `Test Database Connection` and make sure the plugin shows `Connected to (your database name)`.
   4. Click `Check database configuration` and make sure all tests pass.
3. Configure when to synchronise in the `Sync Settings` tab. (You can also set this up later.)
   1. If you want real-time synchronisation, enable `LiveSync`.
   2. Otherwise, set up synchronisation to suit your needs. By default, no automatic synchronisation is enabled, which means you need to trigger the synchronisation process manually.
   3. Other settings are here as well. Enabling `Use Trash for deleted files` is recommended, but you can also leave all settings unchanged.
4. Configure miscellaneous features.
   1. Enabling `Show status inside editor` displays the status in the top-right corner of the editor. (Recommended.)
5. Return to the editor and wait for the initial scan to complete.
6. When the status stops changing and the ⏹️ icon shows COMPLETED (no ⏳ or 🧩 icons), you are ready to synchronise with the server.
7. Press the replicate icon on the ribbon, or run `Replicate now` from the command palette. This sends all your data to the server.
8. Open the command palette, run `Copy setup URI`, and set a passphrase. This exports your configuration to the clipboard as a link for importing on other devices.
1. Set up the server
   1. [Setup CouchDB on fly.io](docs/setup_flyio.md)
   2. [Setup your own CouchDB](docs/setup_own_server.md)
2. Configure the plugin in [Quick Setup](docs/quick_setup.md)
> [!TIP]
> Fly.io is no longer free. However, despite some issues, we can still use IBM Cloudant. See [Setup IBM Cloudant](docs/setup_cloudant.md).
> We can also use peer-to-peer synchronisation without setting up a server, or choose very cheap object storage; Cloudflare R2 can be used for free.
> Most importantly, however, we can use a server that we trust. Therefore, setting up your own server is recommended.
> CouchDB can run on a Raspberry Pi. (But please pay close attention to the security of your server.)
**Important: do not share this link publicly; it contains all your credentials!** (Even so, others cannot read it without the passphrase.)
### Subsequent Devices
Note: if you synchronise with a non-empty vault, the modification dates and times of the files must match each other. Otherwise, extra transfers may occur, or files may be corrupted.
For simplicity, we strongly recommend synchronising to a completely empty vault.
## Information in the Status Bar
1. Install the plugin.
2. Open the link you exported from the first device.
3. The plugin will ask whether you are sure you want to apply the configuration. Answer `Yes` and follow the instructions below:
   1. Answer `Yes` to `Keep local DB?`.
      *Note: if you want to keep your existing local vault, you must answer `No` to this question and `No` to `Rebuild the database?`.*
   2. Answer `Yes` to `Keep remote DB?`.
   3. Answer `Yes` to `Replicate once?`.
   Once done, all your settings will be imported from the first device.
4. Your notes should synchronise shortly.
## Files Look Corrupted...
Open the setup link again and answer as follows:
- If your local database looks corrupted (your local Obsidian files look strange)
  - Answer `No` to `Keep local DB?`
- If your remote database looks corrupted (replication was interrupted)
  - Answer `No` to `Keep remote DB?`
If you answer `No` to both, your database will be rebuilt from the contents on your device, and the remote database will lock out the other devices; you will have to synchronise all devices again. (At that point, almost all files will be synchronised with timestamps, so you can safely use your existing vault.)
## Test Server
Setting up Cloudant or a local CouchDB instance is a little complicated, so I have set up a [self-hosted-livesync trial server](https://olstaste.vrtmrz.net/). Feel free to try it for free!
Note: please read the "Limitations" section carefully. Do not send your private vault.
## Status Bar Information
The synchronisation status is shown in the status bar.
The synchronisation status is shown in the status bar with the following icons.
- Activity indicator
  - 📲 Network request
- Status
  - ⏹️ Ready
  - 💤 LiveSync is enabled and waiting for changes
  - ⚡️ Synchronising
  - ⚠ An error has occurred.
- ↑ Number of uploaded chunks and metadata
- ↓ Number of downloaded chunks and metadata
- ⏳ Number of pending processes
- 🧩 Number of files waiting for chunks
If you have deleted or renamed files, wait for the ⏳ icon to disappear.
  - ⏹️ Stopped
  - 💤 LiveSync is enabled and waiting for changes
  - ⚡️ Synchronising
  - ⚠ An error has occurred
- Statistics
  - ↑ Uploaded chunks and metadata
  - ↓ Downloaded chunks and metadata
- Progress indicators
  - 📥 Unprocessed transferred items
  - 📄 Database operations in progress
  - 💾 Storage write processes in progress
  - ⏳ Storage read processes in progress
  - 🛫 Pending storage read processes
  - 📬 Batched storage read processes
  - ⚙️ Hidden-file storage processes in progress or pending
  - 🧩 Chunks awaiting processing
  - 🔌 Customisation items (settings, snippets, and plugins) in progress
To avoid file and database corruption, please wait for all progress indicators to disappear, as far as possible, before closing Obsidian (the plugin will also try to resume). This is especially important if you have deleted or renamed files.
## Tips
- If a folder becomes empty after replication, it is deleted by default. You can turn this behaviour off; see [Settings](docs/settings.md).
- LiveSync mode may increase battery consumption on mobile devices. Periodic sync combined with conditional automatic sync is recommended.
- Obsidian on mobile platforms cannot connect to non-secure (HTTP) or locally signed servers, even if the root certificate is installed on the device.
- There is no configuration like "exclude_folders".
- During synchronisation, files are compared by modification time, and the older one is overwritten by the newer one. The plugin then checks for conflicts and opens a dialogue if a merge is needed.
- In rare cases, files in the database may become corrupted. When a received file looks corrupted, the plugin will not write it to local storage. If a local version of the file exists on your device, you can overwrite the corrupted version by editing the local file and synchronising it. However, if the file does not exist on any of your devices, it cannot be salvaged. In that case, you can delete these corrupted files from the settings dialogue.
- To block the plugin's startup process (e.g., to fix database issues), you can create a "redflag.md" file in the root of your vault.
- Q: The database is growing; how can I shrink it?
  A: Each document keeps its last 100 revisions for detecting and resolving conflicts. Imagine a device that has been offline for a while coming back online. The device has to compare its notes with those stored remotely. If a historical revision that was once shared still exists, the file can be safely updated directly (just like git's fast-forward). Even if the file is not in the revision history, we only have to check the differences after the last revision the two devices have in common, much like git's conflict resolution. So, to fundamentally solve the problem of an oversized database, we would have to redesign the database as if building an enlarged git repository.
- More technical information is in [Technical Information](docs/tech_info.md).
- If you want to synchronise files without Obsidian, you can use [filesystem-livesync](https://github.com/vrtmrz/filesystem-livesync).
- A WebClipper is also available on the Chrome Web Store: [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
## Tips and Troubleshooting
If you run into problems getting the plugin working, see: [Tips and Troubleshooting](docs/troubleshooting.md).
Repository: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip) (documentation under construction)
## Acknowledgements
The project has been in continual progress and harmony thanks to:
- Many [Contributors](https://github.com/vrtmrz/obsidian-livesync/graphs/contributors).
- Many [GitHub Sponsors](https://github.com/sponsors/vrtmrz#sponsors).
- JetBrains Community Programs / Support for Open-Source Projects. <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.png" alt="JetBrains logo" height="24">
## License
May all who have contributed be honoured and remembered for their kindness and generosity.
The source code is licensed under the MIT License.
The source code is licensed under the MIT License.
## License
This project is licensed under the MIT License.

devs.md Normal file

@@ -0,0 +1,159 @@
# Self-hosted LiveSync Development Guide
## Project Overview
Self-hosted LiveSync is an Obsidian plugin for synchronising vaults across devices using CouchDB, MinIO/S3, or peer-to-peer WebRTC. The codebase uses a modular architecture with TypeScript, Svelte, and PouchDB.
## Architecture
### Module System
The plugin uses a dynamic module system to reduce coupling and improve maintainability:
- **Service Hub**: Central registry for services using dependency injection
  - Services are registered and accessed via `this.services` (in most modules)
- **Module Loading**: All modules extend `AbstractModule` or `AbstractObsidianModule` (which extends `AbstractModule`). These modules are loaded in `main.ts` and, in some cases, by other modules
- **Module Categories** (by directory):
- `core/` - Platform-independent core functionality
- `coreObsidian/` - Obsidian-specific core (e.g., `ModuleFileAccessObsidian`)
- `essential/` - Required modules (e.g., `ModuleMigration`, `ModuleKeyValueDB`)
- `features/` - Optional features (e.g., `ModuleLog`, `ModuleObsidianSettings`)
- `extras/` - Development/testing tools (e.g., `ModuleDev`, `ModuleIntegratedTest`)
### Key Architectural Components
- **LiveSyncLocalDB** (`src/lib/src/pouchdb/`): Local PouchDB database wrapper
- **Replicators** (`src/lib/src/replication/`): CouchDB, Journal, and MinIO sync engines
- **Service Hub** (`src/modules/services/`): Central service registry using dependency injection
- **Common Library** (`src/lib/`): Platform-independent sync logic, shared with other tools
### File Structure Conventions
- **Platform-specific code**: Use `.platform.ts` suffix (replaced with `.obsidian.ts` in production builds via esbuild)
- **Development code**: Use `.dev.ts` suffix (replaced with `.prod.ts` in production)
- **Path aliases**: `@/*` maps to `src/*`, `@lib/*` maps to `src/lib/src/*`
## Build & Development Workflow
### Commands
```bash
npm run check # TypeScript and svelte type checking
npm run dev # Development build with auto-rebuild (uses .env for test vault paths)
npm run build # Production build
npm run buildDev # Development build (one-time)
npm run bakei18n # Pre-build step: compile i18n resources (YAML → JSON → TS)
npm test # Run vitest tests (requires Docker services)
```
### Environment Setup
- Create `.env` file with `PATHS_TEST_INSTALL` pointing to test vault plug-in directories (`:` separated on Unix, `;` on Windows)
- Development builds auto-copy to these paths on build
### Testing Infrastructure
- **Deno Tests**: Unit tests for platform-independent code (e.g., `HashManager.test.ts`)
- **Vitest** (`vitest.config.ts`): E2E tests using a browser-based harness driven by Playwright
- **Docker Services**: Tests require CouchDB, MinIO (S3), and P2P services:
```bash
npm run test:docker-all:start # Start all test services
npm run test:full # Run tests with coverage
npm run test:docker-all:stop # Stop services
```
  If some services are not needed, start only the required ones (e.g., `test:docker-couchdb:start`).
  Note that the start scripts will fail if the services are already running; please stop them first.
- **Test Structure**:
- `test/suite/` - Integration tests for sync operations
- `test/unit/` - Unit tests (via vitest, as harness is browser-based)
- `test/harness/` - Mock implementations (e.g., `obsidian-mock.ts`)
## Code Conventions
### Internationalisation (i18n)
- **Translation Workflow**:
1. Edit YAML files in `src/lib/src/common/messagesYAML/` (human-editable)
2. Run `npm run bakei18n` to compile: YAML → JSON → TypeScript constants
3. Use `$t()`, `$msg()` functions for translations
You can also use `$f` for formatted messages with Tagged Template Literals.
- **Usage**:
```typescript
$msg("dialog.someKey"); // Typed key with autocomplete
$t("Some message"); // Direct translation
$f`Hello, ${userName}`; // Formatted message
```
- **Supported languages**: `def` (English), `de`, `es`, `ja`, `ko`, `ru`, `zh`, `zh-tw`
### File Path Handling
- Use tagged types from `types.ts`: `FilePath`, `FilePathWithPrefix`, `DocumentID`
- Prefix constants: `CHeader` (chunks), `ICHeader`/`ICHeaderEnd` (internal data)
- Path utilities in `src/lib/src/string_and_binary/path.ts`: `addPrefix()`, `stripAllPrefixes()`, `shouldBeIgnored()`
### Logging & Debugging
- Use `this._log(msg, LOG_LEVEL_INFO)` in modules (automatically prefixes with module name)
- Log levels: `LOG_LEVEL_DEBUG`, `LOG_LEVEL_VERBOSE`, `LOG_LEVEL_INFO`, `LOG_LEVEL_NOTICE`, `LOG_LEVEL_URGENT`
- LOG_LEVEL_NOTICE and above are reported to the user via Obsidian notices
- LOG_LEVEL_DEBUG is for debug only and not shown in default builds
- Dev mode creates `ls-debug/` folder in `.obsidian/` for debug outputs (e.g., missing translations)
- This causes a fairly significant performance overhead.
## Common Patterns
### Module Implementation
```typescript
export class ModuleExample extends AbstractObsidianModule {
    async _everyOnloadStart(): Promise<boolean> {
        /* ... */
        return true; // Promise<boolean> requires a result; true here as a placeholder
    }
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
}
}
```
### Settings Management
- Settings defined in `src/lib/src/common/types.ts` (`ObsidianLiveSyncSettings`)
- Configuration metadata in `src/lib/src/common/settingConstants.ts`
- Use `this.services.setting.saveSettingData()` instead of using plugin methods directly
### Database Operations
- Local database operations through `LiveSyncLocalDB` (wraps PouchDB)
- Document types: `EntryDoc` (files), `EntryLeaf` (chunks), `PluginDataEntry` (plugin sync)
## Important Files
- [main.ts](src/main.ts) - Plugin entry point, module registration
- [esbuild.config.mjs](esbuild.config.mjs) - Build configuration with platform/dev file replacement
- [package.json](package.json) - Scripts reference and dependencies
## Beta Policy
- Beta versions are denoted by appending `-patched-N` to the base version number.
- `The base version` mostly corresponds to the stable release version.
- e.g., v0.25.41-patched-1 is equivalent to v0.25.42-beta1.
- This notation is due to SemVer incompatibility of Obsidian's plugin system.
- Hence, this release is `0.25.41-patched-1`.
- Each beta version may include larger changes, but bug fixes will often not be included.
- I think that, in most cases, bug fixes will result in stable releases.
- They will not be released per branch or backported; they will simply be released.
- Bug fixes for previous versions will be applied to the latest beta version.
This means that if xx.yy.02-patched-1 exists and there is a defect in xx.yy.01, the fix is applied to xx.yy.02-patched-1, yielding xx.yy.02-patched-2.
If the fix is required immediately, it is released as xx.yy.02 (with xx.yy.01-patched-1).
- This procedure remains unchanged from the current one.
- At the very least, I am using the latest beta.
- However, I will not be using a beta continuously for a week after it has been released. It is probably closer to an RC in nature.
In short, the situation remains unchanged for me, but it means you all become a little safer. Thank you for your understanding!
## Contribution Guidelines
- Follow existing code style and conventions
- Please bump dependencies with care: check the artifacts after updates with diff tools, and make sure that only the expected changes appear in the build output (to avoid unexpected vulnerabilities).
- When adding new features, please ensure an OSS implementation exists, and avoid using proprietary services or APIs that may limit usage.
  - For example, any functionality that connects to a new type of server is expected either to have an OSS implementation available for that server, or to be managed under some responsibilities and/or limitations without disrupting existing functionality, with the scope for surveillance reduced by some means (e.g., client-side encryption, or auditing the server ourselves).

docs/all_toggles.png Normal file (binary, 7.3 KiB)

docs/datastructure.md Normal file

@@ -0,0 +1,168 @@
# Data Structures of Self-Hosted LiveSync
## Overview
Self-hosted LiveSync uses the following types of documents:
- Metadata
- Legacy Metadata
- Binary Metadata
- Plain Metadata
- Chunk
- Versioning
- Synchronise Information
- Synchronise Parameters
- Milestone Information
## Description of Each Data Structure
All documents inherit from the `DatabaseEntry` interface. This is necessary for conflict resolution and deletion flags.
```ts
export interface DatabaseEntry {
_id: DocumentID;
_rev?: string;
_deleted?: boolean;
}
```
### Versioning Document
This document stores version information for Self-hosted LiveSync.
The ID is fixed as `obsydian_livesync_version` [VERSIONING_DOCID]. Yes, the typo has become a curse.
When Self-hosted LiveSync detects changes to this document via Replication, it reads the version information and checks compatibility.
In that case, if there are major changes, synchronisation may be stopped.
Please refer to negotiation.ts.
### Synchronise Information Document
This document stores information that should be verified in synchronisation settings.
The ID is fixed as `syncinfo` [SYNCINFO_ID].
The information stored in this document is only the conditions necessary for synchronisation to succeed, and as of v0.25.43, only a random string is stored.
This document is only used during rebuilds from the settings screen for CouchDB-based synchronisation, making it like an appendix. It may be removed in the future.
### Synchronise Parameters Document
This document stores synchronisation parameters.
Synchronisation parameters include the protocol version and salt used for encryption, but do not include chunking settings.
The ID is fixed as `_local/obsidian_livesync_sync_parameters` [DOCID_SYNC_PARAMETERS] or `_obsidian_livesync_journal_sync_parameters.json` [DOCID_JOURNAL_SYNC_PARAMETERS].
This document exists only on the remote and not locally.
This document stores the following information.
It is read each time before connecting and is used to verify that E2EE settings match.
This mismatch cannot be ignored and synchronisation will be stopped.
```ts
export interface SyncParameters extends DatabaseEntry {
_id: typeof DOCID_SYNC_PARAMETERS;
type: (typeof EntryTypes)["SYNC_PARAMETERS"];
protocolVersion: ProtocolVersion;
pbkdf2salt: string;
}
```
#### protocolVersion
This field indicates the protocol version used by the remote. Mostly, this value should be `2` (ProtocolVersions.ADVANCED_E2EE), which indicates safer E2EE support.
#### pbkdf2salt
This field stores the salt used for PBKDF2 key derivation on the remote. This salt and the passphrase provide the E2EE encryption keys.
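As a rough illustration, a key could be derived from these two values with the Web Crypto API. This is a minimal sketch only: the iteration count, hash, and key parameters below are assumptions for illustration, not the plugin's actual values (those live in encryption.ts).
```ts
// A sketch only: derive an AES-GCM key from the passphrase and pbkdf2salt.
// Iteration count, hash, and key length are illustrative assumptions.
async function deriveE2EEKey(passphrase: string, pbkdf2salt: Uint8Array): Promise<CryptoKey> {
    const baseKey = await crypto.subtle.importKey(
        "raw",
        new TextEncoder().encode(passphrase),
        "PBKDF2",
        false,
        ["deriveKey"]
    );
    return await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt: pbkdf2salt, iterations: 100_000, hash: "SHA-256" },
        baseKey,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"]
    );
}
```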
### Milestone Information Document
This document stores information about how the remote accepts and recognises clients.
The ID is fixed as `_local/obsidian_livesync_milestone` [MILESTONE_DOCID].
This document exists only on the remote and not locally.
This document is used to indicate synchronisation progress, and includes, for each node, the accepted chunk version range and the tweak (adjustment) values.
Tweak Mismatched is determined based on the information in this document.
For details, please refer to LiveSyncReplicator.ts, LiveSyncJournalReplicator.ts, and LiveSyncDBFunctions.ts.
```ts
export interface EntryMilestoneInfo extends DatabaseEntry {
_id: typeof MILESTONE_DOCID;
type: EntryTypes["MILESTONE_INFO"];
created: number;
accepted_nodes: string[];
node_info: { [key: NodeKey]: NodeData };
locked: boolean;
cleaned?: boolean;
node_chunk_info: { [key: NodeKey]: ChunkVersionRange };
tweak_values: { [key: NodeKey]: TweakValues };
}
```
#### locked
If the remote has been requested to lock out from any client, this is set to true.
When set to true, clients will stop synchronisation unless they are included in accepted_nodes.
#### cleaned
If the remote has been cleaned up from any client, this is set to true.
In this case, clients will stop synchronisation as they need to rebuild again.
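Taken together, the gating logic implied by these flags looks roughly like the following sketch. The field names come from `EntryMilestoneInfo` above; the function itself is illustrative, not the plugin's actual code.
```ts
// A sketch of the client-side gate implied by `locked`/`cleaned`;
// `nodeId` is this client's node key.
function canSynchronise(milestone: { locked: boolean; cleaned?: boolean; accepted_nodes: string[] }, nodeId: string): boolean {
    if (milestone.cleaned) return false; // remote has been cleaned: a rebuild is needed
    if (milestone.locked && !milestone.accepted_nodes.includes(nodeId)) return false; // locked out
    return true;
}
```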
### Metadata Document
Metadata documents store metadata for Obsidian notes.
```ts
export interface MetadataDocument extends DatabaseEntry {
_id: DocumentID;
ctime: number;
mtime: number;
size: number;
deleted?: boolean;
eden: Record<string, EdenChunk>; // Obsolete
path: FilePathWithPrefix;
children: string[];
type: EntryTypes["NOTE_LEGACY" | "NOTE_BINARY" | "NOTE_PLAIN"];
}
```
#### type
This field indicates the type of Metadata document.
Note that, by convention, Self-hosted LiveSync does not save the file's MIME type; it distinguishes file kinds with this field.
Possible values are as follows:
- NOTE_LEGACY: Legacy metadata document
- Please do not use
- NOTE_BINARY: Binary metadata document (newnote)
- NOTE_PLAIN: Plain metadata document (plain)
#### children
This field stores an array of Chunk Document IDs.
#### \_id, path
\_id is generated based on the path of the Obsidian note.
- If the path starts with `_`, it is converted to `/_` for convenience.
- If Case Sensitive is disabled, it is converted to lowercase.
The path field stores the path as-is. However, when Obfuscation is enabled, the path field contains `f:{obfuscated path}` instead.
When Property Encryption is enabled, the path field stores all properties including children, mtime, ctime, and size in an encrypted state. Please refer to encryption.ts.
### Chunk Document
```ts
export type EntryLeaf = DatabaseEntry & {
_id: DocumentID;
type: EntryTypes["CHUNK"];
data: string;
};
```
Chunk documents store parts of note content.
- The type field is always `leaf` (`EntryTypes["CHUNK"]`).
- The data field stores the chunk content.
- The \_id field is generated based on a hash of the content and the passphrase.
Hash functions used include xxHash and SHA-1, depending on settings.
Chunking methods used include Contextual Chunking and Rabin-Karp Chunking, depending on settings.
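For illustration, a content-addressed `_id` in the SHA-1 configuration could be produced along these lines. This is a sketch under assumptions: the exact way the passphrase is mixed in, the encoding, and the xxHash variant differ in the actual implementation.
```ts
// A sketch only: SHA-1 over passphrase + content, hex-encoded, `h:`-prefixed.
async function chunkId(data: string, passphrase: string): Promise<string> {
    const bytes = new TextEncoder().encode(passphrase + data);
    const digest = await crypto.subtle.digest("SHA-1", bytes);
    const hex = [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
    return `h:${hex}`;
}
```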


@@ -0,0 +1,122 @@
# [WITHDRAWN] Chunk Aggregation by Prefix
## Goal
To address the "document explosion" and storage bloat issues caused by the current chunking mechanism, while preserving the benefits of content-addressable storage and efficient delta synchronisation. This design aims to significantly reduce the number of documents in the database and simplify Garbage Collection (GC).
## Motivation
Our current synchronisation solution splits files into content-defined chunks, with each chunk stored as a separate document in CouchDB, identified by its hash. This architecture effectively leverages CouchDB's replication for automatic deduplication and efficient transfer.
However, this approach faces significant challenges as the number of files and edits increases:
1. **Document Explosion:** A large vault can generate millions of chunk documents, severely degrading CouchDB's performance, particularly during view building and replication.
2. **Storage Bloat & GC Difficulty:** Obsolete chunks generated during edits are difficult to identify and remove. Since CouchDB's deletion (`_deleted: true`) is a soft delete, and compaction is a heavy, space-intensive operation, unused chunks perpetually consume storage, making GC impractical for many users.
3. **The "Eden" Problem:** A previous attempt, "Keep newborn chunks in Eden", aimed to mitigate this by embedding volatile chunks within the parent document. While it reduced the number of standalone chunks, it introduced a new issue: the parent document's history (`_revs_info`) became excessively large, causing its own form of database bloat and making compaction equally necessary but difficult to manage.
This new design addresses the root cause—the sheer number of documents—by aggregating chunks into sets.
## Prerequisites
- The new implementation must maintain the core benefit of deduplication to ensure efficient synchronisation.
- The solution must not introduce a single point of bottleneck and should handle concurrent writes from multiple clients gracefully.
- The system must provide a clear and feasible strategy for Garbage Collection.
- The design should be forward-compatible, allowing for a smooth migration path for existing users.
## Outlined Methods and Implementation Plans
### Abstract
This design introduces a two-tiered document structure to manage chunks: **Index Documents** and **Data Documents**. Chunks are no longer stored as individual documents. Instead, they are grouped into `Data Documents` based on a common hash prefix. The existence and location of each chunk are tracked by `Index Documents`, which are also grouped by the same prefix. This approach dramatically reduces the total document count.
### Detailed Implementation
**1. Document Structure:**
- **Index Document:** Maps chunk hashes to their corresponding Data Document ID. Identified by a prefix of the chunk hash.
- `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
- Content:
```json
{
"_id": "idx:a9f1b",
"_rev": "...",
"chunks": {
"a9f1b12...": "dat:a9f1b-001",
"a9f1b34...": "dat:a9f1b-001",
"a9f1b56...": "dat:a9f1b-002"
}
}
```
- **Data Document:** Contains the actual chunk data as base64-encoded strings. Identified by a prefix and a sequential number.
- `_id`: `dat:{prefix}-{sequence}` (e.g., `dat:a9f1b-001`)
- Content:
```json
{
"_id": "dat:a9f1b-001",
"_rev": "...",
"chunks": {
"a9f1b12...": "...", // base64 data
"a9f1b34...": "..." // base64 data
}
}
```
**2. Configuration:**
- `chunk_prefix_length`: The number of characters from the start of a chunk hash to use as a prefix (e.g., `5`). This determines the granularity of aggregation.
- `data_doc_size_limit`: The maximum size for a single Data Document to prevent it from becoming too large (e.g., 1MB). When this limit is reached, a new Data Document with an incremented sequence number is created.
**3. Write/Save Operation Flow:**
When a client creates new chunks:
1. For each new chunk, determine its hash prefix.
2. Read the corresponding `Index Document` (e.g., `idx:a9f1b`).
3. From the index, determine which of the new chunks already exist in the database.
4. For the **truly new chunks only**:
a. Read the last `Data Document` for that prefix (e.g., `dat:a9f1b-005`).
b. If it is nearing its size limit, create a new one (`dat:a9f1b-006`).
c. Add the new chunk data to the Data Document and save it.
5. Update the `Index Document` with the locations of the newly added chunks.
**4. Handling Write Conflicts:**
Concurrent writes to the same `Index Document` or `Data Document` from multiple clients will cause conflicts (409 Conflict). This is expected and must be handled gracefully. Since additions are incremental, the client application must implement a **retry-and-merge loop**:
1. Attempt to save the document.
2. On a conflict, re-fetch the latest version of the document from the server.
3. Merge its own changes into the latest version.
4. Attempt to save again.
5. Repeat until successful or a retry limit is reached.
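A minimal sketch of this loop, assuming the PouchDB typings; the document shape mirrors the Index Document above, and the retry limit of 10 is an illustrative choice:
```ts
import PouchDB from "pouchdb";

type IndexDoc = { _id: string; _rev?: string; chunks: Record<string, string> };

async function upsertIndex(db: PouchDB.Database<IndexDoc>, id: string, additions: Record<string, string>): Promise<void> {
    for (let attempt = 0; attempt < 10; attempt++) {
        // Step 2: fetch the latest revision (on the first pass this is just the current state).
        const doc: IndexDoc = await db.get(id).catch(() => ({ _id: id, chunks: {} }));
        // Step 3: merge our incremental additions into the latest version.
        doc.chunks = { ...doc.chunks, ...additions };
        try {
            await db.put(doc); // Steps 1 and 4: attempt to save.
            return;
        } catch (e: any) {
            if (e?.status !== 409) throw e; // Step 5: only conflicts are retried.
        }
    }
    throw new Error(`Conflict retry limit reached for ${id}`);
}
```
Because the additions are purely incremental, a plain property merge is sufficient; no tombstones or ordering logic are required.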
**5. Garbage Collection (GC):**
GC becomes a manageable, periodic batch process:
1. Scan all file metadata documents to build a master set of all *currently referenced* chunk hashes.
2. Iterate through all `Index Documents`. For each chunk listed:
a. If the chunk hash is not in the master reference set, it is garbage.
b. Remove the garbage entry from the `Index Document`.
c. Remove the corresponding data from its `Data Document`.
3. If a `Data Document` becomes empty after this process, it can be deleted.
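Steps 2 and 3 for a single prefix might look like this sketch (same PouchDB-style assumptions as above; the conflict handling from the retry-and-merge loop is omitted for brevity):
```ts
type PackDoc = { _id: string; _rev: string; chunks: Record<string, string> };

async function sweepPrefix(db: PouchDB.Database<PackDoc>, prefix: string, referenced: Set<string>): Promise<void> {
    const idx = await db.get(`idx:${prefix}`);
    for (const [hash, packId] of Object.entries(idx.chunks)) {
        if (referenced.has(hash)) continue; // still in use: keep it
        const pack = await db.get(packId);
        delete pack.chunks[hash]; // 2c. remove the data from its Data Document
        if (Object.keys(pack.chunks).length === 0) {
            await db.remove(pack._id, pack._rev); // 3. empty Data Documents are deleted
        } else {
            await db.put(pack);
        }
        delete idx.chunks[hash]; // 2b. remove the garbage entry from the index
    }
    await db.put(idx);
}
```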
## Test Strategy
1. **Unit Tests:** Implement tests for the conflict resolution logic (retry-and-merge loop) to ensure robustness.
2. **Integration Tests:**
- Verify that concurrent writes from multiple simulated clients result in a consistent, merged state without data loss.
- Run a full synchronisation scenario and confirm the resulting database has a significantly lower document count compared to the previous implementation.
3. **GC Test:** Simulate a scenario where files are deleted, run the GC process, and verify that orphaned chunks are correctly removed from both Index and Data documents, and that storage is reclaimed after compaction.
4. **Migration Test:** Develop and test a "rebuild" process for existing users, which migrates their chunk data into the new aggregated structure.
## Documentation Strategy
- This design document will be published to explain the new architecture.
- The configuration options (`chunk_prefix_length`, etc.) will be documented for advanced users.
- A guide for the migration/rebuild process will be provided.
## Future Work
The separation of index and data opens up a powerful possibility. While this design initially implements both within CouchDB, the `Data Documents` could be offloaded to a dedicated object storage service such as **S3, MinIO, or Cloudflare R2**.
In such a hybrid model, CouchDB would handle only the lightweight `Index Documents` and file metadata, serving as a high-speed synchronisation and coordination layer. The bulky chunk data would reside in a more cost-effective and scalable blob store. This would represent the ultimate evolution of this architecture, combining the best of both worlds.
## Consideration and Conclusion
This design directly addresses the scalability limitations of the original chunk-per-document model. By aggregating chunks into sets, it significantly reduces the document count, which in turn improves database performance and makes maintenance feasible. The explicit handling of write conflicts and a clear strategy for garbage collection make this a robust and sustainable long-term solution. It effectively resolves the problems identified in previous approaches, including the "Eden" experiment, by tackling the root cause of database bloat. This architecture provides a solid foundation for future growth and scalability.


@@ -0,0 +1,127 @@
# [WIP] The design intent explanation for using metadata and chunks
## Abstract
## Goal
- To explain the following:
- What metadata and chunks are
- The design intent of using metadata and chunks
## Background and Motivation
We are using PouchDB and CouchDB for storing files and synchronising them. PouchDB is a JavaScript database that stores data on the device (browser, and of course, Obsidian), while CouchDB is a NoSQL database that stores data on the server. The two databases can be synchronised to keep data consistent across devices via the CouchDB replication protocol. This is a powerful and flexible way to store and synchronise data, including conflict management, but it is not well suited for files. Therefore, we needed to manage how to store files and synchronise them.
## Terminology
- Password:
- A string used to authenticate the user.
- Passphrase:
- A string used to encrypt and decrypt data.
- This is not a password.
- Encrypt:
- To convert data into a format that is unreadable to anyone.
- Can be decrypted by the user who has the passphrase.
- Should be 1:n, containing random data to ensure that even the same data, when encrypted, results in different outputs.
- Obfuscate:
- To convert data into a format that is not easily readable.
- Can be decrypted by the user who has the passphrase.
- Should be 1:1, containing no random data, and the same data is always obfuscated to the same result. It is not necessarily unreadable.
- Hash:
- To convert data into a fixed-length string that is not easily readable.
- Cannot be decrypted.
- Should be 1:1, containing no random data, and the same data is always hashed to the same result.
## Designs
### Principles
- To synchronise and handle conflicts, we should keep the history of modifications.
- No data should be lost. Even though some extra data may be stored, it should be removed later, safely.
- Each stored data item should be as small as possible to transfer efficiently, but not so small as to be inefficient.
- Any type of file should be supported, including binary files.
- Encryption should be supported efficiently.
- This method should not depart too far from the PouchDB/CouchDB philosophy. It needs to leave room for other `remote`s, to benefit from custom replicators.
As a result, we have adopted the following design.
- Files are stored as one metadata entry and multiple chunks.
- Chunks are content-addressable, and the metadata contains the ids of the chunks.
- Chunks may be referenced from multiple metadata entries. They should be efficiently managed to avoid redundancy.
### Metadata Design
The metadata contains the following information:
| Field | Type | Description | Note |
| -------- | -------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------- |
| _id | string | The id of the metadata | It is created from the file path |
| _rev | string | The revision of the metadata | It is created by PouchDB |
| children | [string] | The ids of the chunks | |
| path | string | The path of the file | If Obfuscate path has been enabled, it has been encrypted |
| size | number | The size of the metadata | Not respected; for troubleshooting |
| ctime | number | The creation timestamp | This is not used to compare files, but it will be used when writing to storage |
| mtime | number | The modification timestamp | This will be used to compare files, and will be written to storage |
| type | `plain` \| `newnote` | The type of the file | Children of type `plain` will not be base64 encoded, while `newnote` will be |
| e_ | boolean | The file is encrypted | Encryption is processed during transfer to the remote. In local storage, this property does not exist |
#### Decision Rule for `_id` of Metadata
```ts
// Note: a runnable sketch of the decision rule (originally pseudo code).
// `obfuscatePath` stands in for the actual obfuscation routine; pass null when
// path obfuscation is disabled.
function decideMetadataId(
    path: string,
    handleFilesAsCaseSensitive: boolean,
    obfuscatePath: ((id: string, passphrase: string) => string) | null,
    e2eePassphrase: string
): string {
    let _id = path;
    if (!handleFilesAsCaseSensitive) {
        _id = _id.toLowerCase();
    }
    if (_id.startsWith("_")) {
        _id = "/" + _id;
    }
    if (obfuscatePath) {
        _id = `f:${obfuscatePath(_id, e2eePassphrase)}`;
    }
    return _id;
}
```
#### Expected Questions
- Why do we need to handle files as case-sensitive?
- Some filesystems are case-sensitive, while others are not. For example, Windows is not case-sensitive, while Linux is. Therefore, we need to handle files as case-sensitive to manage conflicts.
- The trade-off is that you will not be able to manage files with different cases, so this can be disabled if you only have case-sensitive terminals.
- Why obfuscate the path?
- E2EE only encrypts the content of the file, not metadata. Hence, E2EE alone is not enough to protect the vault completely. The path is also part of the metadata, so it should be obfuscated. This is a trade-off between security and performance. However, if you title a note with sensitive information, you should obfuscate the path.
- What is `f:`?
- It is a prefix to indicate that the path is obfuscated. It is used to distinguish between normal paths and obfuscated paths. Due to file enumeration, Self-hosted LiveSync should scan the files to find the metadata, excluding chunks and other information.
- Why does an unobfuscated path not start with `f:`?
- For compatibility. Self-hosted LiveSync, by its nature, must also be able to handle files created with newer versions as far as possible.
### Chunk Design
#### Chunk Structure
The chunk contains the following information:
| Field | Type | Description | Note |
| ----- | ------------ | ------------------------- | ----------------------------------------------------------------------------------------------------- |
| _id | `h:{string}` | The id of the chunk | It is created from the hash of the chunk content |
| _rev | string | The revision of the chunk | It is created by PouchDB |
| data | string | The content of the chunk | |
| type | `leaf` | Fixed | |
| e_ | boolean | The chunk is encrypted | Encryption is processed during transfer to the remote. In local storage, this property does not exist |
**SORRY, TO BE WRITTEN, BUT WE HAVE IMPLEMENTED `v2`, WHICH REQUIRES MORE INFORMATION.**
### How they are unified
## Deduplication and Optimisation
## Synchronisation Strategy
## Performance Considerations
## Security and Privacy
## Edge Cases


@@ -0,0 +1,117 @@
# [IN DESIGN] Tiered Chunk Storage with Live Compaction
**VERY IMPORTANT NOTE: This design must be used with the new journal synchronisation method; the previous Journal Sync is NOT suitable. Otherwise, we risk introducing the bloat of changes from hot-packs into the Bucket. (CouchDB/PouchDB can synchronise only the most recent changes, or resolve conflicts.) Please proceed with caution.**
## Goal
To establish a highly efficient, robust, and scalable synchronisation architecture by introducing a tiered storage system inspired by Log-Structured Merge-Trees (LSM-Trees). This design aims to address the challenges of real-time synchronisation, specifically the massive generation of transient data, while minimising storage bloat and ensuring high performance.
## Motivation
Our previous designs, including "Chunk Aggregation by Prefix", successfully addressed the "document explosion" problem. However, the introduction of real-time editor synchronisation exposed a new, critical challenge: the constant generation of short-lived "garbage" chunks during user input. This "garbage storm" places immense pressure on storage, I/O, and the Garbage Collection (GC) process.
A simple aggregation strategy is insufficient because it treats all data equally, mixing valuable, stable chunks with transient, garbage chunks in permanent storage. This leads to storage bloat and inefficient compaction. We require a system that can intelligently distinguish between "hot" (volatile) and "cold" (stable) data, processing them in the most efficient manner possible.
## Outlined Methods and Implementation Plans
### Abstract
This design implements a two-tiered storage system within CouchDB.
1. **Level 0 Hot Storage:** A set of "Hot-Packs", one for each active client. These act as fast, append-only logs for all newly created chunks. They serve as a temporary staging area, absorbing the "garbage storm" of real-time editing.
2. **Level 1 Cold Storage:** The permanent, immutable storage for stable chunks, consisting of **Index Documents** for fast lookups and **Data Documents (Cold-Packs)** for storing chunk data.
A background "Compaction" process continuously promotes stable chunks from Hot Storage to Cold Storage, while automatically discarding garbage. This keeps the permanent storage clean and highly optimised.
### Detailed Implementation
**1. Document Structure:**
- **Hot-Pack Document (Level 0):** A per-client, append-only log.
- `_id`: `hotpack:{client_id}` (`client_id` could be the same as the `deviceNodeID` used in the `accepted_nodes` in MILESTONE_DOC; enables database 'lockout' for safe synchronisation)
- Content: A log of chunk creation events.
```json
{
"_id": "hotpack:a9f1b12...",
"_rev": "...",
"log": [
{ "hash": "abc...", "data": "...", "ts": ..., "file_id": "file1" },
{ "hash": "def...", "data": "...", "ts": ..., "file_id": "file2" }
]
}
```
- **Index Document (Level 1):** A fast, prefix-based lookup table for stable chunks.
- `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
- Content: Maps a chunk hash to the ID of the Cold-Pack it resides in.
```json
{
"_id": "idx:a9f1b",
"chunks": { "a9f1b12...": "dat:1678886400" }
}
```
- **Cold-Pack Document (Level 1):** An immutable data block created by the compaction process.
- `_id`: `dat:{timestamp_or_uuid}` (e.g., `dat:1678886400123`)
- Content: A collection of stable chunks.
```json
{
"_id": "dat:1678886400123",
"chunks": { "a9f1b12...": "...", "c3d4e5f...": "..." }
}
```
- **Hot-Pack List Document:** A central registry of all active Hot-Packs. This might be a computed document that clients maintain in memory on startup.
- `_id`: `hotpack_list`
- Content: `{"active_clients": ["hotpack:a9f1b12...", "hotpack:c3d4e5f..."]}`
**2. Write/Save Operation Flow (Real-time Editing):**
1. A client generates a new chunk.
2. It **immediately appends** the chunk object (`{hash, data, ts, file_id}`) to its **own** Hot-Pack document's `log` array within its local PouchDB. This operation is extremely fast.
3. The PouchDB synchronisation process replicates this change to the remote CouchDB and other clients in the background. No other Hot-Packs are consulted during this write operation.
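A sketch of step 2, assuming the PouchDB typings and the Hot-Pack shape above (illustrative only, not the plugin's actual code):
```ts
type HotPackDoc = { _id: string; _rev?: string; log: { hash: string; data: string; ts: number; file_id: string }[] };

async function appendToOwnHotPack(db: PouchDB.Database<HotPackDoc>, clientId: string, entry: HotPackDoc["log"][number]): Promise<void> {
    const id = `hotpack:${clientId}`;
    // Only this client writes its own Hot-Pack, so no conflict loop is needed here.
    const doc: HotPackDoc = await db.get(id).catch(() => ({ _id: id, log: [] }));
    doc.log.push(entry); // append-only: an extremely fast local write
    await db.put(doc); // replication ships it to the remote in the background
}
```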
**3. Read/Load Operation Flow:**
To find a chunk's data:
1. The client first consults its in-memory list of active Hot-Pack IDs (see section 5).
2. It searches for the chunk hash in all **Hot-Pack documents**, starting from its own, then others. It reads them in reverse log order (newest first).
3. If not found, it consults the appropriate **Index Document (`idx:...`)** to get the ID of the Cold-Pack.
4. It then reads the chunk data from the corresponding **Cold-Pack document (`dat:...`)**.
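The lookup order, sketched under the same assumptions (the prefix length of 5 is illustrative; it is configurable):
```ts
type HotPack = { log: { hash: string; data: string }[] };
type ChunkMap = { chunks: Record<string, string> };

async function readChunk(db: PouchDB.Database, hotPackIds: string[], hash: string): Promise<string | undefined> {
    // Steps 1-2: search the Hot-Packs, own first, newest log entries first.
    for (const id of hotPackIds) {
        const hot = await db.get<HotPack>(id).catch(() => undefined);
        if (!hot) continue;
        for (let i = hot.log.length - 1; i >= 0; i--) {
            if (hot.log[i].hash === hash) return hot.log[i].data;
        }
    }
    // Step 3: consult the Index Document for this hash prefix.
    const idx = await db.get<ChunkMap>(`idx:${hash.slice(0, 5)}`).catch(() => undefined);
    const packId = idx?.chunks[hash];
    if (!packId) return undefined;
    // Step 4: read the data from the Cold-Pack.
    return (await db.get<ChunkMap>(packId)).chunks[hash];
}
```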
**4. Compaction & Promotion Process (The "GC"):**
This is a background task run periodically by clients, or triggered when the number of unprocessed log entries exceeds a threshold (to maintain the ability to synchronise with the remote database, which has a limited document size).
1. The client takes its own Hot-Pack (`hotpack:{client_id}`) and scans its `log` array from the beginning (oldest first).
2. For each chunk in the log, it checks if the chunk is still referenced in the latest revision of any file.
- **If not referenced (Garbage):** The log entry is simply discarded.
- **If referenced (Stable):** The chunk is added to a "promotion batch".
3. After scanning a certain number of log entries, the client takes the "promotion batch".
4. It creates one or more new, immutable **Cold-Pack (`dat:...`)** documents to store the chunk data from the batch.
5. It updates the corresponding **Index (`idx:...`)** documents to point to the new Cold-Pack(s).
6. Once the promotion is successfully saved to the database, it **removes the processed entries from its Hot-Pack's `log` array**. This is a critical step to prevent reprocessing and keep the Hot-Pack small.
**5. Hot-Pack List Management:**
To know which Hot-Packs to read, clients will:
1. On startup, load the `hotpack_list` document into memory.
2. Use PouchDB's live `changes` feed to monitor the creation of new `hotpack:*` documents.
3. Upon detecting an unknown Hot-Pack, the client updates its in-memory list and attempts to update the central `hotpack_list` document (on a best-effort basis, with conflict resolution).
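Steps 2 and 3 could be wired up with PouchDB's live changes feed, roughly as follows (a sketch; the best-effort `hotpack_list` update is left to the `onNew` callback):
```ts
function watchHotPacks(db: PouchDB.Database, known: Set<string>, onNew: (id: string) => void) {
    return db
        .changes({ since: "now", live: true })
        .on("change", (change) => {
            if (change.id.startsWith("hotpack:") && !known.has(change.id)) {
                known.add(change.id); // update the in-memory list first
                onNew(change.id); // then merge into `hotpack_list`, best-effort
            }
        });
}
```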
## Planned Test Strategy
1. **Unit Tests:** Test the Compaction/Promotion logic extensively. Ensure garbage is correctly identified and stable chunks are promoted correctly.
2. **Integration Tests:** Simulate a multi-client real-time editing session.
- Verify that writes are fast and responsive.
- Confirm that transient garbage chunks do not pollute the Cold Storage.
- Confirm that after a period of inactivity, compaction runs and the Hot-Packs shrink.
3. **Stress Tests:** Simulate many clients joining and leaving to test the robustness of the `hotpack_list` management.
## Documentation Strategy
- This design document will serve as the core architectural reference.
- The roles of each document type (Hot-Pack, Index, Cold-Pack, List) will be clearly explained for future developers.
- The logic of the Compaction/Promotion process will be detailed.
## Consideration and Conclusion
This tiered storage design is a direct evolution, born from the lessons of previous architectures. It embraces the ephemeral nature of data in real-time applications. By creating a "staging area" (Hot-Packs) for volatile data, it protects the integrity and performance of the permanent "cold" storage. The Compaction process acts as a self-cleaning mechanism, ensuring that only valuable, stable data is retained long-term. This is not just an optimisation; it is a fundamental shift that enables robust, high-performance, and scalable real-time synchronisation on top of CouchDB.


@@ -0,0 +1,97 @@
# [IN DESIGN] Tiered Chunk Storage for Bucket Sync
## Goal
To evolve the "Journal Sync" mechanism by integrating the Tiered Storage architecture. This design aims to drastically reduce the size and number of sync packs, minimise storage consumption on the backend bucket, and establish a clear, efficient process for Garbage Collection, all while remaining protocol-agnostic.
## Motivation
The original "Journal Sync" liberates us from CouchDB's protocol, but it still packages and transfers entire document changes, including bulky and often transient chunk data. In a real-time or frequent-editing scenario, this results in:
1. **Bloated Sync Packs:** Packs become large with redundant or short-lived chunk data, increasing upload and download times.
2. **Inefficient Storage:** The backend bucket stores numerous packs containing overlapping and obsolete chunk data, wasting space.
3. **Impractical Garbage Collection:** Identifying and purging obsolete *chunk data* from within the pack-based journal history is extremely difficult.
This new design addresses these problems by fundamentally changing *what* is synchronised in the journal packs. We will synchronise lightweight metadata and logs, while handling bulk data separately.
## Outlined methods and implementation plans
### Abstract
This design adapts the Tiered Storage model for a bucket-based backend. The backend bucket is partitioned into distinct areas for different data types. The "Journal Sync" process is now responsible for synchronising only the "hot" volatile data and lightweight metadata. A separate, asynchronous "Compaction" process, which can be run by any client, is responsible for migrating stable data into permanent, deduplicated "cold" storage.
### Detailed Implementation
**1. Bucket Structure:**
The backend bucket will have four distinct logical areas (prefixes):
- `packs/`: For "Journal Sync" packs, containing the journal of metadata and Hot-Log changes.
- `hot_logs/`: A dedicated area for each client's "Hot-Log," containing newly created, volatile chunks.
- `indices/`: For prefix-based Index files, mapping chunk hashes to their permanent location in Cold Storage.
- `cold_chunks/`: For deduplicated, stable chunk data, stored by content hash.
**2. Data Structures (Client-side PouchDB & Backend Bucket):**
- **Client Metadata:** Standard file metadata documents, kept in the client's PouchDB.
- **Hot-Log (in `hot_logs/`):** A per-client, append-only log file on the bucket.
- Path: `hot_logs/{client_id}.jsonlog`
- Content: A sequence of JSON objects, one per line, representing chunk creation events. `{"hash": "...", "data": "...", "ts": ..., "file_id": "..."}`
- **Index File (in `indices/`):** A JSON file for a given hash prefix.
- Path: `indices/{prefix}.json`
- Content: Records which chunk hashes already exist in Cold Storage (the hash itself is the key in `cold_chunks/`). `{"hash_abc...": true, "hash_def...": true}`
- **Cold Chunk (in `cold_chunks/`):** The raw, immutable, deduplicated chunk data.
- Path: `cold_chunks/{chunk_hash}`
**3. "Journal Sync" - Send/Receive Operation (Not Live):**
This process is now extremely lightweight.
1. **Send:**
a. The client takes all newly generated chunks and **appends them to its own Hot-Log file (`hot_logs/{client_id}.jsonlog`)** on the bucket.
b. The client updates its local file metadata in PouchDB.
c. It then creates a "Journal Sync" pack containing **only the PouchDB journal of the file metadata changes.** This pack is very small as it contains no chunk data.
d. The pack is uploaded to `packs/`.
2. **Receive:**
a. The client downloads new packs from `packs/` and applies the metadata journal to its local PouchDB.
b. It downloads the latest versions of all **other clients' Hot-Log files** from `hot_logs/`.
c. Now the client has a complete, up-to-date view of all metadata and all "hot" chunks.
**4. Read/Load Operation Flow:**
To find a chunk's data:
1. The client searches for the chunk hash in its local copy of all **Hot-Logs**.
2. If not found, it downloads and consults the appropriate **Index file (`indices/{prefix}.json`)**.
3. If the index confirms existence, it downloads the data from **`cold_chunks/{chunk_hash}`**.
**5. Compaction & Promotion Process (Asynchronous "GC"):**
This is a deliberate, offline-capable process that any client can choose to run.
1. The client "leases" its own Hot-Log for compaction.
2. It reads its entire `hot_logs/{client_id}.jsonlog`.
3. For each chunk in the log, it checks if the chunk is referenced in the *current, latest state* of the file metadata.
- **If not referenced (Garbage):** The log entry is discarded.
- **If referenced (Stable):** The chunk is added to a "promotion batch."
4. For each chunk in the promotion batch:
a. It checks the corresponding `indices/{prefix}.json` to see if the chunk already exists in Cold Storage.
b. If it does not exist, it **uploads the chunk data to `cold_chunks/{chunk_hash}`** and updates the `indices/{prefix}.json` file.
5. Once the entire Hot-Log has been processed, the client **deletes its `hot_logs/{client_id}.jsonlog` file** (or truncates it to empty), effectively completing the cycle.
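Step 4 of the promotion, sketched against a hypothetical minimal bucket client (`Bucket` and its methods are assumptions for illustration; paths follow the layout above):
```ts
interface Bucket {
    get(path: string): Promise<string | null>; // returns null if the object is missing
    put(path: string, body: string): Promise<void>;
}

async function promoteChunk(bucket: Bucket, hash: string, data: string, prefixLen = 5): Promise<void> {
    const indexPath = `indices/${hash.slice(0, prefixLen)}.json`;
    const index: Record<string, boolean> = JSON.parse((await bucket.get(indexPath)) ?? "{}");
    if (index[hash]) return; // 4a. already in Cold Storage: deduplicated for free
    await bucket.put(`cold_chunks/${hash}`, data); // 4b. write the data first...
    index[hash] = true;
    await bucket.put(indexPath, JSON.stringify(index)); // ...then publish it in the index
}
```
Writing the chunk before the index keeps the index trustworthy: a crash between the two writes leaves an orphaned chunk at worst, never an index entry that points at nothing.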
## Test strategy
1. **Component Tests:** Test the Compaction process independently. Ensure it correctly identifies stable versus garbage chunks and populates the `cold_chunks/` and `indices/` areas correctly.
2. **Integration Tests:**
- Simulate a multi-client sync cycle. Verify that sync packs in `packs/` are small.
- Confirm that `hot_logs/` are correctly created and updated.
- Run the Compaction process and verify that data migrates correctly to cold storage and the hot log is cleared.
3. **Conflict Tests:** Simulate two clients trying to compact the same index file simultaneously and ensure the outcome is consistent (for example, via a locking mechanism or last-write-wins).
## Documentation strategy
- This design document will be the primary reference for the bucket-based architecture.
- The structure of the backend bucket (`packs/`, `hot_logs/`, etc.) will be clearly defined.
- A detailed description of how to run the Compaction process will be provided to users.
## Consideration and Conclusion
By applying the Tiered Storage model to "Journal Sync", we transform it into a remarkably efficient system. The synchronisation of everyday changes becomes extremely fast and lightweight, as only metadata journals are exchanged. The heavy lifting of data deduplication and permanent storage is offloaded to a separate, asynchronous Compaction process. This clear separation of concerns makes the system highly scalable, minimises storage costs, and finally provides a practical, robust solution for Garbage Collection in a protocol-agnostic, bucket-based environment.


@@ -1,6 +1,6 @@
# Keep newborn chunks in Eden.
# Keep newborn chunks in Eden
NOTE: This is the planned feature design document. This is planned, but not implemented yet (v0.23.3). It has not reached the design freeze and will be added to from time to time.
Notice: deprecated. please refer to the result section of this document.
## Goal
@@ -19,15 +19,18 @@ Reduce the number of chunks which in volatile, and reduce the usage of storage o
- The problem is that this unnecessary chunking slows down both local and remote operations.
## Prerequisite
- The implementation must be able to control the size of the document appropriately so that it does not become non-transferable (1).
- The implementation must be such that data corruption can be avoided even if forward compatibility is not maintained; due to the nature of Self-hosted LiveSync, backward version connexions are expected.
- The implementation must be such that data corruption can be avoided even if forward compatibility is not maintained; due to the nature of Self-hosted LiveSync, backward version connexions are expected.
- Viewed as a feature:
- This feature should be disabled for migration users.
- This feature should be enabled for new users and after rebuilds of migrated users.
- Therefore, back in the implementation view: ideally, the implementation should be such that data recovery can be achieved by immediately upgrading after replication.
## Outlined methods and implementation plans
### Abstract
To store and transfer only stable chunks independently and share them from multiple documents after stabilisation, new chunks, i.e. chunks that are considered non-stable, are modified to be stored in the document and transferred with the document. In this case, care should be taken not to exceed prerequisite (1).
If this is achieved, the non-leaf document will not be transferred, and even if it is, the chunk will be stored in the document, so that the size can be reduced by the compaction.
@@ -40,11 +43,11 @@ Details are given below.
type EntryWithEden = {
    eden: {
        [key: DocumentID]: {
            data: string;
            epoch: number; // The document revision in which this chunk was born.
        };
    };
};
```
2. The following configuration items are added:
Note: These configurations should be shared between clients as a `Tweaks value`.
@@ -63,6 +66,7 @@ Details are given below.
5. In End-to-End Encryption, property `eden` of documents will also be encrypted.
### Note
- When this feature is enabled, forward compatibility is temporarily lost. However, this is detected as missing chunks, and the data is not reflected into storage by older versions. Therefore, no data loss will occur.
## Test strategy
@@ -77,5 +81,26 @@ Details are given below.
- Indeed, we lack a complete configuration table. Efforts will be made and, if one can be produced, it will be referenced from this document. However, this is not required while the feature is experimental or in beta.
- However, this might be an essential feature. Further efforts are desired.
## Results from actual operation
After implementing this feature, we have been using it for a while. The following results were obtained.
- The drawbacks were expected not to be a problem, but they actually were:
    - A document with `Eden` has a considerably larger history than a document without `Eden`.
    - Self-hosted LiveSync does not perform compaction aggressively, which results in the remote database becoming partially bloated.
    - Compaction of the Remote Database (CouchDB) requires the same amount of free space as the size of the database. Therefore, compaction is no longer possible once the database has reached its maximum size; by the time we detect the problem, it is too late.
- We mentioned that `We need compaction` in previous sections. However, it was very hard to determine whether compaction was required until the database had already bloated. (Of course, compacting the database takes some time and, literally, some documents lose their history. It is not a good idea to perform it frequently and meaninglessly. A manual decision is needed, but that is genuinely difficult for normal users.)
### Consideration and Conclusion
To be described after implementation, testing, and release.
This feature has two aspects:
- For users who are familiar with CouchDB, this feature is somewhat useful. They can watch and manage the database by themselves.
- For users who are not familiar with CouchDB, i.e., normal users, this feature is not so useful. They are not familiar with the database and do not know how to manage it. Therefore, they cannot decide whether compaction is required.
Hence, this feature will be kept as an experimental feature, but it is not enabled by default. In addition, it is marked as deprecated; a detailed notice would only be noise for users who are not familiar with CouchDB. The details will be kept in this document for the future.
Using this feature is not recommended unless you are familiar with CouchDB and database management.
Vorotamoroz has written this document. Bias: I am the first author of this plug-in and familiar with CouchDB.
Research and development was frozen on 2025-04-11. However, bugs will be fixed if they are found. Please feel free to report them.

View File

@@ -5,10 +5,15 @@
- [Setup a CouchDB server](#setup-a-couchdb-server)
    - [Table of Contents](#table-of-contents)
    - [1. Prepare CouchDB](#1-prepare-couchdb)
        - [A. Using Docker container](#a-using-docker-container)
        - [A. Using Docker](#a-using-docker)
            - [1. Prepare](#1-prepare)
            - [2. Run docker container](#2-run-docker-container)
        - [B. Install CouchDB directly](#b-install-couchdb-directly)
        - [B. Using Docker Compose](#b-using-docker-compose)
            - [1. Prepare](#1-prepare-1)
            - [2. Creating Compose file](#2-create-a-docker-composeyml-file-with-the-following-added-to-it)
            - [3. Boot check](#3-run-the-docker-compose-file-to-boot-check)
            - [4. Starting Docker Compose in background](#4-run-the-docker-compose-file-in-the-background)
        - [C. Install CouchDB directly](#c-install-couchdb-directly)
    - [2. Run couchdb-init.sh for initialise](#2-run-couchdb-initsh-for-initialise)
    - [3. Expose CouchDB to the Internet](#3-expose-couchdb-to-the-internet)
    - [4. Client Setup](#4-client-setup)
@@ -21,43 +26,95 @@
---
## 1. Prepare CouchDB
### A. Using Docker container
### A. Using Docker
#### 1. Prepare
```bash
# Prepare environment variables.
# Adding environment variables.
export hostname=localhost:5984
export username=goojdasjdas #Please change as you like.
export password=kpkdasdosakpdsa #Please change as you like
# Prepare directories which saving data and configurations.
# Creating the save data & configuration directories.
mkdir couchdb-data
mkdir couchdb-etc
```
#### 2. Run docker container
1. Boot Check.
```
$ docker run --name couchdb-for-ols --rm -it -e COUCHDB_USER=${username} -e COUCHDB_PASSWORD=${password} -v ${PWD}/couchdb-data:/opt/couchdb/data -v ${PWD}/couchdb-etc:/opt/couchdb/etc/local.d -p 5984:5984 couchdb
```
If your container has been exited, please check the permission of couchdb-data, and couchdb-etc.
Once CouchDB run, these directories will be owned by uid:`5984`. Please chown it for you again.
> [!WARNING]
> If your container threw an error or exited unexpectedly, please check the permission of couchdb-data, and couchdb-etc.
> Once CouchDB starts, these directories will be owned by uid:`5984`. Please chown it for that uid again.
2. Enable it in background
2. Enable it in the background
```
$ docker run --name couchdb-for-ols -d --restart always -e COUCHDB_USER=${username} -e COUCHDB_PASSWORD=${password} -v ${PWD}/couchdb-data:/opt/couchdb/data -v ${PWD}/couchdb-etc:/opt/couchdb/etc/local.d -p 5984:5984 couchdb
```
### B. Install CouchDB directly
Please refer the [official document](https://docs.couchdb.org/en/stable/install/index.html). However, we do not have to configure it fully. Just administrator needs to be configured.
Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
### B. Using Docker Compose
#### 1. Prepare
```
# Creating the save data & configuration directories.
mkdir couchdb-data
mkdir couchdb-etc
```
#### 2. Create a `docker-compose.yml` file with the following added to it
```
services:
    couchdb:
        image: couchdb:latest
        container_name: couchdb-for-ols
        user: 5984:5984
        environment:
            - COUCHDB_USER=<INSERT USERNAME HERE> #Please change as you like.
            - COUCHDB_PASSWORD=<INSERT PASSWORD HERE> #Please change as you like.
        volumes:
            - ./couchdb-data:/opt/couchdb/data
            - ./couchdb-etc:/opt/couchdb/etc/local.d
        ports:
            - 5984:5984
        restart: unless-stopped
```
#### 3. Run the Docker Compose file to boot check
```
docker compose up
# Or if using the old version
docker-compose up
```
> [!WARNING]
> If your container threw an error or exited unexpectedly, please check the permission of couchdb-data, and couchdb-etc.
> Once CouchDB starts, these directories will be owned by uid:`5984`. Please chown it for that uid again.
#### 4. Run the Docker Compose file in the background
If all went well and didn't throw any errors, `CTRL+C` out of it, and then run this command
```
docker compose up -d
# Or if using the old version
docker-compose up -d
```
Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
### C. Install CouchDB directly
Please refer to the [official document](https://docs.couchdb.org/en/stable/install/index.html). However, we do not have to configure it fully. Just the administrator needs to be configured.
## 2. Run couchdb-init.sh for initialise
```
curl -s https://raw.githubusercontent.com/vrtmrz/obsidian-livesync/main/utils/couchdb/couchdb-init.sh | bash
```
If it results like following:
If it results like the following:
```
-- Configuring CouchDB by REST APIs... -->
{"ok":true}
@@ -75,12 +132,17 @@ If it results like following:
Your CouchDB has been initialised successfully. If you want to do this manually, please read the script.
If you are using Docker Compose and the above command does not work or displays `ERROR: Hostname missing`, you can try running the following command, replacing the placeholders with your own values:
```
curl -s https://raw.githubusercontent.com/vrtmrz/obsidian-livesync/main/utils/couchdb/couchdb-init.sh | hostname=http://<YOUR SERVER IP>:5984 username=<INSERT USERNAME HERE> password=<INSERT PASSWORD HERE> bash
```
## 3. Expose CouchDB to the Internet
- You can skip this step if you use CouchDB only on an intranet and only with desktop devices.
- For mobile devices, Obsidian requires a valid SSL certificate. Usually, this means exposing the server to the Internet.
Whatever solutions we can use. For the simplicity, following sample uses Cloudflare Zero Trust for testing.
We can use whatever solution works. For simplicity, the following sample uses Cloudflare Zero Trust for testing.
```
cloudflared tunnel --url http://localhost:5984
@@ -99,12 +161,12 @@ You will then get the following output:
:
:
```
Now `https://tiles-photograph-routine-groundwater.trycloudflare.com` is our server. Make it into background once please.
Now `https://tiles-photograph-routine-groundwater.trycloudflare.com` is our server. Please send the process to the background for now.
## 4. Client Setup
> [!TIP]
> Now manually configuration is not recommended for some reasons. However, if you want to do so, please use `Setup wizard`. The recommended extra configurations will be also set.
> Manual configuration is now not recommended, for several reasons. However, if you want to do so, please use the `Setup wizard`. The recommended extra configurations will also be set.
### 1. Generate the setup URI on a desktop device or server
```bash
@@ -116,6 +178,13 @@ export password=abc123
deno run -A https://raw.githubusercontent.com/vrtmrz/obsidian-livesync/main/utils/flyio/generate_setupuri.ts
```
> [!TIP]
> What is the `passphrase`? Is it different from `uri_passphrase`?
> Yes, the `passphrase` we have exported now is the End-to-End Encryption passphrase.
> The `uri_passphrase` used in `generate_setupuri.ts` is a different one; it is used to decrypt the Setup URI when applying it.
> Why: I (vorotamoroz) think that the passphrase of the Setup-URI should be different from the E2EE passphrase, to prevent exposure caused by operational errors or the possibility of malice in our environment. On top of that, I believe that it is desirable for the Setup-URI passphrase to be random. The Setup-URI is inevitably long, so it goes through the clipboard. Its passphrase should not go through the same path, so it should essentially be typed manually.
> Hence, if we leave `uri_passphrase` empty, `generate_setupuri.ts` generates an adjective-noun-random-number passphrase so that we can remember it without going through the clipboard.
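For illustration, such an adjective-noun-random-number passphrase could be produced as in the following sketch (the word lists and format are placeholders; this is not the actual `generate_setupuri.ts` code):
```typescript
// A toy generator for memorable passphrases such as "calm-otter-4821".
const adjectives = ["brave", "calm", "eager", "gentle"]; // illustrative word lists
const nouns = ["otter", "maple", "comet", "harbour"];

function generatePassphrase(): string {
    const pick = (words: string[]) => words[Math.floor(Math.random() * words.length)];
    return `${pick(adjectives)}-${pick(nouns)}-${Math.floor(Math.random() * 10000)}`;
}

console.log(generatePassphrase()); // e.g. "calm-otter-4821"
```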
You will then get the following output:
```bash

16
docs/tech_info_cn.md Normal file
View File

@@ -0,0 +1,16 @@
# Architecture Design
## How this plug-in synchronises.
![Synchronization](../images/1.png)
1. When a note is created or modified, Obsidian fires events. Self-hosted LiveSync captures these events and reflects the changes into the local PouchDB.
2. PouchDB replicates the changes to the remote CouchDB, automatically or manually.
3. Other devices watch the remote CouchDB for changes and receive the latest updates.
4. Self-hosted LiveSync reflects the synchronised changesets into the Obsidian vault.
Note: the diagram is a simplified illustration showing only one-way synchronisation between two devices. In practice, synchronisation is bidirectional and happens among multiple devices simultaneously.
## Techniques for reducing bandwidth consumption.
![dedupe](../images/2.png)

View File

@@ -1,10 +1,24 @@
# Terms used in this project
# Notes on Terminology, Spelling, and Vocabulary Conventions
## Terms
## Spelling and Vocabulary conventions
### Chunks
<!-- TBW, sorry for the draft! -->
1. Almost all of the English words are written in British English. For example, "organisation" instead of "organization", "synchronisation" instead of "synchronization", etc. This convention originated from the author's personal preference but is now maintained for consistency.
2. Idiomatic terms, such as those used in HTML, CSS, and JavaScript, are usually aligned with the language used by the technology. For example, "color" instead of "colour", "program" instead of "programme", etc. In particular, terms used for attributes, properties, and methods are notable.
<!-- Please feel free to write any terms that should be mentioned. And please make pull request. I would love to fill the rest. -->
<!-- ### Chunks -->
3. We use `dialogue` in documentation for consistency. While `dialog` may appear in source code, particularly in class names, method names, and attributes (following technical conventions in No. 2), we consistently use `dialogue` for user-facing messages and general documentation text. This approach balances No. 1 with No. 2.
4. Contractions are not used. For example, "do not" instead of "don't", "cannot" instead of "can't", etc., especially `'d`.
- We may encounter difficulties with tenses.
5. However, try to use affirmative forms: `Discard` instead of `Do not keep`, `Continue` instead of `Do not stop`, etc.
- Some languages, such as Japanese, have a different meaning for `yes` and `no` between affirmative and negative questions.
## Terminology
- Self-hosted LiveSync
    - The name of this plug-in. `Self-hosted` is one word.
- LiveSync
    - A very confusing term.
    - The shortened form of `Self-hosted LiveSync`.
    - The name of a synchronisation mode. This should be changed to `Continuous`, in contrast to `Periodic`.

View File

@@ -0,0 +1,81 @@
---
title: "JWT Authentication on CouchDB"
livesync-version: 0.25.24
tags:
- tips
- CouchDB
- JWT
authors:
- vorotamoroz
---
# JWT Authentication on CouchDB
When using CouchDB as a backend for Self-hosted LiveSync, it is possible to enhance security by employing JWT (JSON Web Token) Authentication. In particular, using asymmetric keys (ES256 and ES512) provides greater security against token interception.
## Setting up JWT Authentication (Asymmetric Key Example)
### 1. Generate a key pair
We can use `openssl` to generate an EC key pair as follows:
```bash
# Generate private key
# ES512 for secp521r1 curve, we can also use ES256 for prime256v1 curve
openssl ecparam -name secp521r1 -genkey -noout | openssl pkcs8 -topk8 -inform PEM -nocrypt -out private_key.pem
# openssl ecparam -name prime256v1 -genkey -noout | openssl pkcs8 -topk8 -inform PEM -nocrypt -out private_key.pem
# Generate public key in SPKI format
openssl ec -in private_key.pem -pubout -outform PEM -out public_key.pem
```
> [!TIP]
> A key generator will be provided again in a future version of the user interface.
### 2. Configure CouchDB to accept JWT tokens
The following configuration is required:
| Key | Value | Note |
| ------------------------------ | ----------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| chttpd/authentication_handlers | {chttpd_auth, jwt_authentication_handler} | In total, it may be `{chttpd_auth, jwt_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}`, or something similar. |
| jwt_auth/required_claims | "exp" | |
| jwt_keys/ec:your_key_id | Your public key in PEM (SPKI) format | Replace `your_key_id` with your actual key ID. You can decide as you like. Note that you can add multiple keys if needed. If you want to use HSxxx, you should set `jwt_keys/hmac:your_key_id` with your HMAC secret. |
Note: When configuring CouchDB via the web interface (Fauxton), the newlines around the header and footer lines of the public key should be replaced with `\n` (so weird, but true; I have tested it), as follows:
```
-----BEGIN PUBLIC KEY-----
\nMIGbMBAGByqGSM49AgEGBSuBBAAjA4GGAAQBq0irb/+K0Qzo7ayIHj0Xtthcntjz
r665J5UYdEQMiTtku5rnp95RuN97uA2pPOJOacMBAoiVUnZ1pqEBz9xH9yoAixji
Ju...........................................................gTt
/xtqrJRwrEy986oRZRQ=
\n-----END PUBLIC KEY-----
```
For detailed information, please refer to the [CouchDB JWT Authentication Documentation](https://docs.couchdb.org/en/stable/api/server/authn.html#jwt-authentication).
### 3. Configure Self-hosted LiveSync to use JWT Authentication
| Setting | Description |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| Use JWT Authentication | Enable this option to use JWT Authentication. |
| JWT Algorithm | Select the JWT signing algorithm (e.g., ES256, ES512) that matches your key pair. |
| JWT Key | Paste your private key in PEM (pkcs8) format. |
| JWT Expiration Duration | Set the token expiration time in minutes. Locally cached tokens are also invalidated after this duration. |
| JWT Key ID (kid) | Enter the key ID that you used when configuring CouchDB, i.e., the one that replaced `your_key_id`. |
| JWT Subject (sub)       | Set your user ID; this overrides the original `Username` setting. If you observe access under `Username`, JWT authorisation has failed.               |
> [!IMPORTANT]
> Self-hosted LiveSync requests to CouchDB treat the user as `_admin`. If you want to restrict access, configure `jwt_auth/roles_claim_name` to a custom claim name. (Self-hosted LiveSync always sets `_couchdb.roles` with the value `["_admin"]`).
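For illustration, a token matching the settings above could be minted as in the following sketch. It assumes the `jose` library and placeholder values (`your_key_id`, `your-user-id`); this is not the plug-in's internal implementation.
```typescript
// A sketch of minting a CouchDB-compatible JWT with the jose library.
import { importPKCS8, SignJWT } from "jose";
import { readFileSync } from "node:fs";

const privateKeyPem = readFileSync("private_key.pem", "utf8");
const key = await importPKCS8(privateKeyPem, "ES512");

const token = await new SignJWT({ "_couchdb.roles": ["_admin"] })
    .setProtectedHeader({ alg: "ES512", kid: "your_key_id" }) // kid must match jwt_keys/ec:your_key_id
    .setSubject("your-user-id") // becomes the CouchDB user name
    .setExpirationTime("5m") // satisfies the required "exp" claim
    .sign(key);

// The token is then sent as: Authorization: Bearer <token>
console.log(token);
```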
### 4. Test the configuration
Just try to `Test Settings and Continue` in the remote setup dialogue. If you have successfully authenticated, you are all set.
## Additional Notes
This feature is still experimental. Please be sure to test thoroughly in your environment before deploying to production.
However, we think that this is a great step towards enhancing security when using CouchDB with Self-hosted LiveSync. We shall enable this setting by default in future releases.
We would love to hear your feedback and any issues you encounter.

View File

@@ -0,0 +1,29 @@
---
title: "Peer-to-Peer Synchronisation Tips"
livesync-version: 0.25.24
tags:
- tips
- p2p
authors:
- vorotamoroz
---
# Peer-to-Peer Synchronisation Tips
> [!IMPORTANT]
> Peer-to-peer synchronisation is still an experimental feature. Although we have made every effort to ensure its reliability, it may not function correctly in all environments.
## Difficulties with Peer-to-Peer Synchronisation
It is often the case that peer-to-peer connections do not function correctly, for instance, when using mobile data services.
In such circumstances, we recommend connecting all devices to a single Virtual Private Network (VPN). It is advisable to select a service, such as Tailscale, which facilitates direct communication between peers wherever possible.
Should one be in an environment where even Tailscale is unable to connect, or where it cannot be lawfully installed, please continue reading.
## A More Detailed Explanation
The failure of a Peer-to-Peer connection via WebRTC can be attributed to several factors. These may include an unsuccessful UDP hole-punching attempt, or an intermediary gateway intentionally terminating the connection. Troubleshooting this matter is not a simple undertaking. Furthermore, and rather unfortunately, gateway administrators are typically aware of this type of network behaviour. Whilst a legitimate purpose for such traffic can be cited, such as for web conferencing, this is often insufficient to prevent it from being blocked.
This situation, however, is the primary reason that our project does not provide a TURN server. Although it is said that a TURN server within WebRTC does not decrypt communications, the project holds the view that the risk of a malicious party impersonating a TURN server must be avoided. Consequently, configuring a TURN server for relay communication is not currently possible through the user interface. Furthermore, there is no official project TURN server, which is to say, one that could be monitored by a third party.
We request that you provide your own server, using your own Fully Qualified Domain Name (FQDN), and subsequently enter its details into the advanced settings.
For testing purposes, Cloudflare's Real-Time TURN Service is exceedingly convenient and offers a generous amount of free data. However, it must be noted that because it is a well-known destination, such traffic is highly conspicuous. There is also a significant possibility that it may be blocked by default. We advise proceeding with caution.

View File

@@ -1,12 +1,24 @@
<!-- 2024-02-15 -->
# Tips and Troubleshooting
- [Tips and Troubleshooting](#tips-and-troubleshooting)
    - [Tips](#tips)
        - [CORS avoidance](#cors-avoidance)
        - [CORS configuration with reverse proxy](#cors-configuration-with-reverse-proxy)
            - [Nginx](#nginx)
            - [Nginx and subdirectory](#nginx-and-subdirectory)
            - [Caddy](#caddy)
            - [Caddy and subdirectory](#caddy-and-subdirectory)
            - [Apache](#apache)
        - [Show all setting panes](#show-all-setting-panes)
        - [How to resolve `Tweaks Mismatched or Changed`](#how-to-resolve-tweaks-mismatched-or-changed)
    - [Notable bugs and fixes](#notable-bugs-and-fixes)
        - [Binary files get bigger on iOS](#binary-files-get-bigger-on-ios)
        - [Some setting names have been changed](#some-setting-names-have-been-changed)
    - [FAQ](#faq)
    - [Questions and Answers](#questions-and-answers)
        - [How should I share the settings between multiple devices?](#how-should-i-share-the-settings-between-multiple-devices)
        - [What should I enter for the passphrase of Setup-URI?](#what-should-i-enter-for-the-passphrase-of-setup-uri)
        - [Why are the settings of Self-hosted LiveSync itself disabled by default?](#why-are-the-settings-of-self-hosted-livesync-itself-disabled-by-default)
        - [The plug-in says `something went wrong`.](#the-plug-in-says-something-went-wrong)
        - [A large number of files were deleted, and were synchronised!](#a-large-number-of-files-were-deleted-and-were-synchronised)
        - [Why `Use an old adapter for compatibility` is somehow enabled in my vault?](#why-use-an-old-adapter-for-compatibility-is-somehow-enabled-in-my-vault)
        - [ZIP (or any extensions) files were not synchronised. Why?](#zip-or-any-extensions-files-were-not-synchronised-why)
        - [I hope to report the issue, but you said you need a `Report`. How do I make it?](#i-hope-to-report-the-issue-but-you-said-you-need-a-report-how-do-i-make-it)
@@ -14,25 +26,148 @@
        - [Why are the logs volatile and ephemeral?](#why-are-the-logs-volatile-and-ephemeral)
        - [Some network logs are not written into the file.](#some-network-logs-are-not-written-into-the-file)
        - [If a file were deleted or trimmed, the capacity of the database should be reduced, right?](#if-a-file-were-deleted-or-trimmed-the-capacity-of-the-database-should-be-reduced-right)
        - [How to launch the DevTools](#how-to-launch-the-devtools)
            - [On Desktop Devices](#on-desktop-devices)
            - [On Android](#on-android)
            - [On iOS, iPadOS devices](#on-ios-ipados-devices)
        - [How can I use the DevTools?](#how-can-i-use-the-devtools)
            - [Checking the network log](#checking-the-network-log)
    - [Troubleshooting](#troubleshooting)
        - [While using Cloudflare Tunnels, Obsidian API fallback and `524` errors often occur.](#while-using-cloudflare-tunnels-obsidian-api-fallback-and-524-errors-often-occur)
        - [On the mobile device, cannot synchronise on the local network!](#on-the-mobile-device-cannot-synchronise-on-the-local-network)
        - [I think that something bad is happening on the vault...](#i-think-that-something-bad-is-happening-on-the-vault)
    - [Tips](#tips)
        - [How to resolve `Tweaks Mismatched or Changed`](#how-to-resolve-tweaks-mismatched-or-changed)
        - [Flag Files](#flag-files)
        - [Old tips](#old-tips)
<!-- - -->
## Tips
### CORS avoidance
If we are unable to configure CORS properly for any reason (for example, if we cannot configure non-administered network devices), we may choose to ignore CORS.
To use the Obsidian API (also known as the Non-Native API) to bypass CORS, we can enable the toggle ``Use Request API to avoid `inevitable` CORS problem``.
<!-- Add **Long explanation of CORS** here for integrity -->
### CORS configuration with reverse proxy
- IMPORTANT: CouchDB handles CORS by itself. Do not process CORS on the reverse
proxy.
- Do not process `OPTIONS` requests on the reverse proxy!
- Make sure `host` and `X-Forwarded-For` headers are forwarded to the CouchDB.
- If you are using a subdirectory, make sure to handle it properly. More
detailed information is in the
[CouchDB documentation](https://docs.couchdb.org/en/stable/best-practices/reverse-proxies.html).
Minimal configurations are as follows:
#### Nginx
```nginx
location / {
    proxy_pass http://localhost:5984;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
#### Nginx and subdirectory
```nginx
location /couchdb {
    rewrite ^ $request_uri;
    rewrite ^/couchdb/(.*) /$1 break;
    proxy_pass http://localhost:5984$uri;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /_session {
    proxy_pass http://localhost:5984/_session;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
#### Caddy
```caddyfile
domain.com {
    reverse_proxy localhost:5984
}
```
#### Caddy and subdirectory
```caddyfile
domain.com {
    reverse_proxy /couchdb/* localhost:5984
    reverse_proxy /_session/* localhost:5984/_session
}
```
#### Apache
Sorry, Apache is not recommended for CouchDB; we omit its configuration here.
Please refer to the
[Official documentation](https://docs.couchdb.org/en/stable/best-practices/reverse-proxies.html#reverse-proxying-with-apache-http-server).
### Show all setting panes
The full set of panes is not shown by default. To show all panes, please toggle
everything in `🧙‍♂️ Wizard` -> `Enable extra and advanced features`.
For your information, all the panes are as follows:
![All Panes](all_toggles.png)
### How to resolve `Tweaks Mismatched or Changed`
(Since v0.23.17)
If you have changed some configurations or tweaks which should be unified
between devices, you will be asked at the next synchronisation how (or whether)
to reflect them to other devices. It also occurs on the device itself, where the
changes were made, to prevent unexpected configuration changes from unwanted
propagation.\
(We may be thankful for this behaviour if we have synchronised, or backed up and
restored, Self-hosted LiveSync. At least, it was so for me.)
The following dialogue will be shown: ![Dialogue](tweak_mismatch_dialogue.png)
- If we want to propagate the setting of the device, we should choose
`Update with mine`.
- On other devices, we should choose `Use configured` to accept and use the
configured configuration.
- `Dismiss` can postpone a decision. However, we cannot synchronise until we
have decided.
Rest assured that in most cases we can choose `Use configured`. (Unless you are
certain that you have not changed the configuration).
If we see it for the first time, it reflects the settings of the device that has
been synchronised with the remote for the first time since the upgrade.
Probably, we can accept that.
<!-- Add here -->
## Notable bugs and fixes
### Binary files get bigger on iOS
- Reported at: v0.20.x
- Fixed at: v0.21.2 (Fixed but not reviewed)
- Required action: larger files will not be fixed automatically, please perform `Verify and repair all files`. If our local database and storage are not matched, we will be asked to apply which one.
- Required action: larger files will not be fixed automatically; please perform
  `Verify and repair all files`. If our local database and storage do not
  match, we will be asked which one to apply.
### Some setting names have been changed
- Fixed at: v0.22.6
| Previous name | New name |
@@ -42,107 +177,253 @@
| Setup Wizard | Minimal Setup |
| Check database configuration | Check and Fix database configuration |
## FAQ
## Questions and Answers
### How should I share the settings between multiple devices?
- Device setup:
    - Using `Setup URI` is the most straightforward way.
- Setting changes during use:
    - Use `Sync settings via Markdown files` on the `🔄️ Sync settings` pane.
### What should I enter for the passphrase of Setup-URI?
- Anything you like is OK. However, the recommendation is as follows:
    - Include the vault (group) information.
    - Include the date of operation.
    - Anything random, for your security.
    - For example, `MyVault-20240901-r4nd0mStr1ng`.
- Why?
    - The Setup-URI is encrypted; that means it does not reveal the actual settings. Hence, if you use the same passphrase for multiple vaults, you may accidentally mix up the vaults.
### Why are the settings of Self-hosted LiveSync itself disabled by default?
Basically, if we configured every `additionalSuffixOfDatabaseName` the same, we could synchronise this file between multiple devices.
(`additionalSuffixOfDatabaseName` should be unique on each device, not across the synchronised vaults).
However, if we synchronise the settings of Self-hosted LiveSync itself, we may encounter some unexpected behaviours.
For example, if a setting that excludes Self-hosted LiveSync's own settings from synchronisation is synced, it is very unlikely that things will recover automatically afterwards, and there is little chance we will even notice it, even if we change our minds and revert the settings on other devices. It could get even worse if incompatible changes were automatically reflected; everything would break.
### The plug-in says `something went wrong`.
There are many cases where the cause is genuinely unclear. One possibility is that a chunk fetch did not go well.
1. Restarting Obsidian sometimes helps (fetch-order problem).
2. If chunks are actually missing, please perform `Recreate missing chunks for all files` on the `🧰 Hatch` pane on the other devices, and synchronise again. (Restarting Obsidian may also take effect.)
3. If the problem persists, please perform `Verify and repair all files` on the `🧰 Hatch` pane. If our local database and storage do not match, we will be asked which one to apply.
### A large number of files were deleted, and were synchronised!
1. Backup everything important.
    - Your local vault.
    - Your CouchDB database (this can be done by replicating to another database).
2. Prepare an empty vault.
3. Place `redflag.md` at the top of the vault.
4. Apply the settings **BUT DO NOT PROCEED TO RESTORE YET**.
    - You can use `Setup URI`, QR Code, or manually apply the settings.
5. Set `Maximum file modification time for reflected file events` in `Remediation` on the `🩹 Patches` pane.
    - If you know when the files were deleted, set the time a bit before that.
    - If not, bisecting may help us.
6. Delete `redflag.md`.
7. Perform `Reset synchronisation on This Device` on the `🎛️ Maintenance` pane.
This mode is very fragile. Please be careful.
### Why `Use an old adapter for compatibility` is somehow enabled in my vault?
Because you are a compassionate and experienced user. Before v0.17.16, we used an old adapter for the local database. At that time, current default adapter has not been stable.
The new adapter has better performance and has a new feature like purging. Therefore, we should use new adapters and current default is so.
Because you are a compassionate and experienced user. Before v0.17.16, we used
an old adapter for the local database. At that time, the current default adapter
was not yet stable. The new adapter has better performance and new features such
as purging. Therefore, we should use the new adapter, and the current default is so.
However, when switching from an old adapter to a new adapter, some converting or local database rebuilding is required, and it takes a few time. It was a long time ago now, but we once inconvenienced everyone in a hurry when we changed the format of our database.
For these reasons, this toggle is automatically on if we have upgraded from vault which using an old adapter.
However, when switching from an old adapter to a new adapter, some conversion or
local database rebuilding is required, and it takes some time. It was a long
time ago now, but we once inconvenienced everyone in a hurry when we changed the
format of our database. For these reasons, this toggle is automatically on if we
have upgraded from a vault which used an old adapter.
When you rebuild everything or fetch from the remote again, you will be asked to switch this.
When you rebuild everything or fetch from the remote again, you will be asked to
switch this.
Therefore, experienced users (especially those stable enough not to have to rebuild the database) may have this toggle enabled in their Vault.
Please disable it when you have enough time.
Therefore, experienced users (especially those stable enough not to have to
rebuild the database) may have this toggle enabled in their Vault. Please
disable it when you have enough time.
### ZIP (or any extensions) files were not synchronised. Why?
It depends on Obsidian detects. May toggling `Detect all extensions` of `File and links` (setting of Obsidian) will help us.
It depends on what Obsidian detects. Toggling `Detect all extensions` in
`File and links` (a setting of Obsidian) may help us.
### I hope to report the issue, but you said you need a `Report`. How do I make it?
We can copy the report to the clipboard, by pressing the `Make report` button on the `Hatch` pane.
![Screenshot](../images/hatch.png)
We can copy the report to the clipboard, by pressing the `Make report` button on
the `Hatch` pane. ![Screenshot](../images/hatch.png)
### Where can I check the log?
We can launch the log pane by `Show log` on the command palette.
And if you have troubled something, please enable the `Verbose Log` on the `General Setting` pane.
However, the logs would not be kept so long and cleared when restarted. If you want to check the logs, please enable `Write logs into the file` temporarily.
We can launch the log pane via `Show log` on the command palette. And if you are
troubled by something, please enable `Verbose Log` on the `General Setting`
pane.
However, the logs are not kept for long and are cleared on restart. If you
want to check the logs, please enable `Write logs into the file` temporarily.
![ScreenShot](../images/write_logs_into_the_file.png)
> [!IMPORTANT]
>
> - Writing logs into the file will impact the performance.
> - Please make sure that you have erased all your confidential information before reporting issue.
> - Please make sure that you have erased all your confidential information
>   before reporting an issue.
### Why are the logs volatile and ephemeral?
To avoid unexpected exposure to our confidential things.
### Some network logs are not written into the file.
Especially the CORS error will be reported as a general error to the plug-in for security reasons. So we cannot detect and log it. We are only able to investigate them by [Checking the network log](#checking-the-network-log).
In particular, CORS errors are reported to the plug-in as general errors for
security reasons, so we cannot detect and log them. We are only able to
investigate them by [Checking the network log](#checking-the-network-log).
### If a file were deleted or trimmed, the capacity of the database should be reduced, right?
No, even though if files were deleted, chunks were not deleted.
Self-hosted LiveSync splits the files into multiple chunks and transfers only newly created. This behaviour enables us to less traffic. And, the chunks will be shared between the files to reduce the total usage of the database.
And one more thing, we can handle the conflicts on any device even though it has happened on other devices. This means that conflicts will happen in the past, after the time we have synchronised. Hence we cannot collect and delete the unused chunks even though if we are not currently referenced.
No. Even if files are deleted, chunks are not deleted. Self-hosted
LiveSync splits files into multiple chunks and transfers only newly created ones.
This behaviour lets us use less traffic. And the chunks are shared
between files to reduce the total usage of the database.
To shrink the database size, `Rebuild everything` only reliably and effectively. But do not worry, if we have synchronised well. We have the actual and real files. Only it takes a bit of time and traffics.
And one more thing: we can handle conflicts on any device, even if they
happened on other devices. This means that conflicts may surface from the past,
after the point at which we synchronised. Hence, we cannot collect and delete
unused chunks even if they are not currently referenced.
To shrink the database size, only `Rebuild everything` is reliable and effective.
But do not worry if we have synchronised well: we have the actual, real
files. It only takes a bit of time and traffic.
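For intuition, here is a toy sketch of the content-addressed chunk sharing described above; the chunk size, hashing, and in-memory store are illustrative assumptions, not the plug-in's actual chunking algorithm.
```typescript
// Identical chunks are stored once and referenced by ID from multiple files,
// so unchanged or shared parts cost no additional storage or traffic.
import { createHash } from "node:crypto";

const chunkStore = new Map<string, string>(); // chunkId -> content

function splitIntoChunks(content: string, size = 8): string[] {
    const chunks: string[] = [];
    for (let i = 0; i < content.length; i += size) chunks.push(content.slice(i, i + size));
    return chunks;
}

// Storing a file records only chunk IDs; only unseen chunks enter the store.
function storeFile(content: string): string[] {
    return splitIntoChunks(content).map((chunk) => {
        const id = createHash("sha1").update(chunk).digest("hex");
        if (!chunkStore.has(id)) chunkStore.set(id, chunk);
        return id;
    });
}
```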
### How to launch the DevTools
#### On Desktop Devices
We can launch the DevTools by pressing `ctrl`+`shift`+`i` (`Command`+`shift`+`i` on Mac).
#### On Android
Please refer to [Remote debug Android devices](https://developer.chrome.com/docs/devtools/remote-debugging/).
Once the DevTools have been launched, everything operates the same as on a PC.
#### On iOS, iPadOS devices
If we have a Mac, we can inspect from Safari on the Mac. Please refer to [Inspecting iOS and iPadOS](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios).
### How can I use the DevTools?
#### Checking the network log
1. Open the network pane.
2. Find the requests marked in red.
![Errored](../images/devtools1.png)
3. Capture the `Headers`, `Payload`, and, `Response`. **Please be sure to keep important information confidential**. If the `Response` contains secrets, you can omitted that.
Note: Headers contains a some credentials. **The path of the request URL, Remote Address, authority, and authorization must be concealed.**
![Concealed sample](../images/devtools2.png)
2. Find the requests marked in red.\
   ![Errored](../images/devtools1.png)
3. Capture the `Headers`, `Payload`, and `Response`. **Please be sure to keep
   important information confidential**. If the `Response` contains secrets, you
   can omit it. Note: the headers contain some credentials. **The path of
   the request URL, Remote Address, authority, and authorization must be
   concealed.**\
   ![Concealed sample](../images/devtools2.png)
## Troubleshooting
<!-- Add here -->
### While using Cloudflare Tunnels, Obsidian API fallback and `524` errors often occur.
A `524` error occurs when the request to the server is not completed within a
`specified time`. This is a timeout error from Cloudflare. From the reported
issue, it seems to be 100 seconds. (#627).
Therefore, this error is returned by Cloudflare, not by the server. Hence, the
response contains no CORS headers, which is what makes the Obsidian
API fallback happen.
However, even if the Obsidian API fallback occurs, the request is still not
completed within the `specified time`, 100 seconds.
To solve this issue, we need to configure the timeout settings.
Please enable the toggle in `💪 Power users` -> `CouchDB Connection Tweak` ->
`Use timeouts instead of heartbeats`.
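For intuition, the sketch below shows the difference at the `_changes` feed level; the endpoint and values are illustrative, and this simplified long-poll loop is not the plug-in's replication code.
```typescript
// With `timeout`, CouchDB itself ends each long-poll after 60 s, well under
// Cloudflare's ~100 s limit, and the client simply reconnects. Heartbeat mode
// instead keeps one request open indefinitely, which an intermediary may cut off.
const endpoint = "https://couch.example.com/vault/_changes";

async function watchChanges(since: string = "now"): Promise<void> {
    for (;;) {
        const res = await fetch(`${endpoint}?feed=longpoll&since=${since}&timeout=60000`);
        const { results, last_seq } = await res.json();
        for (const change of results) console.log("changed:", change.id);
        since = last_seq; // resume from the last sequence and poll again
    }
}
```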
### On the mobile device, cannot synchronise on the local network!
Obsidian mobile is not able to connect to the non-secure end-point, such as starting with `http://`. Make sure your URI of CouchDB. Also not able to use a self-signed certificate.
Obsidian mobile is not able to connect to non-secure end-points, such as those
starting with `http://`. Please double-check your CouchDB URI. It is also unable
to use a self-signed certificate.
### I think that something bad is happening on the vault...
Place `redflag.md` on top of the vault, and restart Obsidian. The most simple way is to create a new note and rename it to `redflag`. Of course, we can put it without Obsidian.
If there is `redflag.md`, Self-hosted LiveSync suspends all database and storage processes.
Place the [flag file](#flag-files) at the top of the vault, and restart Obsidian. The simplest
way is to create a new note and rename it to `redflag`. Of course, we can place it
without Obsidian.
## Tips
For example, if there is `redflag.md`, Self-hosted LiveSync suspends all database and storage
processes.
### How to resolve `Tweaks Mismatched or Changed`
### Flag Files
(Since v0.23.17)
The flag file is a simple Markdown file designed to prevent storage events and database events in Self-hosted LiveSync.
Its very existence is significant; it may be left blank, or it may contain text; either is acceptable.
If you have changed some configurations or tweaks which should be unified between the devices, you will be asked how to reflect (or not) other devices at the next synchronisation. It also occurs on the device itself, where changes are made, to prevent unexpected configuration changes from unwanted propagation.
(We may thank this behaviour if we have synchronised or backed up and restored Self-hosted LiveSync. At least, for me so).
This file is in Markdown format so that it can be placed in the Vault externally, even if Obsidian fails to launch.
Following dialogue will be shown:
![Dialogue](tweak_mismatch_dialogue.png)
There are several flag files, i.e., variants of `redflag.md`, we can use:
- If we want to propagate the setting of the device, we should choose `Update with mine`.
- On other devices, we should choose `Use configured` to accept and use the configured configuration.
- `Dismiss` can postpone a decision. However, we cannot synchronise until we have decided.
| Filename      | Human-Friendly Name | Description                                                                                |
| ------------- | ------------------- | ------------------------------------------------------------------------------------------ |
| `redflag.md`  | -                   | Suspends all processes.                                                                     |
| `redflag2.md` | `flag_rebuild.md`   | Suspends all processes, and rebuilds both the local and remote databases from local files.  |
| `redflag3.md` | `flag_fetch.md`     | Suspends all processes, discards the local database, and fetches from the remote again.    |
Rest assured that in most cases we can choose `Use configured`. (Unless you are certain that you have not changed the configuration).
If we see it for the first time, it reflects the settings of the device that has been synchronised with the remote for the first time since the upgrade. Probably, we can accept that.
<!-- Add here -->
When fetching everything from the remote or performing a rebuild, Obsidian is
restarted once for safety reasons. At that time, Self-hosted LiveSync uses
these files to determine whether the process should be carried out. (The use of
normal Markdown files is a trick to allow externally forcing cancellation in the
event of faults in the rebuild or fetch function itself, especially on mobile
devices). This mechanism is also used for set-up. And just for information,
these files are also not subject to synchronisation.
However, occasionally the deletion of files may fail. This should generally work
normally after restarting Obsidian. (As far as I can observe).
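For illustration, the table above could be modelled as in the following sketch; the helper function and its signature are hypothetical, and this is not the plug-in's actual start-up code.
```typescript
// Both the legacy names and the human-friendly aliases resolve to the same action.
type FlagAction = "suspend" | "rebuild" | "fetch";

const FLAG_FILES: Record<string, FlagAction> = {
    "redflag.md": "suspend",
    "redflag2.md": "rebuild",
    "flag_rebuild.md": "rebuild",
    "redflag3.md": "fetch",
    "flag_fetch.md": "fetch",
};

// `existsInVaultRoot` is a hypothetical stand-in for the storage API.
async function detectFlag(existsInVaultRoot: (name: string) => Promise<boolean>): Promise<FlagAction | undefined> {
    for (const [name, action] of Object.entries(FLAG_FILES)) {
        if (await existsInVaultRoot(name)) return action;
    }
    return undefined; // no flag file: boot normally
}
```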
### Old tips
- Rarely, a file in the database could be corrupted. The plugin will not write to local storage when a file looks corrupted. If a local version of the file is on your device, the corruption could be fixed by editing the local file and synchronizing it. But if the file does not exist on any of your devices, then it can not be rescued. In this case, you can delete these items from the settings dialog.
- To stop the boot-up sequence (eg. for fixing problems on databases), you can put a `redflag.md` file (or directory) at the root of your vault.
Tip for iOS: a redflag directory can be created at the root of the vault using the File application.
- Also, with `redflag2.md` placed, we can automatically rebuild both the local and the remote databases during the boot-up sequence. With `redflag3.md`, we can discard only the local database and fetch from the remote again.
- Q: The database is growing, how can I shrink it down?
A: each of the docs is saved with their past 100 revisions for detecting and resolving conflicts. Picturing that one device has been offline for a while, and comes online again. The device has to compare its notes with the remotely saved ones. If there exists a historic revision in which the note used to be identical, it could be updated safely (like git fast-forward). Even if that is not in revision histories, we only have to check the differences after the revision that both devices commonly have. This is like git's conflict-resolving method. So, We have to make the database again like an enlarged git repo if you want to solve the root of the problem.
- And more technical Information is in the [Technical Information](tech_info.md)
- If you want to synchronize files without obsidian, you can use [filesystem-livesync](https://github.com/vrtmrz/filesystem-livesync).
- WebClipper is also available on Chrome Web Store:[obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
Repo is here: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip). (Docs are a work in progress.)
- Rarely, a file in the database could be corrupted. The plugin will not write
to local storage when a file looks corrupted. If a local version of the file
is on your device, the corruption could be fixed by editing the local file and
synchronizing it. But if the file does not exist on any of your devices, then
it can not be rescued. In this case, you can delete these items from the
settings dialog.
- To stop the boot-up sequence (eg. for fixing problems on databases), you can
put a `redflag.md` file (or directory) at the root of your vault. Tip for iOS:
a redflag directory can be created at the root of the vault using the File
application.
- Also, with `redflag2.md` placed, we can automatically rebuild both the local
and the remote databases during the boot-up sequence. With `redflag3.md`, we
can discard only the local database and fetch from the remote again.
- Q: The database is growing, how can I shrink it down? A: each of the docs is
saved with their past 100 revisions for detecting and resolving conflicts.
Picturing that one device has been offline for a while, and comes online
again. The device has to compare its notes with the remotely saved ones. If
there exists a historic revision in which the note used to be identical, it
could be updated safely (like git fast-forward). Even if that is not in
revision histories, we only have to check the differences after the revision
that both devices commonly have. This is like git's conflict-resolving method.
So, We have to make the database again like an enlarged git repo if you want
to solve the root of the problem.
- And more technical Information is in the [Technical Information](tech_info.md)
- If you want to synchronize files without obsidian, you can use
[filesystem-livesync](https://github.com/vrtmrz/filesystem-livesync).
- WebClipper is also available on Chrome Web
Store:[obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
Repo is here:
[obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip).
(Docs are a work in progress.)

View File

@@ -4,7 +4,7 @@ import esbuild from "esbuild";
import process from "process";
import builtins from "builtin-modules";
import sveltePlugin from "esbuild-svelte";
import sveltePreprocess from "svelte-preprocess";
import { sveltePreprocess } from "svelte-preprocess";
import fs from "node:fs";
// import terser from "terser";
import { minify } from "terser";
@@ -12,13 +12,21 @@ import inlineWorkerPlugin from "esbuild-plugin-inline-worker";
import { terserOption } from "./terser.config.mjs";
import path from "node:path";
const prod = process.argv[2] === "production";
const prod = process.argv[2] === "production" || process.env?.BUILD_MODE === "production";
const keepTest = true; //!prod;
const manifestJson = JSON.parse(fs.readFileSync("./manifest.json") + "");
const packageJson = JSON.parse(fs.readFileSync("./package.json") + "");
const updateInfo = JSON.stringify(fs.readFileSync("./updates.md") + "");
const PATHS_TEST_INSTALL = process.env?.PATHS_TEST_INSTALL || "";
const PATH_TEST_INSTALL = PATHS_TEST_INSTALL.split(path.delimiter).map(p => p.trim()).filter(p => p.length);
// Note: check `.length`; an array (even an empty one) is always truthy.
if (PATH_TEST_INSTALL.length) {
    console.log(`Built files will be copied to ${PATH_TEST_INSTALL}`);
} else {
    console.log("Development build: You can install the plug-in to Obsidian for testing by exporting the PATHS_TEST_INSTALL environment variable with the paths to your vault plugins directories separated by your system path delimiter (':' on Unix, ';' on Windows).");
}
const moduleAliasPlugin = {
    name: "module-alias",
    setup(build) {
@@ -95,6 +103,21 @@ const plugins = [
                } else {
                    fs.copyFileSync("./main_org.js", "./main.js");
                }
                if (PATH_TEST_INSTALL.length) {
                    for (const installPath of PATH_TEST_INSTALL) {
                        const realPath = path.resolve(installPath);
                        console.log(`Copying built files to ${realPath}`);
                        if (!fs.existsSync(realPath)) {
                            console.warn(`Test install path ${installPath} does not exist`);
                            continue;
                        }
                        const manifestX = JSON.parse(fs.readFileSync("./manifest.json") + "");
                        manifestX.version = manifestJson.version + "." + Date.now();
                        fs.writeFileSync(path.join(installPath, "manifest.json"), JSON.stringify(manifestX, null, 2));
                        fs.copyFileSync("./main.js", path.join(installPath, "main.js"));
                        fs.copyFileSync("./styles.css", path.join(installPath, "styles.css"));
                    }
                }
            });
        },
    },

102
eslint.config.mjs Normal file
View File

@@ -0,0 +1,102 @@
import typescriptEslint from "@typescript-eslint/eslint-plugin";
import svelte from "eslint-plugin-svelte";
import _import from "eslint-plugin-import";
import { fixupPluginRules } from "@eslint/compat";
import tsParser from "@typescript-eslint/parser";
import path from "node:path";
import { fileURLToPath } from "node:url";
import js from "@eslint/js";
import { FlatCompat } from "@eslint/eslintrc";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const compat = new FlatCompat({
    baseDirectory: __dirname,
    recommendedConfig: js.configs.recommended,
    allConfig: js.configs.all,
});
export default [
    {
        ignores: [
            "**/node_modules/*",
            "**/jest.config.js",
            "src/lib/coverage",
            "src/lib/browsertest",
            "**/test.ts",
            "**/tests.ts",
            "**/**test.ts",
            "**/**.test.ts",
            "**/esbuild.*.mjs",
            "**/terser.*.mjs",
            "**/node_modules",
            "**/build",
            "**/.eslintrc.js.bak",
            "src/lib/src/patches/pouchdb-utils",
            "**/esbuild.config.mjs",
            "**/rollup.config.js",
            "modules/octagonal-wheels/rollup.config.js",
            "modules/octagonal-wheels/dist/**/*",
            "src/lib/test",
            "src/lib/src/cli",
            "**/main.js",
            "src/apps/**/*",
            ".prettierrc.*.mjs",
            ".prettierrc.mjs",
            "*.config.mjs"
        ],
    },
    ...compat.extends(
        "eslint:recommended",
        "plugin:@typescript-eslint/eslint-recommended",
        "plugin:@typescript-eslint/recommended"
    ),
    {
        plugins: {
            "@typescript-eslint": typescriptEslint,
            svelte,
            import: fixupPluginRules(_import),
        },
        languageOptions: {
            parser: tsParser,
            ecmaVersion: 5,
            sourceType: "module",
            parserOptions: {
                project: ["tsconfig.json"],
            },
        },
        rules: {
            "no-unused-vars": "off",
            "@typescript-eslint/no-unused-vars": [
                "error",
                {
                    args: "none",
                },
            ],
            "no-unused-labels": "off",
            "@typescript-eslint/ban-ts-comment": "off",
            "no-prototype-builtins": "off",
            "@typescript-eslint/no-empty-function": "off",
            "require-await": "error",
            "@typescript-eslint/require-await": "warn",
            "@typescript-eslint/no-misused-promises": "warn",
            "@typescript-eslint/no-floating-promises": "warn",
            "no-async-promise-executor": "warn",
            "@typescript-eslint/no-explicit-any": "off",
            "@typescript-eslint/no-unnecessary-type-assertion": "error",
            "no-constant-condition": [
                "error",
                {
                    checkLoops: false,
                },
            ],
        },
    },
];

1
example.env Normal file
View File

@@ -0,0 +1 @@
PATHS_TEST_INSTALL=your-vault-plugin-path:and-another-path

View File

@@ -1,10 +0,0 @@
{
    "id": "obsidian-livesync",
    "name": "Self-hosted LiveSync",
    "version": "0.24.0",
    "minAppVersion": "0.9.12",
    "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
    "author": "vorotamoroz",
    "authorUrl": "https://github.com/vrtmrz",
    "isDesktopOnly": false
}

View File

@@ -1,7 +1,7 @@
{
    "id": "obsidian-livesync",
    "name": "Self-hosted LiveSync",
    "version": "0.24.11",
    "version": "0.25.43-patched-8",
    "minAppVersion": "0.9.12",
    "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
    "author": "vorotamoroz",

22128
package-lock.json generated

File diff suppressed because it is too large Load Diff

View File

@@ -1,30 +1,73 @@
{
"name": "obsidian-livesync",
"version": "0.24.11",
"version": "0.25.43-patched-8",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
"scripts": {
"bakei18n": "npx tsx ./src/lib/_tools/bakei18n.ts",
"dev": "node esbuild.config.mjs",
"build": "npm run bakei18n && node esbuild.config.mjs production",
"bakei18n": "npm run i18n:yaml2json && npm run i18n:bakejson",
"i18n:bakejson": "npx tsx ./src/lib/_tools/bakei18n.ts",
"i18n:yaml2json": "npx tsx ./src/lib/_tools/yaml2json.ts",
"i18n:json2yaml": "npx tsx ./src/lib/_tools/json2yaml.ts",
"prettyjson": "prettier --config ./.prettierrc.mjs ./src/lib/src/common/messagesJson/*.json --write --log-level error",
"postbakei18n": "prettier --config ./.prettierrc.mjs ./src/lib/src/common/messages/*.ts --write --log-level error",
"posti18n:yaml2json": "npm run prettyjson",
"predev": "npm run bakei18n",
"dev": "node --env-file=.env esbuild.config.mjs",
"prebuild": "npm run bakei18n",
"build": "node esbuild.config.mjs production",
"buildVite": "npx dotenv-cli -e .env -- vite build --mode production",
"buildViteOriginal": "npx dotenv-cli -e .env -- vite build --mode original",
"buildDev": "node esbuild.config.mjs dev",
"lint": "eslint src",
"svelte-check": "svelte-check --tsconfig ./tsconfig.json",
"tsc-check": "tsc --noEmit",
"pretty": "npm run prettyNoWrite -- --write --log-level error",
"prettyCheck": "npm run prettyNoWrite -- --check",
"prettyNoWrite": "prettier --config ./.prettierrc \"**/*.js\" \"**/*.ts\" \"**/*.json\" ",
"check": "npm run lint && npm run svelte-check && npm run tsc-check"
"prettyNoWrite": "prettier --config ./.prettierrc.mjs \"**/*.js\" \"**/*.ts\" \"**/*.json\" ",
"check": "npm run lint && npm run svelte-check",
"unittest": "deno test -A --no-check --coverage=cov_profile --v8-flags=--expose-gc --trace-leaks ./src/",
"test": "vitest run",
"test:unit": "vitest run --config vitest.config.unit.ts",
"test:unit:coverage": "vitest run --config vitest.config.unit.ts --coverage",
"test:install-playwright": "npx playwright install chromium",
"test:install-dependencies": "npm run test:install-playwright",
"test:coverage": "vitest run --coverage",
"test:docker-couchdb:up": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/couchdb-start.sh",
"test:docker-couchdb:init": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/couchdb-init.sh",
"test:docker-couchdb:start": "npm run test:docker-couchdb:up && sleep 5 && npm run test:docker-couchdb:init",
"test:docker-couchdb:down": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/couchdb-stop.sh",
"test:docker-couchdb:stop": "npm run test:docker-couchdb:down",
"test:docker-s3:up": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/minio-start.sh",
"test:docker-s3:init": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/minio-init.sh",
"test:docker-s3:start": "npm run test:docker-s3:up && sleep 3 && npm run test:docker-s3:init",
"test:docker-s3:down": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/minio-stop.sh",
"test:docker-s3:stop": "npm run test:docker-s3:down",
"test:docker-p2p:up": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/p2p-start.sh",
"test:docker-p2p:init": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/p2p-init.sh",
"test:docker-p2p:start": "npm run test:docker-p2p:up && sleep 3 && npm run test:docker-p2p:init",
"test:docker-p2p:down": "npx dotenv-cli -e .env -e .test.env -- ./test/shell/p2p-stop.sh",
"test:docker-p2p:stop": "npm run test:docker-p2p:down",
"test:docker-all:up": "npm run test:docker-couchdb:up && npm run test:docker-s3:up && npm run test:docker-p2p:up",
"test:docker-all:init": "npm run test:docker-couchdb:init && npm run test:docker-s3:init && npm run test:docker-p2p:init",
"test:docker-all:down": "npm run test:docker-couchdb:down && npm run test:docker-s3:down && npm run test:docker-p2p:down",
"test:docker-all:start": "npm run test:docker-all:up && sleep 5 && npm run test:docker-all:init",
"test:docker-all:stop": "npm run test:docker-all:down",
"test:full": "npm run test:docker-all:start && vitest run --coverage && npm run test:docker-all:stop"
},
"keywords": [],
"author": "vorotamoroz",
"license": "MIT",
"devDependencies": {
"@chialab/esbuild-plugin-worker": "^0.18.1",
"@tsconfig/svelte": "^5.0.4",
"@eslint/compat": "^1.2.7",
"@eslint/eslintrc": "^3.3.0",
"@eslint/js": "^9.21.0",
"@sveltejs/vite-plugin-svelte": "^6.2.1",
"@tsconfig/svelte": "^5.0.5",
"@types/deno": "^2.3.0",
"@types/diff-match-patch": "^1.0.36",
"@types/node": "^22.5.4",
"@types/node": "^22.13.8",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
"@types/pouchdb-adapter-idb": "^6.1.7",
@@ -33,21 +76,30 @@
"@types/pouchdb-mapreduce": "^6.1.10",
"@types/pouchdb-replication": "^6.4.7",
"@types/transform-pouch": "^1.0.6",
"@typescript-eslint/eslint-plugin": "^8.23.0",
"@typescript-eslint/parser": "^8.23.0",
"builtin-modules": "^4.0.0",
"esbuild": "0.24.2",
"esbuild-svelte": "^0.9.0",
"eslint": "^8.57.0",
"eslint-config-airbnb-base": "^15.0.0",
"eslint-plugin-import": "^2.31.0",
"@typescript-eslint/eslint-plugin": "8.46.2",
"@typescript-eslint/parser": "8.46.2",
"@vitest/browser": "^4.0.16",
"@vitest/browser-playwright": "^4.0.16",
"@vitest/coverage-v8": "^4.0.16",
"builtin-modules": "5.0.0",
"dotenv": "^17.2.3",
"dotenv-cli": "^11.0.0",
"esbuild": "0.25.0",
"esbuild-plugin-inline-worker": "^0.1.1",
"esbuild-svelte": "^0.9.3",
"eslint": "^9.38.0",
"eslint-plugin-import": "^2.32.0",
"eslint-plugin-svelte": "^3.12.4",
"events": "^3.3.0",
"obsidian": "^1.7.2",
"postcss": "^8.5.1",
"glob": "^11.0.3",
"obsidian": "^1.8.7",
"playwright": "^1.57.0",
"postcss": "^8.5.3",
"postcss-load-config": "^6.0.1",
"pouchdb-adapter-http": "^9.0.0",
"pouchdb-adapter-idb": "^9.0.0",
"pouchdb-adapter-indexeddb": "^9.0.0",
"pouchdb-adapter-memory": "^9.0.0",
"pouchdb-core": "^9.0.0",
"pouchdb-errors": "^9.0.0",
"pouchdb-find": "^9.0.0",
@@ -55,28 +107,35 @@
"pouchdb-merge": "^9.0.0",
"pouchdb-replication": "^9.0.0",
"pouchdb-utils": "^9.0.0",
"prettier": "^3.4.2",
"svelte": "^5.19.7",
"prettier": "3.5.2",
"rollup-plugin-copy": "^3.5.0",
"svelte": "5.41.1",
"svelte-check": "^4.3.3",
"svelte-preprocess": "^6.0.3",
"terser": "^5.37.0",
"terser": "^5.39.0",
"transform-pouch": "^2.0.0",
"tslib": "^2.8.1",
"tsx": "^4.19.2",
"typescript": "^5.7.3"
"tsx": "^4.20.6",
"typescript": "5.9.3",
"vite": "^7.3.0",
"vitest": "^4.0.16",
"webdriverio": "^9.23.0",
"yaml": "^2.8.0"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.645.0",
"@smithy/fetch-http-handler": "^3.2.4",
"@smithy/protocol-http": "^4.1.0",
"@smithy/querystring-builder": "^3.0.3",
"@aws-sdk/client-s3": "^3.808.0",
"@smithy/fetch-http-handler": "^5.0.2",
"@smithy/md5-js": "^4.0.2",
"@smithy/middleware-apply-body-checksum": "^4.1.0",
"@smithy/protocol-http": "^5.1.0",
"@smithy/querystring-builder": "^4.0.2",
"diff-match-patch": "^1.0.5",
"esbuild-plugin-inline-worker": "^0.1.1",
"fflate": "^0.8.2",
"idb": "^8.0.2",
"minimatch": "^10.0.1",
"octagonal-wheels": "^0.1.23",
"svelte-check": "^4.1.4",
"trystero": "^0.20.0",
"idb": "^8.0.3",
"minimatch": "^10.0.2",
"octagonal-wheels": "^0.1.45",
"qrcode-generator": "^1.4.4",
"trystero": "^0.22.0",
"xxhash-wasm-102": "npm:xxhash-wasm@^1.0.2"
}
}

View File

@@ -66,7 +66,7 @@
"outputs": [],
"source": [
"# see https://fly.io/docs/reference/regions/\n",
"region = \"nrt/Tokyo, Japan\" #@param [\"ams/Amsterdam, Netherlands\",\"arn/Stockholm, Sweden\",\"atl/Atlanta, Georgia (US)\",\"bog/Bogotá, Colombia\",\"bos/Boston, Massachusetts (US)\",\"cdg/Paris, France\",\"den/Denver, Colorado (US)\",\"dfw/Dallas, Texas (US)\",\"ewr/Secaucus, NJ (US)\",\"eze/Ezeiza, Argentina\",\"gdl/Guadalajara, Mexico\",\"gig/Rio de Janeiro, Brazil\",\"gru/Sao Paulo, Brazil\",\"hkg/Hong Kong, Hong Kong\",\"iad/Ashburn, Virginia (US)\",\"jnb/Johannesburg, South Africa\",\"lax/Los Angeles, California (US)\",\"lhr/London, United Kingdom\",\"mad/Madrid, Spain\",\"mia/Miami, Florida (US)\",\"nrt/Tokyo, Japan\",\"ord/Chicago, Illinois (US)\",\"otp/Bucharest, Romania\",\"phx/Phoenix, Arizona (US)\",\"qro/Querétaro, Mexico\",\"scl/Santiago, Chile\",\"sea/Seattle, Washington (US)\",\"sin/Singapore, Singapore\",\"sjc/San Jose, California (US)\",\"syd/Sydney, Australia\",\"waw/Warsaw, Poland\",\"yul/Montreal, Canada\",\"yyz/Toronto, Canada\" ] {allow-input: true}\n",
"region = \"nrt/Tokyo, Japan\" #@param [\"jnb/Johannesburg, South Africa\",\"bom/Mumbai, India\",\"sin/Singapore, Singapore\",\"syd/Sydney, Australia\",\"nrt/Tokyo, Japan\",\"ams/Amsterdam, Netherlands\",\"fra/Frankfurt, Germany\",\"lhr/London, United Kingdom\",\"cdg/Paris, France\",\"arn/Stockholm, Sweden\",\"iad/Ashburn, Virginia (US)\",\"ord/Chicago, Illinois (US)\",\"dfw/Dallas, Texas (US)\",\"lax/Los Angeles, California (US)\",\"sjc/San Jose, California (US)\",\"ewr/Secaucus, NJ (US)\",\"yyz/Toronto, Canada\",\"gru/Sao Paulo, Brazil\"] {allow-input: true}\n",
"%env region={region.split(\"/\")[0]}\n",
"#%env appame=\n",
"#%env username=\n",

src/apps/webpeer/.gitignore vendored Normal file (24 lines)
View File

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

View File

@@ -0,0 +1,30 @@
# A pseudo client for Self-hosted LiveSync Peer-to-Peer Sync mode
## What is it for?
This is a pseudo client for the Self-hosted LiveSync Peer-to-Peer Sync mode. It is a simple, purely client-side web application that connects to Self-hosted LiveSync over peer-to-peer.
It runs in any browser, so if you leave it open on some device, it can replace your existing remote servers such as CouchDB.
> [!IMPORTANT]
> Of course, it has not been fully tested. Rather, it was created to be tested.
This pseudo client actually receives data from other devices, and sends it when another device requests it. However, it does not store **files** in local storage. If you want to purge the data, clear the browser's cache, IndexedDB, local storage, and so on.
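If you want to script that cleanup, a minimal sketch could look like the following (assuming the browser exposes `indexedDB.databases()`; nothing here is part of this project's API):
```ts
// Illustrative cleanup sketch: delete every IndexedDB database of this origin,
// then clear localStorage. Browsers without indexedDB.databases() need the
// known database names deleted directly instead.
async function purgeLocalData(): Promise<void> {
    const dbs = (await indexedDB.databases?.()) ?? [];
    for (const db of dbs) {
        if (db.name) indexedDB.deleteDatabase(db.name);
    }
    localStorage.clear();
}
```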
## How to use it?
We can build the application by running the following command:
```bash
$ deno task build
```
Then, open `dist/index.html` in the browser. It can be configured in the same way as Self-hosted LiveSync (the same components are used[^1]).
## Some notes
This application will be published on GitHub Pages later, so you will be able to use it without building it. However, that deployment shares its origin with everyone; hence, an application that you have built and deployed yourself would be more secure.
[^1]: Congrats! I made it modular. Finally...

src/apps/webpeer/deno.lock generated Normal file (1101 lines)

File diff suppressed because it is too large.

View File

@@ -0,0 +1,17 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="icon.svg" />
<link rel="manifest" href="manifest.json" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Peer-to-Peer Daemon on Browser</title>
</head>
<body>
<div id="app"></div>
<script type="module" src="./src/main.ts"></script>
</body>
</html>

View File

@@ -0,0 +1,26 @@
{
"name": "webpeer",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview",
"check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json"
},
"dependencies": {},
"devDependencies": {
"eslint-plugin-svelte": "^3.12.4",
"@sveltejs/vite-plugin-svelte": "^6.2.1",
"@tsconfig/svelte": "^5.0.5",
"svelte": "5.41.1",
"svelte-check": "^4.3.3",
"typescript": "5.9.3",
"vite": "^7.3.0"
},
"imports": {
"../../src/worker/bgWorker.ts": "../../src/worker/bgWorker.mock.ts",
"@lib/worker/bgWorker.ts": "@lib/worker/bgWorker.mock.ts"
}
}

View File

@@ -0,0 +1,52 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="512"
height="512"
viewBox="0 0 511.99998 511.99998"
version="1.1"
id="svg1"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs1" />
<g
id="layer1"
transform="translate(-22.694448,-28.922305)">
<g
id="g4"
transform="matrix(4.6921194,0,0,4.6921194,-266.26061,-494.11652)">
<rect
style="fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:0.2366;stroke-opacity:1"
id="rect2"
width="109.11913"
height="109.11913"
x="61.583057"
y="111.47176" />
<g
id="g3"
transform="matrix(0.77702959,0,0,0.77702959,22.523192,34.973874)">
<path
d="m 104.50787,75.245039 h -3.77394 l -10.90251,-29.352906 c 25.15963,-14.257127 33.96551,-46.12600067 20.12771,-71.285639 -14.25713,-25.159637 -46.126,-33.96551 -71.28564,-20.12771 -12.16049,6.709237 -21.38569,18.450401 -24.74031,31.868875 l -38.99744,-4.193274 c -2.93529,-18.4504 -20.12771,-31.449546 -38.578109,-28.514255 -16.773091,2.515964 -28.933582,16.773091 -28.933582,33.546184 0,5.8705823 1.677309,11.7411643 4.6126,17.1924183 l -46.964659,46.1260007 c -8.80587,-6.709236 -19.70838,-10.483182 -31.03022,-10.483182 -28.93358,0 -52.41591,23.482328 -52.41591,52.415908 0,28.933581 23.48233,52.415911 52.41591,52.415911 l 10.48319,62.89909 c -17.19242,7.54789 -25.15964,27.6756 -17.61175,44.86802 7.54789,17.19242 27.6756,25.15964 44.86802,17.61175 15.93444,-6.70924 23.90165,-24.32098 19.28905,-41.09408 l 36.900806,-19.28905 c 23.901654,26.41762 64.9957237,28.51425 91.832674,4.6126 13.41847,-12.16049 21.38569,-29.77224 21.38569,-47.80331 0,-4.6126 -0.41933,-9.64453 -1.67731,-14.25713 l 40.67475,-20.96636 c 12.57982,14.25713 33.96551,15.51511 48.22264,2.93529 14.25712,-12.57982 15.51511,-33.96551 2.93529,-48.222641 -7.96722,-6.70924 -17.19242,-10.90251 -26.83695,-10.90251 z m -223.92077,140.055311 c -5.45125,-5.45125 -12.99914,-8.80587 -20.54703,-9.64452 l -10.48319,-62.8991 c 10.06386,-3.35461 18.86973,-9.64452 25.15964,-18.03107 l 38.997438,20.54704 c -6.289909,16.77309 -5.031927,35.64282 3.354619,51.57725 z m 41.094077,-85.54276 -38.578107,-20.12771 c 1.67731,-5.45126 2.51596,-10.902511 2.51596,-16.773091 0,-10.90251 -3.35462,-21.38569 -9.64453,-30.191565 l 46.964658,-45.706673 c 15.934437,9.644527 36.900802,5.031927 46.545332,-10.9025087 1.25798,-2.096637 2.09663,-4.193273 2.93529,-6.28990997 l 38.99744,4.19327297 c 0.83865,15.0957817 8.38654,29.3529097 20.54703,38.5781097 l -34.3848403,64.157075 c -27.2562697,-10.902511 -58.7058147,-1.25798 -75.8982327,23.063 z m 148.861183,-20.12771 c 0,2.51596 0.41933,5.03193 1.25798,7.54789 l -38.57811,20.12771 c -4.6126,-9.2252 -11.74116,-16.77309 -20.12771,-22.64367 l 34.38484,-64.157077 c 5.45126,2.096637 11.32184,2.935291 17.19242,2.935291 3.35462,0 6.70924,-0.419327 10.06385,-0.838654 l 10.90251,29.352909 c -10.06385,5.87058 -15.51511,16.35376 -15.09578,27.675601 z"
id="path1-1"
style="overflow:hidden;stroke-width:4.19327"
transform="matrix(0.26458333,0,0,0.26458333,130.85167,139.42444)" />
<path
id="path1-8-1-7"
style="fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:0.073263;stroke-opacity:1"
d="m 140.38615,132.15285 c -0.55386,0 -1.00708,0.45307 -1.00708,1.00708 v 12.05537 c 0,0.5546 0.45322,1.00707 1.00708,1.00707 h 0.504 v 1.51154 h 2.01461 v -1.51154 h 10.07399 v 1.51154 h 2.01461 v -1.51154 h 0.50354 c 0.55461,0 1.00754,-0.45247 1.00754,-1.00707 v -12.05537 c 0,-0.55401 -0.45293,-1.00708 -1.00754,-1.00708 z m 0.504,1.51108 h 14.10321 v 11.04783 h -14.10321 z m 1.00753,1.00754 v 9.03321 h 12.0886 v -9.03321 z m 3.52524,1.99776 c 1.3854,0 2.51906,1.13321 2.51906,2.51861 0,1.38467 -1.13366,2.51816 -2.51906,2.51816 -1.38467,0 -2.51771,-1.13349 -2.51771,-2.51816 0,-1.3854 1.13304,-2.51861 2.51771,-2.51861 z m 7.05183,0 c 0.27767,0 0.504,0.22706 0.504,0.504 v 4.02968 c 0,0.27694 -0.22633,0.50309 -0.504,0.50309 -0.27693,0 -0.50354,-0.22615 -0.50354,-0.50309 v -4.02968 c 0,-0.27694 0.22661,-0.504 0.50354,-0.504 z m -7.55492,0.504 v 0.60461 c -0.42786,0.15092 -0.75526,0.50306 -0.90692,0.90601 h -0.60461 v 1.00753 h 0.60461 c 0.15166,0.42859 0.47906,0.756 0.90692,0.90692 v 0.60461 h 1.00753 v -0.60461 c 0.42786,-0.15092 0.75509,-0.50397 0.90601,-0.90692 h 0.60462 v -1.00753 h -0.60462 c -0.15092,-0.42786 -0.47815,-0.75509 -0.90601,-0.90601 v -0.60461 z" />
<path
id="path1-8-1-7-3"
style="fill:#ffffff;fill-opacity:1"
d="m 2576.8666,993.66142 c -7.56,0 -13.7458,6.19115 -13.7458,13.75308 v 164.5449 c 0,7.57 6.1858,13.7458 13.7458,13.7458 h 6.8838 v 20.6369 h 27.499 v -20.6369 h 137.5019 v 20.6369 h 27.4989 v -20.6369 h 6.8693 c 7.57,0 13.7531,-6.1758 13.7531,-13.7458 v -164.5449 c 0,-7.56193 -6.1831,-13.75308 -13.7531,-13.75308 z m 6.8838,20.62968 h 192.4998 v 150.799 h -192.4998 z m 13.7531,13.753 v 123.2929 h 164.9936 v -123.2929 z m 48.1141,27.2674 c 18.91,0 34.3827,15.4654 34.3827,34.3754 0,18.9 -15.4727,34.3755 -34.3827,34.3755 -18.9,0 -34.3682,-15.4755 -34.3682,-34.3755 0,-18.91 15.4682,-34.3754 34.3682,-34.3754 z m 96.2499,0 c 3.79,0 6.8838,3.0965 6.8838,6.8765 v 55.0051 c 0,3.78 -3.0938,6.8693 -6.8838,6.8693 -3.78,0 -6.8693,-3.0893 -6.8693,-6.8693 v -55.0051 c 0,-3.78 3.0893,-6.8765 6.8693,-6.8765 z m -103.1192,6.8765 v 8.2519 c -5.84,2.06 -10.3078,6.8705 -12.3778,12.3705 h -8.2518 v 13.7531 h 8.2518 c 2.07,5.85 6.5378,10.3178 12.3778,12.3778 v 8.2518 h 13.7531 v -8.2518 c 5.84,-2.06 10.3105,-6.8778 12.3705,-12.3778 h 8.2446 v -13.7531 h -8.2446 c -2.06,-5.84 -6.5305,-10.3105 -12.3705,-12.3705 v -8.2519 z"
transform="matrix(0.06289731,0,0,0.06289731,-82.022365,94.831671)" />
<path
id="path1-8-1"
style="fill:#ffffff;fill-opacity:1"
d="m 2576.8708,993.66021 c -7.56,0 -13.7505,6.18852 -13.7505,13.75049 v 164.5474 c 0,7.57 6.1905,13.7454 13.7505,13.7454 h 6.8778 v 20.6333 h 27.5009 v -20.6333 h 137.4996 v 20.6333 h 27.5009 v -20.6333 h 6.8727 c 7.57,0 13.7504,-6.1754 13.7504,-13.7454 v -164.5474 c 0,-7.56197 -6.1804,-13.75049 -13.7504,-13.75049 z m 6.8778,20.62819 h 192.5014 v 150.797 h -192.5014 z m 13.7504,13.7556 v 123.296 h 165.0005 v -123.296 z m 48.119,27.2668 c 18.91,0 34.3838,15.4687 34.3838,34.3787 0,18.9 -15.4738,34.3685 -34.3838,34.3685 -18.9,0 -34.3685,-15.4685 -34.3685,-34.3685 0,-18.91 15.4685,-34.3787 34.3685,-34.3787 z m 96.2533,0 c 3.79,0 6.8778,3.0977 6.8778,6.8777 v 55.0019 c 0,3.78 -3.0878,6.8676 -6.8778,6.8676 -3.78,0 -6.8727,-3.0876 -6.8727,-6.8676 v -55.0019 c 0,-3.78 3.0927,-6.8777 6.8727,-6.8777 z m -103.1209,6.8777 v 8.2523 c -5.84,2.06 -10.311,6.8709 -12.381,12.3709 h -8.2472 v 13.7505 h 8.2472 c 2.07,5.85 6.541,10.3159 12.381,12.3759 v 8.2523 h 13.7505 v -8.2523 c 5.84,-2.06 10.3108,-6.8759 12.3708,-12.3759 h 8.2473 v -13.7505 h -8.2473 c -2.06,-5.84 -6.5308,-10.3109 -12.3708,-12.3709 v -8.2523 z"
transform="matrix(0.08943055,0,0,0.08943055,-115.49313,85.768735)" />
</g>
</g>
</g>
</svg>

After: SVG icon (512x512), 7.0 KiB

View File

@@ -0,0 +1,26 @@
{
"name": "WebPeer - Pseudo client for Self-hosted LiveSync Peer-to-Peer Replication",
"short_name": "WepPeer",
"description": "A web-based pseudo peer-to-peer replication client using as like server for background sync.",
"start_url": "./",
"display": "standalone",
"background_color": "#fff",
"theme_color": "#fff",
"orientation": "any",
"icons": [
{
"src": "./icon.svg",
"sizes": "512x512",
"type": "image/svg+xml",
"maskable": true
}
],
"additional_icons": [
{
"src": "./icon.svg",
"sizes": "512x512",
"type": "image/svg+xml",
"maskable": true
}
]
}

View File

@@ -0,0 +1,5 @@
<script lang="ts">
import SyncMain from "./SyncMain.svelte";
</script>
<SyncMain></SyncMain>

View File

@@ -0,0 +1,23 @@
import { LOG_LEVEL_VERBOSE } from "@lib/common/types";
import { defaultLoggerEnv, setGlobalLogFunction } from "@lib/common/logger";
import { writable } from "svelte/store";
export const logs = writable([] as string[]);
let _logs = [] as string[];
const maxLines = 10000;
setGlobalLogFunction((msg, level) => {
console.log(msg);
const msgstr = typeof msg === "string" ? msg : JSON.stringify(msg);
const strLog = `${new Date().toISOString()}\u2001${msgstr}`;
_logs.push(strLog);
if (_logs.length > maxLines) {
_logs = _logs.slice(_logs.length - maxLines);
}
logs.set(_logs);
});
defaultLoggerEnv.minLogLevel = LOG_LEVEL_VERBOSE;
export const storeP2PStatusLine = writable("");

View File

@@ -0,0 +1,370 @@
import { PouchDB } from "@lib/pouchdb/pouchdb-browser";
import {
type EntryDoc,
type LOG_LEVEL,
type P2PSyncSetting,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
P2P_DEFAULT_SETTINGS,
REMOTE_P2P,
} from "@lib/common/types";
import { eventHub } from "@lib/hub/hub";
import type { Confirm } from "@lib/interfaces/Confirm";
import { LOG_LEVEL_INFO, Logger } from "@lib/common/logger";
import { storeP2PStatusLine } from "./CommandsShim";
import {
EVENT_P2P_PEER_SHOW_EXTRA_MENU,
type CommandShim,
type PeerStatus,
type PluginShim,
} from "@lib/replication/trystero/P2PReplicatorPaneCommon";
import {
closeP2PReplicator,
openP2PReplicator,
P2PLogCollector,
type P2PReplicatorBase,
} from "@lib/replication/trystero/P2PReplicatorCore";
import type { SimpleStore } from "octagonal-wheels/databases/SimpleStoreBase";
import { reactiveSource } from "octagonal-wheels/dataobject/reactive_v2";
import { EVENT_SETTING_SAVED } from "@lib/events/coreEvents";
import { unique } from "octagonal-wheels/collection";
import { BrowserServiceHub } from "@lib/services/BrowserServices";
import { TrysteroReplicator } from "@lib/replication/trystero/TrysteroReplicator";
import { SETTING_KEY_P2P_DEVICE_NAME } from "@lib/common/types";
import { ServiceContext } from "@lib/services/base/ServiceBase";
import type { InjectableServiceHub } from "@lib/services/InjectableServices";
import { Menu } from "@lib/services/implements/browser/Menu";
import type { InjectableVaultServiceCompat } from "@lib/services/implements/injectable/InjectableVaultService";
import { SimpleStoreIDBv2 } from "octagonal-wheels/databases/SimpleStoreIDBv2";
import type { InjectableAPIService } from "@/lib/src/services/implements/injectable/InjectableAPIService";
import type { BrowserAPIService } from "@/lib/src/services/implements/browser/BrowserAPIService";
function addToList(item: string, list: string) {
return unique(
list
.split(",")
.map((e) => e.trim())
.concat(item)
.filter((p) => p)
).join(",");
}
function removeFromList(item: string, list: string) {
return list
.split(",")
.map((e) => e.trim())
.filter((p) => p !== item)
.filter((p) => p)
.join(",");
}
export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
storeP2PStatusLine = reactiveSource("");
plugin!: PluginShim;
// environment!: IEnvironment;
confirm!: Confirm;
// simpleStoreAPI!: ISimpleStoreAPI;
db?: PouchDB.Database<EntryDoc>;
services: InjectableServiceHub<ServiceContext>;
getDB() {
if (!this.db) {
throw new Error("DB not initialized");
}
return this.db;
}
_simpleStore!: SimpleStore<any>;
async closeDB() {
if (this.db) {
await this.db.close();
this.db = undefined;
}
}
constructor() {
const browserServiceHub = new BrowserServiceHub<ServiceContext>();
this.services = browserServiceHub;
(this.services.API as BrowserAPIService<ServiceContext>).getSystemVaultName.setHandler(
() => "p2p-livesync-web-peer"
);
}
async init() {
// const { simpleStoreAPI } = await getWrappedSynchromesh();
// this.confirm = confirm;
this.confirm = this.services.UI.confirm;
// this.environment = environment;
if (this.db) {
try {
await this.closeDB();
} catch (ex) {
Logger("Error closing db", LOG_LEVEL_VERBOSE);
Logger(ex, LOG_LEVEL_VERBOSE);
}
}
const repStore = SimpleStoreIDBv2.open<any>("p2p-livesync-web-peer");
this._simpleStore = repStore;
let _settings = (await repStore.get("settings")) || ({ ...P2P_DEFAULT_SETTINGS } as P2PSyncSetting);
this.services.setting.settings = _settings as any;
this.plugin = {
saveSettings: async () => {
await repStore.set("settings", _settings);
eventHub.emitEvent(EVENT_SETTING_SAVED, _settings);
},
get settings() {
return _settings;
},
set settings(newSettings: P2PSyncSetting) {
_settings = { ..._settings, ...newSettings };
},
rebuilder: null,
services: this.services,
// $$scheduleAppReload: () => {},
// $$getVaultName: () => "p2p-livesync-web-peer",
};
// const deviceName = this.getDeviceName();
const database_name = this.settings.P2P_AppID + "-" + this.settings.P2P_roomID + "p2p-livesync-web-peer";
this.db = new PouchDB<EntryDoc>(database_name);
setTimeout(() => {
if (this.settings.P2P_AutoStart && this.settings.P2P_Enabled) {
void this.open();
}
}, 1000);
return this;
}
get settings() {
return this.plugin.settings;
}
_log(msg: any, level?: LOG_LEVEL): void {
Logger(msg, level);
}
_notice(msg: string, key?: string): void {
Logger(msg, LOG_LEVEL_NOTICE, key);
}
getSettings(): P2PSyncSetting {
return this.settings;
}
simpleStore(): SimpleStore<any> {
return this._simpleStore;
}
handleReplicatedDocuments(docs: EntryDoc[]): Promise<boolean> {
// No op. This is a client and does not need to process the docs
return Promise.resolve(true);
}
getPluginShim() {
return {};
}
getConfig(key: string) {
const vaultName = this.services.vault.getVaultName();
const dbKey = `${vaultName}-${key}`;
return localStorage.getItem(dbKey);
}
setConfig(key: string, value: string) {
const vaultName = this.services.vault.getVaultName();
const dbKey = `${vaultName}-${key}`;
localStorage.setItem(dbKey, value);
}
getDeviceName(): string {
return this.getConfig(SETTING_KEY_P2P_DEVICE_NAME) ?? this.plugin.services.vault.getVaultName();
}
getPlatform(): string {
return "pseudo-replicator";
}
m?: Menu;
afterConstructor(): void {
eventHub.onEvent(EVENT_P2P_PEER_SHOW_EXTRA_MENU, ({ peer, event }) => {
if (this.m) {
this.m.hide();
}
this.m = new Menu()
.addItem((item) => item.setTitle("📥 Only Fetch").onClick(() => this.replicateFrom(peer)))
.addItem((item) => item.setTitle("📤 Only Send").onClick(() => this.replicateTo(peer)))
.addSeparator()
// .addItem((item) => {
// item.setTitle("🔧 Get Configuration").onClick(async () => {
// await this.getRemoteConfig(peer);
// });
// })
// .addSeparator()
.addItem((item) => {
const mark = peer.syncOnConnect ? "checkmark" : null;
item.setTitle("Toggle Sync on connect")
.onClick(async () => {
await this.toggleProp(peer, "syncOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.watchOnConnect ? "checkmark" : null;
item.setTitle("Toggle Watch on connect")
.onClick(async () => {
await this.toggleProp(peer, "watchOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.syncOnReplicationCommand ? "checkmark" : null;
item.setTitle("Toggle Sync on `Replicate now` command")
.onClick(async () => {
await this.toggleProp(peer, "syncOnReplicationCommand");
})
.setIcon(mark);
});
void this.m.showAtPosition({ x: event.x, y: event.y });
});
this.p2pLogCollector.p2pReplicationLine.onChanged((line) => {
storeP2PStatusLine.set(line.value);
});
}
_replicatorInstance?: TrysteroReplicator;
p2pLogCollector = new P2PLogCollector();
async open() {
await openP2PReplicator(this);
}
async close() {
await closeP2PReplicator(this);
}
enableBroadcastCastings() {
return this?._replicatorInstance?.enableBroadcastChanges();
}
disableBroadcastCastings() {
return this?._replicatorInstance?.disableBroadcastChanges();
}
async initialiseP2PReplicator(): Promise<TrysteroReplicator> {
await this.init();
try {
if (this._replicatorInstance) {
await this._replicatorInstance.close();
this._replicatorInstance = undefined;
}
if (!this.settings.P2P_AppID) {
this.settings.P2P_AppID = P2P_DEFAULT_SETTINGS.P2P_AppID;
}
const getInitialDeviceName = () =>
this.getConfig(SETTING_KEY_P2P_DEVICE_NAME) || this.services.vault.getVaultName();
const getSettings = () => this.settings;
const store = () => this.simpleStore();
const getDB = () => this.getDB();
const getConfirm = () => this.confirm;
const getPlatform = () => this.getPlatform();
const env = {
get db() {
return getDB();
},
get confirm() {
return getConfirm();
},
get deviceName() {
return getInitialDeviceName();
},
get platform() {
return getPlatform();
},
get settings() {
return getSettings();
},
processReplicatedDocs: async (docs: EntryDoc[]): Promise<void> => {
await this.handleReplicatedDocuments(docs);
// No op. This is a client and does not need to process the docs
},
get simpleStore() {
return store();
},
};
this._replicatorInstance = new TrysteroReplicator(env);
return this._replicatorInstance;
} catch (e) {
this._log(
e instanceof Error ? e.message : "Something went wrong while initialising the P2P Replicator",
LOG_LEVEL_INFO
);
this._log(e, LOG_LEVEL_VERBOSE);
throw e;
}
}
get replicator() {
return this._replicatorInstance!;
}
async replicateFrom(peer: PeerStatus) {
await this.replicator.replicateFrom(peer.peerId);
}
async replicateTo(peer: PeerStatus) {
await this.replicator.requestSynchroniseToPeer(peer.peerId);
}
async getRemoteConfig(peer: PeerStatus) {
Logger(
`Requesting remote config for ${peer.name}. Please input the passphrase on the remote device`,
LOG_LEVEL_NOTICE
);
const remoteConfig = await this.replicator.getRemoteConfig(peer.peerId);
if (remoteConfig) {
Logger(`Remote config for ${peer.name} is retrieved successfully`);
const DROP = "Yes, and drop local database";
const KEEP = "Yes, but keep local database";
const CANCEL = "No, cancel";
const yn = await this.confirm.askSelectStringDialogue(
`Do you really want to apply the remote config? This will overwrite your current config immediately and restart.
And you can also drop the local database to rebuild from the remote device.`,
[DROP, KEEP, CANCEL] as const,
{
defaultAction: CANCEL,
title: "Apply Remote Config",
}
);
if (yn === DROP || yn === KEEP) {
if (yn === DROP) {
if (remoteConfig.remoteType !== REMOTE_P2P) {
const yn2 = await this.confirm.askYesNoDialog(
`Do you want to set the remote type to "P2P Sync" to rebuild by "P2P replication"?`,
{
title: "Rebuild from remote device",
}
);
if (yn2 === "yes") {
remoteConfig.remoteType = REMOTE_P2P;
remoteConfig.P2P_RebuildFrom = peer.name;
}
}
}
this.plugin.settings = remoteConfig;
await this.plugin.saveSettings();
if (yn === DROP) {
await this.plugin.rebuilder.scheduleFetch();
} else {
await this.plugin.services.appLifecycle.scheduleRestart();
}
} else {
Logger(`Cancelled\nRemote config for ${peer.name} is not applied`, LOG_LEVEL_NOTICE);
}
} else {
Logger(`Cannot retrieve remote config for ${peer.peerId}`);
}
}
async toggleProp(peer: PeerStatus, prop: "syncOnConnect" | "watchOnConnect" | "syncOnReplicationCommand") {
const settingMap = {
syncOnConnect: "P2P_AutoSyncPeers",
watchOnConnect: "P2P_AutoWatchPeers",
syncOnReplicationCommand: "P2P_SyncOnReplication",
} as const;
const targetSetting = settingMap[prop];
if (peer[prop]) {
this.plugin.settings[targetSetting] = removeFromList(peer.name, this.plugin.settings[targetSetting]);
await this.plugin.saveSettings();
} else {
this.plugin.settings[targetSetting] = addToList(peer.name, this.plugin.settings[targetSetting]);
await this.plugin.saveSettings();
}
}
}
export const cmdSyncShim = new P2PReplicatorShim();
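// Usage sketch (illustrative, not part of the original module), mirroring what
// SyncMain.svelte does: initialise the shim, then start P2P replication.
export async function exampleStartReplication(): Promise<void> {
    const shim = await cmdSyncShim.init();
    await shim.open();
}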

View File

@@ -0,0 +1,112 @@
<script lang="ts">
import { storeP2PStatusLine, logs } from "./CommandsShim";
import P2PReplicatorPane from "@/features/P2PSync/P2PReplicator/P2PReplicatorPane.svelte";
import { onMount, tick } from "svelte";
import { cmdSyncShim } from "./P2PReplicatorShim";
import { eventHub } from "@lib/hub/hub";
import { EVENT_LAYOUT_READY } from "@lib/events/coreEvents";
let synchronised = $state(cmdSyncShim.init());
onMount(() => {
eventHub.emitEvent(EVENT_LAYOUT_READY);
return () => {
synchronised.then((e) => e.close());
};
});
let elP: HTMLDivElement;
logs.subscribe((log) => {
tick().then(() => elP?.scrollTo({ top: elP.scrollHeight }));
});
let statusLine = $state("");
storeP2PStatusLine.subscribe((status) => {
statusLine = status;
});
</script>
<main>
<div class="control">
{#await synchronised then cmdSync}
<P2PReplicatorPane plugin={cmdSync.plugin} {cmdSync}></P2PReplicatorPane>
{:catch error}
<p>{error.message}</p>
{/await}
</div>
<div class="log">
<div class="status">
{statusLine}
</div>
<div class="logslist" bind:this={elP}>
{#each $logs as log}
<p>{log}</p>
{/each}
</div>
</div>
</main>
<style>
main {
display: flex;
flex-direction: row;
flex-grow: 1;
max-height: 100vh;
box-sizing: border-box;
}
@media (max-width: 900px) {
main {
flex-direction: column;
}
}
@media (device-orientation: portrait) {
main {
flex-direction: column;
}
}
.log {
display: flex;
flex-direction: column;
justify-content: flex-start;
align-items: flex-start;
padding: 1em;
min-width: 50%;
}
@media (max-width: 900px) {
.log {
max-height: 50vh;
}
}
@media (device-orientation: portrait) {
.log {
max-height: 50vh;
}
}
.control {
padding: 1em 1em;
overflow-y: scroll;
flex-grow: 1;
}
.status {
flex-grow: 0;
/* max-height: 40px; */
/* height: 40px; */
flex-shrink: 0;
}
.logslist {
display: flex;
flex-direction: column;
justify-content: flex-start;
align-items: flex-start;
/* padding: 1em; */
width: 100%;
overflow-y: scroll;
flex-grow: 1;
flex-shrink: 1;
/* max-height: calc(100% - 40px); */
}
p {
margin: 0;
white-space: pre-wrap;
text-align: left;
word-break: break-all;
}
</style>

View File

@@ -0,0 +1,74 @@
<script lang="ts">
import { Menu } from "@/lib/src/services/implements/browser/Menu";
import { getDialogContext } from "@lib/services/implements/base/SvelteDialog";
let result = $state<string | boolean>("");
const context = getDialogContext();
async function testUI() {
const confirm = await context.services.confirm;
const ret = await confirm.askString("Your name", "What is your name?", "John Doe", false);
result = ret;
}
let resultPassword = $state<string | boolean>("");
async function testPassword() {
const confirm = await context.services.confirm;
const ret = await confirm.askString("passphrase", "?", "anythingonlyyouknow", true);
resultPassword = ret;
}
async function testMenu(event: MouseEvent) {
const m = new Menu()
.addItem((item) => item.setTitle("📥 Only Fetch").onClick(() => {}))
.addItem((item) => item.setTitle("📤 Only Send").onClick(() => {}))
.addSeparator()
.addItem((item) => {
item.setTitle("🔧 Get Configuration").onClick(async () => {
console.log("Get Configuration");
});
})
.addSeparator()
.addItem((item) => {
const mark = "checkmark";
item.setTitle("Toggle Sync on connect")
.onClick(async () => {
console.log("Toggle Sync on connect");
// await this.toggleProp(peer, "syncOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = null;
item.setTitle("Toggle Watch on connect")
.onClick(async () => {
console.log("Toggle Watch on connect");
// await this.toggleProp(peer, "watchOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = null;
item.setTitle("Toggle Sync on `Replicate now` command")
.onClick(async () => {})
.setIcon(mark);
});
m.showAtPosition({ x: event.x, y: event.y });
}
</script>
<main>
<h1>UI Test</h1>
<article>
<div>
<button onclick={() => testUI()}> String input </button>
{result}
</div>
<div>
<button onclick={() => testPassword()}> Password Input </button>
{resultPassword}
</div>
<div>
<button onclick={testMenu}>Menu</button>
</div>
</article>
</main>

View File

@@ -0,0 +1,112 @@
:root {
font-family: Inter, system-ui, Avenir, Helvetica, Arial, sans-serif;
line-height: 1.5;
font-weight: 400;
color-scheme: light dark;
color: rgba(255, 255, 255, 0.87);
background-color: #242424;
font-synthesis: none;
text-rendering: optimizeLegibility;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
--background-primary: #ffffff;
--background-primary-alt: #e9e9e9;
--size-4-1: 0.25em;
--tag-background: #f0f0f0;
--tag-border-width: 1px;
--tag-border-color: #cfffdd;
--background-modifier-success: #d4f3e9;
--background-secondary: #f0f0f0;
--background-modifier-error: #f8d7da;
--background-modifier-error-hover: #f5c6cb;
--interactive-accent: #007bff;
--interactive-accent-hover: #0056b3;
--text-normal: #333;
--text-warning: #f0ad4e;
}
a {
font-weight: 500;
color: #646cff;
text-decoration: inherit;
}
a:hover {
color: #535bf2;
}
body {
margin: 0;
padding: 0;
display: flex;
min-width: 320px;
min-height: 100vh;
box-sizing: border-box;
}
h1 {
font-size: 3.2em;
line-height: 1.1;
}
.card {
padding: 2em;
}
#app {
margin: 0;
padding: 0;
text-align: center;
display: flex;
flex-grow: 1;
}
button {
border-radius: 8px;
border: 1px solid transparent;
padding: 0.6em 1.2em;
font-size: 1em;
font-weight: 500;
font-family: inherit;
background-color: #1a1a1a;
cursor: pointer;
transition: border-color 0.25s;
}
button:hover {
border-color: #646cff;
}
button:focus,
button:focus-visible {
outline: 4px auto -webkit-focus-ring-color;
}
input,
select {
border-radius: 8px;
border: 1px solid #1a1a1a;
padding: 0.6em 1.2em;
font-size: 1em;
font-weight: 500;
font-family: inherit;
transition: border-color 0.25s;
}
@media (prefers-color-scheme: light) {
:root {
color: #213547;
background-color: #ffffff;
}
a:hover {
color: #747bff;
}
button {
background-color: #f9f9f9;
}
}

View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="26.6" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 308"><path fill="#FF3E00" d="M239.682 40.707C211.113-.182 154.69-12.301 113.895 13.69L42.247 59.356a82.198 82.198 0 0 0-37.135 55.056a86.566 86.566 0 0 0 8.536 55.576a82.425 82.425 0 0 0-12.296 30.719a87.596 87.596 0 0 0 14.964 66.244c28.574 40.893 84.997 53.007 125.787 27.016l71.648-45.664a82.182 82.182 0 0 0 37.135-55.057a86.601 86.601 0 0 0-8.53-55.577a82.409 82.409 0 0 0 12.29-30.718a87.573 87.573 0 0 0-14.963-66.244"></path><path fill="#FFF" d="M106.889 270.841c-23.102 6.007-47.497-3.036-61.103-22.648a52.685 52.685 0 0 1-9.003-39.85a49.978 49.978 0 0 1 1.713-6.693l1.35-4.115l3.671 2.697a92.447 92.447 0 0 0 28.036 14.007l2.663.808l-.245 2.659a16.067 16.067 0 0 0 2.89 10.656a17.143 17.143 0 0 0 18.397 6.828a15.786 15.786 0 0 0 4.403-1.935l71.67-45.672a14.922 14.922 0 0 0 6.734-9.977a15.923 15.923 0 0 0-2.713-12.011a17.156 17.156 0 0 0-18.404-6.832a15.78 15.78 0 0 0-4.396 1.933l-27.35 17.434a52.298 52.298 0 0 1-14.553 6.391c-23.101 6.007-47.497-3.036-61.101-22.649a52.681 52.681 0 0 1-9.004-39.849a49.428 49.428 0 0 1 22.34-33.114l71.664-45.677a52.218 52.218 0 0 1 14.563-6.398c23.101-6.007 47.497 3.036 61.101 22.648a52.685 52.685 0 0 1 9.004 39.85a50.559 50.559 0 0 1-1.713 6.692l-1.35 4.116l-3.67-2.693a92.373 92.373 0 0 0-28.037-14.013l-2.664-.809l.246-2.658a16.099 16.099 0 0 0-2.89-10.656a17.143 17.143 0 0 0-18.398-6.828a15.786 15.786 0 0 0-4.402 1.935l-71.67 45.674a14.898 14.898 0 0 0-6.73 9.975a15.9 15.9 0 0 0 2.709 12.012a17.156 17.156 0 0 0 18.404 6.832a15.841 15.841 0 0 0 4.402-1.935l27.345-17.427a52.147 52.147 0 0 1 14.552-6.397c23.101-6.006 47.497 3.037 61.102 22.65a52.681 52.681 0 0 1 9.003 39.848a49.453 49.453 0 0 1-22.34 33.12l-71.664 45.673a52.218 52.218 0 0 1-14.563 6.398"></path></svg>

After: SVG icon (26.6x32), 1.9 KiB

View File

@@ -0,0 +1,9 @@
import { mount } from "svelte";
import "./app.css";
import App from "./App.svelte";
const app = mount(App, {
target: document.getElementById("app")!,
});
export default app;

View File

@@ -0,0 +1,9 @@
import { mount } from "svelte";
import "./app.css";
import App from "./UITest.svelte";
const app = mount(App, {
target: document.getElementById("app")!,
});
export default app;

src/apps/webpeer/src/vite-env.d.ts vendored Normal file (2 lines)
View File

@@ -0,0 +1,2 @@
/// <reference types="svelte" />
/// <reference types="vite/client" />

View File

@@ -0,0 +1,7 @@
import { vitePreprocess } from "@sveltejs/vite-plugin-svelte";
export default {
// Consult https://svelte.dev/docs#compile-time-svelte-preprocess
// for more information about preprocessors
preprocess: vitePreprocess(),
};

View File

@@ -0,0 +1,25 @@
{
"extends": "@tsconfig/svelte/tsconfig.json",
"compilerOptions": {
"sourceRoot": "../",
"target": "ESNext",
"useDefineForClassFields": true,
"module": "ESNext",
"resolveJsonModule": true,
/**
* Typecheck JS in `.svelte` and `.js` files by default.
* Disable checkJs if you'd like to use dynamic types in JS.
* Note that setting allowJs false does not prevent the use
* of JS in `.svelte` files.
*/
"allowJs": true,
"checkJs": true,
"isolatedModules": true,
"moduleDetection": "force",
"paths": {
"@/*": ["../../*"],
"@lib/*": ["../../lib/src/*"]
}
},
"include": ["src/**/*.ts", "src/**/*.js", "src/**/*.svelte"]
}

View File

@@ -0,0 +1,4 @@
{
"files": [],
"references": [{ "path": "./tsconfig.app.json" }, { "path": "./tsconfig.node.json" }]
}

View File

@@ -0,0 +1,28 @@
{
"compilerOptions": {
"tsBuildInfoFile": "./node_modules/.tmp/tsconfig.node.tsbuildinfo",
"target": "ES2022",
"lib": ["ES2023"],
"module": "ESNext",
"skipLibCheck": true,
/* Bundler mode */
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"isolatedModules": true,
"moduleDetection": "force",
"noEmit": true,
/* Linting */
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true,
"noUncheckedSideEffectImports": true,
"paths": {
"@/*": ["../../*"],
"@lib/*": ["../../lib/src/*"]
}
},
"include": ["vite.config.ts"]
}

src/apps/webpeer/ui.html Normal file (16 lines)
View File

@@ -0,0 +1,16 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="icon.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Peer-to-Peer Daemon on Browser</title>
</head>
<body>
<div id="app"></div>
<script type="module" src="./src/uitest.ts"></script>
</body>
</html>

View File

@@ -0,0 +1,24 @@
import { defineConfig } from "vite";
import { svelte } from "@sveltejs/vite-plugin-svelte";
import path from "node:path";
// https://vite.dev/config/
export default defineConfig({
plugins: [svelte()],
resolve: {
alias: {
"@": path.resolve(__dirname, "../../"),
"@lib": path.resolve(__dirname, "../../lib/src"),
},
},
base: "./",
build: {
outDir: "dist",
emptyOutDir: true,
rollupOptions: {
input: {
index: "index.html",
// uitest: "uitest.html",
},
},
},
});

View File

@@ -1,51 +1,99 @@
import { deleteDB, type IDBPDatabase, openDB } from "idb";
export interface KeyValueDatabase {
get<T>(key: IDBValidKey): Promise<T>;
set<T>(key: IDBValidKey, value: T): Promise<IDBValidKey>;
del(key: IDBValidKey): Promise<void>;
clear(): Promise<void>;
keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>;
close(): void;
destroy(): Promise<void>;
}
import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
import { serialized } from "octagonal-wheels/concurrency/lock";
import { Logger } from "octagonal-wheels/common/logger";
const databaseCache: { [key: string]: IDBPDatabase<any> } = {};
export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
export { OpenKeyValueDatabase } from "./KeyValueDBv2.ts";
export const _OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
if (dbKey in databaseCache) {
databaseCache[dbKey].close();
delete databaseCache[dbKey];
}
const storeKey = dbKey;
const dbPromise = openDB(dbKey, 1, {
upgrade(db) {
db.createObjectStore(storeKey);
},
});
const db = await dbPromise;
databaseCache[dbKey] = db;
let db: IDBPDatabase<any> | null = null;
const _openDB = () => {
return serialized("keyvaluedb-" + dbKey, async () => {
const dbInstance = await openDB(dbKey, 1, {
upgrade(db, _oldVersion, _newVersion, _transaction, _event) {
return db.createObjectStore(storeKey);
},
blocking(currentVersion, blockedVersion, event) {
Logger(
`Blocking database open for ${dbKey}: currentVersion=${currentVersion}, blockedVersion=${blockedVersion}`
);
databaseCache[dbKey]?.close();
delete databaseCache[dbKey];
},
blocked(currentVersion, blockedVersion, event) {
Logger(
`Database open blocked for ${dbKey}: currentVersion=${currentVersion}, blockedVersion=${blockedVersion}`
);
},
terminated() {
Logger(`Database connection terminated for ${dbKey}`);
},
});
databaseCache[dbKey] = dbInstance;
return dbInstance;
});
};
const closeDB = () => {
if (db) {
db.close();
delete databaseCache[dbKey];
db = null;
}
};
db = await _openDB();
return {
async get<T>(key: IDBValidKey): Promise<T> {
if (!db) {
db = await _openDB();
databaseCache[dbKey] = db;
}
return await db.get(storeKey, key);
},
async set<T>(key: IDBValidKey, value: T) {
if (!db) {
db = await _openDB();
databaseCache[dbKey] = db;
}
return await db.put(storeKey, value, key);
},
async del(key: IDBValidKey) {
if (!db) {
db = await _openDB();
databaseCache[dbKey] = db;
}
return await db.delete(storeKey, key);
},
async clear() {
if (!db) {
db = await _openDB();
databaseCache[dbKey] = db;
}
return await db.clear(storeKey);
},
async keys(query?: IDBValidKey | IDBKeyRange, count?: number) {
if (!db) {
db = await _openDB();
databaseCache[dbKey] = db;
}
return await db.getAllKeys(storeKey, query, count);
},
close() {
delete databaseCache[dbKey];
return db.close();
return Promise.resolve(closeDB());
},
async destroy() {
delete databaseCache[dbKey];
db.close();
await deleteDB(dbKey);
// await closeDB();
await deleteDB(dbKey, {
blocked() {
console.warn(`Database delete blocked for ${dbKey}`);
},
});
},
};
};

src/common/KeyValueDBv2.ts Normal file (154 lines)
View File

@@ -0,0 +1,154 @@
import { LOG_LEVEL_VERBOSE, Logger } from "@/lib/src/common/logger";
import type { KeyValueDatabase } from "@/lib/src/interfaces/KeyValueDatabase";
import { deleteDB, openDB, type IDBPDatabase } from "idb";
import { serialized } from "octagonal-wheels/concurrency/lock";
const databaseCache = new Map<string, IDBKeyValueDatabase>();
export async function OpenKeyValueDatabase(dbKey: string): Promise<KeyValueDatabase> {
return await serialized(`OpenKeyValueDatabase-${dbKey}`, async () => {
const cachedDB = databaseCache.get(dbKey);
if (cachedDB) {
if (!cachedDB.isDestroyed) {
return cachedDB;
}
await cachedDB.ensuredDestroyed;
databaseCache.delete(dbKey);
}
const newDB = new IDBKeyValueDatabase(dbKey);
try {
await newDB.getIsReady();
databaseCache.set(dbKey, newDB);
return newDB;
} catch (e) {
databaseCache.delete(dbKey);
throw e;
}
});
}
export class IDBKeyValueDatabase implements KeyValueDatabase {
protected _dbPromise: Promise<IDBPDatabase<any>> | null = null;
protected dbKey: string;
protected storeKey: string;
protected _isDestroyed: boolean = false;
protected destroyedPromise: Promise<void> | null = null;
get isDestroyed() {
return this._isDestroyed;
}
get ensuredDestroyed(): Promise<void> {
if (this.destroyedPromise) {
return this.destroyedPromise;
}
return Promise.resolve();
}
async getIsReady(): Promise<boolean> {
await this.ensureDB();
return this.isDestroyed === false;
}
protected ensureDB() {
if (this._isDestroyed) {
throw new Error("Database is destroyed");
}
if (this._dbPromise) {
return this._dbPromise;
}
this._dbPromise = openDB(this.dbKey, undefined, {
upgrade: (db, _oldVersion, _newVersion, _transaction, _event) => {
if (!db.objectStoreNames.contains(this.storeKey)) {
return db.createObjectStore(this.storeKey);
}
},
blocking: (currentVersion, blockedVersion, event) => {
Logger(
`Blocking database open for ${this.dbKey}: currentVersion=${currentVersion}, blockedVersion=${blockedVersion}`,
LOG_LEVEL_VERBOSE
);
// `blocking` fires on the previously opened connection, not on the one being opened here; close it so that the newer open attempt can proceed.
void this.closeDB(true);
},
blocked: (currentVersion, blockedVersion, event) => {
Logger(
`Database open blocked for ${this.dbKey}: currentVersion=${currentVersion}, blockedVersion=${blockedVersion}`,
LOG_LEVEL_VERBOSE
);
},
terminated: () => {
Logger(`Database connection terminated for ${this.dbKey}`, LOG_LEVEL_VERBOSE);
this._dbPromise = null;
},
}).catch((e) => {
this._dbPromise = null;
throw e;
});
return this._dbPromise;
}
protected async closeDB(setDestroyed: boolean = false) {
if (this._dbPromise) {
const tempPromise = this._dbPromise;
this._dbPromise = null;
try {
const dbR = await tempPromise;
dbR.close();
} catch (e) {
Logger(`Error closing database`);
Logger(e, LOG_LEVEL_VERBOSE);
}
}
this._dbPromise = null;
if (setDestroyed) {
this._isDestroyed = true;
this.destroyedPromise = Promise.resolve();
}
}
get DB(): Promise<IDBPDatabase<any>> {
if (this._isDestroyed) {
return Promise.reject(new Error("Database is destroyed"));
}
return this.ensureDB();
}
constructor(dbKey: string) {
this.dbKey = dbKey;
this.storeKey = dbKey;
}
async get<U>(key: IDBValidKey): Promise<U> {
const db = await this.DB;
return await db.get(this.storeKey, key);
}
async set<U>(key: IDBValidKey, value: U): Promise<IDBValidKey> {
const db = await this.DB;
await db.put(this.storeKey, value, key);
return key;
}
async del(key: IDBValidKey): Promise<void> {
const db = await this.DB;
return await db.delete(this.storeKey, key);
}
async clear(): Promise<void> {
const db = await this.DB;
return await db.clear(this.storeKey);
}
async keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]> {
const db = await this.DB;
return await db.getAllKeys(this.storeKey, query, count);
}
async close(): Promise<void> {
await this.closeDB();
}
async destroy(): Promise<void> {
this._isDestroyed = true;
this.destroyedPromise = (async () => {
await this.closeDB();
await deleteDB(this.dbKey, {
blocked: () => {
Logger(`Database delete blocked for ${this.dbKey}`);
},
});
})();
await this.destroyedPromise;
}
}
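// Usage sketch (illustrative, not part of the original module): instances are
// cached per key and opens are serialised, so concurrent calls are safe.
// "example-store" is a hypothetical database name.
export async function exampleRoundTrip(): Promise<string | undefined> {
    const kv = await OpenKeyValueDatabase("example-store");
    await kv.set("greeting", "hello");
    const value = await kv.get<string>("greeting");
    await kv.close();
    return value;
}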

View File

@@ -1,10 +1,10 @@
import { ItemView } from "obsidian";
import { ItemView } from "@/deps.ts";
import { type mount, unmount } from "svelte";
export abstract class SvelteItemView extends ItemView {
abstract instantiateComponent(target: HTMLElement): ReturnType<typeof mount> | Promise<ReturnType<typeof mount>>;
component?: ReturnType<typeof mount>;
async onOpen() {
override async onOpen() {
await super.onOpen();
this.contentEl.empty();
await this._dismountComponent();
@@ -17,7 +17,7 @@ export abstract class SvelteItemView extends ItemView {
this.component = undefined;
}
}
async onClose() {
override async onClose() {
await super.onClose();
if (this.component) {
await unmount(this.component);

View File

@@ -1,23 +1,16 @@
import type { FilePathWithPrefix, ObsidianLiveSyncSettings } from "../lib/src/common/types";
import { eventHub } from "../lib/src/hub/hub";
import type ObsidianLiveSyncPlugin from "../main";
export const EVENT_LAYOUT_READY = "layout-ready";
export const EVENT_PLUGIN_LOADED = "plugin-loaded";
export const EVENT_PLUGIN_UNLOADED = "plugin-unloaded";
export const EVENT_SETTING_SAVED = "setting-saved";
export const EVENT_FILE_RENAMED = "file-renamed";
export const EVENT_FILE_SAVED = "file-saved";
export const EVENT_LEAF_ACTIVE_CHANGED = "leaf-active-changed";
export const EVENT_DATABASE_REBUILT = "database-rebuilt";
export const EVENT_LOG_ADDED = "log-added";
export const EVENT_REQUEST_OPEN_SETTINGS = "request-open-settings";
export const EVENT_REQUEST_OPEN_SETTING_WIZARD = "request-open-setting-wizard";
export const EVENT_REQUEST_OPEN_SETUP_URI = "request-open-setup-uri";
export const EVENT_REQUEST_COPY_SETUP_URI = "request-copy-setup-uri";
export const EVENT_REQUEST_SHOW_SETUP_QR = "request-show-setup-qr";
export const EVENT_REQUEST_RELOAD_SETTING_TAB = "reload-setting-tab";
@@ -26,32 +19,35 @@ export const EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG = "request-open-plugin-sync-d
export const EVENT_REQUEST_OPEN_P2P = "request-open-p2p";
export const EVENT_REQUEST_CLOSE_P2P = "request-close-p2p";
export const EVENT_REQUEST_RUN_DOCTOR = "request-run-doctor";
export const EVENT_REQUEST_RUN_FIX_INCOMPLETE = "request-run-fix-incomplete";
export const EVENT_ANALYSE_DB_USAGE = "analyse-db-usage";
export const EVENT_REQUEST_PERFORM_GC_V3 = "request-perform-gc-v3";
export const EVENT_REQUEST_CHECK_REMOTE_SIZE = "request-check-remote-size";
// export const EVENT_FILE_CHANGED = "file-changed";
declare global {
interface LSEvents {
[EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG]: undefined;
[EVENT_FILE_SAVED]: undefined;
[EVENT_REQUEST_OPEN_SETUP_URI]: undefined;
[EVENT_REQUEST_COPY_SETUP_URI]: undefined;
[EVENT_REQUEST_RELOAD_SETTING_TAB]: undefined;
[EVENT_PLUGIN_UNLOADED]: undefined;
[EVENT_SETTING_SAVED]: ObsidianLiveSyncSettings;
[EVENT_PLUGIN_LOADED]: ObsidianLiveSyncPlugin;
[EVENT_LAYOUT_READY]: undefined;
"event-file-changed": { file: FilePathWithPrefix; automated: boolean };
"document-stub-created": {
toc: Set<string>;
stub: { [key: string]: { [key: string]: Map<string, Record<string, string>> } };
};
[EVENT_PLUGIN_UNLOADED]: undefined;
[EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG]: undefined;
[EVENT_REQUEST_OPEN_SETTINGS]: undefined;
[EVENT_REQUEST_OPEN_SETTING_WIZARD]: undefined;
[EVENT_FILE_RENAMED]: { newPath: FilePathWithPrefix; old: FilePathWithPrefix };
[EVENT_REQUEST_RELOAD_SETTING_TAB]: undefined;
[EVENT_LEAF_ACTIVE_CHANGED]: undefined;
[EVENT_REQUEST_OPEN_P2P]: undefined;
[EVENT_REQUEST_CLOSE_P2P]: undefined;
[EVENT_DATABASE_REBUILT]: undefined;
[EVENT_REQUEST_OPEN_P2P]: undefined;
[EVENT_REQUEST_OPEN_SETUP_URI]: undefined;
[EVENT_REQUEST_COPY_SETUP_URI]: undefined;
[EVENT_REQUEST_SHOW_SETUP_QR]: undefined;
[EVENT_REQUEST_RUN_DOCTOR]: string;
[EVENT_REQUEST_RUN_FIX_INCOMPLETE]: undefined;
[EVENT_ANALYSE_DB_USAGE]: undefined;
[EVENT_REQUEST_CHECK_REMOTE_SIZE]: undefined;
[EVENT_REQUEST_PERFORM_GC_V3]: undefined;
}
}
export * from "../lib/src/events/coreEvents.ts";
export { eventHub };
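// Usage sketch (illustrative): the `LSEvents` augmentation above gives eventHub
// typed payloads, e.g. EVENT_REQUEST_RUN_DOCTOR carries a string.
export function exampleEventUsage(): void {
    eventHub.onEvent(EVENT_REQUEST_OPEN_P2P, () => {
        // react to the request to open the P2P pane
    });
    eventHub.emitEvent(EVENT_REQUEST_RUN_DOCTOR, "manual trigger");
}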

View File

@@ -1,4 +1,4 @@
import { PersistentMap } from "../lib/src/dataobject/PersistentMap.ts";
import { PersistentMap } from "octagonal-wheels/dataobject/PersistentMap";
export let sameChangePairs: PersistentMap<number[]>;

View File

@@ -1,11 +1,6 @@
import { type PluginManifest, TFile } from "../deps.ts";
import {
type DatabaseEntry,
type EntryBody,
type FilePath,
type UXFileInfoStub,
type UXInternalFileInfoStub,
} from "../lib/src/common/types.ts";
import { type DatabaseEntry, type EntryBody, type FilePath } from "../lib/src/common/types.ts";
export type { CacheData, FileEventItem } from "../lib/src/common/types.ts";
export interface PluginDataEntry extends DatabaseEntry {
deviceVaultName: string;
@@ -54,37 +49,16 @@ export type queueItem = {
warned?: boolean;
};
export type CacheData = string | ArrayBuffer;
export type FileEventType = "CREATE" | "DELETE" | "CHANGED" | "INTERNAL";
export type FileEventArgs = {
file: UXFileInfoStub | UXInternalFileInfoStub;
cache?: CacheData;
oldPath?: string;
ctx?: any;
};
export type FileEventItem = {
type: FileEventType;
args: FileEventArgs;
key: string;
skipBatchWait?: boolean;
cancelled?: boolean;
batched?: boolean;
};
// Hidden items (Now means `chunk`)
export const CHeader = "h:";
// Plug-in Stored Container (Obsolete)
export const PSCHeader = "ps:";
export const PSCHeaderEnd = "ps;";
// Internal data Container
export const ICHeader = "i:";
export const ICHeaderEnd = "i;";
export const ICHeaderLength = ICHeader.length;
// Internal data Container (eXtended)
export const ICXHeader = "ix:";
export const FileWatchEventQueueMax = 10;
export const configURIBase = "obsidian://setuplivesync?settings=";
export { configURIBase, configURIBaseQR } from "../lib/src/common/types.ts";
export {
CHeader,
PSCHeader,
PSCHeaderEnd,
ICHeader,
ICHeaderEnd,
ICHeaderLength,
ICXHeader,
} from "../lib/src/common/models/fileaccess.const.ts";

View File

@@ -15,6 +15,7 @@ import {
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type AnyEntry,
type CouchDBCredentials,
type DocumentID,
type EntryHasPath,
type FilePath,
@@ -22,16 +23,18 @@ import {
type UXFileInfo,
type UXFileInfoStub,
} from "../lib/src/common/types.ts";
import { CHeader, ICHeader, ICHeaderLength, ICXHeader, PSCHeader } from "./types.ts";
export { ICHeader, ICXHeader } from "./types.ts";
import type ObsidianLiveSyncPlugin from "../main.ts";
import { writeString } from "../lib/src/string_and_binary/convert.ts";
import { fireAndForget } from "../lib/src/common/utils.ts";
import { sameChangePairs } from "./stores.ts";
import type { KeyValueDatabase } from "./KeyValueDB.ts";
import { scheduleTask } from "octagonal-wheels/concurrency/task";
import { EVENT_PLUGIN_UNLOADED, eventHub } from "./events.ts";
import { AuthorizationHeaderGenerator } from "../lib/src/replication/httplib.ts";
import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
export { scheduleTask, cancelTask, cancelAllTasks } from "../lib/src/concurrency/task.ts";
export { scheduleTask, cancelTask, cancelAllTasks } from "octagonal-wheels/concurrency/task";
// For backward compatibility, using the path for determining id.
// Only CouchDB unacceptable ID (that starts with an underscore) has been prefixed with "/".
@@ -59,37 +62,18 @@ export function id2path(id: DocumentID, entry?: EntryHasPath): FilePathWithPrefi
const fixedPath = temp.join(":") as FilePathWithPrefix;
return fixedPath;
}
export function getPath(entry: AnyEntry) {
return id2path(entry._id, entry);
}
export function getPathWithoutPrefix(entry: AnyEntry) {
const f = getPath(entry);
return stripAllPrefixes(f);
}
export function getPathFromTFile(file: TAbstractFile) {
return file.path as FilePath;
}
export function isInternalFile(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string") return file.startsWith(ICHeader);
if (file.isInternal) return true;
return false;
}
export function getPathFromUXFileInfo(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string") return file as FilePathWithPrefix;
return file.path;
}
export function getStoragePathFromUXFileInfo(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string") return stripAllPrefixes(file as FilePathWithPrefix);
return stripAllPrefixes(file.path);
}
export function getDatabasePathFromUXFileInfo(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string" && file.startsWith(ICXHeader)) return file as FilePathWithPrefix;
const prefix = isInternalFile(file) ? ICHeader : "";
if (typeof file == "string") return (prefix + stripAllPrefixes(file as FilePathWithPrefix)) as FilePathWithPrefix;
return (prefix + stripAllPrefixes(file.path)) as FilePathWithPrefix;
}
import {
isInternalFile,
getPathFromUXFileInfo,
getStoragePathFromUXFileInfo,
getDatabasePathFromUXFileInfo,
} from "@lib/common/typeUtils.ts";
export { isInternalFile, getPathFromUXFileInfo, getStoragePathFromUXFileInfo, getDatabasePathFromUXFileInfo };
const memos: { [key: string]: any } = {};
export function memoObject<T>(key: string, obj: T): T {
@@ -133,32 +117,14 @@ export function trimPrefix(target: string, prefix: string) {
return target.startsWith(prefix) ? target.substring(prefix.length) : target;
}
/**
 * Returns whether the given ID points to internal metadata (a hidden file entry).
 * @param id the document ID to check
 * @returns true if the ID has the internal-data prefix
 */
export function isInternalMetadata(id: FilePath | FilePathWithPrefix | DocumentID): boolean {
return id.startsWith(ICHeader);
}
export function stripInternalMetadataPrefix<T extends FilePath | FilePathWithPrefix | DocumentID>(id: T): T {
return id.substring(ICHeaderLength) as T;
}
export function id2InternalMetadataId(id: DocumentID): DocumentID {
return (ICHeader + id) as DocumentID;
}
// const CHeaderLength = CHeader.length;
export function isChunk(str: string): boolean {
return str.startsWith(CHeader);
}
export function isPluginMetadata(str: string): boolean {
return str.startsWith(PSCHeader);
}
export function isCustomisationSyncMetadata(str: string): boolean {
return str.startsWith(ICXHeader);
}
export {
isInternalMetadata,
id2InternalMetadataId,
isChunk,
isCustomisationSyncMetadata,
isPluginMetadata,
stripInternalMetadataPrefix,
} from "@lib/common/typeUtils.ts";
export class PeriodicProcessor {
_process: () => Promise<any>;
@@ -185,7 +151,7 @@ export class PeriodicProcessor {
() =>
fireAndForget(async () => {
await this.process();
if (this._plugin.$$isUnloaded()) {
if (this._plugin.services?.control?.hasUnloaded()) {
this.disable();
}
}),
@@ -229,17 +195,17 @@ export const _requestToCouchDBFetch = async (
export const _requestToCouchDB = async (
baseUri: string,
username: string,
password: string,
credentials: CouchDBCredentials,
origin: string,
path?: string,
body?: any,
method?: string
method?: string,
customHeaders?: Record<string, string>
) => {
const utf8str = String.fromCharCode.apply(null, [...writeString(`${username}:${password}`)]);
const encoded = window.btoa(utf8str);
const authHeader = "Basic " + encoded;
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin };
// Create each time to avoid caching.
const authHeaderGen = new AuthorizationHeaderGenerator();
const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
const uri = `${baseUri}/${path}`;
const requestParam: RequestUrlParam = {
url: uri,
@@ -250,6 +216,9 @@ export const _requestToCouchDB = async (
};
return await requestUrl(requestParam);
};
/**
* @deprecated Use requestToCouchDBWithCredentials instead.
*/
export const requestToCouchDB = async (
baseUri: string,
username: string,
@@ -257,16 +226,36 @@ export const requestToCouchDB = async (
origin: string = "",
key?: string,
body?: string,
method?: string
method?: string,
customHeaders?: Record<string, string>
) => {
const uri = `_node/_local/_config${key ? "/" + key : ""}`;
return await _requestToCouchDB(baseUri, username, password, origin, uri, body, method);
return await _requestToCouchDB(
baseUri,
{ username, password, type: "basic" },
origin,
uri,
body,
method,
customHeaders
);
};
export const BASE_IS_NEW = Symbol("base");
export const TARGET_IS_NEW = Symbol("target");
export const EVEN = Symbol("even");
export function requestToCouchDBWithCredentials(
baseUri: string,
credentials: CouchDBCredentials,
origin: string = "",
key?: string,
body?: string,
method?: string,
customHeaders?: Record<string, string>
) {
const uri = `_node/_local/_config${key ? "/" + key : ""}`;
return _requestToCouchDB(baseUri, credentials, origin, uri, body, method, customHeaders);
}
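A hedged migration sketch for callers, assuming `baseUri` and `origin` are in scope; only the signatures come from the hunk above, while the config key and custom header shown are hypothetical:

```ts
// Before (deprecated): positional username/password.
// await requestToCouchDB(baseUri, "admin", "secret", origin, "chttpd/require_valid_user");

// After: credentials travel as one object, and extra headers can be attached.
const res = await requestToCouchDBWithCredentials(
    baseUri,
    { username: "admin", password: "secret", type: "basic" }, // CouchDBCredentials
    origin,
    "chttpd/require_valid_user", // hypothetical _config key
    undefined, // body
    "GET",
    { "x-request-source": "livesync" } // merged into transformedHeaders
);
```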
import { BASE_IS_NEW, EVEN, TARGET_IS_NEW } from "@lib/common/models/shared.const.symbols.ts";
export { BASE_IS_NEW, EVEN, TARGET_IS_NEW };
// Why 2000? ZIP file timestamps only have 2-second resolution, so mtimes are compared at 2000 ms granularity.
const resolution = 2000;
export function compareMTime(
@@ -401,30 +390,6 @@ export function displayRev(rev: string) {
return `${number}-${hash.substring(0, 6)}`;
}
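A quick worked example of the shortened form, assuming the elided lines split the revision at its first hyphen:

```ts
displayRev("12-a1b2c3d4e5f67890"); // => "12-a1b2c3"
```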
type DocumentProps = {
id: DocumentID;
rev?: string;
prefixedPath: FilePathWithPrefix;
path: FilePath;
isDeleted: boolean;
revDisplay: string;
shortenedId: string;
shortenedPath: string;
};
export function getDocProps(doc: AnyEntry): DocumentProps {
const id = doc._id;
const shortenedId = id.substring(0, 10);
const prefixedPath = getPath(doc);
const path = stripAllPrefixes(prefixedPath);
const rev = doc._rev;
const revDisplay = rev ? displayRev(rev) : "0-NOREVS";
// const prefix = prefixedPath.substring(0, prefixedPath.length - path.length);
const shortenedPath = path.substring(0, 10);
const isDeleted = doc._deleted || doc.deleted || false;
return { id, rev, revDisplay, prefixedPath, path, isDeleted, shortenedId, shortenedPath };
}
export function getLogLevel(showNotice: boolean) {
return showNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
}

View File

@@ -24,6 +24,13 @@ export {
parseYaml,
ItemView,
WorkspaceLeaf,
Menu,
request,
getLanguage,
ButtonComponent,
TextComponent,
ToggleComponent,
DropdownComponent,
} from "obsidian";
export type {
DataWriteOptions,
@@ -32,6 +39,8 @@ export type {
RequestUrlResponse,
MarkdownFileInfo,
ListedFiles,
ValueComponent,
Stat,
} from "obsidian";
import { normalizePath as normalizePath_ } from "obsidian";
const normalizePath = normalizePath_ as <T extends string | FilePath>(from: T) => T;

View File

@@ -45,7 +45,7 @@ import {
} from "../../lib/src/common/utils.ts";
import { digestHash } from "../../lib/src/string_and_binary/hash.ts";
import { arrayBufferToBase64, decodeBinary, readString } from "../../lib/src/string_and_binary/convert.ts";
import { serialized, shareRunningResult } from "../../lib/src/concurrency/lock.ts";
import { serialized, shareRunningResult } from "octagonal-wheels/concurrency/lock";
import { LiveSyncCommands } from "../LiveSyncCommands.ts";
import { stripAllPrefixes } from "../../lib/src/string_and_binary/path.ts";
import {
@@ -62,20 +62,29 @@ import {
scheduleTask,
} from "../../common/utils.ts";
import { JsonResolveModal } from "../HiddenFileCommon/JsonResolveModal.ts";
import { QueueProcessor } from "../../lib/src/concurrency/processor.ts";
import { QueueProcessor } from "octagonal-wheels/concurrency/processor";
import { pluginScanningCount } from "../../lib/src/mock_and_interop/stores.ts";
import type ObsidianLiveSyncPlugin from "../../main.ts";
import { base64ToArrayBuffer, base64ToString } from "octagonal-wheels/binary/base64";
import { ConflictResolveModal } from "../../modules/features/InteractiveConflictResolving/ConflictResolveModal.ts";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
import type { IObsidianModule } from "../../modules/AbstractObsidianModule.ts";
import { EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG, eventHub } from "../../common/events.ts";
import { PluginDialogModal } from "./PluginDialogModal.ts";
import { $msg } from "src/lib/src/common/i18n.ts";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
const d = "\u200b";
const d2 = "\n";
declare global {
interface OPTIONAL_SYNC_FEATURES {
DISABLE: "DISABLE";
CUSTOMIZE: "CUSTOMIZE";
DISABLE_CUSTOM: "DISABLE_CUSTOM";
}
}
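These `declare global` blocks rely on TypeScript declaration merging: each module contributes its own keys to the shared `OPTIONAL_SYNC_FEATURES` interface (duplicate members such as `DISABLE` are allowed because the declarations are identical), so `keyof OPTIONAL_SYNC_FEATURES` used later resolves to the union across every loaded module. A sketch of the consumer side, with a hypothetical consumer function:

```ts
// With this file and HiddenFileSync (further below) both loaded, the merged
// interface yields:
type OptionalSyncMode = keyof OPTIONAL_SYNC_FEATURES;
// "DISABLE" | "CUSTOMIZE" | "DISABLE_CUSTOM" | "FETCH" | "OVERWRITE" | "MERGE" | "DISABLE_HIDDEN"

declare function configure(mode: OptionalSyncMode): Promise<void>; // hypothetical
void configure("CUSTOMIZE"); // type-checked against the merged key set
```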
function serialize(data: PluginDataEx): string {
    // For higher performance, we build a custom string format for plug-in data.
    // Self-hosted LiveSync uses `\n` to split chunks; therefore, grouping fields with similar entropy together works nicely.
@@ -384,7 +393,7 @@ export type PluginDataEx = {
mtime: number;
};
export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
export class ConfigSync extends LiveSyncCommands {
constructor(plugin: ObsidianLiveSyncPlugin) {
super(plugin);
pluginScanningCount.onChanged((e) => {
@@ -402,7 +411,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
get useSyncPluginEtc() {
return this.plugin.settings.usePluginEtc;
}
_isThisModuleEnabled() {
isThisModuleEnabled() {
return this.plugin.settings.usePluginSync;
}
@@ -411,7 +420,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
pluginList: IPluginDataExDisplay[] = [];
showPluginSyncModal() {
if (!this._isThisModuleEnabled()) {
if (!this.isThisModuleEnabled()) {
return;
}
if (this.pluginDialog) {
@@ -482,8 +491,8 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
// Idea non-filter option?
return this.getFileCategory(filePath) != "";
}
async $everyOnDatabaseInitialized(showNotice: boolean) {
if (!this._isThisModuleEnabled()) return true;
private async _everyOnDatabaseInitialized(showNotice: boolean) {
if (!this.isThisModuleEnabled()) return true;
try {
this._log("Scanning customizations...");
await this.scanAllConfigFiles(showNotice);
@@ -494,16 +503,16 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
return true;
}
async $everyBeforeReplicate(showNotice: boolean) {
if (!this._isThisModuleEnabled()) return true;
async _everyBeforeReplicate(showNotice: boolean) {
if (!this.isThisModuleEnabled()) return true;
if (this.settings.autoSweepPlugins) {
await this.scanAllConfigFiles(showNotice);
return true;
}
return true;
}
async $everyOnResumeProcess(): Promise<boolean> {
if (!this._isThisModuleEnabled()) return true;
async _everyOnResumeProcess(): Promise<boolean> {
if (!this.isThisModuleEnabled()) return true;
if (this._isMainSuspended()) {
return true;
}
@@ -517,9 +526,9 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
);
return true;
}
$everyAfterResumeProcess(): Promise<boolean> {
_everyAfterResumeProcess(): Promise<boolean> {
const q = activeDocument.querySelector(`.livesync-ribbon-showcustom`);
q?.toggleClass("sls-hidden", !this._isThisModuleEnabled());
q?.toggleClass("sls-hidden", !this.isThisModuleEnabled());
return Promise.resolve(true);
}
async reloadPluginList(showMessage: boolean) {
@@ -633,7 +642,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
).startPipeline();
filenameToUnifiedKey(path: string, termOverRide?: string) {
const term = termOverRide || this.plugin.$$getDeviceAndVaultName();
const term = termOverRide || this.services.setting.getDeviceAndVaultName();
const category = this.getFileCategory(path);
const name =
category == "CONFIG" || category == "SNIPPET"
@@ -645,7 +654,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
filenameWithUnifiedKey(path: string, termOverRide?: string) {
const term = termOverRide || this.plugin.$$getDeviceAndVaultName();
const term = termOverRide || this.services.setting.getDeviceAndVaultName();
const category = this.getFileCategory(path);
const name =
category == "CONFIG" || category == "SNIPPET" ? path.split("/").slice(-1)[0] : path.split("/").slice(-2)[0];
@@ -654,7 +663,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
unifiedKeyPrefixOfTerminal(termOverRide?: string) {
const term = termOverRide || this.plugin.$$getDeviceAndVaultName();
const term = termOverRide || this.services.setting.getDeviceAndVaultName();
return `${ICXHeader}${term}/` as FilePathWithPrefix;
}
@@ -831,7 +840,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
const v2Path = (prefixPath + relativeFilename) as FilePathWithPrefix;
// console.warn(`Migrating ${v1Path} / ${relativeFilename} to ${v2Path}`);
this._log(`Migrating ${v1Path} / ${relativeFilename} to ${v2Path}`, LOG_LEVEL_VERBOSE);
const newId = await this.plugin.$$path2id(v2Path);
const newId = await this.services.path.path2id(v2Path);
// const buf =
const data = createBlob([DUMMY_HEAD, DUMMY_END, ...getDocDataAsArray(f.data)]);
@@ -861,7 +870,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
async updatePluginList(showMessage: boolean, updatedDocumentPath?: FilePathWithPrefix): Promise<void> {
if (!this._isThisModuleEnabled()) {
if (!this.isThisModuleEnabled()) {
this.pluginScanProcessor.clearQueue();
this.pluginList = [];
pluginList.set(this.pluginList);
@@ -999,7 +1008,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
await this.plugin.storageAccess.ensureDir(path);
// If the content has been applied, the modified time will be updated to the current time.
await this.plugin.storageAccess.writeHiddenFileAuto(path, content);
await this.storeCustomisationFileV2(path, this.plugin.$$getDeviceAndVaultName());
await this.storeCustomisationFileV2(path, this.services.setting.getDeviceAndVaultName());
} else {
const files = data.files;
for (const f of files) {
@@ -1042,7 +1051,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
await this.plugin.storageAccess.writeHiddenFileAuto(path, content, stat);
}
this._log(`Applied ${f.filename} of ${data.displayName || data.name}..`);
await this.storeCustomisationFileV2(path, this.plugin.$$getDeviceAndVaultName());
await this.storeCustomisationFileV2(path, this.services.setting.getDeviceAndVaultName());
}
}
} catch (ex) {
@@ -1114,7 +1123,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
);
}
} else if (data.category == "CONFIG") {
this.plugin.$$askReload();
this.services.appLifecycle.askRestart();
}
return true;
} catch (ex) {
@@ -1157,15 +1166,15 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
return false;
}
}
async $anyModuleParsedReplicationResultItem(docs: PouchDB.Core.ExistingDocument<EntryDoc>) {
if (!docs._id.startsWith(ICXHeader)) return undefined;
if (this._isThisModuleEnabled()) {
async _anyModuleParsedReplicationResultItem(docs: PouchDB.Core.ExistingDocument<EntryDoc>) {
if (!docs._id.startsWith(ICXHeader)) return false;
if (this.isThisModuleEnabled()) {
await this.updatePluginList(
false,
(docs as AnyEntry).path ? (docs as AnyEntry).path : this.getPath(docs as AnyEntry)
);
}
if (this._isThisModuleEnabled() && this.plugin.settings.notifyPluginOrSettingUpdated) {
if (this.isThisModuleEnabled() && this.plugin.settings.notifyPluginOrSettingUpdated) {
if (!this.pluginDialog || (this.pluginDialog && !this.pluginDialog.isOpened())) {
const fragment = createFragment((doc) => {
doc.createEl("span", undefined, (a) => {
@@ -1205,11 +1214,11 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
return true;
}
async $everyRealizeSettingSyncMode(): Promise<boolean> {
async _everyRealizeSettingSyncMode(): Promise<boolean> {
this.periodicPluginSweepProcessor?.disable();
if (!this._isMainReady) return true;
if (!this._isMainSuspended()) return true;
if (!this._isThisModuleEnabled()) return true;
if (!this.isThisModuleEnabled()) return true;
if (this.settings.autoSweepPlugins) {
await this.scanAllConfigFiles(false);
}
@@ -1306,7 +1315,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
);
return;
}
const docXDoc = await this.localDatabase.getDBEntryFromMeta(old, {}, false, false);
const docXDoc = await this.localDatabase.getDBEntryFromMeta(old, false, false);
if (docXDoc == false) {
throw "Could not load the document";
}
@@ -1345,7 +1354,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
});
}
async storeCustomizationFiles(path: FilePath, termOverRide?: string) {
const term = termOverRide || this.plugin.$$getDeviceAndVaultName();
const term = termOverRide || this.services.setting.getDeviceAndVaultName();
if (term == "") {
this._log("We have to configure the device name", LOG_LEVEL_NOTICE);
return;
@@ -1440,7 +1449,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
// this._log(`STORAGE --> DB:${prefixedFileName}: (config) Skipped (Same time)`, LOG_LEVEL_VERBOSE);
return true;
}
const oldC = await this.localDatabase.getDBEntryFromMeta(old, {}, false, false);
const oldC = await this.localDatabase.getDBEntryFromMeta(old, false, false);
if (oldC) {
const d = (await deserialize(getDocDataAsArray(oldC.data), {})) as PluginDataEx;
if (d.files.length == dt.files.length) {
@@ -1488,14 +1497,14 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
});
}
async $anyProcessOptionalFileEvent(path: FilePath): Promise<boolean | undefined> {
async _anyProcessOptionalFileEvent(path: FilePath): Promise<boolean> {
return await this.watchVaultRawEventsAsync(path);
}
async watchVaultRawEventsAsync(path: FilePath) {
if (!this._isMainReady) return false;
if (this._isMainSuspended()) return false;
if (!this._isThisModuleEnabled()) return false;
if (!this.isThisModuleEnabled()) return false;
// if (!this.isTargetPath(path)) return false;
const stat = await this.plugin.storageAccess.statHidden(path);
// Make sure that target is a file.
@@ -1535,7 +1544,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
await shareRunningResult("scanAllConfigFiles", async () => {
const logLevel = showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
this._log("Scanning customizing files.", logLevel, "scan-all-config");
const term = this.plugin.$$getDeviceAndVaultName();
const term = this.services.setting.getDeviceAndVaultName();
if (term == "") {
this._log("We have to configure the device name", LOG_LEVEL_NOTICE);
return;
@@ -1673,11 +1682,14 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
return filenames as FilePath[];
}
async $allAskUsingOptionalSyncFeature(opt: { enableFetch?: boolean; enableOverwrite?: boolean }): Promise<boolean> {
await this._askHiddenFileConfiguration(opt);
private async _allAskUsingOptionalSyncFeature(opt: {
enableFetch?: boolean;
enableOverwrite?: boolean;
}): Promise<boolean> {
await this.__askHiddenFileConfiguration(opt);
return true;
}
async _askHiddenFileConfiguration(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
private async __askHiddenFileConfiguration(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
const message = `Would you like to enable **Customization sync**?
> [!DETAILS]-
@@ -1707,7 +1719,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
}
$anyGetOptionalConflictCheckMethod(path: FilePathWithPrefix): Promise<boolean | "newer"> {
_anyGetOptionalConflictCheckMethod(path: FilePathWithPrefix): Promise<boolean | "newer"> {
if (isPluginMetadata(path)) {
return Promise.resolve("newer");
}
@@ -1717,7 +1729,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
return Promise.resolve(false);
}
$allSuspendExtraSync(): Promise<boolean> {
private _allSuspendExtraSync(): Promise<boolean> {
if (this.plugin.settings.usePluginSync || this.plugin.settings.autoSweepPlugins) {
this._log(
"Customisation sync have been temporarily disabled. Please enable them after the fetching, if you need them.",
@@ -1729,10 +1741,11 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
return Promise.resolve(true);
}
async $anyConfigureOptionalSyncFeature(mode: "CUSTOMIZE" | "DISABLE" | "DISABLE_CUSTOM") {
private async _allConfigureOptionalSyncFeature(mode: keyof OPTIONAL_SYNC_FEATURES) {
await this.configureHiddenFileSync(mode);
return true;
}
async configureHiddenFileSync(mode: "CUSTOMIZE" | "DISABLE" | "DISABLE_CUSTOM") {
async configureHiddenFileSync(mode: keyof OPTIONAL_SYNC_FEATURES) {
if (mode == "DISABLE") {
this.plugin.settings.usePluginSync = false;
await this.plugin.saveSettings();
@@ -1740,7 +1753,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
if (mode == "CUSTOMIZE") {
if (!this.plugin.$$getDeviceAndVaultName()) {
if (!this.services.setting.getDeviceAndVaultName()) {
let name = await this.plugin.confirm.askString("Device name", "Please set this device name", `desktop`);
if (!name) {
if (Platform.isAndroidApp) {
@@ -1764,7 +1777,7 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
name = name + Math.random().toString(36).slice(-4);
}
this.plugin.$$setDeviceAndVaultName(name);
this.services.setting.setDeviceAndVaultName(name);
}
this.plugin.settings.usePluginSync = true;
this.plugin.settings.useAdvancedMode = true;
@@ -1789,4 +1802,17 @@ export class ConfigSync extends LiveSyncCommands implements IObsidianModule {
}
return files;
}
override onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.fileProcessing.processOptionalFileEvent.addHandler(this._anyProcessOptionalFileEvent.bind(this));
services.conflict.getOptionalConflictCheckMethod.addHandler(this._anyGetOptionalConflictCheckMethod.bind(this));
services.replication.processVirtualDocument.addHandler(this._anyModuleParsedReplicationResultItem.bind(this));
services.setting.onRealiseSetting.addHandler(this._everyRealizeSettingSyncMode.bind(this));
services.appLifecycle.onResuming.addHandler(this._everyOnResumeProcess.bind(this));
services.appLifecycle.onResumed.addHandler(this._everyAfterResumeProcess.bind(this));
services.replication.onBeforeReplicate.addHandler(this._everyBeforeReplicate.bind(this));
services.databaseEvents.onDatabaseInitialised.addHandler(this._everyOnDatabaseInitialized.bind(this));
services.setting.suspendExtraSync.addHandler(this._allSuspendExtraSync.bind(this));
services.setting.suggestOptionalFeatures.addHandler(this._allAskUsingOptionalSyncFeature.bind(this));
services.setting.enableOptionalFeature.addHandler(this._allConfigureOptionalSyncFeature.bind(this));
}
}
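The `onBindFunction` override replaces the previous convention of `$`-prefixed methods being discovered on the plugin with explicit registration against service hooks. A minimal sketch of such a handler point, with a hypothetical shape rather than the project's actual `InjectableServiceHub`:

```ts
// Hypothetical: one awaitable hook that modules register against.
class HandlerPoint<TArgs extends unknown[], TResult> {
    private handlers: ((...args: TArgs) => Promise<TResult>)[] = [];
    addHandler(fn: (...args: TArgs) => Promise<TResult>): void {
        this.handlers.push(fn);
    }
    async invokeAll(...args: TArgs): Promise<TResult[]> {
        const results: TResult[] = [];
        for (const handler of this.handlers) results.push(await handler(...args));
        return results;
    }
}
// e.g. services.replication.onBeforeReplicate could be a
// HandlerPoint<[boolean], boolean>; each module binds its own `_every...`
// method here instead of exposing a magic `$every...` name.
```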

View File

@@ -10,7 +10,7 @@
import { getDocData, timeDeltaToHumanReadable, unique } from "../../lib/src/common/utils";
import type ObsidianLiveSyncPlugin from "../../main";
// import { askString } from "../../common/utils";
import { Menu } from "obsidian";
import { Menu } from "@/deps.ts";
export let list: IPluginDataExDisplay[] = [];
export let thisTerm = "";

View File

@@ -14,7 +14,7 @@ export class PluginDialogModal extends Modal {
this.plugin = plugin;
}
onOpen() {
override onOpen() {
const { contentEl } = this;
this.contentEl.style.overflow = "auto";
this.contentEl.style.display = "flex";
@@ -28,7 +28,7 @@ export class PluginDialogModal extends Modal {
}
}
onClose() {
override onClose() {
if (this.component) {
void unmount(this.component);
this.component = undefined;

View File

@@ -10,7 +10,7 @@
pluginV2Progress,
} from "./CmdConfigSync.ts";
import PluginCombo from "./PluginCombo.svelte";
import { Menu, type PluginManifest } from "obsidian";
import { Menu, type PluginManifest } from "@/deps.ts";
import { unique } from "../../lib/src/common/utils";
import {
MODE_SELECTIVE,
@@ -25,7 +25,7 @@
export let plugin: ObsidianLiveSyncPlugin;
$: hideNotApplicable = false;
$: thisTerm = plugin.$$getDeviceAndVaultName();
$: thisTerm = plugin.services.setting.getDeviceAndVaultName();
const addOn = plugin.getAddOn(ConfigSync.name) as ConfigSync;
if (!addOn) {
@@ -98,7 +98,7 @@
await requestUpdate();
}
async function replicate() {
await plugin.$$replicate(true);
await plugin.services.replication.replicate(true);
}
function selectAllNewest(selectMode: boolean) {
selectNewestPulse++;
@@ -237,7 +237,7 @@
plugin.settings.pluginSyncExtendedSetting[key].files = files;
plugin.settings.pluginSyncExtendedSetting[key].mode = mode;
}
plugin.$$saveSettingData();
plugin.services.setting.saveSettingData();
}
function getIcon(mode: SYNC_MODE) {
if (mode in ICONS) {

View File

@@ -2,13 +2,14 @@ import { App, Modal } from "../../deps.ts";
import { type FilePath, type LoadedEntry } from "../../lib/src/common/types.ts";
import JsonResolvePane from "./JsonResolvePane.svelte";
import { waitForSignal } from "../../lib/src/common/utils.ts";
import { mount, unmount } from "svelte";
export class JsonResolveModal extends Modal {
// result: Array<[number, string]>;
filename: FilePath;
callback?: (keepRev?: string, mergedStr?: string) => Promise<void>;
docs: LoadedEntry[];
component?: JsonResolvePane;
component?: ReturnType<typeof mount>;
nameA: string;
nameB: string;
defaultSelect: string;
@@ -49,13 +50,13 @@ export class JsonResolveModal extends Modal {
this.callback = undefined;
}
onOpen() {
override onOpen() {
const { contentEl } = this;
this.titleEl.setText(this.title);
contentEl.empty();
if (this.component == undefined) {
this.component = new JsonResolvePane({
this.component = mount(JsonResolvePane, {
target: contentEl,
props: {
docs: this.docs,
@@ -73,7 +74,7 @@ export class JsonResolveModal extends Modal {
return;
}
onClose() {
override onClose() {
const { contentEl } = this;
contentEl.empty();
// contentEl.empty();
@@ -81,7 +82,7 @@ export class JsonResolveModal extends Modal {
void this.callback(undefined);
}
if (this.component != undefined) {
this.component.$destroy();
void unmount(this.component);
this.component = undefined;
}
}
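This modal follows the Svelte 5 migration: class instantiation (`new Component(...)`) and `$destroy()` give way to the functional `mount`/`unmount` API, which is why the field is now typed `ReturnType<typeof mount>`. A minimal sketch:

```ts
import { mount, unmount } from "svelte";
import JsonResolvePane from "./JsonResolvePane.svelte";

// Svelte 5: mount returns an opaque handle rather than a class instance.
const component = mount(JsonResolvePane, {
    target: document.body, // in the modal above, this is contentEl
    props: { nameA: "A", nameB: "B" },
});
// ...and tear-down replaces component.$destroy():
void unmount(component);
```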

View File

@@ -2,29 +2,64 @@
import { type Diff, DIFF_DELETE, DIFF_INSERT, diff_match_patch } from "../../deps.ts";
import type { FilePath, LoadedEntry } from "../../lib/src/common/types.ts";
import { decodeBinary, readString } from "../../lib/src/string_and_binary/convert.ts";
import { getDocData, mergeObject } from "../../lib/src/common/utils.ts";
import { getDocData, isObjectDifferent, mergeObject } from "../../lib/src/common/utils.ts";
export let docs: LoadedEntry[] = [];
export let callback: (keepRev?: string, mergedStr?: string) => Promise<void> = async (_, __) => {
Promise.resolve();
};
export let filename: FilePath = "" as FilePath;
export let nameA: string = "A";
export let nameB: string = "B";
export let defaultSelect: string = "";
export let keepOrder = false;
export let hideLocal: boolean = false;
let docA: LoadedEntry;
let docB: LoadedEntry;
let docAContent = "";
let docBContent = "";
let objA: any = {};
let objB: any = {};
let objAB: any = {};
let objBA: any = {};
let diffs: Diff[];
interface Props {
docs?: LoadedEntry[];
callback?: (keepRev?: string, mergedStr?: string) => Promise<void>;
filename?: FilePath;
nameA?: string;
nameB?: string;
defaultSelect?: string;
keepOrder?: boolean;
hideLocal?: boolean;
}
let {
docs = $bindable([]),
callback = $bindable((async (_, __) => {
Promise.resolve();
}) as (keepRev?: string, mergedStr?: string) => Promise<void>),
filename = $bindable("" as FilePath),
nameA = $bindable("A"),
nameB = $bindable("B"),
defaultSelect = $bindable("" as string),
keepOrder = $bindable(false),
hideLocal = $bindable(false),
}: Props = $props();
type JSONData = Record<string | number | symbol, any> | [any];
const docsArray = $derived.by(() => {
if (docs && docs.length >= 1) {
if (keepOrder || docs[0].mtime < docs[1].mtime) {
return { a: docs[0], b: docs[1] } as const;
} else {
return { a: docs[1], b: docs[0] } as const;
}
}
return { a: false, b: false } as const;
});
const docA = $derived(docsArray.a);
const docB = $derived(docsArray.b);
const docAContent = $derived(docA && docToString(docA));
const docBContent = $derived(docB && docToString(docB));
function parseJson(json: string | false) {
if (json === false) return false;
try {
return JSON.parse(json) as JSONData;
} catch (ex) {
return false;
}
}
const objA = $derived(parseJson(docAContent) || {});
const objB = $derived(parseJson(docBContent) || {});
const objAB = $derived(mergeObject(objA, objB));
const objBAw = $derived(mergeObject(objB, objA));
const objBA = $derived(isObjectDifferent(objBAw, objAB) ? objBAw : false);
let diffs: Diff[] = $derived.by(() => (objA && selectedObj ? getJsonDiff(objA, selectedObj) : []));
type SelectModes = "" | "A" | "B" | "AB" | "BA";
let mode: SelectModes = defaultSelect as SelectModes;
let mode: SelectModes = $state(defaultSelect as SelectModes);
function docToString(doc: LoadedEntry) {
return doc.datatype == "plain" ? getDocData(doc.data) : readString(new Uint8Array(decodeBinary(doc.data)));
@@ -45,6 +80,7 @@
return getDiff(JSON.stringify(a, null, 2), JSON.stringify(b, null, 2));
}
function apply() {
if (!docA || !docB) return;
if (docA._id == docB._id) {
if (mode == "A") return callback(docA._rev!, undefined);
if (mode == "B") return callback(docB._rev!, undefined);
@@ -59,50 +95,23 @@
function cancel() {
callback(undefined, undefined);
}
$: {
if (docs && docs.length >= 1) {
if (keepOrder || docs[0].mtime < docs[1].mtime) {
docA = docs[0];
docB = docs[1];
} else {
docA = docs[1];
docB = docs[0];
}
docAContent = docToString(docA);
docBContent = docToString(docB);
const mergedObjs = $derived.by(
() =>
({
"": false,
A: objA,
B: objB,
AB: objAB,
BA: objBA,
}) as Record<SelectModes, JSONData | false>
);
try {
objA = false;
objB = false;
objA = JSON.parse(docAContent);
objB = JSON.parse(docBContent);
objAB = mergeObject(objA, objB);
objBA = mergeObject(objB, objA);
if (JSON.stringify(objAB) == JSON.stringify(objBA)) {
objBA = false;
}
} catch (ex) {
objBA = false;
objAB = false;
}
}
}
$: mergedObjs = {
"": false,
A: objA,
B: objB,
AB: objAB,
BA: objBA,
};
let selectedObj = $derived(mode in mergedObjs ? mergedObjs[mode] : {});
$: selectedObj = mode in mergedObjs ? mergedObjs[mode] : {};
$: {
diffs = getJsonDiff(objA, selectedObj);
}
let modesSrc = $state([] as ["" | "A" | "B" | "AB" | "BA", string][]);
let modes = [] as ["" | "A" | "B" | "AB" | "BA", string][];
$: {
let newModes = [] as typeof modes;
const modes = $derived.by(() => {
let newModes = [] as typeof modesSrc;
if (!hideLocal) {
newModes.push(["", "Not now"]);
@@ -111,15 +120,15 @@
newModes.push(["B", nameB || "B"]);
newModes.push(["AB", `${nameA || "A"} + ${nameB || "B"}`]);
newModes.push(["BA", `${nameB || "B"} + ${nameA || "A"}`]);
modes = newModes;
}
return newModes;
});
</script>
<h2>{filename}</h2>
{#if !docA || !docB}
<div class="message">Just for a minute, please!</div>
<div class="buttons">
<button on:click={apply}>Dismiss</button>
<button onclick={apply}>Dismiss</button>
</div>
{:else}
<div class="options">
@@ -148,39 +157,39 @@
<div class="infos">
<table>
<tbody>
<tr>
<th>{nameA}</th>
<td
>{#if docA._id == docB._id}
Rev:{revStringToRevNumber(docA._rev)}
{/if}
{new Date(docA.mtime).toLocaleString()}</td
>
<td>
{docAContent.length} letters
</td>
</tr>
<tr>
<th>{nameB}</th>
<td
>{#if docA._id == docB._id}
Rev:{revStringToRevNumber(docB._rev)}
{/if}
{new Date(docB.mtime).toLocaleString()}</td
>
<td>
{docBContent.length} letters
</td>
</tr>
<tr>
<th>{nameA}</th>
<td
>{#if docA._id == docB._id}
Rev:{revStringToRevNumber(docA._rev)}
{/if}
{new Date(docA.mtime).toLocaleString()}</td
>
<td>
{docAContent && docAContent.length} letters
</td>
</tr>
<tr>
<th>{nameB}</th>
<td
>{#if docA._id == docB._id}
Rev:{revStringToRevNumber(docB._rev)}
{/if}
{new Date(docB.mtime).toLocaleString()}</td
>
<td>
{docBContent && docBContent.length} letters
</td>
</tr>
</tbody>
</table>
</div>
<div class="buttons">
{#if hideLocal}
<button on:click={cancel}>Cancel</button>
<button onclick={cancel}>Cancel</button>
{/if}
<button on:click={apply}>Apply</button>
<button onclick={apply}>Apply</button>
</div>
{/if}

View File

@@ -1,4 +1,4 @@
import { normalizePath, type PluginManifest, type ListedFiles } from "../../deps.ts";
import { type PluginManifest, type ListedFiles } from "../../deps.ts";
import {
type LoadedEntry,
type FilePathWithPrefix,
@@ -10,7 +10,6 @@ import {
MODE_PAUSED,
type SavingEntry,
type DocumentID,
type FilePathWithPrefixLC,
type UXFileInfo,
type UXStat,
LOG_LEVEL_DEBUG,
@@ -25,36 +24,45 @@ import {
readContent,
createBlob,
fireAndForget,
type CustomRegExp,
getFileRegExp,
} from "../../lib/src/common/utils.ts";
import {
compareMTime,
unmarkChanges,
getPath,
isInternalMetadata,
markChangesAreSame,
PeriodicProcessor,
TARGET_IS_NEW,
scheduleTask,
getDocProps,
getLogLevel,
autosaveCache,
type MapLike,
onlyInNTimes,
BASE_IS_NEW,
EVEN,
displayRev,
} from "../../common/utils.ts";
import { serialized, skipIfDuplicated } from "../../lib/src/concurrency/lock.ts";
import { serialized, skipIfDuplicated } from "octagonal-wheels/concurrency/lock";
import { JsonResolveModal } from "../HiddenFileCommon/JsonResolveModal.ts";
import { LiveSyncCommands } from "../LiveSyncCommands.ts";
import { addPrefix, stripAllPrefixes } from "../../lib/src/string_and_binary/path.ts";
import { QueueProcessor } from "../../lib/src/concurrency/processor.ts";
import { QueueProcessor } from "octagonal-wheels/concurrency/processor";
import { hiddenFilesEventCount, hiddenFilesProcessingCount } from "../../lib/src/mock_and_interop/stores.ts";
import type { IObsidianModule } from "../../modules/AbstractObsidianModule.ts";
import { EVENT_SETTING_SAVED, eventHub } from "../../common/events.ts";
import type { LiveSyncLocalDB } from "../../lib/src/pouchdb/LiveSyncLocalDB.ts";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
import type { LiveSyncCore } from "../../main.ts";
type SyncDirection = "push" | "pull" | "safe" | "pullForce" | "pushForce";
declare global {
interface OPTIONAL_SYNC_FEATURES {
FETCH: "FETCH";
OVERWRITE: "OVERWRITE";
MERGE: "MERGE";
DISABLE: "DISABLE";
DISABLE_HIDDEN: "DISABLE_HIDDEN";
}
}
function getComparingMTime(
doc: (MetaEntry | LoadedEntry | false) | UXFileInfo | UXStat | null | undefined,
includeDeleted = false
@@ -70,21 +78,21 @@ function getComparingMTime(
return doc.mtime ?? 0;
}
export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule {
_isThisModuleEnabled() {
export class HiddenFileSync extends LiveSyncCommands {
isThisModuleEnabled() {
return this.plugin.settings.syncInternalFiles;
}
periodicInternalFileScanProcessor: PeriodicProcessor = new PeriodicProcessor(
this.plugin,
async () => this._isThisModuleEnabled() && this._isDatabaseReady() && (await this.scanAllStorageChanges(false))
async () => this.isThisModuleEnabled() && this._isDatabaseReady() && (await this.scanAllStorageChanges(false))
);
get kvDB() {
return this.plugin.kvDB;
}
getConflictedDoc(path: FilePathWithPrefix, rev: string) {
return this.plugin.localDatabase.getConflictedDoc(path, rev);
return this.plugin.managers.conflictManager.getConflictedDoc(path, rev);
}
onunload() {
this.periodicInternalFileScanProcessor?.disable();
@@ -130,14 +138,19 @@ export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule
this.updateSettingCache();
});
}
async $everyOnInitializeDatabase(db: LiveSyncLocalDB): Promise<boolean> {
// We cannot initialise autosaveCache because kvDB is not ready yet
// async _everyOnInitializeDatabase(db: LiveSyncLocalDB): Promise<boolean> {
// this._fileInfoLastProcessed = await autosaveCache(this.kvDB, "hidden-file-lastProcessed");
// this._databaseInfoLastProcessed = await autosaveCache(this.kvDB, "hidden-file-lastProcessed-database");
// this._fileInfoLastKnown = await autosaveCache(this.kvDB, "hidden-file-lastKnown");
// return true;
// }
private async _everyOnDatabaseInitialized(showNotice: boolean) {
this._fileInfoLastProcessed = await autosaveCache(this.kvDB, "hidden-file-lastProcessed");
this._databaseInfoLastProcessed = await autosaveCache(this.kvDB, "hidden-file-lastProcessed-database");
this._fileInfoLastKnown = await autosaveCache(this.kvDB, "hidden-file-lastKnown");
return true;
}
async $everyOnDatabaseInitialized(showNotice: boolean) {
if (this._isThisModuleEnabled()) {
if (this.isThisModuleEnabled()) {
if (this._fileInfoLastProcessed.size == 0 && this._databaseInfoLastProcessed.size == 0) {
this._log(`No cache found. Performing startup scan.`, LOG_LEVEL_VERBOSE);
await this.performStartupScan(true);
@@ -147,9 +160,9 @@ export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule
}
return true;
}
async $everyBeforeReplicate(showNotice: boolean) {
async _everyBeforeReplicate(showNotice: boolean) {
if (
this._isThisModuleEnabled() &&
this.isThisModuleEnabled() &&
this._isDatabaseReady() &&
this.settings.syncInternalFilesBeforeReplication &&
!this.settings.watchInternalFileChanges
@@ -159,82 +172,66 @@ export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule
return true;
}
$everyOnloadAfterLoadSettings(): Promise<boolean> {
private _everyOnloadAfterLoadSettings(): Promise<boolean> {
this.updateSettingCache();
return Promise.resolve(true);
}
updateSettingCache() {
const ignorePatterns = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
this.ignorePatterns = ignorePatterns;
this.shouldSkipFile = [] as FilePathWithPrefixLC[];
// Exclude files handled by customization sync
const configDir = normalizePath(this.app.vault.configDir);
const shouldSKip = !this.settings.usePluginSync
? []
: Object.values(this.settings.pluginSyncExtendedSetting)
.filter((e) => e.mode == MODE_SELECTIVE || e.mode == MODE_PAUSED)
.map((e) => e.files)
.flat()
.map((e) => `${configDir}/${e}`.toLowerCase());
this.shouldSkipFile = shouldSKip as FilePathWithPrefixLC[];
this._log(`Hidden file will skip ${this.shouldSkipFile.length} files`, LOG_LEVEL_INFO);
this.cacheCustomisationSyncIgnoredFiles.clear();
this.cacheFileRegExps.clear();
}
isReady() {
if (!this._isMainReady) return false;
if (this._isMainSuspended()) return false;
if (!this._isThisModuleEnabled()) return false;
if (!this.isThisModuleEnabled()) return false;
return true;
}
shouldSkipFile = [] as FilePathWithPrefixLC[];
async performStartupScan(showNotice: boolean) {
await this.applyOfflineChanges(showNotice);
}
async $everyOnResumeProcess(): Promise<boolean> {
async _everyOnResumeProcess(): Promise<boolean> {
this.periodicInternalFileScanProcessor?.disable();
if (this._isMainSuspended()) return true;
if (this._isThisModuleEnabled()) {
if (this.isThisModuleEnabled()) {
await this.performStartupScan(false);
}
this.periodicInternalFileScanProcessor.enable(
this._isThisModuleEnabled() && this.settings.syncInternalFilesInterval
this.isThisModuleEnabled() && this.settings.syncInternalFilesInterval
? this.settings.syncInternalFilesInterval * 1000
: 0
);
return true;
}
$everyRealizeSettingSyncMode(): Promise<boolean> {
_everyRealizeSettingSyncMode(): Promise<boolean> {
this.periodicInternalFileScanProcessor?.disable();
if (this._isMainSuspended()) return Promise.resolve(true);
if (!this.plugin.$$isReady()) return Promise.resolve(true);
if (!this.services.appLifecycle.isReady()) return Promise.resolve(true);
this.periodicInternalFileScanProcessor.enable(
this._isThisModuleEnabled() && this.settings.syncInternalFilesInterval
this.isThisModuleEnabled() && this.settings.syncInternalFilesInterval
? this.settings.syncInternalFilesInterval * 1000
: 0
);
const ignorePatterns = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
this.ignorePatterns = ignorePatterns;
this.cacheFileRegExps.clear();
// const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
// this.ignorePatterns = ignorePatterns;
// const targetFilter = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
// this.targetPatterns = targetFilter;
return Promise.resolve(true);
}
async $anyProcessOptionalFileEvent(path: FilePath): Promise<boolean | undefined> {
async _anyProcessOptionalFileEvent(path: FilePath): Promise<boolean> {
if (this.isReady()) {
return await this.trackStorageFileModification(path);
return (await this.trackStorageFileModification(path)) || false;
}
return false;
}
$anyGetOptionalConflictCheckMethod(path: FilePathWithPrefix): Promise<boolean | "newer"> {
_anyGetOptionalConflictCheckMethod(path: FilePathWithPrefix): Promise<boolean | "newer"> {
if (isInternalMetadata(path)) {
this.queueConflictCheck(path);
return Promise.resolve(true);
@@ -242,12 +239,12 @@ export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule
return Promise.resolve(false);
}
async $anyProcessOptionalSyncFiles(doc: LoadedEntry): Promise<boolean | undefined> {
async _anyProcessOptionalSyncFiles(doc: LoadedEntry): Promise<boolean> {
if (isInternalMetadata(doc._id)) {
if (this._isThisModuleEnabled()) {
if (this.isThisModuleEnabled()) {
//system file
const filename = getPath(doc);
if (await this.plugin.$$isTargetFile(filename)) {
const filename = this.getPath(doc);
if (await this.services.vault.isTargetFile(filename)) {
// this.procInternalFile(filename);
await this.processReplicationResult(doc);
return true;
@@ -546,8 +543,11 @@ Offline Changed files: ${processFiles.length}`;
forceWrite = false,
includeDeleted = true
): Promise<boolean | undefined> {
if (this.shouldSkipFile.some((e) => e.startsWith(path.toLowerCase()))) {
this._log(`Hidden file skipped: ${path} is synchronized in customization sync.`, LOG_LEVEL_VERBOSE);
if (!(await this.isTargetFile(path))) {
this._log(
`Storage file tracking: Hidden file skipped: ${path} is filtered out by the defined patterns.`,
LOG_LEVEL_VERBOSE
);
return false;
}
try {
@@ -700,8 +700,8 @@ Offline Changed files: ${processFiles.length}`;
revFrom._revs_info
?.filter((e) => e.status == "available" && Number(e.rev.split("-")[0]) < conflictedRevNo)
.first()?.rev ?? "";
const result = await this.plugin.localDatabase.mergeObject(
path,
const result = await this.plugin.managers.conflictManager.mergeObject(
doc.path,
commonBase,
doc._rev,
conflictedRev
@@ -722,6 +722,13 @@ Offline Changed files: ${processFiles.length}`;
} else {
this._log(`Object merge is not applicable.`, LOG_LEVEL_VERBOSE);
}
// const pat = this.settings.syncInternalFileOverwritePatterns;
const regExp = getFileRegExp(this.settings, "syncInternalFileOverwritePatterns");
if (regExp.some((r) => r.test(stripAllPrefixes(path)))) {
this._log(`Overwrite rule applied for conflicted hidden file: ${path}`, LOG_LEVEL_INFO);
await this.resolveByNewerEntry(id, path, doc, revA, revB);
return [];
}
return [{ path, revA, revB, id, doc }];
}
// When not JSON file, resolve conflicts by choosing a newer one.
@@ -836,9 +843,32 @@ Offline Changed files: ${processFiles.length}`;
// <-- Conflict processing
// --> Event Source Handler (Database)
getDocProps(doc: LoadedEntry) {
/*
type DocumentProps = {
id: DocumentID;
rev?: string;
prefixedPath: FilePathWithPrefix;
path: FilePath;
isDeleted: boolean;
revDisplay: string;
shortenedId: string;
shortenedPath: string;
};
*/
const id = doc._id;
const shortenedId = id.substring(0, 10);
const prefixedPath = this.getPath(doc);
const path = stripAllPrefixes(prefixedPath);
const rev = doc._rev;
const revDisplay = rev ? displayRev(rev) : "0-NOREVS";
// const prefix = prefixedPath.substring(0, prefixedPath.length - path.length);
const shortenedPath = path.substring(0, 10);
const isDeleted = doc._deleted || doc.deleted || false;
return { id, rev, revDisplay, prefixedPath, path, isDeleted, shortenedId, shortenedPath };
}
async processReplicationResult(doc: LoadedEntry): Promise<boolean> {
const info = getDocProps(doc);
const info = this.getDocProps(doc);
const path = info.path;
const headerLine = `Tracking DB ${info.path} (${info.revDisplay}) :`;
const ret = await this.trackDatabaseFileModification(path, headerLine);
@@ -850,6 +880,108 @@ Offline Changed files: ${processFiles.length}`;
// --> Database Event Functions
cacheFileRegExps = new Map<string, CustomRegExp[][]>();
/**
* Parses the regular expression settings for hidden file synchronization.
* @returns An object containing the ignore and target filters.
*/
parseRegExpSettings() {
const regExpKey = `${this.plugin.settings.syncInternalFilesTargetPatterns}||${this.plugin.settings.syncInternalFilesIgnorePatterns}`;
let ignoreFilter: CustomRegExp[];
let targetFilter: CustomRegExp[];
if (this.cacheFileRegExps.has(regExpKey)) {
const cached = this.cacheFileRegExps.get(regExpKey)!;
ignoreFilter = cached[1];
targetFilter = cached[0];
} else {
ignoreFilter = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
targetFilter = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
this.cacheFileRegExps.clear();
this.cacheFileRegExps.set(regExpKey, [targetFilter, ignoreFilter]);
}
return { ignoreFilter, targetFilter };
}
/**
* Checks if the target file path matches the defined patterns.
*/
isTargetFileInPatterns(path: string): boolean {
const { ignoreFilter, targetFilter } = this.parseRegExpSettings();
if (ignoreFilter && ignoreFilter.length > 0) {
for (const pattern of ignoreFilter) {
if (pattern.test(path)) {
return false;
}
}
}
if (targetFilter && targetFilter.length > 0) {
for (const pattern of targetFilter) {
if (pattern.test(path)) {
return true;
}
}
// When target patterns are defined, they act as an allow-list.
return false;
}
return true;
}
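The two filter lists behave asymmetrically, as the inline comment notes; a summary with hypothetical patterns:

```ts
// Evaluation order, mirroring the method above (patterns are hypothetical):
//   1. any ignore pattern match        -> excluded (deny-list always wins)
//   2. target patterns are defined     -> path must match one (allow-list)
//   3. no target patterns at all       -> included by default
const ignore = [/node_modules/i];
const target = [/\.json$/i];
// ".obsidian/app.json"                       -> true  (no ignore hit, target hit)
// ".obsidian/plugins/x/node_modules/a.json"  -> false (ignore hit)
// ".obsidian/workspace"                      -> false (target list acts as allow-list)
```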
cacheCustomisationSyncIgnoredFiles = new Map<string, string[]>();
/**
* Gets the list of files ignored for customization synchronization.
* @returns An array of ignored file paths (lowercase).
*/
getCustomisationSynchronizationIgnoredFiles(): string[] {
const configDir = this.plugin.app.vault.configDir;
const key =
JSON.stringify(this.settings.pluginSyncExtendedSetting) + `||${this.settings.usePluginSync}||${configDir}`;
if (this.cacheCustomisationSyncIgnoredFiles.has(key)) {
return this.cacheCustomisationSyncIgnoredFiles.get(key)!;
}
this.cacheCustomisationSyncIgnoredFiles.clear();
const synchronisedInConfigSync = !this.settings.usePluginSync
? []
: Object.values(this.settings.pluginSyncExtendedSetting)
.filter((e) => e.mode == MODE_SELECTIVE || e.mode == MODE_PAUSED)
.map((e) => e.files)
.flat()
.map((e) => `${configDir}/${e}`.toLowerCase());
this.cacheCustomisationSyncIgnoredFiles.set(key, synchronisedInConfigSync);
return synchronisedInConfigSync;
}
/**
* Checks if the given path is not ignored by customization synchronization.
* @param path The file path to check.
* @returns True if the path is not ignored; otherwise, false.
*/
isNotIgnoredByCustomisationSync(path: string): boolean {
const ignoredFiles = this.getCustomisationSynchronizationIgnoredFiles();
const result = !ignoredFiles.some((e) => path.startsWith(e));
// console.warn(`Assertion: isNotIgnoredByCustomisationSync(${path}) = ${result}`);
return result;
}
isHiddenFileSyncHandlingPath(path: FilePath): boolean {
const result = path.startsWith(".") && !path.startsWith(".trash");
// console.warn(`Assertion: isHiddenFileSyncHandlingPath(${path}) = ${result}`);
return result;
}
async isTargetFile(path: FilePath): Promise<boolean> {
const result =
this.isTargetFileInPatterns(path) &&
this.isNotIgnoredByCustomisationSync(path) &&
this.isHiddenFileSyncHandlingPath(path);
// console.warn(`Assertion: isTargetFile(${path}) : ${result ? "✔️" : "❌"}`);
if (!result) {
return false;
}
const resultByFile = await this.services.vault.isIgnoredByIgnoreFile(path);
// console.warn(`${path} -> isIgnoredByIgnoreFile: ${resultByFile ? "❌" : "✔️"}`);
return !resultByFile;
}
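`isTargetFile` layers the cheap synchronous checks before the async ignore-file lookup; some hypothetical outcomes:

```ts
// Hypothetical calls, assuming default settings:
// await this.isTargetFile(".obsidian/app.json" as FilePath)
//   -> true, unless a pattern or an ignore file excludes it
// await this.isTargetFile(".trash/old.md" as FilePath)
//   -> false: isHiddenFileSyncHandlingPath() rejects .trash
// await this.isTargetFile(".obsidian/plugins/x/data.json" as FilePath)
//   -> false when customisation sync manages that file selectively
```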
async trackScannedDatabaseChange(
processFiles: MetaEntry[],
showNotice: boolean = false,
@@ -863,14 +995,21 @@ Offline Changed files: ${processFiles.length}`;
const processes = processFiles.map(async (file) => {
try {
const path = stripAllPrefixes(this.getPath(file));
await this.trackDatabaseFileModification(
path,
"[Hidden file scan]",
!forceWriteAll,
onlyNew,
file,
includeDeletion
);
if (!(await this.isTargetFile(path))) {
this._log(
`Database file tracking: Hidden file skipped: ${path} is filtered out by the defined patterns.`,
LOG_LEVEL_VERBOSE
);
} else {
await this.trackDatabaseFileModification(
path,
"[Hidden file scan]",
!forceWriteAll,
onlyNew,
file,
includeDeletion
);
}
notifyProgress();
} catch (ex) {
this._log(`Failed to process storage change file:${file}`, logLevel);
@@ -891,7 +1030,7 @@ Offline Changed files: ${processFiles.length}`;
p.log("Enumerating database files...");
const currentDatabaseFiles = await this.getAllDatabaseFiles();
const allDatabaseMap = Object.fromEntries(
currentDatabaseFiles.map((e) => [stripAllPrefixes(getPath(e)), e])
currentDatabaseFiles.map((e) => [stripAllPrefixes(this.getPath(e)), e])
);
const currentDatabaseFileNames = [...Object.keys(allDatabaseMap)] as FilePath[];
const untrackedLocal = currentStorageFiles.filter((e) => !this._fileInfoLastProcessed.has(e));
@@ -1092,14 +1231,14 @@ Offline Changed files: ${files.length}`;
// If something changes left, notify for reloading Obsidian.
if (updatedFolders.indexOf(this.plugin.app.vault.configDir) >= 0) {
if (!this.plugin.$$isReloadingScheduled()) {
if (!this.services.appLifecycle.isReloadingScheduled()) {
this.plugin.confirm.askInPopup(
`updated-any-hidden`,
`Some setting files have been modified\nPress {HERE} to schedule a reload of Obsidian, or press elsewhere to dismiss this message.`,
(anchor) => {
anchor.text = "HERE";
anchor.addEventListener("click", () => {
this.plugin.$$scheduleAppReload();
this.services.appLifecycle.scheduleRestart();
});
}
);
@@ -1134,14 +1273,14 @@ Offline Changed files: ${files.length}`;
: currentStorageFilesAll;
p.log("Enumerating database files...");
const allDatabaseFiles = await this.getAllDatabaseFiles();
const allDatabaseMap = new Map(allDatabaseFiles.map((e) => [stripAllPrefixes(getPath(e)), e]));
const allDatabaseMap = new Map(allDatabaseFiles.map((e) => [stripAllPrefixes(this.getPath(e)), e]));
const currentDatabaseFiles = targetFiles
? allDatabaseFiles.filter((e) => targetFiles.some((f) => f == stripAllPrefixes(getPath(e))))
? allDatabaseFiles.filter((e) => targetFiles.some((f) => f == stripAllPrefixes(this.getPath(e))))
: allDatabaseFiles;
const allFileNames = new Set([
...currentStorageFiles,
...currentDatabaseFiles.map((e) => stripAllPrefixes(getPath(e))),
...currentDatabaseFiles.map((e) => stripAllPrefixes(this.getPath(e))),
]);
const storageToDatabase = [] as FilePath[];
const databaseToStorage = [] as MetaEntry[];
@@ -1203,7 +1342,13 @@ Offline Changed files: ${files.length}`;
).rows
.filter((e) => isInternalMetadata(e.id as DocumentID))
.map((e) => e.doc) as MetaEntry[];
return allFiles;
const files = [] as MetaEntry[];
for (const file of allFiles) {
if (await this.isTargetFile(stripAllPrefixes(this.getPath(file)))) {
files.push(file);
}
}
return files;
}
async rebuildFromDatabase(showNotice: boolean, targetFiles: FilePath[] | false = false, onlyNew = false) {
@@ -1218,7 +1363,7 @@ Offline Changed files: ${files.length}`;
// However, from the perspective of performance and future-proofing, I feel somewhat justified in doing it here.
const currentFiles = targetFiles
? allFiles.filter((e) => targetFiles.some((f) => f == stripAllPrefixes(getPath(e))))
? allFiles.filter((e) => targetFiles.some((f) => f == stripAllPrefixes(this.getPath(e))))
: allFiles;
p.once(`Database to Storage: ${currentFiles.length} files.`);
@@ -1261,7 +1406,7 @@ Offline Changed files: ${files.length}`;
const onlyNew = direction == "pull";
p.log(`Started: Database --> Storage ${onlyNew ? "(Only New)" : ""}`);
const updatedEntries = await this.rebuildFromDatabase(showMessage, targetFiles, onlyNew);
const updatedFiles = updatedEntries.map((e) => stripAllPrefixes(getPath(e)));
const updatedFiles = updatedEntries.map((e) => stripAllPrefixes(this.getPath(e)));
// making doubly sure, No more losing files.
await this.adoptCurrentStorageFilesAsProcessed(updatedFiles);
await this.adoptCurrentDatabaseFilesAsProcessed(updatedFiles);
@@ -1319,7 +1464,7 @@ Offline Changed files: ${files.length}`;
async storeInternalFileToDatabase(file: InternalFileInfo | UXFileInfo, forceWrite = false) {
const storeFilePath = stripAllPrefixes(file.path as FilePath);
const storageFilePath = file.path;
if (await this.plugin.$$isIgnoredByIgnoreFiles(storageFilePath)) {
if (await this.services.vault.isIgnoredByIgnoreFile(storageFilePath)) {
return undefined;
}
const prefixedFileName = addPrefix(storeFilePath, ICHeader);
@@ -1373,7 +1518,7 @@ Offline Changed files: ${files.length}`;
const displayFileName = filenameSrc;
const prefixedFileName = addPrefix(storeFilePath, ICHeader);
const mtime = new Date().getTime();
if (await this.plugin.$$isIgnoredByIgnoreFiles(storageFilePath)) {
if (await this.services.vault.isIgnoredByIgnoreFile(storageFilePath)) {
return undefined;
}
return await serialized("file-" + prefixedFileName, async () => {
@@ -1433,7 +1578,7 @@ Offline Changed files: ${files.length}`;
includeDeletion = true
) {
const prefixedFileName = addPrefix(storageFilePath, ICHeader);
if (await this.plugin.$$isIgnoredByIgnoreFiles(storageFilePath)) {
if (await this.services.vault.isIgnoredByIgnoreFile(storageFilePath)) {
return undefined;
}
return await serialized("file-" + prefixedFileName, async () => {
@@ -1480,7 +1625,7 @@ Offline Changed files: ${files.length}`;
}
const deleted = metaOnDB.deleted || metaOnDB._deleted || false;
if (deleted) {
const result = await this._deleteFile(storageFilePath);
const result = await this.__deleteFile(storageFilePath);
if (result == "OK") {
this.updateLastProcessedDeletion(storageFilePath, metaOnDB);
return true;
@@ -1490,11 +1635,11 @@ Offline Changed files: ${files.length}`;
}
return false;
} else {
const fileOnDB = await this.localDatabase.getDBEntryFromMeta(metaOnDB, {}, false, true, true);
const fileOnDB = await this.localDatabase.getDBEntryFromMeta(metaOnDB, false, true);
if (fileOnDB === false) {
throw new Error(`Failed to read file from database:${storageFilePath}`);
}
const resultStat = await this._writeFile(storageFilePath, fileOnDB, force);
const resultStat = await this.__writeFile(storageFilePath, fileOnDB, force);
if (resultStat) {
this.updateLastProcessed(storageFilePath, metaOnDB, resultStat);
this.queueNotification(storageFilePath);
@@ -1527,7 +1672,7 @@ Offline Changed files: ${files.length}`;
}
}
async _writeFile(storageFilePath: FilePath, fileOnDB: LoadedEntry, force: boolean): Promise<false | UXStat> {
async __writeFile(storageFilePath: FilePath, fileOnDB: LoadedEntry, force: boolean): Promise<false | UXStat> {
try {
const statBefore = await this.plugin.storageAccess.statHidden(storageFilePath);
const isExist = statBefore != null;
@@ -1566,7 +1711,7 @@ Offline Changed files: ${files.length}`;
}
}
async _deleteFile(storageFilePath: FilePath): Promise<false | "OK" | "ALREADY"> {
async __deleteFile(storageFilePath: FilePath): Promise<false | "OK" | "ALREADY"> {
const result = await this.__removeFile(storageFilePath);
if (result === false) {
this._log(`STORAGE <x- DB: ${storageFilePath}: deleting (hidden) Failed`);
@@ -1583,11 +1728,11 @@ Offline Changed files: ${files.length}`;
// <-- Database To Storage Functions
async $allAskUsingOptionalSyncFeature(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
await this._askHiddenFileConfiguration(opt);
private async _allAskUsingOptionalSyncFeature(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
await this.__askHiddenFileConfiguration(opt);
return true;
}
async _askHiddenFileConfiguration(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
private async __askHiddenFileConfiguration(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
const messageFetch = `${opt.enableFetch ? `> - Fetch: Use the files stored by other devices. Choose this option if you have already configured hidden file synchronization on those devices and wish to accept their files.\n` : ""}`;
const messageOverwrite = `${opt.enableOverwrite ? `> - Overwrite: Use the files from this device. Select this option if you want to overwrite the files stored on other devices.\n` : ""}`;
const messageMerge = `> - Merge: Merge the files from this device with those on other devices. Choose this option if you wish to combine files from multiple sources.
@@ -1633,7 +1778,7 @@ ${messageFetch}${messageOverwrite}${messageMerge}
}
}
$allSuspendExtraSync(): Promise<boolean> {
private _allSuspendExtraSync(): Promise<boolean> {
if (this.plugin.settings.syncInternalFiles) {
this._log(
"Hidden file synchronization have been temporarily disabled. Please enable them after the fetching, if you need them.",
@@ -1645,11 +1790,12 @@ ${messageFetch}${messageOverwrite}${messageMerge}
}
// --> Configuration handling
async $anyConfigureOptionalSyncFeature(mode: "FETCH" | "OVERWRITE" | "MERGE" | "DISABLE" | "DISABLE_HIDDEN") {
private async _allConfigureOptionalSyncFeature(mode: keyof OPTIONAL_SYNC_FEATURES) {
await this.configureHiddenFileSync(mode);
return true;
}
async configureHiddenFileSync(mode: "FETCH" | "OVERWRITE" | "MERGE" | "DISABLE" | "DISABLE_HIDDEN") {
async configureHiddenFileSync(mode: keyof OPTIONAL_SYNC_FEATURES) {
if (
mode != "FETCH" &&
mode != "OVERWRITE" &&
@@ -1683,31 +1829,13 @@ ${messageFetch}${messageOverwrite}${messageMerge}
// <-- Configuration handling
// --> Local Storage SubFunctions
ignorePatterns: RegExp[] = [];
async scanInternalFileNames() {
const configDir = normalizePath(this.app.vault.configDir);
const ignoreFilter = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",")
.filter((e) => e)
.map((e) => new RegExp(e, "i"));
const synchronisedInConfigSync = !this.settings.usePluginSync
? []
: Object.values(this.settings.pluginSyncExtendedSetting)
.filter((e) => e.mode == MODE_SELECTIVE || e.mode == MODE_PAUSED)
.map((e) => e.files)
.flat()
.map((e) => `${configDir}/${e}`.toLowerCase());
const root = this.app.vault.getRoot();
const findRoot = root.path;
const filenames = (await this.getFiles(findRoot, [], undefined, ignoreFilter))
.filter((e) => e.startsWith("."))
.filter((e) => !e.startsWith(".trash"));
const files = filenames.filter((path) =>
synchronisedInConfigSync.every((filterFile) => !path.toLowerCase().startsWith(filterFile))
);
return files as FilePath[];
const filenames = await this.getFiles(findRoot, (path) => this.isTargetFile(path));
return filenames as FilePath[];
}
async scanInternalFiles(): Promise<InternalFileInfo[]> {
@@ -1721,7 +1849,7 @@ ${messageFetch}${messageOverwrite}${messageMerge}
const result: InternalFileInfo[] = [];
for (const f of files) {
const w = await f;
if (await this.plugin.$$isIgnoredByIgnoreFiles(w.path)) {
if (await this.services.vault.isIgnoredByIgnoreFile(w.path)) {
continue;
}
const mtime = w.stat?.mtime ?? 0;
@@ -1737,7 +1865,7 @@ ${messageFetch}${messageOverwrite}${messageMerge}
return result;
}
async getFiles(path: string, ignoreList: string[], filter?: RegExp[], ignoreFilter?: RegExp[]) {
async getFiles(path: string, checkFunction: (path: FilePath) => Promise<boolean> | boolean) {
let w: ListedFiles;
try {
w = await this.app.vault.adapter.list(path);
@@ -1746,35 +1874,79 @@ ${messageFetch}${messageOverwrite}${messageMerge}
this._log(ex, LOG_LEVEL_VERBOSE);
return [];
}
const filesSrc = [
...w.files
.filter((e) => !ignoreList.some((ee) => e.endsWith(ee)))
.filter((e) => !filter || filter.some((ee) => e.match(ee)))
.filter((e) => !ignoreFilter || ignoreFilter.every((ee) => !e.match(ee))),
];
let files = [] as string[];
for (const file of filesSrc) {
if (!(await this.plugin.$$isIgnoredByIgnoreFiles(file))) {
files.push(file);
for (const file of w.files) {
if (!(await checkFunction(file as FilePath))) {
continue;
}
files.push(file);
}
for (const v of w.folders) {
if (!(await checkFunction(v as FilePath))) {
continue;
}
files = files.concat(await this.getFiles(v, checkFunction));
}
return files;
}
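`getFiles` now takes a single async predicate instead of the old parallel ignore/filter lists, and folders that fail the predicate prune their whole subtree. Usage as in `scanInternalFileNames` above:

```ts
// (inside HiddenFileSync)
const root = this.app.vault.getRoot().path;
// Each file and folder is awaited against the predicate during the walk.
const filenames = await this.getFiles(root, (path) => this.isTargetFile(path));
```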
/*
async getFiles_(path: string, ignoreList: string[], filter?: CustomRegExp[], ignoreFilter?: CustomRegExp[]) {
let w: ListedFiles;
try {
w = await this.app.vault.adapter.list(path);
} catch (ex) {
this._log(`Could not traverse(HiddenSync):${path}`, LOG_LEVEL_INFO);
this._log(ex, LOG_LEVEL_VERBOSE);
return [];
}
let files = [] as string[];
for (const file of w.files) {
if (ignoreList && ignoreList.length > 0) {
if (ignoreList.some((e) => file.endsWith(e))) continue;
}
if (filter && filter.length > 0) {
if (!filter.some((e) => e.test(file))) {
continue;
}
}
if (ignoreFilter && ignoreFilter.some((ee) => ee.test(file))) {
continue;
}
if (await this.services.vault.isIgnoredByIgnoreFile(file)) continue;
files.push(file);
}
L1: for (const v of w.folders) {
for (const ignore of ignoreList) {
if (v.endsWith(ignore)) {
continue L1;
}
}
if (ignoreFilter && ignoreFilter.some((e) => v.match(e))) {
if (ignoreFilter && ignoreFilter.some((e) => e.test(v))) {
continue L1;
}
if (await this.plugin.$$isIgnoredByIgnoreFiles(v)) {
if (await this.services.vault.isIgnoredByIgnoreFile(v)) {
continue L1;
}
files = files.concat(await this.getFiles(v, ignoreList, filter, ignoreFilter));
files = files.concat(await this.getFiles_(v, ignoreList, filter, ignoreFilter));
}
return files;
}
*/
// <-- Local Storage SubFunctions
override onBindFunction(core: LiveSyncCore, services: typeof core.services) {
// No longer needed on initialisation
// services.databaseEvents.handleOnDatabaseInitialisation(this._everyOnInitializeDatabase.bind(this));
services.appLifecycle.onSettingLoaded.addHandler(this._everyOnloadAfterLoadSettings.bind(this));
services.fileProcessing.processOptionalFileEvent.addHandler(this._anyProcessOptionalFileEvent.bind(this));
services.conflict.getOptionalConflictCheckMethod.addHandler(this._anyGetOptionalConflictCheckMethod.bind(this));
services.replication.processOptionalSynchroniseResult.addHandler(this._anyProcessOptionalSyncFiles.bind(this));
services.setting.onRealiseSetting.addHandler(this._everyRealizeSettingSyncMode.bind(this));
services.appLifecycle.onResuming.addHandler(this._everyOnResumeProcess.bind(this));
services.replication.onBeforeReplicate.addHandler(this._everyBeforeReplicate.bind(this));
services.databaseEvents.onDatabaseInitialised.addHandler(this._everyOnDatabaseInitialized.bind(this));
services.setting.suspendExtraSync.addHandler(this._allSuspendExtraSync.bind(this));
services.setting.suggestOptionalFeatures.addHandler(this._allAskUsingOptionalSyncFeature.bind(this));
services.setting.enableOptionalFeature.addHandler(this._allConfigureOptionalSyncFeature.bind(this));
}
}
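// A sketch of the handler-slot pattern the registrations above rely on. The
// shapes here are assumed from usage (`addHandler` on each service event), not
// taken from the actual service definitions:
type SketchHandler<T, R> = (arg: T) => Promise<R> | R;
class HandlerSlotSketch<T, R> {
    private handlers: SketchHandler<T, R>[] = [];
    addHandler(handler: SketchHandler<T, R>) {
        this.handlers.push(handler);
    }
    async invoke(arg: T): Promise<R[]> {
        // Handlers run sequentially, in registration order.
        const results: R[] = [];
        for (const handler of this.handlers) results.push(await handler(arg));
        return results;
    }
}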

View File

@@ -1,17 +1,18 @@
import { LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import { getPath } from "../common/utils.ts";
import { LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import {
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
type AnyEntry,
type DocumentID,
type EntryHasPath,
type FilePath,
type FilePathWithPrefix,
type LOG_LEVEL,
} from "../lib/src/common/types.ts";
import type ObsidianLiveSyncPlugin from "../main.ts";
import { MARK_DONE } from "../modules/features/ModuleLog.ts";
import type { LiveSyncCore } from "../main.ts";
import { __$checkInstanceBinding } from "../lib/src/dev/checks.ts";
import { createInstanceLogFunction } from "@/lib/src/services/lib/logUtils.ts";
let noticeIndex = 0;
export abstract class LiveSyncCommands {
@@ -25,40 +26,41 @@ export abstract class LiveSyncCommands {
get localDatabase() {
return this.plugin.localDatabase;
}
get services() {
return this.plugin.services;
}
id2path(id: DocumentID, entry?: EntryHasPath, stripPrefix?: boolean): FilePathWithPrefix {
return this.plugin.$$id2path(id, entry, stripPrefix);
}
// id2path(id: DocumentID, entry?: EntryHasPath, stripPrefix?: boolean): FilePathWithPrefix {
// return this.plugin.$$id2path(id, entry, stripPrefix);
// }
async path2id(filename: FilePathWithPrefix | FilePath, prefix?: string): Promise<DocumentID> {
return await this.plugin.$$path2id(filename, prefix);
return await this.services.path.path2id(filename, prefix);
}
getPath(entry: AnyEntry): FilePathWithPrefix {
return getPath(entry);
return this.services.path.getPath(entry);
}
constructor(plugin: ObsidianLiveSyncPlugin) {
this.plugin = plugin;
this.onBindFunction(plugin, plugin.services);
this._log = createInstanceLogFunction(this.constructor.name, this.services.API);
__$checkInstanceBinding(this);
}
abstract onunload(): void;
abstract onload(): void | Promise<void>;
_isMainReady() {
return this.plugin.$$isReady();
return this.plugin.services.appLifecycle.isReady();
}
_isMainSuspended() {
return this.plugin.$$isSuspended();
return this.services.appLifecycle.isSuspended();
}
_isDatabaseReady() {
return this.plugin.$$isDatabaseReady();
return this.services.database.isDatabaseReady();
}
_log = (msg: any, level: LOG_LEVEL = LOG_LEVEL_INFO, key?: string) => {
if (typeof msg === "string" && level !== LOG_LEVEL_NOTICE) {
msg = `[${this.constructor.name}]\u{200A} ${msg}`;
}
// console.log(msg);
Logger(msg, level, key);
};
_log: ReturnType<typeof createInstanceLogFunction>;
_verbose = (msg: any, key?: string) => {
this._log(msg, LOG_LEVEL_VERBOSE, key);
@@ -89,4 +91,8 @@ export abstract class LiveSyncCommands {
_debug = (msg: any, key?: string) => {
this._log(msg, LOG_LEVEL_VERBOSE, key);
};
onBindFunction(core: LiveSyncCore, services: typeof core.services) {
// Override if needed.
}
}

View File

@@ -1,18 +1,56 @@
import { sizeToHumanReadable } from "octagonal-wheels/number";
import { LOG_LEVEL_NOTICE, type MetaEntry } from "../../lib/src/common/types";
import {
EntryTypes,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type DocumentID,
type EntryDoc,
type EntryLeaf,
type FilePathWithPrefix,
type MetaEntry,
} from "../../lib/src/common/types";
import { getNoFromRev } from "../../lib/src/pouchdb/LiveSyncLocalDB";
import type { IObsidianModule } from "../../modules/AbstractObsidianModule";
import { LiveSyncCommands } from "../LiveSyncCommands";
import { serialized } from "octagonal-wheels/concurrency/lock_v2";
import { arrayToChunkedArray } from "octagonal-wheels/collection";
import { EVENT_ANALYSE_DB_USAGE, EVENT_REQUEST_PERFORM_GC_V3, eventHub } from "@/common/events";
import type { LiveSyncCouchDBReplicator } from "@/lib/src/replication/couchdb/LiveSyncReplicator";
import { delay, parseHeaderValues } from "@/lib/src/common/utils";
import { generateCredentialObject } from "@/lib/src/replication/httplib";
import { _requestToCouchDB } from "@/common/utils";
const DB_KEY_SEQ = "gc-seq";
const DB_KEY_CHUNK_SET = "chunk-set";
const DB_KEY_DOC_USAGE_MAP = "doc-usage-map";
type ChunkID = DocumentID;
type NoteDocumentID = DocumentID;
type Rev = string;
export class LocalDatabaseMaintenance extends LiveSyncCommands implements IObsidianModule {
$everyOnload(): Promise<boolean> {
return Promise.resolve(true);
}
type ChunkUsageMap = Map<NoteDocumentID, Map<Rev, Set<ChunkID>>>;
export class LocalDatabaseMaintenance extends LiveSyncCommands {
onunload(): void {
// NO OP.
}
onload(): void | Promise<void> {
// NO OP.
this.plugin.addCommand({
id: "analyse-database",
name: "Analyse Database Usage (advanced)",
icon: "database-search",
callback: async () => {
await this.analyseDatabase();
},
});
this.plugin.addCommand({
id: "gc-v3",
name: "Garbage Collection V3 (advanced, beta)",
icon: "trash-2",
callback: async () => {
await this.gcv3();
},
});
eventHub.onEvent(EVENT_ANALYSE_DB_USAGE, () => this.analyseDatabase());
eventHub.onEvent(EVENT_REQUEST_PERFORM_GC_V3, () => this.gcv3());
}
async allChunks(includeDeleted: boolean = false) {
const p = this._progress("", LOG_LEVEL_NOTICE);
@@ -28,7 +66,7 @@ export class LocalDatabaseMaintenance extends LiveSyncCommands implements IObsid
return this.localDatabase.localDatabase;
}
clearHash() {
this.localDatabase.hashCaches.clear();
this.localDatabase.clearCaches();
}
async confirm(title: string, message: string, affirmative = "Yes", negative = "No") {
@@ -262,4 +300,667 @@ Note: **Make sure to synchronise all devices before deletion.**
this.clearHash();
}
}
async scanUnusedChunks() {
const kvDB = this.plugin.kvDB;
const chunkSet = (await kvDB.get<Set<DocumentID>>(DB_KEY_CHUNK_SET)) || new Set();
const chunkUsageMap = (await kvDB.get<ChunkUsageMap>(DB_KEY_DOC_USAGE_MAP)) || new Map();
const KEEP_MAX_REVS = 10;
const unusedSet = new Set<DocumentID>([...chunkSet]);
for (const [, revIdMap] of chunkUsageMap) {
const sortedRevId = [...revIdMap.entries()].sort((a, b) => getNoFromRev(b[0]) - getNoFromRev(a[0]));
if (sortedRevId.length > KEEP_MAX_REVS) {
// If we have more revisions than we want to keep, we need to delete the extras
}
const keepRevID = sortedRevId.slice(0, KEEP_MAX_REVS);
keepRevID.forEach((e) => e[1].forEach((ee) => unusedSet.delete(ee)));
}
return {
chunkSet,
chunkUsageMap,
unusedSet,
};
}
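// The retention rule above, stated in isolation (a sketch; `getNoFromRev` is
// the helper already imported in this module): chunks referenced by the newest
// `keep` revisions stay live; everything else remains eligible for deletion.
private liveChunkIdsSketch(revMap: Map<Rev, Set<ChunkID>>, keep = 10): Set<ChunkID> {
    const newest = [...revMap.entries()].sort((a, b) => getNoFromRev(b[0]) - getNoFromRev(a[0])).slice(0, keep);
    const live = new Set<ChunkID>();
    for (const [, chunkIds] of newest) chunkIds.forEach((id) => live.add(id));
    return live;
}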
/**
* Track changes in the database and update the chunk usage map for garbage collection.
* Note that this can only be performed when "Fetch chunks on demand" is disabled.
*/
async trackChanges(fromStart: boolean = false, showNotice: boolean = false) {
if (!this.isAvailable()) return;
const logLevel = showNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const kvDB = this.plugin.kvDB;
const previousSeq = fromStart ? "" : await kvDB.get<string>(DB_KEY_SEQ);
const chunkSet = (await kvDB.get<Set<DocumentID>>(DB_KEY_CHUNK_SET)) || new Set();
const chunkUsageMap = (await kvDB.get<ChunkUsageMap>(DB_KEY_DOC_USAGE_MAP)) || new Map();
const db = this.localDatabase.localDatabase;
const verbose = (msg: string) => this._verbose(msg);
const processDoc = async (doc: EntryDoc, isDeleted: boolean) => {
if (!("children" in doc)) {
return;
}
const id = doc._id;
const rev = doc._rev!;
const deleted = doc._deleted || isDeleted;
const softDeleted = doc.deleted;
const children = (doc.children || []) as DocumentID[];
if (!chunkUsageMap.has(id)) {
chunkUsageMap.set(id, new Map<Rev, Set<ChunkID>>());
}
for (const chunkId of children) {
if (deleted) {
chunkUsageMap.get(id)!.delete(rev);
// chunkSet.add(chunkId as DocumentID);
} else {
if (softDeleted) {
//TODO: Soft delete
chunkUsageMap.get(id)!.set(rev, (chunkUsageMap.get(id)!.get(rev) || new Set()).add(chunkId));
} else {
chunkUsageMap.get(id)!.set(rev, (chunkUsageMap.get(id)!.get(rev) || new Set()).add(chunkId));
}
}
}
verbose(
`Tracking chunk: ${id}/${rev} (${doc?.path}), deleted: ${deleted ? "yes" : "no"}, soft-deleted: ${softDeleted ? "yes" : "no"}`
);
return await Promise.resolve();
};
// let saveQueue = 0;
const saveState = async (seq: string | number) => {
await kvDB.set(DB_KEY_SEQ, seq);
await kvDB.set(DB_KEY_CHUNK_SET, chunkSet);
await kvDB.set(DB_KEY_DOC_USAGE_MAP, chunkUsageMap);
};
const processDocRevisions = async (doc: EntryDoc) => {
try {
const oldRevisions = await db.get(doc._id, { revs: true, revs_info: true, conflicts: true });
const allRevs = oldRevisions._revs_info?.length || 0;
const info = (oldRevisions._revs_info || [])
.filter((e) => e.status == "available" && e.rev != doc._rev)
.filter((info) => !chunkUsageMap.get(doc._id)?.has(info.rev));
const infoLength = info.length;
this._log(`Found ${allRevs} old revisions for ${doc._id}; ${infoLength} items to check`);
if (info.length > 0) {
const oldDocs = await Promise.all(
info
.filter((revInfo) => revInfo.status == "available")
.map((revInfo) => db.get(doc._id, { rev: revInfo.rev }))
).then((docs) => docs.filter((doc) => doc));
for (const oldDoc of oldDocs) {
await processDoc(oldDoc as EntryDoc, false);
}
}
} catch (ex) {
if ((ex as any)?.status == 404) {
this._log(`No revisions found for ${doc._id}`, LOG_LEVEL_VERBOSE);
} else {
this._log(`Error finding revisions for ${doc._id}`);
this._verbose(ex);
}
}
};
const processChange = async (doc: EntryDoc, isDeleted: boolean, seq: string | number) => {
if (doc.type === EntryTypes.CHUNK) {
if (isDeleted) return;
chunkSet.add(doc._id);
} else if ("children" in doc) {
await processDoc(doc, isDeleted);
await serialized("x-process-doc", async () => await processDocRevisions(doc));
}
};
// Track changes
let i = 0;
await db
.changes({
since: previousSeq || "",
live: false,
conflicts: true,
include_docs: true,
style: "all_docs",
return_docs: false,
})
.on("change", async (change) => {
// handle change
await processChange(change.doc!, change.deleted ?? false, change.seq);
if (i++ % 100 == 0) {
await saveState(change.seq);
}
})
.on("complete", async (info) => {
await saveState(info.last_seq);
});
// Track all changed docs and new-leafs;
const result = await this.scanUnusedChunks();
const message = `Total chunks: ${result.chunkSet.size}\nUnused chunks: ${result.unusedSet.size}`;
this._log(message, logLevel);
}
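// The checkpointed changes-feed pattern above, reduced to its skeleton (a
// sketch reusing `kvDB` and DB_KEY_SEQ from this module): resume from the last
// saved sequence and persist a checkpoint every 100 changes.
private async consumeChangesSketch(onDoc: (doc: EntryDoc, deleted: boolean) => Promise<void>) {
    const kvDB = this.plugin.kvDB;
    const since = (await kvDB.get<string>(DB_KEY_SEQ)) || "";
    let n = 0;
    await this.localDatabase.localDatabase
        .changes({ since, live: false, include_docs: true, style: "all_docs", return_docs: false })
        .on("change", async (change) => {
            await onDoc(change.doc!, change.deleted ?? false);
            if (++n % 100 === 0) await kvDB.set(DB_KEY_SEQ, change.seq); // periodic checkpoint
        })
        .on("complete", async (info) => {
            await kvDB.set(DB_KEY_SEQ, info.last_seq); // final checkpoint
        });
}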
async performGC(showingNotice = false) {
if (!this.isAvailable()) return;
await this.trackChanges(false, showingNotice);
const title = "Are all devices synchronised?";
const confirmMessage = `This function deletes unused chunks from the device. If there are differences between devices, some chunks may be missing when resolving conflicts.
Be sure to synchronise before executing.
If any chunks do go missing, you may be able to recover them by performing Hatch -> Recreate missing chunks for all files.
Are you ready to delete unused chunks?`;
const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const BUTTON_OK = `Yes, delete chunks`;
const BUTTON_CANCEL = "Cancel";
const result = await this.plugin.confirm.askSelectStringDialogue(
confirmMessage,
[BUTTON_OK, BUTTON_CANCEL] as const,
{
title,
defaultAction: BUTTON_CANCEL,
}
);
if (result !== BUTTON_OK) {
this._log("User cancelled chunk deletion", logLevel);
return;
}
const { unusedSet, chunkSet } = await this.scanUnusedChunks();
const deleteChunks = await this.database.allDocs({
keys: [...unusedSet],
include_docs: true,
});
for (const chunk of deleteChunks.rows) {
if ((chunk as any)?.value?.deleted) {
chunkSet.delete(chunk.key as DocumentID);
}
}
const deleteDocs = deleteChunks.rows
.filter((e) => "doc" in e)
.map((e) => ({
...(e as any).doc!,
_deleted: true,
}));
this._log(`Deleting chunks: ${deleteDocs.length}`, logLevel);
const deleteChunkBatch = arrayToChunkedArray(deleteDocs, 100);
let successCount = 0;
let errored = 0;
for (const batch of deleteChunkBatch) {
const results = await this.database.bulkDocs(batch as EntryLeaf[]);
for (const result of results) {
if ("ok" in result) {
chunkSet.delete(result.id as DocumentID);
successCount++;
} else {
this._log(`Failed to delete doc: ${result.id}`, LOG_LEVEL_VERBOSE);
errored++;
}
}
this._log(`Deleting chunks: ${successCount}`, logLevel, "gc-performing");
}
const message = `Garbage Collection completed.
Success: ${successCount}, Errored: ${errored}`;
this._log(message, logLevel);
const kvDB = this.plugin.kvDB;
await kvDB.set(DB_KEY_CHUNK_SET, chunkSet);
}
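// The bulk-tombstone pattern above in isolation (a sketch): flag each document
// `_deleted` and write the tombstones back in fixed-size batches via bulkDocs,
// so one oversized request is never issued.
private async tombstoneInBatchesSketch(docs: EntryLeaf[], batchSize = 100) {
    let ok = 0;
    let failed = 0;
    const tombstones = docs.map((d) => ({ ...d, _deleted: true }) as EntryLeaf);
    for (const batch of arrayToChunkedArray(tombstones, batchSize)) {
        const results = await this.database.bulkDocs(batch);
        for (const result of results) {
            if ("ok" in result) ok++;
            else failed++;
        }
    }
    return { ok, failed };
}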
// Analyse the database and report chunk usage.
async analyseDatabase() {
if (!this.isAvailable()) return;
const db = this.localDatabase.localDatabase;
// Map of chunk ID to its info
type ChunkInfo = {
id: DocumentID;
refCount: number;
length: number;
};
const chunkMap = new Map<DocumentID, Set<ChunkInfo>>();
// Map of document ID to its info
type DocumentInfo = {
id: DocumentID;
rev: Rev;
chunks: Set<ChunkID>;
uniqueChunks: Set<ChunkID>;
sharedChunks: Set<ChunkID>;
path: FilePathWithPrefix;
};
const docMap = new Map<DocumentID, Set<DocumentInfo>>();
const info = await db.info();
// Total number of revisions to process (approximate)
const maxSeq = Number(info.update_seq);
let processed = 0;
let read = 0;
let errored = 0;
// Fetch Tasks
const ft = [] as ReturnType<typeof fetchRevision>[];
// Fetch a specific revision of a document and make note of its chunks, or add chunk info.
const fetchRevision = async (id: DocumentID, rev: Rev, seq: string | number) => {
try {
processed++;
const doc = await db.get(id, { rev: rev });
if (doc) {
if ("children" in doc) {
const id = doc._id;
const rev = doc._rev;
const children = (doc.children || []) as DocumentID[];
const set = docMap.get(id) || new Set();
set.add({
id,
rev,
chunks: new Set(children),
uniqueChunks: new Set(),
sharedChunks: new Set(),
path: doc.path,
});
docMap.set(id, set);
} else if (doc.type === EntryTypes.CHUNK) {
const id = doc._id as DocumentID;
if (chunkMap.has(id)) {
return;
}
if (doc._deleted) {
// Deleted chunk, skip (possibly resurrected later)
return;
}
const length = doc.data.length;
const set = chunkMap.get(id) || new Set();
set.add({ id, length, refCount: 0 });
chunkMap.set(id, set);
}
read++;
} else {
this._log(`Analysing Database: not found: ${id} / ${rev}`);
errored++;
}
} catch (error) {
this._log(`Error fetching document ${id} / ${rev}`, LOG_LEVEL_NOTICE);
this._log(error, LOG_LEVEL_VERBOSE);
errored++;
}
if (processed % 100 == 0) {
this._log(`Analysing database: ${read} (${errored}) / ${maxSeq} `, LOG_LEVEL_NOTICE, "db-analyse");
}
};
// Enumerate all documents and their revisions.
const IDs = this.localDatabase.findEntryNames("", "", {});
for await (const id of IDs) {
const revList = await this.localDatabase.getRaw(id as DocumentID, {
revs: true,
revs_info: true,
conflicts: true,
});
const revInfos = revList._revs_info || [];
for (const revInfo of revInfos) {
// All available revisions should be processed.
// If the revision is not available, it means the revision is already tombstoned.
if (revInfo.status == "available") {
// Schedule fetch task
ft.push(fetchRevision(id as DocumentID, revInfo.rev, 0));
}
}
}
// Wait for all fetch tasks to complete.
await Promise.all(ft);
// Reference count marking and unique/shared chunk classification.
for (const [, docRevs] of docMap) {
for (const docRev of docRevs) {
for (const chunkId of docRev.chunks) {
const chunkInfos = chunkMap.get(chunkId);
if (chunkInfos) {
for (const chunkInfo of chunkInfos) {
if (chunkInfo.refCount === 0) {
docRev.uniqueChunks.add(chunkId);
} else {
docRev.sharedChunks.add(chunkId);
}
chunkInfo.refCount++;
}
}
}
}
}
// Prepare results
const result = [];
// Calculate total size of chunks in the given set.
const getTotalSize = (ids: Set<DocumentID>) => {
return [...ids].reduce((acc, chunkId) => {
const chunkInfos = chunkMap.get(chunkId);
if (chunkInfos) {
for (const chunkInfo of chunkInfos) {
acc += chunkInfo.length;
}
}
return acc;
}, 0);
};
// Compile results for each document revision
for (const doc of docMap.values()) {
for (const rev of doc) {
const title = `${rev.path} (${rev.rev})`;
const id = rev.id;
const revStr = `${getNoFromRev(rev.rev)}`;
const revHash = rev.rev.split("-")[1].substring(0, 6);
const path = rev.path;
const uniqueChunkCount = rev.uniqueChunks.size;
const sharedChunkCount = rev.sharedChunks.size;
const uniqueChunkSize = getTotalSize(rev.uniqueChunks);
const sharedChunkSize = getTotalSize(rev.sharedChunks);
result.push({
title,
path,
rev: revStr,
revHash,
id,
uniqueChunkCount: uniqueChunkCount,
sharedChunkCount,
uniqueChunkSize: uniqueChunkSize,
sharedChunkSize: sharedChunkSize,
});
}
}
const titleMap = {
title: "Title",
id: "Document ID",
path: "Path",
rev: "Revision No",
revHash: "Revision Hash",
uniqueChunkCount: "Unique Chunk Count",
sharedChunkCount: "Shared Chunk Count",
uniqueChunkSize: "Unique Chunk Size",
sharedChunkSize: "Shared Chunk Size",
} as const;
// Enumerate orphan chunks (not referenced by any document)
const orphanChunks = [...chunkMap.entries()].filter(([chunkId, infos]) => {
const totalRefCount = [...infos].reduce((acc, info) => acc + info.refCount, 0);
return totalRefCount === 0;
});
const orphanChunkSize = orphanChunks.reduce((acc, [chunkId, infos]) => {
for (const info of infos) {
acc += info.length;
}
return acc;
}, 0);
result.push({
title: "__orphan",
id: "__orphan",
path: "__orphan",
rev: "1",
revHash: "xxxxx",
uniqueChunkCount: orphanChunks.length,
sharedChunkCount: 0,
uniqueChunkSize: orphanChunkSize,
sharedChunkSize: 0,
} as any);
const csvSrc = result.map((e) => {
return [
`"${e.title.replace(/"/g, '""')}"`,
`${e.id}`,
`${e.path}`,
`${e.rev}`,
`${e.revHash}`,
`${e.uniqueChunkCount}`,
`${e.sharedChunkCount}`,
`${e.uniqueChunkSize}`,
`${e.sharedChunkSize}`,
].join("\t");
});
// Add title row
csvSrc.unshift(Object.values(titleMap).join("\t"));
const csv = csvSrc.join("\n");
// Prompt to copy to clipboard
await this.services.UI.promptCopyToClipboard("Database Analysis data (TSV):", csv);
}
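// The first-reference classification above, in isolation (a sketch): the first
// revision to reference a chunk sees refCount === 0 and records it as unique;
// every later reference records it as shared, and the counter advances either way.
private classifyChunksSketch(refCounts: Map<ChunkID, number>, chunks: Iterable<ChunkID>) {
    const unique = new Set<ChunkID>();
    const shared = new Set<ChunkID>();
    for (const id of chunks) {
        const count = refCounts.get(id) ?? 0;
        (count === 0 ? unique : shared).add(id);
        refCounts.set(id, count + 1);
    }
    return { unique, shared };
}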
async compactDatabase() {
const replicator = this.plugin.replicator as LiveSyncCouchDBReplicator;
const remote = await replicator.connectRemoteCouchDBWithSetting(this.settings, false, false, true);
if (!remote) {
this._notice("Failed to connect to remote for compaction.", "gc-compact");
return;
}
if (typeof remote == "string") {
this._notice(`Failed to connect to remote for compaction. ${remote}`, "gc-compact");
return;
}
const compactResult = await remote.db.compact({
interval: 1000,
});
// Probably no need to wait, but just in case.
let timeout = 2 * 60 * 1000; // 2 minutes
do {
const status = await remote.db.info();
if ("compact_running" in status && status?.compact_running) {
this._notice("Compaction in progress on remote database...", "gc-compact");
await delay(2000);
timeout -= 2000;
if (timeout <= 0) {
this._notice("Compaction on remote database timed out.", "gc-compact");
break;
}
} else {
break;
}
} while (true);
if (compactResult && "ok" in compactResult) {
this._notice("Compaction on remote database completed successfully.", "gc-compact");
} else {
this._notice("Compaction on remote database failed.", "gc-compact");
}
}
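// The wait loop above follows a generic poll-with-timeout shape (a sketch;
// `delay` is the helper already imported in this module):
private async waitUntilSketch(isDone: () => Promise<boolean>, intervalMs = 2000, timeoutMs = 2 * 60 * 1000) {
    for (let waited = 0; waited < timeoutMs; waited += intervalMs) {
        if (await isDone()) return true;
        await delay(intervalMs);
    }
    return false; // timed out
}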
/**
* Compact the database by temporarily setting the revision limit to 1.
* @returns
*/
async compactDatabaseWithRevLimit() {
// Temporarily set revs_limit to 1, perform compaction, and restore the original revs_limit.
// Very dangerous operation, so now suppressed.
return false;
const replicator = this.plugin.replicator as LiveSyncCouchDBReplicator;
const remote = await replicator.connectRemoteCouchDBWithSetting(this.settings, false, false, true);
if (!remote) {
this._notice("Failed to connect to remote for compaction.");
return;
}
if (typeof remote == "string") {
this._notice(`Failed to connect to remote for compaction. ${remote}`);
return;
}
const customHeaders = parseHeaderValues(this.settings.couchDB_CustomHeaders);
const credential = generateCredentialObject(this.settings);
const request = async (path: string, method: string = "GET", body: any = undefined) => {
const req = await _requestToCouchDB(
this.settings.couchDB_URI + (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
credential,
window.origin,
path,
body,
method,
customHeaders
);
return req;
};
let revsLimit = "";
const req = await request(`_revs_limit`, "GET");
if (req.status == 200) {
revsLimit = req.text.trim();
this._info(`Remote database _revs_limit: ${revsLimit}`);
} else {
this._notice(`Failed to get remote database _revs_limit. Status: ${req.status}`);
return;
}
const req2 = await request(`_revs_limit`, "PUT", 1);
if (req2.status == 200) {
this._info(`Set remote database _revs_limit to 1 for compaction.`);
}
try {
await this.compactDatabase();
} finally {
// Restore revs_limit
if (revsLimit) {
const req3 = await request(`_revs_limit`, "PUT", parseInt(revsLimit));
if (req3.status == 200) {
this._info(`Restored remote database _revs_limit to ${revsLimit}.`);
} else {
this._notice(
`Failed to restore remote database _revs_limit. Status: ${req3.status} / ${req3.text}`
);
}
}
}
}
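// The method above is an instance of a set-temporarily-then-restore pattern; a
// sketch of the skeleton (the `finally` restores the value even if `run` throws):
private async withTemporaryValueSketch<T>(
    get: () => Promise<T>,
    set: (value: T) => Promise<void>,
    temporary: T,
    run: () => Promise<void>
) {
    const original = await get();
    await set(temporary);
    try {
        await run();
    } finally {
        await set(original);
    }
}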
async gcv3() {
if (!this.isAvailable()) return;
const replicator = this.plugin.replicator as LiveSyncCouchDBReplicator;
// Start one-shot replication to ensure all changes are synced before GC.
const r0 = await replicator.openOneShotReplication(this.settings, false, false, "sync");
if (!r0) {
this._notice(
"Failed to start one-shot replication before Garbage Collection. Garbage Collection Cancelled."
);
return;
}
// Delete the chunk, but first verify the following:
// Fetch the list of accepted nodes from the replicator.
const OPTION_CANCEL = "Cancel Garbage Collection";
const info = await this.plugin.replicator.getConnectedDeviceList();
if (!info) {
this._notice("No connected device information found. Cancelling Garbage Collection.");
return;
}
const { accepted_nodes, node_info } = info;
//1. Compare accepted_nodes and node_info, and confirm whether it is acceptable to delete nodes not present in node_info.
const infoMissingNodes = [] as string[];
for (const node of accepted_nodes) {
if (!(node in node_info)) {
infoMissingNodes.push(node);
}
}
if (infoMissingNodes.length > 0) {
const message = `The following accepted nodes are missing their node information:\n- ${infoMissingNodes.join("\n- ")}\n\nThis indicates that they have not been connected for some time or have been left on an older version.
It is preferable to update all devices if possible. If you have any devices that are no longer in use, you can clear all accepted nodes by locking the remote once.`;
const OPTION_IGNORE = "Ignore and Proceed";
// const OPTION_DELETE = "Delete them and proceed";
const buttons = [OPTION_CANCEL, OPTION_IGNORE] as const;
const result = await this.plugin.confirm.askSelectStringDialogue(message, buttons, {
title: "Node Information Missing",
defaultAction: OPTION_CANCEL,
});
if (result === OPTION_CANCEL) {
this._notice("Garbage Collection cancelled by user.");
return;
} else if (result === OPTION_IGNORE) {
this._notice("Proceeding with Garbage Collection, ignoring missing nodes.");
}
}
//2. Check whether the progress values in NodeData are roughly the same (only the numerical part is needed).
const progressValues = Object.values(node_info)
.map((e) => e.progress.split("-")[0])
.map((e) => parseInt(e));
const maxProgress = Math.max(...progressValues);
const minProgress = Math.min(...progressValues);
const progressDifference = maxProgress - minProgress;
const OPTION_PROCEED = "Proceed Garbage Collection";
// - If they differ significantly, the node may not have completed synchronisation, potentially causing conflicts. Display a confirmation dialogue as a precaution.
// - If they are not significantly different, display the standard confirmation dialogue message.
const detail = `> [!INFO]- The connected devices have been detected as follows:
${Object.entries(node_info)
.map(
([nodeId, nodeData]) =>
`> - Device: ${nodeData.device_name} (Node ID: ${nodeId})
> - Obsidian version: ${nodeData.app_version}
> - Plug-in version: ${nodeData.plugin_version}
> - Progress: ${nodeData.progress.split("-")[0]}`
)
.join("\n")}
`;
const message =
progressDifference != 0
? `Some devices have differing progress values (max: ${maxProgress}, min: ${minProgress}).
This may indicate that some devices have not completed synchronisation, which could lead to conflicts. It is strongly recommended to confirm that all devices are synchronised before proceeding.`
: `All devices have the same progress value (${maxProgress}). Your devices appear to be synchronised, and you can proceed with Garbage Collection.`;
const buttons = [OPTION_PROCEED, OPTION_CANCEL] as const;
const defaultAction = progressDifference != 0 ? OPTION_CANCEL : OPTION_PROCEED;
const result = await this.plugin.confirm.askSelectStringDialogue(message + "\n\n" + detail, buttons, {
title: "Garbage Collection Confirmation",
defaultAction,
});
if (result !== OPTION_PROCEED) {
this._notice("Garbage Collection cancelled by user.");
return;
}
this._notice("Proceeding with Garbage Collection.");
//- 3. Once OK is confirmed in the dialogue, execute the chunk deletion. This is performed on the local database and immediately reflected on the remote. After reflecting on the remote, perform compaction.
const gcStartTime = Date.now();
// Perform Garbage Collection (new implementation).
const localDatabase = this.localDatabase.localDatabase;
const usedChunks = new Set<DocumentID>();
const allChunks = new Map<DocumentID, string>();
const IDs = this.localDatabase.findEntryNames("", "", {});
let i = 0;
const doc_count = (await localDatabase.info()).doc_count;
for await (const id of IDs) {
const doc = await this.localDatabase.getRaw(id as DocumentID);
i++;
if (i % 100 == 0) {
this._notice(`Garbage Collection: Scanned ${i} / ~${doc_count} `, "gc-scanning");
}
if (!doc) continue;
if ("children" in doc) {
const children = (doc.children || []) as DocumentID[];
for (const chunkId of children) {
usedChunks.add(chunkId);
}
} else if (doc.type === EntryTypes.CHUNK) {
allChunks.set(doc._id as DocumentID, doc._rev);
}
}
this._notice(
`Garbage Collection: Scanning completed. Total chunks: ${allChunks.size}, Used chunks: ${usedChunks.size}`,
"gc-scanning"
);
const unusedChunks = [...allChunks.keys()].filter((e) => !usedChunks.has(e));
this._notice(`Garbage Collection: Found ${unusedChunks.length} unused chunks to delete.`, "gc-scanning");
const deleteChunkDocs = unusedChunks.map(
(chunkId) =>
({
_id: chunkId,
_deleted: true,
_rev: allChunks.get(chunkId),
}) as EntryLeaf
);
const response = await localDatabase.bulkDocs(deleteChunkDocs);
const deletedCount = response.filter((e) => "ok" in e).length;
const gcEndTime = Date.now();
this._notice(
`Garbage Collection completed. Deleted chunks: ${deletedCount} / ${unusedChunks.length}. Time taken: ${(gcEndTime - gcStartTime) / 1000} seconds.`
);
// Send changes to remote
const r = await replicator.openOneShotReplication(this.settings, false, false, "pushOnly");
// Wait for replication to complete
if (!r) {
this._notice("Failed to start replication after Garbage Collection.");
return;
}
// Perform compaction
await this.compactDatabase();
this.clearHash();
}
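// The scan above is a mark-and-sweep: mark every chunk id referenced from any
// document's `children`, then sweep chunk docs that were never marked. A
// sketch of the skeleton (`isChunk` stands in for the EntryTypes.CHUNK check):
private sweepUnusedSketch(docs: Iterable<EntryDoc>, isChunk: (doc: EntryDoc) => boolean): EntryLeaf[] {
    const used = new Set<string>();
    const chunks = new Map<string, string>(); // chunk id -> current rev
    for (const doc of docs) {
        if ("children" in doc) {
            ((doc.children || []) as string[]).forEach((c) => used.add(c));
        } else if (isChunk(doc)) {
            chunks.set(doc._id, doc._rev!);
        }
    }
    return [...chunks].filter(([id]) => !used.has(id)).map(([_id, _rev]) => ({ _id, _rev, _deleted: true }) as EntryLeaf);
}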
}

View File

@@ -0,0 +1,275 @@
import { P2PReplicatorPaneView, VIEW_TYPE_P2P } from "./P2PReplicator/P2PReplicatorPaneView.ts";
import {
AutoAccepting,
LOG_LEVEL_NOTICE,
P2P_DEFAULT_SETTINGS,
REMOTE_P2P,
type EntryDoc,
type P2PSyncSetting,
type RemoteDBSettings,
} from "../../lib/src/common/types.ts";
import { LiveSyncCommands } from "../LiveSyncCommands.ts";
import {
LiveSyncTrysteroReplicator,
setReplicatorFunc,
} from "../../lib/src/replication/trystero/LiveSyncTrysteroReplicator.ts";
import { EVENT_REQUEST_OPEN_P2P, eventHub } from "../../common/events.ts";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator.ts";
import { LOG_LEVEL_INFO, LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import type { CommandShim } from "../../lib/src/replication/trystero/P2PReplicatorPaneCommon.ts";
import {
addP2PEventHandlers,
closeP2PReplicator,
openP2PReplicator,
P2PLogCollector,
removeP2PReplicatorInstance,
type P2PReplicatorBase,
} from "../../lib/src/replication/trystero/P2PReplicatorCore.ts";
import { reactiveSource } from "octagonal-wheels/dataobject/reactive_v2";
import type { Confirm } from "../../lib/src/interfaces/Confirm.ts";
import type ObsidianLiveSyncPlugin from "../../main.ts";
import type { SimpleStore } from "octagonal-wheels/databases/SimpleStoreBase";
// import { getPlatformName } from "../../lib/src/PlatformAPIs/obsidian/Environment.ts";
import type { LiveSyncCore } from "../../main.ts";
import { TrysteroReplicator } from "../../lib/src/replication/trystero/TrysteroReplicator.ts";
import { SETTING_KEY_P2P_DEVICE_NAME } from "../../lib/src/common/types.ts";
export class P2PReplicator extends LiveSyncCommands implements P2PReplicatorBase, CommandShim {
storeP2PStatusLine = reactiveSource("");
getSettings(): P2PSyncSetting {
return this.plugin.settings;
}
getDB() {
return this.plugin.localDatabase.localDatabase;
}
get confirm(): Confirm {
return this.plugin.confirm;
}
_simpleStore!: SimpleStore<any>;
simpleStore(): SimpleStore<any> {
return this._simpleStore;
}
constructor(plugin: ObsidianLiveSyncPlugin) {
super(plugin);
setReplicatorFunc(() => this._replicatorInstance);
addP2PEventHandlers(this);
this.afterConstructor();
// onBindFunction is called in the superclass constructor
// this.onBindFunction(plugin, plugin.services);
}
async handleReplicatedDocuments(docs: EntryDoc[]): Promise<boolean> {
// console.log("Processing Replicated Docs", docs);
return await this.services.replication.parseSynchroniseResult(
docs as PouchDB.Core.ExistingDocument<EntryDoc>[]
);
}
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_P2P) {
return Promise.resolve(new LiveSyncTrysteroReplicator(this.plugin));
}
return undefined!;
}
_replicatorInstance?: TrysteroReplicator;
p2pLogCollector = new P2PLogCollector();
afterConstructor() {
return;
}
async open() {
await openP2PReplicator(this);
}
async close() {
await closeP2PReplicator(this);
}
getConfig(key: string) {
return this.services.config.getSmallConfig(key);
}
setConfig(key: string, value: string) {
return this.services.config.setSmallConfig(key, value);
}
enableBroadcastCastings() {
return this?._replicatorInstance?.enableBroadcastChanges();
}
disableBroadcastCastings() {
return this?._replicatorInstance?.disableBroadcastChanges();
}
init() {
this._simpleStore = this.services.keyValueDB.openSimpleStore("p2p-sync");
return Promise.resolve(this);
}
async initialiseP2PReplicator(): Promise<TrysteroReplicator> {
await this.init();
try {
if (this._replicatorInstance) {
await this._replicatorInstance.close();
this._replicatorInstance = undefined;
}
if (!this.settings.P2P_AppID) {
this.settings.P2P_AppID = P2P_DEFAULT_SETTINGS.P2P_AppID;
}
const getInitialDeviceName = () =>
this.getConfig(SETTING_KEY_P2P_DEVICE_NAME) || this.services.vault.getVaultName();
const getSettings = () => this.settings;
const store = () => this.simpleStore();
const getDB = () => this.getDB();
const getConfirm = () => this.confirm;
const getPlatform = () => this.services.API.getPlatform();
const env = {
get db() {
return getDB();
},
get confirm() {
return getConfirm();
},
get deviceName() {
return getInitialDeviceName();
},
get platform() {
return getPlatform();
},
get settings() {
return getSettings();
},
processReplicatedDocs: async (docs: EntryDoc[]): Promise<void> => {
await this.handleReplicatedDocuments(docs);
// Nothing further to do here; the docs were handed to the replication service above.
},
get simpleStore() {
return store();
},
};
this._replicatorInstance = new TrysteroReplicator(env);
return this._replicatorInstance;
} catch (e) {
this._log(
e instanceof Error ? e.message : "Something went wrong while initialising the P2P Replicator",
LOG_LEVEL_INFO
);
this._log(e, LOG_LEVEL_VERBOSE);
throw e;
}
}
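// The `env` above resolves its dependencies through getters, so the replicator
// always reads the current settings, database, and confirm instance rather than
// construction-time snapshots. A minimal sketch of the same pattern:
private makeLiveEnvSketch() {
    const self = this;
    return {
        get settings() {
            return self.settings; // re-evaluated on every access
        },
        get db() {
            return self.getDB();
        },
        get confirm() {
            return self.confirm;
        },
    };
}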
onunload(): void {
removeP2PReplicatorInstance();
void this.close();
}
onload(): void | Promise<void> {
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P, () => {
void this.openPane();
});
this.p2pLogCollector.p2pReplicationLine.onChanged((line) => {
this.storeP2PStatusLine.value = line.value;
});
}
async _everyOnInitializeDatabase(): Promise<boolean> {
await this.initialiseP2PReplicator();
return Promise.resolve(true);
}
private async _allSuspendExtraSync() {
this.plugin.settings.P2P_Enabled = false;
this.plugin.settings.P2P_AutoAccepting = AutoAccepting.NONE;
this.plugin.settings.P2P_AutoBroadcast = false;
this.plugin.settings.P2P_AutoStart = false;
this.plugin.settings.P2P_AutoSyncPeers = "";
this.plugin.settings.P2P_AutoWatchPeers = "";
return await Promise.resolve(true);
}
// async $everyOnLoadStart() {
// return await Promise.resolve();
// }
async openPane() {
await this.services.API.showWindow(VIEW_TYPE_P2P);
}
async _everyOnloadStart(): Promise<boolean> {
this.plugin.registerView(VIEW_TYPE_P2P, (leaf) => new P2PReplicatorPaneView(leaf, this.plugin));
this.plugin.addCommand({
id: "open-p2p-replicator",
name: "P2P Sync : Open P2P Replicator",
callback: async () => {
await this.openPane();
},
});
this.plugin.addCommand({
id: "p2p-establish-connection",
name: "P2P Sync : Connect to the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return !(this._replicatorInstance?.server?.isServing ?? false);
}
void this.open();
},
});
this.plugin.addCommand({
id: "p2p-close-connection",
name: "P2P Sync : Disconnect from the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return this._replicatorInstance?.server?.isServing ?? false;
}
Logger(`Closing P2P Connection`, LOG_LEVEL_NOTICE);
void this.close();
},
});
this.plugin.addCommand({
id: "replicate-now-by-p2p",
name: "Replicate now by P2P",
checkCallback: (isChecking) => {
if (isChecking) {
if (this.settings.remoteType == REMOTE_P2P) return false;
if (!this._replicatorInstance?.server?.isServing) return false;
return true;
}
void this._replicatorInstance?.replicateFromCommand(false);
},
});
this.plugin
.addRibbonIcon("waypoints", "P2P Replicator", async () => {
await this.openPane();
})
.addClass("livesync-ribbon-replicate-p2p");
return await Promise.resolve(true);
}
_everyAfterResumeProcess(): Promise<boolean> {
if (this.settings.P2P_Enabled && this.settings.P2P_AutoStart) {
setTimeout(() => void this.open(), 100);
}
const rep = this._replicatorInstance;
rep?.allowReconnection();
return Promise.resolve(true);
}
_everyBeforeSuspendProcess(): Promise<boolean> {
const rep = this._replicatorInstance;
rep?.disconnectFromServer();
return Promise.resolve(true);
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.getNewReplicator.addHandler(this._anyNewReplicator.bind(this));
services.databaseEvents.onDatabaseInitialisation.addHandler(this._everyOnInitializeDatabase.bind(this));
services.appLifecycle.onInitialise.addHandler(this._everyOnloadStart.bind(this));
services.appLifecycle.onSuspending.addHandler(this._everyBeforeSuspendProcess.bind(this));
services.appLifecycle.onResumed.addHandler(this._everyAfterResumeProcess.bind(this));
services.setting.suspendExtraSync.addHandler(this._allSuspendExtraSync.bind(this));
}
}

View File

@@ -1,222 +0,0 @@
import type { IObsidianModule } from "../../modules/AbstractObsidianModule";
import { P2PReplicatorPaneView, VIEW_TYPE_P2P } from "./P2PReplicator/P2PReplicatorPaneView.ts";
import { TrysteroReplicator } from "../../lib/src/replication/trystero/TrysteroReplicator.ts";
import {
AutoAccepting,
DEFAULT_SETTINGS,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
REMOTE_P2P,
type EntryDoc,
type RemoteDBSettings,
} from "../../lib/src/common/types.ts";
import { LiveSyncCommands } from "../LiveSyncCommands.ts";
import {
LiveSyncTrysteroReplicator,
setReplicatorFunc,
} from "../../lib/src/replication/trystero/LiveSyncTrysteroReplicator.ts";
import {
EVENT_DATABASE_REBUILT,
EVENT_PLUGIN_UNLOADED,
EVENT_REQUEST_OPEN_P2P,
eventHub,
} from "../../common/events.ts";
import {
EVENT_ADVERTISEMENT_RECEIVED,
EVENT_DEVICE_LEAVED,
EVENT_P2P_REQUEST_FORCE_OPEN,
EVENT_REQUEST_STATUS,
} from "../../lib/src/replication/trystero/TrysteroReplicatorP2PServer.ts";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator.ts";
import { Logger } from "octagonal-wheels/common/logger";
import { $msg } from "../../lib/src/common/i18n.ts";
export class P2PReplicator extends LiveSyncCommands implements IObsidianModule {
$anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_P2P) {
return Promise.resolve(new LiveSyncTrysteroReplicator(this.plugin));
}
return undefined!;
}
_replicatorInstance?: TrysteroReplicator;
onunload(): void {
setReplicatorFunc(() => undefined);
void this.close();
}
onload(): void | Promise<void> {
setReplicatorFunc(() => this._replicatorInstance);
eventHub.onEvent(EVENT_ADVERTISEMENT_RECEIVED, (peerId) => this._replicatorInstance?.onNewPeer(peerId));
eventHub.onEvent(EVENT_DEVICE_LEAVED, (info) => this._replicatorInstance?.onPeerLeaved(info));
eventHub.onEvent(EVENT_REQUEST_STATUS, () => {
this._replicatorInstance?.requestStatus();
});
eventHub.onEvent(EVENT_P2P_REQUEST_FORCE_OPEN, () => {
void this.open();
});
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P, () => {
void this.openPane();
});
eventHub.onEvent(EVENT_DATABASE_REBUILT, async () => {
await this.initialiseP2PReplicator();
});
eventHub.onEvent(EVENT_PLUGIN_UNLOADED, () => {
void this.close();
});
// throw new Error("Method not implemented.");
}
async $everyOnInitializeDatabase(): Promise<boolean> {
await this.initialiseP2PReplicator();
return Promise.resolve(true);
}
async $allSuspendExtraSync() {
this.plugin.settings.P2P_Enabled = false;
this.plugin.settings.P2P_AutoAccepting = AutoAccepting.NONE;
this.plugin.settings.P2P_AutoBroadcast = false;
this.plugin.settings.P2P_AutoStart = false;
this.plugin.settings.P2P_AutoSyncPeers = "";
this.plugin.settings.P2P_AutoWatchPeers = "";
return await Promise.resolve(true);
}
async $everyOnLoadStart() {
return await Promise.resolve();
}
async openPane() {
await this.plugin.$$showView(VIEW_TYPE_P2P);
}
async $everyOnloadStart(): Promise<boolean> {
this.plugin.registerView(VIEW_TYPE_P2P, (leaf) => new P2PReplicatorPaneView(leaf, this.plugin));
this.plugin.addCommand({
id: "open-p2p-replicator",
name: "P2P Sync : Open P2P Replicator",
callback: async () => {
await this.openPane();
},
});
this.plugin.addCommand({
id: "p2p-establish-connection",
name: "P2P Sync : Connect to the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return !(this._replicatorInstance?.server?.isServing ?? false);
}
void this.open();
},
});
this.plugin.addCommand({
id: "p2p-close-connection",
name: "P2P Sync : Disconnect from the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return this._replicatorInstance?.server?.isServing ?? false;
}
Logger(`Closing P2P Connection`, LOG_LEVEL_NOTICE);
void this.close();
},
});
this.plugin.addCommand({
id: "replicate-now-by-p2p",
name: "Replicate now by P2P",
checkCallback: (isChecking) => {
if (isChecking) {
if (this.settings.remoteType == REMOTE_P2P) return false;
if (!this._replicatorInstance?.server?.isServing) return false;
return true;
}
void this._replicatorInstance?.replicateFromCommand(false);
},
});
this.plugin
.addRibbonIcon("waypoints", "P2P Replicator", async () => {
await this.openPane();
})
.addClass("livesync-ribbon-replicate-p2p");
return await Promise.resolve(true);
}
$everyAfterResumeProcess(): Promise<boolean> {
if (this.settings.P2P_Enabled && this.settings.P2P_AutoStart) {
setTimeout(() => void this.open(), 100);
}
return Promise.resolve(true);
}
async open() {
if (!this.settings.P2P_Enabled) {
this._notice($msg("P2P.NotEnabled"));
return;
}
if (!this._replicatorInstance) {
await this.initialiseP2PReplicator();
}
await this._replicatorInstance?.open();
}
async close() {
await this._replicatorInstance?.close();
this._replicatorInstance = undefined;
}
getConfig(key: string) {
const vaultName = this.plugin.$$getVaultName();
const dbKey = `${vaultName}-${key}`;
return localStorage.getItem(dbKey);
}
setConfig(key: string, value: string) {
const vaultName = this.plugin.$$getVaultName();
const dbKey = `${vaultName}-${key}`;
localStorage.setItem(dbKey, value);
}
async initialiseP2PReplicator(): Promise<TrysteroReplicator> {
const getPlugin = () => this.plugin;
try {
// const plugin = this.plugin;
if (this._replicatorInstance) {
await this._replicatorInstance.close();
this._replicatorInstance = undefined;
}
if (!this.settings.P2P_AppID) {
this.settings.P2P_AppID = DEFAULT_SETTINGS.P2P_AppID;
}
const initialDeviceName = this.getConfig("p2p_device_name") || this.plugin.$$getDeviceAndVaultName();
const env = {
get db() {
return getPlugin().localDatabase.localDatabase;
},
get confirm() {
return getPlugin().confirm;
},
get deviceName() {
return initialDeviceName;
},
platform: "wip",
get settings() {
return getPlugin().settings;
},
async processReplicatedDocs(docs: EntryDoc[]): Promise<void> {
return await getPlugin().$$parseReplicationResult(
docs as PouchDB.Core.ExistingDocument<EntryDoc>[]
);
},
simpleStore: getPlugin().$$getSimpleStore("p2p-sync"),
};
this._replicatorInstance = new TrysteroReplicator(env);
// p2p_replicator.set(this.p2pReplicator);
return this._replicatorInstance;
} catch (e) {
this._log(
e instanceof Error ? e.message : "Something went wrong while initialising the P2P Replicator",
LOG_LEVEL_INFO
);
this._log(e, LOG_LEVEL_VERBOSE);
throw e;
}
}
}

View File

@@ -1,16 +1,15 @@
<script lang="ts">
import { onMount, setContext } from "svelte";
import { AutoAccepting, DEFAULT_SETTINGS, type P2PSyncSetting } from "../../../lib/src/common/types";
import {
AutoAccepting,
DEFAULT_SETTINGS,
type ObsidianLiveSyncSettings,
type P2PSyncSetting,
} from "../../../lib/src/common/types";
import { type P2PReplicator } from "../CmdP2PSync";
import { AcceptedStatus, ConnectionStatus, type PeerStatus } from "./P2PReplicatorPaneView";
import {
AcceptedStatus,
ConnectionStatus,
type CommandShim,
type PeerStatus,
type PluginShim,
} from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
import PeerStatusRow from "../P2PReplicator/PeerStatusRow.svelte";
import { EVENT_LAYOUT_READY, eventHub } from "../../../common/events";
import type ObsidianLiveSyncPlugin from "../../../main";
import {
type PeerInfo,
type P2PServerInfo,
@@ -20,29 +19,29 @@
} from "../../../lib/src/replication/trystero/TrysteroReplicatorP2PServer";
import { type P2PReplicatorStatus } from "../../../lib/src/replication/trystero/TrysteroReplicator";
import { $msg as _msg } from "../../../lib/src/common/i18n";
import { SETTING_KEY_P2P_DEVICE_NAME } from "../../../lib/src/common/types";
interface Props {
plugin: ObsidianLiveSyncPlugin;
plugin: PluginShim;
cmdSync: CommandShim;
}
let { plugin }: Props = $props();
const cmdSync = plugin.getAddOn<P2PReplicator>("P2PReplicator")!;
let { plugin, cmdSync }: Props = $props();
// const cmdSync = plugin.getAddOn<P2PReplicator>("P2PReplicator")!;
setContext("getReplicator", () => cmdSync);
const initialSettings = { ...plugin.settings };
let settings = $state<P2PSyncSetting>(initialSettings);
// const vaultName = plugin.$$getVaultName();
// const dbKey = `${vaultName}-p2p-device-name`;
const initialDeviceName = cmdSync.getConfig("p2p_device_name") ?? plugin.$$getVaultName();
let deviceName = $state<string>(initialDeviceName);
let deviceName = $state<string>("");
let eP2PEnabled = $state<boolean>(initialSettings.P2P_Enabled);
let eRelay = $state<string>(initialSettings.P2P_relays);
let eRoomId = $state<string>(initialSettings.P2P_roomID);
let ePassword = $state<string>(initialSettings.P2P_passphrase);
let eAppId = $state<string>(initialSettings.P2P_AppID);
let eDeviceName = $state<string>(initialDeviceName);
let eDeviceName = $state<string>("");
let eAutoAccept = $state<boolean>(initialSettings.P2P_AutoAccepting == AutoAccepting.ALL);
let eAutoStart = $state<boolean>(initialSettings.P2P_AutoStart);
let eAutoBroadcast = $state<boolean>(initialSettings.P2P_AutoBroadcast);
@@ -83,7 +82,7 @@
P2P_AutoBroadcast: eAutoBroadcast,
};
plugin.settings = newSettings;
cmdSync.setConfig("p2p_device_name", eDeviceName);
cmdSync.setConfig(SETTING_KEY_P2P_DEVICE_NAME, eDeviceName);
deviceName = eDeviceName;
await plugin.saveSettings();
}
@@ -100,7 +99,12 @@
let serverInfo = $state<P2PServerInfo | undefined>(undefined);
let replicatorInfo = $state<P2PReplicatorStatus | undefined>(undefined);
const applyLoadSettings = (d: ObsidianLiveSyncSettings, force: boolean) => {
const applyLoadSettings = (d: P2PSyncSetting, force: boolean) => {
if (force) {
const initDeviceName = cmdSync.getConfig(SETTING_KEY_P2P_DEVICE_NAME) ?? plugin.services.vault.getVaultName();
deviceName = initDeviceName;
eDeviceName = initDeviceName;
}
const { P2P_relays, P2P_roomID, P2P_passphrase, P2P_AppID, P2P_AutoAccepting } = d;
if (force || !isP2PEnabledModified) eP2PEnabled = d.P2P_Enabled;
if (force || !isRelayModified) eRelay = P2P_relays;
@@ -221,10 +225,10 @@
await cmdSync.close();
}
function startBroadcasting() {
cmdSync._replicatorInstance?.enableBroadcastChanges();
void cmdSync.enableBroadcastCastings();
}
function stopBroadcasting() {
cmdSync._replicatorInstance?.disableBroadcastChanges();
void cmdSync.disableBroadcastCastings();
}
const initialDialogStatusKey = `p2p-dialog-status`;
@@ -249,6 +253,9 @@
};
cmdSync.setConfig(initialDialogStatusKey, JSON.stringify(dialogStatus));
});
let isObsidian = $derived.by(() => {
return plugin.services.API.getPlatform() === "obsidian";
});
</script>
<article>
@@ -264,81 +271,105 @@
{/each}
</details>
<h2>Connection Settings</h2>
<details bind:open={isSettingOpened}>
<summary>{eRelay}</summary>
<table class="settings">
<tbody>
<tr>
<th> Enable P2P Replicator </th>
<td>
<label class={{ "is-dirty": isP2PEnabledModified }}>
<input type="checkbox" bind:checked={eP2PEnabled} />
</label>
</td>
</tr><tr>
<th> Relay settings </th>
<td>
<label class={{ "is-dirty": isRelayModified }}>
<input
type="text"
placeholder="wss://exp-relay.vrtmrz.net, wss://xxxxx"
bind:value={eRelay}
/>
<button onclick={() => useDefaultRelay()}> Use vrtmrz's relay </button>
</label>
</td>
</tr>
<tr>
<th> Room ID </th>
<td>
<label class={{ "is-dirty": isRoomIdModified }}>
<input type="text" placeholder="anything-you-like" bind:value={eRoomId} />
<button onclick={() => chooseRandom()}> Use Random Number </button>
</label>
<span>
<small>
This isolates your connections from other groups of devices. Use the same Room ID on
every device that should connect together.</small
>
</span>
</td>
</tr>
<tr>
<th> Password </th>
<td>
<label class={{ "is-dirty": isPasswordModified }}>
<input type="password" placeholder="password" bind:value={ePassword} />
</label>
<span>
<small> This password is used to encrypt the connection. Use something long enough. </small>
</span>
</td>
</tr>
<tr>
<th> This device name </th>
<td>
<label class={{ "is-dirty": isDeviceNameModified }}>
<input type="text" placeholder="iphone-16" bind:value={eDeviceName} />
</label>
</td>
</tr>
<tr>
<th> Auto Connect </th>
<td>
<label class={{ "is-dirty": isAutoStartModified }}>
<input type="checkbox" bind:checked={eAutoStart} />
</label>
</td>
</tr>
<tr>
<th> Start change-broadcasting on Connect </th>
<td>
<label class={{ "is-dirty": isAutoBroadcastModified }}>
<input type="checkbox" bind:checked={eAutoBroadcast} />
</label>
</td>
</tr>
<!-- <tr>
{#if isObsidian}
You can configure this in the Obsidian plugin settings.
{:else}
<details bind:open={isSettingOpened}>
<summary>{eRelay}</summary>
<table class="settings">
<tbody>
<tr>
<th> Enable P2P Replicator </th>
<td>
<label class={{ "is-dirty": isP2PEnabledModified }}>
<input type="checkbox" bind:checked={eP2PEnabled} />
</label>
</td>
</tr><tr>
<th> Relay settings </th>
<td>
<label class={{ "is-dirty": isRelayModified }}>
<input
type="text"
placeholder="wss://exp-relay.vrtmrz.net, wss://xxxxx"
bind:value={eRelay}
autocomplete="off"
/>
<button onclick={() => useDefaultRelay()}> Use vrtmrz's relay </button>
</label>
</td>
</tr>
<tr>
<th> Room ID </th>
<td>
<label class={{ "is-dirty": isRoomIdModified }}>
<input
type="text"
placeholder="anything-you-like"
bind:value={eRoomId}
autocomplete="off"
spellcheck="false"
autocorrect="off"
/>
<button onclick={() => chooseRandom()}> Use Random Number </button>
</label>
<span>
<small>
This isolates your connections from other groups of devices. Use the same Room ID on
every device that should connect together.</small
>
</span>
</td>
</tr>
<tr>
<th> Password </th>
<td>
<label class={{ "is-dirty": isPasswordModified }}>
<input type="password" placeholder="password" bind:value={ePassword} />
</label>
<span>
<small>
This password is used to encrypt the connection. Use something long enough.
</small>
</span>
</td>
</tr>
<tr>
<th> This device name </th>
<td>
<label class={{ "is-dirty": isDeviceNameModified }}>
<input
type="text"
placeholder="iphone-16"
bind:value={eDeviceName}
autocomplete="off"
/>
</label>
<span>
<small>
The name that identifies this device. Please use a short one for stable peer
detection, e.g., "iphone-16" or "macbook-2021".
</small>
</span>
</td>
</tr>
<tr>
<th> Auto Connect </th>
<td>
<label class={{ "is-dirty": isAutoStartModified }}>
<input type="checkbox" bind:checked={eAutoStart} />
</label>
</td>
</tr>
<tr>
<th> Start change-broadcasting on Connect </th>
<td>
<label class={{ "is-dirty": isAutoBroadcastModified }}>
<input type="checkbox" bind:checked={eAutoBroadcast} />
</label>
</td>
</tr>
<!-- <tr>
<th> Auto Accepting </th>
<td>
<label class={{ "is-dirty": isAutoAcceptModified }}>
@@ -346,11 +377,12 @@
</label>
</td>
</tr> -->
</tbody>
</table>
<button disabled={!isAnyModified} class="button mod-cta" onclick={saveAndApply}>Save and Apply</button>
<button disabled={!isAnyModified} class="button" onclick={revert}>Revert changes</button>
</details>
</tbody>
</table>
<button disabled={!isAnyModified} class="button mod-cta" onclick={saveAndApply}>Save and Apply</button>
<button disabled={!isAnyModified} class="button" onclick={revert}>Revert changes</button>
</details>
{/if}
<div>
<h2>Signaling Server Connection</h2>

View File

@@ -1,49 +1,171 @@
import { WorkspaceLeaf } from "obsidian";
import { Menu, WorkspaceLeaf } from "@/deps.ts";
import ReplicatorPaneComponent from "./P2PReplicatorPane.svelte";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
import { mount } from "svelte";
import { SvelteItemView } from "../../../common/SvelteItemView.ts";
import { eventHub } from "../../../common/events.ts";
import { unique } from "octagonal-wheels/collection";
import { LOG_LEVEL_NOTICE, REMOTE_P2P } from "../../../lib/src/common/types.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { P2PReplicator } from "../CmdP2PReplicator.ts";
import {
EVENT_P2P_PEER_SHOW_EXTRA_MENU,
type PeerStatus,
} from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon.ts";
export const VIEW_TYPE_P2P = "p2p-replicator";
export enum AcceptedStatus {
UNKNOWN = "Unknown",
ACCEPTED = "Accepted",
DENIED = "Denied",
ACCEPTED_IN_SESSION = "Accepted in session",
DENIED_IN_SESSION = "Denied in session",
}
function addToList(item: string, list: string) {
return unique(
list
.split(",")
.map((e) => e.trim())
.concat(item)
.filter((p) => p)
).join(",");
}
export enum ConnectionStatus {
CONNECTED = "Connected",
CONNECTED_LIVE = "Connected(live)",
DISCONNECTED = "Disconnected",
}
function removeFromList(item: string, list: string) {
return list
.split(",")
.map((e) => e.trim())
.filter((p) => p !== item)
.filter((p) => p)
.join(",");
}
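// Round-trip behaviour of the two helpers above (values illustrative):
// addToList("b", "a, b,,c") === "a,b,c" (trims, de-duplicates, drops empties)
// removeFromList("a", "a,b,c") === "b,c"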
export type PeerStatus = {
name: string;
peerId: string;
syncOnConnect: boolean;
watchOnConnect: boolean;
syncOnReplicationCommand: boolean;
accepted: AcceptedStatus;
status: ConnectionStatus;
isFetching: boolean;
isSending: boolean;
isWatching: boolean;
};
export class P2PReplicatorPaneView extends SvelteItemView {
plugin: ObsidianLiveSyncPlugin;
icon = "waypoints";
override icon = "waypoints";
title: string = "";
navigation = false;
override navigation = false;
getIcon(): string {
override getIcon(): string {
return "waypoints";
}
get replicator() {
const r = this.plugin.getAddOn<P2PReplicator>(P2PReplicator.name);
if (!r || !r._replicatorInstance) {
throw new Error("Replicator not found");
}
return r._replicatorInstance;
}
async replicateFrom(peer: PeerStatus) {
await this.replicator.replicateFrom(peer.peerId);
}
async replicateTo(peer: PeerStatus) {
await this.replicator.requestSynchroniseToPeer(peer.peerId);
}
async getRemoteConfig(peer: PeerStatus) {
Logger(
`Requesting remote config for ${peer.name}. Please input the passphrase on the remote device`,
LOG_LEVEL_NOTICE
);
const remoteConfig = await this.replicator.getRemoteConfig(peer.peerId);
if (remoteConfig) {
Logger(`Remote config for ${peer.name} was retrieved successfully`);
const DROP = "Yes, and drop local database";
const KEEP = "Yes, but keep local database";
const CANCEL = "No, cancel";
const yn = await this.plugin.confirm.askSelectStringDialogue(
`Do you really want to apply the remote config? This will overwrite your current config immediately and restart the plug-in.
You can also drop the local database to rebuild from the remote device.`,
[DROP, KEEP, CANCEL] as const,
{
defaultAction: CANCEL,
title: "Apply Remote Config ",
}
);
if (yn === DROP || yn === KEEP) {
if (yn === DROP) {
if (remoteConfig.remoteType !== REMOTE_P2P) {
const yn2 = await this.plugin.confirm.askYesNoDialog(
`Do you want to set the remote type to "P2P Sync" to rebuild by "P2P replication"?`,
{
title: "Rebuild from remote device",
}
);
if (yn2 === "yes") {
remoteConfig.remoteType = REMOTE_P2P;
remoteConfig.P2P_RebuildFrom = peer.name;
}
}
}
this.plugin.settings = remoteConfig;
await this.plugin.saveSettings();
if (yn === DROP) {
await this.plugin.rebuilder.scheduleFetch();
} else {
this.plugin.services.appLifecycle.scheduleRestart();
}
} else {
Logger(`Cancelled\nRemote config for ${peer.name} is not applied`, LOG_LEVEL_NOTICE);
}
} else {
Logger(`Cannot retrieve remote config for ${peer.peerId}`);
}
}
async toggleProp(peer: PeerStatus, prop: "syncOnConnect" | "watchOnConnect" | "syncOnReplicationCommand") {
const settingMap = {
syncOnConnect: "P2P_AutoSyncPeers",
watchOnConnect: "P2P_AutoWatchPeers",
syncOnReplicationCommand: "P2P_SyncOnReplication",
} as const;
const targetSetting = settingMap[prop];
if (peer[prop]) {
this.plugin.settings[targetSetting] = removeFromList(peer.name, this.plugin.settings[targetSetting]);
await this.plugin.saveSettings();
} else {
this.plugin.settings[targetSetting] = addToList(peer.name, this.plugin.settings[targetSetting]);
await this.plugin.saveSettings();
}
await this.plugin.saveSettings();
}
m?: Menu;
constructor(leaf: WorkspaceLeaf, plugin: ObsidianLiveSyncPlugin) {
super(leaf);
this.plugin = plugin;
eventHub.onEvent(EVENT_P2P_PEER_SHOW_EXTRA_MENU, ({ peer, event }) => {
if (this.m) {
this.m.hide();
}
this.m = new Menu()
.addItem((item) => item.setTitle("📥 Only Fetch").onClick(() => this.replicateFrom(peer)))
.addItem((item) => item.setTitle("📤 Only Send").onClick(() => this.replicateTo(peer)))
.addSeparator()
.addItem((item) => {
item.setTitle("🔧 Get Configuration").onClick(async () => {
await this.getRemoteConfig(peer);
});
})
.addSeparator()
.addItem((item) => {
const mark = peer.syncOnConnect ? "checkmark" : null;
item.setTitle("Toggle Sync on connect")
.onClick(async () => {
await this.toggleProp(peer, "syncOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.watchOnConnect ? "checkmark" : null;
item.setTitle("Toggle Watch on connect")
.onClick(async () => {
await this.toggleProp(peer, "watchOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.syncOnReplicationCommand ? "checkmark" : null;
item.setTitle("Toggle Sync on `Replicate now` command")
.onClick(async () => {
await this.toggleProp(peer, "syncOnReplicationCommand");
})
.setIcon(mark);
});
this.m.showAtPosition({ x: event.x, y: event.y });
});
}
getViewType() {
@@ -54,11 +176,22 @@ export class P2PReplicatorPaneView extends SvelteItemView {
return "Peer-to-Peer Replicator";
}
override async onClose(): Promise<void> {
await super.onClose();
if (this.m) {
this.m.hide();
}
}
instantiateComponent(target: HTMLElement) {
const cmdSync = this.plugin.getAddOn<P2PReplicator>(P2PReplicator.name);
if (!cmdSync) {
throw new Error("Replicator not found");
}
return mount(ReplicatorPaneComponent, {
target: target,
props: {
plugin: this.plugin,
plugin: cmdSync.plugin,
cmdSync: cmdSync,
},
});
}
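For reference, the comma-separated peer lists that `toggleProp` mutates above behave like small sets keyed by peer name. Below is a standalone sketch of the `addToList`/`removeFromList` semantics, mirroring the helpers defined in the pane component later in this diff (names are trimmed, de-duplicated, and empty entries dropped); this is an illustration, not the shipped code:

// Standalone sketch of the peer-list semantics (illustration only).
const addToList = (item: string, list: string) =>
    [...new Set(list.split(",").map((e) => e.trim()).concat(item).filter((p) => p))].join(",");
const removeFromList = (item: string, list: string) =>
    list.split(",").map((e) => e.trim()).filter((p) => p && p !== item).join(",");
console.log(addToList("deskPC", "laptop, phone")); // -> "laptop,phone,deskPC"
console.log(removeFromList("phone", "laptop,phone,deskPC")); // -> "laptop,deskPC"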


@@ -1,11 +1,9 @@
<script lang="ts">
import { getContext } from "svelte";
import { AcceptedStatus, ConnectionStatus, type PeerStatus } from "./P2PReplicatorPaneView";
import { Menu, Setting } from "obsidian";
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, Logger } from "octagonal-wheels/common/logger";
import type { P2PReplicator } from "../CmdP2PSync";
import { unique } from "../../../lib/src/common/utils";
import { REMOTE_P2P } from "src/lib/src/common/types";
import { AcceptedStatus, type PeerStatus } from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
import type { P2PReplicator } from "../CmdP2PReplicator";
import { eventHub } from "../../../common/events";
import { EVENT_P2P_PEER_SHOW_EXTRA_MENU } from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
interface Props {
peerStatus: PeerStatus;
@@ -94,154 +92,13 @@
function stopWatching() {
replicator.unwatchPeer(peer.peerId);
}
function replicateFrom() {
replicator.replicateFrom(peer.peerId);
}
function replicateTo() {
replicator.requestSynchroniseToPeer(peer.peerId);
}
function sync() {
replicator.sync(peer.peerId, false);
}
function addToList(item: string, list: string) {
return unique(
list
.split(",")
.map((e) => e.trim())
.concat(item)
.filter((p) => p)
).join(",");
}
function removeFromList(item: string, list: string) {
return list
.split(",")
.map((e) => e.trim())
.filter((p) => p !== item)
.filter((p) => p)
.join(",");
}
function moreMenu(evt: MouseEvent) {
const m = new Menu()
.addItem((item) => item.setTitle("📥 Only Fetch").onClick(() => replicateFrom()))
.addItem((item) => item.setTitle("📤 Only Send").onClick(() => replicateTo()))
.addSeparator()
.addItem((item) => {
item.setTitle("🔧 Get Configuration").onClick(async () => {
Logger(
`Requesting remote config for ${peer.name}. Please input the passphrase on the remote device`,
LOG_LEVEL_NOTICE
);
const remoteConfig = await replicator.getRemoteConfig(peer.peerId);
if (remoteConfig) {
Logger(`Remote config for ${peer.name} is retrieved successfully`);
const DROP = "Yes, and drop local database";
const KEEP = "Yes, but keep local database";
const CANCEL = "No, cancel";
const yn = await replicator.confirm.askSelectStringDialogue(
`Do you really want to apply the remote config? This will overwrite your current config immediately and restart.
You can also drop the local database to rebuild from the remote device.`,
[DROP, KEEP, CANCEL] as const,
{
defaultAction: CANCEL,
title: "Apply Remote Config ",
}
);
if (yn === DROP || yn === KEEP) {
if (yn === DROP) {
if (remoteConfig.remoteType !== REMOTE_P2P) {
const yn2 = await replicator.confirm.askYesNoDialog(
`Do you want to set the remote type to "P2P Sync" to rebuild via "P2P replication"?`,
{
title: "Rebuild from remote device",
}
);
if (yn2 === "yes") {
remoteConfig.remoteType = REMOTE_P2P;
remoteConfig.P2P_RebuildFrom = peer.name;
}
}
}
cmdReplicator.plugin.settings = remoteConfig;
await cmdReplicator.plugin.saveSettings();
// await cmdReplicator.setConfig("rebuildFrom", peer.name);
if (yn === DROP) {
await cmdReplicator.plugin.rebuilder.scheduleFetch();
} else {
await cmdReplicator.plugin.$$scheduleAppReload();
}
} else {
Logger(`Cancelled\nRemote config for ${peer.name} is not applied`, LOG_LEVEL_NOTICE);
}
} else {
Logger(`Cannot retrieve remote config for ${peer.peerId}`);
}
});
})
.addSeparator()
.addItem((item) => {
const mark = peer.syncOnConnect ? "checkmark" : null;
item.setTitle("Toggle Sync on connect")
.onClick(async () => {
// TODO: Fix to prevent writing to settings directly
if (peer.syncOnConnect) {
cmdReplicator.settings.P2P_AutoSyncPeers = removeFromList(
peer.name,
cmdReplicator.settings.P2P_AutoSyncPeers
);
await cmdReplicator.plugin.saveSettings();
} else {
cmdReplicator.settings.P2P_AutoSyncPeers = addToList(
peer.name,
cmdReplicator.settings.P2P_AutoSyncPeers
);
await cmdReplicator.plugin.saveSettings();
}
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.watchOnConnect ? "checkmark" : null;
item.setTitle("Toggle Watch on connect")
.onClick(async () => {
// TODO: Fix to prevent writing to settings directly
if (peer.watchOnConnect) {
cmdReplicator.settings.P2P_AutoWatchPeers = removeFromList(
peer.name,
cmdReplicator.settings.P2P_AutoWatchPeers
);
await cmdReplicator.plugin.saveSettings();
} else {
cmdReplicator.settings.P2P_AutoWatchPeers = addToList(
peer.name,
cmdReplicator.settings.P2P_AutoWatchPeers
);
await cmdReplicator.plugin.saveSettings();
}
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.syncOnReplicationCommand ? "checkmark" : null;
item.setTitle("Toggle Sync on `Replicate now` command")
.onClick(async () => {
// TODO: Fix to prevent writing to settings directly
if (peer.syncOnReplicationCommand) {
cmdReplicator.settings.P2P_SyncOnReplication = removeFromList(
peer.name,
cmdReplicator.settings.P2P_SyncOnReplication
);
await cmdReplicator.plugin.saveSettings();
} else {
cmdReplicator.settings.P2P_SyncOnReplication = addToList(
peer.name,
cmdReplicator.settings.P2P_SyncOnReplication
);
await cmdReplicator.plugin.saveSettings();
}
})
.setIcon(mark);
});
m.showAtPosition({ x: evt.x, y: evt.y });
eventHub.emitEvent(EVENT_P2P_PEER_SHOW_EXTRA_MENU, { peer, event: evt });
}
</script>

Submodule src/lib updated: b8b05a2146...d402f2d7f3

File diff suppressed because it is too large


@@ -0,0 +1,210 @@
import type { FileEventItem } from "@/common/types";
import { HiddenFileSync } from "@/features/HiddenFileSync/CmdHiddenFileSync";
import type { FilePath, UXFileInfoStub, UXFolderInfo, UXInternalFileInfoStub } from "@lib/common/types";
import type { FileEvent } from "@lib/interfaces/StorageEventManager";
import { TFile, type TAbstractFile, TFolder } from "@/deps";
import { LOG_LEVEL_DEBUG } from "octagonal-wheels/common/logger";
import type ObsidianLiveSyncPlugin from "@/main";
import type { LiveSyncCore } from "@/main";
import {
StorageEventManagerBase,
type FileEventItemSentinel,
type StorageEventManagerBaseDependencies,
} from "@lib/managers/StorageEventManager";
import { InternalFileToUXFileInfoStub, TFileToUXFileInfoStub } from "@/modules/coreObsidian/storageLib/utilObsidian";
export class StorageEventManagerObsidian extends StorageEventManagerBase {
plugin: ObsidianLiveSyncPlugin;
core: LiveSyncCore;
// Necessary evil.
cmdHiddenFileSync: HiddenFileSync;
override isFile(file: UXFileInfoStub | UXInternalFileInfoStub | UXFolderInfo | TFile): boolean {
if (file instanceof TFile) {
return true;
}
if (super.isFile(file)) {
return true;
}
return !file.isFolder;
}
override isFolder(file: UXFileInfoStub | UXInternalFileInfoStub | UXFolderInfo | TFolder): boolean {
if (file instanceof TFolder) {
return true;
}
if (super.isFolder(file)) {
return true;
}
return !!file.isFolder;
}
constructor(plugin: ObsidianLiveSyncPlugin, core: LiveSyncCore, dependencies: StorageEventManagerBaseDependencies) {
super(dependencies);
this.plugin = plugin;
this.core = core;
this.cmdHiddenFileSync = this.plugin.getAddOn(HiddenFileSync.name) as HiddenFileSync;
}
async beginWatch() {
await this.snapShotRestored;
const plugin = this.plugin;
this.watchVaultChange = this.watchVaultChange.bind(this);
this.watchVaultCreate = this.watchVaultCreate.bind(this);
this.watchVaultDelete = this.watchVaultDelete.bind(this);
this.watchVaultRename = this.watchVaultRename.bind(this);
this.watchVaultRawEvents = this.watchVaultRawEvents.bind(this);
this.watchEditorChange = this.watchEditorChange.bind(this);
plugin.registerEvent(plugin.app.vault.on("modify", this.watchVaultChange));
plugin.registerEvent(plugin.app.vault.on("delete", this.watchVaultDelete));
plugin.registerEvent(plugin.app.vault.on("rename", this.watchVaultRename));
plugin.registerEvent(plugin.app.vault.on("create", this.watchVaultCreate));
//@ts-ignore : Internal API
plugin.registerEvent(plugin.app.vault.on("raw", this.watchVaultRawEvents));
plugin.registerEvent(plugin.app.workspace.on("editor-change", this.watchEditorChange));
}
watchEditorChange(editor: any, info: any) {
if (!("path" in info)) {
return;
}
if (!this.shouldBatchSave) {
return;
}
const file = info?.file as TFile;
if (!file) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// this._log(`Editor change skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
if (!this.isWaiting(file.path as FilePath)) {
return;
}
const data = info?.data as string;
const fi: FileEvent = {
type: "CHANGED",
file: TFileToUXFileInfoStub(file),
cachedData: data,
};
void this.appendQueue([fi]);
}
watchVaultCreate(file: TAbstractFile, ctx?: any) {
if (file instanceof TFolder) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// this._log(`File create skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
const fileInfo = TFileToUXFileInfoStub(file);
void this.appendQueue([{ type: "CREATE", file: fileInfo }], ctx);
}
watchVaultChange(file: TAbstractFile, ctx?: any) {
if (file instanceof TFolder) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// this._log(`File change skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
const fileInfo = TFileToUXFileInfoStub(file);
void this.appendQueue([{ type: "CHANGED", file: fileInfo }], ctx);
}
watchVaultDelete(file: TAbstractFile, ctx?: any) {
if (file instanceof TFolder) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// this._log(`File delete skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
const fileInfo = TFileToUXFileInfoStub(file, true);
void this.appendQueue([{ type: "DELETE", file: fileInfo }], ctx);
}
watchVaultRename(file: TAbstractFile, oldFile: string, ctx?: any) {
// Vault 'rename' will not be raised for self-events (Self-hosted LiveSync does not handle 'rename').
if (file instanceof TFile) {
const fileInfo = TFileToUXFileInfoStub(file);
void this.appendQueue(
[
{
type: "DELETE",
file: {
path: oldFile as FilePath,
name: file.name,
stat: {
mtime: file.stat.mtime,
ctime: file.stat.ctime,
size: file.stat.size,
type: "file",
},
deleted: true,
},
skipBatchWait: true,
},
{ type: "CREATE", file: fileInfo, skipBatchWait: true },
],
ctx
);
}
}
// Watch raw events (Internal API)
watchVaultRawEvents(path: FilePath) {
if (this.storageAccess.isFileProcessing(path)) {
// this._log(`Raw file event skipped because the file is being processed: ${path}`, LOG_LEVEL_VERBOSE);
return;
}
// Only for internal files.
if (!this.settings) return;
// if (this.plugin.settings.useIgnoreFiles && this.plugin.ignoreFiles.some(e => path.endsWith(e.trim()))) {
if (this.settings.useIgnoreFiles) {
// If it is one of the ignore files, refresh the cached one.
// (Calling $$isTargetFile will refresh the cache)
void this.vaultService.isTargetFile(path).then(() => this._watchVaultRawEvents(path));
} else {
void this._watchVaultRawEvents(path);
}
}
async _watchVaultRawEvents(path: FilePath) {
if (!this.settings.syncInternalFiles && !this.settings.usePluginSync) return;
if (!this.settings.watchInternalFileChanges) return;
if (!path.startsWith(this.plugin.app.vault.configDir)) return;
if (path.endsWith("/")) {
// Folder
return;
}
const isTargetFile = await this.cmdHiddenFileSync.isTargetFile(path);
if (!isTargetFile) return;
void this.appendQueue(
[
{
type: "INTERNAL",
file: InternalFileToUXFileInfoStub(path),
skipBatchWait: true, // Internal files should be processed immediately.
},
],
null
);
}
async _saveSnapshot(snapshot: (FileEventItem | FileEventItemSentinel)[]) {
await this.core.kvDB.set("storage-event-manager-snapshot", snapshot);
this._log(`Storage operation snapshot saved: ${snapshot.length} items`, LOG_LEVEL_DEBUG);
}
async _loadSnapshot() {
const snapShot = await this.core.kvDB.get<(FileEventItem | FileEventItemSentinel)[]>(
"storage-event-manager-snapshot"
);
return snapShot;
}
updateStatus() {
const allFileEventItems = this.bufferedQueuedItems.filter((e): e is FileEventItem => "args" in e);
const allItems = allFileEventItems.filter((e) => !e.cancelled);
const totalItems = allItems.length + this.concurrentProcessing.waiting;
const processing = this.processingCount;
const batchedCount = this._waitingMap.size;
this.fileProcessing.batched.value = batchedCount;
this.fileProcessing.processing.value = processing;
this.fileProcessing.totalQueued.value = totalItems + batchedCount + processing;
}
}
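The `_saveSnapshot`/`_loadSnapshot` pair above persists the pending event queue across reloads through the core key-value store. A minimal sketch of that round-trip, assuming only a generic `get`/`set` store of the same shape as `core.kvDB` above:

// Sketch: queue snapshot round-trip via a generic key-value store.
interface KeyValueStore {
    set<T>(key: string, value: T): Promise<void>;
    get<T>(key: string): Promise<T | undefined>;
}
const SNAPSHOT_KEY = "storage-event-manager-snapshot";
async function persistQueue<T>(kv: KeyValueStore, pending: T[]): Promise<void> {
    await kv.set(SNAPSHOT_KEY, pending); // called before unload
}
async function restoreQueue<T>(kv: KeyValueStore): Promise<T[]> {
    return (await kv.get<T[]>(SNAPSHOT_KEY)) ?? []; // empty queue when nothing was saved
}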


@@ -1,147 +1,22 @@
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import type { LOG_LEVEL } from "../lib/src/common/types";
import type { LiveSyncCore } from "../main";
import { unique } from "octagonal-wheels/collection";
import type { IObsidianModule } from "./AbstractObsidianModule.ts";
import type {
ICoreModuleBase,
AllInjectableProps,
AllExecuteProps,
EveryExecuteProps,
AnyExecuteProps,
ICoreModule,
} from "./ModuleTypes";
function isOverridableKey(key: string): key is keyof ICoreModuleBase {
return key.startsWith("$");
}
function isInjectableKey(key: string): key is keyof AllInjectableProps {
return key.startsWith("$$");
}
function isAllExecuteKey(key: string): key is keyof AllExecuteProps {
return key.startsWith("$all");
}
function isEveryExecuteKey(key: string): key is keyof EveryExecuteProps {
return key.startsWith("$every");
}
function isAnyExecuteKey(key: string): key is keyof AnyExecuteProps {
return key.startsWith("$any");
}
/**
* All $-prefixed functions are hooked by the modules. Be careful when calling them directly.
* Please refer to the module's source code to understand each function.
* $$ : Completely overridden functions.
* $all : Process all modules and return all results.
* $every : Process all modules until the first failure.
* $any : Process all modules until the first success.
* $ : Other interception points. You should manually assign the module.
* All of the above is performed in the injectModules function.
*/
export function injectModules<T extends ICoreModule>(target: T, modules: ICoreModule[]) {
const allKeys = unique([
...Object.keys(Object.getOwnPropertyDescriptors(target)),
...Object.keys(Object.getOwnPropertyDescriptors(Object.getPrototypeOf(target))),
]).filter((e) => e.startsWith("$")) as (keyof ICoreModule)[];
const moduleMap = new Map<string, IObsidianModule[]>();
for (const module of modules) {
for (const key of allKeys) {
if (isOverridableKey(key)) {
if (key in module) {
const list = moduleMap.get(key) || [];
if (typeof module[key] === "function") {
module[key] = module[key].bind(module) as any;
}
list.push(module);
moduleMap.set(key, list);
}
}
}
}
Logger(`Injecting modules for ${target.constructor.name}`, LOG_LEVEL_VERBOSE);
for (const key of allKeys) {
const modules = moduleMap.get(key) || [];
if (isInjectableKey(key)) {
if (modules.length == 0) {
throw new Error(`No module injected for ${key}. This is a fatal error.`);
}
target[key] = modules[0][key]! as any;
Logger(`[${modules[0].constructor.name}]: Injected ${key} `, LOG_LEVEL_VERBOSE);
} else if (isAllExecuteKey(key)) {
const modules = moduleMap.get(key) || [];
target[key] = async (...args: any) => {
for (const module of modules) {
try {
//@ts-ignore
await module[key]!(...args);
} catch (ex) {
Logger(`[${module.constructor.name}]: All handler for ${key} failed`, LOG_LEVEL_VERBOSE);
Logger(ex, LOG_LEVEL_VERBOSE);
}
}
return true;
};
for (const module of modules) {
Logger(`[${module.constructor.name}]: Injected (All) ${key} `, LOG_LEVEL_VERBOSE);
}
} else if (isEveryExecuteKey(key)) {
target[key] = async (...args: any) => {
for (const module of modules) {
try {
//@ts-ignore:2556
const ret = await module[key]!(...args);
if (ret !== undefined && !ret) {
// Failed; return that falsy value.
return ret;
}
} catch (ex) {
Logger(`[${module.constructor.name}]: Every handler for ${key} failed`);
Logger(ex, LOG_LEVEL_VERBOSE);
}
}
return true;
};
for (const module of modules) {
Logger(`[${module.constructor.name}]: Injected (Every) ${key} `, LOG_LEVEL_VERBOSE);
}
} else if (isAnyExecuteKey(key)) {
//@ts-ignore
target[key] = async (...args: any[]) => {
for (const module of modules) {
try {
//@ts-ignore:2556
const ret = await module[key](...args);
// If a truthy value is returned, return that value.
if (ret) {
return ret;
}
} catch (ex) {
Logger(`[${module.constructor.name}]: Any handler for ${key} failed`);
Logger(ex, LOG_LEVEL_VERBOSE);
}
}
return false;
};
for (const module of modules) {
Logger(`[${module.constructor.name}]: Injected (Any) ${key} `, LOG_LEVEL_VERBOSE);
}
} else {
Logger(`No injected handler for ${key} `, LOG_LEVEL_VERBOSE);
}
}
Logger(`Injected modules for ${target.constructor.name}`, LOG_LEVEL_VERBOSE);
return true;
}
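To make the contract described in the doc comment above concrete, here is a hypothetical module wired through `injectModules`; all names below are invented for illustration, and the real keys live on `ICoreModule`:

// Hypothetical illustration of the $-prefix contract (names invented).
const moduleA = {
    // $$: exactly one module must provide this; it is injected directly.
    $$getName: () => "A",
    // $every: all modules run in order; the first falsy result short-circuits.
    $everyValidate: (n: number) => Promise.resolve(n > 0),
    // $any: modules run in order until one returns a truthy value.
    $anyResolve: (key: string) => Promise.resolve(key === "a" ? "handled by A" : false),
};
// After injectModules(core, [moduleA /*, ...more modules */]):
//   core.$$getName()             -> "A"
//   await core.$everyValidate(1) -> true (no module failed)
//   await core.$anyResolve("a")  -> "handled by A" (first truthy result wins)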
import { LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import type { AnyEntry, FilePathWithPrefix } from "@lib/common/types";
import type { LiveSyncCore } from "@/main";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import { createInstanceLogFunction } from "@lib/services/lib/logUtils";
export abstract class AbstractModule {
_log = (msg: any, level: LOG_LEVEL = LOG_LEVEL_INFO, key?: string) => {
if (typeof msg === "string" && level !== LOG_LEVEL_NOTICE) {
msg = `[${this.constructor.name}]\u{200A} ${msg}`;
}
// console.log(msg);
Logger(msg, level, key);
};
_log = createInstanceLogFunction(this.constructor.name, this.services.API);
get services() {
if (!this.core._services) {
throw new Error("Services are not ready yet.");
}
return this.core._services;
}
addCommand = this.services.API.addCommand.bind(this.services.API);
registerView = this.services.API.registerWindow.bind(this.services.API);
addRibbonIcon = this.services.API.addRibbonIcon.bind(this.services.API);
registerObsidianProtocolHandler = this.services.API.registerProtocolHandler.bind(this.services.API);
get localDatabase() {
return this.core.localDatabase;
@@ -153,14 +28,24 @@ export abstract class AbstractModule {
this.core.settings = value;
}
getPath(entry: AnyEntry): FilePathWithPrefix {
return this.services.path.getPath(entry);
}
getPathWithoutPrefix(entry: AnyEntry): FilePathWithPrefix {
return stripAllPrefixes(this.services.path.getPath(entry));
}
onBindFunction(core: LiveSyncCore, services: typeof core.services) {
// Override if needed.
}
constructor(public core: LiveSyncCore) {
Logger(`[${this.constructor.name}] Loaded`, LOG_LEVEL_VERBOSE);
}
saveSettings = this.core.saveSettings.bind(this.core);
// abstract $everyTest(): Promise<boolean>;
addTestResult(key: string, value: boolean, summary?: string, message?: string) {
this.core.$$addTestResult(`${this.constructor.name}`, key, value, summary, message);
this.services.test.addTestResult(`${this.constructor.name}`, key, value, summary, message);
}
testDone(result: boolean = true) {
return Promise.resolve(result);
@@ -185,4 +70,14 @@ export abstract class AbstractModule {
}
return this.testDone();
}
isMainReady() {
return this.services.appLifecycle.isReady();
}
isMainSuspended() {
return this.services.appLifecycle.isSuspended();
}
isDatabaseReady() {
return this.services.database.isDatabaseReady();
}
}


@@ -10,45 +10,19 @@ export type ModuleKeys = keyof IObsidianModule;
export type ChainableModuleProps = ChainableExecuteFunction<ObsidianLiveSyncPlugin>;
export abstract class AbstractObsidianModule extends AbstractModule {
addCommand = this.plugin.addCommand.bind(this.plugin);
registerView = this.plugin.registerView.bind(this.plugin);
addRibbonIcon = this.plugin.addRibbonIcon.bind(this.plugin);
registerObsidianProtocolHandler = this.plugin.registerObsidianProtocolHandler.bind(this.plugin);
get localDatabase() {
return this.plugin.localDatabase;
}
get settings() {
return this.plugin.settings;
}
set settings(value) {
this.plugin.settings = value;
}
get app() {
return this.plugin.app;
}
constructor(
public plugin: ObsidianLiveSyncPlugin,
public core: LiveSyncCore
core: LiveSyncCore
) {
super(core);
}
saveSettings = this.plugin.saveSettings.bind(this.plugin);
_isMainReady() {
return this.core.$$isReady();
}
_isMainSuspended() {
return this.core.$$isSuspended();
}
_isDatabaseReady() {
return this.core.$$isDatabaseReady();
}
// Should be overridden.
_isThisModuleEnabled() {
isThisModuleEnabled() {
return true;
}
}


@@ -1,354 +0,0 @@
import { LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { EVENT_FILE_SAVED, eventHub } from "../../common/events";
import {
getDatabasePathFromUXFileInfo,
getStoragePathFromUXFileInfo,
isInternalMetadata,
markChangesAreSame,
} from "../../common/utils";
import type {
UXFileInfoStub,
FilePathWithPrefix,
UXFileInfo,
MetaEntry,
LoadedEntry,
FilePath,
SavingEntry,
DocumentID,
} from "../../lib/src/common/types";
import type { DatabaseFileAccess } from "../interfaces/DatabaseFileAccess";
import { type IObsidianModule } from "../AbstractObsidianModule.ts";
import { isPlainText, shouldBeIgnored, stripAllPrefixes } from "../../lib/src/string_and_binary/path";
import {
createBlob,
createTextBlob,
delay,
determineTypeFromBlob,
isDocContentSame,
readContent,
} from "../../lib/src/common/utils";
import { serialized } from "octagonal-wheels/concurrency/lock";
import { AbstractModule } from "../AbstractModule.ts";
import { ICHeader } from "../../common/types.ts";
export class ModuleDatabaseFileAccess extends AbstractModule implements IObsidianModule, DatabaseFileAccess {
$everyOnload(): Promise<boolean> {
this.core.databaseFileAccess = this;
return Promise.resolve(true);
}
async $everyModuleTest(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
const testString = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam nec purus nec nunc";
// Before test, we need to delete completely.
const conflicts = await this.getConflictedRevs("autoTest.md" as FilePathWithPrefix);
for (const rev of conflicts) {
await this.delete("autoTest.md" as FilePathWithPrefix, rev);
}
await this.delete("autoTest.md" as FilePathWithPrefix);
// OK, begin!
await this._test(
"storeContent",
async () => await this.storeContent("autoTest.md" as FilePathWithPrefix, testString)
);
// For test, we need to clear the caches.
await this.localDatabase.hashCaches.clear();
await this._test("readContent", async () => {
const content = await this.fetch("autoTest.md" as FilePathWithPrefix);
if (!content) return "File not found";
if (content.deleted) return "File is deleted";
return (await content.body.text()) == testString
? true
: `Content is not same ${await content.body.text()}`;
});
await this._test("delete", async () => await this.delete("autoTest.md" as FilePathWithPrefix));
await this._test("read deleted content", async () => {
const content = await this.fetch("autoTest.md" as FilePathWithPrefix);
if (!content) return true;
if (content.deleted) return true;
return `Still exist !:${await content.body.text()},${JSON.stringify(content, undefined, 2)}`;
});
await delay(100);
return this.testDone();
}
async checkIsTargetFile(file: UXFileInfoStub | FilePathWithPrefix): Promise<boolean> {
const path = getStoragePathFromUXFileInfo(file);
if (!(await this.core.$$isTargetFile(path))) {
this._log(`File is not target`, LOG_LEVEL_VERBOSE);
return false;
}
if (shouldBeIgnored(path)) {
this._log(`File should be ignored`, LOG_LEVEL_VERBOSE);
return false;
}
return true;
}
async delete(file: UXFileInfoStub | FilePathWithPrefix, rev?: string): Promise<boolean> {
if (!(await this.checkIsTargetFile(file))) {
return true;
}
const fullPath = getDatabasePathFromUXFileInfo(file);
try {
this._log(`deleteDB By path:${fullPath}`);
return await this.deleteFromDBbyPath(fullPath, rev);
} catch (ex) {
this._log(`Failed to delete ${fullPath}`);
this._log(ex, LOG_LEVEL_VERBOSE);
return false;
}
}
async createChunks(file: UXFileInfo, force: boolean = false, skipCheck?: boolean): Promise<boolean> {
return await this._store(file, force, skipCheck, true);
}
async store(file: UXFileInfo, force: boolean = false, skipCheck?: boolean): Promise<boolean> {
return await this._store(file, force, skipCheck, false);
}
async storeContent(path: FilePathWithPrefix, content: string): Promise<boolean> {
const blob = createTextBlob(content);
const bytes = (await blob.arrayBuffer()).byteLength;
const isInternal = path.startsWith(".") ? true : undefined;
const dummyUXFileInfo: UXFileInfo = {
name: path.split("/").pop() as string,
path: path,
stat: {
size: bytes,
ctime: Date.now(),
mtime: Date.now(),
type: "file",
},
body: blob,
isInternal,
};
return await this._store(dummyUXFileInfo, true, false, false);
}
async _store(
file: UXFileInfo,
force: boolean = false,
skipCheck?: boolean,
onlyChunks?: boolean
): Promise<boolean> {
if (!skipCheck) {
if (!(await this.checkIsTargetFile(file))) {
return true;
}
}
if (!file) {
this._log("File seems bad", LOG_LEVEL_VERBOSE);
return false;
}
// const path = getPathFromUXFileInfo(file);
const isPlain = isPlainText(file.name);
const possiblyLarge = !isPlain;
const content = file.body;
const datatype = determineTypeFromBlob(content);
const idPrefix = file.isInternal ? ICHeader : "";
const fullPath = getStoragePathFromUXFileInfo(file);
const fullPathOnDB = getDatabasePathFromUXFileInfo(file);
if (possiblyLarge) this._log(`Processing: ${fullPath}`, LOG_LEVEL_VERBOSE);
// if (isInternalMetadata(fullPath)) {
// this._log(`Internal file: ${fullPath}`, LOG_LEVEL_VERBOSE);
// return false;
// }
if (file.isInternal) {
if (file.deleted) {
file.stat = {
size: 0,
ctime: Date.now(),
mtime: Date.now(),
type: "file",
};
} else if (file.stat == undefined) {
const stat = await this.core.storageAccess.statHidden(file.path);
if (!stat) {
// Whether the file was actually deleted should have been recorded by this point, so this is an unexpected case; we should raise an error.
this._log(`Internal file not found: ${fullPath}`, LOG_LEVEL_VERBOSE);
return false;
}
file.stat = stat;
}
}
const idMain = await this.core.$$path2id(fullPath);
const id = (idPrefix + idMain) as DocumentID;
const d: SavingEntry = {
_id: id,
path: fullPathOnDB,
data: content,
ctime: file.stat.ctime,
mtime: file.stat.mtime,
size: file.stat.size,
children: [],
datatype: datatype,
type: datatype,
eden: {},
};
// upsert should be locked
const msg = `STORAGE -> DB (${datatype}) `;
const isNotChanged = await serialized("file-" + fullPath, async () => {
if (force) {
this._log(msg + "Force writing " + fullPath, LOG_LEVEL_VERBOSE);
return false;
}
// Commented out temporarily: this checks whether the file was made by ourselves.
// if (this.core.storageAccess.recentlyTouched(file)) {
// return true;
// }
try {
const old = await this.localDatabase.getDBEntry(d.path, undefined, false, true, false);
if (old !== false) {
const oldData = { data: old.data, deleted: old._deleted || old.deleted };
const newData = { data: d.data, deleted: d._deleted || d.deleted };
if (oldData.deleted != newData.deleted) return false;
if (!(await isDocContentSame(old.data, newData.data))) return false;
this._log(
msg + "Skipped (not changed) " + fullPath + (d._deleted || d.deleted ? " (deleted)" : ""),
LOG_LEVEL_VERBOSE
);
markChangesAreSame(old, d.mtime, old.mtime);
return true;
// d._rev = old._rev;
}
} catch (ex) {
this._log(
msg +
"Error, Could not check the diff for the old one." +
(force ? "force writing." : "") +
fullPath +
(d._deleted || d.deleted ? " (deleted)" : ""),
LOG_LEVEL_VERBOSE
);
this._log(ex, LOG_LEVEL_VERBOSE);
return !force;
}
return false;
});
if (isNotChanged) {
this._log(msg + " Skip " + fullPath, LOG_LEVEL_VERBOSE);
return true;
}
const ret = await this.localDatabase.putDBEntry(d, onlyChunks);
if (ret !== false) {
this._log(msg + fullPath);
eventHub.emitEvent(EVENT_FILE_SAVED);
}
return ret != false;
}
async getConflictedRevs(file: UXFileInfoStub | FilePathWithPrefix): Promise<string[]> {
if (!(await this.checkIsTargetFile(file))) {
return [];
}
const filename = getDatabasePathFromUXFileInfo(file);
const doc = await this.localDatabase.getDBEntryMeta(filename, { conflicts: true }, true);
if (doc === false) {
return [];
}
return doc._conflicts || [];
}
async fetch(
file: UXFileInfoStub | FilePathWithPrefix,
rev?: string,
waitForReady?: boolean,
skipCheck = false
): Promise<UXFileInfo | false> {
if (skipCheck && !(await this.checkIsTargetFile(file))) {
return false;
}
const entry = await this.fetchEntry(file, rev, waitForReady, true);
if (entry === false) {
return false;
}
const data = createBlob(readContent(entry));
const path = stripAllPrefixes(entry.path);
const fileInfo: UXFileInfo = {
name: path.split("/").pop() as string,
path: path,
stat: {
size: entry.size,
ctime: entry.ctime,
mtime: entry.mtime,
type: "file",
},
body: data,
deleted: entry.deleted || entry._deleted,
};
if (isInternalMetadata(entry.path)) {
fileInfo.isInternal = true;
}
return fileInfo;
}
async fetchEntryMeta(
file: UXFileInfoStub | FilePathWithPrefix,
rev?: string,
skipCheck = false
): Promise<MetaEntry | false> {
const dbFileName = getDatabasePathFromUXFileInfo(file);
if (skipCheck && !(await this.checkIsTargetFile(file))) {
return false;
}
const doc = await this.localDatabase.getDBEntryMeta(dbFileName, rev ? { rev: rev } : undefined, true);
if (doc === false) {
return false;
}
return doc as MetaEntry;
}
async fetchEntryFromMeta(
meta: MetaEntry,
waitForReady: boolean = true,
skipCheck = false
): Promise<LoadedEntry | false> {
if (skipCheck && !(await this.checkIsTargetFile(meta.path))) {
return false;
}
const doc = await this.localDatabase.getDBEntryFromMeta(
meta as LoadedEntry,
undefined,
false,
waitForReady,
true
);
if (doc === false) {
return false;
}
return doc;
}
async fetchEntry(
file: UXFileInfoStub | FilePathWithPrefix,
rev?: string,
waitForReady: boolean = true,
skipCheck = false
): Promise<LoadedEntry | false> {
if (skipCheck && !(await this.checkIsTargetFile(file))) {
return false;
}
const entry = await this.fetchEntryMeta(file, rev, true);
if (entry === false) {
return false;
}
const doc = await this.fetchEntryFromMeta(entry, waitForReady, true);
return doc;
}
async deleteFromDBbyPath(fullPath: FilePath | FilePathWithPrefix, rev?: string): Promise<boolean> {
if (!(await this.checkIsTargetFile(fullPath))) {
this._log(`deleteFromDBbyPath: File is not target: ${fullPath}`);
return true;
}
const opt = rev ? { rev: rev } : undefined;
const ret = await this.localDatabase.deleteDBEntry(fullPath, opt);
eventHub.emitEvent(EVENT_FILE_SAVED);
return ret;
}
}
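The `$everyModuleTest` above effectively documents the intended usage of this class. A condensed sketch of the same store, fetch, and delete round-trip (casts and error handling trimmed; `access` stands in for the `DatabaseFileAccess` instance, and the types are the ones imported in this file):

// Condensed sketch of the round-trip exercised by $everyModuleTest above.
async function roundTrip(access: DatabaseFileAccess) {
    const path = "autoTest.md" as FilePathWithPrefix;
    await access.storeContent(path, "Lorem ipsum"); // STORAGE -> DB
    const fetched = await access.fetch(path); // DB -> UXFileInfo, or false
    if (fetched && !fetched.deleted) {
        console.log(await fetched.body.text()); // "Lorem ipsum"
    }
    await access.delete(path); // marks the entry as deleted in the local database
}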


@@ -1,408 +0,0 @@
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { serialized } from "octagonal-wheels/concurrency/lock";
import type { FileEventItem } from "../../common/types";
import type {
FilePath,
FilePathWithPrefix,
MetaEntry,
UXFileInfo,
UXFileInfoStub,
UXInternalFileInfoStub,
} from "../../lib/src/common/types";
import { AbstractModule } from "../AbstractModule.ts";
import {
compareFileFreshness,
EVEN,
getPath,
getPathWithoutPrefix,
getStoragePathFromUXFileInfo,
markChangesAreSame,
} from "../../common/utils";
import { getDocDataAsArray, isDocContentSame, readContent } from "../../lib/src/common/utils";
import { shouldBeIgnored } from "../../lib/src/string_and_binary/path";
import type { ICoreModule } from "../ModuleTypes";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
import { eventHub } from "../../common/events.ts";
export class ModuleFileHandler extends AbstractModule implements ICoreModule {
get db() {
return this.core.databaseFileAccess;
}
get storage() {
return this.core.storageAccess;
}
$everyOnloadStart(): Promise<boolean> {
this.core.fileHandler = this;
return Promise.resolve(true);
}
async readFileFromStub(file: UXFileInfoStub | UXFileInfo) {
if ("body" in file && file.body) {
return file;
}
const readFile = await this.storage.readStubContent(file);
if (!readFile) {
throw new Error(`File ${file.path} does not exist on the storage`);
}
return readFile;
}
async storeFileToDB(
info: UXFileInfoStub | UXFileInfo | UXInternalFileInfoStub | FilePathWithPrefix,
force: boolean = false,
onlyChunks: boolean = false
): Promise<boolean | undefined> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
if (file == null) {
this._log(`File ${info} does not exist on the storage`, LOG_LEVEL_VERBOSE);
return false;
}
// const file = item.args.file;
if (file.isInternal) {
this._log(
`Internal file ${file.path} is not allowed to be processed on processFileEvent`,
LOG_LEVEL_VERBOSE
);
return false;
}
// First, check the file on the database
const entry = await this.db.fetchEntry(file, undefined, true, true);
if (!entry || entry.deleted || entry._deleted) {
// If the file does not exist in the database, it should be created.
const readFile = await this.readFileFromStub(file);
if (!onlyChunks) {
return await this.db.store(readFile);
} else {
return await this.db.createChunks(readFile, false, true);
}
}
// The entry exists in the database; check the difference between the file and the entry.
let shouldApplied = false;
if (!force && !onlyChunks) {
// 1. If the timestamp differs significantly, the entry should be updated.
// Note: This checks only the mtime, with the resolution reduced to 2 seconds.
// The 2 seconds are for ZIP files' mtime; otherwise, we could not back up the vault as a ZIP file.
// This is hardcoded in `compareMtime` of `src/common/utils.ts`.
if (compareFileFreshness(file, entry) !== EVEN) {
shouldApplied = true;
}
// 2. If not, the content should be checked.
let readFile: UXFileInfo | undefined = undefined;
if (!shouldApplied) {
readFile = await this.readFileFromStub(file);
if (await isDocContentSame(getDocDataAsArray(entry.data), readFile.body)) {
// The timestamp differs but the content is the same; therefore, the two timestamps should be treated as the same.
// So, mark the changes as the same.
markChangesAreSame(file, file.stat.mtime, entry.mtime);
} else {
shouldApplied = true;
}
}
if (!shouldApplied) {
this._log(`File ${file.path} is not changed`, LOG_LEVEL_VERBOSE);
return true;
}
if (!readFile) readFile = await this.readFileFromStub(file);
// If the file is changed, then the file should be stored.
if (onlyChunks) {
return await this.db.createChunks(readFile, false, true);
} else {
return await this.db.store(readFile, false, true);
}
} else {
// If force is true, then it should be updated.
const readFile = await this.readFileFromStub(file);
if (onlyChunks) {
return await this.db.createChunks(readFile, true, true);
} else {
return await this.db.store(readFile, true, true);
}
}
}
async deleteFileFromDB(info: UXFileInfoStub | UXInternalFileInfoStub | FilePath): Promise<boolean | undefined> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
if (file == null) {
this._log(`File ${info} does not exist on the storage`, LOG_LEVEL_VERBOSE);
return false;
}
// const file = item.args.file;
if (file.isInternal) {
this._log(
`Internal file ${file.path} is not allowed to be processed on processFileEvent`,
LOG_LEVEL_VERBOSE
);
return false;
}
// First, check the file on the database
const entry = await this.db.fetchEntry(file, undefined, true, true);
if (!entry || entry.deleted || entry._deleted) {
this._log(`File ${file.path} does not exist or is already deleted in the database`, LOG_LEVEL_VERBOSE);
return false;
}
// Check whether the file is already conflicted. If so, only the conflicted revision should be deleted.
const conflictedRevs = await this.db.getConflictedRevs(file);
if (conflictedRevs.length > 0) {
// If conflicted, it should be deleted. entry._rev should be our own file's rev.
// TODO: I BELIEVED SO, BUT I NOTICED THAT I AM NOT SURE. I SHOULD CHECK THIS.
// ANYWAY, I SHOULD DELETE THE FILE. ACTUALLY, WE SIMPLY DELETED THE FILE UP TO THE PREVIOUS VERSION.
return await this.db.delete(file, entry._rev);
}
// Otherwise, the file should be deleted simply. This is the previous behaviour.
return await this.db.delete(file);
}
async deleteRevisionFromDB(
info: UXFileInfoStub | FilePath | FilePathWithPrefix,
rev: string
): Promise<boolean | undefined> {
//TODO: Possibly check the conflicting.
return await this.db.delete(info, rev);
}
async resolveConflictedByDeletingRevision(
info: UXFileInfoStub | FilePath,
rev: string
): Promise<boolean | undefined> {
const path = getStoragePathFromUXFileInfo(info);
if (!(await this.deleteRevisionFromDB(info, rev))) {
this._log(`Failed to delete the conflicted revision ${rev} of ${path}`, LOG_LEVEL_VERBOSE);
return false;
}
if (!(await this.dbToStorageWithSpecificRev(info, rev, true))) {
this._log(`Failed to apply the resolved revision ${rev} of ${path} to the storage`, LOG_LEVEL_VERBOSE);
return false;
}
}
async dbToStorageWithSpecificRev(
info: UXFileInfoStub | UXFileInfo | FilePath | null,
rev: string,
force?: boolean
): Promise<boolean> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
if (file == null) {
this._log(`File ${info} does not exist on the storage`, LOG_LEVEL_VERBOSE);
return false;
}
const docEntry = await this.db.fetchEntryMeta(file, rev, true);
if (!docEntry) {
this._log(`File ${file.path} does not exist on the database`, LOG_LEVEL_VERBOSE);
return false;
}
return await this.dbToStorage(docEntry, file, force);
}
async dbToStorage(
entryInfo: MetaEntry | FilePathWithPrefix,
info: UXFileInfoStub | UXFileInfo | FilePath | null,
force?: boolean
): Promise<boolean> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
const mode = file == null ? "create" : "modify";
const pathFromEntryInfo = typeof entryInfo === "string" ? entryInfo : getPath(entryInfo);
const docEntry = await this.db.fetchEntryMeta(pathFromEntryInfo, undefined, true);
if (!docEntry) {
this._log(`File ${pathFromEntryInfo} does not exist on the database`, LOG_LEVEL_VERBOSE);
return false;
}
const path = getPath(docEntry);
// 1. Check if it already conflicted.
const revs = await this.db.getConflictedRevs(path);
if (revs.length > 0) {
// Some conflicts exist.
if (this.settings.writeDocumentsIfConflicted) {
// If configured to write the document even if conflicted, then it should be written.
// NO OP
} else {
// If not, it should be checked and will be processed later (i.e., after the conflict is resolved).
await this.core.$$queueConflictCheckIfOpen(path);
return true;
}
}
// 2. Check whether the file already exists on the storage.
const existDoc = this.storage.getStub(path);
if (existDoc && existDoc.isFolder) {
this._log(`Folder ${path} already exists on the storage as a folder`, LOG_LEVEL_VERBOSE);
// We can do nothing, and other modules have nothing to do either.
return true;
}
// Check existence of both file and docEntry.
const existOnDB = !(docEntry._deleted || docEntry.deleted || false);
const existOnStorage = existDoc != null;
if (!existOnDB && !existOnStorage) {
this._log(`File ${path} seems to be deleted, and is already absent from storage`, LOG_LEVEL_VERBOSE);
return true;
}
if (!existOnDB && existOnStorage) {
// Deletion has been transferred; the storage file will be deleted.
// Note: If the folder becomes empty, it will be deleted unless configured to be kept.
// This behaviour is implemented in `ModuleFileAccessObsidian`,
// and it does not care whether the item was actually deleted.
await this.storage.deleteVaultItem(path);
return true;
}
// Okay, the file exists in the database. Let's check whether it exists on the storage.
const docRead = await this.db.fetchEntryFromMeta(docEntry);
if (!docRead) {
this._log(`File ${path} does not exist on the database`, LOG_LEVEL_VERBOSE);
return false;
}
const docData = readContent(docRead);
if (existOnStorage && !force) {
// The file exists on the storage. Let's check the difference between the file and the entry.
// But if force is true, it should be updated.
// OK, we have to compare.
let shouldApplied = false;
// 1. if the time stamp is far different, then it should be updated.
// Note: This checks only the mtime with the resolution reduced to 2 seconds.
// 2 seconds it for the ZIP file's mtime. If not, we cannot backup the vault as the ZIP file.
// This is hardcoded on `compareMtime` of `src/common/utils.ts`.
if (compareFileFreshness(existDoc, docEntry) !== EVEN) {
shouldApplied = true;
}
// 2. If not, the content should be checked.
if (!shouldApplied) {
const readFile = await this.readFileFromStub(existDoc);
if (await isDocContentSame(docData, readFile.body)) {
// The content is the same, so we do not need to update the file.
shouldApplied = false;
// The timestamp differs but the content is the same; therefore, the two timestamps should be treated as the same.
// So, mark the changes as the same.
markChangesAreSame(docRead, docRead.mtime, existDoc.stat.mtime);
} else {
shouldApplied = true;
}
}
if (!shouldApplied) {
this._log(`File ${docRead.path} is not changed`, LOG_LEVEL_VERBOSE);
return true;
}
// Let's apply the changes.
} else {
this._log(
`File ${docRead.path} ${existOnStorage ? "(new) " : ""} ${force ? " (forced)" : ""}`,
LOG_LEVEL_VERBOSE
);
}
await this.storage.ensureDir(path);
const ret = await this.storage.writeFileAuto(path, docData, { ctime: docRead.ctime, mtime: docRead.mtime });
this.storage.touched(path);
this.storage.triggerFileEvent(mode, path);
return ret;
}
async $anyHandlerProcessesFileEvent(item: FileEventItem): Promise<boolean | undefined> {
const eventItem = item.args;
const type = item.type;
const path = eventItem.file.path;
if (!(await this.core.$$isTargetFile(path))) {
this._log(`File ${path} is not the target file`, LOG_LEVEL_VERBOSE);
return false;
}
if (shouldBeIgnored(path)) {
this._log(`File ${path} should be ignored`, LOG_LEVEL_VERBOSE);
return false;
}
const lockKey = `processFileEvent-${path}`;
return await serialized(lockKey, async () => {
switch (type) {
case "CREATE":
case "CHANGED":
return await this.storeFileToDB(item.args.file);
case "DELETE":
return await this.deleteFileFromDB(item.args.file);
case "INTERNAL":
// This should be handled by another module.
return false;
default:
this._log(`Unsupported event type: ${type}`, LOG_LEVEL_VERBOSE);
return false;
}
});
}
async $anyProcessReplicatedDoc(entry: MetaEntry): Promise<boolean | undefined> {
return await serialized(entry.path, async () => {
if (!(await this.core.$$isTargetFile(entry.path))) {
this._log(`File ${entry.path} is not the target file`, LOG_LEVEL_VERBOSE);
return false;
}
if (shouldBeIgnored(entry.path)) {
this._log(`File ${entry.path} should be ignored`, LOG_LEVEL_VERBOSE);
return false;
}
const path = getPath(entry);
const targetFile = this.storage.getStub(getPathWithoutPrefix(entry));
if (targetFile && targetFile.isFolder) {
this._log(`${getPath(entry)} already exists as a folder`);
// Nothing to do, and other modules have nothing to do either.
return true;
} else {
this._log(
`Processing ${path} (${entry._id.substring(0, 8)}: ${entry._rev?.substring(0, 5)}) :Started...`,
LOG_LEVEL_VERBOSE
);
// Before writing (or skipping), the merging dialogue should be cancelled.
eventHub.emitEvent("conflict-cancelled", path);
const ret = await this.dbToStorage(entry, targetFile);
this._log(`Processing ${path} (${entry._id.substring(0, 8)} :${entry._rev?.substring(0, 5)}) : Done`);
return ret;
}
});
}
async createAllChunks(showingNotice?: boolean): Promise<void> {
this._log("Collecting local files on the storage", LOG_LEVEL_VERBOSE);
const semaphore = Semaphore(10);
let processed = 0;
const filesStorageSrc = this.storage.getFiles();
const incProcessed = () => {
processed++;
if (processed % 25 == 0)
this._log(
`Creating missing chunks: ${processed} of ${total} files`,
showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO,
"chunkCreation"
);
};
const total = filesStorageSrc.length;
const procAllChunks = filesStorageSrc.map(async (file) => {
if (!(await this.core.$$isTargetFile(file))) {
incProcessed();
return true;
}
if (shouldBeIgnored(file.path)) {
incProcessed();
return true;
}
const release = await semaphore.acquire();
incProcessed();
try {
await this.storeFileToDB(file, false, true);
} catch (ex) {
this._log(ex, LOG_LEVEL_VERBOSE);
} finally {
release();
}
});
await Promise.all(procAllChunks);
this._log(
`Creating chunks Done: ${processed} of ${total} files`,
showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO,
"chunkCreation"
);
}
}
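Both freshness checks above rely on the 2-second mtime resolution that the comments say is hardcoded in `compareMtime` of `src/common/utils.ts`, a file not shown in this diff. The following is an illustrative sketch of the described behaviour, not the actual implementation:

// Illustrative sketch only: compare mtimes in 2-second buckets so files
// restored from ZIP archives (whose timestamps are 2-second granular)
// still compare as EVEN against the originals.
const EVEN = 0;
function compareMtimeSketch(mtimeA: number, mtimeB: number): number {
    const a = Math.floor(mtimeA / 2000); // milliseconds -> 2-second buckets
    const b = Math.floor(mtimeB / 2000);
    return a === b ? EVEN : a - b;
}
// 1.5 seconds apart, but within the same bucket: treated as equally fresh.
console.log(compareMtimeSketch(1_700_000_000_000, 1_700_000_001_500) === EVEN); // true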


@@ -1,25 +0,0 @@
import { $msg } from "../../lib/src/common/i18n";
import { LiveSyncLocalDB } from "../../lib/src/pouchdb/LiveSyncLocalDB.ts";
import { initializeStores } from "../../common/stores.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
export class ModuleLocalDatabaseObsidian extends AbstractModule implements ICoreModule {
$everyOnloadStart(): Promise<boolean> {
return Promise.resolve(true);
}
async $$openDatabase(): Promise<boolean> {
if (this.localDatabase != null) {
await this.localDatabase.close();
}
const vaultName = this.core.$$getVaultName();
this._log($msg("moduleLocalDatabase.logWaitingForReady"));
this.core.localDatabase = new LiveSyncLocalDB(vaultName, this.core);
initializeStores(vaultName);
return await this.localDatabase.initializeDatabase();
}
$$isDatabaseReady(): boolean {
return this.localDatabase != null && this.localDatabase.isReady;
}
}


@@ -1,33 +1,41 @@
import { PeriodicProcessor } from "../../common/utils";
import type { LiveSyncCore } from "../../main";
import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
export class ModulePeriodicProcess extends AbstractModule implements ICoreModule {
periodicSyncProcessor = new PeriodicProcessor(this.core, async () => await this.core.$$replicate());
export class ModulePeriodicProcess extends AbstractModule {
periodicSyncProcessor = new PeriodicProcessor(this.core, async () => await this.services.replication.replicate());
_disablePeriodic() {
disablePeriodic() {
this.periodicSyncProcessor?.disable();
return Promise.resolve(true);
}
_resumePeriodic() {
resumePeriodic() {
this.periodicSyncProcessor.enable(
this.settings.periodicReplication ? this.settings.periodicReplicationInterval * 1000 : 0
);
return Promise.resolve(true);
}
$allOnUnload() {
return this._disablePeriodic();
private _allOnUnload() {
return this.disablePeriodic();
}
$everyBeforeRealizeSetting(): Promise<boolean> {
return this._disablePeriodic();
private _everyBeforeRealizeSetting(): Promise<boolean> {
return this.disablePeriodic();
}
$everyBeforeSuspendProcess(): Promise<boolean> {
return this._disablePeriodic();
private _everyBeforeSuspendProcess(): Promise<boolean> {
return this.disablePeriodic();
}
$everyAfterResumeProcess(): Promise<boolean> {
return this._resumePeriodic();
private _everyAfterResumeProcess(): Promise<boolean> {
return this.resumePeriodic();
}
$everyAfterRealizeSetting(): Promise<boolean> {
return this._resumePeriodic();
private _everyAfterRealizeSetting(): Promise<boolean> {
return this.resumePeriodic();
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.onUnload.addHandler(this._allOnUnload.bind(this));
services.setting.onBeforeRealiseSetting.addHandler(this._everyBeforeRealizeSetting.bind(this));
services.setting.onSettingRealised.addHandler(this._everyAfterRealizeSetting.bind(this));
services.appLifecycle.onSuspending.addHandler(this._everyBeforeSuspendProcess.bind(this));
services.appLifecycle.onResumed.addHandler(this._everyAfterResumeProcess.bind(this));
}
}
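The `onBindFunction` override above replaces the old `$`-prefixed hooks with explicit handler registration on service events. A minimal sketch of the event-source shape this assumes; the stop-at-first-failure semantics below mirror the `$every` convention, and the real implementation may differ:

// Sketch of the addHandler-style event source assumed above (simplified).
type Handler = () => Promise<boolean>;
class HandlerList {
    private handlers: Handler[] = [];
    addHandler(handler: Handler): void {
        this.handlers.push(handler);
    }
    async invoke(): Promise<boolean> {
        for (const handler of this.handlers) {
            if (!(await handler())) return false; // stop at the first failure
        }
        return true;
    }
}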


@@ -1,19 +0,0 @@
import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser";
export class ModulePouchDB extends AbstractModule implements ICoreModule {
$$createPouchDBInstance<T extends object>(
name?: string,
options?: PouchDB.Configuration.DatabaseConfiguration
): PouchDB.Database<T> {
const optionPass = options ?? {};
if (this.settings.useIndexedDBAdapter) {
optionPass.adapter = "indexeddb";
//@ts-ignore :missing def
optionPass.purged_infos_limit = 1;
return new PouchDB(name + "-indexeddb", optionPass);
}
return new PouchDB(name, optionPass);
}
}
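One consequence of the factory above worth noting: enabling `useIndexedDBAdapter` changes the database name itself, so the two adapters never share on-disk data. A hedged usage sketch (the vault name is an example, not taken from this diff):

// Sketch: obtaining a database through the factory above.
// With useIndexedDBAdapter on, "myvault" is opened as "myvault-indexeddb",
// so switching adapters starts from a separate local database.
const db = core.$$createPouchDBInstance<EntryDoc>("myvault");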


@@ -1,257 +0,0 @@
import { delay } from "octagonal-wheels/promises";
import {
FLAGMD_REDFLAG2_HR,
FLAGMD_REDFLAG3_HR,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
REMOTE_COUCHDB,
REMOTE_MINIO,
} from "../../lib/src/common/types.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { Rebuilder } from "../interfaces/DatabaseRebuilder.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
import type { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator.ts";
import { fetchAllUsedChunks } from "../../lib/src/pouchdb/utils_couchdb.ts";
import { EVENT_DATABASE_REBUILT, eventHub } from "src/common/events.ts";
export class ModuleRebuilder extends AbstractModule implements ICoreModule, Rebuilder {
$everyOnload(): Promise<boolean> {
this.core.rebuilder = this;
return Promise.resolve(true);
}
async $performRebuildDB(
method: "localOnly" | "remoteOnly" | "rebuildBothByThisDevice" | "localOnlyWithChunks"
): Promise<void> {
if (method == "localOnly") {
await this.$fetchLocal();
}
if (method == "localOnlyWithChunks") {
await this.$fetchLocal(true);
}
if (method == "remoteOnly") {
await this.$rebuildRemote();
}
if (method == "rebuildBothByThisDevice") {
await this.$rebuildEverything();
}
}
async askUsingOptionalFeature(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
if (
(await this.core.confirm.askYesNoDialog(
"Do you want to enable extra features? If you are new to Self-hosted LiveSync, try the core feature first!",
{ title: "Enable extra features", defaultOption: "No", timeout: 15 }
)) == "yes"
) {
await this.core.$allAskUsingOptionalSyncFeature(opt);
}
}
async rebuildRemote() {
await this.core.$allSuspendExtraSync();
this.core.settings.isConfigured = true;
await this.core.$$realizeSettingSyncMode();
await this.core.$$markRemoteLocked();
await this.core.$$tryResetRemoteDatabase();
await this.core.$$markRemoteLocked();
await delay(500);
await this.askUsingOptionalFeature({ enableOverwrite: true });
await delay(1000);
await this.core.$$replicateAllToServer(true);
await delay(1000);
await this.core.$$replicateAllToServer(true, true);
}
$rebuildRemote(): Promise<void> {
return this.rebuildRemote();
}
async rebuildEverything() {
await this.core.$allSuspendExtraSync();
await this.askUseNewAdapter();
this.core.settings.isConfigured = true;
await this.core.$$realizeSettingSyncMode();
await this.resetLocalDatabase();
await delay(1000);
await this.core.$$initializeDatabase(true);
await this.core.$$markRemoteLocked();
await this.core.$$tryResetRemoteDatabase();
await this.core.$$markRemoteLocked();
await delay(500);
// We do not have any other devices' data, so we do not need to ask for overwriting.
await this.askUsingOptionalFeature({ enableOverwrite: false });
await delay(1000);
await this.core.$$replicateAllToServer(true);
await delay(1000);
await this.core.$$replicateAllToServer(true, true);
}
$rebuildEverything(): Promise<void> {
return this.rebuildEverything();
}
$fetchLocal(makeLocalChunkBeforeSync?: boolean): Promise<void> {
return this.fetchLocal(makeLocalChunkBeforeSync);
}
async scheduleRebuild(): Promise<void> {
try {
await this.core.storageAccess.writeFileAuto(FLAGMD_REDFLAG2_HR, "");
} catch (ex) {
this._log("Could not create red_flag_rebuild.md", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
this.core.$$performRestart();
}
async scheduleFetch(): Promise<void> {
try {
await this.core.storageAccess.writeFileAuto(FLAGMD_REDFLAG3_HR, "");
} catch (ex) {
this._log("Could not create red_flag_fetch.md", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
this.core.$$performRestart();
}
async $$tryResetRemoteDatabase(): Promise<void> {
await this.core.replicator.tryResetRemoteDatabase(this.settings);
}
async $$tryCreateRemoteDatabase(): Promise<void> {
await this.core.replicator.tryCreateRemoteDatabase(this.settings);
}
async $$resetLocalDatabase(): Promise<void> {
this.core.storageAccess.clearTouched();
await this.localDatabase.resetDatabase();
}
async suspendAllSync() {
this.core.settings.liveSync = false;
this.core.settings.periodicReplication = false;
this.core.settings.syncOnSave = false;
this.core.settings.syncOnEditorSave = false;
this.core.settings.syncOnStart = false;
this.core.settings.syncOnFileOpen = false;
this.core.settings.syncAfterMerge = false;
await this.core.$allSuspendExtraSync();
}
async suspendReflectingDatabase() {
if (this.core.settings.doNotSuspendOnFetching) return;
if (this.core.settings.remoteType == REMOTE_MINIO) return;
this._log(
`Suspending reflection: Database and storage changes will not be reflected in each other until fetching has completely finished.`,
LOG_LEVEL_NOTICE
);
this.core.settings.suspendParseReplicationResult = true;
this.core.settings.suspendFileWatching = true;
await this.core.saveSettings();
}
async resumeReflectingDatabase() {
if (this.core.settings.doNotSuspendOnFetching) return;
if (this.core.settings.remoteType == REMOTE_MINIO) return;
this._log(`Database and storage reflection has been resumed!`, LOG_LEVEL_NOTICE);
this.core.settings.suspendParseReplicationResult = false;
this.core.settings.suspendFileWatching = false;
await this.core.$$performFullScan(true);
await this.core.$everyBeforeReplicate(false); //TODO: Check actual need of this.
await this.core.saveSettings();
}
async askUseNewAdapter() {
if (!this.core.settings.useIndexedDBAdapter) {
const message = `This core is currently configured to use the old database adapter to keep compatibility. Do you want to deactivate it?`;
const CHOICE_YES = "Yes, disable and use latest";
const CHOICE_NO = "No, keep compatibility";
const choices = [CHOICE_YES, CHOICE_NO];
const ret = await this.core.confirm.confirmWithMessage(
"Database adapter",
message,
choices,
CHOICE_YES,
10
);
if (ret == CHOICE_YES) {
this.core.settings.useIndexedDBAdapter = true;
}
}
}
async fetchLocal(makeLocalChunkBeforeSync?: boolean) {
await this.core.$allSuspendExtraSync();
await this.askUseNewAdapter();
this.core.settings.isConfigured = true;
await this.suspendReflectingDatabase();
await this.core.$$realizeSettingSyncMode();
await this.resetLocalDatabase();
await delay(1000);
await this.core.$$openDatabase();
// this.core.isReady = true;
this.core.$$markIsReady();
if (makeLocalChunkBeforeSync) {
await this.core.fileHandler.createAllChunks(true);
}
await this.core.$$markRemoteResolved();
await delay(500);
await this.core.$$replicateAllFromServer(true);
await delay(1000);
await this.core.$$replicateAllFromServer(true);
await this.resumeReflectingDatabase();
await this.askUsingOptionalFeature({ enableFetch: true });
}
async fetchLocalWithRebuild() {
return await this.fetchLocal(true);
}
async $allSuspendAllSync(): Promise<boolean> {
await this.suspendAllSync();
return true;
}
async resetLocalDatabase() {
if (this.core.settings.isConfigured && this.core.settings.additionalSuffixOfDatabaseName == "") {
// Discard the non-suffixed database
await this.core.$$resetLocalDatabase();
}
const suffix = (await this.core.$anyGetAppId()) || "";
this.core.settings.additionalSuffixOfDatabaseName = suffix;
await this.core.$$resetLocalDatabase();
eventHub.emitEvent(EVENT_DATABASE_REBUILT);
}
async fetchRemoteChunks() {
if (
!this.core.settings.doNotSuspendOnFetching &&
this.core.settings.readChunksOnline &&
this.core.settings.remoteType == REMOTE_COUCHDB
) {
this._log(`Fetching chunks`, LOG_LEVEL_NOTICE);
const replicator = this.core.$$getReplicator() as LiveSyncCouchDBReplicator;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(
this.settings,
this.core.$$isMobile(),
true
);
if (typeof remoteDB == "string") {
this._log(remoteDB, LOG_LEVEL_NOTICE);
} else {
await fetchAllUsedChunks(this.localDatabase.localDatabase, remoteDB.db);
}
this._log(`Fetching chunks done`, LOG_LEVEL_NOTICE);
}
}
async resolveAllConflictedFilesByNewerOnes() {
this._log(`Resolving conflicts by newer ones`, LOG_LEVEL_NOTICE);
const files = this.core.storageAccess.getFileNames();
let i = 0;
for (const file of files) {
if (i++ % 10 == 0)
this._log(
`Check and Processing ${i} / ${files.length}`,
LOG_LEVEL_NOTICE,
"resolveAllConflictedFilesByNewerOnes"
);
await this.core.$anyResolveConflictByNewest(file);
}
this._log(`Done!`, LOG_LEVEL_NOTICE, "resolveAllConflictedFilesByNewerOnes");
}
}
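The two schedulers above implement a simple flag-file protocol: a marker file is written into the vault, and the app restarts and acts on it at the next startup. A sketch of the startup side, which this diff does not show (the flag names come from the constants imported above; the storage shape and detection logic are assumptions):

// Sketch (assumed): how a startup routine might honour the red-flag files
// written by scheduleRebuild/scheduleFetch above.
async function checkRedFlags(storage: { isExists(path: string): Promise<boolean> }) {
    if (await storage.isExists(FLAGMD_REDFLAG2_HR)) {
        // ... perform a full rebuild, then remove the flag file.
    } else if (await storage.isExists(FLAGMD_REDFLAG3_HR)) {
        // ... fetch everything from the remote, then remove the flag file.
    }
}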


@@ -1,423 +1,287 @@
import { fireAndForget, yieldMicrotask } from "octagonal-wheels/promises";
import type { LiveSyncLocalDB } from "../../lib/src/pouchdb/LiveSyncLocalDB";
import { fireAndForget } from "octagonal-wheels/promises";
import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
import { Logger, LOG_LEVEL_NOTICE, LOG_LEVEL_INFO, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { isLockAcquired, shareRunningResult, skipIfDuplicated } from "octagonal-wheels/concurrency/lock";
import { purgeUnreferencedChunks, balanceChunkPurgedDBs } from "../../lib/src/pouchdb/utils_couchdb";
import { Logger, LOG_LEVEL_NOTICE, LOG_LEVEL_INFO } from "octagonal-wheels/common/logger";
import { skipIfDuplicated } from "octagonal-wheels/concurrency/lock";
import { balanceChunkPurgedDBs } from "@lib/pouchdb/chunks";
import { purgeUnreferencedChunks } from "@lib/pouchdb/chunks";
import { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator";
import { throttle } from "octagonal-wheels/function";
import { arrayToChunkedArray } from "octagonal-wheels/collection";
import {
SYNCINFO_ID,
VER,
type EntryBody,
type EntryDoc,
type EntryLeaf,
type LoadedEntry,
type MetaEntry,
} from "../../lib/src/common/types";
import { QueueProcessor } from "octagonal-wheels/concurrency/processor";
import { getPath, isChunk, isValidPath, scheduleTask } from "../../common/utils";
import { isAnyNote } from "../../lib/src/common/utils";
import { EVENT_FILE_SAVED, eventHub } from "../../common/events";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import { globalSlipBoard } from "../../lib/src/bureau/bureau";
import { type EntryDoc, type RemoteType } from "../../lib/src/common/types";
import { scheduleTask } from "../../common/utils";
import { EVENT_FILE_SAVED, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
export class ModuleReplicator extends AbstractModule implements ICoreModule {
$everyOnloadAfterLoadSettings(): Promise<boolean> {
import { $msg } from "../../lib/src/common/i18n";
import type { LiveSyncCore } from "../../main";
import { ReplicateResultProcessor } from "./ReplicateResultProcessor";
import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";
import { clearHandlers } from "@lib/replication/SyncParamsHandler";
import type { NecessaryServices } from "@/serviceFeatures/types";
function isOnlineAndCanReplicate(
errorManager: UnresolvedErrorManager,
host: NecessaryServices<"database", any>,
showMessage: boolean
): Promise<boolean> {
const errorMessage = "Network is offline";
const manager = host.services.database.managers.networkManager;
if (!manager.isOnline) {
errorManager.showError(errorMessage, showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
return Promise.resolve(false);
}
errorManager.clearError(errorMessage);
return Promise.resolve(true);
}
async function canReplicateWithPBKDF2(
errorManager: UnresolvedErrorManager,
host: NecessaryServices<"replicator" | "setting", any>,
showMessage: boolean
): Promise<boolean> {
const currentSettings = host.services.setting.currentSettings();
// TODO: check using PBKDF2 salt?
const errorMessage = $msg("Replicator.Message.InitialiseFatalError");
const replicator = host.services.replicator.getActiveReplicator();
if (!replicator) {
errorManager.showError(errorMessage, showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
return false;
}
errorManager.clearError(errorMessage);
const ensureMessage = "Failed to initialise the encryption key, preventing replication.";
const ensureResult = await replicator.ensurePBKDF2Salt(currentSettings, showMessage, true);
if (!ensureResult) {
errorManager.showError(ensureMessage, showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
return false;
}
errorManager.clearError(ensureMessage);
return ensureResult; // Always true at this point.
}
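// Both guard functions above follow the same discipline: raise a keyed error while a
// precondition fails, and clear that exact key once it passes, so stale notices never
// linger. A minimal sketch of such a keyed error registry (hypothetical names; not the
// actual UnresolvedErrorManager API):
class KeyedErrorRegistry {
    private active = new Set<string>();
    showError(message: string, log: (m: string) => void) {
        // Keep one entry per message; re-showing the same key is harmless.
        this.active.add(message);
        log(message);
    }
    clearError(message: string) {
        this.active.delete(message);
    }
    clearErrors() {
        this.active.clear();
    }
}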
export class ModuleReplicator extends AbstractModule {
_replicatorType?: RemoteType;
processor: ReplicateResultProcessor = new ReplicateResultProcessor(this);
private _unresolvedErrorManager: UnresolvedErrorManager = new UnresolvedErrorManager(
this.core.services.appLifecycle
);
clearErrors() {
this._unresolvedErrorManager.clearErrors();
}
private _everyOnloadAfterLoadSettings(): Promise<boolean> {
eventHub.onEvent(EVENT_FILE_SAVED, () => {
if (this.settings.syncOnSave && !this.core.$$isSuspended()) {
scheduleTask("perform-replicate-after-save", 250, () => this.core.$$waitForReplicationOnce());
if (this.settings.syncOnSave && !this.core.services.appLifecycle.isSuspended()) {
scheduleTask("perform-replicate-after-save", 250, () => this.services.replication.replicateByEvent());
}
});
eventHub.onEvent(EVENT_SETTING_SAVED, (setting) => {
if (this.core.settings.suspendParseReplicationResult) {
this.processor.suspend();
} else {
this.processor.resume();
}
});
return Promise.resolve(true);
}
async setReplicator() {
const replicator = await this.core.$anyNewReplicator();
if (!replicator) {
this._log("No replicator is available, this is the fatal error.", LOG_LEVEL_NOTICE);
return false;
}
this.core.replicator = replicator;
await yieldMicrotask();
_onReplicatorInitialised(): Promise<boolean> {
// For now, we only need to clear the handlers related to replicator initialisation. If more needs to happen when the replicator is initialised, it can be added here.
clearHandlers();
return Promise.resolve(true);
}
_everyOnDatabaseInitialized(showNotice: boolean): Promise<boolean> {
fireAndForget(() => this.processor.restoreFromSnapshotOnce());
return Promise.resolve(true);
}
async _everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
await this.processor.restoreFromSnapshotOnce();
this.clearErrors();
return true;
}
$$getReplicator(): LiveSyncAbstractReplicator {
return this.core.replicator;
}
$everyOnInitializeDatabase(db: LiveSyncLocalDB): Promise<boolean> {
return this.setReplicator();
}
$everyOnResetDatabase(db: LiveSyncLocalDB): Promise<boolean> {
return this.setReplicator();
}
async $everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
await this.loadQueuedFiles();
return true;
}
async $$replicate(showMessage: boolean = false): Promise<boolean | void> {
//--?
if (!this.core.$$isReady()) return;
if (isLockAcquired("cleanup")) {
Logger("Database cleaning up is in process. replication has been cancelled", LOG_LEVEL_NOTICE);
return;
}
if (this.settings.versionUpFlash != "") {
Logger("Open settings and check message, please. replication has been cancelled.", LOG_LEVEL_NOTICE);
return;
}
if (!(await this.core.$everyCommitPendingFileEvent())) {
Logger("Some file events are pending. Replication has been cancelled.", LOG_LEVEL_NOTICE);
return false;
}
if (!(await this.core.$everyBeforeReplicate(showMessage))) {
Logger(`Replication has been cancelled due to a module failure`, LOG_LEVEL_NOTICE);
return false;
}
//<-- This could be a module.
const ret = await this.core.replicator.openReplication(this.settings, false, showMessage, false);
if (!ret) {
if (this.core.replicator.tweakSettingsMismatched && this.core.replicator.preferredTweakValue) {
await this.core.$$askResolvingMismatchedTweaks(this.core.replicator.preferredTweakValue);
} else {
if (this.core.replicator?.remoteLockedAndDeviceNotAccepted) {
if (this.core.replicator.remoteCleaned && this.settings.useIndexedDBAdapter) {
Logger(
`The remote database has been cleaned.`,
showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO
);
await skipIfDuplicated("cleanup", async () => {
const count = await purgeUnreferencedChunks(this.localDatabase.localDatabase, true);
const message = `The remote database has been cleaned up.
/**
* obsolete method. No longer maintained and will be removed in the future.
* @deprecated v0.24.17
* @param showMessage If true, show message to the user.
*/
async cleaned(showMessage: boolean) {
Logger(`The remote database has been cleaned.`, showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
await skipIfDuplicated("cleanup", async () => {
const count = await purgeUnreferencedChunks(this.localDatabase.localDatabase, true);
const message = `The remote database has been cleaned up.
To synchronise, this device must also be cleaned up. ${count} chunk(s) will be erased from this device.
However, if there are many chunks to be deleted, fetching again may be faster.
We will lose the history of this device if we fetch the remote database again.
Even if you choose to clean up, you will see this option again if you exit Obsidian and then synchronise again.`;
const CHOICE_FETCH = "Fetch again";
const CHOICE_CLEAN = "Cleanup";
const CHOICE_DISMISS = "Dismiss";
const ret = await this.core.confirm.confirmWithMessage(
"Cleaned",
message,
[CHOICE_FETCH, CHOICE_CLEAN, CHOICE_DISMISS],
CHOICE_DISMISS,
30
);
if (ret == CHOICE_FETCH) {
await this.core.rebuilder.$performRebuildDB("localOnly");
}
if (ret == CHOICE_CLEAN) {
const replicator = this.core.$$getReplicator();
if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(
this.settings,
this.core.$$isMobile(),
true
);
if (typeof remoteDB == "string") {
Logger(remoteDB, LOG_LEVEL_NOTICE);
return false;
}
const CHOICE_FETCH = "Fetch again";
const CHOICE_CLEAN = "Cleanup";
const CHOICE_DISMISS = "Dismiss";
const ret = await this.core.confirm.confirmWithMessage(
"Cleaned",
message,
[CHOICE_FETCH, CHOICE_CLEAN, CHOICE_DISMISS],
CHOICE_DISMISS,
30
);
if (ret == CHOICE_FETCH) {
await this.core.rebuilder.$performRebuildDB("localOnly");
}
if (ret == CHOICE_CLEAN) {
const replicator = this.services.replicator.getActiveReplicator();
if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(
this.settings,
this.services.API.isMobile(),
true
);
if (typeof remoteDB == "string") {
Logger(remoteDB, LOG_LEVEL_NOTICE);
return false;
}
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.hashCaches.clear();
// Perform the synchronisation once.
if (
await this.core.replicator.openReplication(this.settings, false, showMessage, true)
) {
await balanceChunkPurgedDBs(this.localDatabase.localDatabase, remoteDB.db);
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.hashCaches.clear();
await this.core.$$getReplicator().markRemoteResolved(this.settings);
Logger(
"The local database has been cleaned up.",
showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO
);
} else {
Logger(
"Replication has been cancelled. Please try it again.",
showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO
);
}
}
});
} else {
const message = `
The remote database has been rebuilt.
To synchronise, this device must fetch everything again once.
Or, if you are sure you know what has happened, you can unlock the database from the settings dialog.
`;
const CHOICE_FETCH = "Fetch again";
const CHOICE_DISMISS = "Dismiss";
const ret = await this.core.confirm.confirmWithMessage(
"Locked",
message,
[CHOICE_FETCH, CHOICE_DISMISS],
CHOICE_DISMISS,
10
);
if (ret == CHOICE_FETCH) {
const CHOICE_RESTART = "Restart";
const CHOICE_WITHOUT_RESTART = "Without restart";
if (
(await this.core.confirm.askSelectStringDialogue(
"Self-hosted LiveSync restarts Obsidian to fetch everything safely. However, you can do it without restarting. Please choose one.",
[CHOICE_RESTART, CHOICE_WITHOUT_RESTART],
{
title: "Fetch again",
defaultAction: CHOICE_RESTART,
timeout: 30,
}
)) == CHOICE_RESTART
) {
await this.core.rebuilder.scheduleFetch();
// await this.core.$$scheduleAppReload();
return;
} else {
await this.core.rebuilder.$performRebuildDB("localOnly");
}
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.clearCaches();
// Perform the synchronisation once.
if (await this.core.replicator.openReplication(this.settings, false, showMessage, true)) {
await balanceChunkPurgedDBs(this.localDatabase.localDatabase, remoteDB.db);
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.clearCaches();
await this.services.replicator.getActiveReplicator()?.markRemoteResolved(this.settings);
Logger("The local database has been cleaned up.", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
} else {
Logger(
"Replication has been cancelled. Please try it again.",
showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO
);
}
}
});
}
private async onReplicationFailed(showMessage: boolean = false): Promise<boolean> {
const activeReplicator = this.services.replicator.getActiveReplicator();
if (!activeReplicator) {
Logger(`No active replicator found`, LOG_LEVEL_INFO);
return false;
}
if (activeReplicator.tweakSettingsMismatched && activeReplicator.preferredTweakValue) {
await this.services.tweakValue.askResolvingMismatched(activeReplicator.preferredTweakValue);
} else {
if (activeReplicator.remoteLockedAndDeviceNotAccepted) {
if (activeReplicator.remoteCleaned && this.settings.useIndexedDBAdapter) {
await this.cleaned(showMessage);
} else {
const message = $msg("Replicator.Dialogue.Locked.Message");
const CHOICE_FETCH = $msg("Replicator.Dialogue.Locked.Action.Fetch");
const CHOICE_DISMISS = $msg("Replicator.Dialogue.Locked.Action.Dismiss");
const CHOICE_UNLOCK = $msg("Replicator.Dialogue.Locked.Action.Unlock");
const ret = await this.core.confirm.askSelectStringDialogue(
message,
[CHOICE_FETCH, CHOICE_UNLOCK, CHOICE_DISMISS],
{
title: $msg("Replicator.Dialogue.Locked.Title"),
defaultAction: CHOICE_DISMISS,
timeout: 60,
}
);
if (ret == CHOICE_FETCH) {
this._log($msg("Replicator.Dialogue.Locked.Message.Fetch"), LOG_LEVEL_NOTICE);
await this.core.rebuilder.scheduleFetch();
this.services.appLifecycle.scheduleRestart();
return false;
} else if (ret == CHOICE_UNLOCK) {
await activeReplicator.markRemoteResolved(this.settings);
this._log($msg("Replicator.Dialogue.Locked.Message.Unlocked"), LOG_LEVEL_NOTICE);
return false;
}
}
}
}
return ret;
// TODO: Re-check the true/false return value here; this becomes the result of performReplication.
return false;
}
$$parseReplicationResult(docs: Array<PouchDB.Core.ExistingDocument<EntryDoc>>): void {
if (this.settings.suspendParseReplicationResult && !this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.suspend();
}
this.replicationResultProcessor.enqueueAll(docs);
if (!this.settings.suspendParseReplicationResult && this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
}
_saveQueuedFiles = throttle(() => {
const saveData = this.replicationResultProcessor._queue
.filter((e) => e !== undefined && e !== null)
.map((e) => e?._id ?? ("" as string)) as string[];
const kvDBKey = "queued-files";
// localStorage.setItem(lsKey, saveData);
fireAndForget(() => this.core.kvDB.set(kvDBKey, saveData));
}, 100);
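// Throttling to one write per 100 ms keeps the KV store from being rewritten on every
// single dequeue while a large replication batch drains the queue.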
saveQueuedFiles() {
this._saveQueuedFiles();
}
async loadQueuedFiles() {
if (this.settings.suspendParseReplicationResult) return;
if (!this.settings.isConfigured) return;
const kvDBKey = "queued-files";
// const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[];
const ids = [...new Set((await this.core.kvDB.get<string[]>(kvDBKey)) ?? [])];
const batchSize = 100;
const chunkedIds = arrayToChunkedArray(ids, batchSize);
// private async _replicateByEvent(): Promise<boolean | void> {
// const least = this.settings.syncMinimumInterval;
// if (least > 0) {
// return rateLimitedSharedExecution(KEY_REPLICATION_ON_EVENT, least, async () => {
// return await this.services.replication.replicate();
// });
// }
// return await shareRunningResult(`replication`, () => this.services.replication.replicate());
// }
// suspendParseReplicationResult is false here (we returned early above if it was set), so resume the processor if it is suspended.
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
for await (const idsBatch of chunkedIds) {
const ret = await this.localDatabase.allDocsRaw<EntryDoc>({
keys: idsBatch,
include_docs: true,
limit: 100,
});
const docs = ret.rows.filter((e) => e.doc).map((e) => e.doc) as PouchDB.Core.ExistingDocument<EntryDoc>[];
const errors = ret.rows.filter((e) => !e.doc && !e.value.deleted);
if (errors.length > 0) {
Logger("Some queued processes were not resurrected");
Logger(JSON.stringify(errors), LOG_LEVEL_VERBOSE);
}
this.replicationResultProcessor.enqueueAll(docs);
}
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
await this.replicationResultProcessor.waitForAllProcessed();
}
replicationResultProcessor = new QueueProcessor(
async (docs: PouchDB.Core.ExistingDocument<EntryDoc>[]) => {
if (this.settings.suspendParseReplicationResult) return;
const change = docs[0];
if (!change) return;
if (isChunk(change._id)) {
globalSlipBoard.submit("read-chunk", change._id, change as EntryLeaf);
return;
}
if (await this.core.$anyModuleParsedReplicationResultItem(change)) return;
// any addon needs this item?
// for (const proc of this.core.addOns) {
// if (await proc.parseReplicationResultItem(change)) {
// return;
// }
// }
if (change.type == "versioninfo") {
if (change.version > VER) {
this.core.replicator.closeReplication();
Logger(
`Remote database has been updated to an incompatible version. Please update your Self-hosted LiveSync plugin.`,
LOG_LEVEL_NOTICE
);
}
return;
}
if (
change._id == SYNCINFO_ID || // Synchronisation information data
change._id.startsWith("_design") //design document
) {
return;
}
if (isAnyNote(change)) {
const docPath = getPath(change);
if (!(await this.core.$$isTargetFile(docPath))) {
Logger(`Skipped: ${docPath}`, LOG_LEVEL_VERBOSE);
return;
}
if (this.databaseQueuedProcessor._isSuspended) {
Logger(`Processing scheduled: ${docPath}`, LOG_LEVEL_INFO);
}
const size = change.size;
if (this.core.$$isFileSizeExceeded(size)) {
Logger(
`Processing ${docPath} has been skipped due to file size exceeding the limit`,
LOG_LEVEL_NOTICE
);
return;
}
this.databaseQueuedProcessor.enqueue(change);
}
return;
},
{
batchSize: 1,
suspended: true,
concurrentLimit: 100,
delay: 0,
totalRemainingReactiveSource: this.core.replicationResultCount,
}
)
.replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter((e) => e._id != newItem._id);
return [...q, newItem];
})
.startPipeline()
.onUpdateProgress(() => {
this.saveQueuedFiles();
});
databaseQueuedProcessor = new QueueProcessor(
async (docs: EntryBody[]) => {
const dbDoc = docs[0] as LoadedEntry; // It has no `data`
const path = getPath(dbDoc);
// If `Read chunks online` is disabled, chunks should be transferred before here.
// However, in some cases, chunks arrive after that. So, if chunks are missing, we have to wait for them.
const doc = await this.localDatabase.getDBEntryFromMeta({ ...dbDoc }, {}, false, true, true);
if (!doc) {
Logger(
`Something went wrong while gathering content of ${path} (${dbDoc._id.substring(0, 8)}, ${dbDoc._rev?.substring(0, 10)}) `,
LOG_LEVEL_NOTICE
);
return;
}
if (await this.core.$anyProcessOptionalSyncFiles(dbDoc)) {
// Already processed
} else if (isValidPath(getPath(doc))) {
this.storageApplyingProcessor.enqueue(doc as MetaEntry);
} else {
Logger(`Skipped: ${path} (${doc._id.substring(0, 8)})`, LOG_LEVEL_VERBOSE);
}
return;
},
{
suspended: true,
batchSize: 1,
concurrentLimit: 10,
yieldThreshold: 1,
delay: 0,
totalRemainingReactiveSource: this.core.databaseQueueCount,
}
)
.replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter((e) => e._id != newItem._id);
return [...q, newItem];
})
.startPipeline();
storageApplyingProcessor = new QueueProcessor(
async (docs: MetaEntry[]) => {
const entry = docs[0];
await this.core.$anyProcessReplicatedDoc(entry);
return;
},
{
suspended: true,
batchSize: 1,
concurrentLimit: 6,
yieldThreshold: 1,
delay: 0,
totalRemainingReactiveSource: this.core.storageApplyingCount,
}
)
.replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter((e) => e._id != newItem._id);
return [...q, newItem];
})
.startPipeline();
$everyBeforeSuspendProcess(): Promise<boolean> {
this.core.replicator.closeReplication();
_parseReplicationResult(docs: Array<PouchDB.Core.ExistingDocument<EntryDoc>>): Promise<boolean> {
this.processor.enqueueAll(docs);
return Promise.resolve(true);
}
async $$replicateAllToServer(
showingNotice: boolean = false,
sendChunksInBulkDisabled: boolean = false
): Promise<boolean> {
if (!this.core.$$isReady()) return false;
if (!(await this.core.$everyBeforeReplicate(showingNotice))) {
Logger(`Replication has been cancelled due to a module failure`, LOG_LEVEL_NOTICE);
return false;
}
if (!sendChunksInBulkDisabled) {
if (this.core.replicator instanceof LiveSyncCouchDBReplicator) {
if (
(await this.core.confirm.askYesNoDialog("Do you want to send all chunks before replication?", {
defaultOption: "No",
timeout: 20,
})) == "yes"
) {
await this.core.replicator.sendChunks(this.core.settings, undefined, true, 0);
}
}
}
const ret = await this.core.replicator.replicateAllToServer(this.settings, showingNotice);
if (ret) return true;
const checkResult = await this.core.$anyAfterConnectCheckFailed();
if (checkResult == "CHECKAGAIN") return await this.core.$$replicateAllToServer(showingNotice);
return !checkResult;
}
async $$replicateAllFromServer(showingNotice: boolean = false): Promise<boolean> {
if (!this.core.$$isReady()) return false;
const ret = await this.core.replicator.replicateAllFromServer(this.settings, showingNotice);
if (ret) return true;
const checkResult = await this.core.$anyAfterConnectCheckFailed();
if (checkResult == "CHECKAGAIN") return await this.core.$$replicateAllFromServer(showingNotice);
return !checkResult;
}
// _everyBeforeSuspendProcess(): Promise<boolean> {
// this.core.replicator?.closeReplication();
// return Promise.resolve(true);
// }
async $$waitForReplicationOnce(): Promise<boolean | void> {
return await shareRunningResult(`replication`, () => this.core.$$replicate());
// private async _replicateAllToServer(
// showingNotice: boolean = false,
// sendChunksInBulkDisabled: boolean = false
// ): Promise<boolean> {
// if (!this.services.appLifecycle.isReady()) return false;
// if (!(await this.services.replication.onBeforeReplicate(showingNotice))) {
// Logger($msg("Replicator.Message.SomeModuleFailed"), LOG_LEVEL_NOTICE);
// return false;
// }
// if (!sendChunksInBulkDisabled) {
// if (this.core.replicator instanceof LiveSyncCouchDBReplicator) {
// if (
// (await this.core.confirm.askYesNoDialog("Do you want to send all chunks before replication?", {
// defaultOption: "No",
// timeout: 20,
// })) == "yes"
// ) {
// await this.core.replicator.sendChunks(this.core.settings, undefined, true, 0);
// }
// }
// }
// const ret = await this.core.replicator.replicateAllToServer(this.settings, showingNotice);
// if (ret) return true;
// const checkResult = await this.services.replication.checkConnectionFailure();
// if (checkResult == "CHECKAGAIN") return await this.services.remote.replicateAllToRemote(showingNotice);
// return !checkResult;
// }
// async _replicateAllFromServer(showingNotice: boolean = false): Promise<boolean> {
// if (!this.services.appLifecycle.isReady()) return false;
// const ret = await this.core.replicator.replicateAllFromServer(this.settings, showingNotice);
// if (ret) return true;
// const checkResult = await this.services.replication.checkConnectionFailure();
// if (checkResult == "CHECKAGAIN") return await this.services.remote.replicateAllFromRemote(showingNotice);
// return !checkResult;
// }
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.onReplicatorInitialised.addHandler(this._onReplicatorInitialised.bind(this));
services.databaseEvents.onDatabaseInitialised.addHandler(this._everyOnDatabaseInitialized.bind(this));
services.appLifecycle.onSettingLoaded.addHandler(this._everyOnloadAfterLoadSettings.bind(this));
services.replication.parseSynchroniseResult.addHandler(this._parseReplicationResult.bind(this));
// --> These handlers can be separated.
const isOnlineAndCanReplicateWithHost = isOnlineAndCanReplicate.bind(null, this._unresolvedErrorManager, {
services: {
database: services.database,
},
serviceModules: {},
});
const canReplicateWithPBKDF2WithHost = canReplicateWithPBKDF2.bind(null, this._unresolvedErrorManager, {
services: {
replicator: services.replicator,
setting: services.setting,
},
serviceModules: {},
});
services.replication.onBeforeReplicate.addHandler(isOnlineAndCanReplicateWithHost, 10);
services.replication.onBeforeReplicate.addHandler(canReplicateWithPBKDF2WithHost, 20);
// <-- End of handlers that can be separated.
services.replication.onBeforeReplicate.addHandler(this._everyBeforeReplicate.bind(this), 100);
services.replication.onReplicationFailed.addHandler(this.onReplicationFailed.bind(this));
}
}
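// onBindFunction above registers the pre-replication checks with explicit priorities
// (10, 20, 100), so cheap guards run before expensive ones and any guard returning
// false vetoes replication. A minimal sketch of such a prioritised guard chain, under
// assumed names (not the plugin's actual service API):
type Guard = (showMessage: boolean) => Promise<boolean>;
class GuardChain {
    private guards: { fn: Guard; priority: number }[] = [];
    addHandler(fn: Guard, priority = 50) {
        this.guards.push({ fn, priority });
        this.guards.sort((a, b) => a.priority - b.priority);
    }
    async run(showMessage: boolean): Promise<boolean> {
        for (const { fn } of this.guards) {
            // The first guard that returns false aborts the whole chain.
            if (!(await fn(showMessage))) return false;
        }
        return true;
    }
}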

View File

@@ -3,30 +3,40 @@ import { REMOTE_MINIO, REMOTE_P2P, type RemoteDBSettings } from "../../lib/src/c
import { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
import type { LiveSyncCore } from "../../main";
export class ModuleReplicatorCouchDB extends AbstractModule implements ICoreModule {
$anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator> {
export class ModuleReplicatorCouchDB extends AbstractModule {
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator | false> {
const settings = { ...this.settings, ...settingOverride };
// When new remote types are added, list them here. Do not test for `REMOTE_COUCHDB` directly; excluding the other types keeps this as a safety valve.
if (settings.remoteType == REMOTE_MINIO || settings.remoteType == REMOTE_P2P) {
return undefined!;
return Promise.resolve(false);
}
return Promise.resolve(new LiveSyncCouchDBReplicator(this.core));
}
$everyAfterResumeProcess(): Promise<boolean> {
_everyAfterResumeProcess(): Promise<boolean> {
if (this.services.appLifecycle.isSuspended()) return Promise.resolve(true);
if (!this.services.appLifecycle.isReady()) return Promise.resolve(true);
if (this.settings.remoteType != REMOTE_MINIO && this.settings.remoteType != REMOTE_P2P) {
// If LiveSync enabled, open replication
if (this.settings.liveSync) {
fireAndForget(() => this.core.replicator.openReplication(this.settings, true, false, false));
}
// If sync on start enabled, open replication
if (!this.settings.liveSync && this.settings.syncOnStart) {
// Possibly OK, as it only shares the result
fireAndForget(() => this.core.replicator.openReplication(this.settings, false, false, false));
const LiveSyncEnabled = this.settings.liveSync;
const continuous = LiveSyncEnabled;
const eventualOnStart = !LiveSyncEnabled && this.settings.syncOnStart;
// If LiveSync is enabled, or sync-on-start applies, open replication
if (LiveSyncEnabled || eventualOnStart) {
// And note that we do not open the conflict detection dialogue directly during this process.
// This should be raised explicitly if needed.
fireAndForget(async () => {
const canReplicate = await this.services.replication.isReplicationReady(false);
if (!canReplicate) return;
void this.core.replicator.openReplication(this.settings, continuous, false, false);
});
}
}
return Promise.resolve(true);
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.getNewReplicator.addHandler(this._anyNewReplicator.bind(this));
services.appLifecycle.onResumed.addHandler(this._everyAfterResumeProcess.bind(this));
}
}
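// fireAndForget above detaches the replication attempt from the resume handler so that
// resuming never blocks on the network. A plausible shape for such a helper (the real
// octagonal-wheels implementation may differ):
function fireAndForget(proc: () => Promise<unknown>): void {
    // Start the task without awaiting it; report failures instead of
    // letting them surface as unhandled rejections.
    void proc().catch((ex) => console.error(ex));
}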

View File

@@ -1,15 +1,18 @@
import { REMOTE_MINIO, type RemoteDBSettings } from "../../lib/src/common/types";
import { LiveSyncJournalReplicator } from "../../lib/src/replication/journal/LiveSyncJournalReplicator";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import type { LiveSyncCore } from "../../main";
import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
export class ModuleReplicatorMinIO extends AbstractModule implements ICoreModule {
$anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator> {
export class ModuleReplicatorMinIO extends AbstractModule {
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator | false> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_MINIO) {
return Promise.resolve(new LiveSyncJournalReplicator(this.core));
}
return undefined!;
return Promise.resolve(false);
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.getNewReplicator.addHandler(this._anyNewReplicator.bind(this));
}
}

View File

@@ -1,18 +1,18 @@
import { REMOTE_P2P, type RemoteDBSettings } from "../../lib/src/common/types";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
import { LiveSyncTrysteroReplicator } from "../../lib/src/replication/trystero/LiveSyncTrysteroReplicator";
import type { LiveSyncCore } from "../../main";
export class ModuleReplicatorP2P extends AbstractModule implements ICoreModule {
$anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator> {
export class ModuleReplicatorP2P extends AbstractModule {
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator | false> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_P2P) {
return Promise.resolve(new LiveSyncTrysteroReplicator(this.core));
}
return undefined!;
return Promise.resolve(false);
}
$everyAfterResumeProcess(): Promise<boolean> {
_everyAfterResumeProcess(): Promise<boolean> {
if (this.settings.remoteType == REMOTE_P2P) {
// // If LiveSync enabled, open replication
// if (this.settings.liveSync) {
@@ -27,4 +27,8 @@ export class ModuleReplicatorP2P extends AbstractModule implements ICoreModule {
return Promise.resolve(true);
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.getNewReplicator.addHandler(this._anyNewReplicator.bind(this));
services.appLifecycle.onResumed.addHandler(this._everyAfterResumeProcess.bind(this));
}
}
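// All three replicator modules register a factory on services.replicator.getNewReplicator
// and return false for remote types they do not own, letting the service try each factory
// in turn and keep the first real replicator. A sketch of that dispatch, with hypothetical
// types (the actual handler plumbing differs):
type ReplicatorFactory<R> = (settings: { remoteType: string }) => Promise<R | false>;
async function resolveReplicator<R>(
    factories: ReplicatorFactory<R>[],
    settings: { remoteType: string }
): Promise<R | false> {
    for (const factory of factories) {
        const replicator = await factory(settings);
        // false means "not my remote type"; keep trying the next factory.
        if (replicator !== false) return replicator;
    }
    return false;
}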

View File

@@ -1,167 +1,150 @@
import { LRUCache } from "octagonal-wheels/memory/LRUCache";
import {
getStoragePathFromUXFileInfo,
id2path,
isInternalMetadata,
path2id,
stripInternalMetadataPrefix,
useMemo,
} from "../../common/utils";
import {
LOG_LEVEL_VERBOSE,
type DocumentID,
type EntryHasPath,
type FilePath,
type FilePathWithPrefix,
type ObsidianLiveSyncSettings,
type UXFileInfoStub,
} from "../../lib/src/common/types";
import { addPrefix, isAcceptedAll } from "../../lib/src/string_and_binary/path";
import { getStoragePathFromUXFileInfo } from "../../common/utils";
import { LOG_LEVEL_DEBUG, LOG_LEVEL_VERBOSE, type UXFileInfoStub } from "../../lib/src/common/types";
import { isAcceptedAll } from "../../lib/src/string_and_binary/path";
import { AbstractModule } from "../AbstractModule";
import type { ICoreModule } from "../ModuleTypes";
import { EVENT_REQUEST_RELOAD_SETTING_TAB, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
import { isDirty } from "../../lib/src/common/utils";
export class ModuleTargetFilter extends AbstractModule implements ICoreModule {
reloadIgnoreFiles() {
import type { LiveSyncCore } from "../../main";
import { Computed } from "octagonal-wheels/dataobject/Computed";
export class ModuleTargetFilter extends AbstractModule {
ignoreFiles: string[] = [];
private refreshSettings() {
this.ignoreFiles = this.settings.ignoreFiles.split(",").map((e) => e.trim());
}
$everyOnload(): Promise<boolean> {
eventHub.onEvent(EVENT_SETTING_SAVED, (evt: ObsidianLiveSyncSettings) => {
this.reloadIgnoreFiles();
});
eventHub.onEvent(EVENT_REQUEST_RELOAD_SETTING_TAB, () => {
this.reloadIgnoreFiles();
});
return Promise.resolve(true);
}
$$id2path(id: DocumentID, entry?: EntryHasPath, stripPrefix?: boolean): FilePathWithPrefix {
const tempId = id2path(id, entry);
if (stripPrefix && isInternalMetadata(tempId)) {
const out = stripInternalMetadataPrefix(tempId);
return out;
}
return tempId;
}
async $$path2id(filename: FilePathWithPrefix | FilePath, prefix?: string): Promise<DocumentID> {
const destPath = addPrefix(filename, prefix ?? "");
return await path2id(
destPath,
this.settings.usePathObfuscation ? this.settings.passphrase : "",
!this.settings.handleFilenameCaseSensitive
);
private _everyOnload(): Promise<boolean> {
void this.refreshSettings();
return Promise.resolve(true);
}
$$isFileSizeExceeded(size: number) {
if (this.settings.syncMaxSizeInMB > 0 && size > 0) {
if (this.settings.syncMaxSizeInMB * 1024 * 1024 < size) {
return true;
}
}
return false;
}
$$markFileListPossiblyChanged(): void {
_markFileListPossiblyChanged(): void {
this.totalFileEventCount++;
}
totalFileEventCount = 0;
get fileListPossiblyChanged() {
if (isDirty("totalFileEventCount", this.totalFileEventCount)) {
return true;
}
return false;
}
async $$isTargetFile(file: string | UXFileInfoStub, keepFileCheckList = false) {
const fileCount = useMemo<Record<string, number>>(
{
key: "fileCount", // forceUpdate: !keepFileCheckList,
},
(ctx, prev) => {
if (keepFileCheckList && prev) return prev;
if (!keepFileCheckList && prev && !this.fileListPossiblyChanged) {
return prev;
fileCountMap = new Computed({
evaluation: (fileEventCount: number) => {
const vaultFiles = this.core.storageAccess.getFileNames().sort();
const fileCountMap: Record<string, number> = {};
for (const file of vaultFiles) {
const lc = file.toLowerCase();
if (!fileCountMap[lc]) {
fileCountMap[lc] = 1;
} else {
fileCountMap[lc]++;
}
const fileList = (ctx.get("fileList") ?? []) as FilePathWithPrefix[];
// const fileNameList = (ctx.get("fileNameList") ?? []) as FilePath[];
// const fileNames =
const vaultFiles = this.core.storageAccess.getFileNames().sort();
if (prev && vaultFiles.length == fileList.length) {
const fl3 = new Set([...fileList, ...vaultFiles]);
if (fileList.length == fl3.size && vaultFiles.length == fl3.size) {
return prev;
}
}
ctx.set("fileList", vaultFiles);
const fileCount: Record<string, number> = {};
for (const file of vaultFiles) {
const lc = file.toLowerCase();
if (!fileCount[lc]) {
fileCount[lc] = 1;
} else {
fileCount[lc]++;
}
}
return fileCount;
}
);
return fileCountMap;
},
requiresUpdate: (args, previousArgs, previousResult) => {
if (!previousResult) return true;
if (previousResult instanceof Error) return true;
if (!previousArgs) return true;
if (args[0] === previousArgs[0]) {
return false;
}
return true;
},
});
totalFileEventCount = 0;
private async _isTargetFileByFileNameDuplication(file: string | UXFileInfoStub) {
await this.fileCountMap.updateValue(this.totalFileEventCount);
const fileCountMap = this.fileCountMap.value;
if (!fileCountMap) {
this._log("File count map is not ready yet.");
return false;
}
const filepath = getStoragePathFromUXFileInfo(file);
const lc = filepath.toLowerCase();
if (this.core.$$shouldCheckCaseInsensitive()) {
if (lc in fileCount && fileCount[lc] > 1) {
if (this.services.vault.shouldCheckCaseInsensitively()) {
if (lc in fileCountMap && fileCountMap[lc] > 1) {
this._log("File is duplicated (case-insensitive): " + filepath);
return false;
}
}
const fileNameLC = getStoragePathFromUXFileInfo(file).split("/").pop()?.toLowerCase();
if (this.settings.useIgnoreFiles) {
if (this.ignoreFiles.some((e) => e.toLowerCase() == fileNameLC)) {
// We must reload the ignore files because they have changed.
await this.readIgnoreFile(filepath);
}
if (await this.core.$$isIgnoredByIgnoreFiles(file)) {
return false;
}
}
if (!this.localDatabase?.isTargetFile(filepath)) return false;
this._log("File is not duplicated: " + filepath, LOG_LEVEL_DEBUG);
return true;
}
ignoreFileCache = new LRUCache<string, string[] | false>(300, 250000, true);
ignoreFiles = [] as string[];
async readIgnoreFile(path: string) {
private ignoreFileCacheMap = new Map<string, string[] | undefined | false>();
private invalidateIgnoreFileCache(path: string) {
// This erases `/path/to/.ignorefile` from the cache, so the next access will reload it.
// This method should be called when an edit to the ignore file is detected.
// Do not check whether it exists in cache or not; just delete it.
const key = path.toLowerCase();
this.ignoreFileCacheMap.delete(key);
}
private async getIgnoreFile(path: string): Promise<string[] | false> {
const key = path.toLowerCase();
const cached = this.ignoreFileCacheMap.get(key);
if (cached !== undefined) {
// If cached is not undefined, it is a cache hit (either the file's lines as string[], or false for a missing file).
return cached;
}
try {
const file = await this.core.storageAccess.readFileText(path);
const gitignore = file.split(/\r?\n/g);
this.ignoreFileCache.set(path, gitignore);
// load the ignore file
if (!(await this.core.storageAccess.isExistsIncludeHidden(path))) {
// file does not exist, cache as not exists
this.ignoreFileCacheMap.set(key, false);
return false;
}
const file = await this.core.storageAccess.readHiddenFileText(path);
const gitignore = file
.split(/\r?\n/g)
.map((e) => e.replace(/\r$/, ""))
.map((e) => e.trim());
this.ignoreFileCacheMap.set(key, gitignore);
this._log(`[ignore] Ignore file loaded: ${path}`, LOG_LEVEL_VERBOSE);
return gitignore;
} catch (ex) {
this._log(`Failed to read ignore file ${path}`);
// Failed to read the ignore file, delete cache.
this._log(`[ignore] Failed to read ignore file ${path}`);
this._log(ex, LOG_LEVEL_VERBOSE);
this.ignoreFileCache.set(path, false);
this.ignoreFileCacheMap.set(key, undefined);
return false;
}
}
async getIgnoreFile(path: string) {
if (this.ignoreFileCache.has(path)) {
return this.ignoreFileCache.get(path) ?? false;
} else {
return await this.readIgnoreFile(path);
}
}
async $$isIgnoredByIgnoreFiles(file: string | UXFileInfoStub): Promise<boolean> {
if (!this.settings.useIgnoreFiles) {
return false;
}
private async _isTargetFileByLocalDB(file: string | UXFileInfoStub) {
const filepath = getStoragePathFromUXFileInfo(file);
if (this.ignoreFileCache.has(filepath)) {
// Renew
await this.readIgnoreFile(filepath);
if (!this.localDatabase?.isTargetFile(filepath)) {
this._log("File is not target by local DB: " + filepath);
return false;
}
if (!(await isAcceptedAll(filepath, this.ignoreFiles, (filename) => this.getIgnoreFile(filename)))) {
this._log("File is target by local DB: " + filepath, LOG_LEVEL_DEBUG);
return await Promise.resolve(true);
}
private async _isTargetFileFinal(file: string | UXFileInfoStub) {
this._log("File is target finally: " + getStoragePathFromUXFileInfo(file), LOG_LEVEL_DEBUG);
return await Promise.resolve(true);
}
private async _isTargetIgnoredByIgnoreFiles(file: string | UXFileInfoStub): Promise<boolean> {
if (!this.settings.useIgnoreFiles) {
return true;
}
return false;
const filepath = getStoragePathFromUXFileInfo(file);
this.invalidateIgnoreFileCache(filepath);
this._log("Checking ignore files for: " + filepath, LOG_LEVEL_DEBUG);
if (!(await isAcceptedAll(filepath, this.ignoreFiles, (filename) => this.getIgnoreFile(filename)))) {
this._log("File is ignored by ignore files: " + filepath);
return false;
}
this._log("File is not ignored by ignore files: " + filepath, LOG_LEVEL_DEBUG);
return true;
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.vault.markFileListPossiblyChanged.setHandler(this._markFileListPossiblyChanged.bind(this));
services.appLifecycle.onLoaded.addHandler(this._everyOnload.bind(this));
services.vault.isIgnoredByIgnoreFile.setHandler(this._isTargetIgnoredByIgnoreFiles.bind(this));
services.vault.isTargetFile.addHandler(this._isTargetFileByFileNameDuplication.bind(this), 10);
services.vault.isTargetFile.addHandler(this._isTargetIgnoredByIgnoreFiles.bind(this), 20);
services.vault.isTargetFile.addHandler(this._isTargetFileByLocalDB.bind(this), 30);
services.vault.isTargetFile.addHandler(this._isTargetFileFinal.bind(this), 100);
services.setting.onSettingRealised.addHandler(this.refreshSettings.bind(this));
}
}
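// The ignore-file cache above distinguishes a loaded string[] from a confirmed-missing
// false, while an absent (or undefined) entry means "load on next access". A minimal
// sketch of that lookup, with hypothetical names:
class TriStateFileCache {
    // string[] = parsed lines; false = file confirmed absent; missing key = unknown.
    private cache = new Map<string, string[] | false>();
    async get(path: string, read: (p: string) => Promise<string[] | false>): Promise<string[] | false> {
        const key = path.toLowerCase();
        const hit = this.cache.get(key);
        if (hit !== undefined) return hit; // A hit covers both "lines" and "absent".
        const loaded = await read(path);
        this.cache.set(key, loaded);
        return loaded;
    }
    invalidate(path: string) {
        this.cache.delete(path.toLowerCase());
    }
}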

View File

@@ -0,0 +1,485 @@
import {
SYNCINFO_ID,
VER,
type AnyEntry,
type EntryDoc,
type EntryLeaf,
type LoadedEntry,
type MetaEntry,
} from "@lib/common/types";
import type { ModuleReplicator } from "./ModuleReplicator";
import { isChunk, isValidPath } from "@/common/utils";
import type { LiveSyncCore } from "@/main";
import {
LOG_LEVEL_DEBUG,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
Logger,
type LOG_LEVEL,
} from "@lib/common/logger";
import { fireAndForget, isAnyNote, throttle } from "@lib/common/utils";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore_v2";
import { serialized } from "octagonal-wheels/concurrency/lock";
import type { ReactiveSource } from "octagonal-wheels/dataobject/reactive_v2";
const KV_KEY_REPLICATION_RESULT_PROCESSOR_SNAPSHOT = "replicationResultProcessorSnapshot";
type ReplicateResultProcessorState = {
queued: PouchDB.Core.ExistingDocument<EntryDoc>[];
processing: PouchDB.Core.ExistingDocument<EntryDoc>[];
};
function shortenId(id: string): string {
return id.length > 10 ? id.substring(0, 10) : id;
}
function shortenRev(rev: string | undefined): string {
if (!rev) return "undefined";
return rev.length > 10 ? rev.substring(0, 10) : rev;
}
export class ReplicateResultProcessor {
private log(message: string, level: LOG_LEVEL = LOG_LEVEL_INFO) {
Logger(`[ReplicateResultProcessor] ${message}`, level);
}
private logError(e: any) {
Logger(e, LOG_LEVEL_VERBOSE);
}
private replicator: ModuleReplicator;
constructor(replicator: ModuleReplicator) {
this.replicator = replicator;
}
get localDatabase() {
return this.replicator.core.localDatabase;
}
get services() {
return this.replicator.core.services;
}
get core(): LiveSyncCore {
return this.replicator.core;
}
getPath(entry: AnyEntry): string {
return this.services.path.getPath(entry);
}
public suspend() {
this._suspended = true;
}
public resume() {
this._suspended = false;
fireAndForget(() => this.runProcessQueue());
}
// Whether the processing is suspended
// If true, the processing queue processor bails the loop.
private _suspended: boolean = false;
public get isSuspended() {
return (
this._suspended ||
!this.core.services.appLifecycle.isReady() ||
this.replicator.settings.suspendParseReplicationResult ||
this.core.services.appLifecycle.isSuspended()
);
}
/**
* Take a snapshot of the current processing state.
* This snapshot is stored in the KV database for recovery on restart.
*/
protected async _takeSnapshot() {
const snapshot = {
queued: this._queuedChanges.slice(),
processing: this._processingChanges.slice(),
} satisfies ReplicateResultProcessorState;
await this.core.kvDB.set(KV_KEY_REPLICATION_RESULT_PROCESSOR_SNAPSHOT, snapshot);
this.log(
`Snapshot taken. Queued: ${snapshot.queued.length}, Processing: ${snapshot.processing.length}`,
LOG_LEVEL_DEBUG
);
this.reportStatus();
}
/**
* Trigger taking a snapshot.
*/
protected _triggerTakeSnapshot() {
fireAndForget(() => this._takeSnapshot());
}
/**
* Throttled version of triggerTakeSnapshot.
*/
protected triggerTakeSnapshot = throttle(() => this._triggerTakeSnapshot(), 50);
/**
* Restore from snapshot.
*/
public async restoreFromSnapshot() {
const snapshot = await this.core.kvDB.get<ReplicateResultProcessorState>(
KV_KEY_REPLICATION_RESULT_PROCESSOR_SNAPSHOT
);
if (snapshot) {
// Restoring the snapshot re-runs processing for both queued and processing items.
const newQueue = [...snapshot.processing, ...snapshot.queued, ...this._queuedChanges];
this._queuedChanges = [];
this.enqueueAll(newQueue);
this.log(
`Restored from snapshot (${snapshot.processing.length + snapshot.queued.length} items)`,
LOG_LEVEL_INFO
);
// await this._takeSnapshot();
}
}
private _restoreFromSnapshot: Promise<void> | undefined = undefined;
/**
* Restore from snapshot only once.
* @returns Promise that resolves when restoration is complete.
*/
public restoreFromSnapshotOnce() {
if (!this._restoreFromSnapshot) {
this._restoreFromSnapshot = this.restoreFromSnapshot();
}
return this._restoreFromSnapshot;
}
/**
* Perform the given procedure while counting the concurrency.
* @param proc async procedure to perform
* @param countValue reactive source to count concurrency
* @returns result of the procedure
*/
async withCounting<T>(proc: () => Promise<T>, countValue: ReactiveSource<number>) {
countValue.value++;
try {
return await proc();
} finally {
countValue.value--;
}
}
/**
* Report the current status.
*/
protected reportStatus() {
this.services.replication.replicationResultCount.value =
this._queuedChanges.length + this._processingChanges.length;
}
/**
* Enqueue all the given changes for processing.
* @param changes Changes to enqueue
*/
public enqueueAll(changes: PouchDB.Core.ExistingDocument<EntryDoc>[]) {
for (const change of changes) {
// Check whether the change is a non-document change (e.g., chunk, versioninfo, syncinfo), and process it directly if so.
const isProcessed = this.processIfNonDocumentChange(change);
if (!isProcessed) {
this.enqueueChange(change);
}
}
}
/**
* Process the change if it is not a document change.
* @param change Change to process
* @returns True if the change was processed; false otherwise
*/
protected processIfNonDocumentChange(change: PouchDB.Core.ExistingDocument<EntryDoc>) {
if (!change) {
this.log(`Received empty change`, LOG_LEVEL_VERBOSE);
return true;
}
if (isChunk(change._id)) {
// Emit event for new chunk
this.localDatabase.onNewLeaf(change as EntryLeaf);
this.log(`Processed chunk: ${shortenId(change._id)}`, LOG_LEVEL_DEBUG);
return true;
}
if (change.type == "versioninfo") {
this.log(`Version info document received: ${change._id}`, LOG_LEVEL_VERBOSE);
if (change.version > VER) {
// Incompatible version, stop replication.
this.core.replicator.closeReplication();
this.log(
`Remote database has been updated to an incompatible version. Please update your Self-hosted LiveSync plugin.`,
LOG_LEVEL_NOTICE
);
}
return true;
}
if (
change._id == SYNCINFO_ID || // Synchronisation information data
change._id.startsWith("_design") //design document
) {
this.log(`Skipped system document: ${change._id}`, LOG_LEVEL_VERBOSE);
return true;
}
return false;
}
/**
* Queue of changes to be processed.
*/
private _queuedChanges: PouchDB.Core.ExistingDocument<EntryDoc>[] = [];
/**
* List of changes being processed.
*/
private _processingChanges: PouchDB.Core.ExistingDocument<EntryDoc>[] = [];
/**
* Enqueue the given document change for processing.
* @param doc Document change to enqueue
* @returns
*/
protected enqueueChange(doc: PouchDB.Core.ExistingDocument<EntryDoc>) {
const old = this._queuedChanges.find((e) => e._id == doc._id);
const path = "path" in doc ? this.getPath(doc) : "<unknown>";
const docNote = `${path} (${shortenId(doc._id)}, ${shortenRev(doc._rev)})`;
if (old) {
if (old._rev == doc._rev) {
this.log(`[Enqueue] skipped (Already queued): ${docNote}`, LOG_LEVEL_VERBOSE);
return;
}
const oldRev = old._rev ?? "";
const isDeletedBefore = old._deleted === true || ("deleted" in old && old.deleted === true);
const isDeletedNow = doc._deleted === true || ("deleted" in doc && doc.deleted === true);
// Replace the old queued change (updates may arrive in batches; processing always runs with the latest version, so we can simply replace the entry when the change is of the same type).
if (isDeletedBefore === isDeletedNow) {
this._queuedChanges = this._queuedChanges.filter((e) => e._id != doc._id);
this.log(`[Enqueue] requeued: ${docNote} (from rev: ${shortenRev(oldRev)})`, LOG_LEVEL_VERBOSE);
}
}
// Enqueue the change
this._queuedChanges.push(doc);
this.triggerTakeSnapshot();
this.triggerProcessQueue();
}
/**
* Trigger processing of the queued changes.
*/
protected triggerProcessQueue() {
fireAndForget(() => this.runProcessQueue());
}
/**
* Semaphore to limit concurrent processing.
* This caps global concurrency (at most 10 documents in flight at the same time); per-document ordering is enforced separately via serialized().
*/
private _semaphore = Semaphore(10);
/**
* Flag indicating whether the process queue is currently running.
*/
private _isRunningProcessQueue: boolean = false;
/**
* Process the queued changes.
*/
private async runProcessQueue() {
// Avoid re-entrance, running while suspended, or spinning on an empty queue.
if (this._isRunningProcessQueue) return;
if (this.isSuspended) return;
if (this._queuedChanges.length == 0) return;
try {
this._isRunningProcessQueue = true;
while (this._queuedChanges.length > 0) {
// If we get suspended, bail out of the loop. Some concurrent tasks may still be running.
if (this.isSuspended) {
this.log(
`Processing has got suspended. Remaining items in queue: ${this._queuedChanges.length}`,
LOG_LEVEL_INFO
);
break;
}
// Acquire semaphore for new processing slot
// (per-document serialisation caps concurrency).
const releaser = await this._semaphore.acquire();
releaser();
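// Note: the slot is released immediately. Awaiting acquire() here only throttles
// this dispatch loop while ten documents are already in flight; the actual work
// re-acquires its own slot inside applyToDatabase().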
// Dequeue the next change
const doc = this._queuedChanges.shift();
if (doc) {
this._processingChanges.push(doc);
void this.parseDocumentChange(doc);
}
// Take snapshot (to be restored on next startup if needed)
this.triggerTakeSnapshot();
}
} finally {
this._isRunningProcessQueue = false;
}
}
// Phase 1: parse replication result
/**
* Parse the given document change.
* @param change
* @returns
*/
async parseDocumentChange(change: PouchDB.Core.ExistingDocument<EntryDoc>) {
try {
if (isAnyNote(change)) {
const docMtime = change.mtime ?? 0;
const maxMTime = this.replicator.settings.maxMTimeForReflectEvents;
if (maxMTime > 0 && docMtime > maxMTime) {
const docPath = this.getPath(change);
this.log(
`Processing ${docPath} has been skipped due to modification time (${new Date(
docMtime * 1000
).toISOString()}) exceeding the limit`,
LOG_LEVEL_INFO
);
return;
}
}
// If the document is a virtual document, process it in the virtual document processor.
if (await this.services.replication.processVirtualDocument(change)) return;
// If the document is a note, run the target checks and apply it to the database.
if (isAnyNote(change)) {
const docPath = this.getPath(change);
if (!(await this.services.vault.isTargetFile(docPath))) {
this.log(`Skipped: ${docPath}`, LOG_LEVEL_VERBOSE);
return;
}
const size = change.size;
// Note that this size check relies on the size recorded in the metadata, not the actual content size.
if (this.services.vault.isFileSizeTooLarge(size)) {
this.log(
`Processing ${docPath} has been skipped due to file size exceeding the limit`,
LOG_LEVEL_NOTICE
);
return;
}
return await this.applyToDatabase(change);
}
this.log(`Skipped unexpected non-note document: ${change._id}`, LOG_LEVEL_INFO);
return;
} finally {
// Remove from processing queue
this._processingChanges = this._processingChanges.filter((e) => e !== change);
this.triggerTakeSnapshot();
}
}
// Phase 2: apply the document to database
protected applyToDatabase(doc: PouchDB.Core.ExistingDocument<AnyEntry>) {
return this.withCounting(async () => {
let releaser: Awaited<ReturnType<typeof this._semaphore.acquire>> | undefined = undefined;
try {
releaser = await this._semaphore.acquire();
await this._applyToDatabase(doc);
} catch (e) {
this.log(`Error while processing replication result`, LOG_LEVEL_NOTICE);
this.logError(e);
} finally {
// Release the semaphore slot. (Removal from the "in-progress" list happens in parseDocumentChange, so the snapshot will not include this document.)
if (releaser) {
releaser();
}
}
}, this.services.replication.databaseQueueCount);
}
// Phase 2.1: process the document and apply to storage
// This function is serialized per document to avoid race-condition for the same document.
private _applyToDatabase(doc_: PouchDB.Core.ExistingDocument<AnyEntry>) {
const dbDoc = doc_ as LoadedEntry; // It has no `data`
const path = this.getPath(dbDoc);
return serialized(`replication-process:${dbDoc._id}`, async () => {
const docNote = `${path} (${shortenId(dbDoc._id)}, ${shortenRev(dbDoc._rev)})`;
const isRequired = await this.checkIsChangeRequiredForDatabaseProcessing(dbDoc);
if (!isRequired) {
this.log(`Skipped (Not latest): ${docNote}`, LOG_LEVEL_VERBOSE);
return;
}
// If `Read chunks online` is disabled, chunks should be transferred before here.
// However, in some cases, chunks arrive after that. So, if chunks are missing, we have to wait for them.
// (If `Use Only Local Chunks` is enabled, we should not attempt to fetch chunks online automatically).
const isDeleted = dbDoc._deleted === true || ("deleted" in dbDoc && dbDoc.deleted === true);
// Gather full document if not deleted
const doc = isDeleted
? { ...dbDoc, data: "" }
: await this.localDatabase.getDBEntryFromMeta({ ...dbDoc }, false, true);
if (!doc) {
// Failed to gather content
this.log(`Failed to gather content of ${docNote}`, LOG_LEVEL_NOTICE);
return;
}
// Check if other processor wants to process this document, if so, skip processing here.
if (await this.services.replication.processOptionalSynchroniseResult(dbDoc)) {
// Already processed
this.log(`Processed by other processor: ${docNote}`, LOG_LEVEL_DEBUG);
} else if (isValidPath(this.getPath(doc))) {
// Apply to storage if the path is valid
await this.applyToStorage(doc as MetaEntry);
this.log(`Processed: ${docNote}`, LOG_LEVEL_DEBUG);
} else {
// Should process, but have an invalid path
this.log(`Unprocessed (Invalid path): ${docNote}`, LOG_LEVEL_VERBOSE);
}
return;
});
}
/**
* Phase 3: Apply the given entry to storage.
* @param entry
* @returns
*/
protected applyToStorage(entry: MetaEntry) {
return this.withCounting(async () => {
await this.services.replication.processSynchroniseResult(entry);
}, this.services.replication.storageApplyingCount);
}
/**
* Check whether processing is required for the given document.
* @param dbDoc Document to check
* @returns True if processing is required; false otherwise
*/
protected async checkIsChangeRequiredForDatabaseProcessing(dbDoc: LoadedEntry): Promise<boolean> {
const path = this.getPath(dbDoc);
try {
const savedDoc = await this.localDatabase.getRaw<LoadedEntry>(dbDoc._id, {
conflicts: true,
revs_info: true,
});
const newRev = dbDoc._rev ?? "";
const latestRev = savedDoc._rev ?? "";
const revisions = savedDoc._revs_info?.map((e) => e.rev) ?? [];
if (savedDoc._conflicts && savedDoc._conflicts.length > 0) {
// There are conflicts, so we have to process it.
// (It may be auto-resolved, or user intervention may occur.)
return true;
}
if (newRev == latestRev) {
// This is the latest revision. We can simply process it.
return true;
}
const index = revisions.indexOf(newRev);
if (index >= 0) {
// The revision has been inserted before.
return false; // This means the document has already been processed (while no conflict existed).
}
return true; // This should rarely happen, but we have to process it just in case.
} catch (e: any) {
if ("status" in e && e.status == 404) {
// getRaw failed because the document does not exist; this should not normally happen, especially during replication.
// If the process was triggered for some other reason, we **probably** have to process it.
// Note that this is not a common case.
return true;
} else {
this.log(
`Failed to get existing document for ${path} (${shortenId(dbDoc._id)}, ${shortenRev(dbDoc._rev)}) `,
LOG_LEVEL_NOTICE
);
this.logError(e);
return false;
}
}
}
}
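// The snapshot pair above is what lets queued replication results survive a restart:
// every queue mutation schedules a throttled write of {queued, processing} to the KV
// store, and start-up replays both lists. A condensed sketch of that cycle, assuming a
// promise-based key-value store (names hypothetical):
type Snapshot<T> = { queued: T[]; processing: T[] };
const SNAPSHOT_KEY = "replicationResultProcessorSnapshot";
async function takeSnapshot<T>(kv: { set(k: string, v: Snapshot<T>): Promise<unknown> }, state: Snapshot<T>) {
    // Copy the arrays so later mutations do not leak into the stored snapshot.
    await kv.set(SNAPSHOT_KEY, { queued: state.queued.slice(), processing: state.processing.slice() });
}
async function restoreSnapshot<T>(
    kv: { get<V>(k: string): Promise<V | undefined> },
    enqueueAll: (items: T[]) => void
) {
    const snapshot = await kv.get<Snapshot<T>>(SNAPSHOT_KEY);
    // Items that were mid-flight at shutdown are simply re-processed.
    if (snapshot) enqueueAll([...snapshot.processing, ...snapshot.queued]);
}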

View File

@@ -2,48 +2,53 @@ import { AbstractModule } from "../AbstractModule.ts";
import { LOG_LEVEL_NOTICE, type FilePathWithPrefix } from "../../lib/src/common/types";
import { QueueProcessor } from "octagonal-wheels/concurrency/processor";
import { sendValue } from "octagonal-wheels/messagepassing/signal";
import type { ICoreModule } from "../ModuleTypes.ts";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleConflictChecker extends AbstractModule implements ICoreModule {
async $$queueConflictCheckIfOpen(file: FilePathWithPrefix): Promise<void> {
export class ModuleConflictChecker extends AbstractModule {
async _queueConflictCheckIfOpen(file: FilePathWithPrefix): Promise<void> {
const path = file;
if (this.settings.checkConflictOnlyOnOpen) {
const af = this.core.$$getActiveFilePath();
const af = this.services.vault.getActiveFilePath();
if (af && af != path) {
this._log(`${file} is conflicted; the merging process has been postponed.`, LOG_LEVEL_NOTICE);
return;
}
}
await this.core.$$queueConflictCheck(path);
await this.services.conflict.queueCheckFor(path);
}
async $$queueConflictCheck(file: FilePathWithPrefix): Promise<void> {
const optionalConflictResult = await this.core.$anyGetOptionalConflictCheckMethod(file);
async _queueConflictCheck(file: FilePathWithPrefix): Promise<void> {
const optionalConflictResult = await this.services.conflict.getOptionalConflictCheckMethod(file);
if (optionalConflictResult == true) {
// The conflict has been resolved by another process.
return;
} else if (optionalConflictResult === "newer") {
// The conflict should be resolved by the newer entry.
await this.core.$anyResolveConflictByNewest(file);
await this.services.conflict.resolveByNewest(file);
} else {
this.conflictCheckQueue.enqueue(file);
}
}
$$waitForAllConflictProcessed(): Promise<boolean> {
_waitForAllConflictProcessed(): Promise<boolean> {
return this.conflictResolveQueue.waitForAllProcessed();
}
// TODO-> Move to ModuleConflictResolver?
conflictResolveQueue = new QueueProcessor(
async (filenames: FilePathWithPrefix[]) => {
await this.core.$$resolveConflict(filenames[0]);
const filename = filenames[0];
return await this.services.conflict.resolve(filename);
},
{
suspended: false,
batchSize: 1,
concurrentLimit: 1,
delay: 10,
// No need to limit concurrency to `1` here; the subsequent process will handle it,
// and in some cases we do not need synchronisation (e.g., when auto-merge is available).
// Therefore, limiting global concurrency is performed on the resolver with the UI.
concurrentLimit: 10,
delay: 0,
keepResultUntilDownstreamConnected: false,
}
).replaceEnqueueProcessor((queue, newEntity) => {
@@ -57,22 +62,21 @@ export class ModuleConflictChecker extends AbstractModule implements ICoreModule
new QueueProcessor(
(files: FilePathWithPrefix[]) => {
const filename = files[0];
// const file = await this.core.storageAccess.isExists(filename);
// if (!file) return [];
// if (!(file instanceof TFile)) return;
// if ((file instanceof TFolder)) return [];
// Check again?
return Promise.resolve([filename]);
// this.conflictResolveQueue.enqueueWithKey(filename, { filename, file });
},
{
suspended: false,
batchSize: 1,
concurrentLimit: 5,
delay: 10,
concurrentLimit: 10,
delay: 0,
keepResultUntilDownstreamConnected: true,
pipeTo: this.conflictResolveQueue,
totalRemainingReactiveSource: this.core.conflictProcessQueueCount,
totalRemainingReactiveSource: this.services.conflict.conflictProcessQueueCount,
}
);
override onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.conflict.queueCheckForIfOpen.setHandler(this._queueConflictCheckIfOpen.bind(this));
services.conflict.queueCheckFor.setHandler(this._queueConflictCheck.bind(this));
services.conflict.ensureAllProcessed.setHandler(this._waitForAllConflictProcessed.bind(this));
}
}
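The change from concurrentLimit: 1 to 10 above relies on per-file exclusivity being enforced downstream, where the resolver wraps each resolution in serialized(`conflict-resolve:${filename}`, ...). As a rough illustration only (octagonal-wheels' actual serialized() implementation may differ), a keyed serialiser can be sketched like this:

// A minimal keyed serialiser: tasks sharing a key run strictly in order,
// while tasks with different keys may run concurrently.
const tails = new Map<string, Promise<void>>();

function serializedSketch<T>(key: string, task: () => Promise<T>): Promise<T> {
    const prev = tails.get(key) ?? Promise.resolve();
    const run = prev.then(task, task); // start only after the predecessor settles
    const tail = run.then(
        () => undefined,
        () => undefined // swallow the outcome; callers observe it via `run`
    );
    tails.set(key, tail);
    void tail.then(() => {
        if (tails.get(key) === tail) tails.delete(key); // drop drained chains
    });
    return run;
}

With this shape, up to ten resolutions may be in flight at once, yet two resolutions of the same file can never interleave.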

View File

@@ -20,8 +20,9 @@ import {
} from "../../common/utils";
import diff_match_patch from "diff-match-patch";
import { stripAllPrefixes, isPlainText } from "../../lib/src/string_and_binary/path";
import type { ICoreModule } from "../ModuleTypes.ts";
import { eventHub } from "../../common/events.ts";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
declare global {
interface LSEvents {
@@ -29,8 +30,8 @@ declare global {
}
}
export class ModuleConflictResolver extends AbstractModule implements ICoreModule {
async $$resolveConflictByDeletingRev(
export class ModuleConflictResolver extends AbstractModule {
private async _resolveConflictByDeletingRev(
path: FilePathWithPrefix,
deleteRevision: string,
subTitle = ""
@@ -44,13 +45,16 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
return MISSING_OR_ERROR;
}
eventHub.emitEvent("conflict-cancelled", path);
this._log(`${title} Conflicted revision deleted ${displayRev(deleteRevision)} ${path}`, LOG_LEVEL_INFO);
this._log(
`${title} Conflicted revision has been deleted ${displayRev(deleteRevision)} ${path}`,
LOG_LEVEL_INFO
);
if ((await this.core.databaseFileAccess.getConflictedRevs(path)).length != 0) {
this._log(`${title} some conflicts are left in ${path}`, LOG_LEVEL_INFO);
return AUTO_MERGED;
}
this._log(`${title} ${path} is a plugin metadata file, no need to write to storage`, LOG_LEVEL_INFO);
if (isPluginMetadata(path) || isCustomisationSyncMetadata(path)) {
this._log(`${title} ${path} is a plugin metadata file, no need to write to storage`, LOG_LEVEL_INFO);
return AUTO_MERGED;
}
// If no conflicts were found, write the resolved content to the storage.
@@ -58,7 +62,8 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
this._log(`Could not write the resolved content to the storage: ${path}`, LOG_LEVEL_NOTICE);
return MISSING_OR_ERROR;
}
this._log(`${path} Has been merged automatically`, LOG_LEVEL_NOTICE);
const level = subTitle.indexOf("same") !== -1 ? LOG_LEVEL_INFO : LOG_LEVEL_NOTICE;
this._log(`${path} has been merged automatically`, level);
return AUTO_MERGED;
}
@@ -78,7 +83,7 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
return MISSING_OR_ERROR;
}
// 2. As usual, delete the conflicted revision and if there are no conflicts, write the resolved content to the storage.
return await this.core.$$resolveConflictByDeletingRev(path, ret.conflictedRev, "Sensible");
return await this.services.conflict.resolveByDeletingRevision(path, ret.conflictedRev, "Sensible");
}
const { rightRev, leftLeaf, rightLeaf } = ret;
@@ -91,7 +96,7 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
}
if (rightLeaf == false) {
// Conflicted item could not load, delete this.
return await this.core.$$resolveConflictByDeletingRev(path, rightRev, "MISSING OLD REV");
return await this.services.conflict.resolveByDeletingRevision(path, rightRev, "MISSING OLD REV");
}
const isSame = leftLeaf.data == rightLeaf.data && leftLeaf.deleted == rightLeaf.deleted;
@@ -108,8 +113,10 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
`${isSame ? "same" : ""}`,
`${isBinary ? "binary" : ""}`,
`${alwaysNewer ? "alwaysNewer" : ""}`,
].join(",");
return await this.core.$$resolveConflictByDeletingRev(path, loser.rev, subTitle);
]
.filter((e) => e.trim())
.join(",");
return await this.services.conflict.resolveByDeletingRevision(path, loser.rev, subTitle);
}
// make diff.
const dmp = new diff_match_patch();
@@ -123,7 +130,7 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
};
}
async $$resolveConflict(filename: FilePathWithPrefix): Promise<void> {
private async _resolveConflict(filename: FilePathWithPrefix): Promise<void> {
// const filename = filenames[0];
return await serialized(`conflict-resolve:${filename}`, async () => {
const conflictCheckResult = await this.checkConflictAndPerformAutoMerge(filename);
@@ -138,16 +145,16 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
}
if (conflictCheckResult === AUTO_MERGED) {
//auto resolved, but need check again;
if (this.settings.syncAfterMerge && !this.core.$$isSuspended()) {
if (this.settings.syncAfterMerge && !this.services.appLifecycle.isSuspended()) {
//Wait for the running replication, if not running replication, run it once.
await this.core.$$waitForReplicationOnce();
await this.services.replication.replicateByEvent();
}
this._log("[conflict] Automatically merged, but we have to check it again");
await this.core.$$queueConflictCheck(filename);
await this.services.conflict.queueCheckFor(filename);
return;
}
if (this.settings.showMergeDialogOnlyOnActive) {
const af = this.core.$$getActiveFilePath();
const af = this.services.vault.getActiveFilePath();
if (af && af != filename) {
this._log(
`[conflict] ${filename} is conflicted. The merging process has been postponed until the file is opened.`,
@@ -158,11 +165,11 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
}
this._log("[conflict] Manual merge required!");
eventHub.emitEvent("conflict-cancelled", filename);
await this.core.$anyResolveConflictByUI(filename, conflictCheckResult);
await this.services.conflict.resolveByUserInteraction(filename, conflictCheckResult);
});
}
async $anyResolveConflictByNewest(filename: FilePathWithPrefix): Promise<boolean> {
private async _anyResolveConflictByNewest(filename: FilePathWithPrefix): Promise<boolean> {
const currentRev = await this.core.databaseFileAccess.fetchEntryMeta(filename, undefined, true);
if (currentRev == false) {
this._log(`Could not get current revision of ${filename}`);
@@ -200,8 +207,34 @@ export class ModuleConflictResolver extends AbstractModule implements ICoreModul
this._log(
`conflict: Deleting the older revision ${mTimeAndRev[i][1]} (${new Date(mTimeAndRev[i][0]).toLocaleString()}) of ${filename}`
);
await this.core.$$resolveConflictByDeletingRev(filename, mTimeAndRev[i][1], "NEWEST");
await this.services.conflict.resolveByDeletingRevision(filename, mTimeAndRev[i][1], "NEWEST");
}
return true;
}
private async _resolveAllConflictedFilesByNewerOnes() {
this._log(`Resolving conflicts by newer ones`, LOG_LEVEL_NOTICE);
const files = this.core.storageAccess.getFileNames();
let i = 0;
for (const file of files) {
if (i++ % 10)
this._log(
`Check and Processing ${i} / ${files.length}`,
LOG_LEVEL_NOTICE,
"resolveAllConflictedFilesByNewerOnes"
);
await this.services.conflict.resolveByNewest(file);
}
this._log(`Done!`, LOG_LEVEL_NOTICE, "resolveAllConflictedFilesByNewerOnes");
}
override onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.conflict.resolveByDeletingRevision.setHandler(this._resolveConflictByDeletingRev.bind(this));
services.conflict.resolve.setHandler(this._resolveConflict.bind(this));
services.conflict.resolveByNewest.setHandler(this._anyResolveConflictByNewest.bind(this));
services.conflict.resolveAllConflictedFilesByNewerOnes.setHandler(
this._resolveAllConflictedFilesByNewerOnes.bind(this)
);
}
}
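For reference, the resolve-by-newest strategy bound above boils down to: collect the modified times of the current and conflicted revisions, keep the newest, and delete the rest. A simplified stand-alone sketch; the RevMeta shape and the deleteRevision callback are illustrative assumptions, not the plugin's types:

type RevMeta = { rev: string; mtime: number };

// Keep the newest revision of a file; delete every older one.
async function resolveByNewest(
    revs: RevMeta[],
    deleteRevision: (rev: string) => Promise<void>
): Promise<boolean> {
    if (revs.length < 2) return false; // nothing conflicting to resolve
    const sorted = [...revs].sort((a, b) => b.mtime - a.mtime); // newest first
    for (const { rev, mtime } of sorted.slice(1)) {
        console.log(`Deleting the older revision ${rev} (${new Date(mtime).toLocaleString()})`);
        await deleteRevision(rev); // e.g. resolveByDeletingRevision(path, rev, "NEWEST")
    }
    return true;
}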

View File

@@ -1,16 +1,21 @@
import { LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { normalizePath } from "../../deps.ts";
import {
FLAGMD_REDFLAG,
FLAGMD_REDFLAG2,
FLAGMD_REDFLAG2_HR,
FLAGMD_REDFLAG3,
FLAGMD_REDFLAG3_HR,
FlagFilesHumanReadable,
FlagFilesOriginal,
REMOTE_MINIO,
TweakValuesShouldMatchedTemplate,
type ObsidianLiveSyncSettings,
} from "../../lib/src/common/types.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
import type { LiveSyncCore } from "../../main.ts";
import FetchEverything from "../features/SetupWizard/dialogs/FetchEverything.svelte";
import RebuildEverything from "../features/SetupWizard/dialogs/RebuildEverything.svelte";
import { extractObject } from "octagonal-wheels/object";
import { SvelteDialogManagerBase } from "@lib/UI/svelteDialog.ts";
import type { ServiceContext } from "@lib/services/base/ServiceBase.ts";
export class ModuleRedFlag extends AbstractModule implements ICoreModule {
export class ModuleRedFlag extends AbstractModule {
async isFlagFileExist(path: string) {
const redflag = await this.core.storageAccess.isExists(normalizePath(path));
if (redflag) {
@@ -31,122 +36,296 @@ export class ModuleRedFlag extends AbstractModule implements ICoreModule {
}
}
isRedFlagRaised = async () => await this.isFlagFileExist(FLAGMD_REDFLAG);
isRedFlag2Raised = async () =>
(await this.isFlagFileExist(FLAGMD_REDFLAG2)) || (await this.isFlagFileExist(FLAGMD_REDFLAG2_HR));
isRedFlag3Raised = async () =>
(await this.isFlagFileExist(FLAGMD_REDFLAG3)) || (await this.isFlagFileExist(FLAGMD_REDFLAG3_HR));
isSuspendFlagActive = async () => await this.isFlagFileExist(FlagFilesOriginal.SUSPEND_ALL);
isRebuildFlagActive = async () =>
(await this.isFlagFileExist(FlagFilesOriginal.REBUILD_ALL)) ||
(await this.isFlagFileExist(FlagFilesHumanReadable.REBUILD_ALL));
isFetchAllFlagActive = async () =>
(await this.isFlagFileExist(FlagFilesOriginal.FETCH_ALL)) ||
(await this.isFlagFileExist(FlagFilesHumanReadable.FETCH_ALL));
async deleteRedFlag2() {
await this.deleteFlagFile(FLAGMD_REDFLAG2);
await this.deleteFlagFile(FLAGMD_REDFLAG2_HR);
async cleanupRebuildFlag() {
await this.deleteFlagFile(FlagFilesOriginal.REBUILD_ALL);
await this.deleteFlagFile(FlagFilesHumanReadable.REBUILD_ALL);
}
async deleteRedFlag3() {
await this.deleteFlagFile(FLAGMD_REDFLAG3);
await this.deleteFlagFile(FLAGMD_REDFLAG3_HR);
async cleanupFetchAllFlag() {
await this.deleteFlagFile(FlagFilesOriginal.FETCH_ALL);
await this.deleteFlagFile(FlagFilesHumanReadable.FETCH_ALL);
}
// dialogManager = new SvelteDialogManagerBase(this.core);
get dialogManager(): SvelteDialogManagerBase<ServiceContext> {
return this.core.services.UI.dialogManager;
}
async $everyOnLayoutReady(): Promise<boolean> {
try {
const isRedFlagRaised = await this.isRedFlagRaised();
const isRedFlag2Raised = await this.isRedFlag2Raised();
const isRedFlag3Raised = await this.isRedFlag3Raised();
if (isRedFlagRaised || isRedFlag2Raised || isRedFlag3Raised) {
if (isRedFlag2Raised) {
if (
(await this.core.confirm.askYesNoDialog(
"Rebuild everything has been scheduled! Are you sure to rebuild everything?",
{ defaultOption: "Yes", timeout: 0 }
)) !== "yes"
) {
await this.deleteRedFlag2();
await this.core.$$performRestart();
return false;
/**
* Adjust the settings to match the remote if needed.
* @param extra result of dialogues that may contain the preventFetchingConfig flag (e.g., from FetchEverything or RebuildEverything)
* @param config current configuration used to retrieve the remote preferred config
*/
async adjustSettingToRemoteIfNeeded(extra: { preventFetchingConfig: boolean }, config: ObsidianLiveSyncSettings) {
if (extra && extra.preventFetchingConfig) {
return;
}
// Fetch the remote configuration and apply it if possible.
if (await this.adjustSettingToRemote(config)) {
config = this.core.settings;
} else {
this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
}
console.debug(config);
}
/**
* Adjust the settings to match the remote configuration.
* @param config current configuration used to retrieve the remote preferred config
* @returns the updated configuration if applied; otherwise undefined.
*/
async adjustSettingToRemote(config: ObsidianLiveSyncSettings) {
// Fetch remote configuration unless prevented.
const SKIP_FETCH = "Skip and proceed";
const RETRY_FETCH = "Retry (recommended)";
let canProceed = false;
do {
const remoteTweaks = await this.services.tweakValue.fetchRemotePreferred(config);
if (!remoteTweaks) {
const choice = await this.core.confirm.askSelectStringDialogue(
"Could not fetch configuration from remote. If you are new to the Self-hosted LiveSync, this might be expected. If not, you should check your network or server settings.",
[SKIP_FETCH, RETRY_FETCH] as const,
{
defaultAction: RETRY_FETCH,
timeout: 0,
title: "Fetch Remote Configuration Failed",
}
);
if (choice === SKIP_FETCH) {
canProceed = true;
}
if (isRedFlag3Raised) {
if (
(await this.core.confirm.askYesNoDialog("Fetch again has been scheduled! Are you sure?", {
defaultOption: "Yes",
timeout: 0,
})) !== "yes"
) {
await this.deleteRedFlag3();
await this.core.$$performRestart();
return false;
}
}
this.settings.batchSave = false;
await this.core.$allSuspendAllSync();
await this.core.$allSuspendExtraSync();
this.settings.suspendFileWatching = true;
await this.saveSettings();
if (isRedFlag2Raised) {
} else {
const necessary = extractObject(TweakValuesShouldMatchedTemplate, remoteTweaks);
// Check if any necessary tweak value is different from current config.
const differentItems = Object.entries(necessary).filter(([key, value]) => {
return (config as any)[key] !== value;
});
if (differentItems.length === 0) {
this._log(
`${FLAGMD_REDFLAG2} or ${FLAGMD_REDFLAG2_HR} has been detected! Self-hosted LiveSync suspends all sync and rebuild everything.`,
"Remote configuration matches local configuration. No changes applied.",
LOG_LEVEL_NOTICE
);
await this.core.rebuilder.$rebuildEverything();
await this.deleteRedFlag2();
if (
(await this.core.confirm.askYesNoDialog(
"Do you want to resume file and database processing, and restart obsidian now?",
{ defaultOption: "Yes", timeout: 15 }
)) == "yes"
) {
this.settings.suspendFileWatching = false;
await this.saveSettings();
this.core.$$performRestart();
return false;
}
} else if (isRedFlag3Raised) {
this._log(
`${FLAGMD_REDFLAG3} or ${FLAGMD_REDFLAG3_HR} has been detected! Self-hosted LiveSync will discard the local database and fetch everything from the remote once again.`,
LOG_LEVEL_NOTICE
);
const makeLocalChunkBeforeSync =
(await this.core.confirm.askYesNoDialog(
`Do you want to create local chunks before fetching?
> [!MORE]-
> If creating local chunks before fetching, only the difference between the local and remote will be fetched.
`,
{ defaultOption: "Yes", title: "Trick to transfer efficiently" }
)) == "yes";
await this.core.rebuilder.$fetchLocal(makeLocalChunkBeforeSync);
await this.deleteRedFlag3();
if (this.settings.suspendFileWatching) {
if (
(await this.core.confirm.askYesNoDialog(
"Do you want to resume file and database processing, and restart obsidian now?",
{ defaultOption: "Yes", timeout: 15 }
)) == "yes"
) {
this.settings.suspendFileWatching = false;
await this.saveSettings();
this.core.$$performRestart();
return false;
}
} else {
this._log(
"Your content of files will be synchronised gradually. Please wait for the completion.",
LOG_LEVEL_NOTICE
);
}
} else {
// Case of FLAGMD_REDFLAG.
this.settings.writeLogToTheFile = true;
// await this.plugin.openDatabase();
const warningMessage =
"The red flag is raised! The whole initialize steps are skipped, and any file changes are not captured.";
this._log(warningMessage, LOG_LEVEL_NOTICE);
await this.core.confirm.askSelectStringDialogue(
"Your settings differed slightly from the server's. The plug-in has supplemented the incompatible parts with the server settings!",
["OK"] as const,
{
defaultAction: "OK",
timeout: 0,
}
);
}
config = {
...config,
...Object.fromEntries(differentItems),
} satisfies ObsidianLiveSyncSettings;
this.core.settings = config;
await this.core.services.setting.saveSettingData();
this._log("Remote configuration applied.", LOG_LEVEL_NOTICE);
canProceed = true;
return this.core.settings;
}
} while (!canProceed);
}
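The apply-only-differences step above can be read in isolation: narrow the remote tweaks to the keys that must match, keep only those that differ from the current configuration, and merge them over it. A hedged sketch, with a generic pick() standing in for octagonal-wheels' extractObject:

// A stand-in for octagonal-wheels' extractObject: keep only the template's keys.
function pick(template: Record<string, unknown>, source: Record<string, unknown>): Record<string, unknown> {
    return Object.fromEntries(Object.keys(template).map((k) => [k, source[k]]));
}

// Merge only the remote values that differ from the current configuration.
function applyRemoteTweaks(
    config: Record<string, unknown>,
    remoteTweaks: Record<string, unknown>,
    template: Record<string, unknown>
): { config: Record<string, unknown>; changed: boolean } {
    const necessary = pick(template, remoteTweaks);
    const differing = Object.entries(necessary).filter(([key, value]) => config[key] !== value);
    if (differing.length === 0) return { config, changed: false }; // already matching
    return { config: { ...config, ...Object.fromEntries(differing) }, changed: true };
}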
/**
* Run vault initialisation while file watching and sync are suspended.
* @param proc process to execute during initialisation; should return true if the app can continue, false if it cannot.
* @param keepSuspending whether to keep file watching suspended after the process.
* @returns result of the process, or false if error occurs.
*/
async processVaultInitialisation(proc: () => Promise<boolean>, keepSuspending = false) {
try {
// Disable batch saving and file watching during initialisation.
this.settings.batchSave = false;
await this.services.setting.suspendAllSync();
await this.services.setting.suspendExtraSync();
this.settings.suspendFileWatching = true;
await this.saveSettings();
try {
const result = await proc();
return result;
} catch (ex) {
this._log("Error during vault initialisation process.", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
return false;
}
} catch (ex) {
this._log("Error during vault initialisation.", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
return false;
} finally {
if (!keepSuspending) {
// Re-enable file watching after initialisation.
this.settings.suspendFileWatching = false;
await this.saveSettings();
}
}
}
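The try/finally shape of processVaultInitialisation generalises to any "suspend, run, restore" wrapper. A minimal sketch, assuming only an async suspension toggle (the Suspendable interface is illustrative, not the plugin's type):

interface Suspendable {
    setSuspended(flag: boolean): Promise<void>;
}

// Run a critical operation with watchers suspended, restoring them afterwards
// unless the caller wants to keep the suspension (e.g. pending an app restart).
async function withSuspension(
    target: Suspendable,
    proc: () => Promise<boolean>,
    keepSuspending = false
): Promise<boolean> {
    await target.setSuspended(true);
    try {
        return await proc();
    } catch (ex) {
        console.error("Error during the suspended operation", ex);
        return false;
    } finally {
        if (!keepSuspending) await target.setSuspended(false);
    }
}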
/**
* Handle the rebuild everything scheduled operation.
* @returns true if can be continued, false if app restart is needed.
*/
async onRebuildEverythingScheduled() {
const method = await this.dialogManager.openWithExplicitCancel(RebuildEverything);
if (method === "cancelled") {
// Clean up the flag file and restart the app.
this._log("Rebuild everything cancelled by user.", LOG_LEVEL_NOTICE);
await this.cleanupRebuildFlag();
this.services.appLifecycle.performRestart();
return false;
}
const { extra } = method;
await this.adjustSettingToRemoteIfNeeded(extra, this.settings);
return await this.processVaultInitialisation(async () => {
await this.core.rebuilder.$rebuildEverything();
await this.cleanupRebuildFlag();
this._log("Rebuild everything operation completed.", LOG_LEVEL_NOTICE);
return true;
});
}
/**
* Handle the fetch all scheduled operation.
* @returns true if can be continued, false if app restart is needed.
*/
async onFetchAllScheduled() {
const method = await this.dialogManager.openWithExplicitCancel(FetchEverything);
if (method === "cancelled") {
this._log("Fetch everything cancelled by user.", LOG_LEVEL_NOTICE);
// Clean up the flag file and restart the app.
await this.cleanupFetchAllFlag();
this.services.appLifecycle.performRestart();
return false;
}
const { vault, extra } = method;
// If the remote is MinIO, makeLocalChunkBeforeSync is not available (because there is no deduplication on sending).
const makeLocalChunkBeforeSyncAvailable = this.settings.remoteType !== REMOTE_MINIO;
const mapVaultStateToAction = {
identical: {
// If both are identical, there is no need to make local files before sync;
// chunks are made beforehand purely for efficiency.
makeLocalChunkBeforeSync: makeLocalChunkBeforeSyncAvailable,
makeLocalFilesBeforeSync: false,
},
independent: {
// If both are independent, nothing needs to be made before sync.
// Respect the remote state.
makeLocalChunkBeforeSync: false,
makeLocalFilesBeforeSync: false,
},
unbalanced: {
// If both are unbalanced, local files should be made before sync to avoid data loss.
// This also creates chunks (good for efficiency) and metadata, which should then be detected as conflicting.
makeLocalChunkBeforeSync: false,
makeLocalFilesBeforeSync: true,
},
cancelled: {
// Cancelled case, not actually used.
makeLocalChunkBeforeSync: false,
makeLocalFilesBeforeSync: false,
},
} as const;
return await this.processVaultInitialisation(async () => {
await this.adjustSettingToRemoteIfNeeded(extra, this.settings);
// Okay, proceed to fetch everything.
const { makeLocalChunkBeforeSync, makeLocalFilesBeforeSync } = mapVaultStateToAction[vault];
this._log(
`Fetching everything with settings: makeLocalChunkBeforeSync=${makeLocalChunkBeforeSync}, makeLocalFilesBeforeSync=${makeLocalFilesBeforeSync}`,
LOG_LEVEL_INFO
);
await this.core.rebuilder.$fetchLocal(makeLocalChunkBeforeSync, !makeLocalFilesBeforeSync);
await this.cleanupFetchAllFlag();
this._log("Fetch everything operation completed. Vault files will be gradually synced.", LOG_LEVEL_NOTICE);
return true;
});
}
async onSuspendAllScheduled() {
this._log("SCRAM is detected. All operations are suspended.", LOG_LEVEL_NOTICE);
return await this.processVaultInitialisation(async () => {
this._log(
"All operations are suspended as per SCRAM.\nLogs will be written to the file. This might be a performance impact.",
LOG_LEVEL_NOTICE
);
this.settings.writeLogToTheFile = true;
await this.core.services.setting.saveSettingData();
return Promise.resolve(false);
}, true);
}
async verifyAndUnlockSuspension() {
if (!this.settings.suspendFileWatching) {
return true;
}
if (
(await this.core.confirm.askYesNoDialog(
"Do you want to resume file and database processing, and restart obsidian now?",
{ defaultOption: "Yes", timeout: 15 }
)) != "yes"
) {
// TODO: Confirm whether to actually proceed to the next process.
return true;
}
this.settings.suspendFileWatching = false;
await this.saveSettings();
this.services.appLifecycle.performRestart();
return false;
}
private async processFlagFilesOnStartup(): Promise<boolean> {
const isFlagSuspensionActive = await this.isSuspendFlagActive();
const isFlagRebuildActive = await this.isRebuildFlagActive();
const isFlagFetchAllActive = await this.isFetchAllFlagActive();
// TODO: Address the case when both flags are active (very unlikely though).
// if(isFlagFetchAllActive && isFlagRebuildActive) {
// const message = "Rebuild everything and Fetch everything flags are both detected.";
// await this.core.confirm.askSelectStringDialogue(
// "Both Rebuild Everything and Fetch Everything flags are detected. Please remove one of them and restart the app.",
// ["OK"] as const,)
if (isFlagFetchAllActive) {
const res = await this.onFetchAllScheduled();
if (res) {
return await this.verifyAndUnlockSuspension();
}
return false;
}
if (isFlagRebuildActive) {
const res = await this.onRebuildEverythingScheduled();
if (res) {
return await this.verifyAndUnlockSuspension();
}
return false;
}
if (isFlagSuspensionActive) {
const res = await this.onSuspendAllScheduled();
return res;
}
return true;
}
async _everyOnLayoutReady(): Promise<boolean> {
try {
const flagProcessResult = await this.processFlagFilesOnStartup();
return flagProcessResult;
} catch (ex) {
this._log("Something went wrong on FlagFile Handling", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
return true;
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
super.onBindFunction(core, services);
services.appLifecycle.onLayoutReady.addHandler(this._everyOnLayoutReady.bind(this));
}
}
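Each flag check above probes two file names: the machine-oriented original (e.g. FlagFilesOriginal.FETCH_ALL) and a human-readable alias. A small sketch of that pattern, assuming only an exists(path) accessor:

// A scheduled operation can be flagged by either of two file names:
// the original machine-oriented one or a human-readable alias.
async function isFlagActive(
    exists: (path: string) => Promise<boolean>,
    original: string,
    humanReadable?: string
): Promise<boolean> {
    if (await exists(original)) return true;
    return humanReadable ? await exists(humanReadable) : false;
}

// Usage sketch, mirroring isFetchAllFlagActive above:
// const fetchAll = await isFlagActive(
//     (p) => this.core.storageAccess.isExists(normalizePath(p)),
//     FlagFilesOriginal.FETCH_ALL,
//     FlagFilesHumanReadable.FETCH_ALL
// );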

View File

@@ -1,16 +0,0 @@
import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
export class ModuleRemoteGovernor extends AbstractModule implements ICoreModule {
async $$markRemoteLocked(lockByClean: boolean = false): Promise<void> {
return await this.core.replicator.markRemoteLocked(this.settings, true, lockByClean);
}
async $$markRemoteUnlocked(): Promise<void> {
return await this.core.replicator.markRemoteLocked(this.settings, false, false);
}
async $$markRemoteResolved(): Promise<void> {
return await this.core.replicator.markRemoteResolved(this.settings);
}
}

View File

@@ -2,94 +2,133 @@ import { Logger, LOG_LEVEL_NOTICE } from "octagonal-wheels/common/logger";
import { extractObject } from "octagonal-wheels/object";
import {
TweakValuesShouldMatchedTemplate,
CompatibilityBreakingTweakValues,
IncompatibleChanges,
confName,
type TweakValues,
type RemoteDBSettings,
IncompatibleChangesInSpecificPattern,
CompatibleButLossyChanges,
} from "../../lib/src/common/types.ts";
import { escapeMarkdownValue } from "../../lib/src/common/utils.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
import { $msg } from "../../lib/src/common/i18n.ts";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleResolvingMismatchedTweaks extends AbstractModule implements ICoreModule {
async $anyAfterConnectCheckFailed(): Promise<boolean | "CHECKAGAIN" | undefined> {
export class ModuleResolvingMismatchedTweaks extends AbstractModule {
async _anyAfterConnectCheckFailed(): Promise<boolean | "CHECKAGAIN" | undefined> {
if (!this.core.replicator.tweakSettingsMismatched && !this.core.replicator.preferredTweakValue) return false;
const preferred = this.core.replicator.preferredTweakValue;
if (!preferred) return false;
const ret = await this.core.$$askResolvingMismatchedTweaks(preferred);
const ret = await this.services.tweakValue.askResolvingMismatched(preferred);
if (ret == "OK") return false;
if (ret == "CHECKAGAIN") return "CHECKAGAIN";
if (ret == "IGNORE") return true;
}
async $$checkAndAskResolvingMismatchedTweaks(
async _checkAndAskResolvingMismatchedTweaks(
preferred: Partial<TweakValues>
): Promise<[TweakValues | boolean, boolean]> {
const mine = extractObject(TweakValuesShouldMatchedTemplate, this.settings);
const items = Object.entries(TweakValuesShouldMatchedTemplate);
let rebuildRequired = false;
let rebuildRecommended = false;
// Making tables:
let table = `| Value name | This device | Configured | \n` + `|: --- |: --- :|: ---- :| \n`;
// let table = `| Value name | This device | Configured | \n` + `|: --- |: --- :|: ---- :| \n`;
const tableRows = [];
// const items = [mine,preferred]
for (const v of items) {
const key = v[0] as keyof typeof TweakValuesShouldMatchedTemplate;
const valueMine = escapeMarkdownValue(mine[key]);
const valuePreferred = escapeMarkdownValue(preferred[key]);
if (valueMine == valuePreferred) continue;
if (CompatibilityBreakingTweakValues.indexOf(key) !== -1) {
if (IncompatibleChanges.indexOf(key) !== -1) {
rebuildRequired = true;
}
table += `| ${confName(key)} | ${valueMine} | ${valuePreferred} | \n`;
for (const pattern of IncompatibleChangesInSpecificPattern) {
if (pattern.key !== key) continue;
// If a `from` value is supplied, check whether the current value matches it; if so, a rebuild is required.
const isFromConditionMet = "from" in pattern ? pattern.from === mine[key] : false;
// Likewise, if a `to` value is supplied, check whether the preferred value matches it.
const isToConditionMet = "to" in pattern ? pattern.to === preferred[key] : false;
// If either condition is met, a rebuild is required unless the pattern is only a recommendation.
if (isFromConditionMet || isToConditionMet) {
if (pattern.isRecommendation) {
rebuildRecommended = true;
} else {
rebuildRequired = true;
}
}
}
if (CompatibleButLossyChanges.indexOf(key) !== -1) {
rebuildRecommended = true;
}
// table += `| ${confName(key)} | ${valueMine} | ${valuePreferred} | \n`;
tableRows.push(
$msg("TweakMismatchResolve.Table.Row", {
name: confName(key),
self: valueMine,
remote: valuePreferred,
})
);
}
const additionalMessage = rebuildRequired
? `
const additionalMessage =
rebuildRequired && this.core.settings.isConfigured
? $msg("TweakMismatchResolve.Message.WarningIncompatibleRebuildRequired")
: "";
const additionalMessage2 =
rebuildRecommended && this.core.settings.isConfigured
? $msg("TweakMismatchResolve.Message.WarningIncompatibleRebuildRecommended")
: "";
**Note**: We have detected that some values differ in ways that make the local database incompatible with the remote database.
If you choose to use the configured values, the local database will be rebuilt; if you choose to use the values of this device, the remote database will be rebuilt.
Both take a few minutes. Please choose after considering the situation.`
: "";
const table = $msg("TweakMismatchResolve.Table", { rows: tableRows.join("\n") });
const message = `
Your configuration does not match the one on the remote server.
(Which you decided once before, or which was set by the initially synchronised device).
const message = $msg("TweakMismatchResolve.Message.MainTweakResolving", {
table: table,
additionalMessage: [additionalMessage, additionalMessage2].filter((v) => v).join("\n"),
});
Configured values:
const CHOICE_USE_REMOTE = $msg("TweakMismatchResolve.Action.UseRemote");
const CHOICE_USE_REMOTE_WITH_REBUILD = $msg("TweakMismatchResolve.Action.UseRemoteWithRebuild");
const CHOICE_USE_REMOTE_PREVENT_REBUILD = $msg("TweakMismatchResolve.Action.UseRemoteAcceptIncompatible");
const CHOICE_USE_MINE = $msg("TweakMismatchResolve.Action.UseMine");
const CHOICE_USE_MINE_WITH_REBUILD = $msg("TweakMismatchResolve.Action.UseMineWithRebuild");
const CHOICE_USE_MINE_PREVENT_REBUILD = $msg("TweakMismatchResolve.Action.UseMineAcceptIncompatible");
const CHOICE_DISMISS = $msg("TweakMismatchResolve.Action.Dismiss");
${table}
const CHOICE_AND_VALUES = [] as [string, [result: TweakValues | boolean, rebuild: boolean]][];
Please select which one you want to use.
- Use configured: Update the settings of this device with those configured on the remote server.
You should select this if you have changed the settings on **another device**.
- Update with mine: Update the settings on the remote server with the settings of this device.
You should select this if you have changed the settings on **this device**.
- Dismiss: Ignore this message and keep the current settings.
You cannot synchronise until you resolve this issue, unless you enable `Do not check configuration mismatch before replication`.${additionalMessage}`;
const CHOICE_USE_REMOTE = "Use configured";
const CHOICE_USR_MINE = "Update with mine";
const CHOICE_DISMISS = "Dismiss";
const CHOICE_AND_VALUES = [
[CHOICE_USE_REMOTE, preferred],
[CHOICE_USR_MINE, true],
[CHOICE_DISMISS, false],
];
const CHOICES = Object.fromEntries(CHOICE_AND_VALUES) as Record<string, TweakValues | boolean>;
const retKey = await this.core.confirm.confirmWithMessage(
"Tweaks Mismatched or Changed",
message,
Object.keys(CHOICES),
CHOICE_DISMISS,
60
);
if (rebuildRequired) {
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE_WITH_REBUILD, [preferred, true]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE_WITH_REBUILD, [true, true]]);
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE_PREVENT_REBUILD, [preferred, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE_PREVENT_REBUILD, [true, false]]);
} else if (rebuildRecommended) {
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE, [preferred, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE, [true, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE_WITH_REBUILD, [true, true]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE_WITH_REBUILD, [true, true]]);
} else {
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE, [preferred, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE, [true, false]]);
}
CHOICE_AND_VALUES.push([CHOICE_DISMISS, [false, false]]);
const CHOICES = Object.fromEntries(CHOICE_AND_VALUES) as Record<
string,
[TweakValues | boolean, performRebuild: boolean]
>;
const retKey = await this.core.confirm.askSelectStringDialogue(message, Object.keys(CHOICES), {
title: $msg("TweakMismatchResolve.Title.TweakResolving"),
timeout: 60,
defaultAction: CHOICE_DISMISS,
});
if (!retKey) return [false, false];
return [CHOICES[retKey], rebuildRequired];
return CHOICES[retKey];
}
async $$askResolvingMismatchedTweaks(): Promise<"OK" | "CHECKAGAIN" | "IGNORE"> {
async _askResolvingMismatchedTweaks(): Promise<"OK" | "CHECKAGAIN" | "IGNORE"> {
if (!this.core.replicator.tweakSettingsMismatched) {
return "OK";
}
@@ -99,7 +138,7 @@ Please select which one you want to use.
}
const preferred = extractObject(TweakValuesShouldMatchedTemplate, tweaks);
const [conf, rebuildRequired] = await this.core.$$checkAndAskResolvingMismatchedTweaks(preferred);
const [conf, rebuildRequired] = await this.services.tweakValue.checkAndAskResolvingMismatched(preferred);
if (!conf) return "IGNORE";
if (conf === true) {
@@ -116,7 +155,7 @@ Please select which one you want to use.
if (conf) {
this.settings = { ...this.settings, ...conf };
await this.core.replicator.setPreferredRemoteTweakSettings(this.settings);
await this.core.$$saveSettingData();
await this.services.setting.saveSettingData();
if (rebuildRequired) {
await this.core.rebuilder.$fetchLocal();
}
@@ -126,45 +165,83 @@ Please select which one you want to use.
return "IGNORE";
}
async $$checkAndAskUseRemoteConfiguration(
trialSetting: RemoteDBSettings
): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
const replicator = await this.core.$anyNewReplicator(trialSetting);
async _fetchRemotePreferredTweakValues(trialSetting: RemoteDBSettings): Promise<TweakValues | false> {
const replicator = await this.services.replicator.getNewReplicator(trialSetting);
if (!replicator) {
this._log("The remote type is not supported for fetching preferred tweak values.", LOG_LEVEL_NOTICE);
return false;
}
if (await replicator.tryConnectRemote(trialSetting)) {
const preferred = await replicator.getRemotePreferredTweakValues(trialSetting);
if (preferred) {
return await this.$$askUseRemoteConfiguration(trialSetting, preferred);
} else {
this._log("Failed to get the preferred tweak values from the remote server.", LOG_LEVEL_NOTICE);
return preferred;
}
return { result: false, requireFetch: false };
} else {
this._log("Failed to connect to the remote server.", LOG_LEVEL_NOTICE);
return { result: false, requireFetch: false };
this._log("Failed to get the preferred tweak values from the remote server.", LOG_LEVEL_NOTICE);
return false;
}
this._log("Failed to connect to the remote server.", LOG_LEVEL_NOTICE);
return false;
}
async $$askUseRemoteConfiguration(
async _checkAndAskUseRemoteConfiguration(
trialSetting: RemoteDBSettings
): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
const preferred = await this.services.tweakValue.fetchRemotePreferred(trialSetting);
if (preferred) {
return await this.services.tweakValue.askUseRemoteConfiguration(trialSetting, preferred);
}
return { result: false, requireFetch: false };
}
async _askUseRemoteConfiguration(
trialSetting: RemoteDBSettings,
preferred: TweakValues
): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
const items = Object.entries(TweakValuesShouldMatchedTemplate);
let rebuildRequired = false;
let rebuildRecommended = false;
// Making tables:
let table = `| Value name | This device | Stored | \n` + `|: --- |: ---- :|: ---- :| \n`;
// let table = `| Value name | This device | On Remote | \n` + `|: --- |: ---- :|: ---- :| \n`;
let differenceCount = 0;
const tableRows = [] as string[];
// const items = [mine,preferred]
for (const v of items) {
const key = v[0] as keyof typeof TweakValuesShouldMatchedTemplate;
const valuePreferred = escapeMarkdownValue(preferred[key]);
const currentDisp = `${escapeMarkdownValue((trialSetting as TweakValues)?.[key])} |`;
const remoteValueForDisplay = escapeMarkdownValue(preferred[key]);
const currentValueForDisplay = `${escapeMarkdownValue((trialSetting as TweakValues)?.[key])}`;
if ((trialSetting as TweakValues)?.[key] !== preferred[key]) {
if (CompatibilityBreakingTweakValues.indexOf(key) !== -1) {
if (IncompatibleChanges.indexOf(key) !== -1) {
rebuildRequired = true;
}
for (const pattern of IncompatibleChangesInSpecificPattern) {
if (pattern.key !== key) continue;
// If a `from` value is supplied, check whether the current value matches it; if so, a rebuild is required.
const isFromConditionMet =
"from" in pattern ? pattern.from === (trialSetting as TweakValues)?.[key] : false;
// Likewise, if a `to` value is supplied, check whether the preferred value matches it.
const isToConditionMet = "to" in pattern ? pattern.to === preferred[key] : false;
// If either condition is met, a rebuild is required unless the pattern is only a recommendation.
if (isFromConditionMet || isToConditionMet) {
if (pattern.isRecommendation) {
rebuildRecommended = true;
} else {
rebuildRequired = true;
}
}
}
if (CompatibleButLossyChanges.indexOf(key) !== -1) {
rebuildRecommended = true;
}
} else {
continue;
}
table += `| ${confName(key)} | ${currentDisp} ${valuePreferred} | \n`;
tableRows.push(
$msg("TweakMismatchResolve.Table.Row", {
name: confName(key),
self: currentValueForDisplay,
remote: remoteValueForDisplay,
})
);
differenceCount++;
}
@@ -174,33 +251,28 @@ Please select which one you want to use.
}
const additionalMessage =
rebuildRequired && this.core.settings.isConfigured
? `
>[!WARNING]
> Some remote configurations are not compatible with the local database of this device. Rebuilding the local database will be required.
***Please ensure that you have time and are connected to a stable network to apply!***`
? $msg("TweakMismatchResolve.Message.UseRemote.WarningRebuildRequired")
: "";
const additionalMessage2 =
rebuildRecommended && this.core.settings.isConfigured
? $msg("TweakMismatchResolve.Message.UseRemote.WarningRebuildRecommended")
: "";
const message = `
The settings in the remote database are as follows.
If you want to use these settings, please select "Use configured".
If you want to keep the settings of this device, please select "Dismiss".
const table = $msg("TweakMismatchResolve.Table", { rows: tableRows.join("\n") });
${table}
const message = $msg("TweakMismatchResolve.Message.Main", {
table: table,
additionalMessage: [additionalMessage, additionalMessage2].filter((v) => v).join("\n"),
});
>[!TIP]
> If you want to synchronise all settings, please use \`Sync settings via markdown\` after applying minimal configuration with this feature.
${additionalMessage}`;
const CHOICE_USE_REMOTE = "Use configured";
const CHOICE_DISMISS = "Dismiss";
const CHOICE_USE_REMOTE = $msg("TweakMismatchResolve.Action.UseConfigured");
const CHOICE_DISMISS = $msg("TweakMismatchResolve.Action.Dismiss");
// const CHOICE_AND_VALUES = [
// [CHOICE_USE_REMOTE, preferred],
// [CHOICE_DISMISS, false]]
const CHOICES = [CHOICE_USE_REMOTE, CHOICE_DISMISS];
const retKey = await this.core.confirm.askSelectStringDialogue(message, CHOICES, {
title: "Use Remote Configuration",
title: $msg("TweakMismatchResolve.Title.UseRemoteConfig"),
timeout: 0,
defaultAction: CHOICE_DISMISS,
});
@@ -211,4 +283,17 @@ ${additionalMessage}`;
}
return { result: false, requireFetch: false };
}
override onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.tweakValue.fetchRemotePreferred.setHandler(this._fetchRemotePreferredTweakValues.bind(this));
services.tweakValue.checkAndAskResolvingMismatched.setHandler(
this._checkAndAskResolvingMismatchedTweaks.bind(this)
);
services.tweakValue.askResolvingMismatched.setHandler(this._askResolvingMismatchedTweaks.bind(this));
services.tweakValue.checkAndAskUseRemoteConfiguration.setHandler(
this._checkAndAskUseRemoteConfiguration.bind(this)
);
services.tweakValue.askUseRemoteConfiguration.setHandler(this._askUseRemoteConfiguration.bind(this));
services.replication.checkConnectionFailure.addHandler(this._anyAfterConnectCheckFailed.bind(this));
}
}
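The rebuild classification above can be summarised: a changed key listed in IncompatibleChanges forces a rebuild; a hit in IncompatibleChangesInSpecificPattern forces or merely recommends one depending on isRecommendation; CompatibleButLossyChanges only recommends. A condensed sketch with illustrative local types (the real lists live in lib/src/common/types.ts):

type PatternRule = { key: string; from?: unknown; to?: unknown; isRecommendation?: boolean };
type Verdict = "required" | "recommended" | "none";

// Classify a single differing tweak key, mirroring the branching above.
function classifyTweakChange(
    key: string,
    mine: unknown,
    preferred: unknown,
    incompatible: string[],
    patterns: PatternRule[],
    lossy: string[]
): Verdict {
    if (mine === preferred) return "none"; // not a difference at all
    let verdict: Verdict = incompatible.includes(key) ? "required" : "none";
    for (const p of patterns) {
        if (p.key !== key) continue;
        const fromHit = "from" in p ? p.from === mine : false;
        const toHit = "to" in p ? p.to === preferred : false;
        if (!fromHit && !toHit) continue;
        verdict = p.isRecommendation && verdict !== "required" ? "recommended" : "required";
    }
    if (lossy.includes(key) && verdict === "none") verdict = "recommended";
    return verdict;
}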

Some files were not shown because too many files have changed in this diff.