Compare commits

...

236 Commits

Author SHA1 Message Date
vorotamoroz
b3a0deb0e3 bump 2025-10-31 11:36:55 +01:00
vorotamoroz
b9138d1395 ### Fixed
- We can now enter fields in some dialogues correctly on mobile devices.

### New features

- We can now use a TURN server for P2P connections.
2025-10-31 11:33:06 +01:00
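For context on the TURN item above, a TURN relay is supplied to a WebRTC connection through the standard `RTCPeerConnection` API. The following is a minimal sketch; the URLs and credentials are placeholders, not the plug-in's actual settings.

```ts
// Minimal sketch: configuring STUN and TURN servers for a WebRTC peer
// connection. TURN relays traffic when no direct P2P path can be
// established. Every endpoint and credential here is a placeholder.
const peer = new RTCPeerConnection({
    iceServers: [
        { urls: "stun:stun.example.com:3478" },
        {
            urls: "turn:turn.example.com:3478",
            username: "livesync",
            credential: "secret",
        },
    ],
});
```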
vorotamoroz
7eb9807aa5 Fix import path 2025-10-30 09:35:17 +01:00
vorotamoroz
91a4f234f1 bump 2025-10-30 09:30:38 +01:00
vorotamoroz
82f2860938 ### Fixed
- P2P Replication got more robust and stable.

### Breaking changes

- Sending configuration via a Peer-to-Peer connection is not compatible with older versions.
2025-10-30 09:29:51 +01:00
vorotamoroz
5443317157 merged 2025-10-30 02:38:30 +01:00
vorotamoroz
47fe9d2af3 Adjust method name to actual behaviour 2025-10-30 02:28:18 +01:00
vorotamoroz
8b81570035 Grammatical fixes 2025-10-22 14:02:52 +01:00
vorotamoroz
d3e50421e4 Merge branch 'svelteui' of https://github.com/vrtmrz/obsidian-livesync into svelteui 2025-10-22 14:02:20 +01:00
vorotamoroz
12605f4604 Add updates 2025-10-22 14:02:00 +01:00
vorotamoroz
2c0dd82886 Add updates 2025-10-22 13:56:24 +01:00
vorotamoroz
f5315aacb8 v0.25.23.beta1
### Fixed (This should be backported to 0.25.22 if the beta phase is prolonged)

- Larger files no longer fail to create chunks while preparing `Reset Synchronisation on This Device`.

### Behaviour changes

- The setup wizard is now more `goal-oriented`; brand-new screens have been introduced.
- `Fetch everything` and `Rebuild everything` are now `Reset Synchronisation on This Device` and `Overwrite Server Data with This Device's Files`.
- Remote configuration and E2EE settings are now separated into their own modal dialogues.
- Peer-to-Peer settings are also separated into their own modal dialogue.
- The Setup-URI and the issue report are no longer copied to the clipboard automatically. Instead, a copy dialogue and buttons let us copy them explicitly.
- Optional features are no longer introduced during setup, `Reset Synchronisation on This Device`, or `Overwrite Server Data with This Device's Files`.
- We can no longer perform `Fetch everything` and `Rebuild everything` (removed, hence the old names) without restarting Obsidian.

### Miscellaneous

- Setup QR Code generation has been separated into `src/lib/src/API/processSetting.ts`. Please use it as a subrepository if you want to generate QR codes in your own application.
- Setup-URI handling has also been separated into `src/lib/src/API/processSetting.ts`.
- Some direct accesses to Web APIs are now wrapped in the services layer.

### Dependency updates

- Many dependencies are updated. Please see `package.json`.
- While upgrading TypeScript, many `Uint8Array<ArrayBuffer>` and `Uint8Array` type mismatches were fixed.
2025-10-22 13:56:15 +01:00
vorotamoroz
5a93066870 bump 2025-10-15 01:02:24 +09:00
vorotamoroz
3a73073505 ### Fixed
- Fixed a bug that caused wrong event bindings and flag inversion (#727)
  - This caused the following issues:
    - In some cases, settings changes were not applied or saved correctly.
    - Automatic synchronisation did not begin correctly.

### Improved
- Overly large diffs are no longer shown in the file comparison view, for performance reasons.
2025-10-15 01:00:24 +09:00
vorotamoroz
ee0c0ee611 Add a note to keep me from forgetting 2025-10-13 11:27:03 +09:00
vorotamoroz
d7ea30e304 Merge pull request #726 from vrtmrz/disenchant
Disenchant
2025-10-13 10:40:20 +09:00
vorotamoroz
2b9ded60f7 Add notes 2025-10-13 10:38:32 +09:00
vorotamoroz
40508822cf Bump and add readme 2025-10-13 10:13:45 +09:00
vorotamoroz
6f938d5f54 Update submodule commit reference 2025-10-13 09:48:25 +09:00
vorotamoroz
51dc44bfb0 bump 0.25.21.beta2 2025-10-08 05:01:07 +01:00
vorotamoroz
7c4f2bf78a ### Fixed
- Fixed wrong event type bindings (which caused some events not to be handled correctly).
- Fixed a timing issue detected in StorageEventManager
    - When multiple events for the same file fired in quick succession, metadata kept stale information. This induced unexpected wrong notifications and write prevention.
2025-10-08 05:00:42 +01:00
vorotamoroz
67c9b4cf06 bump for beta 2025-10-06 10:45:59 +01:00
vorotamoroz
4808876968 - Rename methods for automatic binding checking.
- Add automatic binding checks.
2025-10-06 10:40:25 +01:00
vorotamoroz
cccff21ecc Merge branch 'main' into disenchant and run prettier 2025-10-04 17:59:42 +09:00
vorotamoroz
d8415a97e5 Move some dependency to devDependency 2025-10-04 17:49:02 +09:00
vorotamoroz
85e9aa2978 For convenience 2025-10-04 17:20:59 +09:00
vorotamoroz
b4eb0e4868 Improved: copy a dev build to vault folder 2025-10-04 17:12:37 +09:00
vorotamoroz
3ea348f468 Merge pull request #716 from chriscross12324/bugfix/setting-panel-hierarchy
bugfix/setting-panel-hierarchy: Fixed visual inconsistencies
2025-10-04 17:07:25 +09:00
vorotamoroz
81362816d6 disenchanting and dispelling the nightmarish implicit something
Indeed, even if this changeset is mostly another nightmare itself, it might be in beta for a while.
2025-10-03 14:59:14 +01:00
Chris Coulthard
d6efe4510f bugfix/setting-panel-hierarchy: Updated styling to improve consistency. Updated various setting panes (fixed hierarchy issues that caused some titles to overlap rather than bump, and whitespace formatting) 2025-09-28 19:27:14 -07:00
vorotamoroz
ca5a7ae18c bump 2025-09-26 11:42:06 +01:00
vorotamoroz
a27652ac34 ### Fixed
- Chunk fetching no longer reports errors when the fetched chunk could not be saved (#710).
    - The fetched chunk is just used temporarily.
- Chunk fetching now reports errors when the fetched chunk is definitely corrupted (#710, #712).
- It no longer detects files that the plug-in has modified.
    - This may reduce unnecessary file comparisons and unexpected file states.

### Improved

- The remote database configuration is now checked respecting the CouchDB version (#714).
2025-09-26 11:40:41 +01:00
vorotamoroz
29b89efc47 ## 0.25.19
### Improved
- Encoding/decoding of chunk data and encryption/decryption are now performed with native functions (where available).
2025-09-18 12:29:09 +01:00
vorotamoroz
ef3eef2d08 Bump 2025-09-17 09:07:49 +01:00
vorotamoroz
ffbbe32e36 ### Fixed
- Property encryption detection now works correctly (it was not broken in Self-hosted LiveSync itself, but it was not working correctly as a library).
- Initialisation of the chunk splitter is now reliably performed.
- DirectFileManipulator now works fine (as a library).
    - The old `DirectFileManipulatorV1` has been removed.

### Refactored

- Removed some unnecessary intermediate files.
2025-09-17 09:05:16 +01:00
vorotamoroz
0a5371cdee bump 2025-09-16 10:47:00 +01:00
vorotamoroz
466bb142e2 Refactored: removed some unnecessary intermediate file 2025-09-16 10:45:14 +01:00
vorotamoroz
d394a4ce7f ### Fixed
- Information-level logs are no longer produced while toggling `Show only notifications` in the settings (#708).
- Ignore filters for hidden file sync now work correctly (#709).
2025-09-16 10:30:48 +01:00
vorotamoroz
71ce76e502 Merge pull request #704 from Gron-HD/main
Add Docker Compose troubleshooting for own server setup
2025-09-16 15:33:14 +09:00
vorotamoroz
ae7a7dd456 Merge pull request #706 from abhith/patch-1
docs(README): update links for Customisation Sync and Hidden File Sync
2025-09-16 15:31:00 +09:00
vorotamoroz
4048186bb5 bump 2025-09-04 11:46:50 +01:00
vorotamoroz
2b94fd9139 ## Improved
- Improved connectivity for P2P connections.
- The connection to the signalling server can now be closed while the app is in the background or when explicitly disconnected.
  - These features use a patch that has not been incorporated upstream.
2025-09-04 11:44:49 +01:00
vorotamoroz
ec72ece86d bump 2025-09-03 10:12:51 +01:00
vorotamoroz
e394a994c5 Fix typo 2025-09-03 10:11:46 +01:00
vorotamoroz
aa23b6a39a ### Improved
- Now we can configure `forcePathStyle` for bucket synchronisation (#707).
2025-09-03 10:08:49 +01:00
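For context, `forcePathStyle` is the standard S3-client option that addresses buckets as `https://endpoint/bucket/key` instead of `https://bucket.endpoint/key`, which self-hosted object storage such as MinIO typically requires. A minimal sketch with the AWS SDK v3 client follows; the endpoint, region, and credentials are placeholders, and the plug-in's own client wiring may differ.

```ts
import { S3Client } from "@aws-sdk/client-s3";

// Path-style addressing: requests go to https://endpoint/bucket/key.
// Endpoint, region, and credentials below are placeholders.
const client = new S3Client({
    region: "us-east-1",
    endpoint: "https://minio.example.com:9000",
    forcePathStyle: true,
    credentials: { accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_KEY" },
});
```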
vorotamoroz
58e328a591 bump 2025-09-02 10:27:23 +01:00
vorotamoroz
1730c39d70 ### Fixed
- Handling of opening IndexedDB has been made reliable.
- The migration check for corrupted-file detection has been fixed.
    - It now reports conflicted files as non-recoverable, but notes them as such.
    - It no longer errors on not-found files.
2025-09-02 10:24:13 +01:00
Abhith Rajan
dfeac201a2 docs(README): update links for Customisation Sync and Hidden File Sync sections 2025-09-01 21:54:11 +04:00
vorotamoroz
b42152db5e bump 2025-09-01 12:28:01 +09:00
vorotamoroz
171cfc0a38 ### Fixed
- Conflict resolving dialogue now properly displays the changeset name instead of A or B (#691).
2025-09-01 12:23:38 +09:00
vorotamoroz
d2787bdb6a Update older dependencies 2025-09-01 12:21:12 +09:00
Ron Gerber
44b022f003 Add Docker Compose troubleshooting for own server setup
Added another option for the init command that passes the variables directly. When I tried the command above, I got the mentioned error.
2025-08-30 01:00:58 +02:00
vorotamoroz
58845276e7 bump 2025-08-29 11:48:33 +01:00
vorotamoroz
a2cc093a9e ### Fixed
- Fixed an issue with automatic synchronisation starting (#702).
2025-08-29 11:46:11 +01:00
vorotamoroz
fec203a751 bump 2025-08-28 10:27:31 +01:00
vorotamoroz
1a06837769 ### Fixed
- Automatic translation detection on the first launch now works correctly (#630).
- No errors are shown during synchronisation while offline (unless explicitly requested) (#699).
- Some checks that were missing during automatic synchronisation are now performed correctly.
2025-08-28 10:26:17 +01:00
vorotamoroz
18d1ce8ec8 bump 2025-08-26 11:17:09 +01:00
vorotamoroz
2221d8c4e8 Update dependency 2025-08-26 11:14:30 +01:00
vorotamoroz
08548f8630 ### New experimental feature
- We can now perform Garbage Collection (Beta2) without rebuilding the entire database, and can also fetch the database.

### Fixed

- Resetting the bucket now properly clears all uploaded files.

### Refactored

- Some files have been moved to better reflect their purpose and improve maintainability.
- The extensive LiveSyncLocalDB has been split into separate files for each role.
2025-08-26 11:09:33 +01:00
vorotamoroz
5d24c3b984 bump 2025-08-20 10:36:47 +01:00
vorotamoroz
de8fd43c8b ### Fixed
- CORS-checking messages now use replacements.
- Configuring CORS settings via the UI now respects the existing rules.
- Startup checking now works correctly again; it performs the migration check serially, and then also fixes starting LiveSync or start-up sync (#696).
- The status line in the editor now supports 'Bases'.
2025-08-20 10:36:22 +01:00
vorotamoroz
ed88761eaa bump 2025-08-18 06:32:19 +01:00
vorotamoroz
4dcb37f5a2 ## 0.25.8
### New feature
- Insecure chunk detection has been implemented.

### Fixed
- Unexpected `Failed to obtain PBKDF2 salt` or similar errors during bucket-synchronisation no longer occur.
- Unexpected long delays for chunk-missing documents when using bucket-synchronisation have been resolved.
- Fetched remote chunks are now properly stored in the local database if `Fetch chunks on demand` is enabled.
- The 'fetch' dialogue's message has been refined.
- Corrupted documents are no longer written to the storage during the boot sequence.

### Refactored
- Type errors have been corrected.
2025-08-18 06:26:50 +01:00
vorotamoroz
db0562eda1 A minor note added 2025-08-15 11:06:55 +01:00
vorotamoroz
b610d5d959 bump 2025-08-15 11:04:34 +01:00
vorotamoroz
5abba74f3b ## 0.25.7
### Fixed

- Off-loaded chunking has been fixed to ensure proper functionality (#693).
- Chunk document ID assignment has been fixed.
- The replication-prevention message shown when a version upgrade is detected has been improved (#686).
- `Keep A` and `Keep B` in the conflict-resolution dialogue have been renamed to `Use Base` and `Use Conflicted` (#691).

### Improved

- Documents whose metadata and content size do not match are now detected and reported, and prevented from being applied to the storage.

### New Features

- `Scan for Broken files` has been implemented on `Hatch` -> `TroubleShooting`.

### Refactored

- Off-loaded processes have been refactored for better maintainability.
- Removed unused code.
2025-08-15 10:51:39 +01:00
vorotamoroz
021c1fccfe Merge pull request #667 from h-exx/main
Edited the Docker Compose documentation to make it work
2025-08-14 14:24:14 +09:00
vorotamoroz
0a30af479f Fix Fetch chunks on demand 2025-08-09 02:17:50 +09:00
vorotamoroz
a9c3f60fe7 bump 2025-08-09 01:51:17 +09:00
vorotamoroz
f996e056af ### Fixed
- Storage scanning no longer occurs when `Suspend file watching` is enabled (including boot-sequence).

### Improved
- Saving notes and files now consumes less memory.
- Chunk caching is now more efficient.
- Both of these may help with #692, #680, and more.

### Changed
- `Incubate Chunks in Document` (also known as `Eden`) is now fully sunset.
- The `Compute revisions for chunks` setting has also been removed.
- As mentioned, `Memory cache size (by total characters)` has been removed.

### Refactored
- A significant refactoring of the core codebase is underway (please refer to the release note).
2025-08-09 01:45:41 +09:00
vorotamoroz
1073ee9e30 bump 2025-07-29 12:50:25 +01:00
vorotamoroz
f94653e60e ## 0.25.4
- The PBKDF2Salt is no longer corrupted when attempting replication while the device is offline (#686)
2025-07-29 12:45:28 +01:00
vorotamoroz
3dccf2076f Add draft design documents 2025-07-28 19:14:22 +09:00
vorotamoroz
3e78fe03e1 bump 2025-07-22 04:36:22 +01:00
vorotamoroz
4aa8fc3519 ## 0.25.3
### Fixed
- Now the `Doctor` at migration will save the configuration.
2025-07-22 04:35:40 +01:00
vorotamoroz
ba3d2220e1 bump again 2025-07-19 18:08:27 +09:00
vorotamoroz
8057b516af bump 2025-07-19 17:51:53 +09:00
vorotamoroz
f2b4431182 ## 0.25.1
19th July, 2025

### Refined and New Features
- Fetching the remote database on `RedFlag` now also retrieves remote configurations optionally.
- The setup wizard using Set-up URI and QR code has been improved.

### Changes
- The Set-up URI is now encrypted with a new encryption algorithm (mostly the same as `V2`).
2025-07-19 17:26:52 +09:00
vorotamoroz
badec46d9a 0.25.0 released 2025-07-19 15:21:36 +09:00
vorotamoroz
355e41f488 bump for beta 2025-07-14 00:36:09 +09:00
vorotamoroz
e0e7e1b5ca ### Fixed
- The encryption algorithm now uses HKDF with a master key.
- `Fetch everything from the remote` now works correctly.
- Extra log messages during QR code decoding have been removed.

### Changed
- Some settings have been moved to the `Patches` pane.

### Behavioural and API Changes
- `DirectFileManipulatorV2` now requires new settings (as you may already know, E2EEAlgorithm).
- The database version has been increased to `12` from `10`.
2025-07-14 00:33:40 +09:00
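For context on the HKDF change noted above, here is a minimal sketch of deriving an encryption key from a master key with the standard Web Crypto API. The salt, `info` label, and AES-GCM parameters are illustrative assumptions, not the plug-in's actual scheme.

```ts
// Sketch: derive an AES-GCM key from a master key via HKDF (Web Crypto).
// The salt and info label are illustrative, not the real parameters.
async function deriveEncryptionKey(masterKey: Uint8Array, salt: Uint8Array): Promise<CryptoKey> {
    const base = await crypto.subtle.importKey("raw", masterKey, "HKDF", false, ["deriveKey"]);
    return crypto.subtle.deriveKey(
        { name: "HKDF", hash: "SHA-256", salt, info: new TextEncoder().encode("e2ee-v2") },
        base,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"]
    );
}
```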
vorotamoroz
ce4b61557a bump 2025-07-10 11:24:59 +01:00
vorotamoroz
52b02f3888 ## 0.24.31
### Fixed

- The description of `Enable Developers' Debug Tools.` has been refined.
- Automatic conflict checking and resolution has been improved.
- The conflict-resolution dialogue will no longer be shown for multiple files at once.
2025-07-10 11:12:44 +01:00
vorotamoroz
7535999388 Update updates.md 2025-07-09 22:28:50 +09:00
vorotamoroz
dccf8580b8 Update updates.md 2025-07-09 22:27:52 +09:00
vorotamoroz
e3964f3c5d bump 2025-07-09 12:48:37 +01:00
vorotamoroz
375e7bde31 ### New Feature
- New chunking algorithm `V3: Fine deduplication` has been added, and will be recommended after updates.
- New language `ko` (Korean) has been added.
- Chinese (Simplified) translation has been updated.

### Fixed

- Numeric settings no longer lose focus while the value is being changed.

### Improved
- All translations have been rewritten in YAML format, for easier management and contribution.
- Doctor recommendations are now shown in a user-friendly notation.

### Refactored

- The never-ending `ObsidianLiveSyncSettingTag.ts` has finally been separated into a file for each pane.
- Some commented-out code has been removed.
2025-07-09 12:15:59 +01:00
Jacques Faulkner
341f0ab12d Updated Table of Contents 2025-06-27 10:39:13 +01:00
Jacques Faulkner
39340c1e1b Sorry 1 more, reworded the creating directories comments 2025-06-26 14:45:04 +01:00
Jacques Faulkner
55cdc58857 Edited warning to make a bit more sense 2025-06-26 14:42:23 +01:00
Jacques Faulkner
4f1a9dc4e8 Forgot a # 2025-06-25 18:54:22 +01:00
Jacques Faulkner
013818b7d0 Update setup_own_server.md 2025-06-25 18:53:31 +01:00
vorotamoroz
1179438df8 bump 2025-06-20 12:43:39 +01:00
vorotamoroz
47ea8f6859 ## 0.24.29
### Fixed

- Synchronisation with buckets now works correctly, regardless of whether a prefix is set or the bucket has been (re-) initialised (#664).
- An information message is now displayed again while any automatic synchronisation is enabled (#662).

### Tidied up

- Import paths have been tidied up.
2025-06-20 12:43:15 +01:00
vorotamoroz
670fe16486 Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-06-16 02:53:30 +01:00
vorotamoroz
3f0093916c Add some note 2025-06-16 02:52:59 +01:00
vorotamoroz
9503474d06 bump 2025-06-15 18:49:31 +09:00
vorotamoroz
ddf7b243e4 ## 0.24.28
### Fixed

- Batch Update is no longer available in LiveSync mode to avoid unexpected behaviour. (#653)
- Now compatible with Cloudflare R2 again for bucket synchronisation.
- Prevention of broken behaviour due to database connection failures has been added (#649).
2025-06-15 18:49:16 +09:00
vorotamoroz
f37561c3c1 Update Library 2025-06-15 18:24:19 +09:00
vorotamoroz
f01429decc Merge pull request #633 from jmarmstrong1207/patch-1
Add simple docker compose to the setup guide
2025-06-10 11:27:22 +09:00
vorotamoroz
c0fcb66924 bump 2025-06-10 02:52:45 +01:00
vorotamoroz
5f76b9809b ## 0.24.27
### Improved

- We can now use a path prefix for bucket synchronisation.
- The "Use Request API to avoid `inevitable` CORS problem" option has been promoted to a normal setting rather than a niche patch.

### Fixed

- Switching replicators is now applied immediately, without the need to restart Obsidian.

### Tidied up

- Some dependencies have been updated to the latest version.
2025-06-10 02:51:47 +01:00
vorotamoroz
d61d6fec37 bump manifest 2025-05-14 13:55:42 +01:00
vorotamoroz
9fdd622824 bump 2025-05-14 13:55:17 +01:00
vorotamoroz
3b8d03a189 ## 0.24.26
### New Features

- The display language now changes automatically according to the Obsidian language setting.
- We can now limit which files are synchronised, even among hidden files.
- "Use Request API to avoid `inevitable` CORS problem" has been implemented.
- `Show status icon instead of file warnings banner` has been implemented.

### Improved

- All regular expressions can now be inverted by prefixing them with `!!` (see the sketch after this changelog entry).

### Fixed

- Unexpected files are no longer gathered during hidden file sync.
- `\n` and other new-line characters are no longer broken during bucket synchronisation.
- We can purge the remote bucket again if we are using MinIO instead of AWS S3 or Cloudflare R2.
- Purging the remote bucket is now more reliable.
- Some wrong messages have been fixed.

### Behaviour changed

- Descending into deeper directories to gather hidden files is now limited by ignore filters prefixed with `/` or `\/`.

### Etcetera

- Some code has been tidied up.
- Fewer warnings are suppressed, in favour of safer coding.
- Dependent libraries have been updated to the latest version.
- Some build processes have been separated to `pre` and `post` processes.
2025-05-14 13:11:03 +01:00
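The `!!` inversion mentioned in the Improved list above could be reduced to something like the following hypothetical matcher; the plug-in's actual filter code may be structured differently.

```ts
// Hypothetical illustration of the `!!` prefix: strip it, evaluate the
// remaining pattern, and negate the result.
function matchesFilter(pattern: string, path: string): boolean {
    const inverted = pattern.startsWith("!!");
    const body = inverted ? pattern.slice(2) : pattern;
    const hit = new RegExp(body).test(path);
    return inverted ? !hit : hit;
}

console.log(matchesFilter("\\.md$", "note.md"));   // true
console.log(matchesFilter("!!\\.md$", "note.md")); // false
```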
James Armstrong
1f1a39e5a0 Add simple docker compose 2025-04-28 06:10:29 -07:00
vorotamoroz
d0e92cff7a refine readme 2025-04-23 06:45:03 +01:00
vorotamoroz
5addddc792 Update doc 2025-04-23 05:51:51 +01:00
vorotamoroz
d978892661 Update docs 2025-04-23 05:11:59 +01:00
vorotamoroz
cfb061a6a2 add note to troubleshooting 2025-04-23 05:10:40 +01:00
vorotamoroz
381055fc93 bump 2025-04-22 11:29:42 +01:00
vorotamoroz
37d12916fc ## 0.24.25
### Improved

- Peer-to-peer synchronisation has become more robust.

### Fixed

- Falsy values in settings are no longer broken during set-up via QR code generation.

### Refactored

- Some `window` references now point to `globalThis`.
- Some sloppy imports have been fixed.
- The server-side implementation `Synchromesh` is now suffixed with `deno` instead of `server`.
2025-04-22 11:28:55 +01:00
vorotamoroz
944aa846c4 bump 2025-04-15 11:10:37 +01:00
vorotamoroz
abca808e29 ### Fixed
- JSON files containing `\n` are no longer broken during bucket synchronisation. (#623)
- Custom headers and JWT tokens are now correctly sent to the server during configuration checking. (#624)

### Improved

- Bucket synchronisation has been enhanced for better performance and reliability.
    - Fewer duplicated chunks are now sent to the server.
    - Fetching conflicted files from the server is now more reliable.
    - Dependent libraries have been updated to the latest version.
2025-04-15 11:09:49 +01:00
vorotamoroz
90bb610133 Update submodule 2025-04-14 03:23:24 +01:00
vorotamoroz
9c5e9fe63b Conclusion stated. 2025-04-11 14:13:22 +01:00
vorotamoroz
00dfae24d7 bump 2025-04-10 14:28:03 +01:00
vorotamoroz
d8a41fe45d ### New Feature
- Now, we can send custom headers to the server.
- Authentication with JWT in CouchDB is now supported.

### Improved

- The set-up QR Code can now also be shown from the settings dialogue.
- Conflict checking to prevent unexpected overwriting during the boot-up process has become considerably faster.

### Fixed

- Some bugs in the Dev and Testing modules have been fixed.
2025-04-10 14:24:33 +01:00
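At the HTTP level, the two new features above amount to attaching extra headers to each CouchDB request. A minimal sketch follows; the URL, header name, and token are placeholders, not the plug-in's actual wiring.

```ts
// Sketch: a CouchDB request carrying a user-defined header and a JWT
// bearer token. URL, header name, and token are placeholders.
async function fetchChanges(jwtToken: string): Promise<unknown> {
    const res = await fetch("https://couch.example.com/vault/_changes", {
        headers: {
            Authorization: `Bearer ${jwtToken}`,
            "X-Custom-Header": "some-value",
        },
    });
    return res.json();
}
```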
vorotamoroz
30467d1c25 Add link to P2P pseudo client. 2025-04-04 12:30:21 +01:00
vorotamoroz
f8351f1d45 Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-04-04 11:52:12 +01:00
vorotamoroz
5924af98ab Update Library (Probably no effect) 2025-04-04 11:52:05 +01:00
vorotamoroz
2769b61da4 Update setup_own_server.md
Add note; #609
2025-04-04 18:24:13 +09:00
vorotamoroz
bb4409221d Update README.md 2025-04-03 20:57:15 +09:00
vorotamoroz
f398c14200 Bump for release-mistake. 2025-04-01 10:42:29 +01:00
vorotamoroz
27d58508dc Missed 2025-04-01 10:38:12 +01:00
vorotamoroz
d4dea5b226 bump 2025-04-01 10:21:38 +01:00
vorotamoroz
c79dc30cba ## 0.24.21
### Fixed

- Conflicted files are no longer handled in the boot-up process; no more unexpected overwriting.
    - This ignores `Always overwrite with a newer file` and always prevents overwriting, for safety. Please pick the version manually or open the file.
- Some log messages on conflict resolution have been corrected.
- Automatic merge notifications, displayed on the grounds of `same`, have been degraded to logs.

### Improved

- We can now fetch the remote database while keeping local files completely intact.
    - With the new option, all files are stored in the local database before fetching, and will be merged automatically or detected as conflicts.
- The dialogue presenting options when performing `Fetch` is now more informative.

### Refactored

- Some class methods have had their arguments fixed to be more consistent.
- Types have been defined for some conditional results.
2025-04-01 10:20:21 +01:00
vorotamoroz
b3119ee8a9 bump and update dependencies 2025-03-24 18:55:02 +09:00
vorotamoroz
2a1d71da5c Improved: show details of TypeError using Obsidian API. 2025-03-24 12:04:58 +09:00
vorotamoroz
24f31ed19e Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-03-19 04:43:06 +01:00
vorotamoroz
a982629ae6 Update draft : possibly I should share cSpell dictionary. 2025-03-19 04:42:50 +01:00
vorotamoroz
85140aecab Update README.md 2025-03-05 20:40:04 +09:00
vorotamoroz
3f2e23ee88 bump 2025-03-05 11:13:58 +00:00
vorotamoroz
6049c19e8a ## 0.24.19
### New Feature

- Now we can generate a QR Code for transferring the configuration to another device.
2025-03-05 11:12:00 +00:00
vorotamoroz
65648683a3 bump 2025-02-28 12:02:45 +00:00
vorotamoroz
5d70f2c1e9 ## 0.24.18
### Fixed

- Chunk creation errors are no longer raised after switching `Compute revisions for chunks`.
- Some invisible files can now be handled correctly (e.g., `writing-goals-history.csv`).
- Fetching the configuration from the server now saves it immediately (if we are not in the wizard).

### Improved

- The mismatched-configuration dialogue is now more informative and has been rewritten to be more user-friendly.
- A configuration mismatch can now be applied without rebuilding (at our own risk).
- Rebuilding decisions are now more fine-grained.

### Improved internally

- Translations can be nested; i.e., task: `Some procedure`, check: `%{task} checking`, checkfailed: `%{check} failed` produces `Some procedure checking failed` (see the sketch after this changelog entry).
2025-02-28 11:58:15 +00:00
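A hypothetical re-implementation of the nested-translation resolution described above (the plug-in's real code may differ):

```ts
// Resolve %{key} placeholders recursively, with a guard against cycles.
const messages: Record<string, string> = {
    task: "Some procedure",
    check: "%{task} checking",
    checkfailed: "%{check} failed",
};

function resolve(key: string, seen: Set<string> = new Set()): string {
    if (seen.has(key)) return key; // break cyclic references
    seen.add(key);
    return (messages[key] ?? key).replace(/%\{(.+?)\}/g, (_m, inner: string) => resolve(inner, seen));
}

console.log(resolve("checkfailed")); // "Some procedure checking failed"
```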
vorotamoroz
cbcfdc453e update default vault and bump for release 2025-02-27 13:39:28 +00:00
vorotamoroz
a4eb21593c bump 2025-02-27 13:24:51 +00:00
vorotamoroz
05eb2c8262 ## 0.24.16
### Improved

#### Peer-to-Peer

- Peer-to-peer synchronisation now checks that the settings are compatible with each other.
- Peer-to-peer synchronisation now handles the platform and detects pseudo-clients.

#### General

- A new migration method, called `Doctor`, has been implemented.

- The minimum interval for replication triggered by an event is now configurable.
- Some detail notes have been added, and the nuance of the `Report` in the settings dialogue, which was less informative, has been changed.

### Behaviour and default changed

- `Compute revisions for chunks` is enabled again by default; it is necessary for garbage collection of chunks.

### Refactored

- Platform-specific code is more cleanly separated. `node` modules are no longer used in the browser and Obsidian.
2025-02-27 13:23:11 +00:00
vorotamoroz
fecefa3631 ### Fixed
- Even without WeakRef, a polyfill is now used and the whole thing works without error. However, if you can switch the WebView engine, switching to one that supports WeakRef is recommended.

And bumped.
2025-02-20 10:40:18 +00:00
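A sketch of the kind of fallback described above, assuming a simple strong-reference shim (which cannot be garbage-collected away, hence the advice to prefer an engine with native WeakRef):

```ts
// If WeakRef is unavailable, substitute a shim with the same shape.
class StrongRef<T extends object> {
    constructor(private readonly target: T) {}
    deref(): T | undefined {
        return this.target;
    }
}

const RefImpl: new <T extends object>(t: T) => { deref(): T | undefined } =
    typeof WeakRef !== "undefined" ? WeakRef : StrongRef;

const ref = new RefImpl({ payload: 1 });
console.log(ref.deref()); // { payload: 1 }
```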
vorotamoroz
f8c4d5ccb0 Add a bit more. 2025-02-18 13:48:41 +00:00
vorotamoroz
e63e79bc8e Add note of flag files. 2025-02-18 13:36:15 +00:00
vorotamoroz
ed76125f3d bump 2025-02-18 13:02:54 +00:00
vorotamoroz
70f4e23474 ## 0.24.14
### Fixed

- Resolving conflicts of JSON files (and sensibly merging them) now works fine again!
    - And failure logs are more informative.
- Releasing event listeners when unwatching the local database is now more robust.

### Refactored

- JSON file conflict resolution dialogue has been rewritten into svelte v5.
- Upgrade eslint.
- Remove unnecessary pragma comments for eslint.
2025-02-18 12:59:18 +00:00
vorotamoroz
f6d5b78cc8 bump 2025-02-17 11:35:34 +00:00
vorotamoroz
405624b51b ## 0.24.13
### Fixed
#### General Replication
- Unexpected errors no longer occur when replication is stopped for some reason (e.g., network disconnection).
#### Peer-to-Peer Synchronisation
- The set-up process will not receive data from unexpected sources.
- No more resource leaks while `broadcasting changes` is enabled.
- Logs are less verbose.
- Received data is now correctly dispatched to other devices.
- The `Timeout` error is now more informative.
- Timeout errors no longer occur when reporting progress to other devices.
- Decision dialogues about the same matter are no longer shown multiple times at once.
- Disconnection of the peer-to-peer synchronisation is now more robust and less error-prone.
#### Webpeer
- Now we can toggle Peers' configuration.
### Refactored
- Cross-platform compatibility layer has been improved.
- Common events are moved to the common library.
- Displaying the replication status of the peer-to-peer synchronisation has been separated from the main log logic.
- Some file names have been changed to be more consistent.
2025-02-17 11:33:35 +00:00
vorotamoroz
90c0ff22b9 Add paths for future maintenance. 2025-02-17 11:30:42 +00:00
vorotamoroz
67568ea886 bump 2025-02-14 11:28:01 +00:00
vorotamoroz
cc29b4058d 0.24.12
Fixed
- Unnecessary acknowledgements are no longer sent when starting peer-to-peer synchronisation.

Refactored
- The platform impedance-matching layer has been improved.
- Some UIs have been made isomorphic between Obsidian and web applications (for `webpeer`).
2025-02-14 11:15:22 +00:00
vorotamoroz
4e8243b3d5 Update release.yml 2025-02-13 22:02:22 +09:00
vorotamoroz
4eb1787784 bump 2025-02-13 12:58:15 +00:00
vorotamoroz
1cd1465f2c 0.24.11
Improved

- New Translation: `es` (Spanish) by @zeedif (Thank you so much)!
- All messages are now selectable and copyable, including on iPhone, iPad, and Android devices, so we can copy or share them easily.

New Feature

- Peer-to-Peer Synchronisation has been implemented!

Fixed

- No more memory or resource leaks when the plug-in is disabled.
- Deleted chunks are now correctly detected on conflict resolution, and we are guided to resurrect them.
- The hanging issue during the initial synchronisation has been fixed.
- Some unnecessary logs have been removed.
- All modal dialogues are now correctly closed when the plug-in is disabled.

Refactor

- Several interfaces have been moved to a separate library.
- Translations have been moved to per-language files; during the build, they are merged into one file.
- Non-mobile-friendly code has been removed and replaced with safer code.
- Started writing the platform impedance-matching layer.
- Svelte has been updated to v5.
- Some functions have more robust type definitions.
- Terser optimisation has been slightly improved.
- During the build, an analysis meta-file of the bundled code is generated.
2025-02-13 12:48:00 +00:00
vorotamoroz
45ceca8bb6 run prettier and update lib 2025-02-12 03:44:46 +00:00
vorotamoroz
7b385aab9e Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-02-12 03:22:50 +00:00
vorotamoroz
98411e5f48 Merge pull request #573 from zeedif/main
Migrate UI strings to $tf for translation support
2025-02-12 12:21:44 +09:00
vorotamoroz
b6687e2fb0 Adding bundle size analysis 2025-02-12 03:18:59 +00:00
Zeedif
658cbb7ded Merge branch 'main' into main 2025-02-06 20:46:26 -06:00
vorotamoroz
08a48154fa Add note 2025-02-05 01:35:17 +00:00
vorotamoroz
62501a5940 bump again 2025-02-05 01:33:31 +00:00
vorotamoroz
ccb3dd52de Dep update 2025-02-05 01:10:03 +00:00
vorotamoroz
3e5f4c8946 bump 2025-02-05 01:04:21 +00:00
vorotamoroz
54e64c59a9 - Prettified some source code.
- Fixed the issue where the filename is shown as `undefined`.
- Fixed the issue where files transferred at short intervals were not reflected.
- Updated dependencies (including new translations)
2025-02-05 00:58:51 +00:00
CRuiz
588840ff8b Replace $tf with $msg 2025-02-04 08:22:43 -06:00
CRuiz
e6b8dfb279 Update the URI recommendation message in the module migration 2025-01-22 15:57:36 -06:00
CRuiz
73782c5389 Add translation ids 2025-01-22 13:41:18 -06:00
vorotamoroz
f2b667d75e bump 2025-01-22 11:56:36 +00:00
vorotamoroz
9b1588a65b ## 0.24.8
### Fixed
-   Some parallel-processing tasks are now performed more safely.
-   Some error messages have been fixed.
### Improved
-   Synchronisation is now more efficient and faster.
-   Saving chunks is a bit more robust.
### New Feature
-   We can remove orphaned chunks again, now!
2025-01-22 11:55:56 +00:00
vorotamoroz
0629bc04bb Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2025-01-07 11:33:13 +00:00
vorotamoroz
b0e97e6c96 Bump 2025-01-07 11:31:58 +00:00
vorotamoroz
7f853b0222 - 0.24.7
- Fixed (Security)
    - Assigning IDs to chunks has been corrected for more safety.
- Fixed
    - The conflict resolution dialogue has been fixed.
    - Resolving conflicts by timestamp has been fixed.
- Improved
    - Notifications can now be suppressed for hidden-file updates.
    - The old xxhash and SHA-1 are no longer used for generating the chunk ID.
2025-01-07 11:28:56 +00:00
vorotamoroz
3d4ad4a3b4 Update README.md
Add acknowledgements
2025-01-02 15:02:35 +09:00
vorotamoroz
4b1fff852a Merge pull request #544 from dcvdiego/main
make self-hosted docs slightly more readable
2024-12-23 12:28:37 +09:00
dcvdiego
08d7d24baf Update setup_own_server.md 2024-12-21 16:26:55 +00:00
vorotamoroz
9db3c3df0a bump 2024-12-14 02:32:07 +09:00
vorotamoroz
672940ad6f Fixed: No more empty and stalled log pane. 2024-12-14 02:31:31 +09:00
vorotamoroz
2338601fae 0.24.5 (by Library Fixing)
Fixed
- Fixed some wrong behaviour when comparing JSON objects

Improved
- Reactive values are now shown more smoothly (logs, the status line).
2024-12-13 10:59:05 +00:00
vorotamoroz
3e657b38a9 bump 2024-12-12 11:12:04 +00:00
vorotamoroz
21861d8c51 ## 0.24.4
### Fixed

-   Fixed so many inefficient and buggy modules inherited from the past.

### Improved

-   Tasks are now executed in an efficient asynchronous library.
-   On-demand chunk fetching is now more efficient and keeps the interval between requests.
    -   This will reduce the load on the server and the network.
    -   And, safe for the Cloudant.
2024-12-12 11:10:50 +00:00
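"Keeps the interval between requests" suggests a throttle along these lines; the following is a hypothetical sketch, not the plug-in's actual implementation, and the endpoint and names are placeholders.

```ts
// Each on-demand chunk fetch waits until at least `intervalMs` has
// elapsed since the previous one, smoothing load on the server.
function makeThrottledFetcher(intervalMs: number) {
    let last = 0;
    return async (chunkId: string): Promise<Response> => {
        const wait = last + intervalMs - Date.now();
        if (wait > 0) await new Promise((r) => setTimeout(r, wait));
        last = Date.now();
        return fetch(`https://couch.example.com/vault/${encodeURIComponent(chunkId)}`);
    };
}
```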
vorotamoroz
3bb4aba395 Update dependencies 2024-12-12 11:10:29 +00:00
vorotamoroz
751de5a13e bump 2024-12-09 00:46:15 +00:00
vorotamoroz
29229f809b Update the doc 2024-12-09 00:40:15 +00:00
vorotamoroz
2d0dc2a389 Merge pull request #543 from Volkor3-16/main
Fix up wording of messages
2024-12-09 09:12:51 +09:00
Volkor
6cbe319b80 remove a unneeded "the" 2024-12-04 02:02:25 +11:00
Volkor
e9fe58f818 run pretty - revert crlf to lf
Since most modern IDEs handle LF on Windows fine, and the entire project is in LF already, I'll set this back.
2024-12-04 01:13:12 +11:00
Volkor
61524e1c44 FIX: manual setup button 2024-12-04 00:17:41 +11:00
vorotamoroz
c25eaa09c9 fixed: Fixed font-family of in-editor status. 2024-12-03 10:15:53 +00:00
Volkor
71987e6814 Change a few more lines to more correctly reflect the settings 2024-11-27 23:39:47 +11:00
vorotamoroz
b15d0710e5 bump 2024-11-21 11:40:37 +00:00
vorotamoroz
9d304b3233 v0.24.2
Rewritten

-   Hidden File Sync now respects file changes in the storage, rather than simply comparing modified times.
    -   This makes hidden file sync more robust and reliable.

Fixed

-   `Scan hidden files before replication` is now configurable again.
-   Some unexpected errors are now handled more gracefully.
-   Meaningless event passing during boot sequence is now prevented.
-   Error handling for non-existing files has been fixed.
-   Hidden files will not be batched to avoid the potential error.
    -   This behaviour had been causing the error in the previous versions in specific situations.
-   The log which checks automatic conflict resolution is now at verbose level.
-   The replication log (skipping non-targeted files) shows the correct information.
-   The dialogue asking about enabling optional features during `Rebuild Everything` now refrains from showing the `overwrite` option.
    -   The rebuilding device is the first one, so that option would be meaningless.
-   Files with different modified times but identical content are no longer processed repeatedly.
-   Some unexpected errors which occurred after terminating the plug-in are now avoided.

Improved

-   JSON files are now transferred more efficiently.
    -   They are now transferred in finer chunks, which makes the transfer more efficient.
2024-11-21 11:40:15 +00:00
dcvdiego
d062b13040 make docs slightly more readable 2024-11-19 16:28:55 +00:00
Volkor
7eceab59af Fix up wording of messages 2024-11-18 21:55:22 +11:00
vorotamoroz
ed5cb3e043 Format utility 2024-11-12 01:09:07 +00:00
vorotamoroz
574fdf9202 Format submodule 2024-11-12 01:08:35 +00:00
vorotamoroz
9ec7b809a9 I have adapted the new-line-char to the codebase. But I am not sure if this is the right thing to do. 2024-11-12 01:04:46 +00:00
vorotamoroz
4d302aff9d Merge pull request #533 from doublethefish/chore/formatting
chore(formatting): adds prettier for consistent coding style
2024-11-12 09:53:50 +09:00
vorotamoroz
b70009f4a9 Merge branch 'main' into chore/formatting 2024-11-12 09:37:52 +09:00
Frank Harrison
c24ee32f37 chore(formatting): ignores generated code in pretty 2024-11-11 09:45:53 +00:00
Frank Harrison
012d0aa4df chore(format): also format json (no effect)
Adds json-file formatting to the prettier command. Currently this has no
effect, but if the `tabWidth` is changed to 2 and the `printWidth`
reduced to 88, then it will also impact the json files and keep the
formatting consistent across code and data.
2024-11-11 09:41:56 +00:00
Frank Harrison
5c97e5b672 chore(format): no intentional behaviour change - runs pretty 2024-11-11 09:41:52 +00:00
Frank Harrison
6e1eb36f3b chore(format): adds prettier and commands to run it
... including ci/cd check-only commands.

The code is quite complex and missing a lot of dev-ops types checks and
standards. One of the key problems with codebases like this is
on-boarding new developers to the codebase (like myself). When there is
a consistent and enforced coding-style (irrespective of what the coding
style is) it makes it _significantly_ easier for collaborators and
maintainers to get on with the job in hand. It also, from a day-2-day
developer perspective, significantly reduces cognitive overhead re
reading code.
Finally this is a "trial balloon" PR, if this patch is accepted I will
likely do more work on testing and docs for the project.

- The new prettier config is a non-standard setup, but a close-match to
  how the code _currently_ looks.
- 120 col-width print width (instead of the better and more
  information-dense 88), this is so the diff after applying prettier to
  the code is less disruptive, whilst still showing the benefits of using
  a prettier.
- We use `tabWidth` setting of 4 as the code uses that more common
  setting instead of the more compact 2 spaces - note that 2 often leads
  to more readable and compact code.
- We enforce trailing commas, as that seems to be the norm in this
  code-base. We choose the `es5` standard here.
- We enforce tailing semi-colons (`semi`) as the majority of code used
  that flavour of `js`/`ts`.
- For now we only run on code and not json files.

This is designed such that `npm run pretty` re-formats the code for
development, and when integrated with ci/cd, `prettyCheck` will return
non-zero exit codes when formatting doesn't match the coding standards.
2024-11-11 09:41:35 +00:00
vorotamoroz
8809aee327 Add default values to a new SetupURI. 2024-11-11 08:05:10 +00:00
vorotamoroz
fc04c557fc Merge branch 'main' of https://github.com/vrtmrz/obsidian-livesync 2024-11-11 01:23:07 +00:00
vorotamoroz
115a0d2d8a bump 2024-11-11 01:22:40 +00:00
vorotamoroz
2c97289ec8 Fixed
-   Vault History can now show the correct match-or-not information for each file and the database, even for binary files.
-   `Sync settings via markdown` is now hidden during the setup wizard.
-   Verify and Fix will ignore the hidden files if the hidden file sync is disabled.

New feature
-   Now we can fetch the tweaks from the remote database while the setting dialogue and wizard are processing.

Improved
-   More things are moved to the modules.
    -   Includes the main codebase. Now `main.ts` is almost a stub.
-   EventHub is now more robust and typesafe.
2024-11-11 00:58:31 +00:00
vorotamoroz
6d472d17fd Fix output order 2 2024-11-09 10:31:34 +09:00
vorotamoroz
3a3aabfd11 Fix scripting 2024-11-09 10:21:05 +09:00
vorotamoroz
8b45dd1d24 .. 2024-11-08 11:12:35 +00:00
vorotamoroz
a2b36ccf31 v0.24.0! 2024-10-28 11:18:29 +09:00
vorotamoroz
25e30fa09d Merge tag '0.24.0.dev-rc8' 2024-10-28 10:24:38 +09:00
vorotamoroz
8f5bc387b4 Update manifest-beta.json 2024-10-25 20:44:32 +09:00
vorotamoroz
5afe24c460 0.24.0.dev-rc8 2024-10-25 12:39:32 +01:00
vorotamoroz
658a09f1cc Update manifest-beta.json 2024-10-24 20:46:08 +09:00
vorotamoroz
a9020a3aea 0.24.0.dev-rc7 2024-10-24 12:41:50 +01:00
vorotamoroz
293c731437 Update manifest-beta.json 2024-10-22 18:11:18 +09:00
vorotamoroz
5b4ae37030 0.24.0.dev-rc6 2024-10-22 10:08:20 +01:00
vorotamoroz
9e8d126259 Update manifest-beta.json 2024-10-21 17:52:28 +09:00
vorotamoroz
6d244a6e34 0.24.0.dev-rc5 2024-10-21 09:47:09 +01:00
vorotamoroz
1f0ad4eb1e Update manifest-beta.json 2024-10-18 19:23:48 +09:00
vorotamoroz
e0e0ab0426 0.24.0.dev-rc4 2024-10-18 11:14:58 +01:00
vorotamoroz
5023d6da0b Merge pull request #474 from nyawox/per-file-sync-grammar
fix: per-file customization sync description grammar
2024-10-18 11:43:03 +09:00
vorotamoroz
49160c7d57 Merge branch 'main' into per-file-sync-grammar 2024-10-18 11:42:49 +09:00
vorotamoroz
4434224c29 Merge pull request #479 from fuhrysteve/patch-1
add username and password to setup URI instructions
2024-10-18 11:40:23 +09:00
vorotamoroz
cf3b9e5522 Add manifest beta 2024-10-17 10:26:19 +01:00
vorotamoroz
7ca5ac5ac7 0.24.0.dev-rc3 2024-10-17 10:19:08 +01:00
vorotamoroz
095a3d20fb 0.24.0.dev-rc2 2024-10-17 09:57:42 +01:00
vorotamoroz
89e23b1bf4 Preparing v0.24.0 2024-10-16 12:44:07 +01:00
vorotamoroz
48315d657d bump 2024-09-24 14:02:53 +01:00
vorotamoroz
b73ca73776 Refined:
- Setting dialogue very slightly refined.
  - The hodgepodge inside the `Hatch` pane has been sorted into more explicit categorised panes.
  - Applying the settings will now be more informative.

New features:
- Word-segmented chunk building based on the user's language.

Fixed:
- Chunks sent via `Send chunk in bulk` are now buffered to avoid out-of-memory errors.
- `Send chunk in bulk` is back to default disabled.
- Merging conflicts of JSON files now works fine even if they contain `null`.
Development:
- Implemented the logic for automatically generating the documentation stub for the settings dialogue.
2024-09-24 14:00:44 +01:00
vorotamoroz
48e4d57278 bump 2024-09-08 17:58:10 +09:00
vorotamoroz
7eae25edd0 - Fixed:
  - Case-insensitive file handling
    - Full-lower-case files are no longer created during database checking.
  - Bulk chunk transfer
    - The default value will automatically adjust to an acceptable size when using IBM Cloudant.
2024-09-08 17:55:04 +09:00
vorotamoroz
3285c1694b bump 2024-09-07 01:45:12 +09:00
vorotamoroz
ede126d7d4 - 0.23.21:
- New Features:
    - Case-insensitive file handling
      - Files can now be handled case-insensitively.
      - This behaviour can be modified in the settings under `Handle files as Case-Sensitive` (Default: Prompt, Enabled for previous behaviour).
    - Improved chunk revision fixing
        - Revisions for chunks can now be fixed for faster chunk creation.
        - This can be adjusted in the settings under `Compute revisions for chunks` (Default: Prompt, Enabled for previous behaviour).
    - Bulk chunk transfer
      - Chunks can now be transferred in bulk during uploads.
      - This feature is enabled by default through `Send chunks in bulk`.
    - Creation of missing chunks without storing notes
      - Missing chunks can be created without storing notes, enhancing efficiency for first synchronisation or after prolonged periods without synchronisation.
  - Improvements:
    - File status scanning on the startup
      - Quite significant performance improvements.
      - No more missing scans of some files.
    - Status in editor enhancements
      - Significant performance improvements in the status display within the editor.
      - Notifications for files that will not be synchronised will now be properly communicated.
    - Encryption and Decryption
      - These processes are now performed in background threads to ensure fast and stable transfers.
    - Verify and repair all files
      - Got faster through parallel checking.
    - Migration on update
      - Migration messages and wizards have become more helpful.
  - Behavioural changes:
    - Chunk size adjustments
      - Large chunks will no longer be created for older, stable files, addressing storage consumption issues.
    - Flag file automation
      - Confirmation will be shown and we can cancel it.
  - Fixed:
    - Database File Scanning
      - All files in the database will now be enumerated correctly.
  - Miscellaneous
    - Dependency updated.
    - Tree shaking is now left to terser instead of esbuild.
2024-09-07 01:43:21 +09:00
Stephen J. Fuhry
f778107727 add username and password to setup URI instructions 2024-08-07 14:43:38 -04:00
vorotamoroz
630889680e bump 2024-07-31 02:32:02 +01:00
vorotamoroz
e46714e0f9 Fixed:
- The Remote Storage Limit Notification dialogue has been fixed; the chosen value is now saved.
Improved:
- The Enlarging button on the enlarging threshold dialogue now displays the new value.
2024-07-31 02:31:13 +01:00
nyawox
12d825ea49 fix: per-file customization sync description grammar 2024-07-26 23:38:01 +09:00
174 changed files with 41010 additions and 14882 deletions


@@ -1,5 +0,0 @@
node_modules
build
.eslintrc.js.bak
src/lib/src/patches/pouchdb-utils
esbuild.config.mjs


@@ -2,7 +2,9 @@
"root": true,
"parser": "@typescript-eslint/parser",
"plugins": [
"@typescript-eslint"
"@typescript-eslint",
"eslint-plugin-svelte",
"eslint-plugin-import"
],
"extends": [
"eslint:recommended",
@@ -15,6 +17,18 @@
"tsconfig.json"
]
},
"ignorePatterns": [
"**/node_modules/*",
"**/jest.config.js",
"src/lib/coverage",
"src/lib/browsertest",
"**/test.ts",
"**/tests.ts",
"**/**test.ts",
"**/**.test.ts",
"esbuild.*.mjs",
"terser.*.mjs"
],
"rules": {
"no-unused-vars": "off",
"@typescript-eslint/no-unused-vars": [
@@ -23,12 +37,22 @@
"args": "none"
}
],
"no-unused-labels": "off",
"@typescript-eslint/ban-ts-comment": "off",
"no-prototype-builtins": "off",
"@typescript-eslint/no-empty-function": "off",
"require-await": "warn",
"no-async-promise-executor": "off",
"@typescript-eslint/require-await": "warn",
"@typescript-eslint/no-misused-promises": "warn",
"@typescript-eslint/no-floating-promises": "warn",
"no-async-promise-executor": "warn",
"@typescript-eslint/no-explicit-any": "off",
"@typescript-eslint/no-unnecessary-type-assertion": "error"
"@typescript-eslint/no-unnecessary-type-assertion": "error",
"no-constant-condition": [
"error",
{
"checkLoops": false
}
]
}
}


@@ -17,7 +17,7 @@ jobs:
- name: Use Node.js
uses: actions/setup-node@v1
with:
node-version: '18.x' # You might need to adjust this value to your own version
node-version: '22.x' # You might need to adjust this value to your own version
# Get the version number and put it in a variable
- name: Get Version
id: version

.gitignore (vendored): 7 changes

@@ -9,8 +9,15 @@ package-lock.json
# build
main.js
main_org.js
main_org_*.js
*.js.map
meta.json
meta-*.json
# obsidian
data.json
.vscode
# environment variables
.env

.prettierignore (new file): 2 changes

@@ -0,0 +1,2 @@
pouchdb-browser.js
main_org.js

.prettierrc (new file): 7 changes

@@ -0,0 +1,7 @@
{
"trailingComma": "es5",
"tabWidth": 4,
"printWidth": 120,
"semi": true,
"endOfLine": "lf"
}


@@ -1,30 +1,39 @@
<!-- For translation: 20240227r0 -->
# Self-hosted LiveSync
[Japanese docs](./README_ja.md) - [Chinese docs](./README_cn.md).
Self-hosted LiveSync is a community-implemented synchronization plugin, available on every obsidian-compatible platform and using CouchDB or Object Storage (e.g., MinIO, S3, R2, etc.) as the server.
Self-hosted LiveSync is a community-developed synchronisation plug-in available on all Obsidian-compatible platforms. It leverages robust server solutions such as CouchDB or object storage systems (e.g., MinIO, S3, R2, etc.) to ensure reliable data synchronisation.
Additionally, it supports peer-to-peer synchronisation using WebRTC now (experimental), enabling you to synchronise your notes directly between devices without relying on a server.
![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
Note: This plugin cannot synchronise with the official "Obsidian Sync".
>[!IMPORTANT]
> This plug-in is not compatible with the official "Obsidian Sync" and cannot synchronise with it.
## Features
- Synchronise vaults efficiently with minimal traffic.
- Handle conflicting modifications effectively.
- Automatically merge simple conflicts.
- Use open-source solutions for the server.
- Compatible solutions are supported.
- Support end-to-end encryption.
- Synchronise settings, snippets, themes, and plug-ins via [Customisation Sync (Beta)](docs/settings.md#6-customization-sync-advanced) or [Hidden File Sync](docs/settings.md#7-hidden-files-advanced).
- Enable WebRTC peer-to-peer synchronisation without requiring a `host` (Experimental).
- This feature is still in the experimental stage. Please exercise caution when using it.
- WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
- Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
- [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
- [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
- A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
- For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).
- Synchronize vaults very efficiently with less traffic.
- Good at conflicted modification.
- Automatic merging for simple conflicts.
- Using OSS solution for the server.
- Compatible solutions can be used.
- Supporting End-to-end encryption.
- Synchronisation of settings, snippets, themes, and plug-ins, via [Customization sync(Beta)](#customization-sync) or [Hidden File Sync](#hiddenfilesync)
- WebClip from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
This plug-in might be useful for researchers, engineers, and developers with a need to keep their notes fully self-hosted for security reasons. Or just anyone who would like the peace of mind of knowing that their notes are fully private.
This plug-in may be particularly useful for researchers, engineers, and developers who need to keep their notes fully self-hosted for security reasons. It is also suitable for anyone seeking the peace of mind that comes with knowing their notes remain entirely private.
>[!IMPORTANT]
> - Before installing or upgrading this plug-in, please back your vault up.
> - Do not enable this plugin with another synchronization solution at the same time (including iCloud and Obsidian Sync).
> - This is a synchronization plugin. Not a backup solution. Do not rely on this for backup.
> - Before installing or upgrading this plug-in, please back up your vault.
> - Do not enable this plug-in alongside another synchronisation solution at the same time (including iCloud and Obsidian Sync).
> - For backups, we also provide a plug-in called [Differential ZIP Backup](https://github.com/vrtmrz/diffzip).
## How to use
@@ -43,9 +52,11 @@ This plug-in might be useful for researchers, engineers, and developers with a n
1. [Setup CouchDB on fly.io](docs/setup_flyio.md)
2. [Setup your CouchDB](docs/setup_own_server.md)
2. Configure plug-in in [Quick Setup](docs/quick_setup.md)
> [!TIP]
> Now, fly.io has become not free. Fortunately, even though there are some issues, we are still able to use IBM Cloudant. Here is [Setup IBM Cloudant](docs/setup_cloudant.md). It will be updated soon!
> Fly.io is no longer free. Fortunately, despite some issues, we can still use IBM Cloudant. Refer to [Setup IBM Cloudant](docs/setup_cloudant.md).
> And also, we can use peer-to-peer synchronisation without a server. Or very cheap Object Storage -- Cloudflare R2 can be used for free.
> HOWEVER, most importantly, we can use the server that we trust. Therefore, please set up your own server.
> CouchDB can be run on a Raspberry Pi. (But please be careful about the security of your server).
## Information in StatusBar
@@ -75,10 +86,16 @@ Synchronization status is shown in the status bar with the following icons.
To prevent file and database corruption, please wait to stop Obsidian until all progress indicators have disappeared as possible (The plugin will also try to resume, though). Especially in case of if you have deleted or renamed files.
## Tips and Troubleshooting
If you are having problems getting the plugin working see: [Tips and Troubleshooting](docs/troubleshooting.md)
If you are having problems getting the plugin working see: [Tips and Troubleshooting](docs/troubleshooting.md).
## Acknowledgements
The project has been in continual progress and harmony thanks to:
- Many [Contributors](https://github.com/vrtmrz/obsidian-livesync/graphs/contributors).
- Many [GitHub Sponsors](https://github.com/sponsors/vrtmrz#sponsors).
- JetBrains Community Programs / Support for Open-Source Projects. <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.png" alt="JetBrains logo" height="24">
May those who have contributed be honoured and remembered for their kindness and generosity.
## License

README_es.md (new file): 93 changes

@@ -0,0 +1,93 @@
<!-- For translation: 20240227r0 -->
# Self-hosted LiveSync
[Documentación en inglés](./README_ja.md) - [Documentación en japonés](./README_ja.md) - [Documentación en chino](./README_cn.md).
Self-hosted LiveSync es un plugin de sincronización implementado por la comunidad, disponible en todas las plataformas compatibles con Obsidian y utiliza CouchDB o Almacenamiento de Objetos (por ejemplo, MinIO, S3, R2, etc.) como servidor.
![Demostración de Obsidian Live Sync](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
Nota: Este plugin no puede sincronizarse con el "Obsidian Sync" oficial.
## Características
- Sincroniza bóvedas de manera eficiente con menos tráfico.
- Buen manejo de modificaciones en conflicto.
- Fusión automática para conflictos simples.
- Uso de soluciones de código abierto para el servidor.
- Pueden usarse soluciones compatibles.
- Soporte de cifrado de extremo a extremo.
- Sincronización de configuraciones, fragmentos, temas y complementos a través de [Sincronización de personalización \(Beta\)](#customization-sync) o [Sincronización de archivos ocultos](#hiddenfilesync)
- WebClip de [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
Este plugin puede ser útil para investigadores, ingenieros y desarrolladores que necesitan mantener sus notas totalmente autoalojadas por razones de seguridad, o para aquellos que deseen tener la tranquilidad de saber que sus notas son totalmente privadas.
>[!IMPORTANTE]
> - Antes de instalar o actualizar este plugin, realice un respaldo de su bóveda.
> - No active este plugin junto con otra solución de sincronización al mismo tiempo (incluyendo iCloud y Obsidian Sync).
> - Este es un plugin de sincronización, no una solución de respaldo. No confíe en él para realizar respaldos.
## Cómo usar
### Configuración en 3 minutos - CouchDB en fly.io
**Recomendado para principiantes**
[![Configuración de LiveSync en Fly.io 2024 usando Google Colab](https://img.youtube.com/vi/7sa_I1832Xc/0.jpg)](https://www.youtube.com/watch?v=7sa_I1832Xc)
1. [Configurar CouchDB en fly.io](docs/setup_flyio_es.md)
2. Configurar el plugin en [Configuración rápida](docs/quick_setup_es.md)
### Configuración manual
1. Configurar el servidor
1. [Configurar CouchDB en fly.io](docs/setup_flyio_es.md)
2. [Configurar su CouchDB](docs/setup_own_server_es.md)
2. Configura el plugin en [Configuración rápida](docs/quick_setup_es.md)
> [!CONSEJO]
> Actualmente, fly.io ya no es gratuito. Afortunadamente, aunque hay algunos problemas, aún podemos usar IBM Cloudant. Aquí está como [Configurar IBM Cloudant](docs/setup_cloudant.md). ¡Se actualizará pronto!
## Información en la barra de estado
El estado de sincronización se muestra en la barra de estado con los siguientes iconos.
- Indicador de actividad
- 📲 Solicitud de red
- Estado
- ⏹️ Detenido
- 💤 LiveSync activado. Esperando cambios
- ⚡️ Sincronización en progreso
- ⚠ Ocurrió un error
- Indicador estadístico
- ↑ Chunks y metadatos subidos
- ↓ Chunks y metadatos descargados
- Indicador de progreso
- 📥 Elementos transferidos sin procesar
- 📄 Operación de base de datos en curso
- 💾 Procesos de escritura en almacenamiento en curso
- ⏳ Procesos de lectura en almacenamiento en curso
- 🛫 Procesos de lectura en almacenamiento pendientes
- 📬 Procesos de lectura en almacenamiento por lotes
- ⚙️ Procesos de almacenamiento de archivos ocultos en curso o pendientes
- 🧩 Chunks en espera
- 🔌 Elementos de personalización en curso (Configuración, fragmentos y plugins)
Para prevenir la corrupción de archivos y bases de datos, antes de detener Obsidian espere hasta que todos los indicadores de progreso hayan desaparecido (el plugin también intentará reanudar, sin embargo). Especialmente en caso de que haya eliminado o renombrado archivos.
## Tips and Troubleshooting
If you are having trouble getting the plugin working, see: [Tips and Troubleshooting](docs/troubleshooting_es.md).
## Acknowledgements
The project has progressed and been maintained in harmony thanks to:
- Many [Contributors](https://github.com/vrtmrz/obsidian-livesync/graphs/contributors)
- Many [GitHub Sponsors](https://github.com/sponsors/vrtmrz#sponsors)
- JetBrains community programmes / Open Source Project Support <img src="https://resources.jetbrains.com/storage/products/company/brand/logos/jetbrains.png" alt="JetBrains logo." height="24">
May those who have contributed be honoured and remembered for their kindness and generosity.
## License
Licensed under the MIT License.

View File

@@ -25,10 +25,10 @@ npm run buildDev
## Making messages translatable
1. Find the message that you want to have translated.
2. Change the literal to a tagged template literal using `$f`, like below.
2. Change the literal to use `$tf`, like below.
```diff
- Logger("Could not determine passphrase to save data.json! You probably make the configuration sure again!", LOG_LEVEL_URGENT);
+ Logger($f`Could not determine passphrase to save data.json! You probably make the configuration sure again!`, LOG_LEVEL_URGENT);
+ Logger($tf('someKeyForPassphraseError'), LOG_LEVEL_URGENT);
```
3. Make the PR to `https://github.com/vrtmrz/obsidian-livesync`.
4. Follow the steps of "Add translations for already defined terms" to add the translations.

BIN docs/all_toggles.png (new binary file, 7.3 KiB; not shown)

View File

@@ -0,0 +1,122 @@
# [WITHDRAWN] Chunk Aggregation by Prefix
## Goal
To address the "document explosion" and storage bloat issues caused by the current chunking mechanism, while preserving the benefits of content-addressable storage and efficient delta synchronisation. This design aims to significantly reduce the number of documents in the database and simplify Garbage Collection (GC).
## Motivation
Our current synchronisation solution splits files into content-defined chunks, with each chunk stored as a separate document in CouchDB, identified by its hash. This architecture effectively leverages CouchDB's replication for automatic deduplication and efficient transfer.
However, this approach faces significant challenges as the number of files and edits increases:
1. **Document Explosion:** A large vault can generate millions of chunk documents, severely degrading CouchDB's performance, particularly during view building and replication.
2. **Storage Bloat & GC Difficulty:** Obsolete chunks generated during edits are difficult to identify and remove. Since CouchDB's deletion (`_deleted: true`) is a soft delete, and compaction is a heavy, space-intensive operation, unused chunks perpetually consume storage, making GC impractical for many users.
3. **The "Eden" Problem:** A previous attempt, "Keep newborn chunks in Eden", aimed to mitigate this by embedding volatile chunks within the parent document. While it reduced the number of standalone chunks, it introduced a new issue: the parent document's history (`_revs_info`) became excessively large, causing its own form of database bloat and making compaction equally necessary but difficult to manage.
This new design addresses the root cause—the sheer number of documents—by aggregating chunks into sets.
## Prerequisites
- The new implementation must maintain the core benefit of deduplication to ensure efficient synchronisation.
- The solution must not introduce a single point of bottleneck and should handle concurrent writes from multiple clients gracefully.
- The system must provide a clear and feasible strategy for Garbage Collection.
- The design should be forward-compatible, allowing for a smooth migration path for existing users.
## Outlined Methods and Implementation Plans
### Abstract
This design introduces a two-tiered document structure to manage chunks: **Index Documents** and **Data Documents**. Chunks are no longer stored as individual documents. Instead, they are grouped into `Data Documents` based on a common hash prefix. The existence and location of each chunk are tracked by `Index Documents`, which are also grouped by the same prefix. This approach dramatically reduces the total document count.
### Detailed Implementation
**1. Document Structure:**
- **Index Document:** Maps chunk hashes to their corresponding Data Document ID. Identified by a prefix of the chunk hash.
- `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
- Content:
```json
{
"_id": "idx:a9f1b",
"_rev": "...",
"chunks": {
"a9f1b12...": "dat:a9f1b-001",
"a9f1b34...": "dat:a9f1b-001",
"a9f1b56...": "dat:a9f1b-002"
}
}
```
- **Data Document:** Contains the actual chunk data as base64-encoded strings. Identified by a prefix and a sequential number.
- `_id`: `dat:{prefix}-{sequence}` (e.g., `dat:a9f1b-001`)
- Content:
```json
{
"_id": "dat:a9f1b-001",
"_rev": "...",
"chunks": {
"a9f1b12...": "...", // base64 data
"a9f1b34...": "..." // base64 data
}
}
```
**2. Configuration:**
- `chunk_prefix_length`: The number of characters from the start of a chunk hash to use as a prefix (e.g., `5`). This determines the granularity of aggregation.
- `data_doc_size_limit`: The maximum size for a single Data Document to prevent it from becoming too large (e.g., 1MB). When this limit is reached, a new Data Document with an incremented sequence number is created.
**3. Write/Save Operation Flow:**
When a client creates new chunks (a TypeScript sketch follows this list):
1. For each new chunk, determine its hash prefix.
2. Read the corresponding `Index Document` (e.g., `idx:a9f1b`).
3. From the index, determine which of the new chunks already exist in the database.
4. For the **truly new chunks only**:
a. Read the last `Data Document` for that prefix (e.g., `dat:a9f1b-005`).
b. If it is nearing its size limit, create a new one (`dat:a9f1b-006`).
c. Add the new chunk data to the Data Document and save it.
5. Update the `Index Document` with the locations of the newly added chunks.
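A minimal sketch of this flow, assuming PouchDB as the client database; the document shapes, the helpers `pad` and `sizeOf`, and the constants are illustrative, and the conflict handling described in the next section is omitted here:
```ts
import PouchDB from "pouchdb";

const PREFIX_LENGTH = 5; // chunk_prefix_length
const SIZE_LIMIT = 1024 * 1024; // data_doc_size_limit (1 MB)

type IndexDoc = { _id: string; _rev?: string; chunks: Record<string, string> };
type DataDoc = { _id: string; _rev?: string; chunks: Record<string, string> };

const pad = (n: number) => n.toString().padStart(3, "0");
const sizeOf = (doc: DataDoc) => Object.values(doc.chunks).reduce((total, data) => total + data.length, 0);

async function saveChunksForPrefix(db: PouchDB.Database, prefix: string, newChunks: [hash: string, data: string][]): Promise<void> {
    // Step 2: read the Index Document for this prefix (a 404 means "empty index").
    let index: IndexDoc;
    try {
        index = await db.get<IndexDoc>(`idx:${prefix}`);
    } catch {
        index = { _id: `idx:${prefix}`, chunks: {} };
    }
    // Step 3: keep only the chunks that the index does not know about yet.
    const missing = newChunks.filter(([hash]) => !(hash in index.chunks));
    if (missing.length === 0) return;
    // Step 4a: read (or create) the newest Data Document for this prefix.
    const seqs = Object.values(index.chunks).map((id) => parseInt(id.split("-").pop()!, 10));
    let seq = seqs.length > 0 ? Math.max(...seqs) : 1;
    let dataDoc: DataDoc;
    try {
        dataDoc = await db.get<DataDoc>(`dat:${prefix}-${pad(seq)}`);
    } catch {
        dataDoc = { _id: `dat:${prefix}-${pad(seq)}`, chunks: {} };
    }
    for (const [hash, data] of missing) {
        // Step 4b: roll over to a new Data Document when the size limit is reached.
        if (Object.keys(dataDoc.chunks).length > 0 && sizeOf(dataDoc) + data.length > SIZE_LIMIT) {
            await db.put(dataDoc);
            seq += 1;
            dataDoc = { _id: `dat:${prefix}-${pad(seq)}`, chunks: {} };
        }
        // Step 4c: add the chunk data and remember its location for the index.
        dataDoc.chunks[hash] = data;
        index.chunks[hash] = dataDoc._id;
    }
    await db.put(dataDoc);
    // Step 5: update the Index Document with the new locations.
    await db.put(index);
}
```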
**4. Handling Write Conflicts:**
Concurrent writes to the same `Index Document` or `Data Document` from multiple clients will cause conflicts (409 Conflict). This is expected and must be handled gracefully. Since additions are incremental, the client application must implement a **retry-and-merge loop** (sketched after this list):
1. Attempt to save the document.
2. On a conflict, re-fetch the latest version of the document from the server.
3. Merge its own changes into the latest version.
4. Attempt to save again.
5. Repeat until successful or a retry limit is reached.
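A minimal sketch of such a loop, assuming PouchDB's `put`/`get` and its 409 conflict status; `putWithRetry` and the `merge` callback are illustrative names:
```ts
async function putWithRetry<T extends { _id: string; _rev?: string }>(
    db: PouchDB.Database,
    doc: T,
    merge: (latest: T, mine: T) => T,
    maxRetries = 10
): Promise<void> {
    let current = doc;
    for (let attempt = 0; attempt < maxRetries; attempt++) {
        try {
            await db.put(current); // 1. Attempt to save the document.
            return;
        } catch (err) {
            if ((err as { status?: number }).status !== 409) throw err;
            // 2. On a conflict, re-fetch the latest version from the server...
            const latest = (await db.get(doc._id)) as unknown as T;
            // 3. ...and merge our own changes into it.
            current = merge(latest, doc);
        }
        // 4./5. Loop: attempt to save again, up to the retry limit.
    }
    throw new Error(`Could not save ${doc._id} after ${maxRetries} attempts`);
}

// Usage: additions to the `chunks` map are commutative, so merging is safe.
// await putWithRetry(db, index, (latest, mine) => ({ ...latest, chunks: { ...latest.chunks, ...mine.chunks } }));
```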
**5. Garbage Collection (GC):**
GC becomes a manageable, periodic batch process (a sketch follows these steps):
1. Scan all file metadata documents to build a master set of all *currently referenced* chunk hashes.
2. Iterate through all `Index Documents`. For each chunk listed:
a. If the chunk hash is not in the master reference set, it is garbage.
b. Remove the garbage entry from the `Index Document`.
c. Remove the corresponding data from its `Data Document`.
3. If a `Data Document` becomes empty after this process, it can be deleted.
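A rough sketch of steps 2 and 3, assuming the caller has already built the `referenced` set from the file metadata (step 1) and reusing the `IndexDoc`/`DataDoc` shapes from the earlier sketch:
```ts
async function collectGarbage(db: PouchDB.Database, referenced: Set<string>): Promise<void> {
    // Step 2: iterate through all Index Documents via their id prefix.
    const indices = await db.allDocs<IndexDoc>({ startkey: "idx:", endkey: "idx:\ufff0", include_docs: true });
    for (const row of indices.rows) {
        const index = row.doc!;
        const garbage = Object.keys(index.chunks).filter((hash) => !referenced.has(hash)); // 2a.
        if (garbage.length === 0) continue;
        // Group the garbage entries by the Data Document that holds them.
        const byDataDoc = new Map<string, string[]>();
        for (const hash of garbage) {
            const dataId = index.chunks[hash];
            if (!byDataDoc.has(dataId)) byDataDoc.set(dataId, []);
            byDataDoc.get(dataId)!.push(hash);
            delete index.chunks[hash]; // 2b. Remove the entry from the Index Document.
        }
        for (const [dataId, hashes] of byDataDoc) {
            const dataDoc = await db.get<DataDoc>(dataId);
            for (const hash of hashes) delete dataDoc.chunks[hash]; // 2c. Remove the data.
            if (Object.keys(dataDoc.chunks).length === 0) {
                // Step 3: delete Data Documents that became empty.
                await db.remove(dataDoc as PouchDB.Core.RemoveDocument);
            } else {
                await db.put(dataDoc);
            }
        }
        await db.put(index);
    }
}
```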
## Test Strategy
1. **Unit Tests:** Implement tests for the conflict resolution logic (retry-and-merge loop) to ensure robustness.
2. **Integration Tests:**
- Verify that concurrent writes from multiple simulated clients result in a consistent, merged state without data loss.
- Run a full synchronisation scenario and confirm the resulting database has a significantly lower document count compared to the previous implementation.
3. **GC Test:** Simulate a scenario where files are deleted, run the GC process, and verify that orphaned chunks are correctly removed from both Index and Data documents, and that storage is reclaimed after compaction.
4. **Migration Test:** Develop and test a "rebuild" process for existing users, which migrates their chunk data into the new aggregated structure.
## Documentation Strategy
- This design document will be published to explain the new architecture.
- The configuration options (`chunk_prefix_length`, etc.) will be documented for advanced users.
- A guide for the migration/rebuild process will be provided.
## Future Work
The separation of index and data opens up a powerful possibility. While this design initially implements both within CouchDB, the `Data Documents` could be offloaded to a dedicated object storage service such as **S3, MinIO, or Cloudflare R2**.
In such a hybrid model, CouchDB would handle only the lightweight `Index Documents` and file metadata, serving as a high-speed synchronisation and coordination layer. The bulky chunk data would reside in a more cost-effective and scalable blob store. This would represent the ultimate evolution of this architecture, combining the best of both worlds.
## Consideration and Conclusion
This design directly addresses the scalability limitations of the original chunk-per-document model. By aggregating chunks into sets, it significantly reduces the document count, which in turn improves database performance and makes maintenance feasible. The explicit handling of write conflicts and a clear strategy for garbage collection make this a robust and sustainable long-term solution. It effectively resolves the problems identified in previous approaches, including the "Eden" experiment, by tackling the root cause of database bloat. This architecture provides a solid foundation for future growth and scalability.

View File

@@ -0,0 +1,127 @@
# [WIP] The design intent explanation for using metadata and chunks
## Abstract
## Goal
- To explain the following:
- What metadata and chunks are
- The design intent of using metadata and chunks
## Background and Motivation
We are using PouchDB and CouchDB for storing files and synchronising them. PouchDB is a JavaScript database that stores data on the device (browser, and of course, Obsidian), while CouchDB is a NoSQL database that stores data on the server. The two databases can be synchronised to keep data consistent across devices via the CouchDB replication protocol. This is a powerful and flexible way to store and synchronise data, including conflict management, but it is not well suited for files. Therefore, we needed to manage how to store files and synchronise them.
## Terminology
- Password:
- A string used to authenticate the user.
- Passphrase:
- A string used to encrypt and decrypt data.
- This is not a password.
- Encrypt:
- To convert data into a format that is unreadable to anyone without the passphrase.
- Can be decrypted by the user who has the passphrase.
- Should be 1:n, containing random data to ensure that even the same data, when encrypted, results in different outputs.
- Obfuscate:
- To convert data into a format that is not easily readable.
- Can be decrypted by the user who has the passphrase.
- Should be 1:1, containing no random data, so the same data is always obfuscated to the same result. It is nonetheless unreadable.
- Hash:
- To convert data into a fixed-length string that is not easily readable.
- Cannot be decrypted.
- Should be 1:1, containing no random data, and the same data is always hashed to the same result.
## Designs
### Principles
- To synchronise and handle conflicts, we should keep the history of modifications.
- No data should be lost. Even though some extra data may be stored, it should be removed later, safely.
- Each stored data item should be as small as possible to transfer efficiently, but not so small as to be inefficient.
- Any type of file should be supported, including binary files.
- Encryption should be supported efficiently.
- This method should not depart too far from the PouchDB/CouchDB philosophy. It needs to leave room for other `remote`s, to benefit from custom replicators.
As a result, we have adopted the following design.
- Files are stored as one metadata entry and multiple chunks.
- Chunks are content-addressable, and the metadata contains the ids of the chunks.
- Chunks may be referenced from multiple metadata entries. They should be efficiently managed to avoid redundancy.
### Metadata Design
The metadata contains the following information (a type sketch follows the table):
| Field | Type | Description | Note |
| -------- | -------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------- |
| _id | string | The id of the metadata | It is created from the file path |
| _rev | string | The revision of the metadata | It is created by PouchDB |
| children | [string] | The ids of the chunks | |
| path     | string               | The path of the file         | If path obfuscation is enabled, this is stored encrypted                                                |
| size | number | The size of the metadata | Not respected; for troubleshooting |
| ctime | string | The creation timestamp | This is not used to compare files, but when writing to storage, it will be used |
| mtime | string | The modification timestamp | This will be used to compare files, and will be written to storage |
| type | `plain` \| `newnote` | The type of the file | Children of type `plain` will not be base64 encoded, while `newnote` will be |
| e_ | boolean | The file is encrypted | Encryption is processed during transfer to the remote. In local storage, this property does not exist |
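As a rough sketch, the shape in this table could be written as the following type (field names and types as in the table; the concrete types in the code base may differ):
```ts
type NoteMetadata = {
    _id: string; // Derived from the file path (see the decision rule below)
    _rev?: string; // Assigned by PouchDB
    children: string[]; // The ids of the chunks
    path: string; // Encrypted when path obfuscation is enabled
    size: number; // Informational only; not respected
    ctime: string; // Creation timestamp; used when writing to storage
    mtime: string; // Modification timestamp; used to compare files
    type: "plain" | "newnote"; // "newnote" content is base64 encoded
    e_?: boolean; // Present only on the remote, while encrypted
};
```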
#### Decision Rule for `_id` of Metadata
```ts
// Note: This is pseudo code.
let _id = PATH;
if (!HANDLE_FILES_AS_CASE_SENSITIVE) {
    _id = _id.toLowerCase();
}
if (_id.startsWith("_")) {
    _id = "/" + _id;
}
if (OBFUSCATE_PATH_ENABLED) {
    // Deterministic obfuscation of the path, keyed with the E2EE passphrase.
    _id = `f:${OBFUSCATE_PATH(_id, E2EE_PASSPHRASE)}`;
}
return _id;
```
#### Expected Questions
- Why do we need the option to handle files as case-insensitive?
- Some filesystems are case-sensitive, while others are not. For example, Windows is not case-sensitive, while Linux is. Therefore, files are handled as case-insensitive by default, to avoid conflicts between such devices.
- The trade-off is that you will not be able to manage files whose names differ only in case, so this can be disabled if all of your devices are case-sensitive.
- Why obfuscate the path?
- E2EE only encrypts the content of the file, not metadata. Hence, E2EE alone is not enough to protect the vault completely. The path is also part of the metadata, so it should be obfuscated. This is a trade-off between security and performance. However, if you title a note with sensitive information, you should obfuscate the path.
- What is `f:`?
- It is a prefix to indicate that the path is obfuscated. It is used to distinguish between normal paths and obfuscated paths. For file enumeration, Self-hosted LiveSync has to scan the documents to find the metadata, excluding chunks and other information; the prefix makes this possible.
- Why does an unobfuscated path not start with `f:`?
- For compatibility. Self-hosted LiveSync, by its nature, must also be able to handle files created with newer versions as far as possible.
### Chunk Design
#### Chunk Structure
The chunk contains the following information (a type sketch follows the table):
| Field | Type | Description | Note |
| ----- | ------------ | ------------------------- | ----------------------------------------------------------------------------------------------------- |
| _id | `h:{string}` | The id of the chunk | It is created from the hash of the chunk content |
| _rev | string | The revision of the chunk | It is created by PouchDB |
| data | string | The content of the chunk | |
| type | `leaf` | Fixed | |
| e_ | boolean | The chunk is encrypted | Encryption is processed during transfer to the remote. In local storage, this property does not exist |
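A corresponding sketch for the (v1) chunk document described in this table; again, the concrete types in the code base may differ:
```ts
type ChunkDoc = {
    _id: `h:${string}`; // Created from the hash of the chunk content
    _rev?: string; // Assigned by PouchDB
    data: string; // The content of the chunk
    type: "leaf"; // Fixed
    e_?: boolean; // Present only on the remote, while encrypted
};
```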
**SORRY, TO BE WRITTEN, BUT WE HAVE IMPLEMENTED `v2`, WHICH REQUIRES MORE INFORMATION.**
### How they are unified
## Deduplication and Optimisation
## Synchronisation Strategy
## Performance Considerations
## Security and Privacy
## Edge Cases

View File

@@ -0,0 +1,117 @@
# [IN DESIGN] Tiered Chunk Storage with Live Compaction
** VERY IMPORTANT NOTE: This design must be used with the new journal synchronisation method. Otherwise, we risk introducing the bloat of changes from hot-packs into the Bucket. (CouchDB/PouchDB can synchronise only the most recent changes, or resolve conflicts; the previous Journal Sync **IS NOT** capable of this.) Please proceed with caution. **
## Goal
To establish a highly efficient, robust, and scalable synchronisation architecture by introducing a tiered storage system inspired by Log-Structured Merge-Trees (LSM-Trees). This design aims to address the challenges of real-time synchronisation, specifically the massive generation of transient data, while minimising storage bloat and ensuring high performance.
## Motivation
Our previous designs, including "Chunk Aggregation by Prefix", successfully addressed the "document explosion" problem. However, the introduction of real-time editor synchronisation exposed a new, critical challenge: the constant generation of short-lived "garbage" chunks during user input. This "garbage storm" places immense pressure on storage, I/O, and the Garbage Collection (GC) process.
A simple aggregation strategy is insufficient because it treats all data equally, mixing valuable, stable chunks with transient, garbage chunks in permanent storage. This leads to storage bloat and inefficient compaction. We require a system that can intelligently distinguish between "hot" (volatile) and "cold" (stable) data, processing them in the most efficient manner possible.
## Outlined Methods and Implementation Plans
### Abstract
This design implements a two-tiered storage system within CouchDB.
1. **Level 0 Hot Storage:** A set of "Hot-Packs", one for each active client. These act as fast, append-only logs for all newly created chunks. They serve as a temporary staging area, absorbing the "garbage storm" of real-time editing.
2. **Level 1 Cold Storage:** The permanent, immutable storage for stable chunks, consisting of **Index Documents** for fast lookups and **Data Documents (Cold-Packs)** for storing chunk data.
A background "Compaction" process continuously promotes stable chunks from Hot Storage to Cold Storage, while automatically discarding garbage. This keeps the permanent storage clean and highly optimised.
### Detailed Implementation
**1. Document Structure:**
- **Hot-Pack Document (Level 0):** A per-client, append-only log.
- `_id`: `hotpack:{client_id}` (`client_id` could be the same as the `deviceNodeID` used in the `accepted_nodes` in MILESTONE_DOC; enables database 'lockout' for safe synchronisation)
- Content: A log of chunk creation events.
```json
{
"_id": "hotpack:a9f1b12...",
"_rev": "...",
"log": [
{ "hash": "abc...", "data": "...", "ts": ..., "file_id": "file1" },
{ "hash": "def...", "data": "...", "ts": ..., "file_id": "file2" }
]
}
```
- **Index Document (Level 1):** A fast, prefix-based lookup table for stable chunks.
- `_id`: `idx:{prefix}` (e.g., `idx:a9f1b`)
- Content: Maps a chunk hash to the ID of the Cold-Pack it resides in.
```json
{
"_id": "idx:a9f1b",
"chunks": { "a9f1b12...": "dat:1678886400" }
}
```
- **Cold-Pack Document (Level 1):** An immutable data block created by the compaction process.
- `_id`: `dat:{timestamp_or_uuid}` (e.g., `dat:1678886400123`)
- Content: A collection of stable chunks.
```json
{
"_id": "dat:1678886400123",
"chunks": { "a9f1b12...": "...", "c3d4e5f...": "..." }
}
```
- **Hot-Pack List Document:** A central registry of all active Hot-Packs. This might be a computed document that clients maintain in memory on startup.
- `_id`: `hotpack_list`
- Content: `{"active_clients": ["hotpack:a9f1b12...", "hotpack:c3d4e5f..."]}`
**2. Write/Save Operation Flow (Real-time Editing):**
1. A client generates a new chunk.
2. It **immediately appends** the chunk object (`{hash, data, ts, file_id}`) to its **own** Hot-Pack document's `log` array within its local PouchDB. This operation is extremely fast.
3. The PouchDB synchronisation process replicates this change to the remote CouchDB and other clients in the background. No other Hot-Packs are consulted during this write operation. (A sketch of this append follows.)
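A minimal sketch of this append, assuming PouchDB and illustrative `HotPackEntry`/`HotPack` shapes; since only this client writes to its own Hot-Pack, no conflict handling is shown:
```ts
import PouchDB from "pouchdb";

type HotPackEntry = { hash: string; data: string; ts: number; file_id: string };
type HotPack = { _id: string; _rev?: string; log: HotPackEntry[] };

async function appendToHotPack(db: PouchDB.Database, clientId: string, entries: HotPackEntry[]): Promise<void> {
    const id = `hotpack:${clientId}`;
    let pack: HotPack;
    try {
        pack = await db.get<HotPack>(id);
    } catch {
        pack = { _id: id, log: [] }; // First write: create this client's Hot-Pack.
    }
    pack.log.push(...entries); // Append-only: extremely fast.
    await db.put(pack); // Replication to the remote happens in the background.
}
```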
**3. Read/Load Operation Flow:**
To find a chunk's data (sketched below):
1. The client first consults its in-memory list of active Hot-Pack IDs (see section 5).
2. It searches for the chunk hash in all **Hot-Pack documents**, starting from its own, then others. It reads them in reverse log order (newest first).
3. If not found, it consults the appropriate **Index Document (`idx:...`)** to get the ID of the Cold-Pack.
4. It then reads the chunk data from the corresponding **Cold-Pack document (`dat:...`)**.
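A rough sketch of this lookup, reusing the `HotPack` shape from the sketch above; the prefix length is an illustrative assumption and must match the aggregation settings:
```ts
const PREFIX_LENGTH = 5; // illustrative

async function readChunk(db: PouchDB.Database, hotPackIds: string[], hash: string): Promise<string | undefined> {
    // Steps 1-2: search every known Hot-Pack, newest log entries first.
    for (const id of hotPackIds) {
        try {
            const pack = await db.get<HotPack>(id);
            for (let i = pack.log.length - 1; i >= 0; i--) {
                if (pack.log[i].hash === hash) return pack.log[i].data;
            }
        } catch {
            // This Hot-Pack has not been replicated yet; skip it.
        }
    }
    try {
        // Step 3: consult the Index Document for this prefix...
        const index = await db.get<{ chunks: Record<string, string> }>(`idx:${hash.slice(0, PREFIX_LENGTH)}`);
        const coldId = index.chunks[hash];
        if (!coldId) return undefined;
        // Step 4: ...and read the data from the Cold-Pack it points at.
        const coldPack = await db.get<{ chunks: Record<string, string> }>(coldId);
        return coldPack.chunks[hash];
    } catch {
        return undefined; // Neither hot nor cold storage knows this chunk.
    }
}
```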
**4. Compaction & Promotion Process (The "GC"):**
This is a background task run periodically by clients, or triggered when the number of unprocessed log entries exceeds a threshold (to maintain the ability to synchronise with the remote database, which has a limited document size). A sketch follows these steps.
1. The client takes its own Hot-Pack (`hotpack:{client_id}`) and scans its `log` array from the beginning (oldest first).
2. For each chunk in the log, it checks if the chunk is still referenced in the latest revision of any file.
- **If not referenced (Garbage):** The log entry is simply discarded.
- **If referenced (Stable):** The chunk is added to a "promotion batch".
3. After scanning a certain number of log entries, the client takes the "promotion batch".
4. It creates one or more new, immutable **Cold-Pack (`dat:...`)** documents to store the chunk data from the batch.
5. It updates the corresponding **Index (`idx:...`)** documents to point to the new Cold-Pack(s).
6. Once the promotion is successfully saved to the database, it **removes the processed entries from its Hot-Pack's `log` array**. This is a critical step to prevent reprocessing and keep the Hot-Pack small.
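A sketch of one compaction pass under the same assumptions as the sketches above; `isReferenced` stands in for the check against the latest file metadata (step 2), and the conflict-safe index update is simplified:
```ts
type IndexDoc = { _id: string; _rev?: string; chunks: Record<string, string> };

async function compactHotPack(db: PouchDB.Database, clientId: string, isReferenced: (hash: string) => Promise<boolean>, batchSize = 200): Promise<void> {
    const pack = await db.get<HotPack>(`hotpack:${clientId}`);
    const batch = pack.log.slice(0, batchSize); // Step 1: oldest entries first.
    const promoted: HotPackEntry[] = [];
    for (const entry of batch) {
        // Step 2: garbage entries are simply dropped; stable ones are promoted.
        if (await isReferenced(entry.hash)) promoted.push(entry);
    }
    if (promoted.length > 0) {
        // Step 4: create one immutable Cold-Pack for the batch.
        const coldId = `dat:${Date.now()}`;
        await db.put({ _id: coldId, chunks: Object.fromEntries(promoted.map((e) => [e.hash, e.data])) });
        // Step 5: point the Index Documents at the new Cold-Pack.
        for (const entry of promoted) {
            const idxId = `idx:${entry.hash.slice(0, PREFIX_LENGTH)}`;
            let index: IndexDoc;
            try {
                index = await db.get<IndexDoc>(idxId);
            } catch {
                index = { _id: idxId, chunks: {} };
            }
            index.chunks[entry.hash] = coldId;
            await db.put(index);
        }
    }
    // Step 6: remove the processed entries from the Hot-Pack.
    pack.log = pack.log.slice(batch.length);
    await db.put(pack);
}
```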
**5. Hot-Pack List Management:**
To know which Hot-Packs to read, clients will (see the sketch after this list):
1. On startup, load the `hotpack_list` document into memory.
2. Use PouchDB's live `changes` feed to monitor the creation of new `hotpack:*` documents.
3. Upon detecting an unknown Hot-Pack, the client updates its in-memory list and attempts to update the central `hotpack_list` document (on a best-effort basis, with conflict resolution).
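A minimal sketch of step 2, using PouchDB's live `changes` feed:
```ts
function watchHotPacks(db: PouchDB.Database, knownHotPacks: Set<string>) {
    return db
        .changes({ since: "now", live: true })
        .on("change", (change) => {
            if (change.id.startsWith("hotpack:") && !knownHotPacks.has(change.id)) {
                knownHotPacks.add(change.id);
                // Step 3: a best-effort, conflict-tolerant update of the central
                // `hotpack_list` document would go here.
            }
        });
}
```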
## Planned Test Strategy
1. **Unit Tests:** Test the Compaction/Promotion logic extensively. Ensure garbage is correctly identified and stable chunks are promoted correctly.
2. **Integration Tests:** Simulate a multi-client real-time editing session.
- Verify that writes are fast and responsive.
- Confirm that transient garbage chunks do not pollute the Cold Storage.
- Confirm that after a period of inactivity, compaction runs and the Hot-Packs shrink.
3. **Stress Tests:** Simulate many clients joining and leaving to test the robustness of the `hotpack_list` management.
## Documentation Strategy
- This design document will serve as the core architectural reference.
- The roles of each document type (Hot-Pack, Index, Cold-Pack, List) will be clearly explained for future developers.
- The logic of the Compaction/Promotion process will be detailed.
## Consideration and Conclusion
This tiered storage design is a direct evolution, born from the lessons of previous architectures. It embraces the ephemeral nature of data in real-time applications. By creating a "staging area" (Hot-Packs) for volatile data, it protects the integrity and performance of the permanent "cold" storage. The Compaction process acts as a self-cleaning mechanism, ensuring that only valuable, stable data is retained long-term. This is not just an optimisation; it is a fundamental shift that enables robust, high-performance, and scalable real-time synchronisation on top of CouchDB.

View File

@@ -0,0 +1,97 @@
# [IN DESIGN] Tiered Chunk Storage for Bucket Sync
## Goal
To evolve the "Journal Sync" mechanism by integrating the Tiered Storage architecture. This design aims to drastically reduce the size and number of sync packs, minimise storage consumption on the backend bucket, and establish a clear, efficient process for Garbage Collection, all while remaining protocol-agnostic.
## Motivation
The original "Journal Sync" liberates us from CouchDB's protocol, but it still packages and transfers entire document changes, including bulky and often transient chunk data. In a real-time or frequent-editing scenario, this results in:
1. **Bloated Sync Packs:** Packs become large with redundant or short-lived chunk data, increasing upload and download times.
2. **Inefficient Storage:** The backend bucket stores numerous packs containing overlapping and obsolete chunk data, wasting space.
3. **Impractical Garbage Collection:** Identifying and purging obsolete *chunk data* from within the pack-based journal history is extremely difficult.
This new design addresses these problems by fundamentally changing *what* is synchronised in the journal packs. We will synchronise lightweight metadata and logs, while handling bulk data separately.
## Outlined methods and implementation plans
### Abstract
This design adapts the Tiered Storage model for a bucket-based backend. The backend bucket is partitioned into distinct areas for different data types. The "Journal Sync" process is now responsible for synchronising only the "hot" volatile data and lightweight metadata. A separate, asynchronous "Compaction" process, which can be run by any client, is responsible for migrating stable data into permanent, deduplicated "cold" storage.
### Detailed Implementation
**1. Bucket Structure:**
The backend bucket will have four distinct logical areas (prefixes):
- `packs/`: For "Journal Sync" packs, containing the journal of metadata and Hot-Log changes.
- `hot_logs/`: A dedicated area for each client's "Hot-Log," containing newly created, volatile chunks.
- `indices/`: For prefix-based Index files, mapping chunk hashes to their permanent location in Cold Storage.
- `cold_chunks/`: For deduplicated, stable chunk data, stored by content hash.
**2. Data Structures (Client-side PouchDB & Backend Bucket):**
- **Client Metadata:** Standard file metadata documents, kept in the client's PouchDB.
- **Hot-Log (in `hot_logs/`):** A per-client, append-only log file on the bucket.
- Path: `hot_logs/{client_id}.jsonlog`
- Content: A sequence of JSON objects, one per line, representing chunk creation events. `{"hash": "...", "data": "...", "ts": ..., "file_id": "..."}`
- **Index File (in `indices/`):** A JSON file for a given hash prefix.
- Path: `indices/{prefix}.json`
- Content: Records which chunk hashes already exist in Cold Storage (the hash itself is the key in `cold_chunks/`). `{"hash_abc...": true, "hash_def...": true}`
- **Cold Chunk (in `cold_chunks/`):** The raw, immutable, deduplicated chunk data.
- Path: `cold_chunks/{chunk_hash}`
**3. "Journal Sync" - Send/Receive Operation (Not Live):**
This process is now extremely lightweight (a sketch of step 1a follows the list).
1. **Send:**
a. The client takes all newly generated chunks and **appends them to its own Hot-Log file (`hot_logs/{client_id}.jsonlog`)** on the bucket.
b. The client updates its local file metadata in PouchDB.
c. It then creates a "Journal Sync" pack containing **only the PouchDB journal of the file metadata changes.** This pack is very small as it contains no chunk data.
d. The pack is uploaded to `packs/`.
2. **Receive:**
a. The client downloads new packs from `packs/` and applies the metadata journal to its local PouchDB.
b. It downloads the latest versions of all **other clients' Hot-Log files** from `hot_logs/`.
c. Now the client has a complete, up-to-date view of all metadata and all "hot" chunks.
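A minimal sketch of step 1a. Since most object stores offer no append operation, `getObject`/`putObject` are hypothetical helpers and the Hot-Log is updated with a read-modify-write:
```ts
// Hypothetical bucket accessors; the real transport (S3 API, fetch, ...) is out of scope here.
declare function getObject(path: string): Promise<string | undefined>;
declare function putObject(path: string, body: string): Promise<void>;

type HotLogEvent = { hash: string; data: string; ts: number; file_id: string };

async function appendToHotLog(clientId: string, events: HotLogEvent[]): Promise<void> {
    const path = `hot_logs/${clientId}.jsonlog`;
    const existing = (await getObject(path)) ?? "";
    // One JSON object per line, as described above.
    const lines = events.map((event) => JSON.stringify(event)).join("\n") + "\n";
    await putObject(path, existing + lines);
}
```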
**4. Read/Load Operation Flow:**
To find a chunk's data:
1. The client searches for the chunk hash in its local copy of all **Hot-Logs**.
2. If not found, it downloads and consults the appropriate **Index file (`indices/{prefix}.json`)**.
3. If the index confirms existence, it downloads the data from **`cold_chunks/{chunk_hash}`**.
**5. Compaction & Promotion Process (Asynchronous "GC"):**
This is a deliberate, offline-capable process that any client can choose to run (sketched after the steps below).
1. The client "leases" its own Hot-Log for compaction.
2. It reads its entire `hot_logs/{client_id}.jsonlog`.
3. For each chunk in the log, it checks if the chunk is referenced in the *current, latest state* of the file metadata.
- **If not referenced (Garbage):** The log entry is discarded.
- **If referenced (Stable):** The chunk is added to a "promotion batch."
4. For each chunk in the promotion batch:
a. It checks the corresponding `indices/{prefix}.json` to see if the chunk already exists in Cold Storage.
b. If it does not exist, it **uploads the chunk data to `cold_chunks/{chunk_hash}`** and updates the `indices/{prefix}.json` file.
5. Once the entire Hot-Log has been processed, the client **deletes its `hot_logs/{client_id}.jsonlog` file** (or truncates it to empty), effectively completing the cycle.
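A rough sketch of steps 2 to 5, reusing the hypothetical `getObject`/`putObject` helpers and the `HotLogEvent` shape from the previous sketch; the lease from step 1 and conflicts between concurrent compactors are omitted:
```ts
async function compactHotLog(clientId: string, isReferenced: (hash: string) => boolean): Promise<void> {
    const logPath = `hot_logs/${clientId}.jsonlog`;
    // Step 2: read the entire Hot-Log.
    const raw = (await getObject(logPath)) ?? "";
    const entries = raw.split("\n").filter((line) => line.length > 0).map((line) => JSON.parse(line) as HotLogEvent);
    for (const entry of entries) {
        // Step 3: garbage entries are discarded; stable ones are promoted.
        if (!isReferenced(entry.hash)) continue;
        // Step 4a: check the prefix index to see whether the chunk is already in Cold Storage.
        const prefix = entry.hash.slice(0, 5); // illustrative prefix length
        const indexPath = `indices/${prefix}.json`;
        const index = JSON.parse((await getObject(indexPath)) ?? "{}") as Record<string, true>;
        if (!index[entry.hash]) {
            // Step 4b: upload the chunk data and record it in the index.
            await putObject(`cold_chunks/${entry.hash}`, entry.data);
            index[entry.hash] = true;
            await putObject(indexPath, JSON.stringify(index));
        }
    }
    // Step 5: truncate the processed Hot-Log, completing the cycle.
    await putObject(logPath, "");
}
```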
## Test strategy
1. **Component Tests:** Test the Compaction process independently. Ensure it correctly identifies stable versus garbage chunks and populates the `cold_chunks/` and `indices/` areas correctly.
2. **Integration Tests:**
- Simulate a multi-client sync cycle. Verify that sync packs in `packs/` are small.
- Confirm that `hot_logs/` are correctly created and updated.
- Run the Compaction process and verify that data migrates correctly to cold storage and the hot log is cleared.
3. **Conflict Tests:** Simulate two clients trying to compact the same index file simultaneously and ensure the outcome is consistent (for example, via a locking mechanism or last-write-wins).
## Documentation strategy
- This design document will be the primary reference for the bucket-based architecture.
- The structure of the backend bucket (`packs/`, `hot_logs/`, etc.) will be clearly defined.
- A detailed description of how to run the Compaction process will be provided to users.
## Consideration and Conclusion
By applying the Tiered Storage model to "Journal Sync", we transform it into a remarkably efficient system. The synchronisation of everyday changes becomes extremely fast and lightweight, as only metadata journals are exchanged. The heavy lifting of data deduplication and permanent storage is offloaded to a separate, asynchronous Compaction process. This clear separation of concerns makes the system highly scalable, minimises storage costs, and finally provides a practical, robust solution for Garbage Collection in a protocol-agnostic, bucket-based environment.

View File

@@ -1,6 +1,6 @@
# Keep newborn chunks in Eden.
# Keep newborn chunks in Eden
NOTE: This is the planned feature design document. This is planned, but not implemented yet (v0.23.3). This has not reached the design freeze and will be added to from time to time.
Notice: Deprecated. Please refer to the results section of this document.
## Goal
@@ -19,15 +19,18 @@ Reduce the number of chunks which in volatile, and reduce the usage of storage o
- The problem is that this unnecessary chunking slows down both local and remote operations.
## Prerequisite
- The implementation must be able to control the size of the document appropriately so that it does not become non-transferable (1).
- The implementation must be such that data corruption can be avoided even if forward compatibility is not maintained; due to the nature of Self-hosted LiveSync, backward version connexions are expected.
- Viewed as a feature:
- This feature should be disabled for migration users.
- This feature should be enabled for new users and after rebuilds of migrated users.
- Therefore, back in the implementation view: ideally, the implementation should be such that data recovery can be achieved by upgrading immediately after replication.
## Outlined methods and implementation plans
### Abstract
To store and transfer only stable chunks independently and share them between multiple documents after stabilisation, new chunks (i.e., chunks that are considered non-stable) are instead stored within the document and transferred with it. In this case, care should be taken not to exceed prerequisite (1).
If this is achieved, the chunk will not be transferred as a standalone leaf document; and even if it is transferred, the chunk is stored within the parent document, so the size can later be reduced by compaction.
@@ -40,11 +43,11 @@ Details are given below.
```ts
type EntryWithEden = {
    eden: {
        [key: DocumentID]: {
            data: string;
            epoch: number; // The document revision which this chunk has been born.
        };
    };
};
```
2. The following configuration items are added:
Note: These configurations should be shared as `Tweaks value` between each client.
@@ -63,6 +66,7 @@ Details are given below.
5. In End-to-End Encryption, property `eden` of documents will also be encrypted.
### Note
- When this feature has been enabled, forward compatibility is temporarily lost. However, the old version detects this as missing chunks and does not reflect the data to its storage. Therefore, no data loss will occur.
## Test strategy
@@ -77,5 +81,26 @@ Details are given below.
- Indeed, we lack a complete configuration table. Efforts will be made and, if one can be produced, it will be referenced from this document. However, this is not required while the feature is experimental or in beta.
- However, this might be an essential feature. Further efforts are desired.
## Results from actual operation
After implementing this feature, we have been using it for a while. The following results were obtained.
- Drawbacks that were thought not to be a problem turned out to be real problems:
- A document with `Eden` has a considerably larger history than a document without `Eden`.
- Self-hosted LiveSync does not perform compaction aggressively, which results in the remote database becoming partially bloated.
- Compaction of the remote database (CouchDB) requires as much free space as the size of the database itself. Therefore, compaction cannot be performed on a remote database once it has reached its maximum size; in other words, by the time we detect the problem, it is too late.
- We mentioned that `We need compaction` in previous sections. However, it was very hard to determine whether compaction was required until the database had already become bloated. (Of course, compacting the database takes some time and, literally, some documents lose their history. It is not a good idea to perform it frequently and meaninglessly. A manual decision is needed, but that is genuinely difficult for ordinary users.)
### Consideration and Conclusion
To be described after implementation, testing, and release.
This feature has two aspects:
- For users who are familiar with CouchDB, this feature is somewhat useful. They can watch and manage the database by themselves.
- For users who are not familiar with CouchDB, i.e., ordinary users, this feature is not very useful. They are not familiar with the database and do not know how to manage it; therefore, they cannot decide whether compaction is required.
Hence, this feature is kept as an experimental feature, but it is not enabled by default. In addition, it is marked as deprecated. A detailed notice would be noisy for users who are not familiar with CouchDB, so the details are kept in this document for the future.
Using this feature is not recommended unless you are familiar with CouchDB and database management.
Vorotamoroz has written this document. Bias: I am the first author of this plug-in and familiar with CouchDB.
Research and development was frozen on 2025-04-11, but bugs will be fixed if they are found. Please feel free to report them.

View File

@@ -1,267 +1,759 @@
NOTE: This document has surely become outdated. I'll improve this doc in a while, but your contributions are always welcome.
NOTE: This document is not completed. I'll improve this doc in a while, but your contributions are always welcome.
# Settings of this plugin
# Settings of Self-hosted LiveSync
The settings dialog has become quite long, so I have split each configuration into tabs.
If you notice anything, please feel free to inform me.
There are many settings in Self-hosted LiveSync. This document describes each setting in detail (not how-to). Configuration and settings are divided into several categories and indicated by icons. The icon is as follows:
| icon | description |
| :---: | ----------------------------------------------------------------- |
| 🛰️ | [Remote Database Configurations](#remote-database-configurations) |
| 📦 | [Local Database Configurations](#local-database-configurations) |
| ⚙️ | [General Settings](#general-settings) |
| 🔁 | [Sync Settings](#sync-settings) |
| 🔧 | [Miscellaneous](#miscellaneous) |
| 🧰 | [Hatch](#miscellaneous) |
| 🔌 | [Plugin and its settings](#plugin-and-its-settings) |
| 🚑 | [Corrupted data](#corrupted-data) |
| Icon | Description |
| :--: | ------------------------------------------------------------------ |
| 💬 | [0. Change Log](#0-change-log) |
| 🧙‍♂️ | [1. Setup](#1-setup) |
| ⚙️ | [2. General Settings](#2-general-settings) |
| 🛰️ | [3. Remote Configuration](#3-remote-configuration) |
| 🔄 | [4. Sync Settings](#4-sync-settings) |
| 🚦 | [5. Selector (Advanced)](#5-selector-advanced) |
| 🔌 | [6. Customization sync (Advanced)](#6-customization-sync-advanced) |
| 🧰 | [7. Hatch](#7-hatch) |
| 🔧 | [8. Advanced (Advanced)](#8-advanced-advanced) |
| 💪 | [9. Power users (Power User)](#9-power-users-power-user) |
| 🩹 | [10. Patches (Edge Case)](#10-patches-edge-case) |
| 🎛️ | [11. Maintenance](#11-maintenance) |
## Remote Database Configurations
Configure the settings of the synchronization server. If any synchronization is enabled, you can't edit this section; please disable all synchronization before changing it.
## 0. Change Log
### URI
URI of CouchDB. In the case of Cloudant, it's the "External Endpoint (preferred)".
**Do not end it with a slash** when it doesn't contain the database name.
This pane shows version update information. You can check what has changed in recent versions.
### Username
Your CouchDB username. Administrator privileges are preferred.
## 1. Setup
### Password
Your CouchDB password.
Note: This password is saved in your Obsidian vault in plain text.
This pane is used for setting up Self-hosted LiveSync. There are several options to set up Self-hosted LiveSync.
### Database Name
The database name to synchronize.
If it does not exist, it will be created automatically.
### 1. Quick Setup
The most preferred method to set up Self-hosted LiveSync. You can set it up with a few clicks.
### End to End Encryption
Encrypt your database. It affects only the database; your files are left as plain text.
#### Connect with Setup URI
The encryption algorithm is AES-GCM.
Set up Self-hosted LiveSync with the `setup URI` which is [copied from another device](#copy-current-settings-as-a-new-setup-uri) or generated by the setup script.
Note: If you want to use "Plugins and their settings", you have to enable this.
#### Manual setup
### Passphrase
The passphrase used as the encryption key. Please use a long text.
Step-by-step setup for Self-hosted LiveSync. You can set up Self-hosted LiveSync manually with minimal setting items.
### Apply
Enable End-to-End encryption and apply its passphrase for use in replication.
If you change the passphrase of an existing database, overwriting the remote database is strongly recommended.
#### Enable LiveSync
This button only appears when the setup has not been completed. If you have completed the setup manually, you can enable LiveSync on this device with this button.
### Overwrite remote database
Overwrite the remote database with the local database using the passphrase you applied.
### 2. To setup other devices
#### Copy the current settings to a Setup URI
### Rebuild
Rebuild remote and local databases with local files. It will delete all document history and retained chunks, and shrink the database.
You can copy the current settings as a new setup URI. This URI can be used to set up other devices; see [Use the copied setup URI](#use-the-copied-setup-uri).
### Test Database connection
You can check the connection by clicking this button.
### 3. Reset
### Check database configuration
You can check and modify your CouchDB configuration from here directly.
#### Discard existing settings and databases
### Lock remote database.
Other devices are banned from the database when you have locked it.
If you have trouble with other devices, you can protect the vault and remote database from your device.
Reset the Self-hosted LiveSync settings and databases.
**Hazardous operation. Please be careful when using this.**
## Local Database Configurations
"Local Database" is created inside your obsidian.
### 4. Enable extra and advanced features
### Batch database update
Delay database updates until replication starts, another file is opened, window visibility changes, or a file event other than a modification occurs.
This option cannot be used together with LiveSync.
To keep the set-up dialogue simple, some panes are hidden by default. You can enable them here.
#### Enable advanced features
### Fetch rebuilt DB.
If one device rebuilds or locks the remote database, every other device will be locked out from the remote database until it fetches the rebuilt DB.
Setting key: useAdvancedMode
### minimum chunk size and LongLine threshold
The configuration of chunk splitting.
The following panes will be shown when you enable this setting.
| Icon | Description |
| :--: | ------------------------------------------------------------------ |
| 🚦 | [5. Selector (Advanced)](#5-selector-advanced) |
| 🔌 | [6. Customization sync (Advanced)](#6-customization-sync-advanced) |
| 🔧 | [8. Advanced (Advanced)](#8-advanced-advanced) |
Self-hosted LiveSync splits each note into chunks for efficient synchronization. Each chunk must be longer than the "Minimum chunk size".
#### Enable poweruser features
Specifically, the length of a chunk is determined by the following rules, in order.
Setting key: usePowerUserMode
1. Find the nearest newline character, and if it is farther than LongLineThreshold, this piece becomes an independent chunk.
The following panes will be shown when you enable this setting.
| Icon | Description |
| :--: | ------------------------------------------------------------------ |
| 💪 | [9. Power users (Power User)](#9-power-users-power-user) |
2. If not, find the nearest of these items:
1. A newline character
2. An empty line (Windows style)
3. An empty line (non-Windows style)
3. Compare the farthest of these three positions with the next "newline\]#" position, and pick the shorter piece as a chunk.
#### Enable edge case treatment features
This rule was made empirically from my dataset. If this rule behaves badly on your data, please give me the information.
Setting key: useEdgeCaseMode
You can dump the saved note structure with `Dump informations of this doc`. Replace every character except newlines and "#" with x when sending information to me.
The following panes will be shown when you enable this setting.
| Icon | Description |
| :--: | ------------------------------------------------------------------ |
| 🩹 | [10. Patches (Edge Case)](#10-patches-edge-case) |
The default values are 20 letters and 250 letters.
## 2. General Settings
## General Settings
### 1. Appearance
### Do not show low-priority log
If you enable this option, only entries that are shown as popups will be logged.
#### Display Language
### Verbose log
Setting key: displayLanguage
## Sync Settings
You can change the display language. It is independent of the system language and/or Obsidian's language.
Note: Not all messages have been translated. Also, please revert to "Default" when reporting errors. Of course, your contribution to translation is always welcome!
### LiveSync
Do LiveSync.
#### Show status inside the editor
It is one of the raisons d'être of this plugin.
Setting key: showStatusOnEditor
Useful, but this method drains the battery on mobile devices and uses a non-negligible amount of data transfer.
We can show the status of synchronisation inside the editor.
This method is mutually exclusive with the other synchronization methods.
Reflected after reboot
### Periodic Sync
Synchronize periodically.
#### Show status as icons only
### Periodic Sync Interval
Unit is seconds.
Setting key: showOnlyIconsOnEditor
### Sync on Save
Synchronize when the note has been modified or created.
Show status as icons only. This is useful when you want to save space on the status bar.
### Sync on File Open
Synchronize when the note is opened.
#### Show status on the status bar
### Sync on Start
Synchronize when Obsidian starts.
Setting key: showStatusOnStatusbar
### Use Trash for deleted files
When the file has been deleted on remote devices, deletion will be replicated to the local device and the file will be deleted.
We can show the status of synchronisation on the status bar. (Default: On)
If this option is enabled, deleted files will be moved into the trash instead of being actually deleted.
### 2. Logging
### Do not delete empty folder
Self-hosted LiveSync will delete a folder when it becomes empty. If this option is enabled, it will be left as an empty folder.
#### Show only notifications
### Use newer file if conflicted (beta)
Always use the newer file to resolve and overwrite when a conflict has occurred.
Setting key: lessInformationInLog
Prevents logging; shows only notifications. Please disable this when you report logs.
### Experimental.
### Sync hidden files
#### Verbose Log
Synchronize hidden files.
Setting key: showVerboseLog
- Scan hidden files before replication.
If you enable this option, all hidden files are scanned once before replication.
Show verbose logs. Please enable this when you report logs.
- Scan hidden files periodically.
If you enable this option, all hidden files will be scanned every [n] seconds.
## 3. Remote Configuration
Hidden files are not actively detected, so we need scanning.
### 1. Remote Server
Each scan stores the files with their modification times; if a file has disappeared, that fact is also stored. Then, when the entry for a hidden file is replicated, it will be reflected in storage if the entry is newer than the one in storage.
#### Remote Type
Therefore, the clock must be adjusted. If the modification time is determined to be older, the changeset will be skipped or cancelled (that is, **deleted**), even if the file appeared in a hidden folder.
Setting key: remoteType
### Advanced settings
Self-hosted LiveSync uses PouchDB and synchronizes with the remote by [this protocol](https://docs.couchdb.org/en/stable/replication/protocol.html).
So, it splits every entry into chunks to be acceptable to a database with a limited payload size and document size.
Remote server type
However, it was not enough.
According to [2.4.2.5.2. Upload Batch of Changed Documents](https://docs.couchdb.org/en/stable/replication/protocol.html#upload-batch-of-changed-documents) in [Replicate Changes](https://docs.couchdb.org/en/stable/replication/protocol.html#replicate-changes), it might become a bigger request.
### 2. Notification
Unfortunately, there is no way to deal with this automatically by size for every request.
Therefore, I made it possible to configure this.
#### Notify when the estimated remote storage size exceeds on start up
Note: If you set these values to lower numbers, the number of requests will increase.
Therefore, if you are far from the server, the total throughput will be low, and the traffic will increase.
Setting key: notifyThresholdOfRemoteStorageSize
### Batch size
Number of change feed items to process at a time. Defaults to 250.
MB (0 to disable). We can get a notification when the estimated remote storage size exceeds this value.
### Batch limit
Number of batches to process at a time. Defaults to 40. This along with batch size controls how many docs are kept in memory at a time.
### 3. Privacy & Encryption
## Miscellaneous
#### End-to-End Encryption
### Show status inside editor
Show information inside the editor pane.
It would be useful for mobile.
Setting key: encrypt
### Check integrity on saving
Check that all chunks are correctly saved when saving.
Enable end-to-end encryption; enabling this is recommended. If you change the passphrase, you need to rebuild the databases (you will be informed).
### Presets
You can set the synchronization methods at once using these patterns:
- LiveSync
- LiveSync : enabled
- Batch database update : disabled
- Periodic Sync : disabled
- Sync on Save : disabled
- Sync on File Open : disabled
- Sync on Start : disabled
- Periodic w/ batch
- LiveSync : disabled
- Batch database update : enabled
- Periodic Sync : enabled
- Sync on Save : disabled
- Sync on File Open : enabled
- Sync on Start : enabled
- Disable all sync
- LiveSync : disabled
- Batch database update : disabled
- Periodic Sync : disabled
- Sync on Save : disabled
- Sync on File Open : disabled
- Sync on Start : disabled
#### Passphrase
Setting key: passphrase
## Hatch
From here, everything is under the hood. Please handle it with care.
The encryption passphrase. If you change the passphrase, you need to rebuild the databases (you will be informed).
When there are problems with synchronization, a warning message is shown under this section header.
#### Path Obfuscation
- Pattern 1
![CorruptedData](../images/lock_pattern1.png)
This message is shown when the remote database is locked and your device is not marked as "resolved".
This mostly happens when End-to-End encryption has been enabled or the history has been dropped.
If you enabled End-to-End encryption, you can unlock the remote database automatically with "Apply and receive", or with "Drop and receive" if you dropped the history. If you want to unlock manually, click "mark this device as resolved".
Setting key: usePathObfuscation
- Pattern 2
![CorruptedData](../images/lock_pattern2.png)
The remote database indicates that it has been unlocked as in Pattern 1.
When you mark all devices as resolved, you can unlock the database.
But, there's no problem even if you leave it as it is.
By default, the path of the file is not obfuscated, to improve performance. If you enable this, the path of the file will be obfuscated. This is useful when you want to hide file paths.
### Verify and repair all files
Read all files in the vault, and update them in the database if there is a difference or they could not be read from the database.
#### Use dynamic iteration count (Experimental)
### Suspend file watching
If you enable this option, Self-hosted LiveSync dismisses every file change or delete event.
Setting key: useDynamicIterationCount
From here, these commands are used while applying encryption passphrases or dropping histories.
This is an experimental feature and not recommended. If you enable this, the iteration count of the encryption will be dynamically determined. This is useful when you want to improve the performance.
Usually, you won't use these much, but sometimes they can be handy.
---
## Plugins and settings (beta)
**now writing from here onwards, sorry**
### Enable plugin synchronization
If you want to use this feature, you have to activate it with this switch.
---
### Sweep plugins automatically
Plugin sweep will run before replication automatically.
### 4. Fetch settings
### Sweep plugins periodically
Plugin sweep will run every minute.
#### Fetch config from remote server
### Notify updates
When replication is complete, a notification will be shown if a newer version of a plugin applied to this device is configured on another device.
Fetch the necessary settings from an already configured remote server.
### Device and Vault name
To save the plugins, you have to set a unique name for each device.
### 5. Minio,S3,R2
### Open
Open the "Plugins and their settings" dialog.
#### Endpoint URL
### Corrupted or missing data
![CorruptedData](../images/corrupted_data.png)
Setting key: endpoint
When Self-hosted LiveSync could not write a file to storage, the files are shown here. If you have the old data in your vault, changing it once will cure this. Alternatively, you can use the "File History" plugin.
#### Access Key
Setting key: accessKey
#### Secret Key
Setting key: secretKey
#### Region
Setting key: region
#### Bucket Name
Setting key: bucket
#### Use Custom HTTP Handler
Setting key: useCustomRequestHandler
Enable this if your Object Storage doesn't support CORS
#### Test Connection
#### Apply Settings
### 6. CouchDB
#### Server URI
Setting key: couchDB_URI
#### Username
Setting key: couchDB_USER
username
#### Password
Setting key: couchDB_PASSWORD
password
#### Database Name
Setting key: couchDB_DBNAME
#### Test Database Connection
Open database connection. If the remote database is not found and you have permission to create a database, the database will be created.
#### Validate Database Configuration
Checks and fixes any potential issues with the database config.
#### Apply Settings
## 4. Sync Settings
### 1. Synchronization Preset
#### Presets
Setting key: preset
Apply preset configuration
### 2. Synchronization Method
#### Sync Mode
Setting key: syncMode
#### Periodic Sync interval
Setting key: periodicReplicationInterval
Interval (sec)
#### Sync on Save
Setting key: syncOnSave
Starts synchronisation when a file is saved.
#### Sync on Editor Save
Setting key: syncOnEditorSave
When you save a file in the editor, start a sync automatically
#### Sync on File Open
Setting key: syncOnFileOpen
Forces the file to be synced when opened.
#### Sync on Startup
Setting key: syncOnStart
Automatically Sync all files when opening Obsidian.
#### Sync after merging file
Setting key: syncAfterMerge
Sync automatically after merging files
### 3. Update thinning
#### Batch database update
Setting key: batchSave
Reducing the frequency with which on-disk changes are reflected into the DB
#### Minimum delay for batch database updating
Setting key: batchSaveMinimumDelay
Seconds. Saving to the local database will be delayed by up to this value after we stop typing or saving.
#### Maximum delay for batch database updating
Setting key: batchSaveMaximumDelay
Saving will be performed forcefully after this number of seconds.
### 4. Deletion Propagation (Advanced)
#### Use the trash bin
Setting key: trashInsteadDelete
Move remotely deleted files to the trash, instead of deleting.
#### Keep empty folder
Setting key: doNotDeleteFolder
Should we keep folders that don't have any files inside?
### 5. Conflict resolution (Advanced)
#### (BETA) Always overwrite with a newer file
Setting key: resolveConflictsByNewerFile
Testing only - Resolve file conflicts by syncing newer copies of the file; this can overwrite modified files. Be warned.
#### Delay conflict resolution of inactive files
Setting key: checkConflictOnlyOnOpen
Should we only check for conflicts when a file is opened?
#### Delay merge conflict prompt for inactive files.
Setting key: showMergeDialogOnlyOnActive
Should we prompt you about conflicting files when a file is opened?
### 6. Sync settings via markdown (Advanced)
#### Filename
Setting key: settingSyncFile
Save settings to a markdown file. You will be notified when new settings arrive. You can set different files per platform.
#### Write credentials in the file
Setting key: writeCredentialsForSettingSync
(Not recommended) If set, credentials will be stored in the file.
#### Notify all setting files
Setting key: notifyAllSettingSyncFile
### 7. Hidden Files (Advanced)
#### Hidden file synchronization
#### Enable Hidden files sync
#### Scan for hidden files before replication
Setting key: syncInternalFilesBeforeReplication
#### Scan hidden files periodically
Setting key: syncInternalFilesInterval
Seconds, 0 to disable
## 5. Selector (Advanced)
### 1. Normal Files
#### Synchronising files
(RegExp) Empty to sync all files. Set filter as a regular expression to limit synchronising files.
#### Non-Synchronising files
(RegExp) If this is set, any changes to local and remote files that match this will be skipped.
#### Maximum file size
Setting key: syncMaxSizeInMB
(MB) If this is set, changes to local and remote files that are larger than this will be skipped. If the file becomes smaller again, a newer one will be used.
#### (Beta) Use ignore files
Setting key: useIgnoreFiles
If this is set, changes to local files which are matched by the ignore files will be skipped. Remote changes are determined using local ignore files.
#### Ignore files
Setting key: ignoreFiles
Comma separated `.gitignore, .dockerignore`
### 2. Hidden Files (Advanced)
#### Ignore patterns
#### Add default patterns
## 6. Customization sync (Advanced)
### 1. Customization Sync
#### Device name
Setting key: deviceAndVaultName
Unique name between all synchronized devices. To edit this setting, please disable customization sync once.
#### Per-file-saved customization sync
Setting key: usePluginSyncV2
If enabled, per-file efficient customization sync will be used. A small migration is needed when enabling this, and all devices should be updated to v0.23.18. Once this is enabled, compatibility with old versions is lost.
#### Enable customization sync
Setting key: usePluginSync
#### Scan customization automatically
Setting key: autoSweepPlugins
Scan customization before replicating.
#### Scan customization periodically
Setting key: autoSweepPluginsPeriodic
Scan customization every 1 minute.
#### Notify customized
Setting key: notifyPluginOrSettingUpdated
Notify when another device has been newly customised.
#### Open
Open the dialogue
## 7. Hatch
### 1. Reporting Issue
#### Make report to inform the issue
#### Write logs into the file
Setting key: writeLogToTheFile
Warning! This will have a serious impact on performance. And the logs will not be synchronised under the default name. Please be careful with logs; they often contain your confidential information.
### 2. Scram Switches
#### Suspend file watching
Setting key: suspendFileWatching
Stop watching for file changes.
#### Suspend database reflecting
Setting key: suspendParseReplicationResult
Stop reflecting database changes to storage files.
### 3. Recovery and Repair
#### Recreate missing chunks for all files
This will recreate chunks for all files. If there were missing chunks, this may fix the errors.
#### Resolve All conflicted files by the newer one
Resolve all conflicted files by the newer one. Caution: This will overwrite the older one, and cannot resurrect the overwritten one.
#### Verify and repair all files
Compare the content of files between the local database and the storage. If they do not match, you will be asked which one you want to keep.
#### Check and convert non-path-obfuscated files
### 4. Reset
#### Back to non-configured
#### Delete all customization sync data
## 8. Advanced (Advanced)
### 1. Memory cache
#### Memory cache size (by total items)
Setting key: hashCacheMaxCount
#### Memory cache size (by total characters)
Setting key: hashCacheMaxAmount
(Mega chars)
### 2. Local Database Tweak
#### Enhance chunk size
Setting key: customChunkSize
#### Use splitting-limit-capped chunk splitter
Setting key: enableChunkSplitterV2
If enabled, chunks will be split into no more than 100 items. However, dedupe is slightly weaker.
#### Use Segmented-splitter
Setting key: useSegmenter
If this is enabled, chunks will be split into semantically meaningful segments. Not all platforms support this feature.
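The description suggests a splitter along the lines of ECMAScript's `Intl.Segmenter`, which is exactly the kind of API that is missing on some platforms. A sketch under that assumption (not the plug-in's actual splitter):
```ts
// Sketch: split text on sentence boundaries via Intl.Segmenter, packing
// segments into chunks up to a maximum length. A single oversized segment
// simply becomes its own oversized chunk in this toy version.
function splitSemantically(text: string, maxChunkLength: number): string[] {
    const segmenter = new Intl.Segmenter(undefined, { granularity: "sentence" });
    const chunks: string[] = [];
    let current = "";
    for (const { segment } of segmenter.segment(text)) {
        if (current && current.length + segment.length > maxChunkLength) {
            chunks.push(current);
            current = "";
        }
        current += segment;
    }
    if (current) chunks.push(current);
    return chunks;
}
```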
### 3. Transfer Tweak
#### Fetch chunks on demand
Setting key: readChunksOnline
(Formerly `Read chunks online`.) If this option is enabled, LiveSync reads chunks directly from the remote instead of replicating them locally. Increasing `Custom chunk size` is recommended.
#### Batch size of on-demand fetching
Setting key: concurrencyOfReadChunksOnline
#### The delay for consecutive on-demand fetches
Setting key: minimumIntervalOfReadChunksOnline
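Taken together, these three settings describe batched, rate-limited reads: missing chunks are requested in batches, with a minimum interval between consecutive requests. A rough sketch (`fetchChunks` is a hypothetical remote call, not the plug-in's real API):
```ts
// Illustrative sketch of on-demand chunk fetching.
async function readChunksOnDemand(
    ids: string[],
    batchSize: number, // concurrencyOfReadChunksOnline
    minimumIntervalMs: number, // minimumIntervalOfReadChunksOnline
    fetchChunks: (ids: string[]) => Promise<Map<string, string>>
): Promise<Map<string, string>> {
    const result = new Map<string, string>();
    let lastRequest = 0;
    for (let i = 0; i < ids.length; i += batchSize) {
        // Keep the configured delay between consecutive fetches.
        const wait = lastRequest + minimumIntervalMs - Date.now();
        if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
        lastRequest = Date.now();
        for (const [id, data] of await fetchChunks(ids.slice(i, i + batchSize))) {
            result.set(id, data);
        }
    }
    return result;
}
```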
## 9. Power users (Power User)
### 1. Remote Database Tweak
#### Incubate Chunks in Document (Beta)
Setting key: useEden
If enabled, newly created chunks are temporarily kept within the document, and graduated to become independent chunks once stabilised.
#### Maximum Incubating Chunks
Setting key: maxChunksInEden
The maximum number of chunks that can be incubated within the document. Chunks exceeding this number will immediately graduate to independent chunks.
#### Maximum Incubating Chunk Size
Setting key: maxTotalLengthInEden
The maximum total size of chunks that can be incubated within the document. Chunks exceeding this size will immediately graduate to independent chunks.
#### Maximum Incubation Period
Setting key: maxAgeInEden
The maximum duration for which chunks can be incubated within the document. Chunks exceeding this period will graduate to independent chunks.
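A sketch of how these three limits could drive graduation, assuming the age limit applies per chunk while the count and size limits push out the oldest chunks first (the types and field names are illustrative, not the real document structure):
```ts
// Sketch only: deciding which incubated chunks "graduate".
interface EdenChunk {
    id: string;
    data: string;
    bornAt: number; // epoch milliseconds when the chunk entered the document
}

function selectGraduates(
    eden: EdenChunk[],
    limits: { maxChunksInEden: number; maxTotalLengthInEden: number; maxAgeInEdenMs: number }
): EdenChunk[] {
    const now = Date.now();
    // Chunks past the incubation period always graduate.
    const aged = eden.filter((c) => now - c.bornAt > limits.maxAgeInEdenMs);
    const rest = eden
        .filter((c) => now - c.bornAt <= limits.maxAgeInEdenMs)
        .sort((a, b) => a.bornAt - b.bornAt); // oldest first
    const totalLength = (cs: EdenChunk[]) => cs.reduce((sum, c) => sum + c.data.length, 0);
    const overflow: EdenChunk[] = [];
    // Evict the oldest chunks until both the count and size limits are met.
    while (rest.length > limits.maxChunksInEden || totalLength(rest) > limits.maxTotalLengthInEden) {
        overflow.push(rest.shift()!);
    }
    return [...aged, ...overflow];
}
```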
#### Data Compression (Experimental)
Setting key: enableCompression
### 2. CouchDB Connection Tweak
#### Batch size
Setting key: batch_size
Number of changes to sync at a time. Defaults to 50. Minimum is 2.
#### Batch limit
Setting key: batches_limit
Number of batches to process at a time. Defaults to 40. Minimum is 2. This along with batch size controls how many docs are kept in memory at a time.
#### Use timeouts instead of heartbeats
Setting key: useTimeouts
If this option is enabled, PouchDB will hold the connection open for 60 seconds, and if no change arrives in that time, close and reopen the socket, instead of holding it open indefinitely. Useful when a proxy limits request duration but can increase resource usage.
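`batch_size`, `batches_limit`, `heartbeat`, and `timeout` are documented PouchDB replication/changes options, so these settings plausibly map onto them roughly as follows (a sketch, not the plug-in's actual replicator; `localDb` is assumed to have the replication plug-in loaded):
```ts
// Sketch: mapping the three settings above onto PouchDB replication options.
function startReplication(localDb: any, remoteUri: string, useTimeouts: boolean) {
    return localDb.replicate.to(remoteUri, {
        live: true,
        retry: true,
        batch_size: 50, // "Batch size": changes synced at a time
        batches_limit: 40, // "Batch limit": batches processed at a time
        // "Use timeouts instead of heartbeats": give up after 60s of silence
        // and reopen the socket, instead of keeping it alive with heartbeats.
        ...(useTimeouts ? { heartbeat: false, timeout: 60_000 } : {}),
    });
}
```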
### 3. Configuration Encryption
#### Encrypting sensitive configuration items
Setting key: configPassphraseStore
#### Passphrase of sensitive configuration items
Setting key: configPassphrase
This passphrase will not be copied to another device. It will be set to `Default` until you configure it again.
### 4. Developer
#### Enable Developers' Debug Tools.
Setting key: enableDebugTools
Requires restart of Obsidian
## 10. Patches (Edge Case)
### 1. Compatibility (Metadata)
#### Do not keep metadata of deleted files.
Setting key: deleteMetadataOfDeletedFiles
#### Delete old metadata of deleted files on start-up
Setting key: automaticallyDeleteMetadataOfDeletedFiles
(Days passed, 0 to disable automatic-deletion)
### 2. Compatibility (Conflict Behaviour)
#### Always prompt merge conflicts
Setting key: disableMarkdownAutoMerge
Should we prompt you for every single merge, even if we can safely merge automatically?
#### Apply Latest Change if Conflicting
Setting key: writeDocumentsIfConflicted
Enable this option to automatically apply the most recent change to documents, even when it conflicts.
### 3. Compatibility (Database structure)
#### (Obsolete) Use an old adapter for compatibility
Setting key: useIndexedDBAdapter
Before v0.17.16, we used an old adapter for the local database. Now the new adapter is preferred. However, switching requires rebuilding the local database. Please disable this toggle when you have enough time. If you leave it enabled, you will also be asked to disable it while fetching from the remote database.
#### Compute revisions for chunks (Previous behaviour)
Setting key: doNotUseFixedRevisionForChunks
If this is enabled, all chunks will be stored with a revision derived from their content. (Previous behaviour)
#### Handle files as Case-Sensitive
Setting key: handleFilenameCaseSensitive
If this is enabled, all files are handled as case-sensitive (previous behaviour).
### 4. Compatibility (Internal API Usage)
#### Scan changes on customization sync
Setting key: watchInternalFileChanges
Do not use internal API
### 5. Edge case addressing (Database)
#### Database suffix
Setting key: additionalSuffixOfDatabaseName
LiveSync cannot handle multiple vaults with the same name unless they have different suffixes. This should be configured automatically.
#### The Hash algorithm for chunk IDs (Experimental)
Setting key: hashAlg
### 6. Edge case addressing (Behaviour)
#### Fetch database with previous behaviour
Setting key: doNotSuspendOnFetching
#### Keep empty folder
Setting key: doNotDeleteFolder
Should we keep folders that do not have any files inside?
### 7. Edge case addressing (Processing)
#### Do not split chunks in the background
Setting key: disableWorkerForGeneratingChunks
If this is toggled on (background splitting disabled), chunks will be split on the UI thread (previous behaviour).
#### Process small files in the foreground
Setting key: processSmallFilesInUIThread
If enabled, files under 1 KB will be processed in the UI thread.
### 8. Compatibility (Trouble addressed)
#### Do not check configuration mismatch before replication
Setting key: disableCheckingConfigMismatch
## 11. Maintenance
### 1. Scram!
#### Lock Server
Lock the remote server to prevent synchronisation with other devices.
#### Emergency restart
Disables all synchronisation and restarts Obsidian.
### 2. Syncing
#### Resend
Resend all chunks to the remote.
#### Reset journal received history
Initialise the journal received history. On the next sync, every item except those sent by this device will be downloaded again.
#### Reset journal sent history
Initialise the journal sent history. On the next sync, every item except those received by this device will be sent again.
### 3. Rebuilding Operations (Local)
#### Fetch from remote
Restore or reconstruct local database from remote.
#### Fetch rebuilt DB (Save local documents before)
Restore or reconstruct local database from remote database but use local chunks.
### 4. Total Overhaul
#### Rebuild everything
Rebuild local and remote database with local files.
### 5. Rebuilding Operations (Remote Only)
#### Perform cleanup
Reduces storage space by discarding all non-latest revisions. This requires the same amount of free space on the remote server and the local client.
#### Overwrite remote
Overwrite remote with local DB and passphrase.
#### Reset all journal counter
Initialise all journal history. On the next sync, every item will be received and sent.
#### Purge all journal counter
Purge all download/upload cache.
#### Fresh Start Wipe
Delete all data on the remote server.
### 6. Deprecated
#### Run database cleanup
Attempt to shrink the database by deleting unused chunks. This may not work consistently. Use `Rebuild everything` under Total Overhaul instead.
### 7. Reset
#### Delete local database to reset or uninstall Self-hosted LiveSync

View File

@@ -5,10 +5,15 @@
- [Setup a CouchDB server](#setup-a-couchdb-server)
- [Table of Contents](#table-of-contents)
- [1. Prepare CouchDB](#1-prepare-couchdb)
- [A. Using Docker](#a-using-docker)
- [1. Prepare](#1-prepare)
- [2. Run docker container](#2-run-docker-container)
- [B. Using Docker Compose](#b-using-docker-compose)
- [1. Prepare](#1-prepare-1)
- [2. Creating Compose file](#2-create-a-docker-composeyml-file-with-the-following-added-to-it)
- [3. Boot check](#3-run-the-docker-compose-file-to-boot-check)
- [4. Starting Docker Compose in background](#4-run-the-docker-compose-file-in-the-background)
- [C. Install CouchDB directly](#c-install-couchdb-directly)
- [2. Run couchdb-init.sh for initialise](#2-run-couchdb-initsh-for-initialise)
- [3. Expose CouchDB to the Internet](#3-expose-couchdb-to-the-internet)
- [4. Client Setup](#4-client-setup)
@@ -21,43 +26,95 @@
---
## 1. Prepare CouchDB
### A. Using Docker
#### 1. Prepare
```bash
# Adding environment variables.
export hostname=localhost:5984
export username=goojdasjdas #Please change as you like.
export password=kpkdasdosakpdsa #Please change as you like
# Creating the save data & configuration directories.
mkdir couchdb-data
mkdir couchdb-etc
```
#### 2. Run docker container
1. Boot Check.
```
$ docker run --name couchdb-for-ols --rm -it -e COUCHDB_USER=${username} -e COUCHDB_PASSWORD=${password} -v ${PWD}/couchdb-data:/opt/couchdb/data -v ${PWD}/couchdb-etc:/opt/couchdb/etc/local.d -p 5984:5984 couchdb
```
> [!WARNING]
> If your container threw an error or exited unexpectedly, please check the permission of couchdb-data, and couchdb-etc.
> Once CouchDB starts, these directories will be owned by uid:`5984`. Please chown it for that uid again.
2. Enable it in the background
```
$ docker run --name couchdb-for-ols -d --restart always -e COUCHDB_USER=${username} -e COUCHDB_PASSWORD=${password} -v ${PWD}/couchdb-data:/opt/couchdb/data -v ${PWD}/couchdb-etc:/opt/couchdb/etc/local.d -p 5984:5984 couchdb
```
Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
### B. Using Docker Compose
#### 1. Prepare
```
# Creating the save data & configuration directories.
mkdir couchdb-data
mkdir couchdb-etc
```
#### 2. Create a `docker-compose.yml` file with the following added to it
```
services:
couchdb:
image: couchdb:latest
container_name: couchdb-for-ols
user: 5984:5984
environment:
- COUCHDB_USER=<INSERT USERNAME HERE> #Please change as you like.
- COUCHDB_PASSWORD=<INSERT PASSWORD HERE> #Please change as you like.
volumes:
- ./couchdb-data:/opt/couchdb/data
- ./couchdb-etc:/opt/couchdb/etc/local.d
ports:
- 5984:5984
restart: unless-stopped
```
#### 3. Run the Docker Compose file to boot check
```
docker compose up
# Or if using the old version
docker-compose up
```
> [!WARNING]
> If your container threw an error or exited unexpectedly, please check the permission of couchdb-data, and couchdb-etc.
> Once CouchDB starts, these directories will be owned by uid:`5984`. Please chown it for that uid again.
#### 4. Run the Docker Compose file in the background
If all went well and didn't throw any errors, `CTRL+C` out of it, and then run this command
```
docker compose up -d
# Or if using the old version
docker-compose up -d
```
Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
### C. Install CouchDB directly
Please refer to the [official document](https://docs.couchdb.org/en/stable/install/index.html). However, we do not have to configure it fully. Just the administrator needs to be configured.
## 2. Run couchdb-init.sh for initialise
```
curl -s https://raw.githubusercontent.com/vrtmrz/obsidian-livesync/main/utils/couchdb/couchdb-init.sh | bash
```
If it results like the following:
```
-- Configuring CouchDB by REST APIs... -->
{"ok":true}
@@ -75,15 +132,25 @@ If it results like following:
Your CouchDB has been initialised successfully. If you want to do this manually, please read the script.
If you are using Docker Compose and the above command does not work or displays `ERROR: Hostname missing`, you can try running the following command, replacing the placeholders with your own values:
```
curl -s https://raw.githubusercontent.com/vrtmrz/obsidian-livesync/main/utils/couchdb/couchdb-init.sh | hostname=http://<YOUR SERVER IP>:5984 username=<INSERT USERNAME HERE> password=<INSERT PASSWORD HERE> bash
```
## 3. Expose CouchDB to the Internet
- You can skip this step if you are using CouchDB only on an intranet and only with desktop devices.
- For mobile devices, Obsidian requires a valid SSL certificate. Usually, this means exposing CouchDB to the Internet.
We can use whatever solution works; for simplicity, the following sample uses Cloudflare Zero Trust for testing.
```
cloudflared tunnel --url http://localhost:5984
```
You will then get the following output:
```
$ cloudflared tunnel --url http://localhost:5984
2024-02-14T10:35:25Z INF Thank you for trying Cloudflare Tunnel. Doing so, without a Cloudflare account, is a quick way to experiment and try it out. However, be aware that these account-less Tunnels have no uptime guarantee. If you intend to use Tunnels in production you should use a pre-created named tunnel by following: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps
2024-02-14T10:35:25Z INF Requesting new quick Tunnel on trycloudflare.com...
2024-02-14T10:35:26Z INF +--------------------------------------------------------------------------------------------+
@@ -94,19 +161,33 @@ $ cloudflared tunnel --url http://localhost:5984
:
:
```
Now `https://tiles-photograph-routine-groundwater.trycloudflare.com` is our server. Please put the tunnel into the background for now.
## 4. Client Setup
> [!TIP]
> Manual configuration is now discouraged for several reasons. However, if you want to do so, please use the `Setup wizard`. The recommended extra configurations will also be set.
### 1. Generate the setup URI on a desktop device or server
```bash
export hostname=https://tiles-photograph-routine-groundwater.trycloudflare.com #Point to your vault
export database=obsidiannotes #Please change as you like
export passphrase=dfsapkdjaskdjasdas #Please change as you like
export username=johndoe
export password=abc123
deno run -A https://raw.githubusercontent.com/vrtmrz/obsidian-livesync/main/utils/flyio/generate_setupuri.ts
```
> [!TIP]
> What is the `passphrase`? Is it different from `uri_passphrase`?
> Yes, the `passphrase` we have exported now is the End-to-End Encryption passphrase.
> The `uri_passphrase` used in `generate_setupuri.ts` is a different one; it is used to decrypt the Setup-URI when using it.
> Why: I (vorotamoroz) think that the passphrase of the Setup-URI should be different from the E2EE passphrase, to prevent exposure caused by operational errors or the possibility of something evil in our environment. On top of that, I believe that it is desirable for the Setup-URI passphrase to be random. The Setup-URI is inevitably long, so it goes through the clipboard. Its passphrase should not go through the same path, so it should essentially be typed manually.
> Hence, if we leave `uri_passphrase` empty, `generate_setupuri.ts` generates an adjective-noun-randomnumber passphrase so that we can remember it without going through the clipboard.
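A minimal sketch of such a memorable passphrase generator (the word lists are illustrative stand-ins, and a real implementation should prefer a cryptographically secure random source over `Math.random()`):
```ts
// Sketch: an "adjective-noun-randomnumber" passphrase as described above.
const adjectives = ["patient", "quiet", "brave", "mellow"];
const nouns = ["haze", "river", "meadow", "ember"];

function generateMemorablePassphrase(): string {
    const pick = (list: string[]) => list[Math.floor(Math.random() * list.length)];
    const n = Math.floor(Math.random() * 100);
    return `${pick(adjectives)}-${pick(nouns)}-${n}`;
}
```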
You will then get the following output:
```bash
obsidian://setuplivesync?settings=%5B%22tm2DpsOE74nJAryprZO2M93wF%2Fvg.......4b26ed33230729%22%5D
Your passphrase of Setup-URI is: patient-haze
@@ -206,4 +287,4 @@ entryPoints:
address: ":443"
...
```
```

View File

@@ -1,10 +1,24 @@
# Notes on Terminology, Spelling, Vocabulary Conventions
## Spelling and Vocabulary conventions
1. Almost all of the English words are written in British English. For example, "organisation" instead of "organization", "synchronisation" instead of "synchronization", etc. This convention originated from the author's personal preference but is now maintained for consistency.
2. Idiomatic terms, such as those used in HTML, CSS, and JavaScript, are usually aligned with the language used in the technology. For example, "color" instead of "colour", "program" instead of "programme", etc. Terms used for attributes, properties, and methods are notable examples.
<!-- Please feel free to write any terms that should be mentioned. And please make pull request. I would love to fill the rest. -->
<!-- ### Chunks -->
3. We use `dialogue` in documentation for consistency. While `dialog` may appear in source code, particularly in class names, method names, and attributes (following technical conventions in No. 2), we consistently use `dialogue` for user-facing messages and general documentation text. This approach balances No. 1 with No. 2.
4. Contractions are not used. For example, "do not" instead of "don't", "cannot" instead of "can't", etc., especially `'d`.
- We may encounter difficulties with tenses.
5. However, try to use affirmative forms: `Discard` instead of `Do not keep`, `Continue` instead of `Do not stop`, etc.
- Some languages, such as Japanese, have a different meaning for `yes` and `no` between affirmative and negative questions.
## Terminology
- Self-hosted LiveSync
- This plug-in name. `Self-hosted` is one word.
- LiveSync
- Very confusing term.
- As a shortened form of `Self-hosted LiveSync`.
- As the name of a synchronisation mode. This should be changed to `Continuous`, in contrast to `Periodic`.

View File

@@ -1,8 +1,16 @@
<!-- 2024-02-15 -->
# Tips and Troubleshooting
- [Tips and Troubleshooting](#tips-and-troubleshooting)
- [Tips](#tips)
- [CORS avoidance](#cors-avoidance)
- [CORS configuration with reverse proxy](#cors-configuration-with-reverse-proxy)
- [Nginx](#nginx)
- [Nginx and subdirectory](#nginx-and-subdirectory)
- [Caddy](#caddy)
- [Caddy and subdirectory](#caddy-and-subdirectory)
- [Apache](#apache)
- [Show all setting panes](#show-all-setting-panes)
- [How to resolve `Tweaks Mismatched of Changed`](#how-to-resolve-tweaks-mismatched-of-changed)
- [Notable bugs and fixes](#notable-bugs-and-fixes)
- [Binary files get bigger on iOS](#binary-files-get-bigger-on-ios)
- [Some setting name has been changed](#some-setting-name-has-been-changed)
@@ -14,25 +22,145 @@
- [Why are the logs volatile and ephemeral?](#why-are-the-logs-volatile-and-ephemeral)
- [Some network logs are not written into the file.](#some-network-logs-are-not-written-into-the-file)
- [If a file were deleted or trimmed, the capacity of the database should be reduced, right?](#if-a-file-were-deleted-or-trimmed-the-capacity-of-the-database-should-be-reduced-right)
- [How to launch the DevTools](#how-to-launch-the-devtools)
- [On Desktop Devices](#on-desktop-devices)
- [On Android](#on-android)
- [On iOS, iPadOS devices](#on-ios-ipados-devices)
- [How can I use the DevTools?](#how-can-i-use-the-devtools)
- [Checking the network log](#checking-the-network-log)
- [Troubleshooting](#troubleshooting)
- [While using Cloudflare Tunnels, often Obsidian API fallback and `524` error occurs.](#while-using-cloudflare-tunnels-often-obsidian-api-fallback-and-524-error-occurs)
- [On the mobile device, cannot synchronise on the local network!](#on-the-mobile-device-cannot-synchronise-on-the-local-network)
- [I think that something bad happening on the vault...](#i-think-that-something-bad-happening-on-the-vault)
- [Old tips](#old-tips)
<!-- - -->
## Tips
### CORS avoidance
If we are unable to configure CORS properly for any reason (for example, if we cannot configure non-administered network devices), we may choose to ignore CORS.
To use the Obsidian API (also known as the Non-Native API) to bypass CORS, we can enable the toggle ``Use Request API to avoid `inevitable` CORS problem``.
<!-- Add **Long explanation of CORS** here for integrity -->
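For background, Obsidian's `requestUrl()` performs the HTTP request natively rather than through the embedded browser, so the browser's CORS checks never apply. A sketch of a request made that way (the URL and credentials are placeholders, and this is not the plug-in's actual code path):
```ts
// Sketch: probing CouchDB through Obsidian's request API, which bypasses CORS.
import { requestUrl } from "obsidian";

async function probeCouchDb(uri: string, user: string, pass: string) {
    const res = await requestUrl({
        url: uri,
        method: "GET",
        headers: { Authorization: "Basic " + btoa(`${user}:${pass}`) },
    });
    return res.json; // e.g. CouchDB's welcome document
}
```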
### CORS configuration with reverse proxy
- IMPORTANT: CouchDB handles CORS by itself. Do not process CORS on the reverse
proxy.
- Do not process `OPTIONS` requests on the reverse proxy!
- Make sure `host` and `X-Forwarded-For` headers are forwarded to the CouchDB.
- If you are using a subdirectory, make sure to handle it properly. More
detailed information is in the
[CouchDB documentation](https://docs.couchdb.org/en/stable/best-practices/reverse-proxies.html).
Minimal configurations are as follows:
#### Nginx
```nginx
location / {
proxy_pass http://localhost:5984;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
#### Nginx and subdirectory
```nginx
location /couchdb {
rewrite ^ $request_uri;
rewrite ^/couchdb/(.*) /$1 break;
proxy_pass http://localhost:5984$uri;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /_session {
proxy_pass http://localhost:5984/_session;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```
#### Caddy
```caddyfile
domain.com {
reverse_proxy localhost:5984
}
```
#### Caddy and subdirectory
```caddyfile
domain.com {
reverse_proxy /couchdb/* localhost:5984
reverse_proxy /_session/* localhost:5984/_session
}
```
#### Apache
Sorry, Apache is not recommended for CouchDB. Omit the configuration from here.
Please refer to the
[Official documentation](https://docs.couchdb.org/en/stable/best-practices/reverse-proxies.html#reverse-proxying-with-apache-http-server).
### Show all setting panes
Not all panes are shown by default. To show all panes, please toggle everything in
`🧙‍♂️ Wizard` -> `Enable extra and advanced features`.
For your information, all the panes are as follows:
![All Panes](all_toggles.png)
### How to resolve `Tweaks Mismatched of Changed`
(Since v0.23.17)
If you have changed some configurations or tweaks which should be unified between the devices, you will be asked at the next synchronisation how (or whether) to reflect them on other devices. This also occurs on the device where the changes were made, to prevent unexpected configuration changes from propagating unwantedly.
(We may be thankful for this behaviour if we have synchronised, or backed up and restored, Self-hosted LiveSync. At least it was so for me.)
The following dialogue will be shown: ![Dialogue](tweak_mismatch_dialogue.png)
- If we want to propagate the setting of the device, we should choose
`Update with mine`.
- On other devices, we should choose `Use configured` to accept and use the already-configured settings.
- `Dismiss` can postpone a decision. However, we cannot synchronise until we
have decided.
Rest assured that in most cases we can choose `Use configured`. (Unless you are
certain that you have not changed the configuration).
If we see it for the first time, it reflects the settings of the device that has
been synchronised with the remote for the first time since the upgrade.
Probably, we can accept that.
<!-- Add here -->
## Notable bugs and fixes
### Binary files get bigger on iOS
- Reported at: v0.20.x
- Fixed at: v0.21.2 (Fixed but not reviewed)
- Required action: larger files will not be fixed automatically; please perform `Verify and repair all files`. If our local database and storage do not match, we will be asked which one to apply.
### Some setting name has been changed
- Fixed at: v0.22.6
| Previous name | New name |
@@ -46,103 +174,191 @@
### Why `Use an old adapter for compatibility` is somehow enabled in my vault?
Because you are a compassionate and experienced user. Before v0.17.16, we used an old adapter for the local database. At that time, the current default adapter was not yet stable. The new adapter has better performance and a new feature, purging. Therefore, we should use the new adapter, and it is now the default.
However, switching from the old adapter to the new one requires some conversion or local database rebuilding, and it takes some time. It was a long time ago now, but we once inconvenienced everyone in a hurry when we changed the format of our database. For these reasons, this toggle is automatically on if we have upgraded from a vault which used the old adapter.
When you rebuild everything or fetch from the remote again, you will be asked to switch this.
Therefore, experienced users (especially those stable enough not to have to rebuild the database) may have this toggle enabled in their Vault. Please disable it when you have enough time.
### ZIP (or any extensions) files were not synchronised. Why?
It depends on what Obsidian detects. Toggling `Detect all extensions` in `File and links` (an Obsidian setting) may help us.
### I want to report an issue, but you said you need a `Report`. How do I make it?
We can copy the report to the clipboard by pressing the `Make report` button on the `Hatch` pane.
![Screenshot](../images/hatch.png)
### Where can I check the log?
We can launch the log pane with `Show log` on the command palette. And if something has troubled you, please enable `Verbose Log` on the `General Setting` pane.
However, the logs are not kept for long and are cleared on restart. If you want to check the logs, please enable `Write logs into the file` temporarily.
![ScreenShot](../images/write_logs_into_the_file.png)
> [!IMPORTANT]
>
> - Writing logs into the file will impact the performance.
> - Please make sure that you have erased all your confidential information before reporting an issue.
### Why are the logs volatile and ephemeral?
To avoid unexpected exposure to our confidential things.
### Some network logs are not written into the file.
Especially, the CORS error will be reported to the plug-in as a general error for security reasons, so we cannot detect and log it. We are only able to investigate it by [Checking the network log](#checking-the-network-log).
### If a file were deleted or trimmed, the capacity of the database should be reduced, right?
No; even if files are deleted, chunks are not deleted. Self-hosted LiveSync splits files into multiple chunks and transfers only the newly created ones. This behaviour lets us use less traffic. And the chunks are shared between files to reduce the total usage of the database.
One more thing: we can handle conflicts on any device, even if they happened on other devices. This means that conflicts can surface in the past, relative to the time we have synchronised. Hence we cannot collect and delete unused chunks, even ones that are not currently referenced.
Only `Rebuild everything` shrinks the database size reliably and effectively. But do not worry if we have synchronised well: we have the actual, real files. It only takes a bit of time and traffic.
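A toy model of why deletion does not reclaim space, assuming content-addressed chunks (every name here is illustrative):
```ts
// Toy model: chunks are content-addressed and shared between files, so
// deleting a file removes only its reference list; the chunks remain because
// another file, or an older revision, may still point at them.
const chunks = new Map<string, string>(); // chunk id -> chunk content
const files = new Map<string, string[]>(); // file path -> ids of its chunks

function storeFile(path: string, pieces: string[], hash: (s: string) => string) {
    const ids = pieces.map((piece) => {
        const id = hash(piece);
        if (!chunks.has(id)) chunks.set(id, piece); // only new chunks are transferred
        return id;
    });
    files.set(path, ids);
}

function deleteFile(path: string) {
    files.delete(path); // the chunks stay behind, so the database does not shrink
}
```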
### How to launch the DevTools
#### On Desktop Devices
We can launch the DevTools by pressing `ctrl`+`shift`+`i` (`Command`+`shift`+`i` on Mac).
#### On Android
Please refer to [Remote debug Android devices](https://developer.chrome.com/docs/devtools/remote-debugging/).
Once the DevTools have been launched, everything operates the same as on a PC.
#### On iOS, iPadOS devices
If we have a Mac, we can inspect from Safari on the Mac. Please refer to [Inspecting iOS and iPadOS](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios).
### How can I use the DevTools?
#### Checking the network log
1. Open the network pane.
2. Find the requests marked in red.\
   ![Errored](../images/devtools1.png)
3. Capture the `Headers`, `Payload`, and `Response`. **Please be sure to keep important information confidential**. If the `Response` contains secrets, you can omit it. Note: Headers contain some credentials. **The path of the request URL, Remote Address, authority, and authorization must be concealed.**\
   ![Concealed sample](../images/devtools2.png)
## Troubleshooting
<!-- Add here -->
### While using Cloudflare Tunnels, often Obsidian API fallback and `524` error occurs.
A `524` error occurs when the request to the server is not completed within a
`specified time`. This is a timeout error from Cloudflare. From the reported
issue, it seems to be 100 seconds. (#627).
Therefore, this error returns from Cloudflare, not from the server. Hence, the
result contains no CORS field. It means that this response makes the Obsidian
API fallback.
However, even if the Obsidian API fallback occurs, the request is still not
completed within the `specified time`, 100 seconds.
To solve this issue, we need to configure the timeout settings.
Please enable the toggle in `💪 Power users` -> `CouchDB Connection Tweak` ->
`Use timeouts instead of heartbeats`.
### On the mobile device, cannot synchronise on the local network!
Obsidian mobile cannot connect to a non-secure endpoint, such as one starting with `http://`. Check your CouchDB URI. A self-signed certificate also cannot be used.
### I think that something bad happening on the vault...
Place `redflag.md` at the top of the vault, and restart Obsidian. The simplest way is to create a new note and rename it to `redflag`. Of course, we can put it there without Obsidian.
If there is a `redflag.md`, Self-hosted LiveSync suspends all database and storage processes.
There are some options to use `redflag.md`:
| Filename | Human-Friendly Name | Description |
| ------------- | ------------------- | ------------------------------------------------------------------------------------ |
| `redflag.md` | - | Suspends all processes. |
| `redflag2.md` | `flag_rebuild.md` | Suspends all processes, and rebuild both local and remote databases by local files. |
| `redflag3.md` | `flag_fetch.md` | Suspends all processes, discard the local database, and fetch from the remote again. |
When fetching everything from the remote or performing a rebuild, Obsidian is restarted once for safety reasons. At that time, Self-hosted LiveSync uses these files to determine whether the process should be carried out. (The use of normal markdown files is a trick to externally force cancellation in the event of faults in the rebuild or fetch function itself, especially on mobile devices.) This mechanism is also used for set-up. And just for information, these files are also not subject to synchronisation.
However, occasionally the deletion of these files may fail. This should generally work normally after restarting Obsidian. (As far as I can observe.)
### Old tips
- Rarely, a file in the database could be corrupted. The plugin will not write to local storage when a file looks corrupted. If a local version of the file is on your device, the corruption could be fixed by editing the local file and synchronising it. But if the file does not exist on any of your devices, then it cannot be rescued. In this case, you can delete these items from the settings dialogue.
- To stop the boot-up sequence (e.g., for fixing problems on databases), you can put a `redflag.md` file (or directory) at the root of your vault. Tip for iOS: a redflag directory can be created at the root of the vault using the Files application.
- Also, with `redflag2.md` placed, we can automatically rebuild both the local and the remote databases during the boot-up sequence. With `redflag3.md`, we can discard only the local database and fetch from the remote again.
- Q: The database is growing, how can I shrink it down?
  A: Each of the docs is saved with its past 100 revisions for detecting and resolving conflicts. Picture that one device has been offline for a while, and comes online again. The device has to compare its notes with the remotely saved ones. If there exists a historic revision in which the note used to be identical, it can be updated safely (like a git fast-forward). Even if that is not in the revision histories, we only have to check the differences after the revision that both devices commonly have. This is like git's conflict-resolving method. So, we have to make the database again, like an enlarged git repo, if we want to solve the root of the problem.
- More technical information is in the [Technical Information](tech_info.md).
- If you want to synchronise files without Obsidian, you can use [filesystem-livesync](https://github.com/vrtmrz/filesystem-livesync).
- A WebClipper is also available on the Chrome Web Store: [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf). The repo is here: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip). (Docs are a work in progress.)

View File

@@ -4,75 +4,73 @@ import esbuild from "esbuild";
import process from "process";
import builtins from "builtin-modules";
import sveltePlugin from "esbuild-svelte";
import sveltePreprocess from "svelte-preprocess";
import { sveltePreprocess } from "svelte-preprocess";
import fs from "node:fs";
// import terser from "terser";
import { minify } from "terser";
import inlineWorkerPlugin from "esbuild-plugin-inline-worker";
const banner = `/*
THIS IS A GENERATED/BUNDLED FILE BY ESBUILD AND TERSER
if you want to view the source, please visit the github repository of this plugin
*/
`;
import { terserOption } from "./terser.config.mjs";
import path from "node:path";
const prod = process.argv[2] === "production";
const dev = process.argv[2] === "dev";
const keepTest = !prod || dev;
const terserOpt = {
sourceMap: !prod
? {
url: "inline",
}
: {},
format: {
indent_level: 2,
beautify: true,
comments: "some",
ecma: 2018,
preamble: banner,
webkit: true,
},
parse: {
// parse options
},
compress: {
// compress options
defaults: false,
evaluate: true,
inline: 3,
join_vars: true,
loops: true,
passes: prod ? 4 : 1,
reduce_vars: true,
reduce_funcs: true,
arrows: true,
collapse_vars: true,
comparisons: true,
lhs_constants: true,
hoist_props: true,
side_effects: true,
if_return: true,
ecma: 2018,
unused: true,
},
ecma: 2018, // specify one of: 5, 2015, 2016, etc.
enclose: false, // or specify true, or "args:values"
keep_classnames: true,
keep_fnames: true,
ie8: false,
module: false,
// nameCache: null, // or specify a name cache object
safari10: false,
toplevel: false,
};
const keepTest = true; //!prod;
const manifestJson = JSON.parse(fs.readFileSync("./manifest.json") + "");
const packageJson = JSON.parse(fs.readFileSync("./package.json") + "");
const updateInfo = JSON.stringify(fs.readFileSync("./updates.md") + "");
const PATHS_TEST_INSTALL = process.env?.PATHS_TEST_INSTALL || "";
const PATH_TEST_INSTALL = PATHS_TEST_INSTALL.split(path.delimiter).map(p => p.trim()).filter(p => p.length);
if (!prod) {
if (PATH_TEST_INSTALL) {
console.log(`Built files will be copied to ${PATH_TEST_INSTALL}`);
} else {
console.log("Development build: You can install the plug-in to Obsidian for testing by exporting the PATHS_TEST_INSTALL environment variable with the paths to your vault plugins directories separated by your system path delimiter (':' on Unix, ';' on Windows).");
}
} else {
console.log("Production build");
}
const moduleAliasPlugin = {
name: "module-alias",
setup(build) {
build.onResolve({ filter: /.(dev)(.ts|)$/ }, (args) => {
// console.log(args.path);
if (prod) {
let prodTs = args.path.replace(".dev", ".prod");
const statFile = prodTs.endsWith(".ts") ? prodTs : prodTs + ".ts";
const realPath = path.join(args.resolveDir, statFile);
console.log(`Checking ${statFile}`);
if (fs.existsSync(realPath)) {
console.log(`Replaced ${args.path} with ${prodTs}`);
return {
path: realPath,
namespace: "file",
};
}
}
return null;
});
build.onResolve({ filter: /.(platform)(.ts|)$/ }, (args) => {
// console.log(args.path);
if (prod) {
let prodTs = args.path.replace(".platform", ".obsidian");
const statFile = prodTs.endsWith(".ts") ? prodTs : prodTs + ".ts";
const realPath = path.join(args.resolveDir, statFile);
console.log(`Checking ${statFile}`);
if (fs.existsSync(realPath)) {
console.log(`Replaced ${args.path} with ${prodTs}`);
return {
path: realPath,
namespace: "file",
};
}
}
return null;
});
},
};
/** @type esbuild.Plugin[] */
const plugins = [
{
@@ -81,15 +79,27 @@ const plugins = [
let count = 0;
build.onEnd(async (result) => {
if (count++ === 0) {
console.log("first build:", result);
console.log("first build:");
if (prod) {
console.log("MetaFile:");
if (result.metafile) {
fs.writeFileSync("meta.json", JSON.stringify(result.metafile));
let text = await esbuild.analyzeMetafile(result.metafile, {
verbose: true,
});
// console.log(text);
}
}
} else {
console.log("subsequent build:");
}
const filename = `meta-${prod ? "prod" : "dev"}.json`;
await fs.promises.writeFile(filename, JSON.stringify(result.metafile, null, 2));
if (prod) {
console.log("Performing terser");
const src = fs.readFileSync("./main_org.js").toString();
// @ts-ignore
const ret = await minify(src, terserOpt);
const ret = await minify(src, terserOption);
if (ret && ret.code) {
fs.writeFileSync("./main.js", ret.code);
}
@@ -97,15 +107,45 @@ const plugins = [
} else {
fs.copyFileSync("./main_org.js", "./main.js");
}
if (PATH_TEST_INSTALL) {
for (const installPath of PATH_TEST_INSTALL) {
const realPath = path.resolve(installPath);
console.log(`Copying built files to ${realPath}`);
if (!fs.existsSync(realPath)) {
console.warn(`Test install path ${installPath} does not exist`);
continue;
}
const manifestX = JSON.parse(fs.readFileSync("./manifest.json") + "");
manifestX.version = manifestJson.version + "." + Date.now();
fs.writeFileSync(path.join(installPath, "manifest.json"), JSON.stringify(manifestX, null, 2));
fs.copyFileSync("./main.js", path.join(installPath, "main.js"));
fs.copyFileSync("./styles.css", path.join(installPath, "styles.css"));
}
}
});
},
},
];
const externals = ["obsidian", "electron", "crypto", "@codemirror/autocomplete", "@codemirror/collab", "@codemirror/commands", "@codemirror/language", "@codemirror/lint", "@codemirror/search", "@codemirror/state", "@codemirror/view", "@lezer/common", "@lezer/highlight", "@lezer/lr"];
const externals = [
"obsidian",
"electron",
"crypto",
"@codemirror/autocomplete",
"@codemirror/collab",
"@codemirror/commands",
"@codemirror/language",
"@codemirror/lint",
"@codemirror/search",
"@codemirror/state",
"@codemirror/view",
"@lezer/common",
"@lezer/highlight",
"@lezer/lr",
];
const context = await esbuild.context({
banner: {
js: banner,
js: "// Leave it all to terser",
},
entryPoints: ["src/main.ts"],
bundle: true,
@@ -121,8 +161,9 @@ const context = await esbuild.context({
target: "es2018",
logLevel: "info",
platform: "browser",
metafile: true,
sourcemap: prod ? false : "inline",
treeShaking: true,
treeShaking: false,
outfile: "main_org.js",
mainFields: ["browser", "module", "main"],
minifyWhitespace: false,
@@ -132,6 +173,7 @@ const context = await esbuild.context({
dropLabels: prod && !keepTest ? ["TEST", "DEV"] : [],
// keepNames: true,
plugins: [
moduleAliasPlugin,
inlineWorkerPlugin({
external: externals,
treeShaking: true,
@@ -144,7 +186,7 @@ const context = await esbuild.context({
],
});
if (prod || dev) {
if (prod) {
await context.rebuild();
process.exit(0);
} else {

eslint.config.mjs Normal file
View File

@@ -0,0 +1,99 @@
import typescriptEslint from "@typescript-eslint/eslint-plugin";
import svelte from "eslint-plugin-svelte";
import _import from "eslint-plugin-import";
import { fixupPluginRules } from "@eslint/compat";
import tsParser from "@typescript-eslint/parser";
import path from "node:path";
import { fileURLToPath } from "node:url";
import js from "@eslint/js";
import { FlatCompat } from "@eslint/eslintrc";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const compat = new FlatCompat({
baseDirectory: __dirname,
recommendedConfig: js.configs.recommended,
allConfig: js.configs.all,
});
export default [
{
ignores: [
"**/node_modules/*",
"**/jest.config.js",
"src/lib/coverage",
"src/lib/browsertest",
"**/test.ts",
"**/tests.ts",
"**/**test.ts",
"**/**.test.ts",
"**/esbuild.*.mjs",
"**/terser.*.mjs",
"**/node_modules",
"**/build",
"**/.eslintrc.js.bak",
"src/lib/src/patches/pouchdb-utils",
"**/esbuild.config.mjs",
"**/rollup.config.js",
"modules/octagonal-wheels/rollup.config.js",
"modules/octagonal-wheels/dist/**/*",
"src/lib/test",
"src/lib/src/cli",
"**/main.js",
"src/lib/apps/webpeer/*"
],
},
...compat.extends(
"eslint:recommended",
"plugin:@typescript-eslint/eslint-recommended",
"plugin:@typescript-eslint/recommended"
),
{
plugins: {
"@typescript-eslint": typescriptEslint,
svelte,
import: fixupPluginRules(_import),
},
languageOptions: {
parser: tsParser,
ecmaVersion: 5,
sourceType: "module",
parserOptions: {
project: ["tsconfig.json"],
},
},
rules: {
"no-unused-vars": "off",
"@typescript-eslint/no-unused-vars": [
"error",
{
args: "none",
},
],
"no-unused-labels": "off",
"@typescript-eslint/ban-ts-comment": "off",
"no-prototype-builtins": "off",
"@typescript-eslint/no-empty-function": "off",
"require-await": "error",
"@typescript-eslint/require-await": "warn",
"@typescript-eslint/no-misused-promises": "warn",
"@typescript-eslint/no-floating-promises": "warn",
"no-async-promise-executor": "warn",
"@typescript-eslint/no-explicit-any": "off",
"@typescript-eslint/no-unnecessary-type-assertion": "error",
"no-constant-condition": [
"error",
{
checkLoops: false,
},
],
},
},
];

example.env Normal file
View File

@@ -0,0 +1 @@
PATHS_TEST_INSTALL=your-vault-plugin-path:and-another-path

manifest-beta.json Normal file
View File

@@ -0,0 +1,10 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.25.24.beta3",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",
"authorUrl": "https://github.com/vrtmrz",
"isDesktopOnly": false
}

View File

@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.23.19",
"version": "0.25.24.beta3",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

package-lock.json generated

File diff suppressed because it is too large

View File

@@ -1,45 +1,70 @@
{
"name": "obsidian-livesync",
"version": "0.23.19",
"version": "0.25.24.beta3",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
"scripts": {
"dev": "node esbuild.config.mjs",
"bakei18n": "npx tsx ./src/lib/_tools/bakei18n.ts",
"i18n:bakejson": "npx tsx ./src/lib/_tools/bakei18n.ts",
"i18n:yaml2json": "npx tsx ./src/lib/_tools/yaml2json.ts",
"i18n:json2yaml": "npx tsx ./src/lib/_tools/json2yaml.ts",
"prettyjson": "prettier --config ./.prettierrc ./src/lib/src/common/messagesJson/*.json --write --log-level error",
"postbakei18n": "prettier --config ./.prettierrc ./src/lib/src/common/messages/*.ts --write --log-level error",
"posti18n:yaml2json": "npm run prettyjson",
"predev": "npm run bakei18n",
"dev": "node --env-file=.env esbuild.config.mjs",
"prebuild": "npm run bakei18n",
"build": "node esbuild.config.mjs production",
"buildDev": "node esbuild.config.mjs dev",
"lint": "eslint src"
"lint": "eslint src",
"svelte-check": "svelte-check --tsconfig ./tsconfig.json",
"tsc-check": "tsc --noEmit",
"pretty": "npm run prettyNoWrite -- --write --log-level error",
"prettyCheck": "npm run prettyNoWrite -- --check",
"prettyNoWrite": "prettier --config ./.prettierrc \"**/*.js\" \"**/*.ts\" \"**/*.json\" ",
"check": "npm run lint && npm run svelte-check",
"unittest": "deno test -A --no-check --coverage=cov_profile --v8-flags=--expose-gc --trace-leaks ./src/"
},
"keywords": [],
"author": "vorotamoroz",
"license": "MIT",
"devDependencies": {
"@tsconfig/svelte": "^5.0.4",
"@chialab/esbuild-plugin-worker": "^0.18.1",
"@eslint/compat": "^1.2.7",
"@eslint/eslintrc": "^3.3.0",
"@eslint/js": "^9.21.0",
"@sveltejs/vite-plugin-svelte": "^6.2.1",
"@tsconfig/svelte": "^5.0.5",
"@types/deno": "^2.3.0",
"@types/diff-match-patch": "^1.0.36",
"@types/node": "^20.14.10",
"@types/node": "^22.13.8",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
"@types/pouchdb-adapter-idb": "^6.1.7",
"@types/pouchdb-browser": "^6.1.5",
"@types/pouchdb-core": "^7.0.14",
"@types/pouchdb-core": "^7.0.15",
"@types/pouchdb-mapreduce": "^6.1.10",
"@types/pouchdb-replication": "^6.4.7",
"@types/transform-pouch": "^1.0.6",
"@typescript-eslint/eslint-plugin": "^7.16.0",
"@typescript-eslint/parser": "^7.16.0",
"builtin-modules": "^4.0.0",
"esbuild": "0.23.0",
"esbuild-svelte": "^0.8.1",
"eslint": "^8.57.0",
"eslint-config-airbnb-base": "^15.0.0",
"eslint-plugin-import": "^2.29.1",
"@typescript-eslint/eslint-plugin": "8.46.2",
"@typescript-eslint/parser": "8.46.2",
"builtin-modules": "5.0.0",
"esbuild": "0.25.0",
"esbuild-plugin-inline-worker": "^0.1.1",
"esbuild-svelte": "^0.9.3",
"eslint": "^9.38.0",
"eslint-plugin-import": "^2.32.0",
"eslint-plugin-svelte": "^3.12.4",
"events": "^3.3.0",
"obsidian": "^1.5.7",
"postcss": "^8.4.39",
"glob": "^11.0.3",
"obsidian": "^1.8.7",
"postcss": "^8.5.3",
"postcss-load-config": "^6.0.1",
"pouchdb-adapter-http": "^9.0.0",
"pouchdb-adapter-idb": "^9.0.0",
"pouchdb-adapter-indexeddb": "^9.0.0",
"pouchdb-adapter-memory": "^9.0.0",
"pouchdb-core": "^9.0.0",
"pouchdb-errors": "^9.0.0",
"pouchdb-find": "^9.0.0",
@@ -47,25 +72,31 @@
"pouchdb-merge": "^9.0.0",
"pouchdb-replication": "^9.0.0",
"pouchdb-utils": "^9.0.0",
"svelte": "^4.2.18",
"svelte-preprocess": "^6.0.2",
"terser": "^5.31.2",
"prettier": "3.5.2",
"svelte": "5.41.1",
"svelte-check": "^4.3.3",
"svelte-preprocess": "^6.0.3",
"terser": "^5.39.0",
"transform-pouch": "^2.0.0",
"tslib": "^2.6.3",
"typescript": "^5.5.3"
"tslib": "^2.8.1",
"tsx": "^4.20.6",
"typescript": "5.9.3",
"yaml": "^2.8.0"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.614.0",
"@smithy/fetch-http-handler": "^3.2.1",
"@smithy/protocol-http": "^4.0.3",
"@smithy/querystring-builder": "^3.0.3",
"@aws-sdk/client-s3": "^3.808.0",
"@smithy/fetch-http-handler": "^5.0.2",
"@smithy/md5-js": "^4.0.2",
"@smithy/middleware-apply-body-checksum": "^4.1.0",
"@smithy/protocol-http": "^5.1.0",
"@smithy/querystring-builder": "^4.0.2",
"diff-match-patch": "^1.0.5",
"esbuild-plugin-inline-worker": "^0.1.1",
"fflate": "^0.8.2",
"idb": "^8.0.0",
"minimatch": "^10.0.1",
"octagonal-wheels": "^0.1.13",
"xxhash-wasm": "0.4.2",
"idb": "^8.0.3",
"minimatch": "^10.0.2",
"octagonal-wheels": "^0.1.42",
"qrcode-generator": "^1.4.4",
"trystero": "^0.22.0",
"xxhash-wasm-102": "npm:xxhash-wasm@^1.0.2"
}
}

View File

@@ -1,13 +1,5 @@
import { deleteDB, type IDBPDatabase, openDB } from "idb";
export interface KeyValueDatabase {
get<T>(key: IDBValidKey): Promise<T>;
set<T>(key: IDBValidKey, value: T): Promise<IDBValidKey>;
del(key: IDBValidKey): Promise<void>;
clear(): Promise<void>;
keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>;
close(): void;
destroy(): Promise<void>;
}
import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
const databaseCache: { [key: string]: IDBPDatabase<any> } = {};
export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
if (dbKey in databaseCache) {
@@ -16,8 +8,8 @@ export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatab
}
const storeKey = dbKey;
const dbPromise = openDB(dbKey, 1, {
upgrade(db) {
db.createObjectStore(storeKey);
upgrade(db, _oldVersion, _newVersion, _transaction, _event) {
return db.createObjectStore(storeKey);
},
});
const db = await dbPromise;
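
For illustration, a minimal usage sketch of the wrapper above, assuming the re-exported KeyValueDatabase interface keeps the get/set/del/close shape shown in the removed inline definition ("example-store" is a hypothetical database name):

// Open (or reuse) a store, read-modify-write a value, then close it.
const kv = await OpenKeyValueDatabase("example-store");
const count = (await kv.get<number>("counter")) ?? 0;
await kv.set("counter", count + 1);
kv.close();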

View File

@@ -0,0 +1,28 @@
import { ItemView } from "obsidian";
import { type mount, unmount } from "svelte";
export abstract class SvelteItemView extends ItemView {
abstract instantiateComponent(target: HTMLElement): ReturnType<typeof mount> | Promise<ReturnType<typeof mount>>;
component?: ReturnType<typeof mount>;
async onOpen() {
await super.onOpen();
this.contentEl.empty();
await this._dismountComponent();
this.component = await this.instantiateComponent(this.contentEl);
return;
}
async _dismountComponent() {
if (this.component) {
await unmount(this.component);
this.component = undefined;
}
}
async onClose() {
await super.onClose();
if (this.component) {
await unmount(this.component);
this.component = undefined;
}
return;
}
}
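
A sketch of how a concrete view might extend this base class with Svelte 5's mount(); ExamplePane.svelte, VIEW_TYPE_EXAMPLE, and ExampleView are hypothetical names for illustration:

import { mount } from "svelte";
// Hypothetical component; any Svelte 5 component mountable onto a target works here.
import ExamplePane from "../ui/ExamplePane.svelte";
const VIEW_TYPE_EXAMPLE = "example-view";
export class ExampleView extends SvelteItemView {
    getViewType() {
        return VIEW_TYPE_EXAMPLE;
    }
    getDisplayText() {
        return "Example";
    }
    instantiateComponent(target: HTMLElement) {
        // mount() returns the handle that the base class later passes to unmount().
        return mount(ExamplePane, { target });
    }
}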

View File

@@ -1,235 +0,0 @@
import { ButtonComponent } from "obsidian";
import { App, FuzzySuggestModal, MarkdownRenderer, Modal, Plugin, Setting } from "../deps.ts";
import ObsidianLiveSyncPlugin from "../main.ts";
//@ts-ignore
import PluginPane from "../ui/PluginPane.svelte";
export class PluginDialogModal extends Modal {
plugin: ObsidianLiveSyncPlugin;
component: PluginPane | undefined;
isOpened() {
return this.component != undefined;
}
constructor(app: App, plugin: ObsidianLiveSyncPlugin) {
super(app);
this.plugin = plugin;
}
onOpen() {
const { contentEl } = this;
this.contentEl.style.overflow = "auto";
this.contentEl.style.display = "flex";
this.contentEl.style.flexDirection = "column";
this.titleEl.setText("Customization Sync (Beta3)")
if (!this.component) {
this.component = new PluginPane({
target: contentEl,
props: { plugin: this.plugin },
});
}
}
onClose() {
if (this.component) {
this.component.$destroy();
this.component = undefined;
}
}
}
export class InputStringDialog extends Modal {
result: string | false = false;
onSubmit: (result: string | false) => void;
title: string;
key: string;
placeholder: string;
isManuallyClosed = false;
isPassword = false;
constructor(app: App, title: string, key: string, placeholder: string, isPassword: boolean, onSubmit: (result: string | false) => void) {
super(app);
this.onSubmit = onSubmit;
this.title = title;
this.placeholder = placeholder;
this.key = key;
this.isPassword = isPassword;
}
onOpen() {
const { contentEl } = this;
this.titleEl.setText(this.title);
const formEl = contentEl.createDiv();
new Setting(formEl).setName(this.key).setClass(this.isPassword ? "password-input" : "normal-input").addText((text) =>
text.onChange((value) => {
this.result = value;
})
);
new Setting(formEl).addButton((btn) =>
btn
.setButtonText("Ok")
.setCta()
.onClick(() => {
this.isManuallyClosed = true;
this.close();
})
).addButton((btn) =>
btn
.setButtonText("Cancel")
.setCta()
.onClick(() => {
this.close();
})
);
}
onClose() {
const { contentEl } = this;
contentEl.empty();
if (this.isManuallyClosed) {
this.onSubmit(this.result);
} else {
this.onSubmit(false);
}
}
}
export class PopoverSelectString extends FuzzySuggestModal<string> {
app: App;
callback: ((e: string) => void) | undefined = () => { };
getItemsFun: () => string[] = () => {
return ["yes", "no"];
}
constructor(app: App, note: string, placeholder: string | undefined, getItemsFun: (() => string[]) | undefined, callback: (e: string) => void) {
super(app);
this.app = app;
this.setPlaceholder((placeholder ?? "y/n) ") + note);
if (getItemsFun) this.getItemsFun = getItemsFun;
this.callback = callback;
}
getItems(): string[] {
return this.getItemsFun();
}
getItemText(item: string): string {
return item;
}
onChooseItem(item: string, evt: MouseEvent | KeyboardEvent): void {
// debugger;
this.callback?.(item);
this.callback = undefined;
}
onClose(): void {
setTimeout(() => {
if (this.callback) {
this.callback("");
this.callback = undefined;
}
}, 100);
}
}
export class MessageBox extends Modal {
plugin: Plugin;
title: string;
contentMd: string;
buttons: string[];
result: string | false = false;
isManuallyClosed = false;
defaultAction: string | undefined;
timeout: number | undefined;
timer: ReturnType<typeof setInterval> | undefined = undefined;
defaultButtonComponent: ButtonComponent | undefined;
onSubmit: (result: string | false) => void;
constructor(plugin: Plugin, title: string, contentMd: string, buttons: string[], defaultAction: (typeof buttons)[number], timeout: number | undefined, onSubmit: (result: (typeof buttons)[number] | false) => void) {
super(plugin.app);
this.plugin = plugin;
this.title = title;
this.contentMd = contentMd;
this.buttons = buttons;
this.onSubmit = onSubmit;
this.defaultAction = defaultAction;
this.timeout = timeout;
if (this.timeout) {
this.timer = setInterval(() => {
if (this.timeout === undefined) return;
this.timeout--;
if (this.timeout < 0) {
if (this.timer) {
clearInterval(this.timer);
this.timer = undefined;
}
this.result = defaultAction;
this.isManuallyClosed = true;
this.close();
} else {
this.defaultButtonComponent?.setButtonText(`( ${this.timeout} ) ${defaultAction}`);
}
}, 1000);
}
}
onOpen() {
const { contentEl } = this;
this.titleEl.setText(this.title);
contentEl.addEventListener("click", () => {
if (this.timer) {
clearInterval(this.timer);
this.timer = undefined;
}
})
const div = contentEl.createDiv();
MarkdownRenderer.render(this.plugin.app, this.contentMd, div, "/", this.plugin);
const buttonSetting = new Setting(contentEl);
buttonSetting.controlEl.style.flexWrap = "wrap";
for (const button of this.buttons) {
buttonSetting.addButton((btn) => {
btn
.setButtonText(button)
.onClick(() => {
this.isManuallyClosed = true;
this.result = button;
if (this.timer) {
clearInterval(this.timer);
this.timer = undefined;
}
this.close();
})
if (button == this.defaultAction) {
this.defaultButtonComponent = btn;
}
return btn;
}
)
}
}
onClose() {
const { contentEl } = this;
contentEl.empty();
if (this.timer) {
clearInterval(this.timer);
this.timer = undefined;
}
if (this.isManuallyClosed) {
this.onSubmit(this.result);
} else {
this.onSubmit(false);
}
}
}
export function confirmWithMessage(plugin: Plugin, title: string, contentMd: string, buttons: string[], defaultAction: (typeof buttons)[number], timeout?: number): Promise<(typeof buttons)[number] | false> {
return new Promise((res) => {
const dialog = new MessageBox(plugin, title, contentMd, buttons, defaultAction, timeout, (result) => res(result));
dialog.open();
});
}

src/common/events.ts Normal file
View File

@@ -0,0 +1,47 @@
import { eventHub } from "../lib/src/hub/hub";
import type ObsidianLiveSyncPlugin from "../main";
export const EVENT_PLUGIN_LOADED = "plugin-loaded";
export const EVENT_PLUGIN_UNLOADED = "plugin-unloaded";
export const EVENT_FILE_SAVED = "file-saved";
export const EVENT_LEAF_ACTIVE_CHANGED = "leaf-active-changed";
export const EVENT_REQUEST_OPEN_SETTINGS = "request-open-settings";
export const EVENT_REQUEST_OPEN_SETTING_WIZARD = "request-open-setting-wizard";
export const EVENT_REQUEST_OPEN_SETUP_URI = "request-open-setup-uri";
export const EVENT_REQUEST_COPY_SETUP_URI = "request-copy-setup-uri";
export const EVENT_REQUEST_SHOW_SETUP_QR = "request-show-setup-qr";
export const EVENT_REQUEST_RELOAD_SETTING_TAB = "reload-setting-tab";
export const EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG = "request-open-plugin-sync-dialog";
export const EVENT_REQUEST_OPEN_P2P = "request-open-p2p";
export const EVENT_REQUEST_CLOSE_P2P = "request-close-p2p";
export const EVENT_REQUEST_RUN_DOCTOR = "request-run-doctor";
export const EVENT_REQUEST_RUN_FIX_INCOMPLETE = "request-run-fix-incomplete";
// export const EVENT_FILE_CHANGED = "file-changed";
declare global {
interface LSEvents {
[EVENT_PLUGIN_LOADED]: ObsidianLiveSyncPlugin;
[EVENT_PLUGIN_UNLOADED]: undefined;
[EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG]: undefined;
[EVENT_REQUEST_OPEN_SETTINGS]: undefined;
[EVENT_REQUEST_OPEN_SETTING_WIZARD]: undefined;
[EVENT_REQUEST_RELOAD_SETTING_TAB]: undefined;
[EVENT_LEAF_ACTIVE_CHANGED]: undefined;
[EVENT_REQUEST_CLOSE_P2P]: undefined;
[EVENT_REQUEST_OPEN_P2P]: undefined;
[EVENT_REQUEST_OPEN_SETUP_URI]: undefined;
[EVENT_REQUEST_COPY_SETUP_URI]: undefined;
[EVENT_REQUEST_SHOW_SETUP_QR]: undefined;
[EVENT_REQUEST_RUN_DOCTOR]: string;
[EVENT_REQUEST_RUN_FIX_INCOMPLETE]: undefined;
}
}
export * from "../lib/src/events/coreEvents.ts";
export { eventHub };
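
A sketch of the registration pattern this file uses: declare a constant and merge its payload type into the global LSEvents map. EVENT_EXAMPLE and its payload are hypothetical:

export const EVENT_EXAMPLE = "example-event";
declare global {
    interface LSEvents {
        // Payload carried by the event; use `undefined` for signal-only events.
        [EVENT_EXAMPLE]: string;
    }
}
// Subscribers then receive typed payloads, e.g. (onceEvent is used this way elsewhere in this change set):
// eventHub.onceEvent(EVENT_EXAMPLE, (payload) => console.log(payload));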

View File

@@ -0,0 +1,12 @@
import type { TFile } from "../deps";
import type { FilePathWithPrefix, LoadedEntry } from "../lib/src/common/types";
export const EVENT_REQUEST_SHOW_HISTORY = "show-history";
declare global {
interface LSEvents {
[EVENT_REQUEST_SHOW_HISTORY]:
| { file: TFile; fileOnDB: LoadedEntry }
| { file: FilePathWithPrefix; fileOnDB: LoadedEntry };
}
}

View File

@@ -1,4 +1,4 @@
import { PersistentMap } from "../lib/src/dataobject/PersistentMap.ts";
import { PersistentMap } from "octagonal-wheels/dataobject/PersistentMap";
export let sameChangePairs: PersistentMap<number[]>;

View File

@@ -1,5 +1,6 @@
import { type PluginManifest, TFile } from "../deps.ts";
import { type DatabaseEntry, type EntryBody, type FilePath } from "../lib/src/common/types.ts";
export type { CacheData, FileEventItem } from "../lib/src/common/types.ts";
export interface PluginDataEntry extends DatabaseEntry {
deviceVaultName: string;
@@ -48,23 +49,6 @@ export type queueItem = {
warned?: boolean;
};
export type CacheData = string | ArrayBuffer;
export type FileEventType = "CREATE" | "DELETE" | "CHANGED" | "RENAME" | "INTERNAL";
export type FileEventArgs = {
file: FileInfo | InternalFileInfo;
cache?: CacheData;
oldPath?: string;
ctx?: any;
}
export type FileEventItem = {
type: FileEventType,
args: FileEventArgs,
key: string,
skipBatchWait?: boolean,
cancelled?: boolean,
batched?: boolean
}
// Hidden items (Now means `chunk`)
export const CHeader = "h:";
@@ -82,4 +66,4 @@ export const ICXHeader = "ix:";
export const FileWatchEventQueueMax = 10;
export const configURIBase = "obsidian://setuplivesync?settings=";
export const configURIBaseQR = "obsidian://setuplivesync?settingsQR=";

View File

@@ -1,28 +1,57 @@
import { normalizePath, Platform, TAbstractFile, App, type RequestUrlParam, requestUrl, TFile } from "../deps.ts";
import { path2id_base, id2path_base, isValidFilenameInLinux, isValidFilenameInDarwin, isValidFilenameInWidows, isValidFilenameInAndroid, stripAllPrefixes } from "../lib/src/string_and_binary/path.ts";
import { normalizePath, Platform, TAbstractFile, type RequestUrlParam, requestUrl } from "../deps.ts";
import {
path2id_base,
id2path_base,
isValidFilenameInLinux,
isValidFilenameInDarwin,
isValidFilenameInWidows,
isValidFilenameInAndroid,
stripAllPrefixes,
} from "../lib/src/string_and_binary/path.ts";
import { Logger } from "../lib/src/common/logger.ts";
import { LOG_LEVEL_VERBOSE, type AnyEntry, type DocumentID, type EntryHasPath, type FilePath, type FilePathWithPrefix } from "../lib/src/common/types.ts";
import {
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type AnyEntry,
type CouchDBCredentials,
type DocumentID,
type EntryHasPath,
type FilePath,
type FilePathWithPrefix,
type UXFileInfo,
type UXFileInfoStub,
} from "../lib/src/common/types.ts";
import { CHeader, ICHeader, ICHeaderLength, ICXHeader, PSCHeader } from "./types.ts";
import { InputStringDialog, PopoverSelectString } from "./dialogs.ts";
import type ObsidianLiveSyncPlugin from "../main.ts";
import { writeString } from "../lib/src/string_and_binary/convert.ts";
import { fireAndForget } from "../lib/src/common/utils.ts";
import { sameChangePairs } from "./stores.ts";
export { scheduleTask, setPeriodicTask, cancelTask, cancelAllTasks, cancelPeriodicTask, cancelAllPeriodicTask, } from "../lib/src/concurrency/task.ts";
import { scheduleTask } from "octagonal-wheels/concurrency/task";
import { EVENT_PLUGIN_UNLOADED, eventHub } from "./events.ts";
import { promiseWithResolver, type PromiseWithResolvers } from "octagonal-wheels/promises";
import { AuthorizationHeaderGenerator } from "../lib/src/replication/httplib.ts";
import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
export { scheduleTask, cancelTask, cancelAllTasks } from "octagonal-wheels/concurrency/task";
// For backward compatibility, the path is used to determine the id.
// Only IDs that CouchDB cannot accept (those starting with an underscore) are prefixed with "/".
// The first slash is removed when the path is normalized.
export async function path2id(filename: FilePathWithPrefix | FilePath, obfuscatePassphrase: string | false): Promise<DocumentID> {
export async function path2id(
filename: FilePathWithPrefix | FilePath,
obfuscatePassphrase: string | false,
caseInsensitive: boolean
): Promise<DocumentID> {
const temp = filename.split(":");
const path = temp.pop();
const normalizedPath = normalizePath(path as FilePath);
temp.push(normalizedPath);
const fixedPath = temp.join(":") as FilePathWithPrefix;
const out = await path2id_base(fixedPath, obfuscatePassphrase);
const out = await path2id_base(fixedPath, obfuscatePassphrase, caseInsensitive);
return out;
}
export function id2path(id: DocumentID, entry?: EntryHasPath): FilePathWithPrefix {
@@ -36,7 +65,6 @@ export function id2path(id: DocumentID, entry?: EntryHasPath): FilePathWithPrefi
}
export function getPath(entry: AnyEntry) {
return id2path(entry._id, entry);
}
export function getPathWithoutPrefix(entry: AnyEntry) {
const f = getPath(entry);
@@ -47,6 +75,25 @@ export function getPathFromTFile(file: TAbstractFile) {
return file.path as FilePath;
}
export function isInternalFile(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string") return file.startsWith(ICHeader);
if (file.isInternal) return true;
return false;
}
export function getPathFromUXFileInfo(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string") return file as FilePathWithPrefix;
return file.path;
}
export function getStoragePathFromUXFileInfo(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string") return stripAllPrefixes(file as FilePathWithPrefix);
return stripAllPrefixes(file.path);
}
export function getDatabasePathFromUXFileInfo(file: UXFileInfoStub | string | FilePathWithPrefix) {
if (typeof file == "string" && file.startsWith(ICXHeader)) return file as FilePathWithPrefix;
const prefix = isInternalFile(file) ? ICHeader : "";
if (typeof file == "string") return (prefix + stripAllPrefixes(file as FilePathWithPrefix)) as FilePathWithPrefix;
return (prefix + stripAllPrefixes(file.path)) as FilePathWithPrefix;
}
const memos: { [key: string]: any } = {};
export function memoObject<T>(key: string, obj: T): T {
@@ -56,7 +103,7 @@ export function memoObject<T>(key: string, obj: T): T {
export async function memoIfNotExist<T>(key: string, func: () => T | Promise<T>): Promise<T> {
if (!(key in memos)) {
const w = func();
const v = w instanceof Promise ? (await w) : w;
const v = w instanceof Promise ? await w : w;
memos[key] = v;
}
return memos[key] as T;
@@ -72,196 +119,6 @@ export function disposeMemoObject(key: string) {
delete memos[key];
}
export function isSensibleMargeApplicable(path: string) {
if (path.endsWith(".md")) return true;
return false;
}
export function isObjectMargeApplicable(path: string) {
if (path.endsWith(".canvas")) return true;
if (path.endsWith(".json")) return true;
return false;
}
export function tryParseJSON(str: string, fallbackValue?: any) {
try {
return JSON.parse(str);
} catch (ex) {
return fallbackValue;
}
}
const MARK_OPERATOR = `\u{0001}`;
const MARK_DELETED = `${MARK_OPERATOR}__DELETED`;
const MARK_ISARRAY = `${MARK_OPERATOR}__ARRAY`;
const MARK_SWAPPED = `${MARK_OPERATOR}__SWAP`;
function unorderedArrayToObject(obj: Array<any>) {
return obj.map(e => ({ [e.id as string]: e })).reduce((p, c) => ({ ...p, ...c }), {})
}
function objectToUnorderedArray(obj: object) {
const entries = Object.entries(obj);
if (entries.some(e => e[0] != e[1]?.id)) throw new Error("Item does not look like an unordered array");
return entries.map(e => e[1]);
}
function generatePatchUnorderedArray(from: Array<any>, to: Array<any>) {
if (from.every(e => typeof (e) == "object" && ("id" in e)) && to.every(e => typeof (e) == "object" && ("id" in e))) {
const fObj = unorderedArrayToObject(from);
const tObj = unorderedArrayToObject(to);
const diff = generatePatchObj(fObj, tObj);
if (Object.keys(diff).length > 0) {
return { [MARK_ISARRAY]: diff };
} else {
return {};
}
}
return { [MARK_SWAPPED]: to };
}
export function generatePatchObj(from: Record<string | number | symbol, any>, to: Record<string | number | symbol, any>) {
const entries = Object.entries(from);
const tempMap = new Map<string | number | symbol, any>(entries);
const ret = {} as Record<string | number | symbol, any>;
const newEntries = Object.entries(to);
for (const [key, value] of newEntries) {
if (!tempMap.has(key)) {
//New
ret[key] = value;
tempMap.delete(key);
} else {
//Exists
const v = tempMap.get(key);
if (typeof (v) !== typeof (value) || (Array.isArray(v) !== Array.isArray(value))) {
// If types do not match, replace completely.
ret[key] = { [MARK_SWAPPED]: value };
} else {
if (typeof (v) == "object" && typeof (value) == "object" && !Array.isArray(v) && !Array.isArray(value)) {
const wk = generatePatchObj(v, value);
if (Object.keys(wk).length > 0) ret[key] = wk;
} else if (typeof (v) == "object" && typeof (value) == "object" && Array.isArray(v) && Array.isArray(value)) {
const wk = generatePatchUnorderedArray(v, value);
if (Object.keys(wk).length > 0) ret[key] = wk;
} else if (typeof (v) != "object" && typeof (value) != "object") {
if (JSON.stringify(tempMap.get(key)) !== JSON.stringify(value)) {
ret[key] = value;
}
} else {
if (JSON.stringify(tempMap.get(key)) !== JSON.stringify(value)) {
ret[key] = { [MARK_SWAPPED]: value };
}
}
}
tempMap.delete(key);
}
}
// Items remaining in tempMap were not reused, which means they were deleted.
for (const [key,] of tempMap) {
ret[key] = MARK_DELETED
}
return ret;
}
export function applyPatch(from: Record<string | number | symbol, any>, patch: Record<string | number | symbol, any>) {
const ret = from;
const patches = Object.entries(patch);
for (const [key, value] of patches) {
if (value == MARK_DELETED) {
delete ret[key];
continue;
}
if (typeof (value) == "object") {
if (MARK_SWAPPED in value) {
ret[key] = value[MARK_SWAPPED];
continue;
}
if (MARK_ISARRAY in value) {
if (!(key in ret)) ret[key] = [];
if (!Array.isArray(ret[key])) {
throw new Error("Patch target type is mismatched (array to something)");
}
const orgArrayObject = unorderedArrayToObject(ret[key]);
const appliedObject = applyPatch(orgArrayObject, value[MARK_ISARRAY]);
const appliedArray = objectToUnorderedArray(appliedObject);
ret[key] = [...appliedArray];
} else {
if (!(key in ret)) {
ret[key] = value;
continue;
}
ret[key] = applyPatch(ret[key], value);
}
} else {
ret[key] = value;
}
}
return ret;
}
export function mergeObject(
objA: Record<string | number | symbol, any> | [any],
objB: Record<string | number | symbol, any> | [any]
) {
const newEntries = Object.entries(objB);
const ret: any = { ...objA };
if (
typeof objA !== typeof objB ||
Array.isArray(objA) !== Array.isArray(objB)
) {
return objB;
}
for (const [key, v] of newEntries) {
if (key in ret) {
const value = ret[key];
if (
typeof v !== typeof value ||
Array.isArray(v) !== Array.isArray(value)
) {
// If types do not match, replace completely.
ret[key] = v;
} else {
if (
typeof v == "object" &&
typeof value == "object" &&
!Array.isArray(v) &&
!Array.isArray(value)
) {
ret[key] = mergeObject(v, value);
} else if (
typeof v == "object" &&
typeof value == "object" &&
Array.isArray(v) &&
Array.isArray(value)
) {
ret[key] = [...new Set([...v, ...value])];
} else {
ret[key] = v;
}
}
} else {
ret[key] = v;
}
}
const retSorted = Object.fromEntries(Object.entries(ret).sort((a, b) => a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0));
if (Array.isArray(objA) && Array.isArray(objB)) {
return Object.values(retSorted);
}
return retSorted;
}
export function flattenObject(obj: Record<string | number | symbol, any>, path: string[] = []): [string, any][] {
if (typeof (obj) != "object") return [[path.join("."), obj]];
if (Array.isArray(obj)) return [[path.join("."), JSON.stringify(obj)]];
const e = Object.entries(obj);
const ret = []
for (const [key, value] of e) {
const p = flattenObject(value, [...path, key]);
ret.push(...p);
}
return ret;
}
export function isValidPath(filename: string) {
if (Platform.isDesktop) {
// if(Platform.isMacOS) return isValidFilenameInDarwin(filename);
@@ -280,11 +137,10 @@ export function trimPrefix(target: string, prefix: string) {
return target.startsWith(prefix) ? target.substring(prefix.length) : target;
}
/**
 * Returns whether the given ID refers to internal (hidden-file) metadata.
 * @param id ID to check
 * @returns true when the ID starts with the internal-content header
 */
export function isInternalMetadata(id: FilePath | FilePathWithPrefix | DocumentID): boolean {
return id.startsWith(ICHeader);
@@ -293,7 +149,7 @@ export function stripInternalMetadataPrefix<T extends FilePath | FilePathWithPre
return id.substring(ICHeaderLength) as T;
}
export function id2InternalMetadataId(id: DocumentID): DocumentID {
return ICHeader + id as DocumentID;
return (ICHeader + id) as DocumentID;
}
// const CHeaderLength = CHeader.length;
@@ -308,37 +164,16 @@ export function isCustomisationSyncMetadata(str: string): boolean {
return str.startsWith(ICXHeader);
}
export const askYesNo = (app: App, message: string): Promise<"yes" | "no"> => {
return new Promise((res) => {
const popover = new PopoverSelectString(app, message, undefined, undefined, (result) => res(result as "yes" | "no"));
popover.open();
});
};
export const askSelectString = (app: App, message: string, items: string[]): Promise<string> => {
const getItemsFun = () => items;
return new Promise((res) => {
const popover = new PopoverSelectString(app, message, "", getItemsFun, (result) => res(result));
popover.open();
});
};
export const askString = (app: App, title: string, key: string, placeholder: string, isPassword: boolean = false): Promise<string | false> => {
return new Promise((res) => {
const dialog = new InputStringDialog(app, title, key, placeholder, isPassword, (result) => res(result));
dialog.open();
});
};
export class PeriodicProcessor {
_process: () => Promise<any>;
_timer?: number;
_timer?: number = undefined;
_plugin: ObsidianLiveSyncPlugin;
constructor(plugin: ObsidianLiveSyncPlugin, process: () => Promise<any>) {
this._plugin = plugin;
this._process = process;
eventHub.onceEvent(EVENT_PLUGIN_UNLOADED, () => {
this.disable();
});
}
async process() {
try {
@@ -350,12 +185,16 @@ export class PeriodicProcessor {
enable(interval: number) {
this.disable();
if (interval == 0) return;
this._timer = window.setInterval(() => fireAndForget(async () => {
await this.process();
if (this._plugin._unloaded) {
this.disable();
}
}), interval);
this._timer = window.setInterval(
() =>
fireAndForget(async () => {
await this.process();
if (this._plugin.services?.appLifecycle?.hasUnloaded()) {
this.disable();
}
}),
interval
);
this._plugin.registerInterval(this._timer);
}
disable() {
@@ -366,11 +205,21 @@ export class PeriodicProcessor {
}
}
export const _requestToCouchDBFetch = async (baseUri: string, username: string, password: string, path?: string, body?: string | any, method?: string) => {
export const _requestToCouchDBFetch = async (
baseUri: string,
username: string,
password: string,
path?: string,
body?: string | any,
method?: string
) => {
const utf8str = String.fromCharCode.apply(null, [...writeString(`${username}:${password}`)]);
const encoded = window.btoa(utf8str);
const authHeader = "Basic " + encoded;
const transformedHeaders: Record<string, string> = { authorization: authHeader, "content-type": "application/json" };
const transformedHeaders: Record<string, string> = {
authorization: authHeader,
"content-type": "application/json",
};
const uri = `${baseUri}/${path}`;
const requestParam = {
url: uri,
@@ -380,13 +229,21 @@ export const _requestToCouchDBFetch = async (baseUri: string, username: string,
body: JSON.stringify(body),
};
return await fetch(uri, requestParam);
}
};
export const _requestToCouchDB = async (baseUri: string, username: string, password: string, origin: string, path?: string, body?: any, method?: string) => {
const utf8str = String.fromCharCode.apply(null, [...writeString(`${username}:${password}`)]);
const encoded = window.btoa(utf8str);
const authHeader = "Basic " + encoded;
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin };
export const _requestToCouchDB = async (
baseUri: string,
credentials: CouchDBCredentials,
origin: string,
path?: string,
body?: any,
method?: string,
customHeaders?: Record<string, string>
) => {
// Create each time to avoid caching.
const authHeaderGen = new AuthorizationHeaderGenerator();
const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
const uri = `${baseUri}/${path}`;
const requestParam: RequestUrlParam = {
url: uri,
@@ -396,37 +253,57 @@ export const _requestToCouchDB = async (baseUri: string, username: string, passw
body: body ? JSON.stringify(body) : undefined,
};
return await requestUrl(requestParam);
}
export const requestToCouchDB = async (baseUri: string, username: string, password: string, origin: string = "", key?: string, body?: string, method?: string) => {
};
/**
* @deprecated Use requestToCouchDBWithCredentials instead.
*/
export const requestToCouchDB = async (
baseUri: string,
username: string,
password: string,
origin: string = "",
key?: string,
body?: string,
method?: string,
customHeaders?: Record<string, string>
) => {
const uri = `_node/_local/_config${key ? "/" + key : ""}`;
return await _requestToCouchDB(baseUri, username, password, origin, uri, body, method);
return await _requestToCouchDB(
baseUri,
{ username, password, type: "basic" },
origin,
uri,
body,
method,
customHeaders
);
};
export async function performRebuildDB(plugin: ObsidianLiveSyncPlugin, method: "localOnly" | "remoteOnly" | "rebuildBothByThisDevice" | "localOnlyWithChunks") {
if (method == "localOnly") {
await plugin.addOnSetup.fetchLocal();
}
if (method == "localOnlyWithChunks") {
await plugin.addOnSetup.fetchLocal(true);
}
if (method == "remoteOnly") {
await plugin.addOnSetup.rebuildRemote();
}
if (method == "rebuildBothByThisDevice") {
await plugin.addOnSetup.rebuildEverything();
}
export function requestToCouchDBWithCredentials(
baseUri: string,
credentials: CouchDBCredentials,
origin: string = "",
key?: string,
body?: string,
method?: string,
customHeaders?: Record<string, string>
) {
const uri = `_node/_local/_config${key ? "/" + key : ""}`;
return _requestToCouchDB(baseUri, credentials, origin, uri, body, method, customHeaders);
}
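
A hedged usage sketch of the credential-based variant above; the server URL, credentials, and config key are placeholders:

// Read a CouchDB node-local configuration value with basic authentication.
const response = await requestToCouchDBWithCredentials(
    "https://couchdb.example.com",
    { username: "admin", password: "secret", type: "basic" },
    "app://obsidian.md",
    "chttpd/max_http_request_size"
);
console.log(response.json);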
export const BASE_IS_NEW = Symbol("base");
export const TARGET_IS_NEW = Symbol("target");
export const EVEN = Symbol("even");
// Why 2000? ZIP file timestamps only have two-second resolution.
const resolution = 2000;
export function compareMTime(baseMTime: number, targetMTime: number): typeof BASE_IS_NEW | typeof TARGET_IS_NEW | typeof EVEN {
const truncatedBaseMTime = (~~(baseMTime / resolution)) * resolution;
const truncatedTargetMTime = (~~(targetMTime / resolution)) * resolution;
export function compareMTime(
baseMTime: number,
targetMTime: number
): typeof BASE_IS_NEW | typeof TARGET_IS_NEW | typeof EVEN {
const truncatedBaseMTime = ~~(baseMTime / resolution) * resolution;
const truncatedTargetMTime = ~~(targetMTime / resolution) * resolution;
// Logger(`Resolution MTime ${truncatedBaseMTime} and ${truncatedTargetMTime} `, LOG_LEVEL_VERBOSE);
if (truncatedBaseMTime == truncatedTargetMTime) return EVEN;
if (truncatedBaseMTime > truncatedTargetMTime) return BASE_IS_NEW;
@@ -434,30 +311,43 @@ export function compareMTime(baseMTime: number, targetMTime: number): typeof BAS
throw new Error("Unexpected error");
}
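
A worked example of the two-second bucketing above (timestamps are arbitrary epoch milliseconds):

// 1_700_000_001_500 and 1_700_000_000_900 truncate to the same 2000 ms bucket.
compareMTime(1_700_000_001_500, 1_700_000_000_900); // EVEN
// Across a bucket boundary, the larger timestamp wins.
compareMTime(1_700_000_002_100, 1_700_000_000_900); // BASE_IS_NEW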
export function markChangesAreSame(file: TFile | AnyEntry | string, mtime1: number, mtime2: number) {
function getKey(file: AnyEntry | string | UXFileInfoStub) {
const key = typeof file == "string" ? file : stripAllPrefixes(file.path);
return key;
}
export function markChangesAreSame(file: AnyEntry | string | UXFileInfoStub, mtime1: number, mtime2: number) {
if (mtime1 === mtime2) return true;
const key = typeof file == "string" ? file : file instanceof TFile ? file.path : file.path ?? file._id;
const key = getKey(file);
const pairs = sameChangePairs.get(key, []) || [];
if (pairs.some(e => e == mtime1 || e == mtime2)) {
if (pairs.some((e) => e == mtime1 || e == mtime2)) {
sameChangePairs.set(key, [...new Set([...pairs, mtime1, mtime2])]);
} else {
sameChangePairs.set(key, [mtime1, mtime2]);
}
}
export function isMarkedAsSameChanges(file: TFile | AnyEntry | string, mtimes: number[]) {
const key = typeof file == "string" ? file : file instanceof TFile ? file.path : file.path ?? file._id;
export function unmarkChanges(file: AnyEntry | string | UXFileInfoStub) {
const key = getKey(file);
sameChangePairs.delete(key);
}
export function isMarkedAsSameChanges(file: UXFileInfoStub | AnyEntry | string, mtimes: number[]) {
const key = getKey(file);
const pairs = sameChangePairs.get(key, []) || [];
if (mtimes.every(e => pairs.indexOf(e) !== -1)) {
if (mtimes.every((e) => pairs.indexOf(e) !== -1)) {
return EVEN;
}
}
export function compareFileFreshness(baseFile: TFile | AnyEntry | undefined, checkTarget: TFile | AnyEntry | undefined): typeof BASE_IS_NEW | typeof TARGET_IS_NEW | typeof EVEN {
export function compareFileFreshness(
baseFile: UXFileInfoStub | AnyEntry | undefined,
checkTarget: UXFileInfo | AnyEntry | undefined
): typeof BASE_IS_NEW | typeof TARGET_IS_NEW | typeof EVEN {
if (baseFile === undefined && checkTarget == undefined) return EVEN;
if (baseFile == undefined) return TARGET_IS_NEW;
if (checkTarget == undefined) return BASE_IS_NEW;
const modifiedBase = baseFile instanceof TFile ? baseFile?.stat?.mtime ?? 0 : baseFile?.mtime ?? 0;
const modifiedTarget = checkTarget instanceof TFile ? checkTarget?.stat?.mtime ?? 0 : checkTarget?.mtime ?? 0;
const modifiedBase = "stat" in baseFile ? (baseFile?.stat?.mtime ?? 0) : (baseFile?.mtime ?? 0);
const modifiedTarget = "stat" in checkTarget ? (checkTarget?.stat?.mtime ?? 0) : (checkTarget?.mtime ?? 0);
if (modifiedBase && modifiedTarget && isMarkedAsSameChanges(baseFile, [modifiedBase, modifiedTarget])) {
return EVEN;
@@ -465,3 +355,330 @@ export function compareFileFreshness(baseFile: TFile | AnyEntry | undefined, che
return compareMTime(modifiedBase, modifiedTarget);
}
const _cached = new Map<
string,
{
value: any;
context: Map<string, any>;
}
>();
export type MemoOption = {
key: string;
forceUpdate?: boolean;
validator?: (context: Map<string, any>) => boolean;
};
export function useMemo<T>(
{ key, forceUpdate, validator }: MemoOption,
updateFunc: (context: Map<string, any>, prev: T) => T
): T {
const cached = _cached.get(key);
const context = cached?.context || new Map<string, any>();
if (cached && !forceUpdate && (!validator || (validator && !validator(context)))) {
return cached.value;
}
const value = updateFunc(context, cached?.value);
if (value !== cached?.value) {
_cached.set(key, { value, context });
}
return value;
}
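
A sketch of useMemo with a validator; note that the validator returns true when the cached value is stale. getVaultName() is a hypothetical accessor:

const banner = useMemo(
    {
        key: "banner",
        // Recompute when the name stored in the memo's context no longer matches.
        validator: (ctx) => ctx.get("vaultName") !== getVaultName(),
    },
    (ctx) => {
        ctx.set("vaultName", getVaultName());
        return `Connected to ${getVaultName()}`;
    }
);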
// const _static = new Map<string, any>();
const _staticObj = new Map<
string,
{
value: any;
}
>();
export function useStatic<T>(key: string): { value: T | undefined };
export function useStatic<T>(key: string, initial: T): { value: T };
export function useStatic<T>(key: string, initial?: T) {
// if (!_static.has(key) && initial) {
// _static.set(key, initial);
// }
const obj = _staticObj.get(key);
if (obj !== undefined) {
return obj;
} else {
// let buf = initial;
const obj = {
_buf: initial,
get value() {
return this._buf as T;
},
set value(value: T) {
this._buf = value;
},
};
_staticObj.set(key, obj);
return obj;
}
}
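
useStatic hands back a keyed, shared box rather than a bare value, so every call site sees the same state:

const counter = useStatic("example-counter", 0);
counter.value++;
// Any other call to useStatic("example-counter") observes the incremented value.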
export function disposeMemo(key: string) {
_cached.delete(key);
}
export function disposeAllMemo() {
_cached.clear();
}
export function displayRev(rev: string) {
const [number, hash] = rev.split("-");
return `${number}-${hash.substring(0, 6)}`;
}
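
For example:

displayRev("12-9f3a1c77b2d4"); // "12-9f3a1c"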
type DocumentProps = {
id: DocumentID;
rev?: string;
prefixedPath: FilePathWithPrefix;
path: FilePath;
isDeleted: boolean;
revDisplay: string;
shortenedId: string;
shortenedPath: string;
};
export function getDocProps(doc: AnyEntry): DocumentProps {
const id = doc._id;
const shortenedId = id.substring(0, 10);
const prefixedPath = getPath(doc);
const path = stripAllPrefixes(prefixedPath);
const rev = doc._rev;
const revDisplay = rev ? displayRev(rev) : "0-NOREVS";
// const prefix = prefixedPath.substring(0, prefixedPath.length - path.length);
const shortenedPath = path.substring(0, 10);
const isDeleted = doc._deleted || doc.deleted || false;
return { id, rev, revDisplay, prefixedPath, path, isDeleted, shortenedId, shortenedPath };
}
export function getLogLevel(showNotice: boolean) {
return showNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
}
export type MapLike<K, V> = {
set(key: K, value: V): Map<K, V>;
clear(): void;
delete(key: K): boolean;
get(key: K): V | undefined;
has(key: K): boolean;
keys: () => IterableIterator<K>;
get size(): number;
};
export async function autosaveCache<K, V>(db: KeyValueDatabase, mapKey: string): Promise<MapLike<K, V>> {
const savedData = (await db.get<Map<K, V>>(mapKey)) ?? new Map<K, V>();
const _commit = () => {
try {
scheduleTask("commit-map-save-" + mapKey, 250, async () => {
await db.set(mapKey, savedData);
});
} catch {
// NO OP.
}
};
return {
set(key: K, value: V) {
const modified = savedData.get(key) !== value;
const result = savedData.set(key, value);
if (modified) {
_commit();
}
return result;
},
clear(): void {
savedData.clear();
_commit();
},
delete(key: K): boolean {
const result = savedData.delete(key);
if (result) {
_commit();
}
return result;
},
get(key: K): V | undefined {
return savedData.get(key);
},
has(key) {
return savedData.has(key);
},
keys() {
return savedData.keys();
},
get size() {
return savedData.size;
},
};
}
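
A usage sketch of autosaveCache; kvDB is assumed to be an opened KeyValueDatabase (see OpenKeyValueDatabase earlier in this change set) and the map key is a placeholder:

const seen = await autosaveCache<string, number>(kvDB, "example-seen-files");
seen.set("notes/a.md", Date.now());
seen.has("notes/a.md"); // true; the backing store is persisted about 250 ms later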
export function onlyInNTimes(n: number, proc: (progress: number) => any) {
let counter = 0;
return function () {
if (counter++ % n == 0) {
proc(counter);
}
};
}
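
onlyInNTimes thins out a callback; with n = 100 it fires on the first call and every hundredth after that, receiving the incremented counter:

const report = onlyInNTimes(100, (progress) => console.log(`processed ${progress}`));
for (let i = 0; i < 1000; i++) report();
// Logs "processed 1", "processed 101", "processed 201", ...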
const waitingTasks = {} as Record<string, { task?: PromiseWithResolvers<any>; previous: number; leastNext: number }>;
export function rateLimitedSharedExecution<T>(key: string, interval: number, proc: () => Promise<T>): Promise<T> {
if (!(key in waitingTasks)) {
waitingTasks[key] = { task: undefined, previous: 0, leastNext: 0 };
}
if (waitingTasks[key].task) {
// A task is already in flight: push back the earliest next run and share its promise.
waitingTasks[key].leastNext = Date.now() + interval;
return waitingTasks[key].task.promise;
}
const previous = waitingTasks[key].previous;
const delay = previous == 0 ? 0 : Math.max(interval - (Date.now() - previous), 0);
const task = promiseWithResolver<T>();
void task.promise.finally(() => {
if (waitingTasks[key].task === task) {
waitingTasks[key].task = undefined;
waitingTasks[key].previous = Math.max(Date.now(), waitingTasks[key].leastNext);
}
});
waitingTasks[key] = {
task,
previous: Date.now(),
leastNext: Date.now() + interval,
};
void scheduleTask("thin-out-" + key, delay, async () => {
try {
task.resolve(await proc());
} catch (ex) {
task.reject(ex);
}
});
return task.promise;
}
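
A sketch of the rate limiter above: concurrent callers with the same key share the in-flight promise, and runs are spaced by at least the interval. fetchRemoteStatus() is hypothetical:

const status = await rateLimitedSharedExecution("remote-status", 5000, () => fetchRemoteStatus());
// A second caller arriving while the fetch is pending receives the same promise.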
export function updatePreviousExecutionTime(key: string, timeDelta: number = 0) {
if (!(key in waitingTasks)) {
waitingTasks[key] = { task: undefined, previous: 0, leastNext: 0 };
}
waitingTasks[key].leastNext = Math.max(Date.now() + timeDelta, waitingTasks[key].leastNext);
}
const prefixMapObject = {
s: {
1: "V",
2: "W",
3: "X",
4: "Y",
5: "Z",
},
o: {
1: "v",
2: "w",
3: "x",
4: "y",
5: "z",
},
} as Record<string, Record<number, string>>;
const decodePrefixMapObject = Object.fromEntries(
Object.entries(prefixMapObject).flatMap(([prefix, map]) =>
Object.entries(map).map(([len, char]) => [char, { prefix, len: parseInt(len) }])
)
);
const prefixMapNumber = {
n: {
1: "a",
2: "b",
3: "c",
4: "d",
5: "e",
},
N: {
1: "A",
2: "B",
3: "C",
4: "D",
5: "E",
},
} as Record<string, Record<number, string>>;
const decodePrefixMapNumber = Object.fromEntries(
Object.entries(prefixMapNumber).flatMap(([prefix, map]) =>
Object.entries(map).map(([len, char]) => [char, { prefix, len: parseInt(len) }])
)
);
export function encodeAnyArray(obj: any[]): string {
const tempArray = obj.map((v) => {
if (v === null) return "n";
if (v === false) return "f";
if (v === true) return "t";
if (v === undefined) return "u";
if (typeof v == "number") {
const b36 = v.toString(36);
const strNum = v.toString();
const expression = b36.length < strNum.length ? "N" : "n";
const encodedStr = expression == "N" ? b36 : strNum;
const len = encodedStr.length.toString(36);
const lenLen = len.length;
const prefix2 = prefixMapNumber[expression][lenLen];
return prefix2 + len + encodedStr;
}
const str = typeof v == "string" ? v : JSON.stringify(v);
const prefix = typeof v == "string" ? "s" : "o";
const length = str.length.toString(36);
const lenLen = length.length;
const prefix2 = prefixMapObject[prefix][lenLen];
return prefix2 + length + str;
});
const w = tempArray.join("");
return w;
}
const decodeMapConstant = {
u: undefined,
n: null,
f: false,
t: true,
} as Record<string, any>;
export function decodeAnyArray(str: string): any[] {
const result = [];
let i = 0;
while (i < str.length) {
const char = str[i];
i++;
if (char in decodeMapConstant) {
result.push(decodeMapConstant[char]);
continue;
}
if (char in decodePrefixMapNumber) {
const { prefix, len } = decodePrefixMapNumber[char];
const lenStr = str.substring(i, i + len);
i += len;
const radix = prefix == "N" ? 36 : 10;
const lenNum = parseInt(lenStr, 36);
const value = str.substring(i, i + lenNum);
i += lenNum;
result.push(parseInt(value, radix));
continue;
}
const { prefix, len } = decodePrefixMapObject[char];
const lenStr = str.substring(i, i + len);
i += len;
const lenNum = parseInt(lenStr, 36);
const value = str.substring(i, i + lenNum);
i += lenNum;
if (prefix == "s") {
result.push(value);
} else {
result.push(JSON.parse(value));
}
}
return result;
}
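
A worked round trip of the length-prefixed encoding above:

// "hi" -> string prefix "s", length "2" (one base36 digit) -> "V2hi"
// 5    -> decimal digits, one-character length "1"         -> "a15"
// true -> constant                                         -> "t"
encodeAnyArray(["hi", 5, true]); // "V2hia15t"
decodeAnyArray("V2hia15t"); // ["hi", 5, true]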

View File

@@ -1,13 +1,39 @@
import { type FilePath } from "./lib/src/common/types.ts";
export {
addIcon, App, debounce, Editor, FuzzySuggestModal, MarkdownRenderer, MarkdownView, Modal, Notice, Platform, Plugin, PluginSettingTab, requestUrl, sanitizeHTMLToDom, Setting, stringifyYaml, TAbstractFile, TextAreaComponent, TFile, TFolder,
parseYaml, ItemView, WorkspaceLeaf
addIcon,
App,
debounce,
Editor,
FuzzySuggestModal,
MarkdownRenderer,
MarkdownView,
Modal,
Notice,
Platform,
Plugin,
PluginSettingTab,
requestUrl,
sanitizeHTMLToDom,
Setting,
stringifyYaml,
TAbstractFile,
TextAreaComponent,
TFile,
TFolder,
parseYaml,
ItemView,
WorkspaceLeaf,
} from "obsidian";
export type { DataWriteOptions, PluginManifest, RequestUrlParam, RequestUrlResponse, MarkdownFileInfo, ListedFiles } from "obsidian";
import {
normalizePath as normalizePath_
export type {
DataWriteOptions,
PluginManifest,
RequestUrlParam,
RequestUrlResponse,
MarkdownFileInfo,
ListedFiles,
} from "obsidian";
import { normalizePath as normalizePath_ } from "obsidian";
const normalizePath = normalizePath_ as <T extends string | FilePath>(from: T) => T;
export { normalizePath }
export { type Diff, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch } from "diff-match-patch";
export { normalizePath };
export { type Diff, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch } from "diff-match-patch";

View File

@@ -1,776 +0,0 @@
import { normalizePath, type PluginManifest, type ListedFiles } from "../deps.ts";
import { type EntryDoc, type LoadedEntry, type InternalFileEntry, type FilePathWithPrefix, type FilePath, LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, MODE_SELECTIVE, MODE_PAUSED, type SavingEntry, type DocumentID } from "../lib/src/common/types.ts";
import { type InternalFileInfo, ICHeader, ICHeaderEnd } from "../common/types.ts";
import { readAsBlob, isDocContentSame, sendSignal, readContent, createBlob } from "../lib/src/common/utils.ts";
import { Logger } from "../lib/src/common/logger.ts";
import { PouchDB } from "../lib/src/pouchdb/pouchdb-browser.js";
import { isInternalMetadata, PeriodicProcessor } from "../common/utils.ts";
import { serialized } from "../lib/src/concurrency/lock.ts";
import { JsonResolveModal } from "../ui/JsonResolveModal.ts";
import { LiveSyncCommands } from "./LiveSyncCommands.ts";
import { addPrefix, stripAllPrefixes } from "../lib/src/string_and_binary/path.ts";
import { QueueProcessor } from "../lib/src/concurrency/processor.ts";
import { hiddenFilesEventCount, hiddenFilesProcessingCount } from "../lib/src/mock_and_interop/stores.ts";
export class HiddenFileSync extends LiveSyncCommands {
periodicInternalFileScanProcessor: PeriodicProcessor = new PeriodicProcessor(this.plugin, async () => this.settings.syncInternalFiles && this.localDatabase.isReady && await this.syncInternalFilesAndDatabase("push", false));
get kvDB() {
return this.plugin.kvDB;
}
getConflictedDoc(path: FilePathWithPrefix, rev: string) {
return this.plugin.getConflictedDoc(path, rev);
}
onunload() {
this.periodicInternalFileScanProcessor?.disable();
}
onload() {
this.plugin.addCommand({
id: "livesync-scaninternal",
name: "Sync hidden files",
callback: () => {
this.syncInternalFilesAndDatabase("safe", true);
},
});
}
async onInitializeDatabase(showNotice: boolean) {
if (this.settings.syncInternalFiles) {
try {
Logger("Synchronizing hidden files...");
await this.syncInternalFilesAndDatabase("push", showNotice);
Logger("Synchronizing hidden files done");
} catch (ex) {
Logger("Synchronizing hidden files failed");
Logger(ex, LOG_LEVEL_VERBOSE);
}
}
}
async beforeReplicate(showNotice: boolean) {
if (this.localDatabase.isReady && this.settings.syncInternalFiles && this.settings.syncInternalFilesBeforeReplication && !this.settings.watchInternalFileChanges) {
await this.syncInternalFilesAndDatabase("push", showNotice);
}
}
async onResume() {
this.periodicInternalFileScanProcessor?.disable();
if (this.plugin.suspended)
return;
if (this.settings.syncInternalFiles) {
await this.syncInternalFilesAndDatabase("safe", false);
}
this.periodicInternalFileScanProcessor.enable(this.settings.syncInternalFiles && this.settings.syncInternalFilesInterval ? (this.settings.syncInternalFilesInterval * 1000) : 0);
}
parseReplicationResultItem(docs: PouchDB.Core.ExistingDocument<EntryDoc>) {
return false;
}
realizeSettingSyncMode(): Promise<void> {
this.periodicInternalFileScanProcessor?.disable();
if (this.plugin.suspended)
return Promise.resolve();
if (!this.plugin.isReady)
return Promise.resolve();
this.periodicInternalFileScanProcessor.enable(this.settings.syncInternalFiles && this.settings.syncInternalFilesInterval ? (this.settings.syncInternalFilesInterval * 1000) : 0);
return Promise.resolve();
}
procInternalFile(filename: string) {
this.internalFileProcessor.enqueue(filename);
}
internalFileProcessor = new QueueProcessor<string, any>(
async (filenames) => {
Logger(`START: Applying changes to ${filenames.length} hidden files`, LOG_LEVEL_VERBOSE);
await this.syncInternalFilesAndDatabase("pull", false, false, filenames);
Logger(`DONE: Applied changes to ${filenames.length} hidden files`, LOG_LEVEL_VERBOSE);
return;
}, { batchSize: 100, concurrentLimit: 1, delay: 10, yieldThreshold: 100, suspended: false, totalRemainingReactiveSource: hiddenFilesEventCount }
);
recentProcessedInternalFiles = [] as string[];
async watchVaultRawEventsAsync(path: FilePath) {
if (!this.settings.syncInternalFiles) return;
// Exclude files handled by customization sync
const configDir = normalizePath(this.app.vault.configDir);
const synchronisedInConfigSync = !this.settings.usePluginSync ? [] : Object.values(this.settings.pluginSyncExtendedSetting).filter(e => e.mode == MODE_SELECTIVE || e.mode == MODE_PAUSED).map(e => e.files).flat().map(e => `${configDir}/${e}`.toLowerCase());
if (synchronisedInConfigSync.some(e => e.startsWith(path.toLowerCase()))) {
Logger(`Hidden file skipped: ${path} is synchronized in customization sync.`, LOG_LEVEL_VERBOSE);
return;
}
const stat = await this.vaultAccess.adapterStat(path);
// Sometimes a folder arrives here; process files only.
if (stat != null && stat.type != "file") {
return;
}
const mtime = stat == null ? 0 : stat?.mtime ?? 0;
const storageMTime = ~~((mtime) / 1000);
const key = `${path}-${storageMTime}`;
if (mtime != 0 && this.recentProcessedInternalFiles.contains(key)) {
// If recently processed, the event may have been caused by our own write.
return;
}
this.recentProcessedInternalFiles = [key, ...this.recentProcessedInternalFiles].slice(0, 100);
// const id = await this.path2id(path, ICHeader);
const prefixedFileName = addPrefix(path, ICHeader);
const filesOnDB = await this.localDatabase.getDBEntryMeta(prefixedFileName);
const dbMTime = ~~((filesOnDB && filesOnDB.mtime || 0) / 1000);
// Skip unchanged file.
if (dbMTime == storageMTime) {
// Logger(`STORAGE --> DB:${path}: (hidden) Nothing changed`);
return;
}
// Do not compare timestamps. Local data should always be preferred, unless this plugin itself wrote the entry.
if (storageMTime == 0) {
await this.deleteInternalFileOnDatabase(path);
} else {
await this.storeInternalFileToDatabase({ path: path, mtime, ctime: stat?.ctime ?? mtime, size: stat?.size ?? 0 });
}
}
async resolveConflictOnInternalFiles() {
// Scan all conflicted internal files
const conflicted = this.localDatabase.findEntries(ICHeader, ICHeaderEnd, { conflicts: true });
this.conflictResolutionProcessor.suspend();
try {
for await (const doc of conflicted) {
if (!("_conflicts" in doc))
continue;
if (isInternalMetadata(doc._id)) {
this.conflictResolutionProcessor.enqueue(doc.path);
}
}
} catch (ex) {
Logger("something went wrong on resolving all conflicted internal files");
Logger(ex, LOG_LEVEL_VERBOSE);
}
await this.conflictResolutionProcessor.startPipeline().waitForAllProcessed();
}
async resolveByNewerEntry(id: DocumentID, path: FilePathWithPrefix, currentDoc: EntryDoc, currentRev: string, conflictedRev: string) {
const conflictedDoc = await this.localDatabase.getRaw(id, { rev: conflictedRev });
// Determine which revision should be deleted
// by simply comparing the modified times.
const mtimeCurrent = ("mtime" in currentDoc && currentDoc.mtime) || 0;
const mtimeConflicted = ("mtime" in conflictedDoc && conflictedDoc.mtime) || 0;
// Logger(`Revisions:${new Date(mtimeA).toLocaleString} and ${new Date(mtimeB).toLocaleString}`);
// console.log(`mtime:${mtimeA} - ${mtimeB}`);
const delRev = mtimeCurrent < mtimeConflicted ? currentRev : conflictedRev;
// delete older one.
await this.localDatabase.removeRevision(id, delRev);
Logger(`Older one has been deleted:${path}`);
const cc = await this.localDatabase.getRaw(id, { conflicts: true });
if (cc._conflicts?.length === 0) {
await this.extractInternalFileFromDatabase(stripAllPrefixes(path))
} else {
this.conflictResolutionProcessor.enqueue(path);
}
// check the file again
}
conflictResolutionProcessor = new QueueProcessor(async (paths: FilePathWithPrefix[]) => {
const path = paths[0];
sendSignal(`cancel-internal-conflict:${path}`);
try {
// Retrieve data
const id = await this.path2id(path, ICHeader);
const doc = await this.localDatabase.getRaw(id, { conflicts: true });
// if (!("_conflicts" in doc)){
// return [];
// }
if (doc._conflicts === undefined) return [];
if (doc._conflicts.length == 0)
return [];
Logger(`Hidden file conflicted:${path}`);
const conflicts = doc._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
const revA = doc._rev;
const revB = conflicts[0];
if (path.endsWith(".json")) {
const conflictedRev = conflicts[0];
const conflictedRevNo = Number(conflictedRev.split("-")[0]);
//Search
const revFrom = (await this.localDatabase.getRaw<EntryDoc>(id, { revs_info: true }));
const commonBase = revFrom._revs_info?.filter(e => e.status == "available" && Number(e.rev.split("-")[0]) < conflictedRevNo).first()?.rev ?? "";
const result = await this.plugin.mergeObject(path, commonBase, doc._rev, conflictedRev);
if (result) {
Logger(`Object merge:${path}`, LOG_LEVEL_INFO);
const filename = stripAllPrefixes(path);
const isExists = await this.plugin.vaultAccess.adapterExists(filename);
if (!isExists) {
await this.vaultAccess.ensureDirectory(filename);
}
await this.plugin.vaultAccess.adapterWrite(filename, result);
const stat = await this.vaultAccess.adapterStat(filename);
if (!stat) {
throw new Error(`conflictResolutionProcessor: Failed to stat file ${filename}`);
}
await this.storeInternalFileToDatabase({ path: filename, ...stat });
await this.extractInternalFileFromDatabase(filename);
await this.localDatabase.removeRevision(id, revB);
this.conflictResolutionProcessor.enqueue(path);
return [];
} else {
Logger(`Object merge is not applicable.`, LOG_LEVEL_VERBOSE);
}
return [{ path, revA, revB, id, doc }];
}
// For non-JSON files, resolve conflicts by choosing the newer one.
await this.resolveByNewerEntry(id, path, doc, revA, revB);
return [];
} catch (ex) {
Logger(`Failed to resolve conflict (Hidden): ${path}`);
Logger(ex, LOG_LEVEL_VERBOSE);
return [];
}
}, {
suspended: false, batchSize: 1, concurrentLimit: 5, delay: 10, keepResultUntilDownstreamConnected: true, yieldThreshold: 10,
pipeTo: new QueueProcessor(async (results) => {
const { id, doc, path, revA, revB } = results[0];
const docAMerge = await this.localDatabase.getDBEntry(path, { rev: revA });
const docBMerge = await this.localDatabase.getDBEntry(path, { rev: revB });
if (docAMerge != false && docBMerge != false) {
if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
// Again for other conflicted revisions.
this.conflictResolutionProcessor.enqueue(path);
}
return;
} else {
// If either revision could not be read, force resolution by the newer one.
await this.resolveByNewerEntry(id, path, doc, revA, revB);
}
}, { suspended: false, batchSize: 1, concurrentLimit: 1, delay: 10, keepResultUntilDownstreamConnected: false, yieldThreshold: 10 })
})
queueConflictCheck(path: FilePathWithPrefix) {
this.conflictResolutionProcessor.enqueue(path);
}
// TODO: Tidy up. Even though this is an experimental feature, it is so dirty...
async syncInternalFilesAndDatabase(direction: "push" | "pull" | "safe" | "pullForce" | "pushForce", showMessage: boolean, filesAll: InternalFileInfo[] | false = false, targetFiles: string[] | false = false) {
await this.resolveConflictOnInternalFiles();
const logLevel = showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
Logger("Scanning hidden files.", logLevel, "sync_internal");
const ignorePatterns = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",").filter(e => e).map(e => new RegExp(e, "i"));
const configDir = normalizePath(this.app.vault.configDir);
let files: InternalFileInfo[] =
filesAll ? filesAll : (await this.scanInternalFiles())
const synchronisedInConfigSync = !this.settings.usePluginSync ? [] : Object.values(this.settings.pluginSyncExtendedSetting).filter(e => e.mode == MODE_SELECTIVE || e.mode == MODE_PAUSED).map(e => e.files).flat().map(e => `${configDir}/${e}`.toLowerCase());
files = files.filter(file => synchronisedInConfigSync.every(filterFile => !file.path.toLowerCase().startsWith(filterFile)))
const filesOnDB = ((await this.localDatabase.allDocsRaw({ startkey: ICHeader, endkey: ICHeaderEnd, include_docs: true })).rows.map(e => e.doc) as InternalFileEntry[]).filter(e => !e.deleted);
const allFileNamesSrc = [...new Set([...files.map(e => normalizePath(e.path)), ...filesOnDB.map(e => stripAllPrefixes(this.getPath(e)))])];
const allFileNames = allFileNamesSrc.filter(filename => !targetFiles || (targetFiles && targetFiles.indexOf(filename) !== -1)).filter(path => synchronisedInConfigSync.every(filterFile => !path.toLowerCase().startsWith(filterFile)))
function compareMTime(a: number, b: number) {
const wa = ~~(a / 1000);
const wb = ~~(b / 1000);
const diff = wa - wb;
return diff;
}
const fileCount = allFileNames.length;
let processed = 0;
let filesChanged = 0;
// Count updated files per folder, like the example below:
// .obsidian: 2
// .obsidian/workspace: 1
// .obsidian/plugins: 1
// .obsidian/plugins/recent-files-obsidian: 1
// .obsidian/plugins/recent-files-obsidian/data.json: 1
const updatedFolders: { [key: string]: number; } = {};
const countUpdatedFolder = (path: string) => {
const pieces = path.split("/");
let c = pieces.shift();
let pathPieces = "";
filesChanged++;
while (c) {
pathPieces += (pathPieces != "" ? "/" : "") + c;
pathPieces = normalizePath(pathPieces);
if (!(pathPieces in updatedFolders)) {
updatedFolders[pathPieces] = 0;
}
updatedFolders[pathPieces]++;
c = pieces.shift();
}
};
// Cache update-time information for files that have already been processed (mainly files skipped because their content was identical)
let caches: { [key: string]: { storageMtime: number; docMtime: number; }; } = {};
caches = await this.kvDB.get<{ [key: string]: { storageMtime: number; docMtime: number; }; }>("diff-caches-internal") || {};
const filesMap = files.reduce((acc, cur) => {
acc[cur.path] = cur;
return acc;
}, {} as { [key: string]: InternalFileInfo; });
const filesOnDBMap = filesOnDB.reduce((acc, cur) => {
acc[stripAllPrefixes(this.getPath(cur))] = cur;
return acc;
}, {} as { [key: string]: InternalFileEntry; });
await new QueueProcessor(async (filenames: FilePath[]) => {
const filename = filenames[0];
processed++;
if (processed % 100 == 0) {
Logger(`Hidden file: ${processed}/${fileCount}`, logLevel, "sync_internal");
}
if (!filename) return [];
if (ignorePatterns.some(e => filename.match(e)))
return [];
if (await this.plugin.isIgnoredByIgnoreFiles(filename)) {
return [];
}
const fileOnStorage = filename in filesMap ? filesMap[filename] : undefined;
const fileOnDatabase = filename in filesOnDBMap ? filesOnDBMap[filename] : undefined;
return [{
filename,
fileOnStorage,
fileOnDatabase,
}]
}, { suspended: true, batchSize: 1, concurrentLimit: 10, delay: 0, totalRemainingReactiveSource: hiddenFilesProcessingCount })
.pipeTo(new QueueProcessor(async (params) => {
const
{
filename,
fileOnStorage: xFileOnStorage,
fileOnDatabase: xFileOnDatabase
} = params[0];
if (xFileOnStorage && xFileOnDatabase) {
const cache = filename in caches ? caches[filename] : { storageMtime: 0, docMtime: 0 };
// Both => Synchronize
if ((direction != "pullForce" && direction != "pushForce") && xFileOnDatabase.mtime == cache.docMtime && xFileOnStorage.mtime == cache.storageMtime) {
return;
}
const nw = compareMTime(xFileOnStorage.mtime, xFileOnDatabase.mtime);
if (nw > 0 || direction == "pushForce") {
await this.storeInternalFileToDatabase(xFileOnStorage);
}
if (nw < 0 || direction == "pullForce") {
// Skip if no extraction was performed.
if (!await this.extractInternalFileFromDatabase(filename))
return;
}
// If the update succeeded or the file contents are the same, update the cache.
cache.docMtime = xFileOnDatabase.mtime;
cache.storageMtime = xFileOnStorage.mtime;
caches[filename] = cache;
countUpdatedFolder(filename);
} else if (!xFileOnStorage && xFileOnDatabase) {
if (direction == "push" || direction == "pushForce") {
if (xFileOnDatabase.deleted)
return;
await this.deleteInternalFileOnDatabase(filename, false);
} else if (direction == "pull" || direction == "pullForce") {
if (await this.extractInternalFileFromDatabase(filename)) {
countUpdatedFolder(filename);
}
} else if (direction == "safe") {
if (xFileOnDatabase.deleted)
return;
if (await this.extractInternalFileFromDatabase(filename)) {
countUpdatedFolder(filename);
}
}
} else if (xFileOnStorage && !xFileOnDatabase) {
if (direction == "push" || direction == "pushForce" || direction == "safe") {
await this.storeInternalFileToDatabase(xFileOnStorage);
} else {
await this.extractInternalFileFromDatabase(xFileOnStorage.path);
}
} else {
throw new Error("Invalid state on hidden file sync");
// Something corrupted?
}
return;
}, { suspended: true, batchSize: 1, concurrentLimit: 5, delay: 0 }))
.root
.enqueueAll(allFileNames)
.startPipeline().waitForAllDoneAndTerminate();
await this.kvDB.set("diff-caches-internal", caches);
// When files have been retrieved from the database, they must be reloaded.
if ((direction == "pull" || direction == "pullForce") && filesChanged != 0) {
// Show notification to restart obsidian when something has been changed in configDir.
if (configDir in updatedFolders) {
// Number of updated files below configDir.
let updatedCount = updatedFolders[configDir];
try {
//@ts-ignore
const manifests = Object.values(this.app.plugins.manifests) as any as PluginManifest[];
//@ts-ignore
const enabledPlugins = this.app.plugins.enabledPlugins as Set<string>;
const enabledPluginManifests = manifests.filter(e => enabledPlugins.has(e.id));
for (const manifest of enabledPluginManifests) {
if (manifest.dir && manifest.dir in updatedFolders) {
// If the user has been notified about plug-in updates, reloading Obsidian may not be necessary.
updatedCount -= updatedFolders[manifest.dir];
const updatePluginId = manifest.id;
const updatePluginName = manifest.name;
this.plugin.askInPopup(`updated-${updatePluginId}`, `Files in ${updatePluginName} have been updated. Press {HERE} to reload ${updatePluginName}, or press elsewhere to dismiss this message.`, (anchor) => {
anchor.text = "HERE";
anchor.addEventListener("click", async () => {
Logger(`Unloading plugin: ${updatePluginName}`, LOG_LEVEL_NOTICE, "plugin-reload-" + updatePluginId);
// @ts-ignore
await this.app.plugins.unloadPlugin(updatePluginId);
// @ts-ignore
await this.app.plugins.loadPlugin(updatePluginId);
Logger(`Plugin reloaded: ${updatePluginName}`, LOG_LEVEL_NOTICE, "plugin-reload-" + updatePluginId);
});
}
);
}
}
} catch (ex) {
Logger("Error while checking plugin status.");
Logger(ex, LOG_LEVEL_VERBOSE);
}
// If any changes remain, notify the user to reload Obsidian.
if (updatedCount != 0) {
if (!this.plugin.isReloadingScheduled) {
this.plugin.askInPopup(`updated-any-hidden`, `Hidden files have been synchronised. Press {HERE} to schedule a reload of Obsidian, or press elsewhere to dismiss this message.`, (anchor) => {
anchor.text = "HERE";
anchor.addEventListener("click", () => {
this.plugin.scheduleAppReload();
});
});
}
}
}
}
Logger(`Hidden files scanned: ${filesChanged} files have been modified`, logLevel, "sync_internal");
}
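// NOTE (editorial sketch, not part of the source): the pipeline above reduces to a
// per-file decision; the mtime-cache short-circuit and the deleted-document guards are
// omitted here for brevity. A hypothetical pure helper capturing that decision, with
// assumed names and signature, for illustration only:
//
// type Direction = "push" | "pushForce" | "pull" | "pullForce" | "safe";
// type Action = "store" | "extract" | "deleteOnDatabase";
// function decideActions(direction: Direction, onStorage: boolean, onDatabase: boolean, newer: number): Action[] {
//     const actions: Action[] = [];
//     if (onStorage && onDatabase) {
//         if (newer > 0 || direction == "pushForce") actions.push("store");
//         if (newer < 0 || direction == "pullForce") actions.push("extract");
//     } else if (!onStorage && onDatabase) {
//         actions.push(direction == "push" || direction == "pushForce" ? "deleteOnDatabase" : "extract");
//     } else if (onStorage && !onDatabase) {
//         actions.push(direction == "pull" || direction == "pullForce" ? "extract" : "store");
//     } else {
//         throw new Error("Invalid state on hidden file sync");
//     }
//     return actions;
// }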
async storeInternalFileToDatabase(file: InternalFileInfo, forceWrite = false) {
if (await this.plugin.isIgnoredByIgnoreFiles(file.path)) {
return;
}
const id = await this.path2id(file.path, ICHeader);
const prefixedFileName = addPrefix(file.path, ICHeader);
const content = createBlob(await this.plugin.vaultAccess.adapterReadAuto(file.path));
const mtime = file.mtime;
return await serialized("file-" + prefixedFileName, async () => {
try {
const old = await this.localDatabase.getDBEntry(prefixedFileName, undefined, false, false);
let saveData: SavingEntry;
if (old === false) {
saveData = {
_id: id,
path: prefixedFileName,
data: content,
mtime,
ctime: mtime,
datatype: "newnote",
size: file.size,
children: [],
deleted: false,
type: "newnote",
eden: {},
};
} else {
if (await isDocContentSame(readAsBlob(old), content) && !forceWrite) {
// Logger(`STORAGE --> DB:${file.path}: (hidden) Not changed`, LOG_LEVEL_VERBOSE);
return;
}
saveData = {
...old,
data: content,
mtime,
size: file.size,
datatype: old.datatype,
children: [],
deleted: false,
type: old.datatype,
};
}
const ret = await this.localDatabase.putDBEntry(saveData);
Logger(`STORAGE --> DB:${file.path}: (hidden) Done`);
return ret;
} catch (ex) {
Logger(`STORAGE --> DB:${file.path}: (hidden) Failed`);
Logger(ex, LOG_LEVEL_VERBOSE);
return false;
}
});
}
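// NOTE (editorial sketch): storeInternalFileToDatabase, deleteInternalFileOnDatabase and
// extractInternalFileFromDatabase all wrap their body in serialized("file-" + prefixedFileName, ...)
// so that operations on the same file run strictly one at a time, while different files can
// proceed concurrently. A minimal standalone illustration of that pattern, assuming only that
// serialized(key, fn) from octagonal-wheels runs callbacks sharing a key sequentially
// (readFileIfExists and writeFile below are hypothetical helpers, not part of the source):
//
// import { serialized } from "octagonal-wheels/concurrency/lock_v2";
// async function appendLog(file: string, line: string) {
//     return await serialized("file-" + file, async () => {
//         // Reads and writes for the same `file` can never interleave here.
//         const current = (await readFileIfExists(file)) ?? "";
//         await writeFile(file, current + line + "\n");
//     });
// }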
async deleteInternalFileOnDatabase(filename: FilePath, forceWrite = false) {
const id = await this.path2id(filename, ICHeader);
const prefixedFileName = addPrefix(filename, ICHeader);
const mtime = new Date().getTime();
if (await this.plugin.isIgnoredByIgnoreFiles(filename)) {
return;
}
await serialized("file-" + prefixedFileName, async () => {
try {
const old = await this.localDatabase.getDBEntryMeta(prefixedFileName, undefined, true) as InternalFileEntry | false;
let saveData: InternalFileEntry;
if (old === false) {
saveData = {
_id: id,
path: prefixedFileName,
mtime,
ctime: mtime,
size: 0,
children: [],
deleted: true,
type: "newnote",
eden: {}
};
} else {
// Remove all conflicted before deleting.
const conflicts = await this.localDatabase.getRaw(old._id, { conflicts: true });
if (conflicts._conflicts !== undefined) {
for (const conflictRev of conflicts._conflicts) {
await this.localDatabase.removeRevision(old._id, conflictRev);
Logger(`STORAGE -x> DB:${filename}: (hidden) conflict removed ${old._rev} => ${conflictRev}`, LOG_LEVEL_VERBOSE);
}
}
if (old.deleted) {
Logger(`STORAGE -x> DB:${filename}: (hidden) already deleted`);
return;
}
saveData = {
...old,
mtime,
size: 0,
children: [],
deleted: true,
type: "newnote",
};
}
await this.localDatabase.putRaw(saveData);
Logger(`STORAGE -x> DB:${filename}: (hidden) Done`);
} catch (ex) {
Logger(`STORAGE -x> DB:${filename}: (hidden) Failed`);
Logger(ex, LOG_LEVEL_VERBOSE);
return false;
}
});
}
async extractInternalFileFromDatabase(filename: FilePath, force = false) {
const isExists = await this.plugin.vaultAccess.adapterExists(filename);
const prefixedFileName = addPrefix(filename, ICHeader);
if (await this.plugin.isIgnoredByIgnoreFiles(filename)) {
return;
}
return await serialized("file-" + prefixedFileName, async () => {
try {
// Check conflicted status
const fileOnDB = await this.localDatabase.getDBEntry(prefixedFileName, { conflicts: true }, false, true, true);
if (fileOnDB === false)
throw new Error(`File not found on database.:${filename}`);
// Prevent overwriting while conflicted revisions exist.
if (fileOnDB?._conflicts?.length) {
Logger(`Hidden file ${filename} has conflicted revisions; to keep it safe, writing to storage has been prevented`, LOG_LEVEL_INFO);
return;
}
const deleted = fileOnDB.deleted || fileOnDB._deleted || false;
if (deleted) {
if (!isExists) {
Logger(`STORAGE <x- DB:${filename}: deleted (hidden) Deleted on DB, but the file was already missing from storage.`);
} else {
Logger(`STORAGE <x- DB:${filename}: deleted (hidden).`);
await this.plugin.vaultAccess.adapterRemove(filename);
try {
//@ts-ignore internalAPI
await this.app.vault.adapter.reconcileInternalFile(filename);
} catch (ex) {
Logger("Failed to call internal API(reconcileInternalFile)", LOG_LEVEL_VERBOSE);
Logger(ex, LOG_LEVEL_VERBOSE);
}
}
return true;
}
if (!isExists) {
await this.vaultAccess.ensureDirectory(filename);
await this.plugin.vaultAccess.adapterWrite(filename, readContent(fileOnDB), { mtime: fileOnDB.mtime, ctime: fileOnDB.ctime });
try {
//@ts-ignore internalAPI
await this.app.vault.adapter.reconcileInternalFile(filename);
} catch (ex) {
Logger("Failed to call internal API(reconcileInternalFile)", LOG_LEVEL_VERBOSE);
Logger(ex, LOG_LEVEL_VERBOSE);
}
Logger(`STORAGE <-- DB:${filename}: written (hidden,new${force ? ", force" : ""})`);
return true;
} else {
const content = await this.plugin.vaultAccess.adapterReadAuto(filename);
const docContent = readContent(fileOnDB);
if (await isDocContentSame(content, docContent) && !force) {
// Logger(`STORAGE <-- DB:${filename}: skipped (hidden) Not changed`, LOG_LEVEL_VERBOSE);
return true;
}
await this.plugin.vaultAccess.adapterWrite(filename, docContent, { mtime: fileOnDB.mtime, ctime: fileOnDB.ctime });
try {
//@ts-ignore internalAPI
await this.app.vault.adapter.reconcileInternalFile(filename);
} catch (ex) {
Logger("Failed to call internal API(reconcileInternalFile)", LOG_LEVEL_VERBOSE);
Logger(ex, LOG_LEVEL_VERBOSE);
}
Logger(`STORAGE <-- DB:${filename}: written (hidden, overwrite${force ? ", force" : ""})`);
return true;
}
} catch (ex) {
Logger(`STORAGE <-- DB:${filename}: failed (hidden${force ? ", force" : ""})`);
Logger(ex, LOG_LEVEL_VERBOSE);
return false;
}
});
}
showJSONMergeDialogAndMerge(docA: LoadedEntry, docB: LoadedEntry): Promise<boolean> {
return new Promise((res) => {
Logger("Opening data-merging dialog", LOG_LEVEL_VERBOSE);
const docs = [docA, docB];
const path = stripAllPrefixes(docA.path);
const modal = new JsonResolveModal(this.app, path, [docA, docB], async (keep, result) => {
// modal.close();
try {
const filename = path;
let needFlush = false;
if (!result && !keep) {
Logger(`Skipped merging: ${filename}`);
res(false);
return;
}
// Delete old revisions
if (result || keep) {
for (const doc of docs) {
if (doc._rev != keep) {
if (await this.localDatabase.deleteDBEntry(this.getPath(doc), { rev: doc._rev })) {
Logger(`Conflicted revision has been deleted: ${filename}`);
needFlush = true;
}
}
}
}
if (!keep && result) {
const isExists = await this.plugin.vaultAccess.adapterExists(filename);
if (!isExists) {
await this.vaultAccess.ensureDirectory(filename);
}
await this.plugin.vaultAccess.adapterWrite(filename, result);
const stat = await this.plugin.vaultAccess.adapterStat(filename);
if (!stat) {
throw new Error("Stat failed");
}
const mtime = stat?.mtime ?? 0;
await this.storeInternalFileToDatabase({ path: filename, mtime, ctime: stat?.ctime ?? mtime, size: stat?.size ?? 0 }, true);
try {
//@ts-ignore internalAPI
await this.app.vault.adapter.reconcileInternalFile(filename);
} catch (ex) {
Logger("Failed to call internal API(reconcileInternalFile)", LOG_LEVEL_VERBOSE);
Logger(ex, LOG_LEVEL_VERBOSE);
}
Logger(`STORAGE <-- DB:${filename}: written (hidden,merged)`);
}
if (needFlush) {
await this.extractInternalFileFromDatabase(filename, false);
Logger(`STORAGE <-- DB:${filename}: extracted (hidden,merged)`);
}
res(true);
} catch (ex) {
Logger("Could not merge conflicted json");
Logger(ex, LOG_LEVEL_VERBOSE);
res(false);
}
});
modal.open();
});
}
async scanInternalFiles(): Promise<InternalFileInfo[]> {
const configDir = normalizePath(this.app.vault.configDir);
const ignoreFilter = this.settings.syncInternalFilesIgnorePatterns
.replace(/\n| /g, "")
.split(",").filter(e => e).map(e => new RegExp(e, "i"));
const synchronisedInConfigSync = !this.settings.usePluginSync ? [] : Object.values(this.settings.pluginSyncExtendedSetting).filter(e => e.mode == MODE_SELECTIVE || e.mode == MODE_PAUSED).map(e => e.files).flat().map(e => `${configDir}/${e}`.toLowerCase());
const root = this.app.vault.getRoot();
const findRoot = root.path;
const filenames = (await this.getFiles(findRoot, [], undefined, ignoreFilter)).filter(e => e.startsWith(".")).filter(e => !e.startsWith(".trash"));
const files = filenames.filter(path => synchronisedInConfigSync.every(filterFile => !path.toLowerCase().startsWith(filterFile))).map(async (e) => {
return {
path: e as FilePath,
stat: await this.plugin.vaultAccess.adapterStat(e)
};
});
const result: InternalFileInfo[] = [];
for (const f of files) {
const w = await f;
if (await this.plugin.isIgnoredByIgnoreFiles(w.path)) {
continue;
}
const mtime = w.stat?.mtime ?? 0;
const ctime = w.stat?.ctime ?? mtime;
const size = w.stat?.size ?? 0;
result.push({
...w,
mtime, ctime, size
});
}
return result;
}
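// NOTE (editorial sketch): syncInternalFilesIgnorePatterns is a single comma-separated
// setting string; the code above strips spaces and newlines, splits on commas, and compiles
// each fragment into a case-insensitive RegExp. For example (illustrative values only):
//
// const setting = "\\/node_modules\\/, \\/\\.git\\/,\n\\/obsidian-livesync\\/";
// const patterns = setting.replace(/\n| /g, "").split(",").filter(e => e).map(e => new RegExp(e, "i"));
// // patterns.some(p => ".obsidian/plugins/obsidian-livesync/data.json".match(p)) === true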
async getFiles(
path: string,
ignoreList: string[],
filter?: RegExp[],
ignoreFilter?: RegExp[]
) {
let w: ListedFiles;
try {
w = await this.app.vault.adapter.list(path);
} catch (ex) {
Logger(`Could not traverse(HiddenSync):${path}`, LOG_LEVEL_INFO);
Logger(ex, LOG_LEVEL_VERBOSE);
return [];
}
const filesSrc = [
...w.files
.filter((e) => !ignoreList.some((ee) => e.endsWith(ee)))
.filter((e) => !filter || filter.some((ee) => e.match(ee)))
.filter((e) => !ignoreFilter || ignoreFilter.every((ee) => !e.match(ee)))
];
let files = [] as string[];
for (const file of filesSrc) {
if (!await this.plugin.isIgnoredByIgnoreFiles(file)) {
files.push(file);
}
}
L1: for (const v of w.folders) {
for (const ignore of ignoreList) {
if (v.endsWith(ignore)) {
continue L1;
}
}
if (ignoreFilter && ignoreFilter.some(e => v.match(e))) {
continue L1;
}
if (await this.plugin.isIgnoredByIgnoreFiles(v)) {
continue L1;
}
files = files.concat(await this.getFiles(v, ignoreList, filter, ignoreFilter));
}
return files;
}
}

View File

@@ -1,423 +0,0 @@
import { type EntryDoc, type ObsidianLiveSyncSettings, DEFAULT_SETTINGS, LOG_LEVEL_NOTICE, REMOTE_COUCHDB, REMOTE_MINIO } from "../lib/src/common/types.ts";
import { configURIBase } from "../common/types.ts";
import { Logger } from "../lib/src/common/logger.ts";
import { PouchDB } from "../lib/src/pouchdb/pouchdb-browser.js";
import { askSelectString, askYesNo, askString } from "../common/utils.ts";
import { decrypt, encrypt } from "../lib/src/encryption/e2ee_v2.ts";
import { LiveSyncCommands } from "./LiveSyncCommands.ts";
import { delay, fireAndForget } from "../lib/src/common/utils.ts";
import { confirmWithMessage } from "../common/dialogs.ts";
import { Platform } from "../deps.ts";
import { fetchAllUsedChunks } from "../lib/src/pouchdb/utils_couchdb.ts";
import type { LiveSyncCouchDBReplicator } from "../lib/src/replication/couchdb/LiveSyncReplicator.js";
export class SetupLiveSync extends LiveSyncCommands {
onunload() { }
onload(): void | Promise<void> {
this.plugin.registerObsidianProtocolHandler("setuplivesync", async (conf: any) => await this.setupWizard(conf.settings));
this.plugin.addCommand({
id: "livesync-copysetupuri",
name: "Copy settings as a new setup URI",
callback: () => fireAndForget(this.command_copySetupURI()),
});
this.plugin.addCommand({
id: "livesync-copysetupuri-short",
name: "Copy settings as a new setup URI (With customization sync)",
callback: () => fireAndForget(this.command_copySetupURIWithSync()),
});
this.plugin.addCommand({
id: "livesync-copysetupurifull",
name: "Copy settings as a new setup URI (Full)",
callback: () => fireAndForget(this.command_copySetupURIFull()),
});
this.plugin.addCommand({
id: "livesync-opensetupuri",
name: "Use the copied setup URI (Formerly Open setup URI)",
callback: () => fireAndForget(this.command_openSetupURI()),
});
}
onInitializeDatabase(showNotice: boolean) { }
beforeReplicate(showNotice: boolean) { }
onResume() { }
parseReplicationResultItem(docs: PouchDB.Core.ExistingDocument<EntryDoc>): boolean | Promise<boolean> {
return false;
}
async realizeSettingSyncMode() { }
async command_copySetupURI(stripExtra = true) {
const encryptingPassphrase = await askString(this.app, "Encrypt your settings", "The passphrase to encrypt the setup URI", "", true);
if (encryptingPassphrase === false)
return;
const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" } as Partial<ObsidianLiveSyncSettings>;
if (stripExtra) {
delete setting.pluginSyncExtendedSetting;
}
const keys = Object.keys(setting) as (keyof ObsidianLiveSyncSettings)[];
for (const k of keys) {
if (JSON.stringify(k in setting ? setting[k] : "") == JSON.stringify(k in DEFAULT_SETTINGS ? DEFAULT_SETTINGS[k] : "*")) {
delete setting[k];
}
}
const encryptedSetting = encodeURIComponent(await encrypt(JSON.stringify(setting), encryptingPassphrase, false));
const uri = `${configURIBase}${encryptedSetting}`;
await navigator.clipboard.writeText(uri);
Logger("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
}
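// NOTE (editorial sketch): the loop above drops every key whose value serialises to the
// same JSON as the default, so the setup URI only carries non-default settings. The same
// idea as a small generic helper (illustration only, not part of the source):
//
// function stripDefaults<T extends Record<string, unknown>>(conf: Partial<T>, defaults: T): Partial<T> {
//     const out: Partial<T> = { ...conf };
//     for (const k of Object.keys(out) as (keyof T)[]) {
//         if (JSON.stringify(out[k]) === JSON.stringify(defaults[k])) delete out[k];
//     }
//     return out;
// }
// // stripDefaults({ liveSync: true, batchSave: false }, { liveSync: false, batchSave: false })
// // => { liveSync: true }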
async command_copySetupURIFull() {
const encryptingPassphrase = await askString(this.app, "Encrypt your settings", "The passphrase to encrypt the setup URI", "", true);
if (encryptingPassphrase === false)
return;
const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" };
const encryptedSetting = encodeURIComponent(await encrypt(JSON.stringify(setting), encryptingPassphrase, false));
const uri = `${configURIBase}${encryptedSetting}`;
await navigator.clipboard.writeText(uri);
Logger("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
}
async command_copySetupURIWithSync() {
await this.command_copySetupURI(false);
}
async command_openSetupURI() {
const setupURI = await askString(this.app, "Easy setup", "Set up URI", `${configURIBase}aaaaa`);
if (setupURI === false)
return;
if (!setupURI.startsWith(`${configURIBase}`)) {
Logger("Setup URI looks wrong.", LOG_LEVEL_NOTICE);
return;
}
const config = decodeURIComponent(setupURI.substring(configURIBase.length));
console.dir(config);
await this.setupWizard(config);
}
async setupWizard(confString: string) {
try {
const oldConf = JSON.parse(JSON.stringify(this.settings));
const encryptingPassphrase = await askString(this.app, "Passphrase", "The passphrase to decrypt your setup URI", "", true);
if (encryptingPassphrase === false)
return;
const newConf = JSON.parse(await decrypt(confString, encryptingPassphrase, false));
if (newConf) {
const result = await askYesNo(this.app, "Importing LiveSync's conf, OK?");
if (result == "yes") {
const newSettingW = Object.assign({}, DEFAULT_SETTINGS, newConf) as ObsidianLiveSyncSettings;
this.plugin.replicator.closeReplication();
this.settings.suspendFileWatching = true;
console.dir(newSettingW);
// Fall back to the default storage method once.
newSettingW.configPassphraseStore = "";
newSettingW.encryptedPassphrase = "";
newSettingW.encryptedCouchDBConnection = "";
newSettingW.additionalSuffixOfDatabaseName = `${("appId" in this.app ? this.app.appId : "")}`;
const setupJustImport = "Just import setting";
const setupAsNew = "Set it up as secondary or subsequent device";
const setupAsMerge = "Secondary device but try keeping local changes";
const setupAgain = "Reconfigure and reconstitute the data";
const setupManually = "Leave everything to me";
newSettingW.syncInternalFiles = false;
newSettingW.usePluginSync = false;
newSettingW.isConfigured = true;
// Migrate completely obsoleted configuration.
if (!newSettingW.useIndexedDBAdapter) {
newSettingW.useIndexedDBAdapter = true;
}
const setupType = await askSelectString(this.app, "How would you like to set it up?", [setupAsNew, setupAgain, setupAsMerge, setupJustImport, setupManually]);
if (setupType == setupJustImport) {
this.plugin.settings = newSettingW;
this.plugin.usedPassphrase = "";
await this.plugin.saveSettings();
} else if (setupType == setupAsNew) {
this.plugin.settings = newSettingW;
this.plugin.usedPassphrase = "";
await this.fetchLocal();
} else if (setupType == setupAsMerge) {
this.plugin.settings = newSettingW;
this.plugin.usedPassphrase = "";
await this.fetchLocalWithRebuild();
} else if (setupType == setupAgain) {
const confirm = "I understand that this operation will rebuild all my databases from the files on this device, that files which exist only on the remote database and have not been synchronised to any other device will be lost, and I want to proceed.";
if (await askSelectString(this.app, "Do you really want to do this?", ["Cancel", confirm]) != confirm) {
return;
}
this.plugin.settings = newSettingW;
this.plugin.usedPassphrase = "";
await this.rebuildEverything();
} else if (setupType == setupManually) {
const keepLocalDB = await askYesNo(this.app, "Keep local DB?");
const keepRemoteDB = await askYesNo(this.app, "Keep remote DB?");
if (keepLocalDB == "yes" && keepRemoteDB == "yes") {
// nothing to do. so peaceful.
this.plugin.settings = newSettingW;
this.plugin.usedPassphrase = "";
this.suspendAllSync();
this.suspendExtraSync();
await this.plugin.saveSettings();
const replicate = await askYesNo(this.app, "Unlock and replicate?");
if (replicate == "yes") {
await this.plugin.replicate(true);
await this.plugin.markRemoteUnlocked();
}
Logger("Configuration loaded.", LOG_LEVEL_NOTICE);
return;
}
if (keepLocalDB == "no" && keepRemoteDB == "no") {
const reset = await askYesNo(this.app, "Drop everything?");
if (reset != "yes") {
Logger("Cancelled", LOG_LEVEL_NOTICE);
this.plugin.settings = oldConf;
return;
}
}
let initDB;
this.plugin.settings = newSettingW;
this.plugin.usedPassphrase = "";
await this.plugin.saveSettings();
if (keepLocalDB == "no") {
await this.plugin.resetLocalDatabase();
await this.plugin.localDatabase.initializeDatabase();
const rebuild = await askYesNo(this.app, "Rebuild the database?");
if (rebuild == "yes") {
initDB = this.plugin.initializeDatabase(true);
} else {
await this.plugin.markRemoteResolved();
}
}
if (keepRemoteDB == "no") {
await this.plugin.tryResetRemoteDatabase();
await this.plugin.markRemoteLocked();
}
if (keepLocalDB == "no" || keepRemoteDB == "no") {
const replicate = await askYesNo(this.app, "Replicate once?");
if (replicate == "yes") {
if (initDB != null) {
await initDB;
}
await this.plugin.replicate(true);
}
}
}
}
Logger("Configuration loaded.", LOG_LEVEL_NOTICE);
} else {
Logger("Cancelled.", LOG_LEVEL_NOTICE);
}
} catch (ex) {
Logger("Couldn't parse or decrypt the configuration URI.", LOG_LEVEL_NOTICE);
}
}
suspendExtraSync() {
Logger("Hidden files and plugin synchronisation have been temporarily disabled. Please enable them again after the fetching, if you need them.", LOG_LEVEL_NOTICE);
this.plugin.settings.syncInternalFiles = false;
this.plugin.settings.usePluginSync = false;
this.plugin.settings.autoSweepPlugins = false;
}
async askHiddenFileConfiguration(opt: { enableFetch?: boolean, enableOverwrite?: boolean }) {
this.plugin.addOnSetup.suspendExtraSync();
const message = `Would you like to enable \`Hidden File Synchronization\` or \`Customization sync\`?
${opt.enableFetch ? "- Fetch: Use files stored by other devices. \n" : ""}${opt.enableOverwrite ? "- Overwrite: Use files from this device. \n" : ""}- Custom: Synchronize only customization files with a dedicated interface.
- Keep them disabled: Do not use hidden file synchronization.
Of course, you can disable these features later.`
const CHOICE_FETCH = "Fetch";
const CHOICE_OVERWRITE = "Overwrite";
const CHOICE_CUSTOMIZE = "Custom";
const CHOICE_DISMISS = "keep them disabled";
const choices = [];
if (opt?.enableFetch) {
choices.push(CHOICE_FETCH);
}
if (opt?.enableOverwrite) {
choices.push(CHOICE_OVERWRITE);
}
choices.push(CHOICE_CUSTOMIZE);
choices.push(CHOICE_DISMISS);
const ret = await confirmWithMessage(this.plugin, "Hidden file sync", message, choices, CHOICE_DISMISS, 40);
if (ret == CHOICE_FETCH) {
await this.configureHiddenFileSync("FETCH");
} else if (ret == CHOICE_OVERWRITE) {
await this.configureHiddenFileSync("OVERWRITE");
} else if (ret == CHOICE_DISMISS) {
await this.configureHiddenFileSync("DISABLE");
} else if (ret == CHOICE_CUSTOMIZE) {
await this.configureHiddenFileSync("CUSTOMIZE");
}
}
async configureHiddenFileSync(mode: "FETCH" | "OVERWRITE" | "MERGE" | "DISABLE" | "CUSTOMIZE") {
this.plugin.addOnSetup.suspendExtraSync();
if (mode == "DISABLE") {
this.plugin.settings.syncInternalFiles = false;
this.plugin.settings.usePluginSync = false;
await this.plugin.saveSettings();
return;
}
if (mode != "CUSTOMIZE") {
Logger("Gathering files for enabling Hidden File Sync", LOG_LEVEL_NOTICE);
if (mode == "FETCH") {
await this.plugin.addOnHiddenFileSync.syncInternalFilesAndDatabase("pullForce", true);
} else if (mode == "OVERWRITE") {
await this.plugin.addOnHiddenFileSync.syncInternalFilesAndDatabase("pushForce", true);
} else if (mode == "MERGE") {
await this.plugin.addOnHiddenFileSync.syncInternalFilesAndDatabase("safe", true);
}
this.plugin.settings.syncInternalFiles = true;
await this.plugin.saveSettings();
Logger(`Done! Restarting the app is strongly recommended!`, LOG_LEVEL_NOTICE);
} else if (mode == "CUSTOMIZE") {
if (!this.plugin.deviceAndVaultName) {
let name = await askString(this.app, "Device name", "Please set this device name", `desktop`);
if (!name) {
if (Platform.isAndroidApp) {
name = "android-app"
} else if (Platform.isIosApp) {
name = "ios"
} else if (Platform.isMacOS) {
name = "macos"
} else if (Platform.isMobileApp) {
name = "mobile-app"
} else if (Platform.isMobile) {
name = "mobile"
} else if (Platform.isSafari) {
name = "safari"
} else if (Platform.isDesktop) {
name = "desktop"
} else if (Platform.isDesktopApp) {
name = "desktop-app"
} else {
name = "unknown"
}
name = name + Math.random().toString(36).slice(-4);
}
this.plugin.deviceAndVaultName = name;
}
this.plugin.settings.usePluginSync = true;
await this.plugin.saveSettings();
await this.plugin.addOnConfigSync.scanAllConfigFiles(true);
}
}
suspendAllSync() {
this.plugin.settings.liveSync = false;
this.plugin.settings.periodicReplication = false;
this.plugin.settings.syncOnSave = false;
this.plugin.settings.syncOnEditorSave = false;
this.plugin.settings.syncOnStart = false;
this.plugin.settings.syncOnFileOpen = false;
this.plugin.settings.syncAfterMerge = false;
//this.suspendExtraSync();
}
async suspendReflectingDatabase() {
if (this.plugin.settings.doNotSuspendOnFetching) return;
if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
Logger(`Suspending reflection: Database and storage changes will not be reflected in each other until the fetching has completely finished.`, LOG_LEVEL_NOTICE);
this.plugin.settings.suspendParseReplicationResult = true;
this.plugin.settings.suspendFileWatching = true;
await this.plugin.saveSettings();
}
async resumeReflectingDatabase() {
if (this.plugin.settings.doNotSuspendOnFetching) return;
if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
Logger(`Database and storage reflection has been resumed!`, LOG_LEVEL_NOTICE);
this.plugin.settings.suspendParseReplicationResult = false;
this.plugin.settings.suspendFileWatching = false;
await this.plugin.syncAllFiles(true);
await this.plugin.loadQueuedFiles();
await this.plugin.saveSettings();
}
async askUseNewAdapter() {
if (!this.plugin.settings.useIndexedDBAdapter) {
const message = `This plugin is currently configured to use the old database adapter to keep compatibility. Do you want to disable it and use the latest adapter?`;
const CHOICE_YES = "Yes, disable and use latest";
const CHOICE_NO = "No, keep compatibility";
const choices = [CHOICE_YES, CHOICE_NO];
const ret = await confirmWithMessage(this.plugin, "Database adapter", message, choices, CHOICE_YES, 10);
if (ret == CHOICE_YES) {
this.plugin.settings.useIndexedDBAdapter = true;
}
}
}
async resetLocalDatabase() {
if (this.plugin.settings.isConfigured && this.plugin.settings.additionalSuffixOfDatabaseName == "") {
// Discard the non-suffixed database
await this.plugin.resetLocalDatabase();
}
this.plugin.settings.additionalSuffixOfDatabaseName = `${("appId" in this.app ? this.app.appId : "")}`
await this.plugin.resetLocalDatabase();
}
async fetchRemoteChunks() {
if (!this.plugin.settings.doNotSuspendOnFetching && this.plugin.settings.readChunksOnline && this.plugin.settings.remoteType == REMOTE_COUCHDB) {
Logger(`Fetching chunks`, LOG_LEVEL_NOTICE);
const replicator = this.plugin.getReplicator() as LiveSyncCouchDBReplicator;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true);
if (typeof remoteDB == "string") {
Logger(remoteDB, LOG_LEVEL_NOTICE);
} else {
await fetchAllUsedChunks(this.localDatabase.localDatabase, remoteDB.db);
}
Logger(`Fetching chunks done`, LOG_LEVEL_NOTICE);
}
}
async fetchLocal(makeLocalChunkBeforeSync?: boolean) {
this.suspendExtraSync();
await this.askUseNewAdapter();
this.plugin.settings.isConfigured = true;
await this.suspendReflectingDatabase();
await this.plugin.realizeSettingSyncMode();
await this.resetLocalDatabase();
await delay(1000);
await this.plugin.openDatabase();
this.plugin.isReady = true;
if (makeLocalChunkBeforeSync) {
await this.plugin.initializeDatabase(true);
}
await this.plugin.markRemoteResolved();
await delay(500);
await this.plugin.replicateAllFromServer(true);
await delay(1000);
await this.plugin.replicateAllFromServer(true);
await this.resumeReflectingDatabase();
await this.askHiddenFileConfiguration({ enableFetch: true });
}
async fetchLocalWithRebuild() {
return await this.fetchLocal(true);
}
async rebuildRemote() {
this.suspendExtraSync();
this.plugin.settings.isConfigured = true;
await this.plugin.realizeSettingSyncMode();
await this.plugin.markRemoteLocked();
await this.plugin.tryResetRemoteDatabase();
await this.plugin.markRemoteLocked();
await delay(500);
await this.askHiddenFileConfiguration({ enableOverwrite: true });
await delay(1000);
await this.plugin.replicateAllToServer(true);
await delay(1000);
await this.plugin.replicateAllToServer(true);
}
async rebuildEverything() {
this.suspendExtraSync();
await this.askUseNewAdapter();
this.plugin.settings.isConfigured = true;
await this.plugin.realizeSettingSyncMode();
await this.resetLocalDatabase();
await delay(1000);
await this.plugin.initializeDatabase(true);
await this.plugin.markRemoteLocked();
await this.plugin.tryResetRemoteDatabase();
await this.plugin.markRemoteLocked();
await delay(500);
await this.askHiddenFileConfiguration({ enableOverwrite: true });
await delay(1000);
await this.plugin.replicateAllToServer(true);
await delay(1000);
await this.plugin.replicateAllToServer(true);
}
}

View File

@@ -1,10 +1,15 @@
<script lang="ts">
import { PluginDataExDisplayV2, type IPluginDataExDisplay } from "../../features/CmdConfigSync";
import {
ConfigSync,
PluginDataExDisplayV2,
type IPluginDataExDisplay,
type PluginDataExFile,
} from "./CmdConfigSync.ts";
import { Logger } from "../../lib/src/common/logger";
import { type FilePath, LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "../../lib/src/common/types";
import { getDocData, timeDeltaToHumanReadable, unique } from "../../lib/src/common/utils";
import type ObsidianLiveSyncPlugin from "../../main";
import { askString } from "../../common/utils";
// import { askString } from "../../common/utils";
import { Menu } from "obsidian";
export let list: IPluginDataExDisplay[] = [];
@@ -15,13 +20,21 @@
export let applyAllPluse = 0;
export let applyData: (data: IPluginDataExDisplay) => Promise<boolean>;
export let compareData: (dataA: IPluginDataExDisplay, dataB: IPluginDataExDisplay, compareEach?: boolean) => Promise<boolean>;
export let compareData: (
dataA: IPluginDataExDisplay,
dataB: IPluginDataExDisplay,
compareEach?: boolean
) => Promise<boolean>;
export let deleteData: (data: IPluginDataExDisplay) => Promise<boolean>;
export let hidden: boolean;
export let plugin: ObsidianLiveSyncPlugin;
export let isMaintenanceMode: boolean = false;
export let isFlagged: boolean = false;
const addOn = plugin.addOnConfigSync;
const addOn = plugin.getAddOn<ConfigSync>(ConfigSync.name)!;
if (!addOn) {
Logger(`Could not load the add-on ${ConfigSync.name}`, LOG_LEVEL_INFO);
throw new Error(`Could not load the add-on ${ConfigSync.name}`);
}
export let selected = "";
let freshness = "";
@@ -147,7 +160,11 @@
canCompare = result.canCompare;
pickToCompare = false;
if (canCompare) {
if (local?.files.length == remote?.files.length && local?.files.length == 1 && local?.files[0].filename == remote?.files[0].filename) {
if (
local?.files.length == remote?.files.length &&
local?.files.length == 1 &&
local?.files[0].filename == remote?.files[0].filename
) {
pickToCompare = false;
} else {
pickToCompare = true;
@@ -246,7 +263,11 @@
const selectedItem = list.find((e) => e.term == selected);
await compareItems(local, selectedItem);
}
async function compareItems(local: IPluginDataExDisplay | undefined, remote: IPluginDataExDisplay | undefined, filename?: string) {
async function compareItems(
local: IPluginDataExDisplay | undefined,
remote: IPluginDataExDisplay | undefined,
filename?: string
) {
if (local && remote) {
if (!filename) {
if (await compareData(local, remote)) {
@@ -254,8 +275,10 @@
}
return;
} else {
const localCopy = local instanceof PluginDataExDisplayV2 ? new PluginDataExDisplayV2(local) : { ...local };
const remoteCopy = remote instanceof PluginDataExDisplayV2 ? new PluginDataExDisplayV2(remote) : { ...remote };
const localCopy =
local instanceof PluginDataExDisplayV2 ? new PluginDataExDisplayV2(local) : { ...local };
const remoteCopy =
remote instanceof PluginDataExDisplayV2 ? new PluginDataExDisplayV2(remote) : { ...remote };
localCopy.files = localCopy.files.filter((e) => e.filename == filename);
remoteCopy.files = remoteCopy.files.filter((e) => e.filename == filename);
if (await compareData(localCopy, remoteCopy, true)) {
@@ -283,9 +306,17 @@
menu.addItem((item) => item.setTitle("Compare file").setIsLabel(true));
menu.addSeparator();
const files = unique(local.files.map((e) => e.filename).concat(selectedItem.files.map((e) => e.filename)));
const convDate = (dt: PluginDataExFile | undefined) => {
if (!dt) return "(Missing)";
const d = new Date(dt.mtime);
return d.toLocaleString();
};
for (const filename of files) {
menu.addItem((item) => {
item.setTitle(filename).onClick((e) => compareItems(local, selectedItem, filename));
const localFile = local.files.find((e) => e.filename == filename);
const remoteFile = selectedItem.files.find((e) => e.filename == filename);
const title = `${filename} (${convDate(localFile)} <--> ${convDate(remoteFile)})`;
item.setTitle(title).onClick((e) => compareItems(local, selectedItem, filename));
});
}
menu.showAtMouseEvent(evt);
@@ -303,7 +334,7 @@
Logger(`Could not find local item`, LOG_LEVEL_VERBOSE);
return;
}
const duplicateTermName = await askString(plugin.app, "Duplicate", "device name", "");
const duplicateTermName = await plugin.confirm.askString("Duplicate", "device name", "");
if (duplicateTermName) {
if (duplicateTermName.contains("/")) {
Logger(`We cannot use "/" in the device name`, LOG_LEVEL_NOTICE);
@@ -317,7 +348,7 @@
</script>
{#if terms.length > 0}
<span class="spacer" />
<span class="spacer"></span>
{#if !hidden}
<span class="chip-wrap">
<span class="chip modified">{freshness}</span>
@@ -339,12 +370,15 @@
<button on:click={compareSelected}>⮂</button>
{/if}
{:else}
<button disabled />
<!-- svelte-ignore a11y_consider_explicit_label -->
<button disabled></button>
{/if}
<button on:click={applySelected}>✓</button>
{:else}
<button disabled />
<button disabled />
<!-- svelte-ignore a11y_consider_explicit_label -->
<button disabled></button>
<!-- svelte-ignore a11y_consider_explicit_label -->
<button disabled></button>
{/if}
{#if isMaintenanceMode}
{#if selected != ""}
@@ -355,10 +389,12 @@
{/if}
{/if}
{:else}
<span class="spacer" />
<span class="spacer"></span>
<span class="message even">All the same or non-existent</span>
<button disabled />
<button disabled />
<!-- svelte-ignore a11y_consider_explicit_label -->
<button disabled></button>
<!-- svelte-ignore a11y_consider_explicit_label -->
<button disabled></button>
{/if}
<style>

View File

@@ -0,0 +1,37 @@
import { mount, unmount } from "svelte";
import { App, Modal } from "../../deps.ts";
import ObsidianLiveSyncPlugin from "../../main.ts";
import PluginPane from "./PluginPane.svelte";
export class PluginDialogModal extends Modal {
plugin: ObsidianLiveSyncPlugin;
component: ReturnType<typeof mount> | undefined;
isOpened() {
return this.component != undefined;
}
constructor(app: App, plugin: ObsidianLiveSyncPlugin) {
super(app);
this.plugin = plugin;
}
onOpen() {
const { contentEl } = this;
this.contentEl.style.overflow = "auto";
this.contentEl.style.display = "flex";
this.contentEl.style.flexDirection = "column";
this.titleEl.setText("Customization Sync (Beta3)");
if (!this.component) {
this.component = mount(PluginPane, {
target: contentEl,
props: { plugin: this.plugin },
});
}
}
onClose() {
if (this.component) {
void unmount(this.component);
this.component = undefined;
}
}
}
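// NOTE (editorial sketch, not part of this file): this modal shows the Svelte 5 lifecycle
// pattern used throughout this changeset: `mount()` on open and `unmount()` on close, which
// replaces the Svelte 4 `new Component(...)` / `component.$destroy()` API. A minimal
// standalone version of the same pattern (HelloPane is a hypothetical component):
//
// import { mount, unmount } from "svelte";
// import HelloPane from "./HelloPane.svelte";
//
// let instance: ReturnType<typeof mount> | undefined;
// function open(target: HTMLElement) {
//     instance ??= mount(HelloPane, { target, props: { name: "world" } });
// }
// function close() {
//     if (instance) {
//         void unmount(instance);
//         instance = undefined;
//     }
// }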

View File

@@ -1,18 +1,46 @@
<script lang="ts">
import { onMount } from "svelte";
import ObsidianLiveSyncPlugin from "../main";
import { type IPluginDataExDisplay, pluginIsEnumerating, pluginList, pluginManifestStore, pluginV2Progress } from "../features/CmdConfigSync";
import PluginCombo from "./components/PluginCombo.svelte";
import ObsidianLiveSyncPlugin from "../../main";
import {
ConfigSync,
type IPluginDataExDisplay,
pluginIsEnumerating,
pluginList,
pluginManifestStore,
pluginV2Progress,
} from "./CmdConfigSync.ts";
import PluginCombo from "./PluginCombo.svelte";
import { Menu, type PluginManifest } from "obsidian";
import { unique } from "../lib/src/common/utils";
import { MODE_SELECTIVE, MODE_AUTOMATIC, MODE_PAUSED, type SYNC_MODE, type PluginSyncSettingEntry, MODE_SHINY } from "../lib/src/common/types";
import { normalizePath } from "../deps";
import { unique } from "../../lib/src/common/utils";
import {
MODE_SELECTIVE,
MODE_AUTOMATIC,
MODE_PAUSED,
type SYNC_MODE,
MODE_SHINY,
} from "../../lib/src/common/types";
import { normalizePath } from "../../deps";
import { HiddenFileSync } from "../HiddenFileSync/CmdHiddenFileSync.ts";
import { LOG_LEVEL_NOTICE, Logger } from "octagonal-wheels/common/logger";
export let plugin: ObsidianLiveSyncPlugin;
$: hideNotApplicable = false;
$: thisTerm = plugin.deviceAndVaultName;
$: thisTerm = plugin.services.setting.getDeviceAndVaultName();
const addOn = plugin.addOnConfigSync;
const addOn = plugin.getAddOn(ConfigSync.name) as ConfigSync;
if (!addOn) {
const msg =
"AddOn Module (ConfigSync) has not been loaded. This is a very unexpected situation. Please report this issue.";
Logger(msg, LOG_LEVEL_NOTICE);
throw new Error(msg);
}
const addOnHiddenFileSync = plugin.getAddOn(HiddenFileSync.name) as HiddenFileSync;
if (!addOnHiddenFileSync) {
const msg =
"AddOn Module (HiddenFileSync) has not been loaded. This is a very unexpected situation. Please report this issue.";
Logger(msg, LOG_LEVEL_NOTICE);
throw new Error(msg);
}
let list: IPluginDataExDisplay[] = [];
@@ -70,7 +98,7 @@
await requestUpdate();
}
async function replicate() {
await plugin.replicate(true);
await plugin.services.replication.replicate(true);
}
function selectAllNewest(selectMode: boolean) {
selectNewestPulse++;
@@ -86,7 +114,11 @@
async function applyData(data: IPluginDataExDisplay): Promise<boolean> {
return await addOn.applyData(data);
}
async function compareData(docA: IPluginDataExDisplay, docB: IPluginDataExDisplay, compareEach = false): Promise<boolean> {
async function compareData(
docA: IPluginDataExDisplay,
docB: IPluginDataExDisplay,
compareEach = false
): Promise<boolean> {
return await addOn.compareUsingDisplayData(docA, docB, compareEach);
}
async function deleteData(data: IPluginDataExDisplay): Promise<boolean> {
@@ -117,7 +149,7 @@
setMode(key, MODE_AUTOMATIC);
const configDir = normalizePath(plugin.app.vault.configDir);
const files = (plugin.settings.pluginSyncExtendedSetting[key]?.files ?? []).map((e) => `${configDir}/${e}`);
plugin.addOnHiddenFileSync.syncInternalFilesAndDatabase(direction, true, false, files);
addOnHiddenFileSync.initialiseInternalFileSync(direction, true, files);
}
function askOverwriteModeForAutomatic(evt: MouseEvent, key: string) {
const menu = new Menu();
@@ -186,7 +218,7 @@
.filter((e) => `${e.category}/${e.name}` == key)
.map((e) => e.files)
.flat()
.map((e) => e.filename),
.map((e) => e.filename)
);
if (mode == MODE_SELECTIVE) {
automaticList.delete(key);
@@ -205,7 +237,7 @@
plugin.settings.pluginSyncExtendedSetting[key].files = files;
plugin.settings.pluginSyncExtendedSetting[key].mode = mode;
}
plugin.saveSettingData();
plugin.services.setting.saveSettingData();
}
function getIcon(mode: SYNC_MODE) {
if (mode in ICONS) {
@@ -236,7 +268,15 @@
.map((e) => ({ category: e[0], name: e[1], displayName: e[1] })),
]
.sort((a, b) => (a.displayName ?? a.name).localeCompare(b.displayName ?? b.name))
.reduce((p, c) => ({ ...p, [c.category]: unique(c.category in p ? [...p[c.category], c.displayName ?? c.name] : [c.displayName ?? c.name]) }), {} as Record<string, string[]>);
.reduce(
(p, c) => ({
...p,
[c.category]: unique(
c.category in p ? [...p[c.category], c.displayName ?? c.name] : [c.displayName ?? c.name]
),
}),
{} as Record<string, string[]>
);
}
$: {
displayKeys = computeDisplayKeys(list);
@@ -324,7 +364,12 @@
</div>
<div class="body">
{#if mode == MODE_SELECTIVE || mode == MODE_SHINY}
<PluginCombo {...options} isFlagged={mode == MODE_SHINY} list={list.filter((e) => e.category == key && e.name == name)} hidden={false} />
<PluginCombo
{...options}
isFlagged={mode == MODE_SHINY}
list={list.filter((e) => e.category == key && e.name == name)}
hidden={false}
/>
{:else}
<div class="statusnote">{TITLES[mode]}</div>
{/if}
@@ -346,7 +391,10 @@
{@const modeEtc = automaticListDisp.get(bindKeyETC) ?? MODE_SELECTIVE}
<div class="labelrow {hideEven ? 'hideeven' : ''}">
<div class="title">
<button class="status" on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_ALL}/${name}`, bindKeyAll)}>
<button
class="status"
on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_ALL}/${name}`, bindKeyAll)}
>
{getIcon(modeAll)}
</button>
<span class="name">{nameMap.get(`plugins/${name}`) || name}</span>
@@ -360,14 +408,22 @@
{#if modeAll == MODE_SELECTIVE || modeAll == MODE_SHINY}
<div class="filerow {hideEven ? 'hideeven' : ''}">
<div class="filetitle">
<button class="status" on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_MAIN}/${name}/MAIN`, bindKeyMain)}>
<button
class="status"
on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_MAIN}/${name}/MAIN`, bindKeyMain)}
>
{getIcon(modeMain)}
</button>
<span class="name">MAIN</span>
</div>
<div class="body">
{#if modeMain == MODE_SELECTIVE || modeMain == MODE_SHINY}
<PluginCombo {...options} isFlagged={modeMain == MODE_SHINY} list={filterList(listX, ["PLUGIN_MAIN"])} hidden={false} />
<PluginCombo
{...options}
isFlagged={modeMain == MODE_SHINY}
list={filterList(listX, ["PLUGIN_MAIN"])}
hidden={false}
/>
{:else}
<div class="statusnote">{TITLES[modeMain]}</div>
{/if}
@@ -375,14 +431,22 @@
</div>
<div class="filerow {hideEven ? 'hideeven' : ''}">
<div class="filetitle">
<button class="status" on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_DATA}/${name}`, bindKeyData)}>
<button
class="status"
on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_DATA}/${name}`, bindKeyData)}
>
{getIcon(modeData)}
</button>
<span class="name">DATA</span>
</div>
<div class="body">
{#if modeData == MODE_SELECTIVE || modeData == MODE_SHINY}
<PluginCombo {...options} isFlagged={modeData == MODE_SHINY} list={filterList(listX, ["PLUGIN_DATA"])} hidden={false} />
<PluginCombo
{...options}
isFlagged={modeData == MODE_SHINY}
list={filterList(listX, ["PLUGIN_DATA"])}
hidden={false}
/>
{:else}
<div class="statusnote">{TITLES[modeData]}</div>
{/if}
@@ -391,14 +455,22 @@
{#if useSyncPluginEtc}
<div class="filerow {hideEven ? 'hideeven' : ''}">
<div class="filetitle">
<button class="status" on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_ETC}/${name}`, bindKeyETC)}>
<button
class="status"
on:click={(evt) => askMode(evt, `${PREFIX_PLUGIN_ETC}/${name}`, bindKeyETC)}
>
{getIcon(modeEtc)}
</button>
<span class="name">Other files</span>
</div>
<div class="body">
{#if modeEtc == MODE_SELECTIVE || modeEtc == MODE_SHINY}
<PluginCombo {...options} isFlagged={modeEtc == MODE_SHINY} list={filterList(listX, ["PLUGIN_ETC"])} hidden={false} />
<PluginCombo
{...options}
isFlagged={modeEtc == MODE_SHINY}
list={filterList(listX, ["PLUGIN_ETC"])}
hidden={false}
/>
{:else}
<div class="statusnote">{TITLES[modeEtc]}</div>
{/if}
@@ -470,10 +542,12 @@
padding-right: 4px;
flex-wrap: wrap;
}
.filerow.hideeven:has(.even),
.labelrow.hideeven:has(.even) {
.filerow.hideeven:has(:global(.even)),
.labelrow.hideeven:has(:global(.even)) {
display: none;
}
.noterow {
min-height: 2em;
display: flex;

View File

@@ -1,14 +1,15 @@
import { App, Modal } from "../deps.ts";
import { type FilePath, type LoadedEntry } from "../lib/src/common/types.ts";
import { App, Modal } from "../../deps.ts";
import { type FilePath, type LoadedEntry } from "../../lib/src/common/types.ts";
import JsonResolvePane from "./JsonResolvePane.svelte";
import { waitForSignal } from "../lib/src/common/utils.ts";
import { waitForSignal } from "../../lib/src/common/utils.ts";
import { mount, unmount } from "svelte";
export class JsonResolveModal extends Modal {
// result: Array<[number, string]>;
filename: FilePath;
callback?: (keepRev?: string, mergedStr?: string) => Promise<void>;
docs: LoadedEntry[];
component?: JsonResolvePane;
component?: ReturnType<typeof mount>;
nameA: string;
nameB: string;
defaultSelect: string;
@@ -16,10 +17,18 @@ export class JsonResolveModal extends Modal {
hideLocal: boolean;
title: string = "Conflicted Setting";
constructor(app: App, filename: FilePath,
docs: LoadedEntry[], callback: (keepRev?: string, mergedStr?: string) => Promise<void>,
nameA?: string, nameB?: string, defaultSelect?: string,
keepOrder?: boolean, hideLocal?: boolean, title: string = "Conflicted Setting") {
constructor(
app: App,
filename: FilePath,
docs: LoadedEntry[],
callback: (keepRev?: string, mergedStr?: string) => Promise<void>,
nameA?: string,
nameB?: string,
defaultSelect?: string,
keepOrder?: boolean,
hideLocal?: boolean,
title: string = "Conflicted Setting"
) {
super(app);
this.callback = callback;
this.filename = filename;
@@ -30,11 +39,14 @@ export class JsonResolveModal extends Modal {
this.defaultSelect = defaultSelect || "";
this.title = title;
this.hideLocal = hideLocal ?? false;
waitForSignal(`cancel-internal-conflict:${filename}`).then(() => this.close());
void waitForSignal(`cancel-internal-conflict:${filename}`).then(() => this.close());
}
async UICallback(keepRev?: string, mergedStr?: string) {
if (this.callback) {
await this.callback(keepRev, mergedStr);
}
this.close();
await this.callback?.(keepRev, mergedStr);
this.callback = undefined;
}
@@ -44,7 +56,7 @@ export class JsonResolveModal extends Modal {
contentEl.empty();
if (this.component == undefined) {
this.component = new JsonResolvePane({
this.component = mount(JsonResolvePane, {
target: contentEl,
props: {
docs: this.docs,
@@ -54,23 +66,23 @@ export class JsonResolveModal extends Modal {
defaultSelect: this.defaultSelect,
keepOrder: this.keepOrder,
hideLocal: this.hideLocal,
callback: (keepRev: string | undefined, mergedStr: string | undefined) => this.UICallback(keepRev, mergedStr),
callback: (keepRev: string | undefined, mergedStr: string | undefined) =>
this.UICallback(keepRev, mergedStr),
},
});
}
return;
}
onClose() {
const { contentEl } = this;
contentEl.empty();
// contentEl.empty();
if (this.callback != undefined) {
this.callback(undefined);
void this.callback(undefined);
}
if (this.component != undefined) {
this.component.$destroy();
void unmount(this.component);
this.component = undefined;
}
}

View File

@@ -0,0 +1,228 @@
<script lang="ts">
import { type Diff, DIFF_DELETE, DIFF_INSERT, diff_match_patch } from "../../deps.ts";
import type { FilePath, LoadedEntry } from "../../lib/src/common/types.ts";
import { decodeBinary, readString } from "../../lib/src/string_and_binary/convert.ts";
import { getDocData, isObjectDifferent, mergeObject } from "../../lib/src/common/utils.ts";
interface Props {
docs?: LoadedEntry[];
callback?: (keepRev?: string, mergedStr?: string) => Promise<void>;
filename?: FilePath;
nameA?: string;
nameB?: string;
defaultSelect?: string;
keepOrder?: boolean;
hideLocal?: boolean;
}
let {
docs = $bindable([]),
callback = $bindable((async (_, __) => {
Promise.resolve();
}) as (keepRev?: string, mergedStr?: string) => Promise<void>),
filename = $bindable("" as FilePath),
nameA = $bindable("A"),
nameB = $bindable("B"),
defaultSelect = $bindable("" as string),
keepOrder = $bindable(false),
hideLocal = $bindable(false),
}: Props = $props();
type JSONData = Record<string | number | symbol, any> | [any];
const docsArray = $derived.by(() => {
if (docs && docs.length >= 1) {
if (keepOrder || docs[0].mtime < docs[1].mtime) {
return { a: docs[0], b: docs[1] } as const;
} else {
return { a: docs[1], b: docs[0] } as const;
}
}
return { a: false, b: false } as const;
});
const docA = $derived(docsArray.a);
const docB = $derived(docsArray.b);
const docAContent = $derived(docA && docToString(docA));
const docBContent = $derived(docB && docToString(docB));
function parseJson(json: string | false) {
if (json === false) return false;
try {
return JSON.parse(json) as JSONData;
} catch (ex) {
return false;
}
}
const objA = $derived(parseJson(docAContent) || {});
const objB = $derived(parseJson(docBContent) || {});
const objAB = $derived(mergeObject(objA, objB));
const objBAw = $derived(mergeObject(objB, objA));
const objBA = $derived(isObjectDifferent(objBAw, objAB) ? objBAw : false);
let diffs: Diff[] = $derived.by(() => (objA && selectedObj ? getJsonDiff(objA, selectedObj) : []));
type SelectModes = "" | "A" | "B" | "AB" | "BA";
let mode: SelectModes = $state(defaultSelect as SelectModes);
function docToString(doc: LoadedEntry) {
return doc.datatype == "plain" ? getDocData(doc.data) : readString(new Uint8Array(decodeBinary(doc.data)));
}
function revStringToRevNumber(rev?: string) {
if (!rev) return "";
return rev.split("-")[0];
}
function getDiff(left: string, right: string) {
const dmp = new diff_match_patch();
const mapLeft = dmp.diff_linesToChars_(left, right);
const diffLeftSrc = dmp.diff_main(mapLeft.chars1, mapLeft.chars2, false);
dmp.diff_charsToLines_(diffLeftSrc, mapLeft.lineArray);
return diffLeftSrc;
}
function getJsonDiff(a: object, b: object) {
return getDiff(JSON.stringify(a, null, 2), JSON.stringify(b, null, 2));
}
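// NOTE (editorial sketch): getDiff() uses diff-match-patch's documented "line mode" recipe:
// diff_linesToChars_() maps every distinct line to a single character, diff_main() diffs the
// much shorter character strings, and diff_charsToLines_() maps the result back to lines.
// This makes the JSON comparison line-based and fast rather than character-based. Roughly:
//
// const d = getDiff('{\n "a": 1\n}', '{\n "a": 2\n}');
// // => [[0, "{\n"], [DIFF_DELETE, ' "a": 1\n'], [DIFF_INSERT, ' "a": 2\n'], [0, "}"]]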
function apply() {
if (!docA || !docB) return;
if (docA._id == docB._id) {
if (mode == "A") return callback(docA._rev!, undefined);
if (mode == "B") return callback(docB._rev!, undefined);
} else {
if (mode == "A") return callback(undefined, docToString(docA));
if (mode == "B") return callback(undefined, docToString(docB));
}
if (mode == "BA") return callback(undefined, JSON.stringify(objBA, null, 2));
if (mode == "AB") return callback(undefined, JSON.stringify(objAB, null, 2));
callback(undefined, undefined);
}
function cancel() {
callback(undefined, undefined);
}
const mergedObjs = $derived.by(
() =>
({
"": false,
A: objA,
B: objB,
AB: objAB,
BA: objBA,
}) as Record<SelectModes, JSONData | false>
);
let selectedObj = $derived(mode in mergedObjs ? mergedObjs[mode] : {});
let modesSrc = $state([] as ["" | "A" | "B" | "AB" | "BA", string][]);
const modes = $derived.by(() => {
let newModes = [] as typeof modesSrc;
if (!hideLocal) {
newModes.push(["", "Not now"]);
newModes.push(["A", nameA || "A"]);
}
newModes.push(["B", nameB || "B"]);
newModes.push(["AB", `${nameA || "A"} + ${nameB || "B"}`]);
newModes.push(["BA", `${nameB || "B"} + ${nameA || "A"}`]);
return newModes;
});
</script>
<h2>{filename}</h2>
{#if !docA || !docB}
<div class="message">Just for a minute, please!</div>
<div class="buttons">
<button onclick={apply}>Dismiss</button>
</div>
{:else}
<div class="options">
{#each modes as m}
{#if m[0] == "" || mergedObjs[m[0]] != false}
<label class={`sls-setting-label ${m[0] == mode ? "selected" : ""}`}
><input type="radio" name="disp" bind:group={mode} value={m[0]} class="sls-setting-tab" />
<div class="sls-setting-menu-btn">{m[1]}</div></label
>
{/if}
{/each}
</div>
{#if selectedObj != false}
<div class="op-scrollable json-source">
{#each diffs as diff}
<span class={diff[0] == DIFF_DELETE ? "deleted" : diff[0] == DIFF_INSERT ? "added" : "normal"}
>{diff[1]}</span
>
{/each}
</div>
{:else}
NO PREVIEW
{/if}
<div class="infos">
<table>
<tbody>
<tr>
<th>{nameA}</th>
<td
>{#if docA._id == docB._id}
Rev:{revStringToRevNumber(docA._rev)}
{/if}
{new Date(docA.mtime).toLocaleString()}</td
>
<td>
{docAContent && docAContent.length} letters
</td>
</tr>
<tr>
<th>{nameB}</th>
<td
>{#if docA._id == docB._id}
Rev:{revStringToRevNumber(docB._rev)}
{/if}
{new Date(docB.mtime).toLocaleString()}</td
>
<td>
{docBContent && docBContent.length} letters
</td>
</tr>
</tbody>
</table>
</div>
<div class="buttons">
{#if hideLocal}
<button onclick={cancel}>Cancel</button>
{/if}
<button onclick={apply}>Apply</button>
</div>
{/if}
<style>
.spacer {
flex-grow: 1;
}
.infos {
display: flex;
justify-content: space-between;
margin: 4px 0.5em;
}
.deleted {
text-decoration: line-through;
}
* {
box-sizing: border-box;
}
.scroller {
display: flex;
flex-direction: column;
overflow-y: scroll;
max-height: 60vh;
user-select: text;
-webkit-user-select: text;
}
.json-source {
white-space: pre;
height: auto;
overflow: auto;
min-height: var(--font-ui-medium);
flex-grow: 1;
}
</style>

File diff suppressed because it is too large

View File

@@ -1,8 +1,20 @@
import { type AnyEntry, type DocumentID, type EntryDoc, type EntryHasPath, type FilePath, type FilePathWithPrefix } from "../lib/src/common/types.ts";
import { PouchDB } from "../lib/src/pouchdb/pouchdb-browser.js";
import { LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import { getPath } from "../common/utils.ts";
import {
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
type AnyEntry,
type DocumentID,
type FilePath,
type FilePathWithPrefix,
type LOG_LEVEL,
} from "../lib/src/common/types.ts";
import type ObsidianLiveSyncPlugin from "../main.ts";
import { MARK_DONE } from "../modules/features/ModuleLog.ts";
import type { LiveSyncCore } from "../main.ts";
import { __$checkInstanceBinding } from "../lib/src/dev/checks.ts";
let noticeIndex = 0;
export abstract class LiveSyncCommands {
plugin: ObsidianLiveSyncPlugin;
get app() {
@@ -14,27 +26,77 @@ export abstract class LiveSyncCommands {
get localDatabase() {
return this.plugin.localDatabase;
}
get vaultAccess() {
return this.plugin.vaultAccess;
}
id2path(id: DocumentID, entry?: EntryHasPath, stripPrefix?: boolean): FilePathWithPrefix {
return this.plugin.id2path(id, entry, stripPrefix);
get services() {
return this.plugin.services;
}
// id2path(id: DocumentID, entry?: EntryHasPath, stripPrefix?: boolean): FilePathWithPrefix {
// return this.plugin.$$id2path(id, entry, stripPrefix);
// }
async path2id(filename: FilePathWithPrefix | FilePath, prefix?: string): Promise<DocumentID> {
return await this.plugin.path2id(filename, prefix);
return await this.services.path.path2id(filename, prefix);
}
getPath(entry: AnyEntry): FilePathWithPrefix {
return this.plugin.getPath(entry);
return getPath(entry);
}
constructor(plugin: ObsidianLiveSyncPlugin) {
this.plugin = plugin;
this.onBindFunction(plugin, plugin.services);
__$checkInstanceBinding(this);
}
abstract onunload(): void;
abstract onload(): void | Promise<void>;
abstract onInitializeDatabase(showNotice: boolean): void | Promise<void>;
abstract beforeReplicate(showNotice: boolean): void | Promise<void>;
abstract onResume(): void | Promise<void>;
abstract parseReplicationResultItem(docs: PouchDB.Core.ExistingDocument<EntryDoc>): Promise<boolean> | boolean;
abstract realizeSettingSyncMode(): Promise<void>;
_isMainReady() {
return this.plugin.services.appLifecycle.isReady();
}
_isMainSuspended() {
return this.services.appLifecycle.isSuspended();
}
_isDatabaseReady() {
return this.services.database.isDatabaseReady();
}
_log = (msg: any, level: LOG_LEVEL = LOG_LEVEL_INFO, key?: string) => {
if (typeof msg === "string" && level !== LOG_LEVEL_NOTICE) {
msg = `[${this.constructor.name}]\u{200A} ${msg}`;
}
// console.log(msg);
Logger(msg, level, key);
};
_verbose = (msg: any, key?: string) => {
this._log(msg, LOG_LEVEL_VERBOSE, key);
};
_info = (msg: any, key?: string) => {
this._log(msg, LOG_LEVEL_INFO, key);
};
_notice = (msg: any, key?: string) => {
this._log(msg, LOG_LEVEL_NOTICE, key);
};
_progress = (prefix: string = "", level: LOG_LEVEL = LOG_LEVEL_NOTICE) => {
const key = `keepalive-progress-${noticeIndex++}`;
return {
log: (msg: any) => {
this._log(prefix + msg, level, key);
},
once: (msg: any) => {
this._log(prefix + msg, level);
},
done: (msg: string = "Done") => {
this._log(prefix + msg + MARK_DONE, level, key);
},
};
};
_debug = (msg: any, key?: string) => {
this._log(msg, LOG_LEVEL_VERBOSE, key);
};
onBindFunction(core: LiveSyncCore, services: typeof core.services) {
// Override if needed.
}
}
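// NOTE (editorial sketch): the _progress() helper above yields a keep-alive progress logger
// with a unique key per call. Typical usage from a concrete subclass (illustration only;
// the "GC: " prefix is a hypothetical value):
//
// const p = this._progress("GC: ");
// p.log("Scanning chunks...");   // repeatedly updates the same keep-alive notice
// p.once("Found 42 candidates"); // one-shot line, logged without the keep-alive key
// p.done();                      // appends MARK_DONE so the notice is marked finished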

View File

@@ -0,0 +1,488 @@
import { sizeToHumanReadable } from "octagonal-wheels/number";
import {
EntryTypes,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type DocumentID,
type EntryDoc,
type EntryLeaf,
type MetaEntry,
} from "../../lib/src/common/types";
import { getNoFromRev } from "../../lib/src/pouchdb/LiveSyncLocalDB";
import { LiveSyncCommands } from "../LiveSyncCommands";
import { serialized } from "octagonal-wheels/concurrency/lock_v2";
import { arrayToChunkedArray } from "octagonal-wheels/collection";
const DB_KEY_SEQ = "gc-seq";
const DB_KEY_CHUNK_SET = "chunk-set";
const DB_KEY_DOC_USAGE_MAP = "doc-usage-map";
type ChunkID = DocumentID;
type NoteDocumentID = DocumentID;
type Rev = string;
type ChunkUsageMap = Map<NoteDocumentID, Map<Rev, Set<ChunkID>>>;
export class LocalDatabaseMaintenance extends LiveSyncCommands {
onunload(): void {
// NO OP.
}
onload(): void | Promise<void> {
// NO OP.
}
async allChunks(includeDeleted: boolean = false) {
const p = this._progress("", LOG_LEVEL_NOTICE);
p.log("Retrieving chunk information...");
try {
const ret = await this.localDatabase.allChunks(includeDeleted);
return ret;
} finally {
p.done();
}
}
get database() {
return this.localDatabase.localDatabase;
}
clearHash() {
this.localDatabase.clearCaches();
}
async confirm(title: string, message: string, affirmative = "Yes", negative = "No") {
return (
(await this.plugin.confirm.askSelectStringDialogue(message, [affirmative, negative], {
title,
defaultAction: affirmative,
})) === affirmative
);
}
isAvailable() {
if (!this.settings.doNotUseFixedRevisionForChunks) {
this._notice("Please enable 'Compute revisions for chunks' in settings to use Garbage Collection.");
return false;
}
if (this.settings.readChunksOnline) {
this._notice("Please disable 'Read chunks online' in settings to use Garbage Collection.");
return false;
}
return true;
}
/**
* Resurrect deleted chunks that are still used in the database.
*/
async resurrectChunks() {
if (!this.isAvailable()) return;
const { used, existing } = await this.allChunks(true);
const excessiveDeletions = [...existing]
.filter(([key, e]) => e._deleted)
.filter(([key, e]) => used.has(e._id))
.map(([key, e]) => e);
const completelyLostChunks = [] as string[];
// Data-lost chunks: chunks that have been deleted and whose data has been purged.
const dataLostChunks = [...existing]
.filter(([key, e]) => e._deleted && e.data === "")
.map(([key, e]) => e)
.filter((e) => used.has(e._id));
for (const e of dataLostChunks) {
// Retrieve the data from the previous revision.
const doc = await this.database.get(e._id, { rev: e._rev, revs: true, revs_info: true, conflicts: true });
const history = doc._revs_info || [];
            // Chunks are immutable, so we can resurrect a chunk by copying the data from any previous revision.
let resurrected = null as null | string;
const availableRevs = history
.filter((e) => e.status == "available")
.map((e) => e.rev)
.sort((a, b) => getNoFromRev(a) - getNoFromRev(b));
for (const rev of availableRevs) {
const revDoc = await this.database.get(e._id, { rev: rev });
if (revDoc.type == "leaf" && revDoc.data !== "") {
// Found the data.
resurrected = revDoc.data;
break;
}
}
            // If the data was found, queue the chunk for resurrection; otherwise, record it as completely lost.
if (resurrected !== null) {
excessiveDeletions.push({ ...e, data: resurrected, _deleted: false });
} else {
completelyLostChunks.push(e._id);
}
}
// Chunks to be resurrected.
const resurrectChunks = excessiveDeletions.filter((e) => e.data !== "").map((e) => ({ ...e, _deleted: false }));
if (resurrectChunks.length == 0) {
this._notice("No chunks are found to be resurrected.");
return;
}
        const message = `The following chunks are deleted but still used in the database.
- Completely lost chunks: ${completelyLostChunks.length}
- Resurrectable chunks: ${resurrectChunks.length}
Do you want to resurrect these chunks?`;
if (await this.confirm("Resurrect Chunks", message, "Resurrect", "Cancel")) {
const result = await this.database.bulkDocs(resurrectChunks);
this.clearHash();
const resurrectedChunks = result.filter((e) => "ok" in e).map((e) => e.id);
this._notice(`Resurrected chunks: ${resurrectedChunks.length} / ${resurrectChunks.length}`);
} else {
this._notice("Resurrect operation is cancelled.");
}
}
    /**
     * Commit deletion of files that are marked as deleted.
     * This method makes the deletion permanent; the files cannot be recovered afterwards.
     * After this, chunks that were used by the deleted files become eligible for compaction.
     */
async commitFileDeletion() {
if (!this.isAvailable()) return;
const p = this._progress("", LOG_LEVEL_NOTICE);
p.log("Searching for deleted files..");
const docs = await this.database.allDocs<MetaEntry>({ include_docs: true });
const deletedDocs = docs.rows.filter(
(e) => (e.doc?.type == "newnote" || e.doc?.type == "plain") && e.doc?.deleted
);
if (deletedDocs.length == 0) {
p.done("No deleted files found.");
return;
}
p.log(`Found ${deletedDocs.length} deleted files.`);
        const message = `The following files are marked as deleted.
- Deleted files: ${deletedDocs.length}
Are you sure you want to delete these files permanently?
Note: **Make sure to synchronise all devices before deletion.**
> [!Note]
> This operation affects the database permanently. Deleted files cannot be recovered after this operation.
> The chunks used by the deleted files will then become ready for compaction.`;
const deletingDocs = deletedDocs.map((e) => ({ ...e.doc, _deleted: true }) as MetaEntry);
if (await this.confirm("Delete Files", message, "Delete", "Cancel")) {
const result = await this.database.bulkDocs(deletingDocs);
this.clearHash();
p.done(`Deleted ${result.filter((e) => "ok" in e).length} / ${deletedDocs.length} files.`);
} else {
p.done("Deletion operation is cancelled.");
}
}
    /**
     * Commit deletion of chunks that are not used in the database.
     * This method makes the deletion permanent; the chunks cannot be recovered once the database runs compaction.
     * After this, compaction can shrink the database size.
     * It is recommended to compact the database after this operation (history should be kept at least once before compaction).
     */
async commitChunkDeletion() {
if (!this.isAvailable()) return;
const { existing } = await this.allChunks(true);
const deletedChunks = [...existing].filter(([key, e]) => e._deleted && e.data !== "").map(([key, e]) => e);
const deletedNotVacantChunks = deletedChunks.map((e) => ({ ...e, data: "", _deleted: true }));
const size = deletedChunks.reduce((acc, e) => acc + e.data.length, 0);
const humanSize = sizeToHumanReadable(size);
        const message = `The following chunks are marked as deleted.
- Deleted chunks: ${deletedNotVacantChunks.length} (${humanSize})
Are you sure you want to delete these chunks permanently?
Note: **Make sure to synchronise all devices before deletion.**
> [!Note]
> This operation permanently reduces the space used on the remote.`;
if (deletedNotVacantChunks.length == 0) {
this._notice("No deleted chunks found.");
return;
}
if (await this.confirm("Delete Chunks", message, "Delete", "Cancel")) {
const result = await this.database.bulkDocs(deletedNotVacantChunks);
this.clearHash();
this._notice(
`Deleted chunks: ${result.filter((e) => "ok" in e).length} / ${deletedNotVacantChunks.length}`
);
} else {
this._notice("Deletion operation is cancelled.");
}
}
    /**
     * Mark chunks that are not used by any files as deleted.
     * This method only marks the chunks; their data is kept until permanent deletion.
     * Make sure all devices are synchronised before running this method.
     */
async markUnusedChunks() {
if (!this.isAvailable()) return;
const { used, existing } = await this.allChunks();
const existChunks = [...existing];
const unusedChunks = existChunks.filter(([key, e]) => !used.has(e._id)).map(([key, e]) => e);
const deleteChunks = unusedChunks.map((e) => ({
...e,
_deleted: true,
}));
const size = deleteChunks.reduce((acc, e) => acc + e.data.length, 0);
const humanSize = sizeToHumanReadable(size);
if (deleteChunks.length == 0) {
this._notice("No unused chunks found.");
return;
}
        const message = `The following chunks are not used by any files.
- Chunks: ${deleteChunks.length} (${humanSize})
Are you sure you want to mark these chunks as deleted?
Note: **Make sure to synchronise all devices before deletion.**
> [!Note]
> This operation will not reduce the space used on the remote until permanent deletion.`;
if (await this.confirm("Mark unused chunks", message, "Mark", "Cancel")) {
const result = await this.database.bulkDocs(deleteChunks);
this.clearHash();
this._notice(`Marked chunks: ${result.filter((e) => "ok" in e).length} / ${deleteChunks.length}`);
}
}
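    /**
     * Delete chunks that are not used by any files, emptying their data at the same time.
     * Unlike markUnusedChunks, this purges the chunk bodies immediately.
     * Make sure all devices are synchronised before running this method.
     */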
async removeUnusedChunks() {
const { used, existing } = await this.allChunks();
const existChunks = [...existing];
const unusedChunks = existChunks.filter(([key, e]) => !used.has(e._id)).map(([key, e]) => e);
const deleteChunks = unusedChunks.map((e) => ({
...e,
data: "",
_deleted: true,
}));
const size = unusedChunks.reduce((acc, e) => acc + e.data.length, 0);
const humanSize = sizeToHumanReadable(size);
if (deleteChunks.length == 0) {
this._notice("No unused chunks found.");
return;
}
        const message = `The following chunks are not used by any files.
- Chunks: ${deleteChunks.length} (${humanSize})
Are you sure you want to delete these chunks?
Note: **Make sure to synchronise all devices before deletion.**
> [!Note]
> Chunks referenced from deleted files are not deleted. Please run "Commit File Deletion" before this operation.`;
        if (await this.confirm("Delete unused chunks", message, "Delete", "Cancel")) {
const result = await this.database.bulkDocs(deleteChunks);
this._notice(`Deleted chunks: ${result.filter((e) => "ok" in e).length} / ${deleteChunks.length}`);
this.clearHash();
}
}
async scanUnusedChunks() {
const kvDB = this.plugin.kvDB;
const chunkSet = (await kvDB.get<Set<DocumentID>>(DB_KEY_CHUNK_SET)) || new Set();
const chunkUsageMap = (await kvDB.get<ChunkUsageMap>(DB_KEY_DOC_USAGE_MAP)) || new Map();
const KEEP_MAX_REVS = 10;
const unusedSet = new Set<DocumentID>([...chunkSet]);
for (const [, revIdMap] of chunkUsageMap) {
const sortedRevId = [...revIdMap.entries()].sort((a, b) => getNoFromRev(b[0]) - getNoFromRev(a[0]));
            // Keep only the newest KEEP_MAX_REVS revisions; chunks they reference stay out of the unused set.
            const keepRevID = sortedRevId.slice(0, KEEP_MAX_REVS);
keepRevID.forEach((e) => e[1].forEach((ee) => unusedSet.delete(ee)));
}
return {
chunkSet,
chunkUsageMap,
unusedSet,
};
}
    /**
     * Track changes in the database and update the chunk usage map for garbage collection.
     * Note that this can only be performed when `Fetch chunks on demand` is disabled.
     */
async trackChanges(fromStart: boolean = false, showNotice: boolean = false) {
if (!this.isAvailable()) return;
const logLevel = showNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const kvDB = this.plugin.kvDB;
const previousSeq = fromStart ? "" : await kvDB.get<string>(DB_KEY_SEQ);
const chunkSet = (await kvDB.get<Set<DocumentID>>(DB_KEY_CHUNK_SET)) || new Set();
const chunkUsageMap = (await kvDB.get<ChunkUsageMap>(DB_KEY_DOC_USAGE_MAP)) || new Map();
const db = this.localDatabase.localDatabase;
const verbose = (msg: string) => this._verbose(msg);
const processDoc = async (doc: EntryDoc, isDeleted: boolean) => {
if (!("children" in doc)) {
return;
}
const id = doc._id;
const rev = doc._rev!;
const deleted = doc._deleted || isDeleted;
const softDeleted = doc.deleted;
const children = (doc.children || []) as DocumentID[];
if (!chunkUsageMap.has(id)) {
chunkUsageMap.set(id, new Map<Rev, Set<ChunkID>>());
}
for (const chunkId of children) {
if (deleted) {
chunkUsageMap.get(id)!.delete(rev);
// chunkSet.add(chunkId as DocumentID);
                } else {
                    // Both soft-deleted and live revisions keep their chunk references for now.
                    // TODO: Handle soft-deleted documents separately.
                    chunkUsageMap.get(id)!.set(rev, (chunkUsageMap.get(id)!.get(rev) || new Set()).add(chunkId));
                }
}
            verbose(
                `Tracking chunk: ${id}/${rev} (${doc?.path}), deleted: ${deleted ? "yes" : "no"}, soft-deleted: ${softDeleted ? "yes" : "no"}`
            );
return await Promise.resolve();
};
// let saveQueue = 0;
const saveState = async (seq: string | number) => {
await kvDB.set(DB_KEY_SEQ, seq);
await kvDB.set(DB_KEY_CHUNK_SET, chunkSet);
await kvDB.set(DB_KEY_DOC_USAGE_MAP, chunkUsageMap);
};
const processDocRevisions = async (doc: EntryDoc) => {
try {
const oldRevisions = await db.get(doc._id, { revs: true, revs_info: true, conflicts: true });
const allRevs = oldRevisions._revs_info?.length || 0;
const info = (oldRevisions._revs_info || [])
.filter((e) => e.status == "available" && e.rev != doc._rev)
.filter((info) => !chunkUsageMap.get(doc._id)?.has(info.rev));
const infoLength = info.length;
                this._log(`Found ${allRevs} old revisions for ${doc._id}. ${infoLength} items to check.`);
if (info.length > 0) {
const oldDocs = await Promise.all(
info
.filter((revInfo) => revInfo.status == "available")
.map((revInfo) => db.get(doc._id, { rev: revInfo.rev }))
).then((docs) => docs.filter((doc) => doc));
for (const oldDoc of oldDocs) {
await processDoc(oldDoc as EntryDoc, false);
}
}
} catch (ex) {
if ((ex as any)?.status == 404) {
this._log(`No revisions found for ${doc._id}`, LOG_LEVEL_VERBOSE);
} else {
this._log(`Error finding revisions for ${doc._id}`);
this._verbose(ex);
}
}
};
const processChange = async (doc: EntryDoc, isDeleted: boolean, seq: string | number) => {
if (doc.type === EntryTypes.CHUNK) {
if (isDeleted) return;
chunkSet.add(doc._id);
} else if ("children" in doc) {
await processDoc(doc, isDeleted);
await serialized("x-process-doc", async () => await processDocRevisions(doc));
}
};
// Track changes
let i = 0;
await db
.changes({
since: previousSeq || "",
live: false,
conflicts: true,
include_docs: true,
style: "all_docs",
return_docs: false,
})
.on("change", async (change) => {
// handle change
await processChange(change.doc!, change.deleted ?? false, change.seq);
if (i++ % 100 == 0) {
await saveState(change.seq);
}
})
.on("complete", async (info) => {
await saveState(info.last_seq);
});
        // Track all changed docs and new leaves.
const result = await this.scanUnusedChunks();
const message = `Total chunks: ${result.chunkSet.size}\nUnused chunks: ${result.unusedSet.size}`;
this._log(message, logLevel);
}
async performGC(showingNotice = false) {
if (!this.isAvailable()) return;
await this.trackChanges(false, showingNotice);
const title = "Are all devices synchronised?";
        const confirmMessage = `This function deletes unused chunks from the device. If there are differences between devices, some chunks may be missing when conflicts are resolved.
Be sure to synchronise all devices before executing.
If chunks do go missing, you may be able to recover them by performing Hatch -> Recreate missing chunks for all files.
Are you ready to delete unused chunks?`;
const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const BUTTON_OK = `Yes, delete chunks`;
const BUTTON_CANCEL = "Cancel";
const result = await this.plugin.confirm.askSelectStringDialogue(
confirmMessage,
[BUTTON_OK, BUTTON_CANCEL] as const,
{
title,
defaultAction: BUTTON_CANCEL,
}
);
if (result !== BUTTON_OK) {
this._log("User cancelled chunk deletion", logLevel);
return;
}
const { unusedSet, chunkSet } = await this.scanUnusedChunks();
const deleteChunks = await this.database.allDocs({
keys: [...unusedSet],
include_docs: true,
});
for (const chunk of deleteChunks.rows) {
if ((chunk as any)?.value?.deleted) {
chunkSet.delete(chunk.key as DocumentID);
}
}
const deleteDocs = deleteChunks.rows
.filter((e) => "doc" in e)
.map((e) => ({
...(e as any).doc!,
_deleted: true,
}));
this._log(`Deleting chunks: ${deleteDocs.length}`, logLevel);
const deleteChunkBatch = arrayToChunkedArray(deleteDocs, 100);
let successCount = 0;
let errored = 0;
for (const batch of deleteChunkBatch) {
const results = await this.database.bulkDocs(batch as EntryLeaf[]);
for (const result of results) {
if ("ok" in result) {
chunkSet.delete(result.id as DocumentID);
successCount++;
} else {
this._log(`Failed to delete doc: ${result.id}`, LOG_LEVEL_VERBOSE);
errored++;
}
}
            this._log(`Deleting chunks: ${successCount}`, logLevel, "gc-performing");
}
const message = `Garbage Collection completed.
Success: ${successCount}, Errored: ${errored}`;
this._log(message, logLevel);
const kvDB = this.plugin.kvDB;
await kvDB.set(DB_KEY_CHUNK_SET, chunkSet);
}
}
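
// A minimal usage sketch (assumption: `maintenance` is the registered LocalDatabaseMaintenance
// add-on instance; only methods defined in this file are used). The coarse flow is
// mark -> commit -> compact, synchronising all devices before each destructive step.
async function maintenanceFlowSketch(maintenance: LocalDatabaseMaintenance) {
    await maintenance.markUnusedChunks(); // mark chunks unused by any file as deleted
    await maintenance.commitFileDeletion(); // make pending file deletions permanent
    await maintenance.commitChunkDeletion(); // purge the data of deleted chunks
    // Alternatively, the incremental, revision-aware path:
    await maintenance.trackChanges(false, true); // update the chunk usage map
    await maintenance.performGC(true); // delete chunks unreferenced by recent revisions
}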

View File

@@ -0,0 +1,281 @@
import { P2PReplicatorPaneView, VIEW_TYPE_P2P } from "./P2PReplicator/P2PReplicatorPaneView.ts";
import {
AutoAccepting,
LOG_LEVEL_NOTICE,
P2P_DEFAULT_SETTINGS,
REMOTE_P2P,
type EntryDoc,
type P2PSyncSetting,
type RemoteDBSettings,
} from "../../lib/src/common/types.ts";
import { LiveSyncCommands } from "../LiveSyncCommands.ts";
import {
LiveSyncTrysteroReplicator,
setReplicatorFunc,
} from "../../lib/src/replication/trystero/LiveSyncTrysteroReplicator.ts";
import { EVENT_REQUEST_OPEN_P2P, eventHub } from "../../common/events.ts";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator.ts";
import { LOG_LEVEL_INFO, LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import type { CommandShim } from "../../lib/src/replication/trystero/P2PReplicatorPaneCommon.ts";
import {
addP2PEventHandlers,
closeP2PReplicator,
openP2PReplicator,
P2PLogCollector,
removeP2PReplicatorInstance,
type P2PReplicatorBase,
} from "../../lib/src/replication/trystero/P2PReplicatorCore.ts";
import { reactiveSource } from "octagonal-wheels/dataobject/reactive_v2";
import type { Confirm } from "../../lib/src/interfaces/Confirm.ts";
import type ObsidianLiveSyncPlugin from "../../main.ts";
import type { SimpleStore } from "octagonal-wheels/databases/SimpleStoreBase";
import { getPlatformName } from "../../lib/src/PlatformAPIs/obsidian/Environment.ts";
import type { LiveSyncCore } from "../../main.ts";
import { TrysteroReplicator } from "../../lib/src/replication/trystero/TrysteroReplicator.ts";
import { SETTING_KEY_P2P_DEVICE_NAME } from "../../lib/src/common/types.ts";
export class P2PReplicator extends LiveSyncCommands implements P2PReplicatorBase, CommandShim {
storeP2PStatusLine = reactiveSource("");
getSettings(): P2PSyncSetting {
return this.plugin.settings;
}
get settings() {
return this.plugin.settings;
}
getDB() {
return this.plugin.localDatabase.localDatabase;
}
get confirm(): Confirm {
return this.plugin.confirm;
}
_simpleStore!: SimpleStore<any>;
simpleStore(): SimpleStore<any> {
return this._simpleStore;
}
constructor(plugin: ObsidianLiveSyncPlugin) {
super(plugin);
setReplicatorFunc(() => this._replicatorInstance);
addP2PEventHandlers(this);
this.afterConstructor();
// onBindFunction is called in super class
// this.onBindFunction(plugin, plugin.services);
}
async handleReplicatedDocuments(docs: EntryDoc[]): Promise<void> {
// console.log("Processing Replicated Docs", docs);
return await this.services.replication.parseSynchroniseResult(
docs as PouchDB.Core.ExistingDocument<EntryDoc>[]
);
}
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_P2P) {
return Promise.resolve(new LiveSyncTrysteroReplicator(this.plugin));
}
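        // Not our remote type; returning undefined lets another registered handler supply the replicator.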
return undefined!;
}
_replicatorInstance?: TrysteroReplicator;
p2pLogCollector = new P2PLogCollector();
afterConstructor() {
return;
}
async open() {
await openP2PReplicator(this);
}
async close() {
await closeP2PReplicator(this);
}
getConfig(key: string) {
return this.services.config.getSmallConfig(key);
}
setConfig(key: string, value: string) {
return this.services.config.setSmallConfig(key, value);
}
enableBroadcastCastings() {
return this?._replicatorInstance?.enableBroadcastChanges();
}
disableBroadcastCastings() {
return this?._replicatorInstance?.disableBroadcastChanges();
}
init() {
this._simpleStore = this.services.database.openSimpleStore("p2p-sync");
return Promise.resolve(this);
}
async initialiseP2PReplicator(): Promise<TrysteroReplicator> {
await this.init();
try {
if (this._replicatorInstance) {
await this._replicatorInstance.close();
this._replicatorInstance = undefined;
}
if (!this.settings.P2P_AppID) {
this.settings.P2P_AppID = P2P_DEFAULT_SETTINGS.P2P_AppID;
}
const getInitialDeviceName = () =>
this.getConfig(SETTING_KEY_P2P_DEVICE_NAME) || this.services.vault.getVaultName();
const getSettings = () => this.settings;
const store = () => this.simpleStore();
const getDB = () => this.getDB();
const getConfirm = () => this.confirm;
const getPlatform = () => this.getPlatform();
const env = {
get db() {
return getDB();
},
get confirm() {
return getConfirm();
},
get deviceName() {
return getInitialDeviceName();
},
get platform() {
return getPlatform();
},
get settings() {
return getSettings();
},
                processReplicatedDocs: async (docs: EntryDoc[]): Promise<void> => {
                    // Delegate to handleReplicatedDocuments, which parses the synchronisation result.
                    await this.handleReplicatedDocuments(docs);
                },
get simpleStore() {
return store();
},
};
this._replicatorInstance = new TrysteroReplicator(env);
return this._replicatorInstance;
} catch (e) {
this._log(
e instanceof Error ? e.message : "Something occurred on Initialising P2P Replicator",
LOG_LEVEL_INFO
);
this._log(e, LOG_LEVEL_VERBOSE);
throw e;
}
}
getPlatform(): string {
return getPlatformName();
}
onunload(): void {
removeP2PReplicatorInstance();
void this.close();
}
onload(): void | Promise<void> {
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P, () => {
void this.openPane();
});
this.p2pLogCollector.p2pReplicationLine.onChanged((line) => {
this.storeP2PStatusLine.value = line.value;
});
}
async _everyOnInitializeDatabase(): Promise<boolean> {
await this.initialiseP2PReplicator();
return Promise.resolve(true);
}
private async _allSuspendExtraSync() {
this.plugin.settings.P2P_Enabled = false;
this.plugin.settings.P2P_AutoAccepting = AutoAccepting.NONE;
this.plugin.settings.P2P_AutoBroadcast = false;
this.plugin.settings.P2P_AutoStart = false;
this.plugin.settings.P2P_AutoSyncPeers = "";
this.plugin.settings.P2P_AutoWatchPeers = "";
return await Promise.resolve(true);
}
// async $everyOnLoadStart() {
// return await Promise.resolve();
// }
async openPane() {
await this.services.API.showWindow(VIEW_TYPE_P2P);
}
async _everyOnloadStart(): Promise<boolean> {
this.plugin.registerView(VIEW_TYPE_P2P, (leaf) => new P2PReplicatorPaneView(leaf, this.plugin));
this.plugin.addCommand({
id: "open-p2p-replicator",
name: "P2P Sync : Open P2P Replicator",
callback: async () => {
await this.openPane();
},
});
this.plugin.addCommand({
id: "p2p-establish-connection",
name: "P2P Sync : Connect to the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return !(this._replicatorInstance?.server?.isServing ?? false);
}
void this.open();
},
});
this.plugin.addCommand({
id: "p2p-close-connection",
name: "P2P Sync : Disconnect from the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return this._replicatorInstance?.server?.isServing ?? false;
}
Logger(`Closing P2P Connection`, LOG_LEVEL_NOTICE);
void this.close();
},
});
this.plugin.addCommand({
id: "replicate-now-by-p2p",
name: "Replicate now by P2P",
checkCallback: (isChecking) => {
if (isChecking) {
if (this.settings.remoteType == REMOTE_P2P) return false;
if (!this._replicatorInstance?.server?.isServing) return false;
return true;
}
void this._replicatorInstance?.replicateFromCommand(false);
},
});
this.plugin
.addRibbonIcon("waypoints", "P2P Replicator", async () => {
await this.openPane();
})
.addClass("livesync-ribbon-replicate-p2p");
return await Promise.resolve(true);
}
_everyAfterResumeProcess(): Promise<boolean> {
if (this.settings.P2P_Enabled && this.settings.P2P_AutoStart) {
setTimeout(() => void this.open(), 100);
}
const rep = this._replicatorInstance;
rep?.allowReconnection();
return Promise.resolve(true);
}
_everyBeforeSuspendProcess(): Promise<boolean> {
const rep = this._replicatorInstance;
rep?.disconnectFromServer();
return Promise.resolve(true);
}
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.handleGetNewReplicator(this._anyNewReplicator.bind(this));
services.databaseEvents.handleOnDatabaseInitialisation(this._everyOnInitializeDatabase.bind(this));
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
services.appLifecycle.handleOnSuspending(this._everyBeforeSuspendProcess.bind(this));
services.appLifecycle.handleOnResumed(this._everyAfterResumeProcess.bind(this));
services.setting.handleSuspendExtraSync(this._allSuspendExtraSync.bind(this));
}
}
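
// Usage sketch (assumption: `plugin` is the loaded ObsidianLiveSyncPlugin instance; `getAddOn`
// is the add-on lookup used elsewhere in this codebase, e.g. in P2PReplicatorPaneView).
async function openP2PSketch(plugin: ObsidianLiveSyncPlugin) {
    const rep = plugin.getAddOn<P2PReplicator>(P2PReplicator.name);
    if (!rep) throw new Error("P2PReplicator add-on is not registered");
    await rep.open(); // connect to the signalling server
    await rep.openPane(); // show the replicator pane
}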

View File

@@ -0,0 +1,496 @@
<script lang="ts">
import { onMount, setContext } from "svelte";
import { AutoAccepting, DEFAULT_SETTINGS, type P2PSyncSetting } from "../../../lib/src/common/types";
import {
AcceptedStatus,
ConnectionStatus,
type CommandShim,
type PeerStatus,
type PluginShim,
} from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
import PeerStatusRow from "../P2PReplicator/PeerStatusRow.svelte";
import { EVENT_LAYOUT_READY, eventHub } from "../../../common/events";
import {
type PeerInfo,
type P2PServerInfo,
EVENT_SERVER_STATUS,
EVENT_REQUEST_STATUS,
EVENT_P2P_REPLICATOR_STATUS,
} from "../../../lib/src/replication/trystero/TrysteroReplicatorP2PServer";
import { type P2PReplicatorStatus } from "../../../lib/src/replication/trystero/TrysteroReplicator";
import { $msg as _msg } from "../../../lib/src/common/i18n";
import { SETTING_KEY_P2P_DEVICE_NAME } from "../../../lib/src/common/types";
interface Props {
plugin: PluginShim;
cmdSync: CommandShim;
}
let { plugin, cmdSync }: Props = $props();
// const cmdSync = plugin.getAddOn<P2PReplicator>("P2PReplicator")!;
setContext("getReplicator", () => cmdSync);
const initialSettings = { ...plugin.settings };
let settings = $state<P2PSyncSetting>(initialSettings);
// const vaultName = service.vault.getVaultName();
// const dbKey = `${vaultName}-p2p-device-name`;
const initialDeviceName = cmdSync.getConfig(SETTING_KEY_P2P_DEVICE_NAME) ?? plugin.services.vault.getVaultName();
let deviceName = $state<string>(initialDeviceName);
let eP2PEnabled = $state<boolean>(initialSettings.P2P_Enabled);
let eRelay = $state<string>(initialSettings.P2P_relays);
let eRoomId = $state<string>(initialSettings.P2P_roomID);
let ePassword = $state<string>(initialSettings.P2P_passphrase);
let eAppId = $state<string>(initialSettings.P2P_AppID);
let eDeviceName = $state<string>(initialDeviceName);
let eAutoAccept = $state<boolean>(initialSettings.P2P_AutoAccepting == AutoAccepting.ALL);
let eAutoStart = $state<boolean>(initialSettings.P2P_AutoStart);
let eAutoBroadcast = $state<boolean>(initialSettings.P2P_AutoBroadcast);
const isP2PEnabledModified = $derived.by(() => eP2PEnabled !== settings.P2P_Enabled);
const isRelayModified = $derived.by(() => eRelay !== settings.P2P_relays);
const isRoomIdModified = $derived.by(() => eRoomId !== settings.P2P_roomID);
const isPasswordModified = $derived.by(() => ePassword !== settings.P2P_passphrase);
const isAppIdModified = $derived.by(() => eAppId !== settings.P2P_AppID);
const isDeviceNameModified = $derived.by(() => eDeviceName !== deviceName);
const isAutoAcceptModified = $derived.by(() => eAutoAccept !== (settings.P2P_AutoAccepting == AutoAccepting.ALL));
const isAutoStartModified = $derived.by(() => eAutoStart !== settings.P2P_AutoStart);
const isAutoBroadcastModified = $derived.by(() => eAutoBroadcast !== settings.P2P_AutoBroadcast);
const isAnyModified = $derived.by(
() =>
isP2PEnabledModified ||
isRelayModified ||
isRoomIdModified ||
isPasswordModified ||
isAppIdModified ||
isDeviceNameModified ||
isAutoAcceptModified ||
isAutoStartModified ||
isAutoBroadcastModified
);
async function saveAndApply() {
const newSettings = {
...plugin.settings,
P2P_Enabled: eP2PEnabled,
P2P_relays: eRelay,
P2P_roomID: eRoomId,
P2P_passphrase: ePassword,
P2P_AppID: eAppId,
P2P_AutoAccepting: eAutoAccept ? AutoAccepting.ALL : AutoAccepting.NONE,
P2P_AutoStart: eAutoStart,
P2P_AutoBroadcast: eAutoBroadcast,
};
plugin.settings = newSettings;
cmdSync.setConfig(SETTING_KEY_P2P_DEVICE_NAME, eDeviceName);
deviceName = eDeviceName;
await plugin.saveSettings();
}
    async function revert() {
        eP2PEnabled = settings.P2P_Enabled;
        eRelay = settings.P2P_relays;
        eRoomId = settings.P2P_roomID;
        ePassword = settings.P2P_passphrase;
        eAppId = settings.P2P_AppID;
        eDeviceName = deviceName;
        eAutoAccept = settings.P2P_AutoAccepting == AutoAccepting.ALL;
        eAutoStart = settings.P2P_AutoStart;
        eAutoBroadcast = settings.P2P_AutoBroadcast;
    }
let serverInfo = $state<P2PServerInfo | undefined>(undefined);
let replicatorInfo = $state<P2PReplicatorStatus | undefined>(undefined);
const applyLoadSettings = (d: P2PSyncSetting, force: boolean) => {
const { P2P_relays, P2P_roomID, P2P_passphrase, P2P_AppID, P2P_AutoAccepting } = d;
if (force || !isP2PEnabledModified) eP2PEnabled = d.P2P_Enabled;
if (force || !isRelayModified) eRelay = P2P_relays;
if (force || !isRoomIdModified) eRoomId = P2P_roomID;
if (force || !isPasswordModified) ePassword = P2P_passphrase;
if (force || !isAppIdModified) eAppId = P2P_AppID;
const newAutoAccept = P2P_AutoAccepting === AutoAccepting.ALL;
if (force || !isAutoAcceptModified) eAutoAccept = newAutoAccept;
if (force || !isAutoStartModified) eAutoStart = d.P2P_AutoStart;
if (force || !isAutoBroadcastModified) eAutoBroadcast = d.P2P_AutoBroadcast;
settings = d;
};
onMount(() => {
const r = eventHub.onEvent("setting-saved", async (d) => {
applyLoadSettings(d, false);
closeServer();
});
const rx = eventHub.onEvent(EVENT_LAYOUT_READY, () => {
applyLoadSettings(plugin.settings, true);
});
const r2 = eventHub.onEvent(EVENT_SERVER_STATUS, (status) => {
serverInfo = status;
advertisements = status?.knownAdvertisements ?? [];
});
const r3 = eventHub.onEvent(EVENT_P2P_REPLICATOR_STATUS, (status) => {
replicatorInfo = status;
});
eventHub.emitEvent(EVENT_REQUEST_STATUS);
        return () => {
            r();
            rx();
            r2();
            r3();
        };
});
let isConnected = $derived.by(() => {
return serverInfo?.isConnected ?? false;
});
let serverPeerId = $derived.by(() => {
return serverInfo?.serverPeerId ?? "";
});
let advertisements = $state<PeerInfo[]>([]);
let autoSyncPeers = $derived.by(() =>
settings.P2P_AutoSyncPeers.split(",")
.map((e) => e.trim())
.filter((e) => e)
);
let autoWatchPeers = $derived.by(() =>
settings.P2P_AutoWatchPeers.split(",")
.map((e) => e.trim())
.filter((e) => e)
);
let syncOnCommand = $derived.by(() =>
settings.P2P_SyncOnReplication.split(",")
.map((e) => e.trim())
.filter((e) => e)
);
const peers = $derived.by(() =>
advertisements.map((ad) => {
let accepted: AcceptedStatus;
const isTemporaryAccepted = ad.isTemporaryAccepted;
if (isTemporaryAccepted === undefined) {
if (ad.isAccepted === undefined) {
accepted = AcceptedStatus.UNKNOWN;
} else {
accepted = ad.isAccepted ? AcceptedStatus.ACCEPTED : AcceptedStatus.DENIED;
}
} else if (isTemporaryAccepted === true) {
accepted = AcceptedStatus.ACCEPTED_IN_SESSION;
} else {
accepted = AcceptedStatus.DENIED_IN_SESSION;
}
            const isFetching = (replicatorInfo?.replicatingFrom.indexOf(ad.peerId) ?? -1) !== -1;
            const isSending = (replicatorInfo?.replicatingTo.indexOf(ad.peerId) ?? -1) !== -1;
            const isWatching = (replicatorInfo?.watchingPeers.indexOf(ad.peerId) ?? -1) !== -1;
const syncOnStart = autoSyncPeers.indexOf(ad.name) !== -1;
const watchOnStart = autoWatchPeers.indexOf(ad.name) !== -1;
const syncOnReplicationCommand = syncOnCommand.indexOf(ad.name) !== -1;
const st: PeerStatus = {
name: ad.name,
peerId: ad.peerId,
accepted: accepted,
status: ad.isAccepted ? ConnectionStatus.CONNECTED : ConnectionStatus.DISCONNECTED,
isSending: isSending,
isFetching: isFetching,
isWatching: isWatching,
syncOnConnect: syncOnStart,
watchOnConnect: watchOnStart,
syncOnReplicationCommand: syncOnReplicationCommand,
};
return st;
})
);
function useDefaultRelay() {
eRelay = DEFAULT_SETTINGS.P2P_relays;
}
function _generateRandom() {
return (Math.floor(Math.random() * 1000) + 1000).toString().substring(1);
}
function generateRandom(length: number) {
let buf = "";
while (buf.length < length) {
buf += "-" + _generateRandom();
}
return buf.substring(1, length);
}
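    // For illustration (hypothetical digits): generateRandom(12) builds "-123-456-789" and
    // returns "123-456-789" (three-digit groups joined by "-", trimmed to length - 1 characters).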
function chooseRandom() {
eRoomId = generateRandom(12) + "-" + Math.random().toString(36).substring(2, 5);
}
async function openServer() {
await cmdSync.open();
}
async function closeServer() {
await cmdSync.close();
}
function startBroadcasting() {
void cmdSync.enableBroadcastCastings();
}
function stopBroadcasting() {
void cmdSync.disableBroadcastCastings();
}
const initialDialogStatusKey = `p2p-dialog-status`;
const getDialogStatus = () => {
try {
const initialDialogStatus = JSON.parse(cmdSync.getConfig(initialDialogStatusKey) ?? "{}") as {
notice?: boolean;
setting?: boolean;
};
return initialDialogStatus;
} catch (e) {
return {};
}
};
const initialDialogStatus = getDialogStatus();
let isNoticeOpened = $state<boolean>(initialDialogStatus.notice ?? true);
let isSettingOpened = $state<boolean>(initialDialogStatus.setting ?? true);
$effect(() => {
const dialogStatus = {
notice: isNoticeOpened,
setting: isSettingOpened,
};
cmdSync.setConfig(initialDialogStatusKey, JSON.stringify(dialogStatus));
});
let isObsidian = $derived.by(() => {
return plugin.services.API.getPlatform() === "obsidian";
});
</script>
<article>
<h1>Peer to Peer Replicator</h1>
<details bind:open={isNoticeOpened}>
<summary>{_msg("P2P.Note.Summary")}</summary>
<p class="important">{_msg("P2P.Note.important_note")}</p>
<p class="important-sub">
{_msg("P2P.Note.important_note_sub")}
</p>
{#each _msg("P2P.Note.description").split("\n\n") as paragraph}
<p>{paragraph}</p>
{/each}
</details>
<h2>Connection Settings</h2>
{#if isObsidian}
    You can configure these in the Obsidian plugin settings.
{:else}
<details bind:open={isSettingOpened}>
<summary>{eRelay}</summary>
<table class="settings">
<tbody>
<tr>
<th> Enable P2P Replicator </th>
<td>
<label class={{ "is-dirty": isP2PEnabledModified }}>
<input type="checkbox" bind:checked={eP2PEnabled} />
</label>
</td>
</tr><tr>
<th> Relay settings </th>
<td>
<label class={{ "is-dirty": isRelayModified }}>
<input
type="text"
placeholder="wss://exp-relay.vrtmrz.net, wss://xxxxx"
bind:value={eRelay}
autocomplete="off"
/>
<button onclick={() => useDefaultRelay()}> Use vrtmrz's relay </button>
</label>
</td>
</tr>
<tr>
<th> Room ID </th>
<td>
<label class={{ "is-dirty": isRoomIdModified }}>
<input
type="text"
placeholder="anything-you-like"
bind:value={eRoomId}
autocomplete="off"
spellcheck="false"
autocorrect="off"
/>
<button onclick={() => chooseRandom()}> Use Random Number </button>
</label>
<span>
<small>
                                        This isolates your connections from others. Use the same Room ID on all
                                        devices that should connect to each other.</small
>
</span>
</td>
</tr>
<tr>
<th> Password </th>
<td>
<label class={{ "is-dirty": isPasswordModified }}>
<input type="password" placeholder="password" bind:value={ePassword} />
</label>
<span>
<small>
                                        This password is used to encrypt the connection. Use a sufficiently long passphrase.
</small>
</span>
</td>
</tr>
<tr>
<th> This device name </th>
<td>
<label class={{ "is-dirty": isDeviceNameModified }}>
<input
type="text"
placeholder="iphone-16"
bind:value={eDeviceName}
autocomplete="off"
/>
</label>
<span>
<small>
                                        Device name to identify this device. Please use a short name for stable peer
                                        detection, e.g., "iphone-16" or "macbook-2021".
</small>
</span>
</td>
</tr>
<tr>
<th> Auto Connect </th>
<td>
<label class={{ "is-dirty": isAutoStartModified }}>
<input type="checkbox" bind:checked={eAutoStart} />
</label>
</td>
</tr>
<tr>
<th> Start change-broadcasting on Connect </th>
<td>
<label class={{ "is-dirty": isAutoBroadcastModified }}>
<input type="checkbox" bind:checked={eAutoBroadcast} />
</label>
</td>
</tr>
<!-- <tr>
<th> Auto Accepting </th>
<td>
<label class={{ "is-dirty": isAutoAcceptModified }}>
<input type="checkbox" bind:checked={eAutoAccept} />
</label>
</td>
</tr> -->
</tbody>
</table>
<button disabled={!isAnyModified} class="button mod-cta" onclick={saveAndApply}>Save and Apply</button>
<button disabled={!isAnyModified} class="button" onclick={revert}>Revert changes</button>
</details>
{/if}
<div>
<h2>Signaling Server Connection</h2>
<div>
{#if !isConnected}
<p>No Connection</p>
{:else}
<p>Connected to Signaling Server (as Peer ID: {serverPeerId})</p>
{/if}
</div>
<div>
{#if !isConnected}
<button onclick={openServer}>Connect</button>
{:else}
<button onclick={closeServer}>Disconnect</button>
{#if replicatorInfo?.isBroadcasting !== undefined}
{#if replicatorInfo?.isBroadcasting}
<button onclick={stopBroadcasting}>Stop Broadcasting</button>
{:else}
<button onclick={startBroadcasting}>Start Broadcasting</button>
{/if}
{/if}
<details>
<summary>Broadcasting?</summary>
<p>
<small>
                            If you want to use `LiveSync`, you should broadcast changes. All `watching` peers that
                            detect the broadcast will start replication to fetch them. <br />
                            However, this should not be enabled if you want more secrecy.
</small>
</p>
</details>
{/if}
</div>
</div>
<div>
<h2>Peers</h2>
<table class="peers">
<thead>
<tr>
<th>Name</th>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
{#each peers as peer}
<PeerStatusRow peerStatus={peer}></PeerStatusRow>
{/each}
</tbody>
</table>
</div>
</article>
<style>
article {
max-width: 100%;
}
article p {
user-select: text;
-webkit-user-select: text;
}
h2 {
margin-top: var(--size-4-1);
margin-bottom: var(--size-4-1);
padding-bottom: var(--size-4-1);
border-bottom: 1px solid var(--background-modifier-border);
}
label.is-dirty {
background-color: var(--background-modifier-error);
}
input {
background-color: transparent;
}
th {
/* display: flex;
justify-content: center;
align-items: center; */
min-height: var(--input-height);
}
td {
min-height: var(--input-height);
}
td > label {
display: flex;
flex-direction: row;
align-items: center;
justify-content: flex-start;
min-height: var(--input-height);
}
td > label > * {
margin: auto var(--size-4-1);
}
table.peers {
width: 100%;
}
.important {
color: var(--text-error);
font-size: 1.2em;
font-weight: bold;
}
.important-sub {
color: var(--text-warning);
}
.settings label {
display: flex;
flex-direction: row;
align-items: center;
justify-content: flex-start;
flex-wrap: wrap;
}
</style>

View File

@@ -0,0 +1,198 @@
import { Menu, WorkspaceLeaf } from "obsidian";
import ReplicatorPaneComponent from "./P2PReplicatorPane.svelte";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
import { mount } from "svelte";
import { SvelteItemView } from "../../../common/SvelteItemView.ts";
import { eventHub } from "../../../common/events.ts";
import { unique } from "octagonal-wheels/collection";
import { LOG_LEVEL_NOTICE, REMOTE_P2P } from "../../../lib/src/common/types.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { P2PReplicator } from "../CmdP2PReplicator.ts";
import {
EVENT_P2P_PEER_SHOW_EXTRA_MENU,
type PeerStatus,
} from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon.ts";
export const VIEW_TYPE_P2P = "p2p-replicator";
function addToList(item: string, list: string) {
return unique(
list
.split(",")
.map((e) => e.trim())
.concat(item)
.filter((p) => p)
).join(",");
}
function removeFromList(item: string, list: string) {
return list
.split(",")
.map((e) => e.trim())
.filter((p) => p !== item)
.filter((p) => p)
.join(",");
}
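// For illustration (hypothetical names): these helpers maintain the comma-separated peer
// lists stored in settings, trimming blanks and de-duplicating entries.
//   addToList("ipad", "iphone-16, macbook") === "iphone-16,macbook,ipad"
//   removeFromList("iphone-16", "iphone-16,macbook") === "macbook"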
export class P2PReplicatorPaneView extends SvelteItemView {
plugin: ObsidianLiveSyncPlugin;
icon = "waypoints";
title: string = "";
navigation = false;
getIcon(): string {
return "waypoints";
}
get replicator() {
const r = this.plugin.getAddOn<P2PReplicator>(P2PReplicator.name);
if (!r || !r._replicatorInstance) {
throw new Error("Replicator not found");
}
return r._replicatorInstance;
}
async replicateFrom(peer: PeerStatus) {
await this.replicator.replicateFrom(peer.peerId);
}
async replicateTo(peer: PeerStatus) {
await this.replicator.requestSynchroniseToPeer(peer.peerId);
}
async getRemoteConfig(peer: PeerStatus) {
Logger(
`Requesting remote config for ${peer.name}. Please input the passphrase on the remote device`,
LOG_LEVEL_NOTICE
);
const remoteConfig = await this.replicator.getRemoteConfig(peer.peerId);
if (remoteConfig) {
Logger(`Remote config for ${peer.name} is retrieved successfully`);
const DROP = "Yes, and drop local database";
const KEEP = "Yes, but keep local database";
const CANCEL = "No, cancel";
const yn = await this.plugin.confirm.askSelectStringDialogue(
                `Do you really want to apply the remote config? This will overwrite your current configuration immediately and restart.
You can also drop the local database to rebuild it from the remote device.`,
[DROP, KEEP, CANCEL] as const,
{
defaultAction: CANCEL,
title: "Apply Remote Config ",
}
);
if (yn === DROP || yn === KEEP) {
if (yn === DROP) {
if (remoteConfig.remoteType !== REMOTE_P2P) {
const yn2 = await this.plugin.confirm.askYesNoDialog(
`Do you want to set the remote type to "P2P Sync" to rebuild by "P2P replication"?`,
{
title: "Rebuild from remote device",
}
);
if (yn2 === "yes") {
remoteConfig.remoteType = REMOTE_P2P;
remoteConfig.P2P_RebuildFrom = peer.name;
}
}
}
this.plugin.settings = remoteConfig;
await this.plugin.saveSettings();
if (yn === DROP) {
await this.plugin.rebuilder.scheduleFetch();
} else {
this.plugin.services.appLifecycle.scheduleRestart();
}
} else {
Logger(`Cancelled\nRemote config for ${peer.name} is not applied`, LOG_LEVEL_NOTICE);
}
} else {
Logger(`Cannot retrieve remote config for ${peer.peerId}`);
}
}
async toggleProp(peer: PeerStatus, prop: "syncOnConnect" | "watchOnConnect" | "syncOnReplicationCommand") {
const settingMap = {
syncOnConnect: "P2P_AutoSyncPeers",
watchOnConnect: "P2P_AutoWatchPeers",
syncOnReplicationCommand: "P2P_SyncOnReplication",
} as const;
const targetSetting = settingMap[prop];
        if (peer[prop]) {
            this.plugin.settings[targetSetting] = removeFromList(peer.name, this.plugin.settings[targetSetting]);
        } else {
            this.plugin.settings[targetSetting] = addToList(peer.name, this.plugin.settings[targetSetting]);
        }
        await this.plugin.saveSettings();
}
m?: Menu;
constructor(leaf: WorkspaceLeaf, plugin: ObsidianLiveSyncPlugin) {
super(leaf);
this.plugin = plugin;
eventHub.onEvent(EVENT_P2P_PEER_SHOW_EXTRA_MENU, ({ peer, event }) => {
if (this.m) {
this.m.hide();
}
this.m = new Menu()
.addItem((item) => item.setTitle("📥 Only Fetch").onClick(() => this.replicateFrom(peer)))
.addItem((item) => item.setTitle("📤 Only Send").onClick(() => this.replicateTo(peer)))
.addSeparator()
.addItem((item) => {
item.setTitle("🔧 Get Configuration").onClick(async () => {
await this.getRemoteConfig(peer);
});
})
.addSeparator()
.addItem((item) => {
const mark = peer.syncOnConnect ? "checkmark" : null;
item.setTitle("Toggle Sync on connect")
.onClick(async () => {
await this.toggleProp(peer, "syncOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.watchOnConnect ? "checkmark" : null;
item.setTitle("Toggle Watch on connect")
.onClick(async () => {
await this.toggleProp(peer, "watchOnConnect");
})
.setIcon(mark);
})
.addItem((item) => {
const mark = peer.syncOnReplicationCommand ? "checkmark" : null;
item.setTitle("Toggle Sync on `Replicate now` command")
.onClick(async () => {
await this.toggleProp(peer, "syncOnReplicationCommand");
})
.setIcon(mark);
});
this.m.showAtPosition({ x: event.x, y: event.y });
});
}
getViewType() {
return VIEW_TYPE_P2P;
}
getDisplayText() {
return "Peer-to-Peer Replicator";
}
override async onClose(): Promise<void> {
await super.onClose();
if (this.m) {
this.m.hide();
}
}
instantiateComponent(target: HTMLElement) {
const cmdSync = this.plugin.getAddOn<P2PReplicator>(P2PReplicator.name);
if (!cmdSync) {
throw new Error("Replicator not found");
}
return mount(ReplicatorPaneComponent, {
target: target,
props: {
plugin: cmdSync.plugin,
cmdSync: cmdSync,
},
});
}
}

View File

@@ -0,0 +1,259 @@
<script lang="ts">
import { getContext } from "svelte";
import { AcceptedStatus, type PeerStatus } from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
import type { P2PReplicator } from "../CmdP2PReplicator";
import { eventHub } from "../../../common/events";
import { EVENT_P2P_PEER_SHOW_EXTRA_MENU } from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
interface Props {
peerStatus: PeerStatus;
}
let { peerStatus }: Props = $props();
let peer = $derived(peerStatus);
function select<T extends string | number | symbol, U>(d: T, cond: Record<T, U>): U;
function select<T extends string | number | symbol, U, V>(d: T, cond: Record<T, U>, def: V): U | V;
function select<T extends string | number | symbol, U>(d: T, cond: Record<T, U>, def?: U): U | undefined {
return d in cond ? cond[d] : def;
}
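    // For illustration: select("a", { a: 1 }, 0) === 1, while select("x", { a: 1 }, 0) === 0 (the default).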
let statusChips = $derived.by(() =>
[
peer.isWatching ? ["WATCHING"] : [],
peer.isFetching ? ["FETCHING"] : [],
peer.isSending ? ["SENDING"] : [],
].flat()
);
let acceptedStatusChip = $derived.by(() =>
select(
peer.accepted.toString(),
{
[AcceptedStatus.ACCEPTED]: "ACCEPTED",
[AcceptedStatus.ACCEPTED_IN_SESSION]: "ACCEPTED (in session)",
[AcceptedStatus.DENIED_IN_SESSION]: "DENIED (in session)",
[AcceptedStatus.DENIED]: "DENIED",
[AcceptedStatus.UNKNOWN]: "NEW",
},
""
)
);
const classList = {
["SENDING"]: "connected",
["FETCHING"]: "connected",
["WATCHING"]: "connected-live",
["WAITING"]: "waiting",
["ACCEPTED"]: "accepted",
["DENIED"]: "denied",
["NEW"]: "unknown",
};
let isAccepted = $derived.by(
() => peer.accepted === AcceptedStatus.ACCEPTED || peer.accepted === AcceptedStatus.ACCEPTED_IN_SESSION
);
let isDenied = $derived.by(
() => peer.accepted === AcceptedStatus.DENIED || peer.accepted === AcceptedStatus.DENIED_IN_SESSION
);
let isNew = $derived.by(() => peer.accepted === AcceptedStatus.UNKNOWN);
function makeDecision(isAccepted: boolean, isTemporary: boolean) {
cmdReplicator._replicatorInstance?.server?.makeDecision({
peerId: peer.peerId,
name: peer.name,
decision: isAccepted,
isTemporary: isTemporary,
});
}
function revokeDecision() {
cmdReplicator._replicatorInstance?.server?.revokeDecision({
peerId: peer.peerId,
name: peer.name,
});
}
const cmdReplicator = getContext<() => P2PReplicator>("getReplicator")();
const replicator = cmdReplicator._replicatorInstance!;
const peerAttrLabels = $derived.by(() => {
const attrs = [];
if (peer.syncOnConnect) {
attrs.push("✔ SYNC");
}
if (peer.watchOnConnect) {
attrs.push("✔ WATCH");
}
if (peer.syncOnReplicationCommand) {
attrs.push("✔ SELECT");
}
return attrs;
});
function startWatching() {
replicator.watchPeer(peer.peerId);
}
function stopWatching() {
replicator.unwatchPeer(peer.peerId);
}
function sync() {
replicator.sync(peer.peerId, false);
}
function moreMenu(evt: MouseEvent) {
eventHub.emitEvent(EVENT_P2P_PEER_SHOW_EXTRA_MENU, { peer, event: evt });
}
</script>
<tr>
<td>
<div class="info">
<div class="row name">
<span class="peername">{peer.name}</span>
</div>
<div class="row peer-id">
<span class="peerid">({peer.peerId})</span>
</div>
</div>
<div class="status-chips">
<div class="row">
<span class="chip {select(acceptedStatusChip, classList)}">{acceptedStatusChip}</span>
</div>
{#if isAccepted}
<div class="row">
{#each statusChips as chip}
<span class="chip {select(chip, classList)}">{chip}</span>
{/each}
</div>
{/if}
<div class="row">
{#each peerAttrLabels as attr}
<span class="chip attr">{attr}</span>
{/each}
</div>
</div>
</td>
<td>
<div class="buttons">
<div class="row">
{#if isNew}
{#if !isAccepted}
<button class="button" onclick={() => makeDecision(true, true)}>Accept in session</button>
<button class="button mod-cta" onclick={() => makeDecision(true, false)}>Accept</button>
{/if}
{#if !isDenied}
<button class="button" onclick={() => makeDecision(false, true)}>Deny in session</button>
<button class="button mod-warning" onclick={() => makeDecision(false, false)}>Deny</button>
{/if}
{:else}
<button class="button mod-warning" onclick={() => revokeDecision()}>Revoke</button>
{/if}
</div>
</div>
</td>
<td>
{#if isAccepted}
<div class="buttons">
<div class="row">
<button class="button" onclick={sync} disabled={peer.isSending || peer.isFetching}>🔄</button>
<!-- <button class="button" onclick={replicateFrom} disabled={peer.isFetching}>📥</button>
<button class="button" onclick={replicateTo} disabled={peer.isSending}>📤</button> -->
{#if peer.isWatching}
<button class="button" onclick={stopWatching}>Stop ⚡</button>
{:else}
<button class="button" onclick={startWatching} title="live"></button>
{/if}
<button class="button" onclick={moreMenu}>...</button>
</div>
</div>
{/if}
</td>
</tr>
<style>
tr:nth-child(odd) {
background-color: var(--background-primary-alt);
}
.info {
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
padding: var(--size-4-1) var(--size-4-1);
}
.peer-id {
font-size: 0.8em;
}
.status-chips {
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
/* min-width: 10em; */
}
.buttons {
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
}
.buttons .row {
display: flex;
justify-content: center;
align-items: center;
flex-wrap: wrap;
/* padding: var(--size-4-1) var(--size-4-1); */
}
.chip {
display: inline-block;
padding: 4px 8px;
margin: 4px;
border-radius: 4px;
font-size: 0.75em;
font-weight: bold;
background-color: var(--tag-background);
border: var(--tag-border-width) solid var(--tag-border-color);
}
.chip.connected {
background-color: var(--background-modifier-success);
color: var(--text-normal);
}
.chip.connected-live {
background-color: var(--background-modifier-success);
border-color: var(--background-modifier-success);
color: var(--text-normal);
}
.chip.accepted {
background-color: var(--background-modifier-success);
color: var(--text-normal);
}
.chip.waiting {
background-color: var(--background-secondary);
}
.chip.unknown {
background-color: var(--background-primary);
color: var(--text-warning);
}
.chip.denied {
background-color: var(--background-modifier-error);
color: var(--text-error);
}
.chip.attr {
background-color: var(--background-secondary);
}
.button {
margin: var(--size-4-1);
}
.button.affirmative {
background-color: var(--interactive-accent);
color: var(--text-normal);
}
.button.affirmative:hover {
background-color: var(--interactive-accent-hover);
}
.button.negative {
background-color: var(--background-modifier-error);
color: var(--text-error);
}
.button.negative:hover {
background-color: var(--background-modifier-error-hover);
}
</style>

Submodule src/lib updated: f0253a8548...a8210ce046

File diff suppressed because it is too large

View File

@@ -0,0 +1,197 @@
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import type { LOG_LEVEL } from "../lib/src/common/types";
import type { LiveSyncCore } from "../main";
import { __$checkInstanceBinding } from "../lib/src/dev/checks";
// import { unique } from "octagonal-wheels/collection";
// import type { IObsidianModule } from "./AbstractObsidianModule.ts";
// import type {
// ICoreModuleBase,
// AllInjectableProps,
// AllExecuteProps,
// EveryExecuteProps,
// AnyExecuteProps,
// ICoreModule,
// } from "./ModuleTypes";
// function isOverridableKey(key: string): key is keyof ICoreModuleBase {
// return key.startsWith("$");
// }
// function isInjectableKey(key: string): key is keyof AllInjectableProps {
// return key.startsWith("$$");
// }
// function isAllExecuteKey(key: string): key is keyof AllExecuteProps {
// return key.startsWith("$all");
// }
// function isEveryExecuteKey(key: string): key is keyof EveryExecuteProps {
// return key.startsWith("$every");
// }
// function isAnyExecuteKey(key: string): key is keyof AnyExecuteProps {
// return key.startsWith("$any");
// }
/**
 * All $-prefixed functions are hooked by the modules. Be careful when calling them directly.
 * Please refer to each module's source code to understand the function.
 * $$ : Completely overridden functions.
 * $all : Process all modules and return all results.
 * $every : Process all modules until the first failure.
 * $any : Process all modules until the first success.
 * $ : Other interception points. You should assign the module's handler manually.
 * All of the above are wired up by the injectModules function.
 */
// export function injectModules<T extends ICoreModule>(target: T, modules: ICoreModule[]) {
// const allKeys = unique([
// ...Object.keys(Object.getOwnPropertyDescriptors(target)),
// ...Object.keys(Object.getOwnPropertyDescriptors(Object.getPrototypeOf(target))),
// ]).filter((e) => e.startsWith("$")) as (keyof ICoreModule)[];
// const moduleMap = new Map<string, IObsidianModule[]>();
// for (const module of modules) {
// for (const key of allKeys) {
// if (isOverridableKey(key)) {
// if (key in module) {
// const list = moduleMap.get(key) || [];
// if (typeof module[key] === "function") {
// module[key] = module[key].bind(module) as any;
// }
// list.push(module);
// moduleMap.set(key, list);
// }
// }
// }
// }
// Logger(`Injecting modules for ${target.constructor.name}`, LOG_LEVEL_VERBOSE);
// for (const key of allKeys) {
// const modules = moduleMap.get(key) || [];
// if (isInjectableKey(key)) {
// if (modules.length == 0) {
// throw new Error(`No module injected for ${key}. This is a fatal error.`);
// }
// target[key] = modules[0][key]! as any;
// Logger(`[${modules[0].constructor.name}]: Injected ${key} `, LOG_LEVEL_VERBOSE);
// } else if (isAllExecuteKey(key)) {
// const modules = moduleMap.get(key) || [];
// target[key] = async (...args: any) => {
// for (const module of modules) {
// try {
// //@ts-ignore
// await module[key]!(...args);
// } catch (ex) {
// Logger(`[${module.constructor.name}]: All handler for ${key} failed`, LOG_LEVEL_VERBOSE);
// Logger(ex, LOG_LEVEL_VERBOSE);
// }
// }
// return true;
// };
// for (const module of modules) {
// Logger(`[${module.constructor.name}]: Injected (All) ${key} `, LOG_LEVEL_VERBOSE);
// }
// } else if (isEveryExecuteKey(key)) {
// target[key] = async (...args: any) => {
// for (const module of modules) {
// try {
// //@ts-ignore:2556
// const ret = await module[key]!(...args);
// if (ret !== undefined && !ret) {
// // Failed then return that falsy value.
// return ret;
// }
// } catch (ex) {
// Logger(`[${module.constructor.name}]: Every handler for ${key} failed`);
// Logger(ex, LOG_LEVEL_VERBOSE);
// }
// }
// return true;
// };
// for (const module of modules) {
// Logger(`[${module.constructor.name}]: Injected (Every) ${key} `, LOG_LEVEL_VERBOSE);
// }
// } else if (isAnyExecuteKey(key)) {
// //@ts-ignore
// target[key] = async (...args: any[]) => {
// for (const module of modules) {
// try {
// //@ts-ignore:2556
// const ret = await module[key](...args);
// // If truly value returned, then return that value.
// if (ret) {
// return ret;
// }
// } catch (ex) {
// Logger(`[${module.constructor.name}]: Any handler for ${key} failed`);
// Logger(ex, LOG_LEVEL_VERBOSE);
// }
// }
// return false;
// };
// for (const module of modules) {
// Logger(`[${module.constructor.name}]: Injected (Any) ${key} `, LOG_LEVEL_VERBOSE);
// }
// } else {
// Logger(`No injected handler for ${key} `, LOG_LEVEL_VERBOSE);
// }
// }
// Logger(`Injected modules for ${target.constructor.name}`, LOG_LEVEL_VERBOSE);
// return true;
// }
export abstract class AbstractModule {
_log = (msg: any, level: LOG_LEVEL = LOG_LEVEL_INFO, key?: string) => {
if (typeof msg === "string" && level !== LOG_LEVEL_NOTICE) {
msg = `[${this.constructor.name}]\u{200A} ${msg}`;
}
// console.log(msg);
Logger(msg, level, key);
};
get localDatabase() {
return this.core.localDatabase;
}
get settings() {
return this.core.settings;
}
set settings(value) {
this.core.settings = value;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services) {
// Override if needed.
}
constructor(public core: LiveSyncCore) {
this.onBindFunction(core, core.services);
Logger(`[${this.constructor.name}] Loaded`, LOG_LEVEL_VERBOSE);
__$checkInstanceBinding(this);
}
saveSettings = this.core.saveSettings.bind(this.core);
addTestResult(key: string, value: boolean, summary?: string, message?: string) {
this.services.test.addTestResult(`${this.constructor.name}`, key, value, summary, message);
}
testDone(result: boolean = true) {
return Promise.resolve(result);
}
testFail(message: string) {
this._log(message, LOG_LEVEL_NOTICE);
return this.testDone(false);
}
async _test(key: string, process: () => Promise<any>) {
this._log(`Testing ${key}`, LOG_LEVEL_VERBOSE);
try {
const ret = await process();
if (ret !== true) {
this.addTestResult(key, false, ret.toString());
return this.testFail(`${key} failed: ${ret}`);
}
this.addTestResult(key, true, "");
} catch (ex: any) {
this.addTestResult(key, false, "Failed by Exception", ex.toString());
return this.testFail(`${key} failed: ${ex}`);
}
return this.testDone();
}
get services() {
return this.core._services;
}
}
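
// A minimal sketch (hypothetical module) of the self-test helpers above: `_test` records a pass
// only when the process resolves to `true`; any other value or an exception records a failure.
abstract class ExampleSelfTestModule extends AbstractModule {
    async runSelfTest(): Promise<boolean> {
        await this._test("always-passes", () => Promise.resolve(true));
        await this._test("explains-failure", () => Promise.resolve("reason for the failure"));
        return this.testDone();
    }
}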

View File

@@ -0,0 +1,54 @@
import { type Prettify } from "../lib/src/common/types";
import type { LiveSyncCore } from "../main";
import type ObsidianLiveSyncPlugin from "../main";
import { AbstractModule } from "./AbstractModule.ts";
import type { ChainableExecuteFunction, OverridableFunctionsKeys } from "./ModuleTypes";
export type IObsidianModuleBase = OverridableFunctionsKeys<ObsidianLiveSyncPlugin>;
export type IObsidianModule = Prettify<Partial<IObsidianModuleBase>>;
export type ModuleKeys = keyof IObsidianModule;
export type ChainableModuleProps = ChainableExecuteFunction<ObsidianLiveSyncPlugin>;
export abstract class AbstractObsidianModule extends AbstractModule {
addCommand = this.plugin.addCommand.bind(this.plugin);
registerView = this.plugin.registerView.bind(this.plugin);
addRibbonIcon = this.plugin.addRibbonIcon.bind(this.plugin);
registerObsidianProtocolHandler = this.plugin.registerObsidianProtocolHandler.bind(this.plugin);
get localDatabase() {
return this.plugin.localDatabase;
}
get settings() {
return this.plugin.settings;
}
set settings(value) {
this.plugin.settings = value;
}
get app() {
return this.plugin.app;
}
constructor(
public plugin: ObsidianLiveSyncPlugin,
public core: LiveSyncCore
) {
super(core);
}
saveSettings = this.plugin.saveSettings.bind(this.plugin);
isMainReady() {
return this.services.appLifecycle.isReady();
}
isMainSuspended() {
return this.services.appLifecycle.isSuspended();
}
isDatabaseReady() {
return this.services.database.isDatabaseReady();
}
    // Should be overridden by modules that can be disabled.
isThisModuleEnabled() {
return true;
}
}

View File

@@ -0,0 +1,55 @@
import type { Prettify } from "../lib/src/common/types";
import type { LiveSyncCore } from "../main";
export type OverridableFunctionsKeys<T> = {
[K in keyof T as K extends `$${string}` ? K : never]: T[K];
};
export type ChainableExecuteFunction<T> = {
[K in keyof T as K extends `$${string}`
? T[K] extends (...args: any) => ChainableFunctionResult
? K
: never
: never]: T[K];
};
export type ICoreModuleBase = OverridableFunctionsKeys<LiveSyncCore>;
export type ICoreModule = Prettify<Partial<ICoreModuleBase>>;
export type CoreModuleKeys = keyof ICoreModule;
export type ChainableFunctionResult =
| Promise<boolean | undefined | string>
| Promise<boolean | undefined>
| Promise<boolean>
| Promise<void>;
export type ChainableFunctionResultOrAll = Promise<boolean | undefined | string | void>;
type AllExecuteFunction<T> = {
[K in keyof T as K extends `$all${string}`
? T[K] extends (...args: any[]) => ChainableFunctionResultOrAll
? K
: never
: never]: T[K];
};
type EveryExecuteFunction<T> = {
[K in keyof T as K extends `$every${string}`
? T[K] extends (...args: any[]) => ChainableFunctionResult
? K
: never
: never]: T[K];
};
type AnyExecuteFunction<T> = {
[K in keyof T as K extends `$any${string}`
? T[K] extends (...args: any[]) => ChainableFunctionResult
? K
: never
: never]: T[K];
};
type InjectableFunction<T> = {
[K in keyof T as K extends `$$${string}` ? (T[K] extends (...args: any[]) => any ? K : never) : never]: T[K];
};
export type AllExecuteProps = AllExecuteFunction<LiveSyncCore>;
export type EveryExecuteProps = EveryExecuteFunction<LiveSyncCore>;
export type AnyExecuteProps = AnyExecuteFunction<LiveSyncCore>;
export type AllInjectableProps = InjectableFunction<LiveSyncCore>;
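To illustrate the prefix-based filtering these mapped types perform, consider a hypothetical shape (not part of the codebase):

type Example = {
    $everyOnload: () => Promise<boolean>; // `$`-prefixed: kept by OverridableFunctionsKeys
    $$showDialog: () => void;             // `$$`-prefixed keys also start with `$`, so they are kept as well
    helper: () => void;                   // no `$` prefix: dropped
};
// Resolves to { $everyOnload: () => Promise<boolean>; $$showDialog: () => void }
type Overridable = OverridableFunctionsKeys<Example>;

The `$all`/`$every`/`$any` variants narrow further by both the key prefix and the return type, so only functions returning a `ChainableFunctionResult` survive the mapping.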

View File

@@ -0,0 +1,352 @@
import { LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { EVENT_FILE_SAVED, eventHub } from "../../common/events";
import {
getDatabasePathFromUXFileInfo,
getStoragePathFromUXFileInfo,
isInternalMetadata,
markChangesAreSame,
} from "../../common/utils";
import type {
UXFileInfoStub,
FilePathWithPrefix,
UXFileInfo,
MetaEntry,
LoadedEntry,
FilePath,
SavingEntry,
DocumentID,
} from "../../lib/src/common/types";
import type { DatabaseFileAccess } from "../interfaces/DatabaseFileAccess";
import { isPlainText, shouldBeIgnored, stripAllPrefixes } from "../../lib/src/string_and_binary/path";
import {
createBlob,
createTextBlob,
delay,
determineTypeFromBlob,
isDocContentSame,
readContent,
} from "../../lib/src/common/utils";
import { serialized } from "octagonal-wheels/concurrency/lock";
import { AbstractModule } from "../AbstractModule.ts";
import { ICHeader } from "../../common/types.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleDatabaseFileAccess extends AbstractModule implements DatabaseFileAccess {
private _everyOnload(): Promise<boolean> {
this.core.databaseFileAccess = this;
return Promise.resolve(true);
}
private async _everyModuleTest(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
const testString = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam nec purus nec nunc";
// Before testing, we need to delete the file completely.
const conflicts = await this.getConflictedRevs("autoTest.md" as FilePathWithPrefix);
for (const rev of conflicts) {
await this.delete("autoTest.md" as FilePathWithPrefix, rev);
}
await this.delete("autoTest.md" as FilePathWithPrefix);
// OK, begin!
await this._test(
"storeContent",
async () => await this.storeContent("autoTest.md" as FilePathWithPrefix, testString)
);
// For the test, we need to clear the caches.
this.localDatabase.clearCaches();
await this._test("readContent", async () => {
const content = await this.fetch("autoTest.md" as FilePathWithPrefix);
if (!content) return "File not found";
if (content.deleted) return "File is deleted";
return (await content.body.text()) == testString
? true
: `Content is not the same: ${await content.body.text()}`;
});
await this._test("delete", async () => await this.delete("autoTest.md" as FilePathWithPrefix));
await this._test("read deleted content", async () => {
const content = await this.fetch("autoTest.md" as FilePathWithPrefix);
if (!content) return true;
if (content.deleted) return true;
return `Still exists!: ${await content.body.text()}, ${JSON.stringify(content, undefined, 2)}`;
});
await delay(100);
return this.testDone();
}
async checkIsTargetFile(file: UXFileInfoStub | FilePathWithPrefix): Promise<boolean> {
const path = getStoragePathFromUXFileInfo(file);
if (!(await this.services.vault.isTargetFile(path))) {
this._log(`File is not a target`, LOG_LEVEL_VERBOSE);
return false;
}
if (shouldBeIgnored(path)) {
this._log(`File should be ignored`, LOG_LEVEL_VERBOSE);
return false;
}
return true;
}
async delete(file: UXFileInfoStub | FilePathWithPrefix, rev?: string): Promise<boolean> {
if (!(await this.checkIsTargetFile(file))) {
return true;
}
const fullPath = getDatabasePathFromUXFileInfo(file);
try {
this._log(`deleteDB By path:${fullPath}`);
return await this.deleteFromDBbyPath(fullPath, rev);
} catch (ex) {
this._log(`Failed to delete ${fullPath}`);
this._log(ex, LOG_LEVEL_VERBOSE);
return false;
}
}
async createChunks(file: UXFileInfo, force: boolean = false, skipCheck?: boolean): Promise<boolean> {
return await this.__store(file, force, skipCheck, true);
}
async store(file: UXFileInfo, force: boolean = false, skipCheck?: boolean): Promise<boolean> {
return await this.__store(file, force, skipCheck, false);
}
async storeContent(path: FilePathWithPrefix, content: string): Promise<boolean> {
const blob = createTextBlob(content);
const bytes = (await blob.arrayBuffer()).byteLength;
const isInternal = path.startsWith(".") ? true : undefined;
const dummyUXFileInfo: UXFileInfo = {
name: path.split("/").pop() as string,
path: path,
stat: {
size: bytes,
ctime: Date.now(),
mtime: Date.now(),
type: "file",
},
body: blob,
isInternal,
};
return await this.__store(dummyUXFileInfo, true, false, false);
}
private async __store(
file: UXFileInfo,
force: boolean = false,
skipCheck?: boolean,
onlyChunks?: boolean
): Promise<boolean> {
if (!skipCheck) {
if (!(await this.checkIsTargetFile(file))) {
return true;
}
}
if (!file) {
this._log("File seems bad", LOG_LEVEL_VERBOSE);
return false;
}
// const path = getPathFromUXFileInfo(file);
const isPlain = isPlainText(file.name);
const possiblyLarge = !isPlain;
const content = file.body;
const datatype = determineTypeFromBlob(content);
const idPrefix = file.isInternal ? ICHeader : "";
const fullPath = getStoragePathFromUXFileInfo(file);
const fullPathOnDB = getDatabasePathFromUXFileInfo(file);
if (possiblyLarge) this._log(`Processing: ${fullPath}`, LOG_LEVEL_VERBOSE);
// if (isInternalMetadata(fullPath)) {
// this._log(`Internal file: ${fullPath}`, LOG_LEVEL_VERBOSE);
// return false;
// }
if (file.isInternal) {
if (file.deleted) {
file.stat = {
size: 0,
ctime: Date.now(),
mtime: Date.now(),
type: "file",
};
} else if (file.stat == undefined) {
const stat = await this.core.storageAccess.statHidden(file.path);
if (!stat) {
// Whether the file was actually deleted should have been recorded before this point, so this is an unexpected case; we should raise an error.
this._log(`Internal file not found: ${fullPath}`, LOG_LEVEL_VERBOSE);
return false;
}
file.stat = stat;
}
}
const idMain = await this.services.path.path2id(fullPath);
const id = (idPrefix + idMain) as DocumentID;
const d: SavingEntry = {
_id: id,
path: fullPathOnDB,
data: content,
ctime: file.stat.ctime,
mtime: file.stat.mtime,
size: file.stat.size,
children: [],
datatype: datatype,
type: datatype,
eden: {},
};
// The upsert must run while holding the lock.
const msg = `STORAGE -> DB (${datatype}) `;
const isNotChanged = await serialized("file-" + fullPath, async () => {
if (force) {
this._log(msg + "Force writing " + fullPath, LOG_LEVEL_VERBOSE);
return false;
}
// Commented out temporarily: this checks whether we created the file ourselves.
// if (this.core.storageAccess.recentlyTouched(file)) {
// return true;
// }
try {
const old = await this.localDatabase.getDBEntry(d.path, undefined, false, true, false);
if (old !== false) {
const oldData = { data: old.data, deleted: old._deleted || old.deleted };
const newData = { data: d.data, deleted: d._deleted || d.deleted };
if (oldData.deleted != newData.deleted) return false;
if (!(await isDocContentSame(old.data, newData.data))) return false;
this._log(
msg + "Skipped (not changed) " + fullPath + (d._deleted || d.deleted ? " (deleted)" : ""),
LOG_LEVEL_VERBOSE
);
markChangesAreSame(old, d.mtime, old.mtime);
return true;
// d._rev = old._rev;
}
} catch (ex) {
this._log(
msg +
"Error, Could not check the diff for the old one." +
(force ? "force writing." : "") +
fullPath +
(d._deleted || d.deleted ? " (deleted)" : ""),
LOG_LEVEL_VERBOSE
);
this._log(ex, LOG_LEVEL_VERBOSE);
return !force;
}
return false;
});
if (isNotChanged) {
this._log(msg + " Skip " + fullPath, LOG_LEVEL_VERBOSE);
return true;
}
const ret = await this.localDatabase.putDBEntry(d, onlyChunks);
if (ret !== false) {
this._log(msg + fullPath);
eventHub.emitEvent(EVENT_FILE_SAVED);
}
return ret != false;
}
async getConflictedRevs(file: UXFileInfoStub | FilePathWithPrefix): Promise<string[]> {
if (!(await this.checkIsTargetFile(file))) {
return [];
}
const filename = getDatabasePathFromUXFileInfo(file);
const doc = await this.localDatabase.getDBEntryMeta(filename, { conflicts: true }, true);
if (doc === false) {
return [];
}
return doc._conflicts || [];
}
async fetch(
file: UXFileInfoStub | FilePathWithPrefix,
rev?: string,
waitForReady?: boolean,
skipCheck = false
): Promise<UXFileInfo | false> {
if (skipCheck && !(await this.checkIsTargetFile(file))) {
return false;
}
const entry = await this.fetchEntry(file, rev, waitForReady, true);
if (entry === false) {
return false;
}
const data = createBlob(readContent(entry));
const path = stripAllPrefixes(entry.path);
const fileInfo: UXFileInfo = {
name: path.split("/").pop() as string,
path: path,
stat: {
size: entry.size,
ctime: entry.ctime,
mtime: entry.mtime,
type: "file",
},
body: data,
deleted: entry.deleted || entry._deleted,
};
if (isInternalMetadata(entry.path)) {
fileInfo.isInternal = true;
}
return fileInfo;
}
async fetchEntryMeta(
file: UXFileInfoStub | FilePathWithPrefix,
rev?: string,
skipCheck = false
): Promise<MetaEntry | false> {
const dbFileName = getDatabasePathFromUXFileInfo(file);
if (skipCheck && !(await this.checkIsTargetFile(file))) {
return false;
}
const doc = await this.localDatabase.getDBEntryMeta(dbFileName, rev ? { rev: rev } : undefined, true);
if (doc === false) {
return false;
}
return doc as MetaEntry;
}
async fetchEntryFromMeta(
meta: MetaEntry,
waitForReady: boolean = true,
skipCheck = false
): Promise<LoadedEntry | false> {
if (skipCheck && !(await this.checkIsTargetFile(meta.path))) {
return false;
}
const doc = await this.localDatabase.getDBEntryFromMeta(meta as LoadedEntry, false, waitForReady);
if (doc === false) {
return false;
}
return doc;
}
async fetchEntry(
file: UXFileInfoStub | FilePathWithPrefix,
rev?: string,
waitForReady: boolean = true,
skipCheck = false
): Promise<LoadedEntry | false> {
if (skipCheck && !(await this.checkIsTargetFile(file))) {
return false;
}
const entry = await this.fetchEntryMeta(file, rev, true);
if (entry === false) {
return false;
}
const doc = await this.fetchEntryFromMeta(entry, waitForReady, true);
return doc;
}
async deleteFromDBbyPath(fullPath: FilePath | FilePathWithPrefix, rev?: string): Promise<boolean> {
if (!(await this.checkIsTargetFile(fullPath))) {
this._log(`storeFromStorage: File is not target: ${fullPath}`);
return true;
}
const opt = rev ? { rev: rev } : undefined;
const ret = await this.localDatabase.deleteDBEntry(fullPath, opt);
eventHub.emitEvent(EVENT_FILE_SAVED);
return ret;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnLoaded(this._everyOnload.bind(this));
services.test.handleTest(this._everyModuleTest.bind(this));
}
}
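Taken together, the interface supports a simple store/fetch/delete round trip, mirroring the self-test above. A minimal sketch (`db` is any `DatabaseFileAccess` implementation, and the path and content are placeholders):

async function roundTrip(db: DatabaseFileAccess) {
    const path = "example.md" as FilePathWithPrefix;
    // storeContent builds a dummy UXFileInfo internally and forces the write.
    await db.storeContent(path, "hello");
    const fetched = await db.fetch(path);
    if (fetched !== false && !fetched.deleted) {
        console.log(await fetched.body.text()); // "hello"
    }
    // Without a revision, delete removes the current head.
    await db.delete(path);
}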

View File

@@ -0,0 +1,440 @@
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { serialized } from "octagonal-wheels/concurrency/lock";
import type { FileEventItem } from "../../common/types";
import type {
FilePath,
FilePathWithPrefix,
MetaEntry,
UXFileInfo,
UXFileInfoStub,
UXInternalFileInfoStub,
} from "../../lib/src/common/types";
import { AbstractModule } from "../AbstractModule.ts";
import {
compareFileFreshness,
EVEN,
getPath,
getPathWithoutPrefix,
getStoragePathFromUXFileInfo,
markChangesAreSame,
} from "../../common/utils";
import { getDocDataAsArray, isDocContentSame, readAsBlob, readContent } from "../../lib/src/common/utils";
import { shouldBeIgnored } from "../../lib/src/string_and_binary/path";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
import { eventHub } from "../../common/events.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleFileHandler extends AbstractModule {
get db() {
return this.core.databaseFileAccess;
}
get storage() {
return this.core.storageAccess;
}
_everyOnloadStart(): Promise<boolean> {
this.core.fileHandler = this;
return Promise.resolve(true);
}
async readFileFromStub(file: UXFileInfoStub | UXFileInfo) {
if ("body" in file && file.body) {
return file;
}
const readFile = await this.storage.readStubContent(file);
if (!readFile) {
throw new Error(`File ${file.path} does not exist on the storage`);
}
return readFile;
}
async storeFileToDB(
info: UXFileInfoStub | UXFileInfo | UXInternalFileInfoStub | FilePathWithPrefix,
force: boolean = false,
onlyChunks: boolean = false
): Promise<boolean> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
if (file == null) {
this._log(`File ${info} does not exist on the storage`, LOG_LEVEL_VERBOSE);
return false;
}
// const file = item.args.file;
if (file.isInternal) {
this._log(
`Internal file ${file.path} must not be processed by processFileEvent`,
LOG_LEVEL_VERBOSE
);
return false;
}
// First, check the file on the database
const entry = await this.db.fetchEntry(file, undefined, true, true);
if (!entry || entry.deleted || entry._deleted) {
// If the file does not exist on the database, it should be created.
const readFile = await this.readFileFromStub(file);
if (!onlyChunks) {
return await this.db.store(readFile);
} else {
return await this.db.createChunks(readFile, false, true);
}
}
// The entry exists on the database; check the difference between the file and the entry.
let shouldApplied = false;
if (!force && !onlyChunks) {
// 1. If the timestamps differ significantly, the entry should be updated.
// Note: This checks only the mtime, with the resolution reduced to 2 seconds.
// The 2 seconds accommodate the ZIP file's mtime resolution; otherwise, we could not back up the vault as a ZIP file.
// This is hardcoded in `compareMtime` of `src/common/utils.ts`.
if (compareFileFreshness(file, entry) !== EVEN) {
shouldApplied = true;
}
// 2. If not, the content should be checked.
let readFile: UXFileInfo | undefined = undefined;
if (!shouldApplied) {
readFile = await this.readFileFromStub(file);
if (!readFile) {
this._log(`File ${file.path} does not exist on the storage`, LOG_LEVEL_NOTICE);
return false;
}
if (await isDocContentSame(getDocDataAsArray(entry.data), readFile.body)) {
// The timestamp differs but the content is the same; therefore, the two timestamps should be treated as equal.
// So, mark the changes as the same.
markChangesAreSame(readFile, readFile.stat.mtime, entry.mtime);
} else {
shouldApplied = true;
}
}
if (!shouldApplied) {
this._log(`File ${file.path} is unchanged`, LOG_LEVEL_VERBOSE);
return true;
}
if (!readFile) readFile = await this.readFileFromStub(file);
// If the file is changed, then the file should be stored.
if (onlyChunks) {
return await this.db.createChunks(readFile, false, true);
} else {
return await this.db.store(readFile, false, true);
}
} else {
// If force is true, then it should be updated.
const readFile = await this.readFileFromStub(file);
if (onlyChunks) {
return await this.db.createChunks(readFile, true, true);
} else {
return await this.db.store(readFile, true, true);
}
}
}
async deleteFileFromDB(info: UXFileInfoStub | UXInternalFileInfoStub | FilePath): Promise<boolean> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
if (file == null) {
this._log(`File ${info} does not exist on the storage`, LOG_LEVEL_VERBOSE);
return false;
}
// const file = item.args.file;
if (file.isInternal) {
this._log(
`Internal file ${file.path} must not be processed by processFileEvent`,
LOG_LEVEL_VERBOSE
);
return false;
}
// First, check the file on the database
const entry = await this.db.fetchEntry(file, undefined, true, true);
if (!entry || entry.deleted || entry._deleted) {
this._log(`File ${file.path} does not exist on the database or is already deleted`, LOG_LEVEL_VERBOSE);
return false;
}
// Check whether the file is already conflicted; if so, only the conflicted revision should be deleted.
const conflictedRevs = await this.db.getConflictedRevs(file);
if (conflictedRevs.length > 0) {
// If conflicted, only that revision should be deleted; entry._rev should be this file's own revision.
// TODO: I BELIEVED SO, BUT I NOTICED THAT I AM NOT SURE. I SHOULD CHECK THIS.
// ANYWAY, THE FILE SHOULD BE DELETED. PREVIOUS VERSIONS SIMPLY DELETED THE FILE OUTRIGHT.
return await this.db.delete(file, entry._rev);
}
// Otherwise, the file should be deleted simply. This is the previous behaviour.
return await this.db.delete(file);
}
async deleteRevisionFromDB(
info: UXFileInfoStub | FilePath | FilePathWithPrefix,
rev: string
): Promise<boolean | undefined> {
//TODO: Possibly check the conflicting.
return await this.db.delete(info, rev);
}
async resolveConflictedByDeletingRevision(
info: UXFileInfoStub | FilePath,
rev: string
): Promise<boolean | undefined> {
const path = getStoragePathFromUXFileInfo(info);
if (!(await this.deleteRevisionFromDB(info, rev))) {
this._log(`Failed to delete the conflicted revision ${rev} of ${path}`, LOG_LEVEL_VERBOSE);
return false;
}
if (!(await this.dbToStorageWithSpecificRev(info, rev, true))) {
this._log(`Failed to apply the resolved revision ${rev} of ${path} to the storage`, LOG_LEVEL_VERBOSE);
return false;
}
}
async dbToStorageWithSpecificRev(
info: UXFileInfoStub | UXFileInfo | FilePath | null,
rev: string,
force?: boolean
): Promise<boolean> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
if (file == null) {
this._log(`File ${info} does not exist on the storage`, LOG_LEVEL_VERBOSE);
return false;
}
const docEntry = await this.db.fetchEntryMeta(file, rev, true);
if (!docEntry) {
this._log(`File ${file.path} does not exist on the database`, LOG_LEVEL_VERBOSE);
return false;
}
return await this.dbToStorage(docEntry, file, force);
}
async dbToStorage(
entryInfo: MetaEntry | FilePathWithPrefix,
info: UXFileInfoStub | UXFileInfo | FilePath | null,
force?: boolean
): Promise<boolean> {
const file = typeof info === "string" ? this.storage.getFileStub(info) : info;
const mode = file == null ? "create" : "modify";
const pathFromEntryInfo = typeof entryInfo === "string" ? entryInfo : getPath(entryInfo);
const docEntry = await this.db.fetchEntryMeta(pathFromEntryInfo, undefined, true);
if (!docEntry) {
this._log(`File ${pathFromEntryInfo} does not exist on the database`, LOG_LEVEL_VERBOSE);
return false;
}
const path = getPath(docEntry);
// 1. Check if it already conflicted.
const revs = await this.db.getConflictedRevs(path);
if (revs.length > 0) {
// Some conflicts exist.
if (this.settings.writeDocumentsIfConflicted) {
// If configured to write the document even if conflicted, then it should be written.
// NO OP
} else {
// If not, it should be checked, and will be processed later (i.e., after the conflict is resolved).
await this.services.conflict.queueCheckForIfOpen(path);
return true;
}
}
// 2. Check whether the file already exists on the storage.
const existDoc = this.storage.getStub(path);
if (existDoc && existDoc.isFolder) {
this._log(`${path} already exists on the storage as a folder`, LOG_LEVEL_VERBOSE);
// We can do nothing here, and other modules have nothing to do either.
return true;
}
// Check existence of both file and docEntry.
const existOnDB = !(docEntry._deleted || docEntry.deleted || false);
const existOnStorage = existDoc != null;
if (!existOnDB && !existOnStorage) {
this._log(`File ${path} seems to be deleted, and is already absent from the storage`, LOG_LEVEL_VERBOSE);
return true;
}
if (!existOnDB && existOnStorage) {
// The deletion has been transferred; the storage file will be deleted.
// Note: If the folder becomes empty, it will be deleted unless configured to be kept.
// This behaviour is implemented in `ModuleFileAccessObsidian`,
// and it does not care whether the item was actually deleted.
await this.storage.deleteVaultItem(path);
return true;
}
// Okay, the file exists on the database. Let's check whether it exists on the storage.
const docRead = await this.db.fetchEntryFromMeta(docEntry);
if (!docRead) {
this._log(`File ${path} does not exist on the database`, LOG_LEVEL_VERBOSE);
return false;
}
// To process size-mismatched files (e.g., files created by some integrations), enable the toggle.
if (!this.settings.processSizeMismatchedFiles) {
// Check that the file is not corrupted
// (zero size is a special case: it may be created by some APIs, and it might be acceptable).
if (docRead.size != 0 && docRead.size !== readAsBlob(docRead).size) {
this._log(`File ${path} seems to be corrupted! Writing prevented.`, LOG_LEVEL_NOTICE);
return false;
}
}
const docData = readContent(docRead);
if (existOnStorage && !force) {
// The file exists on the storage. Let's check the difference between the file and the entry.
// But if force is true, it should be updated without comparison.
// OK, we have to compare.
let shouldApplied = false;
// 1. If the timestamps differ significantly, the file should be updated.
// Note: This checks only the mtime, with the resolution reduced to 2 seconds.
// The 2 seconds accommodate the ZIP file's mtime resolution; otherwise, we could not back up the vault as a ZIP file.
// This is hardcoded in `compareMtime` of `src/common/utils.ts`.
if (compareFileFreshness(existDoc, docEntry) !== EVEN) {
shouldApplied = true;
}
// 2. if not, the content should be checked.
if (!shouldApplied) {
const readFile = await this.readFileFromStub(existDoc);
if (await isDocContentSame(docData, readFile.body)) {
// The content is the same, so we do not need to update the file.
shouldApplied = false;
// The timestamp differs but the content is the same; therefore, the two timestamps should be treated as equal.
// So, mark the changes as the same.
markChangesAreSame(docRead, docRead.mtime, existDoc.stat.mtime);
} else {
shouldApplied = true;
}
}
if (!shouldApplied) {
this._log(`File ${docRead.path} is unchanged`, LOG_LEVEL_VERBOSE);
return true;
}
// Let's apply the changes.
} else {
this._log(
`File ${docRead.path} ${existOnStorage ? "(new) " : ""} ${force ? " (forced)" : ""}`,
LOG_LEVEL_VERBOSE
);
}
await this.storage.ensureDir(path);
const ret = await this.storage.writeFileAuto(path, docData, { ctime: docRead.ctime, mtime: docRead.mtime });
await this.storage.touched(path);
this.storage.triggerFileEvent(mode, path);
return ret;
}
private async _anyHandlerProcessesFileEvent(item: FileEventItem): Promise<boolean> {
const eventItem = item.args;
const type = item.type;
const path = eventItem.file.path;
if (!(await this.services.vault.isTargetFile(path))) {
this._log(`File ${path} is not the target file`, LOG_LEVEL_VERBOSE);
return false;
}
if (shouldBeIgnored(path)) {
this._log(`File ${path} should be ignored`, LOG_LEVEL_VERBOSE);
return false;
}
const lockKey = `processFileEvent-${path}`;
return await serialized(lockKey, async () => {
switch (type) {
case "CREATE":
case "CHANGED":
return await this.storeFileToDB(item.args.file);
case "DELETE":
return await this.deleteFileFromDB(item.args.file);
case "INTERNAL":
// This should be handled by another module.
return false;
default:
this._log(`Unsupported event type: ${type}`, LOG_LEVEL_VERBOSE);
return false;
}
});
}
async _anyProcessReplicatedDoc(entry: MetaEntry): Promise<boolean> {
return await serialized(entry.path, async () => {
if (!(await this.services.vault.isTargetFile(entry.path))) {
this._log(`File ${entry.path} is not the target file`, LOG_LEVEL_VERBOSE);
return false;
}
if (this.services.vault.isFileSizeTooLarge(entry.size)) {
this._log(`File ${entry.path} is too large (on database) to be processed`, LOG_LEVEL_VERBOSE);
return false;
}
if (shouldBeIgnored(entry.path)) {
this._log(`File ${entry.path} should be ignored`, LOG_LEVEL_VERBOSE);
return false;
}
const path = getPath(entry);
const targetFile = this.storage.getStub(getPathWithoutPrefix(entry));
if (targetFile && targetFile.isFolder) {
this._log(`${getPath(entry)} already exists as a folder`);
// Nothing to do here, and other modules have nothing to do either.
return true;
} else {
if (targetFile && this.services.vault.isFileSizeTooLarge(targetFile.stat.size)) {
this._log(`File ${targetFile.path} is too large (on storage) to be processed`, LOG_LEVEL_VERBOSE);
return false;
}
this._log(
`Processing ${path} (${entry._id.substring(0, 8)} :${entry._rev?.substring(0, 5)}) : Started...`,
LOG_LEVEL_VERBOSE
);
// Before writing (or skipping), the merging dialogue should be cancelled.
eventHub.emitEvent("conflict-cancelled", path);
const ret = await this.dbToStorage(entry, targetFile);
this._log(`Processing ${path} (${entry._id.substring(0, 8)} :${entry._rev?.substring(0, 5)}) : Done`);
return ret;
}
});
}
async createAllChunks(showingNotice?: boolean): Promise<void> {
this._log("Collecting local files on the storage", LOG_LEVEL_VERBOSE);
const semaphore = Semaphore(10);
let processed = 0;
const filesStorageSrc = this.storage.getFiles();
const incProcessed = () => {
processed++;
if (processed % 25 == 0)
this._log(
`Creating missing chunks: ${processed} of ${total} files`,
showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO,
"chunkCreation"
);
};
const total = filesStorageSrc.length;
const procAllChunks = filesStorageSrc.map(async (file) => {
if (!(await this.services.vault.isTargetFile(file))) {
incProcessed();
return true;
}
if (this.services.vault.isFileSizeTooLarge(file.stat.size)) {
incProcessed();
return true;
}
if (shouldBeIgnored(file.path)) {
incProcessed();
return true;
}
const release = await semaphore.acquire();
incProcessed();
try {
await this.storeFileToDB(file, false, true);
} catch (ex) {
this._log(ex, LOG_LEVEL_VERBOSE);
} finally {
release();
}
});
await Promise.all(procAllChunks);
this._log(
`Creating chunks Done: ${processed} of ${total} files`,
showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO,
"chunkCreation"
);
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
services.fileProcessing.handleProcessFileEvent(this._anyHandlerProcessesFileEvent.bind(this));
services.replication.handleProcessSynchroniseResult(this._anyProcessReplicatedDoc.bind(this));
}
}
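The per-path locking used by `_anyHandlerProcessesFileEvent` can be shown in isolation. A sketch of the pattern (the key format and work callback are placeholders; `serialized` runs callers sharing a key one at a time, while different keys may proceed concurrently):

import { serialized } from "octagonal-wheels/concurrency/lock";

async function handleFileEvent(path: string, work: () => Promise<boolean>): Promise<boolean> {
    // Events for the same path are processed strictly in sequence,
    // so a CREATE cannot race a DELETE for the same file.
    return await serialized(`processFileEvent-${path}`, work);
}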

View File

@@ -0,0 +1,46 @@
import { $msg } from "../../lib/src/common/i18n";
import { LiveSyncLocalDB } from "../../lib/src/pouchdb/LiveSyncLocalDB.ts";
import { initializeStores } from "../../common/stores.ts";
import { AbstractModule } from "../AbstractModule.ts";
import { LiveSyncManagers } from "../../lib/src/managers/LiveSyncManagers.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleLocalDatabaseObsidian extends AbstractModule {
_everyOnloadStart(): Promise<boolean> {
return Promise.resolve(true);
}
private async _openDatabase(): Promise<boolean> {
if (this.localDatabase != null) {
await this.localDatabase.close();
}
const vaultName = this.services.vault.getVaultName();
this._log($msg("moduleLocalDatabase.logWaitingForReady"));
const getDB = () => this.core.localDatabase.localDatabase;
const getSettings = () => this.core.settings;
this.core.managers = new LiveSyncManagers({
get database() {
return getDB();
},
getActiveReplicator: () => this.core.replicator,
id2path: this.services.path.id2path,
// path2id: this.core.$$path2id.bind(this.core),
path2id: this.services.path.path2id,
get settings() {
return getSettings();
},
});
this.core.localDatabase = new LiveSyncLocalDB(vaultName, this.core);
initializeStores(vaultName);
return await this.localDatabase.initializeDatabase();
}
_isDatabaseReady(): boolean {
return this.localDatabase != null && this.localDatabase.isReady;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.database.handleIsDatabaseReady(this._isDatabaseReady.bind(this));
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
services.database.handleOpenDatabase(this._openDatabase.bind(this));
}
}

View File

@@ -0,0 +1,41 @@
import { PeriodicProcessor } from "../../common/utils";
import type { LiveSyncCore } from "../../main";
import { AbstractModule } from "../AbstractModule";
export class ModulePeriodicProcess extends AbstractModule {
periodicSyncProcessor = new PeriodicProcessor(this.core, async () => await this.services.replication.replicate());
disablePeriodic() {
this.periodicSyncProcessor?.disable();
return Promise.resolve(true);
}
resumePeriodic() {
this.periodicSyncProcessor.enable(
this.settings.periodicReplication ? this.settings.periodicReplicationInterval * 1000 : 0
);
return Promise.resolve(true);
}
private _allOnUnload() {
return this.disablePeriodic();
}
private _everyBeforeRealizeSetting(): Promise<boolean> {
return this.disablePeriodic();
}
private _everyBeforeSuspendProcess(): Promise<boolean> {
return this.disablePeriodic();
}
private _everyAfterResumeProcess(): Promise<boolean> {
return this.resumePeriodic();
}
private _everyAfterRealizeSetting(): Promise<boolean> {
return this.resumePeriodic();
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnUnload(this._allOnUnload.bind(this));
services.setting.handleBeforeRealiseSetting(this._everyBeforeRealizeSetting.bind(this));
services.setting.handleSettingRealised(this._everyAfterRealizeSetting.bind(this));
services.appLifecycle.handleOnSuspending(this._everyBeforeSuspendProcess.bind(this));
services.appLifecycle.handleOnResumed(this._everyAfterResumeProcess.bind(this));
}
}
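The wiring above amounts to: stop the timer before unload, suspension, or re-applying settings, and restart it afterwards. A sketch of the `PeriodicProcessor` cycle, using only the calls seen in this module (`core` and the callback are placeholders):

const processor = new PeriodicProcessor(core, async () => {
    // Work performed on each tick; this module triggers replication here.
});
// Enable with an interval in milliseconds; the module passes 0 when
// periodic replication is off, leaving the timer inactive.
processor.enable(60 * 1000);
// Stop the timer again, e.g., before unload.
processor.disable();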

View File

@@ -0,0 +1,22 @@
import { AbstractModule } from "../AbstractModule";
import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser";
import type { LiveSyncCore } from "../../main";
export class ModulePouchDB extends AbstractModule {
_createPouchDBInstance<T extends object>(
name?: string,
options?: PouchDB.Configuration.DatabaseConfiguration
): PouchDB.Database<T> {
const optionPass = options ?? {};
if (this.settings.useIndexedDBAdapter) {
optionPass.adapter = "indexeddb";
// @ts-ignore: missing type definition
optionPass.purged_infos_limit = 1;
return new PouchDB(name + "-indexeddb", optionPass);
}
return new PouchDB(name, optionPass);
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.database.handleCreatePouchDBInstance(this._createPouchDBInstance.bind(this));
}
}
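For reference, the two instantiation paths differ only in the adapter options and the database-name suffix. A minimal sketch (database names are placeholders):

import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser";

// With useIndexedDBAdapter: the explicit indexeddb adapter and a "-indexeddb" suffix.
const modern = new PouchDB("vault-indexeddb", { adapter: "indexeddb" });
// Otherwise, PouchDB selects its default adapter for the environment.
const legacy = new PouchDB("vault", {});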

View File

@@ -0,0 +1,280 @@
import { delay } from "octagonal-wheels/promises";
import {
FLAGMD_REDFLAG2_HR,
FLAGMD_REDFLAG3_HR,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
REMOTE_COUCHDB,
REMOTE_MINIO,
} from "../../lib/src/common/types.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { Rebuilder } from "../interfaces/DatabaseRebuilder.ts";
import type { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator.ts";
import { fetchAllUsedChunks } from "@/lib/src/pouchdb/chunks.ts";
import { EVENT_DATABASE_REBUILT, eventHub } from "src/common/events.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleRebuilder extends AbstractModule implements Rebuilder {
private _everyOnload(): Promise<boolean> {
this.core.rebuilder = this;
return Promise.resolve(true);
}
async $performRebuildDB(
method: "localOnly" | "remoteOnly" | "rebuildBothByThisDevice" | "localOnlyWithChunks"
): Promise<void> {
if (method == "localOnly") {
await this.$fetchLocal();
}
if (method == "localOnlyWithChunks") {
await this.$fetchLocal(true);
}
if (method == "remoteOnly") {
await this.$rebuildRemote();
}
if (method == "rebuildBothByThisDevice") {
await this.$rebuildEverything();
}
}
async informOptionalFeatures() {
await this.core.services.UI.showMarkdownDialog(
"All optional features are disabled",
`Customisation Sync and Hidden File Sync will both be disabled.
Please enable them from the settings screen after setup is complete.`,
["OK"]
);
}
async askUsingOptionalFeature(opt: { enableFetch?: boolean; enableOverwrite?: boolean }) {
if (
(await this.core.confirm.askYesNoDialog(
"Do you want to enable extra features? If you are new to Self-hosted LiveSync, try the core feature first!",
{ title: "Enable extra features", defaultOption: "No", timeout: 15 }
)) == "yes"
) {
await this.services.setting.suggestOptionalFeatures(opt);
}
}
async rebuildRemote() {
await this.services.setting.suspendExtraSync();
this.core.settings.isConfigured = true;
await this.services.setting.realiseSetting();
await this.services.remote.markLocked();
await this.services.remote.tryResetDatabase();
await this.services.remote.markLocked();
await delay(500);
// await this.askUsingOptionalFeature({ enableOverwrite: true });
await delay(1000);
await this.services.remote.replicateAllToRemote(true);
await delay(1000);
await this.services.remote.replicateAllToRemote(true, true);
await this.informOptionalFeatures();
}
$rebuildRemote(): Promise<void> {
return this.rebuildRemote();
}
async rebuildEverything() {
await this.services.setting.suspendExtraSync();
await this.askUseNewAdapter();
this.core.settings.isConfigured = true;
await this.services.setting.realiseSetting();
await this.resetLocalDatabase();
await delay(1000);
await this.services.databaseEvents.initialiseDatabase(true, true, true);
await this.services.remote.markLocked();
await this.services.remote.tryResetDatabase();
await this.services.remote.markLocked();
await delay(500);
// We do not have any other devices' data, so we do not need to ask for overwriting.
// await this.askUsingOptionalFeature({ enableOverwrite: false });
await delay(1000);
await this.services.remote.replicateAllToRemote(true);
await delay(1000);
await this.services.remote.replicateAllToRemote(true, true);
await this.informOptionalFeatures();
}
$rebuildEverything(): Promise<void> {
return this.rebuildEverything();
}
$fetchLocal(makeLocalChunkBeforeSync?: boolean, preventMakeLocalFilesBeforeSync?: boolean): Promise<void> {
return this.fetchLocal(makeLocalChunkBeforeSync, preventMakeLocalFilesBeforeSync);
}
async scheduleRebuild(): Promise<void> {
try {
await this.core.storageAccess.writeFileAuto(FLAGMD_REDFLAG2_HR, "");
} catch (ex) {
this._log("Could not create red_flag_rebuild.md", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
this.services.appLifecycle.performRestart();
}
async scheduleFetch(): Promise<void> {
try {
await this.core.storageAccess.writeFileAuto(FLAGMD_REDFLAG3_HR, "");
} catch (ex) {
this._log("Could not create red_flag_fetch.md", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
this.services.appLifecycle.performRestart();
}
private async _tryResetRemoteDatabase(): Promise<void> {
await this.core.replicator.tryResetRemoteDatabase(this.settings);
}
private async _tryCreateRemoteDatabase(): Promise<void> {
await this.core.replicator.tryCreateRemoteDatabase(this.settings);
}
private async _resetLocalDatabase(): Promise<boolean> {
this.core.storageAccess.clearTouched();
return await this.localDatabase.resetDatabase();
}
async suspendAllSync() {
this.core.settings.liveSync = false;
this.core.settings.periodicReplication = false;
this.core.settings.syncOnSave = false;
this.core.settings.syncOnEditorSave = false;
this.core.settings.syncOnStart = false;
this.core.settings.syncOnFileOpen = false;
this.core.settings.syncAfterMerge = false;
await this.services.setting.suspendExtraSync();
}
async suspendReflectingDatabase() {
if (this.core.settings.doNotSuspendOnFetching) return;
if (this.core.settings.remoteType == REMOTE_MINIO) return;
this._log(
`Suspending reflection: database and storage changes will not be reflected in each other until fetching has completely finished.`,
LOG_LEVEL_NOTICE
);
this.core.settings.suspendParseReplicationResult = true;
this.core.settings.suspendFileWatching = true;
await this.core.saveSettings();
}
async resumeReflectingDatabase() {
if (this.core.settings.doNotSuspendOnFetching) return;
if (this.core.settings.remoteType == REMOTE_MINIO) return;
this._log(`Database and storage reflection has been resumed!`, LOG_LEVEL_NOTICE);
this.core.settings.suspendParseReplicationResult = false;
this.core.settings.suspendFileWatching = false;
await this.services.vault.scanVault(true);
await this.services.replication.onBeforeReplicate(false); // TODO: Check whether this is actually needed.
await this.core.saveSettings();
}
async askUseNewAdapter() {
if (!this.core.settings.useIndexedDBAdapter) {
const message = `This database is currently configured to use the old adapter to keep compatibility. Do you want to disable it and use the latest adapter?`;
const CHOICE_YES = "Yes, disable and use latest";
const CHOICE_NO = "No, keep compatibility";
const choices = [CHOICE_YES, CHOICE_NO];
const ret = await this.core.confirm.confirmWithMessage(
"Database adapter",
message,
choices,
CHOICE_YES,
10
);
if (ret == CHOICE_YES) {
this.core.settings.useIndexedDBAdapter = true;
}
}
}
async fetchLocal(makeLocalChunkBeforeSync?: boolean, preventMakeLocalFilesBeforeSync?: boolean) {
await this.services.setting.suspendExtraSync();
await this.askUseNewAdapter();
this.core.settings.isConfigured = true;
await this.suspendReflectingDatabase();
await this.services.setting.realiseSetting();
await this.resetLocalDatabase();
await delay(1000);
await this.services.database.openDatabase();
// this.core.isReady = true;
this.services.appLifecycle.markIsReady();
if (makeLocalChunkBeforeSync) {
await this.core.fileHandler.createAllChunks(true);
} else if (!preventMakeLocalFilesBeforeSync) {
await this.services.databaseEvents.initialiseDatabase(true, true, true);
} else {
// Do not create local file entries before sync (i.e., use the remote information).
}
await this.services.remote.markResolved();
await delay(500);
await this.services.remote.replicateAllFromRemote(true);
await delay(1000);
await this.services.remote.replicateAllFromRemote(true);
await this.resumeReflectingDatabase();
await this.informOptionalFeatures();
// No longer enabled:
// await this.askUsingOptionalFeature({ enableFetch: true });
}
async fetchLocalWithRebuild() {
return await this.fetchLocal(true);
}
private async _allSuspendAllSync(): Promise<boolean> {
await this.suspendAllSync();
return true;
}
async resetLocalDatabase() {
if (this.core.settings.isConfigured && this.core.settings.additionalSuffixOfDatabaseName == "") {
// Discard the non-suffixed database
await this.services.database.resetDatabase();
}
const suffix = this.services.API.getAppID() || "";
this.core.settings.additionalSuffixOfDatabaseName = suffix;
await this.services.database.resetDatabase();
eventHub.emitEvent(EVENT_DATABASE_REBUILT);
}
async fetchRemoteChunks() {
if (
!this.core.settings.doNotSuspendOnFetching &&
this.core.settings.readChunksOnline &&
this.core.settings.remoteType == REMOTE_COUCHDB
) {
this._log(`Fetching chunks`, LOG_LEVEL_NOTICE);
const replicator = this.services.replicator.getActiveReplicator() as LiveSyncCouchDBReplicator;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(
this.settings,
this.services.API.isMobile(),
true
);
if (typeof remoteDB == "string") {
this._log(remoteDB, LOG_LEVEL_NOTICE);
} else {
await fetchAllUsedChunks(this.localDatabase.localDatabase, remoteDB.db);
}
this._log(`Fetching chunks done`, LOG_LEVEL_NOTICE);
}
}
async resolveAllConflictedFilesByNewerOnes() {
this._log(`Resolving conflicts by newer ones`, LOG_LEVEL_NOTICE);
const files = this.core.storageAccess.getFileNames();
let i = 0;
for (const file of files) {
if (i++ % 10 == 0)
this._log(
`Checking and processing ${i} / ${files.length}`,
LOG_LEVEL_NOTICE,
"resolveAllConflictedFilesByNewerOnes"
);
await this.services.conflict.resolveByNewest(file);
}
this._log(`Done!`, LOG_LEVEL_NOTICE, "resolveAllConflictedFilesByNewerOnes");
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnLoaded(this._everyOnload.bind(this));
services.database.handleResetDatabase(this._resetLocalDatabase.bind(this));
services.remote.handleTryResetDatabase(this._tryResetRemoteDatabase.bind(this));
services.remote.handleTryCreateDatabase(this._tryCreateRemoteDatabase.bind(this));
services.setting.handleSuspendAllSync(this._allSuspendAllSync.bind(this));
}
}

View File

@@ -0,0 +1,520 @@
import { fireAndForget, yieldMicrotask } from "octagonal-wheels/promises";
import type { LiveSyncLocalDB } from "../../lib/src/pouchdb/LiveSyncLocalDB";
import { AbstractModule } from "../AbstractModule";
import { Logger, LOG_LEVEL_NOTICE, LOG_LEVEL_INFO, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { isLockAcquired, shareRunningResult, skipIfDuplicated } from "octagonal-wheels/concurrency/lock";
import { balanceChunkPurgedDBs } from "@/lib/src/pouchdb/chunks";
import { purgeUnreferencedChunks } from "@/lib/src/pouchdb/chunks";
import { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator";
import { throttle } from "octagonal-wheels/function";
import { arrayToChunkedArray } from "octagonal-wheels/collection";
import {
SYNCINFO_ID,
VER,
type EntryBody,
type EntryDoc,
type EntryLeaf,
type LoadedEntry,
type MetaEntry,
type RemoteType,
} from "../../lib/src/common/types";
import { QueueProcessor } from "octagonal-wheels/concurrency/processor";
import {
getPath,
isChunk,
isValidPath,
rateLimitedSharedExecution,
scheduleTask,
updatePreviousExecutionTime,
} from "../../common/utils";
import { isAnyNote } from "../../lib/src/common/utils";
import { EVENT_FILE_SAVED, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import { $msg } from "../../lib/src/common/i18n";
import { clearHandlers } from "../../lib/src/replication/SyncParamsHandler";
import type { LiveSyncCore } from "../../main";
const KEY_REPLICATION_ON_EVENT = "replicationOnEvent";
const REPLICATION_ON_EVENT_FORECASTED_TIME = 5000;
export class ModuleReplicator extends AbstractModule {
_replicatorType?: RemoteType;
private _everyOnloadAfterLoadSettings(): Promise<boolean> {
eventHub.onEvent(EVENT_FILE_SAVED, () => {
if (this.settings.syncOnSave && !this.core.services.appLifecycle.isSuspended()) {
scheduleTask("perform-replicate-after-save", 250, () => this.services.replication.replicateByEvent());
}
});
eventHub.onEvent(EVENT_SETTING_SAVED, (setting) => {
if (this._replicatorType !== setting.remoteType) {
void this.setReplicator();
}
});
return Promise.resolve(true);
}
async setReplicator() {
const replicator = await this.services.replicator.getNewReplicator();
if (!replicator) {
this._log($msg("Replicator.Message.InitialiseFatalError"), LOG_LEVEL_NOTICE);
return false;
}
if (this.core.replicator) {
await this.core.replicator.closeReplication();
this._log("Replicator closed for changing", LOG_LEVEL_VERBOSE);
}
this.core.replicator = replicator;
this._replicatorType = this.settings.remoteType;
await yieldMicrotask();
// Clear any existing sync parameter handlers (i.e., clear the key-deriving salt).
clearHandlers();
return true;
}
_getReplicator(): LiveSyncAbstractReplicator {
return this.core.replicator;
}
_everyOnInitializeDatabase(db: LiveSyncLocalDB): Promise<boolean> {
return this.setReplicator();
}
_everyOnResetDatabase(db: LiveSyncLocalDB): Promise<boolean> {
return this.setReplicator();
}
async ensureReplicatorPBKDF2Salt(showMessage: boolean = false): Promise<boolean> {
// Checking salt
const replicator = this.services.replicator.getActiveReplicator();
if (!replicator) {
this._log($msg("Replicator.Message.InitialiseFatalError"), LOG_LEVEL_NOTICE);
return false;
}
return await replicator.ensurePBKDF2Salt(this.settings, showMessage, true);
}
async _everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
// Checking salt
if (!this.core.managers.networkManager.isOnline) {
this._log("Network is offline", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
return false;
}
// showMessage is passed as false because the failure message is shown here instead (and it is a fatal error, so there is no way to hide it).
if (!(await this.ensureReplicatorPBKDF2Salt(false))) {
Logger("Failed to initialise the encryption key, preventing replication.", LOG_LEVEL_NOTICE);
return false;
}
await this.loadQueuedFiles();
return true;
}
private async _replicate(showMessage: boolean = false): Promise<boolean | void> {
try {
updatePreviousExecutionTime(KEY_REPLICATION_ON_EVENT, REPLICATION_ON_EVENT_FORECASTED_TIME);
return await this.$$_replicate(showMessage);
} finally {
updatePreviousExecutionTime(KEY_REPLICATION_ON_EVENT);
}
}
/**
* Obsolete method: no longer maintained and will be removed in the future.
* @deprecated v0.24.17
* @param showMessage If true, show a message to the user.
*/
async cleaned(showMessage: boolean) {
Logger(`The remote database has been cleaned.`, showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
await skipIfDuplicated("cleanup", async () => {
const count = await purgeUnreferencedChunks(this.localDatabase.localDatabase, true);
const message = `The remote database has been cleaned up.
To synchronise, this device must also be cleaned up. ${count} chunk(s) will be erased from this device.
However, if there are many chunks to be deleted, fetching again may be faster.
We will lose the history of this device if we fetch the remote database again.
Even if you choose to clean up, you will see this option again if you exit Obsidian and then synchronise again.`;
const CHOICE_FETCH = "Fetch again";
const CHOICE_CLEAN = "Cleanup";
const CHOICE_DISMISS = "Dismiss";
const ret = await this.core.confirm.confirmWithMessage(
"Cleaned",
message,
[CHOICE_FETCH, CHOICE_CLEAN, CHOICE_DISMISS],
CHOICE_DISMISS,
30
);
if (ret == CHOICE_FETCH) {
await this.core.rebuilder.$performRebuildDB("localOnly");
}
if (ret == CHOICE_CLEAN) {
const replicator = this.services.replicator.getActiveReplicator();
if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(
this.settings,
this.services.API.isMobile(),
true
);
if (typeof remoteDB == "string") {
Logger(remoteDB, LOG_LEVEL_NOTICE);
return false;
}
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.clearCaches();
// Perform the synchronisation once.
if (await this.core.replicator.openReplication(this.settings, false, showMessage, true)) {
await balanceChunkPurgedDBs(this.localDatabase.localDatabase, remoteDB.db);
await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
this.localDatabase.clearCaches();
await this.services.replicator.getActiveReplicator()?.markRemoteResolved(this.settings);
Logger("The local database has been cleaned up.", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
} else {
Logger(
"Replication has been cancelled. Please try it again.",
showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO
);
}
}
});
}
async _canReplicate(showMessage: boolean = false): Promise<boolean> {
if (!this.services.appLifecycle.isReady()) {
Logger(`Not ready`);
return false;
}
if (isLockAcquired("cleanup")) {
Logger($msg("Replicator.Message.Cleaned"), LOG_LEVEL_NOTICE);
return false;
}
if (this.settings.versionUpFlash != "") {
Logger($msg("Replicator.Message.VersionUpFlash"), LOG_LEVEL_NOTICE);
return false;
}
if (!(await this.services.fileProcessing.commitPendingFileEvents())) {
Logger($msg("Replicator.Message.Pending"), LOG_LEVEL_NOTICE);
return false;
}
if (!this.core.managers.networkManager.isOnline) {
this._log("Network is offline", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
return false;
}
if (!(await this.services.replication.onBeforeReplicate(showMessage))) {
Logger($msg("Replicator.Message.SomeModuleFailed"), LOG_LEVEL_NOTICE);
return false;
}
return true;
}
async $$_replicate(showMessage: boolean = false): Promise<boolean | void> {
const checkBeforeReplicate = await this.services.replication.isReplicationReady(showMessage);
if (!checkBeforeReplicate) return false;
// <-- This could be a module.
const ret = await this.core.replicator.openReplication(this.settings, false, showMessage, false);
if (!ret) {
if (this.core.replicator.tweakSettingsMismatched && this.core.replicator.preferredTweakValue) {
await this.services.tweakValue.askResolvingMismatched(this.core.replicator.preferredTweakValue);
} else {
if (this.core.replicator?.remoteLockedAndDeviceNotAccepted) {
if (this.core.replicator.remoteCleaned && this.settings.useIndexedDBAdapter) {
await this.cleaned(showMessage);
} else {
const message = $msg("Replicator.Dialogue.Locked.Message");
const CHOICE_FETCH = $msg("Replicator.Dialogue.Locked.Action.Fetch");
const CHOICE_DISMISS = $msg("Replicator.Dialogue.Locked.Action.Dismiss");
const CHOICE_UNLOCK = $msg("Replicator.Dialogue.Locked.Action.Unlock");
const ret = await this.core.confirm.askSelectStringDialogue(
message,
[CHOICE_FETCH, CHOICE_UNLOCK, CHOICE_DISMISS],
{
title: $msg("Replicator.Dialogue.Locked.Title"),
defaultAction: CHOICE_DISMISS,
timeout: 60,
}
);
if (ret == CHOICE_FETCH) {
this._log($msg("Replicator.Dialogue.Locked.Message.Fetch"), LOG_LEVEL_NOTICE);
await this.core.rebuilder.scheduleFetch();
this.services.appLifecycle.scheduleRestart();
return;
} else if (ret == CHOICE_UNLOCK) {
await this.core.replicator.markRemoteResolved(this.settings);
this._log($msg("Replicator.Dialogue.Locked.Message.Unlocked"), LOG_LEVEL_NOTICE);
return;
}
}
}
}
}
return ret;
}
private async _replicateByEvent(): Promise<boolean | void> {
const least = this.settings.syncMinimumInterval;
if (least > 0) {
return rateLimitedSharedExecution(KEY_REPLICATION_ON_EVENT, least, async () => {
return await this.services.replication.replicate();
});
}
return await shareRunningResult(`replication`, () => this.services.replication.replicate());
}
_parseReplicationResult(docs: Array<PouchDB.Core.ExistingDocument<EntryDoc>>): void {
if (this.settings.suspendParseReplicationResult && !this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.suspend();
}
this.replicationResultProcessor.enqueueAll(docs);
if (!this.settings.suspendParseReplicationResult && this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
}
_saveQueuedFiles = throttle(() => {
const saveData = this.replicationResultProcessor._queue
.filter((e) => e !== undefined && e !== null)
.map((e) => e?._id ?? ("" as string)) as string[];
const kvDBKey = "queued-files";
// localStorage.setItem(lsKey, saveData);
fireAndForget(() => this.core.kvDB.set(kvDBKey, saveData));
}, 100);
saveQueuedFiles() {
this._saveQueuedFiles();
}
async loadQueuedFiles() {
if (this.settings.suspendParseReplicationResult) return;
if (!this.settings.isConfigured) return;
try {
const kvDBKey = "queued-files";
// const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[];
const ids = [...new Set((await this.core.kvDB.get<string[]>(kvDBKey)) ?? [])];
const batchSize = 100;
const chunkedIds = arrayToChunkedArray(ids, batchSize);
// suspendParseReplicationResult is false at this point, so resume the processor if it has been suspended.
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
for await (const idsBatch of chunkedIds) {
const ret = await this.localDatabase.allDocsRaw<EntryDoc>({
keys: idsBatch,
include_docs: true,
limit: 100,
});
const docs = ret.rows
.filter((e) => e.doc)
.map((e) => e.doc) as PouchDB.Core.ExistingDocument<EntryDoc>[];
const errors = ret.rows.filter((e) => !e.doc && !e.value.deleted);
if (errors.length > 0) {
Logger("Some queued processes were not resurrected");
Logger(JSON.stringify(errors), LOG_LEVEL_VERBOSE);
}
this.replicationResultProcessor.enqueueAll(docs);
}
} catch (e) {
Logger(`Failed to load queued files.`, LOG_LEVEL_NOTICE);
Logger(e, LOG_LEVEL_VERBOSE);
} finally {
// Check again before awaiting.
if (this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume();
}
}
// Wait for all queued files to be processed.
try {
await this.replicationResultProcessor.waitForAllProcessed();
} catch (e) {
Logger(`Failed to wait for all queued files to be processed.`, LOG_LEVEL_NOTICE);
Logger(e, LOG_LEVEL_VERBOSE);
}
}
replicationResultProcessor = new QueueProcessor(
async (docs: PouchDB.Core.ExistingDocument<EntryDoc>[]) => {
if (this.settings.suspendParseReplicationResult) return;
const change = docs[0];
if (!change) return;
if (isChunk(change._id)) {
this.localDatabase.onNewLeaf(change as EntryLeaf);
return;
}
if (await this.services.replication.processVirtualDocument(change)) return;
// Does any add-on need this item?
// for (const proc of this.core.addOns) {
// if (await proc.parseReplicationResultItem(change)) {
// return;
// }
// }
if (change.type == "versioninfo") {
if (change.version > VER) {
this.core.replicator.closeReplication();
Logger(
`Remote database updated to an incompatible version. Update your Self-hosted LiveSync plugin.`,
LOG_LEVEL_NOTICE
);
}
return;
}
if (
change._id == SYNCINFO_ID || // Synchronisation information data
change._id.startsWith("_design") //design document
) {
return;
}
if (isAnyNote(change)) {
const docPath = getPath(change);
if (!(await this.services.vault.isTargetFile(docPath))) {
Logger(`Skipped: ${docPath}`, LOG_LEVEL_VERBOSE);
return;
}
if (this.databaseQueuedProcessor._isSuspended) {
Logger(`Processing scheduled: ${docPath}`, LOG_LEVEL_INFO);
}
const size = change.size;
if (this.services.vault.isFileSizeTooLarge(size)) {
Logger(
`Processing ${docPath} has been skipped due to file size exceeding the limit`,
LOG_LEVEL_NOTICE
);
return;
}
this.databaseQueuedProcessor.enqueue(change);
}
return;
},
{
batchSize: 1,
suspended: true,
concurrentLimit: 100,
delay: 0,
totalRemainingReactiveSource: this.core.replicationResultCount,
}
)
.replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter((e) => e._id != newItem._id);
return [...q, newItem];
})
.startPipeline()
.onUpdateProgress(() => {
this.saveQueuedFiles();
});
databaseQueuedProcessor = new QueueProcessor(
async (docs: EntryBody[]) => {
const dbDoc = docs[0] as LoadedEntry; // It has no `data`
const path = getPath(dbDoc);
// If `Read chunks online` is disabled, chunks should have been transferred before this point.
// However, in some cases chunks arrive afterwards, so if any are missing we have to wait for them.
const doc = await this.localDatabase.getDBEntryFromMeta({ ...dbDoc }, false, true);
if (!doc) {
Logger(
`Something went wrong while gathering content of ${path} (${dbDoc._id.substring(0, 8)}, ${dbDoc._rev?.substring(0, 10)}) `,
LOG_LEVEL_NOTICE
);
return;
}
if (await this.services.replication.processOptionalSynchroniseResult(dbDoc)) {
// Already processed
} else if (isValidPath(getPath(doc))) {
this.storageApplyingProcessor.enqueue(doc as MetaEntry);
} else {
Logger(`Skipped: ${path} (${doc._id.substring(0, 8)})`, LOG_LEVEL_VERBOSE);
}
return;
},
{
suspended: true,
batchSize: 1,
concurrentLimit: 10,
yieldThreshold: 1,
delay: 0,
totalRemainingReactiveSource: this.core.databaseQueueCount,
}
)
.replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter((e) => e._id != newItem._id);
return [...q, newItem];
})
.startPipeline();
storageApplyingProcessor = new QueueProcessor(
async (docs: MetaEntry[]) => {
const entry = docs[0];
await this.services.replication.processSynchroniseResult(entry);
return;
},
{
suspended: true,
batchSize: 1,
concurrentLimit: 6,
yieldThreshold: 1,
delay: 0,
totalRemainingReactiveSource: this.core.storageApplyingCount,
}
)
.replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter((e) => e._id != newItem._id);
return [...q, newItem];
})
.startPipeline();
_everyBeforeSuspendProcess(): Promise<boolean> {
this.core.replicator?.closeReplication();
return Promise.resolve(true);
}
private async _replicateAllToServer(
showingNotice: boolean = false,
sendChunksInBulkDisabled: boolean = false
): Promise<boolean> {
if (!this.services.appLifecycle.isReady()) return false;
if (!(await this.services.replication.onBeforeReplicate(showingNotice))) {
Logger($msg("Replicator.Message.SomeModuleFailed"), LOG_LEVEL_NOTICE);
return false;
}
if (!sendChunksInBulkDisabled) {
if (this.core.replicator instanceof LiveSyncCouchDBReplicator) {
if (
(await this.core.confirm.askYesNoDialog("Do you want to send all chunks before replication?", {
defaultOption: "No",
timeout: 20,
})) == "yes"
) {
await this.core.replicator.sendChunks(this.core.settings, undefined, true, 0);
}
}
}
const ret = await this.core.replicator.replicateAllToServer(this.settings, showingNotice);
if (ret) return true;
const checkResult = await this.services.replication.checkConnectionFailure();
if (checkResult == "CHECKAGAIN") return await this.services.remote.replicateAllToRemote(showingNotice);
return !checkResult;
}
async _replicateAllFromServer(showingNotice: boolean = false): Promise<boolean> {
if (!this.services.appLifecycle.isReady()) return false;
const ret = await this.core.replicator.replicateAllFromServer(this.settings, showingNotice);
if (ret) return true;
const checkResult = await this.services.replication.checkConnectionFailure();
if (checkResult == "CHECKAGAIN") return await this.services.remote.replicateAllFromRemote(showingNotice);
return !checkResult;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.handleGetActiveReplicator(this._getReplicator.bind(this));
services.databaseEvents.handleOnDatabaseInitialisation(this._everyOnInitializeDatabase.bind(this));
services.databaseEvents.handleOnResetDatabase(this._everyOnResetDatabase.bind(this));
services.appLifecycle.handleOnSettingLoaded(this._everyOnloadAfterLoadSettings.bind(this));
services.replication.handleParseSynchroniseResult(this._parseReplicationResult.bind(this));
services.appLifecycle.handleOnSuspending(this._everyBeforeSuspendProcess.bind(this));
services.replication.handleBeforeReplicate(this._everyBeforeReplicate.bind(this));
services.replication.handleIsReplicationReady(this._canReplicate.bind(this));
services.replication.handleReplicate(this._replicate.bind(this));
services.replication.handleReplicateByEvent(this._replicateByEvent.bind(this));
services.remote.handleReplicateAllToRemote(this._replicateAllToServer.bind(this));
services.remote.handleReplicateAllFromRemote(this._replicateAllFromServer.bind(this));
}
}
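
Both processors above rely on the same dedup-enqueue trick: before appending a newly arrived change, any stale queued entry for the same `_id` is dropped, so only the latest revision of a document is ever processed. A minimal standalone sketch of that idea (plain TypeScript, independent of the actual `QueueProcessor` API; names are illustrative):

// Sketch only: illustrates the dedup-enqueue performed by replaceEnqueueProcessor above.
type QueuedDoc = { _id: string; _rev?: string };
function enqueueDeduplicated(queue: QueuedDoc[], newItem: QueuedDoc): QueuedDoc[] {
    // Drop any older queued entry for the same document, then append the newest one.
    const rest = queue.filter((e) => e._id !== newItem._id);
    return [...rest, newItem];
}
// enqueueDeduplicated([{ _id: "a", _rev: "1-x" }], { _id: "a", _rev: "2-y" })
// yields [{ _id: "a", _rev: "2-y" }]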

View File

@@ -0,0 +1,42 @@
import { fireAndForget } from "octagonal-wheels/promises";
import { REMOTE_MINIO, REMOTE_P2P, type RemoteDBSettings } from "../../lib/src/common/types";
import { LiveSyncCouchDBReplicator } from "../../lib/src/replication/couchdb/LiveSyncReplicator";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import { AbstractModule } from "../AbstractModule";
import type { LiveSyncCore } from "../../main";
export class ModuleReplicatorCouchDB extends AbstractModule {
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator | false> {
const settings = { ...this.settings, ...settingOverride };
// If new remote types are added, handle them here. Do not check `REMOTE_COUCHDB` directly; this exclusion list acts as a safety valve.
if (settings.remoteType == REMOTE_MINIO || settings.remoteType == REMOTE_P2P) {
return Promise.resolve(false);
}
return Promise.resolve(new LiveSyncCouchDBReplicator(this.core));
}
_everyAfterResumeProcess(): Promise<boolean> {
if (this.services.appLifecycle.isSuspended()) return Promise.resolve(true);
if (!this.services.appLifecycle.isReady()) return Promise.resolve(true);
if (this.settings.remoteType != REMOTE_MINIO && this.settings.remoteType != REMOTE_P2P) {
const LiveSyncEnabled = this.settings.liveSync;
const continuous = LiveSyncEnabled;
const eventualOnStart = !LiveSyncEnabled && this.settings.syncOnStart;
// If LiveSync is enabled, or sync-on-start is configured, open replication.
if (LiveSyncEnabled || eventualOnStart) {
// Note that we do not open the conflict-detection dialogue directly during this process;
// it should be raised explicitly if needed.
fireAndForget(async () => {
const canReplicate = await this.services.replication.isReplicationReady(false);
if (!canReplicate) return;
void this.core.replicator.openReplication(this.settings, continuous, false, false);
});
}
}
return Promise.resolve(true);
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.handleGetNewReplicator(this._anyNewReplicator.bind(this));
services.appLifecycle.handleOnResumed(this._everyAfterResumeProcess.bind(this));
}
}
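
Each replicator module answers `handleGetNewReplicator` only for its own remote type and returns `false` otherwise, so whichever registered module recognises the type provides the instance. A hedged standalone sketch of that first-match dispatch (names are illustrative, not the plugin's API):

// Sketch only: the first factory that recognises the remote type provides the instance.
type Factory<T> = (remoteType: string) => T | false;
function resolveReplicator<T>(factories: Factory<T>[], remoteType: string): T | false {
    for (const factory of factories) {
        const instance = factory(remoteType);
        if (instance !== false) return instance; // this module handles the type
    }
    return false; // no module recognised the remote type
}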

View File

@@ -0,0 +1,18 @@
import { REMOTE_MINIO, type RemoteDBSettings } from "../../lib/src/common/types";
import { LiveSyncJournalReplicator } from "../../lib/src/replication/journal/LiveSyncJournalReplicator";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import type { LiveSyncCore } from "../../main";
import { AbstractModule } from "../AbstractModule";
export class ModuleReplicatorMinIO extends AbstractModule {
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator | false> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_MINIO) {
return Promise.resolve(new LiveSyncJournalReplicator(this.core));
}
return Promise.resolve(false);
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.handleGetNewReplicator(this._anyNewReplicator.bind(this));
}
}

View File

@@ -0,0 +1,34 @@
import { REMOTE_P2P, type RemoteDBSettings } from "../../lib/src/common/types";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
import { AbstractModule } from "../AbstractModule";
import { LiveSyncTrysteroReplicator } from "../../lib/src/replication/trystero/LiveSyncTrysteroReplicator";
import type { LiveSyncCore } from "../../main";
export class ModuleReplicatorP2P extends AbstractModule {
_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator | false> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_P2P) {
return Promise.resolve(new LiveSyncTrysteroReplicator(this.core));
}
return Promise.resolve(false);
}
_everyAfterResumeProcess(): Promise<boolean> {
if (this.settings.remoteType == REMOTE_P2P) {
// // If LiveSync enabled, open replication
// if (this.settings.liveSync) {
// fireAndForget(() => this.core.replicator.openReplication(this.settings, true, false, false));
// }
// // If sync on start enabled, open replication
// if (!this.settings.liveSync && this.settings.syncOnStart) {
// // Possibly OK if we only share the result
// fireAndForget(() => this.core.replicator.openReplication(this.settings, false, false, false));
// }
}
return Promise.resolve(true);
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.handleGetNewReplicator(this._anyNewReplicator.bind(this));
services.appLifecycle.handleOnResumed(this._everyAfterResumeProcess.bind(this));
}
}

View File

@@ -0,0 +1,177 @@
import { LRUCache } from "octagonal-wheels/memory/LRUCache";
import {
getStoragePathFromUXFileInfo,
id2path,
isInternalMetadata,
path2id,
stripInternalMetadataPrefix,
useMemo,
} from "../../common/utils";
import {
LOG_LEVEL_VERBOSE,
type DocumentID,
type EntryHasPath,
type FilePath,
type FilePathWithPrefix,
type ObsidianLiveSyncSettings,
type UXFileInfoStub,
} from "../../lib/src/common/types";
import { addPrefix, isAcceptedAll } from "../../lib/src/string_and_binary/path";
import { AbstractModule } from "../AbstractModule";
import { EVENT_REQUEST_RELOAD_SETTING_TAB, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
import { isDirty } from "../../lib/src/common/utils";
import type { LiveSyncCore } from "../../main";
export class ModuleTargetFilter extends AbstractModule {
reloadIgnoreFiles() {
this.ignoreFiles = this.settings.ignoreFiles.split(",").map((e) => e.trim());
}
private _everyOnload(): Promise<boolean> {
eventHub.onEvent(EVENT_SETTING_SAVED, (evt: ObsidianLiveSyncSettings) => {
this.reloadIgnoreFiles();
});
eventHub.onEvent(EVENT_REQUEST_RELOAD_SETTING_TAB, () => {
this.reloadIgnoreFiles();
});
return Promise.resolve(true);
}
_id2path(id: DocumentID, entry?: EntryHasPath, stripPrefix?: boolean): FilePathWithPrefix {
const tempId = id2path(id, entry);
if (stripPrefix && isInternalMetadata(tempId)) {
const out = stripInternalMetadataPrefix(tempId);
return out;
}
return tempId;
}
async _path2id(filename: FilePathWithPrefix | FilePath, prefix?: string): Promise<DocumentID> {
const destPath = addPrefix(filename, prefix ?? "");
return await path2id(
destPath,
this.settings.usePathObfuscation ? this.settings.passphrase : "",
!this.settings.handleFilenameCaseSensitive
);
}
private _isFileSizeExceeded(size: number) {
if (this.settings.syncMaxSizeInMB > 0 && size > 0) {
if (this.settings.syncMaxSizeInMB * 1024 * 1024 < size) {
return true;
}
}
return false;
}
_markFileListPossiblyChanged(): void {
this.totalFileEventCount++;
}
totalFileEventCount = 0;
get fileListPossiblyChanged() {
if (isDirty("totalFileEventCount", this.totalFileEventCount)) {
return true;
}
return false;
}
private async _isTargetFile(file: string | UXFileInfoStub, keepFileCheckList = false) {
const fileCount = useMemo<Record<string, number>>(
{
key: "fileCount", // forceUpdate: !keepFileCheckList,
},
(ctx, prev) => {
if (keepFileCheckList && prev) return prev;
if (!keepFileCheckList && prev && !this.fileListPossiblyChanged) {
return prev;
}
const fileList = (ctx.get("fileList") ?? []) as FilePathWithPrefix[];
// const fileNameList = (ctx.get("fileNameList") ?? []) as FilePath[];
// const fileNames =
const vaultFiles = this.core.storageAccess.getFileNames().sort();
if (prev && vaultFiles.length == fileList.length) {
const fl3 = new Set([...fileList, ...vaultFiles]);
if (fileList.length == fl3.size && vaultFiles.length == fl3.size) {
return prev;
}
}
ctx.set("fileList", vaultFiles);
const fileCount: Record<string, number> = {};
for (const file of vaultFiles) {
const lc = file.toLowerCase();
if (!fileCount[lc]) {
fileCount[lc] = 1;
} else {
fileCount[lc]++;
}
}
return fileCount;
}
);
const filepath = getStoragePathFromUXFileInfo(file);
const lc = filepath.toLowerCase();
if (this.services.setting.shouldCheckCaseInsensitively()) {
if (lc in fileCount && fileCount[lc] > 1) {
return false;
}
}
const fileNameLC = getStoragePathFromUXFileInfo(file).split("/").pop()?.toLowerCase();
if (this.settings.useIgnoreFiles) {
if (this.ignoreFiles.some((e) => e.toLowerCase() == fileNameLC)) {
// The file itself is an ignore file; reload it, since it may have changed.
await this.readIgnoreFile(filepath);
}
if (await this.services.vault.isIgnoredByIgnoreFile(file)) {
return false;
}
}
if (!this.localDatabase?.isTargetFile(filepath)) return false;
return true;
}
ignoreFileCache = new LRUCache<string, string[] | false>(300, 250000, true);
ignoreFiles = [] as string[];
async readIgnoreFile(path: string) {
try {
const file = await this.core.storageAccess.readFileText(path);
const gitignore = file.split(/\r?\n/g);
this.ignoreFileCache.set(path, gitignore);
return gitignore;
} catch (ex) {
this._log(`Failed to read ignore file ${path}`);
this._log(ex, LOG_LEVEL_VERBOSE);
this.ignoreFileCache.set(path, false);
return false;
}
}
async getIgnoreFile(path: string) {
if (this.ignoreFileCache.has(path)) {
return this.ignoreFileCache.get(path) ?? false;
} else {
return await this.readIgnoreFile(path);
}
}
private async _isIgnoredByIgnoreFiles(file: string | UXFileInfoStub): Promise<boolean> {
if (!this.settings.useIgnoreFiles) {
return false;
}
const filepath = getStoragePathFromUXFileInfo(file);
if (this.ignoreFileCache.has(filepath)) {
// Renew
await this.readIgnoreFile(filepath);
}
if (!(await isAcceptedAll(filepath, this.ignoreFiles, (filename) => this.getIgnoreFile(filename)))) {
return true;
}
return false;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.vault.handleMarkFileListPossiblyChanged(this._markFileListPossiblyChanged.bind(this));
services.path.handleId2Path(this._id2path.bind(this));
services.path.handlePath2Id(this._path2id.bind(this));
services.appLifecycle.handleOnLoaded(this._everyOnload.bind(this));
services.vault.handleIsFileSizeTooLarge(this._isFileSizeExceeded.bind(this));
services.vault.handleIsIgnoredByIgnoreFile(this._isIgnoredByIgnoreFiles.bind(this));
services.vault.handleIsTargetFile(this._isTargetFile.bind(this));
}
}
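
The case-insensitivity guard above counts vault paths by their lower-cased form; any count above 1 means two files differ only by case and must be skipped on case-insensitive storage. A minimal standalone sketch of that counting (plain TypeScript, independent of `useMemo` and the plugin's types):

// Sketch only: finds lower-cased paths that collide on case-insensitive storage.
function findCaseCollisions(paths: string[]): Set<string> {
    const counts = new Map<string, number>();
    for (const p of paths) {
        const lc = p.toLowerCase();
        counts.set(lc, (counts.get(lc) ?? 0) + 1);
    }
    return new Set([...counts].filter(([, n]) => n > 1).map(([lc]) => lc));
}
// findCaseCollisions(["Note.md", "note.md", "other.md"]) yields Set { "note.md" }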

View File

@@ -0,0 +1,115 @@
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { AbstractModule } from "../AbstractModule.ts";
import { sizeToHumanReadable } from "octagonal-wheels/number";
import { $msg } from "src/lib/src/common/i18n.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleCheckRemoteSize extends AbstractModule {
async _allScanStat(): Promise<boolean> {
if (this.core.managers.networkManager.isOnline === false) {
this._log("Network is offline, skipping remote size check.", LOG_LEVEL_INFO);
return true;
}
this._log($msg("moduleCheckRemoteSize.logCheckingStorageSizes"), LOG_LEVEL_VERBOSE);
if (this.settings.notifyThresholdOfRemoteStorageSize < 0) {
const message = $msg("moduleCheckRemoteSize.msgSetDBCapacity");
const ANSWER_0 = $msg("moduleCheckRemoteSize.optionNoWarn");
const ANSWER_800 = $msg("moduleCheckRemoteSize.option800MB");
const ANSWER_2000 = $msg("moduleCheckRemoteSize.option2GB");
const ASK_ME_NEXT_TIME = $msg("moduleCheckRemoteSize.optionAskMeLater");
const ret = await this.core.confirm.askSelectStringDialogue(
message,
[ANSWER_0, ANSWER_800, ANSWER_2000, ASK_ME_NEXT_TIME],
{
defaultAction: ASK_ME_NEXT_TIME,
title: $msg("moduleCheckRemoteSize.titleDatabaseSizeNotify"),
timeout: 40,
}
);
if (ret == ANSWER_0) {
this.settings.notifyThresholdOfRemoteStorageSize = 0;
await this.core.saveSettings();
} else if (ret == ANSWER_800) {
this.settings.notifyThresholdOfRemoteStorageSize = 800;
await this.core.saveSettings();
} else if (ret == ANSWER_2000) {
this.settings.notifyThresholdOfRemoteStorageSize = 2000;
await this.core.saveSettings();
}
}
if (this.settings.notifyThresholdOfRemoteStorageSize > 0) {
const remoteStat = await this.core.replicator?.getRemoteStatus(this.settings);
if (remoteStat) {
const estimatedSize = remoteStat.estimatedSize;
if (estimatedSize) {
const maxSize = this.settings.notifyThresholdOfRemoteStorageSize * 1024 * 1024;
if (estimatedSize > maxSize) {
const message = $msg("moduleCheckRemoteSize.msgDatabaseGrowing", {
estimatedSize: sizeToHumanReadable(estimatedSize),
maxSize: sizeToHumanReadable(maxSize),
});
const newMax = ~~(estimatedSize / 1024 / 1024) + 100;
const ANSWER_ENLARGE_LIMIT = $msg("moduleCheckRemoteSize.optionIncreaseLimit", {
newMax: newMax.toString(),
});
const ANSWER_REBUILD = $msg("moduleCheckRemoteSize.optionRebuildAll");
const ANSWER_IGNORE = $msg("moduleCheckRemoteSize.optionDismiss");
const ret = await this.core.confirm.askSelectStringDialogue(
message,
[ANSWER_ENLARGE_LIMIT, ANSWER_REBUILD, ANSWER_IGNORE],
{
defaultAction: ANSWER_IGNORE,
title: $msg("moduleCheckRemoteSize.titleDatabaseSizeLimitExceeded"),
timeout: 60,
}
);
if (ret == ANSWER_REBUILD) {
const ret = await this.core.confirm.askYesNoDialog(
$msg("moduleCheckRemoteSize.msgConfirmRebuild"),
{ defaultOption: "No" }
);
if (ret == "yes") {
this.core.settings.notifyThresholdOfRemoteStorageSize = -1;
await this.saveSettings();
await this.core.rebuilder.scheduleRebuild();
}
} else if (ret == ANSWER_ENLARGE_LIMIT) {
this.settings.notifyThresholdOfRemoteStorageSize = ~~(estimatedSize / 1024 / 1024) + 100;
this._log(
$msg("moduleCheckRemoteSize.logThresholdEnlarged", {
size: this.settings.notifyThresholdOfRemoteStorageSize.toString(),
}),
LOG_LEVEL_NOTICE
);
await this.core.saveSettings();
} else {
// Dismiss or Close the dialog
}
this._log(
$msg("moduleCheckRemoteSize.logExceededWarning", {
measuredSize: sizeToHumanReadable(estimatedSize),
notifySize: sizeToHumanReadable(
this.settings.notifyThresholdOfRemoteStorageSize * 1024 * 1024
),
}),
LOG_LEVEL_INFO
);
} else {
this._log(
$msg("moduleCheckRemoteSize.logCurrentStorageSize", {
measuredSize: sizeToHumanReadable(estimatedSize),
}),
LOG_LEVEL_INFO
);
}
}
}
}
return true;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnScanningStartupIssues(this._allScanStat.bind(this));
}
}
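
The enlarged threshold is derived with integer arithmetic: `~~` truncates the estimated size to whole megabytes, and 100 MB of headroom is added before the next warning. A standalone sketch of the same calculation:

// Sketch only: mirrors the `~~(estimatedSize / 1024 / 1024) + 100` calculation above.
function nextThresholdMB(estimatedSizeBytes: number): number {
    return ~~(estimatedSizeBytes / 1024 / 1024) + 100;
}
// nextThresholdMB(850 * 1024 * 1024) === 950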

View File

@@ -0,0 +1,82 @@
import { AbstractModule } from "../AbstractModule.ts";
import { LOG_LEVEL_NOTICE, type FilePathWithPrefix } from "../../lib/src/common/types";
import { QueueProcessor } from "octagonal-wheels/concurrency/processor";
import { sendValue } from "octagonal-wheels/messagepassing/signal";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleConflictChecker extends AbstractModule {
async _queueConflictCheckIfOpen(file: FilePathWithPrefix): Promise<void> {
const path = file;
if (this.settings.checkConflictOnlyOnOpen) {
const af = this.services.vault.getActiveFilePath();
if (af && af != path) {
this._log(`${file} is conflicted; the merging process has been postponed.`, LOG_LEVEL_NOTICE);
return;
}
}
await this.services.conflict.queueCheckFor(path);
}
async _queueConflictCheck(file: FilePathWithPrefix): Promise<void> {
const optionalConflictResult = await this.services.conflict.getOptionalConflictCheckMethod(file);
if (optionalConflictResult == true) {
// The conflict has been resolved by another process.
return;
} else if (optionalConflictResult === "newer") {
// The conflict should be resolved by the newer entry.
await this.services.conflict.resolveByNewest(file);
} else {
this.conflictCheckQueue.enqueue(file);
}
}
_waitForAllConflictProcessed(): Promise<boolean> {
return this.conflictResolveQueue.waitForAllProcessed();
}
// TODO-> Move to ModuleConflictResolver?
conflictResolveQueue = new QueueProcessor(
async (filenames: FilePathWithPrefix[]) => {
const filename = filenames[0];
return await this.services.conflict.resolve(filename);
},
{
suspended: false,
batchSize: 1,
// No need to limit concurrency to `1` here; the subsequent process will handle it.
// Also, in some cases we do not need to be synchronised (e.g., when auto-merge is available).
// Therefore, global concurrency limiting is performed by the resolver together with the UI.
concurrentLimit: 10,
delay: 0,
keepResultUntilDownstreamConnected: false,
}
).replaceEnqueueProcessor((queue, newEntity) => {
const filename = newEntity;
sendValue("cancel-resolve-conflict:" + filename, true);
const newQueue = [...queue].filter((e) => e != newEntity);
return [...newQueue, newEntity];
});
conflictCheckQueue = // First stage: check whether the file actually needs resolving.
new QueueProcessor(
(files: FilePathWithPrefix[]) => {
const filename = files[0];
return Promise.resolve([filename]);
},
{
suspended: false,
batchSize: 1,
concurrentLimit: 10,
delay: 0,
keepResultUntilDownstreamConnected: true,
pipeTo: this.conflictResolveQueue,
totalRemainingReactiveSource: this.core.conflictProcessQueueCount,
}
);
onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.conflict.handleQueueCheckForIfOpen(this._queueConflictCheckIfOpen.bind(this));
services.conflict.handleQueueCheckFor(this._queueConflictCheck.bind(this));
services.conflict.handleEnsureAllProcessed(this._waitForAllConflictProcessed.bind(this));
}
}
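
Re-enqueuing a file both deduplicates the queue and fires a `cancel-resolve-conflict:` signal, so an in-flight resolution for the same path can bail out. A hedged standalone sketch of the same idea, using AbortController in place of the plugin's signal channel (names are illustrative):

// Sketch only: cancel any in-flight resolution for a path before starting a new one.
const inFlight = new Map<string, AbortController>();
async function enqueueResolve(
    path: string,
    resolve: (path: string, signal: AbortSignal) => Promise<void>
): Promise<void> {
    inFlight.get(path)?.abort(); // corresponds to sending the cancel signal above
    const controller = new AbortController();
    inFlight.set(path, controller);
    try {
        await resolve(path, controller.signal);
    } finally {
        if (inFlight.get(path) === controller) inFlight.delete(path);
    }
}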

View File

@@ -0,0 +1,220 @@
import { serialized } from "octagonal-wheels/concurrency/lock";
import { AbstractModule } from "../AbstractModule.ts";
import {
AUTO_MERGED,
CANCELLED,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
MISSING_OR_ERROR,
NOT_CONFLICTED,
type diff_check_result,
type FilePathWithPrefix,
} from "../../lib/src/common/types";
import {
compareMTime,
displayRev,
isCustomisationSyncMetadata,
isPluginMetadata,
TARGET_IS_NEW,
} from "../../common/utils";
import diff_match_patch from "diff-match-patch";
import { stripAllPrefixes, isPlainText } from "../../lib/src/string_and_binary/path";
import { eventHub } from "../../common/events.ts";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
declare global {
interface LSEvents {
"conflict-cancelled": FilePathWithPrefix;
}
}
export class ModuleConflictResolver extends AbstractModule {
private async _resolveConflictByDeletingRev(
path: FilePathWithPrefix,
deleteRevision: string,
subTitle = ""
): Promise<typeof MISSING_OR_ERROR | typeof AUTO_MERGED> {
const title = `Resolving ${subTitle ? `[${subTitle}]` : ""}:`;
if (!(await this.core.fileHandler.deleteRevisionFromDB(path, deleteRevision))) {
this._log(
`${title} Could not delete conflicted revision ${displayRev(deleteRevision)} of ${path}`,
LOG_LEVEL_NOTICE
);
return MISSING_OR_ERROR;
}
eventHub.emitEvent("conflict-cancelled", path);
this._log(
`${title} Conflicted revision has been deleted ${displayRev(deleteRevision)} ${path}`,
LOG_LEVEL_INFO
);
if ((await this.core.databaseFileAccess.getConflictedRevs(path)).length != 0) {
this._log(`${title} some conflicts are left in ${path}`, LOG_LEVEL_INFO);
return AUTO_MERGED;
}
if (isPluginMetadata(path) || isCustomisationSyncMetadata(path)) {
this._log(`${title} ${path} is a plugin metadata file, no need to write to storage`, LOG_LEVEL_INFO);
return AUTO_MERGED;
}
// If no conflicts were found, write the resolved content to the storage.
if (!(await this.core.fileHandler.dbToStorage(path, stripAllPrefixes(path), true))) {
this._log(`Could not write the resolved content to the storage: ${path}`, LOG_LEVEL_NOTICE);
return MISSING_OR_ERROR;
}
const level = subTitle.indexOf("same") !== -1 ? LOG_LEVEL_INFO : LOG_LEVEL_NOTICE;
this._log(`${path} has been merged automatically`, level);
return AUTO_MERGED;
}
async checkConflictAndPerformAutoMerge(path: FilePathWithPrefix): Promise<diff_check_result> {
const ret = await this.localDatabase.tryAutoMerge(path, !this.settings.disableMarkdownAutoMerge);
if ("ok" in ret) {
return ret.ok;
}
if ("result" in ret) {
const p = ret.result;
// Merged content is coming.
// 1. Store the merged content to the storage
if (!(await this.core.databaseFileAccess.storeContent(path, p))) {
this._log(`Merged content cannot be stored:${path}`, LOG_LEVEL_NOTICE);
return MISSING_OR_ERROR;
}
// 2. As usual, delete the conflicted revision and if there are no conflicts, write the resolved content to the storage.
return await this.services.conflict.resolveByDeletingRevision(path, ret.conflictedRev, "Sensible");
}
const { rightRev, leftLeaf, rightLeaf } = ret;
// should be one or more conflicts;
if (leftLeaf == false) {
// what's going on..
this._log(`could not get current revisions:${path}`, LOG_LEVEL_NOTICE);
return MISSING_OR_ERROR;
}
if (rightLeaf == false) {
// Conflicted item could not load, delete this.
return await this.services.conflict.resolveByDeletingRevision(path, rightRev, "MISSING OLD REV");
}
const isSame = leftLeaf.data == rightLeaf.data && leftLeaf.deleted == rightLeaf.deleted;
const isBinary = !isPlainText(path);
const alwaysNewer = this.settings.resolveConflictsByNewerFile;
if (isSame || isBinary || alwaysNewer) {
const result = compareMTime(leftLeaf.mtime, rightLeaf.mtime);
let loser = leftLeaf;
// if (lMtime > rMtime) {
if (result != TARGET_IS_NEW) {
loser = rightLeaf;
}
const subTitle = [
`${isSame ? "same" : ""}`,
`${isBinary ? "binary" : ""}`,
`${alwaysNewer ? "alwaysNewer" : ""}`,
]
.filter((e) => e.trim())
.join(",");
return await this.services.conflict.resolveByDeletingRevision(path, loser.rev, subTitle);
}
// make diff.
const dmp = new diff_match_patch();
const diff = dmp.diff_main(leftLeaf.data, rightLeaf.data);
dmp.diff_cleanupSemantic(diff);
this._log(`conflict(s) found:${path}`);
return {
left: leftLeaf,
right: rightLeaf,
diff: diff,
};
}
private async _resolveConflict(filename: FilePathWithPrefix): Promise<void> {
// const filename = filenames[0];
return await serialized(`conflict-resolve:${filename}`, async () => {
const conflictCheckResult = await this.checkConflictAndPerformAutoMerge(filename);
if (
conflictCheckResult === MISSING_OR_ERROR ||
conflictCheckResult === NOT_CONFLICTED ||
conflictCheckResult === CANCELLED
) {
// nothing to do.
this._log(`[conflict] Not conflicted or cancelled: ${filename}`, LOG_LEVEL_VERBOSE);
return;
}
if (conflictCheckResult === AUTO_MERGED) {
// Auto-resolved, but we need to check it again.
if (this.settings.syncAfterMerge && !this.services.appLifecycle.isSuspended()) {
// Wait for the running replication; if replication is not running, run it once.
await this.services.replication.replicateByEvent();
}
this._log("[conflict] Automatically merged, but we have to check it again");
await this.services.conflict.queueCheckFor(filename);
return;
}
if (this.settings.showMergeDialogOnlyOnActive) {
const af = this.services.vault.getActiveFilePath();
if (af && af != filename) {
this._log(
`[conflict] ${filename} is conflicted. The merging process has been postponed until the file is opened.`,
LOG_LEVEL_NOTICE
);
return;
}
}
this._log("[conflict] Manual merge required!");
eventHub.emitEvent("conflict-cancelled", filename);
await this.services.conflict.resolveByUserInteraction(filename, conflictCheckResult);
});
}
private async _anyResolveConflictByNewest(filename: FilePathWithPrefix): Promise<boolean> {
const currentRev = await this.core.databaseFileAccess.fetchEntryMeta(filename, undefined, true);
if (currentRev == false) {
this._log(`Could not get current revision of ${filename}`);
return Promise.resolve(false);
}
const revs = await this.core.databaseFileAccess.getConflictedRevs(filename);
if (revs.length == 0) {
return Promise.resolve(true);
}
const mTimeAndRev = (
[
[currentRev.mtime, currentRev._rev],
...(await Promise.all(
revs.map(async (rev) => {
const leaf = await this.core.databaseFileAccess.fetchEntryMeta(filename, rev);
if (leaf == false) {
return [0, rev] as [number, string];
}
return [leaf.mtime, rev] as [number, string];
})
)),
] as [number, string][]
).sort((a, b) => {
const diff = b[0] - a[0];
if (diff == 0) {
return a[1].localeCompare(b[1], "en", { numeric: true });
}
return diff;
});
// console.warn(mTimeAndRev);
this._log(
`Resolving conflict by newest: ${filename} (Newest: ${new Date(mTimeAndRev[0][0]).toLocaleString()}) (${mTimeAndRev.length} revisions exists)`
);
for (let i = 1; i < mTimeAndRev.length; i++) {
this._log(
`conflict: Deleting the older revision ${mTimeAndRev[i][1]} (${new Date(mTimeAndRev[i][0]).toLocaleString()}) of ${filename}`
);
await this.services.conflict.resolveByDeletingRevision(filename, mTimeAndRev[i][1], "NEWEST");
}
return true;
}
onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.conflict.handleResolveByDeletingRevision(this._resolveConflictByDeletingRev.bind(this));
services.conflict.handleResolve(this._resolveConflict.bind(this));
services.conflict.handleResolveByNewest(this._anyResolveConflictByNewest.bind(this));
}
}
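
The "resolve by newest" path orders revisions by mtime descending, tie-breaking on the revision string so every device picks the same winner, then deletes everything except the head. A standalone sketch of that ordering:

// Sketch only: mirrors the sort comparator used in _anyResolveConflictByNewest above.
type RevInfo = [mtime: number, rev: string];
function orderNewestFirst(revs: RevInfo[]): RevInfo[] {
    return [...revs].sort((a, b) => {
        const diff = b[0] - a[0]; // newer mtime first
        if (diff !== 0) return diff;
        // Deterministic tie-break so all devices agree on the winner.
        return a[1].localeCompare(b[1], "en", { numeric: true });
    });
}
// orderNewestFirst([[100, "2-b"], [200, "3-a"], [200, "3-b"]])
// yields [[200, "3-a"], [200, "3-b"], [100, "2-b"]]; the head wins, the rest are deleted.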

View File

@@ -0,0 +1,325 @@
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { normalizePath } from "../../deps.ts";
import {
FlagFilesHumanReadable,
FlagFilesOriginal,
TweakValuesShouldMatchedTemplate,
type ObsidianLiveSyncSettings,
} from "../../lib/src/common/types.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { LiveSyncCore } from "../../main.ts";
import { SvelteDialogManager } from "../features/SetupWizard/ObsidianSvelteDialog.ts";
import FetchEverything from "../features/SetupWizard/dialogs/FetchEverything.svelte";
import RebuildEverything from "../features/SetupWizard/dialogs/RebuildEverything.svelte";
import { extractObject } from "octagonal-wheels/object";
export class ModuleRedFlag extends AbstractModule {
async isFlagFileExist(path: string) {
const redflag = await this.core.storageAccess.isExists(normalizePath(path));
if (redflag) {
return true;
}
return false;
}
async deleteFlagFile(path: string) {
try {
const isFlagged = await this.core.storageAccess.isExists(normalizePath(path));
if (isFlagged) {
await this.core.storageAccess.delete(path, true);
}
} catch (ex) {
this._log(`Could not delete ${path}`);
this._log(ex, LOG_LEVEL_VERBOSE);
}
}
isSuspendFlagActive = async () => await this.isFlagFileExist(FlagFilesOriginal.SUSPEND_ALL);
isRebuildFlagActive = async () =>
(await this.isFlagFileExist(FlagFilesOriginal.REBUILD_ALL)) ||
(await this.isFlagFileExist(FlagFilesHumanReadable.REBUILD_ALL));
isFetchAllFlagActive = async () =>
(await this.isFlagFileExist(FlagFilesOriginal.FETCH_ALL)) ||
(await this.isFlagFileExist(FlagFilesHumanReadable.FETCH_ALL));
async cleanupRebuildFlag() {
await this.deleteFlagFile(FlagFilesOriginal.REBUILD_ALL);
await this.deleteFlagFile(FlagFilesHumanReadable.REBUILD_ALL);
}
async cleanupFetchAllFlag() {
await this.deleteFlagFile(FlagFilesOriginal.FETCH_ALL);
await this.deleteFlagFile(FlagFilesHumanReadable.FETCH_ALL);
}
dialogManager = new SvelteDialogManager(this.core);
/**
* Adjust settings to match the remote if needed.
* @param extra result of dialogues that may contain the preventFetchingConfig flag (e.g., from FetchEverything or RebuildEverything)
* @param config current configuration used to retrieve the remote preferred configuration
*/
async adjustSettingToRemoteIfNeeded(extra: { preventFetchingConfig: boolean }, config: ObsidianLiveSyncSettings) {
if (extra && extra.preventFetchingConfig) {
return;
}
// Remote configuration fetched and applied.
if (await this.adjustSettingToRemote(config)) {
config = this.core.settings;
} else {
this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
}
console.debug(config);
}
/**
* Adjust settings to match the remote configuration.
* @param config current configuration used to retrieve the remote preferred configuration
* @returns updated configuration if applied, otherwise undefined.
*/
async adjustSettingToRemote(config: ObsidianLiveSyncSettings) {
// Fetch remote configuration unless prevented.
const SKIP_FETCH = "Skip and proceed";
const RETRY_FETCH = "Retry (recommended)";
let canProceed = false;
do {
const remoteTweaks = await this.services.tweakValue.fetchRemotePreferred(config);
if (!remoteTweaks) {
const choice = await this.core.confirm.askSelectStringDialogue(
"Could not fetch remote configuration. What do you want to do?",
[SKIP_FETCH, RETRY_FETCH] as const,
{
defaultAction: RETRY_FETCH,
timeout: 0,
title: "Fetch Remote Configuration Failed",
}
);
if (choice === SKIP_FETCH) {
canProceed = true;
}
} else {
const necessary = extractObject(TweakValuesShouldMatchedTemplate, remoteTweaks);
// Check if any necessary tweak value is different from current config.
const differentItems = Object.entries(necessary).filter(([key, value]) => {
return (config as any)[key] !== value;
});
if (differentItems.length === 0) {
this._log(
"Remote configuration matches local configuration. No changes applied.",
LOG_LEVEL_NOTICE
);
} else {
await this.core.confirm.askSelectStringDialogue(
"Your settings differed slightly from the server's. The plug-in has supplemented the incompatible parts with the server settings!",
["OK"] as const,
{
defaultAction: "OK",
timeout: 0,
}
);
}
config = {
...config,
...Object.fromEntries(differentItems),
} satisfies ObsidianLiveSyncSettings;
this.core.settings = config;
await this.core.services.setting.saveSettingData();
this._log("Remote configuration applied.", LOG_LEVEL_NOTICE);
canProceed = true;
return this.core.settings;
}
} while (!canProceed);
}
/**
* Process vault initialisation with suspending file watching and sync.
* @param proc process to be executed during initialisation; should return true if the app can continue, false if it cannot.
* @param keepSuspending whether to keep file watching suspended after the process.
* @returns result of the process, or false if an error occurs.
*/
async processVaultInitialisation(proc: () => Promise<boolean>, keepSuspending = false) {
try {
// Disable batch saving and file watching during initialisation.
this.settings.batchSave = false;
await this.services.setting.suspendAllSync();
await this.services.setting.suspendExtraSync();
this.settings.suspendFileWatching = true;
await this.saveSettings();
try {
const result = await proc();
return result;
} catch (ex) {
this._log("Error during vault initialisation process.", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
return false;
}
} catch (ex) {
this._log("Error during vault initialisation.", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
return false;
} finally {
if (!keepSuspending) {
// Re-enable file watching after initialisation.
this.settings.suspendFileWatching = false;
await this.saveSettings();
}
}
}
/**
* Handle the rebuild everything scheduled operation.
* @returns true if the app can continue, false if an app restart is needed.
*/
async onRebuildEverythingScheduled() {
const method = await this.dialogManager.openWithExplicitCancel(RebuildEverything);
if (method === "cancelled") {
// Clean up the flag file and restart the app.
this._log("Rebuild everything cancelled by user.", LOG_LEVEL_NOTICE);
await this.cleanupRebuildFlag();
this.services.appLifecycle.performRestart();
return false;
}
const { extra } = method;
await this.adjustSettingToRemoteIfNeeded(extra, this.settings);
return await this.processVaultInitialisation(async () => {
await this.core.rebuilder.$rebuildEverything();
await this.cleanupRebuildFlag();
this._log("Rebuild everything operation completed.", LOG_LEVEL_NOTICE);
return true;
});
}
/**
* Handle the fetch all scheduled operation.
* @returns true if the app can continue, false if an app restart is needed.
*/
async onFetchAllScheduled() {
const method = await this.dialogManager.openWithExplicitCancel(FetchEverything);
if (method === "cancelled") {
this._log("Fetch everything cancelled by user.", LOG_LEVEL_NOTICE);
// Clean up the flag file and restart the app.
await this.cleanupFetchAllFlag();
this.services.appLifecycle.performRestart();
return false;
}
const { vault, extra } = method;
const mapVaultStateToAction = {
identical: {
// If both are identical, local files do not need to be created before sync;
// chunks are created beforehand purely for efficiency.
makeLocalChunkBeforeSync: true,
makeLocalFilesBeforeSync: false,
},
independent: {
// If both are independent, nothing needs to be made before sync.
// Respect the remote state.
makeLocalChunkBeforeSync: false,
makeLocalFilesBeforeSync: false,
},
unbalanced: {
// If both are unbalanced, local files should be made before sync to avoid data loss.
// Chunks could also be made beforehand for efficiency, but the metadata created with them should instead be detected as conflicting.
makeLocalChunkBeforeSync: false,
makeLocalFilesBeforeSync: true,
},
cancelled: {
// Cancelled case, not actually used.
makeLocalChunkBeforeSync: false,
makeLocalFilesBeforeSync: false,
},
} as const;
return await this.processVaultInitialisation(async () => {
await this.adjustSettingToRemoteIfNeeded(extra, this.settings);
// Okay, proceed to fetch everything.
const { makeLocalChunkBeforeSync, makeLocalFilesBeforeSync } = mapVaultStateToAction[vault];
this._log(
`Fetching everything with settings: makeLocalChunkBeforeSync=${makeLocalChunkBeforeSync}, makeLocalFilesBeforeSync=${makeLocalFilesBeforeSync}`,
LOG_LEVEL_INFO
);
await this.core.rebuilder.$fetchLocal(makeLocalChunkBeforeSync, !makeLocalFilesBeforeSync);
await this.cleanupFetchAllFlag();
this._log("Fetch everything operation completed. Vault files will be gradually synced.", LOG_LEVEL_NOTICE);
return true;
});
}
async onSuspendAllScheduled() {
this._log("SCRAM is detected. All operations are suspended.", LOG_LEVEL_NOTICE);
return await this.processVaultInitialisation(async () => {
this._log(
"All operations are suspended as per SCRAM.\nLogs will be written to the file. This might be a performance impact.",
LOG_LEVEL_NOTICE
);
this.settings.writeLogToTheFile = true;
await this.core.services.setting.saveSettingData();
return Promise.resolve(false);
}, true);
}
async verifyAndUnlockSuspension() {
if (!this.settings.suspendFileWatching) {
return true;
}
if (
(await this.core.confirm.askYesNoDialog(
"Do you want to resume file and database processing, and restart obsidian now?",
{ defaultOption: "Yes", timeout: 15 }
)) != "yes"
) {
// TODO: Confirm whether we should actually proceed to the next process.
return true;
}
this.settings.suspendFileWatching = false;
await this.saveSettings();
this.services.appLifecycle.performRestart();
return false;
}
private async processFlagFilesOnStartup(): Promise<boolean> {
const isFlagSuspensionActive = await this.isSuspendFlagActive();
const isFlagRebuildActive = await this.isRebuildFlagActive();
const isFlagFetchAllActive = await this.isFetchAllFlagActive();
// TODO: Address the case when both flags are active (very unlikely though).
// if(isFlagFetchAllActive && isFlagRebuildActive) {
// const message = "Rebuild everything and Fetch everything flags are both detected.";
// await this.core.confirm.askSelectStringDialogue(
// "Both Rebuild Everything and Fetch Everything flags are detected. Please remove one of them and restart the app.",
// ["OK"] as const,)
if (isFlagFetchAllActive) {
const res = await this.onFetchAllScheduled();
if (res) {
return await this.verifyAndUnlockSuspension();
}
return false;
}
if (isFlagRebuildActive) {
const res = await this.onRebuildEverythingScheduled();
if (res) {
return await this.verifyAndUnlockSuspension();
}
return false;
}
if (isFlagSuspensionActive) {
const res = await this.onSuspendAllScheduled();
return res;
}
return true;
}
async _everyOnLayoutReady(): Promise<boolean> {
try {
const flagProcessResult = await this.processFlagFilesOnStartup();
return flagProcessResult;
} catch (ex) {
this._log("Something went wrong on FlagFile Handling", LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
return true;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
super.onBindFunction(core, services);
services.appLifecycle.handleLayoutReady(this._everyOnLayoutReady.bind(this));
}
}
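
Flag files are checked in a fixed precedence on startup: fetch-all first, then rebuild, then SCRAM suspension, and only one operation runs per start. A standalone sketch of that precedence (a simplification of processFlagFilesOnStartup above):

// Sketch only: mirrors the order in which startup flags are evaluated above.
type FlagAction = "fetchAll" | "rebuild" | "suspend" | "none";
function decideFlagAction(flags: { fetchAll: boolean; rebuild: boolean; suspend: boolean }): FlagAction {
    if (flags.fetchAll) return "fetchAll"; // checked first
    if (flags.rebuild) return "rebuild";
    if (flags.suspend) return "suspend"; // SCRAM comes last
    return "none";
}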

View File

@@ -0,0 +1,22 @@
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
import { AbstractModule } from "../AbstractModule.ts";
export class ModuleRemoteGovernor extends AbstractModule {
private async _markRemoteLocked(lockByClean: boolean = false): Promise<void> {
return await this.core.replicator.markRemoteLocked(this.settings, true, lockByClean);
}
private async _markRemoteUnlocked(): Promise<void> {
return await this.core.replicator.markRemoteLocked(this.settings, false, false);
}
private async _markRemoteResolved(): Promise<void> {
return await this.core.replicator.markRemoteResolved(this.settings);
}
onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.remote.handleMarkLocked(this._markRemoteLocked.bind(this));
services.remote.handleMarkUnlocked(this._markRemoteUnlocked.bind(this));
services.remote.handleMarkResolved(this._markRemoteResolved.bind(this));
}
}

View File

@@ -0,0 +1,295 @@
import { Logger, LOG_LEVEL_NOTICE } from "octagonal-wheels/common/logger";
import { extractObject } from "octagonal-wheels/object";
import {
TweakValuesShouldMatchedTemplate,
IncompatibleChanges,
confName,
type TweakValues,
type RemoteDBSettings,
IncompatibleChangesInSpecificPattern,
CompatibleButLossyChanges,
} from "../../lib/src/common/types.ts";
import { escapeMarkdownValue } from "../../lib/src/common/utils.ts";
import { AbstractModule } from "../AbstractModule.ts";
import { $msg } from "../../lib/src/common/i18n.ts";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleResolvingMismatchedTweaks extends AbstractModule {
async _anyAfterConnectCheckFailed(): Promise<boolean | "CHECKAGAIN" | undefined> {
if (!this.core.replicator.tweakSettingsMismatched && !this.core.replicator.preferredTweakValue) return false;
const preferred = this.core.replicator.preferredTweakValue;
if (!preferred) return false;
const ret = await this.services.tweakValue.askResolvingMismatched(preferred);
if (ret == "OK") return false;
if (ret == "CHECKAGAIN") return "CHECKAGAIN";
if (ret == "IGNORE") return true;
}
async _checkAndAskResolvingMismatchedTweaks(
preferred: Partial<TweakValues>
): Promise<[TweakValues | boolean, boolean]> {
const mine = extractObject(TweakValuesShouldMatchedTemplate, this.settings);
const items = Object.entries(TweakValuesShouldMatchedTemplate);
let rebuildRequired = false;
let rebuildRecommended = false;
// Making tables:
// let table = `| Value name | This device | Configured | \n` + `|: --- |: --- :|: ---- :| \n`;
const tableRows = [];
// const items = [mine,preferred]
for (const v of items) {
const key = v[0] as keyof typeof TweakValuesShouldMatchedTemplate;
const valueMine = escapeMarkdownValue(mine[key]);
const valuePreferred = escapeMarkdownValue(preferred[key]);
if (valueMine == valuePreferred) continue;
if (IncompatibleChanges.indexOf(key) !== -1) {
rebuildRequired = true;
}
for (const pattern of IncompatibleChangesInSpecificPattern) {
if (pattern.key !== key) continue;
// If a `from` value is supplied, check whether the current value matches it; if so, this pattern applies.
const isFromConditionMet = "from" in pattern ? pattern.from === mine[key] : false;
// Likewise, if a `to` value is supplied, check it against the preferred value.
const isToConditionMet = "to" in pattern ? pattern.to === preferred[key] : false;
// If either condition is met, a rebuild is required, unless the pattern is only a recommendation.
if (isFromConditionMet || isToConditionMet) {
if (pattern.isRecommendation) {
rebuildRecommended = true;
} else {
rebuildRequired = true;
}
}
}
if (CompatibleButLossyChanges.indexOf(key) !== -1) {
rebuildRecommended = true;
}
// table += `| ${confName(key)} | ${valueMine} | ${valuePreferred} | \n`;
tableRows.push(
$msg("TweakMismatchResolve.Table.Row", {
name: confName(key),
self: valueMine,
remote: valuePreferred,
})
);
}
const additionalMessage =
rebuildRequired && this.core.settings.isConfigured
? $msg("TweakMismatchResolve.Message.WarningIncompatibleRebuildRequired")
: "";
const additionalMessage2 =
rebuildRecommended && this.core.settings.isConfigured
? $msg("TweakMismatchResolve.Message.WarningIncompatibleRebuildRecommended")
: "";
const table = $msg("TweakMismatchResolve.Table", { rows: tableRows.join("\n") });
const message = $msg("TweakMismatchResolve.Message.MainTweakResolving", {
table: table,
additionalMessage: [additionalMessage, additionalMessage2].filter((v) => v).join("\n"),
});
const CHOICE_USE_REMOTE = $msg("TweakMismatchResolve.Action.UseRemote");
const CHOICE_USE_REMOTE_WITH_REBUILD = $msg("TweakMismatchResolve.Action.UseRemoteWithRebuild");
const CHOICE_USE_REMOTE_PREVENT_REBUILD = $msg("TweakMismatchResolve.Action.UseRemoteAcceptIncompatible");
const CHOICE_USE_MINE = $msg("TweakMismatchResolve.Action.UseMine");
const CHOICE_USE_MINE_WITH_REBUILD = $msg("TweakMismatchResolve.Action.UseMineWithRebuild");
const CHOICE_USE_MINE_PREVENT_REBUILD = $msg("TweakMismatchResolve.Action.UseMineAcceptIncompatible");
const CHOICE_DISMISS = $msg("TweakMismatchResolve.Action.Dismiss");
const CHOICE_AND_VALUES = [] as [string, [result: TweakValues | boolean, rebuild: boolean]][];
if (rebuildRequired) {
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE_WITH_REBUILD, [preferred, true]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE_WITH_REBUILD, [true, true]]);
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE_PREVENT_REBUILD, [preferred, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE_PREVENT_REBUILD, [true, false]]);
} else if (rebuildRecommended) {
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE, [preferred, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE, [true, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE_WITH_REBUILD, [preferred, true]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE_WITH_REBUILD, [true, true]]);
} else {
CHOICE_AND_VALUES.push([CHOICE_USE_REMOTE, [preferred, false]]);
CHOICE_AND_VALUES.push([CHOICE_USE_MINE, [true, false]]);
}
CHOICE_AND_VALUES.push([CHOICE_DISMISS, [false, false]]);
const CHOICES = Object.fromEntries(CHOICE_AND_VALUES) as Record<
string,
[TweakValues | boolean, performRebuild: boolean]
>;
const retKey = await this.core.confirm.askSelectStringDialogue(message, Object.keys(CHOICES), {
title: $msg("TweakMismatchResolve.Title.TweakResolving"),
timeout: 60,
defaultAction: CHOICE_DISMISS,
});
if (!retKey) return [false, false];
return CHOICES[retKey];
}
async _askResolvingMismatchedTweaks(): Promise<"OK" | "CHECKAGAIN" | "IGNORE"> {
if (!this.core.replicator.tweakSettingsMismatched) {
return "OK";
}
const tweaks = this.core.replicator.preferredTweakValue;
if (!tweaks) {
return "IGNORE";
}
const preferred = extractObject(TweakValuesShouldMatchedTemplate, tweaks);
const [conf, rebuildRequired] = await this.services.tweakValue.checkAndAskResolvingMismatched(preferred);
if (!conf) return "IGNORE";
if (conf === true) {
await this.core.replicator.setPreferredRemoteTweakSettings(this.settings);
if (rebuildRequired) {
await this.core.rebuilder.$rebuildRemote();
}
Logger(
`Tweak values on the remote server have been updated. Your other device will see this message.`,
LOG_LEVEL_NOTICE
);
return "CHECKAGAIN";
}
if (conf) {
this.settings = { ...this.settings, ...conf };
await this.core.replicator.setPreferredRemoteTweakSettings(this.settings);
await this.services.setting.saveSettingData();
if (rebuildRequired) {
await this.core.rebuilder.$fetchLocal();
}
Logger(`Configuration has been updated as configured by the other device.`, LOG_LEVEL_NOTICE);
return "CHECKAGAIN";
}
return "IGNORE";
}
async _fetchRemotePreferredTweakValues(trialSetting: RemoteDBSettings): Promise<TweakValues | false> {
const replicator = await this.services.replicator.getNewReplicator(trialSetting);
if (!replicator) {
this._log("The remote type is not supported for fetching preferred tweak values.", LOG_LEVEL_NOTICE);
return false;
}
if (await replicator.tryConnectRemote(trialSetting)) {
const preferred = await replicator.getRemotePreferredTweakValues(trialSetting);
if (preferred) {
return preferred;
}
this._log("Failed to get the preferred tweak values from the remote server.", LOG_LEVEL_NOTICE);
return false;
}
this._log("Failed to connect to the remote server.", LOG_LEVEL_NOTICE);
return false;
}
async _checkAndAskUseRemoteConfiguration(
trialSetting: RemoteDBSettings
): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
const preferred = await this.services.tweakValue.fetchRemotePreferred(trialSetting);
if (preferred) {
return await this.services.tweakValue.askUseRemoteConfiguration(trialSetting, preferred);
}
return { result: false, requireFetch: false };
}
async _askUseRemoteConfiguration(
trialSetting: RemoteDBSettings,
preferred: TweakValues
): Promise<{ result: false | TweakValues; requireFetch: boolean }> {
const items = Object.entries(TweakValuesShouldMatchedTemplate);
let rebuildRequired = false;
let rebuildRecommended = false;
// Making tables:
// let table = `| Value name | This device | On Remote | \n` + `|: --- |: ---- :|: ---- :| \n`;
let differenceCount = 0;
const tableRows = [] as string[];
// const items = [mine,preferred]
for (const v of items) {
const key = v[0] as keyof typeof TweakValuesShouldMatchedTemplate;
const remoteValueForDisplay = escapeMarkdownValue(preferred[key]);
const currentValueForDisplay = `${escapeMarkdownValue((trialSetting as TweakValues)?.[key])}`;
if ((trialSetting as TweakValues)?.[key] !== preferred[key]) {
if (IncompatibleChanges.indexOf(key) !== -1) {
rebuildRequired = true;
}
for (const pattern of IncompatibleChangesInSpecificPattern) {
if (pattern.key !== key) continue;
// If a `from` value is supplied, check whether the current value matches it; if so, this pattern applies.
const isFromConditionMet =
"from" in pattern ? pattern.from === (trialSetting as TweakValues)?.[key] : false;
// Likewise, if a `to` value is supplied, check it against the preferred value.
const isToConditionMet = "to" in pattern ? pattern.to === preferred[key] : false;
// If either condition is met, a rebuild is required, unless the pattern is only a recommendation.
if (isFromConditionMet || isToConditionMet) {
if (pattern.isRecommendation) {
rebuildRecommended = true;
} else {
rebuildRequired = true;
}
}
}
if (CompatibleButLossyChanges.indexOf(key) !== -1) {
rebuildRecommended = true;
}
} else {
continue;
}
tableRows.push(
$msg("TweakMismatchResolve.Table.Row", {
name: confName(key),
self: currentValueForDisplay,
remote: remoteValueForDisplay,
})
);
differenceCount++;
}
if (differenceCount === 0) {
this._log("The settings in the remote database are the same as the local database.", LOG_LEVEL_NOTICE);
return { result: false, requireFetch: false };
}
const additionalMessage =
rebuildRequired && this.core.settings.isConfigured
? $msg("TweakMismatchResolve.Message.UseRemote.WarningRebuildRequired")
: "";
const additionalMessage2 =
rebuildRecommended && this.core.settings.isConfigured
? $msg("TweakMismatchResolve.Message.UseRemote.WarningRebuildRecommended")
: "";
const table = $msg("TweakMismatchResolve.Table", { rows: tableRows.join("\n") });
const message = $msg("TweakMismatchResolve.Message.Main", {
table: table,
additionalMessage: [additionalMessage, additionalMessage2].filter((v) => v).join("\n"),
});
const CHOICE_USE_REMOTE = $msg("TweakMismatchResolve.Action.UseConfigured");
const CHOICE_DISMISS = $msg("TweakMismatchResolve.Action.Dismiss");
// const CHOICE_AND_VALUES = [
// [CHOICE_USE_REMOTE, preferred],
// [CHOICE_DISMISS, false]]
const CHOICES = [CHOICE_USE_REMOTE, CHOICE_DISMISS];
const retKey = await this.core.confirm.askSelectStringDialogue(message, CHOICES, {
title: $msg("TweakMismatchResolve.Title.UseRemoteConfig"),
timeout: 0,
defaultAction: CHOICE_DISMISS,
});
if (!retKey) return { result: false, requireFetch: false };
if (retKey === CHOICE_DISMISS) return { result: false, requireFetch: false };
if (retKey === CHOICE_USE_REMOTE) {
return { result: { ...trialSetting, ...preferred }, requireFetch: rebuildRequired };
}
return { result: false, requireFetch: false };
}
onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.tweakValue.handleFetchRemotePreferred(this._fetchRemotePreferredTweakValues.bind(this));
services.tweakValue.handleCheckAndAskResolvingMismatched(this._checkAndAskResolvingMismatchedTweaks.bind(this));
services.tweakValue.handleAskResolvingMismatched(this._askResolvingMismatchedTweaks.bind(this));
services.tweakValue.handleCheckAndAskUseRemoteConfiguration(this._checkAndAskUseRemoteConfiguration.bind(this));
services.tweakValue.handleAskUseRemoteConfiguration(this._askUseRemoteConfiguration.bind(this));
services.replication.handleCheckConnectionFailure(this._anyAfterConnectCheckFailed.bind(this));
}
}
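
A differing key escalates the outcome: keys in `IncompatibleChanges` force a rebuild, while keys in `CompatibleButLossyChanges` (or matching recommendation patterns) merely recommend one. A standalone sketch of that escalation, with placeholder key lists — the real lists live in `lib/src/common/types`:

// Sketch only: the key sets here are illustrative stand-ins, not the real constants.
const incompatibleKeys = new Set<string>(["exampleIncompatibleKey"]);
const lossyKeys = new Set<string>(["exampleLossyKey"]);
function classifyDifferences(differingKeys: string[]): "required" | "recommended" | "none" {
    if (differingKeys.some((k) => incompatibleKeys.has(k))) return "required"; // rebuild required
    if (differingKeys.some((k) => lossyKeys.has(k))) return "recommended"; // rebuild recommended
    return "none";
}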

View File

@@ -0,0 +1,392 @@
import { TFile, TFolder, type ListedFiles } from "obsidian";
import { SerializedFileAccess } from "./storageLib/SerializedFileAccess";
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { LOG_LEVEL_INFO, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import type {
FilePath,
FilePathWithPrefix,
UXDataWriteOptions,
UXFileInfo,
UXFileInfoStub,
UXFolderInfo,
UXStat,
} from "../../lib/src/common/types";
import { TFileToUXFileInfoStub, TFolderToUXFileInfoStub } from "./storageLib/utilObsidian.ts";
import { StorageEventManagerObsidian, type StorageEventManager } from "./storageLib/StorageEventManager";
import type { StorageAccess } from "../interfaces/StorageAccess";
import { createBlob, type CustomRegExp } from "../../lib/src/common/utils";
import { serialized } from "octagonal-wheels/concurrency/lock_v2";
import type { LiveSyncCore } from "../../main.ts";
import type ObsidianLiveSyncPlugin from "../../main.ts";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
const fileLockPrefix = "file-lock:";
export class ModuleFileAccessObsidian extends AbstractObsidianModule implements StorageAccess {
processingFiles: Set<FilePathWithPrefix> = new Set();
processWriteFile<T>(file: UXFileInfoStub | FilePathWithPrefix, proc: () => Promise<T>): Promise<T> {
const path = typeof file === "string" ? file : file.path;
return serialized(`${fileLockPrefix}${path}`, async () => {
try {
this.processingFiles.add(path);
return await proc();
} finally {
this.processingFiles.delete(path);
}
});
}
processReadFile<T>(file: UXFileInfoStub | FilePathWithPrefix, proc: () => Promise<T>): Promise<T> {
const path = typeof file === "string" ? file : file.path;
return serialized(`${fileLockPrefix}${path}`, async () => {
try {
this.processingFiles.add(path);
return await proc();
} finally {
this.processingFiles.delete(path);
}
});
}
isFileProcessing(file: UXFileInfoStub | FilePathWithPrefix): boolean {
const path = typeof file === "string" ? file : file.path;
return this.processingFiles.has(path);
}
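// Note: processWriteFile and processReadFile serialise on the same
// `${fileLockPrefix}${path}` key, so reads and writes to one path never
// interleave, while different paths proceed in parallel. `processingFiles`
// lets callers ask whether this module is currently touching a path.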
vaultAccess!: SerializedFileAccess;
vaultManager: StorageEventManager = new StorageEventManagerObsidian(this.plugin, this.core, this);
private _everyOnload(): Promise<boolean> {
this.core.storageAccess = this;
return Promise.resolve(true);
}
_everyOnFirstInitialize(): Promise<boolean> {
this.vaultManager.beginWatch();
return Promise.resolve(true);
}
// $$flushFileEventQueue(): void {
// this.vaultManager.flushQueue();
// }
_everyCommitPendingFileEvent(): Promise<boolean> {
this.vaultManager.flushQueue();
return Promise.resolve(true);
}
_everyOnloadStart(): Promise<boolean> {
this.vaultAccess = new SerializedFileAccess(this.app, this.plugin, this);
return Promise.resolve(true);
}
_isStorageInsensitive(): boolean {
return this.vaultAccess.isStorageInsensitive();
}
_shouldCheckCaseInsensitive(): boolean {
if (this.services.vault.isStorageInsensitive()) return false;
return !this.settings.handleFilenameCaseSensitive;
}
async writeFileAuto(path: string, data: string | ArrayBuffer, opt?: UXDataWriteOptions): Promise<boolean> {
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file instanceof TFile) {
return this.vaultAccess.vaultModify(file, data, opt);
} else if (file === null) {
if (!path.endsWith(".md")) {
// A very rare case; we encountered it with the `writing-goals-history.csv` file.
// That file does not appear in the File Explorer, yet it exists in the vault.
// Hence, we cannot retrieve it from the vault via getAbstractFileByPath, and we cannot write it via vaultModify;
// attempting to create it raises a `File already exists` error.
// Therefore, we need to write it via adapterWrite.
// There may be other files like this, so such files are written via adapterWrite.
// This is a workaround, and it is unclear whether it is the right solution
// (hence it is limited to non-md files).
// Obsidian may have been patched since; in any case, writing directly seems the safer approach.
// However, it is unclear whether changes to such files trigger a file-change event.
await this.vaultAccess.adapterWrite(path, data, opt);
// For safety, check existence
return await this.vaultAccess.adapterExists(path);
} else {
return (await this.vaultAccess.vaultCreate(path, data, opt)) instanceof TFile;
}
} else {
this._log(`Could not write file (Possibly already exists as a folder): ${path}`, LOG_LEVEL_VERBOSE);
return false;
}
}
readFileAuto(path: string): Promise<string | ArrayBuffer> {
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file instanceof TFile) {
return this.vaultAccess.vaultRead(file);
} else {
throw new Error(`Could not read file (Possibly does not exist): ${path}`);
}
}
readFileText(path: string): Promise<string> {
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file instanceof TFile) {
return this.vaultAccess.vaultRead(file);
} else {
throw new Error(`Could not read file (Possibly does not exist): ${path}`);
}
}
isExists(path: string): Promise<boolean> {
return Promise.resolve(this.vaultAccess.getAbstractFileByPath(path) instanceof TFile);
}
async writeHiddenFileAuto(path: string, data: string | ArrayBuffer, opt?: UXDataWriteOptions): Promise<boolean> {
try {
await this.vaultAccess.adapterWrite(path, data, opt);
return true;
} catch (e) {
this._log(`Could not write hidden file: ${path}`, LOG_LEVEL_VERBOSE);
this._log(e, LOG_LEVEL_VERBOSE);
return false;
}
}
async appendHiddenFile(path: string, data: string, opt?: UXDataWriteOptions): Promise<boolean> {
try {
await this.vaultAccess.adapterAppend(path, data, opt);
return true;
} catch (e) {
this._log(`Could not append hidden file: ${path}`, LOG_LEVEL_VERBOSE);
this._log(e, LOG_LEVEL_VERBOSE);
return false;
}
}
stat(path: string): Promise<UXStat | null> {
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file === null) return Promise.resolve(null);
if (file instanceof TFile) {
return Promise.resolve({
ctime: file.stat.ctime,
mtime: file.stat.mtime,
size: file.stat.size,
type: "file",
});
} else {
throw new Error(`Could not stat file (Possibly does not exist): ${path}`);
}
}
statHidden(path: string): Promise<UXStat | null> {
return this.vaultAccess.tryAdapterStat(path);
}
async removeHidden(path: string): Promise<boolean> {
try {
await this.vaultAccess.adapterRemove(path);
if ((await this.vaultAccess.tryAdapterStat(path)) !== null) {
return false;
}
return true;
} catch (e) {
this._log(`Could not remove hidden file: ${path}`, LOG_LEVEL_VERBOSE);
this._log(e, LOG_LEVEL_VERBOSE);
return false;
}
}
async readHiddenFileAuto(path: string): Promise<string | ArrayBuffer> {
return await this.vaultAccess.adapterReadAuto(path);
}
async readHiddenFileText(path: string): Promise<string> {
return await this.vaultAccess.adapterRead(path);
}
async readHiddenFileBinary(path: string): Promise<ArrayBuffer> {
return await this.vaultAccess.adapterReadBinary(path);
}
async isExistsIncludeHidden(path: string): Promise<boolean> {
return (await this.vaultAccess.tryAdapterStat(path)) !== null;
}
async ensureDir(path: string): Promise<boolean> {
try {
await this.vaultAccess.ensureDirectory(path);
return true;
} catch (e) {
this._log(`Could not ensure directory: ${path}`, LOG_LEVEL_VERBOSE);
this._log(e, LOG_LEVEL_VERBOSE);
return false;
}
}
triggerFileEvent(event: string, path: string): void {
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file === null) return;
this.vaultAccess.trigger(event, file);
}
async triggerHiddenFile(path: string): Promise<void> {
//@ts-ignore internal function
await this.app.vault.adapter.reconcileInternalFile(path);
}
// getFileStub(file: TFile): UXFileInfoStub {
// return TFileToUXFileInfoStub(file);
// }
getFileStub(path: string): UXFileInfoStub | null {
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file instanceof TFile) {
return TFileToUXFileInfoStub(file);
} else {
return null;
}
}
async readStubContent(stub: UXFileInfoStub): Promise<UXFileInfo | false> {
const file = this.vaultAccess.getAbstractFileByPath(stub.path);
if (!(file instanceof TFile)) {
this._log(`Could not read file (Possibly does not exist or is a folder): ${stub.path}`, LOG_LEVEL_VERBOSE);
return false;
}
const data = await this.vaultAccess.vaultReadAuto(file);
return {
...stub,
...TFileToUXFileInfoStub(file),
body: createBlob(data),
};
}
getStub(path: string): UXFileInfoStub | UXFolderInfo | null {
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file instanceof TFile) {
return TFileToUXFileInfoStub(file);
} else if (file instanceof TFolder) {
return TFolderToUXFileInfoStub(file);
}
return null;
}
getFiles(): UXFileInfoStub[] {
return this.vaultAccess.getFiles().map((f) => TFileToUXFileInfoStub(f));
}
getFileNames(): FilePath[] {
return this.vaultAccess.getFiles().map((f) => f.path as FilePath);
}
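// Recursively lists files under basePath via the adapter (so hidden files are included), applying the include/exclude filters and skipping well-known folders such as .git.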
async getFilesIncludeHidden(
basePath: string,
includeFilter?: CustomRegExp[],
excludeFilter?: CustomRegExp[],
skipFolder: string[] = [".git", ".trash", "node_modules"]
): Promise<FilePath[]> {
let w: ListedFiles;
try {
w = await this.app.vault.adapter.list(basePath);
} catch (ex) {
this._log(`Could not traverse(getFilesIncludeHidden):${basePath}`, LOG_LEVEL_INFO);
this._log(ex, LOG_LEVEL_VERBOSE);
return [];
}
skipFolder = skipFolder.map((e) => e.toLowerCase());
let files = [] as string[];
for (const file of w.files) {
if (includeFilter && includeFilter.length > 0) {
if (!includeFilter.some((e) => e.test(file))) continue;
}
if (excludeFilter && excludeFilter.some((ee) => ee.test(file))) {
continue;
}
if (await this.services.vault.isIgnoredByIgnoreFile(file)) continue;
files.push(file);
}
for (const v of w.folders) {
const folderName = (v.split("/").pop() ?? "").toLowerCase();
if (skipFolder.some((e) => folderName === e)) {
continue;
}
if (excludeFilter && excludeFilter.some((e) => e.test(v))) {
continue;
}
if (await this.services.vault.isIgnoredByIgnoreFile(v)) {
continue;
}
// OK, deep dive!
files = files.concat(await this.getFilesIncludeHidden(v, includeFilter, excludeFilter, skipFolder));
}
return files as FilePath[];
}
async touched(file: UXFileInfoStub | FilePathWithPrefix): Promise<void> {
const path = typeof file === "string" ? file : file.path;
await this.vaultAccess.touch(path as FilePath);
}
recentlyTouched(file: UXFileInfoStub | FilePathWithPrefix): boolean {
const xFile = typeof file === "string" ? (this.vaultAccess.getAbstractFileByPath(file) as TFile) : file;
if (xFile === null) return false;
if (xFile instanceof TFolder) return false;
return this.vaultAccess.recentlyTouched(xFile);
}
clearTouched(): void {
this.vaultAccess.clearTouched();
}
delete(file: FilePathWithPrefix | UXFileInfoStub | string, force: boolean): Promise<void> {
const xPath = typeof file === "string" ? file : file.path;
const xFile = this.vaultAccess.getAbstractFileByPath(xPath);
if (xFile === null) return Promise.resolve();
if (!(xFile instanceof TFile) && !(xFile instanceof TFolder)) return Promise.resolve();
return this.vaultAccess.delete(xFile, force);
}
trash(file: FilePathWithPrefix | UXFileInfoStub | string, system: boolean): Promise<void> {
const xPath = typeof file === "string" ? file : file.path;
const xFile = this.vaultAccess.getAbstractFileByPath(xPath);
if (xFile === null) return Promise.resolve();
if (!(xFile instanceof TFile) && !(xFile instanceof TFolder)) return Promise.resolve();
return this.vaultAccess.trash(xFile, system);
}
// $readFileBinary(path: string): Promise<ArrayBuffer> {
// const file = this.vaultAccess.getAbstractFileByPath(path);
// if (file instanceof TFile) {
// return this.vaultAccess.vaultReadBinary(file);
// } else {
// throw new Error(`Could not read file (Possibly does not exist): ${path}`);
// }
// }
// async $appendFileAuto(path: string, data: string | ArrayBuffer, opt?: DataWriteOptions): Promise<boolean> {
// const file = this.vaultAccess.getAbstractFileByPath(path);
// if (file instanceof TFile) {
// return this.vaultAccess.a(file, data, opt);
// } else if (file !== null) {
// return await this.vaultAccess.vaultCreate(path, data, opt) instanceof TFile;
// } else {
// this._log(`Could not append file (Possibly already exists as a folder): ${path}`, LOG_LEVEL_VERBOSE);
// return false;
// }
// }
async __deleteVaultItem(file: TFile | TFolder) {
if (file instanceof TFile) {
if (!(await this.services.vault.isTargetFile(file.path))) return;
}
const dir = file.parent;
if (this.settings.trashInsteadDelete) {
await this.vaultAccess.trash(file, false);
} else {
await this.vaultAccess.delete(file, true);
}
this._log(`xxx <- STORAGE (deleted) ${file.path}`);
if (dir) {
this._log(`files: ${dir.children.length}`);
if (dir.children.length == 0) {
if (!this.settings.doNotDeleteFolder) {
this._log(
`All files under the parent directory (${dir.path}) have been deleted; deleting the directory as well.`
);
await this.__deleteVaultItem(dir);
}
}
}
}
async deleteVaultItem(fileSrc: FilePathWithPrefix | UXFileInfoStub | UXFolderInfo): Promise<void> {
const path = typeof fileSrc === "string" ? fileSrc : fileSrc.path;
const file = this.vaultAccess.getAbstractFileByPath(path);
if (file === null) return;
if (file instanceof TFile || file instanceof TFolder) {
return await this.__deleteVaultItem(file);
}
}
constructor(plugin: ObsidianLiveSyncPlugin, core: LiveSyncCore) {
super(plugin, core);
}
onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.vault.handleIsStorageInsensitive(this._isStorageInsensitive.bind(this));
services.setting.handleShouldCheckCaseInsensitively(this._shouldCheckCaseInsensitive.bind(this));
services.appLifecycle.handleFirstInitialise(this._everyOnFirstInitialize.bind(this));
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
services.appLifecycle.handleOnLoaded(this._everyOnload.bind(this));
services.fileProcessing.handleCommitPendingFileEvents(this._everyCommitPendingFileEvent.bind(this));
}
}


@@ -0,0 +1,118 @@
// ModuleInputUIObsidian.ts
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { scheduleTask } from "octagonal-wheels/concurrency/task";
import { disposeMemoObject, memoIfNotExist, memoObject, retrieveMemoObject } from "../../common/utils.ts";
import {
askSelectString,
askString,
askYesNo,
confirmWithMessage,
confirmWithMessageWithWideButton,
} from "./UILib/dialogs.ts";
import { Notice } from "../../deps.ts";
import type { Confirm } from "../../lib/src/interfaces/Confirm.ts";
import { setConfirmInstance } from "../../lib/src/PlatformAPIs/obsidian/Confirm.ts";
import { $msg } from "src/lib/src/common/i18n.ts";
import type { LiveSyncCore } from "../../main.ts";
// This module cannot be a common module because it depends on Obsidian's API.
// However, we have to provide a compatible one for other platforms.
export class ModuleInputUIObsidian extends AbstractObsidianModule implements Confirm {
private _everyOnload(): Promise<boolean> {
this.core.confirm = this;
setConfirmInstance(this);
return Promise.resolve(true);
}
askYesNo(message: string): Promise<"yes" | "no"> {
return askYesNo(this.app, message);
}
askString(title: string, key: string, placeholder: string, isPassword: boolean = false): Promise<string | false> {
return askString(this.app, title, key, placeholder, isPassword);
}
async askYesNoDialog(
message: string,
opt: { title?: string; defaultOption?: "Yes" | "No"; timeout?: number } = { title: "Confirmation" }
): Promise<"yes" | "no"> {
const defaultTitle = $msg("moduleInputUIObsidian.defaultTitleConfirmation");
const yesLabel = $msg("moduleInputUIObsidian.optionYes");
const noLabel = $msg("moduleInputUIObsidian.optionNo");
const defaultOption = opt.defaultOption === "Yes" ? yesLabel : noLabel;
const ret = await confirmWithMessageWithWideButton(
this.plugin,
opt.title || defaultTitle,
message,
[yesLabel, noLabel],
defaultOption,
opt.timeout
);
return ret === yesLabel ? "yes" : "no";
}
askSelectString(message: string, items: string[]): Promise<string> {
return askSelectString(this.app, message, items);
}
askSelectStringDialogue<T extends readonly string[]>(
message: string,
buttons: T,
opt: { title?: string; defaultAction: T[number]; timeout?: number }
): Promise<T[number] | false> {
const defaultTitle = $msg("moduleInputUIObsidian.defaultTitleSelect");
return confirmWithMessageWithWideButton(
this.plugin,
opt.title || defaultTitle,
message,
buttons,
opt.defaultAction,
opt.timeout
);
}
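// dialogText may contain a "{HERE}" placeholder; the surrounding text is rendered as-is and the placeholder position receives the anchor element configured by anchorCallback.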
askInPopup(key: string, dialogText: string, anchorCallback: (anchor: HTMLAnchorElement) => void) {
const fragment = createFragment((doc) => {
const [beforeText, afterText = ""] = dialogText.split("{HERE}", 2); // Guard against a missing {HERE} placeholder
doc.createEl("span", undefined, (a) => {
a.appendText(beforeText);
a.appendChild(
a.createEl("a", undefined, (anchor) => {
anchorCallback(anchor);
})
);
a.appendText(afterText);
});
});
const popupKey = "popup-" + key;
scheduleTask(popupKey, 1000, async () => {
const popup = await memoIfNotExist(popupKey, () => new Notice(fragment, 0));
const isShown = popup?.noticeEl?.isShown();
if (!isShown) {
memoObject(popupKey, new Notice(fragment, 0));
}
scheduleTask(popupKey + "-close", 20000, () => {
const popup = retrieveMemoObject<Notice>(popupKey);
if (!popup) return;
if (popup?.noticeEl?.isShown()) {
popup.hide();
}
disposeMemoObject(popupKey);
});
});
}
confirmWithMessage(
title: string,
contentMd: string,
buttons: string[],
defaultAction: (typeof buttons)[number],
timeout?: number
): Promise<(typeof buttons)[number] | false> {
return confirmWithMessage(this.plugin, title, contentMd, buttons, defaultAction, timeout);
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnLoaded(this._everyOnload.bind(this));
}
}


@@ -0,0 +1,320 @@
import { ButtonComponent } from "obsidian";
import { App, FuzzySuggestModal, MarkdownRenderer, Modal, Plugin, Setting } from "../../../deps.ts";
import { EVENT_PLUGIN_UNLOADED, eventHub } from "../../../common/events.ts";
class AutoClosableModal extends Modal {
_closeByUnload() {
eventHub.off(EVENT_PLUGIN_UNLOADED, this._closeByUnload);
this.close();
}
constructor(app: App) {
super(app);
this._closeByUnload = this._closeByUnload.bind(this);
eventHub.once(EVENT_PLUGIN_UNLOADED, this._closeByUnload);
}
onClose() {
eventHub.off(EVENT_PLUGIN_UNLOADED, this._closeByUnload);
}
}
export class InputStringDialog extends AutoClosableModal {
result: string | false = false;
onSubmit: (result: string | false) => void;
title: string;
key: string;
placeholder: string;
isManuallyClosed = false;
isPassword = false;
constructor(
app: App,
title: string,
key: string,
placeholder: string,
isPassword: boolean,
onSubmit: (result: string | false) => void
) {
super(app);
this.onSubmit = onSubmit;
this.title = title;
this.placeholder = placeholder;
this.key = key;
this.isPassword = isPassword;
}
onOpen() {
const { contentEl } = this;
this.titleEl.setText(this.title);
const formEl = contentEl.createDiv();
new Setting(formEl)
.setName(this.key)
.setClass(this.isPassword ? "password-input" : "normal-input")
.addText((text) =>
text.onChange((value) => {
this.result = value;
})
);
new Setting(formEl)
.addButton((btn) =>
btn
.setButtonText("Ok")
.setCta()
.onClick(() => {
this.isManuallyClosed = true;
this.close();
})
)
.addButton((btn) =>
btn
.setButtonText("Cancel")
.setCta()
.onClick(() => {
this.close();
})
);
}
onClose() {
super.onClose();
const { contentEl } = this;
contentEl.empty();
if (this.isManuallyClosed) {
this.onSubmit(this.result);
} else {
this.onSubmit(false);
}
}
}
export class PopoverSelectString extends FuzzySuggestModal<string> {
app: App;
callback: ((e: string) => void) | undefined = () => {};
getItemsFun: () => string[] = () => {
return ["yes", "no"];
};
constructor(
app: App,
note: string,
placeholder: string | undefined,
getItemsFun: (() => string[]) | undefined,
callback: (e: string) => void
) {
super(app);
this.app = app;
this.setPlaceholder((placeholder ?? "y/n) ") + note);
if (getItemsFun) this.getItemsFun = getItemsFun;
this.callback = callback;
}
getItems(): string[] {
return this.getItemsFun();
}
getItemText(item: string): string {
return item;
}
onChooseItem(item: string, evt: MouseEvent | KeyboardEvent): void {
// debugger;
this.callback?.(item);
this.callback = undefined;
}
onClose(): void {
setTimeout(() => {
if (this.callback) {
this.callback("");
this.callback = undefined;
}
}, 100);
}
}
export class MessageBox<T extends readonly string[]> extends AutoClosableModal {
plugin: Plugin;
title: string;
contentMd: string;
buttons: T;
result: string | false = false;
isManuallyClosed = false;
defaultAction: string | undefined;
timeout: number | undefined;
timer: ReturnType<typeof setInterval> | undefined = undefined;
defaultButtonComponent: ButtonComponent | undefined;
wideButton: boolean;
onSubmit: (result: string | false) => void;
constructor(
plugin: Plugin,
title: string,
contentMd: string,
buttons: T,
defaultAction: T[number],
timeout: number | undefined,
wideButton: boolean,
onSubmit: (result: T[number] | false) => void
) {
super(plugin.app);
this.plugin = plugin;
this.title = title;
this.contentMd = contentMd;
this.buttons = buttons;
this.onSubmit = onSubmit;
this.defaultAction = defaultAction;
this.timeout = timeout;
this.wideButton = wideButton;
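// When a timeout is given, count down once per second on the default button's label; on expiry, the default action is chosen and the dialogue closes itself. Tapping the dialogue stops the countdown (see onOpen).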
if (this.timeout) {
this.timer = setInterval(() => {
if (this.timeout === undefined) return;
this.timeout--;
if (this.timeout < 0) {
if (this.timer) {
clearInterval(this.timer);
this.defaultButtonComponent?.setButtonText(`${defaultAction}`);
this.timer = undefined;
}
this.result = defaultAction;
this.isManuallyClosed = true;
this.close();
} else {
this.defaultButtonComponent?.setButtonText(`( ${this.timeout} ) ${defaultAction}`);
}
}, 1000);
}
}
onOpen() {
const { contentEl } = this;
this.titleEl.setText(this.title);
const div = contentEl.createDiv();
div.style.userSelect = "text";
div.style["webkitUserSelect"] = "text";
void MarkdownRenderer.render(this.plugin.app, this.contentMd, div, "/", this.plugin);
const buttonSetting = new Setting(contentEl);
const labelWrapper = contentEl.createDiv();
labelWrapper.addClass("sls-dialogue-note-wrapper");
const labelEl = labelWrapper.createEl("label", { text: "To stop the countdown, tap anywhere on the dialogue" });
labelEl.addClass("sls-dialogue-note-countdown");
if (!this.timeout || !this.timer) {
labelWrapper.empty();
labelWrapper.style.display = "none";
}
buttonSetting.infoEl.style.display = "none";
buttonSetting.controlEl.style.flexWrap = "wrap";
if (this.wideButton) {
buttonSetting.controlEl.style.flexDirection = "column";
buttonSetting.controlEl.style.alignItems = "center";
buttonSetting.controlEl.style.justifyContent = "center";
buttonSetting.controlEl.style.flexGrow = "1";
}
contentEl.addEventListener("click", () => {
if (this.timer) {
labelWrapper.empty();
labelWrapper.style.display = "none";
clearInterval(this.timer);
this.timer = undefined;
this.defaultButtonComponent?.setButtonText(`${this.defaultAction}`);
}
});
for (const button of this.buttons) {
buttonSetting.addButton((btn) => {
btn.setButtonText(button).onClick(() => {
this.isManuallyClosed = true;
this.result = button;
if (this.timer) {
clearInterval(this.timer);
this.timer = undefined;
}
this.close();
});
if (button == this.defaultAction) {
this.defaultButtonComponent = btn;
btn.setCta();
}
if (this.wideButton) {
btn.buttonEl.style.flexGrow = "1";
btn.buttonEl.style.width = "100%";
}
return btn;
});
}
}
onClose() {
super.onClose();
const { contentEl } = this;
contentEl.empty();
if (this.timer) {
clearInterval(this.timer);
this.timer = undefined;
}
if (this.isManuallyClosed) {
this.onSubmit(this.result);
} else {
this.onSubmit(false);
}
}
}
export function confirmWithMessage<T extends readonly string[]>(
plugin: Plugin,
title: string,
contentMd: string,
buttons: T,
defaultAction: T[number],
timeout?: number
): Promise<T[number] | false> {
return new Promise((res) => {
const dialog = new MessageBox(plugin, title, contentMd, buttons, defaultAction, timeout, false, (result) =>
res(result)
);
dialog.open();
});
}
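// Usage sketch (illustrative only):
//   const choice = await confirmWithMessage(plugin, "Confirm", "Apply **changes**?", ["Apply", "Cancel"] as const, "Apply", 30);
//   // Resolves to "Apply", "Cancel", or false when the dialogue is dismissed without a choice.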
export function confirmWithMessageWithWideButton<T extends readonly string[]>(
plugin: Plugin,
title: string,
contentMd: string,
buttons: T,
defaultAction: T[number],
timeout?: number
): Promise<T[number] | false> {
return new Promise((res) => {
const dialog = new MessageBox(plugin, title, contentMd, buttons, defaultAction, timeout, true, (result) =>
res(result)
);
dialog.open();
});
}
export const askYesNo = (app: App, message: string): Promise<"yes" | "no"> => {
return new Promise((res) => {
const popover = new PopoverSelectString(app, message, undefined, undefined, (result) =>
res(result as "yes" | "no")
);
popover.open();
});
};
export const askSelectString = (app: App, message: string, items: string[]): Promise<string> => {
const getItemsFun = () => items;
return new Promise((res) => {
const popover = new PopoverSelectString(app, message, "", getItemsFun, (result) => res(result));
popover.open();
});
};
export const askString = (
app: App,
title: string,
key: string,
placeholder: string,
isPassword: boolean = false
): Promise<string | false> => {
return new Promise((res) => {
const dialog = new InputStringDialog(app, title, key, placeholder, isPassword, (result) => res(result));
dialog.open();
});
};


@@ -0,0 +1,231 @@
import { type App, TFile, type DataWriteOptions, TFolder, TAbstractFile } from "../../../deps.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { isPlainText } from "../../../lib/src/string_and_binary/path.ts";
import type { FilePath, HasSettings, UXFileInfoStub } from "../../../lib/src/common/types.ts";
import { createBinaryBlob, isDocContentSame } from "../../../lib/src/common/utils.ts";
import type { InternalFileInfo } from "../../../common/types.ts";
import { markChangesAreSame } from "../../../common/utils.ts";
import type { StorageAccess } from "../../interfaces/StorageAccess.ts";
function toArrayBuffer(arr: Uint8Array<ArrayBuffer> | ArrayBuffer | DataView<ArrayBuffer>): ArrayBuffer {
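// Note: returning arr.buffer assumes the view spans the whole underlying buffer; a view created with a byteOffset or a shorter length would need an explicit slice instead.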
if (arr instanceof Uint8Array) {
return arr.buffer;
}
if (arr instanceof DataView) {
return arr.buffer;
}
return arr;
}
export class SerializedFileAccess {
app: App;
plugin: HasSettings<{ handleFilenameCaseSensitive: boolean }>;
storageAccess: StorageAccess;
constructor(app: App, plugin: SerializedFileAccess["plugin"], storageAccess: StorageAccess) {
this.app = app;
this.plugin = plugin;
this.storageAccess = storageAccess;
}
async tryAdapterStat(file: TFile | string) {
const path = file instanceof TFile ? file.path : file;
return await this.storageAccess.processReadFile(path as FilePath, async () => {
if (!(await this.app.vault.adapter.exists(path))) return null;
return this.app.vault.adapter.stat(path);
});
}
async adapterStat(file: TFile | string) {
const path = file instanceof TFile ? file.path : file;
return await this.storageAccess.processReadFile(path as FilePath, () => this.app.vault.adapter.stat(path));
}
async adapterExists(file: TFile | string) {
const path = file instanceof TFile ? file.path : file;
return await this.storageAccess.processReadFile(path as FilePath, () => this.app.vault.adapter.exists(path));
}
async adapterRemove(file: TFile | string) {
const path = file instanceof TFile ? file.path : file;
return await this.storageAccess.processReadFile(path as FilePath, () => this.app.vault.adapter.remove(path));
}
async adapterRead(file: TFile | string) {
const path = file instanceof TFile ? file.path : file;
return await this.storageAccess.processReadFile(path as FilePath, () => this.app.vault.adapter.read(path));
}
async adapterReadBinary(file: TFile | string) {
const path = file instanceof TFile ? file.path : file;
return await this.storageAccess.processReadFile(path as FilePath, () =>
this.app.vault.adapter.readBinary(path)
);
}
async adapterReadAuto(file: TFile | string) {
const path = file instanceof TFile ? file.path : file;
if (isPlainText(path)) {
return await this.storageAccess.processReadFile(path as FilePath, () => this.app.vault.adapter.read(path));
}
return await this.storageAccess.processReadFile(path as FilePath, () =>
this.app.vault.adapter.readBinary(path)
);
}
async adapterWrite(
file: TFile | string,
data: string | ArrayBuffer | Uint8Array<ArrayBuffer>,
options?: DataWriteOptions
) {
const path = file instanceof TFile ? file.path : file;
if (typeof data === "string") {
return await this.storageAccess.processWriteFile(path as FilePath, () =>
this.app.vault.adapter.write(path, data, options)
);
} else {
return await this.storageAccess.processWriteFile(path as FilePath, () =>
this.app.vault.adapter.writeBinary(path, toArrayBuffer(data), options)
);
}
}
async vaultCacheRead(file: TFile) {
return await this.storageAccess.processReadFile(file.path as FilePath, () => this.app.vault.cachedRead(file));
}
async vaultRead(file: TFile) {
return await this.storageAccess.processReadFile(file.path as FilePath, () => this.app.vault.read(file));
}
async vaultReadBinary(file: TFile) {
return await this.storageAccess.processReadFile(file.path as FilePath, () => this.app.vault.readBinary(file));
}
async vaultReadAuto(file: TFile) {
const path = file.path;
if (isPlainText(path)) {
return await this.storageAccess.processReadFile(path as FilePath, () => this.app.vault.read(file));
}
return await this.storageAccess.processReadFile(path as FilePath, () => this.app.vault.readBinary(file));
}
async vaultModify(file: TFile, data: string | ArrayBuffer | Uint8Array<ArrayBuffer>, options?: DataWriteOptions) {
if (typeof data === "string") {
return await this.storageAccess.processWriteFile(file.path as FilePath, async () => {
const oldData = await this.app.vault.read(file);
if (data === oldData) {
if (options && options.mtime) markChangesAreSame(file.path, file.stat.mtime, options.mtime);
return true;
}
await this.app.vault.modify(file, data, options);
return true;
});
} else {
return await this.storageAccess.processWriteFile(file.path as FilePath, async () => {
const oldData = await this.app.vault.readBinary(file);
if (await isDocContentSame(createBinaryBlob(oldData), createBinaryBlob(data))) {
if (options && options.mtime) markChangesAreSame(file.path, file.stat.mtime, options.mtime);
return true;
}
await this.app.vault.modifyBinary(file, toArrayBuffer(data), options);
return true;
});
}
}
async vaultCreate(
path: string,
data: string | ArrayBuffer | Uint8Array<ArrayBuffer>,
options?: DataWriteOptions
): Promise<TFile> {
if (typeof data === "string") {
return await this.storageAccess.processWriteFile(path as FilePath, () =>
this.app.vault.create(path, data, options)
);
} else {
return await this.storageAccess.processWriteFile(path as FilePath, () =>
this.app.vault.createBinary(path, toArrayBuffer(data), options)
);
}
}
trigger(name: string, ...data: any[]) {
return this.app.vault.trigger(name, ...data);
}
async adapterAppend(normalizedPath: string, data: string, options?: DataWriteOptions) {
return await this.app.vault.adapter.append(normalizedPath, data, options);
}
async delete(file: TFile | TFolder, force = false) {
return await this.storageAccess.processWriteFile(file.path as FilePath, () =>
this.app.vault.delete(file, force)
);
}
async trash(file: TFile | TFolder, force = false) {
return await this.storageAccess.processWriteFile(file.path as FilePath, () =>
this.app.vault.trash(file, force)
);
}
isStorageInsensitive(): boolean {
//@ts-ignore
return this.app.vault.adapter.insensitive ?? true;
}
getAbstractFileByPathInsensitive(path: FilePath | string): TAbstractFile | null {
//@ts-ignore
return this.app.vault.getAbstractFileByPathInsensitive(path);
}
getAbstractFileByPath(path: FilePath | string): TAbstractFile | null {
if (!this.plugin.settings.handleFilenameCaseSensitive || this.isStorageInsensitive()) {
return this.getAbstractFileByPathInsensitive(path);
}
return this.app.vault.getAbstractFileByPath(path);
}
getFiles() {
return this.app.vault.getFiles();
}
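// Creates every ancestor folder of fullPath one segment at a time; the last segment is treated as a file name and therefore skipped.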
async ensureDirectory(fullPath: string) {
const pathElements = fullPath.split("/");
pathElements.pop();
let c = "";
for (const v of pathElements) {
c += v;
try {
await this.app.vault.adapter.mkdir(c);
} catch (ex: any) {
if (ex?.message == "Folder already exists.") {
// Skip if already exists.
} else {
Logger("Folder Create Error");
Logger(ex);
}
}
c += "/";
}
}
touchedFiles: string[] = [];
_statInternal(file: FilePath) {
return this.app.vault.adapter.stat(file);
}
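// touch() records a "path-mtime-size" signature for the 100 most recently written files so that recentlyTouched() can recognise (and ignore) storage events caused by our own writes.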
async touch(file: TFile | FilePath) {
const path = file instanceof TFile ? (file.path as FilePath) : file;
const statOrg = file instanceof TFile ? file.stat : await this._statInternal(path);
const stat = statOrg || { mtime: 0, size: 0 };
const key = `${path}-${stat.mtime}-${stat.size}`;
this.touchedFiles.unshift(key);
this.touchedFiles = this.touchedFiles.slice(0, 100);
}
recentlyTouched(file: TFile | InternalFileInfo | UXFileInfoStub) {
const key =
"stat" in file
? `${file.path}-${file.stat.mtime}-${file.stat.size}`
: `${file.path}-${file.mtime}-${file.size}`;
if (this.touchedFiles.indexOf(key) == -1) return false;
return true;
}
clearTouched() {
this.touchedFiles = [];
}
}


@@ -0,0 +1,449 @@
import { TAbstractFile, TFile, TFolder } from "../../../deps.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { shouldBeIgnored } from "../../../lib/src/string_and_binary/path.ts";
import {
DEFAULT_SETTINGS,
LOG_LEVEL_DEBUG,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type FileEventType,
type FilePath,
type FilePathWithPrefix,
type UXFileInfoStub,
type UXInternalFileInfoStub,
} from "../../../lib/src/common/types.ts";
import { delay, fireAndForget, getFileRegExp } from "../../../lib/src/common/utils.ts";
import { type FileEventItem } from "../../../common/types.ts";
import { serialized, skipIfDuplicated } from "octagonal-wheels/concurrency/lock";
import {
finishAllWaitingForTimeout,
finishWaitingForTimeout,
isWaitingForTimeout,
waitForTimeout,
} from "octagonal-wheels/concurrency/task";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
import type { LiveSyncCore } from "../../../main.ts";
import { InternalFileToUXFileInfoStub, TFileToUXFileInfoStub } from "./utilObsidian.ts";
import ObsidianLiveSyncPlugin from "../../../main.ts";
import type { StorageAccess } from "../../interfaces/StorageAccess.ts";
// import { InternalFileToUXFileInfo } from "../platforms/obsidian.ts";
export type FileEvent = {
type: FileEventType;
file: UXFileInfoStub | UXInternalFileInfoStub;
oldPath?: string;
cachedData?: string;
skipBatchWait?: boolean;
};
export abstract class StorageEventManager {
abstract beginWatch(): void;
abstract flushQueue(): void;
abstract appendQueue(items: FileEvent[], ctx?: any): Promise<void>;
abstract cancelQueue(key: string): void;
abstract isWaiting(filename: FilePath): boolean;
}
export class StorageEventManagerObsidian extends StorageEventManager {
plugin: ObsidianLiveSyncPlugin;
core: LiveSyncCore;
storageAccess: StorageAccess;
get services() {
return this.core.services;
}
get shouldBatchSave() {
return this.core.settings?.batchSave && this.core.settings?.liveSync != true;
}
get batchSaveMinimumDelay(): number {
return this.core.settings?.batchSaveMinimumDelay ?? DEFAULT_SETTINGS.batchSaveMinimumDelay;
}
get batchSaveMaximumDelay(): number {
return this.core.settings?.batchSaveMaximumDelay ?? DEFAULT_SETTINGS.batchSaveMaximumDelay;
}
constructor(plugin: ObsidianLiveSyncPlugin, core: LiveSyncCore, storageAccess: StorageAccess) {
super();
this.storageAccess = storageAccess;
this.plugin = plugin;
this.core = core;
}
beginWatch() {
const plugin = this.plugin;
this.watchVaultChange = this.watchVaultChange.bind(this);
this.watchVaultCreate = this.watchVaultCreate.bind(this);
this.watchVaultDelete = this.watchVaultDelete.bind(this);
this.watchVaultRename = this.watchVaultRename.bind(this);
this.watchVaultRawEvents = this.watchVaultRawEvents.bind(this);
this.watchEditorChange = this.watchEditorChange.bind(this);
plugin.registerEvent(plugin.app.vault.on("modify", this.watchVaultChange));
plugin.registerEvent(plugin.app.vault.on("delete", this.watchVaultDelete));
plugin.registerEvent(plugin.app.vault.on("rename", this.watchVaultRename));
plugin.registerEvent(plugin.app.vault.on("create", this.watchVaultCreate));
//@ts-ignore : Internal API
plugin.registerEvent(plugin.app.vault.on("raw", this.watchVaultRawEvents));
plugin.registerEvent(plugin.app.workspace.on("editor-change", this.watchEditorChange));
// plugin.fileEventQueue.startPipeline();
}
watchEditorChange(editor: any, info: any) {
if (!("path" in info)) {
return;
}
if (!this.shouldBatchSave) {
return;
}
const file = info?.file as TFile;
if (!file) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// Logger(`Editor change skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
if (!this.isWaiting(file.path as FilePath)) {
return;
}
const data = info?.data as string;
const fi: FileEvent = {
type: "CHANGED",
file: TFileToUXFileInfoStub(file),
cachedData: data,
};
void this.appendQueue([fi]);
}
watchVaultCreate(file: TAbstractFile, ctx?: any) {
if (file instanceof TFolder) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// Logger(`File create skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
const fileInfo = TFileToUXFileInfoStub(file);
void this.appendQueue([{ type: "CREATE", file: fileInfo }], ctx);
}
watchVaultChange(file: TAbstractFile, ctx?: any) {
if (file instanceof TFolder) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// Logger(`File change skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
const fileInfo = TFileToUXFileInfoStub(file);
void this.appendQueue([{ type: "CHANGED", file: fileInfo }], ctx);
}
watchVaultDelete(file: TAbstractFile, ctx?: any) {
if (file instanceof TFolder) return;
if (this.storageAccess.isFileProcessing(file.path as FilePath)) {
// Logger(`File delete skipped because the file is being processed: ${file.path}`, LOG_LEVEL_VERBOSE);
return;
}
const fileInfo = TFileToUXFileInfoStub(file, true);
void this.appendQueue([{ type: "DELETE", file: fileInfo }], ctx);
}
watchVaultRename(file: TAbstractFile, oldFile: string, ctx?: any) {
// The vault "rename" event is not raised for self-generated events (Self-hosted LiveSync does not handle "rename" directly).
if (file instanceof TFile) {
const fileInfo = TFileToUXFileInfoStub(file);
void this.appendQueue(
[
{
type: "DELETE",
file: {
path: oldFile as FilePath,
name: file.name,
stat: {
mtime: file.stat.mtime,
ctime: file.stat.ctime,
size: file.stat.size,
type: "file",
},
deleted: true,
},
skipBatchWait: true,
},
{ type: "CREATE", file: fileInfo, skipBatchWait: true },
],
ctx
);
}
}
// Watch raw events (Internal API)
watchVaultRawEvents(path: FilePath) {
if (this.storageAccess.isFileProcessing(path)) {
// Logger(`Raw file event skipped because the file is being processed: ${path}`, LOG_LEVEL_VERBOSE);
return;
}
// Only for internal files.
if (!this.plugin.settings) return;
// if (this.plugin.settings.useIgnoreFiles && this.plugin.ignoreFiles.some(e => path.endsWith(e.trim()))) {
if (this.plugin.settings.useIgnoreFiles) {
// If the path is one of the ignore files, refresh the cache first.
// (Calling isTargetFile refreshes the cache.)
void this.services.vault.isTargetFile(path).then(() => this._watchVaultRawEvents(path));
} else {
this._watchVaultRawEvents(path);
}
}
_watchVaultRawEvents(path: FilePath) {
if (!this.plugin.settings.syncInternalFiles && !this.plugin.settings.usePluginSync) return;
if (!this.plugin.settings.watchInternalFileChanges) return;
if (!path.startsWith(this.plugin.app.vault.configDir)) return;
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
const targetPatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
if (ignorePatterns.some((e) => e.test(path))) return;
if (!targetPatterns.some((e) => e.test(path))) return;
if (path.endsWith("/")) {
// Folder
return;
}
void this.appendQueue(
[
{
type: "INTERNAL",
file: InternalFileToUXFileInfoStub(path),
skipBatchWait: true, // Internal files should be processed immediately.
},
],
null
);
}
// Cache the file and wait until it can be processed.
async appendQueue(params: FileEvent[], ctx?: any) {
if (!this.core.settings.isConfigured) return;
if (this.core.settings.suspendFileWatching) return;
this.core.services.vault.markFileListPossiblyChanged();
// Flag that the file list may have changed.
const processFiles = new Set<FilePath>();
for (const param of params) {
if (shouldBeIgnored(param.file.path)) {
continue;
}
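// Each queued event gets a random key so that it can be cancelled individually later (see cancelQueue / cancelRelativeEvent).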
const atomicKey = [0, 0, 0, 0, 0, 0].map((e) => `${Math.floor(Math.random() * 100000)}`).join("-");
const type = param.type;
const file = param.file;
const oldPath = param.oldPath;
if (type !== "INTERNAL") {
const size = (file as UXFileInfoStub).stat.size;
if (this.services.vault.isFileSizeTooLarge(size) && (type == "CREATE" || type == "CHANGED")) {
Logger(
`The storage file has been changed but exceeds the maximum size. Skipping: ${param.file.path}`,
LOG_LEVEL_NOTICE
);
continue;
}
}
if (file instanceof TFolder) continue;
// TODO: Confirm why only TFolder is skipped.
// Possibly following line is needed...
// if (file?.isFolder) continue;
if (!(await this.services.vault.isTargetFile(file.path))) continue;
// Stop using the cache to prevent corruption.
// let cache: null | string | ArrayBuffer;
// new file or something changed, cache the changes.
// if (file instanceof TFile && (type == "CREATE" || type == "CHANGED")) {
if (file instanceof TFile || !file.isFolder) {
if (type == "CREATE" || type == "CHANGED") {
// Wait a short while to let the writer mark the file as `touched`.
await delay(10);
if (this.core.storageAccess.recentlyTouched(file.path)) {
continue;
}
}
}
let cache: string | undefined = undefined;
if (param.cachedData) {
cache = param.cachedData;
}
this.enqueue({
type,
args: {
file: file,
oldPath,
cache,
ctx,
},
skipBatchWait: param.skipBatchWait,
key: atomicKey,
});
processFiles.add(file.path as FilePath);
if (oldPath) {
processFiles.add(oldPath as FilePath);
}
}
for (const path of processFiles) {
fireAndForget(() => this.startStandingBy(path));
}
}
bufferedQueuedItems = [] as FileEventItem[];
enqueue(newItem: FileEventItem) {
const filename = newItem.args.file.path;
if (this.shouldBatchSave) {
Logger(`Request cancel for waiting of previous ${filename}`, LOG_LEVEL_DEBUG);
finishWaitingForTimeout(`storage-event-manager-batchsave-${filename}`);
}
this.bufferedQueuedItems.push(newItem);
// When deleting or renaming, the queue must be flushed once before subsequent operations to prevent unexpected race conditions.
if (newItem.type == "DELETE") {
return this.flushQueue();
}
}
concurrentProcessing = Semaphore(5);
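// At most five files are processed concurrently; events for a single file are always handled sequentially inside startStandingBy().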
waitedSince = new Map<FilePath | FilePathWithPrefix, number>();
async startStandingBy(filename: FilePath) {
// If already waiting, there is no need to start again (we loop inside this function).
await skipIfDuplicated(`storage-event-manager-${filename}`, async () => {
Logger(`Processing ${filename}: Starting`, LOG_LEVEL_DEBUG);
const release = await this.concurrentProcessing.acquire();
try {
Logger(`Processing ${filename}: Started`, LOG_LEVEL_DEBUG);
let noMoreFiles = false;
do {
const target = this.bufferedQueuedItems.find((e) => e.args.file.path == filename);
if (target === undefined) {
noMoreFiles = true;
break;
}
const operationType = target.type;
// if (target.waitedFrom + this.batchSaveMaximumDelay > now) {
// this.requestProcessQueue(target);
// continue;
// }
const type = target.type;
// If already cancelled by other operation, skip this.
if (target.cancelled) {
Logger(`Processing ${filename}: Cancelled (scheduled): ${operationType}`, LOG_LEVEL_DEBUG);
this.cancelStandingBy(target);
continue;
}
if (!target.skipBatchWait) {
if (this.shouldBatchSave && (type == "CREATE" || type == "CHANGED")) {
const waitedSince = this.waitedSince.get(filename);
let canWait = true;
const now = Date.now();
if (waitedSince !== undefined) {
if (waitedSince + this.batchSaveMaximumDelay * 1000 < now) {
Logger(
`Processing ${filename}: Cannot wait any longer: ${operationType}`,
LOG_LEVEL_INFO
);
canWait = false;
}
}
if (canWait) {
if (waitedSince === undefined) this.waitedSince.set(filename, now);
target.batched = true;
Logger(
`Processing ${filename}: Waiting for batch save delay: ${operationType}`,
LOG_LEVEL_DEBUG
);
this.updateStatus();
const result = await waitForTimeout(
`storage-event-manager-batchsave-${filename}`,
this.batchSaveMinimumDelay * 1000
);
if (!result) {
Logger(
`Processing ${filename}: Cancelled by new queue: ${operationType}`,
LOG_LEVEL_DEBUG
);
// If we could not wait out the timeout, a new queue entry has probably arrived; the one currently being processed should be cancelled.
this.cancelStandingBy(target);
continue;
}
}
}
} else {
Logger(
`Processing ${filename}: Requested to perform immediately: ${operationType}`,
LOG_LEVEL_DEBUG
);
}
Logger(`Processing ${filename}: Request main to process: ${operationType}`, LOG_LEVEL_DEBUG);
await this.requestProcessQueue(target);
} while (!noMoreFiles);
} finally {
release();
}
Logger(`Processing ${filename}: Finished`, LOG_LEVEL_DEBUG);
});
}
cancelStandingBy(fei: FileEventItem) {
this.bufferedQueuedItems.remove(fei);
this.updateStatus();
}
processingCount = 0;
async requestProcessQueue(fei: FileEventItem) {
try {
this.processingCount++;
this.bufferedQueuedItems.remove(fei);
this.updateStatus();
this.waitedSince.delete(fei.args.file.path);
await this.handleFileEvent(fei);
} finally {
this.processingCount--;
this.updateStatus();
}
}
isWaiting(filename: FilePath) {
return isWaitingForTimeout(`storage-event-manager-batchsave-${filename}`);
}
flushQueue() {
this.bufferedQueuedItems.forEach((e) => (e.skipBatchWait = true));
finishAllWaitingForTimeout("storage-event-manager-batchsave-", true);
}
cancelQueue(key: string) {
this.bufferedQueuedItems.forEach((e) => {
if (e.key === key) e.skipBatchWait = true;
});
}
updateStatus() {
const allItems = this.bufferedQueuedItems.filter((e) => !e.cancelled);
const batchedCount = allItems.filter((e) => e.batched && !e.skipBatchWait).length;
this.core.batched.value = batchedCount;
this.core.processing.value = this.processingCount;
this.core.totalQueued.value = allItems.length - batchedCount;
}
async handleFileEvent(queue: FileEventItem): Promise<any> {
const file = queue.args.file;
const lockKey = `handleFile:${file.path}`;
return await serialized(lockKey, async () => {
if (queue.type == "INTERNAL" || file.isInternal) {
await this.core.services.fileProcessing.processOptionalFileEvent(file.path as unknown as FilePath);
} else {
const key = `file-last-proc-${queue.type}-${file.path}`;
const last = Number((await this.core.kvDB.get(key)) || 0);
if (queue.type == "DELETE") {
await this.core.services.fileProcessing.processFileEvent(queue);
} else {
if (file.stat.mtime == last) {
Logger(`File has been already scanned on ${queue.type}, skip: ${file.path}`, LOG_LEVEL_VERBOSE);
// Should we cancel the related operations (e.g. rename)?
// this.cancelRelativeEvent(queue);
return;
}
if (!(await this.core.services.fileProcessing.processFileEvent(queue))) {
Logger(
`STORAGE -> DB: Handler failed, cancel the related operations: ${file.path}`,
LOG_LEVEL_INFO
);
// Cancel running queues and remove the counterpart of an atomic operation (e.g. rename).
this.cancelRelativeEvent(queue);
return;
}
}
}
});
}
cancelRelativeEvent(item: FileEventItem): void {
this.cancelQueue(item.key);
}
}


@@ -0,0 +1,124 @@
// Obsidian to LiveSync Utils
import { TFile, type TAbstractFile, type TFolder } from "../../../deps.ts";
import { ICHeader } from "../../../common/types.ts";
import type { SerializedFileAccess } from "./SerializedFileAccess.ts";
import { addPrefix, isPlainText } from "../../../lib/src/string_and_binary/path.ts";
import { LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import { createBlob } from "../../../lib/src/common/utils.ts";
import type {
FilePath,
FilePathWithPrefix,
UXFileInfo,
UXFileInfoStub,
UXFolderInfo,
UXInternalFileInfoStub,
} from "../../../lib/src/common/types.ts";
import type { LiveSyncCore } from "../../../main.ts";
export async function TFileToUXFileInfo(
core: LiveSyncCore,
file: TFile,
prefix?: string,
deleted?: boolean
): Promise<UXFileInfo> {
const isPlain = isPlainText(file.name);
const possiblyLarge = !isPlain;
let content: Blob;
if (deleted) {
content = new Blob();
} else {
if (possiblyLarge) Logger(`Reading : ${file.path}`, LOG_LEVEL_VERBOSE);
content = createBlob(await core.storageAccess.readFileAuto(file.path));
if (possiblyLarge) Logger(`Processing: ${file.path}`, LOG_LEVEL_VERBOSE);
}
// const datatype = determineTypeFromBlob(content);
const bareFullPath = file.path as FilePathWithPrefix;
const fullPath = prefix ? addPrefix(bareFullPath, prefix) : bareFullPath;
return {
name: file.name,
path: fullPath,
stat: {
size: content.size,
ctime: file.stat.ctime,
mtime: file.stat.mtime,
type: "file",
},
body: content,
};
}
export async function InternalFileToUXFileInfo(
fullPath: string,
vaultAccess: SerializedFileAccess,
prefix: string = ICHeader
): Promise<UXFileInfo> {
const name = fullPath.split("/").pop() as string;
const stat = await vaultAccess.tryAdapterStat(fullPath);
if (stat == null) throw new Error(`File not found: ${fullPath}`);
if (stat.type == "folder") throw new Error(`Not a file (folder): ${fullPath}`);
const file = await vaultAccess.adapterReadAuto(fullPath);
const isPlain = isPlainText(name);
const possiblyLarge = !isPlain;
if (possiblyLarge) Logger(`Reading : ${fullPath}`, LOG_LEVEL_VERBOSE);
const content = createBlob(file);
if (possiblyLarge) Logger(`Processing: ${fullPath}`, LOG_LEVEL_VERBOSE);
// const datatype = determineTypeFromBlob(content);
const bareFullPath = fullPath as FilePathWithPrefix;
const saveFullPath = prefix ? addPrefix(bareFullPath, prefix) : bareFullPath;
return {
name: name,
path: saveFullPath,
stat: {
size: content.size,
ctime: stat.ctime,
mtime: stat.mtime,
type: "file",
},
body: content,
};
}
export function TFileToUXFileInfoStub(file: TFile | TAbstractFile, deleted?: boolean): UXFileInfoStub {
if (!(file instanceof TFile)) {
throw new Error("Invalid file type");
}
const ret: UXFileInfoStub = {
name: file.name,
path: file.path as FilePathWithPrefix,
isFolder: false,
stat: {
size: file.stat.size,
mtime: file.stat.mtime,
ctime: file.stat.ctime,
type: "file",
},
deleted: deleted,
};
return ret;
}
export function InternalFileToUXFileInfoStub(filename: FilePathWithPrefix, deleted?: boolean): UXInternalFileInfoStub {
const name = filename.split("/").pop() as string;
const ret: UXInternalFileInfoStub = {
name: name,
path: filename,
isFolder: false,
stat: undefined,
isInternal: true,
deleted,
};
return ret;
}
export function TFolderToUXFileInfoStub(file: TFolder): UXFolderInfo {
const ret: UXFolderInfo = {
name: file.name,
path: file.path as FilePathWithPrefix,
parent: file.parent?.path as FilePath | undefined,
isFolder: true,
children: file.children.map((e) => TFileToUXFileInfoStub(e)),
};
return ret;
}


@@ -0,0 +1,399 @@
import { unique } from "octagonal-wheels/collection";
import { throttle } from "octagonal-wheels/function";
import { eventHub } from "../../common/events.ts";
import { BASE_IS_NEW, compareFileFreshness, EVEN, getPath, isValidPath, TARGET_IS_NEW } from "../../common/utils.ts";
import {
type FilePathWithPrefixLC,
type FilePathWithPrefix,
type MetaEntry,
isMetaEntry,
type EntryDoc,
LOG_LEVEL_VERBOSE,
LOG_LEVEL_NOTICE,
LOG_LEVEL_INFO,
LOG_LEVEL_DEBUG,
type UXFileInfoStub,
} from "../../lib/src/common/types.ts";
import { isAnyNote } from "../../lib/src/common/utils.ts";
import { stripAllPrefixes } from "../../lib/src/string_and_binary/path.ts";
import { AbstractModule } from "../AbstractModule.ts";
import { withConcurrency } from "octagonal-wheels/iterable/map";
import type { InjectableServiceHub } from "../../lib/src/services/InjectableServices.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleInitializerFile extends AbstractModule {
private async _performFullScan(showingNotice?: boolean, ignoreSuspending: boolean = false): Promise<boolean> {
this._log("Opening the key-value database", LOG_LEVEL_VERBOSE);
const isInitialized = (await this.core.kvDB.get<boolean>("initialized")) || false;
// synchronize all files between database and storage.
if (!this.settings.isConfigured) {
if (showingNotice) {
this._log(
"LiveSync is not configured yet. Synchronising between the storage and the local database is now prevented.",
LOG_LEVEL_NOTICE,
"syncAll"
);
}
return false;
}
if (!ignoreSuspending && this.settings.suspendFileWatching) {
if (showingNotice) {
this._log(
"Now suspending file watching. Synchronising between the storage and the local database is now prevented.",
LOG_LEVEL_NOTICE,
"syncAll"
);
}
return false;
}
if (showingNotice) {
this._log("Initializing", LOG_LEVEL_NOTICE, "syncAll");
}
this._log("Initializing and checking database files");
this._log("Checking deleted files");
await this.collectDeletedFiles();
this._log("Collecting local files on the storage", LOG_LEVEL_VERBOSE);
const filesStorageSrc = this.core.storageAccess.getFiles();
const _filesStorage = [] as typeof filesStorageSrc;
for (const f of filesStorageSrc) {
if (await this.services.vault.isTargetFile(f.path, f != filesStorageSrc[0])) {
_filesStorage.push(f);
}
}
const convertCase = (path: FilePathWithPrefix): FilePathWithPrefixLC => {
if (this.settings.handleFilenameCaseSensitive) {
return path as FilePathWithPrefixLC;
}
return (path as string).toLowerCase() as FilePathWithPrefixLC;
};
// If handleFilenameCaseSensitive is enabled, `FilePathWithPrefixLC` is the same as `FilePathWithPrefix`.
const storageFileNameMap = Object.fromEntries(
_filesStorage.map((e) => [e.path, e] as [FilePathWithPrefix, UXFileInfoStub])
);
const storageFileNames = Object.keys(storageFileNameMap) as FilePathWithPrefix[];
const storageFileNameCapsPair = storageFileNames.map(
(e) => [e, convertCase(e)] as [FilePathWithPrefix, FilePathWithPrefixLC]
);
// const storageFileNameCS2CI = Object.fromEntries(storageFileNameCapsPair) as Record<FilePathWithPrefix, FilePathWithPrefixLC>;
const storageFileNameCI2CS = Object.fromEntries(storageFileNameCapsPair.map((e) => [e[1], e[0]])) as Record<
FilePathWithPrefixLC,
FilePathWithPrefix
>;
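// Maps the lower-cased (case-insensitive) name back to its exact on-storage spelling.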
this._log("Collecting local files on the DB", LOG_LEVEL_VERBOSE);
const _DBEntries = [] as MetaEntry[];
let count = 0;
// Fetch all documents from the database (including conflicts to prevent overwriting).
for await (const doc of this.localDatabase.findAllNormalDocs({ conflicts: true })) {
count++;
if (count % 25 == 0)
this._log(
`Collecting local files on the DB: ${count}`,
showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO,
"syncAll"
);
const path = getPath(doc);
if (isValidPath(path) && (await this.services.vault.isTargetFile(path, true))) {
if (!isMetaEntry(doc)) {
this._log(`Invalid entry: ${path}`, LOG_LEVEL_INFO);
continue;
}
_DBEntries.push(doc);
}
}
const databaseFileNameMap = Object.fromEntries(
_DBEntries.map((e) => [getPath(e), e] as [FilePathWithPrefix, MetaEntry])
);
const databaseFileNames = Object.keys(databaseFileNameMap) as FilePathWithPrefix[];
const databaseFileNameCapsPair = databaseFileNames.map(
(e) => [e, convertCase(e)] as [FilePathWithPrefix, FilePathWithPrefixLC]
);
// const databaseFileNameCS2CI = Object.fromEntries(databaseFileNameCapsPair) as Record<FilePathWithPrefix, FilePathWithPrefixLC>;
const databaseFileNameCI2CS = Object.fromEntries(databaseFileNameCapsPair.map((e) => [e[1], e[0]])) as Record<
FilePathWithPrefixLC,
FilePathWithPrefix
>;
const allFiles = unique([
...Object.keys(databaseFileNameCI2CS),
...Object.keys(storageFileNameCI2CS),
]) as FilePathWithPrefixLC[];
this._log(`Total files in the database: ${databaseFileNames.length}`, LOG_LEVEL_VERBOSE, "syncAll");
this._log(`Total files in the storage: ${storageFileNames.length}`, LOG_LEVEL_VERBOSE, "syncAll");
this._log(`Total files: ${allFiles.length}`, LOG_LEVEL_VERBOSE, "syncAll");
const filesExistOnlyInStorage = allFiles.filter((e) => !databaseFileNameCI2CS[e]);
const filesExistOnlyInDatabase = allFiles.filter((e) => !storageFileNameCI2CS[e]);
const filesExistBoth = allFiles.filter((e) => databaseFileNameCI2CS[e] && storageFileNameCI2CS[e]);
this._log(`Files exist only in storage: ${filesExistOnlyInStorage.length}`, LOG_LEVEL_VERBOSE, "syncAll");
this._log(`Files exist only in database: ${filesExistOnlyInDatabase.length}`, LOG_LEVEL_VERBOSE, "syncAll");
this._log(`Files exist both in storage and database: ${filesExistBoth.length}`, LOG_LEVEL_VERBOSE, "syncAll");
this._log("Synchronising...");
const processStatus = {} as Record<string, string>;
const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const updateLog = throttle((key: string, msg: string) => {
processStatus[key] = msg;
const log = Object.values(processStatus).join("\n");
this._log(log, logLevel, "syncAll");
}, 25);
const initProcess = [];
const runAll = async <T>(procedureName: string, objects: T[], callback: (arg: T) => Promise<void>) => {
if (objects.length == 0) {
this._log(`${procedureName}: Nothing to do`);
return;
}
this._log(procedureName);
if (!this.localDatabase.isReady) throw Error("Database is not ready!");
let success = 0;
let failed = 0;
let total = 0;
for await (const result of withConcurrency(
objects,
async (e) => {
try {
await callback(e);
return true;
} catch (ex) {
this._log(`Error while ${procedureName}`, LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
return false;
}
},
10
)) {
if (result) {
success++;
} else {
failed++;
}
total++;
const msg = `${procedureName}: DONE:${success}, FAILED:${failed}, REMAINING:${objects.length - total}`;
updateLog(procedureName, msg);
}
const msg = `${procedureName} All done: DONE:${success}, FAILED:${failed}`;
updateLog(procedureName, msg);
};
initProcess.push(
runAll("UPDATE DATABASE", filesExistOnlyInStorage, async (e) => {
// Exists in storage but not in database.
const file = storageFileNameMap[storageFileNameCI2CS[e]];
if (!this.services.vault.isFileSizeTooLarge(file.stat.size)) {
const path = file.path;
await this.core.fileHandler.storeFileToDB(file);
// fireAndForget(() => this.checkAndApplySettingFromMarkdown(path, true));
eventHub.emitEvent("event-file-changed", { file: path, automated: true });
} else {
this._log(`UPDATE DATABASE: ${e} has been skipped due to file size exceeding the limit`, logLevel);
}
})
);
initProcess.push(
runAll("UPDATE STORAGE", filesExistOnlyInDatabase, async (e) => {
const w = databaseFileNameMap[databaseFileNameCI2CS[e]];
// Exists in database but not in storage.
const path = getPath(w) ?? e;
if (w && !(w.deleted || w._deleted)) {
if (!this.services.vault.isFileSizeTooLarge(w.size)) {
// Prevent applying the conflicted state to the storage.
if ((w._conflicts?.length ?? 0) > 0) {
this._log(`UPDATE STORAGE: ${path} has conflicts. skipped (x)`, LOG_LEVEL_INFO);
return;
}
// await this.pullFile(path, undefined, false, undefined, false);
// Memo: No need to force
await this.core.fileHandler.dbToStorage(path, null, true);
// fireAndForget(() => this.checkAndApplySettingFromMarkdown(e, true));
eventHub.emitEvent("event-file-changed", {
file: e,
automated: true,
});
this._log(`Check or pull from db:${path} OK`);
} else {
this._log(
`UPDATE STORAGE: ${path} has been skipped due to file size exceeding the limit`,
logLevel
);
}
} else if (w) {
this._log(`Deletion history skipped: ${path}`, LOG_LEVEL_VERBOSE);
} else {
this._log(`entry not found: ${path}`);
}
})
);
const fileMap = filesExistBoth.map((path) => {
const file = storageFileNameMap[storageFileNameCI2CS[path]];
const doc = databaseFileNameMap[databaseFileNameCI2CS[path]];
return { file, doc };
});
initProcess.push(
runAll("SYNC DATABASE AND STORAGE", fileMap, async (e) => {
const { file, doc } = e;
// Prevent applying the conflicted state to the storage.
if ((doc._conflicts?.length ?? 0) > 0) {
this._log(`SYNC DATABASE AND STORAGE: ${file.path} has conflicts. skipped`, LOG_LEVEL_INFO);
return;
}
if (
!this.services.vault.isFileSizeTooLarge(file.stat.size) &&
!this.services.vault.isFileSizeTooLarge(doc.size)
) {
await this.syncFileBetweenDBandStorage(file, doc);
} else {
this._log(
`SYNC DATABASE AND STORAGE: ${getPath(doc)} has been skipped due to file size exceeding the limit`,
logLevel
);
}
})
);
await Promise.all(initProcess);
// this.setStatusBarText(`NOW TRACKING!`);
this._log("Initialized, NOW TRACKING!");
if (!isInitialized) {
await this.core.kvDB.set("initialized", true);
}
if (showingNotice) {
this._log("Initialize done!", LOG_LEVEL_NOTICE, "syncAll");
}
return true;
}
async syncFileBetweenDBandStorage(file: UXFileInfoStub, doc: MetaEntry) {
if (!doc) {
throw new Error(`Missing doc:${(file as any).path}`);
}
if ("path" in file) {
const w = this.core.storageAccess.getFileStub((file as any).path);
if (w) {
file = w;
} else {
throw new Error(`Missing file:${(file as any).path}`);
}
}
const compareResult = compareFileFreshness(file, doc);
switch (compareResult) {
case BASE_IS_NEW:
if (!this.services.vault.isFileSizeTooLarge(file.stat.size)) {
this._log("STORAGE -> DB :" + file.path);
await this.core.fileHandler.storeFileToDB(file);
} else {
this._log(
`STORAGE -> DB : ${file.path} has been skipped due to file size exceeding the limit`,
LOG_LEVEL_NOTICE
);
}
break;
case TARGET_IS_NEW:
if (!this.services.vault.isFileSizeTooLarge(doc.size)) {
this._log("STORAGE <- DB :" + file.path);
if (await this.core.fileHandler.dbToStorage(doc, stripAllPrefixes(file.path), true)) {
eventHub.emitEvent("event-file-changed", {
file: file.path,
automated: true,
});
} else {
this._log(`STORAGE <- DB : Could not read ${file.path}, possibly deleted`, LOG_LEVEL_NOTICE);
}
return;
} else {
this._log(
`STORAGE <- DB : ${file.path} has been skipped due to file size exceeding the limit`,
LOG_LEVEL_NOTICE
);
}
break;
case EVEN:
this._log("STORAGE == DB :" + file.path + "", LOG_LEVEL_DEBUG);
break;
default:
this._log("STORAGE ?? DB :" + file.path + " Something got weird");
}
}
// This method uses an old version of the database accessor, which is not recommended.
// TODO: Fix
async collectDeletedFiles() {
const limitDays = this.settings.automaticallyDeleteMetadataOfDeletedFiles;
if (limitDays <= 0) return;
this._log(`Checking expired file history`);
const limit = Date.now() - 86400 * 1000 * limitDays;
const notes: {
path: string;
mtime: number;
ttl: number;
doc: PouchDB.Core.ExistingDocument<EntryDoc & PouchDB.Core.AllDocsMeta>;
}[] = [];
for await (const doc of this.localDatabase.findAllDocs({ conflicts: true })) {
if (isAnyNote(doc)) {
if (doc.deleted && doc.mtime - limit < 0) {
notes.push({
path: getPath(doc),
mtime: doc.mtime,
ttl: (doc.mtime - limit) / 1000 / 86400,
doc: doc,
});
}
}
}
if (notes.length == 0) {
this._log("There are no old documents");
this._log(`Checking expired file history done`);
return;
}
for (const v of notes) {
this._log(`Deletion history expired: ${v.path}`);
const delDoc = v.doc;
delDoc._deleted = true;
await this.localDatabase.putRaw(delDoc);
}
this._log(`Checking expired file history done`);
}
private async _initializeDatabase(
showingNotice: boolean = false,
reopenDatabase = true,
ignoreSuspending: boolean = false
): Promise<boolean> {
this.services.appLifecycle.resetIsReady();
if (!reopenDatabase || (await this.services.database.openDatabase())) {
if (this.localDatabase.isReady) {
await this.services.vault.scanVault(showingNotice, ignoreSuspending);
}
if (!(await this.services.databaseEvents.onDatabaseInitialised(showingNotice))) {
this._log(`Initializing the database failed in some module!`, LOG_LEVEL_NOTICE);
return false;
}
this.services.appLifecycle.markIsReady();
// run queued event once.
await this.services.fileProcessing.commitPendingFileEvents();
return true;
} else {
this.services.appLifecycle.resetIsReady();
return false;
}
}
onBindFunction(core: LiveSyncCore, services: InjectableServiceHub): void {
services.databaseEvents.handleInitialiseDatabase(this._initializeDatabase.bind(this));
services.vault.handleScanVault(this._performFullScan.bind(this));
}
}


@@ -0,0 +1,109 @@
import { delay, yieldMicrotask } from "octagonal-wheels/promises";
import { OpenKeyValueDatabase } from "../../common/KeyValueDB.ts";
import type { LiveSyncLocalDB } from "../../lib/src/pouchdb/LiveSyncLocalDB.ts";
import { LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { AbstractModule } from "../AbstractModule.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleKeyValueDB extends AbstractModule {
tryCloseKvDB() {
try {
this.core.kvDB?.close();
return true;
} catch (e) {
this._log("Failed to close KeyValueDB", LOG_LEVEL_VERBOSE);
this._log(e);
return false;
}
}
async openKeyValueDB(): Promise<boolean> {
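// Note: the small delays and microtask yields below are presumably to give the
// previously opened IndexedDB connection time to release before reopening.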
await delay(10);
try {
this.tryCloseKvDB();
await delay(10);
await yieldMicrotask();
this.core.kvDB = await OpenKeyValueDatabase(this.services.vault.getVaultName() + "-livesync-kv");
await yieldMicrotask();
await delay(100);
} catch (e) {
this.core.kvDB = undefined!;
this._log("Failed to open KeyValueDB", LOG_LEVEL_NOTICE);
this._log(e, LOG_LEVEL_VERBOSE);
return false;
}
return true;
}
_onDBUnload(db: LiveSyncLocalDB) {
if (this.core.kvDB) this.core.kvDB.close();
return Promise.resolve(true);
}
_onDBClose(db: LiveSyncLocalDB) {
if (this.core.kvDB) this.core.kvDB.close();
return Promise.resolve(true);
}
private async _everyOnloadAfterLoadSettings(): Promise<boolean> {
if (!(await this.openKeyValueDB())) {
return false;
}
this.core.simpleStore = this.services.database.openSimpleStore<any>("os");
return Promise.resolve(true);
}
_getSimpleStore<T>(kind: string) {
const prefix = `${kind}-`;
return {
get: async (key: string): Promise<T> => {
return await this.core.kvDB.get(`${prefix}${key}`);
},
set: async (key: string, value: any): Promise<void> => {
await this.core.kvDB.set(`${prefix}${key}`, value);
},
delete: async (key: string): Promise<void> => {
await this.core.kvDB.del(`${prefix}${key}`);
},
keys: async (
from: string | undefined,
to: string | undefined,
count?: number | undefined
): Promise<string[]> => {
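// Scan the keys within [prefix+from, prefix+to] via an IDBKeyRange, then strip
// the kind prefix so callers only ever see their own namespaced keys.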
const ret = this.core.kvDB.keys(
IDBKeyRange.bound(`${prefix}${from || ""}`, `${prefix}${to || ""}`),
count
);
return (await ret)
.map((e) => e.toString())
.filter((e) => e.startsWith(prefix))
.map((e) => e.substring(prefix.length));
},
};
}
_everyOnInitializeDatabase(db: LiveSyncLocalDB): Promise<boolean> {
return this.openKeyValueDB();
}
async _everyOnResetDatabase(db: LiveSyncLocalDB): Promise<boolean> {
try {
const kvDBKey = "queued-files";
await this.core.kvDB.del(kvDBKey);
// localStorage.removeItem(lsKey);
await this.core.kvDB.destroy();
await yieldMicrotask();
this.core.kvDB = await OpenKeyValueDatabase(this.services.vault.getVaultName() + "-livesync-kv");
await delay(100);
} catch (e) {
this.core.kvDB = undefined!;
this._log("Failed to reset KeyValueDB", LOG_LEVEL_NOTICE);
this._log(e, LOG_LEVEL_VERBOSE);
return false;
}
return true;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.databaseEvents.handleOnUnloadDatabase(this._onDBUnload.bind(this));
services.databaseEvents.handleOnCloseDatabase(this._onDBClose.bind(this));
services.databaseEvents.handleOnDatabaseInitialisation(this._everyOnInitializeDatabase.bind(this));
services.databaseEvents.handleOnResetDatabase(this._everyOnResetDatabase.bind(this));
services.database.handleOpenSimpleStore(this._getSimpleStore.bind(this));
services.appLifecycle.handleOnSettingLoaded(this._everyOnloadAfterLoadSettings.bind(this));
}
}

View File

@@ -0,0 +1,361 @@
import { LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, Logger } from "../../lib/src/common/logger.ts";
import {
EVENT_REQUEST_OPEN_P2P,
EVENT_REQUEST_OPEN_SETTING_WIZARD,
EVENT_REQUEST_OPEN_SETTINGS,
EVENT_REQUEST_RUN_DOCTOR,
EVENT_REQUEST_RUN_FIX_INCOMPLETE,
eventHub,
} from "../../common/events.ts";
import { AbstractModule } from "../AbstractModule.ts";
import { $msg } from "src/lib/src/common/i18n.ts";
import { performDoctorConsultation, RebuildOptions } from "../../lib/src/common/configForDoc.ts";
import { getPath, isValidPath } from "../../common/utils.ts";
import { isMetaEntry } from "../../lib/src/common/types.ts";
import { isDeletedEntry, isDocContentSame, isLoadedEntry, readAsBlob } from "../../lib/src/common/utils.ts";
import { countCompromisedChunks } from "../../lib/src/pouchdb/negotiation.ts";
import type { LiveSyncCore } from "../../main.ts";
import { SetupManager } from "../features/SetupManager.ts";
type ErrorInfo = {
path: string;
recordedSize: number;
actualSize: number;
storageSize: number;
contentMatched: boolean;
isConflicted?: boolean;
};
export class ModuleMigration extends AbstractModule {
async migrateUsingDoctor(skipRebuild: boolean = false, activateReason = "updated", forceRescan = false) {
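// The Doctor may adjust settings and request a rebuild or fetch; when one is
// scheduled, we restart immediately and return false so the caller stops here.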
const { shouldRebuild, shouldRebuildLocal, isModified, settings } = await performDoctorConsultation(
this.core,
this.settings,
{
localRebuild: skipRebuild ? RebuildOptions.SkipEvenIfRequired : RebuildOptions.AutomaticAcceptable,
remoteRebuild: skipRebuild ? RebuildOptions.SkipEvenIfRequired : RebuildOptions.AutomaticAcceptable,
activateReason,
forceRescan,
}
);
if (isModified) {
this.settings = settings;
await this.core.saveSettings();
}
if (!skipRebuild) {
if (shouldRebuild) {
await this.core.rebuilder.scheduleRebuild();
this.services.appLifecycle.performRestart();
return false;
} else if (shouldRebuildLocal) {
await this.core.rebuilder.scheduleFetch();
this.services.appLifecycle.performRestart();
return false;
}
}
return true;
}
async migrateDisableBulkSend() {
if (this.settings.sendChunksBulk) {
this._log($msg("moduleMigration.logBulkSendCorrupted"), LOG_LEVEL_NOTICE);
this.settings.sendChunksBulk = false;
this.settings.sendChunksBulkMaxSize = 1;
await this.saveSettings();
}
}
async initialMessage() {
const manager = this.core.getModule(SetupManager);
return await manager.startOnBoarding();
/*
const message = $msg("moduleMigration.msgInitialSetup", {
URI_DOC: $msg("moduleMigration.docUri"),
});
const USE_SETUP = $msg("moduleMigration.optionHaveSetupUri");
const NEXT = $msg("moduleMigration.optionNoSetupUri");
const ret = await this.core.confirm.askSelectStringDialogue(message, [USE_SETUP, NEXT], {
title: $msg("moduleMigration.titleWelcome"),
defaultAction: USE_SETUP,
});
if (ret === USE_SETUP) {
eventHub.emitEvent(EVENT_REQUEST_OPEN_SETUP_URI);
return false;
} else if (ret == NEXT) {
return true;
}
return false;
*/
}
async askAgainForSetupURI() {
const message = $msg("moduleMigration.msgRecommendSetupUri", { URI_DOC: $msg("moduleMigration.docUri") });
const USE_MINIMAL = $msg("moduleMigration.optionSetupWizard");
const USE_P2P = $msg("moduleMigration.optionSetupViaP2P");
const USE_SETUP = $msg("moduleMigration.optionManualSetup");
const NEXT = $msg("moduleMigration.optionRemindNextLaunch");
const ret = await this.core.confirm.askSelectStringDialogue(message, [USE_MINIMAL, USE_SETUP, USE_P2P, NEXT], {
title: $msg("moduleMigration.titleRecommendSetupUri"),
defaultAction: USE_MINIMAL,
});
if (ret === USE_MINIMAL) {
eventHub.emitEvent(EVENT_REQUEST_OPEN_SETTING_WIZARD);
return false;
}
if (ret === USE_P2P) {
eventHub.emitEvent(EVENT_REQUEST_OPEN_P2P);
return false;
}
if (ret === USE_SETUP) {
eventHub.emitEvent(EVENT_REQUEST_OPEN_SETTINGS);
return false;
} else if (ret == NEXT) {
return false;
}
return false;
}
async hasIncompleteDocs(force: boolean = false): Promise<boolean> {
const incompleteDocsChecked = (await this.core.kvDB.get<boolean>("checkIncompleteDocs")) || false;
if (incompleteDocsChecked && !force) {
this._log("Incomplete docs check already done, skipping.", LOG_LEVEL_VERBOSE);
return Promise.resolve(true);
}
this._log("Checking for incomplete documents...", LOG_LEVEL_NOTICE, "check-incomplete");
const errorFiles = [] as ErrorInfo[];
for await (const metaDoc of this.localDatabase.findAllNormalDocs({ conflicts: true })) {
const path = getPath(metaDoc);
if (!isValidPath(path)) {
continue;
}
if (!(await this.services.vault.isTargetFile(path, true))) {
continue;
}
if (!isMetaEntry(metaDoc)) {
continue;
}
const doc = await this.localDatabase.getDBEntryFromMeta(metaDoc);
if (!doc || !isLoadedEntry(doc)) {
continue;
}
if (isDeletedEntry(doc)) {
continue;
}
const isConflicted = metaDoc?._conflicts && metaDoc._conflicts.length > 0;
let storageFileContent;
try {
storageFileContent = await this.core.storageAccess.readHiddenFileBinary(path);
} catch (e) {
Logger(`Failed to read file ${path}: Possibly unprocessed or missing`);
Logger(e, LOG_LEVEL_VERBOSE);
continue;
}
// const storageFileBlob = createBlob(storageFileContent);
const sizeOnStorage = storageFileContent.byteLength;
const recordedSize = doc.size;
const docBlob = readAsBlob(doc);
const actualSize = docBlob.size;
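// Flag the entry when any of the three sizes disagree (the recorded metadata,
// the materialised DB content, and the file on storage), or a conflict remains.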
if (
recordedSize !== actualSize ||
sizeOnStorage !== actualSize ||
sizeOnStorage !== recordedSize ||
isConflicted
) {
const contentMatched = await isDocContentSame(doc.data, storageFileContent);
errorFiles.push({
path,
recordedSize,
actualSize,
storageSize: sizeOnStorage,
contentMatched,
isConflicted,
});
Logger(
`Size mismatch for ${path}: ${recordedSize} (DB Recorded), ${actualSize} (DB Stored), ${sizeOnStorage} (Storage Stored), ${contentMatched ? "Content Matched" : "Content Mismatched"}, ${isConflicted ? "Conflicted" : "Not Conflicted"}`
);
}
}
if (errorFiles.length == 0) {
Logger("No size mismatches found", LOG_LEVEL_NOTICE);
await this.core.kvDB.set("checkIncompleteDocs", true);
return Promise.resolve(true);
}
Logger(`Found ${errorFiles.length} size mismatches`, LOG_LEVEL_NOTICE);
// We have to repair them according to the following rules and situations:
// A. DB Recorded != DB Stored
// A.1. DB Recorded == Storage Stored
// Possibly recoverable from storage. Just overwrite the DB content with the storage content.
// A.2. Neither matches
// This probably cannot be resolved on this device. Even if the storage content is larger than the DB-recorded size, it is possibly corrupted.
// We do not fix it automatically. Leave it as is; another device can possibly resolve it.
// B. DB Recorded == DB Stored, < Storage Stored
// Very fragile: if the DB-recorded size is less than the size on storage, we can possibly repair the content (the issue was an `unexpectedly shortened file`).
// We do not fix it automatically, but it will be overwritten automatically by another process.
// C. DB Recorded == DB Stored, > Storage Stored
// Probably restored by the user resolving A or B on another device; we should overwrite the storage.
// Also not fixed automatically. It should be overwritten by replication.
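// For example (hypothetical sizes): recorded=100, actual=80, storage=100 falls under
// case A.1 and is recoverable from storage, while recorded=100, actual=80, storage=90
// falls under A.2 and is left untouched for another device to resolve.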
const recoverable = errorFiles.filter((e) => {
return e.recordedSize === e.storageSize && !e.isConflicted;
});
const unrecoverable = errorFiles.filter((e) => {
return e.recordedSize !== e.storageSize || e.isConflicted;
});
const fileInfo = (e: (typeof errorFiles)[0]) => {
return `${e.path} (M: ${e.recordedSize}, A: ${e.actualSize}, S: ${e.storageSize}) ${e.isConflicted ? "(Conflicted)" : ""}`;
};
const messageUnrecoverable =
unrecoverable.length > 0
? $msg("moduleMigration.fix0256.messageUnrecoverable", {
filesNotRecoverable: unrecoverable.map((e) => `- ${fileInfo(e)}`).join("\n"),
})
: "";
const message = $msg("moduleMigration.fix0256.message", {
files: recoverable.map((e) => `- ${fileInfo(e)}`).join("\n"),
messageUnrecoverable,
});
const CHECK_IT_LATER = $msg("moduleMigration.fix0256.buttons.checkItLater");
const FIX = $msg("moduleMigration.fix0256.buttons.fix");
const DISMISS = $msg("moduleMigration.fix0256.buttons.DismissForever");
const ret = await this.core.confirm.askSelectStringDialogue(message, [CHECK_IT_LATER, FIX, DISMISS], {
title: $msg("moduleMigration.fix0256.title"),
defaultAction: CHECK_IT_LATER,
});
if (ret == FIX) {
for (const file of recoverable) {
// Overwrite the database with the files on the storage
const stubFile = this.core.storageAccess.getFileStub(file.path);
if (stubFile == null) {
Logger(`Could not find stub file for ${file.path}`, LOG_LEVEL_NOTICE);
continue;
}
stubFile.stat.mtime = Date.now();
const result = await this.core.fileHandler.storeFileToDB(stubFile, true, false);
if (result) {
Logger(`Successfully restored ${file.path} from storage`);
} else {
Logger(`Failed to restore ${file.path} from storage`, LOG_LEVEL_NOTICE);
}
}
} else if (ret === DISMISS) {
// User chose to dismiss the issue
await this.core.kvDB.set("checkIncompleteDocs", true);
}
return Promise.resolve(true);
}
async hasCompromisedChunks(): Promise<boolean> {
Logger(`Checking for compromised chunks...`, LOG_LEVEL_VERBOSE);
if (!this.settings.encrypt) {
// If not encrypted, we do not need to check for compromised chunks.
return true;
}
// Check local database for compromised chunks
const localCompromised = await countCompromisedChunks(this.localDatabase.localDatabase);
const remote = this.services.replicator.getActiveReplicator();
const remoteCompromised = this.core.managers.networkManager.isOnline
? await remote?.countCompromisedChunks()
: 0;
if (localCompromised === false) {
Logger(`Failed to count compromised chunks in local database`, LOG_LEVEL_NOTICE);
return false;
}
if (remoteCompromised === false) {
Logger(`Failed to count compromised chunks in remote database`, LOG_LEVEL_NOTICE);
return false;
}
if (remoteCompromised === 0 && localCompromised === 0) {
return true;
}
Logger(
`Found compromised chunks : ${localCompromised} in local, ${remoteCompromised} in remote`,
LOG_LEVEL_NOTICE
);
const title = $msg("moduleMigration.insecureChunkExist.title");
const msg = $msg("moduleMigration.insecureChunkExist.message");
const REBUILD = $msg("moduleMigration.insecureChunkExist.buttons.rebuild");
const FETCH = $msg("moduleMigration.insecureChunkExist.buttons.fetch");
const DISMISS = $msg("moduleMigration.insecureChunkExist.buttons.later");
const buttons = [REBUILD, FETCH, DISMISS];
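// Fetching only helps while the remote side is clean: if the remote also holds
// compromised chunks, remove the FETCH option and leave rebuilding as the remedy.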
if (remoteCompromised != 0) {
buttons.splice(buttons.indexOf(FETCH), 1);
}
const result = await this.core.confirm.askSelectStringDialogue(msg, buttons, {
title,
defaultAction: DISMISS,
timeout: 0,
});
if (result === REBUILD) {
// Rebuild the database
await this.core.rebuilder.scheduleRebuild();
this.services.appLifecycle.performRestart();
return false;
} else if (result === FETCH) {
// Fetch the latest data from remote
await this.core.rebuilder.scheduleFetch();
this.services.appLifecycle.performRestart();
return false;
} else {
// User chose to dismiss the issue
this._log($msg("moduleMigration.insecureChunkExist.laterMessage"), LOG_LEVEL_NOTICE);
}
return true;
}
async _everyOnFirstInitialize(): Promise<boolean> {
if (!this.localDatabase.isReady) {
this._log($msg("moduleMigration.logLocalDatabaseNotReady"), LOG_LEVEL_NOTICE);
return false;
}
if (this.settings.isConfigured) {
if (!(await this.hasCompromisedChunks())) {
return false;
}
if (!(await this.hasIncompleteDocs())) {
return false;
}
if (!(await this.migrateUsingDoctor(false))) {
return false;
}
// await this.migrationCheck();
await this.migrateDisableBulkSend();
}
if (!this.settings.isConfigured) {
// if (!(await this.initialMessage()) || !(await this.askAgainForSetupURI())) {
// this._log($msg("moduleMigration.logSetupCancelled"), LOG_LEVEL_NOTICE);
// return false;
// }
if (!(await this.initialMessage())) {
this._log($msg("moduleMigration.logSetupCancelled"), LOG_LEVEL_NOTICE);
return false;
}
if (!(await this.migrateUsingDoctor(true))) {
return false;
}
}
return true;
}
_everyOnLayoutReady(): Promise<boolean> {
eventHub.onEvent(EVENT_REQUEST_RUN_DOCTOR, async (reason) => {
await this.migrateUsingDoctor(false, reason, true);
});
eventHub.onEvent(EVENT_REQUEST_RUN_FIX_INCOMPLETE, async () => {
await this.hasIncompleteDocs(true);
});
return Promise.resolve(true);
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
super.onBindFunction(core, services);
services.appLifecycle.handleLayoutReady(this._everyOnLayoutReady.bind(this));
services.appLifecycle.handleFirstInitialise(this._everyOnFirstInitialize.bind(this));
}
}

View File

@@ -1,17 +1,14 @@
// This file is based on a file that was published by the @remotely-save, under the Apache 2 License.
// I would love to express my deepest gratitude to the original authors for their hard work and dedication. Without their contributions, this project would not have been possible.
//
//
// Original Implementation is here: https://github.com/remotely-save/remotely-save/blob/28b99557a864ef59c19d2ad96101196e401718f0/src/remoteForS3.ts
import {
FetchHttpHandler,
type FetchHttpHandlerOptions,
} from "@smithy/fetch-http-handler";
import { FetchHttpHandler, type FetchHttpHandlerOptions } from "@smithy/fetch-http-handler";
import { HttpRequest, HttpResponse, type HttpHandlerOptions } from "@smithy/protocol-http";
//@ts-ignore
import { requestTimeout } from "@smithy/fetch-http-handler/dist-es/request-timeout";
import { buildQueryString } from "@smithy/querystring-builder";
import { requestUrl, type RequestUrlParam } from "../deps.ts";
import { requestUrl, type RequestUrlParam } from "../../../deps.ts";
////////////////////////////////////////////////////////////////////////////////
// special handler using Obsidian requestUrl
////////////////////////////////////////////////////////////////////////////////
@@ -25,20 +22,13 @@ import { requestUrl, type RequestUrlParam } from "../deps.ts";
export class ObsHttpHandler extends FetchHttpHandler {
requestTimeoutInMs: number | undefined;
reverseProxyNoSignUrl: string | undefined;
constructor(
options?: FetchHttpHandlerOptions,
reverseProxyNoSignUrl?: string
) {
constructor(options?: FetchHttpHandlerOptions, reverseProxyNoSignUrl?: string) {
super(options);
this.requestTimeoutInMs =
options === undefined ? undefined : options.requestTimeout;
this.requestTimeoutInMs = options === undefined ? undefined : options.requestTimeout;
this.reverseProxyNoSignUrl = reverseProxyNoSignUrl;
}
// eslint-disable-next-line require-await
async handle(
request: HttpRequest,
{ abortSignal }: HttpHandlerOptions = {}
): Promise<{ response: HttpResponse }> {
async handle(request: HttpRequest, { abortSignal }: HttpHandlerOptions = {}): Promise<{ response: HttpResponse }> {
if (abortSignal?.aborted) {
const abortError = new Error("Request aborted");
abortError.name = "AbortError";
@@ -54,18 +44,13 @@ export class ObsHttpHandler extends FetchHttpHandler {
}
const { port, method } = request;
let url = `${request.protocol}//${request.hostname}${port ? `:${port}` : ""
}${path}`;
if (
this.reverseProxyNoSignUrl !== undefined &&
this.reverseProxyNoSignUrl !== ""
) {
let url = `${request.protocol}//${request.hostname}${port ? `:${port}` : ""}${path}`;
if (this.reverseProxyNoSignUrl !== undefined && this.reverseProxyNoSignUrl !== "") {
const urlObj = new URL(url);
urlObj.host = this.reverseProxyNoSignUrl;
url = urlObj.href;
}
const body =
method === "GET" || method === "HEAD" ? undefined : request.body;
const body = method === "GET" || method === "HEAD" ? undefined : request.body;
const transformedHeaders: Record<string, string> = {};
for (const key of Object.keys(request.headers)) {

View File

@@ -0,0 +1,292 @@
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { LOG_LEVEL_DEBUG, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { Notice, requestUrl, type RequestUrlParam, type RequestUrlResponse } from "../../deps.ts";
import { type CouchDBCredentials, type EntryDoc, type FilePath } from "../../lib/src/common/types.ts";
import { getPathFromTFile } from "../../common/utils.ts";
import { isCloudantURI, isValidRemoteCouchDBURI } from "../../lib/src/pouchdb/utils_couchdb.ts";
import { replicationFilter } from "@/lib/src/pouchdb/compress.ts";
import { disableEncryption, enableEncryption } from "@/lib/src/pouchdb/encryption.ts";
import { setNoticeClass } from "../../lib/src/mock_and_interop/wrapper.ts";
import { ObsHttpHandler } from "./APILib/ObsHttpHandler.ts";
import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser.ts";
import { AuthorizationHeaderGenerator } from "../../lib/src/replication/httplib.ts";
import type { LiveSyncCore } from "../../main.ts";
setNoticeClass(Notice);
async function fetchByAPI(request: RequestUrlParam, errorAsResult = false): Promise<RequestUrlResponse> {
const ret = await requestUrl({ ...request, throw: !errorAsResult });
return ret;
}
export class ModuleObsidianAPI extends AbstractObsidianModule {
_customHandler!: ObsHttpHandler;
_authHeader = new AuthorizationHeaderGenerator();
last_successful_post = false;
_customFetchHandler(): ObsHttpHandler {
if (!this._customHandler) this._customHandler = new ObsHttpHandler(undefined, undefined);
return this._customHandler;
}
_getLastPostFailedBySize(): boolean {
return !this.last_successful_post;
}
async __fetchByAPI(url: string, authHeader: string, opts?: RequestInit): Promise<Response> {
const body = opts?.body as string;
const transformedHeaders = { ...(opts?.headers as Record<string, string>) };
if (authHeader != "") transformedHeaders["authorization"] = authHeader;
delete transformedHeaders["host"];
delete transformedHeaders["Host"];
delete transformedHeaders["content-length"];
delete transformedHeaders["Content-Length"];
const requestParam: RequestUrlParam = {
url,
method: opts?.method,
body: body,
headers: transformedHeaders,
contentType:
transformedHeaders?.["content-type"] ?? transformedHeaders?.["Content-Type"] ?? "application/json",
};
const r = await fetchByAPI(requestParam, true);
return new Response(r.arrayBuffer, {
headers: r.headers,
status: r.status,
statusText: `${r.status}`,
});
}
async fetchByAPI(
url: string,
localURL: string,
method: string,
authHeader: string,
opts?: RequestInit
): Promise<Response> {
const body = opts?.body as string;
const size = body ? ` (${body.length})` : "";
try {
const r = await this.__fetchByAPI(url, authHeader, opts);
this.plugin.requestCount.value = this.plugin.requestCount.value + 1;
if (method == "POST" || method == "PUT") {
this.last_successful_post = r.status - (r.status % 100) == 200;
} else {
this.last_successful_post = true;
}
this._log(`HTTP:${method}${size} to:${localURL} -> ${r.status}`, LOG_LEVEL_DEBUG);
return r;
} catch (ex) {
this._log(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL_VERBOSE);
// limit only in bulk_docs.
if (url.toString().indexOf("_bulk_docs") !== -1) {
this.last_successful_post = false;
}
this._log(ex);
throw ex;
} finally {
this.plugin.responseCount.value = this.plugin.responseCount.value + 1;
}
}
async _connectRemoteCouchDB(
uri: string,
auth: CouchDBCredentials,
disableRequestURI: boolean,
passphrase: string | false,
useDynamicIterationCount: boolean,
performSetup: boolean,
skipInfo: boolean,
compression: boolean,
customHeaders: Record<string, string>,
useRequestAPI: boolean,
getPBKDF2Salt: () => Promise<Uint8Array<ArrayBuffer>>
): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> {
if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
if (uri.toLowerCase() != uri) return "The remote URI and database name must not contain capital letters.";
if (uri.indexOf(" ") !== -1) return "The remote URI and database name must not contain spaces.";
if (!this.core.managers.networkManager.isOnline) {
return "Network is offline";
}
// let authHeader = await this._authHeader.getAuthorizationHeader(auth);
const conf: PouchDB.HttpAdapter.HttpAdapterConfiguration = {
adapter: "http",
auth: "username" in auth ? auth : undefined,
skip_setup: !performSetup,
fetch: async (url: string | Request, opts?: RequestInit) => {
const authHeader = await this._authHeader.getAuthorizationHeader(auth);
let size = "";
const localURL = url.toString().substring(uri.length);
const method = opts?.method ?? "GET";
if (opts?.body) {
const opts_length = opts.body.toString().length;
if (opts_length > 1000 * 1000 * 10) {
// over 10MB
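// Requests of this size are treated as certain to fail on IBM Cloudant,
// so fail fast instead of sending the payload.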
if (isCloudantURI(uri)) {
this.last_successful_post = false;
this._log("This request should fail on IBM Cloudant.", LOG_LEVEL_VERBOSE);
throw new Error("This request should fail on IBM Cloudant.");
}
}
size = ` (${opts_length})`;
}
try {
const headers = new Headers(opts?.headers);
if (customHeaders) {
for (const [key, value] of Object.entries(customHeaders)) {
if (key && value) {
headers.append(key, value);
}
}
}
if (!("username" in auth)) {
headers.append("authorization", authHeader);
}
try {
this.plugin.requestCount.value = this.plugin.requestCount.value + 1;
const response: Response = await (useRequestAPI
? this.__fetchByAPI(url.toString(), authHeader, { ...opts, headers })
: fetch(url, { ...opts, headers }));
if (method == "POST" || method == "PUT") {
this.last_successful_post = response.ok;
} else {
this.last_successful_post = true;
}
this._log(`HTTP:${method}${size} to:${localURL} -> ${response.status}`, LOG_LEVEL_DEBUG);
if (Math.floor(response.status / 100) !== 2) {
if (response.status == 404) {
if (method === "GET" && localURL.indexOf("/_local/") === -1) {
this._log(
`Just a checkpoint or some server information is missing. The 404 error shown above is not an error.`,
LOG_LEVEL_VERBOSE
);
}
} else {
const r = response.clone();
this._log(
`The request may have failed. The reason sent by the server: ${r.status}: ${r.statusText}`,
LOG_LEVEL_NOTICE
);
try {
const result = await r.text();
this._log(result, LOG_LEVEL_VERBOSE);
} catch (_) {
this._log("Cloud not fetch response body", LOG_LEVEL_VERBOSE);
this._log(_, LOG_LEVEL_VERBOSE);
}
}
}
return response;
} catch (ex) {
if (ex instanceof TypeError) {
if (useRequestAPI) {
this._log("Failed to request by API.");
throw ex;
}
this._log(
"Failed to fetch by native fetch API. Trying to fetch by API to get more information."
);
const resp2 = await this.fetchByAPI(url.toString(), localURL, method, authHeader, {
...opts,
headers,
});
if (Math.floor(resp2.status / 100) == 2) {
this._log(
"The request was successful by API. But the native fetch API failed! Please check CORS settings on the remote database!. While this condition, you cannot enable LiveSync",
LOG_LEVEL_NOTICE
);
return resp2;
}
const r2 = resp2.clone();
const msg = await r2.text();
this._log(`Failed to fetch by API. ${resp2.status}: ${msg}`, LOG_LEVEL_NOTICE);
return resp2;
}
throw ex;
}
} catch (ex: any) {
this._log(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL_VERBOSE);
const msg = ex instanceof Error ? `${ex?.name}:${ex?.message}` : ex?.toString();
this._log(`Failed to fetch: ${msg}`, LOG_LEVEL_NOTICE);
this._log(ex, LOG_LEVEL_VERBOSE);
// limit only in bulk_docs.
if (url.toString().indexOf("_bulk_docs") !== -1) {
this.last_successful_post = false;
}
this._log(ex);
throw ex;
} finally {
this.plugin.responseCount.value = this.plugin.responseCount.value + 1;
}
// return await fetch(url, opts);
},
};
const db: PouchDB.Database<EntryDoc> = new PouchDB<EntryDoc>(uri, conf);
replicationFilter(db, compression);
disableEncryption();
if (passphrase !== "false" && typeof passphrase === "string") {
enableEncryption(
db,
passphrase,
useDynamicIterationCount,
false,
getPBKDF2Salt,
this.settings.E2EEAlgorithm
);
}
if (skipInfo) {
return { db: db, info: { db_name: "", doc_count: 0, update_seq: "" } };
}
try {
const info = await db.info();
return { db: db, info: info };
} catch (ex: any) {
const msg = `${ex?.name}:${ex?.message}`;
this._log(ex, LOG_LEVEL_VERBOSE);
return msg;
}
}
_isMobile(): boolean {
//@ts-ignore : internal API
return this.app.isMobile;
}
_vaultName(): string {
return this.app.vault.getName();
}
_getVaultName(): string {
return (
this.services.vault.vaultName() +
(this.settings?.additionalSuffixOfDatabaseName ? "-" + this.settings.additionalSuffixOfDatabaseName : "")
);
}
_getActiveFilePath(): FilePath | undefined {
const file = this.app.workspace.getActiveFile();
if (file) {
return getPathFromTFile(file);
}
return undefined;
}
_anyGetAppId(): string {
return `${"appId" in this.app ? this.app.appId : ""}`;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services) {
services.API.handleGetCustomFetchHandler(this._customFetchHandler.bind(this));
services.API.handleIsLastPostFailedDueToPayloadSize(this._getLastPostFailedBySize.bind(this));
services.remote.handleConnect(this._connectRemoteCouchDB.bind(this));
services.API.handleIsMobile(this._isMobile.bind(this));
services.vault.handleGetVaultName(this._getVaultName.bind(this));
services.vault.handleVaultName(this._vaultName.bind(this));
services.vault.handleGetActiveFilePath(this._getActiveFilePath.bind(this));
services.API.handleGetAppID(this._anyGetAppId.bind(this));
}
}

View File

@@ -0,0 +1,248 @@
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { EVENT_FILE_RENAMED, EVENT_LEAF_ACTIVE_CHANGED, eventHub } from "../../common/events.js";
import { LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { scheduleTask } from "octagonal-wheels/concurrency/task";
import { type TFile } from "../../deps.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import { type FilePathWithPrefix } from "../../lib/src/common/types.ts";
import { reactive, reactiveSource } from "octagonal-wheels/dataobject/reactive";
import {
collectingChunks,
pluginScanningCount,
hiddenFilesEventCount,
hiddenFilesProcessingCount,
} from "../../lib/src/mock_and_interop/stores.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleObsidianEvents extends AbstractObsidianModule {
_everyOnloadStart(): Promise<boolean> {
// this.registerEvent(this.app.workspace.on("editor-change", ));
this.plugin.registerEvent(
this.app.vault.on("rename", (file, oldPath) => {
eventHub.emitEvent(EVENT_FILE_RENAMED, {
newPath: file.path as FilePathWithPrefix,
old: oldPath as FilePathWithPrefix,
});
})
);
this.plugin.registerEvent(
this.app.workspace.on("active-leaf-change", () => eventHub.emitEvent(EVENT_LEAF_ACTIVE_CHANGED))
);
return Promise.resolve(true);
}
private _performRestart(): void {
this.__performAppReload();
}
__performAppReload() {
//@ts-ignore
this.app.commands.executeCommandById("app:reload");
}
initialCallback: any;
swapSaveCommand() {
this._log("Modifying callback of the save command", LOG_LEVEL_VERBOSE);
const saveCommandDefinition = (this.app as any).commands?.commands?.["editor:save-file"];
const save = saveCommandDefinition?.callback;
if (typeof save === "function") {
this.initialCallback = save;
saveCommandDefinition.callback = () => {
scheduleTask("syncOnEditorSave", 250, () => {
if (this.services.appLifecycle.hasUnloaded()) {
this._log("Unload and remove the handler.", LOG_LEVEL_VERBOSE);
saveCommandDefinition.callback = this.initialCallback;
this.initialCallback = undefined;
} else {
if (this.settings.syncOnEditorSave) {
this._log("Sync on Editor Save.", LOG_LEVEL_VERBOSE);
fireAndForget(() => this.services.replication.replicateByEvent());
}
}
});
save();
};
}
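// Also route CodeMirror's own save command through the (now wrapped)
// "editor:save-file" command, so editor-level save shortcuts hit the same hook.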
// eslint-disable-next-line @typescript-eslint/no-this-alias
const _this = this;
//@ts-ignore
window.CodeMirrorAdapter.commands.save = () => {
//@ts-ignore
_this.app.commands.executeCommandById("editor:save-file");
// _this.app.performCommand('editor:save-file');
};
}
registerWatchEvents() {
this.setHasFocus = this.setHasFocus.bind(this);
this.watchWindowVisibility = this.watchWindowVisibility.bind(this);
this.watchWorkspaceOpen = this.watchWorkspaceOpen.bind(this);
this.watchOnline = this.watchOnline.bind(this);
this.plugin.registerEvent(this.app.workspace.on("file-open", this.watchWorkspaceOpen));
this.plugin.registerDomEvent(document, "visibilitychange", this.watchWindowVisibility);
this.plugin.registerDomEvent(window, "focus", () => this.setHasFocus(true));
this.plugin.registerDomEvent(window, "blur", () => this.setHasFocus(false));
this.plugin.registerDomEvent(window, "online", this.watchOnline);
this.plugin.registerDomEvent(window, "offline", this.watchOnline);
}
hasFocus = true;
isLastHidden = false;
setHasFocus(hasFocus: boolean) {
this.hasFocus = hasFocus;
this.watchWindowVisibility();
}
watchWindowVisibility() {
scheduleTask("watch-window-visibility", 100, () => fireAndForget(() => this.watchWindowVisibilityAsync()));
}
watchOnline() {
scheduleTask("watch-online", 500, () => fireAndForget(() => this.watchOnlineAsync()));
}
async watchOnlineAsync() {
// If some files failed to be retrieved, scan the files again.
// TODO/FIXME: As of v0.17.31, this logic has been disabled.
if (navigator.onLine && this.localDatabase.needScanning) {
this.localDatabase.needScanning = false;
await this.services.vault.scanVault();
}
}
async watchWindowVisibilityAsync() {
if (this.settings.suspendFileWatching) return;
if (!this.settings.isConfigured) return;
if (!this.services.appLifecycle.isReady()) return;
if (this.isLastHidden && !this.hasFocus) {
// No-op while unfocused after being hidden.
return;
}
const isHidden = document.hidden;
if (this.isLastHidden === isHidden) {
return;
}
this.isLastHidden = isHidden;
await this.services.fileProcessing.commitPendingFileEvents();
if (isHidden) {
await this.services.appLifecycle.onSuspending();
} else {
// Suspend everything temporarily.
if (this.services.appLifecycle.isSuspended()) return;
if (!this.hasFocus) return;
await this.services.appLifecycle.onResuming();
await this.services.appLifecycle.onResumed();
}
}
watchWorkspaceOpen(file: TFile | null) {
if (this.settings.suspendFileWatching) return;
if (!this.settings.isConfigured) return;
if (!this.services.appLifecycle.isReady()) return;
if (!file) return;
scheduleTask("watch-workspace-open", 500, () => fireAndForget(() => this.watchWorkspaceOpenAsync(file)));
}
async watchWorkspaceOpenAsync(file: TFile) {
if (this.settings.suspendFileWatching) return;
if (!this.settings.isConfigured) return;
if (!this.services.appLifecycle.isReady()) return;
await this.services.fileProcessing.commitPendingFileEvents();
if (file == null) {
return;
}
if (this.settings.syncOnFileOpen && !this.services.appLifecycle.isSuspended()) {
await this.services.replication.replicateByEvent();
}
await this.services.conflict.queueCheckForIfOpen(file.path as FilePathWithPrefix);
}
_everyOnLayoutReady(): Promise<boolean> {
this.swapSaveCommand();
this.registerWatchEvents();
return Promise.resolve(true);
}
private _askReload(message?: string) {
if (this.services.appLifecycle.isReloadingScheduled()) {
this._log(`Reloading is already scheduled`, LOG_LEVEL_VERBOSE);
return;
}
scheduleTask("configReload", 250, async () => {
const RESTART_NOW = "Yes, restart immediately";
const RESTART_AFTER_STABLE = "Yes, schedule a restart after stabilisation";
const RETRY_LATER = "No, Leave it to me";
const ret = await this.core.confirm.askSelectStringDialogue(
message || "Do you want to restart and reload Obsidian now?",
[RESTART_AFTER_STABLE, RESTART_NOW, RETRY_LATER],
{ defaultAction: RETRY_LATER }
);
if (ret == RESTART_NOW) {
this.__performAppReload();
} else if (ret == RESTART_AFTER_STABLE) {
this.services.appLifecycle.scheduleRestart();
}
});
}
private _scheduleAppReload() {
if (!this.core._totalProcessingCount) {
const __tick = reactiveSource(0);
this.core._totalProcessingCount = reactive(() => {
const dbCount = this.core.databaseQueueCount.value;
const replicationCount = this.core.replicationResultCount.value;
const storageApplyingCount = this.core.storageApplyingCount.value;
const chunkCount = collectingChunks.value;
const pluginScanCount = pluginScanningCount.value;
const hiddenFilesCount = hiddenFilesEventCount.value + hiddenFilesProcessingCount.value;
const conflictProcessCount = this.core.conflictProcessQueueCount.value;
const e = this.core.pendingFileEventCount.value;
const proc = this.core.processingFileEventCount.value;
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const __ = __tick.value;
return (
dbCount +
replicationCount +
storageApplyingCount +
chunkCount +
pluginScanCount +
hiddenFilesCount +
conflictProcessCount +
e +
proc
);
});
this.plugin.registerInterval(
setInterval(() => {
__tick.value++;
}, 1000) as unknown as number
);
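// Restart only once the aggregate processing count has stayed at zero for a few
// consecutive one-second ticks, so a momentary lull between bursts of work does
// not trigger a premature reload.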
let stableCheck = 3;
this.core._totalProcessingCount.onChanged((e) => {
if (e.value == 0) {
if (stableCheck-- <= 0) {
this.__performAppReload();
}
this._log(
`Obsidian will be restarted soon! (Within ${stableCheck} seconds)`,
LOG_LEVEL_NOTICE,
"restart-notice"
);
} else {
stableCheck = 3;
}
});
}
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleLayoutReady(this._everyOnLayoutReady.bind(this));
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
services.appLifecycle.handlePerformRestart(this._performRestart.bind(this));
services.appLifecycle.handleAskRestart(this._askReload.bind(this));
services.appLifecycle.handleScheduleRestart(this._scheduleAppReload.bind(this));
}
}

View File

@@ -0,0 +1,138 @@
import { fireAndForget } from "octagonal-wheels/promises";
import { addIcon, type Editor, type MarkdownFileInfo, type MarkdownView } from "../../deps.ts";
import { LOG_LEVEL_NOTICE, type FilePathWithPrefix } from "../../lib/src/common/types.ts";
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { $msg } from "src/lib/src/common/i18n.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleObsidianMenu extends AbstractObsidianModule {
_everyOnloadStart(): Promise<boolean> {
// UI
addIcon(
"replicate",
`<g transform="matrix(1.15 0 0 1.15 -8.31 -9.52)" fill="currentColor" fill-rule="evenodd">
<path d="m85 22.2c-0.799-4.74-4.99-8.37-9.88-8.37-0.499 0-1.1 0.101-1.6 0.101-2.4-3.03-6.09-4.94-10.3-4.94-6.09 0-11.2 4.14-12.8 9.79-5.59 1.11-9.78 6.05-9.78 12 0 6.76 5.39 12.2 12 12.2h29.9c5.79 0 10.1-4.74 10.1-10.6 0-4.84-3.29-8.88-7.68-10.2zm-2.99 14.7h-29.5c-2.3-0.202-4.29-1.51-5.29-3.53-0.899-2.12-0.699-4.54 0.698-6.46 1.2-1.61 2.99-2.52 4.89-2.52 0.299 0 0.698 0 0.998 0.101l1.8 0.303v-2.02c0-3.63 2.4-6.76 5.89-7.57 0.599-0.101 1.2-0.202 1.8-0.202 2.89 0 5.49 1.62 6.79 4.24l0.598 1.21 1.3-0.504c0.599-0.202 1.3-0.303 2-0.303 1.3 0 2.5 0.404 3.59 1.11 1.6 1.21 2.6 3.13 2.6 5.15v1.61h2c2.6 0 4.69 2.12 4.69 4.74-0.099 2.52-2.2 4.64-4.79 4.64z"/>
<path d="m53.2 49.2h-41.6c-1.8 0-3.2 1.4-3.2 3.2v28.6c0 1.8 1.4 3.2 3.2 3.2h15.8v4h-7v6h24v-6h-7v-4h15.8c1.8 0 3.2-1.4 3.2-3.2v-28.6c0-1.8-1.4-3.2-3.2-3.2zm-2.8 29h-36v-23h36z"/>
<path d="m73 49.2c1.02 1.29 1.53 2.97 1.53 4.56 0 2.97-1.74 5.65-4.39 7.04v-4.06l-7.46 7.33 7.46 7.14v-4.06c7.66-1.98 12.2-9.61 10-17-0.102-0.297-0.205-0.595-0.307-0.892z"/>
<path d="m24.1 43c-0.817-0.991-1.53-2.97-1.53-4.56 0-2.97 1.74-5.65 4.39-7.04v4.06l7.46-7.33-7.46-7.14v4.06c-7.66 1.98-12.2 9.61-10 17 0.102 0.297 0.205 0.595 0.307 0.892z"/>
</g>`
);
this.addRibbonIcon("replicate", $msg("moduleObsidianMenu.replicate"), async () => {
await this.services.replication.replicate(true);
}).addClass("livesync-ribbon-replicate");
this.addCommand({
id: "livesync-replicate",
name: "Replicate now",
callback: async () => {
await this.services.replication.replicate();
},
});
this.addCommand({
id: "livesync-dump",
name: "Dump information of this doc ",
callback: () => {
const file = this.services.vault.getActiveFilePath();
if (!file) return;
fireAndForget(() => this.localDatabase.getDBEntry(file, {}, true, false));
},
});
this.addCommand({
id: "livesync-checkdoc-conflicted",
name: "Resolve if conflicted.",
editorCallback: (editor: Editor, view: MarkdownView | MarkdownFileInfo) => {
const file = view.file;
if (!file) return;
void this.services.conflict.queueCheckForIfOpen(file.path as FilePathWithPrefix);
},
});
this.addCommand({
id: "livesync-toggle",
name: "Toggle LiveSync",
callback: async () => {
if (this.settings.liveSync) {
this.settings.liveSync = false;
this._log("LiveSync Disabled.", LOG_LEVEL_NOTICE);
} else {
this.settings.liveSync = true;
this._log("LiveSync Enabled.", LOG_LEVEL_NOTICE);
}
await this.services.setting.realiseSetting();
await this.services.setting.saveSettingData();
},
});
this.addCommand({
id: "livesync-suspendall",
name: "Toggle All Sync.",
callback: async () => {
if (this.services.appLifecycle.isSuspended()) {
this.services.appLifecycle.setSuspended(false);
this._log("Self-hosted LiveSync resumed", LOG_LEVEL_NOTICE);
} else {
this.services.appLifecycle.setSuspended(true);
this._log("Self-hosted LiveSync suspended", LOG_LEVEL_NOTICE);
}
await this.services.setting.realiseSetting();
await this.services.setting.saveSettingData();
},
});
this.addCommand({
id: "livesync-scan-files",
name: "Scan storage and database again",
callback: async () => {
await this.services.vault.scanVault(true);
},
});
this.addCommand({
id: "livesync-runbatch",
name: "Run pended batch processes",
callback: async () => {
await this.services.fileProcessing.commitPendingFileEvents();
},
});
// TODO: The replicator is possibly one of the features; it should be moved to features.
this.addCommand({
id: "livesync-abortsync",
name: "Abort synchronization immediately",
callback: () => {
this.core.replicator.terminateSync();
},
});
return Promise.resolve(true);
}
private __onWorkspaceReady() {
void this.services.appLifecycle.onReady();
}
private _everyOnload(): Promise<boolean> {
this.app.workspace.onLayoutReady(this.__onWorkspaceReady.bind(this));
return Promise.resolve(true);
}
private async _showView(viewType: string) {
const leaves = this.app.workspace.getLeavesOfType(viewType);
if (leaves.length == 0) {
await this.app.workspace.getLeaf(true).setViewState({
type: viewType,
active: true,
});
} else {
await leaves[0].setViewState({
type: viewType,
active: true,
});
}
if (leaves.length > 0) {
await this.app.workspace.revealLeaf(leaves[0]);
}
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
services.appLifecycle.handleOnLoaded(this._everyOnload.bind(this));
services.API.handleShowWindow(this._showView.bind(this));
}
}

View File

@@ -0,0 +1,18 @@
import type { LiveSyncCore } from "../../main.ts";
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
export class ModuleExtraSyncObsidian extends AbstractObsidianModule {
deviceAndVaultName: string = "";
_getDeviceAndVaultName(): string {
return this.deviceAndVaultName;
}
_setDeviceAndVaultName(name: string): void {
this.deviceAndVaultName = name;
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.setting.handleGetDeviceAndVaultName(this._getDeviceAndVaultName.bind(this));
services.setting.handleSetDeviceAndVaultName(this._setDeviceAndVaultName.bind(this));
}
}

View File

@@ -0,0 +1,166 @@
import { delay, fireAndForget } from "octagonal-wheels/promises";
import { __onMissingTranslation } from "../../lib/src/common/i18n";
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { eventHub } from "../../common/events";
import { enableTestFunction } from "./devUtil/testUtils.ts";
import { TestPaneView, VIEW_TYPE_TEST } from "./devUtil/TestPaneView.ts";
import { writable } from "svelte/store";
import type { FilePathWithPrefix } from "../../lib/src/common/types.ts";
import type { LiveSyncCore } from "../../main.ts";
export class ModuleDev extends AbstractObsidianModule {
_everyOnloadStart(): Promise<boolean> {
__onMissingTranslation(() => {});
return Promise.resolve(true);
}
async onMissingTranslation(key: string): Promise<void> {
const now = new Date();
const filename = `missing-translation-`;
const time = now.toISOString().split("T")[0];
const outFile = `${filename}${time}.jsonl`;
const piece = JSON.stringify({
[key]: {},
});
const writePiece = piece.substring(1, piece.length - 1) + ",";
try {
await this.core.storageAccess.ensureDir(this.app.vault.configDir + "/ls-debug/");
await this.core.storageAccess.appendHiddenFile(
this.app.vault.configDir + "/ls-debug/" + outFile,
writePiece + "\n"
);
} catch (ex) {
this._log(`Could not write ${outFile}`, LOG_LEVEL_VERBOSE);
this._log(`Missing translation: ${writePiece}`, LOG_LEVEL_VERBOSE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
}
private _everyOnloadAfterLoadSettings(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
this.onMissingTranslation = this.onMissingTranslation.bind(this);
__onMissingTranslation((key) => {
void this.onMissingTranslation(key);
});
type STUB = {
toc: Set<string>;
stub: { [key: string]: { [key: string]: Map<string, Record<string, string>> } };
};
eventHub.onEvent("document-stub-created", (detail: STUB) => {
fireAndForget(async () => {
const stub = detail.stub;
const toc = detail.toc;
const stubDocX = Object.entries(stub)
.map(([key, value]) => {
return [
`## ${key}`,
Object.entries(value)
.map(([key2, value2]) => {
return [
`### ${key2}`,
[...value2.entries()].map(([key3, value3]) => {
// return `#### ${key3}` + "\n" + JSON.stringify(value3);
const isObsolete = value3["is_obsolete"] ? " (obsolete)" : "";
const desc = value3["desc"] ?? "";
const key = value3["key"] ? "Setting key: " + value3["key"] + "\n" : "";
return `#### ${key3}${isObsolete}\n${key}${desc}\n`;
}),
].flat();
})
.flat(),
].flat();
})
.flat();
const stubDocMD =
`
| Icon | Description |
| :---: | ----------------------------------------------------------------- |
` +
[...toc.values()].map((e) => `${e}`).join("\n") +
"\n\n" +
stubDocX.join("\n");
await this.core.storageAccess.writeHiddenFileAuto(
this.app.vault.configDir + "/ls-debug/stub-doc.md",
stubDocMD
);
});
});
enableTestFunction(this.plugin);
this.registerView(VIEW_TYPE_TEST, (leaf) => new TestPaneView(leaf, this.plugin, this));
this.addCommand({
id: "view-test",
name: "Open Test dialogue",
callback: () => {
void this.services.API.showWindow(VIEW_TYPE_TEST);
},
});
return Promise.resolve(true);
}
async _everyOnLayoutReady(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
// if (await this.core.storageAccess.isExistsIncludeHidden("_SHOWDIALOGAUTO.md")) {
// void this.core.$$showView(VIEW_TYPE_TEST);
// }
this.addCommand({
id: "test-create-conflict",
name: "Create conflict",
callback: async () => {
const filename = "test-create-conflict.md";
const content = `# Test create conflict\n\n`;
const w = await this.core.databaseFileAccess.store({
name: filename as FilePathWithPrefix,
path: filename as FilePathWithPrefix,
body: new Blob([content], { type: "text/markdown" }),
stat: {
ctime: new Date().getTime(),
mtime: new Date().getTime(),
size: content.length,
type: "file",
},
});
if (w) {
const id = await this.services.path.path2id(filename as FilePathWithPrefix);
const f = await this.core.localDatabase.getRaw(id);
console.log(f);
console.log(f._rev);
const revConflict = f._rev.split("-")[0] + "-" + (parseInt(f._rev.split("-")[1]) + 1).toString();
console.log(await this.core.localDatabase.bulkDocsRaw([f], { new_edits: false }));
console.log(
await this.core.localDatabase.bulkDocsRaw([{ ...f, _rev: revConflict }], { new_edits: false })
);
}
},
});
await delay(1);
return true;
}
testResults = writable<[boolean, string, string][]>([]);
// testResults: string[] = [];
private _addTestResult(name: string, key: string, result: boolean, summary?: string, message?: string): void {
const logLine = `${name}: ${key} ${summary ?? ""}`;
this.testResults.update((results) => {
results.push([result, logLine, message ?? ""]);
return results;
});
}
private _everyModuleTest(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
// this.core.$$addTestResult("DevModule", "Test", true);
// return Promise.resolve(true);
// this.addTestResult("Test of test1", true, "Just OK", "This is a test of test");
// this.addTestResult("Test of test2", true, "Just OK?");
// this.addTestResult("Test of test3", true);
return this.testDone();
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleLayoutReady(this._everyOnLayoutReady.bind(this));
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
services.appLifecycle.handleOnSettingLoaded(this._everyOnloadAfterLoadSettings.bind(this));
services.test.handleTest(this._everyModuleTest.bind(this));
services.test.handleAddTestResult(this._addTestResult.bind(this));
}
}

View File

@@ -0,0 +1,446 @@
import { delay } from "octagonal-wheels/promises";
import { LOG_LEVEL_NOTICE, REMOTE_MINIO, type FilePathWithPrefix } from "src/lib/src/common/types";
import { shareRunningResult } from "octagonal-wheels/concurrency/lock";
import { AbstractObsidianModule } from "../AbstractObsidianModule";
export class ModuleIntegratedTest extends AbstractObsidianModule {
async waitFor(proc: () => Promise<boolean>, timeout = 10000): Promise<boolean> {
await delay(100);
const start = Date.now();
while (!(await proc())) {
if (timeout > 0) {
if (Date.now() - start > timeout) {
this._log(`Timeout`);
return false;
}
}
await delay(500);
}
return true;
}
waitWithReplicating(proc: () => Promise<boolean>, timeout = 10000): Promise<boolean> {
return this.waitFor(async () => {
await this.tryReplicate();
return await proc();
}, timeout);
}
async storageContentIsEqual(file: string, content: string): Promise<boolean> {
try {
const fileContent = await this.readStorageContent(file as FilePathWithPrefix);
if (fileContent === content) {
return true;
} else {
// this._log(`Content is not same \n Expected:${content}\n Actual:${fileContent}`, LOG_LEVEL_VERBOSE);
return false;
}
} catch (e) {
this._log(`Error: ${e}`);
return false;
}
}
async assert(proc: () => Promise<boolean>): Promise<boolean> {
if (!(await proc())) {
this._log(`Assertion failed`);
return false;
}
return true;
}
async __orDie(key: string, proc: () => Promise<boolean>): Promise<true> | never {
if (!(await this._test(key, proc))) {
throw new Error(`${key}`);
}
return true;
}
tryReplicate() {
if (!this.settings.liveSync) {
return shareRunningResult("replicate-test", async () => {
await this.services.replication.replicate();
});
}
}
async readStorageContent(file: FilePathWithPrefix): Promise<string | undefined> {
if (!(await this.core.storageAccess.isExistsIncludeHidden(file))) {
return undefined;
}
return await this.core.storageAccess.readHiddenFileText(file);
}
async __proceed(no: number, title: string): Promise<boolean> {
const stepFile = "_STEP.md" as FilePathWithPrefix;
const stepAckFile = "_STEP_ACK.md" as FilePathWithPrefix;
const stepContent = `Step ${no}`;
await this.services.conflict.resolveByNewest(stepFile);
await this.core.storageAccess.writeFileAuto(stepFile, stepContent);
await this.__orDie(`Wait for acknowledge ${no}`, async () => {
if (
!(await this.waitWithReplicating(async () => {
return await this.storageContentIsEqual(stepAckFile, stepContent);
}, 20000))
)
return false;
return true;
});
return true;
}
async __join(no: number, title: string): Promise<boolean> {
const stepFile = "_STEP.md" as FilePathWithPrefix;
const stepAckFile = "_STEP_ACK.md" as FilePathWithPrefix;
// const otherStepFile = `_STEP_${isLeader ? "R" : "L"}.md` as FilePathWithPrefix;
const stepContent = `Step ${no}`;
await this.__orDie(`Wait for step ${no} (${title})`, async () => {
if (
!(await this.waitWithReplicating(async () => {
return await this.storageContentIsEqual(stepFile, stepContent);
}, 20000))
)
return false;
return true;
});
await this.services.conflict.resolveByNewest(stepAckFile);
await this.core.storageAccess.writeFileAuto(stepAckFile, stepContent);
await this.tryReplicate();
return true;
}
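// performStep is a two-party barrier: the "game changer" device writes _STEP.md
// and waits for _STEP_ACK.md (__proceed), while the other device waits for
// _STEP.md and acknowledges it (__join). `proc` and `check` therefore run on
// the game-changer side only, and both devices converge before continuing.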
async performStep({
step,
title,
isGameChanger,
proc,
check,
}: {
step: number;
title: string;
isGameChanger: boolean;
proc: () => Promise<any>;
check: () => Promise<boolean>;
}): Promise<boolean> {
if (isGameChanger) {
await this.__proceed(step, title);
try {
await proc();
} catch (e) {
this._log(`Error: ${e}`);
return false;
}
return await this.__orDie(`Step ${step} - ${title}`, async () => await this.waitWithReplicating(check));
} else {
return await this.__join(step, title);
}
}
// // see scenario.md
// async testLeader(testMain: (testFileName: FilePathWithPrefix) => Promise<boolean>): Promise<boolean> {
// }
// async testReceiver(testMain: (testFileName: FilePathWithPrefix) => Promise<boolean>): Promise<boolean> {
// }
async nonLiveTestRunner(
isLeader: boolean,
testMain: (testFileName: FilePathWithPrefix, isLeader: boolean) => Promise<boolean>
): Promise<boolean> {
const storage = this.core.storageAccess;
// const database = this.core.databaseFileAccess;
// const _orDie = this._orDie.bind(this);
const testCommandFile = "IT.md" as FilePathWithPrefix;
const textCommandResponseFile = "ITx.md" as FilePathWithPrefix;
let testFileName: FilePathWithPrefix;
this.addTestResult(
"-------Starting ... ",
true,
`Test as ${isLeader ? "Leader" : "Receiver"} command file ${testCommandFile}`
);
if (isLeader) {
await this.__proceed(0, "start");
}
await this.tryReplicate();
await this.performStep({
step: 0,
title: "Make sure that command File Not Exists",
isGameChanger: isLeader,
proc: async () => await storage.removeHidden(testCommandFile),
check: async () => !(await storage.isExistsIncludeHidden(testCommandFile)),
});
await this.performStep({
step: 1,
title: "Make sure that command File Not Exists On Receiver",
isGameChanger: !isLeader,
proc: async () => await storage.removeHidden(textCommandResponseFile),
check: async () => !(await storage.isExistsIncludeHidden(textCommandResponseFile)),
});
await this.performStep({
step: 2,
title: "Decide the test file name",
isGameChanger: isLeader,
proc: async () => {
testFileName = (Date.now() + "-" + Math.ceil(Math.random() * 1000) + ".md") as FilePathWithPrefix;
const testCommandFile = "IT.md" as FilePathWithPrefix;
await storage.writeFileAuto(testCommandFile, testFileName);
},
check: () => Promise.resolve(true),
});
await this.performStep({
step: 3,
title: "Wait for the command file to be arrived",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => await storage.isExistsIncludeHidden(testCommandFile),
});
await this.performStep({
step: 4,
title: "Send the response file",
isGameChanger: !isLeader,
proc: async () => {
await storage.writeHiddenFileAuto(textCommandResponseFile, "!");
},
check: () => Promise.resolve(true),
});
await this.performStep({
step: 5,
title: "Wait for the response file to be arrived",
isGameChanger: isLeader,
proc: async () => {},
check: async () => await storage.isExistsIncludeHidden(textCommandResponseFile),
});
await this.performStep({
step: 6,
title: "Proceed to begin the test",
isGameChanger: isLeader,
proc: async () => {},
check: () => Promise.resolve(true),
});
await this.performStep({
step: 6,
title: "Begin the test",
isGameChanger: true,
proc: async () => {},
check: () => {
return Promise.resolve(true);
},
});
// await this.step(0, isLeader, true);
try {
this.addTestResult("** Main------", true, ``);
if (isLeader) {
return await testMain(testFileName!, true);
} else {
const testFileName = await this.readStorageContent(testCommandFile);
this.addTestResult("testFileName", true, `Request client to use :${testFileName!}`);
return await testMain(testFileName! as FilePathWithPrefix, false);
}
} finally {
this.addTestResult("Teardown", true, `Deleting ${testFileName!}`);
await storage.removeHidden(testFileName!);
}
}
async testBasic(filename: FilePathWithPrefix, isLeader: boolean): Promise<boolean> {
const storage = this.core.storageAccess;
const database = this.core.databaseFileAccess;
await this.addTestResult(
`---**Starting Basic Test**---`,
true,
`Test as ${isLeader ? "Leader" : "Receiver"} command file ${filename}`
);
// if (isLeader) {
// await this._proceed(0);
// }
// await this.tryReplicate();
await this.performStep({
step: 0,
title: "Make sure that file is not exist",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => !(await storage.isExists(filename)),
});
await this.performStep({
step: 1,
title: "Write a file",
isGameChanger: isLeader,
proc: async () => await storage.writeFileAuto(filename, "Hello World"),
check: async () => await storage.isExists(filename),
});
await this.performStep({
step: 2,
title: "Make sure the file is arrived",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => await storage.isExists(filename),
});
await this.performStep({
step: 3,
title: "Update to Hello World 2",
isGameChanger: isLeader,
proc: async () => await storage.writeFileAuto(filename, "Hello World 2"),
check: async () => await this.storageContentIsEqual(filename, "Hello World 2"),
});
await this.performStep({
step: 4,
title: "Make sure the modified file is arrived",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => await this.storageContentIsEqual(filename, "Hello World 2"),
});
await this.performStep({
step: 5,
title: "Update to Hello World 3",
isGameChanger: !isLeader,
proc: async () => await storage.writeFileAuto(filename, "Hello World 3"),
check: async () => await this.storageContentIsEqual(filename, "Hello World 3"),
});
await this.performStep({
step: 6,
title: "Make sure the modified file is arrived",
isGameChanger: isLeader,
proc: async () => {},
check: async () => await this.storageContentIsEqual(filename, "Hello World 3"),
});
const multiLineContent = `Line1:A
Line2:B
Line3:C
Line4:D`;
await this.performStep({
step: 7,
title: "Update to Multiline",
isGameChanger: isLeader,
proc: async () => await storage.writeFileAuto(filename, multiLineContent),
check: async () => await this.storageContentIsEqual(filename, multiLineContent),
});
await this.performStep({
step: 8,
title: "Make sure the modified file is arrived",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => await this.storageContentIsEqual(filename, multiLineContent),
});
// While in LiveSync mode, we possibly cannot cause the conflict.
if (!this.settings.liveSync) {
// Step 9 Make Conflict But Resolvable
const multiLineContentL = `Line1:A
Line2:B
Line3:C!
Line4:D`;
const multiLineContentC = `Line1:A
Line2:bbbbb
Line3:C
Line4:D`;
await this.performStep({
step: 9,
title: "Progress to be conflicted",
isGameChanger: isLeader,
proc: async () => {},
check: () => Promise.resolve(true),
});
await storage.writeFileAuto(filename, isLeader ? multiLineContentL : multiLineContentC);
await this.performStep({
step: 10,
title: "Update As Conflicted",
isGameChanger: !isLeader,
proc: async () => {},
check: () => Promise.resolve(true),
});
await this.performStep({
step: 10,
title: "Make sure Automatically resolved",
isGameChanger: isLeader,
proc: async () => {},
check: async () => (await database.getConflictedRevs(filename)).length === 0,
});
await this.performStep({
step: 11,
title: "Make sure Automatically resolved",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => (await database.getConflictedRevs(filename)).length === 0,
});
const sensiblyMergedContent = `Line1:A
Line2:bbbbb
Line3:C!
Line4:D`;
await this.performStep({
step: 12,
title: "Make sure Sensibly Merged on Leader",
isGameChanger: isLeader,
proc: async () => {},
check: async () => await this.storageContentIsEqual(filename, sensiblyMergedContent),
});
await this.performStep({
step: 13,
title: "Make sure Sensibly Merged on Receiver",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => await this.storageContentIsEqual(filename, sensiblyMergedContent),
});
}
await this.performStep({
step: 14,
title: "Delete File",
isGameChanger: isLeader,
proc: async () => {
await storage.removeHidden(filename);
},
check: async () => !(await storage.isExists(filename)),
});
await this.performStep({
step: 15,
title: "Make sure File is deleted",
isGameChanger: !isLeader,
proc: async () => {},
check: async () => !(await storage.isExists(filename)),
});
this._log(`The Basic Test has been completed`, LOG_LEVEL_NOTICE);
return true;
}
async testBasicEvent(isLeader: boolean) {
this.settings.liveSync = false;
await this.saveSettings();
await this._test("basic", async () => await this.nonLiveTestRunner(isLeader, (t, l) => this.testBasic(t, l)));
}
async testBasicLive(isLeader: boolean) {
this.settings.liveSync = true;
await this.saveSettings();
await this._test("basic", async () => await this.nonLiveTestRunner(isLeader, (t, l) => this.testBasic(t, l)));
}
async _everyModuleTestMultiDevice(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
const isLeader = this.core.services.vault.vaultName().indexOf("recv") === -1;
this.addTestResult("-------", true, `Test as ${isLeader ? "Leader" : "Receiver"}`);
try {
this._log(`Starting Test`);
await this.testBasicEvent(isLeader);
if (this.settings.remoteType == REMOTE_MINIO) await this.testBasicLive(isLeader);
} catch (e) {
this._log(e);
this._log(`Error: ${e}`);
return Promise.resolve(false);
}
return Promise.resolve(true);
}
onBindFunction(core: typeof this.core, services: typeof core.services): void {
services.test.handleTestMultiDevice(this._everyModuleTestMultiDevice.bind(this));
}
}
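The step choreography above leans on the `performStep` helper used throughout these tests: the `isGameChanger` side performs the action, while the other side waits until it can observe the result before both run `check`. A minimal sketch of that barrier idea, assuming a shared board both devices can poll (the names `StepBoard`, `post`, and `read` are hypothetical, not the plugin's actual API):

    // Hedged sketch: the driving device performs the change and publishes the
    // step number; the passive device polls until the step appears. Both sides
    // then verify the outcome with `check`.
    type Step = { step: number; title: string };

    interface StepBoard {
        post(step: Step): Promise<void>; // publish "step N is done"
        read(): Promise<Step | undefined>; // latest published step, if any
    }

    async function performStepSketch(
        board: StepBoard,
        opts: {
            step: number;
            title: string;
            isGameChanger: boolean;
            proc: () => Promise<void>;
            check: () => Promise<boolean>;
        }
    ): Promise<boolean> {
        if (opts.isGameChanger) {
            await opts.proc(); // drive the change
            await board.post({ step: opts.step, title: opts.title });
        } else {
            // Wait until the peer has published this step.
            while (((await board.read())?.step ?? -1) < opts.step) {
                await new Promise((res) => setTimeout(res, 500));
            }
        }
        return await opts.check(); // both sides verify the outcome
    }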


@@ -0,0 +1,588 @@
import { delay } from "octagonal-wheels/promises";
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "octagonal-wheels/common/logger";
import { eventHub } from "../../common/events";
import { getWebCrypto } from "../../lib/src/mods.ts";
import { uint8ArrayToHexString } from "octagonal-wheels/binary/hex";
import { parseYaml, requestUrl, stringifyYaml } from "obsidian";
import type { FilePath } from "../../lib/src/common/types.ts";
import { scheduleTask } from "octagonal-wheels/concurrency/task";
import { getFileRegExp } from "../../lib/src/common/utils.ts";
import type { LiveSyncCore } from "../../main.ts";
declare global {
interface LSEvents {
"debug-sync-status": string[];
}
}
export class ModuleReplicateTest extends AbstractObsidianModule {
testRootPath = "_test/";
testInfoPath = "_testinfo/";
get isLeader() {
return (
this.services.vault.vaultName().indexOf("dev") >= 0 &&
this.services.vault.vaultName().indexOf("recv") < 0
);
}
get nameByKind() {
return this.isLeader ? "LEADER" : "RECV";
}
get pairName() {
return this.isLeader ? "RECV" : "LEADER";
}
watchIsSynchronised = false;
statusBarSyncStatus?: HTMLElement;
async readFileContent(file: string) {
try {
return await this.core.storageAccess.readHiddenFileText(file);
} catch {
return "";
}
}
async dumpList() {
if (this.settings.syncInternalFiles) {
this._log("Write file list (Include Hidden)");
await this.__dumpFileListIncludeHidden("files.md");
} else {
this._log("Write file list");
await this.__dumpFileList("files.md");
}
}
async _everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
await this.dumpList();
return true;
}
private _everyOnloadAfterLoadSettings(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
this.addCommand({
id: "dump-file-structure-normal",
name: `Dump Structure (Normal)`,
callback: () => {
void this.__dumpFileList("files.md").finally(() => {
void this.refreshSyncStatus();
});
},
});
this.addCommand({
id: "dump-file-structure-ih",
name: "Dump Structure (Include Hidden)",
callback: () => {
const d = "files.md";
void this.__dumpFileListIncludeHidden(d);
},
});
this.addCommand({
id: "dump-file-structure-auto",
name: "Dump Structure",
callback: () => {
void this.dumpList();
},
});
this.addCommand({
id: "dump-file-test",
name: `Perform Test (Dev) ${this.isLeader ? "(Leader)" : "(Recv)"}`,
callback: () => {
void this.performTestManually();
},
});
this.addCommand({
id: "watch-sync-result",
name: `Watch whether sync results match between devices`,
callback: () => {
this.watchIsSynchronised = !this.watchIsSynchronised;
void this.refreshSyncStatus();
},
});
this.app.vault.on("modify", async (file) => {
if (file.path.startsWith(this.testInfoPath)) {
await this.refreshSyncStatus();
} else {
scheduleTask("dumpStatus", 125, async () => {
await this.dumpList();
return true;
});
}
});
this.statusBarSyncStatus = this.plugin.addStatusBarItem();
return Promise.resolve(true);
}
async getSyncStatusAsText() {
const fileMine = this.testInfoPath + this.nameByKind + "/" + "files.md";
const filePair = this.testInfoPath + this.pairName + "/" + "files.md";
const mine = parseYaml(await this.readFileContent(fileMine)) ?? [];
const pair = parseYaml(await this.readFileContent(filePair)) ?? [];
const result = [] as string[];
if (mine.length != pair.length) {
result.push(`File count is different: ${mine.length} vs ${pair.length}`);
}
const filesAll = new Set([...mine.map((e: any) => e.path), ...pair.map((e: any) => e.path)]);
for (const file of filesAll) {
const mineFile = mine.find((e: any) => e.path == file);
const pairFile = pair.find((e: any) => e.path == file);
if (!mineFile || !pairFile) {
result.push(`File not found: ${file}`);
} else {
if (mineFile.size != pairFile.size) {
result.push(`Size is different: ${file} ${mineFile.size} vs ${pairFile.size}`);
}
if (mineFile.hash != pairFile.hash) {
result.push(`Hash is different: ${file} ${mineFile.hash} vs ${pairFile.hash}`);
}
}
}
eventHub.emitEvent("debug-sync-status", result);
return result.join("\n");
}
async refreshSyncStatus() {
if (this.watchIsSynchronised) {
// Normal Files
const syncStatus = await this.getSyncStatusAsText();
if (syncStatus) {
this.statusBarSyncStatus!.setText(`Sync Status: Error detected`);
this._log(`Sync Status: Error detected\n${syncStatus}`, LOG_LEVEL_INFO);
} else {
this.statusBarSyncStatus!.setText(`Sync Status: Synchronised`);
}
} else {
this.statusBarSyncStatus!.setText("");
}
}
async __dumpFileList(outFile?: string) {
if (!this.core || !this.core.storageAccess) {
this._log("No storage access", LOG_LEVEL_INFO);
return;
}
const files = this.core.storageAccess.getFiles();
const out = [] as any[];
const webcrypto = await getWebCrypto();
for (const file of files) {
if (!(await this.services.vault.isTargetFile(file.path))) {
continue;
}
if (file.path.startsWith(this.testInfoPath)) continue;
const stat = await this.core.storageAccess.stat(file.path);
if (stat) {
const hashSrc = await this.core.storageAccess.readHiddenFileBinary(file.path);
const hash = await webcrypto.subtle.digest("SHA-1", hashSrc);
const hashStr = uint8ArrayToHexString(new Uint8Array(hash));
const item = {
path: file.path,
name: file.name,
size: stat.size,
mtime: stat.mtime,
hash: hashStr,
};
// const fileLine = `-${file.path}:${stat.size}:${stat.mtime}:${hashStr}`;
out.push(item);
}
}
out.sort((a, b) => a.path.localeCompare(b.path));
if (outFile) {
outFile = this.testInfoPath + this.nameByKind + "/" + outFile;
await this.core.storageAccess.ensureDir(outFile);
await this.core.storageAccess.writeHiddenFileAuto(outFile, stringifyYaml(out));
} else {
// console.dir(out);
}
this._log(`Dumped ${out.length} files`, LOG_LEVEL_INFO);
}
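The per-file hash above is plain WebCrypto SHA-1 over the raw bytes, hex-encoded via `uint8ArrayToHexString`; SHA-1 is adequate here because the digest only compares test outputs between devices, not anything security-sensitive. A standalone equivalent of that digest step (using the global `crypto` rather than `getWebCrypto()`):

    // Minimal sketch: SHA-1 over the file bytes, rendered as lowercase hex.
    async function sha1Hex(data: Uint8Array): Promise<string> {
        const digest = await crypto.subtle.digest("SHA-1", data);
        return Array.from(new Uint8Array(digest))
            .map((b) => b.toString(16).padStart(2, "0"))
            .join("");
    }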
async __dumpFileListIncludeHidden(outFile?: string) {
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
const targetPatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
const out = [] as any[];
const files = await this.core.storageAccess.getFilesIncludeHidden("", targetPatterns, ignorePatterns);
// console.dir(files);
const webcrypto = await getWebCrypto();
for (const file of files) {
// if (!await this.core.$$isTargetFile(file)) {
// continue;
// }
if (file.startsWith(this.testInfoPath)) continue;
const stat = await this.core.storageAccess.statHidden(file);
if (stat) {
const hashSrc = await this.core.storageAccess.readHiddenFileBinary(file);
const hash = await webcrypto.subtle.digest("SHA-1", hashSrc);
const hashStr = uint8ArrayToHexString(new Uint8Array(hash));
const item = {
path: file,
name: file.split("/").pop(),
size: stat.size,
mtime: stat.mtime,
hash: hashStr,
};
// const fileLine = `-${file.path}:${stat.size}:${stat.mtime}:${hashStr}`;
out.push(item);
}
}
out.sort((a, b) => a.path.localeCompare(b.path));
if (outFile) {
outFile = this.testInfoPath + this.nameByKind + "/" + outFile;
await this.core.storageAccess.ensureDir(outFile);
await this.core.storageAccess.writeHiddenFileAuto(outFile, stringifyYaml(out));
} else {
// console.dir(out);
}
this._log(`Dumped ${out.length} files`, LOG_LEVEL_NOTICE);
}
async collectTestFiles() {
const remoteTopDir = "https://raw.githubusercontent.com/vrtmrz/obsidian-livesync/refs/heads/main/";
const files = [
"README.md",
"docs/adding_translations.md",
"docs/design_docs_of_journalsync.md",
"docs/design_docs_of_keep_newborn_chunks.md",
"docs/design_docs_of_prefixed_hidden_file_sync.md",
"docs/design_docs_of_sharing_tweak_value.md",
"docs/quick_setup_cn.md",
"docs/quick_setup_ja.md",
"docs/quick_setup.md",
"docs/settings_ja.md",
"docs/settings.md",
"docs/setup_cloudant_ja.md",
"docs/setup_cloudant.md",
"docs/setup_flyio.md",
"docs/setup_own_server_cn.md",
"docs/setup_own_server_ja.md",
"docs/setup_own_server.md",
"docs/tech_info_ja.md",
"docs/tech_info.md",
"docs/terms.md",
"docs/troubleshooting.md",
"images/1.png",
"images/2.png",
"images/corrupted_data.png",
"images/hatch.png",
"images/lock_pattern1.png",
"images/lock_pattern2.png",
"images/quick_setup_1.png",
"images/quick_setup_2.png",
"images/quick_setup_3.png",
"images/quick_setup_3b.png",
"images/quick_setup_4.png",
"images/quick_setup_5.png",
"images/quick_setup_6.png",
"images/quick_setup_7.png",
"images/quick_setup_8.png",
"images/quick_setup_9_1.png",
"images/quick_setup_9_2.png",
"images/quick_setup_10.png",
"images/remote_db_setting.png",
"images/write_logs_into_the_file.png",
];
for (const file of files) {
const remote = remoteTopDir + file;
const local = this.testRootPath + file;
try {
const f = (await requestUrl(remote)).arrayBuffer;
await this.core.storageAccess.ensureDir(local);
await this.core.storageAccess.writeHiddenFileAuto(local, f);
} catch (ex) {
this._log(`Could not fetch ${remote}`, LOG_LEVEL_VERBOSE);
this._log(ex, LOG_LEVEL_VERBOSE);
}
}
await this.dumpList();
}
async waitFor(proc: () => Promise<boolean>, timeout = 10000): Promise<boolean> {
await delay(100);
const start = Date.now();
while (!(await proc())) {
if (timeout > 0) {
if (Date.now() - start > timeout) {
this._log(`Timeout`);
return false;
}
}
await delay(500);
}
return true;
}
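`waitFor` polls the predicate every 500 ms after an initial 100 ms delay, giving up once `timeout` milliseconds have elapsed (a timeout of zero or less polls forever). An illustrative use within this module:

    // Wait up to 30 seconds for a test file to converge on this device.
    const ok = await this.waitFor(
        async () => (await this.readFileContent(this.testRootPath + "wonka.md")) !== "",
        30000
    );
    if (!ok) this._log("The file never arrived", LOG_LEVEL_NOTICE);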
async testConflictedManually1() {
await this.services.replication.replicate();
const commonFile = `Resolve!
*****, the amazing chocolatier!!`;
if (this.isLeader) {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "wonka.md", commonFile);
}
await this.services.replication.replicate();
await this.services.replication.replicate();
if (
(await this.core.confirm.askYesNoDialog("Ready to begin the test conflict Manually 1?", {
timeout: 30,
defaultOption: "Yes",
})) == "no"
) {
return;
}
const fileA = `Resolve to KEEP THIS
Willy Wonka, Willy Wonka, the amazing chocolatier!!`;
const fileB = `Resolve to DISCARD THIS
Charlie Bucket, Charlie Bucket, the amazing chocolatier!!`;
if (this.isLeader) {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "wonka.md", fileA);
} else {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "wonka.md", fileB);
}
if (
(await this.core.confirm.askYesNoDialog("Ready to check the result of Manually 1?", {
timeout: 30,
defaultOption: "Yes",
})) == "no"
) {
return;
}
await this.services.replication.replicate();
await this.services.replication.replicate();
if (
!(await this.waitFor(async () => {
await this.services.replication.replicate();
return (
(await this.__assertStorageContent(
(this.testRootPath + "wonka.md") as FilePath,
fileA,
false,
true
)) == true
);
}, 30000))
) {
return await this.__assertStorageContent((this.testRootPath + "wonka.md") as FilePath, fileA, false, true);
}
return true;
// We have to check the result
}
async testConflictedManually2() {
await this.services.replication.replicate();
const commonFile = `Resolve To concatenate
ABCDEFG`;
if (this.isLeader) {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "concat.md", commonFile);
}
await this.services.replication.replicate();
await this.services.replication.replicate();
if (
(await this.core.confirm.askYesNoDialog("Ready to begin the test conflict Manually 2?", {
timeout: 30,
defaultOption: "Yes",
})) == "no"
) {
return;
}
const fileA = `Resolve to Concatenate
ABCDEFGHIJKLMNOPQRSTYZ`;
const fileB = `Resolve to Concatenate
AJKLMNOPQRSTUVWXYZ`;
const concatenated = `Resolve to Concatenate
ABCDEFGHIJKLMNOPQRSTUVWXYZ`;
if (this.isLeader) {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "concat.md", fileA);
} else {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "concat.md", fileB);
}
if (
(await this.core.confirm.askYesNoDialog("Ready to test conflict Manually 2?", {
timeout: 30,
defaultOption: "Yes",
})) == "no"
) {
return;
}
await this.services.replication.replicate();
await this.services.replication.replicate();
if (
!(await this.waitFor(async () => {
await this.services.replication.replicate();
return (
(await this.__assertStorageContent(
(this.testRootPath + "concat.md") as FilePath,
concatenated,
false,
true
)) == true
);
}, 30000))
) {
return await this.__assertStorageContent(
(this.testRootPath + "concat.md") as FilePath,
concatenated,
false,
true
);
}
return true;
}
async testConflictAutomatic() {
if (this.isLeader) {
const baseDoc = `Tasks!
- [ ] Task 1
- [ ] Task 2
- [ ] Task 3
- [ ] Task 4
`;
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "task.md", baseDoc);
}
await delay(100);
await this.services.replication.replicate();
await this.services.replication.replicate();
if (
(await this.core.confirm.askYesNoDialog("Ready to test conflict?", {
timeout: 30,
defaultOption: "Yes",
})) == "no"
) {
return;
}
const mod1Doc = `Tasks!
- [ ] Task 1
- [v] Task 2
- [ ] Task 3
- [ ] Task 4
`;
const mod2Doc = `Tasks!
- [ ] Task 1
- [ ] Task 2
- [v] Task 3
- [ ] Task 4
`;
if (this.isLeader) {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "task.md", mod1Doc);
} else {
await this.core.storageAccess.writeHiddenFileAuto(this.testRootPath + "task.md", mod2Doc);
}
await this.services.replication.replicate();
await this.services.replication.replicate();
await delay(1000);
if (
(await this.core.confirm.askYesNoDialog("Ready to check result?", { timeout: 30, defaultOption: "Yes" })) ==
"no"
) {
return;
}
await this.services.replication.replicate();
await this.services.replication.replicate();
const mergedDoc = `Tasks!
- [ ] Task 1
- [v] Task 2
- [v] Task 3
- [ ] Task 4
`;
return this.__assertStorageContent((this.testRootPath + "task.md") as FilePath, mergedDoc, false, true);
}
async checkConflictResolution() {
this._log("Before testing conflicted files, resolve all once", LOG_LEVEL_NOTICE);
await this.core.rebuilder.resolveAllConflictedFilesByNewerOnes();
await this.core.rebuilder.resolveAllConflictedFilesByNewerOnes();
await this.services.replication.replicate();
await delay(1000);
if (!(await this.testConflictAutomatic())) {
this._log("Conflict resolution (Auto) failed", LOG_LEVEL_NOTICE);
return false;
}
if (!(await this.testConflictedManually1())) {
this._log("Conflict resolution (Manual1) failed", LOG_LEVEL_NOTICE);
return false;
}
if (!(await this.testConflictedManually2())) {
this._log("Conflict resolution (Manual2) failed", LOG_LEVEL_NOTICE);
return false;
}
return true;
}
async __assertStorageContent(
fileName: FilePath,
content: string,
inverted = false,
showResult = false
): Promise<boolean | string> {
try {
const fileContent = await this.core.storageAccess.readHiddenFileText(fileName);
let result = fileContent === content;
if (inverted) {
result = !result;
}
if (result) {
return true;
} else {
if (showResult) {
this._log(`Content is not the same \n Expected:${content}\n Actual:${fileContent}`, LOG_LEVEL_VERBOSE);
}
return `Content is not the same \n Expected:${content}\n Actual:${fileContent}`;
}
} catch (e) {
this._log(`Cannot assert storage content: ${e}`);
return false;
}
}
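Note the tri-state return of `__assertStorageContent`: `true` on a match, a diagnostic string on a mismatch, and `false` on a read error. Since a non-empty string is truthy, callers must compare against `true` explicitly, as the conflict tests above do:

    // Truthiness is not enough: a mismatch message is a truthy string.
    const result = await this.__assertStorageContent(fileName, expected);
    if (result === true) {
        // contents match
    } else {
        this._log(`Assertion failed: ${result}`, LOG_LEVEL_NOTICE);
    }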
async performTestManually() {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
await this.checkConflictResolution();
// await this.collectTestFiles();
}
// testResults = writable<[boolean, string, string][]>([]);
// testResults: string[] = [];
// $$addTestResult(name: string, key: string, result: boolean, summary?: string, message?: string): void {
// const logLine = `${name}: ${key} ${summary ?? ""}`;
// this.testResults.update((results) => {
// results.push([result, logLine, message ?? ""]);
// return results;
// });
// }
private async _everyModuleTestMultiDevice(): Promise<boolean> {
if (!this.settings.enableDebugTools) return Promise.resolve(true);
// this.core.$$addTestResult("DevModule", "Test", true);
// return Promise.resolve(true);
await this._test("Conflict resolution", async () => await this.checkConflictResolution());
return this.testDone();
}
onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.handleOnSettingLoaded(this._everyOnloadAfterLoadSettings.bind(this));
services.replication.handleBeforeReplicate(this._everyBeforeReplicate.bind(this));
services.test.handleTestMultiDevice(this._everyModuleTestMultiDevice.bind(this));
}
}


@@ -0,0 +1,119 @@
<script lang="ts">
import { onDestroy, onMount } from "svelte";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
import { perf_trench } from "./tests.ts";
import { MarkdownRenderer, Notice } from "../../../deps.ts";
import type { ModuleDev } from "../ModuleDev.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import { EVENT_LAYOUT_READY, eventHub } from "../../../common/events.ts";
import { writable } from "svelte/store";
export let plugin: ObsidianLiveSyncPlugin;
export let moduleDev: ModuleDev;
let performanceTestResult = "";
let functionCheckResult = "";
let testRunning = false;
let perfTestResultEl: HTMLDivElement;
let isReady = false;
$: {
if (performanceTestResult != "" && isReady) {
// Clear the previous render to avoid appending duplicates on each update.
perfTestResultEl.empty();
void MarkdownRenderer.render(plugin.app, performanceTestResult, perfTestResultEl, "/", plugin);
}
}
async function performTest() {
try {
testRunning = true;
performanceTestResult = await perf_trench(plugin);
} finally {
testRunning = false;
}
}
function clearResult() {
moduleDev.testResults.update((v) => {
v = [];
return v;
});
}
function clearPerfTestResult() {
perfTestResultEl.empty();
}
onMount(async () => {
isReady = true;
// performTest();
eventHub.onceEvent(EVENT_LAYOUT_READY, async () => {
if (await plugin.storageAccess.isExistsIncludeHidden("_AUTO_TEST.md")) {
new Notice("Auto test file found, running tests...");
fireAndForget(async () => {
await allTest();
});
} else {
// new Notice("No auto test file found, skipping tests...");
}
});
});
let moduleTesting = false;
function moduleMultiDeviceTest() {
if (moduleTesting) return;
moduleTesting = true;
plugin.services.test.testMultiDevice().finally(() => {
moduleTesting = false;
});
}
function moduleSingleDeviceTest() {
if (moduleTesting) return;
moduleTesting = true;
plugin.services.test.test().finally(() => {
moduleTesting = false;
});
}
async function allTest() {
if (moduleTesting) return;
moduleTesting = true;
try {
await plugin.services.test.test();
await plugin.services.test.testMultiDevice();
} finally {
moduleTesting = false;
}
}
const results = moduleDev.testResults;
$: resultLines = $results;
let syncStatus = [] as string[];
eventHub.onEvent("debug-sync-status", (status) => {
syncStatus = [...status];
});
</script>
<h2>TESTING BENCH: Self-hosted LiveSync</h2>
<h3>Module Checks</h3>
<button on:click={() => moduleMultiDeviceTest()} disabled={moduleTesting}>MultiDevice Test</button>
<button on:click={() => moduleSingleDeviceTest()} disabled={moduleTesting}>SingleDevice Test</button>
<button on:click={() => allTest()} disabled={moduleTesting}>All Test</button>
<button on:click={() => clearResult()}>Clear</button>
{#each resultLines as [result, line, message]}
<details open={!result}>
<summary>[{result ? "PASS" : "FAILED"}] {line}</summary>
<pre>{message}</pre>
</details>
{/each}
<h3>Synchronisation Result Status</h3>
<pre>{syncStatus.join("\n")}</pre>
<h3>Performance test</h3>
<button on:click={() => performTest()} disabled={testRunning}>Test!</button>
<button on:click={() => clearPerfTestResult()}>Clear</button>
<div bind:this={perfTestResultEl}></div>
<style>
* {
box-sizing: border-box;
}
</style>


@@ -1,29 +1,27 @@
import {
ItemView,
WorkspaceLeaf
} from "obsidian";
import TestPaneComponent from "./TestPane.svelte"
import type ObsidianLiveSyncPlugin from "../main"
import { ItemView, WorkspaceLeaf } from "obsidian";
import TestPaneComponent from "./TestPane.svelte";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
import type { ModuleDev } from "../ModuleDev.ts";
export const VIEW_TYPE_TEST = "ols-pane-test";
//Log view
export class TestPaneView extends ItemView {
component?: TestPaneComponent;
plugin: ObsidianLiveSyncPlugin;
moduleDev: ModuleDev;
icon = "view-log";
title: string = "Self-hosted LiveSync Test and Results"
title: string = "Self-hosted LiveSync Test and Results";
navigation = true;
getIcon(): string {
return "view-log";
}
constructor(leaf: WorkspaceLeaf, plugin: ObsidianLiveSyncPlugin) {
constructor(leaf: WorkspaceLeaf, plugin: ObsidianLiveSyncPlugin, moduleDev: ModuleDev) {
super(leaf);
this.plugin = plugin;
this.moduleDev = moduleDev;
}
getViewType() {
return VIEW_TYPE_TEST;
}
@@ -32,18 +30,19 @@ export class TestPaneView extends ItemView {
return "Self-hosted LiveSync Test and Results";
}
// eslint-disable-next-line require-await
async onOpen() {
this.component = new TestPaneComponent({
target: this.contentEl,
props: {
plugin: this.plugin
plugin: this.plugin,
moduleDev: this.moduleDev,
},
});
await Promise.resolve();
}
// eslint-disable-next-line require-await
async onClose() {
this.component?.$destroy();
await Promise.resolve();
}
}


@@ -0,0 +1,50 @@
import { fireAndForget } from "../../../lib/src/common/utils.ts";
import { serialized } from "octagonal-wheels/concurrency/lock";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
let plugin: ObsidianLiveSyncPlugin;
export function enableTestFunction(plugin_: ObsidianLiveSyncPlugin) {
plugin = plugin_;
}
export function addDebugFileLog(message: any, stackLog = false) {
fireAndForget(
serialized("debug-log", async () => {
const now = new Date();
const filename = `debug-log`;
const time = now.toISOString().split("T")[0];
const outFile = `${filename}${time}.jsonl`;
// const messageContent = typeof message == "string" ? message : message instanceof Error ? `${message.name}:${message.message}` : JSON.stringify(message, null, 2);
const timestamp = now.toLocaleString();
const timestampEpoch = now.getTime();
let out = { timestamp: timestamp, epoch: timestampEpoch } as Record<string, any>;
if (message instanceof Error) {
// debugger;
// console.dir(message.stack);
out = { ...out, message };
} else if (stackLog) {
const stackE = new Error();
const stack = stackE.stack;
out = { ...out, stack };
}
if (typeof message == "object") {
out = { ...out, ...message };
} else {
out = { ...out, result: message };
}
// const out = "--" + timestamp + "--\n" + messageContent + " " + (stack || "");
// const out
try {
await plugin.storageAccess.appendHiddenFile(
plugin.app.vault.configDir + "/ls-debug/" + outFile,
JSON.stringify(out) + "\n"
);
} catch {
//NO OP
}
})
);
}
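Each call appends one JSON line to `<configDir>/ls-debug/debug-logYYYY-MM-DD.jsonl`, serialised under the `debug-log` lock so concurrent writers cannot interleave. Illustrative usage, with a roughly-shaped output line:

    addDebugFileLog({ phase: "replicate", ok: true });
    addDebugFileLog("plain message", true); // also records a stack trace
    // A resulting line looks roughly like:
    // {"timestamp":"31/10/2025, 11:36:55","epoch":1761906000000,"phase":"replicate","ok":true}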


@@ -0,0 +1,90 @@
import { Trench } from "octagonal-wheels/memory/memutil";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
type MeasureResult = [times: number, spent: number];
type NamedMeasureResult = [name: string, result: MeasureResult];
const measures = new Map<string, MeasureResult>();
function clearResult(name: string) {
measures.set(name, [0, 0]);
}
async function measureEach(name: string, proc: () => void | Promise<void>) {
const [times, spent] = measures.get(name) ?? [0, 0];
const start = performance.now();
const result = proc();
if (result instanceof Promise) await result;
const end = performance.now();
measures.set(name, [times + 1, spent + (end - start)]);
}
function formatNumber(num: number) {
return num.toLocaleString("en-US", { maximumFractionDigits: 2 });
}
async function measure(
name: string,
proc: () => void | Promise<void>,
times: number = 10000,
duration: number = 1000
): Promise<NamedMeasureResult> {
const from = Date.now();
let last = times;
clearResult(name);
do {
await measureEach(name, proc);
} while (last-- > 0 && Date.now() - from < duration);
return [name, measures.get(name) as MeasureResult];
}
// eslint-disable-next-line require-await, @typescript-eslint/require-await
async function formatPerfResults(items: NamedMeasureResult[]) {
return (
`| Name | Runs | Each | Total |\n| --- | --- | --- | --- | \n` +
items
.map(
(e) =>
`| ${e[0]} | ${e[1][0]} | ${e[1][0] != 0 ? formatNumber(e[1][1] / e[1][0]) : "-"} | ${formatNumber(e[1][1])} |`
)
.join("\n")
);
}
export async function perf_trench(plugin: ObsidianLiveSyncPlugin) {
clearResult("trench");
const trench = new Trench(plugin.simpleStore);
const result = [] as NamedMeasureResult[];
result.push(
await measure("trench-short-string", async () => {
const p = trench.evacuate("string");
await p();
})
);
{
const testBinary = await plugin.storageAccess.readHiddenFileBinary("testdata/10kb.png");
const uint8Array = new Uint8Array(testBinary);
result.push(
await measure("trench-binary-10kb", async () => {
const p = trench.evacuate(uint8Array);
await p();
})
);
}
{
const testBinary = await plugin.storageAccess.readHiddenFileBinary("testdata/100kb.jpeg");
const uint8Array = new Uint8Array(testBinary);
result.push(
await measure("trench-binary-100kb", async () => {
const p = trench.evacuate(uint8Array);
await p();
})
);
}
{
const testBinary = await plugin.storageAccess.readHiddenFileBinary("testdata/1mb.png");
const uint8Array = new Uint8Array(testBinary);
result.push(
await measure("trench-binary-1mb", async () => {
const p = trench.evacuate(uint8Array);
await p();
})
);
}
return formatPerfResults(result);
}
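The harness is generic: `measure` repeats a procedure until either the run budget or the time budget is exhausted, accumulating `[runs, totalMs]` per name, and `formatPerfResults` renders those pairs as a Markdown table (Each = total / runs). A minimal use outside `perf_trench`, assuming `measure` were exported (it is module-private as written):

    const rows: NamedMeasureResult[] = [];
    rows.push(
        await measure("json-roundtrip", () => {
            JSON.parse(JSON.stringify({ a: 1, b: [1, 2, 3] }));
        })
    );
    console.log(await formatPerfResults(rows));
    // | Name | Runs | Each | Total |   (illustrative numbers)
    // | json-roundtrip | 12345 | 0.01 | 123.45 |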


@@ -1,12 +1,20 @@
import { TFile, Modal, App, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch } from "../deps.ts";
import { getPathFromTFile, isValidPath } from "../common/utils.ts";
import { decodeBinary, escapeStringToHTML, readString } from "../lib/src/string_and_binary/convert.ts";
import ObsidianLiveSyncPlugin from "../main.ts";
import { type DocumentID, type FilePathWithPrefix, type LoadedEntry, LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "../lib/src/common/types.ts";
import { Logger } from "../lib/src/common/logger.ts";
import { isErrorOfMissingDoc } from "../lib/src/pouchdb/utils_couchdb.ts";
import { getDocData, readContent } from "../lib/src/common/utils.ts";
import { isPlainText, stripPrefix } from "../lib/src/string_and_binary/path.ts";
import { TFile, Modal, App, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch } from "../../../deps.ts";
import { getPathFromTFile, isValidPath } from "../../../common/utils.ts";
import { decodeBinary, escapeStringToHTML, readString } from "../../../lib/src/string_and_binary/convert.ts";
import ObsidianLiveSyncPlugin from "../../../main.ts";
import {
type DocumentID,
type FilePathWithPrefix,
type LoadedEntry,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
} from "../../../lib/src/common/types.ts";
import { Logger } from "../../../lib/src/common/logger.ts";
import { isErrorOfMissingDoc } from "../../../lib/src/pouchdb/utils_couchdb.ts";
import { fireAndForget, getDocData, readContent } from "../../../lib/src/common/utils.ts";
import { isPlainText, stripPrefix } from "../../../lib/src/string_and_binary/path.ts";
import { scheduleOnceIfDuplicated } from "octagonal-wheels/concurrency/lock";
function isImage(path: string) {
const ext = path.split(".").splice(-1)[0].toLowerCase();
@@ -18,7 +26,7 @@ function isComparableText(path: string) {
}
function isComparableTextDecode(path: string) {
const ext = path.split(".").splice(-1)[0].toLowerCase();
return ["json"].includes(ext)
return ["json"].includes(ext);
}
function readDocument(w: LoadedEntry) {
if (w.data.length == 0) return "";
@@ -31,12 +39,16 @@ function readDocument(w: LoadedEntry) {
try {
return readString(new Uint8Array(decodeBinary(w.data)));
} catch (ex) {
Logger(ex, LOG_LEVEL_VERBOSE);
// NO OP.
}
return getDocData(w.data);
}
export class DocumentHistoryModal extends Modal {
plugin: ObsidianLiveSyncPlugin;
get services() {
return this.plugin.services;
}
range!: HTMLInputElement;
contentView!: HTMLDivElement;
info!: HTMLDivElement;
@@ -52,14 +64,20 @@ export class DocumentHistoryModal extends Modal {
currentDeleted = false;
initialRev?: string;
constructor(app: App, plugin: ObsidianLiveSyncPlugin, file: TFile | FilePathWithPrefix, id?: DocumentID, revision?: string) {
constructor(
app: App,
plugin: ObsidianLiveSyncPlugin,
file: TFile | FilePathWithPrefix,
id?: DocumentID,
revision?: string
) {
super(app);
this.plugin = plugin;
this.file = (file instanceof TFile) ? getPathFromTFile(file) : file;
this.file = file instanceof TFile ? getPathFromTFile(file) : file;
this.id = id;
this.initialRev = revision;
if (!file && id) {
this.file = this.plugin.id2path(id);
this.file = this.services.path.id2path(id);
}
if (localStorage.getItem("ols-history-highlightdiff") == "1") {
this.showDiff = true;
@@ -68,7 +86,7 @@ export class DocumentHistoryModal extends Modal {
async loadFile(initialRev?: string) {
if (!this.id) {
this.id = await this.plugin.path2id(this.file);
this.id = await this.services.path.path2id(this.file);
}
const db = this.plugin.localDatabase;
try {
@@ -83,9 +101,9 @@ export class DocumentHistoryModal extends Modal {
this.range.max = "0";
this.range.value = "";
this.range.disabled = true;
this.contentView.setText(`History of this file was not recorded.`);
this.contentView.setText(`We don't have any history for this note.`);
} else {
this.contentView.setText(`Error occurred.`);
this.contentView.setText(`Error while loading file.`);
Logger(ex, LOG_LEVEL_VERBOSE);
}
}
@@ -93,7 +111,7 @@ export class DocumentHistoryModal extends Modal {
async loadRevs(initialRev?: string) {
if (this.revs_info.length == 0) return;
if (initialRev) {
const rIndex = this.revs_info.findIndex(e => e.rev == initialRev);
const rIndex = this.revs_info.findIndex((e) => e.rev == initialRev);
if (rIndex >= 0) {
this.range.value = `${this.revs_info.length - 1 - rIndex}`;
}
@@ -111,7 +129,7 @@ export class DocumentHistoryModal extends Modal {
}
this.BlobURLs.delete(key);
}
generateBlobURL(key: string, data: Uint8Array) {
generateBlobURL(key: string, data: Uint8Array<ArrayBuffer>) {
this.revokeURL(key);
const v = URL.createObjectURL(new Blob([data], { endings: "transparent", type: "application/octet-stream" }));
this.BlobURLs.set(key, v);
@@ -160,9 +178,11 @@ export class DocumentHistoryModal extends Modal {
result = result.replace(/\n/g, "<br>");
} else if (isImage(this.file)) {
const src = this.generateBlobURL("base", w1data);
const overlay = this.generateBlobURL("overlay", readDocument(w2) as Uint8Array);
result =
`<div class='ls-imgdiff-wrap'>
const overlay = this.generateBlobURL(
"overlay",
readDocument(w2) as Uint8Array<ArrayBuffer>
);
result = `<div class='ls-imgdiff-wrap'>
<div class='overlay'>
<img class='img-base' src="${src}">
<img class='img-overlay' src='${overlay}'>
@@ -172,14 +192,12 @@ export class DocumentHistoryModal extends Modal {
}
}
}
}
if (result == undefined) {
if (typeof w1data != "string") {
if (isImage(this.file)) {
const src = this.generateBlobURL("base", w1data);
result =
`<div class='ls-imgdiff-wrap'>
result = `<div class='ls-imgdiff-wrap'>
<div class='overlay'>
<img class='img-base' src="${src}">
</div>
@@ -191,7 +209,8 @@ export class DocumentHistoryModal extends Modal {
}
}
if (result == undefined) result = typeof w1data == "string" ? escapeStringToHTML(w1data) : "Binary file";
this.contentView.innerHTML = (this.currentDeleted ? "(At this revision, the file has been deleted)\n" : "") + result;
this.contentView.innerHTML =
(this.currentDeleted ? "(At this revision, the file has been deleted)\n" : "") + result;
}
}
@@ -207,10 +226,10 @@ export class DocumentHistoryModal extends Modal {
divView.createEl("input", { type: "range" }, (e) => {
this.range = e;
e.addEventListener("change", (e) => {
this.loadRevs();
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
e.addEventListener("input", (e) => {
this.loadRevs();
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
});
contentEl
@@ -224,7 +243,7 @@ export class DocumentHistoryModal extends Modal {
checkbox.addEventListener("input", (evt: any) => {
this.showDiff = checkbox.checked;
localStorage.setItem("ols-history-highlightdiff", this.showDiff == true ? "1" : "");
this.loadRevs();
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
})
);
@@ -234,7 +253,7 @@ export class DocumentHistoryModal extends Modal {
.addClass("op-info");
this.info = contentEl.createDiv("");
this.info.addClass("op-info");
this.loadFile(this.initialRev);
fireAndForget(async () => await this.loadFile(this.initialRev));
const div = contentEl.createDiv({ text: "Loading old revisions..." });
this.contentView = div;
div.addClass("op-scrollable");
@@ -242,9 +261,11 @@ export class DocumentHistoryModal extends Modal {
const buttons = contentEl.createDiv("");
buttons.createEl("button", { text: "Copy to clipboard" }, (e) => {
e.addClass("mod-cta");
e.addEventListener("click", async () => {
await navigator.clipboard.writeText(this.currentText);
Logger(`Old content copied to clipboard`, LOG_LEVEL_NOTICE);
e.addEventListener("click", () => {
fireAndForget(async () => {
await navigator.clipboard.writeText(this.currentText);
Logger(`Old content copied to clipboard`, LOG_LEVEL_NOTICE);
});
});
});
const focusFile = async (path: string) => {
@@ -253,35 +274,37 @@ export class DocumentHistoryModal extends Modal {
const leaf = this.plugin.app.workspace.getLeaf(false);
await leaf.openFile(targetFile);
} else {
Logger("The file could not view on the editor", LOG_LEVEL_NOTICE)
Logger("Unable to display the file in the editor", LOG_LEVEL_NOTICE);
}
}
};
buttons.createEl("button", { text: "Back to this revision" }, (e) => {
e.addClass("mod-cta");
e.addEventListener("click", async () => {
// const pathToWrite = this.plugin.id2path(this.id, true);
const pathToWrite = stripPrefix(this.file);
if (!isValidPath(pathToWrite)) {
Logger("Path is not valid to write content.", LOG_LEVEL_INFO);
return;
}
if (!this.currentDoc) {
Logger("No active file loaded.", LOG_LEVEL_INFO);
return;
}
const d = readContent(this.currentDoc);
await this.plugin.vaultAccess.adapterWrite(pathToWrite, d);
await focusFile(pathToWrite);
this.close();
e.addEventListener("click", () => {
fireAndForget(async () => {
// const pathToWrite = this.plugin.id2path(this.id, true);
const pathToWrite = stripPrefix(this.file);
if (!isValidPath(pathToWrite)) {
Logger("Path is not valid to write content.", LOG_LEVEL_INFO);
return;
}
if (!this.currentDoc) {
Logger("No active file loaded.", LOG_LEVEL_INFO);
return;
}
const d = readContent(this.currentDoc);
await this.plugin.storageAccess.writeHiddenFileAuto(pathToWrite, d);
await focusFile(pathToWrite);
this.close();
});
});
});
}
onClose() {
const { contentEl } = this;
contentEl.empty();
this.BlobURLs.forEach(value => {
this.BlobURLs.forEach((value) => {
console.log(value);
if (value) URL.revokeObjectURL(value);
})
});
}
}
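Replacing the direct `this.loadRevs()` calls with `scheduleOnceIfDuplicated("loadRevs", ...)` coalesces the bursts of `input`/`change` events a range slider emits while being dragged. An illustrative reimplementation of that coalescing pattern (not octagonal-wheels' actual code):

    // Run the task now unless one with the same key is in flight; if so,
    // remember at most one follow-up run (the latest request wins).
    const running = new Map<string, boolean>();
    const pending = new Map<string, () => Promise<unknown>>();

    async function scheduleOnceIfDuplicatedSketch(key: string, task: () => Promise<unknown>): Promise<void> {
        if (running.get(key)) {
            pending.set(key, task);
            return;
        }
        running.set(key, true);
        try {
            await task();
        } finally {
            running.set(key, false);
            const next = pending.get(key);
            pending.delete(key);
            if (next) await scheduleOnceIfDuplicatedSketch(key, next);
        }
    }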


@@ -1,12 +1,12 @@
<script lang="ts">
import ObsidianLiveSyncPlugin from "../main";
import ObsidianLiveSyncPlugin from "../../../main.ts";
import { onDestroy, onMount } from "svelte";
import type { AnyEntry, FilePathWithPrefix } from "../lib/src/common/types";
import { getDocData, isAnyNote, isDocContentSame, readAsBlob } from "../lib/src/common/utils";
import { diff_match_patch } from "../deps";
import { DocumentHistoryModal } from "./DocumentHistoryModal";
import { isPlainText, stripAllPrefixes } from "../lib/src/string_and_binary/path";
import { TFile } from "../deps";
import type { AnyEntry, FilePathWithPrefix } from "../../../lib/src/common/types.ts";
import { getDocData, isAnyNote, isDocContentSame, readAsBlob } from "../../../lib/src/common/utils.ts";
import { diff_match_patch } from "../../../deps.ts";
import { DocumentHistoryModal } from "../DocumentHistory/DocumentHistoryModal.ts";
import { isPlainText, stripAllPrefixes } from "../../../lib/src/string_and_binary/path.ts";
import { getPath } from "../../../common/utils.ts";
export let plugin: ObsidianLiveSyncPlugin;
let showDiffInfo = false;
@@ -54,7 +54,7 @@
continue;
}
if (!isAnyNote(docA)) continue;
const path = plugin.getPath(docA as AnyEntry);
const path = getPath(docA as AnyEntry);
const isPlain = isPlainText(docA.path);
const revs = await db.getRaw(docA._id, { revs_info: true });
let p: string | undefined = undefined;
@@ -66,7 +66,10 @@
for (const revInfo of reversedRevs) {
if (revInfo.status == "available") {
const doc = (!isPlain && showDiffInfo) || (checkStorageDiff && revInfo.rev == docA._rev) ? await db.getDBEntry(path, { rev: revInfo.rev }, false, false, true) : await db.getDBEntryMeta(path, { rev: revInfo.rev }, true);
const doc =
(!isPlain && showDiffInfo) || (checkStorageDiff && revInfo.rev == docA._rev)
? await db.getDBEntry(path, { rev: revInfo.rev }, false, false, true)
: await db.getDBEntryMeta(path, { rev: revInfo.rev }, true);
if (doc === false) continue;
const rev = revInfo.rev;
@@ -89,12 +92,15 @@
const diff = dmp.diff_main(p, data);
dmp.diff_cleanupSemantic(diff);
p = data;
const pxinit = {
const pxInit = {
[DIFF_DELETE]: 0,
[DIFF_EQUAL]: 0,
[DIFF_INSERT]: 0,
} as { [key: number]: number };
const px = diff.reduce((p, c) => ({ ...p, [c[0]]: (p[c[0]] ?? 0) + c[1].length }), pxinit);
const px = diff.reduce(
(p, c) => ({ ...p, [c[0]]: (p[c[0]] ?? 0) + c[1].length }),
pxInit
);
diffDetail = `-${px[DIFF_DELETE]}, +${px[DIFF_INSERT]}`;
}
}
@@ -104,9 +110,13 @@
}
if (rev == docA._rev) {
if (checkStorageDiff) {
const abs = plugin.vaultAccess.getAbstractFileByPath(stripAllPrefixes(plugin.getPath(docA)));
if (abs instanceof TFile) {
const data = await plugin.vaultAccess.adapterReadAuto(abs);
const isExist = await plugin.storageAccess.isExistsIncludeHidden(
stripAllPrefixes(getPath(docA))
);
if (isExist) {
const data = await plugin.storageAccess.readHiddenFileBinary(
stripAllPrefixes(getPath(docA))
);
const d = readAsBlob(doc);
const result = await isDocContentSame(data, d);
if (result) {
@@ -117,7 +127,7 @@
}
}
}
const docPath = plugin.getPath(doc as AnyEntry);
const docPath = getPath(doc as AnyEntry);
const [filename, ...pathItems] = docPath.split("/").reverse();
let chunksStatus = "";
@@ -187,19 +197,28 @@
<div class="globalhistory">
<h1>Vault history</h1>
<div class="control">
<div class="row"><label for="">From:</label><input type="date" bind:value={dispDateFrom} disabled={loading} /></div>
<div class="row">
<label for="">From:</label><input type="date" bind:value={dispDateFrom} disabled={loading} />
</div>
<div class="row"><label for="">To:</label><input type="date" bind:value={dispDateTo} disabled={loading} /></div>
<div class="row">
<label for="">Info:</label>
<label><input type="checkbox" bind:checked={showDiffInfo} disabled={loading} /><span>Diff</span></label>
<label><input type="checkbox" bind:checked={showChunkCorrected} disabled={loading} /><span>Chunks</span></label>
<label><input type="checkbox" bind:checked={checkStorageDiff} disabled={loading} /><span>File integrity</span></label>
<label
><input type="checkbox" bind:checked={showChunkCorrected} disabled={loading} /><span>Chunks</span
></label
>
<label
><input type="checkbox" bind:checked={checkStorageDiff} disabled={loading} /><span>File integrity</span
></label
>
</div>
</div>
{#if loading}
<div class="">Gathering information...</div>
{/if}
<table>
<tbody>
<tr>
<th> Date </th>
<th> Path </th>
@@ -212,7 +231,7 @@
<tr>
<td colspan="5" class="more">
{#if loading}
<div class="" />
<div class=""></div>
{:else}
<div><button on:click={() => nextWeek()}>+1 week</button></div>
{/if}
@@ -257,12 +276,13 @@
<tr>
<td colspan="5" class="more">
{#if loading}
<div class="" />
<div class=""></div>
{:else}
<div><button on:click={() => prevWeek()}>+1 week</button></div>
{/if}
</td>
</tr>
</tbody>
</table>
</div>


@@ -1,14 +1,20 @@
import {
ItemView,
WorkspaceLeaf
} from "../deps.ts";
import { WorkspaceLeaf } from "../../../deps.ts";
import GlobalHistoryComponent from "./GlobalHistory.svelte";
import type ObsidianLiveSyncPlugin from "../main.ts";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
import { SvelteItemView } from "../../../common/SvelteItemView.ts";
import { mount } from "svelte";
export const VIEW_TYPE_GLOBAL_HISTORY = "global-history";
export class GlobalHistoryView extends ItemView {
export class GlobalHistoryView extends SvelteItemView {
instantiateComponent(target: HTMLElement) {
return mount(GlobalHistoryComponent, {
target: target,
props: {
plugin: this.plugin,
},
});
}
component?: GlobalHistoryComponent;
plugin: ObsidianLiveSyncPlugin;
icon = "clock";
title: string = "";
@@ -23,7 +29,6 @@ export class GlobalHistoryView extends ItemView {
this.plugin = plugin;
}
getViewType() {
return VIEW_TYPE_GLOBAL_HISTORY;
}
@@ -31,19 +36,4 @@ export class GlobalHistoryView extends ItemView {
getDisplayText() {
return "Vault history";
}
// eslint-disable-next-line require-await
async onOpen() {
this.component = new GlobalHistoryComponent({
target: this.contentEl,
props: {
plugin: this.plugin,
},
});
}
// eslint-disable-next-line require-await
async onClose() {
this.component?.$destroy();
}
}


@@ -0,0 +1,143 @@
import { App, Modal } from "../../../deps.ts";
import { DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT } from "diff-match-patch";
import { CANCELLED, LEAVE_TO_SUBSEQUENT, type diff_result } from "../../../lib/src/common/types.ts";
import { escapeStringToHTML } from "../../../lib/src/string_and_binary/convert.ts";
import { delay } from "../../../lib/src/common/utils.ts";
import { eventHub } from "../../../common/events.ts";
import { globalSlipBoard } from "../../../lib/src/bureau/bureau.ts";
export type MergeDialogResult = typeof CANCELLED | typeof LEAVE_TO_SUBSEQUENT | string;
declare global {
interface Slips extends LSSlips {
"conflict-resolved": typeof CANCELLED | MergeDialogResult;
}
}
export class ConflictResolveModal extends Modal {
result: diff_result;
filename: string;
response: MergeDialogResult = CANCELLED;
isClosed = false;
consumed = false;
title: string = "Conflicting changes";
pluginPickMode: boolean = false;
localName: string = "Base";
remoteName: string = "Conflicted";
offEvent?: ReturnType<typeof eventHub.onEvent>;
constructor(app: App, filename: string, diff: diff_result, pluginPickMode?: boolean, remoteName?: string) {
super(app);
this.result = diff;
this.filename = filename;
this.pluginPickMode = pluginPickMode || false;
if (this.pluginPickMode) {
this.title = "Pick a version";
this.remoteName = `${remoteName || "Remote"}`;
this.localName = "Local";
}
// Send a cancel signal to the previous merge dialogue;
// if there is none, it is simply ignored.
// sendValue("close-resolve-conflict:" + this.filename, false);
}
onOpen() {
const { contentEl } = this;
// Send a cancel signal to the previous merge dialogue;
// if there is none, it is simply ignored.
globalSlipBoard.submit("conflict-resolved", this.filename, CANCELLED);
if (this.offEvent) {
this.offEvent();
}
this.offEvent = eventHub.onEvent("conflict-cancelled", (path) => {
if (path === this.filename) {
this.sendResponse(CANCELLED);
}
});
// sendValue("close-resolve-conflict:" + this.filename, false);
this.titleEl.setText(this.title);
contentEl.empty();
contentEl.createEl("span", { text: this.filename });
const div = contentEl.createDiv("");
div.addClass("op-scrollable");
let diff = "";
for (const v of this.result.diff) {
const x1 = v[0];
const x2 = v[1];
if (x1 == DIFF_DELETE) {
diff +=
"<span class='deleted'>" +
escapeStringToHTML(x2).replace(/\n/g, "<span class='ls-mark-cr'></span>\n") +
"</span>";
} else if (x1 == DIFF_EQUAL) {
diff +=
"<span class='normal'>" +
escapeStringToHTML(x2).replace(/\n/g, "<span class='ls-mark-cr'></span>\n") +
"</span>";
} else if (x1 == DIFF_INSERT) {
diff +=
"<span class='added'>" +
escapeStringToHTML(x2).replace(/\n/g, "<span class='ls-mark-cr'></span>\n") +
"</span>";
}
}
const div2 = contentEl.createDiv("");
const date1 =
new Date(this.result.left.mtime).toLocaleString() + (this.result.left.deleted ? " (Deleted)" : "");
const date2 =
new Date(this.result.right.mtime).toLocaleString() + (this.result.right.deleted ? " (Deleted)" : "");
div2.setHTMLUnsafe(`
<span class='deleted'><span class='conflict-dev-name'>${this.localName}</span>: ${date1}</span><br>
<span class='added'><span class='conflict-dev-name'>${this.remoteName}</span>: ${date2}</span><br>
`);
contentEl.createEl("button", { text: `Use ${this.localName}` }, (e) =>
e.addEventListener("click", () => this.sendResponse(this.result.right.rev))
).style.marginRight = "4px";
contentEl.createEl("button", { text: `Use ${this.remoteName}` }, (e) =>
e.addEventListener("click", () => this.sendResponse(this.result.left.rev))
).style.marginRight = "4px";
if (!this.pluginPickMode) {
contentEl.createEl("button", { text: "Concat both" }, (e) =>
e.addEventListener("click", () => this.sendResponse(LEAVE_TO_SUBSEQUENT))
).style.marginRight = "4px";
}
contentEl.createEl("button", { text: !this.pluginPickMode ? "Not now" : "Cancel" }, (e) =>
e.addEventListener("click", () => this.sendResponse(CANCELLED))
).style.marginRight = "4px";
diff = diff.replace(/\n/g, "<br>");
// div.innerHTML = diff;
if (diff.length > 100 * 1024) {
div.setText("(Too large diff to display)");
} else {
div.setHTMLUnsafe(diff);
}
}
sendResponse(result: MergeDialogResult) {
this.response = result;
this.close();
}
onClose() {
const { contentEl } = this;
contentEl.empty();
if (this.offEvent) {
this.offEvent();
}
if (this.consumed) {
return;
}
this.consumed = true;
globalSlipBoard.submit("conflict-resolved", this.filename, this.response);
}
async waitForResult(): Promise<MergeDialogResult> {
await delay(100);
const r = await globalSlipBoard.awaitNext("conflict-resolved", this.filename);
return r;
}
}
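The modal reports its outcome through the global slip board rather than a callback: the caller opens it and awaits the result keyed by filename, and `onClose` guarantees a submission (defaulting to `CANCELLED`) even if the user dismisses the dialogue. A sketch of the calling side, assuming a `diff_result` is already at hand:

    const modal = new ConflictResolveModal(app, filename, diff);
    modal.open();
    const answer = await modal.waitForResult();
    if (answer === CANCELLED) {
        // the user dismissed the dialogue; retry later
    } else if (answer === LEAVE_TO_SUBSEQUENT) {
        // concatenate both revisions
    } else {
        // `answer` is the chosen revision string
    }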


@@ -0,0 +1,105 @@
<script lang="ts">
import { onDestroy, onMount } from "svelte";
import { logMessages } from "../../../lib/src/mock_and_interop/stores";
import { reactive, type ReactiveInstance } from "octagonal-wheels/dataobject/reactive";
import { Logger } from "../../../lib/src/common/logger";
import { $msg as msg, currentLang as lang } from "../../../lib/src/common/i18n.ts";
let unsubscribe: () => void;
let messages = $state([] as string[]);
let wrapRight = $state(false);
let autoScroll = $state(true);
let suspended = $state(false);
type Props = {
close: () => void;
};
let { close }: Props = $props();
// export let close: () => void;
function updateLog(logs: ReactiveInstance<string[]>) {
const e = logs.value;
if (!suspended) {
messages = [...e];
setTimeout(() => {
if (scroll) scroll.scrollTop = scroll.scrollHeight;
}, 10);
}
}
onMount(async () => {
const _logMessages = reactive(() => logMessages.value);
_logMessages.onChanged(updateLog);
Logger(msg("logPane.logWindowOpened", {}, lang));
unsubscribe = () => _logMessages.offChanged(updateLog);
});
onDestroy(() => {
if (unsubscribe) unsubscribe();
});
let scroll: HTMLDivElement;
function closeDialogue() {
close();
}
</script>
<div class="logpane">
<!-- <h1>{msg("logPane.title", {}, lang)}</h1> -->
<div class="control">
<div class="row">
<label>
<input type="checkbox" bind:checked={wrapRight} />
<span>{msg("logPane.wrap", {}, lang)}</span>
</label>
<label>
<input type="checkbox" bind:checked={autoScroll} />
<span>{msg("logPane.autoScroll", {}, lang)}</span>
</label>
<label>
<input type="checkbox" bind:checked={suspended} />
<span>{msg("logPane.pause", {}, lang)}</span>
</label>
<span class="spacer"></span>
<button onclick={() => closeDialogue()}>Close</button>
</div>
</div>
<div class="log" bind:this={scroll}>
{#each messages as line}
<pre class:wrap-right={wrapRight}>{line}</pre>
{/each}
</div>
</div>
<style>
* {
box-sizing: border-box;
}
.logpane {
display: flex;
height: 100%;
flex-direction: column;
}
.log {
overflow-y: scroll;
user-select: text;
-webkit-user-select: text;
padding-bottom: 2em;
}
.log > pre {
margin: 0;
}
.log > pre.wrap-right {
word-break: break-all;
max-width: 100%;
width: 100%;
white-space: normal;
}
.row {
display: flex;
flex-direction: row;
justify-content: flex-end;
}
.row > label {
display: flex;
align-items: center;
min-width: 5em;
margin-right: 1em;
}
</style>


@@ -0,0 +1,43 @@
import { WorkspaceLeaf } from "obsidian";
import LogPaneComponent from "./LogPane.svelte";
import type ObsidianLiveSyncPlugin from "../../../main.ts";
import { SvelteItemView } from "../../../common/SvelteItemView.ts";
import { $msg } from "src/lib/src/common/i18n.ts";
import { mount } from "svelte";
export const VIEW_TYPE_LOG = "log-log";
//Log view
export class LogPaneView extends SvelteItemView {
instantiateComponent(target: HTMLElement) {
return mount(LogPaneComponent, {
target: target,
props: {
close: () => {
this.leaf.detach();
},
},
});
}
plugin: ObsidianLiveSyncPlugin;
icon = "view-log";
title: string = "";
navigation = false;
getIcon(): string {
return "view-log";
}
constructor(leaf: WorkspaceLeaf, plugin: ObsidianLiveSyncPlugin) {
super(leaf);
this.plugin = plugin;
}
getViewType() {
return VIEW_TYPE_LOG;
}
getDisplayText() {
// TODO: This function is not reactive and does not update the title based on the current language
return $msg("logPane.title");
}
}


@@ -0,0 +1,25 @@
import { AbstractObsidianModule } from "../AbstractObsidianModule.ts";
import { VIEW_TYPE_GLOBAL_HISTORY, GlobalHistoryView } from "./GlobalHistory/GlobalHistoryView.ts";
export class ModuleObsidianGlobalHistory extends AbstractObsidianModule {
_everyOnloadStart(): Promise<boolean> {
this.addCommand({
id: "livesync-global-history",
name: "Show vault history",
callback: () => {
this.showGlobalHistory();
},
});
this.registerView(VIEW_TYPE_GLOBAL_HISTORY, (leaf) => new GlobalHistoryView(leaf, this.plugin));
return Promise.resolve(true);
}
showGlobalHistory() {
void this.services.API.showWindow(VIEW_TYPE_GLOBAL_HISTORY);
}
onBindFunction(core: typeof this.core, services: typeof core.services): void {
services.appLifecycle.handleOnInitialise(this._everyOnloadStart.bind(this));
}
}

Some files were not shown because too many files have changed in this diff.