Compare commits

...

30 Commits

Author SHA1 Message Date
vorotamoroz
ebcb059d99 Modified:
- Plugins and settings sync is now in beta.

Implemented:
- Show the count of pending processes in the status bar.
2022-01-11 13:17:35 +09:00
vorotamoroz
5bb8b2567b Modified:
- Implement automatic temporary reduction of batch sizes.
- Disable remote checkpointing.
2022-01-05 17:20:33 +09:00
vorotamoroz
c3464a4e9c New feature:
- Bootup sequence prevention implemented.

Touched up the docs.
2021-12-28 11:30:19 +09:00
vorotamoroz
55545da45f Fixed:
- Fixed problems with saving or deleting files in the local database.
- Disabled the version-up warning.
- Fixed an error on folder renaming.
- Merge dialogs are now shown one by one.
- Fixed icons of queued files.
- Handled the folder-to-file sync issue.
- Fixed the messages in the settings dialog.
- Fixed a deadlock.
2021-12-24 17:05:57 +09:00
vorotamoroz
96165b4f9b Fixed and implemented:
- New configuration to solve synchronization failures on large vaults.
- Preset misconfigurations
- Fixed occasional hangs during replication.
- Wrote documentation.
2021-12-23 13:22:46 +09:00
vorotamoroz
abe613539b Fixes:
- Allow "'" and ban "#" in filenames. #27
- Fixed misspelling (one of #28)
2021-12-22 19:38:00 +09:00
vorotamoroz
fc210de58b bumped 2021-12-16 19:08:15 +09:00
vorotamoroz
1b2f9dd171 Refactored and fixed:
- Refactored, linted, fixed potential problems, enabled 'use strict'

Fixed:
- Added "Enable plugin synchronization" option
(Plugin and settings sync had previously always run)

Implemented:
- Sync preset implemented.
- "Check integrity on saving" implemented.
- "Sanity check" implemented
It's mainly for debugging.
2021-12-16 19:06:42 +09:00
vorotamoroz
eef2281ae3 Mini fix:
Fixed a problem with plugin sync timing and notification.
2021-12-15 09:22:43 +09:00
vorotamoroz
40ed2bbdcf Improved:
- Tidied up the settings dialog.
- Implemented automatic plugin saving.
- Implemented notification of new plugins or their settings.

Fixed:

- Reduced reconnections when editing the configuration.
- Fixed a problem with syncing the plugin's stylesheet.
2021-12-14 19:14:17 +09:00
vorotamoroz
92fd814c89 update readme 2021-12-07 17:29:36 +09:00
vorotamoroz
3118276603 Wrote the docs. 2021-12-07 17:28:18 +09:00
vorotamoroz
2b11be05ec Add new feature:
- Reread all files
2021-12-06 12:19:05 +09:00
vorotamoroz
0ee73860d1 Fixed:
- Reduced file corruption.
- Fixed notices that were not hidden automatically.
2021-12-06 11:43:42 +09:00
vorotamoroz
ecec546f13 Improvements:
- Show sync status information inside the editor.

Fixed:
- Reduced duplicate popup notifications.
- Show a warning message during synchronization.
2021-12-03 12:54:18 +09:00
vorotamoroz
4a8c76efb5 Tidy up:
- Plugin and setting table.
- Added new option showOwnPlugin.
- Changed icons.
- Buttons in the settings dialog.

Fixed: Files containing "$" in the filename could not be synced.
2021-11-29 16:31:29 +09:00
vorotamoroz
75ee63e573 Bumped 2021-11-26 17:27:35 +09:00
vorotamoroz
3435efaf89 Fixed:
- Removed the waiting delay when fetching files from the database at launch.
- The database was not being closed at the right time.
- Fixed a wrong message.

Improved:
- Plugins and Settings sync is not compatible with Cloudant.
- "Restore from file" is added on "Corrupted data".
2021-11-26 07:52:15 +09:00
vorotamoroz
57f91eb407 Just added "yet".
(I will improve this feature)
2021-11-26 00:56:52 +09:00
vorotamoroz
50916aef0b Added a warning message. 2021-11-26 00:51:50 +09:00
vorotamoroz
8126bb6c02 Implemented:
- Plugins and settings sync (bleeding edge, not tested well)
2021-11-25 23:50:46 +09:00
vorotamoroz
12753262fd bumped 2021-11-25 12:32:35 +09:00
vorotamoroz
97b34cff47 Fixed:
- An issue with filenames.

Improved:
- Logging (error details are now logged).
- Testing remote DB.
2021-11-25 03:13:08 +09:00
vorotamoroz
85e29b99b2 Bumped and documented. 2021-11-24 17:37:13 +09:00
vorotamoroz
2d223a1439 Added new configuration:
- Use newer file if conflicted
Improved:
- Status messages improved.
- Fixed misconfigurations on automatically disabled sync.
2021-11-24 17:31:03 +09:00
vorotamoroz
c8decb05f5 Documented and added features.
- Toggle All Sync (command) for suspend all sync.
- Batch database update (beta)
2021-11-18 18:15:23 +09:00
vorotamoroz
6fcb6e5a6a bumped 2021-11-15 12:25:45 +09:00
vorotamoroz
bf4ce560ea Fixed 3 issues, implemented a feature, and tidied up.
Fixed:
- Note splitting bug
- Missing some logging
- Leaking important things into the log. #11
New feature:
- Do not delete empty folders
Tidy up:
- Settings dialog.
- URI and Database settings are now split.
- Controlled options that should not be selected at the same time
- Drop history improved
2021-11-15 12:25:18 +09:00
vorotamoroz
8adab63724 Fixed issues and implemented End-to-End Encryption.
Improvements:
- End to End Encryption implemented (beta)
- Sped up boot-time file checking.
- Show status while dropping history and setting up E2E

Fixes:
- Fixed a replication issue where a device's own changes were reflected again.
- Fixed a replication issue where an unexpected error message was shown.
- Fixed a replication issue on mobile (excessive resolution of modified time).
- Fixed an error on initialization.
2021-11-12 19:11:52 +09:00
vorotamoroz
9facb57760 Bugs fixed and new features implemented
- Synchronization timing problem fixed
- Improved performance when handling large files
- Timeout for collecting leaves extended
- Periodic synchronization implemented
- Dumping document information implemented
- Folder watching problem fixed
- Vault watching delayed until the database is ready
2021-11-10 18:07:09 +09:00
33 changed files with 10053 additions and 2683 deletions

3
.eslintignore Normal file

@@ -0,0 +1,3 @@
npm node_modules
build
.eslintrc.js.bak

19
.eslintrc Normal file

@@ -0,0 +1,19 @@
{
"root": true,
"parser": "@typescript-eslint/parser",
"plugins": ["@typescript-eslint"],
"extends": ["eslint:recommended", "plugin:@typescript-eslint/eslint-recommended", "plugin:@typescript-eslint/recommended"],
"parserOptions": {
"sourceType": "module"
},
"rules": {
"no-unused-vars": "off",
"@typescript-eslint/no-unused-vars": ["error", { "args": "none" }],
"@typescript-eslint/ban-ts-comment": "off",
"no-prototype-builtins": "off",
"@typescript-eslint/no-empty-function": "off",
"require-await": "warn",
"no-async-promise-executor": "off",
"@typescript-eslint/no-explicit-any": "off"
}
}

178
README.md

@@ -1,40 +1,58 @@
# Self-hosted LiveSync
Sorry for the delay! [Japanese docs](./README_ja.md) are now available too.
**Renamed from: obsidian-livesync**
This is an Obsidian plugin that enables live synchronization between multiple devices using a self-hosted database.
Runs on Mac, Android, Windows, and iOS.
Using a self-hosted database, it live-syncs between multiple devices bidirectionally.
Runs on Mac, Android, Windows, and iOS. Perhaps available on Linux too.
This is a community implementation, not compatible with the official "Sync".
<!-- <div><video controls src="https://user-images.githubusercontent.com/45774780/137352386-a274736d-a38b-4069-ac41-759c73e36a23.mp4" muted="false"></video></div> -->
![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
**It's becoming fairly stable now, but please make sure to back up your vault!**
Limitations: ~~Folder deletion handling is not completed.~~ **It should work now.**
## This plugin enables..
## This plugin enables...
- Live Sync
- Self-hosted data synchronization with conflict detection and resolution in Obsidian.
- Runs on Windows, Mac, iPad, iPhone, Android, and Chromebook
- Synchronize to a self-hosted database
- Replicate to/from other devices bidirectionally in near-real-time
- Resolve synchronization conflicts in Obsidian.
- You can use CouchDB or compatible services like IBM Cloudant. CouchDB is OSS, and IBM Cloudant has security terms and certifications. Your notes are yours.
- Offline sync is also available.
- Receive WebClip from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
- End-to-End encryption is available (beta).
- Receive WebClip from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf) (End-to-End encryption will not be applicable.)
This should be useful for researchers, engineers, and developers who have to comply with NDAs or similar agreements.
In particular, in some companies, people have to store all data on hosts they fully control, even with End-to-End encryption applied.
## IMPORTANT NOTICE
**Please make sure to disable other synchronization solutions to avoid content corruption or duplication.**
If you want to synchronize to multiple backends, sync them one at a time.
- Do not use with other synchronization solutions. Before enabling this plugin, make sure to disable other synchronization solutions to avoid content corruption or duplication. If you want to synchronize to multiple backends, sync them one at a time.
This includes placing your vault in a cloud-controlled folder (e.g., inside an iCloud folder).
- This is a synchronization plugin, not a backup solution. Do not rely on it for backups.
- When the device runs out of storage, database corruption may occur.
- Hidden files or any other files invisible to Obsidian will not be kept in the database (**or may even be deleted**).
## Supplements
- When a file has been deleted, the deletion is replicated to other devices.
- When a folder becomes empty through replication, the folder is deleted by default. You can change this behaviour; check the [Settings](docs/settings.md).
- LiveSync drains a lot of battery on mobile devices.
- Mobile Obsidian cannot connect to non-secure (HTTP) servers or servers signed by a local CA, even if the certificate is stored in the device.
- There is no 'exclude_folders'-like configuration.
## How to use
1. Install from Obsidian, or clone this repo and run `npm run build`, then copy `main.js`, `styles.css` and `manifest.json` into `[your-vault]/.obsidian/plugins/` (PC, Mac and Android will work)
2. Enable Self-hosted LiveSync in the settings dialog.
3. If you use your own self-hosted CouchDB, set your server's info.
4. Or use [IBM Cloudant](https://www.ibm.com/cloud/cloudant): create an account and enable **Cloudant** in the [Catalog](https://cloud.ibm.com/catalog#services)
Note: please choose "IAM and legacy credentials" for the Authentication method
Setup details are in the Cloudant Setup section.
5. Set up LiveSync, SyncOnSave, or SyncOnStart as you like.
1. Install from Obsidian, or download from this repo's releases, copy `main.js`, `styles.css` and `manifest.json` into `[your-vault]/.obsidian/plugins/`
2. Get your database. IBM Cloudant is preferred for testing. Or you can use your own server with CouchDB.
For more information, refer to the following:
1. [Setup IBM Cloudant](docs/setup_cloudant.md)
2. [Setup your CouchDB](docs/setup_own_server.md)
3. Enter the connection information into the plugin's settings dialog. For details, refer to [Settings of Self-hosted LiveSync](docs/settings.md)
4. Enable LiveSync or another synchronization method as you like.
## Test Server
@@ -46,117 +64,29 @@ Note: Please read "Limitations" carefully. Do not send your private vault.
Available on the Chrome Web Store: [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
Repo is here: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip). (Docs are work in progress.)
## When your database looks corrupted or too heavy to replicate to a new device.
# Information in StatusBar
Self-hosted LiveSync changed how it treats the data of markdown files in 0.1.0.
When you are troubled with synchronization, **Please reset local and remote databases**.
_Note: Without synchronization, your files won't be deleted._
Synchronization status is shown in the status bar.
1. Update plugin on all devices.
1. Disable any synchronizations on all devices.
1. From the most reliable device<sup>(_The device_)</sup>, back your vault up.
1. Press "Drop History"-> "Execute" button from _The device_.
1. Wait a while until Self-hosted LiveSync says "completed."
1. On the other devices, replication will be canceled automatically. Click "Reset local database" and then click "I'm ready, mark this device 'resolved'" on every device.
If it is not shown, replicate once.
1. That's all. If you are sure every device has been resolved and the warning is noisy, click "I'm ready, unlock the database"; it unlocks the database completely.
- Status
- ⏹️ Stopped
- 💤 LiveSync is enabled. Waiting for changes.
- ⚡️ Synchronization is in progress.
- ⚠ Error occurred.
- ↑ Uploaded pieces
- ↓ Downloaded pieces
- ⏳ Count of pending processes
If you have deleted or renamed files, please wait until this disappears.
# Designed architecture
# More supplements
## How does this plugin synchronize?
![Synchronization](images/1.png)
1. When notes are created or modified, Obsidian raises some events. obsidian-live-sync catches these events and reflects the changes into the local PouchDB.
2. PouchDB automatically or manually replicates the changes to the remote CouchDB.
3. Another device watches the remote CouchDB's changes and retrieves the new ones.
4. obsidian-live-sync reflects the replicated changeset into Obsidian's vault.
Note: The figure is drawn as single-directional, between two devices. But in reality, everything occurs bidirectionally between many devices at once.
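As a rough illustration of steps 1 to 4, here is a minimal TypeScript sketch of continuous bidirectional replication using PouchDB's public API. The database names, URL, and credentials are placeholders; this is not the plugin's actual code.

```typescript
import PouchDB from "pouchdb";

// Local database (inside the vault) and remote CouchDB/Cloudant endpoint (placeholders).
const localDB = new PouchDB("obsidian-livesync-local");
const remoteDB = new PouchDB("https://example-host.cloudantnosqldb.appdomain.cloud/sync-test", {
    auth: { username: "your-username", password: "your-password" },
});

// Continuous, bidirectional replication: local changes are pushed, remote changes are pulled.
PouchDB.sync(localDB, remoteDB, { live: true, retry: true })
    .on("change", (info) => {
        // info.direction is "push" or "pull"; pulled changes would be reflected into the vault here.
        console.log(`replicated ${info.change.docs_written} docs (${info.direction})`);
    })
    .on("error", (err) => console.error("replication error", err));
```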
## Techniques to keep bandwidth low.
![dedupe](images/2.png)
## Cloudant Setup
### Creating an Instance
1. Hit the "Create Resource" button.
![step 1](./instruction_images/cloudant_1.png)
1. In IBM Cloud Catalog, search "Cloudant".
![step 2](instruction_images/cloudant_2.png)
1. You can choose "Lite plan" for free.
![step 3](instruction_images/cloudant_3.png)
Select Multitenant(it's the default) and the region as you like.
![step 4](instruction_images/cloudant_4.png) 3. Be sure to select "IAM and Legacy credentials" for "Authentication Method".
![step 5](instruction_images/cloudant_5.png)
4. Select Lite and be sure to check the capacity.
![step 6](instruction_images/cloudant_6.png)
5. And hit "Create" on the right panel.
![step 7](instruction_images/cloudant_7.png)
6. When all of the above steps have been done, Open "Resource list" on the left pane. you can see the Cloudant instance in the "Service and software". Click it.
![step 8](instruction_images/cloudant_8.png)
7. In resource details, there's information to connect from self-hosted-livesync.
Copy the "External Endpoint(preferred)" address. <sup>(\*1)</sup>. We use this address later, with the database name.
![step 9](instruction_images/cloudant_9.png)
### CouchDB setup
1. Hit the "Launch Dashboard" button, Cloudant dashboard will be shown.
Yes, it's almost CouchDB's fauxton.
![step 1](instruction_images/couchdb_1.png)
1. First, you have to enable the CORS option.
Hit the Account menu and open the "CORS" tab.
Initially, "Origin Domains" is set to "Restrict to specific domains"., so set to "All domains(\*)"
_NOTE: of course We want to set "app://obsidian.md" but it's not acceptable on Cloudant._
![step 2](instruction_images/couchdb_2.png)
1. And open the "Databases" tab and hit the "Create Database" button.
Enter the name as you like <sup>(\*2)</sup> and Hit the "Create" button below.
![step 3](instruction_images/couchdb_3.png)
1. If the database was shown with joyful messages, setup is almost done.
And, once you have confirmed that you can create a database, usually there is no need to open this screen.
You can create a database from Self-hosted LiveSync.
![step 4](instruction_images/couchdb_4.png)
### Credentials Setup
1. Back into IBM Cloud, Open the "Service credentials". You'll get an empty list, hit the "New credential" button.
![step 1](instruction_images/credentials_1.png)
1. The dialog to create a credential will be shown.
type any name or leave it default, hit the "Add" button.
![step 2](instruction_images/credentials_2.png)
_NOTE: This "name" is not related to your username that uses in Self-hosted LiveSync._
1. Back to "Service credentials", the new credential should be created.
open details.
![step 3](instruction_images/credentials_3.png)
The username and password pair is inside this JSON.
"username" and "password" are so.
follow the figure, it's
"apikey-v2-2unu15184f7o8emr90xlqgkm2ncwhbltml6tgnjl9sd5"<sup>(\*3)</sup> and "c2c11651d75497fa3d3c486e4c8bdf27"<sup>(\*4)</sup>
### Self-hosted LiveSync setting
![xx](instruction_images/obsidian_sync_1.png)
example values.
| Items | Value | example |
| ------------------- | -------------------------------- | --------------------------------------------------------------------------- |
| CouchDB Remote URI: | (\*1)/(\*2) or any favorite name | https://xxxxxxxxxxxxxxxxx-bluemix.cloudantnosqldb.appdomain.cloud/sync-test |
| CouchDB Username | (\*3) | apikey-v2-2unu15184f7o8emr90xlqgkm2ncwhbltml6tgnjl9sd5 |
| CouchDB Password | (\*4) | c2c11651d75497fa3d3c486e4c8bdf27 |
- When synchronized, files are compared by their modified times and the newer one overwrites the other once. Then the plugin checks for conflicts, and if a merge is needed, a dialog will open.
- Rarely, a file in the database may become broken. The plugin will not write a broken-looking file to storage, so an older copy should remain on your device. If you edit that file, it will be cured. But if the file does not exist on any device, it cannot be rescued, and you can delete such items from the settings dialog.
- If your database looks corrupted, try "Drop History". It is usually the easiest way.
- To stop the boot-up sequence so you can fix database problems, you can put `redflag.md` at the top of your vault.
- Q: The database is growing; how can I shrink it?
A: Each document is saved with its 100 old revisions to detect and resolve conflicts. Picture a device that has been offline for a while and then joins again. The device has to compare its note with the remotely saved note. If the device's revision exists in the remote note's revision history, even though it is a little different from the latest one, it can be merged safely. Even if it is not in the revision history, only the differences after the most recent revision both devices have in common need to be checked. This is like Git's conflict-resolution method. So, just like an enlarged Git repository, the database has to be rebuilt if you want to solve the problem at its root.
- More technical information is in the [Technical Information](docs/tech_info.md)
# License

96
README_ja.md Normal file

@@ -0,0 +1,96 @@
# Self-hosted LiveSync
**Formerly: obsidian-livesync**
An Obsidian plugin that performs bidirectional live synchronization using a self-hosted database.
**Not compatible with the official "Sync".**
![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
**It is mostly working now, but be sure to back up your vault.**
[English version](./README.md)
## What this plugin can do
- Runs on Windows, Mac, iPad, iPhone, Android, and Chromebook
- Synchronizes to a self-hosted database,
- distributes changes to multiple devices almost in real time,
- and also delivers changes made on other devices back to each device, realizing bidirectional, real-time LiveSync,
- Conflicts between changes can be resolved on the spot.
- You can use CouchDB, or its compatible DBaaS IBM Cloudant, as the synchronization server. Your data is yours.
- Of course, non-live synchronization is also possible.
- Just in case, the content sent to the server can be encrypted (beta).
- There is also a [web clipper](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf) (not covered by End-to-End encryption).
Especially recommended for researchers, designers, and developers who need to comply with NDAs or similar agreements, obligations, or ethics.
In particular, in enterprises, storing data only on servers under your own control may be required even when End-to-End encryption is applied.
# Important notice
- ❌ To avoid file duplication or corruption, do not use multiple synchronization methods at the same time.
This includes placing your vault in a cloud-managed folder (for example, inside an iCloud-managed folder).
- ⚠️ This plugin was created to propagate notes between devices; it is not a backup tool. Always do backups with another solution.
- If the device runs out of free storage, the database may become corrupted.
- If you edit hidden files or files that Obsidian cannot recognize, those files may be deleted.
# Supplements
- When a file is deleted on a remote device, the deletion is also reflected on the receiving device through replication.
- In that case, Self-hosted LiveSync does not keep a folder once it becomes empty, by default. If you want to keep it, change this in the options.
- LiveSync consumes a lot of battery on mobile.
- Mobile devices cannot connect to non-HTTPS endpoints, or to HTTPS servers hosted with a certificate issued by a private CA.
- There is no setting such as excluded folders.
# How to use this plugin
1. Search for Self-hosted LiveSync in Community Plugins and install it, or download `main.js`, `manifest.json`, and `style.css` from this repository's Releases, put them into `.obsidian/plugins/obsidian-livesync` inside your vault, and restart Obsidian.
2. Get a server. IBM Cloudant is easy, robust, and convenient. To fully self-host, you need to install CouchDB on your own server. For details, see:
1. [Setting up IBM Cloudant](docs/setup_cloudant_ja.md)
2. [Setting up your own CouchDB](docs/setup_own_server_ja.md)
3. Enter the server information. For the first time only, restarting Obsidian is recommended.
For details of the settings, see [Settings of this plugin](docs/settings_ja.md).
4. Choose your preferred synchronization method and start using it.
# Test server
If you are hesitant about installing CouchDB or setting up a Cloudant instance, try the [Self-hosted LiveSync test server](https://olstaste.vrtmrz.net/) I have set up.
Note: read the limitations carefully before use, and never synchronize the vault you actually use.
# WebClipper is available
A WebClipper for Self-hosted LiveSync is also available. It can be downloaded from the Chrome Web Store:
[obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)
The repository is here: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip).
As usual, the documentation hasn't caught up yet.
# Information in the status bar
The synchronization status is shown in the status bar at the bottom right.
- Status
- ⏹️ Synchronization is stopped.
- 💤 LiveSync is running and waiting for something to happen.
- ⚡️ Synchronization is in progress.
- ⚠ An error has occurred.
- ↑ Number of items sent
- ↓ Number of items received
- ⏳ Number of pending processes
If you have deleted or renamed files, please wait until this indicator disappears.
# More supplements
- After files are synchronized, timestamps are compared and the newer file overwrites the other once. Then, depending on whether a conflict occurred, a merge is performed.
- Rarely, a file may become corrupted. Corrupted files are not written to disk, so a slightly older copy usually remains on the device you are using. If you update that file again, the database is updated and the problem often goes away. If the file does not exist on any device, you can delete it from the settings screen.
- If the database looks strange, try re-initializing it with Drop History's "apply and send". That usually fixes it.
- If the local database cannot be repaired properly, for example after restarting in the middle of a database recovery, place a file called `redflag.md` at the top of your vault. The boot-up sequence will then be skipped.
- "The database is growing; can it be shrunk?" Each note is stored together with its 100 old revisions. Imagine a device that has been offline for a while and then syncs again. At that point it holds a revision that differs slightly from the latest one. Even so, if that revision exists in the remote revision history, it can be merged safely. If it does not, the differences to check can still be narrowed down to those after the most recent revision both sides have in common. Conflicts are resolved in much the same way as Git does. Therefore, just like with a bloated Git repository, rebuilding the database is required if you really want to make it smaller.
- Other technical topics are described in [Technical information](docs/tech_info_ja.md).
# License
The source code is licensed under MIT.

269
docs/settings.md Normal file

@@ -0,0 +1,269 @@
# Settings of this plugin
## Remote Database Configurations
Configure the synchronization server settings. While any synchronization is enabled, you can't edit this section; please disable all synchronization before changing it.
### URI
The URI of CouchDB. In the case of Cloudant, it's the "External Endpoint (preferred)".
**Do not end it with a slash** when it doesn't contain the database name.
### Username
Your CouchDB username. An account with administrator privileges is preferred.
### Password
Your CouchDB password.
Note: this password is saved in your Obsidian vault in plain text.
### Database Name
The database name to synchronize with.
If it does not exist, it will be created automatically.
### Test Database connection
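Checks whether the database can be reached with the values above. As a hedged sketch, the same check can be done directly with PouchDB's HTTP adapter (this is not the plugin's own code; URI, credentials, and database name are placeholders):

```typescript
import PouchDB from "pouchdb";

// Placeholder values corresponding to the settings above.
const uri = "https://your-couchdb.example.com"; // no trailing slash
const databaseName = "sync-test";

const remote = new PouchDB(`${uri}/${databaseName}`, {
    auth: { username: "your-username", password: "your-password" },
});

// info() issues a GET against the database; success means the URI, credentials,
// and database name are all usable. A 404 usually means the database does not exist yet.
remote.info()
    .then((info) => console.log("connected:", info.db_name, "docs:", info.doc_count))
    .catch((err) => console.error("connection failed:", err.status, err.message));
```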
## Local Database Configurations
"Local Database" is created inside your obsidian.
### Batch database update
Delay database updates until replication starts, another file is opened, window visibility changes, or a file event other than a modification occurs.
This option cannot be used together with LiveSync.
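A minimal sketch of the batching idea (not the plugin's actual implementation): modifications are queued per file and flushed only when one of the trigger events above fires. The `saveToDatabase` callback is a hypothetical persistence helper.

```typescript
type PendingChange = { path: string; content: string };

const pending = new Map<string, PendingChange>();

// Queue a modification; later edits to the same file simply overwrite the queued entry.
function queueChange(change: PendingChange): void {
    pending.set(change.path, change);
}

// Flush everything that has been queued. Called before replication, when another file
// is opened, when window visibility changes, or on file events other than modification.
async function flushPending(saveToDatabase: (c: PendingChange) => Promise<void>): Promise<void> {
    for (const change of pending.values()) {
        await saveToDatabase(change); // hypothetical helper that writes into the local database
    }
    pending.clear();
}
```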
### Auto Garbage Collection delay
When a note is modified, Self-hosted LiveSync splits it into chunks by parsing the markdown structure, and saves only the file information and the modified chunks into the Local Database. Old chunks are not deleted at that moment.
So, Self-hosted LiveSync has to delete old chunks at some point.
However, each chunk is identified by the crc32 of its content and shared between all notes. In this way, Self-hosted LiveSync dedupes entries and keeps bandwidth and transfer amounts low.
In addition, while editing notes we sometimes go back to a previous wording, so a chunk cannot be said to become unnecessary immediately.
Therefore, the plugin deletes unused chunks in one go after you have left Obsidian idle for a while (the number of seconds set here).
This process is called "Garbage Collection".
As a result, Obsidian's behavior is temporarily slowed down.
The default is 300 seconds.
If you are an early adopter, this value may still be set to 30 seconds. Please change it to a larger value.
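A small sketch of the content-addressed chunk storage this section describes. The plugin keys chunks by the crc32 of their content; the sketch below uses SHA-1 only as a stand-in hash, and the in-memory map stands in for the Local Database.

```typescript
import { createHash } from "crypto";

const chunkStore = new Map<string, string>(); // chunk id -> chunk body (stand-in for the Local Database)

function storeChunks(pieces: string[]): string[] {
    return pieces.map((piece) => {
        const id = createHash("sha1").update(piece).digest("hex"); // stand-in for crc32
        if (!chunkStore.has(id)) chunkStore.set(id, piece); // identical content is stored only once
        return id;
    });
}

// A note document then only keeps its list of chunk ids. Chunks that no note references
// any more are what "Garbage Collection" removes after the configured idle delay.
```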
### Manual Garbage Collect
Run "Garbage Collection" manually.
### End to End Encryption
Encrypt your database. This affects only the database; the files on disk remain plain text.
The encryption algorithm is AES-GCM.
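For reference, here is a hedged sketch of AES-GCM encryption with the Web Crypto API, deriving the key from a passphrase. The PBKDF2 parameters and the returned layout are illustrative assumptions, not necessarily what the plugin itself does.

```typescript
// Hedged sketch: encrypt a chunk with AES-GCM, key derived from a passphrase.
// The PBKDF2 parameters here are illustrative; the plugin's actual derivation may differ.
async function encryptChunk(passphrase: string, plaintext: string): Promise<{ data: ArrayBuffer; iv: Uint8Array; salt: Uint8Array }> {
    const enc = new TextEncoder();
    const salt = crypto.getRandomValues(new Uint8Array(16));
    const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit IV, standard for GCM

    const baseKey = await crypto.subtle.importKey("raw", enc.encode(passphrase), "PBKDF2", false, ["deriveKey"]);
    const key = await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" },
        baseKey,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"]
    );

    const data = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, enc.encode(plaintext));
    return { data, iv, salt }; // the IV and salt must be stored alongside the ciphertext
}
```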
### Passphrase
The passphrase used as the encryption key. Please use a long one.
### Apply
To enable End-to-End encryption, there must be no items with the same content encrypted under different passphrases, to keep attackers from guessing passphrases. Because Self-hosted LiveSync uses the crc32 of chunks, this really is a must.
So, this plugin completely deletes everything from both the local and remote databases before enabling it, and then synchronizes again.
To enable it, run "Apply and send" from the most capable device, and "Apply and receive" from every other device.
- Apply and send
1. Initialize the Local Database and set (or clear) passphrase, put all files into the database again.
2. Initialize the Remote Database.
3. Lock the Remote Database.
4. Send it all.
This process is rather heavy. Using a PC or Mac is preferred.
- Apply and receive
1. Initialize the Local Database and set (or clear) the passphrase.
2. Unlock the Remote Database.
3. Retrieve all and decrypt to file.
When either of these operations is run, all synchronization settings are disabled.
**Note that the passphrase is not checked before the plugin actually decrypts data. So if you set the wrong passphrase and run "Apply and Receive", you will get a large number of decryption errors. This is by design.**
### minimum chunk size and LongLine threshold
The configuration of chunk splitting.
Self-hosted LiveSync splits notes into chunks for efficient synchronization. Each chunk should be longer than the "Minimum chunk size".
Specifically, the length of a chunk is determined by the following rules:
1. Find the nearest newline character, and if it is farther than LongLineThreshold, this piece becomes an independent chunk.
2. If not, find the nearest of these items:
1. Newline character
2. Empty line (Windows style)
3. Empty line (non-Windows style)
3. Compare the farthest of these three positions with the next "\[newline\]#" position, and pick the shorter piece as the chunk.
This rule was made empirically from my own dataset. If it behaves badly on your data, please send me the information.
You can dump the saved note structure with `Dump informations of this doc`. Replace every character except newlines and "#" with x when sending the information to me.
The default values are 20 letters and 250 letters.
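Below is a hedged sketch of one possible reading of this rule in TypeScript, using the documented defaults. It is illustrative only, not the plugin's exact implementation.

```typescript
// One possible reading of the splitting rule described above (assumption, not the real code).
const MINIMUM_CHUNK_SIZE = 20;
const LONG_LINE_THRESHOLD = 250;

function splitIntoChunks(text: string): string[] {
    const chunks: string[] = [];
    let rest = text;
    while (rest.length > 0) {
        let cut = rest.length;
        const newline = rest.indexOf("\n", MINIMUM_CHUNK_SIZE);
        if (newline < 0) {
            cut = rest.length; // no newline left: take the remainder as one chunk
        } else if (newline > LONG_LINE_THRESHOLD) {
            cut = newline + 1; // a very long line becomes an independent chunk
        } else {
            // Otherwise look at the newline, a Windows empty line, and a non-Windows empty line,
            // take the farthest of them, then cap it at the next "\n#" heading position.
            const candidates = [
                newline,
                rest.indexOf("\r\n\r\n", MINIMUM_CHUNK_SIZE),
                rest.indexOf("\n\n", MINIMUM_CHUNK_SIZE),
            ].filter((p) => p >= 0);
            let pos = Math.max(...candidates);
            const heading = rest.indexOf("\n#", MINIMUM_CHUNK_SIZE);
            if (heading >= 0 && heading < pos) pos = heading;
            cut = pos + 1;
        }
        chunks.push(rest.slice(0, cut));
        rest = rest.slice(cut);
    }
    return chunks;
}
```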
## General Settings
### Do not show low-priority log
If you enable this option, only entries that also show a popup are logged.
### Verbose log
## Sync setting
### LiveSync
Performs LiveSync.
It is one of the raisons d'être of this plugin.
Useful, but this method drains a lot of battery on mobile and uses a non-negligible amount of data transfer.
This method cannot be combined with the other synchronization methods.
### Periodic Sync
Synchronize periodically.
### Periodic Sync Interval
Unit is seconds.
### Sync on Save
Synchronize when the note has been modified or created.
### Sync on File Open
Synchronize when the note is opened.
### Sync on Start
Synchronize when Obsidian started.
### Use Trash for deleted files
When a file has been deleted on a remote device, the deletion is replicated to the local device and the file is deleted there as well.
If this option is enabled, deleted files are moved into the trash instead of being deleted outright.
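As a sketch of how such a setting could be honored with Obsidian's public API (an assumption for illustration, not the plugin's exact code):

```typescript
import { TFile, Vault } from "obsidian";

// Hedged sketch: honor the "Use Trash for deleted files" setting when a remote
// deletion is replicated to this device.
async function applyRemoteDeletion(vault: Vault, file: TFile, useTrash: boolean): Promise<void> {
    if (useTrash) {
        // Move to Obsidian's own .trash folder (pass true as the second argument for the system trash).
        await vault.trash(file, false);
    } else {
        await vault.delete(file);
    }
}
```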
### Do not delete empty folder
Self-hosted LiveSync deletes a folder when it becomes empty. If this option is enabled, the folder is left empty instead.
### Use newer file if conflicted (beta)
When a conflict occurs, always resolve it by using the newer file and overwriting the other.
### Advanced settings
Self-hosted LiveSync uses PouchDB and synchronizes with the remote via [this protocol](https://docs.couchdb.org/en/stable/replication/protocol.html).
So, it splits every entry into chunks so that they are acceptable to a database with limited payload and document sizes.
However, that was not enough.
According to [2.4.2.5.2. Upload Batch of Changed Documents](https://docs.couchdb.org/en/stable/replication/protocol.html#upload-batch-of-changed-documents) in [Replicate Changes](https://docs.couchdb.org/en/stable/replication/protocol.html#replicate-changes), a request might still become quite large.
Unfortunately, there is no way to adjust this automatically by size for every request.
Therefore, I made it configurable.
Note: If you set these values to lower numbers, the number of requests will increase.
Therefore, if you are far from the server, total throughput will be lower and traffic will increase.
### Batch size
Number of change feed items to process at a time. Defaults to 250.
### Batch limit
Number of batches to process at a time. Defaults to 40. This along with batch size controls how many docs are kept in memory at a time.
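These two settings correspond to PouchDB's replication options. A minimal sketch with placeholder database names:

```typescript
import PouchDB from "pouchdb";

const localDB = new PouchDB("obsidian-livesync-local");
const remoteDB = new PouchDB("https://your-couchdb.example.com/sync-test");

// Smaller values mean more, smaller requests; see the note on throughput above.
localDB.replicate.to(remoteDB, {
    batch_size: 250,   // "Batch size": change-feed items processed per batch
    batches_limit: 40, // "Batch limit": batches kept in memory at a time
});
```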
## Miscellaneous
### Show status inside editor
Show information inside the editor pane.
It would be useful for mobile.
### Check integrity on saving
Check that all chunks have been saved correctly when saving.
### Presets
You can set the synchronization methods at once using these patterns:
- LiveSync
- LiveSync : enabled
- Batch database update : disabled
- Periodic Sync : disabled
- Sync on Save : disabled
- Sync on File Open : disabled
- Sync on Start : disabled
- Periodic w/ batch
- LiveSync : disabled
- Batch database update : enabled
- Periodic Sync : enabled
- Sync on Save : disabled
- Sync on File Open : enabled
- Sync on Start : enabled
- Disable all sync
- LiveSync : disabled
- Batch database update : disabled
- Periodic Sync : disabled
- Sync on Save : disabled
- Sync on File Open : disabled
- Sync on Start : disabled
## Hatch
From here on, everything is under the hood. Please handle it with care.
When there are problems with synchronization, a warning message is shown under this section header.
- Pattern 1
![CorruptedData](../images/lock_pattern1.png)
This message is shown when the remote database is locked and your device is not marked as "resolved".
It usually happens when End-to-End encryption has been enabled or the history has been dropped.
If you enabled End-to-End encryption, you can unlock the remote database automatically with "Apply and receive", or with "Drop and receive" if you dropped the history. If you want to unlock it manually, click "mark this device as resolved".
- Pattern 2
![CorruptedData](../images/lock_pattern2.png)
The remote database indicates that it has been unlocked from Pattern 1.
Once you have marked all devices as resolved, you can unlock the database.
But there's no problem even if you leave it as it is.
### Verify and repair all files
Reads all files in the vault and updates them in the database if there is a difference or they could not be read from the database.
### Sanity check
Make sure that all the files on the local database have all chunks.
### Drop history
Drop all history in the local database and the remote database, and initialize them. When synchronization to a new device or a new vault takes much longer than it used to, or the database size has become much larger, try this.
Note: When CouchDB deletes entries, it keeps old entries as deleted data until compaction, in order to merge conflicts. After compaction has run, deleted data becomes a "tombstone". A tombstone uses less disk space, but still uses some.
This is by specification; to shrink the database size from the root, re-initialization is required, whether explicit or implicit.
As with setting a passphrase, database locking is also performed.
- Drop and send (Same as "Apply and send")
1. Initialize the Local Database and set (or clear) passphrase, put all files into the database again.
2. Initialize the Remote Database.
3. Lock the Remote Database.
4. Send it all.
- Drop and receive (Same as "Apply and receive")
1. Initialize the Local Database and set (or clear) the passphrase.
2. Unlock the Remote Database.
3. Retrieve all and decrypt to file.
### Lock remote database
Lock the remote database to shut other devices out of synchronization. It is the same database lock that happens when dropping databases or applying passphrases.
Use it as an emergency measure to protect the local or remote database when synchronization has broken.
### Suspend file watching
If this option is enabled, Self-hosted LiveSync dismisses every file change or delete event.
The commands from here on are the ones used internally when applying encryption passphrases or dropping history.
They are not used much normally, but sometimes they can be handy.
### Reset remote database
Discard the data stored in the remote database.
### Reset local database
Discard the data stored in the local database.
### Initialize local database again
Discard the data stored in the local database, then initialize it and rebuild the database from the files on storage.
### Corrupted data
![CorruptedData](../images/corrupted_data.png)
When Self-hosted LiveSync could not write a file to storage, the file is shown here. If you have an old copy in your vault, change it once and it will be cured, or you can use the "File History" plugin.
If you don't, sorry, the file cannot be rescued; error messages will be shown frequently, and you have to delete the file from here.

238
docs/settings_ja.md Normal file

@@ -0,0 +1,238 @@
# このプラグインの設定項目
## Remote Database Configurations
Configure the destination database. This section cannot be edited while any synchronization is enabled, so disable synchronization before changing it.
### URI
Enter the URI of CouchDB. For Cloudant, it is the "External Endpoint(preferred)".
**It must not end with a slash.**
The database name may be included here.
### Username
Enter the username. This user should preferably have administrator privileges.
### Password
Enter the password.
### Database Name
Enter the name of the database to synchronize with.
⚠️ If it does not exist, it is created automatically when you test or connect[^1].
[^1]: Automatic creation fails if the user lacks the required permission.
### Test Database connection
Checks whether the database can be reached with the settings above.
## Local Database Configurations
Settings for the database created inside the device.
### Batch database update
Delays database updates until one of the following occurs:
- Replication starts
- Another file is opened
- Window visibility changes
- A file-related event other than a modification
This option cannot be used together with LiveSync.
### Auto Garbage Collection delay
When a note is modified, Self-hosted LiveSync splits it into chunks based on its markdown structure, and saves only the file information and the chunks that changed. Old chunks are not deleted at this point.
Chunks that are no longer used therefore have to be removed at some stage.
However, because a chunk is derived from its content, identical content produces identical chunks, which are shared not only within the same note but across all notes. This reduces database usage and the amount of data transferred between device and server.
Also, while writing you sometimes revert to an earlier wording, so it cannot simply be said that a chunk becomes unnecessary right away. The plugin therefore deletes unused chunks in bulk once Obsidian has been left idle for the number of seconds set here.
This process is called Garbage Collection.
This work has to be done in one go while the reflection of all file changes is suspended, so Obsidian temporarily becomes quite slow.
It runs once the specified number of seconds has passed after the last file operation in Obsidian.
The default value is 300 seconds.
Note: in very early versions this was 30 seconds. If you have been using the plugin since then, please raise it to around 300 seconds; it makes a real difference.
### Manual Garbage Collect
Runs the Garbage Collection described above manually.
### End to End Encryption
Encrypts the database. This applies only to the data stored in the database; the files on disk remain plain text.
Encryption uses AES-GCM.
### Passphrase
The passphrase used for encryption. Please use a sufficiently long one.
### Apply
When applying End-to-End encryption, letting anyone obtain the same content encrypted with different passphrases should be avoided. Also, Self-hosted LiveSync uses the crc32 of content for deduplication, which would otherwise make such attacks effective.
Therefore, when End-to-End encryption is enabled, all local and remote databases are discarded once, and only content encrypted with the new passphrase is synchronized again.
To enable it, run "Apply and send" from the most capable device, then run "Apply and receive" on the other devices.
- Apply and send
1. Initializes the local database and sets (or clears) the passphrase. Then registers all files into the database again.
2. Initializes the remote database.
3. Locks the remote database and shuts out the other devices.
4. Resends everything.
Because this is heavy and time-consuming, running it from a desktop is preferable.
- Apply and receive
1. Initializes the local database and sets (or clears) the passphrase.
2. Releases the lock on the remote database.
3. Receives everything and decrypts it.
Running either operation disables all synchronization settings.
**Also, the passphrase is not checked until data is actually decrypted. If you set the wrong passphrase and synchronize with "Apply and receive", a large number of errors will occur. This is by design.**
### minimum chunk size and LongLine threshold
Settings for chunk splitting.
Self-hosted LiveSync guarantees that each chunk is at least "minimum chunk size" characters long, and splits notes into chunks so that they can be synchronized as efficiently as possible.
This was implemented to avoid a problem seen when splitting at a fixed number of characters: editing near the beginning shifted every later split position, and effectively the whole file had to be sent and received again.
Specifically, it searches from the beginning for the nearest of the following positions, and the longest resulting piece becomes a chunk:
1. Find the next newline; if it is beyond the LongLine Threshold, this piece is fixed as one chunk.
2. Otherwise, look for the following in order:
1. A newline
2. A Windows-style empty line
3. A non-Windows-style empty line
3. Compare the farthest of these three positions with the next place where a line starts with "#" after a newline, and take the shorter piece as the chunk.
This rule was made empirically, and my own data is biased. If it behaves unexpectedly, please run `Dump informations of this doc` from the command palette and send me the information.
The algorithm still works even if you replace every character except newlines and "#" with ●.
The defaults are 20 characters and 250 characters.
## General Settings
General settings.
### Do not show low-priority log
If enabled, low-priority logs are not recorded; only entries that come with a notification are shown.
### Verbose log
Outputs detailed entries to the log.
## Sync setting
Settings related to synchronization.
### LiveSync
Performs LiveSync.
With the other synchronization methods, synchronization proceeds in this order: check the version, confirm that the database is not locked, receive the remote changes, then send the device's changes.
### Periodic Sync
Synchronizes periodically.
### Periodic Sync Interval
The interval for periodic synchronization.
### Sync on Save
Synchronizes when a file is saved.
**Obsidian saves periodically while you are editing a note. Newly added attachments are handled in the same way.**
### Sync on File Open
Synchronizes when a file is opened.
### Sync on Start
Synchronizes when Obsidian starts.
Note:
Turning on LiveSync, or using Periodic Sync together with Sync on File Open, is recommended.
### Use Trash for deleted files
When a file is deleted remotely, the deletion is also reflected on the device.
If this option is enabled, files are moved to the trash instead of being actually deleted.
### Do not delete empty folder
Normally, Self-hosted LiveSync deletes a folder when all files in it have been deleted.
Note: what Self-hosted LiveSync synchronizes is files.
### Use newer file if conflicted (beta)
When a conflict occurs, it is resolved automatically by always using the newer file.
### Advanced settings
Self-hosted LiveSync uses PouchDB and synchronizes with the remote via [this protocol](https://docs.couchdb.org/en/stable/replication/protocol.html).
Therefore, all notes and other entries are split into chunks to fit the payload and document sizes the database allows.
However, there are cases where that is not enough; as described in [2.4.2.5.2. Upload Batch of Changed Documents](https://docs.couchdb.org/en/stable/replication/protocol.html#upload-batch-of-changed-documents) of [Replicate Changes](https://docs.couchdb.org/en/stable/replication/protocol.html#replicate-changes), this request can become huge.
Unfortunately, there is no way to adjust this size automatically per call.
Therefore, I added the ability to configure it.
Note: if you set small values, the number of requests increases.
If you are far from the server, total throughput becomes slower and the amount of transfer increases.
### Batch size
The number of change feed items processed at a time. The default is 250.
### Batch limit
The number of batches processed at a time. The default is 40.
## Miscellaneous
Miscellaneous settings.
### Show status inside editor
Shows synchronization information inside the editor.
Handy on mobile.
### Check integrity on saving
Checks on save that all data was stored completely.
## Hatch
From here on is the hatch you open when in trouble. Use it with care.
When there is a problem with the synchronization state, a warning may be shown directly below the Hatch header.
- Pattern 1
![CorruptedData](../images/lock_pattern1.png)
This warning is shown when the database is locked and the device has not been marked as "resolved".
It appears when another device has put the database into a state where other devices must not simply keep synchronizing, for example by enabling End-to-End encryption or running Drop History.
If encryption was enabled, setting the passphrase and running "Apply and receive" clears it automatically; if Drop History was run, "Drop and receive" does.
To release this lock manually, click "mark this device as resolved".
- Pattern 2
![CorruptedData](../images/lock_pattern2.png)
This indicates that the remote database has had Pattern 1 released in the past.
Once the lock has been released on every device you use, you can unlock the database.
However, there is no problem even if you leave it as it is.
### Verify and repair all files
Re-reads all files in the vault and, for any that differ or could not be read correctly from the database, writes them to the database.
### Sanity check
Checks that every file stored in the local database has all of its chunks.
### Drop history
Deletes the history recorded in the database and initializes it.
Use this when synchronizing to a new device or a new vault takes far too long, or when the database size has become bloated.
Note: when CouchDB deletes data, it keeps a trace of the deletion to resolve conflicts, so even with Garbage Collection the data inevitably keeps growing.
As with setting a passphrase, the risk of losing data is low as long as everything is fully synchronized.
Database locking and related processing are also performed in the same way.
- Drop and send
Discards the device's and the remote database, locks the remote, rebuilds the database from the device's files, and overwrites the remote.
- Drop and receive
Discards the device's database, releases the remote lock for the operating device, then receives the data and rebuilds.
### Lock remote database
Locks the remote database so that when another device tries to synchronize, the synchronization is cancelled with an error. This is the same lock that is set automatically when the database is rebuilt.
Use it as an emergency measure to protect the data on the device you are using plus the server's data, in case synchronization has broken.
### Suspend file watching
Stops watching for file updates.
The operations from here on are options for manually performing the steps done by the encryption "Apply" or by Drop History.
They are rarely used, but can help in an emergency.
### Reset remote database
Discards the remote database.
### Reset local database
Discards the local database.
### Initialize local database again
Discards the device's database and rebuilds it from the actual files.
### Corrupted data
![CorruptedData](../images/corrupted_data.png)
Files that could not be written from the database to storage are shown here.
If the data still exists inside Obsidian, editing and overwriting the file once may make it save successfully. Rescuing it with the File History plugin is also fine.
Otherwise, unfortunately there is no way to recover it, and errors will keep being shown until the corrupted file is deleted from the database; this is the button that deletes such files from the database.

85
docs/setup_cloudant.md Normal file

@@ -0,0 +1,85 @@
# Cloudant Setup
## Creating an Instance
These instructions create an IBM Cloudant instance for trial use.
1. Hit the "Create Resource" button.
![step 1](../instruction_images/cloudant_1.png)
1. In IBM Cloud Catalog, search "Cloudant".
![step 2](../instruction_images/cloudant_2.png)
1. You can choose "Lite plan" for free.
![step 3](../instruction_images/cloudant_3.png)
1. Select Multitenant (it's the default) and the region as you like.
![step 4](../instruction_images/cloudant_4.png)
1. Be sure to select "IAM and Legacy credentials" for "Authentication Method".
![step 5](../instruction_images/cloudant_5.png)
1. Select Lite and be sure to check the capacity.
![step 6](../instruction_images/cloudant_6.png)
1. And hit "Create" on the right panel.
![step 7](../instruction_images/cloudant_7.png)
1. When all of the above steps have been done, open "Resource list" in the left pane. You can see the Cloudant instance under "Services and software". Click it.
![step 8](../instruction_images/cloudant_8.png)
1. In the resource details, there's the information needed to connect from Self-hosted LiveSync.
Copy the "External Endpoint (preferred)" address<sup>(\*1)</sup>. We use this address later, together with the database name.
![step 9](../instruction_images/cloudant_9.png)
## Database setup
1. Hit the "Launch Dashboard" button, Cloudant dashboard will be shown.
Yes, it's almost CouchDB's fauxton.
![step 1](../instruction_images/couchdb_1.png)
1. First, you have to enable the CORS option.
Hit the Account menu and open the "CORS" tab.
Initially, "Origin Domains" is set to "Restrict to specific domains"., so set to "All domains(\*)"
_NOTE: of course We want to set "app://obsidian.md" but it's not acceptable on Cloudant._
![step 2](../instruction_images/couchdb_2.png)
1. Next, Open the "Databases" tab and hit the "Create Database" button.
Enter the name as you like <sup>(\*2)</sup> and Hit the "Create" button below.
![step 3](../instruction_images/couchdb_3.png)
1. If the database was shown with joyful messages, the setup is almost done.
And, once you have confirmed that you can create a database, usually there is no need to open this screen.
You can create a database from Self-hosted LiveSync.
![step 4](../instruction_images/couchdb_4.png)
### Credentials Setup
1. Back in IBM Cloud, open "Service credentials". You'll see an empty list; hit the "New credential" button.
![step 1](../instruction_images/credentials_1.png)
1. The dialog to create a credential will be shown.
Type any name or leave the default, then hit the "Add" button.
![step 2](../instruction_images/credentials_2.png)
_NOTE: This "name" is not related to your username that uses in Self-hosted LiveSync._
1. Back to "Service credentials", the new credential should be created.
open details.
![step 3](../instruction_images/credentials_3.png)
The username and password pair is inside this JSON.
"username" and "password" are so.
follow the figure, it's
"apikey-v2-2unu15184f7o8emr90xlqgkm2ncwhbltml6tgnjl9sd5"<sup>(\*3)</sup> and "c2c11651d75497fa3d3c486e4c8bdf27"<sup>(\*4)</sup>
## Self-hosted LiveSync setting
![Setting](../images/remote_db_setting.png)
The settings should be as below:
| Items | Value | example |
| ------------- | ----- | ----------------------------------------------------------------- |
| URI | (\*1) | https://xxxxxxxxxxxxxxxxx-bluemix.cloudantnosqldb.appdomain.cloud |
| Username | (\*3) | apikey-v2-2unu15184f7o8emr90xlqgkm2ncwhbltml6tgnjl9sd5 |
| Password | (\*4) | c2c11651d75497fa3d3c486e4c8bdf27 |
| Database name | (\*2) | sync-test |

79
docs/setup_cloudant_ja.md Normal file

@@ -0,0 +1,79 @@
# Setup IBM Cloudant
## Creating an instance
The following steps create an IBM Cloudant instance for trial use.
1. Click the "Create resource" button.
![step 1](../instruction_images/cloudant_1.png)
1. The catalog opens; search for "Cloudant". Clicking the result takes you to the creation screen.
![step 2](../instruction_images/cloudant_2.png)
1. Select the Lite plan.
![step 3](../instruction_images/cloudant_3.png)
1. Select the region and environment. With Lite, only Multitenant can be chosen, so select Multitenant (it is selected by default).
Create it in whichever region you prefer.
![step 4](../instruction_images/cloudant_4.png)
1. Select "IAM and legacy credentials" as the "Authentication Method".
![step 5](../instruction_images/cloudant_5.png)
1. Confirm that the Lite plan is selected and check the capacity.
![step 6](../instruction_images/cloudant_6.png)
1. Once confirmed, click the "Create" button on the right.
![step 7](../instruction_images/cloudant_7.png)
1. When the above steps have completed successfully, click "Resource list" in the left menu. The resource list is shown, and the created Cloudant instance appears under "Services and software".
Click the instance name.
![step 8](../instruction_images/cloudant_8.png)
1. Here, note down the address labeled "External Endpoint (preferred)"; it is used later.<sup>(\*1)</sup>
![step 9](../instruction_images/cloudant_9.png)
## Database setup
1. Click the "Launch Dashboard" button; the database dashboard is shown. CouchDB has an interface called Fauxton, and this is essentially it.
![step 1](../instruction_images/couchdb_1.png)
1. Configure the CORS permission. Click "Account" in the menu and open the "CORS" tab.
Initially "Restrict to specific domains" is selected, so switch it to "All domains (\*)". The change takes effect immediately, but it can be reverted just as quickly, so don't worry.
![step 2](../instruction_images/couchdb_2.png)
1. Check that a database can be created. Click "Databases" in the menu, then click the "Create Database" button.
A panel appears on the right; enter any name you like and click the "Create" button.
![step 3](../instruction_images/couchdb_3.png)
1. If the database is shown after a cheerful-looking message, setup is almost done. You will hardly use this screen from now on; databases can be created from Self-hosted LiveSync.
![step 4](../instruction_images/couchdb_4.png)
### Credentials setup
1. Back in IBM Cloud, click "Service credentials". The list is probably empty, so click "New credential".
![step 1](../instruction_images/credentials_1.png)
1. A dialog for creating a credential is shown; enter an easy-to-recognize name. Then confirm that "Administrator" is selected as the role and click the "Add" button.
![step 2](../instruction_images/credentials_2.png)
Note: this "name" is separate from the Username used in Self-hosted LiveSync.
1. Back in "Service credentials", the new credential has been created. ~~Confusingly, its name is shown as the "key name".~~ Press the button on the left to open the details.
![step 3](../instruction_images/credentials_3.png)
The Username and Password used from Self-hosted LiveSync are the ones written in the displayed JSON.
In this figure, the username is "apikey-v2-2unu15184f7o8emr90xlqgkm2ncwhbltml6tgnjl9sd5"<sup>(\*3)</sup> and the password is "c2c11651d75497fa3d3c486e4c8bdf27"<sup>(\*4)</sup>.
## Settings for Self-hosted LiveSync
![Setting](../images/remote_db_setting.png)
Using the values from the example above, the settings are:
| Items | Value | example |
| ------------------- | -------------------------------- | --------------------------------------------------------------------------- |
| URI | (\*1) | https://xxxxxxxxxxxxxxxxx-bluemix.cloudantnosqldb.appdomain.cloud |
| Username | (\*3) | apikey-v2-2unu15184f7o8emr90xlqgkm2ncwhbltml6tgnjl9sd5 |
| Password | (\*4) | c2c11651d75497fa3d3c486e4c8bdf27 |
| Database name | (\*2) | sync-test |

95
docs/setup_own_server.md Normal file

@@ -0,0 +1,95 @@
# Setup CouchDB to your server
## Install CouchDB and access from PC or Mac
The easiest way to set up CouchDB is to use the [docker image](https://hub.docker.com/_/couchdb).
But some additional configuration is required in `local.ini` to use it from Self-hosted LiveSync, like below:
```
[couchdb]
single_node=true
[chttpd]
require_valid_user = true
[chttpd_auth]
require_valid_user = true
authentication_redirect = /_utils/session.html
[httpd]
WWW-Authenticate = Basic realm="couchdb"
enable_cors = true
[cors]
origins = app://obsidian.md,capacitor://localhost,http://localhost
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
max_age = 3600
```
Create `local.ini` and run docker like this to launch CouchDB:
```
$ docker run --rm -it -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -v .local.ini:/opt/couchdb/etc/local.ini -p 5984:5984 couchdb
```
Note: At this point, the owner of local.ini becomes 5984:5984. This is a limitation of the docker image; please change the owner back before editing local.ini again.
Once you have confirmed that Self-hosted LiveSync can sync with the server, launch the docker image in the background as you like.
Example:
```
$ docker run -d --restart always -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -v .local.ini:/opt/couchdb/etc/local.ini -p 5984:5984 couchdb
```
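To verify that the container is reachable before wiring it into the plugin, a quick check can be run from any machine with Node 18+ (this assumes the localhost:5984 port mapping and the admin/password credentials from the commands above):

```typescript
// Quick connectivity check against the CouchDB started above.
async function checkCouchDB(): Promise<void> {
    const auth = "Basic " + Buffer.from("admin:password").toString("base64");
    const res = await fetch("http://localhost:5984/", { headers: { Authorization: auth } });
    const body = await res.json();
    // CouchDB answers with a small JSON document including its version.
    console.log(res.status, body.couchdb, body.version);
}

checkCouchDB().catch((err) => console.error("CouchDB is not reachable:", err));
```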
## Access from mobile device
If you want to use Self-hosted LiveSync from mobile devices, you need a valid SSL certificate.
### Testing from mobile
In the testing phase, services like [localhost.run](http://localhost.run/) are very useful.
Example using localhost.run:
```
$ ssh -R 80:localhost:5984 nokey@localhost.run
Warning: Permanently added the RSA host key for IP address '35.171.254.69' to the list of known hosts.
===============================================================================
Welcome to localhost.run!
Follow your favourite reverse tunnel at [https://twitter.com/localhost_run].
**You need a SSH key to access this service.**
If you get a permission denied follow Gitlab's most excellent howto:
https://docs.gitlab.com/ee/ssh/
*Only rsa and ed25519 keys are supported*
To set up and manage custom domains go to https://admin.localhost.run/
More details on custom domains (and how to enable subdomains of your custom
domain) at https://localhost.run/docs/custom-domains
To explore using localhost.run visit the documentation site:
https://localhost.run/docs/
===============================================================================
** your connection id is xxxxxxxxxxxxxxxxxxxxxxxxxxxx, please mention it if you send me a message about an issue. **
xxxxxxxx.localhost.run tunneled with tls termination, https://xxxxxxxx.localhost.run
Connection to localhost.run closed by remote host.
Connection to localhost.run closed.
```
https://xxxxxxxx.localhost.run is the temporary server address.
### Setting up your domain
Set the A record of your domain to point to your server, and host a reverse proxy as you like.
Note: Mounting CouchDB on the top directory is not recommended.
Using Caddy is a handy way to serve the server with SSL automatically.
I have published [docker-compose.yml and ini files](https://github.com/vrtmrz/self-hosted-livesync-server) that launch Caddy and CouchDB at once. Please try it out.
And be sure to check the server logs and watch out for malicious access.


@@ -0,0 +1,91 @@
# How to set up CouchDB
## Installing CouchDB and using it from a PC or Mac
The easiest way to set up CouchDB is to use the [docker image](https://hub.docker.com/_/couchdb).
However, a little configuration is needed to use the installed CouchDB from Self-hosted LiveSync.
Specifically, the following settings are required as `local.ini`:
```
[couchdb]
single_node=true
[chttpd]
require_valid_user = true
[chttpd_auth]
require_valid_user = true
authentication_redirect = /_utils/session.html
[httpd]
WWW-Authenticate = Basic realm="couchdb"
enable_cors = true
[cors]
origins = app://obsidian.md,capacitor://localhost,http://localhost
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
max_age = 3600
```
Create this file and run:
```
$ docker run --rm -it -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -v .local.ini:/opt/couchdb/etc/local.ini -p 5984:5984 couchdb
```
and CouchDB starts up easily.
Note: at this point, the owner of local.ini becomes 5984:5984. This is a limitation of the docker image; change the owner back before editing it again.
Once you can access it from Self-hosted LiveSync normally, adjust the command to run in the background as you like and start it.
Example:
```
$ docker run -d --restart always -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -v .local.ini:/opt/couchdb/etc/local.ini -p 5984:5984 couchdb
```
## Access from mobile devices
A server built as above is fine when accessing from a Mac or PC, but accessing from a mobile device requires a valid SSL certificate.
### Testing access from mobile
For testing, services such as [localhost.run](http://localhost.run/) are convenient.
```
$ ssh -R 80:localhost:5984 nokey@localhost.run
Warning: Permanently added the RSA host key for IP address '35.171.254.69' to the list of known hosts.
===============================================================================
Welcome to localhost.run!
Follow your favourite reverse tunnel at [https://twitter.com/localhost_run].
**You need a SSH key to access this service.**
If you get a permission denied follow Gitlab's most excellent howto:
https://docs.gitlab.com/ee/ssh/
*Only rsa and ed25519 keys are supported*
To set up and manage custom domains go to https://admin.localhost.run/
More details on custom domains (and how to enable subdomains of your custom
domain) at https://localhost.run/docs/custom-domains
To explore using localhost.run visit the documentation site:
https://localhost.run/docs/
===============================================================================
** your connection id is xxxxxxxxxxxxxxxxxxxxxxxxxxxx, please mention it if you send me a message about an issue. **
xxxxxxxx.localhost.run tunneled with tls termination, https://xxxxxxxx.localhost.run
Connection to localhost.run closed by remote host.
Connection to localhost.run closed.
```
When output like this is shown, `https://xxxxxxxx.localhost.run` can be used as a temporary server address.
### Accessing via your own domain
Set up a DNS A record and host a reverse proxy in whatever way you prefer.
Note: exposing CouchDB at the top directory is not recommended.
Obtaining Let's Encrypt certificates automatically with something like Caddy makes operation easier.
I have published [docker-compose settings and ini files](https://github.com/vrtmrz/self-hosted-livesync-server) that bring up Caddy and CouchDB together.
Please make use of them.
Also, be sure to check the server logs and watch out for unauthorized access.

16
docs/tech_info.md Normal file

@@ -0,0 +1,16 @@
# Designed architecture
## How does this plugin synchronize?
![Synchronization](../images/1.png)
1. When notes are created or modified, Obsidian raises some events. Self-hosted LiveSync catches these events and reflects the changes into the local PouchDB.
2. PouchDB automatically or manually replicates the changes to the remote CouchDB.
3. Another device watches the remote CouchDB's changes and retrieves the new ones.
4. Self-hosted LiveSync reflects the replicated changeset into Obsidian's vault.
Note: The figure is drawn as single-directional, between two devices. But in reality, everything occurs bidirectionally between many devices at once.
## Techniques to keep bandwidth low.
![dedupe](../images/2.png)

16
docs/tech_info_ja.md Normal file

@@ -0,0 +1,16 @@
# Designed architecture
## Synchronization
![Synchronization](../images/1.png)
1. When a note is updated, Obsidian fires events. Obsidian-LiveSync handles them and reflects the changes into the local PouchDB.
2. PouchDB replicates the differences to the remote CouchDB.
3. Other devices watch the remote CouchDB, so when changes are detected, the differences are downloaded as they are.
4. Self-hosted LiveSync reflects the changes transferred to PouchDB into the Obsidian vault.
The figure is drawn as one-directional between two devices, but in reality it runs bidirectionally between multiple devices.
## Keeping bandwidth low
![dedupe](../images/2.png)

BIN images/corrupted_data.png Normal file (binary file not shown; after: 26 KiB)
BIN images/lock_pattern1.png Normal file (binary file not shown; after: 37 KiB)
BIN images/lock_pattern2.png Normal file (binary file not shown; after: 24 KiB)
BIN (binary file not shown; after: 36 KiB)

2534
main.ts

File diff suppressed because it is too large.


@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.1.13",
"version": "0.5.0",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

4156
package-lock.json generated

File diff suppressed because it is too large.


@@ -1,11 +1,12 @@
{
"name": "obsidian-livesync",
"version": "0.1.13",
"version": "0.5.0",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"scripts": {
"dev": "rollup --config rollup.config.js -w",
"build": "rollup --config rollup.config.js --environment BUILD:production"
"build": "rollup --config rollup.config.js --environment BUILD:production",
"lint": "eslint src"
},
"keywords": [],
"author": "vorotamoroz",
@@ -16,7 +17,12 @@
"@rollup/plugin-typescript": "^8.2.1",
"@types/diff-match-patch": "^1.0.32",
"@types/pouchdb-browser": "^6.1.3",
"obsidian": "^0.12.0",
"@typescript-eslint/eslint-plugin": "^5.7.0",
"@typescript-eslint/parser": "^5.0.0",
"eslint": "^7.32.0",
"eslint-config-airbnb-base": "^14.2.1",
"eslint-plugin-import": "^2.25.2",
"obsidian": "^0.13.11",
"rollup": "^2.32.1",
"tslib": "^2.2.0",
"typescript": "^4.2.4"


@@ -11,7 +11,7 @@ if you want to view the source visit the plugins github repository
`;
export default {
input: "main.ts",
input: "./src/main.ts",
output: {
dir: ".",
sourcemap: "inline",


@@ -0,0 +1,81 @@
import { App, Modal } from "obsidian";
import { DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT } from "diff-match-patch";
import { diff_result } from "./types";
import { escapeStringToHTML } from "./utils";
export class ConflictResolveModal extends Modal {
// result: Array<[number, string]>;
result: diff_result;
callback: (remove_rev: string) => Promise<void>;
constructor(app: App, diff: diff_result, callback: (remove_rev: string) => Promise<void>) {
super(app);
this.result = diff;
this.callback = callback;
}
onOpen() {
const { contentEl } = this;
contentEl.empty();
contentEl.createEl("h2", { text: "This document has conflicted changes." });
const div = contentEl.createDiv("");
div.addClass("op-scrollable");
let diff = "";
for (const v of this.result.diff) {
const x1 = v[0];
const x2 = v[1];
if (x1 == DIFF_DELETE) {
diff += "<span class='deleted'>" + escapeStringToHTML(x2) + "</span>";
} else if (x1 == DIFF_EQUAL) {
diff += "<span class='normal'>" + escapeStringToHTML(x2) + "</span>";
} else if (x1 == DIFF_INSERT) {
diff += "<span class='added'>" + escapeStringToHTML(x2) + "</span>";
}
}
diff = diff.replace(/\n/g, "<br>");
div.innerHTML = diff;
const div2 = contentEl.createDiv("");
const date1 = new Date(this.result.left.mtime).toLocaleString();
const date2 = new Date(this.result.right.mtime).toLocaleString();
div2.innerHTML = `
<span class='deleted'>A:${date1}</span><br /><span class='added'>B:${date2}</span><br>
`;
contentEl.createEl("button", { text: "Keep A" }, (e) => {
e.addEventListener("click", async () => {
await this.callback(this.result.right.rev);
this.callback = null;
this.close();
});
});
contentEl.createEl("button", { text: "Keep B" }, (e) => {
e.addEventListener("click", async () => {
await this.callback(this.result.left.rev);
this.callback = null;
this.close();
});
});
contentEl.createEl("button", { text: "Concat both" }, (e) => {
e.addEventListener("click", async () => {
await this.callback("");
this.callback = null;
this.close();
});
});
contentEl.createEl("button", { text: "Not now" }, (e) => {
e.addEventListener("click", () => {
this.close();
});
});
}
onClose() {
const { contentEl } = this;
contentEl.empty();
if (this.callback != null) {
this.callback(null);
}
}
}

1284
src/LocalPouchDB.ts Normal file

File diff suppressed because it is too large.

37
src/LogDisplayModal.ts Normal file

@@ -0,0 +1,37 @@
import { App, Modal } from "obsidian";
import { escapeStringToHTML } from "./utils";
import ObsidianLiveSyncPlugin from "./main";
export class LogDisplayModal extends Modal {
plugin: ObsidianLiveSyncPlugin;
logEl: HTMLDivElement;
constructor(app: App, plugin: ObsidianLiveSyncPlugin) {
super(app);
this.plugin = plugin;
}
updateLog() {
let msg = "";
for (const v of this.plugin.logMessage) {
msg += escapeStringToHTML(v) + "<br>";
}
this.logEl.innerHTML = msg;
}
onOpen() {
const { contentEl } = this;
contentEl.empty();
contentEl.createEl("h2", { text: "Sync Status" });
const div = contentEl.createDiv("");
div.addClass("op-scrollable");
div.addClass("op-pre");
this.logEl = div;
this.updateLog = this.updateLog.bind(this);
this.plugin.addLogHook = this.updateLog;
this.updateLog();
}
onClose() {
const { contentEl } = this;
contentEl.empty();
this.plugin.addLogHook = null;
}
}

File diff suppressed because it is too large.

168
src/e2ee.ts Normal file

@@ -0,0 +1,168 @@
import { Logger } from "./logger";
import { LOG_LEVEL } from "./types";
export type encodedData = [encryptedData: string, iv: string, salt: string];
export type KeyBuffer = {
index: string;
key: CryptoKey;
salt: Uint8Array;
};
const KeyBuffs: KeyBuffer[] = [];
const decKeyBuffs: KeyBuffer[] = [];
const KEY_RECYCLE_COUNT = 100;
let recycleCount = KEY_RECYCLE_COUNT;
let semiStaticFieldBuffer: Uint8Array = null;
const nonceBuffer: Uint32Array = new Uint32Array(1);
export async function getKeyForEncrypt(passphrase: string): Promise<[CryptoKey, Uint8Array]> {
// For performance, the plugin reuses the key KEY_RECYCLE_COUNT times.
const f = KeyBuffs.find((e) => e.index == passphrase);
if (f) {
recycleCount--;
if (recycleCount > 0) {
return [f.key, f.salt];
}
KeyBuffs.remove(f);
recycleCount = KEY_RECYCLE_COUNT;
}
const xpassphrase = new TextEncoder().encode(passphrase);
const digest = await crypto.subtle.digest({ name: "SHA-256" }, xpassphrase);
const keyMaterial = await crypto.subtle.importKey("raw", digest, { name: "PBKDF2" }, false, ["deriveKey"]);
const salt = crypto.getRandomValues(new Uint8Array(16));
const key = await crypto.subtle.deriveKey(
{
name: "PBKDF2",
salt,
iterations: 100000,
hash: "SHA-256",
},
keyMaterial,
{ name: "AES-GCM", length: 256 },
false,
["encrypt"]
);
KeyBuffs.push({
index: passphrase,
key,
salt,
});
while (KeyBuffs.length > 50) {
KeyBuffs.shift();
}
return [key, salt];
}
export async function getKeyForDecryption(passphrase: string, salt: Uint8Array): Promise<[CryptoKey, Uint8Array]> {
const bufKey = passphrase + uint8ArrayToHexString(salt);
const f = decKeyBuffs.find((e) => e.index == bufKey);
if (f) {
return [f.key, f.salt];
}
const xpassphrase = new TextEncoder().encode(passphrase);
const digest = await crypto.subtle.digest({ name: "SHA-256" }, xpassphrase);
const keyMaterial = await crypto.subtle.importKey("raw", digest, { name: "PBKDF2" }, false, ["deriveKey"]);
const key = await crypto.subtle.deriveKey(
{
name: "PBKDF2",
salt,
iterations: 100000,
hash: "SHA-256",
},
keyMaterial,
{ name: "AES-GCM", length: 256 },
false,
["decrypt"]
);
decKeyBuffs.push({
index: bufKey,
key,
salt,
});
while (decKeyBuffs.length > 50) {
decKeyBuffs.shift();
}
return [key, salt];
}
function getSemiStaticField(reset?: boolean) {
// Return the fixed field of the IV.
if (semiStaticFieldBuffer != null && !reset) {
return semiStaticFieldBuffer;
}
semiStaticFieldBuffer = crypto.getRandomValues(new Uint8Array(12));
return semiStaticFieldBuffer;
}
function getNonce() {
// This is a nonce, so the same value must never be sent twice.
nonceBuffer[0]++;
if (nonceBuffer[0] > 10000) {
// reset semi-static field.
getSemiStaticField(true);
}
return nonceBuffer;
}
function uint8ArrayToHexString(src: Uint8Array): string {
return Array.from(src)
.map((e: number): string => `00${e.toString(16)}`.slice(-2))
.join("");
}
function hexStringToUint8Array(src: string): Uint8Array {
const srcArr = [...src];
const arr = srcArr.reduce((acc, _, i) => (i % 2 ? acc : [...acc, srcArr.slice(i, i + 2).join("")]), []).map((e) => parseInt(e, 16));
return Uint8Array.from(arr);
}
export async function encrypt(input: string, passphrase: string) {
const [key, salt] = await getKeyForEncrypt(passphrase);
// Create the initialization vector from a semi-fixed part and an incremental part.
// Note: this may not be robust against related-key attacks.
const fixedPart = getSemiStaticField();
const invocationPart = getNonce();
const iv = Uint8Array.from([...fixedPart, ...new Uint8Array(invocationPart.buffer)]);
const plainStringified: string = JSON.stringify(input);
const plainStringBuffer: Uint8Array = new TextEncoder().encode(plainStringified);
const encryptedDataArrayBuffer = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plainStringBuffer);
const encryptedData = window.btoa(Array.from(new Uint8Array(encryptedDataArrayBuffer), (char) => String.fromCharCode(char)).join(""));
// Return the data together with the IV and salt.
const response: encodedData = [encryptedData, uint8ArrayToHexString(iv), uint8ArrayToHexString(salt)];
const ret = JSON.stringify(response);
return ret;
}
export async function decrypt(encryptedResult: string, passphrase: string): Promise<string> {
try {
const [encryptedData, ivString, salt]: encodedData = JSON.parse(encryptedResult);
const [key] = await getKeyForDecryption(passphrase, hexStringToUint8Array(salt));
const iv = hexStringToUint8Array(ivString);
// Decode base64; this should be fast, and the data should fit within MAX_DOC_SIZE_BIN, so it won't OOM.
const encryptedDataBin = window.atob(encryptedData);
const encryptedDataArrayBuffer = Uint8Array.from(encryptedDataBin.split(""), (char) => char.charCodeAt(0));
const plainStringBuffer: ArrayBuffer = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, encryptedDataArrayBuffer);
const plainStringified = new TextDecoder().decode(plainStringBuffer);
const plain = JSON.parse(plainStringified);
return plain;
} catch (ex) {
Logger("Couldn't decode! The passphrase may be wrong.", LOG_LEVEL.VERBOSE);
Logger(ex, LOG_LEVEL.VERBOSE);
throw ex;
}
}
export async function testCrypt() {
const src = "supercalifragilisticexpialidocious";
const encoded = await encrypt(src, "passwordTest");
const decrypted = await decrypt(encoded, "passwordTest");
if (src != decrypted) {
Logger("WARNING! Your device does not seem to support encryption.", LOG_LEVEL.VERBOSE);
return false;
} else {
Logger("CRYPT LOGIC OK", LOG_LEVEL.VERBOSE);
return true;
}
}
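// A minimal usage sketch of the encrypt/decrypt pair above, assuming the caller lives in
// this module (or imports encrypt/decrypt from "./e2ee"); the passphrase and payload are
// illustrative values only, not the plugin's real settings.
export async function e2eeRoundTripExample(): Promise<boolean> {
    const passphrase = "my-sync-passphrase"; // hypothetical passphrase
    const payload = JSON.stringify({ note: "hello" }); // any string payload
    const cipherText = await encrypt(payload, passphrase); // JSON string of [encryptedData, iv, salt]
    const plain = await decrypt(cipherText, passphrase); // throws if the passphrase is wrong
    return plain === payload;
}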

13
src/logger.ts Normal file

@@ -0,0 +1,13 @@
import { LOG_LEVEL } from "./types";
// eslint-disable-next-line require-await
export let Logger: (message: any, level?: LOG_LEVEL) => Promise<void> = async (message, _) => {
const timestamp = new Date().toLocaleString();
const messagecontent = typeof message == "string" ? message : message instanceof Error ? `${message.name}:${message.message}` : JSON.stringify(message, null, 2);
const newmessage = timestamp + "->" + messagecontent;
console.log(newmessage);
};
export function setLogger(loggerFun: (message: any, level?: LOG_LEVEL) => Promise<void>) {
Logger = loggerFun;
}

1417
src/main.ts Normal file

File diff suppressed because it is too large.

230
src/types.ts Normal file

@@ -0,0 +1,230 @@
// Docs are encoded as base64, so 1 char -> 1 byte,
// and Cloudant's limit is 1 MB, so we use 900 KB.
import { PluginManifest } from "obsidian";
export const MAX_DOC_SIZE = 1000; // for .md files; if delimiters exist, split there first.
export const MAX_DOC_SIZE_BIN = 102400; // 100kb
export const VER = 10;
export const RECENT_MOFIDIED_DOCS_QTY = 30;
export const LEAF_WAIT_TIMEOUT = 90000; // timeout for waiting for a missing leaf during synchronization.
export const LOG_LEVEL = {
VERBOSE: 1,
INFO: 10,
NOTICE: 100,
URGENT: 1000,
} as const;
export type LOG_LEVEL = typeof LOG_LEVEL[keyof typeof LOG_LEVEL];
export const VERSIONINFO_DOCID = "obsydian_livesync_version";
export const MILSTONE_DOCID = "_local/obsydian_livesync_milestone";
export const NODEINFO_DOCID = "_local/obsydian_livesync_nodeinfo";
export interface ObsidianLiveSyncSettings {
couchDB_URI: string;
couchDB_USER: string;
couchDB_PASSWORD: string;
couchDB_DBNAME: string;
liveSync: boolean;
syncOnSave: boolean;
syncOnStart: boolean;
syncOnFileOpen: boolean;
savingDelay: number;
lessInformationInLog: boolean;
gcDelay: number;
versionUpFlash: string;
minimumChunkSize: number;
longLineThreshold: number;
showVerboseLog: boolean;
suspendFileWatching: boolean;
trashInsteadDelete: boolean;
periodicReplication: boolean;
periodicReplicationInterval: number;
encrypt: boolean;
passphrase: string;
workingEncrypt: boolean;
workingPassphrase: string;
doNotDeleteFolder: boolean;
resolveConflictsByNewerFile: boolean;
batchSave: boolean;
deviceAndVaultName: string;
usePluginSettings: boolean;
showOwnPlugins: boolean;
showStatusOnEditor: boolean;
usePluginSync: boolean;
autoSweepPlugins: boolean;
autoSweepPluginsPeriodic: boolean;
notifyPluginOrSettingUpdated: boolean;
checkIntegrityOnSave: boolean;
batch_size: number;
batches_limit: number;
}
export const DEFAULT_SETTINGS: ObsidianLiveSyncSettings = {
couchDB_URI: "",
couchDB_USER: "",
couchDB_PASSWORD: "",
couchDB_DBNAME: "",
liveSync: false,
syncOnSave: false,
syncOnStart: false,
savingDelay: 200,
lessInformationInLog: false,
gcDelay: 300,
versionUpFlash: "",
minimumChunkSize: 20,
longLineThreshold: 250,
showVerboseLog: false,
suspendFileWatching: false,
trashInsteadDelete: true,
periodicReplication: false,
periodicReplicationInterval: 60,
syncOnFileOpen: false,
encrypt: false,
passphrase: "",
workingEncrypt: false,
workingPassphrase: "",
doNotDeleteFolder: false,
resolveConflictsByNewerFile: false,
batchSave: false,
deviceAndVaultName: "",
usePluginSettings: false,
showOwnPlugins: false,
showStatusOnEditor: false,
usePluginSync: false,
autoSweepPlugins: false,
autoSweepPluginsPeriodic: false,
notifyPluginOrSettingUpdated: false,
checkIntegrityOnSave: false,
batch_size: 250,
batches_limit: 40,
};
export const PERIODIC_PLUGIN_SWEEP = 60;
export interface Entry {
_id: string;
data: string;
_rev?: string;
ctime: number;
mtime: number;
size: number;
_deleted?: boolean;
_conflicts?: string[];
type?: "notes";
}
export interface NewEntry {
_id: string;
children: string[];
_rev?: string;
ctime: number;
mtime: number;
size: number;
_deleted?: boolean;
_conflicts?: string[];
NewNote: true;
type: "newnote";
}
export interface PlainEntry {
_id: string;
children: string[];
_rev?: string;
ctime: number;
mtime: number;
size: number;
_deleted?: boolean;
NewNote: true;
_conflicts?: string[];
type: "plain";
}
export type LoadedEntry = Entry & {
children: string[];
datatype: "plain" | "newnote";
};
export interface PluginDataEntry {
_id: string;
deviceVaultName: string;
mtime: number;
manifest: PluginManifest;
mainJs: string;
manifestJson: string;
styleCss?: string;
// it must be encrypted.
dataJson?: string;
_rev?: string;
_deleted?: boolean;
_conflicts?: string[];
type: "plugin";
}
export interface EntryLeaf {
_id: string;
data: string;
_deleted?: boolean;
type: "leaf";
_rev?: string;
}
export interface EntryVersionInfo {
_id: typeof VERSIONINFO_DOCID;
_rev?: string;
type: "versioninfo";
version: number;
_deleted?: boolean;
}
export interface EntryMilestoneInfo {
_id: typeof MILSTONE_DOCID;
_rev?: string;
type: "milestoneinfo";
_deleted?: boolean;
created: number;
accepted_nodes: string[];
locked: boolean;
}
export interface EntryNodeInfo {
_id: typeof NODEINFO_DOCID;
_rev?: string;
_deleted?: boolean;
type: "nodeinfo";
nodeid: string;
}
export type EntryBody = Entry | NewEntry | PlainEntry;
export type EntryDoc = EntryBody | LoadedEntry | EntryLeaf | EntryVersionInfo | EntryMilestoneInfo | EntryNodeInfo;
export type diff_result_leaf = {
rev: string;
data: string;
ctime: number;
mtime: number;
};
export type dmp_result = Array<[number, string]>;
export type diff_result = {
left: diff_result_leaf;
right: diff_result_leaf;
diff: dmp_result;
};
export type diff_check_result = boolean | diff_result;
export type Credential = {
username: string;
password: string;
};
export type EntryDocResponse = EntryDoc & PouchDB.Core.IdMeta & PouchDB.Core.GetMeta;
export type DatabaseConnectingStatus = "STARTED" | "NOT_CONNECTED" | "PAUSED" | "CONNECTED" | "COMPLETED" | "CLOSED" | "ERRORED";
export interface PluginList {
[key: string]: PluginDataEntry[];
}
export interface DevicePluginList {
[key: string]: PluginDataEntry;
}
export const FLAGMD_REDFLAG = "redflag.md";

236
src/utils.ts Normal file

@@ -0,0 +1,236 @@
import { normalizePath } from "obsidian";
import { Logger } from "./logger";
import { FLAGMD_REDFLAG, LOG_LEVEL } from "./types";
export function arrayBufferToBase64(buffer: ArrayBuffer): Promise<string> {
return new Promise((res) => {
const blob = new Blob([buffer], { type: "application/octet-binary" });
const reader = new FileReader();
reader.onload = function (evt) {
const dataurl = evt.target.result.toString();
res(dataurl.substr(dataurl.indexOf(",") + 1));
};
reader.readAsDataURL(blob);
});
}
export function base64ToString(base64: string): string {
try {
const binary_string = window.atob(base64);
const len = binary_string.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binary_string.charCodeAt(i);
}
return new TextDecoder().decode(bytes);
} catch (ex) {
return base64;
}
}
export function base64ToArrayBuffer(base64: string): ArrayBuffer {
try {
const binary_string = window.atob(base64);
const len = binary_string.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binary_string.charCodeAt(i);
}
return bytes.buffer;
} catch (ex) {
try {
return new Uint16Array(
[].map.call(base64, function (c: string) {
return c.charCodeAt(0);
})
).buffer;
} catch (ex2) {
return null;
}
}
}
export const escapeStringToHTML = (str: string) => {
if (!str) return "";
return str.replace(/[<>&"'`]/g, (match) => {
const escape: any = {
"<": "&lt;",
">": "&gt;",
"&": "&amp;",
'"': "&quot;",
"'": "&#39;",
"`": "&#x60;",
};
return escape[match];
});
};
export function resolveWithIgnoreKnownError<T>(p: Promise<T>, def: T): Promise<T> {
return new Promise((res, rej) => {
p.then(res).catch((ex) => (ex.status && ex.status == 404 ? res(def) : rej(ex)));
});
}
export function isValidPath(filename: string): boolean {
// eslint-disable-next-line no-control-regex
const regex = /[\u0000-\u001f]|[\\":?<>|*#]/g;
let x = filename.replace(regex, "_");
const win = /(\\|\/)(COM\d|LPT\d|CON|PRN|AUX|NUL|CLOCK$)($|\.)/gi;
const sx = (x = x.replace(win, "/_"));
return sx == filename;
}
export function shouldBeIgnored(filename: string): boolean {
if (filename == FLAGMD_REDFLAG) {
return true;
}
return false;
}
export function versionNumberString2Number(version: string): number {
return version // "1.23.45"
.split(".") // 1 23 45
.reverse() // 45 23 1
.map((e, i) => ((e as any) / 1) * 1000 ** i) // 45 23000 1000000
.reduce((prev, current) => prev + current, 0); // 1023045
}
export const delay = (ms: number): Promise<void> => {
return new Promise((res) => {
setTimeout(() => {
res();
}, ms);
});
};
// For backward compatibility, the path is used to determine the id.
// Only ids that CouchDB cannot accept (those starting with an underscore) are prefixed with "/".
// The first slash will be deleted when the path is normalized.
export function path2id(filename: string): string {
let x = normalizePath(filename);
if (x.startsWith("_")) x = "/" + x;
return x;
}
export function id2path(filename: string): string {
return normalizePath(filename);
}
const runningProcs: string[] = [];
const pendingProcs: { [key: string]: (() => Promise<void>)[] } = {};
function objectToKey(key: any): string {
if (typeof key === "string") return key;
const keys = Object.keys(key).sort((a, b) => a.localeCompare(b));
return keys.map((e) => e + objectToKey(key[e])).join(":");
}
export function getProcessingCounts() {
let count = 0;
for (const v in pendingProcs) {
count += pendingProcs[v].length;
}
count += runningProcs.length;
return count;
}
let externalNotifier: () => void = () => {};
let notifyTimer: number = null;
export function setLockNotifier(fn: () => void) {
externalNotifier = fn;
}
function notifyLock() {
if (notifyTimer != null) {
window.clearTimeout(notifyTimer);
}
notifyTimer = window.setTimeout(() => {
externalNotifier();
}, 100);
}
// Run async procedures serially per key, similar to transaction ISOLATION SERIALIZABLE.
export function runWithLock<T>(key: unknown, ignoreWhenRunning: boolean, proc: () => Promise<T>): Promise<T> {
// Logger(`Lock:${key}:enter`, LOG_LEVEL.VERBOSE);
const lockKey = typeof key === "string" ? key : objectToKey(key);
const handleNextProcs = () => {
if (typeof pendingProcs[lockKey] === "undefined") {
//simply unlock
runningProcs.remove(lockKey);
notifyLock();
// Logger(`Lock:${lockKey}:released`, LOG_LEVEL.VERBOSE);
} else {
Logger(`Lock:${lockKey}:left ${pendingProcs[lockKey].length}`, LOG_LEVEL.VERBOSE);
let nextProc = null;
nextProc = pendingProcs[lockKey].shift();
notifyLock();
if (nextProc) {
// left some
nextProc()
.then()
.catch((err) => {
Logger(err);
})
.finally(() => {
if (pendingProcs && lockKey in pendingProcs && pendingProcs[lockKey].length == 0) {
delete pendingProcs[lockKey];
notifyLock();
}
queueMicrotask(() => {
handleNextProcs();
});
});
} else {
if (pendingProcs && lockKey in pendingProcs && pendingProcs[lockKey].length == 0) {
delete pendingProcs[lockKey];
notifyLock();
}
}
}
};
if (runningProcs.contains(lockKey)) {
if (ignoreWhenRunning) {
return null;
}
if (typeof pendingProcs[lockKey] === "undefined") {
pendingProcs[lockKey] = [];
}
let responderRes: (value: T | PromiseLike<T>) => void;
let responderRej: (reason?: unknown) => void;
const responder = new Promise<T>((res, rej) => {
responderRes = res;
responderRej = rej;
//wait for subproc resolved
});
const subproc = () =>
new Promise<void>((res, rej) => {
proc()
.then((v) => {
// Logger(`Lock:${key}:processed`, LOG_LEVEL.VERBOSE);
handleNextProcs();
responderRes(v);
res();
})
.catch((reason) => {
Logger(`Lock:${key}:rejected`, LOG_LEVEL.VERBOSE);
handleNextProcs();
rej(reason);
responderRej(reason);
});
});
pendingProcs[lockKey].push(subproc);
notifyLock();
// Logger(`Lock:${lockKey}:queued:left${pendingProcs[lockKey].length}`, LOG_LEVEL.VERBOSE);
return responder;
} else {
runningProcs.push(lockKey);
notifyLock();
// Logger(`Lock:${lockKey}:acquired`, LOG_LEVEL.VERBOSE);
return new Promise((res, rej) => {
proc()
.then((v) => {
handleNextProcs();
res(v);
})
.catch((reason) => {
handleNextProcs();
rej(reason);
});
});
}
}
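// A usage sketch of runWithLock above, assuming a caller that serializes writes per file
// path; "path" and "save" are illustrative names, not actual plugin symbols.
export async function exampleQueueSave(path: string, save: () => Promise<void>): Promise<boolean | null> {
    // Calls sharing the same key run one at a time, in order; different keys run independently.
    const result = await runWithLock(path, false, async () => {
        await save();
        return true;
    });
    // With ignoreWhenRunning = true, the call is skipped (null is returned) while another
    // run for the same path is still in flight.
    return result;
}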

113
src/utils_couchdb.ts Normal file

@@ -0,0 +1,113 @@
import { Logger } from "./logger";
import { LOG_LEVEL, VER, VERSIONINFO_DOCID, EntryVersionInfo, EntryDoc } from "./types";
import { resolveWithIgnoreKnownError } from "./utils";
import { PouchDB } from "../pouchdb-browser-webpack/dist/pouchdb-browser.js";
export const isValidRemoteCouchDBURI = (uri: string): boolean => {
if (uri.startsWith("https://")) return true;
if (uri.startsWith("http://")) return true;
return false;
};
let last_post_successed = false;
export const getLastPostFailedBySize = () => {
return !last_post_successed;
};
export const connectRemoteCouchDB = async (uri: string, auth: { username: string; password: string }): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> => {
if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
const conf: PouchDB.HttpAdapter.HttpAdapterConfiguration = {
adapter: "http",
auth,
fetch: async function (url: string | Request, opts: RequestInit) {
let size_ok = true;
let size = "";
const localURL = url.toString().substring(uri.length);
const method = opts.method ?? "GET";
if (opts.body) {
const opts_length = opts.body.toString().length;
if (opts_length > 1024 * 1024 * 10) {
// over 10MB
size_ok = false;
if (uri.contains(".cloudantnosqldb.")) {
last_post_successed = false;
Logger("This request should fail on IBM Cloudant.", LOG_LEVEL.VERBOSE);
throw new Error("This request should fail on IBM Cloudant.");
}
}
size = ` (${opts_length})`;
}
try {
const responce: Response = await fetch(url, opts);
if (method == "POST" || method == "PUT") {
last_post_successed = responce.ok;
} else {
last_post_successed = true;
}
Logger(`HTTP:${method}${size} to:${localURL} -> ${responce.status}`, LOG_LEVEL.VERBOSE);
return responce;
} catch (ex) {
Logger(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL.VERBOSE);
if (!size_ok && (method == "POST" || method == "PUT")) {
last_post_successed = false;
}
Logger(ex);
throw ex;
}
// return await fetch(url, opts);
},
};
const db: PouchDB.Database<EntryDoc> = new PouchDB<EntryDoc>(uri, conf);
try {
const info = await db.info();
return { db: db, info: info };
} catch (ex) {
let msg = `${ex.name}:${ex.message}`;
if (ex.name == "TypeError" && ex.message == "Failed to fetch") {
msg += "\n**Note** This error can be caused by many things. The only sure thing is that the request did not reach the server.\nFor details, open the inspector.";
}
Logger(ex, LOG_LEVEL.VERBOSE);
return msg;
}
};
// Check the version of the remote database.
// If the remote is newer than the current (or specified) version, return false.
export const checkRemoteVersion = async (db: PouchDB.Database, migrate: (from: number, to: number) => Promise<boolean>, barrier: number = VER): Promise<boolean> => {
try {
const versionInfo = (await db.get(VERSIONINFO_DOCID)) as EntryVersionInfo;
if (versionInfo.type != "versioninfo") {
return false;
}
const version = versionInfo.version;
if (version < barrier) {
const versionUpResult = await migrate(version, barrier);
if (versionUpResult) {
await bumpRemoteVersion(db);
return true;
}
}
if (version == barrier) return true;
return false;
} catch (ex) {
if (ex.status && ex.status == 404) {
if (await bumpRemoteVersion(db)) {
return true;
}
return false;
}
throw ex;
}
};
export const bumpRemoteVersion = async (db: PouchDB.Database, barrier: number = VER): Promise<boolean> => {
const vi: EntryVersionInfo = {
_id: VERSIONINFO_DOCID,
version: barrier,
type: "versioninfo",
};
const versionInfo = (await resolveWithIgnoreKnownError(db.get(VERSIONINFO_DOCID), vi)) as EntryVersionInfo;
if (versionInfo.type != "versioninfo") {
return false;
}
vi._rev = versionInfo._rev;
await db.put(vi);
return true;
};

styles.css

@@ -28,3 +28,115 @@
-webkit-filter: grayscale(100%);
filter: grayscale(100%);
}
.tcenter {
text-align: center;
}
.sls-plugins-wrap {
display: flex;
flex-grow: 1;
/* overflow: scroll; */
}
.sls-plugins-tbl {
border: 1px solid var(--background-modifier-border);
width: 100%;
}
.divider th {
border-top: 1px solid var(--background-modifier-border);
}
/* .sls-table-head{
width:50%;
}
.sls-table-tail{
width:50%;
} */
.sls-btn-left {
padding-right: 4px;
}
.sls-btn-right {
padding-left: 4px;
}
.sls-hidden {
display: none;
}
:root {
--slsmessage: "";
}
.CodeMirror-wrap::before,
.cm-s-obsidian > .cm-editor::before {
content: var(--slsmessage);
position: absolute;
border-radius: 4px;
/* border:1px solid --background-modifier-border; */
display: inline-block;
top: 8px;
color: --text-normal;
opacity: 0.5;
font-size: 80%;
-webkit-filter: grayscale(100%);
filter: grayscale(100%);
}
.CodeMirror-wrap::before {
right: 0px;
}
.cm-s-obsidian > .cm-editor::before {
right: 16px;
}
.sls-setting-tab {
display: none;
}
div.sls-setting-menu-btn {
color: var(--text-normal);
background-color: var(--background-secondary-alt);
border-radius: 4px 4px 0 0;
padding: 6px 10px;
cursor: pointer;
margin-right: 12px;
font-family: "Inter", sans-serif;
outline: none;
user-select: none;
flex-grow: 1;
text-align: center;
flex-shrink: 1;
}
.sls-setting-label.selected {
/* order: 1; */
flex-grow: 1;
/* width: 100%; */
}
.sls-setting-tab:hover ~ div.sls-setting-menu-btn,
.sls-setting-tab:checked ~ div.sls-setting-menu-btn {
background-color: var(--interactive-accent);
color: var(--text-on-accent);
}
.sls-setting-menu {
display: flex;
flex-direction: row;
/* flex-wrap: wrap; */
overflow-x: auto;
}
.sls-setting-label {
flex-grow: 1;
display: inline-flex;
justify-content: center;
}
.setting-collapsed {
display: none;
}
.sls-plugins-tbl-buttons {
text-align: right;
}
.sls-plugins-tbl-buttons button {
flex-grow: 0;
padding: 6px 10px;
}
.sls-plugins-tbl-device-head {
background-color: var(--background-secondary-alt);
color: var(--text-accent);
}

tsconfig.json

@@ -9,9 +9,13 @@
"noImplicitAny": true,
"moduleResolution": "node",
"importHelpers": true,
"lib": ["dom", "es5", "scripthost", "es2015"]
"noImplicitReturns": true,
"noImplicitThis": true,
"strictFunctionTypes": true,
"alwaysStrict": true,
"lib": ["dom", "es5", "ES6", "ES7", "es2020"]
},
"include": ["**/*.ts"],
"files": ["./main.ts"],
"include": ["./src/*.ts"],
// "files": ["./src/main.ts"],
"exclude": ["pouchdb-browser-webpack"]
}