Mirror of https://github.com/vrtmrz/obsidian-livesync.git, synced 2026-02-23 12:38:47 +00:00.

Compare commits (15 commits)
- 73ac93e8c5
- 8d2b9eff37
- 0ee32a2147
- ac3c78e198
- 0da1e3d9c8
- 8f021a3c93
- 6db0743096
- 0e300a0a6b
- 9d0ffd1848
- e7f4d8c9c2
- ca36e1b663
- 8f583e3680
- 98407cf72f
- 1f377cdf67
- 3a965e74da
`.github/workflows/release.yml` (vendored, 2 changes)

```diff
@@ -22,7 +22,7 @@ jobs:
       - name: Get Version
         id: version
         run: |
-          echo "::set-output name=tag::$(git describe --abbrev=0)"
+          echo "::set-output name=tag::$(git describe --abbrev=0 --tags)"
       # Build the plugin
       - name: Build
         id: build
```
`README.md`

```diff
@@ -1,6 +1,6 @@
 # Self-hosted LiveSync
 
-[Japanese docs](./README_ja.md).
+[Japanese docs](./README_ja.md) [Chinese docs](./README_cn.md).
 
 Self-hosted LiveSync is a community implemented synchronization plugin.
 A self-hosted or purchased CouchDB acts as the intermediate server. Available on every obsidian-compatible platform.
```
`README_cn.md` (new file, 130 lines)
# Self-hosted LiveSync

Self-hosted LiveSync is a community-implemented synchronization plugin.
It uses a self-hosted or purchased CouchDB as the intermediate server, and is available on every Obsidian-compatible platform.

Note: this plugin is not compatible with the official "Obsidian Sync" service.



Back up your vault before installing or upgrading LiveSync.
## Features

- Visual conflict resolver
- Near-real-time bidirectional synchronization between multiple devices
- Works with CouchDB and compatible services, such as IBM Cloudant
- End-to-end encryption support
- Plugin synchronization (beta)
- Receiving WebClips from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf) (end-to-end encryption does not apply to this feature)

Useful for researchers, engineers, and developers who need to keep their notes fully self-hosted for security reasons, and for anyone who likes the peace of mind of knowing their notes are completely private.

## Important notices

- Do not use this together with other synchronization solutions (including iCloud and Obsidian Sync). Make sure all other synchronization methods are disabled before enabling this plugin, to avoid content corruption or duplication. If you want to synchronize to multiple services, do so one at a time and never enable two synchronization methods at once.
  This also means you must not keep your vault inside a cloud-synced folder (such as an iCloud or Dropbox folder).
- This is a synchronization plugin, not a backup solution. Do not rely on it for backups.
- If a device runs out of storage, database corruption may occur.
- Hidden files and other invisible files are not stored in the database, so they are not synchronized (**and may even be deleted**).
## How to use

### Prepare your database

First, prepare your database. IBM Cloudant is the preferred choice for testing. Alternatively, you can install CouchDB on your own server. For more information, see:
1. [Setup IBM Cloudant](docs/setup_cloudant.md)
2. [Setup your CouchDB](docs/setup_own_server_cn.md)

Note: more setup methods are wanted! [Using fly.io](https://github.com/vrtmrz/obsidian-livesync/discussions/85) is currently under discussion.
### First device

1. Install the plugin on your device.
2. Configure the remote database information.
   1. Fill in your server information on the `Remote Database configuration` settings pane.
   2. Enabling `End to End Encryption` is recommended. After entering a passphrase, click `Apply`.
   3. Click `Test Database Connection` and make sure the plugin reports `Connected to (your database name)`.
   4. Click `Check database configuration` and make sure all tests pass.
3. Configure when synchronization happens on the `Sync Settings` tab. (You can also do this later.)
   1. If you want real-time synchronization, enable `LiveSync`.
   2. Otherwise, set the synchronization method according to your needs. By default, no automatic synchronization is enabled, which means you have to trigger replication manually.
   3. Other options also live here. Enabling `Use Trash for deleted files` is recommended, but you can leave every option as it is.
4. Configure miscellaneous features.
   1. Enabling `Show status inside editor` displays the status in the top-right corner of the editor. (Recommended.)
5. Return to the editor and wait for the initial scan to finish.
6. When the status stops changing and shows the ⏹️ icon for COMPLETED (with no ⏳ or 🧩 icons), you are ready to synchronize with the server.
7. Press the replicate icon on the ribbon, or run `Replicate now` from the command palette. This sends all your data to the server.
8. Open the command palette, run `Copy setup URI`, and set a passphrase. This exports your configuration to the clipboard as a link you can import on your other devices.

**Important: never share this link publicly — it contains all of your credentials!** (Even though nobody can read it without the passphrase.)
### Subsequent devices

Note: if you synchronize with a non-empty vault, the files' modification dates and times must match each other. Otherwise, extra transfers may occur or files may be corrupted.
For simplicity, we strongly recommend synchronizing into a completely empty vault.

1. Install the plugin.
2. Open the link you exported from the first device.
3. The plugin asks whether you are sure you want to apply the configuration. Answer `Yes`, then follow these instructions:
   1. Answer `Yes` to `Keep local DB?`.
      *Note: if you want to keep the existing local vault, you must answer `No` to this question and `No` to `Rebuild the database?`.*
   2. Answer `Yes` to `Keep remote DB?`.
   3. Answer `Yes` to `Replicate once?`.
   When finished, all your settings are imported from the first device.
4. Your notes should be synchronized shortly.
## Files look corrupted...

Open the setup link again and answer as follows:
- If your local database looks corrupted (your local Obsidian files look wrong)
  - Answer `No` to `Keep local DB?`
- If your remote database looks corrupted (replication keeps getting interrupted)
  - Answer `No` to `Keep remote DB?`

If you answer `No` to both, your database is rebuilt from the contents on your device, and the remote database locks out the other devices, so you will have to synchronize all devices again. (At that point, almost every file is synchronized by timestamp, so you can safely keep using your existing vault.)
## Test server

Setting up Cloudant or a local CouchDB instance is somewhat complicated, so I have set up a [self-hosted-livesync trial server](https://olstaste.vrtmrz.net/). Feel free to try it out!
Note: read the "Limitations" section carefully, and do not send your private vault.
## Status bar information

The synchronization status is displayed in the status bar.

- Statuses
  - ⏹️ Ready
  - 💤 LiveSync is enabled and waiting for changes.
  - ⚡️ Synchronizing.
  - ⚠ An error occurred.
- ↑ Number of uploaded chunks and metadata
- ↓ Number of downloaded chunks and metadata
- ⏳ Number of pending processes
- 🧩 Number of files waiting for chunks

If you delete or rename files, wait until the ⏳ icon disappears.
## Tips

- If a folder becomes empty after replication, it is deleted by default. You can turn this behavior off; see [Settings](docs/settings.md).
- LiveSync mode can increase battery drain on mobile devices. Periodic sync combined with conditional automatic sync is recommended.
- Obsidian on mobile cannot connect to non-secure (HTTP) servers or servers with locally signed certificates, even if the root certificate is installed on the device.
- There is no "exclude_folders"-like configuration.
- During synchronization, files are compared by modification time and the older one is overwritten by the newer one. The plugin then checks for conflicts and opens a dialog if a merge is needed.
- In rare cases, files in the database may become corrupted. The plugin does not write a received file to local storage when it looks corrupted. If a local version of the file exists on your device, the corrupted version can be overwritten by editing the local file and synchronizing. However, if the file does not exist on any of your devices, it cannot be rescued. In that case, you can delete these corrupted files from the settings dialog.
- If your database looks corrupted, try "Drop History". Usually, this is the easiest way.
- To stop the plugin's boot-up sequence (for example, to fix database issues), you can create a "redflag.md" file at the root of your vault.
- Q: The database is growing — how can I shrink it?
  A: Each document keeps its last 100 revisions for detecting and resolving conflicts. Imagine a device that has been offline for a while coming back online. The device has to compare its notes with the ones saved remotely. If a historical revision exists that was once shared, the file can safely be updated in place (like git's fast-forward). Even when the file is not in the revision history, we only have to check the differences after the revision both devices have in common, which is like git's conflict resolution. So to fundamentally fix the database-size problem, we would have to redesign the database as if we were building a scaled-up git repository.
- More technical information is in [Technical information](docs/tech_info.md)
- If you want to synchronize files without Obsidian, you can use [filesystem-livesync](https://github.com/vrtmrz/filesystem-livesync).
- The WebClipper is also available on the Chrome Web Store: [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)

Repository: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip) (documentation in progress)
## License

The source code is licensed under the MIT License.
`docs/setup_own_server_cn.md` (new file, 95 lines)
# Setup CouchDB on your own server

> Note: a [docker-compose.yml and ini files](https://github.com/vrtmrz/self-hosted-livesync-server) that start Caddy and CouchDB together are available. Setting up with that docker-compose configuration directly is recommended. (If you use it, follow the documentation in that link instead of this document.)

## Install CouchDB and access it from a PC or Mac

The easiest way to set up CouchDB is to use the [CouchDB docker image](https://hub.docker.com/_/couchdb).

A few entries in `local.ini` have to be set so it can be used with Self-hosted LiveSync, as follows:
```
[couchdb]
single_node=true

[chttpd]
require_valid_user = true

[chttpd_auth]
require_valid_user = true
authentication_redirect = /_utils/session.html

[httpd]
WWW-Authenticate = Basic realm="couchdb"
enable_cors = true

[cors]
origins = app://obsidian.md,capacitor://localhost,http://localhost
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
max_age = 3600
```
Create `local.ini` and start CouchDB with:
```
$ docker run --rm -it -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -v .local.ini:/opt/couchdb/etc/local.ini -p 5984:5984 couchdb
```
Note: at this point, the owner of `local.ini` becomes 5984:5984. This is a limitation of the docker image; change the file's owner back before editing `local.ini` again.

Once you have confirmed that Self-hosted LiveSync can synchronize with the server, you can run the image in the background:

```
$ docker run -d --restart always -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -v .local.ini:/opt/couchdb/etc/local.ini -p 5984:5984 couchdb
```
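Once the container is up, it can be sanity-checked from any Node 18+ environment with the built-in `fetch`. This is an illustrative sketch, not part of the plugin; the `admin`/`password` credentials and the `http://localhost:5984` base URL are the assumptions from the commands above:

```typescript
// Hypothetical helper: build the Basic auth header CouchDB expects
// when chttpd.require_valid_user = true (as in the local.ini above).
function buildAuthHeader(user: string, pass: string): string {
    return "Basic " + btoa(`${user}:${pass}`);
}

// Probe the server: GET /_up answers {"status":"ok"} when CouchDB is healthy.
async function checkCouchDb(base: string, user: string, pass: string): Promise<boolean> {
    const res = await fetch(`${base}/_up`, {
        headers: { Authorization: buildAuthHeader(user, pass) },
    });
    return res.ok;
}

// e.g. await checkCouchDb("http://localhost:5984", "admin", "password");
```

If this returns `false` (or the request is rejected with 401), recheck the credentials and the `[chttpd]`/`[chttpd_auth]` entries before moving on to the plugin setup.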
## Access from a mobile device
If you want to access Self-hosted LiveSync from a mobile device, you need a valid SSL certificate.

### Testing from mobile
For testing, reverse-tunnel services such as [localhost.run](http://localhost.run/) are handy. (Not required — just an alternative for when running ssh from the end device is inconvenient.)
```
$ ssh -R 80:localhost:5984 nokey@localhost.run
Warning: Permanently added the RSA host key for IP address '35.171.254.69' to the list of known hosts.

===============================================================================
Welcome to localhost.run!

Follow your favourite reverse tunnel at [https://twitter.com/localhost_run].

**You need a SSH key to access this service.**
If you get a permission denied follow Gitlab's most excellent howto:
https://docs.gitlab.com/ee/ssh/
*Only rsa and ed25519 keys are supported*

To set up and manage custom domains go to https://admin.localhost.run/

More details on custom domains (and how to enable subdomains of your custom
domain) at https://localhost.run/docs/custom-domains

To explore using localhost.run visit the documentation site:
https://localhost.run/docs/

===============================================================================


** your connection id is xxxxxxxxxxxxxxxxxxxxxxxxxxxx, please mention it if you send me a message about an issue. **

xxxxxxxx.localhost.run tunneled with tls termination, https://xxxxxxxx.localhost.run
Connection to localhost.run closed by remote host.
Connection to localhost.run closed.
```

https://xxxxxxxx.localhost.run is your temporary server address.
### Set up your domain

Create an A record pointing at your server, and set up a reverse proxy as needed.

Note: mounting CouchDB at the root path is not recommended.
Caddy is a convenient way to add SSL to your server.

A [docker-compose.yml and ini files](https://github.com/vrtmrz/self-hosted-livesync-server) that start Caddy and CouchDB together are available.

Keep an eye on your server logs, and watch out for malicious access.
`manifest.json`

```diff
@@ -1,7 +1,7 @@
 {
     "id": "obsidian-livesync",
     "name": "Self-hosted LiveSync",
-    "version": "0.13.3",
+    "version": "0.14.3",
     "minAppVersion": "0.9.12",
     "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "author": "vorotamoroz",
```
`package-lock.json` (generated, 4 changes)
```diff
@@ -1,12 +1,12 @@
 {
   "name": "obsidian-livesync",
-  "version": "0.13.3",
+  "version": "0.14.3",
   "lockfileVersion": 2,
   "requires": true,
   "packages": {
     "": {
       "name": "obsidian-livesync",
-      "version": "0.13.3",
+      "version": "0.14.3",
       "license": "MIT",
       "dependencies": {
         "diff-match-patch": "^1.0.5",
```
`package.json`

```diff
@@ -1,6 +1,6 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.13.3",
+    "version": "0.14.3",
     "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "main": "main.js",
     "type": "module",
```
```diff
@@ -31,14 +31,25 @@ export class DocumentHistoryModal extends Modal {
     }
     async loadFile() {
         const db = this.plugin.localDatabase;
-        const w = await db.localDatabase.get(path2id(this.file), { revs_info: true });
-        this.revs_info = w._revs_info.filter((e) => e.status == "available");
-        this.range.max = `${this.revs_info.length - 1}`;
-        this.range.value = this.range.max;
-        this.fileInfo.setText(`${this.file} / ${this.revs_info.length} revisions`);
-        await this.loadRevs();
+        try {
+            const w = await db.localDatabase.get(path2id(this.file), { revs_info: true });
+            this.revs_info = w._revs_info.filter((e) => e.status == "available");
+            this.range.max = `${this.revs_info.length - 1}`;
+            this.range.value = this.range.max;
+            this.fileInfo.setText(`${this.file} / ${this.revs_info.length} revisions`);
+            await this.loadRevs();
+        } catch (ex) {
+            if (ex.status && ex.status == 404) {
+                this.range.max = "0";
+                this.range.value = "";
+                this.range.disabled = true;
+                this.showDiff
+                this.contentView.setText(`History of this file was not recorded.`);
+            }
+        }
     }
     async loadRevs() {
         if (this.revs_info.length == 0) return;
         const db = this.plugin.localDatabase;
         const index = this.revs_info.length - 1 - (this.range.value as any) / 1;
         const rev = this.revs_info[index];
```
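In the hunk above, `loadRevs` maps the slider position to a revision with `revs_info.length - 1 - value`. Since PouchDB's `_revs_info` is ordered newest-first, the slider's maximum selects index 0, i.e. the latest available revision. A standalone sketch of that mapping (a hypothetical helper, not plugin code):

```typescript
// Map a slider value in 0..revCount-1 onto an index into a newest-first
// revision list: the maximum slider value selects the newest revision (index 0).
function revIndexFromSlider(revCount: number, sliderValue: number): number {
    return revCount - 1 - sliderValue;
}
```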
```diff
@@ -24,9 +24,9 @@ import {
 } from "./lib/src/types";
 import { RemoteDBSettings } from "./lib/src/types";
 import { resolveWithIgnoreKnownError, runWithLock, shouldSplitAsPlainText, splitPieces2, enableEncryption } from "./lib/src/utils";
-import { path2id } from "./utils";
+import { id2path, path2id } from "./utils";
 import { Logger } from "./lib/src/logger";
-import { checkRemoteVersion, connectRemoteCouchDBWithSetting, getLastPostFailedBySize } from "./utils_couchdb";
+import { checkRemoteVersion, connectRemoteCouchDBWithSetting, getLastPostFailedBySize, putDesignDocuments } from "./utils_couchdb";
 import { KeyValueDatabase, OpenKeyValueDatabase } from "./KeyValueDB";
 import { LRUCache } from "./lib/src/LRUCache";
```
```diff
@@ -72,6 +72,7 @@ export class LocalPouchDB {
     chunkVersion = -1;
     maxChunkVersion = -1;
     minChunkVersion = -1;
+    needScanning = false;
 
     cancelHandler<T extends PouchDB.Core.Changes<EntryDoc> | PouchDB.Replication.Sync<EntryDoc> | PouchDB.Replication.Replication<EntryDoc>>(handler: T): T {
         if (handler != null) {
@@ -160,6 +161,7 @@ export class LocalPouchDB {
             this.localDatabase.removeAllListeners();
         });
         this.nodeid = nodeinfo.nodeid;
+        await putDesignDocuments(this.localDatabase);
 
         // Traceing the leaf id
         const changes = this.localDatabase
```
```diff
@@ -299,6 +301,10 @@ export class LocalPouchDB {
     }
 
     async getDBEntryMeta(path: string, opt?: PouchDB.Core.GetOptions, includeDeleted = false): Promise<false | LoadedEntry> {
+        // safety valve
+        if (!this.isTargetFile(path)) {
+            return false;
+        }
         const id = path2id(path);
         try {
             let obj: EntryDocResponse = null;
@@ -348,6 +354,10 @@ export class LocalPouchDB {
         return false;
     }
     async getDBEntry(path: string, opt?: PouchDB.Core.GetOptions, dump = false, waitForReady = true, includeDeleted = false): Promise<false | LoadedEntry> {
+        // safety valve
+        if (!this.isTargetFile(path)) {
+            return false;
+        }
         const id = path2id(path);
         try {
             let obj: EntryDocResponse = null;
```
```diff
@@ -392,26 +402,51 @@ export class LocalPouchDB {
             // simple note
         }
         if (obj.type == "newnote" || obj.type == "plain") {
-            // search childrens
+            // search children
             try {
                 if (dump) {
                     Logger(`Enhanced doc`);
                     Logger(obj);
                 }
-                let childrens: string[];
-                try {
-                    childrens = await Promise.all(obj.children.map((e) => this.getDBLeaf(e, waitForReady)));
-                    if (dump) {
-                        Logger(`Chunks:`);
-                        Logger(childrens);
-                    }
-                } catch (ex) {
-                    Logger(`Something went wrong on reading chunks of ${obj._id} from database, see verbose info for detail.`, LOG_LEVEL.NOTICE);
-                    Logger(ex, LOG_LEVEL.VERBOSE);
-                    this.corruptedEntries[obj._id] = obj;
-                    return false;
-                }
-                const data = childrens.join("");
+                let children: string[] = [];
+
+                if (this.settings.readChunksOnline) {
+                    const items = await this.CollectChunks(obj.children);
+                    if (items) {
+                        for (const v of items) {
+                            if (v && v.type == "leaf") {
+                                children.push(v.data);
+                            } else {
+                                if (!opt) {
+                                    Logger(`Chunks of ${obj._id} are not valid.`, LOG_LEVEL.NOTICE);
+                                    this.needScanning = true;
+                                    this.corruptedEntries[obj._id] = obj;
+                                }
+                                return false;
+                            }
+                        }
+                    } else {
+                        if (opt) {
+                            Logger(`Could not retrieve chunks of ${obj._id}. we have to `, LOG_LEVEL.NOTICE);
+                            this.needScanning = true;
+                        }
+                        return false;
+                    }
+                } else {
+                    try {
+                        children = await Promise.all(obj.children.map((e) => this.getDBLeaf(e, waitForReady)));
+                        if (dump) {
+                            Logger(`Chunks:`);
+                            Logger(children);
+                        }
+                    } catch (ex) {
+                        Logger(`Something went wrong on reading chunks of ${obj._id} from database, see verbose info for detail.`, LOG_LEVEL.NOTICE);
+                        Logger(ex, LOG_LEVEL.VERBOSE);
+                        this.corruptedEntries[obj._id] = obj;
+                        return false;
+                    }
+                }
+                const data = children.join("");
                 const doc: LoadedEntry & PouchDB.Core.IdMeta & PouchDB.Core.GetMeta = {
                     data: data,
                     _id: obj._id,
```
```diff
@@ -452,6 +487,10 @@ export class LocalPouchDB {
         return false;
     }
     async deleteDBEntry(path: string, opt?: PouchDB.Core.GetOptions): Promise<boolean> {
+        // safety valve
+        if (!this.isTargetFile(path)) {
+            return false;
+        }
         const id = path2id(path);
 
         try {
@@ -521,7 +560,7 @@ export class LocalPouchDB {
             for (const v of result.rows) {
                 // let doc = v.doc;
                 if (v.id.startsWith(prefix) || v.id.startsWith("/" + prefix)) {
-                    delDocs.push(v.id);
+                    if (this.isTargetFile(id2path(v.id))) delDocs.push(v.id);
                     // console.log("!" + v.id);
                 } else {
                     if (!v.id.startsWith("h:")) {
```
```diff
@@ -566,12 +605,17 @@ export class LocalPouchDB {
         return true;
     }
     async putDBEntry(note: LoadedEntry, saveAsBigChunk?: boolean) {
+        //safety valve
+        if (!this.isTargetFile(id2path(note._id))) {
+            return;
+        }
+
         // let leftData = note.data;
         const savenNotes = [];
         let processed = 0;
         let made = 0;
         let skiped = 0;
-        let pieceSize = MAX_DOC_SIZE_BIN;
+        let pieceSize = MAX_DOC_SIZE_BIN * Math.max(this.settings.customChunkSize, 1);
         let plainSplit = false;
         let cacheUsed = 0;
         const userpasswordHash = this.h32Raw(new TextEncoder().encode(this.settings.passphrase));
@@ -727,7 +771,7 @@ export class LocalPouchDB {
                 }
             });
         } else {
-            Logger(`note coud not saved:${note._id}`);
+            Logger(`note could not saved:${note._id}`);
         }
     }
```
```diff
@@ -779,6 +823,7 @@ export class LocalPouchDB {
         }
 
         if (!skipCheck) {
+            await putDesignDocuments(dbret.db);
             if (!(await checkRemoteVersion(dbret.db, this.migrate.bind(this), VER))) {
                 Logger("Remote database is newer or corrupted, make sure to latest version of self-hosted-livesync installed", LOG_LEVEL.NOTICE);
                 return false;
@@ -850,6 +895,10 @@ export class LocalPouchDB {
             batches_limit: setting.batches_limit,
             batch_size: setting.batch_size,
         };
+        if (setting.readChunksOnline) {
+            syncOptionBase.push = { filter: 'replicate/push' };
+            syncOptionBase.pull = { filter: 'replicate/pull' };
+        }
         const syncOption: PouchDB.Replication.SyncOptions = keepAlive ? { live: true, retry: true, heartbeat: 30000, ...syncOptionBase } : { ...syncOptionBase };
 
         return { db: dbret.db, info: dbret.info, syncOptionBase, syncOption };
```
```diff
@@ -902,6 +951,8 @@ export class LocalPouchDB {
         this.syncStatus = "ERRORED";
         this.syncHandler = this.cancelHandler(this.syncHandler);
         this.updateInfo();
+        Logger("Replication error", LOG_LEVEL.NOTICE, "sync");
+        Logger(e);
     }
     replicationPaused() {
         this.syncStatus = "PAUSED";
@@ -962,7 +1013,7 @@ export class LocalPouchDB {
                 }
             });
         } else if (syncmode == "pullOnly") {
-            this.syncHandler = this.localDatabase.replicate.from(db, { checkpoint: "target", ...syncOptionBase });
+            this.syncHandler = this.localDatabase.replicate.from(db, { checkpoint: "target", ...syncOptionBase, ...(this.settings.readChunksOnline ? { filter: "replicate/pull" } : {}) });
             this.syncHandler
                 .on("change", async (e) => {
                     await this.replicationChangeDetected({ direction: "pull", change: e }, showResult, docSentOnStart, docArrivedOnStart, callback);
@@ -982,7 +1033,7 @@ export class LocalPouchDB {
                 }
             });
         } else if (syncmode == "pushOnly") {
-            this.syncHandler = this.localDatabase.replicate.to(db, { checkpoint: "target", ...syncOptionBase });
+            this.syncHandler = this.localDatabase.replicate.to(db, { checkpoint: "target", ...syncOptionBase, ...(this.settings.readChunksOnline ? { filter: "replicate/push" } : {}) });
             this.syncHandler.on("change", async (e) => {
                 await this.replicationChangeDetected({ direction: "push", change: e }, showResult, docSentOnStart, docArrivedOnStart, callback);
                 if (retrying) {
```
```diff
@@ -1293,4 +1344,44 @@ export class LocalPouchDB {
         if (this.minChunkVersion > 0 && this.minChunkVersion > ver) return false;
         return true;
     }
+
+    isTargetFile(file: string) {
+        if (file.includes(":")) return true;
+        if (this.settings.syncOnlyRegEx) {
+            const syncOnly = new RegExp(this.settings.syncOnlyRegEx);
+            if (!file.match(syncOnly)) return false;
+        }
+        if (this.settings.syncIgnoreRegEx) {
+            const syncIgnore = new RegExp(this.settings.syncIgnoreRegEx);
+            if (file.match(syncIgnore)) return false;
+        }
+        return true;
+    }
+
+    // Collect chunks from both local and remote.
+    async CollectChunks(ids: string[], showResult = false) {
+        // Fetch local chunks.
+        const localChunks = await this.localDatabase.allDocs({ keys: ids, include_docs: true });
+        const missingChunks = localChunks.rows.filter(e => "error" in e).map(e => e.key);
+        // If we have enough chunks, return them.
+        if (missingChunks.length == 0) {
+            return localChunks.rows.map(e => e.doc);
+        }
+        // Fetching remote chunks.
+        const ret = await connectRemoteCouchDBWithSetting(this.settings, this.isMobile);
+        if (typeof (ret) === "string") {
+            Logger(`Could not connect to server.${ret} `, showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO, "fetch");
+            return false;
+        }
+        const remoteChunks = await ret.db.allDocs({ keys: missingChunks, include_docs: true });
+        if (remoteChunks.rows.some(e => "error" in e)) {
+            return false;
+        }
+        // Merge them
+        const chunkMap: { [key: string]: EntryDoc } = remoteChunks.rows.reduce((p, c) => ({ ...p, [c.key]: c.doc }), {})
+        return localChunks.rows.map(e => ("error" in e) ? (chunkMap[e.key]) : e.doc);
+    }
 }
```
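The include-then-exclude semantics of the `isTargetFile` filter added above can be exercised in isolation. A standalone sketch with hypothetical parameter names (the real method reads the two patterns from the plugin settings):

```typescript
// Paths containing ":" are internal ids and always pass.
// Otherwise the path must match syncOnlyRegEx (when set) and
// must not match syncIgnoreRegEx (when set).
function isTargetPath(path: string, syncOnlyRegEx: string, syncIgnoreRegEx: string): boolean {
    if (path.includes(":")) return true;
    if (syncOnlyRegEx && !new RegExp(syncOnlyRegEx).test(path)) return false;
    if (syncIgnoreRegEx && new RegExp(syncIgnoreRegEx).test(path)) return false;
    return true;
}
```

Note that the exclude pattern wins when both match, which is why a broad `syncOnlyRegEx` can still be narrowed with `syncIgnoreRegEx`.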
```diff
@@ -115,12 +115,12 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
         };
         const applyDisplayEnabled = () => {
             if (isAnySyncEnabled()) {
-                dbsettings.forEach((e) => {
+                dbSettings.forEach((e) => {
                     e.setDisabled(true).setTooltip("Could not change this while any synchronization options are enabled.");
                 });
                 syncWarn.removeClass("sls-hidden");
             } else {
-                dbsettings.forEach((e) => {
+                dbSettings.forEach((e) => {
                     e.setDisabled(false).setTooltip("");
                 });
                 syncWarn.addClass("sls-hidden");
@@ -149,8 +149,8 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
             }
         };
 
-        const dbsettings: Setting[] = [];
-        dbsettings.push(
+        const dbSettings: Setting[] = [];
+        dbSettings.push(
             new Setting(containerRemoteDatabaseEl).setName("URI").addText((text) =>
                 text
                     .setPlaceholder("https://........")
```
```diff
@@ -469,6 +469,22 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
         } else {
             addResult("✔ httpd.enable_cors is ok.");
         }
+        // If the server is not cloudant, configure request size
+        if (!this.plugin.settings.couchDB_URI.contains(".cloudantnosqldb.")) {
+            // REQUEST SIZE
+            if (Number(responseConfig?.chttpd?.max_http_request_size ?? 0) < 4294967296) {
+                addResult("❗ chttpd.max_http_request_size is low)");
+                addConfigFixButton("Set chttpd.max_http_request_size", "chttpd/max_http_request_size", "4294967296");
+            } else {
+                addResult("✔ chttpd.max_http_request_size is ok.");
+            }
+            if (Number(responseConfig?.couchdb?.max_document_size ?? 0) < 50000000) {
+                addResult("❗ couchdb.max_document_size is low)");
+                addConfigFixButton("Set couchdb.max_document_size", "couchdb/max_document_size", "50000000");
+            } else {
+                addResult("✔ couchdb.max_document_size is ok.");
+            }
+        }
         // CORS check
         // checking connectivity for mobile
         if (responseConfig?.cors?.credentials != "true") {
```
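CouchDB reports configuration values as strings and omits unset keys, so the check above coerces with `Number(x ?? 0)` before comparing against the thresholds from the diff (4294967296 for `chttpd.max_http_request_size`, 50000000 for `couchdb.max_document_size`). A minimal sketch of that coercion (hypothetical helper name):

```typescript
// A missing value coerces to 0, which is always below the threshold,
// so unset keys are flagged as needing a raise too.
function needsRaise(configured: string | undefined, threshold: number): boolean {
    return Number(configured ?? 0) < threshold;
}
```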
```diff
@@ -652,7 +668,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
 
         new Setting(containerGeneralSettingsEl)
             .setName("Do not show low-priority Log")
-            .setDesc("Reduce log infomations")
+            .setDesc("Reduce log information")
             .addToggle((toggle) =>
                 toggle.setValue(this.plugin.settings.lessInformationInLog).onChange(async (value) => {
                     this.plugin.settings.lessInformationInLog = value;
@@ -661,7 +677,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
         );
         new Setting(containerGeneralSettingsEl)
             .setName("Verbose Log")
-            .setDesc("Show verbose log ")
+            .setDesc("Show verbose log")
             .addToggle((toggle) =>
                 toggle.setValue(this.plugin.settings.showVerboseLog).onChange(async (value) => {
                     this.plugin.settings.showVerboseLog = value;
@@ -810,15 +826,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
             })
         );
 
-        // new Setting(containerSyncSettingEl)
-        //     .setName("Skip old files on sync")
-        //     .setDesc("Skip old incoming if incoming changes older than storage.")
-        //     .addToggle((toggle) =>
-        //         toggle.setValue(this.plugin.settings.skipOlderFilesOnSync).onChange(async (value) => {
-        //             this.plugin.settings.skipOlderFilesOnSync = value;
-        //             await this.plugin.saveSettings();
-        //         })
-        //     );
         new Setting(containerSyncSettingEl)
             .setName("Check conflict only on opened files")
             .setDesc("Do not check conflict for replication")
```
```diff
@@ -829,9 +836,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
             })
         );
 
-        containerSyncSettingEl.createEl("h3", {
-            text: sanitizeHTMLToDom(`Experimental`),
-        });
 
         new Setting(containerSyncSettingEl)
             .setName("Sync hidden files")
             .addToggle((toggle) =>
@@ -926,6 +931,86 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
             })
         )
 
+        containerSyncSettingEl.createEl("h3", {
+            text: sanitizeHTMLToDom(`Experimental`),
+        });
+        new Setting(containerSyncSettingEl)
+            .setName("Regular expression to ignore files")
+            .setDesc("If this is set, any changes to local and remote files that match this will be skipped.")
+            .addTextArea((text) => {
+                text
+                    .setValue(this.plugin.settings.syncIgnoreRegEx)
+                    .setPlaceholder("\\.pdf$")
+                    .onChange(async (value) => {
+                        let isValidRegExp = false;
+                        try {
+                            new RegExp(value);
+                            isValidRegExp = true;
+                        } catch (_) {
+                            // NO OP.
+                        }
+                        if (isValidRegExp || value.trim() == "") {
+                            this.plugin.settings.syncIgnoreRegEx = value;
+                            await this.plugin.saveSettings();
+                        }
+                    })
+                return text;
+            }
+            );
+        new Setting(containerSyncSettingEl)
+            .setName("Regular expression for restricting synchronization targets")
+            .setDesc("If this is set, changes to local and remote files that only match this will be processed.")
+            .addTextArea((text) => {
+                text
+                    .setValue(this.plugin.settings.syncOnlyRegEx)
+                    .setPlaceholder("\\.md$|\\.txt")
+                    .onChange(async (value) => {
+                        let isValidRegExp = false;
+                        try {
+                            new RegExp(value);
+                            isValidRegExp = true;
+                        } catch (_) {
+                            // NO OP.
+                        }
+                        if (isValidRegExp || value.trim() == "") {
+                            this.plugin.settings.syncOnlyRegEx = value;
+                            await this.plugin.saveSettings();
+                        }
+                    })
+                return text;
+            }
+            );
+
+        new Setting(containerSyncSettingEl)
+            .setName("Chunk size")
+            .setDesc("Customize chunk size for binary files (0.1MBytes). This cannot be increased when using IBM Cloudant.")
+            .addText((text) => {
+                text.setPlaceholder("")
+                    .setValue(this.plugin.settings.customChunkSize + "")
+                    .onChange(async (value) => {
+                        let v = Number(value);
+                        if (isNaN(v) || v < 100) {
+                            v = 100;
+                        }
+                        this.plugin.settings.customChunkSize = v;
+                        await this.plugin.saveSettings();
+                    });
+                text.inputEl.setAttribute("type", "number");
+            });
+        new Setting(containerSyncSettingEl)
+            .setName("Read chunks online.")
+            .setDesc("If this option is enabled, LiveSync reads chunks online directly instead of replicating them locally. Increasing Custom chunk size is recommended.")
+            .addToggle((toggle) => {
+                toggle
+                    .setValue(this.plugin.settings.readChunksOnline)
+                    .onChange(async (value) => {
+                        this.plugin.settings.readChunksOnline = value;
+                        await this.plugin.saveSettings();
+                    })
+                return toggle;
+            }
+            );
         containerSyncSettingEl.createEl("h3", {
             text: sanitizeHTMLToDom(`Advanced settings`),
         });
```
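Both regular-expression text areas above only persist a pattern when it compiles as a `RegExp`, or when the field is blanked to clear the filter. That validation can be sketched on its own (hypothetical helper, not the plugin's API):

```typescript
// Accept a pattern if it is blank (which clears the filter)
// or compiles as a RegExp; reject anything that would throw later.
function isAcceptablePattern(value: string): boolean {
    if (value.trim() === "") return true;
    try {
        new RegExp(value);
        return true;
    } catch (_ex) {
        return false;
    }
}
```

Rejecting invalid patterns at input time matters because `isTargetFile` compiles the stored pattern on every check; a saved invalid pattern would make every file operation throw.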
`src/lib` — submodule updated: a49a096a6a...3ca623780c

`src/main.ts` (449 changes)
```diff
@@ -29,7 +29,7 @@ import { DocumentHistoryModal } from "./DocumentHistoryModal";
 
 
 
-import { clearAllPeriodic, clearAllTriggers, disposeMemoObject, id2path, memoIfNotExist, memoObject, path2id, retriveMemoObject, setTrigger } from "./utils";
+import { clearAllPeriodic, clearAllTriggers, clearTrigger, disposeMemoObject, id2path, memoIfNotExist, memoObject, path2id, retriveMemoObject, setTrigger } from "./utils";
 import { decrypt, encrypt } from "./lib/src/e2ee_v2";
 
 const isDebug = false;
@@ -48,7 +48,7 @@ const ICHeaderLength = ICHeader.length;
  * @param str ID
  * @returns
  */
-function isInteralChunk(str: string): boolean {
+function isInternalChunk(str: string): boolean {
     return str.startsWith(ICHeader);
 }
 function id2filenameInternalChunk(str: string): string {
```
@@ -174,6 +174,45 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.showHistory(target);
}
}
async pickFileForResolve() {
const pageLimit = 1000;
let nextKey = "";
const notes: { path: string, mtime: number }[] = [];
do {
const docs = await this.localDatabase.localDatabase.allDocs({ limit: pageLimit, startkey: nextKey, conflicts: true, include_docs: true });
nextKey = "";
for (const row of docs.rows) {
const doc = row.doc;
nextKey = `${row.id}\u{10ffff}`;
if (!("_conflicts" in doc)) continue;
if (isInternalChunk(row.id)) continue;
if (doc._deleted) continue;
if ("deleted" in doc && doc.deleted) continue;
if (doc.type == "newnote" || doc.type == "plain") {
// const docId = doc._id.startsWith("i:") ? doc._id.substring("i:".length) : doc._id;
notes.push({ path: id2path(doc._id), mtime: doc.mtime });
}
if (isChunk(nextKey)) {
// skip the chunk zone.
nextKey = CHeaderEnd;
}
}
} while (nextKey != "");
notes.sort((a, b) => b.mtime - a.mtime);
const notesList = notes.map(e => e.path);
if (notesList.length == 0) {
Logger("There are no conflicted documents", LOG_LEVEL.NOTICE);
return;
}
const target = await askSelectString(this.app, "File to view History", notesList);
if (target) {
if (isInternalChunk(target)) {
//NOP
} else {
await this.showIfConflicted(this.app.vault.getAbstractFileByPath(target) as TFile);
}
}
}
async onload() {
setLogger(this.addLog.bind(this)); // Logger moved to global.
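The `pickFileForResolve` loop above pages through the whole document id space by restarting each `allDocs` query at the last id seen plus `\u{10ffff}` (the highest code point), so the next page begins strictly after the last row. A minimal sketch of that keyset-pagination pattern, with an in-memory `allDocs` standing in for PouchDB's real API (the helper names here are illustrative, not from the plugin):

```typescript
type Row = { id: string };

// Stand-in for PouchDB allDocs({ startkey, limit }): rows sorted by id, starting at startkey.
function allDocs(ids: string[], startkey: string, limit: number): Row[] {
    return ids.filter((k) => k >= startkey).sort().slice(0, limit).map((id) => ({ id }));
}

// Collect every id, one page at a time, using `lastId + "\u{10ffff}"` as the next startkey.
function collectAll(ids: string[], pageLimit: number): string[] {
    const out: string[] = [];
    let nextKey = "";
    do {
        const rows = allDocs(ids, nextKey, pageLimit);
        nextKey = "";
        for (const row of rows) {
            nextKey = `${row.id}\u{10ffff}`;
            out.push(row.id);
        }
    } while (nextKey != "");
    return out;
}
```

An empty page leaves `nextKey` as `""`, which is what terminates the `do … while` loop, exactly as in the plugin code above.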
@@ -247,7 +286,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.watchVaultDelete = this.watchVaultDelete.bind(this);
this.watchVaultRename = this.watchVaultRename.bind(this);
this.watchWorkspaceOpen = debounce(this.watchWorkspaceOpen.bind(this), 1000, false);
this.watchWindowVisibility = debounce(this.watchWindowVisibility.bind(this), 1000, false);
this.watchOnline = debounce(this.watchOnline.bind(this), 500, false);
this.parseReplicationResult = this.parseReplicationResult.bind(this);
@@ -281,8 +321,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (this.settings.suspendFileWatching) {
Logger("'Suspend file watching' turned on. Are you sure this is what you intended? Every modification on the vault will be ignored.", LOG_LEVEL.NOTICE);
}
const isInitialized = await this.initializeDatabase();
if (!isInitialized) {
//TODO:stop all sync.
return false;
}
@@ -322,19 +362,19 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
const config = decodeURIComponent(setupURI.substring(configURIBase.length));
console.dir(config)
await setupWizard(config);
},
});
const setupWizard = async (confString: string) => {
try {
const oldConf = JSON.parse(JSON.stringify(this.settings));
const encryptingPassphrase = await askString(this.app, "Passphrase", "Passphrase for your settings", "");
if (encryptingPassphrase === false) return;
const newConf = await JSON.parse(await decrypt(confString, encryptingPassphrase));
if (newConf) {
const result = await askYesNo(this.app, "Importing LiveSync's conf, OK?");
if (result == "yes") {
const newSettingW = Object.assign({}, this.settings, newConf);
// stopping once.
this.localDatabase.closeReplication();
this.settings.suspendFileWatching = true;
@@ -398,7 +438,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
};
this.registerObsidianProtocolHandler("setuplivesync", async (conf: any) => {
await setupWizard(conf.settings);
});
this.addCommand({
id: "livesync-replicate",
@@ -409,7 +449,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
});
this.addCommand({
id: "livesync-dump",
name: "Dump information of this doc ",
editorCallback: (editor: Editor, view: MarkdownView) => {
this.localDatabase.getDBEntry(view.file.path, {}, true, false);
},
@@ -465,6 +505,13 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.showHistory(view.file);
},
});
this.addCommand({
id: "livesync-scan-files",
name: "Scan storage and database again",
callback: async () => {
await this.syncAllFiles(true)
}
})
this.triggerRealizeSettingSyncMode = debounce(this.triggerRealizeSettingSyncMode.bind(this), 1000);
this.triggerCheckPluginUpdate = debounce(this.triggerCheckPluginUpdate.bind(this), 3000);
@@ -492,6 +539,20 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
callback: () => {
this.fileHistory();
},
});
this.addCommand({
id: "livesync-conflictcheck",
name: "Pick a file to resolve conflict",
callback: () => {
this.pickFileForResolve();
},
})
this.addCommand({
id: "livesync-runbatch",
name: "Run pended batch processes",
callback: async () => {
await this.applyBatchChange();
},
})
}
@@ -532,7 +593,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
clearAllPeriodic();
clearAllTriggers();
window.removeEventListener("visibilitychange", this.watchWindowVisibility);
window.removeEventListener("online", this.watchOnline)
Logger("unloading plugin");
}
@@ -613,14 +675,26 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.registerEvent(this.app.vault.on("rename", this.watchVaultRename));
this.registerEvent(this.app.vault.on("create", this.watchVaultCreate));
this.registerEvent(this.app.workspace.on("file-open", this.watchWorkspaceOpen));
window.addEventListener("visibilitychange", this.watchWindowVisibility);
window.addEventListener("online", this.watchOnline);
}
watchOnline() {
this.watchOnlineAsync();
}
async watchOnlineAsync() {
// If some files were failed to retrieve, scan files again.
if (navigator.onLine && this.localDatabase.needScanning) {
this.localDatabase.needScanning = false;
await this.syncAllFiles();
}
}
watchWindowVisibility() {
this.watchWindowVisibilityAsync();
}
async watchWindowVisibilityAsync() {
if (this.settings.suspendFileWatching) return;
// if (this.suspended) return;
const isHidden = document.hidden;
@@ -665,6 +739,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
watchVaultCreate(file: TFile, ...args: any[]) {
if (!this.isTargetFile(file)) return;
if (this.settings.suspendFileWatching) return;
if (recentlyTouched(file)) {
return;
@@ -673,6 +748,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
watchVaultChange(file: TAbstractFile, ...args: any[]) {
if (!this.isTargetFile(file)) return;
if (!(file instanceof TFile)) {
return;
}
@@ -746,6 +822,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
watchVaultDelete(file: TAbstractFile) {
if (!this.isTargetFile(file)) return;
// When save is delayed, it should be cancelled.
this.batchFileChange = this.batchFileChange.filter((e) => e != file.path);
if (this.settings.suspendFileWatching) return;
@@ -777,6 +854,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
watchVaultRename(file: TAbstractFile, oldFile: any) {
if (!this.isTargetFile(file)) return;
if (this.settings.suspendFileWatching) return;
this.watchVaultRenameAsync(file, oldFile).then(() => { });
}
@@ -846,32 +924,32 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (this.settings && !this.settings.showVerboseLog && level == LOG_LEVEL.VERBOSE) {
return;
}
const vaultName = this.getVaultName();
const timestamp = new Date().toLocaleString();
const messageContent = typeof message == "string" ? message : message instanceof Error ? `${message.name}:${message.message}` : JSON.stringify(message, null, 2);
const newMessage = timestamp + "->" + messageContent;
this.logMessage = [].concat(this.logMessage).concat([newMessage]).slice(-100);
console.log(vaultName + ":" + newMessage);
this.setStatusBarText(null, messageContent.substring(0, 30));
// if (message instanceof Error) {
// console.trace(message);
// }
if (level >= LOG_LEVEL.NOTICE) {
if (!key) key = messageContent;
if (key in this.notifies) {
// @ts-ignore
const isShown = this.notifies[key].notice.noticeEl?.isShown()
if (!isShown) {
this.notifies[key].notice = new Notice(messageContent, 0);
}
clearTimeout(this.notifies[key].timer);
if (key == messageContent) {
this.notifies[key].count++;
this.notifies[key].notice.setMessage(`(${this.notifies[key].count}):${messageContent}`);
} else {
this.notifies[key].notice.setMessage(`${messageContent}`);
}
this.notifies[key].timer = setTimeout(() => {
@@ -884,7 +962,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
}, 5000);
} else {
const notify = new Notice(messageContent, 0);
this.notifies[key] = {
count: 0,
notice: notify,
@@ -898,8 +976,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (this.addLogHook != null) this.addLogHook();
}
async ensureDirectory(fullPath: string) {
const pathElements = fullPath.split("/");
pathElements.pop();
let c = "";
for (const v of pathElements) {
@@ -909,7 +987,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} catch (ex) {
// basically skip exceptions.
if (ex.message && ex.message == "Folder already exists.") {
// especially this message is.
} else {
Logger("Folder Create Error");
Logger(ex);
@@ -924,6 +1002,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (shouldBeIgnored(pathSrc)) {
return;
}
if (!this.isTargetFile(pathSrc)) return;
const doc = await this.localDatabase.getDBEntry(pathSrc, { rev: docEntry._rev });
if (doc === false) return;
const msg = `DB -> STORAGE (create${force ? ",force" : ""},${doc.datatype}) `;
@@ -937,14 +1017,14 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
await this.ensureDirectory(path);
try {
const newFile = await this.app.vault.createBinary(normalizePath(path), bin, {
ctime: doc.ctime,
mtime: doc.mtime,
});
this.batchFileChange = this.batchFileChange.filter((e) => e != newFile.path);
Logger(msg + path);
touch(newFile);
this.app.vault.trigger("create", newFile);
} catch (ex) {
Logger(msg + "ERROR, Could not write: " + path, LOG_LEVEL.NOTICE);
Logger(ex, LOG_LEVEL.VERBOSE);
@@ -957,14 +1037,14 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
await this.ensureDirectory(path);
try {
const newFile = await this.app.vault.create(normalizePath(path), doc.data, {
ctime: doc.ctime,
mtime: doc.mtime,
});
this.batchFileChange = this.batchFileChange.filter((e) => e != newFile.path);
Logger(msg + path);
touch(newFile);
this.app.vault.trigger("create", newFile);
} catch (ex) {
Logger(msg + "ERROR, Could not parse: " + path + "(" + doc.datatype + ")", LOG_LEVEL.NOTICE);
Logger(ex, LOG_LEVEL.VERBOSE);
@@ -975,6 +1055,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
async deleteVaultItem(file: TFile | TFolder) {
if (!this.isTargetFile(file)) return;
const dir = file.parent;
if (this.settings.trashInsteadDelete) {
await this.app.vault.trash(file, false);
@@ -996,9 +1077,10 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (shouldBeIgnored(pathSrc)) {
return;
}
if (!this.isTargetFile(pathSrc)) return;
if (docEntry._deleted || docEntry.deleted) {
//basically pass.
//but if there are no docs left, delete file.
// This occurs not only when files are deleted, but also when conflicts are resolved.
// We have to check no other revisions are left.
const lastDocs = await this.localDatabase.getDBEntry(pathSrc);
if (lastDocs === false) {
await this.deleteVaultItem(file);
@@ -1006,7 +1088,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
// it perhaps delete some revisions.
// may be we have to reload this
await this.pullFile(pathSrc, null, true);
Logger(`delete skipped:${lastDocs._id}`, LOG_LEVEL.VERBOSE);
}
return;
}
@@ -1064,7 +1146,37 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
}
queuedEntries: EntryBody[] = [];
handleDBChanged(change: EntryBody) {
// If queued same file, cancel previous one.
this.queuedEntries.remove(this.queuedEntries.find(e => e._id == change._id));
// If the file is opened, we have to apply immediately
const af = app.workspace.getActiveFile();
if (af && af.path == id2path(change._id)) {
return this.handleDBChangedAsync(change);
}
this.queuedEntries.push(change);
if (this.queuedEntries.length > 50) {
clearTrigger("dbchanged");
this.execDBchanged();
}
setTrigger("dbchanged", 500, () => this.execDBchanged());
}
async execDBchanged() {
await runWithLock("dbchanged", false, async () => {
const w = [...this.queuedEntries];
this.queuedEntries = [];
Logger(`Applying ${w.length} files`);
for (const entry of w) {
Logger(`Applying ${entry._id} (${entry._rev}) change...`, LOG_LEVEL.VERBOSE);
await this.handleDBChangedAsync(entry);
Logger(`Applied ${entry._id} (${entry._rev}) change...`);
}
}
);
}
async handleDBChangedAsync(change: EntryBody) {
const targetFile = this.app.vault.getAbstractFileByPath(id2path(change._id));
if (targetFile == null) {
if (change._deleted || change.deleted) {
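The `handleDBChanged`/`execDBchanged` pair above is a trailing-edge debounce with a size cap: incoming changes accumulate until 500 ms of quiet (or until more than 50 are queued), then get flushed in one batch. A minimal generic sketch of that policy, assuming plain timers in place of the plugin's `setTrigger`/`clearTrigger` helpers (the `BatchQueue` name and its parameters are illustrative; the plugin also deduplicates queued entries by `_id`, which is omitted here):

```typescript
class BatchQueue<T> {
    private queued: T[] = [];
    private timer: ReturnType<typeof setTimeout> | null = null;
    constructor(private flush: (items: T[]) => void, private delayMs = 500, private cap = 50) { }
    push(item: T) {
        this.queued.push(item);
        // Restart the trailing timer on every push ...
        if (this.timer) clearTimeout(this.timer);
        // ... but flush immediately once the queue grows past the cap.
        if (this.queued.length > this.cap) return this.run();
        this.timer = setTimeout(() => this.run(), this.delayMs);
    }
    private run() {
        const items = this.queued;
        this.queued = [];
        if (items.length) this.flush(items);
    }
}
```

The cap keeps a long replication burst from deferring writes indefinitely, while the timer coalesces bursts of changes to the same vault into a single pass.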
@@ -1113,36 +1225,48 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
}
}
procInternalFiles: string[] = [];
async execInternalFile() {
await runWithLock("execinternal", false, async () => {
const w = [...this.procInternalFiles];
this.procInternalFiles = [];
Logger(`Applying hidden ${w.length} files change...`);
await this.syncInternalFilesAndDatabase("pull", false, false, w);
Logger(`Applying hidden ${w.length} files changed`);
});
}
procInternalFile(filename: string) {
this.procInternalFiles.push(filename);
setTrigger("procInternal", 500, async () => {
await this.execInternalFile();
});
}
procQueuedFiles() {
this.saveQueuedFiles();
for (const queue of this.queuedFiles) {
if (queue.done) continue;
const now = new Date().getTime();
if (queue.missingChildren.length == 0) {
queue.done = true;
if (isInternalChunk(queue.entry._id)) {
//system file
const filename = id2path(id2filenameInternalChunk(queue.entry._id));
// await this.syncInternalFilesAndDatabase("pull", false, false, [filename])
this.procInternalFile(filename);
}
if (isValidPath(id2path(queue.entry._id))) {
this.handleDBChanged(queue.entry);
}
} else if (now > queue.timeout) {
if (!queue.warned) Logger(`Timed out: ${queue.entry._id} could not collect ${queue.missingChildren.length} chunks. plugin keeps watching, but you have to check the file after the replication.`, LOG_LEVEL.NOTICE);
queue.warned = true;
continue;
}
}
this.queuedFiles = this.queuedFiles.filter((e) => !e.done);
this.saveQueuedFiles();
}
parseIncomingChunk(chunk: PouchDB.Core.ExistingDocument<EntryDoc>) {
const now = new Date().getTime();
let isNewFileCompleted = false;
@@ -1165,8 +1289,9 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (isNewFileCompleted) this.procQueuedFiles();
}
async parseIncomingDoc(doc: PouchDB.Core.ExistingDocument<EntryBody>) {
if (!this.isTargetFile(id2path(doc._id))) return;
const skipOldFile = this.settings.skipOlderFilesOnSync && false; //patched temporary.
if ((!isInternalChunk(doc._id)) && skipOldFile) {
const info = this.app.vault.getAbstractFileByPath(id2path(doc._id));
if (info && info instanceof TFile) {
@@ -1185,9 +1310,11 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
missingChildren: [] as string[],
timeout: now + this.chunkWaitTimeout,
};
// If `Read chunks online` is enabled, retrieve chunks from the remote CouchDB directly.
if ((!this.settings.readChunksOnline) && "children" in doc) {
const c = await this.localDatabase.localDatabase.allDocs({ keys: doc.children, include_docs: false });
const missing = c.rows.filter((e) => "error" in e).map((e) => e.key);
// fetch from remote
if (missing.length > 0) Logger(`${doc._id}(${doc._rev}) Queued (waiting ${missing.length} items)`, LOG_LEVEL.VERBOSE);
newQueue.missingChildren = missing;
this.queuedFiles.push(newQueue);
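`parseIncomingDoc` above determines which of a note's chunk ids have not yet replicated by querying `allDocs` with `keys:` and keeping the rows that come back carrying an `error` field. A stand-in sketch of that lookup, with a `Map` playing the role of the local PouchDB (`findMissingChunks` is an illustrative helper, not a plugin function):

```typescript
// Rows for absent keys come back from allDocs with an `error` property;
// here presence in the Map simulates "the chunk document exists locally".
function findMissingChunks(local: Map<string, unknown>, children: string[]): string[] {
    return children.filter((key) => !local.has(key));
}
```

A non-empty result parks the file on the queue with those ids as `missingChildren`, to be re-checked as chunks arrive.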
@@ -1466,10 +1593,10 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
Logger("Initializing", LOG_LEVEL.NOTICE, "syncAll");
}
const filesStorage = this.app.vault.getFiles().filter(e => this.isTargetFile(e));
const filesStorageName = filesStorage.map((e) => e.path);
const wf = await this.localDatabase.localDatabase.allDocs();
const filesDatabase = wf.rows.filter((e) => !isChunk(e.id) && !isPluginChunk(e.id) && e.id != "obsydian_livesync_version").filter(e => isValidPath(e.id)).map((e) => id2path(e.id)).filter(e => this.isTargetFile(e));
const isInitialized = await (this.localDatabase.kvDB.get<boolean>("initialized")) || false;
// Make chunk bigger if it is the initial scan. There must be non-active docs.
if (filesDatabase.length == 0 && !isInitialized) {
@@ -1598,7 +1725,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (ex.code && ex.code == "ENOENT") {
//NO OP.
} else {
Logger(`error while delete folder:${folder.path}`, LOG_LEVEL.NOTICE);
Logger(ex);
}
}
@@ -1801,6 +1928,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
async pullFile(filename: string, fileList?: TFile[], force?: boolean, rev?: string, waitForReady = true) {
const targetFile = this.app.vault.getAbstractFileByPath(id2path(filename));
if (!this.isTargetFile(id2path(filename))) return;
if (targetFile == null) {
//have to create;
const doc = await this.localDatabase.getDBEntry(filename, rev ? { rev: rev } : null, false, waitForReady);
@@ -1876,6 +2004,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
async updateIntoDB(file: TFile, initialScan?: boolean) {
if (!this.isTargetFile(file)) return;
if (shouldBeIgnored(file.path)) {
return;
}
@@ -1930,6 +2059,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
async deleteFromDB(file: TFile) {
if (!this.isTargetFile(file)) return;
const fullpath = file.path;
Logger(`deleteDB By path:${fullpath}`);
await this.deleteFromDBbyPath(fullpath);
@@ -2199,7 +2329,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
return result;
}
async storeInternalFileToDatabase(file: InternalFileInfo, forceWrite = false) {
const id = filename2idInternalChunk(path2id(file.path));
const contentBin = await this.app.vault.adapter.readBinary(file.path);
const content = await arrayBufferToBase64(contentBin);
@@ -2241,7 +2371,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
});
}
async deleteInternalFileOnDatabase(filename: string, forceWrite = false) {
const id = filename2idInternalChunk(path2id(filename));
const mtime = new Date().getTime();
await runWithLock("file-" + id, false, async () => {
@@ -2297,7 +2427,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
c += "/";
}
}
async extractInternalFileFromDatabase(filename: string, force = false) {
const isExists = await this.app.vault.adapter.exists(filename);
const id = filename2idInternalChunk(path2id(filename));
@@ -2328,7 +2458,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const content = await arrayBufferToBase64(contentBin);
if (content == fileOnDB.data && !force) {
// Logger(`STORAGE <-- DB:$(unknown): skipped (hidden) Not changed`, LOG_LEVEL.VERBOSE);
return true;
}
await this.app.vault.adapter.writeBinary(filename, base64ToArrayBuffer(fileOnDB.data), { mtime: fileOnDB.mtime, ctime: fileOnDB.ctime });
Logger(`STORAGE <-- DB:$(unknown): written (hidden, overwrite${force ? ", force" : ""})`);
@@ -2353,8 +2483,46 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
confirmPopup: WrappedNotice = null;
async resolveConflictOnInternalFiles() {
// Scan all conflicted internal files
const docs = await this.localDatabase.localDatabase.allDocs({ startkey: ICHeader, endkey: ICHeaderEnd, conflicts: true, include_docs: true });
for (const row of docs.rows) {
const doc = row.doc;
if (!("_conflicts" in doc)) continue;
if (isInternalChunk(row.id)) {
await this.resolveConflictOnInternalFile(row.id);
}
}
}
async resolveConflictOnInternalFile(id: string): Promise<boolean> {
// Retrieve data
const doc = await this.localDatabase.localDatabase.get(id, { conflicts: true });
// If there is no conflict, return with false.
if (!("_conflicts" in doc)) return false;
if (doc._conflicts.length == 0) return false;
Logger(`Hidden file conflicetd:${id2filenameInternalChunk(id)}`);
const revA = doc._rev;
const revB = doc._conflicts[0];
const revBdoc = await this.localDatabase.localDatabase.get(id, { rev: revB });
// determine which revision sould been deleted.
// simply check modified time
const mtimeA = ("mtime" in doc && doc.mtime) || 0;
const mtimeB = ("mtime" in revBdoc && revBdoc.mtime) || 0;
// Logger(`Revisions:${new Date(mtimeA).toLocaleString} and ${new Date(mtimeB).toLocaleString}`);
// console.log(`mtime:${mtimeA} - ${mtimeB}`);
const delRev = mtimeA < mtimeB ? revA : revB;
// delete older one.
await this.localDatabase.localDatabase.remove(id, delRev);
Logger(`Older one has been deleted:${id2filenameInternalChunk(id)}`);
// check the file again
return this.resolveConflictOnInternalFile(id);
}
//TODO: Tidy up. Even though it is experimental feature, So dirty...
async syncInternalFilesAndDatabase(direction: "push" | "pull" | "safe", showMessage: boolean, files: InternalFileInfo[] | false = false, targetFiles: string[] | false = false) {
await this.resolveConflictOnInternalFiles();
const logLevel = showMessage ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO;
Logger("Scanning hidden files.", logLevel, "sync_internal");
const ignorePatterns = this.settings.syncInternalFilesIgnorePatterns.toLocaleLowerCase()
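`resolveConflictOnInternalFile` above resolves a hidden-file conflict last-writer-wins: it compares the modified times of the winning revision and the first conflicting revision, deletes the older one, then recurses until no conflicts remain. The decision itself reduces to this helper (a sketch; `Rev` and `pickRevisionToDelete` are illustrative names, and revisions are plain objects here rather than PouchDB documents):

```typescript
type Rev = { rev: string; mtime: number };

// Delete the older revision; on a tie the conflicting revision loses,
// mirroring `mtimeA < mtimeB ? revA : revB` in the code above.
function pickRevisionToDelete(current: Rev, conflict: Rev): string {
    return current.mtime < conflict.mtime ? current.rev : conflict.rev;
}
```

Because only one conflicting revision is removed per pass, the recursion is what clears documents that have accumulated several conflict branches.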
@@ -2409,73 +2577,51 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
|
||||
|
||||
const fileOnStorage = files.find(e => e.path == filename);
|
||||
const fileOnDatabase = filesOnDB.find(e => e._id == filename2idInternalChunk(id2path(filename)));
|
||||
// TODO: Fix this somehow smart.
|
||||
let proc: Promise<void> | null;
|
||||
|
||||
if (fileOnStorage && fileOnDatabase) {
|
||||
// Both => Synchronize
|
||||
const cache = filename in caches ? caches[filename] : { storageMtime: 0, docMtime: 0 };
|
||||
if (fileOnDatabase.mtime == cache.docMtime && fileOnStorage.mtime == cache.storageMtime) {
|
||||
continue;
|
||||
}
|
||||
const nw = compareMTime(fileOnStorage.mtime, fileOnDatabase.mtime);
|
||||
if (nw == 0) continue;
|
||||
|
||||
if (nw > 0) {
|
||||
proc = (async (fileOnStorage) => {
|
||||
await this.storeInternaFileToDatabase(fileOnStorage);
|
||||
cache.docMtime = fileOnDatabase.mtime;
|
||||
cache.storageMtime = fileOnStorage.mtime;
|
||||
caches[filename] = cache;
|
||||
})(fileOnStorage);
|
||||
|
||||
}
|
||||
if (nw < 0) {
|
||||
proc = (async (filename) => {
|
||||
if (await this.extractInternaFileFromDatabase(filename)) {
|
||||
cache.docMtime = fileOnDatabase.mtime;
|
||||
cache.storageMtime = fileOnStorage.mtime;
|
||||
caches[filename] = cache;
|
||||
countUpdatedFolder(filename);
|
||||
}
|
||||
})(filename);
|
||||
|
||||
}
|
||||
} else if (!fileOnStorage && fileOnDatabase) {
|
||||
if (direction == "push") {
|
||||
if (fileOnDatabase.deleted) {
|
||||
// await this.storeInternaFileToDatabase(fileOnStorage);
|
||||
} else {
|
||||
proc = (async () => {
|
||||
                    await this.deleteInternaFileOnDatabase(filename);
                })();
            }
        } else if (direction == "pull") {
            proc = (async () => {
                if (await this.extractInternaFileFromDatabase(filename)) {
                    countUpdatedFolder(filename);
                }
            })();
        } else if (direction == "safe") {
            if (fileOnDatabase.deleted) {
                // await this.storeInternaFileToDatabase(fileOnStorage);
            } else {
                proc = (async () => {
                    if (await this.extractInternaFileFromDatabase(filename)) {
                        countUpdatedFolder(filename);
                    }
                })();
            }
        }
    } else if (fileOnStorage && !fileOnDatabase) {
        proc = (async () => {
            await this.storeInternaFileToDatabase(fileOnStorage);
        })();
    } else {
        throw new Error("Invalid state on hidden file sync");
        // Something corrupted?

    const addProc = (p: () => Promise<void>): Promise<unknown> => {
        return p();
    }
    if (proc) p.add(proc);
    const cache = filename in caches ? caches[filename] : { storageMtime: 0, docMtime: 0 };

    p.add(addProc(async () => {
        if (fileOnStorage && fileOnDatabase) {
            // Both => Synchronize
            if (fileOnDatabase.mtime == cache.docMtime && fileOnStorage.mtime == cache.storageMtime) {
                return;
            }
            const nw = compareMTime(fileOnStorage.mtime, fileOnDatabase.mtime);
            if (nw > 0) {
                await this.storeInternalFileToDatabase(fileOnStorage);
            }
            if (nw < 0) {
                // Skip if no extraction was performed.
                if (!await this.extractInternalFileFromDatabase(filename)) return;
            }
            // If the process updated successfully, or the file contents are the same, update the cache.
            cache.docMtime = fileOnDatabase.mtime;
            cache.storageMtime = fileOnStorage.mtime;
            caches[filename] = cache;
            countUpdatedFolder(filename);
        } else if (!fileOnStorage && fileOnDatabase) {
            if (direction == "push") {
                if (fileOnDatabase.deleted) return;
                await this.deleteInternalFileOnDatabase(filename);
            } else if (direction == "pull") {
                if (await this.extractInternalFileFromDatabase(filename)) {
                    countUpdatedFolder(filename);
                }
            } else if (direction == "safe") {
                if (fileOnDatabase.deleted) return;
                if (await this.extractInternalFileFromDatabase(filename)) {
                    countUpdatedFolder(filename);
                }
            }
        } else if (fileOnStorage && !fileOnDatabase) {
            await this.storeInternalFileToDatabase(fileOnStorage);
        } else {
            throw new Error("Invalid state on hidden file sync");
            // Something corrupted?
        }
    }));
    await p.wait(limit);
}
await p.all();
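The rewritten branch above decides what to do for a file that exists on both sides from the two modification times and the cached pair. A minimal sketch of that decision follows; `compareMTime` is not shown in this hunk, so a plain three-way numeric comparison is assumed here:

```javascript
// Hedged sketch of the mtime-based decision in the hunk above.
// `compareMTime` is not shown in the diff; a simple ordering is assumed.
function compareMTime(a, b) {
    return a < b ? -1 : a > b ? 1 : 0;
}

// Returns the action the sync loop would take for a file present
// both on storage and in the database.
function decideAction(storageMtime, docMtime, cache) {
    // Neither side changed since the last scan: nothing to do.
    if (docMtime === cache.docMtime && storageMtime === cache.storageMtime) return "skip";
    const nw = compareMTime(storageMtime, docMtime);
    if (nw > 0) return "store";   // storage is newer: push into the database
    if (nw < 0) return "extract"; // database is newer: write out to storage
    return "cache-only";          // same mtime: only refresh the cache entry
}

console.log(decideAction(200, 100, { storageMtime: 0, docMtime: 0 })); // → "store"
```

Whatever the outcome, the real code then refreshes the cached mtime pair so the next scan can skip unchanged files cheaply.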
@@ -2579,4 +2725,13 @@ export default class ObsidianLiveSyncPlugin extends Plugin {

        Logger(`Hidden files scanned: ${filesChanged} files had been modified`, logLevel, "sync_internal");
    }

    isTargetFile(file: string | TAbstractFile) {
        if (file instanceof TFile) {
            return this.localDatabase.isTargetFile(file.path);
        } else if (typeof file == "string") {
            return this.localDatabase.isTargetFile(file);
        }
    }

}
@@ -40,8 +40,8 @@ export const connectRemoteCouchDBWithSetting = (settings: RemoteDBSettings, isMo

 const connectRemoteCouchDB = async (uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> => {
     if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
-    if (uri.toLowerCase() != uri) return "Remote URI and database name cound not contain capital letters.";
-    if (uri.indexOf(" ") !== -1) return "Remote URI and database name cound not contain spaces.";
+    if (uri.toLowerCase() != uri) return "Remote URI and database name could not contain capital letters.";
+    if (uri.indexOf(" ") !== -1) return "Remote URI and database name could not contain spaces.";
     let authHeader = "";
     if (auth.username && auth.password) {
         const utf8str = String.fromCharCode.apply(null, new TextEncoder().encode(`${auth.username}:${auth.password}`));
@@ -225,3 +225,55 @@ export const checkSyncInfo = async (db: PouchDB.Database): Promise<boolean> => {
        }
    }
};


export async function putDesignDocuments(db: PouchDB.Database) {
    type DesignDoc = {
        _id: string;
        _rev: string;
        ver: number;
        filters: {
            default: string,
            push: string,
            pull: string,
        };
    }
    const design: DesignDoc = {
        "_id": "_design/replicate",
        "_rev": undefined as string | undefined,
        "ver": 2,
        "filters": {
            "default": function (doc: any, req: any) {
                return !("remote" in doc && doc.remote);
            }.toString(),
            "push": function (doc: any, req: any) {
                return true;
            }.toString(),
            "pull": function (doc: any, req: any) {
                return !(doc.type && doc.type == "leaf");
            }.toString(),
        }
    }

    // We can use the filter on replication : filter: 'replicate/default',

    try {
        const w = await db.get<DesignDoc>(design._id);
        if (w.ver < design.ver) {
            design._rev = w._rev;
            //@ts-ignore
            await db.put(design);
            return true;
        }
    } catch (ex) {
        if (ex.status && ex.status == 404) {
            delete design._rev;
            //@ts-ignore
            await db.put(design);
            return true;
        } else {
            Logger("Could not make design documents", LOG_LEVEL.INFO);
        }
    }
    return false;
}
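The three filter bodies stored in `_design/replicate` are plain predicates over documents. A small sketch of their behaviour follows; the predicates are taken verbatim from the design document above, while the sample documents are hypothetical:

```javascript
// The three predicates from the `_design/replicate` document above.
const filters = {
    // default: skip documents explicitly marked as remote-only.
    default: (doc) => !("remote" in doc && doc.remote),
    // push: replicate everything outward.
    push: (doc) => true,
    // pull: do not pull chunk bodies ("leaf" documents);
    // they can be read online on demand instead.
    pull: (doc) => !(doc.type && doc.type == "leaf"),
};

// As the comment in the hunk notes, a filter is referenced by name when
// replicating, e.g. { filter: "replicate/default" }.
const note = { _id: "note.md", type: "plain" }; // hypothetical sample
const chunk = { _id: "h:abc", type: "leaf" };   // hypothetical sample
const marker = { _id: "m", remote: true };      // hypothetical sample

console.log(filters.pull(note), filters.pull(chunk)); // → true false
```

Because the `pull` filter drops `leaf` documents, replication transfers only metadata; this is what backs the `Read chunks online` option described in the changelog below.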
updates.md
@@ -1,3 +1,23 @@
### 0.14.1
- The target selection filter was implemented.
  Now we can set which files are synchronised by regular expressions.
- We can configure the size of chunks.
  We can use larger chunks to improve performance.
  (This feature can not be used with IBM Cloudant)
- Read chunks online.
  Now we can synchronise only metadata and retrieve chunks on demand. It reduces local database size and the time taken for replication.
- Added this note.
- Use local chunks in preference to remote ones if present.

#### Recommended configuration for Self-hosted CouchDB
- Set chunk size to around 100 to 250 (10MB - 25MB per chunk)
- *Set batch size to 100 and batch limit to 20 (0.14.2)*
- Be sure to keep `Read chunks online` checked.
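The recommendation above maps a chunk-size setting of 100 to roughly 10MB and 250 to roughly 25MB, which suggests each unit of the setting corresponds to about 100KB. A sketch of that inferred relation; the 100KB-per-unit constant is an assumption read off those two data points, not taken from the source:

```javascript
// Assumption: one unit of the chunk-size setting ≈ 100KB, inferred from the
// release note's "100 to 250 (10MB - 25MB per chunk)" mapping.
const ASSUMED_BYTES_PER_UNIT = 100 * 1000;

function approxChunkBytes(chunkSizeSetting) {
    return chunkSizeSetting * ASSUMED_BYTES_PER_UNIT;
}

console.log(approxChunkBytes(100)); // → 10000000 (≈10MB)
```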
#### Minors
- 0.14.2 Fixed an issue with retrieving files when synchronisation had been interrupted or failed.
- 0.14.3 New test items have been added to `Check database configuration`.

### 0.13.0

- The metadata of deleted files will be kept in the database by default. If you want to delete it as in previous versions, please turn on `Delete metadata of deleted files.`. And, if you have upgraded from an older version, please ensure every device has been upgraded.
@@ -9,4 +29,9 @@

#### Minors
- 0.13.1 Fixed conflict resolution.
- 0.13.2 Fixed file deletion failures.
- 0.13.4
  - Now, we can synchronise hidden files that conflicted on each device.
  - We can search for conflicting docs.
  - Pending processes can now be run at any time.
  - Performance improved when synchronising large numbers of files at once.