mirror of https://github.com/vrtmrz/obsidian-livesync.git
synced 2026-02-22 20:18:48 +00:00

Compare commits: forprivacy...0.17.7
16 Commits
| Author | SHA1 | Date |
|---|---|---|
| | c7db8592c6 | |
| | fc3617d9f9 | |
| | 34c1b040db | |
| | 6b85aecafe | |
| | 4dabadd5ea | |
| | 0619c96c48 | |
| | b0f612b61c | |
| | 81caad8602 | |
| | f5e28b5e1c | |
| | 0c206226b1 | |
| | 1ad5dcc1cc | |
| | a512566e5b | |
| | 02de82af46 | |
| | 840e03a2d3 | |
| | 96b676caf3 | |
| | a8219de375 | |
README.md (32 changed lines)

@@ -2,7 +2,7 @@
[Japanese docs](./README_ja.md) [Chinese docs](./README_cn.md).

Self-hosted LiveSync is a community implemented synchronization plugin.
Self-hosted LiveSync is a community-implemented synchronization plugin.
A self-hosted or purchased CouchDB acts as the intermediate server. Available on every obsidian-compatible platform.

Note: It has no compatibility with the official "Obsidian Sync".

@@ -16,19 +16,19 @@ Before installing or upgrading LiveSync, please back your vault up.
- Visual conflict resolver included.
- Bidirectional synchronization between devices nearly in real-time
- You can use CouchDB or its compatibles like IBM Cloudant.
- End-to-End encryption supported.
- End-to-End encryption is supported.
- Plugin synchronization(Beta)
- Receive WebClip from [obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf) (End-to-End encryption will not be applicable.)

Useful for researchers, engineers and developers with a need to keep their notes fully self-hosted for security reasons. Or just anyone who would like the peace of mind knowing that their notes are fully private.
Useful for researchers, engineers and developers with a need to keep their notes fully self-hosted for security reasons. Or just anyone who would like the peace of mind of knowing that their notes are fully private.

## IMPORTANT NOTICE

- Do not use in conjunction with another synchronization solution (including iCloud, Obsidian Sync). Before enabling this plugin, make sure to disable all the other synchronization methods to avoid content corruption or duplication. If you want to synchronize to two or more services, do them one by one and never enable two synchronization methods at the same time.
- Do not enable this plugin with another synchronization solution at the same time (including iCloud and Obsidian Sync). Before enabling this plugin, make sure to disable all the other synchronization methods to avoid content corruption or duplication. If you want to synchronize to two or more services, do them one by one and never enable two synchronization methods at the same time.
This includes not putting your vault inside a cloud-synchronized folder (eg. an iCloud folder or Dropbox folder)
- This is a synchronization plugin. Not a backup solutions. Do not rely on this for backup.
- This is a synchronization plugin. Not a backup solution. Do not rely on this for backup.
- If the device's storage runs out, database corruption may happen.
- Hidden files or any other invisible files wouldn't be kept in the database, thus won't be synchronized. (**and may also get deleted**)
- Hidden files or any other invisible files wouldn't be kept in the database, and thus won't be synchronized. (**and may also get deleted**)

## How to use

@@ -38,7 +38,7 @@ First, get your database ready. IBM Cloudant is preferred for testing. Or you ca
1. [Setup IBM Cloudant](docs/setup_cloudant.md)
2. [Setup your CouchDB](docs/setup_own_server.md)

Note: More information about alternative hosting methods needed! Currently, [using fly.io](https://github.com/vrtmrz/obsidian-livesync/discussions/85) is being discussed.
Note: More information about alternative hosting methods is needed! Currently, [using fly.io](https://github.com/vrtmrz/obsidian-livesync/discussions/85) is being discussed.

### Configure the plugin

@@ -46,13 +46,13 @@ See [Quick setup guide](doccs/../docs/quick_setup.md)

## Something looks corrupted...

Please open the configuration link again and Answer as below:
Please open the configuration link again and Answer below:
- If your local database looks corrupted (in other words, when your Obsidian getting weird even standalone.)
- Answer `No` to `Keep local DB?`
- If your remote database looks corrupted (in other words, when something happens while replicating)
- Answer `No` to `Keep remote DB?`

If you answered `No` to both, your databases will be rebuilt by the content on your device. And the remote database will lock out other devices. You have to synchronize all your devices again. (When this time, almost all your files should be synchronized with a timestamp. So you can use a existed vault).
If you answered `No` to both, your databases will be rebuilt by the content on your device. And the remote database will lock out other devices. You have to synchronize all your devices again. (When this time, almost all your files should be synchronized with a timestamp. So you can use an existing vault).

## Test Server

@@ -78,18 +78,18 @@ If you have deleted or renamed files, please wait until ⏳ icon disappeared.
## Hints
- If a folder becomes empty after a replication, it will be deleted by default. But you can toggle this behaviour. Check the [Settings](docs/settings.md).
- LiveSync mode drains more batteries in mobile devices. Periodic sync with some automatic sync is recommended.
- Mobile Obsidian can not connect to a non-secure (HTTP) or a locally-signed servers, even if the root certificate is installed on the device.
- Mobile Obsidian can not connect to non-secure (HTTP) or locally-signed servers, even if the root certificate is installed on the device.
- There are no 'exclude_folders' like configurations.
- While synchronizing, files are compared by their modification time and the older ones will be overwritten by the newer ones. Then plugin checks for conflicts and if a merge is needed, a dialog will open.
- Rarely, a file in the database could be corrupted. The plugin will not write to local storage when a file looks corrupted. If a local version of the file is on your device, the corruption could be fixed by editing the local file and synchronizing it. But if the file does not exist on any of your devices, then it can not be rescued. In this case you can delete these items from the settings dialog.
- To stop the boot up sequence (eg. for fixing problems on databases), you can put a `redflag.md` file at the root of your vault.
- Q: Database is growing, how can I shrink it down?
  A: each of the docs is saved with their past 100 revisions for detecting and resolving conflicts. Picturing that one device has been offline for a while, and comes online again. The device has to compare its notes with the remotely saved ones. If there exists a historic revision in which the note used to be identical, it could be updated safely (like git fast-forward). Even if that is not in revision histories, we only have to check the differences after the revision that both devices commonly have. This is like git's conflict resolving method. So, We have to make the database again like an enlarged git repo if you want to solve the root of the problem.
- And more technical Information are in the [Technical Information](docs/tech_info.md)
- Rarely, a file in the database could be corrupted. The plugin will not write to local storage when a file looks corrupted. If a local version of the file is on your device, the corruption could be fixed by editing the local file and synchronizing it. But if the file does not exist on any of your devices, then it can not be rescued. In this case, you can delete these items from the settings dialog.
- To stop the boot-up sequence (eg. for fixing problems on databases), you can put a `redflag.md` file at the root of your vault.
- Q: The database is growing, how can I shrink it down?
  A: each of the docs is saved with their past 100 revisions for detecting and resolving conflicts. Picturing that one device has been offline for a while, and comes online again. The device has to compare its notes with the remotely saved ones. If there exists a historic revision in which the note used to be identical, it could be updated safely (like git fast-forward). Even if that is not in revision histories, we only have to check the differences after the revision that both devices commonly have. This is like git's conflict-resolving method. So, We have to make the database again like an enlarged git repo if you want to solve the root of the problem.
- And more technical Information is in the [Technical Information](docs/tech_info.md)
- If you want to synchronize files without obsidian, you can use [filesystem-livesync](https://github.com/vrtmrz/filesystem-livesync).
- WebClipper is also available on Chrome Web Store:[obsidian-livesync-webclip](https://chrome.google.com/webstore/detail/obsidian-livesync-webclip/jfpaflmpckblieefkegjncjoceapakdf)

Repo is here: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip). (Docs are work in progress.)
Repo is here: [obsidian-livesync-webclip](https://github.com/vrtmrz/obsidian-livesync-webclip). (Docs are a work in progress.)

## License

docs/quick_setup.md

@@ -1,5 +1,5 @@
# Quick setup
The Setup wizard has been implemented since v0.15.0. This simplifies the initial set-up.
The Setup wizard has been implemented since v0.15.0. This simplifies the initial setup.

Note: The subsequent devices should be set up using the `Copy setup URI` and `Open setup URI`.

@@ -34,18 +34,18 @@ Enter the information in the database we have set up.



If End to End encryption is enabled, the possibility of a third party who does not know the Passphrase being able to read the contents of the Remote database in the event that they are leaked is reduced. So we strongly recommend to enable it.
If End to End encryption is enabled, the possibility of a third party who does not know the Passphrase being able to read the contents of the Remote database if they are leaked is reduced. So we strongly recommend enabling it.
Encryption is based on 256-bit AES-GCM.
This setting can be disabled if you are inside a closed network and it is clear that you will not be accessed by third parties.
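
As a rough illustration of what that means in practice, passphrase-based 256-bit AES-GCM with the Web Crypto API looks like the sketch below. This is not the plugin's `e2ee_v2` module; the function name and the fixed PBKDF2 iteration count are assumptions made for the example.

```typescript
// Illustrative sketch only, not the plugin's actual implementation:
// derive a 256-bit AES key from a passphrase and encrypt with AES-GCM.
async function encryptWithPassphrase(plainText: string, passphrase: string) {
    const enc = new TextEncoder();
    const salt = crypto.getRandomValues(new Uint8Array(16)); // random salt for key derivation
    const iv = crypto.getRandomValues(new Uint8Array(12));   // 96-bit nonce, recommended for GCM
    // Derive the key from the passphrase (the iteration count here is just for the example).
    const baseKey = await crypto.subtle.importKey("raw", enc.encode(passphrase), "PBKDF2", false, ["deriveKey"]);
    const key = await crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 100000, hash: "SHA-256" },
        baseKey,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt"]
    );
    // Without the passphrase (and therefore the derived key), the ciphertext cannot be read.
    const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, enc.encode(plainText));
    return { salt, iv, ciphertext };
}
```

Because the key is derived from the passphrase, a long passphrase is recommended, and anyone who obtains the remote database without it cannot read its contents.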
### Test database connectionとCheck database configuraion
### Test database connection and Check database configuration

Here we can check the status of the connection to the database and the database settings.



#### Test Database Connection
Check whether we can connect to the database. If it fails, there are a number of reasons, but once you have done the `Check database configuration`, check if it fails there too.
Check whether we can connect to the database. If it fails, there are several reasons, but once you have done the `Check database configuration`, check if it fails there too.

#### Check database configuration

@@ -64,11 +64,11 @@ Go to the Local Database configuration.
### Discard exist database and proceed
Discard the contents of the Remote database and go to the Local Database configuration.

## Local Database confiuration
## Local Database configuration



Configure the local database. If we already have a Vaults with Self-hosted LiveSync installed and having same directory name as currently we are setting up, please specify a different suffix than the Vault you have already set up here.
Configure the local database. If we already have a Vaults with Self-hosted LiveSync installed and having the same directory name as currently we are setting up, please specify a different suffix than the Vault you have already set up here.

## Miscellaneous
Finally, finish the miscellaneous configurations and select a preset for synchronisation.

docs/settings.md

@@ -17,14 +17,14 @@ If you feel something, please feel free to inform me.
| 🚑 | [Corrupted data](#corrupted-data) |

## Remote Database Configurations
Configure settings of synchronize server. If any synchronization is enabled, you can't edit this section. Please disable all synchronization to change.
Configure the settings of synchronize server. If any synchronization is enabled, you can't edit this section. Please disable all synchronization to change.

### URI
URI of CouchDB. In the case of Cloudant, It's "External Endpoint(preferred)".
**Do not end it up with a slash** when it doesn't contain the database name.

### Username
Your CouchDB's Username. With administrator's privilege is preferred.
Your CouchDB's Username. Administrator's privilege is preferred.

### Password
Your CouchDB's Password.

@@ -47,11 +47,11 @@ The passphrase to used as the key of encryption. Please use the long text.

### Apply
Set the End to End encryption enabled and its passphrase for use in replication.
If you change the passphrase of a existing database, overwriting the remote database is strongly recommended.
If you change the passphrase of an existing database, overwriting the remote database is strongly recommended.

### Overwrite remote database
Overwrite the remote database by the local database using the passphrase you applied.
Overwrite the remote database with the local database using the passphrase you applied.

### Rebuild
@@ -61,17 +61,17 @@ Rebuild remote and local databases with local files. It will delete all document
You can check the connection by clicking this button.

### Check database configuration
You can check and modify your CouchDB's configuration from here directly.
You can check and modify your CouchDB configuration from here directly.

### Lock remote database.
Other devices are banned from the database when you have locked the database.
If you have something troubled with other devices, you can protect the vault and remote database by your device.
If you have something troubled with other devices, you can protect the vault and remote database with your device.

## Local Database Configurations
"Local Database" is created inside your obsidian.

### Batch database update
Delay database update until raise replication, open another file, window visibility changed, or file events except for file modification.
Delay database update until raise replication, open another file, window visibility changes, or file events except for file modification.
This option can not be used with LiveSync at the same time.

@@ -81,23 +81,23 @@ If one device rebuilds or locks the remote database, every other device will be
### minimum chunk size and LongLine threshold
The configuration of chunk splitting.

Self-hosted LiveSync splits the note into chunks for efficient synchronization. This chunk should be longer than "Minimum chunk size".
Self-hosted LiveSync splits the note into chunks for efficient synchronization. This chunk should be longer than the "Minimum chunk size".

Specifically, the length of the chunk is determined by the following orders.

1. Find the nearest newline character, and if it is farther than LongLineThreshold, this piece becomes an independent chunk.

2. If not, find nearest to these items.
    1. Newline character
    2. Empty line (Windows style)
    3. Empty line (non-Windows style)
3. Compare the farther in these 3 positions and next "\[newline\]#" position, pick a shorter piece to as chunk.
2. If not, find the nearest to these items.
    1. A newline character
    2. An empty line (Windows style)
    3. An empty line (non-Windows style)
3. Compare the farther in these 3 positions and the next "newline\]#" position, and pick a shorter piece as a chunk.

This rule was made empirically from my dataset. If this rule acts as badly on your data. Please give me the information.

You can dump saved note structure to `Dump informations of this doc`. Replace every character to x except newline and "#" when sending information to me.
You can dump saved note structure to `Dump informations of this doc`. Replace every character with x except newline and "#" when sending information to me.

Default values are 20 letters and 250 letters.
The default values are 20 letters and 250 letters.
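
A deliberately simplified sketch of that splitting order is shown below; the real splitter lives in `lib/src` and differs in detail, and `splitIntoChunksSketch` is a name invented purely for this illustration.

```typescript
// Simplified illustration of the splitting heuristic described above.
// minChunkSize / longLineThreshold mirror the two settings (defaults: 20 and 250 letters).
function splitIntoChunksSketch(note: string, minChunkSize = 20, longLineThreshold = 250): string[] {
    const chunks: string[] = [];
    let rest = note;
    while (rest.length > 0) {
        const from = Math.min(minChunkSize, rest.length); // never cut before the minimum chunk size
        const nextNewline = rest.indexOf("\n", from);
        let cut: number;
        if (nextNewline < 0) {
            cut = rest.length;           // no newline left: the remainder is one chunk
        } else if (nextNewline > longLineThreshold) {
            cut = nextNewline + 1;       // a very long line becomes an independent chunk
        } else {
            // Otherwise prefer a nearby paragraph break or heading as the boundary.
            const candidates = [
                rest.indexOf("\r\n\r\n", from), // empty line (Windows style)
                rest.indexOf("\n\n", from),     // empty line (non-Windows style)
                rest.indexOf("\n#", from),      // next "[newline]#" (heading)
            ].filter((p) => p >= 0);
            cut = (candidates.length > 0 ? Math.min(...candidates) : nextNewline) + 1;
        }
        chunks.push(rest.slice(0, cut));
        rest = rest.slice(cut);
    }
    return chunks;
}
```

With the defaults, no chunk is cut before 20 characters, and a line running past 250 characters ends up as its own chunk.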

## General Settings

docs/setup_own_server.md

@@ -1,7 +1,7 @@
# Setup CouchDB to your server

## Install CouchDB and access from PC or Mac
## Install CouchDB and access from a PC or Mac

The easiest way to set up the CouchDB is using the [docker image]((https://hub.docker.com/_/couchdb)).

@@ -39,7 +39,7 @@ $ docker run --rm -it -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password -v /pat
*Remember to replace the path with the path to your local.ini*
Note: At this time, the file owner of local.ini became 5984:5984. It's the limitation docker image. please change the owner before editing local.ini again.

If you could confirm that Self-hosted LiveSync can sync with the server, launch docker image as background as you like.
If you could confirm that Self-hosted LiveSync can sync with the server, launch the docker image as a background as you like.

Example to run docker in detached mode:
```
@@ -47,10 +47,10 @@ $ docker run -d --restart always -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=passw
```
*Remember to replace the path with the path to your local.ini*

## Access from mobile device
## Access from a mobile device
If you want to access Self-hosted LiveSync from mobile devices, you need a valid SSL certificate.

### Testing from mobile
### Testing from a mobile
In the testing phase, [localhost.run](http://localhost.run/) or something like services is very useful.

example on using localhost.run)
@@ -94,6 +94,6 @@ Set the A record of your domain to point to your server, and host reverse proxy
Note: Mounting CouchDB on the top directory is not recommended.
Using Caddy is a handy way to serve the server with SSL automatically.

I have published [docker-compose.yml and ini files](https://github.com/vrtmrz/self-hosted-livesync-server) that launches Caddy and CouchDB at once. Please try it out.
I have published [docker-compose.yml and ini files](https://github.com/vrtmrz/self-hosted-livesync-server) that launch Caddy and CouchDB at once. Please try it out.

And, be sure to check the server log and be careful of malicious access.

manifest.json

@@ -1,7 +1,7 @@
{
    "id": "obsidian-livesync",
    "name": "Self-hosted LiveSync",
    "version": "0.16.8",
    "version": "0.17.7",
    "minAppVersion": "0.9.12",
    "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
    "author": "vorotamoroz",

package-lock.json (generated, 4 changed lines)

@@ -1,12 +1,12 @@
{
    "name": "obsidian-livesync",
    "version": "0.16.8",
    "version": "0.17.7",
    "lockfileVersion": 2,
    "requires": true,
    "packages": {
        "": {
            "name": "obsidian-livesync",
            "version": "0.16.8",
            "version": "0.17.7",
            "license": "MIT",
            "dependencies": {
                "diff-match-patch": "^1.0.5",

package.json

@@ -1,6 +1,6 @@
{
    "name": "obsidian-livesync",
    "version": "0.16.8",
    "version": "0.17.7",
    "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
    "main": "main.js",
    "type": "module",

@@ -52,7 +52,7 @@ export class LocalPouchDB extends LocalPouchDBBase {
    }

    async connectRemoteCouchDB(uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> {
    async connectRemoteCouchDB(uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | boolean, useDynamicIterationCount: boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> {
        if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
        if (uri.toLowerCase() != uri) return "Remote URI and database name could not contain capital letters.";
        if (uri.indexOf(" ") !== -1) return "Remote URI and database name could not contain spaces.";
@@ -155,7 +155,7 @@ export class LocalPouchDB extends LocalPouchDBBase {

        const db: PouchDB.Database<EntryDoc> = new PouchDB<EntryDoc>(uri, conf);
        if (passphrase && typeof passphrase === "string") {
            enableEncryption(db, passphrase);
            enableEncryption(db, passphrase, useDynamicIterationCount);
        }
        try {
            const info = await db.info();

@@ -1,7 +1,7 @@
import { App, PluginSettingTab, Setting, sanitizeHTMLToDom, RequestUrlParam, requestUrl, TextAreaComponent, MarkdownRenderer, stringifyYaml } from "obsidian";
import { DEFAULT_SETTINGS, LOG_LEVEL, ObsidianLiveSyncSettings, RemoteDBSettings } from "./lib/src/types";
import { path2id, id2path } from "./utils";
import { delay, versionNumberString2Number } from "./lib/src/utils";
import { delay, Semaphore, versionNumberString2Number } from "./lib/src/utils";
import { Logger } from "./lib/src/logger";
import { checkSyncInfo, isCloudantURI } from "./lib/src/utils_couchdb.js";
import { testCrypt } from "./lib/src/e2ee_v2";
@@ -297,10 +297,12 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    if (inWizard) {
                        this.plugin.settings.encrypt = value;
                        passphrase.setDisabled(!value);
                        dynamicIteration.setDisabled(!value);
                        await this.plugin.saveSettings();
                    } else {
                        this.plugin.settings.workingEncrypt = value;
                        passphrase.setDisabled(!value);
                        dynamicIteration.setDisabled(!value);
                        await this.plugin.saveSettings();
                    }
                })
@@ -325,11 +327,30 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                });
        passphrase.setDisabled(!this.plugin.settings.workingEncrypt);

        const dynamicIteration = new Setting(containerRemoteDatabaseEl)
            .setName("Use dynamic iteration count (experimental)")
            .setDesc("Balancing the encryption/decryption load against the length of the passphrase if toggled. (v0.17.5 or higher required)")
            .addToggle((toggle) => {
                toggle.setValue(this.plugin.settings.workingUseDynamicIterationCount)
                    .onChange(async (value) => {
                        if (inWizard) {
                            this.plugin.settings.useDynamicIterationCount = value;
                            await this.plugin.saveSettings();
                        } else {
                            this.plugin.settings.workingUseDynamicIterationCount = value;
                            await this.plugin.saveSettings();
                        }
                    });
            })
            .setClass("wizardHidden");
        dynamicIteration.setDisabled(!this.plugin.settings.workingEncrypt);

        const checkWorkingPassphrase = async (): Promise<boolean> => {
            const settingForCheck: RemoteDBSettings = {
                ...this.plugin.settings,
                encrypt: this.plugin.settings.workingEncrypt,
                passphrase: this.plugin.settings.workingPassphrase,
                useDynamicIterationCount: this.plugin.settings.workingUseDynamicIterationCount,
            };
            console.dir(settingForCheck);
            const db = await this.plugin.localDatabase.connectRemoteCouchDBWithSetting(settingForCheck, this.plugin.localDatabase.isMobile);
@@ -355,7 +376,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                Logger("WARNING! Your device would not support encryption.", LOG_LEVEL.NOTICE);
                return;
            }
            if (!(await checkWorkingPassphrase())) {
            if (!(await checkWorkingPassphrase()) && !sendToServer) {
                return;
            }
            if (!this.plugin.settings.workingEncrypt) {
@@ -368,6 +389,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
            this.plugin.settings.syncOnFileOpen = false;
            this.plugin.settings.encrypt = this.plugin.settings.workingEncrypt;
            this.plugin.settings.passphrase = this.plugin.settings.workingPassphrase;
            this.plugin.settings.useDynamicIterationCount = this.plugin.settings.workingUseDynamicIterationCount;

            await this.plugin.saveSettings();
            if (sendToServer) {
@@ -390,7 +412,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    .setButtonText("Apply")
                    .setWarning()
                    .setDisabled(false)
                    .setClass("sls-btn-right")
                    .onClick(async () => {
                        await applyEncryption(true);
                    })
@@ -400,7 +421,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    .setButtonText("Apply w/o rebuilding")
                    .setWarning()
                    .setDisabled(false)
                    .setClass("sls-btn-right")
                    .onClick(async () => {
                        await applyEncryption(false);
                    })
@@ -448,7 +468,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    .setButtonText("Send")
                    .setWarning()
                    .setDisabled(false)
                    .setClass("sls-btn-left")
                    .onClick(async () => {
                        await rebuildDB("remoteOnly");
                    })
@@ -463,7 +482,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    .setButtonText("Rebuild")
                    .setWarning()
                    .setDisabled(false)
                    .setClass("sls-btn-left")
                    .onClick(async () => {
                        await rebuildDB("rebuildBothByThisDevice");
                    })
@@ -738,7 +756,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    .setButtonText("Fetch")
                    .setWarning()
                    .setDisabled(false)
                    .setClass("sls-btn-left")
                    .onClick(async () => {
                        await rebuildDB("localOnly");
                    })
@@ -974,6 +991,24 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    await this.plugin.saveSettings();
                })
            );
        new Setting(containerSyncSettingEl)
            .setName("Disable sensible auto merging on markdown files")
            .setDesc("If this switch is turned on, a merge dialog will be displayed, even if the sensible-merge is possible automatically. (Turn on to previous behavior)")
            .addToggle((toggle) =>
                toggle.setValue(this.plugin.settings.disableMarkdownAutoMerge).onChange(async (value) => {
                    this.plugin.settings.disableMarkdownAutoMerge = value;
                    await this.plugin.saveSettings();
                })
            );
        new Setting(containerSyncSettingEl)
            .setName("Write documents after synchronization even if they have conflict")
            .setDesc("Turn on to previous behavior")
            .addToggle((toggle) =>
                toggle.setValue(this.plugin.settings.writeDocumentsIfConflicted).onChange(async (value) => {
                    this.plugin.settings.writeDocumentsIfConflicted = value;
                    await this.plugin.saveSettings();
                })
            );


        new Setting(containerSyncSettingEl)
@@ -1063,7 +1098,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
                    .setButtonText("Touch")
                    .setWarning()
                    .setDisabled(false)
                    .setClass("sls-btn-left")
                    .onClick(async () => {
                        const filesAll = await this.plugin.scanInternalFiles();
                        const targetFiles = await this.plugin.filterTargetFiles(filesAll);
@@ -1288,7 +1322,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {

        new Setting(containerHatchEl)
            .setName("Make report to inform the issue")
            .setDesc("Verify and repair all files and update database without restoring")
            .addButton((button) =>
                button
                    .setButtonText("Make report")
@@ -1368,21 +1401,28 @@ ${stringifyYaml(pluginConfig)}`;
                    .setDisabled(false)
                    .setWarning()
                    .onClick(async () => {
                        const semaphore = Semaphore(10);
                        const files = this.app.vault.getFiles();
                        Logger("Verify and repair all files started", LOG_LEVEL.NOTICE, "verify");
                        // const notice = NewNotice("", 0);
                        let i = 0;
                        for (const file of files) {
                            i++;
                            Logger(`Update into ${file.path}`);
                            Logger(`${i}/${files.length}\n${file.path}`, LOG_LEVEL.NOTICE, "verify");
                        const processes = files.map(e => (async (file) => {
                            const releaser = await semaphore.acquire(1, "verifyAndRepair");

                            try {
                                await this.plugin.updateIntoDB(file);
                                Logger(`Update into ${file.path}`);
                                await this.plugin.updateIntoDB(file, false, null, true);
                                i++;
                                Logger(`${i}/${files.length}\n${file.path}`, LOG_LEVEL.NOTICE, "verify");

                            } catch (ex) {
                                Logger("could not update:");
                                i++;
                                Logger(`Error while verifyAndRepair`, LOG_LEVEL.NOTICE);
                                Logger(ex);
                            } finally {
                                releaser();
                            }
                        }
                        )(e));
                        await Promise.all(processes);
                        Logger("done", LOG_LEVEL.NOTICE, "verify");
                    })
                );

src/lib (submodule, 2 changed lines)
Submodule src/lib updated: 85bb3556ba...bfad1f86d3
src/main.ts (342 changed lines)

@@ -1,5 +1,5 @@
import { debounce, Notice, Plugin, TFile, addIcon, TFolder, normalizePath, TAbstractFile, Editor, MarkdownView, PluginManifest, App, } from "obsidian";
import { diff_match_patch } from "diff-match-patch";
import { Diff, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch } from "diff-match-patch";

import { EntryDoc, LoadedEntry, ObsidianLiveSyncSettings, diff_check_result, diff_result_leaf, EntryBody, LOG_LEVEL, VER, DEFAULT_SETTINGS, diff_result, FLAGMD_REDFLAG, SYNCINFO_ID, InternalFileEntry } from "./lib/src/types";
import { PluginDataEntry, PERIODIC_PLUGIN_SWEEP, PluginList, DevicePluginList, InternalFileInfo } from "./types";
@@ -29,7 +29,7 @@ import { DocumentHistoryModal } from "./DocumentHistoryModal";


import { clearAllPeriodic, clearAllTriggers, clearTrigger, disposeMemoObject, id2path, memoIfNotExist, memoObject, path2id, retrieveMemoObject, setTrigger } from "./utils";
import { applyPatch, clearAllPeriodic, clearAllTriggers, clearTrigger, disposeMemoObject, generatePatchObj, id2path, isObjectMargeApplicable, isSensibleMargeApplicable, memoIfNotExist, memoObject, path2id, retrieveMemoObject, setTrigger, tryParseJSON } from "./utils";
import { decrypt, encrypt } from "./lib/src/e2ee_v2";

const isDebug = false;
@@ -418,7 +418,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
                delete setting[k];
            }
        }
        const encryptedSetting = encodeURIComponent(await encrypt(JSON.stringify(setting), encryptingPassphrase));
        const encryptedSetting = encodeURIComponent(await encrypt(JSON.stringify(setting), encryptingPassphrase, false));
        const uri = `${configURIBase}${encryptedSetting}`;
        await navigator.clipboard.writeText(uri);
        Logger("Setup URI copied to clipboard", LOG_LEVEL.NOTICE);
@@ -431,7 +431,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        const encryptingPassphrase = await askString(this.app, "Encrypt your settings", "Passphrase", "");
        if (encryptingPassphrase === false) return;
        const setting = { ...this.settings };
        const encryptedSetting = encodeURIComponent(await encrypt(JSON.stringify(setting), encryptingPassphrase));
        const encryptedSetting = encodeURIComponent(await encrypt(JSON.stringify(setting), encryptingPassphrase, false));
        const uri = `${configURIBase}${encryptedSetting}`;
        await navigator.clipboard.writeText(uri);
        Logger("Setup URI copied to clipboard", LOG_LEVEL.NOTICE);
@@ -457,7 +457,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        const oldConf = JSON.parse(JSON.stringify(this.settings));
        const encryptingPassphrase = await askString(this.app, "Passphrase", "Passphrase for your settings", "");
        if (encryptingPassphrase === false) return;
        const newConf = await JSON.parse(await decrypt(confString, encryptingPassphrase));
        const newConf = await JSON.parse(await decrypt(confString, encryptingPassphrase, false));
        if (newConf) {
            const result = await askYesNo(this.app, "Importing LiveSync's conf, OK?");
            if (result == "yes") {
@@ -709,7 +709,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        clearAllPeriodic();
        clearAllTriggers();
        window.removeEventListener("visibilitychange", this.watchWindowVisibility);
        window.removeEventListener("online", this.watchOnline)
        window.removeEventListener("online", this.watchOnline);
        Logger("unloading plugin");
    }

@@ -1365,13 +1365,30 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
            } else if (targetFile instanceof TFile) {
                const doc = change;
                const file = targetFile;
                await this.doc2storage_modify(doc, file);
                if (!this.settings.checkConflictOnlyOnOpen) {
                    this.queueConflictedCheck(file);
                } else {
                    const af = app.workspace.getActiveFile();
                    if (af && af.path == file.path) {
                const queueConflictCheck = () => {
                    if (!this.settings.checkConflictOnlyOnOpen) {
                        this.queueConflictedCheck(file);
                        return true;
                    } else {
                        const af = app.workspace.getActiveFile();
                        if (af && af.path == file.path) {
                            this.queueConflictedCheck(file);
                            return true;
                        }
                    }
                    return false;
                }
                if (this.settings.writeDocumentsIfConflicted) {
                    await this.doc2storage_modify(doc, file);
                    queueConflictCheck();
                } else {
                    const d = await this.localDatabase.getDBEntryMeta(id2path(change._id), { conflicts: true }, true);
                    if (d && !d._conflicts) {
                        await this.doc2storage_modify(doc, file);
                    } else {
                        if (!queueConflictCheck()) {
                            Logger(`${id2path(change._id)} is conflicted, write to the storage has been pended.`, LOG_LEVEL.NOTICE);
                        }
                    }
                }
            } else {
@@ -1954,6 +1971,193 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        }
        return false;
    }
    //TODO: TIDY UP
    async mergeSensibly(path: string, baseRev: string, currentRev: string, conflictedRev: string): Promise<Diff[] | false> {
        const baseLeaf = await this.getConflictedDoc(path, baseRev);
        const leftLeaf = await this.getConflictedDoc(path, currentRev);
        const rightLeaf = await this.getConflictedDoc(path, conflictedRev);
        let autoMerge = false;
        if (baseLeaf == false || leftLeaf == false || rightLeaf == false) {
            return false;
        }
        // diff between base and each revision
        const dmp = new diff_match_patch();
        const mapLeft = dmp.diff_linesToChars_(baseLeaf.data, leftLeaf.data);
        const diffLeftSrc = dmp.diff_main(mapLeft.chars1, mapLeft.chars2, false);
        dmp.diff_charsToLines_(diffLeftSrc, mapLeft.lineArray);
        const mapRight = dmp.diff_linesToChars_(baseLeaf.data, rightLeaf.data);
        const diffRightSrc = dmp.diff_main(mapRight.chars1, mapRight.chars2, false);
        dmp.diff_charsToLines_(diffRightSrc, mapRight.lineArray);
        function splitDiffPiece(src: Diff[]): Diff[] {
            const ret = [] as Diff[];
            do {
                const d = src.shift();
                const pieces = d[1].split(/([^\n]*\n)/).filter(f => f != "");
                if (typeof (d) == "undefined") {
                    break;
                }
                if (d[0] != DIFF_DELETE) {
                    ret.push(...(pieces.map(e => [d[0], e] as Diff)));
                }
                if (d[0] == DIFF_DELETE) {
                    const nd = src.shift();

                    if (typeof (nd) != "undefined") {
                        const piecesPair = nd[1].split(/([^\n]*\n)/).filter(f => f != "");
                        if (nd[0] == DIFF_INSERT) {
                            // it might be pair
                            for (const pt of pieces) {
                                ret.push([d[0], pt]);
                                const pairP = piecesPair.shift();
                                if (typeof (pairP) != "undefined") ret.push([DIFF_INSERT, pairP]);
                            }
                            ret.push(...(piecesPair.map(e => [nd[0], e] as Diff)));
                        } else {
                            ret.push(...(pieces.map(e => [d[0], e] as Diff)));
                            ret.push(...(piecesPair.map(e => [nd[0], e] as Diff)));

                        }
                    } else {
                        ret.push(...(pieces.map(e => [0, e] as Diff)));
                    }
                }
            } while (src.length > 0);
            return ret;
        }

        const diffLeft = splitDiffPiece(diffLeftSrc);
        const diffRight = splitDiffPiece(diffRightSrc);

        let rightIdx = 0;
        let leftIdx = 0;
        const merged = [] as Diff[];
        autoMerge = true;
        LOOP_MERGE:
        do {
            if (leftIdx >= diffLeft.length && rightIdx >= diffRight.length) {
                break LOOP_MERGE;
            }
            const leftItem = diffLeft[leftIdx] ?? [0, ""];
            const rightItem = diffRight[rightIdx] ?? [0, ""];
            leftIdx++;
            rightIdx++;
            // when completely same, leave it .
            if (leftItem[0] == DIFF_EQUAL && rightItem[0] == DIFF_EQUAL && leftItem[1] == rightItem[1]) {
                merged.push(leftItem);
                continue;
            }
            if (leftItem[0] == DIFF_DELETE && rightItem[0] == DIFF_DELETE && leftItem[1] == rightItem[1]) {
                // when deleted evenly,
                const nextLeftIdx = leftIdx;
                const nextRightIdx = rightIdx;
                const [nextLeftItem, nextRightItem] = [diffLeft[nextLeftIdx] ?? [0, ""], diffRight[nextRightIdx] ?? [0, ""]];
                if ((nextLeftItem[0] == DIFF_INSERT && nextRightItem[0] == DIFF_INSERT) && nextLeftItem[1] != nextRightItem[1]) {
                    //but next line looks like different
                    autoMerge = false;
                    break;
                } else {
                    merged.push(leftItem);
                    continue;
                }
            }
            // when inserted evenly
            if (leftItem[0] == DIFF_INSERT && rightItem[0] == DIFF_INSERT) {
                if (leftItem[1] == rightItem[1]) {
                    merged.push(leftItem);
                    continue;
                } else {
                    // sort by file date.
                    if (leftLeaf.mtime <= rightLeaf.mtime) {
                        merged.push(leftItem);
                        merged.push(rightItem);
                        continue;
                    } else {
                        merged.push(rightItem);
                        merged.push(leftItem);
                        continue;
                    }
                }

            }
            // when on inserting, index should be fixed again.
            if (leftItem[0] == DIFF_INSERT) {
                rightIdx--;
                merged.push(leftItem);
                continue;
            }
            if (rightItem[0] == DIFF_INSERT) {
                leftIdx--;
                merged.push(rightItem);
                continue;
            }
            // except insertion, the line should not be different.
            if (rightItem[1] != leftItem[1]) {
                //TODO: SHOULD BE PANIC.
                Logger(`MERGING PANIC:${leftItem[0]},${leftItem[1]} == ${rightItem[0]},${rightItem[1]}`, LOG_LEVEL.VERBOSE);
                autoMerge = false;
                break LOOP_MERGE;
            }
            if (leftItem[0] == DIFF_DELETE) {
                if (rightItem[0] == DIFF_EQUAL) {
                    merged.push(leftItem);
                    continue;
                } else {
                    //we cannot perform auto merge.
                    autoMerge = false;
                    break LOOP_MERGE;
                }
            }
            if (rightItem[0] == DIFF_DELETE) {
                if (leftItem[0] == DIFF_EQUAL) {
                    merged.push(rightItem);
                    continue;
                } else {
                    //we cannot perform auto merge.
                    autoMerge = false;
                    break LOOP_MERGE;
                }
            }
            Logger(`Weird condition:${leftItem[0]},${leftItem[1]} == ${rightItem[0]},${rightItem[1]}`, LOG_LEVEL.VERBOSE);
            // here is the exception
            break LOOP_MERGE;
        } while (leftIdx < diffLeft.length || rightIdx < diffRight.length);
        if (autoMerge) {
            Logger(`Sensibly merge available`, LOG_LEVEL.VERBOSE);
            return merged;
        } else {
            return false;
        }
    }

    async mergeObject(path: string, baseRev: string, currentRev: string, conflictedRev: string): Promise<string | false> {
        const baseLeaf = await this.getConflictedDoc(path, baseRev);
        const leftLeaf = await this.getConflictedDoc(path, currentRev);
        const rightLeaf = await this.getConflictedDoc(path, conflictedRev);
        if (baseLeaf == false || leftLeaf == false || rightLeaf == false) {
            return false;
        }
        const baseObj = { data: tryParseJSON(baseLeaf.data, {}) } as Record<string | number | symbol, any>;
        const leftObj = { data: tryParseJSON(leftLeaf.data, {}) } as Record<string | number | symbol, any>;
        const rightObj = { data: tryParseJSON(rightLeaf.data, {}) } as Record<string | number | symbol, any>;

        const diffLeft = generatePatchObj(baseObj, leftObj);
        const diffRight = generatePatchObj(baseObj, rightObj);
        const patches = [
            { mtime: leftLeaf.mtime, patch: diffLeft },
            { mtime: rightLeaf.mtime, patch: diffRight }
        ].sort((a, b) => a.mtime - b.mtime);
        let newObj = { ...baseObj };
        try {
            for (const patch of patches) {
                newObj = applyPatch(newObj, patch.patch);
            }
            return JSON.stringify(newObj.data);
        } catch (ex) {
            Logger("Could not merge object");
            Logger(ex, LOG_LEVEL.VERBOSE)
            return false;
        }
    }

    /**
     * Getting file conflicted status.
@@ -1966,9 +2170,56 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        if (test == null) return false;
        if (!test._conflicts) return false;
        if (test._conflicts.length == 0) return false;
        const conflicts = test._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
        if ((isSensibleMargeApplicable(path) || isObjectMargeApplicable(path)) && !this.settings.disableMarkdownAutoMerge) {
            const conflictedRev = conflicts[0];
            const conflictedRevNo = Number(conflictedRev.split("-")[0]);
            //Search
            const revFrom = (await this.localDatabase.localDatabase.get(id2path(path), { revs_info: true })) as unknown as LoadedEntry & PouchDB.Core.GetMeta;
            const commonBase = revFrom._revs_info.filter(e => e.status == "available" && Number(e.rev.split("-")[0]) < conflictedRevNo).first()?.rev ?? "";
            let p = undefined;
            if (commonBase) {
                if (isSensibleMargeApplicable(path)) {
                    const result = await this.mergeSensibly(path, commonBase, test._rev, conflictedRev);
                    if (result) {
                        p = result.filter(e => e[0] != DIFF_DELETE).map((e) => e[1]).join("");
                        // can be merged.
                        Logger(`Sensible merge:${path}`, LOG_LEVEL.INFO);
                    } else {
                        Logger(`Sensible merge is not applicable.`, LOG_LEVEL.VERBOSE);
                    }
                } else if (isObjectMargeApplicable(path)) {
                    // can be merged.
                    const result = await this.mergeObject(path, commonBase, test._rev, conflictedRev);
                    if (result) {
                        Logger(`Object merge:${path}`, LOG_LEVEL.INFO);
                        p = result;
                    } else {
                        Logger(`Object merge is not applicable.`, LOG_LEVEL.VERBOSE);
                    }
                }

                if (p != undefined) {
                    // remove conflicted revision.
                    await this.localDatabase.deleteDBEntry(path, { rev: conflictedRev });

                    const file = getAbstractFileByPath(path) as TFile;
                    if (file) {
                        await this.app.vault.modify(file, p);
                        await this.updateIntoDB(file);
                    } else {
                        const newFile = await this.app.vault.create(path, p);
                        await this.updateIntoDB(newFile);
                    }
                    await this.pullFile(path);
                    Logger(`Automatically merged (sensible) :${path}`, LOG_LEVEL.INFO);
                    return true;
                }
            }
        }
        // should be one or more conflicts;
        const leftLeaf = await this.getConflictedDoc(path, test._rev);
        const rightLeaf = await this.getConflictedDoc(path, test._conflicts[0]);
        const rightLeaf = await this.getConflictedDoc(path, conflicts[0]);
        if (leftLeaf == false) {
            // what's going on..
            Logger(`could not get current revisions:${path}`, LOG_LEVEL.NOTICE);
@@ -1976,7 +2227,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        }
        if (rightLeaf == false) {
            // Conflicted item could not load, delete this.
            await this.localDatabase.deleteDBEntry(path, { rev: test._conflicts[0] });
            await this.localDatabase.deleteDBEntry(path, { rev: conflicts[0] });
            await this.pullFile(path, null, true);
            Logger(`could not get old revisions, automatically used newer one:${path}`, LOG_LEVEL.NOTICE);
            return true;
@@ -2032,11 +2283,10 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        const toDelete = selected;
        const toKeep = conflictCheckResult.left.rev != toDelete ? conflictCheckResult.left.rev : conflictCheckResult.right.rev;
        if (toDelete == "") {
            //concat both,
            // write data,and delete both old rev.
            // concat both,
            // delete conflicted revision and write a new file, store it again.
            const p = conflictCheckResult.diff.map((e) => e[1]).join("");
            await this.localDatabase.deleteDBEntry(filename, { rev: conflictCheckResult.left.rev });
            await this.localDatabase.deleteDBEntry(filename, { rev: conflictCheckResult.right.rev });
            await this.localDatabase.deleteDBEntry(filename, { rev: testDoc._conflicts[0] });
            const file = getAbstractFileByPath(filename) as TFile;
            if (file) {
                await this.app.vault.modify(file, p);
@@ -2092,7 +2342,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
                    Logger(ex);
                }
            }
        }, 1000);
        }, 100);
    }

    async showIfConflicted(filename: string) {
@@ -2192,7 +2442,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {

    }

    async updateIntoDB(file: TFile, initialScan?: boolean, cache?: CacheData) {
    async updateIntoDB(file: TFile, initialScan?: boolean, cache?: CacheData, force?: boolean) {
        if (!this.isTargetFile(file)) return;
        if (shouldBeIgnored(file.path)) {
            return;
@@ -2234,15 +2484,24 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
            if (recentlyTouched(file)) {
                return true;
            }
            const old = await this.localDatabase.getDBEntry(fullPath, null, false, false);
            if (old !== false) {
                const oldData = { data: old.data, deleted: old._deleted || old.deleted, };
                const newData = { data: d.data, deleted: d._deleted || d.deleted };
                if (JSON.stringify(oldData) == JSON.stringify(newData)) {
                    Logger(msg + "Skipped (not changed) " + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
                    return true;
            try {
                const old = await this.localDatabase.getDBEntry(fullPath, null, false, false);
                if (old !== false) {
                    const oldData = { data: old.data, deleted: old._deleted || old.deleted, };
                    const newData = { data: d.data, deleted: d._deleted || d.deleted };
                    if (JSON.stringify(oldData) == JSON.stringify(newData)) {
                        Logger(msg + "Skipped (not changed) " + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
                        return true;
                    }
                    // d._rev = old._rev;
                }
                // d._rev = old._rev;
            } catch (ex) {
                if (force) {
                    Logger(msg + "Error, Could not check the diff for the old one." + (force ? "force writing." : "") + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
                } else {
                    Logger(msg + "Error, Could not check the diff for the old one." + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
                }
                return !force;
            }
            return false;
        });
@@ -2702,8 +2961,33 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
        if (!("_conflicts" in doc)) return false;
        if (doc._conflicts.length == 0) return false;
        Logger(`Hidden file conflicted:${id2filenameInternalChunk(id)}`);
        const conflicts = doc._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));

        const revA = doc._rev;
        const revB = doc._conflicts[0];
        const revB = conflicts[0];

        const conflictedRev = conflicts[0];
        const conflictedRevNo = Number(conflictedRev.split("-")[0]);
        //Search
        const revFrom = (await this.localDatabase.localDatabase.get(id, { revs_info: true })) as unknown as LoadedEntry & PouchDB.Core.GetMeta;
        const commonBase = revFrom._revs_info.filter(e => e.status == "available" && Number(e.rev.split("-")[0]) < conflictedRevNo).first().rev ?? "";
        const result = await this.mergeObject(id, commonBase, doc._rev, conflictedRev);
        if (result) {
            Logger(`Object merge:${id}`, LOG_LEVEL.INFO);
            const filename = id2filenameInternalChunk(id);
            const isExists = await this.app.vault.adapter.exists(filename);
            if (!isExists) {
                await this.ensureDirectoryEx(filename);
            }
            await this.app.vault.adapter.write(filename, result);
            const stat = await this.app.vault.adapter.stat(filename);
            await this.storeInternalFileToDatabase({ path: filename, ...stat });
            await this.extractInternalFileFromDatabase(filename);
            await this.localDatabase.localDatabase.remove(id, revB);
            return this.resolveConflictOnInternalFile(id);
        } else {
            Logger(`Object merge is not applicable.`, LOG_LEVEL.VERBOSE);
        }

        const revBDoc = await this.localDatabase.localDatabase.get(id, { rev: revB });
        // determine which revision should been deleted.

src/utils.ts (127 changed lines)

@@ -72,4 +72,129 @@ export function retrieveMemoObject<T>(key: string): T | false {
}
export function disposeMemoObject(key: string) {
    delete memos[key];
}
}

export function isSensibleMargeApplicable(path: string) {
    if (path.endsWith(".md")) return true;
    return false;
}
export function isObjectMargeApplicable(path: string) {
    if (path.endsWith(".canvas")) return true;
    if (path.endsWith(".json")) return true;
    return false;
}

export function tryParseJSON(str: string, fallbackValue?: any) {
    try {
        return JSON.parse(str);
    } catch (ex) {
        return fallbackValue;
    }
}

const MARK_OPERATOR = `\u{0001}`;
const MARK_DELETED = `${MARK_OPERATOR}__DELETED`;
const MARK_ISARRAY = `${MARK_OPERATOR}__ARRAY`;
const MARK_SWAPPED = `${MARK_OPERATOR}__SWAP`;

function unorderedArrayToObject(obj: Array<any>) {
    return obj.map(e => ({ [e.id as string]: e })).reduce((p, c) => ({ ...p, ...c }), {})
}
function objectToUnorderedArray(obj: object) {
    const entries = Object.entries(obj);
    if (entries.some(e => e[0] != e[1]?.id)) throw new Error("Item looks like not unordered array")
    return entries.map(e => e[1]);
}
function generatePatchUnorderedArray(from: Array<any>, to: Array<any>) {
    if (from.every(e => typeof (e) == "object" && ("id" in e)) && to.every(e => typeof (e) == "object" && ("id" in e))) {
        const fObj = unorderedArrayToObject(from);
        const tObj = unorderedArrayToObject(to);
        const diff = generatePatchObj(fObj, tObj);
        if (Object.keys(diff).length > 0) {
            return { [MARK_ISARRAY]: diff };
        } else {
            return {};
        }
    }
    return { [MARK_SWAPPED]: to };
}

export function generatePatchObj(from: Record<string | number | symbol, any>, to: Record<string | number | symbol, any>) {
    const entries = Object.entries(from);
    const tempMap = new Map<string | number | symbol, any>(entries);
    const ret = {} as Record<string | number | symbol, any>;
    const newEntries = Object.entries(to);
    for (const [key, value] of newEntries) {
        if (!tempMap.has(key)) {
            //New
            ret[key] = value;
            tempMap.delete(key);
        } else {
            //Exists
            const v = tempMap.get(key);
            if (typeof (v) !== typeof (value) || (Array.isArray(v) !== Array.isArray(value))) {
                //if type is not match, replace completely.
                ret[key] = { [MARK_SWAPPED]: value };
            } else {
                if (typeof (v) == "object" && typeof (value) == "object" && !Array.isArray(v) && !Array.isArray(value)) {
                    const wk = generatePatchObj(v, value);
                    if (Object.keys(wk).length > 0) ret[key] = wk;
                } else if (typeof (v) == "object" && typeof (value) == "object" && Array.isArray(v) && Array.isArray(value)) {
                    const wk = generatePatchUnorderedArray(v, value);
                    if (Object.keys(wk).length > 0) ret[key] = wk;
                } else if (typeof (v) != "object" && typeof (value) != "object") {
                    if (JSON.stringify(tempMap.get(key)) !== JSON.stringify(value)) {
                        ret[key] = value;
                    }
                } else {
                    if (JSON.stringify(tempMap.get(key)) !== JSON.stringify(value)) {
                        ret[key] = { [MARK_SWAPPED]: value };
                    }
                }
            }
            tempMap.delete(key);
        }
    }
    //Not used item, means deleted one
    for (const [key,] of tempMap) {
        ret[key] = MARK_DELETED
    }
    return ret;
}


export function applyPatch(from: Record<string | number | symbol, any>, patch: Record<string | number | symbol, any>) {
    const ret = from;
    const patches = Object.entries(patch);
    for (const [key, value] of patches) {
        if (value == MARK_DELETED) {
            delete ret[key];
            continue;
        }
        if (typeof (value) == "object") {
            if (MARK_SWAPPED in value) {
                ret[key] = value[MARK_SWAPPED];
                continue;
            }
            if (MARK_ISARRAY in value) {
                if (!(key in ret)) ret[key] = [];
                if (!Array.isArray(ret[key])) {
                    throw new Error("Patch target type is mismatched (array to something)");
                }
                const orgArrayObject = unorderedArrayToObject(ret[key]);
                const appliedObject = applyPatch(orgArrayObject, value[MARK_ISARRAY]);
                const appliedArray = objectToUnorderedArray(appliedObject);
                ret[key] = [...appliedArray];
            } else {
                if (!(key in ret)) {
                    ret[key] = value;
                    continue;
                }
                ret[key] = applyPatch(ret[key], value);
            }
        } else {
            ret[key] = value;
        }
    }
    return ret;
}
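
As a usage sketch (this snippet is not part of the repository), the two helpers combine into the three-way JSON merge that `mergeObject` in `src/main.ts` performs: each side becomes a patch against the common base revision, and the patches are applied in mtime order. The sample objects below are invented for the illustration.

```typescript
import { generatePatchObj, applyPatch } from "./utils";

// Hypothetical data: a common base and two divergent edits of the same JSON document.
const base   = { title: "note", tags: ["a"], nodes: [{ id: "n1", x: 0 }] };
const mine   = { title: "note (renamed)", tags: ["a"], nodes: [{ id: "n1", x: 0 }] };
const theirs = { title: "note", tags: ["a", "b"], nodes: [{ id: "n1", x: 10 }] };

// Express each side as a patch against the common base...
const patchMine = generatePatchObj(base, mine);
const patchTheirs = generatePatchObj(base, theirs);

// ...then apply both patches to a copy of the base, oldest mtime first.
let merged = JSON.parse(JSON.stringify(base));
merged = applyPatch(merged, patchMine);
merged = applyPatch(merged, patchTheirs);
// merged: { title: "note (renamed)", tags: ["a", "b"], nodes: [{ id: "n1", x: 10 }] }
```

Arrays whose elements carry an `id` (such as canvas nodes) are merged element-by-element; other arrays are swapped wholesale, which is why the later edit wins for `tags` here.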

styles.css (14 changed lines)

@@ -85,13 +85,6 @@

} */

.sls-btn-left {
    padding-right: 4px;
}

.sls-btn-right {
    padding-left: 4px;
}

.sls-header-button {
    margin-left: 2em;
@@ -106,7 +99,8 @@
}

.CodeMirror-wrap::before,
.cm-s-obsidian>.cm-editor::before {
.cm-s-obsidian>.cm-editor::before,
.canvas-wrapper::before {
    content: var(--slsmessage);
    text-align: right;
    white-space: pre-wrap;
@@ -122,6 +116,10 @@
    filter: grayscale(100%);
}

.canvas-wrapper::before {
    right: 48px;
}

.CodeMirror-wrap::before {
    right: 0px;
}

updates.md (60 changed lines)

@@ -1,3 +1,41 @@
### 0.17.0
- 0.17.0 has no surfaced changes but the design of saving chunks has been changed. They have compatibility but changing files after upgrading makes different chunks than before 0.16.x.
  Please rebuild databases once if you have been worried about storage usage.

- Improved:
  - Splitting markdown
  - Saving chunks

- Changed:
  - Chunk ID numbering rules

#### Minors
- 0.17.1
  - Fixed: Now we can verify and repair the database.
  - Refactored inside.

- 0.17.2
  - New feature
    - We can merge conflicted documents automatically if sensible.
  - Fixed
    - Writing to the storage will be pended while they have conflicts after replication.

- 0.17.3
  - Now we supported canvas! And conflicted JSON files are also synchronised with merging its content if they are obvious.

- 0.17.4
  - Canvases are now treated as a sort of plain text file. now we transfer only the metadata and chunks that have differences.

- 0.17.5 Now `read chunks online` had been fixed, and a new feature: `Use dynamic iteration count` to reduce the load on encryption/decryption.
  Note: `Use dynamic iteration count` is not compatible with earlier versions.
- 0.17.6 Now our renamed/deleted files have been surely deleted again.
- 0.17.7
  - Fixed:
    - Fixed merging issues.
    - Fixed button styling.
  - Changed:
    - Conflict checking on synchronising has been enabled for every note in default.

### 0.16.0
- Now hidden files need not be scanned. Changes will be detected automatically.
  - If you want it to back to its previous behaviour, please disable `Monitor changes to internal files`.
@@ -31,25 +69,3 @@

Note:
Before 0.16.5, LiveSync had some issues making chunks. In this case, synchronisation had became been always failing after a corrupted one should be made. After 0.16.6, the corrupted chunk is automatically detected. Sorry for troubling you but please do `rebuild everything` when this plug-in notified so.

### 0.15.0
- Outdated configuration items have been removed.
- Setup wizard has been implemented!

I appreciate for reviewing and giving me advice @Pouhon158!

#### Minors
- 0.15.1 Missed the stylesheet.
- 0.15.2 The wizard has been improved and documented!
- 0.15.3 Fixed the issue about locking/unlocking remote database while rebuilding in the wizard.
- 0.15.4 Fixed issues about asynchronous processing (e.g., Conflict check or hidden file detection)
- 0.15.5 Add new features for setting Self-hosted LiveSync up more easier.
- 0.15.6 File tracking logic has been refined.
- 0.15.7 Fixed bug about renaming file.
- 0.15.8 Fixed bug about deleting empty directory, weird behaviour on boot-sequence on mobile devices.
- 0.15.9 Improved chunk retrieving, now chunks are retrieved in batch on continuous requests.
- 0.15.10 Fixed:
  - The boot sequence has been corrected and now boots smoothly.
  - Auto applying of batch save will be processed earlier than before.

... To continue on to `updates_old.md`.