Compare commits

...

21 Commits

Author SHA1 Message Date
vorotamoroz
1ad5dcc1cc bump 2022-12-16 18:56:28 +09:00
vorotamoroz
a512566e5b New feature
- We can merge conflicted documents automatically if sensible.

Fixed:
- Writing to the storage is now deferred while documents have conflicts after replication.

Minor changes included.
2022-12-16 18:55:04 +09:00
vorotamoroz
02de82af46 bump 2022-12-06 18:03:31 +09:00
vorotamoroz
840e03a2d3 Fixed:
- Verifying and repairing the database works again.
2022-12-06 17:59:48 +09:00
vorotamoroz
96b676caf3 bump 2022-12-05 19:53:24 +09:00
vorotamoroz
a8219de375 Improved:
- Splitting markdown
- Saving chunks

Changed:
- Chunk ID numbering rules

Fixed:
- Just weeding (minor cleanup).
2022-12-05 19:37:24 +09:00
vorotamoroz
db3eb7e1a0 bump 2022-11-24 14:14:27 +09:00
vorotamoroz
50f51393fc upgrade lib. 2022-11-24 14:14:17 +09:00
vorotamoroz
8a04e332d6 Fix check warnings for max_document_size and max_http_request_size, as in #145 2022-11-23 15:41:09 +09:00
vorotamoroz
12ae17aa2f Merge pull request #145 from Bpazy/patch-1
Fix check warning for max_document_size, max_http_request_size
2022-11-23 15:37:32 +09:00
Ziyuan Han
657f12f966 Fix check warning 2022-11-23 14:12:50 +08:00
Ziyuan Han
15a7bed448 Fix check warning 2022-11-23 14:11:44 +08:00
vorotamoroz
420c3b94df bump 2022-11-23 10:34:04 +09:00
vorotamoroz
239c087132 Framework and dependencies upgraded. 2022-11-23 10:27:12 +09:00
vorotamoroz
d1a633c799 bump 2022-11-07 17:26:40 +09:00
vorotamoroz
1c07cd92fc - Fixed
- Automatic (temporary) batch size adjustment has been restored to work correctly.
  - Chunk splitting has been reverted to the previous behaviour so chunks are saved correctly.
- Improved
  - Corrupted chunks will be detected automatically.
  - Now on case-insensitive systems, `aaa.md` and `AAA.md` will be treated as the same file or path when applying changesets.
2022-11-07 17:26:33 +09:00
vorotamoroz
adc84d53b1 bump again 2022-10-27 17:43:00 +09:00
vorotamoroz
c3a762ceed bump 2022-10-27 17:42:15 +09:00
vorotamoroz
5945638633 Fixed:
- Conflict detection and merging of deleted files.
- Fixed wrong logs.
- Fixed redundant logs.

Implemented:
- Automatic deletion of old metadata.
2022-10-27 17:41:26 +09:00
vorotamoroz
331acd463d bump 2022-10-25 11:48:21 +09:00
vorotamoroz
9d4f41bbf9 Fixed failure of detection 2022-10-25 11:47:02 +09:00
13 changed files with 1606 additions and 1770 deletions


@@ -10,9 +10,11 @@ But some additional configurations are required in `local.ini` to use from Self-
```
[couchdb]
single_node=true
max_document_size = 50000000
[chttpd]
require_valid_user = true
max_http_request_size = 4294967296
[chttpd_auth]
require_valid_user = true
@@ -94,4 +96,4 @@ Using Caddy is a handy way to serve the server with SSL automatically.
I have published [docker-compose.yml and ini files](https://github.com/vrtmrz/self-hosted-livesync-server) that launches Caddy and CouchDB at once. Please try it out.
And, be sure to check the server log and be careful of malicious access.
And, be sure to check the server log and be careful of malicious access.


@@ -11,9 +11,11 @@
```
[couchdb]
single_node=true
max_document_size = 50000000
[chttpd]
require_valid_user = true
max_http_request_size = 4294967296
[chttpd_auth]
require_valid_user = true
@@ -92,4 +94,4 @@ Note: 不推荐将 CouchDB 挂载到根目录
提供了 [docker-compose.yml 和 ini 文件](https://github.com/vrtmrz/self-hosted-livesync-server) 可以同时启动 Caddy 和 CouchDB。
注意检查服务器日志,当心恶意访问。
注意检查服务器日志,当心恶意访问。


@@ -8,12 +8,14 @@ CouchDBを構築するには、[Dockerのイメージ](https://hub.docker.com/_/
```
[couchdb]
single_node=true
max_document_size = 50000000
[chttpd]
require_valid_user = true
[chttpd_auth]
require_valid_user = true
max_http_request_size = 4294967296
authentication_redirect = /_utils/session.html
[httpd]


@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.16.3",
"version": "0.17.2",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

package-lock.json (generated): 2725 lines changed

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
{
"name": "obsidian-livesync",
"version": "0.16.3",
"version": "0.17.2",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
@@ -13,34 +13,30 @@
"author": "vorotamoroz",
"license": "MIT",
"devDependencies": {
"@rollup/plugin-commonjs": "^18.0.0",
"@rollup/plugin-node-resolve": "^11.2.1",
"@rollup/plugin-typescript": "^8.2.1",
"@types/diff-match-patch": "^1.0.32",
"@types/pouchdb": "^6.4.0",
"@types/pouchdb-browser": "^6.1.3",
"@typescript-eslint/eslint-plugin": "^5.7.0",
"@typescript-eslint/parser": "^5.0.0",
"builtin-modules": "^3.2.0",
"esbuild": "0.13.12",
"esbuild-svelte": "^0.7.0",
"eslint": "^7.32.0",
"eslint-config-airbnb-base": "^14.2.1",
"eslint-plugin-import": "^2.25.2",
"@typescript-eslint/eslint-plugin": "^5.44.0",
"@typescript-eslint/parser": "^5.44.0",
"builtin-modules": "^3.3.0",
"esbuild": "0.15.15",
"esbuild-svelte": "^0.7.3",
"eslint": "^8.28.0",
"eslint-config-airbnb-base": "^15.0.0",
"eslint-plugin-import": "^2.26.0",
"obsidian": "^0.16.3",
"postcss": "^8.4.14",
"postcss-load-config": "^3.1.4",
"rollup": "^2.32.1",
"svelte": "^3.49.0",
"postcss": "^8.4.19",
"postcss-load-config": "^4.0.1",
"svelte": "^3.53.1",
"svelte-preprocess": "^4.10.7",
"tslib": "^2.2.0",
"typescript": "^4.2.4"
"tslib": "^2.4.1",
"typescript": "^4.9.3"
},
"dependencies": {
"diff-match-patch": "^1.0.5",
"esbuild": "0.13.12",
"esbuild-svelte": "^0.7.0",
"idb": "^7.0.2",
"esbuild": "0.15.15",
"esbuild-svelte": "^0.7.3",
"idb": "^7.1.1",
"xxhash-wasm": "^0.4.2"
}
}


@@ -1,31 +0,0 @@
import typescript from "@rollup/plugin-typescript";
import { nodeResolve } from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";
const isProd = process.env.BUILD === "production";
const banner = `/*
THIS IS A GENERATED/BUNDLED FILE BY ROLLUP
if you want to view the source visit the plugins github repository
*/
`;
export default {
input: "./src/main.ts",
output: {
dir: ".",
sourcemap: "inline",
sourcemapExcludeSources: isProd,
format: "cjs",
exports: "default",
banner,
},
external: ["obsidian"],
plugins: [
typescript({ exclude: ["pouchdb-browser.js", "pouchdb-browser-webpack"] }),
nodeResolve({
browser: true,
}),
commonjs(),
],
};


@@ -38,8 +38,8 @@ export class ConflictResolveModal extends Modal {
diff = diff.replace(/\n/g, "<br>");
div.innerHTML = diff;
const div2 = contentEl.createDiv("");
const date1 = new Date(this.result.left.mtime).toLocaleString();
const date2 = new Date(this.result.right.mtime).toLocaleString();
const date1 = new Date(this.result.left.mtime).toLocaleString() + (this.result.left.deleted ? " (Deleted)" : "");
const date2 = new Date(this.result.right.mtime).toLocaleString() + (this.result.right.deleted ? " (Deleted)" : "");
div2.innerHTML = `
<span class='deleted'>A:${date1}</span><br /><span class='added'>B:${date2}</span><br>
`;


@@ -35,7 +35,7 @@ export class LocalPouchDB extends LocalPouchDBBase {
last_successful_post = false;
getLastPostFailedBySize() {
return this.last_successful_post;
return !this.last_successful_post;
}
async fetchByAPI(request: RequestUrlParam): Promise<RequestUrlResponse> {
const ret = await requestUrl(request);
@@ -75,7 +75,7 @@ export class LocalPouchDB extends LocalPouchDBBase {
const method = opts.method ?? "GET";
if (opts.body) {
const opts_length = opts.body.toString().length;
if (opts_length > 1024 * 1024 * 10) {
if (opts_length > 1000 * 1000 * 10) {
// over 10MB
if (isCloudantURI(uri)) {
this.last_successful_post = false;
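The hunk above also switches the size guard from binary to decimal units. A small sketch of the distinction (the helper name is illustrative, not the plugin's API):

```typescript
// Hosted CouchDB services such as Cloudant express their 10 MB request
// limit in decimal megabytes, so the guard was changed from
// 1024 * 1024 (MiB) to 1000 * 1000 (MB).
const MiB = 1024 * 1024; // 1,048,576 bytes
const MB = 1000 * 1000;  // 1,000,000 bytes
const LIMIT = 10 * MB;

// Hypothetical helper mirroring the opts_length check above.
function exceedsPostLimit(bodyLength: number): boolean {
  return bodyLength > LIMIT;
}
```

Note that a 10 MiB body (10,485,760 bytes) passed the old check but trips the new one, which matches the service's accounting.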


@@ -1,7 +1,7 @@
import { App, PluginSettingTab, Setting, sanitizeHTMLToDom, RequestUrlParam, requestUrl, TextAreaComponent, MarkdownRenderer, stringifyYaml } from "obsidian";
import { DEFAULT_SETTINGS, LOG_LEVEL, ObsidianLiveSyncSettings, RemoteDBSettings } from "./lib/src/types";
import { path2id, id2path } from "./utils";
import { delay, versionNumberString2Number } from "./lib/src/utils";
import { delay, Semaphore, versionNumberString2Number } from "./lib/src/utils";
import { Logger } from "./lib/src/logger";
import { checkSyncInfo, isCloudantURI } from "./lib/src/utils_couchdb.js";
import { testCrypt } from "./lib/src/e2ee_v2";
@@ -813,6 +813,24 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
})
}
);
new Setting(containerGeneralSettingsEl)
.setName("Delete old metadata of deleted files on start-up")
.setClass("wizardHidden")
.setDesc("(Days passed; 0 to disable automatic deletion)")
.addText((text) => {
text.setPlaceholder("")
.setValue(this.plugin.settings.automaticallyDeleteMetadataOfDeletedFiles + "")
.onChange(async (value) => {
let v = Number(value);
if (isNaN(v)) {
v = 0;
}
this.plugin.settings.automaticallyDeleteMetadataOfDeletedFiles = v;
await this.plugin.saveSettings();
});
text.inputEl.setAttribute("type", "number");
});
addScreenElement("20", containerGeneralSettingsEl);
const containerSyncSettingEl = containerEl.createDiv();
@@ -956,6 +974,24 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
await this.plugin.saveSettings();
})
);
new Setting(containerSyncSettingEl)
.setName("Disable sensible auto merging on markdown files")
.setDesc("If this switch is turned on, a merge dialog will be displayed even if a sensible merge is possible automatically. (Turn on to restore the previous behavior)")
.addToggle((toggle) =>
toggle.setValue(this.plugin.settings.disableMarkdownAutoMerge).onChange(async (value) => {
this.plugin.settings.disableMarkdownAutoMerge = value;
await this.plugin.saveSettings();
})
);
new Setting(containerSyncSettingEl)
.setName("Write documents after synchronization even if they have conflict")
.setDesc("Turn on to restore the previous behavior")
.addToggle((toggle) =>
toggle.setValue(this.plugin.settings.writeDocumentsIfConflicted).onChange(async (value) => {
this.plugin.settings.writeDocumentsIfConflicted = value;
await this.plugin.saveSettings();
})
);
new Setting(containerSyncSettingEl)
@@ -1270,7 +1306,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
new Setting(containerHatchEl)
.setName("Make report to inform the issue")
.setDesc("Verify and repair all files and update database without restoring")
.addButton((button) =>
button
.setButtonText("Make report")
@@ -1350,21 +1385,28 @@ ${stringifyYaml(pluginConfig)}`;
.setDisabled(false)
.setWarning()
.onClick(async () => {
const semaphore = Semaphore(10);
const files = this.app.vault.getFiles();
Logger("Verify and repair all files started", LOG_LEVEL.NOTICE, "verify");
// const notice = NewNotice("", 0);
let i = 0;
for (const file of files) {
i++;
Logger(`Update into ${file.path}`);
Logger(`${i}/${files.length}\n${file.path}`, LOG_LEVEL.NOTICE, "verify");
const processes = files.map(e => (async (file) => {
const releaser = await semaphore.acquire(1, "verifyAndRepair");
try {
await this.plugin.updateIntoDB(file);
Logger(`Update into ${file.path}`);
await this.plugin.updateIntoDB(file, false, null, true);
i++;
Logger(`${i}/${files.length}\n${file.path}`, LOG_LEVEL.NOTICE, "verify");
} catch (ex) {
Logger("could not update:");
i++;
Logger(`Error while verifyAndRepair`, LOG_LEVEL.NOTICE);
Logger(ex);
} finally {
releaser();
}
}
)(e));
await Promise.all(processes);
Logger("done", LOG_LEVEL.NOTICE, "verify");
})
);
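The verify-and-repair loop above caps concurrency with `Semaphore(10)` from `lib/src/utils`, acquiring before each `updateIntoDB` and releasing in `finally`. A minimal counting semaphore in that spirit (an illustrative sketch, not the actual library export):

```typescript
// A counting semaphore: at most `limit` holders at once; further
// acquirers wait in FIFO order until a holder releases.
type Releaser = () => void;

function makeSemaphore(limit: number) {
  let active = 0;
  const waiting: (() => void)[] = [];
  const release: Releaser = () => {
    active--;
    const next = waiting.shift();
    if (next) next(); // hand the slot to the next waiter
  };
  return {
    acquire(): Promise<Releaser> {
      if (active < limit) {
        active++;
        return Promise.resolve(release);
      }
      return new Promise((resolve) => {
        waiting.push(() => { active++; resolve(release); });
      });
    },
  };
}
```

Pairing `acquire` with a `try { … } finally { releaser(); }` block, as the diff does, guarantees the slot is returned even when `updateIntoDB` throws.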

Submodule src/lib updated: d73b96ba2a...bf8ab8883d


@@ -1,5 +1,5 @@
import { debounce, Notice, Plugin, TFile, addIcon, TFolder, normalizePath, TAbstractFile, Editor, MarkdownView, PluginManifest, App, } from "obsidian";
import { diff_match_patch } from "diff-match-patch";
import { Diff, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch } from "diff-match-patch";
import { EntryDoc, LoadedEntry, ObsidianLiveSyncSettings, diff_check_result, diff_result_leaf, EntryBody, LOG_LEVEL, VER, DEFAULT_SETTINGS, diff_result, FLAGMD_REDFLAG, SYNCINFO_ID, InternalFileEntry } from "./lib/src/types";
import { PluginDataEntry, PERIODIC_PLUGIN_SWEEP, PluginList, DevicePluginList, InternalFileInfo } from "./types";
@@ -44,6 +44,16 @@ const ICHeaderEnd = "i;";
const ICHeaderLength = ICHeader.length;
const FileWatchEventQueueMax = 10;
function getAbstractFileByPath(path: string): TAbstractFile | null {
// Hidden API but so useful.
if ("getAbstractFileByPathInsensitive" in app.vault && (app.vault.adapter?.insensitive ?? false)) {
// @ts-ignore
return app.vault.getAbstractFileByPathInsensitive(path);
} else {
return app.vault.getAbstractFileByPath(path);
}
}
/**
 * Returns whether the given ID is an internal file chunk
* @param str ID
@@ -97,7 +107,7 @@ const askString = (app: App, title: string, key: string, placeholder: string): P
};
let touchedFiles: string[] = [];
function touch(file: TFile | string) {
const f = file instanceof TFile ? file : app.vault.getAbstractFileByPath(file) as TFile;
const f = file instanceof TFile ? file : getAbstractFileByPath(file) as TFile;
const key = `${f.path}-${f.stat.mtime}-${f.stat.size}`;
touchedFiles.unshift(key);
touchedFiles = touchedFiles.slice(0, 100);
@@ -147,7 +157,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
isRedFlagRaised(): boolean {
const redflag = this.app.vault.getAbstractFileByPath(normalizePath(FLAGMD_REDFLAG));
const redflag = getAbstractFileByPath(normalizePath(FLAGMD_REDFLAG));
if (redflag != null) {
return true;
}
@@ -174,7 +184,6 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
nextKey = `${row.id}\u{10ffff}`;
if (!("type" in doc)) continue;
if (doc.type == "newnote" || doc.type == "plain") {
// const docId = doc._id.startsWith("i:") ? doc._id.substring("i:".length) : doc._id;
notes.push({ path: id2path(doc._id), mtime: doc.mtime });
}
if (isChunk(nextKey)) {
@@ -203,8 +212,9 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
nextKey = `${row.id}\u{10ffff}`;
if (!("_conflicts" in doc)) continue;
if (isInternalChunk(row.id)) continue;
if (doc._deleted) continue;
if ("deleted" in doc && doc.deleted) continue;
// We have to check also deleted files.
// if (doc._deleted) continue;
// if ("deleted" in doc && doc.deleted) continue;
if (doc.type == "newnote" || doc.type == "plain") {
// const docId = doc._id.startsWith("i:") ? doc._id.substring("i:".length) : doc._id;
notes.push({ path: id2path(doc._id), mtime: doc.mtime });
@@ -226,11 +236,52 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (isInternalChunk(target)) {
//NOP
} else {
await this.showIfConflicted(this.app.vault.getAbstractFileByPath(target) as TFile);
await this.showIfConflicted(target);
}
}
}
async collectDeletedFiles() {
const pageLimit = 1000;
let nextKey = "";
const limitDays = this.settings.automaticallyDeleteMetadataOfDeletedFiles;
if (limitDays <= 0) return;
Logger(`Checking expired file history`);
const limit = Date.now() - (86400 * 1000 * limitDays);
const notes: { path: string, mtime: number, ttl: number, doc: PouchDB.Core.ExistingDocument<EntryDoc & PouchDB.Core.AllDocsMeta> }[] = [];
do {
const docs = await this.localDatabase.localDatabase.allDocs({ limit: pageLimit, startkey: nextKey, conflicts: true, include_docs: true });
nextKey = "";
for (const row of docs.rows) {
const doc = row.doc;
nextKey = `${row.id}\u{10ffff}`;
if (doc.type == "newnote" || doc.type == "plain") {
if (doc.deleted && (doc.mtime - limit) < 0) {
notes.push({ path: id2path(doc._id), mtime: doc.mtime, ttl: (doc.mtime - limit) / 1000 / 86400, doc: doc });
}
}
if (isChunk(nextKey)) {
// skip the chunk zone.
nextKey = CHeaderEnd;
}
}
} while (nextKey != "");
if (notes.length == 0) {
Logger("There are no old documents");
Logger(`Checking expired file history done`);
return;
}
for (const v of notes) {
Logger(`Deletion history expired: ${v.path}`);
const delDoc = v.doc;
delDoc._deleted = true;
// console.dir(delDoc);
await this.localDatabase.localDatabase.put(delDoc);
}
Logger(`Checking expired file history done`);
}
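`collectDeletedFiles()` above purges a deletion record once its `mtime` is older than the configured number of days. The cutoff arithmetic can be sketched as follows (the function name is illustrative, not the plugin's API):

```typescript
// One day in milliseconds, matching `86400 * 1000` in the diff.
const DAY_MS = 86400 * 1000;

// A deletion record expires once its mtime falls before the cutoff
// `now - limitDays`; a limit of 0 disables automatic deletion.
function isExpired(mtime: number, limitDays: number, now: number = Date.now()): boolean {
  if (limitDays <= 0) return false;
  return mtime < now - DAY_MS * limitDays;
}
```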
async onload() {
setLogger(this.addLog.bind(this)); // Logger moved to global.
Logger("loading plugin");
@@ -528,7 +579,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
id: "livesync-checkdoc-conflicted",
name: "Resolve if conflicted.",
editorCallback: async (editor: Editor, view: MarkdownView) => {
await this.showIfConflicted(view.file);
await this.showIfConflicted(view.file.path);
},
});
@@ -658,7 +709,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
clearAllPeriodic();
clearAllTriggers();
window.removeEventListener("visibilitychange", this.watchWindowVisibility);
window.removeEventListener("online", this.watchOnline)
window.removeEventListener("online", this.watchOnline);
Logger("unloading plugin");
}
@@ -921,7 +972,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (this.settings.syncOnFileOpen && !this.suspended) {
await this.replicate();
}
await this.showIfConflicted(file);
await this.showIfConflicted(file.path);
}
async applyBatchChange() {
@@ -1009,7 +1060,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
for (const i of newFiles) {
try {
const newFilePath = normalizePath(this.getFilePath(i));
const newFile = this.app.vault.getAbstractFileByPath(newFilePath);
const newFile = getAbstractFileByPath(newFilePath);
if (newFile instanceof TFile) {
Logger(`save ${newFile.path} into db`);
await this.updateIntoDB(newFile);
@@ -1170,7 +1221,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
touch(newFile);
this.app.vault.trigger("create", newFile);
} catch (ex) {
Logger(msg + "ERROR, Could not parse: " + path + "(" + doc.datatype + ")", LOG_LEVEL.NOTICE);
Logger(msg + "ERROR, Could not create: " + path + "(" + doc.datatype + ")", LOG_LEVEL.NOTICE);
Logger(ex, LOG_LEVEL.VERBOSE);
}
} else {
@@ -1237,7 +1288,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
await this.app.vault.modifyBinary(file, bin, { ctime: doc.ctime, mtime: doc.mtime });
// this.batchFileChange = this.batchFileChange.filter((e) => e != file.path);
Logger(msg + path);
const xf = this.app.vault.getAbstractFileByPath(file.path) as TFile;
const xf = getAbstractFileByPath(file.path) as TFile;
touch(xf);
this.app.vault.trigger("modify", xf);
} catch (ex) {
@@ -1254,7 +1305,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
await this.app.vault.modify(file, doc.data, { ctime: doc.ctime, mtime: doc.mtime });
Logger(msg + path);
// this.batchFileChange = this.batchFileChange.filter((e) => e != file.path);
const xf = this.app.vault.getAbstractFileByPath(file.path) as TFile;
const xf = getAbstractFileByPath(file.path) as TFile;
touch(xf);
this.app.vault.trigger("modify", xf);
} catch (ex) {
@@ -1304,7 +1355,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
async handleDBChangedAsync(change: EntryBody) {
const targetFile = this.app.vault.getAbstractFileByPath(id2path(change._id));
const targetFile = getAbstractFileByPath(id2path(change._id));
if (targetFile == null) {
if (change._deleted || change.deleted) {
return;
@@ -1314,13 +1365,30 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} else if (targetFile instanceof TFile) {
const doc = change;
const file = targetFile;
await this.doc2storage_modify(doc, file);
if (!this.settings.checkConflictOnlyOnOpen) {
this.queueConflictedCheck(file);
} else {
const af = app.workspace.getActiveFile();
if (af && af.path == file.path) {
const queueConflictCheck = () => {
if (!this.settings.checkConflictOnlyOnOpen) {
this.queueConflictedCheck(file);
return true;
} else {
const af = app.workspace.getActiveFile();
if (af && af.path == file.path) {
this.queueConflictedCheck(file);
return true;
}
}
return false;
}
if (this.settings.writeDocumentsIfConflicted) {
await this.doc2storage_modify(doc, file);
queueConflictCheck();
} else {
const d = await this.localDatabase.getDBEntryMeta(id2path(change._id), { conflicts: true })
if (d && !d._conflicts) {
await this.doc2storage_modify(doc, file);
} else {
if (!queueConflictCheck()) {
Logger(`${id2path(change._id)} is conflicted; writing to the storage has been deferred.`, LOG_LEVEL.NOTICE);
}
}
}
} else {
@@ -1420,7 +1488,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (!this.isTargetFile(id2path(doc._id))) return;
const skipOldFile = this.settings.skipOlderFilesOnSync && false; //patched temporary.
if ((!isInternalChunk(doc._id)) && skipOldFile) {
const info = this.app.vault.getAbstractFileByPath(id2path(doc._id));
const info = getAbstractFileByPath(id2path(doc._id));
if (info && info instanceof TFile) {
const localMtime = ~~((info as TFile).stat.mtime / 1000);
@@ -1727,10 +1795,18 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
Logger("Initializing", LOG_LEVEL.NOTICE, "syncAll");
}
await this.collectDeletedFiles();
const filesStorage = this.app.vault.getFiles().filter(e => this.isTargetFile(e));
const filesStorageName = filesStorage.map((e) => e.path);
const wf = await this.localDatabase.localDatabase.allDocs();
const filesDatabase = wf.rows.filter((e) => !isChunk(e.id) && !isPluginChunk(e.id) && e.id != "obsydian_livesync_version").filter(e => isValidPath(e.id)).map((e) => id2path(e.id)).filter(e => this.isTargetFile(e));
const filesDatabase = wf.rows.filter((e) =>
!isChunk(e.id) &&
!isPluginChunk(e.id) &&
e.id != "obsydian_livesync_version" &&
e.id != "_design/replicate"
)
.filter(e => isValidPath(e.id)).map((e) => id2path(e.id)).filter(e => this.isTargetFile(e));
const isInitialized = await (this.localDatabase.kvDB.get<boolean>("initialized")) || false;
// Make chunk bigger if it is the initial scan. There must be non-active docs.
if (filesDatabase.length == 0 && !isInitialized) {
@@ -1748,28 +1824,28 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.setStatusBarText(`UPDATE DATABASE`);
const runAll = async<T>(procedureName: string, objects: T[], callback: (arg: T) => Promise<void>) => {
const count = objects.length;
// const count = objects.length;
Logger(procedureName);
let i = 0;
// let i = 0;
const semaphore = Semaphore(10);
Logger(`${procedureName} exec.`);
// Logger(`${procedureName} exec.`);
if (!this.localDatabase.isReady) throw Error("Database is not ready!");
const processes = objects.map(e => (async (v) => {
const releaser = await semaphore.acquire(1, procedureName);
try {
await callback(v);
i++;
if (i % 50 == 0) {
const notify = `${procedureName} : ${i}/${count}`;
if (showingNotice) {
Logger(notify, LOG_LEVEL.NOTICE, "syncAll");
} else {
Logger(notify);
}
this.setStatusBarText(notify);
}
// i++;
// if (i % 50 == 0) {
// const notify = `${procedureName} : ${i}/${count}`;
// if (showingNotice) {
// Logger(notify, LOG_LEVEL.NOTICE, "syncAll");
// } else {
// Logger(notify);
// }
// this.setStatusBarText(notify);
// }
} catch (ex) {
Logger(`Error while ${procedureName}`, LOG_LEVEL.NOTICE);
Logger(ex);
@@ -1785,18 +1861,19 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
await runAll("UPDATE DATABASE", onlyInStorage, async (e) => {
Logger(`Update into ${e.path}`);
await this.updateIntoDB(e, initialScan);
});
if (!initialScan) {
await runAll("UPDATE STORAGE", onlyInDatabase, async (e) => {
const w = await this.localDatabase.getDBEntryMeta(e);
if (w) {
const w = await this.localDatabase.getDBEntryMeta(e, {}, true);
if (w && !(w.deleted || w._deleted)) {
Logger(`Check or pull from db:${e}`);
await this.pullFile(e, filesStorage, false, null, false);
Logger(`Check or pull from db:${e} OK`);
} else if (w) {
Logger(`Deletion history skipped: ${e}`, LOG_LEVEL.VERBOSE);
} else {
Logger(`entry not found, maybe deleted (it is normal behavior):${e}`);
Logger(`entry not found: ${e}`);
}
});
}
@@ -1872,7 +1949,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
// --> conflict resolving
async getConflictedDoc(path: string, rev: string): Promise<false | diff_result_leaf> {
try {
const doc = await this.localDatabase.getDBEntry(path, { rev: rev }, false, false);
const doc = await this.localDatabase.getDBEntry(path, { rev: rev }, false, false, true);
if (doc === false) return false;
let data = doc.data;
if (doc.datatype == "newnote") {
@@ -1881,6 +1958,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
data = doc.data;
}
return {
deleted: doc.deleted || doc._deleted,
ctime: doc.ctime,
mtime: doc.mtime,
rev: rev,
@@ -1893,6 +1971,163 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
return false;
}
//TODO: TIDY UP
async mergeSensibly(path: string, baseRev: string, currentRev: string, conflictedRev: string): Promise<Diff[] | false> {
const baseLeaf = await this.getConflictedDoc(path, baseRev);
const leftLeaf = await this.getConflictedDoc(path, currentRev);
const rightLeaf = await this.getConflictedDoc(path, conflictedRev);
let autoMerge = false;
if (baseLeaf == false || leftLeaf == false || rightLeaf == false) {
return false;
}
// diff between base and each revision
const dmp = new diff_match_patch();
const mapLeft = dmp.diff_linesToChars_(baseLeaf.data, leftLeaf.data);
const diffLeftSrc = dmp.diff_main(mapLeft.chars1, mapLeft.chars2, false);
dmp.diff_charsToLines_(diffLeftSrc, mapLeft.lineArray);
const mapRight = dmp.diff_linesToChars_(baseLeaf.data, rightLeaf.data);
const diffRightSrc = dmp.diff_main(mapRight.chars1, mapRight.chars2, false);
dmp.diff_charsToLines_(diffRightSrc, mapRight.lineArray);
function splitDiffPiece(src: Diff[]): Diff[] {
const ret = [] as Diff[];
do {
const d = src.shift();
if (typeof (d) == "undefined") {
break;
}
const pieces = d[1].split(/([^\n]*\n)/).filter(f => f != "");
if (d[0] != DIFF_DELETE) {
ret.push(...(pieces.map(e => [d[0], e] as Diff)));
}
if (d[0] == DIFF_DELETE) {
const nd = src.shift();
if (typeof (nd) != "undefined") {
const piecesPair = nd[1].split(/([^\n]*\n)/).filter(f => f != "");
if (nd[0] == DIFF_INSERT) {
// it might be pair
for (const pt of pieces) {
ret.push([d[0], pt]);
const pairP = piecesPair.shift();
if (typeof (pairP) != "undefined") ret.push([DIFF_INSERT, pairP]);
}
ret.push(...(piecesPair.map(e => [nd[0], e] as Diff)));
} else {
ret.push(...(pieces.map(e => [d[0], e] as Diff)));
ret.push(...(piecesPair.map(e => [nd[0], e] as Diff)));
}
} else {
ret.push(...(pieces.map(e => [0, e] as Diff)));
}
}
} while (src.length > 0);
return ret;
}
const diffLeft = splitDiffPiece(diffLeftSrc);
const diffRight = splitDiffPiece(diffRightSrc);
let rightIdx = 0;
let leftIdx = 0;
const merged = [] as Diff[];
autoMerge = true;
LOOP_MERGE:
do {
if (leftIdx >= diffLeft.length && rightIdx >= diffRight.length) {
break LOOP_MERGE;
}
const leftItem = diffLeft[leftIdx] ?? [0, ""];
const rightItem = diffRight[rightIdx] ?? [0, ""];
leftIdx++;
rightIdx++;
// when completely same, leave it .
if (leftItem[0] == DIFF_EQUAL && rightItem[0] == DIFF_EQUAL && leftItem[1] == rightItem[1]) {
merged.push(leftItem);
continue;
}
if (leftItem[0] == DIFF_DELETE && rightItem[0] == DIFF_DELETE && leftItem[1] == rightItem[1]) {
// when deleted evenly,
const nextLeftIdx = leftIdx;
const nextRightIdx = rightIdx;
const [nextLeftItem, nextRightItem] = [diffLeft[nextLeftIdx] ?? [0, ""], diffRight[nextRightIdx] ?? [0, ""]];
if ((nextLeftItem[0] == DIFF_INSERT && nextRightItem[0] == DIFF_INSERT) && nextLeftItem[1] != nextRightItem[1]) {
// but the next lines look different
autoMerge = false;
break;
} else {
merged.push(leftItem);
continue;
}
}
// when inserted evenly
if (leftItem[0] == DIFF_INSERT && rightItem[0] == DIFF_INSERT) {
if (leftItem[1] == rightItem[1]) {
merged.push(leftItem);
continue;
} else {
// sort by file date.
if (leftLeaf.mtime <= rightLeaf.mtime) {
merged.push(leftItem);
merged.push(rightItem);
continue;
} else {
merged.push(rightItem);
merged.push(leftItem);
continue;
}
}
}
// on a one-sided insertion, hold the other side's index back.
if (leftItem[0] == DIFF_INSERT) {
rightIdx--;
merged.push(leftItem);
continue;
}
if (rightItem[0] == DIFF_INSERT) {
leftIdx--;
merged.push(rightItem);
continue;
}
// apart from insertions, the lines should not differ.
if (rightItem[1] != leftItem[1]) {
//TODO: SHOULD BE PANIC.
Logger(`MERGING PANIC:${leftItem[0]},${leftItem[1]} == ${rightItem[0]},${rightItem[1]}`, LOG_LEVEL.VERBOSE);
autoMerge = false;
break LOOP_MERGE;
}
if (leftItem[0] == DIFF_DELETE) {
if (rightItem[0] == DIFF_EQUAL) {
merged.push(leftItem);
continue;
} else {
//we cannot perform auto merge.
autoMerge = false;
break LOOP_MERGE;
}
}
if (rightItem[0] == DIFF_DELETE) {
if (leftItem[0] == DIFF_EQUAL) {
merged.push(rightItem);
continue;
} else {
//we cannot perform auto merge.
autoMerge = false;
break LOOP_MERGE;
}
}
Logger(`Weird condition:${leftItem[0]},${leftItem[1]} == ${rightItem[0]},${rightItem[1]}`, LOG_LEVEL.VERBOSE);
// here is the exception
break LOOP_MERGE;
} while (leftIdx < diffLeft.length || rightIdx < diffRight.length);
if (autoMerge) {
Logger(`Sensible merge available`, LOG_LEVEL.VERBOSE);
return merged;
} else {
return false;
}
}
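`mergeSensibly()` above performs a three-way merge by diffing each revision against their common base and accepting the result only when the two sides never change the same line in different ways. A much-simplified line-level sketch of that acceptance rule (illustrative only; the real code uses diff-match-patch line-mode diffs and handles insertions and deletions):

```typescript
// Merge `left` and `right` against `base` line by line. For each line,
// if both sides changed it to different text, the merge is refused
// (null); otherwise the changed side (or either, if equal) wins.
// Sketch restriction: all three texts must have the same line count.
function mergeLines(base: string, left: string, right: string): string | null {
  const b = base.split("\n"), l = left.split("\n"), r = right.split("\n");
  if (l.length !== b.length || r.length !== b.length) return null;
  const out: string[] = [];
  for (let i = 0; i < b.length; i++) {
    const leftChanged = l[i] !== b[i];
    const rightChanged = r[i] !== b[i];
    if (leftChanged && rightChanged && l[i] !== r[i]) return null; // true conflict
    out.push(leftChanged ? l[i] : r[i]);
  }
  return out.join("\n");
}
```

When the rule fails, the plugin falls back to the manual merge dialog, which is exactly the `autoMerge = false` path above.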
/**
* Getting file conflicted status.
@@ -1900,14 +2135,44 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
* @returns true -> resolved, false -> nothing to do, or check result.
*/
async getConflictedStatus(path: string): Promise<diff_check_result> {
const test = await this.localDatabase.getDBEntry(path, { conflicts: true }, false, false);
const test = await this.localDatabase.getDBEntry(path, { conflicts: true }, false, false, true);
if (test === false) return false;
if (test == null) return false;
if (!test._conflicts) return false;
if (test._conflicts.length == 0) return false;
const conflicts = test._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
if (path.endsWith(".md") && !this.settings.disableMarkdownAutoMerge) {
const conflictedRev = conflicts[0];
const conflictedRevNo = Number(conflictedRev.split("-")[0]);
//Search
const revFrom = (await this.localDatabase.localDatabase.get(id2path(path), { revs_info: true })) as unknown as LoadedEntry & PouchDB.Core.GetMeta;
const commonBase = revFrom._revs_info.filter(e => e.status == "available" && Number(e.rev.split("-")[0]) < conflictedRevNo).first().rev ?? "";
if (commonBase) {
const result = await this.mergeSensibly(path, commonBase, test._rev, conflictedRev);
if (result) {
// can be merged.
Logger(`Sensible merge:${path}`, LOG_LEVEL.INFO);
// remove conflicted revision.
await this.localDatabase.deleteDBEntry(path, { rev: conflictedRev });
const p = result.filter(e => e[0] != DIFF_DELETE).map((e) => e[1]).join("");
const file = getAbstractFileByPath(path) as TFile;
if (file) {
await this.app.vault.modify(file, p);
await this.updateIntoDB(file);
} else {
const newFile = await this.app.vault.create(path, p);
await this.updateIntoDB(newFile);
}
await this.pullFile(path);
Logger(`Automatically merged (sensible) :${path}`, LOG_LEVEL.INFO);
return true;
}
}
}
// should be one or more conflicts;
const leftLeaf = await this.getConflictedDoc(path, test._rev);
const rightLeaf = await this.getConflictedDoc(path, test._conflicts[0]);
const rightLeaf = await this.getConflictedDoc(path, conflicts[0]);
if (leftLeaf == false) {
// what's going on..
Logger(`could not get current revisions:${path}`, LOG_LEVEL.NOTICE);
@@ -1915,13 +2180,13 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
if (rightLeaf == false) {
// Conflicted item could not load, delete this.
await this.localDatabase.deleteDBEntry(path, { rev: test._conflicts[0] });
await this.localDatabase.deleteDBEntry(path, { rev: conflicts[0] });
await this.pullFile(path, null, true);
Logger(`could not get old revisions, automatically used newer one:${path}`, LOG_LEVEL.NOTICE);
return true;
}
// first,check for same contents
if (leftLeaf.data == rightLeaf.data) {
// first, check for same contents and deletion status.
if (leftLeaf.data == rightLeaf.data && leftLeaf.deleted == rightLeaf.deleted) {
let leaf = leftLeaf;
if (leftLeaf.mtime > rightLeaf.mtime) {
leaf = rightLeaf;
@@ -1955,11 +2220,11 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
};
}
- showMergeDialog(file: TFile, conflictCheckResult: diff_result): Promise<boolean> {
+ showMergeDialog(filename: string, conflictCheckResult: diff_result): Promise<boolean> {
return new Promise((res, rej) => {
Logger("open conflict dialog", LOG_LEVEL.VERBOSE);
new ConflictResolveModal(this.app, conflictCheckResult, async (selected) => {
- const testDoc = await this.localDatabase.getDBEntry(file.path, { conflicts: true });
+ const testDoc = await this.localDatabase.getDBEntry(filename, { conflicts: true }, false, false, true);
if (testDoc === false) {
Logger("Missing file..", LOG_LEVEL.VERBOSE);
return res(true);
@@ -1971,28 +2236,33 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const toDelete = selected;
const toKeep = conflictCheckResult.left.rev != toDelete ? conflictCheckResult.left.rev : conflictCheckResult.right.rev;
if (toDelete == "") {
- //concat both,
- // write data,and delete both old rev.
+ // concat both,
+ // delete conflicted revision and write a new file, store it again.
const p = conflictCheckResult.diff.map((e) => e[1]).join("");
- await this.localDatabase.deleteDBEntry(file.path, { rev: conflictCheckResult.left.rev });
- await this.localDatabase.deleteDBEntry(file.path, { rev: conflictCheckResult.right.rev });
- await this.app.vault.modify(file, p);
- await this.updateIntoDB(file);
- await this.pullFile(file.path);
+ await this.localDatabase.deleteDBEntry(filename, { rev: testDoc._conflicts[0] });
+ const file = getAbstractFileByPath(filename) as TFile;
+ if (file) {
+ await this.app.vault.modify(file, p);
+ await this.updateIntoDB(file);
+ } else {
+ const newFile = await this.app.vault.create(filename, p);
+ await this.updateIntoDB(newFile);
+ }
+ await this.pullFile(filename);
Logger("concat both file");
setTimeout(() => {
//resolved, check again.
- this.showIfConflicted(file);
+ this.showIfConflicted(filename);
}, 500);
} else if (toDelete == null) {
Logger("Leave it still conflicted");
} else {
- Logger(`Conflict resolved:${file.path}`);
- await this.localDatabase.deleteDBEntry(file.path, { rev: toDelete });
- await this.pullFile(file.path, null, true, toKeep);
+ Logger(`Conflict resolved:${filename}`);
+ await this.localDatabase.deleteDBEntry(filename, { rev: toDelete });
+ await this.pullFile(filename, null, true, toKeep);
setTimeout(() => {
//resolved, check again.
- this.showIfConflicted(file);
+ this.showIfConflicted(filename);
}, 500);
}
@@ -2017,20 +2287,20 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const checkFiles = JSON.parse(JSON.stringify(this.conflictedCheckFiles)) as string[];
for (const filename of checkFiles) {
try {
- const file = this.app.vault.getAbstractFileByPath(filename);
+ const file = getAbstractFileByPath(filename);
if (file != null && file instanceof TFile) {
- await this.showIfConflicted(file);
+ await this.showIfConflicted(file.path);
}
} catch (ex) {
Logger(ex);
}
}
- }, 1000);
+ }, 100);
}
- async showIfConflicted(file: TFile) {
+ async showIfConflicted(filename: string) {
await runWithLock("conflicted", false, async () => {
- const conflictCheckResult = await this.getConflictedStatus(file.path);
+ const conflictCheckResult = await this.getConflictedStatus(filename);
if (conflictCheckResult === false) {
//nothing to do.
return;
@@ -2039,17 +2309,17 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
//auto resolved, but need check again;
Logger("conflict:Automatically merged, but we have to check it again");
setTimeout(() => {
- this.showIfConflicted(file);
+ this.showIfConflicted(filename);
}, 500);
return;
}
//there conflicts, and have to resolve ;
- await this.showMergeDialog(file, conflictCheckResult);
+ await this.showMergeDialog(filename, conflictCheckResult);
});
}
async pullFile(filename: string, fileList?: TFile[], force?: boolean, rev?: string, waitForReady = true) {
- const targetFile = this.app.vault.getAbstractFileByPath(id2path(filename));
+ const targetFile = getAbstractFileByPath(id2path(filename));
if (!this.isTargetFile(id2path(filename))) return;
if (targetFile == null) {
//have to create;
@@ -2080,7 +2350,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
throw new Error(`Missing doc:${(file as any).path}`)
}
if (!(file instanceof TFile) && "path" in file) {
- const w = this.app.vault.getAbstractFileByPath((file as any).path);
+ const w = getAbstractFileByPath((file as any).path);
if (w instanceof TFile) {
file = w;
} else {
@@ -2125,7 +2395,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}
- async updateIntoDB(file: TFile, initialScan?: boolean, cache?: CacheData) {
+ async updateIntoDB(file: TFile, initialScan?: boolean, cache?: CacheData, force?: boolean) {
if (!this.isTargetFile(file)) return;
if (shouldBeIgnored(file.path)) {
return;
@@ -2167,15 +2437,24 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (recentlyTouched(file)) {
return true;
}
- const old = await this.localDatabase.getDBEntry(fullPath, null, false, false);
- if (old !== false) {
- const oldData = { data: old.data, deleted: old._deleted || old.deleted, };
- const newData = { data: d.data, deleted: d._deleted || d.deleted };
- if (JSON.stringify(oldData) == JSON.stringify(newData)) {
- Logger(msg + "Skipped (not changed) " + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
- return true;
+ try {
+ const old = await this.localDatabase.getDBEntry(fullPath, null, false, false);
+ if (old !== false) {
+ const oldData = { data: old.data, deleted: old._deleted || old.deleted, };
+ const newData = { data: d.data, deleted: d._deleted || d.deleted };
+ if (JSON.stringify(oldData) == JSON.stringify(newData)) {
+ Logger(msg + "Skipped (not changed) " + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
+ return true;
+ }
+ // d._rev = old._rev;
+ }
- // d._rev = old._rev;
+ } catch (ex) {
+ if (force) {
+ Logger(msg + "Error, Could not check the diff for the old one." + (force ? "force writing." : "") + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
+ } else {
+ Logger(msg + "Error, Could not check the diff for the old one." + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
+ }
+ return !force;
+ }
return false;
});


@@ -1,3 +1,25 @@
### 0.17.0
- 0.17.0 has no user-facing changes, but the design of saving chunks has been changed. It remains compatible, but files changed after upgrading will produce different chunks than 0.16.x did.
  Please rebuild the databases once if you are concerned about storage usage.
- Improved:
- Splitting markdown
- Saving chunks
- Changed:
- Chunk ID numbering rules
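The improved markdown splitting above can be pictured as cutting a document at paragraph boundaries into size-capped pieces. A minimal sketch under that assumption (the plugin's real splitter and size limits are more elaborate; all names here are illustrative):

```typescript
// Illustrative splitter: keep paragraphs together while pieces stay under
// maxSize, and hard-split any single paragraph that exceeds the cap.
// Joining the pieces always reproduces the original text.
function splitIntoChunks(text: string, maxSize = 100): string[] {
    const pieces: string[] = [];
    let current = "";
    for (const para of text.split(/(?<=\n\n)/)) {
        if (current && current.length + para.length > maxSize) {
            pieces.push(current);
            current = "";
        }
        current += para;
        while (current.length > maxSize) { // oversized paragraph: hard split
            pieces.push(current.slice(0, maxSize));
            current = current.slice(maxSize);
        }
    }
    if (current) pieces.push(current);
    return pieces;
}
```

Keeping splits at paragraph boundaries means small edits tend to change only a few chunks, which is what makes deduplicated storage effective.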
#### Minors
- 0.17.1
- Fixed: Now we can verify and repair the database.
- Internal refactoring.
- 0.17.2
- New feature
- We can merge conflicted documents automatically if sensible.
- Fixed
- Writing to the storage is now suspended while documents have conflicts after replication.
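The "sensible" automatic merge above is in spirit a three-way merge against a common base revision: a change made by only one side is taken, and overlapping changes abort the merge. A minimal line-based sketch under that assumption (not the plugin's actual `mergeSensibly`, which diffs more finely and handles insertions and deletions):

```typescript
// Line-based three-way merge sketch: for each line, if only one side
// differs from the base, take that side; if both differ in different
// ways, report an unresolvable conflict by returning null.
// For simplicity this handles only documents with equal line counts.
function threeWayMerge(base: string, left: string, right: string): string | null {
    const b = base.split("\n");
    const l = left.split("\n");
    const r = right.split("\n");
    if (b.length != l.length || b.length != r.length) return null;
    const out: string[] = [];
    for (let i = 0; i < b.length; i++) {
        const leftChanged = l[i] != b[i];
        const rightChanged = r[i] != b[i];
        if (leftChanged && rightChanged && l[i] != r[i]) return null; // real conflict
        out.push(leftChanged ? l[i] : r[i]);
    }
    return out.join("\n");
}
```

When the merge returns null, the plugin falls back to showing the conflict-resolution dialog, as in the code diff above.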
### 0.16.0
- Now hidden files need not be scanned. Changes will be detected automatically.
- If you want to return to the previous behaviour, please disable `Monitor changes to internal files`.
@@ -10,25 +32,24 @@
- 0.16.3
- Fixed detection of IBM Cloudant (and, if there are some issues, they are now fixed automatically).
- A configuration information reporting tool has been implemented.
- 0.16.4 Fixed detection failure. Please set the `Chunk size` again when using a self-hosted database.
- 0.16.5
- Fixed
- Conflict detection and merging can now handle deleted files.
- Logs during the boot-up sequence have been tidied up.
- Fixed incorrect log entries.
- New Feature
- The feature of automatically deleting old expired metadata has been implemented.
We can configure it in `Delete old metadata of deleted files on start-up` in the `General Settings` pane.
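The expiry rule for old metadata of deleted files can be sketched as a simple mtime cut-off. The field names and the default period below are assumptions for illustration, not the plugin's actual settings:

```typescript
// Hypothetical shape of a deleted-file metadata entry.
interface DeletedMeta { path: string; deleted: boolean; mtime: number; }

// A deleted entry is considered expired once its modification time is
// older than the configured number of days (default here is assumed).
function isExpired(doc: DeletedMeta, now: number, expireDays = 14): boolean {
    if (!doc.deleted) return false; // live files never expire
    return now - doc.mtime > expireDays * 24 * 60 * 60 * 1000;
}
```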
- 0.16.6
- Fixed
- Automatic (temporary) batch size adjustment has been restored to work correctly.
- Chunk splitting has been reverted to the previous behaviour so that chunks are saved correctly.
- Improved
- Corrupted chunks will be detected automatically.
- Now, on case-insensitive systems, `aaa.md` and `AAA.md` are treated as the same file or path when applying changesets.
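The case-insensitive matching described above boils down to comparing lower-cased paths when the platform ignores case. A hypothetical helper, not the plugin's actual code:

```typescript
// Compare two vault paths, folding case only when the underlying
// filesystem is case-insensitive (e.g., default macOS and Windows).
function isSamePath(a: string, b: string, caseInsensitive: boolean): boolean {
    return caseInsensitive ? a.toLowerCase() == b.toLowerCase() : a == b;
}
```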
- 0.16.7 Nothing has changed except toolsets, framework libraries, and the like. Please let me know if anything starts behaving strangely!
- 0.16.8 Now we can synchronise without `bad_request:invalid UTF-8 JSON` errors even while end-to-end encryption is disabled.
### 0.15.0
- Outdated configuration items have been removed.
- Setup wizard has been implemented!
Thanks to @Pouhon158 for reviewing and advising me!
#### Minors
- 0.15.1 Missed the stylesheet.
- 0.15.2 The wizard has been improved and documented!
- 0.15.3 Fixed the issue about locking/unlocking remote database while rebuilding in the wizard.
- 0.15.4 Fixed issues with asynchronous processing (e.g., conflict checking and hidden-file detection).
- 0.15.5 Added new features to make setting up Self-hosted LiveSync easier.
- 0.15.6 File tracking logic has been refined.
- 0.15.7 Fixed a bug with renaming files.
- 0.15.8 Fixed bugs with deleting empty directories and weird boot-sequence behaviour on mobile devices.
- 0.15.9 Improved chunk retrieving, now chunks are retrieved in batch on continuous requests.
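Batched chunk retrieval can be sketched as coalescing requests that arrive within a short window into a single bulk fetch. The `fetchBulk` callback and the window length below are assumptions for illustration, not the plugin's API:

```typescript
// Requests arriving within windowMs are collected and resolved by one
// bulk fetch instead of one round-trip per chunk. Duplicate IDs within
// a window are not handled; this is purely illustrative.
class ChunkBatcher {
    private pending = new Map<string, (data: string) => void>();
    private timer: ReturnType<typeof setTimeout> | null = null;
    constructor(
        private fetchBulk: (ids: string[]) => Promise<Map<string, string>>,
        private windowMs = 20
    ) {}
    request(id: string): Promise<string> {
        return new Promise((resolve) => {
            this.pending.set(id, resolve);
            if (!this.timer) {
                this.timer = setTimeout(() => this.flush(), this.windowMs);
            }
        });
    }
    private async flush() {
        this.timer = null;
        const batch = this.pending;
        this.pending = new Map();
        const result = await this.fetchBulk([...batch.keys()]);
        for (const [id, resolve] of batch) resolve(result.get(id) ?? "");
    }
}
```

With CouchDB/PouchDB backends this maps naturally onto fetching many documents in one `allDocs`-style call rather than one `get` per chunk.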
- 0.15.10 Fixed:
- The boot sequence has been corrected and now boots smoothly.
- Batched saves are now applied earlier than before.
... Continued in `updates_old.md`.
Note:
Before 0.16.5, LiveSync had some issues creating chunks. In such cases, synchronisation would keep failing once a corrupted chunk had been made. Since 0.16.6, corrupted chunks are detected automatically. Sorry for the trouble, but please run `rebuild everything` when the plug-in notifies you to do so.
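The automatic corruption detection can be illustrated as follows: when a chunk's ID is derived from its content, recomputing the digest exposes corruption. A sketch under that assumption (the plugin's actual ID scheme differs):

```typescript
import { createHash } from "crypto";

// Hypothetical content-addressed chunk ID: a digest of the chunk body.
function chunkId(data: string): string {
    return "h:" + createHash("sha1").update(data).digest("hex");
}

// A chunk is corrupted if its stored data no longer reproduces its ID.
function isCorrupted(chunk: { _id: string; data: string }): boolean {
    return chunk._id !== chunkId(chunk.data);
}
```

A verification pass over all chunks with a check like this is what lets the plugin notify you before a corrupted chunk silently breaks synchronisation.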