Compare commits

...

18 Commits

Author SHA1 Message Date
vorotamoroz
b14ecdb205 Bump 2022-09-06 17:02:58 +09:00
vorotamoroz
21362adb5b Typos 2022-09-06 14:32:09 +09:00
vorotamoroz
f8c1474700 Refactored 2022-09-06 13:42:12 +09:00
vorotamoroz
b35052a485 bump 2022-09-05 16:55:35 +09:00
vorotamoroz
c367d35e09 Target ES2018 2022-09-05 16:55:29 +09:00
vorotamoroz
2a5078cdbb bump 2022-09-05 16:54:06 +09:00
vorotamoroz
8112a07210 Implemented:
- Auto chunk size adjusting.
  Now our large files are processed more efficiently.
- These configurations have been removed.

Improved
- Remote chunk retrieving logic has been sped up.

Fixed
- Fixed process handling of boot sequence
2022-09-05 16:53:22 +09:00
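The auto chunk size adjusting mentioned in this commit lands in `putDBEntry` (see the diff further down): the minimum chunk size now scales with the note length instead of being a fixed setting. A minimal sketch of that arithmetic — `MAX_DOC_SIZE_BIN` is assumed here to be a base size constant; the exact value lives in the plugin's sources:

```typescript
// Illustrative base chunk size; the real constant is defined in the plugin.
const MAX_DOC_SIZE_BIN = 102400;

// Mirror the sizing from the putDBEntry diff below: the maximum chunk size is
// scaled by the user's multiplier, and the minimum chunk size grows with the
// note length (1% of it, at least 40, capped at the maximum).
function chunkSizes(noteLength: number, customChunkSize: number) {
    const maxChunkSize = MAX_DOC_SIZE_BIN * Math.max(customChunkSize, 1);
    const minimumChunkSize = Math.min(Math.max(40, ~~(noteLength / 100)), maxChunkSize);
    return { maxChunkSize, minimumChunkSize };
}
```

So short notes keep the 40-byte floor, while large files get proportionally larger chunks, which is what makes their processing more efficient.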
vorotamoroz
c9daa1b47d Fixed issue of importing configurations. 2022-09-04 01:16:29 +09:00
vorotamoroz
73ac93e8c5 bump 2022-09-04 01:08:09 +09:00
vorotamoroz
8d2b9eff37 Improved:
- New test items have been added to `Check database configuration`
2022-09-04 01:08:02 +09:00
vorotamoroz
0ee32a2147 bump 2022-09-03 16:44:51 +09:00
vorotamoroz
ac3c78e198 Fixed
- Could not retrieve files if synchronisation had been interrupted or failed
2022-09-03 16:43:59 +09:00
vorotamoroz
0da1e3d9c8 bump 2022-08-30 15:24:38 +09:00
vorotamoroz
8f021a3c93 Improved:
- Use local chunks in preference to remote ones if present.
2022-08-30 15:24:26 +09:00
vorotamoroz
6db0743096 Update release.yml 2022-08-29 16:50:53 +09:00
vorotamoroz
0e300a0a6b bump 2022-08-29 16:48:35 +09:00
vorotamoroz
9d0ffd1848 Implemented:
- The target selecting filter was implemented.
- We can configure the size of chunks.
- Read chunks online.

Fixed:
- Typos
2022-08-29 16:32:14 +09:00
vorotamoroz
e7f4d8c9c2 Add error handling for loading the document 2022-08-29 16:13:54 +09:00
16 changed files with 661 additions and 368 deletions


@@ -22,7 +22,7 @@ jobs:
       - name: Get Version
        id: version
        run: |
-          echo "::set-output name=tag::$(git describe --abbrev=0)"
+          echo "::set-output name=tag::$(git describe --abbrev=0 --tags)"
      # Build the plugin
      - name: Build
        id: build

.gitignore

@@ -12,3 +12,4 @@ main.js
 # obsidian
 data.json
+.vscode


@@ -43,7 +43,7 @@ Note: More information about alternative hosting methods needed! Currently, [usi
 ### First device
 1. Install the plugin on your device.
-2. Configure remote database infomation.
+2. Configure remote database information.
    1. Fill your server's information into the `Remote Database configuration` pane.
    2. Enabling `End to End Encryption` is recommended. After entering a passphrase, click `Apply`.
    3. Click `Test Database Connection` and make sure that the plugin says `Connected to (your-database-name)`.
@@ -53,7 +53,7 @@ Note: More information about alternative hosting methods needed! Currently, [usi
 2. Or, set up the synchronization as you like. By default, none of the settings are enabled, meaning you would need to manually trigger the synchronization process.
 3. Additional configurations are also here. I recommend enabling `Use Trash for deleted files`, but you can also leave all configurations as-is.
 4. Configure miscellaneous features.
-   1. Enabling `Show staus inside editor` shows status at the top-right corner of the editor while in editing mode. (Recommended)
+   1. Enabling `Show status inside editor` shows status at the top-right corner of the editor while in editing mode. (Recommended)
 5. Go back to the editor. Wait for the initial scan to complete.
 6. When the status no longer changes and shows a ⏹️ for COMPLETED (No ⏳ and 🧩 icons), you are ready to synchronize with the server.
 7. Press the replicate icon on the Ribbon or run `Replicate now` from the command palette. This will send all your data to the server.
@@ -115,7 +115,7 @@ If you have deleted or renamed files, please wait until ⏳ icon disappeared.
 - While synchronizing, files are compared by their modification time and the older ones will be overwritten by the newer ones. Then plugin checks for conflicts and if a merge is needed, a dialog will open.
 - Rarely, a file in the database could be corrupted. The plugin will not write to local storage when a file looks corrupted. If a local version of the file is on your device, the corruption could be fixed by editing the local file and synchronizing it. But if the file does not exist on any of your devices, then it can not be rescued. In this case you can delete these items from the settings dialog.
 - If your database looks corrupted, try "Drop History". Usually, It is the easiest way.
-- To stop the bootup sequence (eg. for fixing problems on databases), you can put a `redflag.md` file at the root of your vault.
+- To stop the boot up sequence (eg. for fixing problems on databases), you can put a `redflag.md` file at the root of your vault.
 - Q: Database is growing, how can I shrink it down?
   A: each of the docs is saved with their past 100 revisions for detecting and resolving conflicts. Picturing that one device has been offline for a while, and comes online again. The device has to compare its notes with the remotely saved ones. If there exists a historic revision in which the note used to be identical, it could be updated safely (like git fast-forward). Even if that is not in revision histories, we only have to check the differences after the revision that both devices commonly have. This is like git's conflict resolving method. So, We have to make the database again like an enlarged git repo if you want to solve the root of the problem.
 - And more technical Information are in the [Technical Information](docs/tech_info.md)
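The "git fast-forward" analogy in the FAQ answer above reduces to one check: if the incoming note's content matches any revision already stored locally, the local content simply supersedes it and can be kept safely. A toy illustration — none of these names exist in the plugin, and the real comparison works on revision trees rather than plain strings:

```typescript
// Toy fast-forward check. `localHistory` stands in for the up-to-100 stored
// revision bodies of a note; `remoteNote` is the incoming content.
function canUpdateSafely(localHistory: string[], remoteNote: string): boolean {
    // If the remote content equals some revision we already passed through,
    // our current content is strictly ahead of it: safe to resolve without a
    // merge dialog, like a git fast-forward. Otherwise a real diff is needed.
    return localHistory.includes(remoteNote);
}
```

This is also why dropping old revisions shrinks the database at the cost of forcing more real merges.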


@@ -29,7 +29,7 @@ esbuild
     external: ["obsidian", "electron", ...builtins],
     format: "cjs",
     watch: !prod,
-    target: "es2015",
+    target: "es2018",
     logLevel: "info",
     sourcemap: prod ? false : "inline",
     treeShaking: true,


@@ -1,7 +1,7 @@
 {
     "id": "obsidian-livesync",
     "name": "Self-hosted LiveSync",
-    "version": "0.13.4",
+    "version": "0.14.7",
     "minAppVersion": "0.9.12",
     "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "author": "vorotamoroz",

package-lock.json

@@ -1,12 +1,12 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.13.3",
+    "version": "0.14.7",
     "lockfileVersion": 2,
     "requires": true,
     "packages": {
         "": {
             "name": "obsidian-livesync",
-            "version": "0.13.3",
+            "version": "0.14.7",
             "license": "MIT",
             "dependencies": {
                 "diff-match-patch": "^1.0.5",


@@ -1,6 +1,6 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.13.4",
+    "version": "0.14.7",
     "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "main": "main.js",
     "type": "module",


@@ -31,14 +31,25 @@ export class DocumentHistoryModal extends Modal {
     }
     async loadFile() {
         const db = this.plugin.localDatabase;
-        const w = await db.localDatabase.get(path2id(this.file), { revs_info: true });
-        this.revs_info = w._revs_info.filter((e) => e.status == "available");
-        this.range.max = `${this.revs_info.length - 1}`;
-        this.range.value = this.range.max;
-        this.fileInfo.setText(`${this.file} / ${this.revs_info.length} revisions`);
-        await this.loadRevs();
+        try {
+            const w = await db.localDatabase.get(path2id(this.file), { revs_info: true });
+            this.revs_info = w._revs_info.filter((e) => e.status == "available");
+            this.range.max = `${this.revs_info.length - 1}`;
+            this.range.value = this.range.max;
+            this.fileInfo.setText(`${this.file} / ${this.revs_info.length} revisions`);
+            await this.loadRevs();
+        } catch (ex) {
+            if (ex.status && ex.status == 404) {
+                this.range.max = "0";
+                this.range.value = "";
+                this.range.disabled = true;
+                this.showDiff
+                this.contentView.setText(`History of this file was not recorded.`);
+            }
+        }
     }
     async loadRevs() {
+        if (this.revs_info.length == 0) return;
         const db = this.plugin.localDatabase;
         const index = this.revs_info.length - 1 - (this.range.value as any) / 1;
         const rev = this.revs_info[index];
@@ -154,7 +165,7 @@ export class DocumentHistoryModal extends Modal {
             const leaf = app.workspace.getLeaf(false);
             await leaf.openFile(targetFile);
         } else {
-            Logger("The file cound not view on the editor", LOG_LEVEL.NOTICE)
+            Logger("The file could not view on the editor", LOG_LEVEL.NOTICE)
         }
     }
     buttons.createEl("button", { text: "Back to this revision" }, (e) => {
@@ -162,7 +173,7 @@ export class DocumentHistoryModal extends Modal {
         e.addEventListener("click", async () => {
             const pathToWrite = this.file.startsWith("i:") ? this.file.substring("i:".length) : this.file;
             if (!isValidPath(pathToWrite)) {
-                Logger("Path is not vaild to write content.", LOG_LEVEL.INFO);
+                Logger("Path is not valid to write content.", LOG_LEVEL.INFO);
             }
             if (this.currentDoc?.datatype == "plain") {
                 await this.app.vault.adapter.write(pathToWrite, this.currentDoc.data);
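The new `try`/`catch` in `loadFile` above relies on PouchDB reporting a missing document as an error whose `status` is 404, treating that case as "no history recorded" rather than a failure. The same pattern in isolation — `fetchDoc` here is a hypothetical stand-in for `db.localDatabase.get`:

```typescript
// Return the document, null when it simply does not exist (PouchDB 404),
// and rethrow anything else (network trouble, corruption, ...).
async function getOrNull<T>(fetchDoc: () => Promise<T>): Promise<T | null> {
    try {
        return await fetchDoc();
    } catch (ex: any) {
        if (ex && ex.status == 404) return null; // "history was not recorded"
        throw ex; // a real error must not be swallowed
    }
}
```

Note that the plugin's version deliberately swallows non-404 errors too; the rethrow here is the stricter variant of the pattern.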


@@ -24,14 +24,14 @@ import {
 } from "./lib/src/types";
 import { RemoteDBSettings } from "./lib/src/types";
 import { resolveWithIgnoreKnownError, runWithLock, shouldSplitAsPlainText, splitPieces2, enableEncryption } from "./lib/src/utils";
-import { path2id } from "./utils";
+import { id2path, path2id } from "./utils";
 import { Logger } from "./lib/src/logger";
-import { checkRemoteVersion, connectRemoteCouchDBWithSetting, getLastPostFailedBySize } from "./utils_couchdb";
+import { checkRemoteVersion, connectRemoteCouchDBWithSetting, getLastPostFailedBySize, putDesignDocuments } from "./utils_couchdb";
 import { KeyValueDatabase, OpenKeyValueDatabase } from "./KeyValueDB";
 import { LRUCache } from "./lib/src/LRUCache";
 // when replicated, LiveSync checks chunk versions that every node used.
-// If all minumum version of every devices were up, that means we can convert database automatically.
+// If all minimum version of every devices were up, that means we can convert database automatically.
 const currentVersionRange: ChunkVersionRange = {
     min: 0,
@@ -72,6 +72,7 @@ export class LocalPouchDB {
     chunkVersion = -1;
     maxChunkVersion = -1;
     minChunkVersion = -1;
+    needScanning = false;
     cancelHandler<T extends PouchDB.Core.Changes<EntryDoc> | PouchDB.Replication.Sync<EntryDoc> | PouchDB.Replication.Replication<EntryDoc>>(handler: T): T {
         if (handler != null) {
@@ -160,8 +161,9 @@ export class LocalPouchDB {
             this.localDatabase.removeAllListeners();
         });
         this.nodeid = nodeinfo.nodeid;
+        await putDesignDocuments(this.localDatabase);
-        // Traceing the leaf id
+        // Tracings the leaf id
         const changes = this.localDatabase
             .changes({
                 since: "now",
@@ -186,7 +188,7 @@ export class LocalPouchDB {
         const oi = await old.info();
         if (oi.doc_count == 0) {
             Logger("Old database is empty, proceed to next step", LOG_LEVEL.VERBOSE);
-            // aleady converted.
+            // already converted.
             return nextSeq();
         }
         //
@@ -292,13 +294,17 @@ export class LocalPouchDB {
                     throw new Error(`Chunk was not found: ${id}`);
                 }
             } else {
-                Logger(`Something went wrong on retriving chunk`);
+                Logger(`Something went wrong while retrieving chunks`);
                 throw ex;
             }
         }
     }
     async getDBEntryMeta(path: string, opt?: PouchDB.Core.GetOptions, includeDeleted = false): Promise<false | LoadedEntry> {
+        // safety valve
+        if (!this.isTargetFile(path)) {
+            return false;
+        }
         const id = path2id(path);
         try {
             let obj: EntryDocResponse = null;
@@ -348,6 +354,10 @@ export class LocalPouchDB {
         return false;
     }
     async getDBEntry(path: string, opt?: PouchDB.Core.GetOptions, dump = false, waitForReady = true, includeDeleted = false): Promise<false | LoadedEntry> {
+        // safety valve
+        if (!this.isTargetFile(path)) {
+            return false;
+        }
         const id = path2id(path);
         try {
             let obj: EntryDocResponse = null;
@@ -392,26 +402,51 @@ export class LocalPouchDB {
             // simple note
         }
         if (obj.type == "newnote" || obj.type == "plain") {
-            // search childrens
+            // search children
             try {
                 if (dump) {
                     Logger(`Enhanced doc`);
                     Logger(obj);
                 }
-                let childrens: string[];
-                try {
-                    childrens = await Promise.all(obj.children.map((e) => this.getDBLeaf(e, waitForReady)));
-                    if (dump) {
-                        Logger(`Chunks:`);
-                        Logger(childrens);
-                    }
-                } catch (ex) {
-                    Logger(`Something went wrong on reading chunks of ${obj._id} from database, see verbose info for detail.`, LOG_LEVEL.NOTICE);
-                    Logger(ex, LOG_LEVEL.VERBOSE);
-                    this.corruptedEntries[obj._id] = obj;
-                    return false;
-                }
-                const data = childrens.join("");
+                let children: string[] = [];
+
+                if (this.settings.readChunksOnline) {
+                    const items = await this.CollectChunks(obj.children);
+                    if (items) {
+                        for (const v of items) {
+                            if (v && v.type == "leaf") {
+                                children.push(v.data);
+                            } else {
+                                if (!opt) {
+                                    Logger(`Chunks of ${obj._id} are not valid.`, LOG_LEVEL.NOTICE);
+                                    this.needScanning = true;
+                                    this.corruptedEntries[obj._id] = obj;
+                                }
+                                return false;
+                            }
+                        }
+                    } else {
+                        if (opt) {
+                            Logger(`Could not retrieve chunks of ${obj._id}. we have to `, LOG_LEVEL.NOTICE);
+                            this.needScanning = true;
+                        }
+                        return false;
+                    }
+                } else {
+                    try {
+                        children = await Promise.all(obj.children.map((e) => this.getDBLeaf(e, waitForReady)));
+                        if (dump) {
+                            Logger(`Chunks:`);
+                            Logger(children);
+                        }
+                    } catch (ex) {
+                        Logger(`Something went wrong on reading chunks of ${obj._id} from database, see verbose info for detail.`, LOG_LEVEL.NOTICE);
+                        Logger(ex, LOG_LEVEL.VERBOSE);
+                        this.corruptedEntries[obj._id] = obj;
+                        return false;
+                    }
+                }
+                const data = children.join("");
                 const doc: LoadedEntry & PouchDB.Core.IdMeta & PouchDB.Core.GetMeta = {
                     data: data,
                     _id: obj._id,
@@ -452,6 +487,10 @@ export class LocalPouchDB {
         return false;
     }
     async deleteDBEntry(path: string, opt?: PouchDB.Core.GetOptions): Promise<boolean> {
+        // safety valve
+        if (!this.isTargetFile(path)) {
+            return false;
+        }
         const id = path2id(path);
         try {
@@ -521,7 +560,7 @@ export class LocalPouchDB {
         for (const v of result.rows) {
             // let doc = v.doc;
             if (v.id.startsWith(prefix) || v.id.startsWith("/" + prefix)) {
-                delDocs.push(v.id);
+                if (this.isTargetFile(id2path(v.id))) delDocs.push(v.id);
                 // console.log("!" + v.id);
             } else {
                 if (!v.id.startsWith("h:")) {
@@ -566,37 +605,34 @@ export class LocalPouchDB {
         return true;
     }
     async putDBEntry(note: LoadedEntry, saveAsBigChunk?: boolean) {
+        //safety valve
+        if (!this.isTargetFile(id2path(note._id))) {
+            return;
+        }
         // let leftData = note.data;
-        const savenNotes = [];
+        const savedNotes = [];
         let processed = 0;
         let made = 0;
-        let skiped = 0;
-        let pieceSize = MAX_DOC_SIZE_BIN;
+        let skipped = 0;
+        const maxChunkSize = MAX_DOC_SIZE_BIN * Math.max(this.settings.customChunkSize, 1);
+        let pieceSize = maxChunkSize;
         let plainSplit = false;
         let cacheUsed = 0;
-        const userpasswordHash = this.h32Raw(new TextEncoder().encode(this.settings.passphrase));
+        const userPasswordHash = this.h32Raw(new TextEncoder().encode(this.settings.passphrase));
         if (!saveAsBigChunk && shouldSplitAsPlainText(note._id)) {
             pieceSize = MAX_DOC_SIZE;
             plainSplit = true;
         }
+        const minimumChunkSize = Math.min(Math.max(40, ~~(note.data.length / 100)), maxChunkSize);
+        if (pieceSize < minimumChunkSize) pieceSize = minimumChunkSize;
         const newLeafs: EntryLeaf[] = [];
-        // To keep low bandwith and database size,
-        // Dedup pieces on database.
-        // from 0.1.10, for best performance. we use markdown delimiters
-        // 1. \n[^\n]{longLineThreshold}[^\n]*\n -> long sentence shuld break.
-        // 2. \n\n shold break
-        // 3. \r\n\r\n should break
-        // 4. \n# should break.
-        let minimumChunkSize = this.settings.minimumChunkSize;
-        if (minimumChunkSize < 10) minimumChunkSize = 10;
-        let longLineThreshold = this.settings.longLineThreshold;
-        if (longLineThreshold < 100) longLineThreshold = 100;
-        const pieces = splitPieces2(note.data, pieceSize, plainSplit, minimumChunkSize, longLineThreshold);
+        const pieces = splitPieces2(note.data, pieceSize, plainSplit, minimumChunkSize, 0);
         for (const piece of pieces()) {
             processed++;
-            let leafid = "";
+            let leafId = "";
             // Get hash of piece.
             let hashedPiece = "";
             let hashQ = 0; // if hash collided, **IF**, count it up.
@@ -605,40 +641,40 @@ export class LocalPouchDB {
             const cache = this.hashCaches.get(piece);
             if (cache) {
                 hashedPiece = "";
-                leafid = cache;
+                leafId = cache;
                 needMake = false;
-                skiped++;
+                skipped++;
                 cacheUsed++;
             } else {
                 if (this.settings.encrypt) {
                     // When encryption has been enabled, make hash to be different between each passphrase to avoid inferring password.
-                    hashedPiece = "+" + (this.h32Raw(new TextEncoder().encode(piece)) ^ userpasswordHash).toString(16);
+                    hashedPiece = "+" + (this.h32Raw(new TextEncoder().encode(piece)) ^ userPasswordHash).toString(16);
                 } else {
                     hashedPiece = this.h32(piece);
                 }
-                leafid = "h:" + hashedPiece;
+                leafId = "h:" + hashedPiece;
                 do {
-                    let nleafid = leafid;
+                    let newLeafId = leafId;
                     try {
-                        nleafid = `${leafid}${hashQ}`;
-                        const pieceData = await this.localDatabase.get<EntryLeaf>(nleafid);
+                        newLeafId = `${leafId}${hashQ}`;
+                        const pieceData = await this.localDatabase.get<EntryLeaf>(newLeafId);
                         if (pieceData.type == "leaf" && pieceData.data == piece) {
-                            leafid = nleafid;
+                            leafId = newLeafId;
                             needMake = false;
                             tryNextHash = false;
-                            this.hashCaches.set(piece, leafid);
+                            this.hashCaches.set(piece, leafId);
                         } else if (pieceData.type == "leaf") {
                             Logger("hash:collision!!");
                             hashQ++;
                             tryNextHash = true;
                         } else {
-                            leafid = nleafid;
+                            leafId = newLeafId;
                             tryNextHash = false;
                         }
                     } catch (ex) {
                         if (ex.status && ex.status == 404) {
                             //not found, we can use it.
-                            leafid = nleafid;
+                            leafId = newLeafId;
                             needMake = true;
                             tryNextHash = false;
                         } else {
@@ -653,18 +689,18 @@ export class LocalPouchDB {
                     const savePiece = piece;
                     const d: EntryLeaf = {
-                        _id: leafid,
+                        _id: leafId,
                         data: savePiece,
                         type: "leaf",
                     };
                     newLeafs.push(d);
-                    this.hashCaches.set(piece, leafid);
+                    this.hashCaches.set(piece, leafId);
                     made++;
                 } else {
-                    skiped++;
+                    skipped++;
                 }
             }
-            savenNotes.push(leafid);
+            savedNotes.push(leafId);
         }
         let saved = true;
         if (newLeafs.length > 0) {
@@ -673,7 +709,7 @@ export class LocalPouchDB {
             for (const item of result) {
                 if (!(item as any).ok) {
                     if ((item as any).status && (item as any).status == 409) {
-                        // conflicted, but it would be ok in childrens.
+                        // conflicted, but it would be ok in children.
                     } else {
                         Logger(`Save failed:id:${item.id} rev:${item.rev}`, LOG_LEVEL.NOTICE);
                         Logger(item);
@@ -688,9 +724,9 @@ export class LocalPouchDB {
             }
         }
         if (saved) {
-            Logger(`Content saved:${note._id} ,pieces:${processed} (new:${made}, skip:${skiped}, cache:${cacheUsed})`);
+            Logger(`Content saved:${note._id} ,pieces:${processed} (new:${made}, skip:${skipped}, cache:${cacheUsed})`);
             const newDoc: PlainEntry | NewEntry = {
-                children: savenNotes,
+                children: savedNotes,
                 _id: note._id,
                 ctime: note.ctime,
                 mtime: note.mtime,
@@ -727,12 +763,12 @@ export class LocalPouchDB {
             }
         });
     } else {
-        Logger(`note coud not saved:${note._id}`);
+        Logger(`note could not saved:${note._id}`);
     }
     }
     updateInfo: () => void = () => {
-        console.log("default updinfo");
+        console.log("Update Info default implement");
     };
     // eslint-disable-next-line require-await
     async migrate(from: number, to: number): Promise<boolean> {
@@ -772,14 +808,15 @@ export class LocalPouchDB {
             return false;
         }
-        const dbret = await connectRemoteCouchDBWithSetting(setting, this.isMobile);
-        if (typeof dbret === "string") {
-            Logger(`could not connect to ${uri}: ${dbret}`, showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO);
+        const dbRet = await connectRemoteCouchDBWithSetting(setting, this.isMobile);
+        if (typeof dbRet === "string") {
+            Logger(`could not connect to ${uri}: ${dbRet}`, showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO);
             return false;
         }
         if (!skipCheck) {
-            if (!(await checkRemoteVersion(dbret.db, this.migrate.bind(this), VER))) {
+            await putDesignDocuments(dbRet.db);
+            if (!(await checkRemoteVersion(dbRet.db, this.migrate.bind(this), VER))) {
                 Logger("Remote database is newer or corrupted, make sure to latest version of self-hosted-livesync installed", LOG_LEVEL.NOTICE);
                 return false;
             }
@@ -793,7 +830,7 @@ export class LocalPouchDB {
             node_chunk_info: { [this.nodeid]: currentVersionRange }
         };
-        const remoteMilestone: EntryMilestoneInfo = { ...defMilestonePoint, ...(await resolveWithIgnoreKnownError(dbret.db.get(MILSTONE_DOCID), defMilestonePoint)) };
+        const remoteMilestone: EntryMilestoneInfo = { ...defMilestonePoint, ...(await resolveWithIgnoreKnownError(dbRet.db.get(MILSTONE_DOCID), defMilestonePoint)) };
         remoteMilestone.node_chunk_info = { ...defMilestonePoint.node_chunk_info, ...remoteMilestone.node_chunk_info };
         this.remoteLocked = remoteMilestone.locked;
         this.remoteLockedAndDeviceNotAccepted = remoteMilestone.locked && remoteMilestone.accepted_nodes.indexOf(this.nodeid) == -1;
@@ -807,7 +844,7 @@ export class LocalPouchDB {
         if (writeMilestone) {
             remoteMilestone.node_chunk_info[this.nodeid].min = currentVersionRange.min;
             remoteMilestone.node_chunk_info[this.nodeid].max = currentVersionRange.max;
-            await dbret.db.put(remoteMilestone);
+            await dbRet.db.put(remoteMilestone);
         }
         // Check compatibility and make sure available version
@@ -850,9 +887,13 @@ export class LocalPouchDB {
             batches_limit: setting.batches_limit,
             batch_size: setting.batch_size,
         };
+        if (setting.readChunksOnline) {
+            syncOptionBase.push = { filter: 'replicate/push' };
+            syncOptionBase.pull = { filter: 'replicate/pull' };
+        }
         const syncOption: PouchDB.Replication.SyncOptions = keepAlive ? { live: true, retry: true, heartbeat: 30000, ...syncOptionBase } : { ...syncOptionBase };
-        return { db: dbret.db, info: dbret.info, syncOptionBase, syncOption };
+        return { db: dbRet.db, info: dbRet.info, syncOptionBase, syncOption };
     }
     openReplication(setting: RemoteDBSettings, keepAlive: boolean, showResult: boolean, callback: (e: PouchDB.Core.ExistingDocument<EntryDoc>[]) => Promise<void>) {
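For context on the hunk above: when `readChunksOnline` is enabled, replication becomes filtered. The `replicate/push` and `replicate/pull` filter names refer to functions in a design document, which appears to be what the new `putDesignDocuments` calls install on both databases. A simplified sketch of how the option object is assembled — `SyncOptionsSketch` is a cut-down stand-in for PouchDB's `SyncOptions`:

```typescript
// Cut-down stand-in for PouchDB.Replication.SyncOptions.
interface SyncOptionsSketch {
    batches_limit: number;
    batch_size: number;
    push?: { filter: string };
    pull?: { filter: string };
}

function buildSyncOptions(batches_limit: number, batch_size: number, readChunksOnline: boolean): SyncOptionsSketch {
    const opts: SyncOptionsSketch = { batches_limit, batch_size };
    if (readChunksOnline) {
        // Filter what gets replicated; chunk documents are instead
        // fetched on demand ("Read chunks online").
        opts.push = { filter: "replicate/push" };
        opts.pull = { filter: "replicate/pull" };
    }
    return opts;
}
```

In PouchDB, a `filter` string of the form `"designdoc/filtername"` resolves to a filter function stored in that design document, so the filters must exist on the database before replication starts.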
@@ -891,7 +932,7 @@ export class LocalPouchDB {
         Logger("Replication completed", showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO, showResult ? "sync" : "");
         this.syncHandler = this.cancelHandler(this.syncHandler);
     }
-    replicationDeniend(e: any) {
+    replicationDenied(e: any) {
         this.syncStatus = "ERRORED";
         this.updateInfo();
         this.syncHandler = this.cancelHandler(this.syncHandler);
@@ -902,6 +943,8 @@ export class LocalPouchDB {
         this.syncStatus = "ERRORED";
         this.syncHandler = this.cancelHandler(this.syncHandler);
         this.updateInfo();
+        Logger("Replication error", LOG_LEVEL.NOTICE, "sync");
+        Logger(e);
     }
     replicationPaused() {
         this.syncStatus = "PAUSED";
@@ -915,13 +958,13 @@ export class LocalPouchDB {
         callback: (e: PouchDB.Core.ExistingDocument<EntryDoc>[]) => Promise<void>,
         retrying: boolean,
         callbackDone: (e: boolean | any) => void,
-        syncmode: "sync" | "pullOnly" | "pushOnly"
+        syncMode: "sync" | "pullOnly" | "pushOnly"
     ): Promise<boolean> {
         if (this.syncHandler != null) {
             Logger("Replication is already in progress.", showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO, "sync");
             return;
         }
-        Logger(`Oneshot Sync begin... (${syncmode})`);
+        Logger(`Oneshot Sync begin... (${syncMode})`);
         let thisCallback = callbackDone;
         const ret = await this.checkReplicationConnectivity(setting, true, retrying, showResult);
         if (ret === false) {
@@ -941,17 +984,17 @@ export class LocalPouchDB {
this.originalSetting = setting;
}
this.syncHandler = this.cancelHandler(this.syncHandler);
if (syncMode == "sync") {
this.syncHandler = this.localDatabase.sync(db, { checkpoint: "target", ...syncOptionBase });
this.syncHandler
.on("change", async (e) => {
await this.replicationChangeDetected(e, showResult, docSentOnStart, docArrivedOnStart, callback);
if (retrying) {
if (this.docSent - docSentOnStart + (this.docArrived - docArrivedOnStart) > this.originalSetting.batch_size * 2) {
// restore configuration.
Logger("Back into original settings once.");
this.syncHandler = this.cancelHandler(this.syncHandler);
this.openOneshotReplication(this.originalSetting, showResult, callback, false, callbackDone, syncMode);
}
}
})
@@ -961,17 +1004,17 @@ export class LocalPouchDB {
thisCallback(true);
}
});
} else if (syncMode == "pullOnly") {
this.syncHandler = this.localDatabase.replicate.from(db, { checkpoint: "target", ...syncOptionBase, ...(this.settings.readChunksOnline ? { filter: "replicate/pull" } : {}) });
this.syncHandler
.on("change", async (e) => {
await this.replicationChangeDetected({ direction: "pull", change: e }, showResult, docSentOnStart, docArrivedOnStart, callback);
if (retrying) {
if (this.docSent - docSentOnStart + (this.docArrived - docArrivedOnStart) > this.originalSetting.batch_size * 2) {
// restore configuration.
Logger("Back into original settings once.");
this.syncHandler = this.cancelHandler(this.syncHandler);
this.openOneshotReplication(this.originalSetting, showResult, callback, false, callbackDone, syncMode);
}
}
})
@@ -981,16 +1024,16 @@ export class LocalPouchDB {
thisCallback(true);
}
});
} else if (syncMode == "pushOnly") {
this.syncHandler = this.localDatabase.replicate.to(db, { checkpoint: "target", ...syncOptionBase, ...(this.settings.readChunksOnline ? { filter: "replicate/push" } : {}) });
this.syncHandler.on("change", async (e) => {
await this.replicationChangeDetected({ direction: "push", change: e }, showResult, docSentOnStart, docArrivedOnStart, callback);
if (retrying) {
if (this.docSent - docSentOnStart + (this.docArrived - docArrivedOnStart) > this.originalSetting.batch_size * 2) {
// restore configuration.
Logger("Back into original settings once.");
this.syncHandler = this.cancelHandler(this.syncHandler);
this.openOneshotReplication(this.originalSetting, showResult, callback, false, callbackDone, syncMode);
}
}
})
@@ -1005,7 +1048,7 @@ export class LocalPouchDB {
this.syncHandler
.on("active", () => this.replicationActivated(showResult))
.on("denied", (e) => {
this.replicationDenied(e);
if (thisCallback != null) {
thisCallback(e);
}
@@ -1015,15 +1058,15 @@ export class LocalPouchDB {
Logger("Replication stopped.", showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO, "sync");
if (getLastPostFailedBySize()) {
// Duplicate settings for smaller batch.
const tempSetting: RemoteDBSettings = JSON.parse(JSON.stringify(setting));
tempSetting.batch_size = Math.ceil(tempSetting.batch_size / 2) + 2;
tempSetting.batches_limit = Math.ceil(tempSetting.batches_limit / 2) + 2;
if (tempSetting.batch_size <= 5 && tempSetting.batches_limit <= 5) {
Logger("We can't replicate with a lower batch size.", showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO);
} else {
Logger(`Retry with lower batch size:${tempSetting.batch_size}/${tempSetting.batches_limit}`, showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO);
thisCallback = null;
this.openOneshotReplication(tempSetting, showResult, callback, true, callbackDone, syncMode);
}
} else {
Logger("Replication error", LOG_LEVEL.NOTICE, "sync");
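The retry path above halves `batch_size` and `batches_limit` (adding 2 to keep a useful minimum) and gives up once both have shrunk to 5 or below. A standalone sketch of that back-off, with a hypothetical `shrinkBatch` helper name:

```typescript
// Halve the batch parameters as the plugin's retry logic does.
// Returns the settings to retry with, or null when no smaller retry is possible.
function shrinkBatch(setting: { batch_size: number; batches_limit: number }):
    { batch_size: number; batches_limit: number } | null {
    const next = {
        batch_size: Math.ceil(setting.batch_size / 2) + 2,
        batches_limit: Math.ceil(setting.batches_limit / 2) + 2,
    };
    // Same stop condition as the plugin: both values at or below 5 means give up.
    if (next.batch_size <= 5 && next.batches_limit <= 5) return null;
    return next;
}
```

Because of the `+ 2`, repeated shrinking converges to 5 rather than collapsing to 1, so the final retry still moves a few documents per batch.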
@@ -1065,7 +1108,7 @@ export class LocalPouchDB {
const docArrivedOnStart = this.docArrived;
const docSentOnStart = this.docSent;
if (!retrying) {
//TODO if successfully saved, roll back org setting.
this.originalSetting = setting;
}
this.syncHandler = this.cancelHandler(this.syncHandler);
@@ -1092,7 +1135,7 @@ export class LocalPouchDB {
}
})
.on("complete", (e) => this.replicationCompleted(showResult))
.on("denied", (e) => this.replicationDenied(e))
.on("error", (e) => {
this.replicationErrored(e);
Logger("Replication stopped.", LOG_LEVEL.NOTICE, "sync");
@@ -1141,7 +1184,7 @@ export class LocalPouchDB {
Logger("Remote Database Destroyed", LOG_LEVEL.NOTICE);
await this.tryCreateRemoteDatabase(setting);
} catch (ex) {
Logger("Something happened on Remote Database Destroy:", LOG_LEVEL.NOTICE);
Logger(ex, LOG_LEVEL.NOTICE);
}
}
@@ -1154,13 +1197,13 @@ export class LocalPouchDB {
}
async markRemoteLocked(setting: RemoteDBSettings, locked: boolean) {
const uri = setting.couchDB_URI + (setting.couchDB_DBNAME == "" ? "" : "/" + setting.couchDB_DBNAME);
const dbRet = await connectRemoteCouchDBWithSetting(setting, this.isMobile);
if (typeof dbRet === "string") {
Logger(`could not connect to ${uri}:${dbRet}`, LOG_LEVEL.NOTICE);
return;
}
if (!(await checkRemoteVersion(dbRet.db, this.migrate.bind(this), VER))) {
Logger("Remote database is newer or corrupted, make sure the latest version of self-hosted-livesync is installed", LOG_LEVEL.NOTICE);
return;
}
@@ -1173,7 +1216,7 @@ export class LocalPouchDB {
node_chunk_info: { [this.nodeid]: currentVersionRange }
};
const remoteMilestone: EntryMilestoneInfo = { ...defInitPoint, ...await resolveWithIgnoreKnownError(dbRet.db.get(MILSTONE_DOCID), defInitPoint) };
remoteMilestone.node_chunk_info = { ...defInitPoint.node_chunk_info, ...remoteMilestone.node_chunk_info };
remoteMilestone.accepted_nodes = [this.nodeid];
remoteMilestone.locked = locked;
@@ -1182,17 +1225,17 @@ export class LocalPouchDB {
} else {
Logger("Unlock remote database to prevent data corruption", LOG_LEVEL.NOTICE);
}
await dbRet.db.put(remoteMilestone);
}
async markRemoteResolved(setting: RemoteDBSettings) {
const uri = setting.couchDB_URI + (setting.couchDB_DBNAME == "" ? "" : "/" + setting.couchDB_DBNAME);
const dbRet = await connectRemoteCouchDBWithSetting(setting, this.isMobile);
if (typeof dbRet === "string") {
Logger(`could not connect to ${uri}:${dbRet}`, LOG_LEVEL.NOTICE);
return;
}
if (!(await checkRemoteVersion(dbRet.db, this.migrate.bind(this), VER))) {
Logger("Remote database is newer or corrupted, make sure the latest version of self-hosted-livesync is installed", LOG_LEVEL.NOTICE);
return;
}
@@ -1205,11 +1248,11 @@ export class LocalPouchDB {
node_chunk_info: { [this.nodeid]: currentVersionRange }
};
// check local database hash status and remote replicate hash status
const remoteMilestone: EntryMilestoneInfo = { ...defInitPoint, ...await resolveWithIgnoreKnownError(dbRet.db.get(MILSTONE_DOCID), defInitPoint) };
remoteMilestone.node_chunk_info = { ...defInitPoint.node_chunk_info, ...remoteMilestone.node_chunk_info };
remoteMilestone.accepted_nodes = Array.from(new Set([...remoteMilestone.accepted_nodes, this.nodeid]));
Logger("Mark this device as 'resolved'.", LOG_LEVEL.NOTICE);
await dbRet.db.put(remoteMilestone);
}
async sanCheck(entry: EntryDoc): Promise<boolean> {
if (entry.type == "plain" || entry.type == "newnote") {
@@ -1272,16 +1315,16 @@ export class LocalPouchDB {
// console.dir(chunks);
let alive = 0;
let unreachable = 0;
for (const chunk of chunks) {
const items = chunk[1];
if (items.size == 0) {
unreachable++;
} else {
alive++;
}
}
Logger(`Garbage checking completed, documents:${docNum}. Used chunks:${alive}, Retained chunks:${unreachable}. Retained chunks will be reused, but you can rebuild the database if you feel there are too many.`, LOG_LEVEL.NOTICE, "gc");
});
return;
}
@@ -1293,4 +1336,57 @@ export class LocalPouchDB {
if (this.minChunkVersion > 0 && this.minChunkVersion > ver) return false;
return true;
}
isTargetFile(file: string) {
if (file.includes(":")) return true;
if (this.settings.syncOnlyRegEx) {
const syncOnly = new RegExp(this.settings.syncOnlyRegEx);
if (!file.match(syncOnly)) return false;
}
if (this.settings.syncIgnoreRegEx) {
const syncIgnore = new RegExp(this.settings.syncIgnoreRegEx);
if (file.match(syncIgnore)) return false;
}
return true;
}
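`isTargetFile` applies two user-supplied regular expressions: a file is processed only if it matches `syncOnlyRegEx` (when set) and does not match `syncIgnoreRegEx` (when set); IDs containing ":" are internal entries and always pass. A minimal standalone version of that check (the free-function form and its name are mine, not the plugin's):

```typescript
// Mirror of the target-selecting filter: `only` must match when given,
// `ignore` must not match when given. IDs with ":" always pass.
function isTarget(file: string, only: string, ignore: string): boolean {
    if (file.includes(":")) return true;
    if (only && !new RegExp(only).test(file)) return false;
    if (ignore && new RegExp(ignore).test(file)) return false;
    return true;
}
```

Note that both expressions are recompiled on every call, as in the plugin; for hot paths the compiled `RegExp` objects could be cached instead.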
// Collect chunks from both local and remote.
async CollectChunks(ids: string[], showResult = false) {
// Fetch local chunks.
const localChunks = await this.localDatabase.allDocs({ keys: ids, include_docs: true });
const missingChunks = localChunks.rows.filter(e => "error" in e).map(e => e.key);
// If we have enough chunks, return them.
if (missingChunks.length == 0) {
return localChunks.rows.map(e => e.doc);
}
// Fetching remote chunks.
const ret = await connectRemoteCouchDBWithSetting(this.settings, this.isMobile);
if (typeof (ret) === "string") {
Logger(`Could not connect to server.${ret} `, showResult ? LOG_LEVEL.NOTICE : LOG_LEVEL.INFO, "fetch");
return false;
}
const remoteChunks = await ret.db.allDocs({ keys: missingChunks, include_docs: true });
if (remoteChunks.rows.some(e => "error" in e)) {
return false;
}
const remoteChunkItems = remoteChunks.rows.map(e => e.doc);
const max = remoteChunkItems.length;
let last = 0;
// Chunks should be ordered by as we requested.
function findChunk(key: string) {
const offset = last;
for (let i = 0; i < max; i++) {
const idx = (offset + i) % max;
last = i;
if (remoteChunkItems[idx]._id == key) return remoteChunkItems[idx];
}
throw Error("Chunk collecting error");
}
// Merge them
return localChunks.rows.map(e => ("error" in e) ? (findChunk(e.key)) : e.doc);
}
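`CollectChunks` above asks the local database for every requested ID, fetches only the locally-missing ones from the remote, and then merges the two result sets back into the original request order (the rotating `findChunk` lookup just avoids rescanning from index 0 each time). The merge step can be sketched independently; this simplified version uses a plain linear `find` and hypothetical names, and stands in for the real PouchDB row objects:

```typescript
// Merge locally-found chunks with remotely-fetched ones, preserving the
// requested key order. `local` maps each key to its doc, or null when the
// key was missing locally; `remote` holds the fetched missing docs in any order.
function mergeChunks(
    keys: string[],
    local: Map<string, { _id: string } | null>,
    remote: { _id: string }[]
): { _id: string }[] {
    return keys.map((key) => {
        const doc = local.get(key);
        if (doc) return doc; // present locally: prefer the local copy
        const found = remote.find((r) => r._id === key);
        if (!found) throw new Error("Chunk collecting error");
        return found;
    });
}
```

Preferring local chunks over remote ones matches the behaviour the commit log describes ("Use local chunks in preference to remote").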
}


@@ -49,7 +49,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
<label class='sls-setting-label'><input type='radio' name='disp' value='60' class='sls-setting-tab' ><div class='sls-setting-menu-btn'>🔌</div></label>
<label class='sls-setting-label'><input type='radio' name='disp' value='70' class='sls-setting-tab' ><div class='sls-setting-menu-btn'>🚑</div></label>
`;
const menuTabs = w.querySelectorAll(".sls-setting-label");
const changeDisplay = (screen: string) => {
for (const k in screenElements) {
if (k == screen) {
@@ -59,11 +59,11 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
}
}
};
menuTabs.forEach((element) => {
const e = element.querySelector(".sls-setting-tab");
if (!e) return;
e.addEventListener("change", (event) => {
menuTabs.forEach((element) => element.removeClass("selected"));
changeDisplay((event.currentTarget as HTMLInputElement).value);
element.addClass("selected");
});
@@ -115,12 +115,12 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
};
const applyDisplayEnabled = () => {
if (isAnySyncEnabled()) {
dbSettings.forEach((e) => {
e.setDisabled(true).setTooltip("This cannot be changed while any synchronization option is enabled.");
});
syncWarn.removeClass("sls-hidden");
} else {
dbSettings.forEach((e) => {
e.setDisabled(false).setTooltip("");
});
syncWarn.addClass("sls-hidden");
@@ -149,8 +149,8 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
}
};
const dbSettings: Setting[] = [];
dbSettings.push(
new Setting(containerRemoteDatabaseEl).setName("URI").addText((text) =>
text
.setPlaceholder("https://........")
@@ -201,11 +201,11 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
.addToggle((toggle) =>
toggle.setValue(this.plugin.settings.workingEncrypt).onChange(async (value) => {
this.plugin.settings.workingEncrypt = value;
passphrase.setDisabled(!value);
await this.plugin.saveSettings();
})
);
const passphrase = new Setting(containerRemoteDatabaseEl)
.setName("Passphrase")
.setDesc("Encrypting passphrase. If you change the passphrase of an existing database, overwriting the remote database is strongly recommended.")
.addText((text) => {
@@ -217,7 +217,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
});
text.inputEl.setAttribute("type", "password");
});
passphrase.setDisabled(!this.plugin.settings.workingEncrypt);
const checkWorkingPassphrase = async (): Promise<boolean> => {
const settingForCheck: RemoteDBSettings = {
...this.plugin.settings,
@@ -417,7 +417,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
const res = await requestToCouchDB(this.plugin.settings.couchDB_URI, this.plugin.settings.couchDB_USER, this.plugin.settings.couchDB_PASSWORD, undefined, key, value);
console.dir(res);
if (res.status == 200) {
Logger(`${title} successfully updated`, LOG_LEVEL.NOTICE);
checkResultDiv.removeChild(x);
checkConfig();
} else {
@@ -469,6 +469,22 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
} else {
addResult("✔ httpd.enable_cors is ok.");
}
// If the server is not cloudant, configure request size
if (!this.plugin.settings.couchDB_URI.contains(".cloudantnosqldb.")) {
// REQUEST SIZE
if (Number(responseConfig?.chttpd?.max_http_request_size ?? 0) < 4294967296) {
addResult("❗ chttpd.max_http_request_size is low");
addConfigFixButton("Set chttpd.max_http_request_size", "chttpd/max_http_request_size", "4294967296");
} else {
addResult("✔ chttpd.max_http_request_size is ok.");
}
if (Number(responseConfig?.couchdb?.max_document_size ?? 0) < 50000000) {
addResult("❗ couchdb.max_document_size is low");
addConfigFixButton("Set couchdb.max_document_size", "couchdb/max_document_size", "50000000");
} else {
addResult("✔ couchdb.max_document_size is ok.");
}
}
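Both checks above compare a numeric CouchDB configuration value (which the `_config` endpoint returns as a string, and which may be absent) against a required minimum, treating a missing value as 0. A hedged sketch of that comparison idiom, with a helper name of my own:

```typescript
// True when the configured value (a string from the config endpoint,
// possibly absent) already meets the required minimum; absent counts as 0.
function meetsMinimum(configured: string | undefined, minimum: number): boolean {
    return Number(configured ?? 0) >= minimum;
}
```

With this, the request-size check reads as `meetsMinimum(responseConfig?.chttpd?.max_http_request_size, 4294967296)` and only offers the fix button when it returns false.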
// CORS check
// checking connectivity for mobile
if (responseConfig?.cors?.credentials != "true") {
@@ -515,10 +531,10 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
addResult("✔ CORS origin OK");
}
}
addResult("--Done--", ["ob-btn-config-head"]);
addResult("If you have trouble with the connection check even though all configuration checks have passed, please check your reverse proxy's configuration.", ["ob-btn-config-info"]);
} catch (ex) {
Logger(`Checking configuration failed`);
Logger(ex);
}
};
@@ -583,43 +599,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
})
)
containerLocalDatabaseEl.createEl("div", {
text: sanitizeHTMLToDom(`Advanced settings<br>
Configuration of how LiveSync makes chunks from the file.`),
});
new Setting(containerLocalDatabaseEl)
.setName("Minimum chunk size")
.setDesc("(letters), minimum chunk size.")
.addText((text) => {
text.setPlaceholder("")
.setValue(this.plugin.settings.minimumChunkSize + "")
.onChange(async (value) => {
let v = Number(value);
if (isNaN(v) || v < 10 || v > 1000) {
v = 10;
}
this.plugin.settings.minimumChunkSize = v;
await this.plugin.saveSettings();
});
text.inputEl.setAttribute("type", "number");
});
new Setting(containerLocalDatabaseEl)
.setName("LongLine Threshold")
.setDesc("(letters), If the line is longer than this, make the line to chunk")
.addText((text) => {
text.setPlaceholder("")
.setValue(this.plugin.settings.longLineThreshold + "")
.onChange(async (value) => {
let v = Number(value);
if (isNaN(v) || v < 10 || v > 1000) {
v = 10;
}
this.plugin.settings.longLineThreshold = v;
await this.plugin.saveSettings();
});
text.inputEl.setAttribute("type", "number");
});
let newDatabaseName = this.plugin.settings.additionalSuffixOfDatabaseName + "";
new Setting(containerLocalDatabaseEl)
.setName("Database suffix")
@@ -652,7 +632,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
new Setting(containerGeneralSettingsEl)
.setName("Do not show low-priority Log")
.setDesc("Reduce log information")
.addToggle((toggle) =>
toggle.setValue(this.plugin.settings.lessInformationInLog).onChange(async (value) => {
this.plugin.settings.lessInformationInLog = value;
@@ -661,7 +641,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
);
new Setting(containerGeneralSettingsEl)
.setName("Verbose Log")
.setDesc("Show verbose log")
.addToggle((toggle) =>
toggle.setValue(this.plugin.settings.showVerboseLog).onChange(async (value) => {
this.plugin.settings.showVerboseLog = value;
@@ -810,15 +790,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
})
);
// new Setting(containerSyncSettingEl)
// .setName("Skip old files on sync")
// .setDesc("Skip old incoming if incoming changes older than storage.")
// .addToggle((toggle) =>
// toggle.setValue(this.plugin.settings.skipOlderFilesOnSync).onChange(async (value) => {
// this.plugin.settings.skipOlderFilesOnSync = value;
// await this.plugin.saveSettings();
// })
// );
new Setting(containerSyncSettingEl)
.setName("Check conflict only on opened files")
.setDesc("Do not check conflict for replication")
@@ -829,9 +800,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
})
);
containerSyncSettingEl.createEl("h3", {
text: sanitizeHTMLToDom(`Experimental`),
});
new Setting(containerSyncSettingEl)
.setName("Sync hidden files")
.addToggle((toggle) =>
@@ -926,6 +895,86 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
})
)
containerSyncSettingEl.createEl("h3", {
text: sanitizeHTMLToDom(`Experimental`),
});
new Setting(containerSyncSettingEl)
.setName("Regular expression to ignore files")
.setDesc("If this is set, any changes to local and remote files that match this will be skipped.")
.addTextArea((text) => {
text
.setValue(this.plugin.settings.syncIgnoreRegEx)
.setPlaceholder("\\.pdf$")
.onChange(async (value) => {
let isValidRegExp = false;
try {
new RegExp(value);
isValidRegExp = true;
} catch (_) {
// NO OP.
}
if (isValidRegExp || value.trim() == "") {
this.plugin.settings.syncIgnoreRegEx = value;
await this.plugin.saveSettings();
}
})
return text;
}
);
new Setting(containerSyncSettingEl)
.setName("Regular expression for restricting synchronization targets")
.setDesc("If this is set, changes to local and remote files that only match this will be processed.")
.addTextArea((text) => {
text
.setValue(this.plugin.settings.syncOnlyRegEx)
.setPlaceholder("\\.md$|\\.txt")
.onChange(async (value) => {
let isValidRegExp = false;
try {
new RegExp(value);
isValidRegExp = true;
} catch (_) {
// NO OP.
}
if (isValidRegExp || value.trim() == "") {
this.plugin.settings.syncOnlyRegEx = value;
await this.plugin.saveSettings();
}
})
return text;
}
);
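Both regular-expression settings above persist the value only when it compiles (or is blank), so a half-typed pattern never breaks synchronization. The validation idiom can be isolated like this (function name is mine):

```typescript
// A setting value is acceptable when it is empty/whitespace or compiles
// as a RegExp; invalid patterns are rejected instead of being saved.
function isAcceptableRegex(value: string): boolean {
    if (value.trim() === "") return true;
    try {
        new RegExp(value);
        return true;
    } catch {
        return false;
    }
}
```

The `onChange` handlers then save `value` only when this returns true, leaving the previous setting untouched otherwise.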
new Setting(containerSyncSettingEl)
.setName("Chunk size")
.setDesc("Customize chunk size for binary files (in units of 0.1 MB). This cannot be increased when using IBM Cloudant.")
.addText((text) => {
text.setPlaceholder("")
.setValue(this.plugin.settings.customChunkSize + "")
.onChange(async (value) => {
let v = Number(value);
if (isNaN(v) || v < 100) {
v = 100;
}
this.plugin.settings.customChunkSize = v;
await this.plugin.saveSettings();
});
text.inputEl.setAttribute("type", "number");
});
new Setting(containerSyncSettingEl)
.setName("Read chunks online.")
.setDesc("If this option is enabled, LiveSync reads chunks online directly instead of replicating them locally. Increasing Custom chunk size is recommended.")
.addToggle((toggle) => {
toggle
.setValue(this.plugin.settings.readChunksOnline)
.onChange(async (value) => {
this.plugin.settings.readChunksOnline = value;
await this.plugin.saveSettings();
})
return toggle;
}
);
containerSyncSettingEl.createEl("h3", {
text: sanitizeHTMLToDom(`Advanced settings`),
});
@@ -1064,7 +1113,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
c.addClass("op-warn");
}
}
const hatchWarn = containerHatchEl.createEl("div", { text: `To stop the boot-up sequence for fixing problems on databases, you can put redflag.md at the root of your vault (restarting Obsidian is required).` });
hatchWarn.addClass("op-warn-info");
new Setting(containerHatchEl)
@@ -1179,7 +1228,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
new Setting(containerHatchEl)
.setName("Drop old encrypted database")
.setDesc("WARNING: Please use this button only when converting the old-style local database failed at v0.10.0.")
.addButton((button) =>
button
.setButtonText("Drop")
@@ -1193,7 +1242,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
addScreenElement("50", containerHatchEl);
// With great respect, thank you TfTHacker!
// Refer: https://github.com/TfTHacker/obsidian42-brat/blob/main/src/features/BetaPlugins.ts
const containerPluginSettings = containerEl.createDiv();
containerPluginSettings.createEl("h3", { text: "Plugins and settings (beta)" });
Submodule src/lib updated: a49a096a6a...aacfa353a9
@@ -17,8 +17,8 @@ import {
setNoticeClass, setNoticeClass,
NewNotice, NewNotice,
getLocks, getLocks,
Parallels,
WrappedNotice, WrappedNotice,
Semaphore,
} from "./lib/src/utils"; } from "./lib/src/utils";
import { Logger, setLogger } from "./lib/src/logger"; import { Logger, setLogger } from "./lib/src/logger";
import { LocalPouchDB } from "./LocalPouchDB"; import { LocalPouchDB } from "./LocalPouchDB";
@@ -29,7 +29,7 @@ import { DocumentHistoryModal } from "./DocumentHistoryModal";
import { clearAllPeriodic, clearAllTriggers, clearTrigger, disposeMemoObject, id2path, memoIfNotExist, memoObject, path2id, retriveMemoObject, setTrigger } from "./utils"; import { clearAllPeriodic, clearAllTriggers, clearTrigger, disposeMemoObject, id2path, memoIfNotExist, memoObject, path2id, retrieveMemoObject, setTrigger } from "./utils";
import { decrypt, encrypt } from "./lib/src/e2ee_v2"; import { decrypt, encrypt } from "./lib/src/e2ee_v2";
const isDebug = false; const isDebug = false;
@@ -48,7 +48,7 @@ const ICHeaderLength = ICHeader.length;
* @param str ID * @param str ID
* @returns * @returns
*/ */
function isInteralChunk(str: string): boolean { function isInternalChunk(str: string): boolean {
return str.startsWith(ICHeader); return str.startsWith(ICHeader);
} }
function id2filenameInternalChunk(str: string): string { function id2filenameInternalChunk(str: string): string {
@@ -185,7 +185,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const doc = row.doc; const doc = row.doc;
nextKey = `${row.id}\u{10ffff}`; nextKey = `${row.id}\u{10ffff}`;
if (!("_conflicts" in doc)) continue; if (!("_conflicts" in doc)) continue;
if (isInteralChunk(row.id)) continue; if (isInternalChunk(row.id)) continue;
if (doc._deleted) continue; if (doc._deleted) continue;
if ("deleted" in doc && doc.deleted) continue; if ("deleted" in doc && doc.deleted) continue;
if (doc.type == "newnote" || doc.type == "plain") { if (doc.type == "newnote" || doc.type == "plain") {
@@ -206,7 +206,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
const target = await askSelectString(this.app, "File to view History", notesList); const target = await askSelectString(this.app, "File to view History", notesList);
if (target) { if (target) {
if (isInteralChunk(target)) { if (isInternalChunk(target)) {
//NOP //NOP
} else { } else {
await this.showIfConflicted(this.app.vault.getAbstractFileByPath(target) as TFile); await this.showIfConflicted(this.app.vault.getAbstractFileByPath(target) as TFile);
@@ -224,8 +224,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
Logger(`Self-hosted LiveSync v${manifestVersion} ${packageVersion} `); Logger(`Self-hosted LiveSync v${manifestVersion} ${packageVersion} `);
const lsname = "obsidian-live-sync-ver" + this.getVaultName(); const lsKey = "obsidian-live-sync-ver" + this.getVaultName();
const last_version = localStorage.getItem(lsname); const last_version = localStorage.getItem(lsKey);
await this.loadSettings(); await this.loadSettings();
const lastVersion = ~~(versionNumberString2Number(manifestVersion) / 1000); const lastVersion = ~~(versionNumberString2Number(manifestVersion) / 1000);
if (lastVersion > this.settings.lastReadUpdates) { if (lastVersion > this.settings.lastReadUpdates) {
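The `lastVersion` computation above packs a dotted version string into a number and truncates with `~~(n / 1000)`. The real `versionNumberString2Number` lives in the plugin's `lib/src` submodule and is not shown in this diff; the sketch below is a hypothetical packing ("major.minor.patch" in base 1000) chosen so that dividing by 1000 drops the patch component, matching how the code compares only up to the minor version.

```typescript
// Hypothetical sketch: the actual versionNumberString2Number is defined in
// lib/src and its packing may differ. Here "a.b.c" -> a*1000000 + b*1000 + c.
function versionNumberString2Number(version: string): number {
    return version
        .split(".")
        .reverse()
        .map(Number)
        .reduce((acc, part, index) => acc + part * Math.pow(1000, index), 0);
}

// As in the diff: truncate away the patch part before comparing versions.
const lastVersion = ~~(versionNumberString2Number("0.10.1") / 1000);
// "0.10.1" -> 10001, so lastVersion becomes 10
```

With this packing, two releases that differ only in the patch number compare equal, which is why only breaking (minor/major) upgrades trigger the `versionUpFlash` notice.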
@@ -245,7 +245,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.settings.versionUpFlash = "Self-hosted LiveSync has been upgraded and some behaviors have changed incompatibly. All automatic synchronization is now disabled temporary. Ensure that other devices are also upgraded, and enable synchronization again."; this.settings.versionUpFlash = "Self-hosted LiveSync has been upgraded and some behaviors have changed incompatibly. All automatic synchronization is now disabled temporary. Ensure that other devices are also upgraded, and enable synchronization again.";
this.saveSettings(); this.saveSettings();
} }
localStorage.setItem(lsname, `${VER}`); localStorage.setItem(lsKey, `${VER}`);
await this.openDatabase(); await this.openDatabase();
addIcon( addIcon(
@@ -286,7 +286,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.watchVaultDelete = this.watchVaultDelete.bind(this); this.watchVaultDelete = this.watchVaultDelete.bind(this);
this.watchVaultRename = this.watchVaultRename.bind(this); this.watchVaultRename = this.watchVaultRename.bind(this);
this.watchWorkspaceOpen = debounce(this.watchWorkspaceOpen.bind(this), 1000, false); this.watchWorkspaceOpen = debounce(this.watchWorkspaceOpen.bind(this), 1000, false);
this.watchWindowVisiblity = debounce(this.watchWindowVisiblity.bind(this), 1000, false); this.watchWindowVisibility = debounce(this.watchWindowVisibility.bind(this), 1000, false);
this.watchOnline = debounce(this.watchOnline.bind(this), 500, false);
this.parseReplicationResult = this.parseReplicationResult.bind(this); this.parseReplicationResult = this.parseReplicationResult.bind(this);
@@ -320,8 +321,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (this.settings.suspendFileWatching) { if (this.settings.suspendFileWatching) {
Logger("'Suspend file watching' turned on. Are you sure this is what you intended? Every modification on the vault will be ignored.", LOG_LEVEL.NOTICE); Logger("'Suspend file watching' turned on. Are you sure this is what you intended? Every modification on the vault will be ignored.", LOG_LEVEL.NOTICE);
} }
const isInitalized = await this.initializeDatabase(); const isInitialized = await this.initializeDatabase();
if (!isInitalized) { if (!isInitialized) {
//TODO:stop all sync. //TODO:stop all sync.
return false; return false;
} }
@@ -361,19 +362,19 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
const config = decodeURIComponent(setupURI.substring(configURIBase.length)); const config = decodeURIComponent(setupURI.substring(configURIBase.length));
console.dir(config) console.dir(config)
await setupwizard(config); await setupWizard(config);
}, },
}); });
const setupwizard = async (confString: string) => { const setupWizard = async (confString: string) => {
try { try {
const oldConf = JSON.parse(JSON.stringify(this.settings)); const oldConf = JSON.parse(JSON.stringify(this.settings));
const encryptingPassphrase = await askString(this.app, "Passphrase", "Passphrase for your settings", ""); const encryptingPassphrase = await askString(this.app, "Passphrase", "Passphrase for your settings", "");
if (encryptingPassphrase === false) return; if (encryptingPassphrase === false) return;
const newconf = await JSON.parse(await decrypt(confString, encryptingPassphrase)); const newConf = await JSON.parse(await decrypt(confString, encryptingPassphrase));
if (newconf) { if (newConf) {
const result = await askYesNo(this.app, "Importing LiveSync's conf, OK?"); const result = await askYesNo(this.app, "Importing LiveSync's conf, OK?");
if (result == "yes") { if (result == "yes") {
const newSettingW = Object.assign({}, this.settings, newconf); const newSettingW = Object.assign({}, this.settings, newConf);
// stopping once. // stopping once.
this.localDatabase.closeReplication(); this.localDatabase.closeReplication();
this.settings.suspendFileWatching = true; this.settings.suspendFileWatching = true;
@@ -401,6 +402,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
} }
let initDB; let initDB;
this.settings = newSettingW;
await this.saveSettings(); await this.saveSettings();
if (keepLocalDB == "no") { if (keepLocalDB == "no") {
this.resetLocalOldDatabase(); this.resetLocalOldDatabase();
@@ -437,7 +439,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
}; };
this.registerObsidianProtocolHandler("setuplivesync", async (conf: any) => { this.registerObsidianProtocolHandler("setuplivesync", async (conf: any) => {
await setupwizard(conf.settings); await setupWizard(conf.settings);
}); });
this.addCommand({ this.addCommand({
id: "livesync-replicate", id: "livesync-replicate",
@@ -448,7 +450,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}); });
this.addCommand({ this.addCommand({
id: "livesync-dump", id: "livesync-dump",
name: "Dump informations of this doc ", name: "Dump information of this doc ",
editorCallback: (editor: Editor, view: MarkdownView) => { editorCallback: (editor: Editor, view: MarkdownView) => {
this.localDatabase.getDBEntry(view.file.path, {}, true, false); this.localDatabase.getDBEntry(view.file.path, {}, true, false);
}, },
@@ -504,6 +506,13 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.showHistory(view.file); this.showHistory(view.file);
}, },
}); });
this.addCommand({
id: "livesync-scan-files",
name: "Scan storage and database again",
callback: async () => {
await this.syncAllFiles(true)
}
})
this.triggerRealizeSettingSyncMode = debounce(this.triggerRealizeSettingSyncMode.bind(this), 1000); this.triggerRealizeSettingSyncMode = debounce(this.triggerRealizeSettingSyncMode.bind(this), 1000);
this.triggerCheckPluginUpdate = debounce(this.triggerCheckPluginUpdate.bind(this), 3000); this.triggerCheckPluginUpdate = debounce(this.triggerCheckPluginUpdate.bind(this), 3000);
@@ -534,14 +543,14 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}); });
this.addCommand({ this.addCommand({
id: "livesync-conflictcheck", id: "livesync-conflictcheck",
name: "Pick a file to resolive conflict", name: "Pick a file to resolve conflict",
callback: () => { callback: () => {
this.pickFileForResolve(); this.pickFileForResolve();
}, },
}) })
this.addCommand({ this.addCommand({
id: "livesync-runbatch", id: "livesync-runbatch",
name: "Run pending batch processes", name: "Run pended batch processes",
callback: async () => { callback: async () => {
await this.applyBatchChange(); await this.applyBatchChange();
}, },
@@ -585,7 +594,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
clearAllPeriodic(); clearAllPeriodic();
clearAllTriggers(); clearAllTriggers();
window.removeEventListener("visibilitychange", this.watchWindowVisiblity); window.removeEventListener("visibilitychange", this.watchWindowVisibility);
window.removeEventListener("online", this.watchOnline)
Logger("unloading plugin"); Logger("unloading plugin");
} }
@@ -620,15 +630,15 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
// So, use history is always enabled. // So, use history is always enabled.
this.settings.useHistory = true; this.settings.useHistory = true;
const lsname = "obsidian-live-sync-vaultanddevicename-" + this.getVaultName(); const lsKey = "obsidian-live-sync-vaultanddevicename-" + this.getVaultName();
if (this.settings.deviceAndVaultName != "") { if (this.settings.deviceAndVaultName != "") {
if (!localStorage.getItem(lsname)) { if (!localStorage.getItem(lsKey)) {
this.deviceAndVaultName = this.settings.deviceAndVaultName; this.deviceAndVaultName = this.settings.deviceAndVaultName;
localStorage.setItem(lsname, this.deviceAndVaultName); localStorage.setItem(lsKey, this.deviceAndVaultName);
this.settings.deviceAndVaultName = ""; this.settings.deviceAndVaultName = "";
} }
} }
this.deviceAndVaultName = localStorage.getItem(lsname) || ""; this.deviceAndVaultName = localStorage.getItem(lsKey) || "";
} }
triggerRealizeSettingSyncMode() { triggerRealizeSettingSyncMode() {
@@ -636,9 +646,9 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
async saveSettings() { async saveSettings() {
const lsname = "obsidian-live-sync-vaultanddevicename-" + this.getVaultName(); const lsKey = "obsidian-live-sync-vaultanddevicename-" + this.getVaultName();
localStorage.setItem(lsname, this.deviceAndVaultName || ""); localStorage.setItem(lsKey, this.deviceAndVaultName || "");
await this.saveData(this.settings); await this.saveData(this.settings);
this.localDatabase.settings = this.settings; this.localDatabase.settings = this.settings;
this.triggerRealizeSettingSyncMode(); this.triggerRealizeSettingSyncMode();
@@ -666,14 +676,26 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.registerEvent(this.app.vault.on("rename", this.watchVaultRename)); this.registerEvent(this.app.vault.on("rename", this.watchVaultRename));
this.registerEvent(this.app.vault.on("create", this.watchVaultCreate)); this.registerEvent(this.app.vault.on("create", this.watchVaultCreate));
this.registerEvent(this.app.workspace.on("file-open", this.watchWorkspaceOpen)); this.registerEvent(this.app.workspace.on("file-open", this.watchWorkspaceOpen));
window.addEventListener("visibilitychange", this.watchWindowVisiblity); window.addEventListener("visibilitychange", this.watchWindowVisibility);
window.addEventListener("online", this.watchOnline);
} }
watchWindowVisiblity() {
this.watchWindowVisiblityAsync(); watchOnline() {
this.watchOnlineAsync();
}
async watchOnlineAsync() {
// If some files were failed to retrieve, scan files again.
if (navigator.onLine && this.localDatabase.needScanning) {
this.localDatabase.needScanning = false;
await this.syncAllFiles();
}
}
watchWindowVisibility() {
this.watchWindowVisibilityAsync();
} }
async watchWindowVisiblityAsync() { async watchWindowVisibilityAsync() {
if (this.settings.suspendFileWatching) return; if (this.settings.suspendFileWatching) return;
// if (this.suspended) return; // if (this.suspended) return;
const isHidden = document.hidden; const isHidden = document.hidden;
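The new `watchOnline` handler above rescans all files once connectivity returns, driven by a `needScanning` flag that `LocalPouchDB` sets when a retrieval fails. The sketch below isolates that flag-and-rescan logic as a plain class so it can run outside Obsidian; the names and the counter are illustrative, and the real code reads `navigator.onLine` and calls `this.syncAllFiles()`.

```typescript
// Illustrative sketch of the reconnect-rescan pattern (not the plugin's API).
class RescanOnReconnect {
    needScanning = false; // set when a chunk/file fetch fails while offline
    scans = 0;            // stand-in for how many rescans ran

    // Called when a retrieval fails (the real flag lives on LocalPouchDB).
    markFailed() {
        this.needScanning = true;
    }

    // Called from the debounced "online" event handler.
    async onOnline(isOnline: boolean) {
        if (isOnline && this.needScanning) {
            this.needScanning = false; // clear first; a concurrent failure re-arms it
            await this.syncAllFiles();
        }
    }

    private async syncAllFiles() {
        this.scans++; // the real method re-walks storage and the database
    }
}
```

Clearing the flag before the rescan means a failure that happens during the rescan re-arms it, so the next "online" event triggers another pass instead of being lost.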
@@ -718,6 +740,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
watchVaultCreate(file: TFile, ...args: any[]) { watchVaultCreate(file: TFile, ...args: any[]) {
if (!this.isTargetFile(file)) return;
if (this.settings.suspendFileWatching) return; if (this.settings.suspendFileWatching) return;
if (recentlyTouched(file)) { if (recentlyTouched(file)) {
return; return;
@@ -726,6 +749,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
watchVaultChange(file: TAbstractFile, ...args: any[]) { watchVaultChange(file: TAbstractFile, ...args: any[]) {
if (!this.isTargetFile(file)) return;
if (!(file instanceof TFile)) { if (!(file instanceof TFile)) {
return; return;
} }
@@ -734,7 +758,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
if (this.settings.suspendFileWatching) return; if (this.settings.suspendFileWatching) return;
// If batchsave is enabled, queue all changes and do nothing. // If batchSave is enabled, queue all changes and do nothing.
if (this.settings.batchSave) { if (this.settings.batchSave) {
~(async () => { ~(async () => {
const meta = await this.localDatabase.getDBEntryMeta(file.path); const meta = await this.localDatabase.getDBEntryMeta(file.path);
@@ -760,27 +784,25 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
return await runWithLock("batchSave", false, async () => { return await runWithLock("batchSave", false, async () => {
const batchItems = JSON.parse(JSON.stringify(this.batchFileChange)) as string[]; const batchItems = JSON.parse(JSON.stringify(this.batchFileChange)) as string[];
this.batchFileChange = []; this.batchFileChange = [];
const limit = 3; const semaphore = Semaphore(3);
const p = Parallels();
for (const e of batchItems) { const batchProcesses = batchItems.map(e => (async (e) => {
const w = (async () => { const releaser = await semaphore.acquire(1, "batch");
try { try {
const f = this.app.vault.getAbstractFileByPath(normalizePath(e)); const f = this.app.vault.getAbstractFileByPath(normalizePath(e));
if (f && f instanceof TFile) { if (f && f instanceof TFile) {
await this.updateIntoDB(f); await this.updateIntoDB(f);
Logger(`Batch save:${e}`); Logger(`Batch save:${e}`);
}
} catch (ex) {
Logger(`Batch save error:${e}`, LOG_LEVEL.NOTICE);
Logger(ex, LOG_LEVEL.VERBOSE);
} }
})(); } catch (ex) {
p.add(w); Logger(`Batch save error:${e}`, LOG_LEVEL.NOTICE);
await p.wait(limit) Logger(ex, LOG_LEVEL.VERBOSE);
} } finally {
this.refreshStatusText(); releaser();
await p.all(); }
})(e))
await Promise.all(batchProcesses);
this.refreshStatusText(); this.refreshStatusText();
return; return;
}); });
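This hunk replaces the `Parallels` helper with `Semaphore(3)`: every batch item is started at once, but each task must acquire a slot before touching the database and releases it in `finally`. The real `Semaphore` comes from `lib/src/utils` (its `acquire` also takes a count and a tag, e.g. `acquire(1, "batch")`); the stand-in below only reproduces the acquire/release contract to show how the cap works.

```typescript
// Minimal stand-in for lib/src/utils' Semaphore; only the contract matters here.
type Releaser = () => void;

function Semaphore(limit: number) {
    let active = 0;
    const waiting: (() => void)[] = [];
    const tryNext = () => {
        if (active < limit && waiting.length > 0) {
            active++;
            waiting.shift()!();
        }
    };
    return {
        acquire(): Promise<Releaser> {
            return new Promise<Releaser>((resolve) => {
                waiting.push(() =>
                    resolve(() => {
                        active--;
                        tryNext();
                    })
                );
                tryNext();
            });
        },
    };
}

// Shape of the batch-save loop above: start all, cap concurrency at 3.
async function processBatch(items: string[], worker: (item: string) => Promise<void>) {
    const semaphore = Semaphore(3);
    await Promise.all(
        items.map(async (item) => {
            const release = await semaphore.acquire();
            try {
                await worker(item);
            } finally {
                release(); // always free the slot, even if the worker throws
            }
        })
    );
}
```

Compared with the old `Parallels` loop that awaited `p.wait(limit)` per iteration, this formulation lets `Promise.all` own completion while the semaphore alone bounds in-flight work.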
@@ -799,6 +821,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
watchVaultDelete(file: TAbstractFile) { watchVaultDelete(file: TAbstractFile) {
if (!this.isTargetFile(file)) return;
// When save is delayed, it should be cancelled. // When save is delayed, it should be cancelled.
this.batchFileChange = this.batchFileChange.filter((e) => e != file.path); this.batchFileChange = this.batchFileChange.filter((e) => e != file.path);
if (this.settings.suspendFileWatching) return; if (this.settings.suspendFileWatching) return;
@@ -830,6 +853,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
watchVaultRename(file: TAbstractFile, oldFile: any) { watchVaultRename(file: TAbstractFile, oldFile: any) {
if (!this.isTargetFile(file)) return;
if (this.settings.suspendFileWatching) return; if (this.settings.suspendFileWatching) return;
this.watchVaultRenameAsync(file, oldFile).then(() => { }); this.watchVaultRenameAsync(file, oldFile).then(() => { });
} }
@@ -899,32 +923,32 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (this.settings && !this.settings.showVerboseLog && level == LOG_LEVEL.VERBOSE) { if (this.settings && !this.settings.showVerboseLog && level == LOG_LEVEL.VERBOSE) {
return; return;
} }
const valutName = this.getVaultName(); const vaultName = this.getVaultName();
const timestamp = new Date().toLocaleString(); const timestamp = new Date().toLocaleString();
const messagecontent = typeof message == "string" ? message : message instanceof Error ? `${message.name}:${message.message}` : JSON.stringify(message, null, 2); const messageContent = typeof message == "string" ? message : message instanceof Error ? `${message.name}:${message.message}` : JSON.stringify(message, null, 2);
const newmessage = timestamp + "->" + messagecontent; const newMessage = timestamp + "->" + messageContent;
this.logMessage = [].concat(this.logMessage).concat([newmessage]).slice(-100); this.logMessage = [].concat(this.logMessage).concat([newMessage]).slice(-100);
console.log(valutName + ":" + newmessage); console.log(vaultName + ":" + newMessage);
this.setStatusBarText(null, messagecontent.substring(0, 30)); this.setStatusBarText(null, messageContent.substring(0, 30));
// if (message instanceof Error) { // if (message instanceof Error) {
// console.trace(message); // console.trace(message);
// } // }
if (level >= LOG_LEVEL.NOTICE) { if (level >= LOG_LEVEL.NOTICE) {
if (!key) key = messagecontent; if (!key) key = messageContent;
if (key in this.notifies) { if (key in this.notifies) {
// @ts-ignore // @ts-ignore
const isShown = this.notifies[key].notice.noticeEl?.isShown() const isShown = this.notifies[key].notice.noticeEl?.isShown()
if (!isShown) { if (!isShown) {
this.notifies[key].notice = new Notice(messagecontent, 0); this.notifies[key].notice = new Notice(messageContent, 0);
} }
clearTimeout(this.notifies[key].timer); clearTimeout(this.notifies[key].timer);
if (key == messagecontent) { if (key == messageContent) {
this.notifies[key].count++; this.notifies[key].count++;
this.notifies[key].notice.setMessage(`(${this.notifies[key].count}):${messagecontent}`); this.notifies[key].notice.setMessage(`(${this.notifies[key].count}):${messageContent}`);
} else { } else {
this.notifies[key].notice.setMessage(`${messagecontent}`); this.notifies[key].notice.setMessage(`${messageContent}`);
} }
this.notifies[key].timer = setTimeout(() => { this.notifies[key].timer = setTimeout(() => {
@@ -937,7 +961,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
}, 5000); }, 5000);
} else { } else {
const notify = new Notice(messagecontent, 0); const notify = new Notice(messageContent, 0);
this.notifies[key] = { this.notifies[key] = {
count: 0, count: 0,
notice: notify, notice: notify,
@@ -951,8 +975,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (this.addLogHook != null) this.addLogHook(); if (this.addLogHook != null) this.addLogHook();
} }
async ensureDirectory(fullpath: string) { async ensureDirectory(fullPath: string) {
const pathElements = fullpath.split("/"); const pathElements = fullPath.split("/");
pathElements.pop(); pathElements.pop();
let c = ""; let c = "";
for (const v of pathElements) { for (const v of pathElements) {
@@ -962,7 +986,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} catch (ex) { } catch (ex) {
// basically skip exceptions. // basically skip exceptions.
if (ex.message && ex.message == "Folder already exists.") { if (ex.message && ex.message == "Folder already exists.") {
// especialy this message is. // especially this message is.
} else { } else {
Logger("Folder Create Error"); Logger("Folder Create Error");
Logger(ex); Logger(ex);
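`ensureDirectory` above walks a vault path one segment at a time, attempting `createFolder` at each level and treating "Folder already exists." as a no-op. The pure helper below (an illustration, not part of the plugin) extracts just the path walk, returning each ancestor folder shallowest-first:

```typescript
// Illustrative: yields every ancestor folder of a vault path, shallowest
// first, mirroring the loop inside ensureDirectory.
function ancestorFolders(fullPath: string): string[] {
    const pathElements = fullPath.split("/");
    pathElements.pop(); // drop the file name itself
    const folders: string[] = [];
    let current = "";
    for (const element of pathElements) {
        current += element + "/";
        folders.push(current);
    }
    return folders;
}
```

Creating shallowest-first is what makes the per-level "already exists" skip safe: by the time a deeper folder is attempted, all of its parents are guaranteed to exist.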
@@ -977,6 +1001,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (shouldBeIgnored(pathSrc)) { if (shouldBeIgnored(pathSrc)) {
return; return;
} }
if (!this.isTargetFile(pathSrc)) return;
const doc = await this.localDatabase.getDBEntry(pathSrc, { rev: docEntry._rev }); const doc = await this.localDatabase.getDBEntry(pathSrc, { rev: docEntry._rev });
if (doc === false) return; if (doc === false) return;
const msg = `DB -> STORAGE (create${force ? ",force" : ""},${doc.datatype}) `; const msg = `DB -> STORAGE (create${force ? ",force" : ""},${doc.datatype}) `;
@@ -990,14 +1016,14 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
await this.ensureDirectory(path); await this.ensureDirectory(path);
try { try {
const newfile = await this.app.vault.createBinary(normalizePath(path), bin, { const newFile = await this.app.vault.createBinary(normalizePath(path), bin, {
ctime: doc.ctime, ctime: doc.ctime,
mtime: doc.mtime, mtime: doc.mtime,
}); });
this.batchFileChange = this.batchFileChange.filter((e) => e != newfile.path); this.batchFileChange = this.batchFileChange.filter((e) => e != newFile.path);
Logger(msg + path); Logger(msg + path);
touch(newfile); touch(newFile);
this.app.vault.trigger("create", newfile); this.app.vault.trigger("create", newFile);
} catch (ex) { } catch (ex) {
Logger(msg + "ERROR, Could not write: " + path, LOG_LEVEL.NOTICE); Logger(msg + "ERROR, Could not write: " + path, LOG_LEVEL.NOTICE);
Logger(ex, LOG_LEVEL.VERBOSE); Logger(ex, LOG_LEVEL.VERBOSE);
@@ -1010,14 +1036,14 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
await this.ensureDirectory(path); await this.ensureDirectory(path);
try { try {
const newfile = await this.app.vault.create(normalizePath(path), doc.data, { const newFile = await this.app.vault.create(normalizePath(path), doc.data, {
ctime: doc.ctime, ctime: doc.ctime,
mtime: doc.mtime, mtime: doc.mtime,
}); });
this.batchFileChange = this.batchFileChange.filter((e) => e != newfile.path); this.batchFileChange = this.batchFileChange.filter((e) => e != newFile.path);
Logger(msg + path); Logger(msg + path);
touch(newfile); touch(newFile);
this.app.vault.trigger("create", newfile); this.app.vault.trigger("create", newFile);
} catch (ex) { } catch (ex) {
Logger(msg + "ERROR, Could not parse: " + path + "(" + doc.datatype + ")", LOG_LEVEL.NOTICE); Logger(msg + "ERROR, Could not parse: " + path + "(" + doc.datatype + ")", LOG_LEVEL.NOTICE);
Logger(ex, LOG_LEVEL.VERBOSE); Logger(ex, LOG_LEVEL.VERBOSE);
@@ -1028,6 +1054,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
async deleteVaultItem(file: TFile | TFolder) { async deleteVaultItem(file: TFile | TFolder) {
if (!this.isTargetFile(file)) return;
const dir = file.parent; const dir = file.parent;
if (this.settings.trashInsteadDelete) { if (this.settings.trashInsteadDelete) {
await this.app.vault.trash(file, false); await this.app.vault.trash(file, false);
@@ -1049,6 +1076,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (shouldBeIgnored(pathSrc)) { if (shouldBeIgnored(pathSrc)) {
return; return;
} }
if (!this.isTargetFile(pathSrc)) return;
if (docEntry._deleted || docEntry.deleted) { if (docEntry._deleted || docEntry.deleted) {
// This occurs not only when files are deleted, but also when conflicts are resolved. // This occurs not only when files are deleted, but also when conflicts are resolved.
// We have to check no other revisions are left. // We have to check no other revisions are left.
@@ -1137,7 +1165,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
await runWithLock("dbchanged", false, async () => { await runWithLock("dbchanged", false, async () => {
const w = [...this.queuedEntries]; const w = [...this.queuedEntries];
this.queuedEntries = []; this.queuedEntries = [];
Logger(`Applyng ${w.length} files`); Logger(`Applying ${w.length} files`);
for (const entry of w) { for (const entry of w) {
Logger(`Applying ${entry._id} (${entry._rev}) change...`, LOG_LEVEL.VERBOSE); Logger(`Applying ${entry._id} (${entry._rev}) change...`, LOG_LEVEL.VERBOSE);
await this.handleDBChangedAsync(entry); await this.handleDBChangedAsync(entry);
@@ -1183,12 +1211,12 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
saveQueuedFiles() { saveQueuedFiles() {
const saveData = JSON.stringify(this.queuedFiles.filter((e) => !e.done).map((e) => e.entry._id)); const saveData = JSON.stringify(this.queuedFiles.filter((e) => !e.done).map((e) => e.entry._id));
const lsname = "obsidian-livesync-queuefiles-" + this.getVaultName(); const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName();
localStorage.setItem(lsname, saveData); localStorage.setItem(lsKey, saveData);
} }
async loadQueuedFiles() { async loadQueuedFiles() {
const lsname = "obsidian-livesync-queuefiles-" + this.getVaultName(); const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName();
const ids = JSON.parse(localStorage.getItem(lsname) || "[]") as string[]; const ids = JSON.parse(localStorage.getItem(lsKey) || "[]") as string[];
const ret = await this.localDatabase.localDatabase.allDocs({ keys: ids, include_docs: true }); const ret = await this.localDatabase.localDatabase.allDocs({ keys: ids, include_docs: true });
for (const doc of ret.rows) { for (const doc of ret.rows) {
if (doc.doc && !this.queuedFiles.some((e) => e.entry._id == doc.doc._id)) { if (doc.doc && !this.queuedFiles.some((e) => e.entry._id == doc.doc._id)) {
@@ -1220,7 +1248,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const now = new Date().getTime(); const now = new Date().getTime();
if (queue.missingChildren.length == 0) { if (queue.missingChildren.length == 0) {
queue.done = true; queue.done = true;
if (isInteralChunk(queue.entry._id)) { if (isInternalChunk(queue.entry._id)) {
//system file //system file
const filename = id2path(id2filenameInternalChunk(queue.entry._id)); const filename = id2path(id2filenameInternalChunk(queue.entry._id));
// await this.syncInternalFilesAndDatabase("pull", false, false, [filename]) // await this.syncInternalFilesAndDatabase("pull", false, false, [filename])
@@ -1260,8 +1288,9 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (isNewFileCompleted) this.procQueuedFiles(); if (isNewFileCompleted) this.procQueuedFiles();
} }
async parseIncomingDoc(doc: PouchDB.Core.ExistingDocument<EntryBody>) { async parseIncomingDoc(doc: PouchDB.Core.ExistingDocument<EntryBody>) {
if (!this.isTargetFile(id2path(doc._id))) return;
const skipOldFile = this.settings.skipOlderFilesOnSync && false; //patched temporary. const skipOldFile = this.settings.skipOlderFilesOnSync && false; //patched temporary.
if ((!isInteralChunk(doc._id)) && skipOldFile) { if ((!isInternalChunk(doc._id)) && skipOldFile) {
const info = this.app.vault.getAbstractFileByPath(id2path(doc._id)); const info = this.app.vault.getAbstractFileByPath(id2path(doc._id));
if (info && info instanceof TFile) { if (info && info instanceof TFile) {
@@ -1280,9 +1309,11 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
missingChildren: [] as string[], missingChildren: [] as string[],
timeout: now + this.chunkWaitTimeout, timeout: now + this.chunkWaitTimeout,
}; };
if ("children" in doc) { // If `Read chunks online` is enabled, retrieve chunks from the remote CouchDB directly.
if ((!this.settings.readChunksOnline) && "children" in doc) {
const c = await this.localDatabase.localDatabase.allDocs({ keys: doc.children, include_docs: false }); const c = await this.localDatabase.localDatabase.allDocs({ keys: doc.children, include_docs: false });
const missing = c.rows.filter((e) => "error" in e).map((e) => e.key); const missing = c.rows.filter((e) => "error" in e).map((e) => e.key);
// fetch from remote
if (missing.length > 0) Logger(`${doc._id}(${doc._rev}) Queued (waiting ${missing.length} items)`, LOG_LEVEL.VERBOSE); if (missing.length > 0) Logger(`${doc._id}(${doc._rev}) Queued (waiting ${missing.length} items)`, LOG_LEVEL.VERBOSE);
newQueue.missingChildren = missing; newQueue.missingChildren = missing;
this.queuedFiles.push(newQueue); this.queuedFiles.push(newQueue);
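The missing-children check above relies on a PouchDB behavior: `allDocs({ keys })` returns one row per requested key, and a key the local database does not hold comes back as a row with an `error` field instead of a document. Extracted as a pure function (row shape simplified for illustration):

```typescript
// Simplified row shape; real PouchDB rows carry more fields.
type AllDocsRow = { key: string; error?: string };

// Chunks the local database is still missing; the entry waits in the queue
// until replication (or online retrieval) delivers them.
function missingChunks(rows: AllDocsRow[]): string[] {
    return rows.filter((row) => "error" in row).map((row) => row.key);
}
```

When `readChunksOnline` is enabled, this local check is skipped entirely and chunks are fetched straight from the remote CouchDB, which is why the condition above gates on `!this.settings.readChunksOnline`.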
@@ -1463,9 +1494,9 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const pieces = queue.map((e) => e[1].missingChildren).reduce((prev, cur) => prev + cur.length, 0); const pieces = queue.map((e) => e[1].missingChildren).reduce((prev, cur) => prev + cur.length, 0);
queued = ` 🧩 ${queuedCount} (${pieces})`; queued = ` 🧩 ${queuedCount} (${pieces})`;
} }
const procs = getProcessingCounts(); const processes = getProcessingCounts();
const procsDisp = procs == 0 ? "" : `${procs}`; const processesDisp = processes == 0 ? "" : `${processes}`;
const message = `Sync: ${w}${sent}${arrived}${waiting}${procsDisp}${queued}`; const message = `Sync: ${w}${sent}${arrived}${waiting}${processesDisp}${queued}`;
const locks = getLocks(); const locks = getLocks();
const pendingTask = locks.pending.length const pendingTask = locks.pending.length
? "\nPending: " + ? "\nPending: " +
@@ -1561,10 +1592,10 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
Logger("Initializing", LOG_LEVEL.NOTICE, "syncAll"); Logger("Initializing", LOG_LEVEL.NOTICE, "syncAll");
} }
const filesStorage = this.app.vault.getFiles(); const filesStorage = this.app.vault.getFiles().filter(e => this.isTargetFile(e));
const filesStorageName = filesStorage.map((e) => e.path); const filesStorageName = filesStorage.map((e) => e.path);
const wf = await this.localDatabase.localDatabase.allDocs(); const wf = await this.localDatabase.localDatabase.allDocs();
const filesDatabase = wf.rows.filter((e) => !isChunk(e.id) && !isPluginChunk(e.id) && e.id != "obsydian_livesync_version").filter(e => isValidPath(e.id)).map((e) => id2path(e.id)); const filesDatabase = wf.rows.filter((e) => !isChunk(e.id) && !isPluginChunk(e.id) && e.id != "obsydian_livesync_version").filter(e => isValidPath(e.id)).map((e) => id2path(e.id)).filter(e => this.isTargetFile(e));
const isInitialized = await (this.localDatabase.kvDB.get<boolean>("initialized")) || false; const isInitialized = await (this.localDatabase.kvDB.get<boolean>("initialized")) || false;
// Make chunk bigger if it is the initial scan. There must be non-active docs. // Make chunk bigger if it is the initial scan. There must be non-active docs.
if (filesDatabase.length == 0 && !isInitialized) { if (filesDatabase.length == 0 && !isInitialized) {
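After the `isTargetFile` filters above, `syncAllFiles` splits work into two lists: paths present only in storage get pushed into the database, and paths present only in the database get pulled into storage. A sketch of that split (names are illustrative; the plugin computes `onlyInStorage`/`onlyInDatabase` from these arrays):

```typescript
// Illustrative two-way set difference over vault paths.
function splitSyncTargets(filesStorageName: string[], filesDatabase: string[]) {
    const inDatabase = new Set(filesDatabase);
    const inStorage = new Set(filesStorageName);
    return {
        // exists on disk but not in the local database -> UPDATE DATABASE
        onlyInStorage: filesStorageName.filter((path) => !inDatabase.has(path)),
        // exists in the local database but not on disk -> UPDATE STORAGE
        onlyInDatabase: filesDatabase.filter((path) => !inStorage.has(path)),
    };
}
```

Using `Set` keeps the split linear in the number of files, which matters on the initial scan of a large vault.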
@@ -1581,23 +1612,22 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
Logger("Updating database by new files"); Logger("Updating database by new files");
this.setStatusBarText(`UPDATE DATABASE`); this.setStatusBarText(`UPDATE DATABASE`);
const runAll = async<T>(procedurename: string, objects: T[], callback: (arg: T) => Promise<void>) => { const runAll = async<T>(procedureName: string, objects: T[], callback: (arg: T) => Promise<void>) => {
const count = objects.length; const count = objects.length;
Logger(procedurename); Logger(procedureName);
let i = 0; let i = 0;
// let lastTicks = performance.now() + 2000; const semaphore = Semaphore(10);
// let workProcs = 0;
const p = Parallels();
const limit = 10;
Logger(`${procedurename} exec.`); Logger(`${procedureName} exec.`);
for (const v of objects) { if (!this.localDatabase.isReady) throw Error("Database is not ready!");
// workProcs++; const processes = objects.map(e => (async (v) => {
if (!this.localDatabase.isReady) throw Error("Database is not ready!"); const releaser = await semaphore.acquire(1, procedureName);
p.add(callback(v).then(() => {
try {
await callback(v);
i++; i++;
if (i % 100 == 0) { if (i % 50 == 0) {
const notify = `${procedurename} : ${i}/${count}`; const notify = `${procedureName} : ${i}/${count}`;
if (showingNotice) { if (showingNotice) {
Logger(notify, LOG_LEVEL.NOTICE, "syncAll"); Logger(notify, LOG_LEVEL.NOTICE, "syncAll");
} else { } else {
@@ -1605,17 +1635,17 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
this.setStatusBarText(notify); this.setStatusBarText(notify);
} }
}).catch(ex => { } catch (ex) {
Logger(`Error while ${procedurename}`, LOG_LEVEL.NOTICE); Logger(`Error while ${procedureName}`, LOG_LEVEL.NOTICE);
Logger(ex); Logger(ex);
}).finally(() => { } finally {
// workProcs--; releaser();
}) }
);
await p.wait(limit);
} }
await p.all(); )(e));
Logger(`${procedurename} done.`); await Promise.all(processes);
Logger(`${procedureName} done.`);
}; };
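The rewritten `runAll` above replaces the hand-rolled `Parallels` queue with a counting semaphore so that at most ten callbacks run concurrently, and each worker releases its slot in a `finally` block. A minimal sketch of that pattern (this `semaphore` is a stand-in for illustration, not the plugin's actual `Semaphore` implementation):

```typescript
// Minimal counting semaphore: acquire() resolves when a slot is free;
// the returned releaser hands the slot directly to the next waiter.
function semaphore(limit: number) {
    let active = 0;
    const waiters: (() => void)[] = [];
    return {
        async acquire(): Promise<() => void> {
            if (active < limit) {
                active++;
            } else {
                // Wait; a releaser will transfer its slot to us.
                await new Promise<void>((res) => waiters.push(res));
            }
            return () => {
                const next = waiters.shift();
                if (next) next(); // transfer the slot without decrementing
                else active--;
            };
        },
    };
}

// Run `callback` over all items with bounded concurrency, like runAll above.
async function runAllLimited<T>(items: T[], limit: number, callback: (v: T) => Promise<void>): Promise<void> {
    const sem = semaphore(limit);
    await Promise.all(items.map(async (v) => {
        const release = await sem.acquire();
        try {
            await callback(v);
        } finally {
            release(); // always free the slot, even on failure
        }
    }));
}
```

Acquiring inside the mapped function (rather than before `map`) lets all promises be created eagerly while still capping how many callbacks are in flight.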
await runAll("UPDATE DATABASE", onlyInStorage, async (e) => { await runAll("UPDATE DATABASE", onlyInStorage, async (e) => {
@@ -1627,6 +1657,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
await runAll("UPDATE STORAGE", onlyInDatabase, async (e) => { await runAll("UPDATE STORAGE", onlyInDatabase, async (e) => {
Logger(`Check or pull from db:${e}`); Logger(`Check or pull from db:${e}`);
await this.pullFile(e, filesStorage, false, null, false); await this.pullFile(e, filesStorage, false, null, false);
Logger(`Check or pull from db:${e} OK`);
}); });
} }
if (!initialScan) { if (!initialScan) {
@@ -1693,7 +1724,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
if (ex.code && ex.code == "ENOENT") { if (ex.code && ex.code == "ENOENT") {
//NO OP. //NO OP.
} else { } else {
Logger(`error while delete filder:${folder.path}`, LOG_LEVEL.NOTICE); Logger(`error while delete folder:${folder.path}`, LOG_LEVEL.NOTICE);
Logger(ex); Logger(ex);
} }
} }
@@ -1763,7 +1794,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
// Conflicted item could not load, delete this. // Conflicted item could not load, delete this.
await this.localDatabase.deleteDBEntry(path, { rev: test._conflicts[0] }); await this.localDatabase.deleteDBEntry(path, { rev: test._conflicts[0] });
await this.pullFile(path, null, true); await this.pullFile(path, null, true);
Logger(`could not get old revisions, automaticaly used newer one:${path}`, LOG_LEVEL.NOTICE); Logger(`could not get old revisions, automatically used newer one:${path}`, LOG_LEVEL.NOTICE);
return true; return true;
} }
// first, check for same contents // first, check for same contents
@@ -1774,19 +1805,19 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
await this.localDatabase.deleteDBEntry(path, { rev: leaf.rev }); await this.localDatabase.deleteDBEntry(path, { rev: leaf.rev });
await this.pullFile(path, null, true); await this.pullFile(path, null, true);
Logger(`automaticaly merged:${path}`); Logger(`automatically merged:${path}`);
return true; return true;
} }
if (this.settings.resolveConflictsByNewerFile) { if (this.settings.resolveConflictsByNewerFile) {
const lmtime = ~~(leftLeaf.mtime / 1000); const lMtime = ~~(leftLeaf.mtime / 1000);
const rmtime = ~~(rightLeaf.mtime / 1000); const rMtime = ~~(rightLeaf.mtime / 1000);
let loser = leftLeaf; let loser = leftLeaf;
if (lmtime > rmtime) { if (lMtime > rMtime) {
loser = rightLeaf; loser = rightLeaf;
} }
await this.localDatabase.deleteDBEntry(path, { rev: loser.rev }); await this.localDatabase.deleteDBEntry(path, { rev: loser.rev });
await this.pullFile(path, null, true); await this.pullFile(path, null, true);
Logger(`Automaticaly merged (newerFileResolve) :${path}`, LOG_LEVEL.NOTICE); Logger(`Automatically merged (newerFileResolve) :${path}`, LOG_LEVEL.NOTICE);
return true; return true;
} }
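The `resolveConflictsByNewerFile` branch above compares modification times truncated to whole seconds (`~~(mtime / 1000)`) and deletes the older revision. The selection logic can be sketched as follows (the `Leaf` shape here is hypothetical, reduced to the two fields the comparison uses):

```typescript
type Leaf = { rev: string; mtime: number }; // mtime in milliseconds

// Pick the revision to delete: the one with the older second-precision mtime.
// Truncating to seconds tolerates sub-second clock differences between
// devices; on a tie, the left leaf loses, matching the code above.
function pickLoser(left: Leaf, right: Leaf): Leaf {
    const lSec = ~~(left.mtime / 1000);
    const rSec = ~~(right.mtime / 1000);
    return lSec > rSec ? right : left;
}
```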
// make diff. // make diff.
@@ -1878,7 +1909,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
await runWithLock("conflicted", false, async () => { await runWithLock("conflicted", false, async () => {
const conflictCheckResult = await this.getConflictedStatus(file.path); const conflictCheckResult = await this.getConflictedStatus(file.path);
if (conflictCheckResult === false) { if (conflictCheckResult === false) {
//nothign to do. //nothing to do.
return; return;
} }
if (conflictCheckResult === true) { if (conflictCheckResult === true) {
@@ -1896,6 +1927,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
async pullFile(filename: string, fileList?: TFile[], force?: boolean, rev?: string, waitForReady = true) { async pullFile(filename: string, fileList?: TFile[], force?: boolean, rev?: string, waitForReady = true) {
const targetFile = this.app.vault.getAbstractFileByPath(id2path(filename)); const targetFile = this.app.vault.getAbstractFileByPath(id2path(filename));
if (!this.isTargetFile(id2path(filename))) return;
if (targetFile == null) { if (targetFile == null) {
//have to create; //have to create;
const doc = await this.localDatabase.getDBEntry(filename, rev ? { rev: rev } : null, false, waitForReady); const doc = await this.localDatabase.getDBEntry(filename, rev ? { rev: rev } : null, false, waitForReady);
@@ -1971,6 +2003,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
async updateIntoDB(file: TFile, initialScan?: boolean) { async updateIntoDB(file: TFile, initialScan?: boolean) {
if (!this.isTargetFile(file)) return;
if (shouldBeIgnored(file.path)) { if (shouldBeIgnored(file.path)) {
return; return;
} }
@@ -1984,9 +2017,9 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
content = await this.app.vault.read(file); content = await this.app.vault.read(file);
datatype = "plain"; datatype = "plain";
} }
const fullpath = path2id(file.path); const fullPath = path2id(file.path);
const d: LoadedEntry = { const d: LoadedEntry = {
_id: fullpath, _id: fullPath,
data: content, data: content,
ctime: file.stat.ctime, ctime: file.stat.ctime,
mtime: file.stat.mtime, mtime: file.stat.mtime,
@@ -1997,16 +2030,16 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}; };
//upsert should be locked //upsert should be locked
const msg = `DB <- STORAGE (${datatype}) `; const msg = `DB <- STORAGE (${datatype}) `;
const isNotChanged = await runWithLock("file:" + fullpath, false, async () => { const isNotChanged = await runWithLock("file:" + fullPath, false, async () => {
if (recentlyTouched(file)) { if (recentlyTouched(file)) {
return true; return true;
} }
const old = await this.localDatabase.getDBEntry(fullpath, null, false, false); const old = await this.localDatabase.getDBEntry(fullPath, null, false, false);
if (old !== false) { if (old !== false) {
const oldData = { data: old.data, deleted: old._deleted || old.deleted, }; const oldData = { data: old.data, deleted: old._deleted || old.deleted, };
const newData = { data: d.data, deleted: d._deleted || d.deleted }; const newData = { data: d.data, deleted: d._deleted || d.deleted };
if (JSON.stringify(oldData) == JSON.stringify(newData)) { if (JSON.stringify(oldData) == JSON.stringify(newData)) {
Logger(msg + "Skipped (not changed) " + fullpath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE); Logger(msg + "Skipped (not changed) " + fullPath + ((d._deleted || d.deleted) ? " (deleted)" : ""), LOG_LEVEL.VERBOSE);
return true; return true;
} }
// d._rev = old._rev; // d._rev = old._rev;
@@ -2018,23 +2051,24 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
this.queuedFiles = this.queuedFiles.map((e) => ({ ...e, ...(e.entry._id == d._id ? { done: true } : {}) })); this.queuedFiles = this.queuedFiles.map((e) => ({ ...e, ...(e.entry._id == d._id ? { done: true } : {}) }));
Logger(msg + fullpath); Logger(msg + fullPath);
if (this.settings.syncOnSave && !this.suspended) { if (this.settings.syncOnSave && !this.suspended) {
await this.replicate(); await this.replicate();
} }
} }
async deleteFromDB(file: TFile) { async deleteFromDB(file: TFile) {
const fullpath = file.path; if (!this.isTargetFile(file)) return;
Logger(`deleteDB By path:${fullpath}`); const fullPath = file.path;
await this.deleteFromDBbyPath(fullpath); Logger(`deleteDB By path:${fullPath}`);
await this.deleteFromDBbyPath(fullPath);
if (this.settings.syncOnSave && !this.suspended) { if (this.settings.syncOnSave && !this.suspended) {
await this.replicate(); await this.replicate();
} }
} }
async deleteFromDBbyPath(fullpath: string) { async deleteFromDBbyPath(fullPath: string) {
await this.localDatabase.deleteDBEntry(fullpath); await this.localDatabase.deleteDBEntry(fullPath);
if (this.settings.syncOnSave && !this.suspended) { if (this.settings.syncOnSave && !this.suspended) {
await this.replicate(); await this.replicate();
} }
@@ -2294,7 +2328,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
return result; return result;
} }
async storeInternaFileToDatabase(file: InternalFileInfo, forceWrite = false) { async storeInternalFileToDatabase(file: InternalFileInfo, forceWrite = false) {
const id = filename2idInternalChunk(path2id(file.path)); const id = filename2idInternalChunk(path2id(file.path));
const contentBin = await this.app.vault.adapter.readBinary(file.path); const contentBin = await this.app.vault.adapter.readBinary(file.path);
const content = await arrayBufferToBase64(contentBin); const content = await arrayBufferToBase64(contentBin);
@@ -2336,7 +2370,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}); });
} }
async deleteInternaFileOnDatabase(filename: string, forceWrite = false) { async deleteInternalFileOnDatabase(filename: string, forceWrite = false) {
const id = filename2idInternalChunk(path2id(filename)); const id = filename2idInternalChunk(path2id(filename));
const mtime = new Date().getTime(); const mtime = new Date().getTime();
await runWithLock("file-" + id, false, async () => { await runWithLock("file-" + id, false, async () => {
@@ -2372,8 +2406,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
}); });
} }
async ensureDirectoryEx(fullpath: string) { async ensureDirectoryEx(fullPath: string) {
const pathElements = fullpath.split("/"); const pathElements = fullPath.split("/");
pathElements.pop(); pathElements.pop();
let c = ""; let c = "";
for (const v of pathElements) { for (const v of pathElements) {
@@ -2383,7 +2417,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} catch (ex) { } catch (ex) {
// basically skip exceptions. // basically skip exceptions.
if (ex.message && ex.message == "Folder already exists.") { if (ex.message && ex.message == "Folder already exists.") {
// especialy this message is. // especially this message is.
} else { } else {
Logger("Folder Create Error"); Logger("Folder Create Error");
Logger(ex); Logger(ex);
@@ -2392,7 +2426,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
c += "/"; c += "/";
} }
} }
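`ensureDirectoryEx` above builds each ancestor path incrementally and creates the folders one by one, swallowing "Folder already exists." errors. Separated from the Obsidian `vault.adapter` calls, the prefix walk it performs looks like this sketch:

```typescript
// List every ancestor directory of a file path, shallowest first,
// mirroring the loop in ensureDirectoryEx:
// "a/b/c/file.md" -> ["a", "a/b", "a/b/c"]
function ancestorDirs(fullPath: string): string[] {
    const parts = fullPath.split("/");
    parts.pop(); // drop the file name, keep directories only
    const dirs: string[] = [];
    let c = "";
    for (const v of parts) {
        c += v;
        dirs.push(c); // this is the path ensureDirectoryEx would create
        c += "/";
    }
    return dirs;
}
```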
async extractInternaFileFromDatabase(filename: string, force = false) { async extractInternalFileFromDatabase(filename: string, force = false) {
const isExists = await this.app.vault.adapter.exists(filename); const isExists = await this.app.vault.adapter.exists(filename);
const id = filename2idInternalChunk(path2id(filename)); const id = filename2idInternalChunk(path2id(filename));
@@ -2455,7 +2489,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
for (const row of docs.rows) { for (const row of docs.rows) {
const doc = row.doc; const doc = row.doc;
if (!("_conflicts" in doc)) continue; if (!("_conflicts" in doc)) continue;
if (isInteralChunk(row.id)) { if (isInternalChunk(row.id)) {
await this.resolveConflictOnInternalFile(row.id); await this.resolveConflictOnInternalFile(row.id);
} }
} }
@@ -2466,15 +2500,15 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
// If there is no conflict, return with false. // If there is no conflict, return with false.
if (!("_conflicts" in doc)) return false; if (!("_conflicts" in doc)) return false;
if (doc._conflicts.length == 0) return false; if (doc._conflicts.length == 0) return false;
Logger(`Hidden file conflicetd:${id2filenameInternalChunk(id)}`); Logger(`Hidden file conflicted:${id2filenameInternalChunk(id)}`);
const revA = doc._rev; const revA = doc._rev;
const revB = doc._conflicts[0]; const revB = doc._conflicts[0];
const revBdoc = await this.localDatabase.localDatabase.get(id, { rev: revB }); const revBDoc = await this.localDatabase.localDatabase.get(id, { rev: revB });
// determine which revision sould been deleted. // determine which revision should be deleted.
// simply check modified time // simply check modified time
const mtimeA = ("mtime" in doc && doc.mtime) || 0; const mtimeA = ("mtime" in doc && doc.mtime) || 0;
const mtimeB = ("mtime" in revBdoc && revBdoc.mtime) || 0; const mtimeB = ("mtime" in revBDoc && revBDoc.mtime) || 0;
// Logger(`Revisions:${new Date(mtimeA).toLocaleString} and ${new Date(mtimeB).toLocaleString}`); // Logger(`Revisions:${new Date(mtimeA).toLocaleString} and ${new Date(mtimeB).toLocaleString}`);
// console.log(`mtime:${mtimeA} - ${mtimeB}`); // console.log(`mtime:${mtimeA} - ${mtimeB}`);
const delRev = mtimeA < mtimeB ? revA : revB; const delRev = mtimeA < mtimeB ? revA : revB;
@@ -2508,8 +2542,6 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const fileCount = allFileNames.length; const fileCount = allFileNames.length;
let processed = 0; let processed = 0;
let filesChanged = 0; let filesChanged = 0;
const p = Parallels();
const limit = 10;
// count updated files up as like this below: // count updated files up as like this below:
// .obsidian: 2 // .obsidian: 2
// .obsidian/workspace: 1 // .obsidian/workspace: 1
@@ -2532,6 +2564,8 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
c = pieces.shift(); c = pieces.shift();
} }
} }
const p = [] as Promise<void>[];
const semaphore = Semaphore(15);
// Cache update time information for files which have already been processed (mainly for files that were skipped due to the same content) // Cache update time information for files which have already been processed (mainly for files that were skipped due to the same content)
let caches: { [key: string]: { storageMtime: number; docMtime: number } } = {}; let caches: { [key: string]: { storageMtime: number; docMtime: number } } = {};
caches = await this.localDatabase.kvDB.get<{ [key: string]: { storageMtime: number; docMtime: number } }>("diff-caches-internal") || {}; caches = await this.localDatabase.kvDB.get<{ [key: string]: { storageMtime: number; docMtime: number } }>("diff-caches-internal") || {};
@@ -2542,12 +2576,20 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const fileOnStorage = files.find(e => e.path == filename); const fileOnStorage = files.find(e => e.path == filename);
const fileOnDatabase = filesOnDB.find(e => e._id == filename2idInternalChunk(id2path(filename))); const fileOnDatabase = filesOnDB.find(e => e._id == filename2idInternalChunk(id2path(filename)));
const addProc = (p: () => Promise<void>): Promise<unknown> => { const addProc = async (p: () => Promise<void>): Promise<void> => {
return p(); const releaser = await semaphore.acquire(1);
try {
return await p();
} catch (ex) {
Logger("Some process failed", logLevel);
Logger(ex);
} finally {
releaser();
}
} }
const cache = filename in caches ? caches[filename] : { storageMtime: 0, docMtime: 0 }; const cache = filename in caches ? caches[filename] : { storageMtime: 0, docMtime: 0 };
p.add(addProc(async () => { p.push(addProc(async () => {
if (fileOnStorage && fileOnDatabase) { if (fileOnStorage && fileOnDatabase) {
// Both => Synchronize // Both => Synchronize
if (fileOnDatabase.mtime == cache.docMtime && fileOnStorage.mtime == cache.storageMtime) { if (fileOnDatabase.mtime == cache.docMtime && fileOnStorage.mtime == cache.storageMtime) {
@@ -2555,45 +2597,43 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
} }
const nw = compareMTime(fileOnStorage.mtime, fileOnDatabase.mtime); const nw = compareMTime(fileOnStorage.mtime, fileOnDatabase.mtime);
if (nw > 0) { if (nw > 0) {
await this.storeInternaFileToDatabase(fileOnStorage); await this.storeInternalFileToDatabase(fileOnStorage);
} }
if (nw < 0) { if (nw < 0) {
// skip if not extraction performed. // skip if not extraction performed.
if (!await this.extractInternaFileFromDatabase(filename)) return; if (!await this.extractInternalFileFromDatabase(filename)) return;
} }
// If process successfly updated or file contents are same, update cache. // If process successfully updated or file contents are same, update cache.
cache.docMtime = fileOnDatabase.mtime; cache.docMtime = fileOnDatabase.mtime;
cache.storageMtime = fileOnStorage.mtime; cache.storageMtime = fileOnStorage.mtime;
caches[filename] = cache; caches[filename] = cache;
countUpdatedFolder(filename); countUpdatedFolder(filename);
} else if (!fileOnStorage && fileOnDatabase) { } else if (!fileOnStorage && fileOnDatabase) {
console.log("pushpull")
if (direction == "push") { if (direction == "push") {
if (fileOnDatabase.deleted) return; if (fileOnDatabase.deleted) return;
await this.deleteInternaFileOnDatabase(filename); await this.deleteInternalFileOnDatabase(filename);
} else if (direction == "pull") { } else if (direction == "pull") {
if (await this.extractInternaFileFromDatabase(filename)) { if (await this.extractInternalFileFromDatabase(filename)) {
countUpdatedFolder(filename); countUpdatedFolder(filename);
} }
} else if (direction == "safe") { } else if (direction == "safe") {
if (fileOnDatabase.deleted) return if (fileOnDatabase.deleted) return
if (await this.extractInternaFileFromDatabase(filename)) { if (await this.extractInternalFileFromDatabase(filename)) {
countUpdatedFolder(filename); countUpdatedFolder(filename);
} }
} }
} else if (fileOnStorage && !fileOnDatabase) { } else if (fileOnStorage && !fileOnDatabase) {
await this.storeInternaFileToDatabase(fileOnStorage); await this.storeInternalFileToDatabase(fileOnStorage);
} else { } else {
throw new Error("Invalid state on hidden file sync"); throw new Error("Invalid state on hidden file sync");
// Something corrupted? // Something corrupted?
} }
})); }));
await p.wait(limit);
} }
await p.all(); await Promise.all(p);
await this.localDatabase.kvDB.set("diff-caches-internal", caches); await this.localDatabase.kvDB.set("diff-caches-internal", caches);
// When files has been retreived from the database. they must be reloaded. // When files have been retrieved from the database, they must be reloaded.
if (direction == "pull" && filesChanged != 0) { if (direction == "pull" && filesChanged != 0) {
const configDir = normalizePath(this.app.vault.configDir); const configDir = normalizePath(this.app.vault.configDir);
// Show notification to restart obsidian when something has been changed in configDir. // Show notification to restart obsidian when something has been changed in configDir.
@@ -2618,12 +2658,12 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
a.appendChild(a.createEl("a", null, (anchor) => { a.appendChild(a.createEl("a", null, (anchor) => {
anchor.text = "HERE"; anchor.text = "HERE";
anchor.addEventListener("click", async () => { anchor.addEventListener("click", async () => {
Logger(`Unloading plugin: ${updatePluginName}`, LOG_LEVEL.NOTICE, "pluin-reload-" + updatePluginId); Logger(`Unloading plugin: ${updatePluginName}`, LOG_LEVEL.NOTICE, "plugin-reload-" + updatePluginId);
// @ts-ignore // @ts-ignore
await this.app.plugins.unloadPlugin(updatePluginId); await this.app.plugins.unloadPlugin(updatePluginId);
// @ts-ignore // @ts-ignore
await this.app.plugins.loadPlugin(updatePluginId); await this.app.plugins.loadPlugin(updatePluginId);
Logger(`Plugin reloaded: ${updatePluginName}`, LOG_LEVEL.NOTICE, "pluin-reload-" + updatePluginId); Logger(`Plugin reloaded: ${updatePluginName}`, LOG_LEVEL.NOTICE, "plugin-reload-" + updatePluginId);
}); });
})) }))
@@ -2640,7 +2680,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
memoObject(updatedPluginKey, new Notice(fragment, 0)) memoObject(updatedPluginKey, new Notice(fragment, 0))
} }
setTrigger(updatedPluginKey + "-close", 20000, () => { setTrigger(updatedPluginKey + "-close", 20000, () => {
const popup = retriveMemoObject<Notice>(updatedPluginKey) const popup = retrieveMemoObject<Notice>(updatedPluginKey)
if (!popup) return; if (!popup) return;
//@ts-ignore //@ts-ignore
if (popup?.noticeEl?.isShown()) { if (popup?.noticeEl?.isShown()) {
@@ -2691,4 +2731,13 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
Logger(`Hidden files scanned: ${filesChanged} files had been modified`, logLevel, "sync_internal"); Logger(`Hidden files scanned: ${filesChanged} files had been modified`, logLevel, "sync_internal");
} }
isTargetFile(file: string | TAbstractFile) {
if (file instanceof TFile) {
return this.localDatabase.isTargetFile(file.path);
} else if (typeof file == "string") {
return this.localDatabase.isTargetFile(file);
}
}
} }
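`isTargetFile` above delegates to the local database's path filter, which (per the 0.14.1 changelog) matches paths against user-configured regular expressions. A sketch of such a filter, with hypothetical parameter names rather than the plugin's actual settings:

```typescript
// Return true when `path` passes the sync filter: it must match the
// include pattern (when one is set) and must not match the exclude pattern.
function isTargetPath(path: string, includeRegex: string, excludeRegex: string): boolean {
    if (includeRegex && !new RegExp(includeRegex).test(path)) return false;
    if (excludeRegex && new RegExp(excludeRegex).test(path)) return false;
    return true;
}
```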


@@ -3,7 +3,7 @@ import { normalizePath } from "obsidian";
import { path2id_base, id2path_base } from "./lib/src/utils"; import { path2id_base, id2path_base } from "./lib/src/utils";
// For backward compatibility, using the path for determining id. // For backward compatibility, using the path for determining id.
// Only CouchDB nonacceptable ID (that starts with an underscore) has been prefixed with "/". // Only CouchDB unacceptable ID (that starts with an underscore) has been prefixed with "/".
// The first slash will be deleted when the path is normalized. // The first slash will be deleted when the path is normalized.
export function path2id(filename: string): string { export function path2id(filename: string): string {
const x = normalizePath(filename); const x = normalizePath(filename);
@@ -63,7 +63,7 @@ export async function memoIfNotExist<T>(key: string, func: () => T | Promise<T>)
} }
return memos[key] as T; return memos[key] as T;
} }
export function retriveMemoObject<T>(key: string): T | false { export function retrieveMemoObject<T>(key: string): T | false {
if (key in memos) { if (key in memos) {
return memos[key]; return memos[key];
} else { } else {


@@ -9,9 +9,9 @@ export const isValidRemoteCouchDBURI = (uri: string): boolean => {
if (uri.startsWith("http://")) return true; if (uri.startsWith("http://")) return true;
return false; return false;
}; };
let last_post_successed = false; let last_successful_post = false;
export const getLastPostFailedBySize = () => { export const getLastPostFailedBySize = () => {
return !last_post_successed; return !last_successful_post;
}; };
const fetchByAPI = async (request: RequestUrlParam): Promise<RequestUrlResponse> => { const fetchByAPI = async (request: RequestUrlParam): Promise<RequestUrlResponse> => {
const ret = await requestUrl(request); const ret = await requestUrl(request);
@@ -40,8 +40,8 @@ export const connectRemoteCouchDBWithSetting = (settings: RemoteDBSettings, isMo
const connectRemoteCouchDB = async (uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> => { const connectRemoteCouchDB = async (uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> => {
if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid"; if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
if (uri.toLowerCase() != uri) return "Remote URI and database name cound not contain capital letters."; if (uri.toLowerCase() != uri) return "Remote URI and database name must not contain capital letters.";
if (uri.indexOf(" ") !== -1) return "Remote URI and database name cound not contain spaces."; if (uri.indexOf(" ") !== -1) return "Remote URI and database name must not contain spaces.";
let authHeader = ""; let authHeader = "";
if (auth.username && auth.password) { if (auth.username && auth.password) {
const utf8str = String.fromCharCode.apply(null, new TextEncoder().encode(`${auth.username}:${auth.password}`)); const utf8str = String.fromCharCode.apply(null, new TextEncoder().encode(`${auth.username}:${auth.password}`));
@@ -62,7 +62,7 @@ const connectRemoteCouchDB = async (uri: string, auth: { username: string; passw
if (opts_length > 1024 * 1024 * 10) { if (opts_length > 1024 * 1024 * 10) {
// over 10MB // over 10MB
if (uri.contains(".cloudantnosqldb.")) { if (uri.contains(".cloudantnosqldb.")) {
last_post_successed = false; last_successful_post = false;
Logger("This request should fail on IBM Cloudant.", LOG_LEVEL.VERBOSE); Logger("This request should fail on IBM Cloudant.", LOG_LEVEL.VERBOSE);
throw new Error("This request should fail on IBM Cloudant."); throw new Error("This request should fail on IBM Cloudant.");
} }
@@ -91,9 +91,9 @@ const connectRemoteCouchDB = async (uri: string, auth: { username: string; passw
try { try {
const r = await fetchByAPI(requestParam); const r = await fetchByAPI(requestParam);
if (method == "POST" || method == "PUT") { if (method == "POST" || method == "PUT") {
last_post_successed = r.status - (r.status % 100) == 200; last_successful_post = r.status - (r.status % 100) == 200;
} else { } else {
last_post_successed = true; last_successful_post = true;
} }
Logger(`HTTP:${method}${size} to:${localURL} -> ${r.status}`, LOG_LEVEL.DEBUG); Logger(`HTTP:${method}${size} to:${localURL} -> ${r.status}`, LOG_LEVEL.DEBUG);
@@ -106,7 +106,7 @@ const connectRemoteCouchDB = async (uri: string, auth: { username: string; passw
Logger(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL.VERBOSE); Logger(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL.VERBOSE);
// limit only in bulk_docs. // limit only in bulk_docs.
if (url.toString().indexOf("_bulk_docs") !== -1) { if (url.toString().indexOf("_bulk_docs") !== -1) {
last_post_successed = false; last_successful_post = false;
} }
Logger(ex); Logger(ex);
throw ex; throw ex;
@@ -116,19 +116,19 @@ const connectRemoteCouchDB = async (uri: string, auth: { username: string; passw
// -old implementation // -old implementation
try { try {
const responce: Response = await fetch(url, opts); const response: Response = await fetch(url, opts);
if (method == "POST" || method == "PUT") { if (method == "POST" || method == "PUT") {
last_post_successed = responce.ok; last_successful_post = response.ok;
} else { } else {
last_post_successed = true; last_successful_post = true;
} }
Logger(`HTTP:${method}${size} to:${localURL} -> ${responce.status}`, LOG_LEVEL.DEBUG); Logger(`HTTP:${method}${size} to:${localURL} -> ${response.status}`, LOG_LEVEL.DEBUG);
return responce; return response;
} catch (ex) { } catch (ex) {
Logger(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL.VERBOSE); Logger(`HTTP:${method}${size} to:${localURL} -> failed`, LOG_LEVEL.VERBOSE);
// limit only in bulk_docs. // limit only in bulk_docs.
if (url.toString().indexOf("_bulk_docs") !== -1) { if (url.toString().indexOf("_bulk_docs") !== -1) {
last_post_successed = false; last_successful_post = false;
} }
Logger(ex); Logger(ex);
throw ex; throw ex;
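Both fetch paths above record a successful POST/PUT with `r.status - (r.status % 100) == 200`, which is simply a test for any 2xx status code:

```typescript
// True for any 2xx HTTP status; equivalent to 200 <= status && status < 300,
// written the way the code above writes it.
function is2xx(status: number): boolean {
    return status - (status % 100) === 200;
}
```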
@@ -225,3 +225,55 @@ export const checkSyncInfo = async (db: PouchDB.Database): Promise<boolean> => {
} }
} }
}; };
export async function putDesignDocuments(db: PouchDB.Database) {
type DesignDoc = {
_id: string;
_rev: string;
ver: number;
filters: {
default: string,
push: string,
pull: string,
};
}
const design: DesignDoc = {
"_id": "_design/replicate",
"_rev": undefined as string | undefined,
"ver": 2,
"filters": {
"default": function (doc: any, req: any) {
return !("remote" in doc && doc.remote);
}.toString(),
"push": function (doc: any, req: any) {
return true;
}.toString(),
"pull": function (doc: any, req: any) {
return !(doc.type && doc.type == "leaf")
}.toString(),
}
}
// We can use the filter on replication : filter: 'replicate/default',
try {
const w = await db.get<DesignDoc>(design._id);
if (w.ver < design.ver) {
design._rev = w._rev;
//@ts-ignore
await db.put(design);
return true;
}
} catch (ex) {
if (ex.status && ex.status == 404) {
delete design._rev;
//@ts-ignore
await db.put(design);
return true;
} else {
Logger("Could not make design documents", LOG_LEVEL.INFO);
}
}
return false;
}
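`putDesignDocuments` above upserts the `_design/replicate` document only when the stored `ver` is older than the new one, and treats a 404 as "create it fresh". The decision logic can be reduced to this sketch (`ver` numbering as in the code above; the `Stored` shape is simplified for illustration):

```typescript
type Stored = { ver: number; _rev: string } | null; // null models a 404 (not found)

// Decide whether the design document should be written, and with which _rev.
function designDocAction(stored: Stored, newVer: number): { write: boolean; rev?: string } {
    if (stored === null) return { write: true };                        // 404: create fresh, no _rev
    if (stored.ver < newVer) return { write: true, rev: stored._rev };  // upgrade in place
    return { write: false };                                            // already up to date
}
```

The filters themselves can then be used on replication, as the comment in the code notes, e.g. `filter: "replicate/pull"` to skip leaf (chunk) documents when pulling.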


@@ -2,15 +2,26 @@
"compilerOptions": { "compilerOptions": {
"baseUrl": ".", "baseUrl": ".",
"module": "ESNext", "module": "ESNext",
"target": "ES6", "target": "ES2018",
"allowJs": true, "allowJs": true,
"noImplicitAny": true, "noImplicitAny": true,
"moduleResolution": "node", "moduleResolution": "node",
// "importsNotUsedAsValues": "error", // "importsNotUsedAsValues": "error",
"importHelpers": true, "importHelpers": false,
"alwaysStrict": true, "alwaysStrict": true,
"lib": ["es2018", "DOM", "ES5", "ES6", "ES7"] "lib": [
"es2018",
"DOM",
"ES5",
"ES6",
"ES7",
"es2019.array"
]
}, },
"include": ["**/*.ts"], "include": [
"exclude": ["pouchdb-browser-webpack"] "**/*.ts"
],
"exclude": [
"pouchdb-browser-webpack"
]
} }


@@ -1,3 +1,27 @@
### 0.14.1
- The target selecting filter was implemented.
Now we can set what files are synchronised by regular expression.
- We can configure the size of chunks.
We can use larger chunks to improve performance.
(This feature can not be used with IBM Cloudant)
- Read chunks online.
Now we can synchronise only metadata and retrieve chunks on demand. It reduces local database size and time for replication.
- Added this note.
- Use local chunks in preference to remote ones if present.
#### Recommended configuration for Self-hosted CouchDB
- Set chunk size to around 100 to 250 (10MB - 25MB per chunk)
- *Set batch size to 100 and batch limit to 20 (0.14.2)*
- Be sure to keep `Read chunks online` checked.
#### Minors
- 0.14.2 Fixed an issue where files could not be retrieved if synchronisation had been interrupted or failed
- 0.14.3 New test items have been added to `Check database configuration`.
- 0.14.4 Fixed an issue with importing configurations.
- 0.14.5 Automatic chunk size adjustment implemented.
- 0.14.6 Changed target to ES2018.
- 0.14.7 Refactored and fixed typos.
### 0.13.0 ### 0.13.0
- The metadata of the deleted files will be kept on the database by default. If you want to delete this as the previous version, please turn on `Delete metadata of deleted files.`. And, if you have upgraded from the older version, please ensure every device has been upgraded. - The metadata of the deleted files will be kept on the database by default. If you want to delete this as the previous version, please turn on `Delete metadata of deleted files.`. And, if you have upgraded from the older version, please ensure every device has been upgraded.