Mirror of https://github.com/vrtmrz/obsidian-livesync.git
Synced 2026-02-25 13:38:49 +00:00

Compare commits (12 commits)
| SHA1 |
|---|
| db0562eda1 |
| b610d5d959 |
| 5abba74f3b |
| 021c1fccfe |
| 0a30af479f |
| a9c3f60fe7 |
| f996e056af |
| 341f0ab12d |
| 39340c1e1b |
| 55cdc58857 |
| 4f1a9dc4e8 |
| 013818b7d0 |
@@ -5,10 +5,15 @@
 - [Setup a CouchDB server](#setup-a-couchdb-server)
 - [Table of Contents](#table-of-contents)
 - [1. Prepare CouchDB](#1-prepare-couchdb)
-- [A. Using Docker container](#a-using-docker-container)
+- [A. Using Docker](#a-using-docker)
 - [1. Prepare](#1-prepare)
 - [2. Run docker container](#2-run-docker-container)
-- [B. Install CouchDB directly](#b-install-couchdb-directly)
+- [B. Using Docker Compose](#b-using-docker-compose)
+- [1. Prepare](#1-prepare-1)
+- [2. Creating Compose file](#2-create-a-docker-composeyml-file-with-the-following-added-to-it)
+- [3. Boot check](#3-run-the-docker-compose-file-to-boot-check)
+- [4. Starting Docker Compose in background](#4-run-the-docker-compose-file-in-the-background)
+- [C. Install CouchDB directly](#c-install-couchdb-directly)
 - [2. Run couchdb-init.sh for initialise](#2-run-couchdb-initsh-for-initialise)
 - [3. Expose CouchDB to the Internet](#3-expose-couchdb-to-the-internet)
 - [4. Client Setup](#4-client-setup)
@@ -21,44 +26,56 @@
 ---
 
 ## 1. Prepare CouchDB
-### A. Using Docker container
+### A. Using Docker
 
 #### 1. Prepare
 ```bash
-# Prepare environment variables.
+# Adding environment variables.
 export hostname=localhost:5984
 export username=goojdasjdas #Please change as you like.
 export password=kpkdasdosakpdsa #Please change as you like
 
-# Prepare directories which save data and configurations.
+# Creating the save data & configuration directories.
 mkdir couchdb-data
 mkdir couchdb-etc
 ```
 
 #### 2. Run docker container
 
 1. Boot Check.
 ```
 $ docker run --name couchdb-for-ols --rm -it -e COUCHDB_USER=${username} -e COUCHDB_PASSWORD=${password} -v ${PWD}/couchdb-data:/opt/couchdb/data -v ${PWD}/couchdb-etc:/opt/couchdb/etc/local.d -p 5984:5984 couchdb
 ```
-If your container has been exited, please check the permission of couchdb-data, and couchdb-etc.
-Once CouchDB run, these directories will be owned by uid:`5984`. Please chown it for you again.
+> [!WARNING]
+> If your container threw an error or exited unexpectedly, please check the permission of couchdb-data, and couchdb-etc.
+> Once CouchDB starts, these directories will be owned by uid:`5984`. Please chown it for that uid again.
 
 2. Enable it in the background
 ```
 $ docker run --name couchdb-for-ols -d --restart always -e COUCHDB_USER=${username} -e COUCHDB_PASSWORD=${password} -v ${PWD}/couchdb-data:/opt/couchdb/data -v ${PWD}/couchdb-etc:/opt/couchdb/etc/local.d -p 5984:5984 couchdb
 ```
-If you prefer a compose file instead of docker run, here is the equivalent below:
 
+Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
+### B. Using Docker Compose
+
+#### 1. Prepare
+
+```
+# Creating the save data & configuration directories.
+mkdir couchdb-data
+mkdir couchdb-etc
+```
+
+#### 2. Create a `docker-compose.yml` file with the following added to it
 ```
 services:
   couchdb:
     image: couchdb:latest
     container_name: couchdb-for-ols
-    user: 1000:1000
+    user: 5984:5984
     environment:
-      - COUCHDB_USER=${username}
-      - COUCHDB_PASSWORD=${password}
+      - COUCHDB_USER=<INSERT USERNAME HERE> #Please change as you like.
+      - COUCHDB_PASSWORD=<INSERT PASSWORD HERE> #Please change as you like.
     volumes:
       - ./couchdb-data:/opt/couchdb/data
       - ./couchdb-etc:/opt/couchdb/etc/local.d
@@ -66,7 +83,30 @@ services:
       - 5984:5984
     restart: unless-stopped
 ```
-### B. Install CouchDB directly
+
+#### 3. Run the Docker Compose file to boot check
+
+```
+docker compose up
+# Or if using the old version
+docker-compose up
+```
+> [!WARNING]
+> If your container threw an error or exited unexpectedly, please check the permission of couchdb-data, and couchdb-etc.
+> Once CouchDB starts, these directories will be owned by uid:`5984`. Please chown it for that uid again.
+
+#### 4. Run the Docker Compose file in the background
+If all went well and didn't throw any errors, `CTRL+C` out of it, and then run this command
+```
+docker compose up -d
+# Or if using the old version
+docker-compose up -d
+```
+
+Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
+
+### C. Install CouchDB directly
 Please refer to the [official document](https://docs.couchdb.org/en/stable/install/index.html). However, we do not have to configure it fully. Just the administrator needs to be configured.
 
 ## 2. Run couchdb-init.sh for initialise
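The docker run commands in the README section above depend on the variables exported in step 1; a minimal pre-flight sanity check can avoid launching the container with an empty `COUCHDB_USER` or `COUCHDB_PASSWORD`. This is a sketch, not part of the document — the values below are placeholders standing in for the ones you exported:

```shell
# Hypothetical values standing in for the ones exported in "1. Prepare".
hostname=localhost:5984
username=exampleuser
password=examplepass

# Refuse to proceed if any variable is unset or empty.
for v in hostname username password; do
  eval "val=\$$v"
  [ -n "$val" ] || { echo "missing: $v" >&2; exit 1; }
done
echo "ok"
```

Running this before the `docker run` line catches the common case where a new shell session no longer has the exported variables.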
@@ -1,7 +1,7 @@
 {
     "id": "obsidian-livesync",
     "name": "Self-hosted LiveSync",
-    "version": "0.25.4",
+    "version": "0.25.7",
     "minAppVersion": "0.9.12",
     "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "author": "vorotamoroz",
package-lock.json (generated, 840 lines changed)
File diff suppressed because it is too large
@@ -1,6 +1,6 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.25.4",
+    "version": "0.25.7",
     "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "main": "main.js",
     "type": "module",
@@ -23,7 +23,8 @@
         "pretty": "npm run prettyNoWrite -- --write --log-level error",
         "prettyCheck": "npm run prettyNoWrite -- --check",
         "prettyNoWrite": "prettier --config ./.prettierrc \"**/*.js\" \"**/*.ts\" \"**/*.json\" ",
-        "check": "npm run lint && npm run svelte-check && npm run tsc-check"
+        "check": "npm run lint && npm run svelte-check && npm run tsc-check",
+        "unittest": "deno test -A --no-check --coverage=cov_profile --v8-flags=--expose-gc --trace-leaks ./src/"
     },
     "keywords": [],
     "author": "vorotamoroz",
@@ -60,6 +61,7 @@
         "pouchdb-adapter-http": "^9.0.0",
         "pouchdb-adapter-idb": "^9.0.0",
+        "pouchdb-adapter-indexeddb": "^9.0.0",
         "pouchdb-adapter-memory": "^9.0.0",
         "pouchdb-core": "^9.0.0",
         "pouchdb-errors": "^9.0.0",
         "pouchdb-find": "^9.0.0",
@@ -75,7 +77,8 @@
         "tslib": "^2.8.1",
         "tsx": "^4.19.4",
         "typescript": "5.7.3",
-        "yaml": "^2.8.0"
+        "yaml": "^2.8.0",
+        "@types/deno": "^2.3.0"
     },
     "dependencies": {
         "@aws-sdk/client-s3": "^3.808.0",
@@ -1,13 +1,5 @@
 import { deleteDB, type IDBPDatabase, openDB } from "idb";
-export interface KeyValueDatabase {
-    get<T>(key: IDBValidKey): Promise<T>;
-    set<T>(key: IDBValidKey, value: T): Promise<IDBValidKey>;
-    del(key: IDBValidKey): Promise<void>;
-    clear(): Promise<void>;
-    keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>;
-    close(): void;
-    destroy(): Promise<void>;
-}
+import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
 const databaseCache: { [key: string]: IDBPDatabase<any> } = {};
 export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
     if (dbKey in databaseCache) {
@@ -20,6 +20,7 @@ export const EVENT_REQUEST_OPEN_P2P = "request-open-p2p";
 export const EVENT_REQUEST_CLOSE_P2P = "request-close-p2p";
 
 export const EVENT_REQUEST_RUN_DOCTOR = "request-run-doctor";
+export const EVENT_REQUEST_RUN_FIX_INCOMPLETE = "request-run-fix-incomplete";
 
 // export const EVENT_FILE_CHANGED = "file-changed";
 
@@ -38,6 +39,7 @@ declare global {
         [EVENT_REQUEST_COPY_SETUP_URI]: undefined;
         [EVENT_REQUEST_SHOW_SETUP_QR]: undefined;
         [EVENT_REQUEST_RUN_DOCTOR]: string;
+        [EVENT_REQUEST_RUN_FIX_INCOMPLETE]: undefined;
     }
 }
@@ -28,11 +28,12 @@ import type ObsidianLiveSyncPlugin from "../main.ts";
 import { writeString } from "../lib/src/string_and_binary/convert.ts";
 import { fireAndForget } from "../lib/src/common/utils.ts";
 import { sameChangePairs } from "./stores.ts";
-import type { KeyValueDatabase } from "./KeyValueDB.ts";
 import { scheduleTask } from "octagonal-wheels/concurrency/task";
 import { EVENT_PLUGIN_UNLOADED, eventHub } from "./events.ts";
 import { promiseWithResolver, type PromiseWithResolvers } from "octagonal-wheels/promises";
 import { AuthorizationHeaderGenerator } from "../lib/src/replication/httplib.ts";
+import type { KeyValueDatabase } from "../lib/src/interfaces/KeyValueDatabase.ts";
 
 export { scheduleTask, cancelTask, cancelAllTasks } from "../lib/src/concurrency/task.ts";
@@ -28,7 +28,7 @@ export class LocalDatabaseMaintenance extends LiveSyncCommands implements IObsid
         return this.localDatabase.localDatabase;
     }
     clearHash() {
-        this.localDatabase.hashCaches.clear();
+        this.localDatabase.clearCaches();
     }
 
     async confirm(title: string, message: string, affirmative = "Yes", negative = "No") {
Submodule src/lib updated: a5ac735c6f...1f51336162

src/main.ts (10 lines changed)
@@ -27,7 +27,7 @@ import {
     LiveSyncAbstractReplicator,
     type LiveSyncReplicatorEnv,
 } from "./lib/src/replication/LiveSyncAbstractReplicator.js";
-import { type KeyValueDatabase } from "./common/KeyValueDB.ts";
+import { type KeyValueDatabase } from "./lib/src/interfaces/KeyValueDatabase.ts";
 import { LiveSyncCommands } from "./features/LiveSyncCommands.ts";
 import { HiddenFileSync } from "./features/HiddenFileSync/CmdHiddenFileSync.ts";
 import { ConfigSync } from "./features/ConfigSync/CmdConfigSync.ts";
@@ -591,7 +591,11 @@ export default class ObsidianLiveSyncPlugin
         throwShouldBeOverridden();
     }
 
-    $$initializeDatabase(showingNotice: boolean = false, reopenDatabase = true): Promise<boolean> {
+    $$initializeDatabase(
+        showingNotice: boolean = false,
+        reopenDatabase = true,
+        ignoreSuspending: boolean = false
+    ): Promise<boolean> {
         throwShouldBeOverridden();
     }
 
@@ -628,7 +632,7 @@ export default class ObsidianLiveSyncPlugin
         throwShouldBeOverridden();
     }
 
-    $$performFullScan(showingNotice?: boolean): Promise<void> {
+    $$performFullScan(showingNotice?: boolean, ignoreSuspending?: boolean): Promise<void> {
         throwShouldBeOverridden();
     }
@@ -53,7 +53,7 @@ export class ModuleDatabaseFileAccess extends AbstractModule implements IObsidia
             async () => await this.storeContent("autoTest.md" as FilePathWithPrefix, testString)
         );
         // For test, we need to clear the caches.
-        await this.localDatabase.hashCaches.clear();
+        this.localDatabase.clearCaches();
         await this._test("readContent", async () => {
             const content = await this.fetch("autoTest.md" as FilePathWithPrefix);
             if (!content) return "File not found";
@@ -18,7 +18,7 @@ import {
     getStoragePathFromUXFileInfo,
     markChangesAreSame,
 } from "../../common/utils";
-import { getDocDataAsArray, isDocContentSame, readContent } from "../../lib/src/common/utils";
+import { getDocDataAsArray, isDocContentSame, readAsBlob, readContent } from "../../lib/src/common/utils";
 import { shouldBeIgnored } from "../../lib/src/string_and_binary/path";
 import type { ICoreModule } from "../ModuleTypes";
 import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
@@ -259,6 +259,16 @@ export class ModuleFileHandler extends AbstractModule implements ICoreModule {
         const docData = readContent(docRead);
 
         if (existOnStorage && !force) {
+            // If we want to process size mismatched files -- in case of having files created by some integrations, enable the toggle.
+            if (!this.settings.processSizeMismatchedFiles) {
+                // Check the file is not corrupted
+                // (Zero is a special case, may be created by some APIs and it might be acceptable).
+                if (docRead.size != 0 && docRead.size !== readAsBlob(docRead).size) {
+                    this._log(`File ${path} seems to be corrupted! Writing prevented.`, LOG_LEVEL_NOTICE);
+                    return false;
+                }
+            }
+
             // The file is exist on the storage. Let's check the difference between the file and the entry.
             // But, if force is true, then it should be updated.
             // Ok, we have to compare.
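The guard added above compares an entry's recorded size with the size of its materialised content, tolerating zero-byte entries. A self-contained sketch of that rule follows; `DocEntry` and the local `readAsBlob` are illustrative stand-ins, not the plugin's actual types:

```typescript
// Illustrative stand-ins for the plugin's document entry and readAsBlob().
type DocEntry = { size: number; data: string[] };
const readAsBlob = (doc: DocEntry): Blob => new Blob(doc.data);

// Mirrors the rule in the diff: zero-size entries are tolerated (some APIs
// create them), otherwise the recorded size must match the stored content.
function seemsCorrupted(doc: DocEntry): boolean {
    return doc.size != 0 && doc.size !== readAsBlob(doc).size;
}

console.log(seemsCorrupted({ size: 5, data: ["hello"] })); // sizes agree: intact
console.log(seemsCorrupted({ size: 9, data: ["hello"] })); // record disagrees with content
console.log(seemsCorrupted({ size: 0, data: [] }));        // zero-size special case
```

The zero-size carve-out matters because a legitimately empty file would otherwise always be flagged.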
@@ -73,7 +73,7 @@ export class ModuleRebuilder extends AbstractModule implements ICoreModule, Rebu
         await this.core.$$realizeSettingSyncMode();
         await this.resetLocalDatabase();
         await delay(1000);
-        await this.core.$$initializeDatabase(true);
+        await this.core.$$initializeDatabase(true, true, true);
         await this.core.$$markRemoteLocked();
         await this.core.$$tryResetRemoteDatabase();
         await this.core.$$markRemoteLocked();
@@ -190,7 +190,7 @@ export class ModuleRebuilder extends AbstractModule implements ICoreModule, Rebu
         if (makeLocalChunkBeforeSync) {
             await this.core.fileHandler.createAllChunks(true);
         } else if (!preventMakeLocalFilesBeforeSync) {
-            await this.core.$$initializeDatabase(true);
+            await this.core.$$initializeDatabase(true, true, true);
         } else {
             // Do not create local file entries before sync (Means use remote information)
         }
@@ -30,7 +30,7 @@ import {
 import { isAnyNote } from "../../lib/src/common/utils";
 import { EVENT_FILE_SAVED, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
 import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator";
-import { globalSlipBoard } from "../../lib/src/bureau/bureau";
 
 import { $msg } from "../../lib/src/common/i18n";
+import { clearHandlers } from "../../lib/src/replication/SyncParamsHandler";
@@ -150,12 +150,12 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
         }
 
         await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
-        this.localDatabase.hashCaches.clear();
+        this.localDatabase.clearCaches();
         // Perform the synchronisation once.
         if (await this.core.replicator.openReplication(this.settings, false, showMessage, true)) {
             await balanceChunkPurgedDBs(this.localDatabase.localDatabase, remoteDB.db);
             await purgeUnreferencedChunks(this.localDatabase.localDatabase, false);
-            this.localDatabase.hashCaches.clear();
+            this.localDatabase.clearCaches();
             await this.core.$$getReplicator().markRemoteResolved(this.settings);
             Logger("The local database has been cleaned up.", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
         } else {
@@ -310,7 +310,7 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
         const change = docs[0];
         if (!change) return;
         if (isChunk(change._id)) {
-            globalSlipBoard.submit("read-chunk", change._id, change as EntryLeaf);
+            this.localDatabase.onNewLeaf(change as EntryLeaf);
             return;
         }
         if (await this.core.$anyModuleParsedReplicationResultItem(change)) return;
@@ -20,7 +20,7 @@ import { AbstractModule } from "../AbstractModule.ts";
 import type { ICoreModule } from "../ModuleTypes.ts";
 import { withConcurrency } from "octagonal-wheels/iterable/map";
 export class ModuleInitializerFile extends AbstractModule implements ICoreModule {
-    async $$performFullScan(showingNotice?: boolean): Promise<void> {
+    async $$performFullScan(showingNotice?: boolean, ignoreSuspending: boolean = false): Promise<void> {
         this._log("Opening the key-value database", LOG_LEVEL_VERBOSE);
         const isInitialized = (await this.core.kvDB.get<boolean>("initialized")) || false;
         // synchronize all files between database and storage.
@@ -34,6 +34,16 @@ export class ModuleInitializerFile extends AbstractModule implements ICoreModule
             }
             return;
         }
+        if (!ignoreSuspending && this.settings.suspendFileWatching) {
+            if (showingNotice) {
+                this._log(
+                    "Now suspending file watching. Synchronising between the storage and the local database is now prevented.",
+                    LOG_LEVEL_NOTICE,
+                    "syncAll"
+                );
+            }
+            return;
+        }
 
         if (showingNotice) {
             this._log("Initializing", LOG_LEVEL_NOTICE, "syncAll");
@@ -355,11 +365,15 @@ export class ModuleInitializerFile extends AbstractModule implements ICoreModule
         this._log(`Checking expired file history done`);
     }
 
-    async $$initializeDatabase(showingNotice: boolean = false, reopenDatabase = true): Promise<boolean> {
+    async $$initializeDatabase(
+        showingNotice: boolean = false,
+        reopenDatabase = true,
+        ignoreSuspending: boolean = false
+    ): Promise<boolean> {
         this.core.$$resetIsReady();
         if (!reopenDatabase || (await this.core.$$openDatabase())) {
             if (this.localDatabase.isReady) {
-                await this.core.$$performFullScan(showingNotice);
+                await this.core.$$performFullScan(showingNotice, ignoreSuspending);
             }
             if (!(await this.core.$everyOnDatabaseInitialized(showingNotice))) {
                 this._log(`Initializing database has been failed on some module`, LOG_LEVEL_NOTICE);
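The `ignoreSuspending` parameter threaded through `$$initializeDatabase` into `$$performFullScan` above controls a single early return. A simplified sketch of that decision (the `Settings` shape here is illustrative, reduced to the one relevant flag):

```typescript
// Simplified settings shape; only the flag relevant to the new guard.
type Settings = { suspendFileWatching: boolean };

// Mirrors the added early return: a full scan is skipped while file watching
// is suspended, unless the caller (e.g. a rebuild) explicitly overrides it.
function shouldRunFullScan(settings: Settings, ignoreSuspending: boolean): boolean {
    if (!ignoreSuspending && settings.suspendFileWatching) return false;
    return true;
}
```

This is why the rebuild paths in ModuleRebuilder pass `true` as the third argument: a rebuild must scan even while watching is suspended.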
@@ -1,16 +1,20 @@
-import { LOG_LEVEL_NOTICE } from "octagonal-wheels/common/logger";
+import { LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, Logger } from "../../lib/src/common/logger.ts";
 import {
     EVENT_REQUEST_OPEN_P2P,
     EVENT_REQUEST_OPEN_SETTING_WIZARD,
     EVENT_REQUEST_OPEN_SETTINGS,
     EVENT_REQUEST_OPEN_SETUP_URI,
     EVENT_REQUEST_RUN_DOCTOR,
+    EVENT_REQUEST_RUN_FIX_INCOMPLETE,
     eventHub,
 } from "../../common/events.ts";
 import { AbstractModule } from "../AbstractModule.ts";
 import type { ICoreModule } from "../ModuleTypes.ts";
 import { $msg } from "src/lib/src/common/i18n.ts";
 import { performDoctorConsultation, RebuildOptions } from "../../lib/src/common/configForDoc.ts";
+import { getPath, isValidPath } from "../../common/utils.ts";
+import { isMetaEntry } from "../../lib/src/common/types.ts";
+import { isDeletedEntry, isDocContentSame, isLoadedEntry, readAsBlob } from "../../lib/src/common/utils.ts";
 
 export class ModuleMigration extends AbstractModule implements ICoreModule {
     async migrateUsingDoctor(skipRebuild: boolean = false, activateReason = "updated", forceRescan = false) {
@@ -96,12 +100,129 @@
         return false;
     }
 
+    async checkIncompleteDocs(force: boolean = false): Promise<boolean> {
+        const incompleteDocsChecked = (await this.core.kvDB.get<boolean>("checkIncompleteDocs")) || false;
+        if (incompleteDocsChecked && !force) {
+            this._log("Incomplete docs check already done, skipping.", LOG_LEVEL_VERBOSE);
+            return Promise.resolve(true);
+        }
+
+        this._log("Checking for incomplete documents...", LOG_LEVEL_NOTICE, "check-incomplete");
+        const errorFiles = [];
+        for await (const metaDoc of this.localDatabase.findAllNormalDocs({ conflicts: true })) {
+            const path = getPath(metaDoc);
+
+            if (!isValidPath(path)) {
+                continue;
+            }
+            if (!(await this.core.$$isTargetFile(path, true))) {
+                continue;
+            }
+            if (!isMetaEntry(metaDoc)) {
+                continue;
+            }
+
+            const doc = await this.localDatabase.getDBEntryFromMeta(metaDoc);
+            if (!doc || !isLoadedEntry(doc)) {
+                continue;
+            }
+            if (isDeletedEntry(doc)) {
+                continue;
+            }
+            const storageFileContent = await this.core.storageAccess.readHiddenFileBinary(path);
+            // const storageFileBlob = createBlob(storageFileContent);
+            const sizeOnStorage = storageFileContent.byteLength;
+            const recordedSize = doc.size;
+            const docBlob = readAsBlob(doc);
+            const actualSize = docBlob.size;
+            if (recordedSize !== actualSize || sizeOnStorage !== actualSize || sizeOnStorage !== recordedSize) {
+                const contentMatched = await isDocContentSame(doc.data, storageFileContent);
+                errorFiles.push({ path, recordedSize, actualSize, storageSize: sizeOnStorage, contentMatched });
+                Logger(
+                    `Size mismatch for ${path}: ${recordedSize} (DB Recorded) , ${actualSize} (DB Stored) , ${sizeOnStorage} (Storage Stored), ${contentMatched ? "Content Matched" : "Content Mismatched"}`
+                );
+            }
+        }
+        if (errorFiles.length == 0) {
+            Logger("No size mismatches found", LOG_LEVEL_NOTICE);
+            await this.core.kvDB.set("checkIncompleteDocs", true);
+            return Promise.resolve(true);
+        }
+        Logger(`Found ${errorFiles.length} size mismatches`, LOG_LEVEL_NOTICE);
+        // We have to repair them following rules and situations:
+        // A. DB Recorded != DB Stored
+        //   A.1. DB Recorded == Storage Stored
+        //        Possibly recoverable from storage. Just overwrite the DB content with storage content.
+        //   A.2. Neither
+        //        Probably it cannot be resolved on this device. Even if the storage content is larger than DB Recorded, it possibly corrupted.
+        //        We do not fix it automatically. Leave it as is. Possibly other device can do this.
+        // B. DB Recorded == DB Stored , < Storage Stored
+        //    Very fragile, if DB Recorded size is less than Storage Stored size, we possibly repair the content (The issue was `unexpectedly shortened file`).
+        //    We do not fix it automatically, but it will be automatically overwritten in other process.
+        // C. DB Recorded == DB Stored , > Storage Stored
+        //    Probably restored by the user by resolving A or B on other device, We should overwrite the storage
+        //    Also do not fix it automatically. It should be overwritten by replication.
+        const recoverable = errorFiles.filter((e) => {
+            return e.recordedSize === e.storageSize;
+        });
+        const unrecoverable = errorFiles.filter((e) => {
+            return e.recordedSize !== e.storageSize;
+        });
+        const messageUnrecoverable =
+            unrecoverable.length > 0
+                ? $msg("moduleMigration.fix0256.messageUnrecoverable", {
+                      filesNotRecoverable: unrecoverable
+                          .map((e) => `- ${e.path} (M: ${e.recordedSize}, A: ${e.actualSize}, S: ${e.storageSize})`)
+                          .join("\n"),
+                  })
+                : "";
+
+        const message = $msg("moduleMigration.fix0256.message", {
+            files: recoverable
+                .map((e) => `- ${e.path} (M: ${e.recordedSize}, A: ${e.actualSize}, S: ${e.storageSize})`)
+                .join("\n"),
+            messageUnrecoverable,
+        });
+        const CHECK_IT_LATER = $msg("moduleMigration.fix0256.buttons.checkItLater");
+        const FIX = $msg("moduleMigration.fix0256.buttons.fix");
+        const DISMISS = $msg("moduleMigration.fix0256.buttons.DismissForever");
+        const ret = await this.core.confirm.askSelectStringDialogue(message, [CHECK_IT_LATER, FIX, DISMISS], {
+            title: $msg("moduleMigration.fix0256.title"),
+            defaultAction: CHECK_IT_LATER,
+        });
+        if (ret == FIX) {
+            for (const file of recoverable) {
+                // Overwrite the database with the files on the storage
+                const stubFile = this.core.storageAccess.getFileStub(file.path);
+                if (stubFile == null) {
+                    Logger(`Could not find stub file for ${file.path}`, LOG_LEVEL_NOTICE);
+                    continue;
+                }
+
+                stubFile.stat.mtime = Date.now();
+                const result = await this.core.fileHandler.storeFileToDB(stubFile, true, false);
+                if (result) {
+                    Logger(`Successfully restored ${file.path} from storage`);
+                } else {
+                    Logger(`Failed to restore ${file.path} from storage`, LOG_LEVEL_NOTICE);
+                }
+            }
+        } else if (ret === DISMISS) {
+            // User chose to dismiss the issue
+            await this.core.kvDB.set("checkIncompleteDocs", true);
+        }
+
+        return Promise.resolve(true);
+    }
+
     async $everyOnFirstInitialize(): Promise<boolean> {
         if (!this.localDatabase.isReady) {
             this._log($msg("moduleMigration.logLocalDatabaseNotReady"), LOG_LEVEL_NOTICE);
             return false;
         }
         if (this.settings.isConfigured) {
             // TODO: Probably we have to check for insecure chunks
+            await this.checkIncompleteDocs();
             await this.migrateUsingDoctor(false);
             // await this.migrationCheck();
             await this.migrateDisableBulkSend();
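The repair rules in the hunk above reduce to comparing three sizes per file and keeping only case A.1 (DB record agrees with storage) as automatically recoverable. A minimal sketch of that classification; the `Mismatch` record shape is assumed from the `errorFiles` entries in the diff:

```typescript
// Shape assumed from the errorFiles entries in the diff above.
type Mismatch = { path: string; recordedSize: number; actualSize: number; storageSize: number };

// Rule A.1: the DB record agrees with storage, so the DB content can be
// rebuilt from the file on disk. Everything else (A.2, B, C) is left for
// replication or another device to resolve.
function splitByRecoverability(files: Mismatch[]) {
    const recoverable = files.filter((e) => e.recordedSize === e.storageSize);
    const unrecoverable = files.filter((e) => e.recordedSize !== e.storageSize);
    return { recoverable, unrecoverable };
}

const sample: Mismatch[] = [
    { path: "a.md", recordedSize: 10, actualSize: 7, storageSize: 10 }, // case A.1
    { path: "b.md", recordedSize: 10, actualSize: 10, storageSize: 4 }, // case C
];
const { recoverable, unrecoverable } = splitByRecoverability(sample);
```

Only the `recoverable` list is offered to the FIX action; the rest are merely reported to the user.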
@@ -120,6 +241,9 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
         eventHub.onEvent(EVENT_REQUEST_RUN_DOCTOR, async (reason) => {
             await this.migrateUsingDoctor(false, reason, true);
         });
+        eventHub.onEvent(EVENT_REQUEST_RUN_FIX_INCOMPLETE, async () => {
+            await this.checkIncompleteDocs(true);
+        });
         return Promise.resolve(true);
     }
 }
@@ -25,8 +25,8 @@ export class ConflictResolveModal extends Modal {
     title: string = "Conflicting changes";
 
     pluginPickMode: boolean = false;
-    localName: string = "Keep A";
-    remoteName: string = "Keep B";
+    localName: string = "Use Base";
+    remoteName: string = "Use Conflicted";
     offEvent?: ReturnType<typeof eventHub.onEvent>;
 
     constructor(app: App, filename: string, diff: diff_result, pluginPickMode?: boolean, remoteName?: string) {
@@ -17,8 +17,6 @@ import { delay, isObjectDifferent, sizeToHumanReadable } from "../../../lib/src/
 import { versionNumberString2Number } from "../../../lib/src/string_and_binary/convert.ts";
 import { Logger } from "../../../lib/src/common/logger.ts";
 import { checkSyncInfo } from "@/lib/src/pouchdb/negotiation.ts";
-import { balanceChunkPurgedDBs } from "@/lib/src/pouchdb/chunks.ts";
-import { purgeUnreferencedChunks } from "@/lib/src/pouchdb/chunks.ts";
 import { testCrypt } from "../../../lib/src/encryption/e2ee_v2.ts";
 import ObsidianLiveSyncPlugin from "../../../main.ts";
 import { scheduleTask } from "../../../common/utils.ts";
@@ -38,7 +36,6 @@ import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
 import { fireAndForget, yieldNextAnimationFrame } from "octagonal-wheels/promises";
 import { confirmWithMessage } from "../../coreObsidian/UILib/dialogs.ts";
 import { EVENT_REQUEST_RELOAD_SETTING_TAB, eventHub } from "../../../common/events.ts";
-import { skipIfDuplicated } from "octagonal-wheels/concurrency/lock";
 import { JournalSyncMinio } from "../../../lib/src/replication/journal/objectstore/JournalSyncMinio.ts";
 import { paneChangeLog } from "./PaneChangeLog.ts";
 import {
@@ -861,49 +858,6 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
             });
     }
 
-    async dryRunGC() {
-        await skipIfDuplicated("cleanup", async () => {
-            const replicator = this.plugin.$$getReplicator();
-            if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
-            const remoteDBConn = await replicator.connectRemoteCouchDBWithSetting(
-                this.plugin.settings,
-                this.plugin.$$isMobile()
-            );
-            if (typeof remoteDBConn == "string") {
-                Logger(remoteDBConn);
-                return;
-            }
-            await purgeUnreferencedChunks(remoteDBConn.db, true, this.plugin.settings, false);
-            await purgeUnreferencedChunks(this.plugin.localDatabase.localDatabase, true);
-            this.plugin.localDatabase.hashCaches.clear();
-        });
-    }
-
-    async dbGC() {
-        // Lock the remote completely once.
-        await skipIfDuplicated("cleanup", async () => {
-            const replicator = this.plugin.$$getReplicator();
-            if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
-            await this.plugin.$$getReplicator().markRemoteLocked(this.plugin.settings, true, true);
-            const remoteDBConnection = await replicator.connectRemoteCouchDBWithSetting(
-                this.plugin.settings,
-                this.plugin.$$isMobile()
-            );
-            if (typeof remoteDBConnection == "string") {
-                Logger(remoteDBConnection);
-                return;
-            }
-            await purgeUnreferencedChunks(remoteDBConnection.db, false, this.plugin.settings, true);
-            await purgeUnreferencedChunks(this.plugin.localDatabase.localDatabase, false);
-            this.plugin.localDatabase.hashCaches.clear();
-            await balanceChunkPurgedDBs(this.plugin.localDatabase.localDatabase, remoteDBConnection.db);
-            this.plugin.localDatabase.refreshSettings();
-            Logger(
-                "The remote database has been cleaned up! Other devices will be cleaned up on the next synchronisation."
-            );
-        });
-    }
-
     getMinioJournalSyncClient() {
         const id = this.plugin.settings.accessKey;
         const key = this.plugin.settings.secretKey;
@@ -6,7 +6,7 @@ import type { PageFunctions } from "./SettingPane.ts";
 export function paneAdvanced(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement, { addPanel }: PageFunctions): void {
     void addPanel(paneEl, "Memory cache").then((paneEl) => {
         new Setting(paneEl).autoWireNumeric("hashCacheMaxCount", { clampMin: 10 });
-        new Setting(paneEl).autoWireNumeric("hashCacheMaxAmount", { clampMin: 1 });
+        // new Setting(paneEl).autoWireNumeric("hashCacheMaxAmount", { clampMin: 1 });
     });
     void addPanel(paneEl, "Local Database Tweak").then((paneEl) => {
         paneEl.addClass("wizardHidden");
@@ -3,6 +3,7 @@ import { versionNumberString2Number } from "../../../lib/src/string_and_binary/c
 import { $msg } from "../../../lib/src/common/i18n.ts";
 import { fireAndForget } from "octagonal-wheels/promises";
 import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
+import { visibleOnly } from "./SettingPane.ts";
 //@ts-ignore
 const manifestVersion: string = MANIFEST_VERSION || "-";
 //@ts-ignore
@@ -10,8 +11,34 @@ const updateInformation: string = UPDATE_INFO || "";
 
 const lastVersion = ~~(versionNumberString2Number(manifestVersion) / 1000);
 export function paneChangeLog(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement): void {
-    const informationDivEl = this.createEl(paneEl, "div", { text: "" });
+    const cx = this.createEl(
+        paneEl,
+        "div",
+        {
+            cls: "op-warn-info",
+        },
+        undefined,
+        visibleOnly(() => !this.isConfiguredAs("versionUpFlash", ""))
+    );
+
+    this.createEl(
+        cx,
+        "div",
+        {
+            text: this.editingSettings.versionUpFlash,
+        },
+        undefined
+    );
+    this.createEl(cx, "button", { text: $msg("obsidianLiveSyncSettingTab.btnGotItAndUpdated") }, (e) => {
+        e.addClass("mod-cta");
+        e.addEventListener("click", () => {
+            fireAndForget(async () => {
+                this.editingSettings.versionUpFlash = "";
+                await this.saveAllDirtySettings();
+            });
+        });
+    });
+    const informationDivEl = this.createEl(paneEl, "div", { text: "" });
     const tmpDiv = createDiv();
     // tmpDiv.addClass("sls-header-button");
     tmpDiv.addClass("op-warn-info");
@@ -26,7 +26,7 @@ import { addPrefix, shouldBeIgnored, stripAllPrefixes } from "../../../lib/src/s
import { $msg } from "../../../lib/src/common/i18n.ts";
import { Semaphore } from "octagonal-wheels/concurrency/semaphore";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { EVENT_REQUEST_RUN_DOCTOR, eventHub } from "../../../common/events.ts";
import { EVENT_REQUEST_RUN_DOCTOR, EVENT_REQUEST_RUN_FIX_INCOMPLETE, eventHub } from "../../../common/events.ts";
import { ICHeader, ICXHeader, PSCHeader } from "../../../common/types.ts";
import { HiddenFileSync } from "../../../features/HiddenFileSync/CmdHiddenFileSync.ts";
import { EVENT_REQUEST_SHOW_HISTORY } from "../../../common/obsidianEvents.ts";
@@ -50,6 +50,19 @@ export function paneHatch(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement,
        eventHub.emitEvent(EVENT_REQUEST_RUN_DOCTOR, "you wanted(Thank you)!");
    })
);
new Setting(paneEl)
    .setName($msg("Setting.TroubleShooting.ScanBrokenFiles"))
    .setDesc($msg("Setting.TroubleShooting.ScanBrokenFiles.Desc"))
    .addButton((button) =>
        button
            .setButtonText("Scan for Broken files")
            .setCta()
            .setDisabled(false)
            .onClick(() => {
                this.closeSetting();
                eventHub.emitEvent(EVENT_REQUEST_RUN_FIX_INCOMPLETE);
            })
    );
new Setting(paneEl).setName("Prepare the 'report' to create an issue").addButton((button) =>
    button
        .setButtonText("Copy Report to clipboard")

@@ -190,7 +203,7 @@ ${stringifyYaml({
);
infoGroupEl.appendChild(
    this.createEl(infoGroupEl, "div", {
        text: `Database: Modified: ${!fileOnDB ? `Missing:` : `${new Date(fileOnDB.mtime).toLocaleString()}, Size:${fileOnDB.size}`}`,
        text: `Database: Modified: ${!fileOnDB ? `Missing:` : `${new Date(fileOnDB.mtime).toLocaleString()}, Size:${fileOnDB.size} (actual size:${readAsBlob(fileOnDB).size})`}`,
    })
);
})
@@ -335,7 +348,7 @@ ${stringifyYaml({
Logger("Start verifying all files", LOG_LEVEL_NOTICE, "verify");
const ignorePatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesIgnorePatterns");
const targetPatterns = getFileRegExp(this.plugin.settings, "syncInternalFilesTargetPatterns");
this.plugin.localDatabase.hashCaches.clear();
this.plugin.localDatabase.clearCaches();
Logger("Start verifying all files", LOG_LEVEL_NOTICE, "verify");
const files = this.plugin.settings.syncInternalFiles
    ? await this.plugin.storageAccess.getFilesIncludeHidden("/", targetPatterns, ignorePatterns)
@@ -28,9 +28,9 @@ export function panePatches(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElemen
void addPanel(paneEl, "Compatibility (Database structure)").then((paneEl) => {
    new Setting(paneEl).autoWireToggle("useIndexedDBAdapter", { invert: true, holdValue: true });

    new Setting(paneEl)
        .autoWireToggle("doNotUseFixedRevisionForChunks", { holdValue: true })
        .setClass("wizardHidden");
    // new Setting(paneEl)
    //     .autoWireToggle("doNotUseFixedRevisionForChunks", { holdValue: true })
    //     .setClass("wizardHidden");
    new Setting(paneEl).autoWireToggle("handleFilenameCaseSensitive", { holdValue: true }).setClass("wizardHidden");

    this.addOnSaved("useIndexedDBAdapter", async () => {

@@ -82,6 +82,7 @@ export function panePatches(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElemen
void addPanel(paneEl, "Edge case addressing (Behaviour)").then((paneEl) => {
    new Setting(paneEl).autoWireToggle("doNotSuspendOnFetching");
    new Setting(paneEl).setClass("wizardHidden").autoWireToggle("doNotDeleteFolder");
    new Setting(paneEl).autoWireToggle("processSizeMismatchedFiles");
});

void addPanel(paneEl, "Edge case addressing (Processing)").then((paneEl) => {

@@ -99,13 +100,13 @@ export function panePatches(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElemen
});

void addPanel(paneEl, "Remote Database Tweak (In sunset)").then((paneEl) => {
    new Setting(paneEl).autoWireToggle("useEden").setClass("wizardHidden");
    const onlyUsingEden = visibleOnly(() => this.isConfiguredAs("useEden", true));
    new Setting(paneEl).autoWireNumeric("maxChunksInEden", { onUpdate: onlyUsingEden }).setClass("wizardHidden");
    new Setting(paneEl)
        .autoWireNumeric("maxTotalLengthInEden", { onUpdate: onlyUsingEden })
        .setClass("wizardHidden");
    new Setting(paneEl).autoWireNumeric("maxAgeInEden", { onUpdate: onlyUsingEden }).setClass("wizardHidden");
    // new Setting(paneEl).autoWireToggle("useEden").setClass("wizardHidden");
    // const onlyUsingEden = visibleOnly(() => this.isConfiguredAs("useEden", true));
    // new Setting(paneEl).autoWireNumeric("maxChunksInEden", { onUpdate: onlyUsingEden }).setClass("wizardHidden");
    // new Setting(paneEl)
    //     .autoWireNumeric("maxTotalLengthInEden", { onUpdate: onlyUsingEden })
    //     .setClass("wizardHidden");
    // new Setting(paneEl).autoWireNumeric("maxAgeInEden", { onUpdate: onlyUsingEden }).setClass("wizardHidden");

    new Setting(paneEl).autoWireToggle("enableCompression").setClass("wizardHidden");
});
@@ -7,7 +7,6 @@ import {
import { Logger } from "../../../lib/src/common/logger.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { fireAndForget } from "octagonal-wheels/promises";
import { EVENT_REQUEST_COPY_SETUP_URI, eventHub } from "../../../common/events.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";

@@ -17,30 +16,6 @@ export function paneSyncSettings(
    paneEl: HTMLElement,
    { addPanel, addPane }: PageFunctions
): void {
    if (this.editingSettings.versionUpFlash != "") {
        const c = this.createEl(
            paneEl,
            "div",
            {
                text: this.editingSettings.versionUpFlash,
                cls: "op-warn sls-setting-hidden",
            },
            (el) => {
                this.createEl(el, "button", { text: $msg("obsidianLiveSyncSettingTab.btnGotItAndUpdated") }, (e) => {
                    e.addClass("mod-cta");
                    e.addEventListener("click", () => {
                        fireAndForget(async () => {
                            this.editingSettings.versionUpFlash = "";
                            await this.saveAllDirtySettings();
                            c.remove();
                        });
                    });
                });
            },
            visibleOnly(() => !this.isConfiguredAs("versionUpFlash", ""))
        );
    }

    this.createEl(paneEl, "div", {
        text: $msg("obsidianLiveSyncSettingTab.msgSelectAndApplyPreset"),
        cls: "wizardOnly",
@@ -23,5 +23,5 @@
    }
},
"include": ["**/*.ts"],
"exclude": ["pouchdb-browser-webpack", "utils", "src/lib/apps"]
"exclude": ["pouchdb-browser-webpack", "utils", "src/lib/apps", "**/*.test.ts"]
}
updates.md

@@ -1,3 +1,91 @@
## 0.25.7

15th August, 2025

**Since the release of 0.25.6, two serious problems have been found. Please update immediately.**

- We may have corrupted some documents during the migration process. **Please check your documents in the wizard.**
- Due to a chunk ID assignment issue, some data has not been encrypted. **Please rebuild the database using Rebuild Everything** if you have enabled E2EE.

**_So, if you have enabled E2EE, please perform `Rebuild everything`. If not, please check your documents in the wizard._**

In the next version, insecure chunk detection will be implemented.

### Fixed

- Off-loaded chunking has been fixed to ensure proper functionality (#693).
- Chunk document ID assignment has been fixed.
- The replication-prevention message shown during version-up detection has been improved (#686).
- `Keep A` and `Keep B` in the conflict-resolution dialogue have been renamed to `Use Base` and `Use Conflicted` (#691).

### Improved

- Documents whose metadata and content size do not match are now detected and reported, and are prevented from being applied to the storage.
    - This behaviour can be configured in `Patch` -> `Edge case addressing (Behaviour)` -> `Process files even if seems to be corrupted`
    - Note: this toggle is intended for users who manipulate the database directly.
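
The detection described above can be sketched as follows. This is only an illustrative shape, not the plugin's actual code; the `DocMeta` type and field names are assumptions for the example.

```typescript
// Hypothetical sketch of the size-mismatch detection described above;
// the real metadata fields in Self-hosted LiveSync may differ.
type DocMeta = { path: string; size: number };

function isSizeMismatched(meta: DocMeta, actualContentSize: number): boolean {
    // A document is treated as possibly corrupted when the size recorded in
    // its metadata disagrees with the size of the reassembled content.
    return meta.size !== actualContentSize;
}
```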

### New Features

- `Scan for Broken files` has been implemented on `Hatch` -> `TroubleShooting`.

### Refactored

- Off-loaded processes have been refactored for better maintainability.
    - Files prefixed `bg.worker` now run on the worker threads.
    - Files prefixed `bgWorker.` now control these worker threads. (I know what you want to say... I will rename them).
- Removed unused code.
## ~~0.25.5~~ 0.25.6

(0.25.5 has been withdrawn due to a bug in the `Fetch chunks on demand` feature).

9th August, 2025

### Fixed

- Storage scanning no longer occurs when `Suspend file watching` is enabled (including during the boot sequence).
    - This change improves safety when troubleshooting or fetching the remote database.
- `Fetch chunks on demand` is now working again (only 0.25.5 was affected; other versions are unaffected).

### Improved

- Saving notes and files now consumes less memory.
    - Data is no longer fully buffered in memory and written at once; instead, it is written in increments of just over 2 MB.
- Chunk caching is now more efficient.
    - Chunks are now managed solely by their count (still maintained as an LRU). If memory usage becomes excessive, they will be automatically released by the runtime.
    - Reverse-indexing is no longer used; it is instead performed by scanning the caches, which also acts as WeakRef thinning.
- Both of these changes may help with #692, #680, and more.

### Changed

- `Incubate Chunks in Document` (also known as `Eden`) is now fully sunset.
    - Existing chunks can still be read, but new ones will no longer be created.
- The `Compute revisions for chunks` setting has also been removed.
    - This feature is now always enabled and is no longer configurable (restoring the original behaviour).
- As mentioned, `Memory cache size (by total characters)` has been removed.
    - The `Memory cache size (by total items)` setting is now the only option available (but its value is on a 10x scale compared to the previous version).

### Refactored

- A significant refactoring of the core codebase is underway.
    - This is part of our ongoing efforts to improve code maintainability and readability, and to unify interfaces.
    - Previously, complex files posed a risk due to a low bus factor. Fortunately, as our devices have become faster and more capable, we can now write code that is clearer and more maintainable (without much cost in performance).
- Hashing functions have been refactored into the `HashManager` class and its derived classes.
- Chunk splitting functions have been refactored into the `ContentSplitterCore` class and its derived classes.
- Change tracking functions have been refactored into the `ChangeManager` class.
- Chunk read/write functions have been refactored into the `ChunkManager` class.
    - Fetching chunks on demand is now handled separately from the `ChunkManager` and chunk reading functions. Chunks are queued by the `ChunkManager` and then processed by the `ChunkFetcher`, simplifying the process and reducing unnecessary complexity.
- Local database access via `LiveSyncLocalDB` has then been refactored to use the new classes.
- References to external sources from `commonlib` have been corrected.
- Type definitions in `types.ts` have been refined.
- Unit tests are being added incrementally.
    - I am using `Deno` for testing, to simplify testing and coverage reporting.
    - While this is not identical to the Obsidian environment, `jest` has limitations of its own; it is certainly better than having no tests.
    - In other words, recent manual scenario testing has highlighted some shortcomings.
- `pouchdb-test`, used for testing PouchDB with Deno, has been added, utilising the `memory` adapter.
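
In spirit, the queue-and-fetcher split described above might look like this tiny sketch. The names are hypothetical, not the actual `ChunkManager`/`ChunkFetcher` code:

```typescript
// Illustrative sketch of the described design: one side enqueues missing
// chunk IDs (de-duplicated), another side drains and fetches them.
type Chunk = { id: string; data: string };

class ChunkFetchQueue {
    private pending: string[] = [];

    enqueue(id: string): void {
        // De-duplicate so the same chunk is never fetched twice per batch.
        if (!this.pending.includes(id)) this.pending.push(id);
    }

    async drain(fetch: (id: string) => Promise<Chunk>): Promise<Chunk[]> {
        const ids = this.pending.splice(0); // take ownership of the batch
        return Promise.all(ids.map(fetch));
    }
}
```

Separating "what to fetch" from "how to fetch" keeps the manager free of network concerns, which is the simplification the note above describes.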

Side note: although class-oriented programming is sometimes considered an outdated style, I have come to re-evaluate it as valuable from the perspectives of maintainability and readability.

## 0.25.4

29th July, 2025
@@ -57,47 +145,5 @@ I have now rewritten the E2EE code to be more robust and easier to understand. I

As a result, this is the first time in a while that forward compatibility has been broken. We have also taken the opportunity to change all metadata to use encryption rather than obfuscation. Furthermore, the `Dynamic Iteration Count` setting is now redundant and has been moved to the `Patches` pane in the settings. Thanks to Rabin-Karp, the eden setting is also no longer necessary and has been relocated accordingly. Therefore, v0.25.0 represents a legitimate and correct evolution.

### Fixed

- The encryption algorithm now uses HKDF with a master key.
    - This is more robust and faster than the previous implementation.
    - It is now more secure against rainbow table attacks.
    - The previous implementation can still be used via `Patches` -> `End-to-end encryption algorithm` -> `Force V1`.
    - Note that `V1: Legacy` can decrypt V2, but produces V1 output.
- `Fetch everything from the remote` now works correctly.
    - It no longer creates local database entries before synchronisation.
- Extra log messages during QR code decoding have been removed.
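
Deriving per-item keys from a master key with HKDF, as described above, can be sketched with the Web Crypto API. The salt, info strings, and output length here are illustrative assumptions, not the plugin's actual parameters:

```typescript
import { webcrypto } from "node:crypto";

// Sketch of HKDF-based subkey derivation from a master key. Because HKDF is a
// one-way expansion, each derived key reveals nothing about the master key.
async function deriveSubkey(masterKey: Uint8Array, info: string): Promise<Uint8Array> {
    const hkdfKey = await webcrypto.subtle.importKey("raw", masterKey, "HKDF", false, ["deriveBits"]);
    const bits = await webcrypto.subtle.deriveBits(
        // Illustrative parameters: a fixed salt and a caller-supplied info label.
        { name: "HKDF", hash: "SHA-256", salt: new Uint8Array(32), info: new TextEncoder().encode(info) },
        hkdfKey,
        256 // derive a 256-bit subkey
    );
    return new Uint8Array(bits);
}
```

Derivation is deterministic for the same master key and info label, and different labels yield independent subkeys, which is what makes per-document keys cheap compared with repeated PBKDF2 iterations.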

### Changed

- The following settings have been moved to the `Patches` pane:
    - `Remote Database Tweak`
    - `Incubate Chunks in Document`
    - `Data Compression`

### Behavioural and API Changes

- `DirectFileManipulatorV2` now requires new settings (as you may already know, E2EEAlgorithm).
- The database version has been increased to `12` from `10`.
    - If an older version is detected, we will be notified and synchronisation will be paused until the update is acknowledged. (It has been a long time since this behaviour was last encountered; we always err on the side of caution, even if it is less convenient.)

### Refactored

- `couchdb_utils.ts` has been separated into several explicitly named files.
- Some missing functions in `bgWorker.mock.ts` have been added.

## 0.24.31

10th July, 2025

### Fixed

- The description of `Enable Developers' Debug Tools.` has been refined.
    - The performance impact is now stated more clearly.
- Automatic conflict checking and resolution has been improved.
    - It now works in parallel for each file instead of sequentially, which makes the first synchronisation with local file information significantly faster.
- The conflict-resolution dialogue is no longer shown for multiple files at once.
    - It will be shown for each file, one by one.
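
The sequential-to-parallel change above amounts to mapping each file to an independent promise. This is a generic sketch with a stand-in `checkConflict`, not the plugin's actual routine:

```typescript
// Sketch: run a per-file conflict check concurrently instead of one by one.
// Each file's check is independent, so Promise.all can overlap their awaits.
async function checkAllFiles<T>(
    files: string[],
    checkConflict: (path: string) => Promise<T>
): Promise<T[]> {
    return Promise.all(files.map((path) => checkConflict(path)));
}
```

In practice a concurrency limit (e.g. a semaphore) would usually cap how many checks run at once; the sketch omits that for brevity.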

Older notes are in
[updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md).
@@ -1,3 +1,45 @@

## 0.25.0

19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)

After reading Issue #668, I conducted another self-review of the E2EE-related code. In retrospect, it was clearly written by someone inexperienced, which is understandable, but it is still rather embarrassing. Three years is certainly enough time for growth.

I have now rewritten the E2EE code to be more robust and easier to understand. It is significantly more readable and should be easier to maintain in the future. The performance issue, previously considered a concern, has been addressed by introducing a master key and deriving keys using HKDF. This approach is both fast and robust, and it provides protection against rainbow table attacks. (In addition, this implementation has been released as [a dedicated package on the npm registry](https://github.com/vrtmrz/octagonal-wheels), and is tested with 100% branch coverage.)

As a result, this is the first time in a while that forward compatibility has been broken. We have also taken the opportunity to change all metadata to use encryption rather than obfuscation. Furthermore, the `Dynamic Iteration Count` setting is now redundant and has been moved to the `Patches` pane in the settings. Thanks to Rabin-Karp, the eden setting is also no longer necessary and has been relocated accordingly. Therefore, v0.25.0 represents a legitimate and correct evolution.

### Fixed

- The encryption algorithm now uses HKDF with a master key.
    - This is more robust and faster than the previous implementation.
    - It is now more secure against rainbow table attacks.
    - The previous implementation can still be used via `Patches` -> `End-to-end encryption algorithm` -> `Force V1`.
    - Note that `V1: Legacy` can decrypt V2, but produces V1 output.
- `Fetch everything from the remote` now works correctly.
    - It no longer creates local database entries before synchronisation.
- Extra log messages during QR code decoding have been removed.

### Changed

- The following settings have been moved to the `Patches` pane:
    - `Remote Database Tweak`
    - `Incubate Chunks in Document`
    - `Data Compression`

### Behavioural and API Changes

- `DirectFileManipulatorV2` now requires new settings (as you may already know, E2EEAlgorithm).
- The database version has been increased to `12` from `10`.
    - If an older version is detected, we will be notified and synchronisation will be paused until the update is acknowledged. (It has been a long time since this behaviour was last encountered; we always err on the side of caution, even if it is less convenient.)

### Refactored

- `couchdb_utils.ts` has been separated into several explicitly named files.
- Some missing functions in `bgWorker.mock.ts` have been added.

## 0.24.0

I know that you have been waiting for a long time. It is finally released!
@@ -14,6 +56,19 @@ Thank you, and I hope your troubles will be resolved!

---

## 0.24.31

10th July, 2025

### Fixed

- The description of `Enable Developers' Debug Tools.` has been refined.
    - The performance impact is now stated more clearly.
- Automatic conflict checking and resolution has been improved.
    - It now works in parallel for each file instead of sequentially, which makes the first synchronisation with local file information significantly faster.
- The conflict-resolution dialogue is no longer shown for multiple files at once.
    - It will be shown for each file, one by one.

## 0.24.30

9th July, 2025