Compare commits

...

21 Commits

Author SHA1 Message Date
vorotamoroz
4048186bb5 bump 2025-09-04 11:46:50 +01:00
vorotamoroz
2b94fd9139 ## Improved
- Improved connectivity for P2P connections
- The connection to the signalling server is now closed while the app is in the background or when explicitly disconnected.
  - These features use a patch that has not been incorporated upstream.
2025-09-04 11:44:49 +01:00
vorotamoroz
ec72ece86d bump 2025-09-03 10:12:51 +01:00
vorotamoroz
e394a994c5 Fix typo 2025-09-03 10:11:46 +01:00
vorotamoroz
aa23b6a39a ### Improved
- Now we can configure `forcePathStyle` for bucket synchronisation (#707).
2025-09-03 10:08:49 +01:00
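For context, `forcePathStyle` switches S3-compatible clients from virtual-hosted-style addressing (`https://bucket.endpoint/key`) to path-style addressing (`https://endpoint/bucket/key`), which many self-hosted stores such as MinIO require. A minimal illustration with the AWS SDK v3 S3 client; this is not the plugin's own client, and all names here are illustrative:

import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// With forcePathStyle, requests go to https://minio.example.com/my-bucket/...
// instead of https://my-bucket.minio.example.com/...
const client = new S3Client({
    region: "us-east-1",
    endpoint: "https://minio.example.com", // illustrative self-hosted endpoint
    credentials: { accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_KEY" },
    forcePathStyle: true,
});
const objects = await client.send(new ListObjectsV2Command({ Bucket: "my-bucket" }));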
vorotamoroz
58e328a591 bump 2025-09-02 10:27:23 +01:00
vorotamoroz
1730c39d70 ### Fixed
- Opening of IndexedDB is now handled reliably.
- The migration check for corrupted-file detection has been fixed.
    - Conflicted files are now reported as non-recoverable, with a note to that effect.
    - No longer errors on files that are not found.
2025-09-02 10:24:13 +01:00
vorotamoroz
b42152db5e bump 2025-09-01 12:28:01 +09:00
vorotamoroz
171cfc0a38 ### Fixed
- The conflict-resolving dialogue now properly displays the changeset name instead of A or B (#691).
2025-09-01 12:23:38 +09:00
vorotamoroz
d2787bdb6a Update older dependencies 2025-09-01 12:21:12 +09:00
vorotamoroz
58845276e7 bump 2025-08-29 11:48:33 +01:00
vorotamoroz
a2cc093a9e ### Fixed
- Fixed an issue with automatic synchronisation starting (#702).
2025-08-29 11:46:11 +01:00
vorotamoroz
fec203a751 bump 2025-08-28 10:27:31 +01:00
vorotamoroz
1a06837769 ### Fixed
- Automatic translation detection on the first launch now works correctly (#630).
- Errors are no longer shown during synchronisation while offline (unless explicitly requested) (#699).
- Checks that were previously skipped during automatic synchronisation are now performed correctly.
2025-08-28 10:26:17 +01:00
vorotamoroz
18d1ce8ec8 bump 2025-08-26 11:17:09 +01:00
vorotamoroz
2221d8c4e8 Update dependency 2025-08-26 11:14:30 +01:00
vorotamoroz
08548f8630 ### New experimental feature
- Garbage Collection (Beta2) can now be performed without rebuilding the entire database or fetching it again.

### Fixed

- Resetting the bucket now properly clears all uploaded files.

### Refactored

- Some files have been moved to better reflect their purpose and improve maintainability.
- The extensive LiveSyncLocalDB has been split into separate files for each role.
2025-08-26 11:09:33 +01:00
vorotamoroz
5d24c3b984 bump 2025-08-20 10:36:47 +01:00
vorotamoroz
de8fd43c8b ### Fixed
- CORS Checking messages now use replacements.
- Configuring CORS setting via the UI now respects the existing rules.
- Start-up checking now works correctly again: the migration check runs serially, and this also fixes starting LiveSync or start-up sync. (#696)
- The status line in the editor now supports 'Bases'.
2025-08-20 10:36:22 +01:00
vorotamoroz
ed88761eaa bump 2025-08-18 06:32:19 +01:00
vorotamoroz
4dcb37f5a2 ## 0.25.8
### New feature
- Insecure chunk detection has been implemented.

### Fixed
- Unexpected `Failed to obtain PBKDF2 salt` or similar errors during bucket-synchronisation no longer occur.
- Unexpected long delays for chunk-missing documents when using bucket-synchronisation have been resolved.
- Fetched remote chunks are now properly stored in the local database if `Fetch chunks on demand` is enabled.
- The 'fetch' dialogue's message has been refined.
- Corrupted documents are no longer written to storage during the boot sequence.

### Refactored
- Type errors have been corrected.
2025-08-18 06:26:50 +01:00
26 changed files with 1411 additions and 839 deletions

View File

@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.25.7",
"version": "0.25.16",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

package-lock.json (generated, 1238 changes)
File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
{
"name": "obsidian-livesync",
"version": "0.25.7",
"version": "0.25.16",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
@@ -92,10 +92,10 @@
"fflate": "^0.8.2",
"idb": "^8.0.3",
"minimatch": "^10.0.1",
"octagonal-wheels": "^0.1.37",
"octagonal-wheels": "^0.1.38",
"qrcode-generator": "^1.4.4",
"svelte-check": "^4.1.7",
"trystero": "^0.21.5",
"trystero": "github:vrtmrz/trystero#9e892a93ec14eeb57ce806d272fbb7c3935256d8",
"xxhash-wasm-102": "npm:xxhash-wasm@^1.0.2"
}
}

View File

@@ -8,8 +8,8 @@ export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatab
}
const storeKey = dbKey;
const dbPromise = openDB(dbKey, 1, {
upgrade(db) {
db.createObjectStore(storeKey);
upgrade(db, _oldVersion, _newVersion, _transaction, _event) {
return db.createObjectStore(storeKey);
},
});
const db = await dbPromise;
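For reference, the `idb` package runs `upgrade` inside the database's `versionchange` transaction and passes the old and new version numbers along with the upgrade transaction itself. A minimal sketch of versioned store creation under that signature (database and store names are illustrative):

import { openDB } from "idb";

const db = await openDB("example-db", 2, {
    upgrade(db, oldVersion, _newVersion, _transaction, _event) {
        // Runs only when the stored version is older than the requested one.
        if (oldVersion < 1) db.createObjectStore("entries");
        if (oldVersion < 2) db.createObjectStore("chunks");
    },
});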

View File

@@ -86,7 +86,7 @@ export class HiddenFileSync extends LiveSyncCommands implements IObsidianModule
return this.plugin.kvDB;
}
getConflictedDoc(path: FilePathWithPrefix, rev: string) {
return this.plugin.localDatabase.getConflictedDoc(path, rev);
return this.plugin.managers.conflictManager.getConflictedDoc(path, rev);
}
onunload() {
this.periodicInternalFileScanProcessor?.disable();
@@ -699,7 +699,7 @@ Offline Changed files: ${processFiles.length}`;
revFrom._revs_info
?.filter((e) => e.status == "available" && Number(e.rev.split("-")[0]) < conflictedRevNo)
.first()?.rev ?? "";
const result = await this.plugin.localDatabase.mergeObject(
const result = await this.plugin.managers.conflictManager.mergeObject(
doc.path,
commonBase,
doc._rev,

View File

@@ -1,9 +1,27 @@
import { sizeToHumanReadable } from "octagonal-wheels/number";
import { LOG_LEVEL_NOTICE, type MetaEntry } from "../../lib/src/common/types";
import {
EntryTypes,
LOG_LEVEL_INFO,
LOG_LEVEL_NOTICE,
LOG_LEVEL_VERBOSE,
type DocumentID,
type EntryDoc,
type EntryLeaf,
type MetaEntry,
} from "../../lib/src/common/types";
import { getNoFromRev } from "../../lib/src/pouchdb/LiveSyncLocalDB";
import type { IObsidianModule } from "../../modules/AbstractObsidianModule";
import { LiveSyncCommands } from "../LiveSyncCommands";
import { serialized } from "octagonal-wheels/concurrency/lock_v2";
import { arrayToChunkedArray } from "octagonal-wheels/collection";
const DB_KEY_SEQ = "gc-seq";
const DB_KEY_CHUNK_SET = "chunk-set";
const DB_KEY_DOC_USAGE_MAP = "doc-usage-map";
type ChunkID = DocumentID;
type NoteDocumentID = DocumentID;
type Rev = string;
type ChunkUsageMap = Map<NoteDocumentID, Map<Rev, Set<ChunkID>>>;
export class LocalDatabaseMaintenance extends LiveSyncCommands implements IObsidianModule {
$everyOnload(): Promise<boolean> {
return Promise.resolve(true);
@@ -262,4 +280,213 @@ Note: **Make sure to synchronise all devices before deletion.**
this.clearHash();
}
}
async scanUnusedChunks() {
const kvDB = this.plugin.kvDB;
const chunkSet = (await kvDB.get<Set<DocumentID>>(DB_KEY_CHUNK_SET)) || new Set();
const chunkUsageMap = (await kvDB.get<ChunkUsageMap>(DB_KEY_DOC_USAGE_MAP)) || new Map();
const KEEP_MAX_REVS = 10;
const unusedSet = new Set<DocumentID>([...chunkSet]);
for (const [, revIdMap] of chunkUsageMap) {
const sortedRevId = [...revIdMap.entries()].sort((a, b) => getNoFromRev(b[0]) - getNoFromRev(a[0]));
if (sortedRevId.length > KEEP_MAX_REVS) {
// If we have more revisions than we want to keep, we need to delete the extras
}
const keepRevID = sortedRevId.slice(0, KEEP_MAX_REVS);
keepRevID.forEach((e) => e[1].forEach((ee) => unusedSet.delete(ee)));
}
return {
chunkSet,
chunkUsageMap,
unusedSet,
};
}
/**
* Track changes in the database and update the chunk usage map for garbage collection.
* Note that this can only be performed when `Fetch chunks on demand` is disabled.
*/
async trackChanges(fromStart: boolean = false, showNotice: boolean = false) {
if (!this.isAvailable()) return;
const logLevel = showNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const kvDB = this.plugin.kvDB;
const previousSeq = fromStart ? "" : await kvDB.get<string>(DB_KEY_SEQ);
const chunkSet = (await kvDB.get<Set<DocumentID>>(DB_KEY_CHUNK_SET)) || new Set();
const chunkUsageMap = (await kvDB.get<ChunkUsageMap>(DB_KEY_DOC_USAGE_MAP)) || new Map();
const db = this.localDatabase.localDatabase;
const verbose = (msg: string) => this._verbose(msg);
const processDoc = async (doc: EntryDoc, isDeleted: boolean) => {
if (!("children" in doc)) {
return;
}
const id = doc._id;
const rev = doc._rev!;
const deleted = doc._deleted || isDeleted;
const softDeleted = doc.deleted;
const children = (doc.children || []) as DocumentID[];
if (!chunkUsageMap.has(id)) {
chunkUsageMap.set(id, new Map<Rev, Set<ChunkID>>());
}
for (const chunkId of children) {
if (deleted) {
chunkUsageMap.get(id)!.delete(rev);
// chunkSet.add(chunkId as DocumentID);
} else {
if (softDeleted) {
//TODO: Soft delete
chunkUsageMap.get(id)!.set(rev, (chunkUsageMap.get(id)!.get(rev) || new Set()).add(chunkId));
} else {
chunkUsageMap.get(id)!.set(rev, (chunkUsageMap.get(id)!.get(rev) || new Set()).add(chunkId));
}
}
}
verbose(
`Tracking chunk: ${id}/${rev} (${doc?.path}), deleted: ${deleted ? "yes" : "no"} Soft-Deleted:${softDeleted ? "yes" : "no"}`
);
return await Promise.resolve();
};
// let saveQueue = 0;
const saveState = async (seq: string | number) => {
await kvDB.set(DB_KEY_SEQ, seq);
await kvDB.set(DB_KEY_CHUNK_SET, chunkSet);
await kvDB.set(DB_KEY_DOC_USAGE_MAP, chunkUsageMap);
};
const processDocRevisions = async (doc: EntryDoc) => {
try {
const oldRevisions = await db.get(doc._id, { revs: true, revs_info: true, conflicts: true });
const allRevs = oldRevisions._revs_info?.length || 0;
const info = (oldRevisions._revs_info || [])
.filter((e) => e.status == "available" && e.rev != doc._rev)
.filter((info) => !chunkUsageMap.get(doc._id)?.has(info.rev));
const infoLength = info.length;
this._log(`Found ${allRevs} old revisions for ${doc._id} . ${infoLength} items to check `);
if (info.length > 0) {
const oldDocs = await Promise.all(
info
.filter((revInfo) => revInfo.status == "available")
.map((revInfo) => db.get(doc._id, { rev: revInfo.rev }))
).then((docs) => docs.filter((doc) => doc));
for (const oldDoc of oldDocs) {
await processDoc(oldDoc as EntryDoc, false);
}
}
} catch (ex) {
if ((ex as any)?.status == 404) {
this._log(`No revisions found for ${doc._id}`, LOG_LEVEL_VERBOSE);
} else {
this._log(`Error finding revisions for ${doc._id}`);
this._verbose(ex);
}
}
};
const processChange = async (doc: EntryDoc, isDeleted: boolean, seq: string | number) => {
if (doc.type === EntryTypes.CHUNK) {
if (isDeleted) return;
chunkSet.add(doc._id);
} else if ("children" in doc) {
await processDoc(doc, isDeleted);
await serialized("x-process-doc", async () => await processDocRevisions(doc));
}
};
// Track changes
let i = 0;
await db
.changes({
since: previousSeq || "",
live: false,
conflicts: true,
include_docs: true,
style: "all_docs",
return_docs: false,
})
.on("change", async (change) => {
// handle change
await processChange(change.doc!, change.deleted ?? false, change.seq);
if (i++ % 100 == 0) {
await saveState(change.seq);
}
})
.on("complete", async (info) => {
await saveState(info.last_seq);
});
// Track all changed docs and new-leafs;
const result = await this.scanUnusedChunks();
const message = `Total chunks: ${result.chunkSet.size}\nUnused chunks: ${result.unusedSet.size}`;
this._log(message, logLevel);
}
async performGC(showingNotice = false) {
if (!this.isAvailable()) return;
await this.trackChanges(false, showingNotice);
const title = "Are all devices synchronised?";
const confirmMessage = `This function deletes unused chunks from the device. If there are differences between devices, some chunks may be missing when resolving conflicts.
Be sure to synchronise before executing.
However, if you have deleted them, you may be able to recover them by performing Hatch -> Recreate missing chunks for all files.
Are you ready to delete unused chunks?`;
const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const BUTTON_OK = `Yes, delete chunks`;
const BUTTON_CANCEL = "Cancel";
const result = await this.plugin.confirm.askSelectStringDialogue(
confirmMessage,
[BUTTON_OK, BUTTON_CANCEL] as const,
{
title,
defaultAction: BUTTON_CANCEL,
}
);
if (result !== BUTTON_OK) {
this._log("User cancelled chunk deletion", logLevel);
return;
}
const { unusedSet, chunkSet } = await this.scanUnusedChunks();
const deleteChunks = await this.database.allDocs({
keys: [...unusedSet],
include_docs: true,
});
for (const chunk of deleteChunks.rows) {
if ((chunk as any)?.value?.deleted) {
chunkSet.delete(chunk.key as DocumentID);
}
}
const deleteDocs = deleteChunks.rows
.filter((e) => "doc" in e)
.map((e) => ({
...(e as any).doc!,
_deleted: true,
}));
this._log(`Deleting chunks: ${deleteDocs.length}`, logLevel);
const deleteChunkBatch = arrayToChunkedArray(deleteDocs, 100);
let successCount = 0;
let errored = 0;
for (const batch of deleteChunkBatch) {
const results = await this.database.bulkDocs(batch as EntryLeaf[]);
for (const result of results) {
if ("ok" in result) {
chunkSet.delete(result.id as DocumentID);
successCount++;
} else {
this._log(`Failed to delete doc: ${result.id}`, LOG_LEVEL_VERBOSE);
errored++;
}
}
this._log(`Deleting chunks: ${successCount} `, logLevel, "gc-preforming");
}
const message = `Garbage Collection completed.
Success: ${successCount}, Errored: ${errored}`;
this._log(message, logLevel);
const kvDB = this.plugin.kvDB;
await kvDB.set(DB_KEY_CHUNK_SET, chunkSet);
}
}
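To summarise the sweep above: `unusedSet` starts as the set of every chunk seen on the changes feed; for each document, the chunks referenced by its newest `KEEP_MAX_REVS` revisions are removed from that set, and whatever remains is treated as garbage and deleted in batches of 100. A simplified, self-contained sketch of the same retention rule (toy data, no PouchDB):

type Rev = string;
const getNoFromRev = (rev: Rev) => Number(rev.split("-")[0]); // "3-abc" -> 3

function unusedChunks(allChunks: Set<string>, usage: Map<string, Map<Rev, Set<string>>>, keepRevs = 10) {
    const unused = new Set(allChunks);
    for (const [, revMap] of usage) {
        const newestFirst = [...revMap.entries()].sort((a, b) => getNoFromRev(b[0]) - getNoFromRev(a[0]));
        // Chunks referenced by the newest `keepRevs` revisions stay alive.
        for (const [, chunkIds] of newestFirst.slice(0, keepRevs)) {
            chunkIds.forEach((id) => unused.delete(id));
        }
    }
    return unused;
}

// "c2" is referenced by the newest revision of doc1, so only "c1" and "c3" are garbage.
const usage = new Map([["doc1", new Map([["1-a", new Set(["c1"])], ["2-b", new Set(["c2"])]])]]);
console.log(unusedChunks(new Set(["c1", "c2", "c3"]), usage, 1)); // Set { "c1", "c3" }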

View File

@@ -174,6 +174,13 @@ export class P2PReplicator
if (this.settings.P2P_Enabled && this.settings.P2P_AutoStart) {
setTimeout(() => void this.open(), 100);
}
const rep = this._replicatorInstance;
rep?.allowReconnection();
return Promise.resolve(true);
}
$everyBeforeSuspendProcess(): Promise<boolean> {
const rep = this._replicatorInstance;
rep?.disconnectFromServer();
return Promise.resolve(true);
}
}

Submodule src/lib updated: 1f51336162...c00f62f060

View File

@@ -84,6 +84,7 @@ import { ModuleLiveSyncMain } from "./modules/main/ModuleLiveSyncMain.ts";
import { ModuleExtraSyncObsidian } from "./modules/extraFeaturesObsidian/ModuleExtraSyncObsidian.ts";
import { LocalDatabaseMaintenance } from "./features/LocalDatabaseMainte/CmdLocalDatabaseMainte.ts";
import { P2PReplicator } from "./features/P2PSync/CmdP2PReplicator.ts";
import type { LiveSyncManagers } from "./lib/src/managers/LiveSyncManagers.ts";
function throwShouldBeOverridden(): never {
throw new Error("This function should be overridden by the module.");
@@ -211,6 +212,7 @@ export default class ObsidianLiveSyncPlugin
settings!: ObsidianLiveSyncSettings;
localDatabase!: LiveSyncLocalDB;
managers!: LiveSyncManagers;
simpleStore!: SimpleStore<CheckPointInfo>;
replicator!: LiveSyncAbstractReplicator;
confirm!: Confirm;
@@ -580,6 +582,11 @@ export default class ObsidianLiveSyncPlugin
$everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
return InterceptiveEvery;
}
$$canReplicate(showMessage: boolean = false): Promise<boolean> {
throwShouldBeOverridden();
}
$$replicate(showMessage: boolean = false): Promise<boolean | void> {
throwShouldBeOverridden();
}

View File

@@ -256,19 +256,20 @@ export class ModuleFileHandler extends AbstractModule implements ICoreModule {
this._log(`File ${path} is not exist on the database`, LOG_LEVEL_VERBOSE);
return false;
}
// If we want to process size mismatched files -- in case of having files created by some integrations, enable the toggle.
if (!this.settings.processSizeMismatchedFiles) {
// Check the file is not corrupted
// (Zero is a special case, may be created by some APIs and it might be acceptable).
if (docRead.size != 0 && docRead.size !== readAsBlob(docRead).size) {
this._log(`File ${path} seems to be corrupted! Writing prevented.`, LOG_LEVEL_NOTICE);
return false;
}
}
const docData = readContent(docRead);
if (existOnStorage && !force) {
// If we want to process size mismatched files -- in case of having files created by some integrations, enable the toggle.
if (!this.settings.processSizeMismatchedFiles) {
// Check the file is not corrupted
// (Zero is a special case, may be created by some APIs and it might be acceptable).
if (docRead.size != 0 && docRead.size !== readAsBlob(docRead).size) {
this._log(`File ${path} seems to be corrupted! Writing prevented.`, LOG_LEVEL_NOTICE);
return false;
}
}
// The file is exist on the storage. Let's check the difference between the file and the entry.
// But, if force is true, then it should be updated.
// Ok, we have to compare.

View File

@@ -3,6 +3,7 @@ import { LiveSyncLocalDB } from "../../lib/src/pouchdb/LiveSyncLocalDB.ts";
import { initializeStores } from "../../common/stores.ts";
import { AbstractModule } from "../AbstractModule.ts";
import type { ICoreModule } from "../ModuleTypes.ts";
import { LiveSyncManagers } from "../../lib/src/managers/LiveSyncManagers.ts";
export class ModuleLocalDatabaseObsidian extends AbstractModule implements ICoreModule {
$everyOnloadStart(): Promise<boolean> {
@@ -14,7 +15,21 @@ export class ModuleLocalDatabaseObsidian extends AbstractModule implements ICore
}
const vaultName = this.core.$$getVaultName();
this._log($msg("moduleLocalDatabase.logWaitingForReady"));
const getDB = () => this.core.localDatabase.localDatabase;
const getSettings = () => this.core.settings;
this.core.managers = new LiveSyncManagers({
get database() {
return getDB();
},
getActiveReplicator: () => this.core.replicator,
id2path: this.core.$$id2path.bind(this.core),
path2id: this.core.$$path2id.bind(this.core),
get settings() {
return getSettings();
},
});
this.core.localDatabase = new LiveSyncLocalDB(vaultName, this.core);
initializeStores(vaultName);
return await this.localDatabase.initializeDatabase();
}

View File

@@ -17,6 +17,7 @@ import {
type EntryLeaf,
type LoadedEntry,
type MetaEntry,
type RemoteType,
} from "../../lib/src/common/types";
import { QueueProcessor } from "octagonal-wheels/concurrency/processor";
import {
@@ -38,7 +39,8 @@ const KEY_REPLICATION_ON_EVENT = "replicationOnEvent";
const REPLICATION_ON_EVENT_FORECASTED_TIME = 5000;
export class ModuleReplicator extends AbstractModule implements ICoreModule {
_replicatorType?: string;
_replicatorType?: RemoteType;
$everyOnloadAfterLoadSettings(): Promise<boolean> {
eventHub.onEvent(EVENT_FILE_SAVED, () => {
if (this.settings.syncOnSave && !this.core.$$isSuspended()) {
@@ -91,6 +93,10 @@ export class ModuleReplicator extends AbstractModule implements ICoreModule {
async $everyBeforeReplicate(showMessage: boolean): Promise<boolean> {
// Checking salt
if (!this.core.managers.networkManager.isOnline) {
this._log("Network is offline", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
return false;
}
// Showing message is false: that because be shown here. (And it is a fatal error, no way to hide it).
if (!(await this.ensureReplicatorPBKDF2Salt(false))) {
Logger("Failed to initialise the encryption key, preventing replication.", LOG_LEVEL_NOTICE);
@@ -167,25 +173,42 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
}
});
}
async $$_replicate(showMessage: boolean = false): Promise<boolean | void> {
//--?
if (!this.core.$$isReady()) return;
async $$canReplicate(showMessage: boolean = false): Promise<boolean> {
if (!this.core.$$isReady()) {
Logger(`Not ready`);
return false;
}
if (isLockAcquired("cleanup")) {
Logger($msg("Replicator.Message.Cleaned"), LOG_LEVEL_NOTICE);
return;
return false;
}
if (this.settings.versionUpFlash != "") {
Logger($msg("Replicator.Message.VersionUpFlash"), LOG_LEVEL_NOTICE);
return;
return false;
}
if (!(await this.core.$everyCommitPendingFileEvent())) {
Logger($msg("Replicator.Message.Pending"), LOG_LEVEL_NOTICE);
return false;
}
if (!this.core.managers.networkManager.isOnline) {
this._log("Network is offline", showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO);
return false;
}
if (!(await this.core.$everyBeforeReplicate(showMessage))) {
Logger($msg("Replicator.Message.SomeModuleFailed"), LOG_LEVEL_NOTICE);
return false;
}
return true;
}
async $$_replicate(showMessage: boolean = false): Promise<boolean | void> {
const checkBeforeReplicate = await this.$$canReplicate(showMessage);
if (!checkBeforeReplicate) return false;
//<-- Here could be an module.
const ret = await this.core.replicator.openReplication(this.settings, false, showMessage, false);

View File

@@ -15,15 +15,22 @@ export class ModuleReplicatorCouchDB extends AbstractModule implements ICoreModu
return Promise.resolve(new LiveSyncCouchDBReplicator(this.core));
}
$everyAfterResumeProcess(): Promise<boolean> {
if (!this.core.$$isSuspended) return Promise.resolve(true);
if (!this.core.$$isReady) return Promise.resolve(true);
if (this.settings.remoteType != REMOTE_MINIO && this.settings.remoteType != REMOTE_P2P) {
// If LiveSync enabled, open replication
if (this.settings.liveSync) {
fireAndForget(() => this.core.replicator.openReplication(this.settings, true, false, false));
}
// If sync on start enabled, open replication
if (!this.settings.liveSync && this.settings.syncOnStart) {
// Possibly ok as if only share the result
fireAndForget(() => this.core.replicator.openReplication(this.settings, false, false, false));
const LiveSyncEnabled = this.settings.liveSync;
const continuous = LiveSyncEnabled;
const eventualOnStart = !LiveSyncEnabled && this.settings.syncOnStart;
// If enabled LiveSync or on start, open replication
if (LiveSyncEnabled || eventualOnStart) {
// And note that we do not open the conflict detection dialogue directly during this process.
// This should be raised explicitly if needed.
fireAndForget(async () => {
const canReplicate = await this.core.$$canReplicate(false);
if (!canReplicate) return;
void this.core.replicator.openReplication(this.settings, continuous, false, false);
});
}
}

View File

@@ -6,6 +6,10 @@ import { $msg } from "src/lib/src/common/i18n.ts";
export class ModuleCheckRemoteSize extends AbstractModule implements ICoreModule {
async $allScanStat(): Promise<boolean> {
if (this.core.managers.networkManager.isOnline === false) {
this._log("Network is offline, skipping remote size check.", LOG_LEVEL_INFO);
return true;
}
this._log($msg("moduleCheckRemoteSize.logCheckingStorageSizes"), LOG_LEVEL_VERBOSE);
if (this.settings.notifyThresholdOfRemoteStorageSize < 0) {
const message = $msg("moduleCheckRemoteSize.msgSetDBCapacity");
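Several hunks in this compare guard on `managers.networkManager.isOnline`; the manager itself comes from the updated `src/lib` submodule and its code is not part of this diff. A hypothetical minimal shape, assuming it wraps `navigator.onLine` and the browser's online/offline events (names and API here are assumptions, not the submodule's actual implementation):

class NetworkManager {
    private _isOnline = typeof navigator !== "undefined" ? navigator.onLine : true;
    private listeners = new Set<(online: boolean) => void>();
    constructor() {
        if (typeof window !== "undefined") {
            window.addEventListener("online", () => this.update(true));
            window.addEventListener("offline", () => this.update(false));
        }
    }
    private update(online: boolean) {
        this._isOnline = online;
        this.listeners.forEach((cb) => cb(online));
    }
    get isOnline() {
        return this._isOnline;
    }
    onStatusChange(cb: (online: boolean) => void) {
        this.listeners.add(cb);
        return () => this.listeners.delete(cb); // unsubscribe
    }
}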

View File

@@ -11,7 +11,7 @@ import { type UXFileInfo } from "../../../lib/src/common/types.ts";
function getFileLockKey(file: TFile | TFolder | string | UXFileInfo) {
return `fl:${typeof file == "string" ? file : file.path}`;
}
function toArrayBuffer(arr: Uint8Array | ArrayBuffer | DataView): ArrayBufferLike {
function toArrayBuffer(arr: Uint8Array<ArrayBuffer> | ArrayBuffer | DataView<ArrayBuffer>): ArrayBuffer {
if (arr instanceof Uint8Array) {
return arr.buffer;
}
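Background for this signature change: since TypeScript 5.7, typed arrays are generic over their backing buffer, so `Uint8Array<ArrayBuffer>` guarantees that `.buffer` is a plain `ArrayBuffer` rather than the wider `ArrayBufferLike` (which also admits `SharedArrayBuffer`). A small sketch of the distinction:

// Requires TypeScript 5.7 or later.
const plain: Uint8Array<ArrayBuffer> = new Uint8Array(8);
const buf: ArrayBuffer = plain.buffer; // OK: the buffer type is carried by the generic

declare const shared: Uint8Array<SharedArrayBuffer>;
// const bad: ArrayBuffer = shared.buffer; // Error: SharedArrayBuffer is not ArrayBuffer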
@@ -77,7 +77,11 @@ export class SerializedFileAccess {
return await processReadFile(file, () => this.app.vault.adapter.readBinary(path));
}
async adapterWrite(file: TFile | string, data: string | ArrayBuffer | Uint8Array, options?: DataWriteOptions) {
async adapterWrite(
file: TFile | string,
data: string | ArrayBuffer | Uint8Array<ArrayBuffer>,
options?: DataWriteOptions
) {
const path = file instanceof TFile ? file.path : file;
if (typeof data === "string") {
return await processWriteFile(file, () => this.app.vault.adapter.write(path, data, options));
@@ -106,7 +110,7 @@ export class SerializedFileAccess {
return await processReadFile(file, () => this.app.vault.readBinary(file));
}
async vaultModify(file: TFile, data: string | ArrayBuffer | Uint8Array, options?: DataWriteOptions) {
async vaultModify(file: TFile, data: string | ArrayBuffer | Uint8Array<ArrayBuffer>, options?: DataWriteOptions) {
if (typeof data === "string") {
return await processWriteFile(file, async () => {
const oldData = await this.app.vault.read(file);
@@ -131,7 +135,7 @@ export class SerializedFileAccess {
}
async vaultCreate(
path: string,
data: string | ArrayBuffer | Uint8Array,
data: string | ArrayBuffer | Uint8Array<ArrayBuffer>,
options?: DataWriteOptions
): Promise<TFile> {
if (typeof data === "string") {

View File

@@ -207,6 +207,9 @@ export class StorageEventManagerObsidian extends StorageEventManager {
}
}
if (file instanceof TFolder) continue;
// TODO: Confirm why only the TFolder skipping
// Possibly following line is needed...
// if (file?.isFolder) continue;
if (!(await this.core.$$isTargetFile(file.path))) continue;
// Stop cache using to prevent the corruption;

View File

@@ -15,6 +15,16 @@ import { performDoctorConsultation, RebuildOptions } from "../../lib/src/common/
import { getPath, isValidPath } from "../../common/utils.ts";
import { isMetaEntry } from "../../lib/src/common/types.ts";
import { isDeletedEntry, isDocContentSame, isLoadedEntry, readAsBlob } from "../../lib/src/common/utils.ts";
import { countCompromisedChunks } from "../../lib/src/pouchdb/negotiation.ts";
type ErrorInfo = {
path: string;
recordedSize: number;
actualSize: number;
storageSize: number;
contentMatched: boolean;
isConflicted?: boolean;
};
export class ModuleMigration extends AbstractModule implements ICoreModule {
async migrateUsingDoctor(skipRebuild: boolean = false, activateReason = "updated", forceRescan = false) {
@@ -36,11 +46,14 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
if (shouldRebuild) {
await this.core.rebuilder.scheduleRebuild();
await this.core.$$performRestart();
return false;
} else if (shouldRebuildLocal) {
await this.core.rebuilder.scheduleFetch();
await this.core.$$performRestart();
return false;
}
}
return true;
}
async migrateDisableBulkSend() {
@@ -100,7 +113,7 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
return false;
}
async checkIncompleteDocs(force: boolean = false): Promise<boolean> {
async hasIncompleteDocs(force: boolean = false): Promise<boolean> {
const incompleteDocsChecked = (await this.core.kvDB.get<boolean>("checkIncompleteDocs")) || false;
if (incompleteDocsChecked && !force) {
this._log("Incomplete docs check already done, skipping.", LOG_LEVEL_VERBOSE);
@@ -108,7 +121,8 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
}
this._log("Checking for incomplete documents...", LOG_LEVEL_NOTICE, "check-incomplete");
const errorFiles = [];
const errorFiles = [] as ErrorInfo[];
for await (const metaDoc of this.localDatabase.findAllNormalDocs({ conflicts: true })) {
const path = getPath(metaDoc);
@@ -129,17 +143,38 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
if (isDeletedEntry(doc)) {
continue;
}
const storageFileContent = await this.core.storageAccess.readHiddenFileBinary(path);
const isConflicted = metaDoc?._conflicts && metaDoc._conflicts.length > 0;
let storageFileContent;
try {
storageFileContent = await this.core.storageAccess.readHiddenFileBinary(path);
} catch (e) {
Logger(`Failed to read file ${path}: Possibly unprocessed or missing`);
Logger(e, LOG_LEVEL_VERBOSE);
continue;
}
// const storageFileBlob = createBlob(storageFileContent);
const sizeOnStorage = storageFileContent.byteLength;
const recordedSize = doc.size;
const docBlob = readAsBlob(doc);
const actualSize = docBlob.size;
if (recordedSize !== actualSize || sizeOnStorage !== actualSize || sizeOnStorage !== recordedSize) {
if (
recordedSize !== actualSize ||
sizeOnStorage !== actualSize ||
sizeOnStorage !== recordedSize ||
isConflicted
) {
const contentMatched = await isDocContentSame(doc.data, storageFileContent);
errorFiles.push({ path, recordedSize, actualSize, storageSize: sizeOnStorage, contentMatched });
errorFiles.push({
path,
recordedSize,
actualSize,
storageSize: sizeOnStorage,
contentMatched,
isConflicted,
});
Logger(
`Size mismatch for ${path}: ${recordedSize} (DB Recorded) , ${actualSize} (DB Stored) , ${sizeOnStorage} (Storage Stored), ${contentMatched ? "Content Matched" : "Content Mismatched"}`
`Size mismatch for ${path}: ${recordedSize} (DB Recorded) , ${actualSize} (DB Stored) , ${sizeOnStorage} (Storage Stored), ${contentMatched ? "Content Matched" : "Content Mismatched"} ${isConflicted ? "Conflicted" : "Not Conflicted"}`
);
}
}
@@ -163,24 +198,23 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
// Probably restored by the user by resolving A or B on other device, We should overwrite the storage
// Also do not fix it automatically. It should be overwritten by replication.
const recoverable = errorFiles.filter((e) => {
return e.recordedSize === e.storageSize;
return e.recordedSize === e.storageSize && !e.isConflicted;
});
const unrecoverable = errorFiles.filter((e) => {
return e.recordedSize !== e.storageSize;
return e.recordedSize !== e.storageSize || e.isConflicted;
});
const fileInfo = (e: (typeof errorFiles)[0]) => {
return `${e.path} (M: ${e.recordedSize}, A: ${e.actualSize}, S: ${e.storageSize}) ${e.isConflicted ? "(Conflicted)" : ""}`;
};
const messageUnrecoverable =
unrecoverable.length > 0
? $msg("moduleMigration.fix0256.messageUnrecoverable", {
filesNotRecoverable: unrecoverable
.map((e) => `- ${e.path} (M: ${e.recordedSize}, A: ${e.actualSize}, S: ${e.storageSize})`)
.join("\n"),
filesNotRecoverable: unrecoverable.map((e) => `- ${fileInfo(e)}`).join("\n"),
})
: "";
const message = $msg("moduleMigration.fix0256.message", {
files: recoverable
.map((e) => `- ${e.path} (M: ${e.recordedSize}, A: ${e.actualSize}, S: ${e.storageSize})`)
.join("\n"),
files: recoverable.map((e) => `- ${fileInfo(e)}`).join("\n"),
messageUnrecoverable,
});
const CHECK_IT_LATER = $msg("moduleMigration.fix0256.buttons.checkItLater");
@@ -215,15 +249,79 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
return Promise.resolve(true);
}
async hasCompromisedChunks(): Promise<boolean> {
Logger(`Checking for compromised chunks...`, LOG_LEVEL_VERBOSE);
if (!this.settings.encrypt) {
// If not encrypted, we do not need to check for compromised chunks.
return true;
}
// Check local database for compromised chunks
const localCompromised = await countCompromisedChunks(this.localDatabase.localDatabase);
const remote = this.core.$$getReplicator();
const remoteCompromised = this.core.managers.networkManager.isOnline
? await remote.countCompromisedChunks()
: 0;
if (localCompromised === false) {
Logger(`Failed to count compromised chunks in local database`, LOG_LEVEL_NOTICE);
return false;
}
if (remoteCompromised === false) {
Logger(`Failed to count compromised chunks in remote database`, LOG_LEVEL_NOTICE);
return false;
}
if (remoteCompromised === 0 && localCompromised === 0) {
return true;
}
Logger(
`Found compromised chunks : ${localCompromised} in local, ${remoteCompromised} in remote`,
LOG_LEVEL_NOTICE
);
const title = $msg("moduleMigration.insecureChunkExist.title");
const msg = $msg("moduleMigration.insecureChunkExist.message");
const REBUILD = $msg("moduleMigration.insecureChunkExist.buttons.rebuild");
const FETCH = $msg("moduleMigration.insecureChunkExist.buttons.fetch");
const DISMISS = $msg("moduleMigration.insecureChunkExist.buttons.later");
const buttons = [REBUILD, FETCH, DISMISS];
if (remoteCompromised != 0) {
buttons.splice(buttons.indexOf(FETCH), 1);
}
const result = await this.core.confirm.askSelectStringDialogue(msg, buttons, {
title,
defaultAction: DISMISS,
timeout: 0,
});
if (result === REBUILD) {
// Rebuild the database
await this.core.rebuilder.scheduleRebuild();
await this.core.$$performRestart();
return false;
} else if (result === FETCH) {
// Fetch the latest data from remote
await this.core.rebuilder.scheduleFetch();
await this.core.$$performRestart();
return false;
} else {
// User chose to dismiss the issue
this._log($msg("moduleMigration.insecureChunkExist.laterMessage"), LOG_LEVEL_NOTICE);
}
return true;
}
async $everyOnFirstInitialize(): Promise<boolean> {
if (!this.localDatabase.isReady) {
this._log($msg("moduleMigration.logLocalDatabaseNotReady"), LOG_LEVEL_NOTICE);
return false;
}
if (this.settings.isConfigured) {
// TODO: Probably we have to check for insecure chunks
await this.checkIncompleteDocs();
await this.migrateUsingDoctor(false);
if (!(await this.hasCompromisedChunks())) {
return false;
}
if (!(await this.hasIncompleteDocs())) {
return false;
}
if (!(await this.migrateUsingDoctor(false))) {
return false;
}
// await this.migrationCheck();
await this.migrateDisableBulkSend();
}
@@ -233,7 +331,9 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
this._log($msg("moduleMigration.logSetupCancelled"), LOG_LEVEL_NOTICE);
return false;
}
await this.migrateUsingDoctor(true);
if (!(await this.migrateUsingDoctor(true))) {
return false;
}
}
return true;
}
@@ -242,7 +342,7 @@ export class ModuleMigration extends AbstractModule implements ICoreModule {
await this.migrateUsingDoctor(false, reason, true);
});
eventHub.onEvent(EVENT_REQUEST_RUN_FIX_INCOMPLETE, async () => {
await this.checkIncompleteDocs(true);
await this.hasIncompleteDocs(true);
});
return Promise.resolve(true);
}

View File

@@ -106,6 +106,9 @@ export class ModuleObsidianAPI extends AbstractObsidianModule implements IObsidi
if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
if (uri.toLowerCase() != uri) return "Remote URI and database name could not contain capital letters.";
if (uri.indexOf(" ") !== -1) return "Remote URI and database name could not contain spaces.";
if (!this.core.managers.networkManager.isOnline) {
return "Network is offline";
}
// let authHeader = await this._authHeader.getAuthorizationHeader(auth);
const conf: PouchDB.HttpAdapter.HttpAdapterConfiguration = {

View File

@@ -25,8 +25,8 @@ export class ConflictResolveModal extends Modal {
title: string = "Conflicting changes";
pluginPickMode: boolean = false;
localName: string = "Use Base";
remoteName: string = "Use Conflicted";
localName: string = "Base";
remoteName: string = "Conflicted";
offEvent?: ReturnType<typeof eventHub.onEvent>;
constructor(app: App, filename: string, diff: diff_result, pluginPickMode?: boolean, remoteName?: string) {
@@ -36,8 +36,8 @@ export class ConflictResolveModal extends Modal {
this.pluginPickMode = pluginPickMode || false;
if (this.pluginPickMode) {
this.title = "Pick a version";
this.remoteName = `Use ${remoteName || "Remote"}`;
this.localName = "Use Local";
this.remoteName = `${remoteName || "Remote"}`;
this.localName = "Local";
}
// Send cancel signal for the previous merge dialogue
// if not there, simply be ignored.
@@ -93,12 +93,13 @@ export class ConflictResolveModal extends Modal {
const date2 =
new Date(this.result.right.mtime).toLocaleString() + (this.result.right.deleted ? " (Deleted)" : "");
div2.innerHTML = `
<span class='deleted'>A:${date1}</span><br /><span class='added'>B:${date2}</span><br>
<span class='deleted'><span class='conflict-dev-name'>${this.localName}</span>: ${date1}</span><br>
<span class='added'><span class='conflict-dev-name'>${this.remoteName}</span>: ${date2}</span><br>
`;
contentEl.createEl("button", { text: this.localName }, (e) =>
contentEl.createEl("button", { text: `Use ${this.localName}` }, (e) =>
e.addEventListener("click", () => this.sendResponse(this.result.right.rev))
).style.marginRight = "4px";
contentEl.createEl("button", { text: this.remoteName }, (e) =>
contentEl.createEl("button", { text: `Use ${this.remoteName}` }, (e) =>
e.addEventListener("click", () => this.sendResponse(this.result.left.rev))
).style.marginRight = "4px";
if (!this.pluginPickMode) {

View File

@@ -1,6 +1,6 @@
import { type IObsidianModule, AbstractObsidianModule } from "../AbstractObsidianModule.ts";
// import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser";
import { EVENT_REQUEST_RELOAD_SETTING_TAB, EVENT_SETTING_SAVED, eventHub } from "../../common/events";
import { EVENT_REQUEST_RELOAD_SETTING_TAB, EVENT_SETTING_SAVED, eventHub } from "../../common/events.ts";
import {
type BucketSyncSetting,
ChunkAlgorithmNames,
@@ -11,8 +11,8 @@ import {
SALT_OF_PASSPHRASE,
} from "../../lib/src/common/types";
import { LOG_LEVEL_NOTICE, LOG_LEVEL_URGENT } from "octagonal-wheels/common/logger";
import { $msg, setLang } from "../../lib/src/common/i18n";
import { isCloudantURI } from "../../lib/src/pouchdb/utils_couchdb";
import { $msg, setLang } from "../../lib/src/common/i18n.ts";
import { isCloudantURI } from "../../lib/src/pouchdb/utils_couchdb.ts";
import { getLanguage } from "obsidian";
import { SUPPORTED_I18N_LANGS, type I18N_LANGS } from "../../lib/src/common/rosetta.ts";
import { decryptString, encryptString } from "@/lib/src/encryption/stringEncryption.ts";
@@ -23,8 +23,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
const obsidianLanguage = getLanguage();
if (
SUPPORTED_I18N_LANGS.indexOf(obsidianLanguage) !== -1 && // Check if the language is supported
obsidianLanguage != this.settings.displayLanguage && // Check if the language is different from the current setting
this.settings.displayLanguage != ""
obsidianLanguage != this.settings.displayLanguage // Check if the language is different from the current setting
) {
// Check if the current setting is not empty (Means migrated or installed).
this.settings.displayLanguage = obsidianLanguage as I18N_LANGS;
@@ -141,6 +140,7 @@ export class ModuleObsidianSettings extends AbstractObsidianModule implements IO
jwtSub: settings.jwtSub,
useRequestAPI: settings.useRequestAPI,
bucketPrefix: settings.bucketPrefix,
forcePathStyle: settings.forcePathStyle,
};
settings.encryptedCouchDBConnection = await this.encryptConfigurationItem(
JSON.stringify(connectionSetting),

View File

@@ -859,26 +859,7 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
}
getMinioJournalSyncClient() {
const id = this.plugin.settings.accessKey;
const key = this.plugin.settings.secretKey;
const bucket = this.plugin.settings.bucket;
const prefix = this.plugin.settings.bucketPrefix;
const region = this.plugin.settings.region;
const endpoint = this.plugin.settings.endpoint;
const useCustomRequestHandler = this.plugin.settings.useCustomRequestHandler;
const customHeaders = this.plugin.settings.bucketCustomHeaders;
return new JournalSyncMinio(
id,
key,
endpoint,
bucket,
prefix,
this.plugin.simpleStore,
this.plugin,
useCustomRequestHandler,
region,
customHeaders
);
return new JournalSyncMinio(this.plugin.settings, this.plugin.simpleStore, this.plugin);
}
async resetRemoteBucket() {
const minioJournal = this.getMinioJournalSyncClient();

View File

@@ -158,36 +158,40 @@ export function paneMaintenance(
)
.addOnUpdate(this.onlyOnMinIO);
});
void addPanel(paneEl, "Garbage Collection (Beta)", (e) => e, this.onlyOnP2POrCouchDB).then((paneEl) => {
void addPanel(paneEl, "Garbage Collection (Beta2)", (e) => e, this.onlyOnP2POrCouchDB).then((paneEl) => {
new Setting(paneEl)
.setName("Remove all orphaned chunks")
.setDesc("Remove all orphaned chunks from the local database.")
.setName("Scan garbage")
.setDesc("Scan for garbage chunks in the database.")
.addButton((button) =>
button
.setButtonText("Remove")
.setWarning()
.setButtonText("Scan")
// .setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.removeUnusedChunks();
?.trackChanges(false, true);
})
);
new Setting(paneEl)
.setName("Resurrect deleted chunks")
.setDesc(
"If you have deleted chunks before fully synchronised and missed some chunks, you possibly can resurrect them."
)
.addButton((button) =>
button.setButtonText("Rescan").onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.trackChanges(true, true);
})
);
new Setting(paneEl)
.setName("Collect garbage")
.setDesc("Remove all unused chunks from the local database.")
.addButton((button) =>
button
.setButtonText("Try resurrect")
.setButtonText("Collect")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.resurrectChunks();
?.performGC(true);
})
);
new Setting(paneEl)
@@ -205,6 +209,41 @@ export function paneMaintenance(
})
);
});
void addPanel(paneEl, "Garbage Collection (Old and Experimental)", (e) => e, this.onlyOnP2POrCouchDB).then(
(paneEl) => {
new Setting(paneEl)
.setName("Remove all orphaned chunks")
.setDesc("Remove all orphaned chunks from the local database.")
.addButton((button) =>
button
.setButtonText("Remove")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.removeUnusedChunks();
})
);
new Setting(paneEl)
.setName("Resurrect deleted chunks")
.setDesc(
"If you have deleted chunks before fully synchronised and missed some chunks, you possibly can resurrect them."
)
.addButton((button) =>
button
.setButtonText("Try resurrect")
.setWarning()
.setDisabled(false)
.onClick(async () => {
await this.plugin
.getAddOn<LocalDatabaseMaintenance>(LocalDatabaseMaintenance.name)
?.resurrectChunks();
})
);
}
);
void addPanel(paneEl, "Rebuilding Operations (Local)").then((paneEl) => {
new Setting(paneEl)
.setName("Fetch from remote")

View File

@@ -199,12 +199,16 @@ export function paneRemoteConfig(
) {
addResult($msg("obsidianLiveSyncSettingTab.okCorsOrigins"));
} else {
const fixedValue = [
...new Set([
...ConfiguredOrigins.map((e) => e.trim()),
"app://obsidian.md",
"capacitor://localhost",
"http://localhost",
]),
].join(",");
addResult($msg("obsidianLiveSyncSettingTab.errCorsOrigins"));
addConfigFixButton(
$msg("obsidianLiveSyncSettingTab.msgSetCorsOrigins"),
"cors/origins",
"app://obsidian.md,capacitor://localhost,http://localhost"
);
addConfigFixButton($msg("obsidianLiveSyncSettingTab.msgSetCorsOrigins"), "cors/origins", fixedValue);
isSuccessful = false;
}
addResult($msg("obsidianLiveSyncSettingTab.msgConnectionCheck"), ["ob-btn-config-head"]);
@@ -316,6 +320,7 @@ The pane also can be launched by \`P2P Replicator\` command from the Command Pal
syncWarnMinio.addClass("op-warn-info");
new Setting(paneEl).autoWireText("endpoint", { holdValue: true });
new Setting(paneEl).autoWireToggle("forcePathStyle", { holdValue: true });
new Setting(paneEl).autoWireText("accessKey", { holdValue: true });
new Setting(paneEl).autoWireText("secretKey", {

View File

@@ -12,6 +12,11 @@
background-color: var(--text-muted);
}
.conflict-dev-name {
display: inline-block;
min-width: 5em;
}
.op-scrollable {
overflow-y: scroll;
/* min-height: 280px; */
@@ -385,6 +390,18 @@ span.ls-mark-cr::after {
font-size: 80%;
}
div.workspace-leaf-content[data-type=bases] .livesync-status {
top: calc(var(--bases-header-height) + var(--header-height));
padding: 5px;
padding-right:18px;
}
.is-mobile div.workspace-leaf-content[data-type=bases] .livesync-status {
top: calc(var(--bases-header-height) + var(--view-header-height));
padding: 6px;
padding-right:18px;
}
.livesync-status div {
opacity: 0.6;
-webkit-filter: grayscale(100%);
@@ -405,6 +422,7 @@ span.ls-mark-cr::after {
filter: unset;
}
.menu-setting-poweruser-disabled .sls-setting-poweruser {
display: none;
}

View File

@@ -1,139 +1,89 @@
## 0.25.7
## 0.25.16
15th August, 2025
4th September, 2025
**Since the release of 0.25.6, there have been two large problems. Please update immediately.**
### Improved
- Improved connectivity for P2P connections
- The connection to the signalling server is now closed while the app is in the background or when explicitly disconnected.
- These features use a patch that has not been incorporated upstream.
- This patch is available at [vrtmrz/trystero](https://github.com/vrtmrz/trystero).
- We may have corrupted some documents during the migration process. **Please check your documents with the wizard.**
- Due to a chunk ID assignment issue, some data has not been encrypted. **Please rebuild the database using Rebuild Everything** if you have enabled E2EE.
## 0.25.15
**_So, if you have enabled E2EE, please perform `Rebuild everything`. If not, please check your documents with the wizard._**
In the next version, insecure chunk detection will be implemented.
### Fixed
- Off-loaded chunking has been fixed to ensure proper functionality (#693).
- Chunk document ID assignment has been fixed.
- Replication prevention message during version up detection has been improved (#686).
- `Keep A` and `Keep B` in the conflict-resolving dialogue have been renamed to `Use Base` and `Use Conflicted` (#691).
3rd September, 2025
### Improved
- Documents whose metadata and content size do not match are now detected and reported, and are prevented from being applied to storage.
- This behaviour can be configured in `Patch` -> `Edge case addressing (Behaviour)` -> `Process files even if seems to be corrupted`
- Note: this toggle is for the direct-database-manipulation users.
- Now we can configure `forcePathStyle` for bucket synchronisation (#707).
### New Features
- `Scan for Broken files` has been implemented on `Hatch` -> `TroubleShooting`.
## 0.25.14
2nd September, 2025
### Fixed
- Opening of IndexedDB is now handled reliably.
- The migration check for corrupted-file detection has been fixed.
- Conflicted files are now reported as non-recoverable, with a note to that effect.
- No longer errors on files that are not found.
## 0.25.13
1st September, 2025
### Fixed
- The conflict-resolving dialogue now properly displays the changeset name instead of A or B (#691).
## 0.25.12
29th August, 2025
### Fixed
- Fixed an issue with automatic synchronisation starting (#702).
## 0.25.11
28th August, 2025
### Fixed
- Automatic translation detection on the first launch now works correctly (#630).
- Errors are no longer shown during synchronisation while offline (unless explicitly requested) (#699).
- Checks that were previously skipped during automatic synchronisation are now performed correctly.
## 0.25.10
26th August, 2025
### New experimental feature
- Garbage Collection (Beta2) can now be performed without rebuilding the entire database or fetching it again.
- Note that this feature is very experimental and should be used with caution.
- This feature requires disabling `Fetch chunks on demand`.
### Fixed
- Resetting the bucket now properly clears all uploaded files.
### Refactored
- Off-loaded processes have been refactored for better maintainability.
- Files prefixed `bg.worker` now run on worker threads.
- Files prefixed `bgWorker.` now control these worker threads. (I know what you want to say... I will rename them.)
- Removed unused code.
## ~~0.25.5~~ 0.25.6
(0.25.5 has been withdrawn due to a bug in the `Fetch chunks on demand` feature).
9th August, 2025
- Some files have been moved to better reflect their purpose and improve maintainability.
- The extensive LiveSyncLocalDB has been split into separate files for each role.
### Fixed
- Storage scanning no longer occurs when `Suspend file watching` is enabled (including boot-sequence).
- This change improves safety when troubleshooting or fetching the remote database.
- `Fetch chunks on demand` is now working again (if you installed 0.25.5, other versions are not affected).
### Improved
- Saving notes and files now consumes less memory.
- Data is no longer fully buffered in memory and written at once; instead, it is now written in each over-2MB increments.
- Chunk caching is now more efficient.
- Chunks are now managed solely by their count (still maintained as LRU). If memory usage becomes excessive, they will be automatically released by the runtime.
- Reverse-indexing is no longer used; its role is covered by scanning the caches, which also acts as WeakRef thinning.
- Both changes may help with #692, #680, and several related issues.
### Changed
- `Incubate Chunks in Document` (also known as `Eden`) is now fully sunset.
- Existing chunks can still be read, but new ones will no longer be created.
- The `Compute revisions for chunks` setting has also been removed.
- This feature is now always enabled and is no longer configurable (restoring the original behaviour).
- As mentioned, `Memory cache size (by total characters)` has been removed.
- The `Memory cache size (by total items)` setting is now the only option available (its value is scaled by 10x compared to the previous version).
- Unexpected `Failed to obtain PBKDF2 salt` or similar errors during bucket-synchronisation no longer occur.
- Unexpected long delays for chunk-missing documents when using bucket-synchronisation have been resolved.
- Fetched remote chunks are now properly stored in the local database if `Fetch chunks on demand` is enabled.
- The 'fetch' dialogue's message has been refined.
- Corrupted documents are no longer written to storage during the boot sequence.
### Refactored
- A significant refactoring of the core codebase is underway.
- This is part of our ongoing efforts to improve code maintainability, readability, and to unify interfaces.
- Previously, complex files posed a risk due to a low bus factor. Fortunately, as our devices have become faster and more capable, we can now write code that is clearer and more maintainable (and at little cost to performance).
- Hashing functions have been refactored into the `HashManager` class and its derived classes.
- Chunk splitting functions have been refactored into the `ContentSplitterCore` class and its derived classes.
- Change tracking functions have been refactored into the `ChangeManager` class.
- Chunk read/write functions have been refactored into the `ChunkManager` class.
- Fetching chunks on demand is now handled separately from the `ChunkManager` and chunk reading functions. Chunks are queued by the `ChunkManager` and then processed by the `ChunkFetcher`, simplifying the process and reducing unnecessary complexity.
- Then, local database access via `LiveSyncLocalDB` has been refactored to use the new classes.
- References to external sources from `commonlib` have been corrected.
- Type definitions in `types.ts` have been refined.
- Unit tests are being added incrementally.
- I am using `Deno` for testing, to simplify testing and coverage reporting.
- While this is not identical to the Obsidian environment, `jest` may also have limitations. It is certainly better than having no tests.
- In other words, recent manual scenario testing has highlighted some shortcomings.
- `pouchdb-test`, used for testing PouchDB with Deno, has been added, utilising the `memory` adapter.
Side note: Although class-oriented programming is sometimes considered an outdated style, I have come to re-evaluate it as valuable from the perspectives of maintainability and readability.
## 0.25.4
29th July, 2025
### Fixed
- The PBKDF2Salt is no longer corrupted when attempting replication while the device is offline. (#686)
- If this issue has already occurred, please use `Maintenance` -> `Rebuilding Operations (Remote Only)` -> `Overwrite Remote` and `Send` to resolve it.
- Please perform this operation on the device that is most reliable.
- I am so sorry for the inconvenience; there are no patching workarounds. The rebuilding operation is the only solution.
- This issue only affects the encryption of the remote database and does not impact the local databases on any devices.
- (Preventing synchronisation is by design and expected behaviour, even if it is sometimes inconvenient. This is also why we should avoid using workarounds; it is, admittedly, an excuse).
- In any case, we can unlock the remote from the warning dialogue on receiving devices. We are performing replication rather than simple synchronisation, at the expense of a little complexity (I would like to say thank you again for all your efforts to manage and maintain the settings! All your understanding saves our notes).
- This process may require considerable time and bandwidth (as usual), so please wait patiently and ensure a stable network connection.
### Side note
The PBKDF2Salt will be referred to as the `Security Seed`, and it is used to derive the encryption key for replication. Therefore, it should be stored on the server prior to synchronisation. We apologise for the lack of explanation in previous updates!
## 0.25.3
22nd July, 2025
### Fixed
- Now the `Doctor` at migration will save the configuration.
## 0.25.2 ~~0.25.1~~
(0.25.1 was missed due to a mistake in the versioning process).
19th July, 2025
### Refined and New Features
- Fetching the remote database on `RedFlag` now also retrieves remote configurations optionally.
- This is beneficial if we have already set up another device and wish to use the same configuration. We will see a much less frequent `Unmatched` dialogue.
- The setup wizard using Set-up URI and QR code has been improved.
- The message is now more user-friendly.
- The obsolete method (manual setting application) has been removed.
- The `Cancel` button has been added to the setup wizard.
- We can now fetch the remote configuration from the server if it exists, which is useful for adding new devices.
- Mostly the same as fetching the remote configuration via `RedFlag`.
- We can also use the `Doctor` to check and fix the imported (and fetched) configuration before applying it.
### Changes
- The Set-up URI is now encrypted with a new encryption algorithm (mostly the same as `V2`).
- The new Set-up URI is not compatible with version 0.24.x or earlier.
- Type errors have been corrected.
## 0.25.0

View File

@@ -9,6 +9,169 @@ I have now rewritten the E2EE code to be more robust and easier to understand. I
As a result, this is the first time in a while that forward compatibility has been broken. We have also taken the opportunity to change all metadata to use encryption rather than obfuscation. Furthermore, the `Dynamic Iteration Count` setting is now redundant and has been moved to the `Patches` pane in the settings. Thanks to Rabin-Karp, the eden setting is also no longer necessary and has been relocated accordingly. Therefore, v0.25.0 represents a legitimate and correct evolution.
---
## 0.25.9
20th August, 2025
### Fixed
- CORS Checking messages now use replacements.
- Configuring CORS setting via the UI now respects the existing rules.
- Start-up checking now works correctly again: the migration check runs serially, and this also fixes starting LiveSync or start-up sync. (#696)
- The status line in the editor now supports 'Bases'.
## 0.25.8
18th August, 2025
### New feature
- Insecure chunk detection has been implemented.
- A notification dialogue will be shown if any insecure chunks are detected; these may have been created by v0.25.6 due to its issue. If this dialogue appears, please ensure you rebuild the database after backing it up.
## 0.25.7
15th August, 2025
**Since the release of 0.25.6, there have been two large problems. Please update immediately.**
- We may have corrupted some documents during the migration process. **Please check your documents with the wizard.**
- Due to a chunk ID assignment issue, some data has not been encrypted. **Please rebuild the database using Rebuild Everything** if you have enabled E2EE.
**_So, if you have enabled E2EE, please perform `Rebuild everything`. If not, please check your documents with the wizard._**
In the next version, insecure chunk detection will be implemented.
### Fixed
- Off-loaded chunking has been fixed to ensure proper functionality (#693).
- Chunk document ID assignment has been fixed.
- Replication prevention message during version up detection has been improved (#686).
- `Keep A` and `Keep B` in the conflict-resolving dialogue have been renamed to `Use Base` and `Use Conflicted` (#691).
### Improved
- Documents whose metadata and content size do not match are now detected and reported, and are prevented from being applied to storage.
- This behaviour can be configured in `Patch` -> `Edge case addressing (Behaviour)` -> `Process files even if seems to be corrupted`
- Note: this toggle is for the direct-database-manipulation users.
### New Features
- `Scan for Broken files` has been implemented on `Hatch` -> `TroubleShooting`.
### Refactored
- Off-loaded processes have been refactored for better maintainability.
- Files prefixed `bg.worker` now run on worker threads.
- Files prefixed `bgWorker.` now control these worker threads. (I know what you want to say... I will rename them.)
- Removed unused code.
## ~~0.25.5~~ 0.25.6
(0.25.5 has been withdrawn due to a bug in the `Fetch chunks on demand` feature).
9th August, 2025
### Fixed
- Storage scanning no longer occurs when `Suspend file watching` is enabled (including boot-sequence).
- This change improves safety when troubleshooting or fetching the remote database.
- `Fetch chunks on demand` is now working again (if you installed 0.25.5, other versions are not affected).
### Improved
- Saving notes and files now consumes less memory.
- Data is no longer fully buffered in memory and written at once; instead, it is written in increments of just over 2 MB.
- Chunk caching is now more efficient.
- Chunks are now managed solely by their count (still maintained as LRU; see the sketch after this list). If memory usage becomes excessive, they will be automatically released by the runtime.
- Reverse-indexing is no longer used; its role is covered by scanning the caches, which also acts as WeakRef thinning.
- Both changes may help with #692, #680, and several related issues.
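A count-bounded LRU cache like the one described above can be sketched with a `Map`, whose insertion order makes evicting the least-recently-used entry straightforward. This is an illustration of the technique only, not the plugin's implementation:

class LRUCache<K, V> {
    private map = new Map<K, V>();
    constructor(private maxItems: number) {}
    get(key: K): V | undefined {
        if (!this.map.has(key)) return undefined;
        const value = this.map.get(key)!;
        this.map.delete(key); // Re-insert to mark as most recently used.
        this.map.set(key, value);
        return value;
    }
    set(key: K, value: V) {
        this.map.delete(key);
        this.map.set(key, value);
        if (this.map.size > this.maxItems) {
            // Evict the least recently used entry (first in insertion order).
            this.map.delete(this.map.keys().next().value!);
        }
    }
}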
### Changed
- `Incubate Chunks in Document` (also known as `Eden`) is now fully sunset.
- Existing chunks can still be read, but new ones will no longer be created.
- The `Compute revisions for chunks` setting has also been removed.
- This feature is now always enabled and is no longer configurable (restoring the original behaviour).
- As mentioned, `Memory cache size (by total characters)` has been removed.
- The `Memory cache size (by total items)` setting is now the only option available (its value is scaled by 10x compared to the previous version).
### Refactored
- A significant refactoring of the core codebase is underway.
- This is part of our ongoing efforts to improve code maintainability, readability, and to unify interfaces.
- Previously, complex files posed a risk due to a low bus factor. Fortunately, as our devices have become faster and more capable, we can now write code that is clearer and more maintainable (and at little cost to performance).
- Hashing functions have been refactored into the `HashManager` class and its derived classes.
- Chunk splitting functions have been refactored into the `ContentSplitterCore` class and its derived classes.
- Change tracking functions have been refactored into the `ChangeManager` class.
- Chunk read/write functions have been refactored into the `ChunkManager` class.
- Fetching chunks on demand is now handled separately from the `ChunkManager` and chunk reading functions. Chunks are queued by the `ChunkManager` and then processed by the `ChunkFetcher`, simplifying the process and reducing unnecessary complexity (a rough sketch of these interfaces follows this list).
- Then, local database access via `LiveSyncLocalDB` has been refactored to use the new classes.
- References to external sources from `commonlib` have been corrected.
- Type definitions in `types.ts` have been refined.
- Unit tests are being added incrementally.
- I am using `Deno` for testing, to simplify testing and coverage reporting.
- While this is not identical to the Obsidian environment, `jest` may also have limitations. It is certainly better than having no tests.
- In other words, recent manual scenario testing has highlighted some shortcomings.
- `pouchdb-test`, used for testing PouchDB with Deno, has been added, utilising the `memory` adapter.
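As a rough sketch of how the refactored managers fit together; these interfaces are assumed for illustration, and the actual classes in `src/lib` differ in detail:

// Hypothetical shapes only, not the plugin's real signatures.
interface HashManager {
    hash(data: Uint8Array): Promise<string>;
}
interface ContentSplitterCore {
    split(content: Blob): AsyncIterable<string>; // yields chunk payloads
}
interface ChunkManager {
    read(id: string): Promise<string | undefined>; // queues a fetch when the chunk is missing
    write(id: string, data: string): Promise<void>;
}
interface ChunkFetcher {
    fetchQueued(): Promise<void>; // drains the ChunkManager's queue of missing chunks from the remote
}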
Side note: Although class-oriented programming is sometimes considered an outdated style, I have come to re-evaluate it as valuable from the perspectives of maintainability and readability.
## 0.25.4
29th July, 2025
### Fixed
- The PBKDF2Salt is no longer corrupted when attempting replication while the device is offline. (#686)
- If this issue has already occurred, please use `Maintenance` -> `Rebuilding Operations (Remote Only)` -> `Overwrite Remote` and `Send` to resolve it.
- Please perform this operation on the device that is most reliable.
- I am so sorry for the inconvenience; there are no patching workarounds. The rebuilding operation is the only solution.
- This issue only affects the encryption of the remote database and does not impact the local databases on any devices.
- (Preventing synchronisation is by design and expected behaviour, even if it is sometimes inconvenient. This is also why we should avoid using workarounds; it is, admittedly, an excuse).
- In any case, we can unlock the remote from the warning dialogue on receiving devices. We are performing replication rather than simple synchronisation, at the expense of a little complexity (I would like to say thank you again for all your efforts to manage and maintain the settings! All your understanding saves our notes).
- This process may require considerable time and bandwidth (as usual), so please wait patiently and ensure a stable network connection.
### Side note
The PBKDF2Salt will be referred to as the `Security Seed`, and it is used to derive the encryption key for replication. Therefore, it should be stored on the server prior to synchronisation. We apologise for the lack of explanation in previous updates!
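Since the `Security Seed` is only a salt, storing it on the server is safe; the passphrase itself never leaves the device. A sketch of the general PBKDF2 pattern with WebCrypto; the iteration count, hash, and key parameters here are placeholders, not the plugin's actual values:

async function deriveReplicationKey(passphrase: string, salt: Uint8Array): Promise<CryptoKey> {
    const material = await crypto.subtle.importKey(
        "raw", new TextEncoder().encode(passphrase), "PBKDF2", false, ["deriveKey"]
    );
    return crypto.subtle.deriveKey(
        { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" }, // placeholder parameters
        material,
        { name: "AES-GCM", length: 256 },
        false,
        ["encrypt", "decrypt"]
    );
}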
## 0.25.3
22nd July, 2025
### Fixed
- Now the `Doctor` at migration will save the configuration.
## 0.25.2 ~~0.25.1~~
(0.25.1 was missed due to a mistake in the versioning process).
19th July, 2025
### Refined and New Features
- Fetching the remote database on `RedFlag` now also retrieves remote configurations optionally.
- This is beneficial if we have already set up another device and wish to use the same configuration. We will see a much less frequent `Unmatched` dialogue.
- The setup wizard using Set-up URI and QR code has been improved.
- The message is now more user-friendly.
- The obsolete method (manual setting application) has been removed.
- The `Cancel` button has been added to the setup wizard.
- We can now fetch the remote configuration from the server if it exists, which is useful for adding new devices.
- Mostly the same as fetching the remote configuration via `RedFlag`.
- We can also use the `Doctor` to check and fix the imported (and fetched) configuration before applying it.
### Changes
- The Set-up URI is now encrypted with a new encryption algorithm (mostly the same as `V2`).
- The new Set-up URI is not compatible with version 0.24.x or earlier.
## 0.25.0
### Fixed
- The encryption algorithm now uses HKDF with a master key.
@@ -38,8 +201,6 @@ As a result, this is the first time in a while that forward compatibility has be
- `couchdb_utils.ts` has been separated into several explicitly named files.
- Some missing functions in `bgWorker.mock.ts` have been added.
## 0.24.0
I know that we have been waiting for a long time. It is finally released!