Mirror of https://github.com/vrtmrz/obsidian-livesync.git, synced 2026-02-27 14:38:48 +00:00
Compare commits
10 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 0a8dbe097e | |
| | 2c0fcf74d0 | |
| | a1ab1efd5d | |
| | c8fcf2d0d5 | |
| | c384e2f7fb | |
| | 99c1c7dc1a | |
| | 84adec4b1a | |
| | f0b202bd91 | |
| | d54b7e2d93 | |
| | 6952ef37f5 | |
@@ -45,7 +45,7 @@ This plug-in might be useful for researchers, engineers, and developers with a n
2. Configure plug-in in [Quick Setup](docs/quick_setup.md)

> [!TIP]
> We are still able to use IBM Cloudant; however, it is not recommended nowadays for several reasons. Here is [Setup IBM Cloudant](docs/setup_cloudant.md).
> Now that fly.io is no longer free, fortunately, even though there are some issues, we are still able to use IBM Cloudant. Here is [Setup IBM Cloudant](docs/setup_cloudant.md). It will be updated soon!

## Information in StatusBar
@@ -16,7 +16,8 @@ There are three methods to set up Self-hosted LiveSync.

### 1. Using setup URIs

> [!TIP] What is the setup URI? Why is it required?
> [!TIP]
> What is the setup URI? Why is it required?
> The setup URI is the encrypted representation of a Self-hosted LiveSync configuration as a URI. It starts with `obsidian://setuplivesync?settings=`. It is encrypted with a passphrase, so it can be shared relatively securely between devices. It is a bit long, but it is a single line, which allows a whole series of settings to be applied at once without any inconsistencies.
>
> If you have configured the remote database by [Automated setup on Fly.io](./setup_flyio.md#a-very-automated-setup) or [set up your server with the tool](./setup_own_server.md#1-generate-the-setup-uri-on-a-desktop-device-or-server), **you should already have one**.
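For illustration, a minimal sketch of how such a URI could be assembled. The `configURIBase` constant and the `encrypt` helper from `lib/src/e2ee_v2` are both imported in the source files shown further down this page, but the exact signature used here is an assumption, not the plugin's confirmed API:

```typescript
import { encrypt } from "./lib/src/e2ee_v2";

// Hypothetical sketch: serialize the settings, encrypt them with the shared
// passphrase, and embed the ciphertext in the obsidian:// URI.
const configURIBase = "obsidian://setuplivesync?settings=";

export async function encodeSetupURI(settings: object, passphrase: string): Promise<string> {
    const json = JSON.stringify(settings);
    const encrypted = await encrypt(json, passphrase, false); // assumed signature
    return configURIBase + encodeURIComponent(encrypted);
}
```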
@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.22.16",
"version": "0.23.0",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",
2748  package-lock.json (generated)
File diff suppressed because it is too large
@@ -1,6 +1,6 @@
{
"name": "obsidian-livesync",
"version": "0.22.16",
"version": "0.23.0",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
@@ -54,7 +54,12 @@
"typescript": "^5.4.2"
},
"dependencies": {
"@aws-sdk/client-s3": "^3.556.0",
"@smithy/fetch-http-handler": "^2.5.0",
"@smithy/protocol-http": "^3.3.0",
"@smithy/querystring-builder": "^2.2.0",
"diff-match-patch": "^1.0.5",
"fflate": "^0.8.2",
"idb": "^8.0.0",
"minimatch": "^9.0.3",
"xxhash-wasm": "0.4.2",
@@ -4,7 +4,7 @@ import { Notice, type PluginManifest, parseYaml, normalizePath, type ListedFiles
import type { EntryDoc, LoadedEntry, InternalFileEntry, FilePathWithPrefix, FilePath, DocumentID, AnyEntry, SavingEntry } from "./lib/src/types";
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, MODE_SELECTIVE } from "./lib/src/types";
import { ICXHeader, PERIODIC_PLUGIN_SWEEP, } from "./types";
import { createSavingEntryFromLoadedEntry, createTextBlob, delay, fireAndForget, getDocData, isDocContentSame } from "./lib/src/utils";
import { createSavingEntryFromLoadedEntry, createTextBlob, delay, fireAndForget, getDocData, isDocContentSame, throttle } from "./lib/src/utils";
import { Logger } from "./lib/src/logger";
import { readString, decodeBinary, arrayBufferToBase64, digestHash } from "./lib/src/strbin";
import { serialized } from "./lib/src/lock";
@@ -305,7 +305,8 @@ export class ConfigSync extends LiveSyncCommands {
}
return false;
}
createMissingConfigurationEntry() {
createMissingConfigurationEntry = throttle(() => this._createMissingConfigurationEntry(), 1000);
_createMissingConfigurationEntry() {
let saveRequired = false;
for (const v of this.pluginList) {
const key = `${v.category}/${v.name}`;
@@ -349,8 +350,7 @@ export class ConfigSync extends LiveSyncCommands {
Logger(ex, LOG_LEVEL_VERBOSE);
}
return [];
}, { suspended: false, batchSize: 1, concurrentLimit: 10, delay: 100, yieldThreshold: 10, maintainDelay: false, totalRemainingReactiveSource: pluginScanningCount }).startPipeline().root.onIdle(() => {
// Logger(`All files enumerated`, LOG_LEVEL_INFO, "get-plugins");
}, { suspended: false, batchSize: 1, concurrentLimit: 10, delay: 100, yieldThreshold: 10, maintainDelay: false, totalRemainingReactiveSource: pluginScanningCount }).startPipeline().root.onUpdateProgress(() => {
this.createMissingConfigurationEntry();
});
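This commit leans on a `throttle` helper from `lib/src/utils` in several places (here, and again in `src/main.ts` below): `onUpdateProgress` may fire on every queue tick, so the expensive `_createMissingConfigurationEntry()` scan is coalesced to at most once per second. A minimal sketch of what such a trailing-edge throttle presumably looks like; the real implementation in the submodule may differ:

```typescript
// Sketch of a trailing-edge throttle: bursts of calls within `timeout` ms
// collapse into a single deferred invocation.
export function throttle<T extends unknown[]>(func: (...args: T) => void, timeout: number): (...args: T) => void {
    let timer: ReturnType<typeof setTimeout> | undefined;
    return (...args: T) => {
        if (timer !== undefined) return; // a call is already pending
        timer = setTimeout(() => {
            timer = undefined;
            func(...args);
        }, timeout);
    };
}
```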
@@ -9,7 +9,7 @@ import { serialized } from "./lib/src/lock";
import { JsonResolveModal } from "./JsonResolveModal";
import { LiveSyncCommands } from "./LiveSyncCommands";
import { addPrefix, stripAllPrefixes } from "./lib/src/path";
import { KeyedQueueProcessor, QueueProcessor } from "./lib/src/processor";
import { QueueProcessor } from "./lib/src/processor";
import { hiddenFilesEventCount, hiddenFilesProcessingCount } from "./lib/src/stores";

export class HiddenFileSync extends LiveSyncCommands {
@@ -73,15 +73,15 @@ export class HiddenFileSync extends LiveSyncCommands {
}

procInternalFile(filename: string) {
this.internalFileProcessor.enqueueWithKey(filename, filename);
this.internalFileProcessor.enqueue(filename);
}
internalFileProcessor = new KeyedQueueProcessor<string, any>(
internalFileProcessor = new QueueProcessor<string, any>(
async (filenames) => {
Logger(`START :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE);
await this.syncInternalFilesAndDatabase("pull", false, false, filenames);
Logger(`DONE :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE);
return;
}, { batchSize: 100, concurrentLimit: 1, delay: 10, yieldThreshold: 10, suspended: false, totalRemainingReactiveSource: hiddenFilesEventCount }
}, { batchSize: 100, concurrentLimit: 1, delay: 10, yieldThreshold: 100, suspended: false, totalRemainingReactiveSource: hiddenFilesEventCount }
);

recentProcessedInternalFiles = [] as string[];
@@ -1,4 +1,4 @@
import { type EntryDoc, type ObsidianLiveSyncSettings, DEFAULT_SETTINGS, LOG_LEVEL_NOTICE } from "./lib/src/types";
import { type EntryDoc, type ObsidianLiveSyncSettings, DEFAULT_SETTINGS, LOG_LEVEL_NOTICE, REMOTE_COUCHDB, REMOTE_MINIO } from "./lib/src/types";
import { configURIBase } from "./types";
import { Logger } from "./lib/src/logger";
import { PouchDB } from "./lib/src/pouchdb-browser.js";
@@ -9,6 +9,7 @@ import { delay, fireAndForget } from "./lib/src/utils";
import { confirmWithMessage } from "./dialogs";
import { Platform } from "./deps";
import { fetchAllUsedChunks } from "./lib/src/utils_couchdb";
import type { LiveSyncCouchDBReplicator } from "./lib/src/LiveSyncReplicator.js";

export class SetupLiveSync extends LiveSyncCommands {
onunload() { }
@@ -50,7 +51,7 @@ export class SetupLiveSync extends LiveSyncCommands {
const encryptingPassphrase = await askString(this.app, "Encrypt your settings", "The passphrase to encrypt the setup URI", "", true);
if (encryptingPassphrase === false)
return;
const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" };
const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" } as Partial<ObsidianLiveSyncSettings>;
if (stripExtra) {
delete setting.pluginSyncExtendedSetting;
}
@@ -311,6 +312,7 @@ Of course, we are able to disable these features.`
}
async suspendReflectingDatabase() {
if (this.plugin.settings.doNotSuspendOnFetching) return;
if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
Logger(`Suspending reflection: Database and storage changes will not be reflected in each other until completely finished the fetching.`, LOG_LEVEL_NOTICE);
this.plugin.settings.suspendParseReplicationResult = true;
this.plugin.settings.suspendFileWatching = true;
@@ -318,6 +320,7 @@ Of course, we are able to disable these features.`
}
async resumeReflectingDatabase() {
if (this.plugin.settings.doNotSuspendOnFetching) return;
if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
Logger(`Database and storage reflection has been resumed!`, LOG_LEVEL_NOTICE);
this.plugin.settings.suspendParseReplicationResult = false;
this.plugin.settings.suspendFileWatching = false;
@@ -348,9 +351,10 @@ Of course, we are able to disable these features.`
await this.plugin.resetLocalDatabase();
}
async fetchRemoteChunks() {
if (!this.plugin.settings.doNotSuspendOnFetching && this.plugin.settings.readChunksOnline) {
if (!this.plugin.settings.doNotSuspendOnFetching && this.plugin.settings.readChunksOnline && this.plugin.settings.remoteType == REMOTE_COUCHDB) {
Logger(`Fetching chunks`, LOG_LEVEL_NOTICE);
const remoteDB = await this.plugin.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true);
const replicator = this.plugin.getReplicator() as LiveSyncCouchDBReplicator;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true);
if (typeof remoteDB == "string") {
Logger(remoteDB, LOG_LEVEL_NOTICE);
} else {
@@ -377,9 +381,6 @@ Of course, we are able to disable these features.`
await this.plugin.replicateAllFromServer(true);
await delay(1000);
await this.plugin.replicateAllFromServer(true);
// if (!tryLessFetching) {
//     await this.fetchRemoteChunks();
// }
await this.resumeReflectingDatabase();
await this.askHiddenFileConfiguration({ enableFetch: true });
}
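The `as LiveSyncCouchDBReplicator` cast in `fetchRemoteChunks` above is only safe because the `remoteType == REMOTE_COUCHDB` condition was checked on the preceding line. Elsewhere in this same commit (the `CHOICE_CLEAN` branch in `src/main.ts` below), the same narrowing is done with an explicit runtime guard instead:

```typescript
// Runtime narrowing instead of a cast, as src/main.ts does in this commit:
const replicator = this.plugin.getReplicator();
if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
// replicator is now statically known to be a LiveSyncCouchDBReplicator
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true);
```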
@@ -6,7 +6,7 @@ export interface KeyValueDatabase {
clear(): Promise<void>;
keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>;
close(): void;
destroy(): void;
destroy(): Promise<void>;
}
const databaseCache: { [key: string]: IDBPDatabase<any> } = {};
export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
@@ -20,8 +20,7 @@ export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatab
db.createObjectStore(storeKey);
},
});
let db: IDBPDatabase<any> = null;
db = await dbPromise;
const db = await dbPromise;
databaseCache[dbKey] = db;
return {
get<T>(key: string): Promise<T> {
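The `destroy(): Promise<void>` change lets callers await database deletion; `src/main.ts` below now does `await this.kvDB.destroy()` before reopening the store. A plausible body, assuming `idb`'s `deleteDB` (the package this file is built on); the actual implementation is not shown in this diff:

```typescript
import { deleteDB } from "idb";

// Hypothetical sketch of an awaitable destroy(): close the cached handle,
// then resolve once IndexedDB has actually dropped the database.
async function destroy(dbKey: string): Promise<void> {
    databaseCache[dbKey]?.close();
    delete databaseCache[dbKey];
    await deleteDB(dbKey);
}
```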
83  src/MultipleRegExpControl.svelte (new file)
@@ -0,0 +1,83 @@
<script lang="ts">
export let patterns = [] as string[];
export let originals = [] as string[];

export let apply: (args: string[]) => Promise<void> = (_: string[]) => Promise.resolve();
function revert() {
patterns = [...originals];
}
const CHECK_OK = "✔";
const CHECK_NG = "⚠";
const MARK_MODIFIED = "✏ ";
function checkRegExp(pattern: string) {
if (pattern.trim() == "") return "";
try {
const _ = new RegExp(pattern);
return CHECK_OK;
} catch (ex) {
return CHECK_NG;
}
}
$: status = patterns.map((e) => checkRegExp(e));
$: modified = patterns.map((e, i) => (e != (originals?.[i] ?? "") ? MARK_MODIFIED : ""));

function remove(idx: number) {
patterns[idx] = "";
}
function add() {
patterns = [...patterns, ""];
}
</script>

<ul>
{#each patterns as pattern, idx}
<li><label>{modified[idx]}{status[idx]}</label><input type="text" bind:value={pattern} class={modified[idx]} /><button class="iconbutton" on:click={() => remove(idx)}>🗑</button></li>
{/each}
<li>
<label><button on:click={() => add()}>Add</button></label>
</li>
<li class="buttons">
<button on:click={() => apply(patterns)} disabled={status.some((e) => e == CHECK_NG) || modified.every((e) => e == "")}>Apply</button>
<button on:click={() => revert()} disabled={status.some((e) => e == CHECK_NG) || modified.every((e) => e == "")}>Revert</button>
</li>
</ul>

<style>
label {
min-width: 4em;
width: 4em;
display: inline-flex;
flex-direction: row;
justify-content: flex-end;
}
ul {
flex-grow: 1;
display: inline-flex;
flex-direction: column;
list-style-type: none;
margin-block-start: 0;
margin-block-end: 0;
margin-inline-start: 0px;
margin-inline-end: 0px;
padding-inline-start: 0;
}
li {
padding: var(--size-2-1) var(--size-4-1);
display: inline-flex;
flex-grow: 1;
align-items: center;
justify-content: flex-end;
gap: var(--size-4-2);
}
li input {
min-width: 10em;
}
li.buttons {
}
button.iconbutton {
max-width: 4em;
}
span.spacer {
flex-grow: 1;
}
</style>
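Note: the `modified` reactive statement above is written here with explicit parentheses, `e != (originals?.[i] ?? "")`, so a blank new row compares against the empty string as intended; without them, `??` binds after `!=` and the fallback never applies. A hedged sketch of how the settings tab presumably mounts this component (Svelte 3/4 class-component API; the target element, initial values, and persistence callback are all assumptions):

```typescript
import MultipleRegExpControl from "./MultipleRegExpControl.svelte";

// Hypothetical mounting code: feed the current patterns in, persist them when
// the user clicks Apply.
export function mountPatternEditor(
    target: HTMLElement,
    initial: string[],
    persist: (patterns: string[]) => Promise<void>,
) {
    return new MultipleRegExpControl({
        target,
        props: {
            patterns: [...initial],
            originals: [...initial],
            apply: async (patterns: string[]) => persist(patterns),
        },
    });
}
```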
133  src/ObsHttpHandler.ts (new file)
@@ -0,0 +1,133 @@
// This file is based on a file published by @remotely-save under the Apache 2 License.
// I would love to express my deepest gratitude to the original authors for their hard work and dedication. Without their contributions, this project would not have been possible.
//
// The original implementation is here: https://github.com/remotely-save/remotely-save/blob/28b99557a864ef59c19d2ad96101196e401718f0/src/remoteForS3.ts

import {
FetchHttpHandler,
type FetchHttpHandlerOptions,
} from "@smithy/fetch-http-handler";
import { HttpRequest, HttpResponse, type HttpHandlerOptions } from "@smithy/protocol-http";
//@ts-ignore
import { requestTimeout } from "@smithy/fetch-http-handler/dist-es/request-timeout";
import { buildQueryString } from "@smithy/querystring-builder";
import { requestUrl, type RequestUrlParam } from "./deps";
////////////////////////////////////////////////////////////////////////////////
// special handler using Obsidian requestUrl
////////////////////////////////////////////////////////////////////////////////

/**
 * This is close to the original implementation of FetchHttpHandler
 * https://github.com/aws/aws-sdk-js-v3/blob/main/packages/fetch-http-handler/src/fetch-http-handler.ts
 * which is released under the Apache 2 License,
 * but this one uses Obsidian's requestUrl instead.
 */
export class ObsHttpHandler extends FetchHttpHandler {
requestTimeoutInMs: number | undefined;
reverseProxyNoSignUrl: string | undefined;
constructor(
options?: FetchHttpHandlerOptions,
reverseProxyNoSignUrl?: string
) {
super(options);
this.requestTimeoutInMs =
options === undefined ? undefined : options.requestTimeout;
this.reverseProxyNoSignUrl = reverseProxyNoSignUrl;
}
async handle(
request: HttpRequest,
{ abortSignal }: HttpHandlerOptions = {}
): Promise<{ response: HttpResponse }> {
if (abortSignal?.aborted) {
const abortError = new Error("Request aborted");
abortError.name = "AbortError";
return Promise.reject(abortError);
}

let path = request.path;
if (request.query) {
const queryString = buildQueryString(request.query);
if (queryString) {
path += `?${queryString}`;
}
}

const { port, method } = request;
let url = `${request.protocol}//${request.hostname}${port ? `:${port}` : ""}${path}`;
if (
this.reverseProxyNoSignUrl !== undefined &&
this.reverseProxyNoSignUrl !== ""
) {
const urlObj = new URL(url);
urlObj.host = this.reverseProxyNoSignUrl;
url = urlObj.href;
}
const body =
method === "GET" || method === "HEAD" ? undefined : request.body;

const transformedHeaders: Record<string, string> = {};
for (const key of Object.keys(request.headers)) {
const keyLower = key.toLowerCase();
if (keyLower === "host" || keyLower === "content-length") {
continue;
}
transformedHeaders[keyLower] = request.headers[key];
}

let contentType: string | undefined = undefined;
if (transformedHeaders["content-type"] !== undefined) {
contentType = transformedHeaders["content-type"];
}

let transformedBody: any = body;
if (ArrayBuffer.isView(body)) {
transformedBody = new Uint8Array(body.buffer).buffer;
}

const param: RequestUrlParam = {
body: transformedBody,
headers: transformedHeaders,
method: method,
url: url,
contentType: contentType,
};

const raceOfPromises = [
requestUrl(param).then((rsp) => {
const headers = rsp.headers;
const headersLower: Record<string, string> = {};
for (const key of Object.keys(headers)) {
headersLower[key.toLowerCase()] = headers[key];
}
const stream = new ReadableStream<Uint8Array>({
start(controller) {
controller.enqueue(new Uint8Array(rsp.arrayBuffer));
controller.close();
},
});
return {
response: new HttpResponse({
headers: headersLower,
statusCode: rsp.status,
body: stream,
}),
};
}),
requestTimeout(this.requestTimeoutInMs),
];

if (abortSignal) {
raceOfPromises.push(
new Promise<never>((resolve, reject) => {
abortSignal.onabort = () => {
const abortError = new Error("Request aborted");
abortError.name = "AbortError";
reject(abortError);
};
})
);
}
return Promise.race(raceOfPromises);
}
}
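For context, a sketch of how this handler is presumably handed to the AWS SDK; `src/main.ts` below exposes an instance via `customFetchHandler()`, but the endpoint and credential values here are placeholders, not values from this commit:

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { ObsHttpHandler } from "./ObsHttpHandler";

// Hypothetical wiring: route all S3/MinIO traffic through Obsidian's
// requestUrl to sidestep CORS restrictions inside the app's webview.
const client = new S3Client({
    region: "us-east-1",                   // placeholder
    endpoint: "https://minio.example.com", // placeholder
    credentials: { accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_KEY" }, // placeholders
    forcePathStyle: true,
    requestHandler: new ObsHttpHandler(),
});
```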
File diff suppressed because it is too large
@@ -2,7 +2,7 @@ import type { SerializedFileAccess } from "./SerializedFileAccess";
import { Plugin, TAbstractFile, TFile, TFolder } from "./deps";
import { Logger } from "./lib/src/logger";
import { shouldBeIgnored } from "./lib/src/path";
import type { KeyedQueueProcessor } from "./lib/src/processor";
import type { QueueProcessor } from "./lib/src/processor";
import { LOG_LEVEL_NOTICE, type FilePath, type ObsidianLiveSyncSettings } from "./lib/src/types";
import { delay } from "./lib/src/utils";
import { type FileEventItem, type FileEventType, type FileInfo, type InternalFileInfo } from "./types";
@@ -19,7 +19,7 @@ type LiveSyncForStorageEventManager = Plugin &
vaultAccess: SerializedFileAccess
} & {
isTargetFile: (file: string | TAbstractFile) => Promise<boolean>,
fileEventQueue: KeyedQueueProcessor<FileEventItem, any>,
fileEventQueue: QueueProcessor<FileEventItem, any>,
isFileSizeExceeded: (size: number) => boolean;
};

@@ -133,8 +133,7 @@ export class StorageEventManagerObsidian extends StorageEventManager {
path: file.path,
size: file.stat.size
} as FileInfo : file as InternalFileInfo;

this.plugin.fileEventQueue.enqueueWithKey(`file-${fileInfo.path}`, {
this.plugin.fileEventQueue.enqueue({
type,
args: {
file: fileInfo,
2  src/lib
Submodule src/lib updated: 98809f37df...5da1dbc7fc
479  src/main.ts
@@ -2,9 +2,9 @@ const isDebug = false;

import { type Diff, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch, stringifyYaml, parseYaml } from "./deps";
import { Notice, Plugin, TFile, addIcon, TFolder, normalizePath, TAbstractFile, Editor, MarkdownView, type RequestUrlParam, type RequestUrlResponse, requestUrl, type MarkdownFileInfo } from "./deps";
import { type EntryDoc, type LoadedEntry, type ObsidianLiveSyncSettings, type diff_check_result, type diff_result_leaf, type EntryBody, LOG_LEVEL, VER, DEFAULT_SETTINGS, type diff_result, FLAGMD_REDFLAG, SYNCINFO_ID, SALT_OF_PASSPHRASE, type ConfigPassphraseStore, type CouchDBConnection, FLAGMD_REDFLAG2, FLAGMD_REDFLAG3, PREFIXMD_LOGFILE, type DatabaseConnectingStatus, type EntryHasPath, type DocumentID, type FilePathWithPrefix, type FilePath, type AnyEntry, LOG_LEVEL_DEBUG, LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_URGENT, LOG_LEVEL_VERBOSE, type SavingEntry, MISSING_OR_ERROR, NOT_CONFLICTED, AUTO_MERGED, CANCELLED, LEAVE_TO_SUBSEQUENT, FLAGMD_REDFLAG2_HR, FLAGMD_REDFLAG3_HR, } from "./lib/src/types";
import { type EntryDoc, type LoadedEntry, type ObsidianLiveSyncSettings, type diff_check_result, type diff_result_leaf, type EntryBody, LOG_LEVEL, VER, DEFAULT_SETTINGS, type diff_result, FLAGMD_REDFLAG, SYNCINFO_ID, SALT_OF_PASSPHRASE, type ConfigPassphraseStore, type CouchDBConnection, FLAGMD_REDFLAG2, FLAGMD_REDFLAG3, PREFIXMD_LOGFILE, type DatabaseConnectingStatus, type EntryHasPath, type DocumentID, type FilePathWithPrefix, type FilePath, type AnyEntry, LOG_LEVEL_DEBUG, LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_URGENT, LOG_LEVEL_VERBOSE, type SavingEntry, MISSING_OR_ERROR, NOT_CONFLICTED, AUTO_MERGED, CANCELLED, LEAVE_TO_SUBSEQUENT, FLAGMD_REDFLAG2_HR, FLAGMD_REDFLAG3_HR, REMOTE_MINIO, REMOTE_COUCHDB, type BucketSyncSetting, } from "./lib/src/types";
import { type InternalFileInfo, type CacheData, type FileEventItem, FileWatchEventQueueMax } from "./types";
import { arrayToChunkedArray, createBlob, determineTypeFromBlob, fireAndForget, getDocData, isAnyNote, isDocContentSame, isObjectDifferent, readContent, sendValue } from "./lib/src/utils";
import { arrayToChunkedArray, createBlob, delay, determineTypeFromBlob, fireAndForget, getDocData, isAnyNote, isDocContentSame, isObjectDifferent, readContent, sendValue, throttle } from "./lib/src/utils";
import { Logger, setGlobalLogFunction } from "./lib/src/logger";
import { PouchDB } from "./lib/src/pouchdb-browser.js";
import { ConflictResolveModal } from "./ConflictResolveModal";
@@ -12,7 +12,7 @@ import { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab";
import { DocumentHistoryModal } from "./DocumentHistoryModal";
import { applyPatch, cancelAllPeriodicTask, cancelAllTasks, cancelTask, generatePatchObj, id2path, isObjectMargeApplicable, isSensibleMargeApplicable, flattenObject, path2id, scheduleTask, tryParseJSON, isValidPath, isInternalMetadata, isPluginMetadata, stripInternalMetadataPrefix, isChunk, askSelectString, askYesNo, askString, PeriodicProcessor, getPath, getPathWithoutPrefix, getPathFromTFile, performRebuildDB, memoIfNotExist, memoObject, retrieveMemoObject, disposeMemoObject, isCustomisationSyncMetadata, compareFileFreshness, BASE_IS_NEW, TARGET_IS_NEW, EVEN, compareMTime, markChangesAreSame } from "./utils";
import { encrypt, tryDecrypt } from "./lib/src/e2ee_v2";
import { balanceChunkPurgedDBs, enableEncryption, isCloudantURI, isErrorOfMissingDoc, isValidRemoteCouchDBURI, purgeUnreferencedChunks } from "./lib/src/utils_couchdb";
import { balanceChunkPurgedDBs, enableCompression, enableEncryption, isCloudantURI, isErrorOfMissingDoc, isValidRemoteCouchDBURI, purgeUnreferencedChunks } from "./lib/src/utils_couchdb";
import { logStore, type LogEntry, collectingChunks, pluginScanningCount, hiddenFilesProcessingCount, hiddenFilesEventCount, logMessages } from "./lib/src/stores";
import { setNoticeClass } from "./lib/src/wrapper";
import { versionNumberString2Number, writeString, decodeBinary, readString } from "./lib/src/strbin";
@@ -20,7 +20,7 @@ import { addPrefix, isAcceptedAll, isPlainText, shouldBeIgnored, stripAllPrefixe
import { isLockAcquired, serialized, shareRunningResult, skipIfDuplicated } from "./lib/src/lock";
import { StorageEventManager, StorageEventManagerObsidian } from "./StorageEventManager";
import { LiveSyncLocalDB, type LiveSyncLocalDBEnv } from "./lib/src/LiveSyncLocalDB";
import { LiveSyncDBReplicator, type LiveSyncReplicatorEnv } from "./lib/src/LiveSyncReplicator";
import { LiveSyncAbstractReplicator, type LiveSyncReplicatorEnv } from "./lib/src/LiveSyncAbstractReplicator.js";
import { type KeyValueDatabase, OpenKeyValueDatabase } from "./KeyValueDB";
import { LiveSyncCommands } from "./LiveSyncCommands";
import { HiddenFileSync } from "./CmdHiddenFileSync";
@@ -31,9 +31,14 @@ import { GlobalHistoryView, VIEW_TYPE_GLOBAL_HISTORY } from "./GlobalHistoryView
import { LogPaneView, VIEW_TYPE_LOG } from "./LogPaneView";
import { LRUCache } from "./lib/src/LRUCache";
import { SerializedFileAccess } from "./SerializedFileAccess.js";
import { KeyedQueueProcessor, QueueProcessor, type QueueItemWithKey } from "./lib/src/processor.js";
import { QueueProcessor } from "./lib/src/processor.js";
import { reactive, reactiveSource } from "./lib/src/reactive.js";
import { initializeStores } from "./stores.js";
import { JournalSyncMinio } from "./lib/src/JournalSyncMinio.js";
import { LiveSyncJournalReplicator, type LiveSyncJournalReplicatorEnv } from "./lib/src/LiveSyncJournalReplicator.js";
import { LiveSyncCouchDBReplicator, type LiveSyncCouchDBReplicatorEnv } from "./lib/src/LiveSyncReplicator.js";
import type { CheckPointInfo, SimpleStore } from "./lib/src/JournalSyncTypes.js";
import { ObsHttpHandler } from "./ObsHttpHandler.js";

setNoticeClass(Notice);
@@ -69,11 +74,16 @@ const SETTING_HEADER = "````yaml:livesync-setting\n";
const SETTING_FOOTER = "\n````";

export default class ObsidianLiveSyncPlugin extends Plugin
implements LiveSyncLocalDBEnv, LiveSyncReplicatorEnv {
implements LiveSyncLocalDBEnv, LiveSyncReplicatorEnv, LiveSyncJournalReplicatorEnv, LiveSyncCouchDBReplicatorEnv {
_customHandler!: ObsHttpHandler;
customFetchHandler() {
if (!this._customHandler) this._customHandler = new ObsHttpHandler(undefined, undefined);
return this._customHandler;
}

settings!: ObsidianLiveSyncSettings;
localDatabase!: LiveSyncLocalDB;
replicator!: LiveSyncDBReplicator;
replicator!: LiveSyncAbstractReplicator;

statusBar?: HTMLElement;
_suspended = false;
@@ -119,7 +129,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin
requestCount = reactiveSource(0);
responseCount = reactiveSource(0);
processReplication = (e: PouchDB.Core.ExistingDocument<EntryDoc>[]) => this.parseReplicationResult(e);
async connectRemoteCouchDB(uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | false, useDynamicIterationCount: boolean, performSetup: boolean, skipInfo: boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> {
async connectRemoteCouchDB(uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | false, useDynamicIterationCount: boolean, performSetup: boolean, skipInfo: boolean, compression: boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> {
if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
if (uri.toLowerCase() != uri) return "Remote URI and database name could not contain capital letters.";
if (uri.indexOf(" ") !== -1) return "Remote URI and database name could not contain spaces.";
@@ -237,6 +247,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin
};

const db: PouchDB.Database<EntryDoc> = new PouchDB<EntryDoc>(uri, conf);
enableCompression(db, compression);
if (passphrase !== "false" && typeof passphrase === "string") {
enableEncryption(db, passphrase, useDynamicIterationCount, false);
}
@@ -307,16 +318,24 @@ export default class ObsidianLiveSyncPlugin extends Plugin
onClose(db: LiveSyncLocalDB): void {
this.kvDB.close();
}
getNewReplicator(settingOverride: Partial<ObsidianLiveSyncSettings> = {}): LiveSyncAbstractReplicator {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_MINIO) {
return new LiveSyncJournalReplicator(this);
}
return new LiveSyncCouchDBReplicator(this);
}
async onInitializeDatabase(db: LiveSyncLocalDB): Promise<void> {
this.kvDB = await OpenKeyValueDatabase(db.dbname + "-livesync-kv");
this.replicator = new LiveSyncDBReplicator(this);
this.replicator = this.getNewReplicator();
}
async onResetDatabase(db: LiveSyncLocalDB): Promise<void> {
const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName();
localStorage.removeItem(lsKey);
const kvDBKey = "queued-files"
this.kvDB.del(kvDBKey);
// localStorage.removeItem(lsKey);
await this.kvDB.destroy();
this.kvDB = await OpenKeyValueDatabase(db.dbname + "-livesync-kv");
this.replicator = new LiveSyncDBReplicator(this);
this.replicator = this.getNewReplicator()
}
getReplicator() {
return this.replicator;
@@ -445,6 +464,52 @@ export default class ObsidianLiveSyncPlugin extends Plugin
}
Logger(`Checking expired file history done`);
}
simpleStore: SimpleStore<CheckPointInfo> = {
get: async (key: string) => {
return await this.kvDB.get(`os-${key}`);
},
set: async (key: string, value: any) => {
await this.kvDB.set(`os-${key}`, value);
},
delete: async (key) => {
await this.kvDB.del(`os-${key}`);
},
keys: async (from: string | undefined, to: string | undefined, count?: number | undefined): Promise<string[]> => {
const ret = this.kvDB.keys(IDBKeyRange.bound(`os-${from || ""}`, `os-${to || ""}`), count);
return (await ret).map(e => e.toString());
}
}
getMinioJournalSyncClient() {
const id = this.settings.accessKey
const key = this.settings.secretKey
const bucket = this.settings.bucket
const region = this.settings.region
const endpoint = this.settings.endpoint
const useCustomRequestHandler = this.settings.useCustomRequestHandler;
return new JournalSyncMinio(id, key, endpoint, bucket, this.simpleStore, this, useCustomRequestHandler, region);
}
async resetRemoteBucket() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.resetBucket();
}
async resetJournalSync() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.resetCheckpointInfo();
}
async journalSendTest() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.sendLocalJournal();
}
async journalFetchTest() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.receiveRemoteJournal();
}

async journalSyncTest() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.sync();
}
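The `simpleStore` above adapts the key-value database (with an `os-` key prefix) to the journal sync's checkpoint storage. Its shape can be inferred from the implementation; a sketch of the interface as it presumably appears in `lib/src/JournalSyncTypes.ts`, which may differ in detail:

```typescript
// Inferred from the implementation above, not copied from the submodule.
interface SimpleStore<T> {
    get(key: string): Promise<T | undefined>;
    set(key: string, value: T): Promise<void>;
    delete(key: string): Promise<void>;
    keys(from: string | undefined, to: string | undefined, count?: number): Promise<string[]>;
}
```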
async onLayoutReady() {
this.registerFileWatchEvents();
if (!this.localDatabase.isReady) {
@@ -535,8 +600,8 @@ Click anywhere to stop counting down.
this.registerWatchEvents();
await this.realizeSettingSyncMode();
this.swapSaveCommand();
if (this.settings.syncOnStart) {
this.replicator.openReplication(this.settings, false, false);
if (!this.settings.liveSync && this.settings.syncOnStart) {
this.replicator.openReplication(this.settings, false, false, false);
}
this.scanStat();
} catch (ex) {
@@ -617,6 +682,63 @@ Note: We can always able to read V1 format. It will be progressively converted.
this.addOnConfigSync.showPluginSyncModal();
}).addClass("livesync-ribbon-showcustom");

this.addCommand({
id: "debug-x1",
name: "Journal send",
callback: () => {
this.journalSendTest();
}
});
this.addCommand({
id: "debug-x3",
name: "Journal receive",
callback: () => {
this.journalFetchTest();
}
});
this.addCommand({
id: "debug-x4",
name: "Sync By Journal",
callback: () => {
this.journalSyncTest();
}
});
this.addCommand({
id: "debug-x5",
name: "Reset journal sync",
callback: () => {
this.resetJournalSync();
}
});
this.addCommand({
id: "debug-x6",
name: "Reset journal sync and delete all items on the bucket",
callback: () => {
this.resetRemoteBucket();
}
})
this.addCommand({
id: "debug-x7",
name: "Perform Test",
callback: () => {
// const p = getMockedPouch();
// this.localDatabase.localDatabase.replicate.to(p, { since: 1000, checkpoint: "source" });
}
})
this.addCommand({
id: "debug-x8",
name: "Pack test",
callback: async () => {
const minioJournal = this.getMinioJournalSyncClient();
// const pack = await minioJournal.createJournalPack();
// console.warn();
console.warn(await minioJournal._createJournalPack());
}
})

this.addCommand({
id: "view-log",
name: "Show log",
@@ -953,17 +1075,19 @@ We can perform a command in this file.
Logger("Could not determine passphrase for reading data.json! DO NOT synchronize with the remote before making sure your configuration is!", LOG_LEVEL_URGENT);
} else {
if (settings.encryptedCouchDBConnection) {
const keys = ["couchDB_URI", "couchDB_USER", "couchDB_PASSWORD", "couchDB_DBNAME"] as (keyof CouchDBConnection)[];
const decrypted = this.tryDecodeJson(await this.decryptConfigurationItem(settings.encryptedCouchDBConnection, passphrase)) as CouchDBConnection;
const keys = ["couchDB_URI", "couchDB_USER", "couchDB_PASSWORD", "couchDB_DBNAME", "accessKey", "bucket", "endpoint", "region", "secretKey"] as (keyof CouchDBConnection | keyof BucketSyncSetting)[];
const decrypted = this.tryDecodeJson(await this.decryptConfigurationItem(settings.encryptedCouchDBConnection, passphrase)) as (CouchDBConnection & BucketSyncSetting);
if (decrypted) {
for (const key of keys) {
if (key in decrypted) {
//@ts-ignore
settings[key] = decrypted[key]
}
}
} else {
Logger("Could not decrypt passphrase for reading data.json! DO NOT synchronize with the remote before making sure your configuration is!", LOG_LEVEL_URGENT);
for (const key of keys) {
//@ts-ignore
settings[key] = "";
}
}
@@ -1007,7 +1131,7 @@ Note: We can always able to read V1 format. It will be progressively converted.
}
this.deviceAndVaultName = localStorage.getItem(lsKey) || "";
this.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim());
this.fileEventQueue.delay = this.settings.batchSave ? 5000 : 100;
this.fileEventQueue.delay = (!this.settings.liveSync && this.settings.batchSave) ? 5000 : 100;
}
async saveSettingData() {
@@ -1020,26 +1144,38 @@ Note: We can always able to read V1 format. It will be progressively converted.
Logger("Could not determine passphrase for saving data.json! Our data.json have insecure items!", LOG_LEVEL_NOTICE);
} else {
if (settings.couchDB_PASSWORD != "" || settings.couchDB_URI != "" || settings.couchDB_USER != "" || settings.couchDB_DBNAME) {
const connectionSetting: CouchDBConnection = {
const connectionSetting: CouchDBConnection & BucketSyncSetting = {
couchDB_DBNAME: settings.couchDB_DBNAME,
couchDB_PASSWORD: settings.couchDB_PASSWORD,
couchDB_URI: settings.couchDB_URI,
couchDB_USER: settings.couchDB_USER,
accessKey: settings.accessKey,
bucket: settings.bucket,
endpoint: settings.endpoint,
region: settings.region,
secretKey: settings.secretKey,
useCustomRequestHandler: settings.useCustomRequestHandler
};
settings.encryptedCouchDBConnection = await this.encryptConfigurationItem(JSON.stringify(connectionSetting), settings);
settings.couchDB_PASSWORD = "";
settings.couchDB_DBNAME = "";
settings.couchDB_URI = "";
settings.couchDB_USER = "";
settings.accessKey = "";
settings.bucket = "";
settings.region = "";
settings.secretKey = "";
settings.endpoint = "";
}
if (settings.encrypt && settings.passphrase != "") {
settings.encryptedPassphrase = await this.encryptConfigurationItem(settings.passphrase, settings);
settings.passphrase = "";
}

}
await this.saveData(settings);
this.localDatabase.settings = this.settings;
this.fileEventQueue.delay = this.settings.batchSave ? 5000 : 100;
this.fileEventQueue.delay = (!this.settings.liveSync && this.settings.batchSave) ? 5000 : 100;
this.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim());
if (this.settings.settingSyncFile != "") {
fireAndForget(() => this.saveSettingToMarkdown(this.settings.settingSyncFile));
@@ -1237,9 +1373,13 @@ We can perform a command in this file.
_this.performCommand('editor:save-file');
};
}
hasFocus = true;
isLastHidden = false;
registerWatchEvents() {
this.registerEvent(this.app.workspace.on("file-open", this.watchWorkspaceOpen));
this.registerDomEvent(document, "visibilitychange", this.watchWindowVisibility);
this.registerDomEvent(window, "focus", () => this.setHasFocus(true));
this.registerDomEvent(window, "blur", () => this.setHasFocus(false));
this.registerDomEvent(window, "online", this.watchOnline);
this.registerDomEvent(window, "offline", this.watchOnline);
}
@@ -1255,15 +1395,30 @@ We can perform a command in this file.
await this.syncAllFiles();
}
}
setHasFocus(hasFocus: boolean) {
this.hasFocus = hasFocus;
this.watchWindowVisibility();
}
watchWindowVisibility() {
scheduleTask("watch-window-visibility", 500, () => fireAndForget(() => this.watchWindowVisibilityAsync()));
scheduleTask("watch-window-visibility", 100, () => fireAndForget(() => this.watchWindowVisibilityAsync()));
}

async watchWindowVisibilityAsync() {
if (this.settings.suspendFileWatching) return;
if (!this.settings.isConfigured) return;
if (!this.isReady) return;

if (this.isLastHidden && !this.hasFocus) {
// NO OP while non-focused after made hidden;
return;
}

const isHidden = document.hidden;
if (this.isLastHidden === isHidden) {
return;
}
this.isLastHidden = isHidden;

await this.applyBatchChange();
if (isHidden) {
this.replicator.closeReplication();
@@ -1272,23 +1427,25 @@ We can perform a command in this file.
// suspend all temporary.
if (this.suspended) return;
await Promise.all(this.addOns.map(e => e.onResume()));
if (this.settings.liveSync) {
this.replicator.openReplication(this.settings, true, false);
if (this.settings.remoteType == REMOTE_COUCHDB) {
if (this.settings.liveSync) {
this.replicator.openReplication(this.settings, true, false, false);
}
}
if (this.settings.syncOnStart) {
this.replicator.openReplication(this.settings, false, false);
this.replicator.openReplication(this.settings, false, false, false);
}
this.periodicSyncProcessor.enable(this.settings.periodicReplication ? this.settings.periodicReplicationInterval * 1000 : 0);
}
}
cancelRelativeEvent(item: FileEventItem) {
this.fileEventQueue.modifyQueue((items) => [...items.filter(e => e.entity.key != item.key)])
this.fileEventQueue.modifyQueue((items) => [...items.filter(e => e.key != item.key)])
}

queueNextFileEvent(items: QueueItemWithKey<FileEventItem>[], newItem: QueueItemWithKey<FileEventItem>): QueueItemWithKey<FileEventItem>[] {
queueNextFileEvent(items: FileEventItem[], newItem: FileEventItem): FileEventItem[] {
if (this.settings.batchSave && !this.settings.liveSync) {
const file = newItem.entity.args.file;
const file = newItem.args.file;
// if the latest event is the same type, omit that
// a.md MODIFY <- this should be cancelled when a.md MODIFIED
// b.md MODIFY <- this should be cancelled when b.md MODIFIED
@@ -1300,16 +1457,16 @@ We can perform a command in this file.
while (i >= 0) {
i--;
if (i < 0) break L1;
if (items[i].entity.args.file.path != file.path) {
if (items[i].args.file.path != file.path) {
continue L1;
}
if (items[i].entity.type != newItem.entity.type) break L1;
if (items[i].type != newItem.type) break L1;
items.remove(items[i]);
}
}
items.push(newItem);
// When deleting or renaming, the queue must be flushed once before processing subsequent processes to prevent unexpected race condition.
if (newItem.entity.type == "DELETE" || newItem.entity.type == "RENAME") {
if (newItem.type == "DELETE" || newItem.type == "RENAME") {
this.fileEventQueue.requestNextFlush();
}
return items;
@@ -1363,7 +1520,7 @@ We can perform a command in this file.
pendingFileEventCount = reactiveSource(0);
processingFileEventCount = reactiveSource(0);
fileEventQueue =
new KeyedQueueProcessor(
new QueueProcessor(
(items: FileEventItem[]) => this.handleFileEvent(items[0]),
{ suspended: true, batchSize: 1, concurrentLimit: 5, delay: 100, yieldThreshold: FileWatchEventQueueMax, totalRemainingReactiveSource: this.pendingFileEventCount, processingEntitiesReactiveSource: this.processingFileEventCount }
).replaceEnqueueProcessor((items, newItem) => this.queueNextFileEvent(items, newItem));
@@ -1622,21 +1779,32 @@ We can perform a command in this file.
this.conflictCheckQueue.enqueue(path);
}

_saveQueuedFiles = throttle(() => {
const saveData = this.replicationResultProcessor._queue.filter(e => e !== undefined && e !== null).map((e) => e?._id ?? "" as string) as string[];
const kvDBKey = "queued-files"
// localStorage.setItem(lsKey, saveData);
fireAndForget(() => this.kvDB.set(kvDBKey, saveData));
}, 100);
saveQueuedFiles() {
const saveData = JSON.stringify(this.replicationResultProcessor._queue.map((e) => e._id));
const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName();
localStorage.setItem(lsKey, saveData);
this._saveQueuedFiles();
}
async loadQueuedFiles() {
if (this.settings.suspendParseReplicationResult) return;
if (!this.settings.isConfigured) return;
const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName();
const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[];
const kvDBKey = "queued-files"
// const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[];
const ids = [...new Set(await this.kvDB.get<string[]>(kvDBKey) ?? [])];
const batchSize = 100;
const chunkedIds = arrayToChunkedArray(ids, batchSize);
for await (const idsBatch of chunkedIds) {
const ret = await this.localDatabase.allDocsRaw<EntryDoc>({ keys: idsBatch, include_docs: true, limit: 100 });
this.replicationResultProcessor.enqueueAll(ret.rows.map(doc => doc.doc!));
const docs = ret.rows.filter(e => e.doc).map(e => e.doc) as PouchDB.Core.ExistingDocument<EntryDoc>[];
const errors = ret.rows.filter(e => !e.doc && !e.value.deleted);
if (errors.length > 0) {
Logger("Some queued processes were not resurrected");
Logger(JSON.stringify(errors), LOG_LEVEL_VERBOSE);
}
this.replicationResultProcessor.enqueueAll(docs);
await this.replicationResultProcessor.waitForPipeline();
}

@@ -1658,34 +1826,43 @@ We can perform a command in this file.
const filename = this.getPathWithoutPrefix(doc);
this.isTargetFile(filename).then((ret) => ret ? this.addOnHiddenFileSync.procInternalFile(filename) : Logger(`Skipped (Not target:${filename})`, LOG_LEVEL_VERBOSE));
} else if (isValidPath(this.getPath(doc))) {
this.storageApplyingProcessor.enqueueWithKey(doc.path, doc);
this.storageApplyingProcessor.enqueue(doc);
} else {
Logger(`Skipped: ${doc._id.substring(0, 8)}`, LOG_LEVEL_VERBOSE);
}
return;
}, { suspended: true, batchSize: 1, concurrentLimit: 10, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.databaseQueueCount }).startPipeline();
}, { suspended: true, batchSize: 1, concurrentLimit: 10, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.databaseQueueCount }).replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter(e => e._id != newItem._id);
return [...q, newItem];
}).startPipeline();

storageApplyingCount = reactiveSource(0);
storageApplyingProcessor = new KeyedQueueProcessor(async (docs: LoadedEntry[]) => {
storageApplyingProcessor = new QueueProcessor(async (docs: LoadedEntry[]) => {
const entry = docs[0];
const path = this.getPath(entry);
Logger(`Processing ${path} (${entry._id.substring(0, 8)}: ${entry._rev?.substring(0, 5)}) :Started...`, LOG_LEVEL_VERBOSE);
const targetFile = this.vaultAccess.getAbstractFileByPath(this.getPathWithoutPrefix(entry));
if (targetFile instanceof TFolder) {
Logger(`${this.getPath(entry)} is already exist as the folder`);
} else {
await this.processEntryDoc(entry, targetFile instanceof TFile ? targetFile : undefined);
Logger(`Processing ${path} (${entry._id.substring(0, 8)} :${entry._rev?.substring(0, 5)}) : Done`);
}
await serialized(entry.path, async () => {
const path = this.getPath(entry);
Logger(`Processing ${path} (${entry._id.substring(0, 8)}: ${entry._rev?.substring(0, 5)}) :Started...`, LOG_LEVEL_VERBOSE);
const targetFile = this.vaultAccess.getAbstractFileByPath(this.getPathWithoutPrefix(entry));
if (targetFile instanceof TFolder) {
Logger(`${this.getPath(entry)} is already exist as the folder`);
} else {
await this.processEntryDoc(entry, targetFile instanceof TFile ? targetFile : undefined);
Logger(`Processing ${path} (${entry._id.substring(0, 8)} :${entry._rev?.substring(0, 5)}) : Done`);
}
});

return;
}, { suspended: true, batchSize: 1, concurrentLimit: 2, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.storageApplyingCount }).startPipeline()
}, { suspended: true, batchSize: 1, concurrentLimit: 6, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.storageApplyingCount }).replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter(e => e._id != newItem._id);
return [...q, newItem];
}).startPipeline()


replicationResultCount = reactiveSource(0);
replicationResultProcessor = new QueueProcessor(async (docs: PouchDB.Core.ExistingDocument<EntryDoc>[]) => {
if (this.settings.suspendParseReplicationResult) return;
const change = docs[0];
if (!change) return;
if (isChunk(change._id)) {
// SendSignal?
// this.parseIncomingChunk(change);
@@ -1722,16 +1899,19 @@ We can perform a command in this file.
this.databaseQueuedProcessor.enqueue(change);
}
return;
}, { batchSize: 1, suspended: true, concurrentLimit: 100, delay: 0, totalRemainingReactiveSource: this.replicationResultCount }).startPipeline().onUpdateProgress(() => {
}, { batchSize: 1, suspended: true, concurrentLimit: 100, delay: 0, totalRemainingReactiveSource: this.replicationResultCount }).replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter(e => e._id != newItem._id);
return [...q, newItem];
}).startPipeline().onUpdateProgress(() => {
this.saveQueuedFiles();
});
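The `replaceEnqueueProcessor` hook, applied identically to three processors above, deduplicates by `_id`: replication can deliver several revisions of the same document in quick succession, and only the latest enqueued copy needs processing. A minimal standalone sketch of the pattern; the option names mirror those used above, but the exact `QueueProcessor` API lives in `lib/src/processor` in the submodule:

```typescript
import { QueueProcessor } from "./lib/src/processor";

// The hook receives the pending queue plus the incoming item and returns the
// new queue, so stale copies can be dropped before they are ever worked.
const processor = new QueueProcessor(
    async (docs: { _id: string }[]) => { /* apply docs[0] */ },
    { suspended: true, batchSize: 1, concurrentLimit: 10, yieldThreshold: 1, delay: 0 }
).replaceEnqueueProcessor((queue, newItem) => {
    const q = queue.filter((e) => e._id != newItem._id); // drop the stale copy
    return [...q, newItem];                              // keep only the newest
});
```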
//---> Sync
parseReplicationResult(docs: Array<PouchDB.Core.ExistingDocument<EntryDoc>>) {
if (this.settings.suspendParseReplicationResult) {
if (this.settings.suspendParseReplicationResult && !this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.suspend()
}
this.replicationResultProcessor.enqueueAll(docs);
if (!this.settings.suspendParseReplicationResult) {
if (!this.settings.suspendParseReplicationResult && this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume()
}
}
@@ -1746,8 +1926,10 @@ We can perform a command in this file.
// disable all sync temporary.
if (this.suspended) return;
await Promise.all(this.addOns.map(e => e.onResume()));
if (this.settings.liveSync) {
this.replicator.openReplication(this.settings, true, false);
if (this.settings.remoteType == REMOTE_COUCHDB) {
if (this.settings.liveSync) {
this.replicator.openReplication(this.settings, true, false, false);
}
}

const q = activeDocument.querySelector(`.livesync-ribbon-showcustom`);
@@ -1761,8 +1943,33 @@ We can perform a command in this file.
lastMessage = "";

observeForLogs() {
const padSpaces = `\u{2007}`.repeat(10);
// const emptyMark = `\u{2003}`;
const rerenderTimer = new Map<string, [ReturnType<typeof setTimeout>, number]>;
const tick = reactiveSource(0);
function padLeftSp(num: number, mark: string) {
const numLen = `${num}`.length + 1;
const [timer, len] = rerenderTimer.get(mark) ?? [undefined, numLen];
if (num || timer) {
if (num) {
if (timer) clearTimeout(timer);
rerenderTimer.set(mark, [setTimeout(async () => {
rerenderTimer.delete(mark);
await delay(100);
tick.value = tick.value + 1;
}, 3000), Math.max(len, numLen)]);
}
return ` ${mark}${`${padSpaces}${num}`.slice(-(len))}`;
} else {
return "";
}
}
// const logStore
const queueCountLabel = reactive(() => {
// For invalidating
// @ts-ignore
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const _ = tick.value;
const dbCount = this.databaseQueueCount.value;
const replicationCount = this.replicationResultCount.value;
const storageApplyingCount = this.storageApplyingCount.value;
@@ -1770,13 +1977,13 @@ We can perform a command in this file.
const pluginScanCount = pluginScanningCount.value;
const hiddenFilesCount = hiddenFilesEventCount.value + hiddenFilesProcessingCount.value;
const conflictProcessCount = this.conflictProcessQueueCount.value;
const labelReplication = replicationCount ? `📥 ${replicationCount} ` : "";
const labelDBCount = dbCount ? `📄 ${dbCount} ` : "";
const labelStorageCount = storageApplyingCount ? `💾 ${storageApplyingCount}` : "";
const labelChunkCount = chunkCount ? `🧩${chunkCount} ` : "";
const labelPluginScanCount = pluginScanCount ? `🔌${pluginScanCount} ` : "";
const labelHiddenFilesCount = hiddenFilesCount ? `⚙️${hiddenFilesCount} ` : "";
const labelConflictProcessCount = conflictProcessCount ? `🔩${conflictProcessCount} ` : "";
const labelReplication = padLeftSp(replicationCount, `📥`);
const labelDBCount = padLeftSp(dbCount, `📄`);
const labelStorageCount = padLeftSp(storageApplyingCount, `💾`);
const labelChunkCount = padLeftSp(chunkCount, `🧩`);
const labelPluginScanCount = padLeftSp(pluginScanCount, `🔌`);
const labelHiddenFilesCount = padLeftSp(hiddenFilesCount, `⚙️`)
const labelConflictProcessCount = padLeftSp(conflictProcessCount, `🔩`);
return `${labelReplication}${labelDBCount}${labelStorageCount}${labelChunkCount}${labelPluginScanCount}${labelHiddenFilesCount}${labelConflictProcessCount}`;
})
const requestingStatLabel = reactive(() => {
|
||||
@@ -1795,6 +2002,11 @@ We can perform a command in this file.
|
||||
let pushLast = "";
|
||||
let pullLast = "";
|
||||
let w = "";
|
||||
const labels: Partial<Record<DatabaseConnectingStatus, string>> = {
|
||||
"CONNECTED": "⚡",
|
||||
"JOURNAL_SEND": "📦↑",
|
||||
"JOURNAL_RECEIVE": "📦↓",
|
||||
}
|
||||
switch (e.syncStatus) {
|
||||
case "CLOSED":
|
||||
case "COMPLETED":
|
||||
@@ -1808,7 +2020,9 @@ We can perform a command in this file.
|
||||
w = "💤";
|
||||
break;
|
||||
case "CONNECTED":
|
||||
w = "⚡";
|
||||
case "JOURNAL_SEND":
|
||||
case "JOURNAL_RECEIVE":
|
||||
w = labels[e.syncStatus] || "⚡";
|
||||
pushLast = ((lastSyncPushSeq == 0) ? "" : (lastSyncPushSeq >= maxPushSeq ? " (LIVE)" : ` (${maxPushSeq - lastSyncPushSeq})`));
|
||||
pullLast = ((lastSyncPullSeq == 0) ? "" : (lastSyncPullSeq >= maxPullSeq ? " (LIVE)" : ` (${maxPullSeq - lastSyncPullSeq})`));
|
||||
break;
|
||||
@@ -1821,11 +2035,15 @@ We can perform a command in this file.
|
||||
return { w, sent, pushLast, arrived, pullLast };
|
||||
})
|
||||
const waitingLabel = reactive(() => {
|
||||
// For invalidating
|
||||
// @ts-ignore
|
||||
// eslint-disable-next-line @typescript-eslint/no-unused-vars
|
||||
const _ = tick.value;
|
||||
const e = this.pendingFileEventCount.value;
|
||||
const proc = this.processingFileEventCount.value;
|
||||
const pend = e - proc;
|
||||
const labelProc = proc != 0 ? `⏳${proc} ` : "";
|
||||
const labelPend = pend != 0 ? ` 🛫${pend}` : "";
|
||||
const labelProc = padLeftSp(proc, `⏳`);
|
||||
const labelPend = padLeftSp(pend, `🛫`);
|
||||
return `${labelProc}${labelPend}`;
|
||||
})
|
||||
const statusLineLabel = reactive(() => {
|
||||
@@ -1834,7 +2052,7 @@ We can perform a command in this file.
|
||||
const waiting = waitingLabel.value;
|
||||
const networkActivity = requestingStatLabel.value;
|
||||
return {
|
||||
message: `${networkActivity}Sync: ${w} ↑${sent}${pushLast} ↓${arrived}${pullLast}${waiting} ${queued}`,
|
||||
message: `${networkActivity}Sync: ${w} ↑ ${sent}${pushLast} ↓ ${arrived}${pullLast}${waiting}${queued}`,
|
||||
};
|
||||
})
|
||||
const statusBarLabels = reactive(() => {
|
||||
@@ -1845,31 +2063,20 @@ We can perform a command in this file.
 message, status
 }
 })
-let last = 0;
-const applyToDisplay = () => {
+const applyToDisplay = throttle(() => {
 const v = statusBarLabels.value;
-const now = Date.now();
-if (now - last < 10) {
-scheduleTask("applyToDisplay", 20, () => applyToDisplay());
-return;
-}
 this.applyStatusBarText(v.message, v.status);
-last = now;
-}
+}, 20);
 statusBarLabels.onChanged(applyToDisplay);
 }
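The hand-rolled `Date.now()` gate above is replaced by a generic `throttle` with the same call shape. A minimal trailing-edge sketch of such a helper (the library's real implementation may differ):

```typescript
// Minimal trailing-edge throttle, assumed behaviour only: fn fires at most
// once per `interval` ms, and a burst of calls collapses into one trailing
// invocation that sees the latest arguments.
function throttle<A extends unknown[]>(fn: (...args: A) => void, interval: number): (...args: A) => void {
    let timer: ReturnType<typeof setTimeout> | undefined;
    let lastArgs: A | undefined;
    return (...args: A) => {
        lastArgs = args;
        if (timer !== undefined) return; // an invocation is already scheduled
        timer = setTimeout(() => {
            timer = undefined;
            fn(...(lastArgs as A));
        }, interval);
    };
}
```

The same helper shape also fits the `updateLog = throttle(..., 25)` call that appears in a later hunk.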
 applyStatusBarText(message: string, log: string) {
-const newMsg = message;
-const newLog = log;
+// scheduleTask("update-display", 50, () => {
+const newMsg = message.replace(/\n/g, "\\A ");
+const newLog = log.replace(/\n/g, "\\A ");

 this.statusBar?.setText(newMsg.split("\n")[0]);
 // const selector = `.CodeMirror-wrap,` +
 // `.markdown-preview-view.cm-s-obsidian,` +
 // `.markdown-source-view.cm-s-obsidian,` +
 // `.canvas-wrapper,` +
 // `.empty-state`
 // ;
 if (this.settings.showStatusOnEditor) {
 const root = activeDocument.documentElement;
 root.style.setProperty("--sls-log-text", "'" + (newMsg + "\\A " + newLog) + "'");
@@ -1877,7 +2084,6 @@ We can perform a command in this file.
 // const root = activeDocument.documentElement;
 // root.style.setProperty("--log-text", "'" + (newMsg + "\\A " + newLog) + "'");
 }
 // }, true);

 scheduleTask("log-hide", 3000, () => { this.statusLog.value = "" });
@@ -1896,7 +2102,7 @@ We can perform a command in this file.
 await this.applyBatchChange();
 await Promise.all(this.addOns.map(e => e.beforeReplicate(showMessage)));
 await this.loadQueuedFiles();
-const ret = await this.replicator.openReplication(this.settings, false, showMessage);
+const ret = await this.replicator.openReplication(this.settings, false, showMessage, false);
 if (!ret) {
 if (this.replicator.remoteLockedAndDeviceNotAccepted) {
 if (this.replicator.remoteCleaned && this.settings.useIndexedDBAdapter) {
@@ -1916,7 +2122,9 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
 await performRebuildDB(this, "localOnly");
 }
 if (ret == CHOICE_CLEAN) {
-const remoteDB = await this.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.getIsMobile(), true);
+const replicator = this.getReplicator();
+if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
+const remoteDB = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.getIsMobile(), true);
 if (typeof remoteDB == "string") {
 Logger(remoteDB, LOG_LEVEL_NOTICE);
 return false;
@@ -2053,9 +2261,15 @@ Or if you are sure know what had been happened, we can unlock the database from

 const syncFiles = filesStorage.filter((e) => onlyInStorageNames.indexOf(e.path) == -1);
 Logger("Updating database by new files");
+const processStatus = {} as Record<string, string>;
+const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
+const updateLog = throttle((key: string, msg: string) => {
+    processStatus[key] = msg;
+    const log = Object.values(processStatus).join("\n");
+    Logger(log, logLevel, "syncAll");
+}, 25);

 const initProcess = [];
-const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
 const runAll = async<T>(procedureName: string, objects: T[], callback: (arg: T) => Promise<void>) => {
 if (objects.length == 0) {
 Logger(`${procedureName}: Nothing to do`);
@@ -2077,12 +2291,14 @@ Or if you are sure know what had been happened, we can unlock the database from
 failed++;
 }
 if ((success + failed) % step == 0) {
-Logger(`${procedureName}: DONE:${success}, FAILED:${failed}, LAST:${processor._queue.length}`, logLevel, `log-${procedureName}`);
+const msg = `${procedureName}: DONE:${success}, FAILED:${failed}, LAST:${processor._queue.length}`;
+updateLog(procedureName, msg);
 }
 return;
 }, { batchSize: 1, concurrentLimit: 10, delay: 0, suspended: true }, objects)
 await processor.waitForPipeline();
-Logger(`${procedureName} All done: DONE:${success}, FAILED:${failed}`, logLevel, `log-${procedureName}`);
+const msg = `${procedureName} All done: DONE:${success}, FAILED:${failed}`;
+updateLog(procedureName, msg)
 }
 initProcess.push(runAll("UPDATE DATABASE", onlyInStorage, async (e) => {
 if (!this.isFileSizeExceeded(e.stat.size)) {
@@ -2116,7 +2332,6 @@ Or if you are sure know what had been happened, we can unlock the database from
 const id = await this.path2id(getPathFromTFile(file));
 const pair: FileDocPair = { file, id };
 return [pair];
-// processSyncFile.enqueue(pair);
 }
 , { batchSize: 1, concurrentLimit: 10, delay: 0, suspended: true }, syncFiles);
 processPrepareSyncFile
@@ -2138,10 +2353,18 @@ Or if you are sure know what had been happened, we can unlock the database from
 }, { batchSize: 1, concurrentLimit: 5, delay: 10, suspended: false }
 ))

-processPrepareSyncFile.startPipeline();
-initProcess.push(async () => {
-    await processPrepareSyncFile.waitForPipeline();
-})
+const allSyncFiles = syncFiles.length;
+let lastRemain = allSyncFiles;
+const step = 25;
+const remainLog = (remain: number) => {
+    if (lastRemain - remain > step) {
+        const msg = ` CHECK AND SYNC: ${allSyncFiles - remain} / ${allSyncFiles}`;
+        updateLog("sync", msg);
+        lastRemain = remain;
+    }
+}
+processPrepareSyncFile.startPipeline().onUpdateProgress(() => remainLog(processPrepareSyncFile.totalRemaining + processPrepareSyncFile.nowProcessing))
+initProcess.push(processPrepareSyncFile.waitForPipeline());
 await Promise.all(initProcess);

 // this.setStatusBarText(`NOW TRACKING!`);
@@ -2501,38 +2724,39 @@ Or if you are sure know what had been happened, we can unlock the database from

 conflictProcessQueueCount = reactiveSource(0);
 conflictResolveQueue =
-    new KeyedQueueProcessor(async (entries: { filename: FilePathWithPrefix }[]) => {
-        const entry = entries[0];
-        const filename = entry.filename;
-        const conflictCheckResult = await this.checkConflictAndPerformAutoMerge(filename);
-        if (conflictCheckResult === MISSING_OR_ERROR || conflictCheckResult === NOT_CONFLICTED || conflictCheckResult === CANCELLED) {
-            // nothing to do.
-            return;
-        }
-        if (conflictCheckResult === AUTO_MERGED) {
-            // auto-resolved, but we need to check it again
-            if (this.settings.syncAfterMerge && !this.suspended) {
-                // Wait for the running replication; if replication is not running, run it once.
-                await shareRunningResult(`replication`, () => this.replicate());
-            }
-            Logger("conflict:Automatically merged, but we have to check it again");
-            this.conflictCheckQueue.enqueue(filename);
-            return;
-        }
-        if (this.settings.showMergeDialogOnlyOnActive) {
-            const af = this.getActiveFile();
-            if (af && af.path != filename) {
-                Logger(`${filename} is conflicted. Merging process has been postponed until the file is opened.`, LOG_LEVEL_NOTICE);
-                return;
-            }
-        }
-        Logger("conflict:Manual merge required!");
-        await this.resolveConflictByUI(filename, conflictCheckResult);
+    new QueueProcessor(async (filenames: FilePathWithPrefix[]) => {
+        const filename = filenames[0];
+        await serialized(`conflict-resolve:${filename}`, async () => {
+            const conflictCheckResult = await this.checkConflictAndPerformAutoMerge(filename);
+            if (conflictCheckResult === MISSING_OR_ERROR || conflictCheckResult === NOT_CONFLICTED || conflictCheckResult === CANCELLED) {
+                // nothing to do.
+                return;
+            }
+            if (conflictCheckResult === AUTO_MERGED) {
+                // auto-resolved, but we need to check it again
+                if (this.settings.syncAfterMerge && !this.suspended) {
+                    // Wait for the running replication; if replication is not running, run it once.
+                    await shareRunningResult(`replication`, () => this.replicate());
+                }
+                Logger("conflict:Automatically merged, but we have to check it again");
+                this.conflictCheckQueue.enqueue(filename);
+                return;
+            }
+            if (this.settings.showMergeDialogOnlyOnActive) {
+                const af = this.getActiveFile();
+                if (af && af.path != filename) {
+                    Logger(`${filename} is conflicted. Merging process has been postponed until the file is opened.`, LOG_LEVEL_NOTICE);
+                    return;
+                }
+            }
+            Logger("conflict:Manual merge required!");
+            await this.resolveConflictByUI(filename, conflictCheckResult);
+        });
     }, { suspended: false, batchSize: 1, concurrentLimit: 1, delay: 10, keepResultUntilDownstreamConnected: false }).replaceEnqueueProcessor(
         (queue, newEntity) => {
-            const filename = newEntity.entity.filename;
+            const filename = newEntity;
             sendValue("cancel-resolve-conflict:" + filename, true);
-            const newQueue = [...queue].filter(e => e.key != newEntity.key);
+            const newQueue = [...queue].filter(e => e != newEntity);
             return [...newQueue, newEntity];
         });
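Two ideas carry this rewrite: `serialized` guarantees at most one resolution runs per key at a time, and the replaced enqueue processor dedupes re-queued filenames. The dedupe rule in isolation, as a sketch (not the `QueueProcessor` internals):

```typescript
// Sketch of the enqueue rule above: re-queueing a filename cancels the stale
// pending entry and moves the file to the tail, so each file is resolved at
// most once at a time and always against its freshest state.
function enqueueDeduped(queue: string[], filename: string): string[] {
    return [...queue.filter((queued) => queued !== filename), filename];
}

// enqueueDeduped(["a.md", "b.md"], "a.md") -> ["b.md", "a.md"]
```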
@@ -2544,10 +2768,9 @@ Or if you are sure know what had been happened, we can unlock the database from
 const file = this.vaultAccess.getAbstractFileByPath(filename);
 // if (!file) return;
 // if (!(file instanceof TFile)) return;
-if ((file instanceof TFolder)) return;
+if ((file instanceof TFolder)) return [];
 // Check again?

-return [{ key: filename, entity: { filename } }];
-// this.conflictResolveQueue.enqueueWithKey(filename, { filename, file });
+return [filename];
 }, {
 suspended: false, batchSize: 1, concurrentLimit: 5, delay: 10, keepResultUntilDownstreamConnected: true, pipeTo: this.conflictResolveQueue, totalRemainingReactiveSource: this.conflictProcessQueueCount
@@ -2859,7 +3082,9 @@ Or if you are sure know what had been happened, we can unlock the database from
 }
 async dryRunGC() {
 await skipIfDuplicated("cleanup", async () => {
-const remoteDBConn = await this.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.isMobile)
+const replicator = this.getReplicator();
+if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
+const remoteDBConn = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.isMobile)
 if (typeof (remoteDBConn) == "string") {
 Logger(remoteDBConn);
 return;
@@ -2873,8 +3098,10 @@ Or if you are sure know what had been happened, we can unlock the database from
 async dbGC() {
 // Lock the remote completely once.
 await skipIfDuplicated("cleanup", async () => {
+const replicator = this.getReplicator();
+if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
 this.getReplicator().markRemoteLocked(this.settings, true, true);
-const remoteDBConn = await this.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.isMobile)
+const remoteDBConn = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.isMobile)
 if (typeof (remoteDBConn) == "string") {
 Logger(remoteDBConn);
 return;
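Both GC paths now narrow the generic replicator before any CouchDB-only call. The recurring pattern, extracted as a sketch (the base type name `LiveSyncAbstractReplicator` and the import path are assumptions):

```typescript
import { LiveSyncCouchDBReplicator } from "./lib/src/LiveSyncReplicator"; // path assumed
import type { LiveSyncAbstractReplicator } from "./lib/src/LiveSyncAbstractReplicator"; // name and path assumed

// Assumed pattern only: run a CouchDB-specific operation if and only if the
// active replicator is actually the CouchDB implementation; otherwise bail
// out quietly, since journal/object-storage remotes cannot be GC'd this way.
async function withCouchDBReplicator<T>(
    replicator: LiveSyncAbstractReplicator,
    proc: (couchDB: LiveSyncCouchDBReplicator) => Promise<T>
): Promise<T | undefined> {
    if (!(replicator instanceof LiveSyncCouchDBReplicator)) return undefined;
    return await proc(replicator);
}
```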
@@ -103,6 +103,9 @@
 .canvas-wrapper::before,
 .empty-state::before {
 content: var(--sls-log-text, "");
+font-variant-numeric: tabular-nums;
+font-variant-emoji: emoji;
+tab-size: 4;
 text-align: right;
 white-space: pre-wrap;
 position: absolute;
updates.md
@@ -1,52 +1,22 @@
-### 0.22.0
-A few years have passed since Self-hosted LiveSync was born, and our codebase has become very complicated. It is still bearable for now, but it will become a tremendous burden.
-Therefore, at v0.22.0, I completely refined the task-scheduling logic for future maintainability.
+### 0.23.0
+Incredible new features!

-Of course, I think this may cause some pain in some cases. However, I would love to ask for your cooperation and contribution.
+Now we can use object storage (MinIO, S3, R2, or anything you like) for synchronising! Moreover, we can still use all the features as if we were using CouchDB.
+Note: As this is a pretty experimental feature, we have some limitations (a rough sketch of the transfer model follows this list).
+- It is built on an append-only architecture. Used storage will not shrink unless we perform a rebuild.
+- It is a bit fragile. However, our version x.yy.0 is always so.
+- During the first synchronisation, the entire history to date is transferred. For this reason, it is preferable to do this on a Wi-Fi network.
+- Do not worry: from the second synchronisation onwards, we always transfer only differences.
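A rough sketch of how such an append-only transfer can look against any S3-compatible endpoint, using the `@aws-sdk/client-s3` and `fflate` dependencies. Everything here -- the bucket name, key scheme, endpoint, and helpers -- is illustrative, not the plugin's actual wire format:

```typescript
import { S3Client, PutObjectCommand, ListObjectsV2Command } from "@aws-sdk/client-s3";
import { gzipSync } from "fflate";

// Illustrative client against a MinIO / R2 / S3-compatible endpoint.
const client = new S3Client({
    region: "auto",
    endpoint: "https://minio.example.com", // hypothetical endpoint
    credentials: { accessKeyId: "KEY", secretAccessKey: "SECRET" },
});

// Append-only send: each pack of changed documents is written once under a
// monotonic key and never rewritten, which is why used storage only grows
// until a rebuild.
async function sendJournalPack(pack: Uint8Array, seq: number): Promise<void> {
    await client.send(new PutObjectCommand({
        Bucket: "livesync-journal", // hypothetical bucket
        Key: `journal/${String(seq).padStart(12, "0")}.gz`,
        Body: gzipSync(pack),
    }));
}

// Receive side: list only the keys after the local checkpoint, which is why
// only the first synchronisation transfers the entire history.
async function listPacksAfter(checkpointKey?: string): Promise<string[]> {
    const res = await client.send(new ListObjectsV2Command({
        Bucket: "livesync-journal",
        Prefix: "journal/",
        StartAfter: checkpointKey,
    }));
    return (res.Contents ?? []).flatMap((o) => (o.Key ? [o.Key] : []));
}
```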
-Sorry for being absent for so long. And thank you for your patience!
+I hope this feature empowers users to maintain independence and self-host their data, offering an alternative for those who prefer to manage their own storage solutions and avoid being caught out by a sudden change in a provider's business model.

-Note: we have also achieved a significant performance improvement.
-Note at 0.22.2: **To rescue mobile devices, the maximum file size is now set to 50 by default**. Please configure the limit as you need. If you do not want to limit file sizes, please set it to zero manually.
+Of course, I use self-hosted MinIO for testing and recommend it, for the same reasons as CouchDB -- open, controllable, auditable, and indeed already audited by numerous eyes.

+Let me write one more acknowledgement.

+I have a lot of respect for remotely-save, even though it is sometimes treated as if it were a competitor. I think it has a great architecture that embodies a different approach from mine of recreating history. This time, with all due respect, I have used some of its code as a reference.
+Hooray for open source, generous licences, and the sharing of knowledge by experts.
-#### Version history
-- 0.22.16:
-    - Fixed:
-        - Fixed the issue that binary files were sometimes corrupted.
-        - Fixed that customisation sync data could be corrupted.
-    - Improved:
-        - The remote database now uses less memory.
-        - This release requires a brief wait on the first synchronisation, to track the latest changeset again.
-        - Description added for the `Device name`.
-    - Refactored:
-        - Many type errors have been resolved.
-        - An obsolete file has been deleted.
-- 0.22.15:
-    - Improved:
-        - Faster start-up by removing excessive logs that merely indicate normality.
-        - Streamlined scanning of customisation sync has removed extra phases.
-- 0.22.14:
-    - New feature:
-        - We can disable the status bar in the settings dialogue.
-    - Improved:
-        - Some files are now handled as the correct data type.
-        - Customisation sync now uses the digest of each file for better performance.
-        - The status display in the editor now performs better.
-    - Refactored:
-        - Common functions have been prepared and the codebase has been organised.
-        - Stricter type checking following TypeScript updates.
-        - Removed an old iOS workaround for simplicity and performance.
-- 0.22.13:
-    - Improved:
-        - Using HTTP for the remote database URI now warns with an error (on mobile) or a notice (on desktop).
-    - Refactored:
-        - Dependencies have been polished.
-- 0.22.12:
-    - Changed:
-        - The default settings have been changed.
-    - Improved:
-        - Default and preferred settings are applied on completion of the wizard.
-    - Fixed:
-        - Initialisation `Fetch` is now performed smoothly, with fewer conflicts.
-        - No longer gets stuck while handling transferred or initialised documents.
-... To continue on to `updates_old.md`.
+- New feature:
+    - Now we can use Object Storage.
@@ -10,6 +10,75 @@ Note: we have also achieved a significant performance improvement.
 Note at 0.22.2: **To rescue mobile devices, the maximum file size is now set to 50 by default**. Please configure the limit as you need. If you do not want to limit file sizes, please set it to zero manually.

 #### Version history
+- 0.22.19
+    - Fixed:
+        - No longer corrupts data due to false BASE64 detections.
+    - Improved:
+        - Automatic data compression is a bit more efficient.
+- 0.22.18
+    - New feature (Very Experimental):
+        - We can now use `Automatic data compression` to reduce the amount of traffic and remote-database usage (see the sketch after this entry).
+        - Please make sure all devices are updated to v0.22.18 before trying this feature.
+        - If you are using any other utilities connected to your vault, please make sure they are compatible.
+        - Note: Setting `File Compression` on the remote database helps shrink the size of the remote database. Please refer to the [Doc](https://docs.couchdb.org/en/stable/config/couchdb.html#couchdb/file_compression).
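As a sketch of what such transparent compression can look like, using the `fflate` dependency (the helper names are hypothetical, and whether the plugin compresses at exactly this granularity is an assumption):

```typescript
import { gzipSync, gunzipSync, strToU8, strFromU8 } from "fflate";

// Hypothetical helpers: compress a chunk's payload before it is sent to the
// remote database, and expand it transparently on the way back. Every device
// must understand this encoding, hence the "update all devices first" note.
function compressChunk(payload: string): Uint8Array {
    return gzipSync(strToU8(payload));
}

function expandChunk(stored: Uint8Array): string {
    return strFromU8(gunzipSync(stored));
}
```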
+- 0.22.17:
+    - Fixed:
+        - Error handling on booting now works correctly.
+        - Replication now starts automatically in LiveSync mode.
+        - Batch database update is now disabled in LiveSync mode.
+        - No longer reconnects automatically while unfocused.
+        - Status saves are thinned out.
+        - Self-hosted LiveSync now waits until all files between the local database and storage have been reliably checked.
+    - Improved:
+        - The job scheduler is now more robust and stable.
+        - The status indicator no longer flickers or sits at zero for a while.
+        - No more meaninglessly frequent updates of the status indicators.
+        - We can now configure regular-expression filters in a handy UI. Thank you so much, @eth-p!
+        - `Fetch` and `Rebuild everything` are now performed more safely.
+    - Minor things:
+        - Some utility functions have been added.
+        - Customisation sync now emits fewer incorrect messages.
+        - Continued digging through the weeds to eradicate type errors.
 - 0.22.16:
     - Fixed:
         - Fixed the issue that binary files were sometimes corrupted.
         - Fixed that customisation sync data could be corrupted.
     - Improved:
         - The remote database now uses less memory.
         - This release requires a brief wait on the first synchronisation, to track the latest changeset again.
         - Description added for the `Device name`.
     - Refactored:
         - Many type errors have been resolved.
         - An obsolete file has been deleted.
 - 0.22.15:
     - Improved:
         - Faster start-up by removing excessive logs that merely indicate normality.
         - Streamlined scanning of customisation sync has removed extra phases.
 ... To continue on to `updates_old.md`.
 - 0.22.14:
     - New feature:
         - We can disable the status bar in the settings dialogue.
     - Improved:
         - Some files are now handled as the correct data type.
         - Customisation sync now uses the digest of each file for better performance.
         - The status display in the editor now performs better.
     - Refactored:
         - Common functions have been prepared and the codebase has been organised.
         - Stricter type checking following TypeScript updates.
         - Removed an old iOS workaround for simplicity and performance.
 - 0.22.13:
     - Improved:
         - Using HTTP for the remote database URI now warns with an error (on mobile) or a notice (on desktop).
     - Refactored:
         - Dependencies have been polished.
 - 0.22.12:
     - Changed:
         - The default settings have been changed.
     - Improved:
         - Default and preferred settings are applied on completion of the wizard.
     - Fixed:
         - Initialisation `Fetch` is now performed smoothly, with fewer conflicts.
         - No longer gets stuck while handling transferred or initialised documents.
 - 0.22.11:
     - Fixed:
         - `Verify and repair all files` is no longer broken.