Compare commits

...

17 Commits

Author SHA1 Message Date
vorotamoroz
133f5a7109 bump 2024-04-30 11:49:16 +01:00
vorotamoroz
daa3feebf1 Fixed:
- Journal Sync no longer hangs during large replications, especially the initial one.
- Changes replicated while rebuilding are no longer postponed (the previous behaviour).
Improved:
- Journal Sync now downloads and parses, or packs and uploads, more efficiently.
- The new chunk format uses less server storage and packs/unpacks faster.
2024-04-30 11:48:27 +01:00
vorotamoroz
7b5f7d0fbf bump 2024-04-30 01:40:01 +09:00
vorotamoroz
29532193cb - Fixed:
  - Journal synchronisation now tracks untransferred entries separately for sending and receiving.
  - Journal sync now handles retrying.
  - Journal synchronisation no longer treats the synchronisation of chunks as revision updates (they are simply ignored).
  - Journal sync now splits the journal pack to prevent mobile devices from rebooting.
  - Maintenance menus which had been on the command palette are now back in the maintenance pane of the settings dialogue.
- Improved:
  - All changes which have been replicated while rebuilding are now postponed.
2024-04-30 01:39:09 +09:00
vorotamoroz
5b4309c09d For the future. Because of a good opportunity. 2024-04-29 02:01:27 +09:00
vorotamoroz
16ef582453 Update: wrote about the new Remote Type. 2024-04-28 23:37:26 +09:00
vorotamoroz
3e22f70c7a Update README.md 2024-04-28 17:49:50 +09:00
vorotamoroz
0a8dbe097e bump 2024-04-27 03:35:32 +09:00
vorotamoroz
2c0fcf74d0 New feature: Object storage support 2024-04-27 03:33:59 +09:00
vorotamoroz
a1ab1efd5d Update README.md 2024-04-20 21:45:21 +09:00
vorotamoroz
c8fcf2d0d5 Bump 2024-04-19 12:06:09 +01:00
vorotamoroz
c384e2f7fb Fixed:
- Data is no longer corrupted by false Base64 detections.
2024-04-19 12:04:14 +01:00
vorotamoroz
99c1c7dc1a bump 2024-04-18 12:37:49 +01:00
vorotamoroz
84adec4b1a New feature: Automatic data compression to reduce traffic and remote database usage. 2024-04-18 12:30:29 +01:00
vorotamoroz
f0b202bd91 bump 2024-04-12 01:32:03 +09:00
vorotamoroz
d54b7e2d93 - Fixed:
  - Error handling on booting now works correctly.
  - Replication now starts automatically in LiveSync mode.
  - Batch database update is now disabled in LiveSync mode.
  - No more automatic reconnection while the window is unfocused.
  - Status saves are thinned out.
  - Self-hosted LiveSync now waits until every file has been checked between the local database and storage.
- Improved:
  - The job scheduler is now more robust and stable.
  - The status indicator no longer flickers or sticks at zero for a while.
  - No more meaningless, frequent updates of the status indicators.
  - Regular expression filters can now be configured in a handy UI. Thank you so much, @eth-p!
  - `Fetch` and `Rebuild everything` are now performed more safely.
- Minor things
  - Some utility functions have been added.
  - Customisation sync now produces fewer incorrect messages.
  - Weeding continues towards eradicating type errors.
2024-04-12 01:30:35 +09:00
vorotamoroz
6952ef37f5 Update quick_setup.md 2024-04-09 13:10:31 +09:00
20 changed files with 4220 additions and 711 deletions

View File

@@ -2,7 +2,7 @@
# Self-hosted LiveSync
[Japanese docs](./README_ja.md) - [Chinese docs](./README_cn.md).
-Self-hosted LiveSync is a community-implemented synchronization plugin, available on every obsidian-compatible platform and using CouchDB as the server.
+Self-hosted LiveSync is a community-implemented synchronization plugin, available on every obsidian-compatible platform and using CouchDB or Object Storage (e.g., MinIO, S3, R2, etc.) as the server.
![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
@@ -45,7 +45,7 @@ This plug-in might be useful for researchers, engineers, and developers with a n
2. Configure plug-in in [Quick Setup](docs/quick_setup.md)
> [!TIP]
-> We are still able to use IBM Cloudant. However, it is not recommended for several reasons nowadays. Here is [Setup IBM Cloudant](docs/setup_cloudant.md)
+> Now, fly.io has become not free. Fortunately, even though there are some issues, we are still able to use IBM Cloudant. Here is [Setup IBM Cloudant](docs/setup_cloudant.md). It will be updated soon!
## Information in StatusBar

View File

@@ -0,0 +1,50 @@
## The design document of the journal sync
Original title: Synchronise without CouchDB
### Goal
- Synchronise vaults without CouchDB
### Motivation
- Serving CouchDB is not particularly easy.
- A full-spec DBaaS (paid IBM Cloudant) is a bit expensive, and alternatives are lacking.
- Securing alternatives, rather than depending on just one protocol.
### Prerequisite
- We should have multiple implementations of the server software.
- We should also be able to use SaaS, with a choice of options.
- They should come at a reasonable cost, ideally free of charge for trials.
- We should be able to serve an instance of the server software ourselves, as OSS — with transparency, the availability of audits, and evidence that audits actually took place.
### Methods and implementations
Ordinarily, the local PouchDB and the remote CouchDB are synchronised by sending each missing document through several conversations in their replication protocol. However, to achieve this plan, we cannot rely on CouchDB and its protocols. This limitation is harsh, but overcoming it means gaining new possibilities. After some trials, it was concluded that synchronisation could be completed even if the available actions were limited to uploading, downloading, and retrieving a listing. This means we can use any old-fashioned WebDAV server, or sophisticated object storage such as self-hosted MinIO, S3, R2, or anything we like. This is realised by each client sharing and complementing the differences in the journal. The focus is therefore on how to identify those differences and send them without dynamic communication.
All clients manage their data in PouchDB. As is probably well known, PouchDB keeps its own journal.
First, every client records the point in the journal up to which it last sent. When sending, the client packs everything from that previous point to the latest entry and updates its record. The pack is uploaded to the server under a name that starts with the timestamp of its creation. This is the send operation.
Conversely, when receiving, the packs on the server that have not yet been received are fetched in order; this is easy because their names are in date order. When the process completes successfully, the names of the received files are recorded, and the journals from each pack are reflected in the client's own database. Conflict resolution is left to PouchDB, so the client only needs to apply the differences. And here is the key: the client records the ID and revision of every document that was in the journal and applied.
This key matters when creating a pack: the client omits any document recorded as received and applied, because such a document has already been sent by another client and therefore already exists on the server. This ensures that unnecessary transmissions do not take place.
Synchronisation therefore always starts with receiving. This is a small trick to avoid including unnecessary documents in the pack.
These behaviours allow clients to send and receive only the parts of the journal that are missing from the server, without having to communicate with each other, while still keeping a single, consistent journal on the server.
The source code that actually implements this has already been committed to the repository; a simplified sketch follows.
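The flow above — record the last-sent point, pack the newer journal entries, upload the pack under a timestamped name, and always receive unseen packs before sending — can be condensed into a minimal sketch. This is an illustration only, assuming a hypothetical `RemoteStore` interface and callbacks; it is not the plugin's actual `JournalSyncMinio` implementation.

```typescript
// Illustrative sketch only: RemoteStore, JournalEntry and the checkpoint fields
// are hypothetical names, not the plugin's actual API.
type JournalEntry = { id: string; rev: string; doc: unknown };

interface RemoteStore {
    upload(name: string, body: string): Promise<void>;
    download(name: string): Promise<string>;
    list(): Promise<string[]>; // object names, sortable because they begin with a timestamp
}

class JournalSyncSketch {
    private lastSentSeq = 0;                    // to what point in the journal we sent last time
    private receivedPacks = new Set<string>();  // names of packs already fetched and applied
    private appliedDocs = new Set<string>();    // "id@rev" of documents received and applied

    constructor(
        private remote: RemoteStore,
        private readLocalJournal: (fromSeq: number) => Promise<{ seq: number; entries: JournalEntry[] }>,
        private applyToLocal: (entries: JournalEntry[]) => Promise<void>,
    ) { }

    // Send: pack everything after the recorded point, omitting documents that we
    // only hold because another client already uploaded them.
    async send(): Promise<void> {
        const { seq, entries } = await this.readLocalJournal(this.lastSentSeq);
        const toSend = entries.filter((e) => !this.appliedDocs.has(`${e.id}@${e.rev}`));
        if (toSend.length > 0) {
            await this.remote.upload(`${Date.now()}-pack.json`, JSON.stringify(toSend));
        }
        this.lastSentSeq = seq; // update our record of the last-sent point
    }

    // Receive: fetch packs we have not seen yet, in name (= date) order, and apply them.
    async receive(): Promise<void> {
        for (const name of (await this.remote.list()).sort()) {
            if (this.receivedPacks.has(name)) continue;
            const entries = JSON.parse(await this.remote.download(name)) as JournalEntry[];
            await this.applyToLocal(entries); // conflict resolution is left to PouchDB
            entries.forEach((e) => this.appliedDocs.add(`${e.id}@${e.rev}`));
            this.receivedPacks.add(name);
        }
    }

    // Synchronisation always starts with receiving.
    async sync(): Promise<void> {
        await this.receive();
        await this.send();
    }
}
```

Because `receive()` always runs first, documents that arrived from another client are registered in `appliedDocs` before the next pack is built, which is exactly what keeps already-shared documents out of the upload.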
### Test strategy
This implementation replaces the synchronisation performed by CouchDB. Testing was therefore done simply by applying the same changes to the same vault and comparing the result of replication via CouchDB with the result produced by this implementation.
### Documentation strategy
- Documentation should be provided in the Quick setup guide, at least.
- As several server implementations can be selected, specific configuration values are not described.
- A MinIO set-up might be nice to have. However, it is not considered essential.
- It would be a good opportunity to also publish these design documents.
### Consideration and Conclusion
This design offers a novel approach to journal synchronisation without relying on CouchDB. It leverages PouchDB's journaling capabilities and simple server-side storage for efficient data exchange. Hence, the new design can be said to have broadened the outlook.

View File

@@ -16,7 +16,8 @@ There are three methods to set up Self-hosted LiveSync.
### 1. Using setup URIs
-> [!TIP] What is the setup URI? Why is it required?
+> [!TIP]
+> What is the setup URI? Why is it required?
> The setup URI is the encrypted representation of Self-hosted LiveSync configuration as a URI. This starts `obsidian://setuplivesync?settings=`. This is encrypted with a passphrase, so that it can be shared relatively securely between devices. It is a bit long, but it is one line. This allows a series of settings to be set at once without any inconsistencies.
>
> If you have configured the remote database by [Automated setup on Fly.io](./setup_flyio.md#a-very-automated-setup) or [set up your server with the tool](./setup_own_server.md#1-generate-the-setup-uri-on-a-desktop-device-or-server), **you should have one of them**
@@ -44,20 +45,38 @@ If you do not have any setup URI, Press the `start` button. The setting dialogue
![](../images/quick_setup_2.png)
-#### Remote database configuration
-1. Enter the information for the database we have set up.
+#### Select the remote type
+1. Select the Remote Type from dropdown list.
+We now have a choice between CouchDB (and its compatibles) and object storage (MinIO, S3, R2). CouchDB is the first choice and is also recommended. And supporting Object Storage is an experimental feature.
+#### Remote configuration
+##### CouchDB
+Enter the information for the database we have set up.
![](../images/quick_setup_3.png)
+##### Object Storage
+1. Enter the information for the S3 API and bucket.
+![](../images/quick_setup_3b.png)
+Note 1: if you use S3, you can leave the Endpoint URL empty.
+Note 2: if your Object Storage cannot configure the CORS setting fully, you may able to connect to the server by enabling the `Use Custom HTTP Handler` toggle.
+2. Press `Test` of `Test Connection` once and ensure you can connect to the Object Storage.
-#### Test database connection and Check database configuration
+#### Only CouchDB: Test database connection and Check database configuration
We can check the connectivity to the database, and the database settings.
![](../images/quick_setup_5.png)
-#### Check and Fix database configuration
+#### Only CouchDB: Check and Fix database configuration
Check the database settings and fix any problems on the spot.
@@ -82,6 +101,8 @@ We should proceed to the Next step.
#### Sync Settings
Finally, finish the wizard by selecting a preset for synchronisation.
+Note: If you are going to use Object Storage, you cannot select `LiveSync`.
![](../images/quick_setup_9_1.png)
Select any synchronisation methods we want to use and `Apply`. If database initialisation is required, it will be performed at this time. When `All done!` is displayed, we are ready to synchronise.
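Note 2 in the diff above mentions CORS. As context, here is a minimal, hedged sketch of the kind of CORS rules an S3-compatible bucket typically needs so that a browser-based client can list, download, and upload objects. The endpoint, credentials, and bucket name are placeholders, and providers such as R2 or MinIO may expose this through their own console or CLI rather than the S3 API.

```typescript
import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

// Hypothetical values: replace the endpoint, credentials and bucket with your own.
const client = new S3Client({
    region: "us-east-1",
    endpoint: "https://s3.example.com",
    credentials: { accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_KEY" },
});

async function allowBrowserAccess(bucket: string) {
    // Permit the requests the plugin issues from Obsidian (list, download, upload).
    await client.send(new PutBucketCorsCommand({
        Bucket: bucket,
        CORSConfiguration: {
            CORSRules: [{
                AllowedOrigins: ["*"],   // or restrict to your app's origin
                AllowedMethods: ["GET", "PUT", "HEAD", "POST", "DELETE"],
                AllowedHeaders: ["*"],
                ExposeHeaders: ["ETag"],
                MaxAgeSeconds: 3000,
            }],
        },
    }));
}

allowBrowserAccess("obsidian-livesync").catch(console.error);
```

If the provider does not allow such rules at all, that is exactly the situation the `Use Custom HTTP Handler` toggle is meant to work around.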

BIN images/quick_setup_3b.png (new file, 74 KiB; binary content not shown)

View File

@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
-"version": "0.22.16",
+"version": "0.23.2",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

package-lock.json (generated, 2748 changed lines): file diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "obsidian-livesync",
-"version": "0.22.16",
+"version": "0.23.2",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
@@ -54,7 +54,12 @@
"typescript": "^5.4.2" "typescript": "^5.4.2"
}, },
"dependencies": { "dependencies": {
"@aws-sdk/client-s3": "^3.556.0",
"@smithy/fetch-http-handler": "^2.5.0",
"@smithy/protocol-http": "^3.3.0",
"@smithy/querystring-builder": "^2.2.0",
"diff-match-patch": "^1.0.5", "diff-match-patch": "^1.0.5",
"fflate": "^0.8.2",
"idb": "^8.0.0", "idb": "^8.0.0",
"minimatch": "^9.0.3", "minimatch": "^9.0.3",
"xxhash-wasm": "0.4.2", "xxhash-wasm": "0.4.2",

View File

@@ -4,7 +4,7 @@ import { Notice, type PluginManifest, parseYaml, normalizePath, type ListedFiles
import type { EntryDoc, LoadedEntry, InternalFileEntry, FilePathWithPrefix, FilePath, DocumentID, AnyEntry, SavingEntry } from "./lib/src/types"; import type { EntryDoc, LoadedEntry, InternalFileEntry, FilePathWithPrefix, FilePath, DocumentID, AnyEntry, SavingEntry } from "./lib/src/types";
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, MODE_SELECTIVE } from "./lib/src/types"; import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, MODE_SELECTIVE } from "./lib/src/types";
import { ICXHeader, PERIODIC_PLUGIN_SWEEP, } from "./types"; import { ICXHeader, PERIODIC_PLUGIN_SWEEP, } from "./types";
import { createSavingEntryFromLoadedEntry, createTextBlob, delay, fireAndForget, getDocData, isDocContentSame } from "./lib/src/utils"; import { createSavingEntryFromLoadedEntry, createTextBlob, delay, fireAndForget, getDocData, isDocContentSame, throttle } from "./lib/src/utils";
import { Logger } from "./lib/src/logger"; import { Logger } from "./lib/src/logger";
import { readString, decodeBinary, arrayBufferToBase64, digestHash } from "./lib/src/strbin"; import { readString, decodeBinary, arrayBufferToBase64, digestHash } from "./lib/src/strbin";
import { serialized } from "./lib/src/lock"; import { serialized } from "./lib/src/lock";
@@ -305,7 +305,8 @@ export class ConfigSync extends LiveSyncCommands {
} }
return false; return false;
} }
createMissingConfigurationEntry() { createMissingConfigurationEntry = throttle(() => this._createMissingConfigurationEntry(), 1000);
_createMissingConfigurationEntry() {
let saveRequired = false; let saveRequired = false;
for (const v of this.pluginList) { for (const v of this.pluginList) {
const key = `${v.category}/${v.name}`; const key = `${v.category}/${v.name}`;
@@ -349,8 +350,7 @@ export class ConfigSync extends LiveSyncCommands {
Logger(ex, LOG_LEVEL_VERBOSE); Logger(ex, LOG_LEVEL_VERBOSE);
} }
return []; return [];
}, { suspended: false, batchSize: 1, concurrentLimit: 10, delay: 100, yieldThreshold: 10, maintainDelay: false, totalRemainingReactiveSource: pluginScanningCount }).startPipeline().root.onIdle(() => { }, { suspended: false, batchSize: 1, concurrentLimit: 10, delay: 100, yieldThreshold: 10, maintainDelay: false, totalRemainingReactiveSource: pluginScanningCount }).startPipeline().root.onUpdateProgress(() => {
// Logger(`All files enumerated`, LOG_LEVEL_INFO, "get-plugins");
this.createMissingConfigurationEntry(); this.createMissingConfigurationEntry();
}); });

View File

@@ -9,7 +9,7 @@ import { serialized } from "./lib/src/lock";
import { JsonResolveModal } from "./JsonResolveModal"; import { JsonResolveModal } from "./JsonResolveModal";
import { LiveSyncCommands } from "./LiveSyncCommands"; import { LiveSyncCommands } from "./LiveSyncCommands";
import { addPrefix, stripAllPrefixes } from "./lib/src/path"; import { addPrefix, stripAllPrefixes } from "./lib/src/path";
import { KeyedQueueProcessor, QueueProcessor } from "./lib/src/processor"; import { QueueProcessor } from "./lib/src/processor";
import { hiddenFilesEventCount, hiddenFilesProcessingCount } from "./lib/src/stores"; import { hiddenFilesEventCount, hiddenFilesProcessingCount } from "./lib/src/stores";
export class HiddenFileSync extends LiveSyncCommands { export class HiddenFileSync extends LiveSyncCommands {
@@ -73,15 +73,15 @@ export class HiddenFileSync extends LiveSyncCommands {
} }
procInternalFile(filename: string) { procInternalFile(filename: string) {
this.internalFileProcessor.enqueueWithKey(filename, filename); this.internalFileProcessor.enqueue(filename);
} }
internalFileProcessor = new KeyedQueueProcessor<string, any>( internalFileProcessor = new QueueProcessor<string, any>(
async (filenames) => { async (filenames) => {
Logger(`START :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE); Logger(`START :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE);
await this.syncInternalFilesAndDatabase("pull", false, false, filenames); await this.syncInternalFilesAndDatabase("pull", false, false, filenames);
Logger(`DONE :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE); Logger(`DONE :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE);
return; return;
}, { batchSize: 100, concurrentLimit: 1, delay: 10, yieldThreshold: 10, suspended: false, totalRemainingReactiveSource: hiddenFilesEventCount } }, { batchSize: 100, concurrentLimit: 1, delay: 10, yieldThreshold: 100, suspended: false, totalRemainingReactiveSource: hiddenFilesEventCount }
); );
recentProcessedInternalFiles = [] as string[]; recentProcessedInternalFiles = [] as string[];

View File

@@ -1,4 +1,4 @@
import { type EntryDoc, type ObsidianLiveSyncSettings, DEFAULT_SETTINGS, LOG_LEVEL_NOTICE } from "./lib/src/types"; import { type EntryDoc, type ObsidianLiveSyncSettings, DEFAULT_SETTINGS, LOG_LEVEL_NOTICE, REMOTE_COUCHDB, REMOTE_MINIO } from "./lib/src/types";
import { configURIBase } from "./types"; import { configURIBase } from "./types";
import { Logger } from "./lib/src/logger"; import { Logger } from "./lib/src/logger";
import { PouchDB } from "./lib/src/pouchdb-browser.js"; import { PouchDB } from "./lib/src/pouchdb-browser.js";
@@ -9,6 +9,7 @@ import { delay, fireAndForget } from "./lib/src/utils";
import { confirmWithMessage } from "./dialogs"; import { confirmWithMessage } from "./dialogs";
import { Platform } from "./deps"; import { Platform } from "./deps";
import { fetchAllUsedChunks } from "./lib/src/utils_couchdb"; import { fetchAllUsedChunks } from "./lib/src/utils_couchdb";
import type { LiveSyncCouchDBReplicator } from "./lib/src/LiveSyncReplicator.js";
export class SetupLiveSync extends LiveSyncCommands { export class SetupLiveSync extends LiveSyncCommands {
onunload() { } onunload() { }
@@ -50,7 +51,7 @@ export class SetupLiveSync extends LiveSyncCommands {
const encryptingPassphrase = await askString(this.app, "Encrypt your settings", "The passphrase to encrypt the setup URI", "", true); const encryptingPassphrase = await askString(this.app, "Encrypt your settings", "The passphrase to encrypt the setup URI", "", true);
if (encryptingPassphrase === false) if (encryptingPassphrase === false)
return; return;
const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" }; const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" } as Partial<ObsidianLiveSyncSettings>;
if (stripExtra) { if (stripExtra) {
delete setting.pluginSyncExtendedSetting; delete setting.pluginSyncExtendedSetting;
} }
@@ -311,6 +312,7 @@ Of course, we are able to disable these features.`
} }
async suspendReflectingDatabase() { async suspendReflectingDatabase() {
if (this.plugin.settings.doNotSuspendOnFetching) return; if (this.plugin.settings.doNotSuspendOnFetching) return;
if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
Logger(`Suspending reflection: Database and storage changes will not be reflected in each other until completely finished the fetching.`, LOG_LEVEL_NOTICE); Logger(`Suspending reflection: Database and storage changes will not be reflected in each other until completely finished the fetching.`, LOG_LEVEL_NOTICE);
this.plugin.settings.suspendParseReplicationResult = true; this.plugin.settings.suspendParseReplicationResult = true;
this.plugin.settings.suspendFileWatching = true; this.plugin.settings.suspendFileWatching = true;
@@ -318,6 +320,7 @@ Of course, we are able to disable these features.`
} }
async resumeReflectingDatabase() { async resumeReflectingDatabase() {
if (this.plugin.settings.doNotSuspendOnFetching) return; if (this.plugin.settings.doNotSuspendOnFetching) return;
if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
Logger(`Database and storage reflection has been resumed!`, LOG_LEVEL_NOTICE); Logger(`Database and storage reflection has been resumed!`, LOG_LEVEL_NOTICE);
this.plugin.settings.suspendParseReplicationResult = false; this.plugin.settings.suspendParseReplicationResult = false;
this.plugin.settings.suspendFileWatching = false; this.plugin.settings.suspendFileWatching = false;
@@ -348,9 +351,10 @@ Of course, we are able to disable these features.`
await this.plugin.resetLocalDatabase(); await this.plugin.resetLocalDatabase();
} }
async fetchRemoteChunks() { async fetchRemoteChunks() {
if (!this.plugin.settings.doNotSuspendOnFetching && this.plugin.settings.readChunksOnline) { if (!this.plugin.settings.doNotSuspendOnFetching && this.plugin.settings.readChunksOnline && this.plugin.settings.remoteType == REMOTE_COUCHDB) {
Logger(`Fetching chunks`, LOG_LEVEL_NOTICE); Logger(`Fetching chunks`, LOG_LEVEL_NOTICE);
const remoteDB = await this.plugin.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true); const replicator = this.plugin.getReplicator() as LiveSyncCouchDBReplicator;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true);
if (typeof remoteDB == "string") { if (typeof remoteDB == "string") {
Logger(remoteDB, LOG_LEVEL_NOTICE); Logger(remoteDB, LOG_LEVEL_NOTICE);
} else { } else {
@@ -377,9 +381,6 @@ Of course, we are able to disable these features.`
await this.plugin.replicateAllFromServer(true); await this.plugin.replicateAllFromServer(true);
await delay(1000); await delay(1000);
await this.plugin.replicateAllFromServer(true); await this.plugin.replicateAllFromServer(true);
// if (!tryLessFetching) {
// await this.fetchRemoteChunks();
// }
await this.resumeReflectingDatabase(); await this.resumeReflectingDatabase();
await this.askHiddenFileConfiguration({ enableFetch: true }); await this.askHiddenFileConfiguration({ enableFetch: true });
} }

View File

@@ -1,12 +1,12 @@
import { deleteDB, type IDBPDatabase, openDB } from "idb"; import { deleteDB, type IDBPDatabase, openDB } from "idb";
export interface KeyValueDatabase { export interface KeyValueDatabase {
get<T>(key: string): Promise<T>; get<T>(key: IDBValidKey): Promise<T>;
set<T>(key: string, value: T): Promise<IDBValidKey>; set<T>(key: IDBValidKey, value: T): Promise<IDBValidKey>;
del(key: string): Promise<void>; del(key: IDBValidKey): Promise<void>;
clear(): Promise<void>; clear(): Promise<void>;
keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>; keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>;
close(): void; close(): void;
destroy(): void; destroy(): Promise<void>;
} }
const databaseCache: { [key: string]: IDBPDatabase<any> } = {}; const databaseCache: { [key: string]: IDBPDatabase<any> } = {};
export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => { export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
@@ -20,24 +20,23 @@ export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatab
db.createObjectStore(storeKey); db.createObjectStore(storeKey);
}, },
}); });
let db: IDBPDatabase<any> = null; const db = await dbPromise;
db = await dbPromise;
databaseCache[dbKey] = db; databaseCache[dbKey] = db;
return { return {
get<T>(key: string): Promise<T> { async get<T>(key: IDBValidKey): Promise<T> {
return db.get(storeKey, key); return await db.get(storeKey, key);
}, },
set<T>(key: string, value: T) { async set<T>(key: IDBValidKey, value: T) {
return db.put(storeKey, value, key); return await db.put(storeKey, value, key);
}, },
del(key: string) { async del(key: IDBValidKey) {
return db.delete(storeKey, key); return await db.delete(storeKey, key);
}, },
clear() { async clear() {
return db.clear(storeKey); return await db.clear(storeKey);
}, },
keys(query?: IDBValidKey | IDBKeyRange, count?: number) { async keys(query?: IDBValidKey | IDBKeyRange, count?: number) {
return db.getAllKeys(storeKey, query, count); return await db.getAllKeys(storeKey, query, count);
}, },
close() { close() {
delete databaseCache[dbKey]; delete databaseCache[dbKey];

View File

@@ -0,0 +1,83 @@
<script lang="ts">
export let patterns = [] as string[];
export let originals = [] as string[];
export let apply: (args: string[]) => Promise<void> = (_: string[]) => Promise.resolve();
function revert() {
patterns = [...originals];
}
const CHECK_OK = "✔";
const CHECK_NG = "⚠";
const MARK_MODIFIED = "✏ ";
function checkRegExp(pattern: string) {
if (pattern.trim() == "") return "";
try {
const _ = new RegExp(pattern);
return CHECK_OK;
} catch (ex) {
return CHECK_NG;
}
}
$: status = patterns.map((e) => checkRegExp(e));
$: modified = patterns.map((e, i) => (e != originals?.[i] ?? "" ? MARK_MODIFIED : ""));
function remove(idx: number) {
patterns[idx] = "";
}
function add() {
patterns = [...patterns, ""];
}
</script>
<ul>
{#each patterns as pattern, idx}
<li><label>{modified[idx]}{status[idx]}</label><input type="text" bind:value={pattern} class={modified[idx]} /><button class="iconbutton" on:click={() => remove(idx)}>🗑</button></li>
{/each}
<li>
<label><button on:click={() => add()}>Add</button></label>
</li>
<li class="buttons">
<button on:click={() => apply(patterns)} disabled={status.some((e) => e == CHECK_NG) || modified.every((e) => e == "")}>Apply</button>
<button on:click={() => revert()} disabled={status.some((e) => e == CHECK_NG) || modified.every((e) => e == "")}>Revert</button>
</li>
</ul>
<style>
label {
min-width: 4em;
width: 4em;
display: inline-flex;
flex-direction: row;
justify-content: flex-end;
}
ul {
flex-grow: 1;
display: inline-flex;
flex-direction: column;
list-style-type: none;
margin-block-start: 0;
margin-block-end: 0;
margin-inline-start: 0px;
margin-inline-end: 0px;
padding-inline-start: 0;
}
li {
padding: var(--size-2-1) var(--size-4-1);
display: inline-flex;
flex-grow: 1;
align-items: center;
justify-content: flex-end;
gap: var(--size-4-2);
}
li input {
min-width: 10em;
}
li.buttons {
}
button.iconbutton {
max-width: 4em;
}
span.spacer {
flex-grow: 1;
}
</style>

src/ObsHttpHandler.ts (new file, 133 lines)
View File

@@ -0,0 +1,133 @@
// This file is based on a file that was published by the @remotely-save, under the Apache 2 License.
// I would love to express my deepest gratitude to the original authors for their hard work and dedication. Without their contributions, this project would not have been possible.
//
// Original Implementation is here: https://github.com/remotely-save/remotely-save/blob/28b99557a864ef59c19d2ad96101196e401718f0/src/remoteForS3.ts
import {
FetchHttpHandler,
type FetchHttpHandlerOptions,
} from "@smithy/fetch-http-handler";
import { HttpRequest, HttpResponse, type HttpHandlerOptions } from "@smithy/protocol-http";
//@ts-ignore
import { requestTimeout } from "@smithy/fetch-http-handler/dist-es/request-timeout";
import { buildQueryString } from "@smithy/querystring-builder";
import { requestUrl, type RequestUrlParam } from "./deps";
////////////////////////////////////////////////////////////////////////////////
// special handler using Obsidian requestUrl
////////////////////////////////////////////////////////////////////////////////
/**
* This is close to origin implementation of FetchHttpHandler
* https://github.com/aws/aws-sdk-js-v3/blob/main/packages/fetch-http-handler/src/fetch-http-handler.ts
* that is released under Apache 2 License.
* But this uses Obsidian requestUrl instead.
*/
export class ObsHttpHandler extends FetchHttpHandler {
requestTimeoutInMs: number | undefined;
reverseProxyNoSignUrl: string | undefined;
constructor(
options?: FetchHttpHandlerOptions,
reverseProxyNoSignUrl?: string
) {
super(options);
this.requestTimeoutInMs =
options === undefined ? undefined : options.requestTimeout;
this.reverseProxyNoSignUrl = reverseProxyNoSignUrl;
}
async handle(
request: HttpRequest,
{ abortSignal }: HttpHandlerOptions = {}
): Promise<{ response: HttpResponse }> {
if (abortSignal?.aborted) {
const abortError = new Error("Request aborted");
abortError.name = "AbortError";
return Promise.reject(abortError);
}
let path = request.path;
if (request.query) {
const queryString = buildQueryString(request.query);
if (queryString) {
path += `?${queryString}`;
}
}
const { port, method } = request;
let url = `${request.protocol}//${request.hostname}${port ? `:${port}` : ""
}${path}`;
if (
this.reverseProxyNoSignUrl !== undefined &&
this.reverseProxyNoSignUrl !== ""
) {
const urlObj = new URL(url);
urlObj.host = this.reverseProxyNoSignUrl;
url = urlObj.href;
}
const body =
method === "GET" || method === "HEAD" ? undefined : request.body;
const transformedHeaders: Record<string, string> = {};
for (const key of Object.keys(request.headers)) {
const keyLower = key.toLowerCase();
if (keyLower === "host" || keyLower === "content-length") {
continue;
}
transformedHeaders[keyLower] = request.headers[key];
}
let contentType: string | undefined = undefined;
if (transformedHeaders["content-type"] !== undefined) {
contentType = transformedHeaders["content-type"];
}
let transformedBody: any = body;
if (ArrayBuffer.isView(body)) {
transformedBody = new Uint8Array(body.buffer).buffer;
}
const param: RequestUrlParam = {
body: transformedBody,
headers: transformedHeaders,
method: method,
url: url,
contentType: contentType,
};
const raceOfPromises = [
requestUrl(param).then((rsp) => {
const headers = rsp.headers;
const headersLower: Record<string, string> = {};
for (const key of Object.keys(headers)) {
headersLower[key.toLowerCase()] = headers[key];
}
const stream = new ReadableStream<Uint8Array>({
start(controller) {
controller.enqueue(new Uint8Array(rsp.arrayBuffer));
controller.close();
},
});
return {
response: new HttpResponse({
headers: headersLower,
statusCode: rsp.status,
body: stream,
}),
};
}),
requestTimeout(this.requestTimeoutInMs),
];
if (abortSignal) {
raceOfPromises.push(
new Promise<never>((resolve, reject) => {
abortSignal.onabort = () => {
const abortError = new Error("Request aborted");
abortError.name = "AbortError";
reject(abortError);
};
})
);
}
return Promise.race(raceOfPromises);
}
}
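For context, here is a minimal sketch of how a handler like this can be handed to the AWS SDK v3 S3 client through its `requestHandler` option, which is presumably what the `Use Custom HTTP Handler` toggle enables. The endpoint, credentials, and bucket below are placeholders, not the plugin's actual wiring.

```typescript
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";
import { ObsHttpHandler } from "./ObsHttpHandler";

// Route every S3 request through Obsidian's requestUrl (which is not subject
// to CORS) by passing the custom handler as the SDK's requestHandler.
const client = new S3Client({
    region: "us-east-1",                    // placeholder
    endpoint: "https://s3.example.com",     // placeholder
    credentials: { accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_KEY" },
    requestHandler: new ObsHttpHandler(),
});

// Example: list the journal packs stored in the bucket.
client.send(new ListObjectsV2Command({ Bucket: "obsidian-livesync" }))
    .then((res) => console.log(res.Contents?.map((o) => o.Key)))
    .catch(console.error);
```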

File diff suppressed because it is too large.

View File

@@ -2,7 +2,7 @@ import type { SerializedFileAccess } from "./SerializedFileAccess";
import { Plugin, TAbstractFile, TFile, TFolder } from "./deps"; import { Plugin, TAbstractFile, TFile, TFolder } from "./deps";
import { Logger } from "./lib/src/logger"; import { Logger } from "./lib/src/logger";
import { shouldBeIgnored } from "./lib/src/path"; import { shouldBeIgnored } from "./lib/src/path";
import type { KeyedQueueProcessor } from "./lib/src/processor"; import type { QueueProcessor } from "./lib/src/processor";
import { LOG_LEVEL_NOTICE, type FilePath, type ObsidianLiveSyncSettings } from "./lib/src/types"; import { LOG_LEVEL_NOTICE, type FilePath, type ObsidianLiveSyncSettings } from "./lib/src/types";
import { delay } from "./lib/src/utils"; import { delay } from "./lib/src/utils";
import { type FileEventItem, type FileEventType, type FileInfo, type InternalFileInfo } from "./types"; import { type FileEventItem, type FileEventType, type FileInfo, type InternalFileInfo } from "./types";
@@ -19,7 +19,7 @@ type LiveSyncForStorageEventManager = Plugin &
vaultAccess: SerializedFileAccess vaultAccess: SerializedFileAccess
} & { } & {
isTargetFile: (file: string | TAbstractFile) => Promise<boolean>, isTargetFile: (file: string | TAbstractFile) => Promise<boolean>,
fileEventQueue: KeyedQueueProcessor<FileEventItem, any>, fileEventQueue: QueueProcessor<FileEventItem, any>,
isFileSizeExceeded: (size: number) => boolean; isFileSizeExceeded: (size: number) => boolean;
}; };
@@ -133,8 +133,7 @@ export class StorageEventManagerObsidian extends StorageEventManager {
path: file.path, path: file.path,
size: file.stat.size size: file.stat.size
} as FileInfo : file as InternalFileInfo; } as FileInfo : file as InternalFileInfo;
this.plugin.fileEventQueue.enqueue({
this.plugin.fileEventQueue.enqueueWithKey(`file-${fileInfo.path}`, {
type, type,
args: { args: {
file: fileInfo, file: fileInfo,

Submodule src/lib updated: 98809f37df...da470ddc41

View File

@@ -2,9 +2,9 @@ const isDebug = false;
import { type Diff, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch, stringifyYaml, parseYaml } from "./deps"; import { type Diff, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch, stringifyYaml, parseYaml } from "./deps";
import { Notice, Plugin, TFile, addIcon, TFolder, normalizePath, TAbstractFile, Editor, MarkdownView, type RequestUrlParam, type RequestUrlResponse, requestUrl, type MarkdownFileInfo } from "./deps"; import { Notice, Plugin, TFile, addIcon, TFolder, normalizePath, TAbstractFile, Editor, MarkdownView, type RequestUrlParam, type RequestUrlResponse, requestUrl, type MarkdownFileInfo } from "./deps";
import { type EntryDoc, type LoadedEntry, type ObsidianLiveSyncSettings, type diff_check_result, type diff_result_leaf, type EntryBody, LOG_LEVEL, VER, DEFAULT_SETTINGS, type diff_result, FLAGMD_REDFLAG, SYNCINFO_ID, SALT_OF_PASSPHRASE, type ConfigPassphraseStore, type CouchDBConnection, FLAGMD_REDFLAG2, FLAGMD_REDFLAG3, PREFIXMD_LOGFILE, type DatabaseConnectingStatus, type EntryHasPath, type DocumentID, type FilePathWithPrefix, type FilePath, type AnyEntry, LOG_LEVEL_DEBUG, LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_URGENT, LOG_LEVEL_VERBOSE, type SavingEntry, MISSING_OR_ERROR, NOT_CONFLICTED, AUTO_MERGED, CANCELLED, LEAVE_TO_SUBSEQUENT, FLAGMD_REDFLAG2_HR, FLAGMD_REDFLAG3_HR, } from "./lib/src/types"; import { type EntryDoc, type LoadedEntry, type ObsidianLiveSyncSettings, type diff_check_result, type diff_result_leaf, type EntryBody, LOG_LEVEL, VER, DEFAULT_SETTINGS, type diff_result, FLAGMD_REDFLAG, SYNCINFO_ID, SALT_OF_PASSPHRASE, type ConfigPassphraseStore, type CouchDBConnection, FLAGMD_REDFLAG2, FLAGMD_REDFLAG3, PREFIXMD_LOGFILE, type DatabaseConnectingStatus, type EntryHasPath, type DocumentID, type FilePathWithPrefix, type FilePath, type AnyEntry, LOG_LEVEL_DEBUG, LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_URGENT, LOG_LEVEL_VERBOSE, type SavingEntry, MISSING_OR_ERROR, NOT_CONFLICTED, AUTO_MERGED, CANCELLED, LEAVE_TO_SUBSEQUENT, FLAGMD_REDFLAG2_HR, FLAGMD_REDFLAG3_HR, REMOTE_MINIO, REMOTE_COUCHDB, type BucketSyncSetting, } from "./lib/src/types";
import { type InternalFileInfo, type CacheData, type FileEventItem, FileWatchEventQueueMax } from "./types"; import { type InternalFileInfo, type CacheData, type FileEventItem, FileWatchEventQueueMax } from "./types";
import { arrayToChunkedArray, createBlob, determineTypeFromBlob, fireAndForget, getDocData, isAnyNote, isDocContentSame, isObjectDifferent, readContent, sendValue } from "./lib/src/utils"; import { arrayToChunkedArray, createBlob, delay, determineTypeFromBlob, fireAndForget, getDocData, isAnyNote, isDocContentSame, isObjectDifferent, readContent, sendValue, throttle, type SimpleStore } from "./lib/src/utils";
import { Logger, setGlobalLogFunction } from "./lib/src/logger"; import { Logger, setGlobalLogFunction } from "./lib/src/logger";
import { PouchDB } from "./lib/src/pouchdb-browser.js"; import { PouchDB } from "./lib/src/pouchdb-browser.js";
import { ConflictResolveModal } from "./ConflictResolveModal"; import { ConflictResolveModal } from "./ConflictResolveModal";
@@ -12,7 +12,7 @@ import { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab";
import { DocumentHistoryModal } from "./DocumentHistoryModal"; import { DocumentHistoryModal } from "./DocumentHistoryModal";
import { applyPatch, cancelAllPeriodicTask, cancelAllTasks, cancelTask, generatePatchObj, id2path, isObjectMargeApplicable, isSensibleMargeApplicable, flattenObject, path2id, scheduleTask, tryParseJSON, isValidPath, isInternalMetadata, isPluginMetadata, stripInternalMetadataPrefix, isChunk, askSelectString, askYesNo, askString, PeriodicProcessor, getPath, getPathWithoutPrefix, getPathFromTFile, performRebuildDB, memoIfNotExist, memoObject, retrieveMemoObject, disposeMemoObject, isCustomisationSyncMetadata, compareFileFreshness, BASE_IS_NEW, TARGET_IS_NEW, EVEN, compareMTime, markChangesAreSame } from "./utils"; import { applyPatch, cancelAllPeriodicTask, cancelAllTasks, cancelTask, generatePatchObj, id2path, isObjectMargeApplicable, isSensibleMargeApplicable, flattenObject, path2id, scheduleTask, tryParseJSON, isValidPath, isInternalMetadata, isPluginMetadata, stripInternalMetadataPrefix, isChunk, askSelectString, askYesNo, askString, PeriodicProcessor, getPath, getPathWithoutPrefix, getPathFromTFile, performRebuildDB, memoIfNotExist, memoObject, retrieveMemoObject, disposeMemoObject, isCustomisationSyncMetadata, compareFileFreshness, BASE_IS_NEW, TARGET_IS_NEW, EVEN, compareMTime, markChangesAreSame } from "./utils";
import { encrypt, tryDecrypt } from "./lib/src/e2ee_v2"; import { encrypt, tryDecrypt } from "./lib/src/e2ee_v2";
import { balanceChunkPurgedDBs, enableEncryption, isCloudantURI, isErrorOfMissingDoc, isValidRemoteCouchDBURI, purgeUnreferencedChunks } from "./lib/src/utils_couchdb"; import { balanceChunkPurgedDBs, enableCompression, enableEncryption, isCloudantURI, isErrorOfMissingDoc, isValidRemoteCouchDBURI, purgeUnreferencedChunks } from "./lib/src/utils_couchdb";
import { logStore, type LogEntry, collectingChunks, pluginScanningCount, hiddenFilesProcessingCount, hiddenFilesEventCount, logMessages } from "./lib/src/stores"; import { logStore, type LogEntry, collectingChunks, pluginScanningCount, hiddenFilesProcessingCount, hiddenFilesEventCount, logMessages } from "./lib/src/stores";
import { setNoticeClass } from "./lib/src/wrapper"; import { setNoticeClass } from "./lib/src/wrapper";
import { versionNumberString2Number, writeString, decodeBinary, readString } from "./lib/src/strbin"; import { versionNumberString2Number, writeString, decodeBinary, readString } from "./lib/src/strbin";
@@ -20,7 +20,7 @@ import { addPrefix, isAcceptedAll, isPlainText, shouldBeIgnored, stripAllPrefixe
import { isLockAcquired, serialized, shareRunningResult, skipIfDuplicated } from "./lib/src/lock"; import { isLockAcquired, serialized, shareRunningResult, skipIfDuplicated } from "./lib/src/lock";
import { StorageEventManager, StorageEventManagerObsidian } from "./StorageEventManager"; import { StorageEventManager, StorageEventManagerObsidian } from "./StorageEventManager";
import { LiveSyncLocalDB, type LiveSyncLocalDBEnv } from "./lib/src/LiveSyncLocalDB"; import { LiveSyncLocalDB, type LiveSyncLocalDBEnv } from "./lib/src/LiveSyncLocalDB";
import { LiveSyncDBReplicator, type LiveSyncReplicatorEnv } from "./lib/src/LiveSyncReplicator"; import { LiveSyncAbstractReplicator, type LiveSyncReplicatorEnv } from "./lib/src/LiveSyncAbstractReplicator.js";
import { type KeyValueDatabase, OpenKeyValueDatabase } from "./KeyValueDB"; import { type KeyValueDatabase, OpenKeyValueDatabase } from "./KeyValueDB";
import { LiveSyncCommands } from "./LiveSyncCommands"; import { LiveSyncCommands } from "./LiveSyncCommands";
import { HiddenFileSync } from "./CmdHiddenFileSync"; import { HiddenFileSync } from "./CmdHiddenFileSync";
@@ -31,9 +31,14 @@ import { GlobalHistoryView, VIEW_TYPE_GLOBAL_HISTORY } from "./GlobalHistoryView
import { LogPaneView, VIEW_TYPE_LOG } from "./LogPaneView"; import { LogPaneView, VIEW_TYPE_LOG } from "./LogPaneView";
import { LRUCache } from "./lib/src/LRUCache"; import { LRUCache } from "./lib/src/LRUCache";
import { SerializedFileAccess } from "./SerializedFileAccess.js"; import { SerializedFileAccess } from "./SerializedFileAccess.js";
import { KeyedQueueProcessor, QueueProcessor, type QueueItemWithKey } from "./lib/src/processor.js"; import { QueueProcessor } from "./lib/src/processor.js";
import { reactive, reactiveSource } from "./lib/src/reactive.js"; import { reactive, reactiveSource } from "./lib/src/reactive.js";
import { initializeStores } from "./stores.js"; import { initializeStores } from "./stores.js";
import { JournalSyncMinio } from "./lib/src/JournalSyncMinio.js";
import { LiveSyncJournalReplicator, type LiveSyncJournalReplicatorEnv } from "./lib/src/LiveSyncJournalReplicator.js";
import { LiveSyncCouchDBReplicator, type LiveSyncCouchDBReplicatorEnv } from "./lib/src/LiveSyncReplicator.js";
import type { CheckPointInfo } from "./lib/src/JournalSyncTypes.js";
import { ObsHttpHandler } from "./ObsHttpHandler.js";
setNoticeClass(Notice); setNoticeClass(Notice);
@@ -69,11 +74,16 @@ const SETTING_HEADER = "````yaml:livesync-setting\n";
const SETTING_FOOTER = "\n````"; const SETTING_FOOTER = "\n````";
export default class ObsidianLiveSyncPlugin extends Plugin export default class ObsidianLiveSyncPlugin extends Plugin
implements LiveSyncLocalDBEnv, LiveSyncReplicatorEnv { implements LiveSyncLocalDBEnv, LiveSyncReplicatorEnv, LiveSyncJournalReplicatorEnv, LiveSyncCouchDBReplicatorEnv {
_customHandler!: ObsHttpHandler;
customFetchHandler() {
if (!this._customHandler) this._customHandler = new ObsHttpHandler(undefined, undefined);
return this._customHandler;
}
settings!: ObsidianLiveSyncSettings; settings!: ObsidianLiveSyncSettings;
localDatabase!: LiveSyncLocalDB; localDatabase!: LiveSyncLocalDB;
replicator!: LiveSyncDBReplicator; replicator!: LiveSyncAbstractReplicator;
statusBar?: HTMLElement; statusBar?: HTMLElement;
_suspended = false; _suspended = false;
@@ -119,7 +129,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin
requestCount = reactiveSource(0); requestCount = reactiveSource(0);
responseCount = reactiveSource(0); responseCount = reactiveSource(0);
processReplication = (e: PouchDB.Core.ExistingDocument<EntryDoc>[]) => this.parseReplicationResult(e); processReplication = (e: PouchDB.Core.ExistingDocument<EntryDoc>[]) => this.parseReplicationResult(e);
async connectRemoteCouchDB(uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | false, useDynamicIterationCount: boolean, performSetup: boolean, skipInfo: boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> { async connectRemoteCouchDB(uri: string, auth: { username: string; password: string }, disableRequestURI: boolean, passphrase: string | false, useDynamicIterationCount: boolean, performSetup: boolean, skipInfo: boolean, compression: boolean): Promise<string | { db: PouchDB.Database<EntryDoc>; info: PouchDB.Core.DatabaseInfo }> {
if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid"; if (!isValidRemoteCouchDBURI(uri)) return "Remote URI is not valid";
if (uri.toLowerCase() != uri) return "Remote URI and database name could not contain capital letters."; if (uri.toLowerCase() != uri) return "Remote URI and database name could not contain capital letters.";
if (uri.indexOf(" ") !== -1) return "Remote URI and database name could not contain spaces."; if (uri.indexOf(" ") !== -1) return "Remote URI and database name could not contain spaces.";
@@ -237,6 +247,7 @@ export default class ObsidianLiveSyncPlugin extends Plugin
}; };
const db: PouchDB.Database<EntryDoc> = new PouchDB<EntryDoc>(uri, conf); const db: PouchDB.Database<EntryDoc> = new PouchDB<EntryDoc>(uri, conf);
enableCompression(db, compression);
if (passphrase !== "false" && typeof passphrase === "string") { if (passphrase !== "false" && typeof passphrase === "string") {
enableEncryption(db, passphrase, useDynamicIterationCount, false); enableEncryption(db, passphrase, useDynamicIterationCount, false);
} }
@@ -307,16 +318,24 @@ export default class ObsidianLiveSyncPlugin extends Plugin
onClose(db: LiveSyncLocalDB): void { onClose(db: LiveSyncLocalDB): void {
this.kvDB.close(); this.kvDB.close();
} }
getNewReplicator(settingOverride: Partial<ObsidianLiveSyncSettings> = {}): LiveSyncAbstractReplicator {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_MINIO) {
return new LiveSyncJournalReplicator(this);
}
return new LiveSyncCouchDBReplicator(this);
}
async onInitializeDatabase(db: LiveSyncLocalDB): Promise<void> { async onInitializeDatabase(db: LiveSyncLocalDB): Promise<void> {
this.kvDB = await OpenKeyValueDatabase(db.dbname + "-livesync-kv"); this.kvDB = await OpenKeyValueDatabase(db.dbname + "-livesync-kv");
this.replicator = new LiveSyncDBReplicator(this); this.replicator = this.getNewReplicator();
} }
async onResetDatabase(db: LiveSyncLocalDB): Promise<void> { async onResetDatabase(db: LiveSyncLocalDB): Promise<void> {
const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName(); const kvDBKey = "queued-files"
localStorage.removeItem(lsKey); this.kvDB.del(kvDBKey);
// localStorage.removeItem(lsKey);
await this.kvDB.destroy(); await this.kvDB.destroy();
this.kvDB = await OpenKeyValueDatabase(db.dbname + "-livesync-kv"); this.kvDB = await OpenKeyValueDatabase(db.dbname + "-livesync-kv");
this.replicator = new LiveSyncDBReplicator(this); this.replicator = this.getNewReplicator()
} }
getReplicator() { getReplicator() {
return this.replicator; return this.replicator;
@@ -445,6 +464,52 @@ export default class ObsidianLiveSyncPlugin extends Plugin
} }
Logger(`Checking expired file history done`); Logger(`Checking expired file history done`);
} }
simpleStore: SimpleStore<CheckPointInfo> = {
get: async (key: string) => {
return await this.kvDB.get(`os-${key}`);
},
set: async (key: string, value: any) => {
await this.kvDB.set(`os-${key}`, value);
},
delete: async (key) => {
await this.kvDB.del(`os-${key}`);
},
keys: async (from: string | undefined, to: string | undefined, count?: number | undefined): Promise<string[]> => {
const ret = this.kvDB.keys(IDBKeyRange.bound(`os-${from || ""}`, `os-${to || ""}`), count);
return (await ret).map(e => e.toString()).filter(e => e.startsWith("os-")).map(e => e.substring(3));
}
}
getMinioJournalSyncClient() {
const id = this.settings.accessKey
const key = this.settings.secretKey
const bucket = this.settings.bucket
const region = this.settings.region
const endpoint = this.settings.endpoint
const useCustomRequestHandler = this.settings.useCustomRequestHandler;
return new JournalSyncMinio(id, key, endpoint, bucket, this.simpleStore, this, useCustomRequestHandler, region);
}
async resetRemoteBucket() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.resetBucket();
}
async resetJournalSync() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.resetCheckpointInfo();
}
async journalSendTest() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.sendLocalJournal();
}
async journalFetchTest() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.receiveRemoteJournal();
}
async journalSyncTest() {
const minioJournal = this.getMinioJournalSyncClient();
await minioJournal.sync();
}
async onLayoutReady() { async onLayoutReady() {
this.registerFileWatchEvents(); this.registerFileWatchEvents();
if (!this.localDatabase.isReady) { if (!this.localDatabase.isReady) {
@@ -535,8 +600,8 @@ Click anywhere to stop counting down.
this.registerWatchEvents(); this.registerWatchEvents();
await this.realizeSettingSyncMode(); await this.realizeSettingSyncMode();
this.swapSaveCommand(); this.swapSaveCommand();
if (this.settings.syncOnStart) { if (!this.settings.liveSync && this.settings.syncOnStart) {
this.replicator.openReplication(this.settings, false, false); this.replicator.openReplication(this.settings, false, false, false);
} }
this.scanStat(); this.scanStat();
} catch (ex) { } catch (ex) {
@@ -953,17 +1018,19 @@ Note: We can always able to read V1 format. It will be progressively converted.
Logger("Could not determine passphrase for reading data.json! DO NOT synchronize with the remote before making sure your configuration is!", LOG_LEVEL_URGENT); Logger("Could not determine passphrase for reading data.json! DO NOT synchronize with the remote before making sure your configuration is!", LOG_LEVEL_URGENT);
} else { } else {
if (settings.encryptedCouchDBConnection) { if (settings.encryptedCouchDBConnection) {
const keys = ["couchDB_URI", "couchDB_USER", "couchDB_PASSWORD", "couchDB_DBNAME"] as (keyof CouchDBConnection)[]; const keys = ["couchDB_URI", "couchDB_USER", "couchDB_PASSWORD", "couchDB_DBNAME", "accessKey", "bucket", "endpoint", "region", "secretKey"] as (keyof CouchDBConnection | keyof BucketSyncSetting)[];
const decrypted = this.tryDecodeJson(await this.decryptConfigurationItem(settings.encryptedCouchDBConnection, passphrase)) as CouchDBConnection; const decrypted = this.tryDecodeJson(await this.decryptConfigurationItem(settings.encryptedCouchDBConnection, passphrase)) as (CouchDBConnection & BucketSyncSetting);
if (decrypted) { if (decrypted) {
for (const key of keys) { for (const key of keys) {
if (key in decrypted) { if (key in decrypted) {
//@ts-ignore
settings[key] = decrypted[key] settings[key] = decrypted[key]
} }
} }
} else { } else {
Logger("Could not decrypt passphrase for reading data.json! DO NOT synchronize with the remote before making sure your configuration is!", LOG_LEVEL_URGENT); Logger("Could not decrypt passphrase for reading data.json! DO NOT synchronize with the remote before making sure your configuration is!", LOG_LEVEL_URGENT);
for (const key of keys) { for (const key of keys) {
//@ts-ignore
settings[key] = ""; settings[key] = "";
} }
} }
@@ -1007,7 +1074,7 @@ Note: We can always able to read V1 format. It will be progressively converted.
} }
this.deviceAndVaultName = localStorage.getItem(lsKey) || ""; this.deviceAndVaultName = localStorage.getItem(lsKey) || "";
this.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim()); this.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim());
this.fileEventQueue.delay = this.settings.batchSave ? 5000 : 100; this.fileEventQueue.delay = (!this.settings.liveSync && this.settings.batchSave) ? 5000 : 100;
} }
async saveSettingData() { async saveSettingData() {
@@ -1020,26 +1087,38 @@ Note: We can always able to read V1 format. It will be progressively converted.
Logger("Could not determine passphrase for saving data.json! Our data.json have insecure items!", LOG_LEVEL_NOTICE); Logger("Could not determine passphrase for saving data.json! Our data.json have insecure items!", LOG_LEVEL_NOTICE);
} else { } else {
if (settings.couchDB_PASSWORD != "" || settings.couchDB_URI != "" || settings.couchDB_USER != "" || settings.couchDB_DBNAME) { if (settings.couchDB_PASSWORD != "" || settings.couchDB_URI != "" || settings.couchDB_USER != "" || settings.couchDB_DBNAME) {
const connectionSetting: CouchDBConnection = { const connectionSetting: CouchDBConnection & BucketSyncSetting = {
couchDB_DBNAME: settings.couchDB_DBNAME, couchDB_DBNAME: settings.couchDB_DBNAME,
couchDB_PASSWORD: settings.couchDB_PASSWORD, couchDB_PASSWORD: settings.couchDB_PASSWORD,
couchDB_URI: settings.couchDB_URI, couchDB_URI: settings.couchDB_URI,
couchDB_USER: settings.couchDB_USER, couchDB_USER: settings.couchDB_USER,
accessKey: settings.accessKey,
bucket: settings.bucket,
endpoint: settings.endpoint,
region: settings.region,
secretKey: settings.secretKey,
useCustomRequestHandler: settings.useCustomRequestHandler
}; };
settings.encryptedCouchDBConnection = await this.encryptConfigurationItem(JSON.stringify(connectionSetting), settings); settings.encryptedCouchDBConnection = await this.encryptConfigurationItem(JSON.stringify(connectionSetting), settings);
settings.couchDB_PASSWORD = ""; settings.couchDB_PASSWORD = "";
settings.couchDB_DBNAME = ""; settings.couchDB_DBNAME = "";
settings.couchDB_URI = ""; settings.couchDB_URI = "";
settings.couchDB_USER = ""; settings.couchDB_USER = "";
settings.accessKey = "";
settings.bucket = "";
settings.region = "";
settings.secretKey = "";
settings.endpoint = "";
} }
if (settings.encrypt && settings.passphrase != "") { if (settings.encrypt && settings.passphrase != "") {
settings.encryptedPassphrase = await this.encryptConfigurationItem(settings.passphrase, settings); settings.encryptedPassphrase = await this.encryptConfigurationItem(settings.passphrase, settings);
settings.passphrase = ""; settings.passphrase = "";
} }
} }
await this.saveData(settings); await this.saveData(settings);
this.localDatabase.settings = this.settings; this.localDatabase.settings = this.settings;
this.fileEventQueue.delay = this.settings.batchSave ? 5000 : 100; this.fileEventQueue.delay = (!this.settings.liveSync && this.settings.batchSave) ? 5000 : 100;
this.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim()); this.ignoreFiles = this.settings.ignoreFiles.split(",").map(e => e.trim());
if (this.settings.settingSyncFile != "") { if (this.settings.settingSyncFile != "") {
fireAndForget(() => this.saveSettingToMarkdown(this.settings.settingSyncFile)); fireAndForget(() => this.saveSettingToMarkdown(this.settings.settingSyncFile));
@@ -1237,9 +1316,13 @@ We can perform a command in this file.
_this.performCommand('editor:save-file'); _this.performCommand('editor:save-file');
}; };
} }
hasFocus = true;
isLastHidden = false;
registerWatchEvents() { registerWatchEvents() {
this.registerEvent(this.app.workspace.on("file-open", this.watchWorkspaceOpen)); this.registerEvent(this.app.workspace.on("file-open", this.watchWorkspaceOpen));
this.registerDomEvent(document, "visibilitychange", this.watchWindowVisibility); this.registerDomEvent(document, "visibilitychange", this.watchWindowVisibility);
this.registerDomEvent(window, "focus", () => this.setHasFocus(true));
this.registerDomEvent(window, "blur", () => this.setHasFocus(false));
this.registerDomEvent(window, "online", this.watchOnline); this.registerDomEvent(window, "online", this.watchOnline);
this.registerDomEvent(window, "offline", this.watchOnline); this.registerDomEvent(window, "offline", this.watchOnline);
} }
@@ -1255,15 +1338,30 @@ We can perform a command in this file.
await this.syncAllFiles(); await this.syncAllFiles();
} }
} }
setHasFocus(hasFocus: boolean) {
this.hasFocus = hasFocus;
this.watchWindowVisibility();
}
watchWindowVisibility() { watchWindowVisibility() {
scheduleTask("watch-window-visibility", 500, () => fireAndForget(() => this.watchWindowVisibilityAsync())); scheduleTask("watch-window-visibility", 100, () => fireAndForget(() => this.watchWindowVisibilityAsync()));
} }
async watchWindowVisibilityAsync() { async watchWindowVisibilityAsync() {
if (this.settings.suspendFileWatching) return; if (this.settings.suspendFileWatching) return;
if (!this.settings.isConfigured) return; if (!this.settings.isConfigured) return;
if (!this.isReady) return; if (!this.isReady) return;
if (this.isLastHidden && !this.hasFocus) {
// NO OP while non-focused after made hidden;
return;
}
const isHidden = document.hidden; const isHidden = document.hidden;
if (this.isLastHidden === isHidden) {
return;
}
this.isLastHidden = isHidden;
await this.applyBatchChange(); await this.applyBatchChange();
if (isHidden) { if (isHidden) {
this.replicator.closeReplication(); this.replicator.closeReplication();
@@ -1272,23 +1370,25 @@ We can perform a command in this file.
// suspend all temporary. // suspend all temporary.
if (this.suspended) return; if (this.suspended) return;
await Promise.all(this.addOns.map(e => e.onResume())); await Promise.all(this.addOns.map(e => e.onResume()));
if (this.settings.liveSync) { if (this.settings.remoteType == REMOTE_COUCHDB) {
this.replicator.openReplication(this.settings, true, false); if (this.settings.liveSync) {
this.replicator.openReplication(this.settings, true, false, false);
}
} }
if (this.settings.syncOnStart) { if (this.settings.syncOnStart) {
this.replicator.openReplication(this.settings, false, false); this.replicator.openReplication(this.settings, false, false, false);
} }
this.periodicSyncProcessor.enable(this.settings.periodicReplication ? this.settings.periodicReplicationInterval * 1000 : 0); this.periodicSyncProcessor.enable(this.settings.periodicReplication ? this.settings.periodicReplicationInterval * 1000 : 0);
} }
} }
cancelRelativeEvent(item: FileEventItem) { cancelRelativeEvent(item: FileEventItem) {
this.fileEventQueue.modifyQueue((items) => [...items.filter(e => e.entity.key != item.key)]) this.fileEventQueue.modifyQueue((items) => [...items.filter(e => e.key != item.key)])
} }
queueNextFileEvent(items: QueueItemWithKey<FileEventItem>[], newItem: QueueItemWithKey<FileEventItem>): QueueItemWithKey<FileEventItem>[] { queueNextFileEvent(items: FileEventItem[], newItem: FileEventItem): FileEventItem[] {
if (this.settings.batchSave && !this.settings.liveSync) { if (this.settings.batchSave && !this.settings.liveSync) {
const file = newItem.entity.args.file; const file = newItem.args.file;
// if the latest event is the same type, omit that // if the latest event is the same type, omit that
// a.md MODIFY <- this should be cancelled when a.md MODIFIED // a.md MODIFY <- this should be cancelled when a.md MODIFIED
// b.md MODIFY <- this should be cancelled when b.md MODIFIED // b.md MODIFY <- this should be cancelled when b.md MODIFIED
@@ -1300,16 +1400,16 @@ We can perform a command in this file.
while (i >= 0) { while (i >= 0) {
i--; i--;
if (i < 0) break L1; if (i < 0) break L1;
if (items[i].entity.args.file.path != file.path) { if (items[i].args.file.path != file.path) {
continue L1; continue L1;
} }
if (items[i].entity.type != newItem.entity.type) break L1; if (items[i].type != newItem.type) break L1;
items.remove(items[i]); items.remove(items[i]);
} }
} }
items.push(newItem); items.push(newItem);
// When deleting or renaming, the queue must be flushed once before processing subsequent processes to prevent unexpected race condition. // When deleting or renaming, the queue must be flushed once before processing subsequent processes to prevent unexpected race condition.
if (newItem.entity.type == "DELETE" || newItem.entity.type == "RENAME") { if (newItem.type == "DELETE" || newItem.type == "RENAME") {
this.fileEventQueue.requestNextFlush(); this.fileEventQueue.requestNextFlush();
} }
return items; return items;
@@ -1363,7 +1463,7 @@ We can perform a command in this file.
pendingFileEventCount = reactiveSource(0); pendingFileEventCount = reactiveSource(0);
processingFileEventCount = reactiveSource(0); processingFileEventCount = reactiveSource(0);
fileEventQueue = fileEventQueue =
new KeyedQueueProcessor( new QueueProcessor(
(items: FileEventItem[]) => this.handleFileEvent(items[0]), (items: FileEventItem[]) => this.handleFileEvent(items[0]),
{ suspended: true, batchSize: 1, concurrentLimit: 5, delay: 100, yieldThreshold: FileWatchEventQueueMax, totalRemainingReactiveSource: this.pendingFileEventCount, processingEntitiesReactiveSource: this.processingFileEventCount } { suspended: true, batchSize: 1, concurrentLimit: 5, delay: 100, yieldThreshold: FileWatchEventQueueMax, totalRemainingReactiveSource: this.pendingFileEventCount, processingEntitiesReactiveSource: this.processingFileEventCount }
).replaceEnqueueProcessor((items, newItem) => this.queueNextFileEvent(items, newItem)); ).replaceEnqueueProcessor((items, newItem) => this.queueNextFileEvent(items, newItem));
@@ -1622,21 +1722,32 @@ We can perform a command in this file.
this.conflictCheckQueue.enqueue(path); this.conflictCheckQueue.enqueue(path);
} }
_saveQueuedFiles = throttle(() => {
const saveData = this.replicationResultProcessor._queue.filter(e => e !== undefined && e !== null).map((e) => e?._id ?? "" as string) as string[];
const kvDBKey = "queued-files"
// localStorage.setItem(lsKey, saveData);
fireAndForget(() => this.kvDB.set(kvDBKey, saveData));
}, 100);
saveQueuedFiles() { saveQueuedFiles() {
const saveData = JSON.stringify(this.replicationResultProcessor._queue.map((e) => e._id)); this._saveQueuedFiles();
const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName();
localStorage.setItem(lsKey, saveData);
} }
async loadQueuedFiles() { async loadQueuedFiles() {
if (this.settings.suspendParseReplicationResult) return; if (this.settings.suspendParseReplicationResult) return;
if (!this.settings.isConfigured) return; if (!this.settings.isConfigured) return;
const lsKey = "obsidian-livesync-queuefiles-" + this.getVaultName(); const kvDBKey = "queued-files"
const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[]; // const ids = [...new Set(JSON.parse(localStorage.getItem(lsKey) || "[]"))] as string[];
const ids = [...new Set(await this.kvDB.get<string[]>(kvDBKey) ?? [])];
const batchSize = 100; const batchSize = 100;
const chunkedIds = arrayToChunkedArray(ids, batchSize); const chunkedIds = arrayToChunkedArray(ids, batchSize);
for await (const idsBatch of chunkedIds) { for await (const idsBatch of chunkedIds) {
const ret = await this.localDatabase.allDocsRaw<EntryDoc>({ keys: idsBatch, include_docs: true, limit: 100 }); const ret = await this.localDatabase.allDocsRaw<EntryDoc>({ keys: idsBatch, include_docs: true, limit: 100 });
this.replicationResultProcessor.enqueueAll(ret.rows.map(doc => doc.doc!)); const docs = ret.rows.filter(e => e.doc).map(e => e.doc) as PouchDB.Core.ExistingDocument<EntryDoc>[];
const errors = ret.rows.filter(e => !e.doc && !e.value.deleted);
if (errors.length > 0) {
Logger("Some queued processes were not resurrected");
Logger(JSON.stringify(errors), LOG_LEVEL_VERBOSE);
}
this.replicationResultProcessor.enqueueAll(docs);
await this.replicationResultProcessor.waitForPipeline(); await this.replicationResultProcessor.waitForPipeline();
} }
@@ -1658,34 +1769,43 @@ We can perform a command in this file.
const filename = this.getPathWithoutPrefix(doc); const filename = this.getPathWithoutPrefix(doc);
this.isTargetFile(filename).then((ret) => ret ? this.addOnHiddenFileSync.procInternalFile(filename) : Logger(`Skipped (Not target:${filename})`, LOG_LEVEL_VERBOSE)); this.isTargetFile(filename).then((ret) => ret ? this.addOnHiddenFileSync.procInternalFile(filename) : Logger(`Skipped (Not target:${filename})`, LOG_LEVEL_VERBOSE));
} else if (isValidPath(this.getPath(doc))) { } else if (isValidPath(this.getPath(doc))) {
this.storageApplyingProcessor.enqueueWithKey(doc.path, doc); this.storageApplyingProcessor.enqueue(doc);
} else { } else {
Logger(`Skipped: ${doc._id.substring(0, 8)}`, LOG_LEVEL_VERBOSE); Logger(`Skipped: ${doc._id.substring(0, 8)}`, LOG_LEVEL_VERBOSE);
} }
return; return;
}, { suspended: true, batchSize: 1, concurrentLimit: 10, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.databaseQueueCount }).startPipeline(); }, { suspended: true, batchSize: 1, concurrentLimit: 10, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.databaseQueueCount }).replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter(e => e._id != newItem._id);
return [...q, newItem];
}).startPipeline();
storageApplyingCount = reactiveSource(0); storageApplyingCount = reactiveSource(0);
storageApplyingProcessor = new KeyedQueueProcessor(async (docs: LoadedEntry[]) => { storageApplyingProcessor = new QueueProcessor(async (docs: LoadedEntry[]) => {
const entry = docs[0]; const entry = docs[0];
const path = this.getPath(entry); await serialized(entry.path, async () => {
Logger(`Processing ${path} (${entry._id.substring(0, 8)}: ${entry._rev?.substring(0, 5)}) :Started...`, LOG_LEVEL_VERBOSE); const path = this.getPath(entry);
const targetFile = this.vaultAccess.getAbstractFileByPath(this.getPathWithoutPrefix(entry)); Logger(`Processing ${path} (${entry._id.substring(0, 8)}: ${entry._rev?.substring(0, 5)}) :Started...`, LOG_LEVEL_VERBOSE);
if (targetFile instanceof TFolder) { const targetFile = this.vaultAccess.getAbstractFileByPath(this.getPathWithoutPrefix(entry));
Logger(`${this.getPath(entry)} is already exist as the folder`); if (targetFile instanceof TFolder) {
} else { Logger(`${this.getPath(entry)} is already exist as the folder`);
await this.processEntryDoc(entry, targetFile instanceof TFile ? targetFile : undefined); } else {
Logger(`Processing ${path} (${entry._id.substring(0, 8)} :${entry._rev?.substring(0, 5)}) : Done`); await this.processEntryDoc(entry, targetFile instanceof TFile ? targetFile : undefined);
} Logger(`Processing ${path} (${entry._id.substring(0, 8)} :${entry._rev?.substring(0, 5)}) : Done`);
}
});
return; return;
}, { suspended: true, batchSize: 1, concurrentLimit: 2, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.storageApplyingCount }).startPipeline() }, { suspended: true, batchSize: 1, concurrentLimit: 6, yieldThreshold: 1, delay: 0, totalRemainingReactiveSource: this.storageApplyingCount }).replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter(e => e._id != newItem._id);
return [...q, newItem];
}).startPipeline()
replicationResultCount = reactiveSource(0); replicationResultCount = reactiveSource(0);
replicationResultProcessor = new QueueProcessor(async (docs: PouchDB.Core.ExistingDocument<EntryDoc>[]) => { replicationResultProcessor = new QueueProcessor(async (docs: PouchDB.Core.ExistingDocument<EntryDoc>[]) => {
if (this.settings.suspendParseReplicationResult) return; if (this.settings.suspendParseReplicationResult) return;
const change = docs[0]; const change = docs[0];
if (!change) return;
if (isChunk(change._id)) { if (isChunk(change._id)) {
// SendSignal? // SendSignal?
// this.parseIncomingChunk(change); // this.parseIncomingChunk(change);
@@ -1722,16 +1842,19 @@ We can perform a command in this file.
this.databaseQueuedProcessor.enqueue(change); this.databaseQueuedProcessor.enqueue(change);
} }
return; return;
}, { batchSize: 1, suspended: true, concurrentLimit: 100, delay: 0, totalRemainingReactiveSource: this.replicationResultCount }).startPipeline().onUpdateProgress(() => { }, { batchSize: 1, suspended: true, concurrentLimit: 100, delay: 0, totalRemainingReactiveSource: this.replicationResultCount }).replaceEnqueueProcessor((queue, newItem) => {
const q = queue.filter(e => e._id != newItem._id);
return [...q, newItem];
}).startPipeline().onUpdateProgress(() => {
this.saveQueuedFiles(); this.saveQueuedFiles();
}); });
//---> Sync //---> Sync
parseReplicationResult(docs: Array<PouchDB.Core.ExistingDocument<EntryDoc>>) { parseReplicationResult(docs: Array<PouchDB.Core.ExistingDocument<EntryDoc>>) {
if (this.settings.suspendParseReplicationResult) { if (this.settings.suspendParseReplicationResult && !this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.suspend() this.replicationResultProcessor.suspend()
} }
this.replicationResultProcessor.enqueueAll(docs); this.replicationResultProcessor.enqueueAll(docs);
if (!this.settings.suspendParseReplicationResult) { if (!this.settings.suspendParseReplicationResult && this.replicationResultProcessor.isSuspended) {
this.replicationResultProcessor.resume() this.replicationResultProcessor.resume()
} }
} }
@@ -1746,8 +1869,10 @@ We can perform a command in this file.
// disable all sync temporary. // disable all sync temporary.
if (this.suspended) return; if (this.suspended) return;
await Promise.all(this.addOns.map(e => e.onResume())); await Promise.all(this.addOns.map(e => e.onResume()));
if (this.settings.liveSync) { if (this.settings.remoteType == REMOTE_COUCHDB) {
this.replicator.openReplication(this.settings, true, false); if (this.settings.liveSync) {
this.replicator.openReplication(this.settings, true, false, false);
}
} }
const q = activeDocument.querySelector(`.livesync-ribbon-showcustom`); const q = activeDocument.querySelector(`.livesync-ribbon-showcustom`);
@@ -1761,8 +1886,33 @@ We can perform a command in this file.
lastMessage = ""; lastMessage = "";
observeForLogs() { observeForLogs() {
const padSpaces = `\u{2007}`.repeat(10);
// const emptyMark = `\u{2003}`;
const rerenderTimer = new Map<string, [ReturnType<typeof setTimeout>, number]>;
const tick = reactiveSource(0);
function padLeftSp(num: number, mark: string) {
const numLen = `${num}`.length + 1;
const [timer, len] = rerenderTimer.get(mark) ?? [undefined, numLen];
if (num || timer) {
if (num) {
if (timer) clearTimeout(timer);
rerenderTimer.set(mark, [setTimeout(async () => {
rerenderTimer.delete(mark);
await delay(100);
tick.value = tick.value + 1;
}, 3000), Math.max(len, numLen)]);
}
return ` ${mark}${`${padSpaces}${num}`.slice(-(len))}`;
} else {
return "";
}
}
// const logStore // const logStore
const queueCountLabel = reactive(() => { const queueCountLabel = reactive(() => {
// For invalidating
// @ts-ignore
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const _ = tick.value;
const dbCount = this.databaseQueueCount.value; const dbCount = this.databaseQueueCount.value;
const replicationCount = this.replicationResultCount.value; const replicationCount = this.replicationResultCount.value;
const storageApplyingCount = this.storageApplyingCount.value; const storageApplyingCount = this.storageApplyingCount.value;
@@ -1770,13 +1920,13 @@ We can perform a command in this file.
const pluginScanCount = pluginScanningCount.value; const pluginScanCount = pluginScanningCount.value;
const hiddenFilesCount = hiddenFilesEventCount.value + hiddenFilesProcessingCount.value; const hiddenFilesCount = hiddenFilesEventCount.value + hiddenFilesProcessingCount.value;
const conflictProcessCount = this.conflictProcessQueueCount.value; const conflictProcessCount = this.conflictProcessQueueCount.value;
const labelReplication = replicationCount ? `📥 ${replicationCount} ` : ""; const labelReplication = padLeftSp(replicationCount, `📥`);
const labelDBCount = dbCount ? `📄 ${dbCount} ` : ""; const labelDBCount = padLeftSp(dbCount, `📄`);
const labelStorageCount = storageApplyingCount ? `💾 ${storageApplyingCount}` : ""; const labelStorageCount = padLeftSp(storageApplyingCount, `💾`);
const labelChunkCount = chunkCount ? `🧩${chunkCount} ` : ""; const labelChunkCount = padLeftSp(chunkCount, `🧩`);
const labelPluginScanCount = pluginScanCount ? `🔌${pluginScanCount} ` : ""; const labelPluginScanCount = padLeftSp(pluginScanCount, `🔌`);
const labelHiddenFilesCount = hiddenFilesCount ? `⚙️${hiddenFilesCount} ` : ""; const labelHiddenFilesCount = padLeftSp(hiddenFilesCount, `⚙️`)
const labelConflictProcessCount = conflictProcessCount ? `🔩${conflictProcessCount} ` : ""; const labelConflictProcessCount = padLeftSp(conflictProcessCount, `🔩`);
return `${labelReplication}${labelDBCount}${labelStorageCount}${labelChunkCount}${labelPluginScanCount}${labelHiddenFilesCount}${labelConflictProcessCount}`; return `${labelReplication}${labelDBCount}${labelStorageCount}${labelChunkCount}${labelPluginScanCount}${labelHiddenFilesCount}${labelConflictProcessCount}`;
}) })
const requestingStatLabel = reactive(() => { const requestingStatLabel = reactive(() => {
@@ -1795,6 +1945,11 @@ We can perform a command in this file.
let pushLast = ""; let pushLast = "";
let pullLast = ""; let pullLast = "";
let w = ""; let w = "";
const labels: Partial<Record<DatabaseConnectingStatus, string>> = {
"CONNECTED": "⚡",
"JOURNAL_SEND": "📦↑",
"JOURNAL_RECEIVE": "📦↓",
}
switch (e.syncStatus) { switch (e.syncStatus) {
case "CLOSED": case "CLOSED":
case "COMPLETED": case "COMPLETED":
@@ -1808,7 +1963,9 @@ We can perform a command in this file.
w = "💤"; w = "💤";
break; break;
case "CONNECTED": case "CONNECTED":
w = "⚡"; case "JOURNAL_SEND":
case "JOURNAL_RECEIVE":
w = labels[e.syncStatus] || "⚡";
pushLast = ((lastSyncPushSeq == 0) ? "" : (lastSyncPushSeq >= maxPushSeq ? " (LIVE)" : ` (${maxPushSeq - lastSyncPushSeq})`)); pushLast = ((lastSyncPushSeq == 0) ? "" : (lastSyncPushSeq >= maxPushSeq ? " (LIVE)" : ` (${maxPushSeq - lastSyncPushSeq})`));
pullLast = ((lastSyncPullSeq == 0) ? "" : (lastSyncPullSeq >= maxPullSeq ? " (LIVE)" : ` (${maxPullSeq - lastSyncPullSeq})`)); pullLast = ((lastSyncPullSeq == 0) ? "" : (lastSyncPullSeq >= maxPullSeq ? " (LIVE)" : ` (${maxPullSeq - lastSyncPullSeq})`));
break; break;
@@ -1821,11 +1978,15 @@ We can perform a command in this file.
return { w, sent, pushLast, arrived, pullLast }; return { w, sent, pushLast, arrived, pullLast };
}) })
const waitingLabel = reactive(() => { const waitingLabel = reactive(() => {
// For invalidating
// @ts-ignore
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const _ = tick.value;
const e = this.pendingFileEventCount.value; const e = this.pendingFileEventCount.value;
const proc = this.processingFileEventCount.value; const proc = this.processingFileEventCount.value;
const pend = e - proc; const pend = e - proc;
const labelProc = proc != 0 ? `${proc} ` : ""; const labelProc = padLeftSp(proc, ``);
const labelPend = pend != 0 ? ` 🛫${pend}` : ""; const labelPend = padLeftSp(pend, `🛫`);
return `${labelProc}${labelPend}`; return `${labelProc}${labelPend}`;
}) })
const statusLineLabel = reactive(() => { const statusLineLabel = reactive(() => {
@@ -1834,7 +1995,7 @@ We can perform a command in this file.
const waiting = waitingLabel.value; const waiting = waitingLabel.value;
const networkActivity = requestingStatLabel.value; const networkActivity = requestingStatLabel.value;
return { return {
message: `${networkActivity}Sync: ${w}${sent}${pushLast}${arrived}${pullLast}${waiting} ${queued}`, message: `${networkActivity}Sync: ${w} ${sent}${pushLast} ${arrived}${pullLast}${waiting}${queued}`,
}; };
}) })
const statusBarLabels = reactive(() => { const statusBarLabels = reactive(() => {
@@ -1845,31 +2006,20 @@ We can perform a command in this file.
message, status message, status
} }
}) })
let last = 0;
const applyToDisplay = () => { const applyToDisplay = throttle(() => {
const v = statusBarLabels.value; const v = statusBarLabels.value;
const now = Date.now();
if (now - last < 10) {
scheduleTask("applyToDisplay", 20, () => applyToDisplay());
return;
}
this.applyStatusBarText(v.message, v.status); this.applyStatusBarText(v.message, v.status);
last = now;
} }, 20);
statusBarLabels.onChanged(applyToDisplay); statusBarLabels.onChanged(applyToDisplay);
} }
applyStatusBarText(message: string, log: string) { applyStatusBarText(message: string, log: string) {
const newMsg = message; const newMsg = message.replace(/\n/g, "\\A ");
const newLog = log; const newLog = log.replace(/\n/g, "\\A ");
// scheduleTask("update-display", 50, () => {
this.statusBar?.setText(newMsg.split("\n")[0]); this.statusBar?.setText(newMsg.split("\n")[0]);
// const selector = `.CodeMirror-wrap,` +
// `.markdown-preview-view.cm-s-obsidian,` +
// `.markdown-source-view.cm-s-obsidian,` +
// `.canvas-wrapper,` +
// `.empty-state`
// ;
if (this.settings.showStatusOnEditor) { if (this.settings.showStatusOnEditor) {
const root = activeDocument.documentElement; const root = activeDocument.documentElement;
root.style.setProperty("--sls-log-text", "'" + (newMsg + "\\A " + newLog) + "'"); root.style.setProperty("--sls-log-text", "'" + (newMsg + "\\A " + newLog) + "'");
@@ -1877,7 +2027,6 @@ We can perform a command in this file.
// const root = activeDocument.documentElement; // const root = activeDocument.documentElement;
// root.style.setProperty("--log-text", "'" + (newMsg + "\\A " + newLog) + "'"); // root.style.setProperty("--log-text", "'" + (newMsg + "\\A " + newLog) + "'");
} }
// }, true);
scheduleTask("log-hide", 3000, () => { this.statusLog.value = "" }); scheduleTask("log-hide", 3000, () => { this.statusLog.value = "" });
@@ -1896,7 +2045,7 @@ We can perform a command in this file.
await this.applyBatchChange(); await this.applyBatchChange();
await Promise.all(this.addOns.map(e => e.beforeReplicate(showMessage))); await Promise.all(this.addOns.map(e => e.beforeReplicate(showMessage)));
await this.loadQueuedFiles(); await this.loadQueuedFiles();
const ret = await this.replicator.openReplication(this.settings, false, showMessage); const ret = await this.replicator.openReplication(this.settings, false, showMessage, false);
if (!ret) { if (!ret) {
if (this.replicator.remoteLockedAndDeviceNotAccepted) { if (this.replicator.remoteLockedAndDeviceNotAccepted) {
if (this.replicator.remoteCleaned && this.settings.useIndexedDBAdapter) { if (this.replicator.remoteCleaned && this.settings.useIndexedDBAdapter) {
@@ -1916,7 +2065,9 @@ Even if you choose to clean up, you will see this option again if you exit Obsid
await performRebuildDB(this, "localOnly"); await performRebuildDB(this, "localOnly");
} }
if (ret == CHOICE_CLEAN) { if (ret == CHOICE_CLEAN) {
const remoteDB = await this.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.getIsMobile(), true); const replicator = this.getReplicator();
if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
const remoteDB = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.getIsMobile(), true);
if (typeof remoteDB == "string") { if (typeof remoteDB == "string") {
Logger(remoteDB, LOG_LEVEL_NOTICE); Logger(remoteDB, LOG_LEVEL_NOTICE);
return false; return false;
@@ -2053,9 +2204,15 @@ Or if you are sure know what had been happened, we can unlock the database from
const syncFiles = filesStorage.filter((e) => onlyInStorageNames.indexOf(e.path) == -1); const syncFiles = filesStorage.filter((e) => onlyInStorageNames.indexOf(e.path) == -1);
Logger("Updating database by new files"); Logger("Updating database by new files");
const processStatus = {} as Record<string, string>;
const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const updateLog = throttle((key: string, msg: string) => {
processStatus[key] = msg;
const log = Object.values(processStatus).join("\n");
Logger(log, logLevel, "syncAll");
}, 25);
const initProcess = []; const initProcess = [];
const logLevel = showingNotice ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
const runAll = async<T>(procedureName: string, objects: T[], callback: (arg: T) => Promise<void>) => { const runAll = async<T>(procedureName: string, objects: T[], callback: (arg: T) => Promise<void>) => {
if (objects.length == 0) { if (objects.length == 0) {
Logger(`${procedureName}: Nothing to do`); Logger(`${procedureName}: Nothing to do`);
@@ -2077,12 +2234,14 @@ Or if you are sure know what had been happened, we can unlock the database from
failed++; failed++;
} }
if ((success + failed) % step == 0) { if ((success + failed) % step == 0) {
Logger(`${procedureName}: DONE:${success}, FAILED:${failed}, LAST:${processor._queue.length}`, logLevel, `log-${procedureName}`); const msg = `${procedureName}: DONE:${success}, FAILED:${failed}, LAST:${processor._queue.length}`;
updateLog(procedureName, msg);
} }
return; return;
}, { batchSize: 1, concurrentLimit: 10, delay: 0, suspended: true }, objects) }, { batchSize: 1, concurrentLimit: 10, delay: 0, suspended: true }, objects)
await processor.waitForPipeline(); await processor.waitForPipeline();
Logger(`${procedureName} All done: DONE:${success}, FAILED:${failed}`, logLevel, `log-${procedureName}`); const msg = `${procedureName} All done: DONE:${success}, FAILED:${failed}`;
updateLog(procedureName, msg)
} }
initProcess.push(runAll("UPDATE DATABASE", onlyInStorage, async (e) => { initProcess.push(runAll("UPDATE DATABASE", onlyInStorage, async (e) => {
if (!this.isFileSizeExceeded(e.stat.size)) { if (!this.isFileSizeExceeded(e.stat.size)) {
@@ -2116,7 +2275,6 @@ Or if you are sure know what had been happened, we can unlock the database from
const id = await this.path2id(getPathFromTFile(file)); const id = await this.path2id(getPathFromTFile(file));
const pair: FileDocPair = { file, id }; const pair: FileDocPair = { file, id };
return [pair]; return [pair];
// processSyncFile.enqueue(pair);
} }
, { batchSize: 1, concurrentLimit: 10, delay: 0, suspended: true }, syncFiles); , { batchSize: 1, concurrentLimit: 10, delay: 0, suspended: true }, syncFiles);
processPrepareSyncFile processPrepareSyncFile
@@ -2138,10 +2296,18 @@ Or if you are sure know what had been happened, we can unlock the database from
}, { batchSize: 1, concurrentLimit: 5, delay: 10, suspended: false } }, { batchSize: 1, concurrentLimit: 5, delay: 10, suspended: false }
)) ))
processPrepareSyncFile.startPipeline(); const allSyncFiles = syncFiles.length;
initProcess.push(async () => { let lastRemain = allSyncFiles;
await processPrepareSyncFile.waitForPipeline(); const step = 25;
}) const remainLog = (remain: number) => {
if (lastRemain - remain > step) {
const msg = ` CHECK AND SYNC: ${allSyncFiles - remain} / ${allSyncFiles}`;
updateLog("sync", msg);
lastRemain = remain;
}
}
processPrepareSyncFile.startPipeline().onUpdateProgress(() => remainLog(processPrepareSyncFile.totalRemaining + processPrepareSyncFile.nowProcessing))
initProcess.push(processPrepareSyncFile.waitForPipeline());
await Promise.all(initProcess); await Promise.all(initProcess);
// this.setStatusBarText(`NOW TRACKING!`); // this.setStatusBarText(`NOW TRACKING!`);
@@ -2501,38 +2667,39 @@ Or if you are sure know what had been happened, we can unlock the database from
conflictProcessQueueCount = reactiveSource(0); conflictProcessQueueCount = reactiveSource(0);
conflictResolveQueue = conflictResolveQueue =
new KeyedQueueProcessor(async (entries: { filename: FilePathWithPrefix }[]) => { new QueueProcessor(async (filenames: FilePathWithPrefix[]) => {
const entry = entries[0]; const filename = filenames[0];
const filename = entry.filename; await serialized(`conflict-resolve:${filename}`, async () => {
const conflictCheckResult = await this.checkConflictAndPerformAutoMerge(filename); const conflictCheckResult = await this.checkConflictAndPerformAutoMerge(filename);
if (conflictCheckResult === MISSING_OR_ERROR || conflictCheckResult === NOT_CONFLICTED || conflictCheckResult === CANCELLED) { if (conflictCheckResult === MISSING_OR_ERROR || conflictCheckResult === NOT_CONFLICTED || conflictCheckResult === CANCELLED) {
// nothing to do. // nothing to do.
return;
}
if (conflictCheckResult === AUTO_MERGED) {
//auto resolved, but need check again;
if (this.settings.syncAfterMerge && !this.suspended) {
//Wait for the running replication, if not running replication, run it once.
await shareRunningResult(`replication`, () => this.replicate());
}
Logger("conflict:Automatically merged, but we have to check it again");
this.conflictCheckQueue.enqueue(filename);
return;
}
if (this.settings.showMergeDialogOnlyOnActive) {
const af = this.getActiveFile();
if (af && af.path != filename) {
Logger(`${filename} is conflicted. Merging process has been postponed to the file have got opened.`, LOG_LEVEL_NOTICE);
return; return;
} }
} if (conflictCheckResult === AUTO_MERGED) {
Logger("conflict:Manual merge required!"); //auto resolved, but need check again;
await this.resolveConflictByUI(filename, conflictCheckResult); if (this.settings.syncAfterMerge && !this.suspended) {
//Wait for the running replication, if not running replication, run it once.
await shareRunningResult(`replication`, () => this.replicate());
}
Logger("conflict:Automatically merged, but we have to check it again");
this.conflictCheckQueue.enqueue(filename);
return;
}
if (this.settings.showMergeDialogOnlyOnActive) {
const af = this.getActiveFile();
if (af && af.path != filename) {
Logger(`${filename} is conflicted. Merging process has been postponed to the file have got opened.`, LOG_LEVEL_NOTICE);
return;
}
}
Logger("conflict:Manual merge required!");
await this.resolveConflictByUI(filename, conflictCheckResult);
});
}, { suspended: false, batchSize: 1, concurrentLimit: 1, delay: 10, keepResultUntilDownstreamConnected: false }).replaceEnqueueProcessor( }, { suspended: false, batchSize: 1, concurrentLimit: 1, delay: 10, keepResultUntilDownstreamConnected: false }).replaceEnqueueProcessor(
(queue, newEntity) => { (queue, newEntity) => {
const filename = newEntity.entity.filename; const filename = newEntity;
sendValue("cancel-resolve-conflict:" + filename, true); sendValue("cancel-resolve-conflict:" + filename, true);
const newQueue = [...queue].filter(e => e.key != newEntity.key); const newQueue = [...queue].filter(e => e != newEntity);
return [...newQueue, newEntity]; return [...newQueue, newEntity];
}); });
@@ -2544,10 +2711,9 @@ Or if you are sure know what had been happened, we can unlock the database from
const file = this.vaultAccess.getAbstractFileByPath(filename); const file = this.vaultAccess.getAbstractFileByPath(filename);
// if (!file) return; // if (!file) return;
// if (!(file instanceof TFile)) return; // if (!(file instanceof TFile)) return;
if ((file instanceof TFolder)) return; if ((file instanceof TFolder)) return [];
// Check again? // Check again?
return [filename];
return [{ key: filename, entity: { filename } }];
// this.conflictResolveQueue.enqueueWithKey(filename, { filename, file }); // this.conflictResolveQueue.enqueueWithKey(filename, { filename, file });
}, { }, {
suspended: false, batchSize: 1, concurrentLimit: 5, delay: 10, keepResultUntilDownstreamConnected: true, pipeTo: this.conflictResolveQueue, totalRemainingReactiveSource: this.conflictProcessQueueCount suspended: false, batchSize: 1, concurrentLimit: 5, delay: 10, keepResultUntilDownstreamConnected: true, pipeTo: this.conflictResolveQueue, totalRemainingReactiveSource: this.conflictProcessQueueCount
@@ -2859,7 +3025,9 @@ Or if you are sure know what had been happened, we can unlock the database from
} }
async dryRunGC() { async dryRunGC() {
await skipIfDuplicated("cleanup", async () => { await skipIfDuplicated("cleanup", async () => {
const remoteDBConn = await this.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.isMobile) const replicator = this.getReplicator();
if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
const remoteDBConn = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.isMobile)
if (typeof (remoteDBConn) == "string") { if (typeof (remoteDBConn) == "string") {
Logger(remoteDBConn); Logger(remoteDBConn);
return; return;
@@ -2873,8 +3041,10 @@ Or if you are sure know what had been happened, we can unlock the database from
async dbGC() { async dbGC() {
// Lock the remote completely once. // Lock the remote completely once.
await skipIfDuplicated("cleanup", async () => { await skipIfDuplicated("cleanup", async () => {
const replicator = this.getReplicator();
if (!(replicator instanceof LiveSyncCouchDBReplicator)) return;
this.getReplicator().markRemoteLocked(this.settings, true, true); this.getReplicator().markRemoteLocked(this.settings, true, true);
const remoteDBConn = await this.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.isMobile) const remoteDBConn = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.isMobile)
if (typeof (remoteDBConn) == "string") { if (typeof (remoteDBConn) == "string") {
Logger(remoteDBConn); Logger(remoteDBConn);
return; return;

View File

@@ -103,6 +103,9 @@
.canvas-wrapper::before, .canvas-wrapper::before,
.empty-state::before { .empty-state::before {
content: var(--sls-log-text, ""); content: var(--sls-log-text, "");
font-variant-numeric: tabular-nums;
font-variant-emoji: emoji;
tab-size: 4;
text-align: right; text-align: right;
white-space: pre-wrap; white-space: pre-wrap;
position: absolute; position: absolute;

View File

@@ -1,52 +1,41 @@
### 0.22.0 ### 0.23.0
A few years passed since Self-hosted LiveSync was born, and our codebase had been very complicated. This could be patient now, but it should be a tremendous hurt. Incredibly new features!
Therefore at v0.22.0, for future maintainability, I refined task scheduling logic totally.
Of course, I think this would be our suffering in some cases. However, I would love to ask you for your cooperation and contribution. Now, we can use object storage (MinIO, S3, R2 or anything you like) for synchronising! Moreover, despite that, we can use all the features as if we were using CouchDB.
Note: As this is a fairly experimental feature, there are some limitations.
- This is built on an append-only architecture. It will not shrink the used storage unless we perform a rebuild.
- A bit fragile. However, our version x.yy.0 is always so.
- During the first synchronisation, the entire history to date is transferred. For this reason, it is preferable to do this over a Wi-Fi network.
- Do not worry: from the second synchronisation onwards, only differences are transferred.
Sorry for being absent for so long, and thank you for your patience! I hope this feature empowers users to maintain independence and self-host their data, offering an alternative for those who prefer to manage their own storage solutions and avoid being caught out by a sudden change in business model.
Note: we got a very performance improvement. Of course, I use Self-hosted MinIO for testing and recommend this. It is for the same reason as using CouchDB. -- open, controllable, auditable and indeed already audited by numerous eyes.
Note at 0.22.2: **Now, to rescue mobile devices, Maximum file size is set to 50 by default**. Please configure the limit as you need. If you do not want to limit the sizes, set zero manually, please.
Let me write one more acknowledgement.
I have a lot of respect for remotely-save, even though it is sometimes treated as if it were a competitor. I think its architecture is great, embodying a different approach from my own of recreating history. This time, with all due respect, I have used some of its code as a reference.
Hooray for open source, and generous licences, and the sharing of knowledge by experts.
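For orientation, here is a minimal sketch of what an object-storage connection roughly looks like, using the field names handled elsewhere in this compare (`endpoint`, `region`, `bucket`, `accessKey`, `secretKey`, `useCustomRequestHandler`); the labels in the setting dialogue may differ, and every value below is a placeholder rather than a recommended default.

```ts
// Sketch only: field names are taken from the connection settings handled
// elsewhere in this compare; all values are placeholders, not defaults.
type ObjectStorageConnection = {
    endpoint: string;                 // e.g. a self-hosted MinIO endpoint
    region: string;
    bucket: string;
    accessKey: string;
    secretKey: string;
    useCustomRequestHandler: boolean; // see the setting dialogue for this option's exact behaviour
};

const exampleConnection: ObjectStorageConnection = {
    endpoint: "https://minio.example.com",
    region: "us-east-1",
    bucket: "obsidian-livesync",
    accessKey: "<access key>",
    secretKey: "<secret key>",
    useCustomRequestHandler: false,
};
```

As noted above, the first synchronisation pushes the whole history into the bucket, so it is worth sizing the bucket and scheduling that first run accordingly.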
#### Version history #### Version history
- 0.22.16: - 0.23.2
- Sorry for all the fixes to experimental features (these were also critical for dogfooding). The next release will focus on the main fixes. Thank you for your patience and understanding!
- Fixed: - Fixed:
- Fixed the issue that binary files were sometimes corrupted. - Journal Sync will not hang up during big replication, especially the initial one.
- Fixed customisation sync data could be corrupted. - All changes which have been replicated while rebuilding will not be postponed (Previous behaviour).
- Improved: - Improved:
- Now the remote database costs lower memory. - Now Journal Sync works efficiently in download and parse, or pack and upload.
- This release requires a brief wait on the first synchronisation, to track the latest changeset again. - Less server storage and faster packing/unpacking usage by the new chunk format.
- Description added for the `Device name`. - 0.23.1
- Refactored: - Fixed:
- Many type-errors have been resolved. - Now journal synchronisation considers untransferred each from sent and received.
- Obsolete file has been deleted. - Journal sync now handles retrying.
- 0.22.15: - Journal synchronisation no longer considers the synchronisation of chunks as revision updates (Simply ignored).
- Journal sync now splits the journal pack to prevent mobile device rebooting.
- Maintenance menus which had been on the command palette are now back in the maintain pane on the setting dialogue.
- Improved: - Improved:
- Faster start-up by removing too many logs which indicates normality - Now all changes which have been replicated while rebuilding will be postponed.
- By streamlined scanning of customised synchronisation extra phases have been deleted.
- 0.22.14: - 0.23.0
- New feature: - New feature:
- We can disable the status bar in the setting dialogue. - Now we can use Object Storage.
- Improved:
- Now some files are handled as correct data type.
- Customisation sync now uses the digest of each file for better performance.
- The status in the Editor now works performant.
- Refactored:
- Common functions have been ready and the codebase has been organised.
- Stricter type checking following TypeScript updates.
- Remove old iOS workaround for simplicity and performance.
- 0.22.13:
- Improved:
- Now using HTTP for the remote database URI warns of an error (on mobile) or notice (on desktop).
- Refactored:
- Dependencies have been polished.
- 0.22.12:
- Changed:
- The default settings has been changed.
- Improved:
- Default and preferred settings are applied on completion of the wizard.
- Fixed:
- Now Initialisation `Fetch` will be performed smoothly and there will be fewer conflicts.
- No longer stuck while Handling transferred or initialised documents.
... To continue on to `updates_old.md`.

View File

@@ -10,6 +10,75 @@ Note: we got a very performance improvement.
Note at 0.22.2: **Now, to rescue mobile devices, Maximum file size is set to 50 by default**. Please configure the limit as you need. If you do not want to limit the sizes, set zero manually, please. Note at 0.22.2: **Now, to rescue mobile devices, Maximum file size is set to 50 by default**. Please configure the limit as you need. If you do not want to limit the sizes, set zero manually, please.
#### Version history #### Version history
- 0.22.19
- Fixed:
- Data is no longer corrupted due to false BASE64 detections.
- Improved:
- Automatic data compression is now a bit more efficient.
- 0.22.18
- New feature (Very Experimental):
- Now we can use `Automatic data compression` to reduce the amount of traffic and the usage of the remote database.
- Please make sure all devices are updated to v0.22.18 before trying this feature.
- If you are using other utilities connected to your vault, please make sure that they are compatible.
- Note: Setting `File Compression` on the remote database helps shrink the size of the remote database. Please refer to the [Doc](https://docs.couchdb.org/en/stable/config/couchdb.html#couchdb/file_compression).
- 0.22.17:
- Fixed:
- Error handling on booting now works fine.
- Replication is now started automatically in LiveSync mode.
- Batch database update is now disabled in LiveSync mode.
- No longer automatic reconnection while off-focused.
- Status saves are thinned out.
- Now Self-hosted LiveSync waits until all files between the local database and storage have been checked.
- Improved:
- The job scheduler is now more robust and stable.
- The status indicator no longer flickers and keeps zero for a while.
- No longer meaningless frequent updates of status indicators.
- Now we can configure regular expression filters in handy UI. Thank you so much, @eth-p!
- `Fetch` or `Rebuild everything` is now more safely performed.
- Minor things
- Some utility functions have been added.
- Customisation sync now produces fewer wrong messages.
- Digging through the weeds to eradicate type errors.
- 0.22.16:
- Fixed:
- Fixed the issue that binary files were sometimes corrupted.
- Fixed customisation sync data could be corrupted.
- Improved:
- Now the remote database costs lower memory.
- This release requires a brief wait on the first synchronisation, to track the latest changeset again.
- Description added for the `Device name`.
- Refactored:
- Many type-errors have been resolved.
- Obsolete file has been deleted.
- 0.22.15:
- Improved:
- Faster start-up by removing excessive logs that merely indicate normality.
- Extra phases have been removed by streamlining the scanning for customisation sync.
... To continue on to `updates_old.md`.
- 0.22.14:
- New feature:
- We can disable the status bar in the setting dialogue.
- Improved:
- Now some files are handled as the correct data type.
- Customisation sync now uses the digest of each file for better performance.
- The status display in the editor now performs better.
- Refactored:
- Common functions have been prepared and the codebase has been organised.
- Stricter type checking following TypeScript updates.
- Remove old iOS workaround for simplicity and performance.
- 0.22.13:
- Improved:
- Using HTTP for the remote database URI now raises an error (on mobile) or a notice (on desktop).
- Refactored:
- Dependencies have been polished.
- 0.22.12:
- Changed:
- The default settings have been changed.
- Improved:
- Default and preferred settings are applied on completion of the wizard.
- Fixed:
- Now Initialisation `Fetch` will be performed smoothly and there will be fewer conflicts.
- No longer stuck while handling transferred or initialised documents.
- 0.22.11: - 0.22.11:
- Fixed: - Fixed:
- `Verify and repair all files` is no longer broken. - `Verify and repair all files` is no longer broken.