mirror of
https://github.com/vrtmrz/obsidian-livesync.git
synced 2026-02-25 13:38:49 +00:00
Compare commits
23 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 133f5a7109 |  |
|  | daa3feebf1 |  |
|  | 7b5f7d0fbf |  |
|  | 29532193cb |  |
|  | 5b4309c09d |  |
|  | 16ef582453 |  |
|  | 3e22f70c7a |  |
|  | 0a8dbe097e |  |
|  | 2c0fcf74d0 |  |
|  | a1ab1efd5d |  |
|  | c8fcf2d0d5 |  |
|  | c384e2f7fb |  |
|  | 99c1c7dc1a |  |
|  | 84adec4b1a |  |
|  | f0b202bd91 |  |
|  | d54b7e2d93 |  |
|  | 6952ef37f5 |  |
|  | 9630bcbae8 |  |
|  | c3f925ab9a |  |
|  | 034dc0538f |  |
|  | b6136df836 |  |
|  | 24aacdc2a1 |  |
|  | f91109b1ad |  |
README.md

```diff
@@ -2,7 +2,7 @@
 # Self-hosted LiveSync
 [Japanese docs](./README_ja.md) - [Chinese docs](./README_cn.md).
 
-Self-hosted LiveSync is a community-implemented synchronization plugin, available on every obsidian-compatible platform and using CouchDB as the server.
+Self-hosted LiveSync is a community-implemented synchronization plugin, available on every obsidian-compatible platform and using CouchDB or Object Storage (e.g., MinIO, S3, R2, etc.) as the server.
 
 ![obsidian_live_sync_demo](https://user-images.githubusercontent.com/45774780/137355323-f57a8b09-abf2-4501-836c-8cb7d2ff24a3.gif)
 
@@ -45,7 +45,7 @@ This plug-in might be useful for researchers, engineers, and developers with a n
 2. Configure plug-in in [Quick Setup](docs/quick_setup.md)
 
 > [!TIP]
-> We are still able to use IBM Cloudant. However, it is not recommended for several reasons nowadays. Here is [Setup IBM Cloudant](docs/setup_cloudant.md)
+> Now, fly.io has become not free. Fortunately, even though there are some issues, we are still able to use IBM Cloudant. Here is [Setup IBM Cloudant](docs/setup_cloudant.md). It will be updated soon!
 
 
 ## Information in StatusBar
```
docs/design_docs_of_journalsync.md (new file, +50 lines)

## The design document of the journal sync

Original title: Synchronise without CouchDB

### Goal

- Synchronise vaults without CouchDB

### Motivation

- Serving CouchDB is not particularly easy.
- A full-spec DBaaS (paid IBM Cloudant) is a bit expensive, and alternatives are lacking.
- Securing alternatives, rather than relying on just one protocol.

### Prerequisite

- We should have multiple implementations of the server software.
- We should also be able to use SaaS, with a choice of options.
- They should come at a reasonable cost, ideally free of charge for trials.
- We should be able to serve some instance of the server software ourselves, as OSS, with transparency, auditability, and evidence that audits actually took place.

### Methods and implementations

Ordinarily, the local PouchDB and the remote CouchDB are synchronised by sending each missing document through several conversations in their replication protocol. However, to achieve this plan, we cannot rely on CouchDB and its protocols. This limitation is harsh, but overcoming it means gaining new possibilities. After some trials, it was concluded that synchronisation could be completed even if the available actions were limited to uploading, downloading, and retrieving a listing. This means we can use any old-fashioned WebDAV server, or sophisticated "object storages" such as self-hosted MinIO, S3, R2, or anything we like. This is realised by each client sharing and complementing the differences of the journal. The focus is therefore on how to identify the differences and send them without dynamic communication.

All clients manage their data in PouchDB. This is probably known information, but PouchDB keeps its own journal.

First, each client records up to what point in the journal it sent last time. When sending, the client packs everything from that previous point to the latest entry, and also updates its record. This pack is uploaded to the server under a name starting with the timestamp of its creation. This is the send operation.

Conversely, when receiving, the packs on the server that have not yet been received are fetched in order. This is easy, as their names sort in date order. When the process completes successfully, the names of the received files are recorded. The journal entries from each pack are then reflected into the client's own database. Conflict resolution is left to PouchDB, so the client only needs to apply the differences. And here is the key: the client records the ID and revision of every document that arrived in the journal and was applied.

This key works when creating a pack. When creating a pack, the client omits any document recorded as received and applied, because "received and applied" means it has already been sent by another client and exists on the server. This ensures that unnecessary transmissions do not take place.

Synchronisation is then always started by receiving. This is a little trick to avoid including unnecessary documents in the pack.

These behaviours allow clients to voluntarily send and receive only the missing parts of the journal that are not yet stored on the server, without having to communicate with each other, and still keep a single, consistent journal on the server.

The source code that actually implements this has already been committed into the repository.
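To make the flow above concrete, here is a minimal TypeScript sketch. The storage interface (`list`/`upload`/`download`) and all names are illustrative, not the plugin's actual API; applying entries and resolving conflicts is deliberately delegated to a PouchDB-like callback, as the design prescribes.

```ts
type JournalEntry = { id: string; rev: string; doc: unknown };

// Any backend that can list, upload and download suffices (WebDAV, S3, MinIO, R2...).
interface RemoteStore {
    list(): Promise<string[]>;                          // pack names, sortable by date
    upload(name: string, body: string): Promise<void>;
    download(name: string): Promise<string>;
}

class JournalSyncSketch {
    lastSentSeq = 0;                        // how far into our own journal we sent last time
    receivedPacks = new Set<string>();      // names of packs already applied
    appliedDocs = new Set<string>();        // `${id}:${rev}` of entries received from others

    constructor(
        private store: RemoteStore,
        private readJournalSince: (seq: number) => { seq: number; entries: JournalEntry[] },
        private applyEntries: (entries: JournalEntry[]) => Promise<void>,
    ) { }

    // Receiving always comes first, so the pack we build next can omit
    // everything that already exists on the server.
    async receive() {
        const names = (await this.store.list()).sort(); // timestamp names sort in date order
        for (const name of names) {
            if (this.receivedPacks.has(name)) continue;
            const entries: JournalEntry[] = JSON.parse(await this.store.download(name));
            await this.applyEntries(entries);           // conflict resolution is PouchDB's job
            for (const e of entries) this.appliedDocs.add(`${e.id}:${e.rev}`);
            this.receivedPacks.add(name);
        }
    }

    async send() {
        const { seq, entries } = this.readJournalSince(this.lastSentSeq);
        // Omit documents recorded as received-and-applied: another client already sent them.
        const fresh = entries.filter(e => !this.appliedDocs.has(`${e.id}:${e.rev}`));
        if (fresh.length > 0) {
            const name = `${Date.now()}.pack.json`;     // name starts with the creation timestamp
            await this.store.upload(name, JSON.stringify(fresh));
            this.receivedPacks.add(name);               // never re-download our own pack
        }
        this.lastSentSeq = seq;
    }

    async synchronise() {
        await this.receive();   // the little trick: receive before send
        await this.send();
    }
}
```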
### Test strategy

This implementation replaces the synchronisation performed by CouchDB. Therefore, testing was done simply by comparing the same changes to the same vault, once replicated via CouchDB and once via this implementation.

### Documentation strategy

- Documentation should be done in the quick setup, at least.
- As several server implementations can be selected, specific configuration values are omitted from the description.
- A MinIO set-up guide might be nice to have. However, it is not considered essential.
- It would be a good opportunity to also publish these design documents.

### Consideration and Conclusion

This design offers a novel approach to journal synchronisation without relying on CouchDB. It leverages PouchDB's journaling capabilities and simple server-side storage for efficient data exchange. Hence, the new design could be said to have gained a broader outlook.
docs/quick_setup.md

```diff
@@ -16,7 +16,8 @@ There are three methods to set up Self-hosted LiveSync.
 
 ### 1. Using setup URIs
 
-> [!TIP] What is the setup URI? Why is it required?
+> [!TIP]
+> What is the setup URI? Why is it required?
 > The setup URI is the encrypted representation of the Self-hosted LiveSync configuration as a URI. It starts with `obsidian://setuplivesync?settings=`. It is encrypted with a passphrase, so it can be shared relatively securely between devices. It is a bit long, but it is one line. This allows a series of settings to be applied at once without any inconsistencies.
 >
 > If you have configured the remote database by [Automated setup on Fly.io](./setup_flyio.md#a-very-automated-setup) or [set up your server with the tool](./setup_own_server.md#1-generate-the-setup-uri-on-a-desktop-device-or-server), **you should have one of them**
@@ -44,19 +45,38 @@ If you do not have any setup URI, press the `start` button. The setting dialogue
 
 ![quick_setup_1](../images/quick_setup_1.png)
 
-#### Remote database configuration
-
-1. Enter the information for the database we have set up.
+#### Select the remote type
+
+1. Select the Remote Type from the dropdown list.
+   We now have a choice between CouchDB (and its compatibles) and Object Storage (MinIO, S3, R2). CouchDB is the first choice and is also recommended; support for Object Storage is an experimental feature.
+
+#### Remote configuration
+
+##### CouchDB
+
+Enter the information for the database we have set up.
 
 ![quick_setup_2](../images/quick_setup_2.png)
 
+##### Object Storage
+
-#### Test database connection and Check database configuration
+1. Enter the information for the S3 API and bucket.
+
+![quick_setup_3b](../images/quick_setup_3b.png)
+
+Note 1: if you use S3, you can leave the Endpoint URL empty.
+Note 2: if your Object Storage cannot be configured fully for CORS, you may be able to connect to the server by enabling the `Use Custom HTTP Handler` toggle.
+
+2. Press `Test` of `Test Connection` once and ensure you can connect to the Object Storage.
+
+#### Only CouchDB: Test database connection and Check database configuration
 
 We can check the connectivity to the database, and the database settings.
 
 ![quick_setup_3](../images/quick_setup_3.png)
 
-#### Check and Fix database configuration
+#### Only CouchDB: Check and Fix database configuration
 
 Check the database settings and fix any problems on the spot.
 
@@ -81,6 +101,8 @@ We should proceed to the Next step.
 #### Sync Settings
 Finally, finish the wizard by selecting a preset for synchronisation.
 
+Note: If you are going to use Object Storage, you cannot select `LiveSync`.
+
 ![quick_setup_9](../images/quick_setup_9.png)
 
 Select any synchronisation methods we want to use and `Apply`. If database initialisation is required, it will be performed at this time. When `All done!` is displayed, we are ready to synchronise.
@@ -104,4 +126,4 @@ And, please copy the setup URI by `Copy current settings as a new setup URI` and
 ## At the subsequent device
 After installing Self-hosted LiveSync on the first device, we should have a setup URI. **The first choice is to use it**. Please share it with the device you want to set up.
 
 It is completely the same as [Using setup URIs on the first device](#1-using-setup-uris). Please refer to it.
```
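For orientation, the Object Storage fields above map onto an S3 client configuration roughly like the following sketch, using the `@aws-sdk/client-s3` dependency this release introduces. All values are placeholders, and the plugin's actual wiring may differ.

```ts
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

async function testConnection() {
    const client = new S3Client({
        // For genuine AWS S3 the endpoint can be omitted (hence Note 1 above);
        // MinIO or R2 need an explicit endpoint URL.
        endpoint: "https://minio.example.com",
        region: "us-east-1",
        forcePathStyle: true, // commonly required for MinIO-style endpoints
        credentials: { accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_KEY" },
    });
    // "Test Connection" amounts to a request like this succeeding:
    await client.send(new ListObjectsV2Command({ Bucket: "obsidian-livesync", MaxKeys: 1 }));
}
```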
images/quick_setup_3b.png (new binary file, not shown; 74 KiB)
manifest.json

```diff
@@ -1,7 +1,7 @@
 {
     "id": "obsidian-livesync",
     "name": "Self-hosted LiveSync",
-    "version": "0.22.14",
+    "version": "0.23.2",
     "minAppVersion": "0.9.12",
     "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "author": "vorotamoroz",
```

package-lock.json (generated, 2748 lines changed; file diff suppressed because it is too large)
package.json

```diff
@@ -1,6 +1,6 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.22.14",
+    "version": "0.23.2",
     "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "main": "main.js",
     "type": "module",
@@ -54,7 +54,12 @@
         "typescript": "^5.4.2"
     },
     "dependencies": {
+        "@aws-sdk/client-s3": "^3.556.0",
+        "@smithy/fetch-http-handler": "^2.5.0",
+        "@smithy/protocol-http": "^3.3.0",
+        "@smithy/querystring-builder": "^2.2.0",
         "diff-match-patch": "^1.0.5",
         "fflate": "^0.8.2",
+        "idb": "^8.0.0",
         "minimatch": "^9.0.3",
         "xxhash-wasm": "0.4.2",
```
```diff
@@ -4,7 +4,7 @@ import { Notice, type PluginManifest, parseYaml, normalizePath, type ListedFiles
 import type { EntryDoc, LoadedEntry, InternalFileEntry, FilePathWithPrefix, FilePath, DocumentID, AnyEntry, SavingEntry } from "./lib/src/types";
 import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE, MODE_SELECTIVE } from "./lib/src/types";
 import { ICXHeader, PERIODIC_PLUGIN_SWEEP, } from "./types";
-import { createSavingEntryFromLoadedEntry, createTextBlob, delay, fireAndForget, getDocData, isDocContentSame, sendSignal, waitForSignal } from "./lib/src/utils";
+import { createSavingEntryFromLoadedEntry, createTextBlob, delay, fireAndForget, getDocData, isDocContentSame, throttle } from "./lib/src/utils";
 import { Logger } from "./lib/src/logger";
 import { readString, decodeBinary, arrayBufferToBase64, digestHash } from "./lib/src/strbin";
 import { serialized } from "./lib/src/lock";
@@ -305,7 +305,8 @@ export class ConfigSync extends LiveSyncCommands {
         }
         return false;
     }
-    createMissingConfigurationEntry() {
+    createMissingConfigurationEntry = throttle(() => this._createMissingConfigurationEntry(), 1000);
+    _createMissingConfigurationEntry() {
         let saveRequired = false;
         for (const v of this.pluginList) {
             const key = `${v.category}/${v.name}`;
@@ -335,7 +336,11 @@ export class ConfigSync extends LiveSyncCommands {
         try {
             const pluginData = await this.loadPluginData(path);
             if (pluginData) {
-                return [pluginData];
+                let newList = [...this.pluginList];
+                newList = newList.filter(x => x.documentPath != pluginData.documentPath);
+                newList.push(pluginData);
+                this.pluginList = newList;
+                pluginList.set(newList);
             }
             // Failed to load
             return [];
@@ -345,28 +350,9 @@ export class ConfigSync extends LiveSyncCommands {
             Logger(ex, LOG_LEVEL_VERBOSE);
         }
         return [];
-    }, { suspended: false, batchSize: 1, concurrentLimit: 5, delay: 100, yieldThreshold: 10, maintainDelay: false }).pipeTo(
-        new QueueProcessor(
-            async (pluginDataList) => {
-                // Concurrency is two, therefore, we can unlock the previous awaiting.
-                sendSignal("plugin-next-load");
-                let newList = [...this.pluginList];
-                for (const item of pluginDataList) {
-                    newList = newList.filter(x => x.documentPath != item.documentPath);
-                    newList.push(item)
-                }
-                this.pluginList = newList;
-                pluginList.set(newList);
-                if (pluginDataList.length != 10) {
-                    // If the queue is going to be empty, await subsequent for a while.
-                    await waitForSignal("plugin-next-load", 1000);
-                }
-                return;
-            }
-            , { suspended: false, batchSize: 10, concurrentLimit: 2, delay: 100, yieldThreshold: 25, totalRemainingReactiveSource: pluginScanningCount, maintainDelay: false })).startPipeline().root.onIdle(() => {
-                // Logger(`All files enumerated`, LOG_LEVEL_INFO, "get-plugins");
-                this.createMissingConfigurationEntry();
-            });
+    }, { suspended: false, batchSize: 1, concurrentLimit: 10, delay: 100, yieldThreshold: 10, maintainDelay: false, totalRemainingReactiveSource: pluginScanningCount }).startPipeline().root.onUpdateProgress(() => {
+        this.createMissingConfigurationEntry();
+    });
 
 
     async updatePluginList(showMessage: boolean, updatedDocumentPath?: FilePathWithPrefix): Promise<void> {
@@ -412,7 +398,7 @@ export class ConfigSync extends LiveSyncCommands {
     showJSONMergeDialogAndMerge(docA: LoadedEntry, docB: LoadedEntry, pluginDataA: PluginDataEx, pluginDataB: PluginDataEx): Promise<boolean> {
         const fileA = { ...pluginDataA.files[0], ctime: pluginDataA.files[0].mtime, _id: `${pluginDataA.documentPath}` as DocumentID };
         const fileB = pluginDataB.files[0];
-        const docAx = { ...docA, ...fileA } as LoadedEntry, docBx = { ...docB, ...fileB } as LoadedEntry
+        const docAx = { ...docA, ...fileA, datatype: "newnote" } as LoadedEntry, docBx = { ...docB, ...fileB, datatype: "newnote" } as LoadedEntry
         return serialized("config:merge-data", () => new Promise((res) => {
             Logger("Opening data-merging dialog", LOG_LEVEL_VERBOSE);
             // const docs = [docA, docB];
```
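The change above replaces a signal-based pipeline with a throttled `createMissingConfigurationEntry`. The actual `throttle` lives in `lib/src/utils`, which is not shown in this compare; a minimal sketch of the contract it appears to provide — collapse a burst of calls into at most one execution per interval, with a trailing call so the last request is never lost — might look like this:

```ts
// Sketch only: the real helper in lib/src/utils may differ in detail.
export function throttle<T extends unknown[]>(func: (...args: T) => void, timeout: number) {
    let last = 0;
    let timer: ReturnType<typeof setTimeout> | undefined;
    return (...args: T) => {
        const invoke = () => {
            last = Date.now();
            timer = undefined;
            func(...args); // run with the most recent arguments
        };
        const elapsed = Date.now() - last;
        if (elapsed >= timeout) invoke();                               // leading call
        else if (timer === undefined) timer = setTimeout(invoke, timeout - elapsed); // trailing call
    };
}
```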
```diff
@@ -9,7 +9,7 @@ import { serialized } from "./lib/src/lock";
 import { JsonResolveModal } from "./JsonResolveModal";
 import { LiveSyncCommands } from "./LiveSyncCommands";
 import { addPrefix, stripAllPrefixes } from "./lib/src/path";
-import { KeyedQueueProcessor, QueueProcessor } from "./lib/src/processor";
+import { QueueProcessor } from "./lib/src/processor";
 import { hiddenFilesEventCount, hiddenFilesProcessingCount } from "./lib/src/stores";
 
 export class HiddenFileSync extends LiveSyncCommands {
@@ -73,15 +73,15 @@ export class HiddenFileSync extends LiveSyncCommands {
     }
 
     procInternalFile(filename: string) {
-        this.internalFileProcessor.enqueueWithKey(filename, filename);
+        this.internalFileProcessor.enqueue(filename);
     }
-    internalFileProcessor = new KeyedQueueProcessor<string, any>(
+    internalFileProcessor = new QueueProcessor<string, any>(
         async (filenames) => {
             Logger(`START :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE);
             await this.syncInternalFilesAndDatabase("pull", false, false, filenames);
             Logger(`DONE :Applying hidden ${filenames.length} files change`, LOG_LEVEL_VERBOSE);
             return;
-        }, { batchSize: 100, concurrentLimit: 1, delay: 10, yieldThreshold: 10, suspended: false, totalRemainingReactiveSource: hiddenFilesEventCount }
+        }, { batchSize: 100, concurrentLimit: 1, delay: 10, yieldThreshold: 100, suspended: false, totalRemainingReactiveSource: hiddenFilesEventCount }
     );
 
     recentProcessedInternalFiles = [] as string[];
```
Deleted file (@@ -1,314 +0,0 @@) — the old `PluginAndTheirSettings` command:

```ts
import { normalizePath, type PluginManifest } from "./deps";
import type { DocumentID, EntryDoc, FilePathWithPrefix, LoadedEntry, SavingEntry } from "./lib/src/types";
import { LOG_LEVEL_INFO, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "./lib/src/types";
import { type PluginDataEntry, PERIODIC_PLUGIN_SWEEP, type PluginList, type DevicePluginList, PSCHeader, PSCHeaderEnd } from "./types";
import { createTextBlob, getDocData, isDocContentSame } from "./lib/src/utils";
import { Logger } from "./lib/src/logger";
import { PouchDB } from "./lib/src/pouchdb-browser.js";
import { isPluginMetadata, PeriodicProcessor } from "./utils";
import { PluginDialogModal } from "./dialogs";
import { NewNotice } from "./lib/src/wrapper";
import { versionNumberString2Number } from "./lib/src/strbin";
import { serialized, skipIfDuplicated } from "./lib/src/lock";
import { LiveSyncCommands } from "./LiveSyncCommands";

export class PluginAndTheirSettings extends LiveSyncCommands {

    get deviceAndVaultName() {
        return this.plugin.deviceAndVaultName;
    }
    pluginDialog: PluginDialogModal = null;
    periodicPluginSweepProcessor = new PeriodicProcessor(this.plugin, async () => await this.sweepPlugin(false));

    showPluginSyncModal() {
        if (this.pluginDialog != null) {
            this.pluginDialog.open();
        } else {
            this.pluginDialog = new PluginDialogModal(this.app, this.plugin);
            this.pluginDialog.open();
        }
    }

    hidePluginSyncModal() {
        if (this.pluginDialog != null) {
            this.pluginDialog.close();
            this.pluginDialog = null;
        }
    }
    onload(): void | Promise<void> {
        this.plugin.addCommand({
            id: "livesync-plugin-dialog",
            name: "Show Plugins and their settings",
            callback: () => {
                this.showPluginSyncModal();
            },
        });
        this.showPluginSyncModal();
    }
    onunload() {
        this.hidePluginSyncModal();
        this.periodicPluginSweepProcessor?.disable();
    }
    parseReplicationResultItem(doc: PouchDB.Core.ExistingDocument<EntryDoc>) {
        if (isPluginMetadata(doc._id)) {
            if (this.settings.notifyPluginOrSettingUpdated) {
                this.triggerCheckPluginUpdate();
                return true;
            }
        }
        return false;
    }
    async beforeReplicate(showMessage: boolean) {
        if (this.settings.autoSweepPlugins) {
            await this.sweepPlugin(showMessage);
        }
    }
    async onResume() {
        if (this.plugin.suspended)
            return;
        if (this.settings.autoSweepPlugins) {
            await this.sweepPlugin(false);
        }
        this.periodicPluginSweepProcessor.enable(this.settings.autoSweepPluginsPeriodic && !this.settings.watchInternalFileChanges ? (PERIODIC_PLUGIN_SWEEP * 1000) : 0);
    }
    async onInitializeDatabase(showNotice: boolean) {
        if (this.settings.usePluginSync) {
            try {
                Logger("Scanning plugins...");
                await this.sweepPlugin(showNotice);
                Logger("Scanning plugins done");
            } catch (ex) {
                Logger("Scanning plugins failed");
                Logger(ex, LOG_LEVEL_VERBOSE);
            }
        }
    }

    async realizeSettingSyncMode() {
        this.periodicPluginSweepProcessor?.disable();
        if (this.plugin.suspended)
            return;
        if (this.settings.autoSweepPlugins) {
            await this.sweepPlugin(false);
        }
        this.periodicPluginSweepProcessor.enable(this.settings.autoSweepPluginsPeriodic && !this.settings.watchInternalFileChanges ? (PERIODIC_PLUGIN_SWEEP * 1000) : 0);
    }

    triggerCheckPluginUpdate() {
        (async () => await this.checkPluginUpdate())();
    }

    async getPluginList(): Promise<{ plugins: PluginList; allPlugins: DevicePluginList; thisDevicePlugins: DevicePluginList; }> {
        const docList = await this.localDatabase.allDocsRaw<PluginDataEntry>({ startkey: PSCHeader, endkey: PSCHeaderEnd, include_docs: false });
        const oldDocs: PluginDataEntry[] = ((await Promise.all(docList.rows.map(async (e) => await this.localDatabase.getDBEntry(e.id as FilePathWithPrefix /* WARN!! THIS SHOULD BE WRAPPED */)))).filter((e) => e !== false) as LoadedEntry[]).map((e) => JSON.parse(getDocData(e.data)));
        const plugins: { [key: string]: PluginDataEntry[]; } = {};
        const allPlugins: { [key: string]: PluginDataEntry; } = {};
        const thisDevicePlugins: { [key: string]: PluginDataEntry; } = {};
        for (const v of oldDocs) {
            if (typeof plugins[v.deviceVaultName] === "undefined") {
                plugins[v.deviceVaultName] = [];
            }
            plugins[v.deviceVaultName].push(v);
            allPlugins[v._id] = v;
            if (v.deviceVaultName == this.deviceAndVaultName) {
                thisDevicePlugins[v.manifest.id] = v;
            }
        }
        return { plugins, allPlugins, thisDevicePlugins };
    }

    async checkPluginUpdate() {
        if (!this.plugin.settings.usePluginSync)
            return;
        await this.sweepPlugin(false);
        const { allPlugins, thisDevicePlugins } = await this.getPluginList();
        const arrPlugins = Object.values(allPlugins);
        let updateFound = false;
        for (const plugin of arrPlugins) {
            const ownPlugin = thisDevicePlugins[plugin.manifest.id];
            if (ownPlugin) {
                const remoteVersion = versionNumberString2Number(plugin.manifest.version);
                const ownVersion = versionNumberString2Number(ownPlugin.manifest.version);
                if (remoteVersion > ownVersion) {
                    updateFound = true;
                }
                if (((plugin.mtime / 1000) | 0) > ((ownPlugin.mtime / 1000) | 0) && (plugin.dataJson ?? "") != (ownPlugin.dataJson ?? "")) {
                    updateFound = true;
                }
            }
        }
        if (updateFound) {
            const fragment = createFragment((doc) => {
                doc.createEl("a", null, (a) => {
                    a.text = "There're some new plugins or their settings";
                    a.addEventListener("click", () => this.showPluginSyncModal());
                });
            });
            NewNotice(fragment, 10000);
        } else {
            Logger("Everything is up to date.", LOG_LEVEL_NOTICE);
        }
    }

    async sweepPlugin(showMessage = false, specificPluginPath = "") {
        if (!this.settings.usePluginSync)
            return;
        if (!this.localDatabase.isReady)
            return;
        // @ts-ignore
        const pl = this.app.plugins;
        const manifests: PluginManifest[] = Object.values(pl.manifests);
        let specificPlugin = "";
        if (specificPluginPath != "") {
            specificPlugin = manifests.find(e => e.dir.endsWith("/" + specificPluginPath))?.id ?? "";
        }
        await skipIfDuplicated("sweepplugin", async () => {
            const logLevel = showMessage ? LOG_LEVEL_NOTICE : LOG_LEVEL_INFO;
            if (!this.deviceAndVaultName) {
                Logger("You have to set your device name.", LOG_LEVEL_NOTICE);
                return;
            }
            Logger("Scanning plugins", logLevel);
            const oldDocs = await this.localDatabase.allDocsRaw<EntryDoc>({
                startkey: `ps:${this.deviceAndVaultName}-${specificPlugin}`,
                endkey: `ps:${this.deviceAndVaultName}-${specificPlugin}\u{10ffff}`,
                include_docs: true,
            });
            // Logger("OLD DOCS.", LOG_LEVEL_VERBOSE);
            // sweep current plugin.
            const procs = manifests.map(async (m) => {
                const pluginDataEntryID = `ps:${this.deviceAndVaultName}-${m.id}` as DocumentID;
                try {
                    if (specificPlugin && m.id != specificPlugin) {
                        return;
                    }
                    Logger(`Reading plugin:${m.name}(${m.id})`, LOG_LEVEL_VERBOSE);
                    const path = normalizePath(m.dir) + "/";
                    const files = ["manifest.json", "main.js", "styles.css", "data.json"];
                    const pluginData: { [key: string]: string; } = {};
                    for (const file of files) {
                        const thePath = path + file;
                        if (await this.plugin.vaultAccess.adapterExists(thePath)) {
                            pluginData[file] = await this.plugin.vaultAccess.adapterRead(thePath);
                        }
                    }
                    let mtime = 0;
                    if (await this.plugin.vaultAccess.adapterExists(path + "/data.json")) {
                        mtime = (await this.plugin.vaultAccess.adapterStat(path + "/data.json")).mtime;
                    }

                    const p: PluginDataEntry = {
                        _id: pluginDataEntryID,
                        dataJson: pluginData["data.json"],
                        deviceVaultName: this.deviceAndVaultName,
                        mainJs: pluginData["main.js"],
                        styleCss: pluginData["styles.css"],
                        manifest: m,
                        manifestJson: pluginData["manifest.json"],
                        mtime: mtime,
                        type: "plugin",
                    };
                    const blob = createTextBlob(JSON.stringify(p));
                    const d: SavingEntry = {
                        _id: p._id,
                        path: p._id as string as FilePathWithPrefix,
                        data: blob,
                        ctime: mtime,
                        mtime: mtime,
                        size: blob.size,
                        children: [],
                        datatype: "plain",
                        type: "plain"
                    };
                    Logger(`check diff:${m.name}(${m.id})`, LOG_LEVEL_VERBOSE);
                    await serialized("plugin-" + m.id, async () => {
                        const old = await this.localDatabase.getDBEntry(p._id as string as FilePathWithPrefix /* This also should be explained */, null, false, false);
                        if (old !== false) {
                            const oldData = { data: old.data, deleted: old._deleted };
                            const newData = { data: d.data, deleted: d._deleted };
                            if (await isDocContentSame(oldData.data, newData.data) && oldData.deleted == newData.deleted) {
                                Logger(`Nothing changed:${m.name}`);
                                return;
                            }
                        }
                        await this.localDatabase.putDBEntry(d);
                        Logger(`Plugin saved:${m.name}`, logLevel);
                    });
                } catch (ex) {
                    Logger(`Plugin save failed:${m.name}`, LOG_LEVEL_NOTICE);
                } finally {
                    oldDocs.rows = oldDocs.rows.filter((e) => e.id != pluginDataEntryID);
                }
                //remove saved plugin data.
            }
            );

            await Promise.all(procs);

            const delDocs = oldDocs.rows.map((e) => {
                // e.doc._deleted = true;
                if (e.doc.type == "newnote" || e.doc.type == "plain") {
                    e.doc.deleted = true;
                    if (this.settings.deleteMetadataOfDeletedFiles) {
                        e.doc._deleted = true;
                    }
                } else {
                    e.doc._deleted = true;
                }
                return e.doc;
            });
            Logger(`Deleting old plugin:(${delDocs.length})`, LOG_LEVEL_VERBOSE);
            await this.localDatabase.bulkDocsRaw(delDocs);
            Logger(`Scan plugin done.`, logLevel);
        });
    }

    async applyPluginData(plugin: PluginDataEntry) {
        await serialized("plugin-" + plugin.manifest.id, async () => {
            const pluginTargetFolderPath = normalizePath(plugin.manifest.dir) + "/";
            // @ts-ignore
            const stat = this.app.plugins.enabledPlugins.has(plugin.manifest.id) == true;
            if (stat) {
                // @ts-ignore
                await this.app.plugins.unloadPlugin(plugin.manifest.id);
                Logger(`Unload plugin:${plugin.manifest.id}`, LOG_LEVEL_NOTICE);
            }
            if (plugin.dataJson)
                await this.plugin.vaultAccess.adapterWrite(pluginTargetFolderPath + "data.json", plugin.dataJson);
            Logger("wrote:" + pluginTargetFolderPath + "data.json", LOG_LEVEL_NOTICE);
            if (stat) {
                // @ts-ignore
                await this.app.plugins.loadPlugin(plugin.manifest.id);
                Logger(`Load plugin:${plugin.manifest.id}`, LOG_LEVEL_NOTICE);
            }
        });
    }

    async applyPlugin(plugin: PluginDataEntry) {
        await serialized("plugin-" + plugin.manifest.id, async () => {
            // @ts-ignore
            const stat = this.app.plugins.enabledPlugins.has(plugin.manifest.id) == true;
            if (stat) {
                // @ts-ignore
                await this.app.plugins.unloadPlugin(plugin.manifest.id);
                Logger(`Unload plugin:${plugin.manifest.id}`, LOG_LEVEL_NOTICE);
            }

            const pluginTargetFolderPath = normalizePath(plugin.manifest.dir) + "/";
            if ((await this.plugin.vaultAccess.adapterExists(pluginTargetFolderPath)) === false) {
                await this.app.vault.adapter.mkdir(pluginTargetFolderPath);
            }
            await this.plugin.vaultAccess.adapterWrite(pluginTargetFolderPath + "main.js", plugin.mainJs);
            await this.plugin.vaultAccess.adapterWrite(pluginTargetFolderPath + "manifest.json", plugin.manifestJson);
            if (plugin.styleCss)
                await this.plugin.vaultAccess.adapterWrite(pluginTargetFolderPath + "styles.css", plugin.styleCss);
            if (stat) {
                // @ts-ignore
                await this.app.plugins.loadPlugin(plugin.manifest.id);
                Logger(`Load plugin:${plugin.manifest.id}`, LOG_LEVEL_NOTICE);
            }
        });
    }
}
```
```diff
@@ -1,4 +1,4 @@
-import { type EntryDoc, type ObsidianLiveSyncSettings, DEFAULT_SETTINGS, LOG_LEVEL_NOTICE } from "./lib/src/types";
+import { type EntryDoc, type ObsidianLiveSyncSettings, DEFAULT_SETTINGS, LOG_LEVEL_NOTICE, REMOTE_COUCHDB, REMOTE_MINIO } from "./lib/src/types";
 import { configURIBase } from "./types";
 import { Logger } from "./lib/src/logger";
 import { PouchDB } from "./lib/src/pouchdb-browser.js";
@@ -9,6 +9,7 @@ import { delay, fireAndForget } from "./lib/src/utils";
 import { confirmWithMessage } from "./dialogs";
 import { Platform } from "./deps";
 import { fetchAllUsedChunks } from "./lib/src/utils_couchdb";
+import type { LiveSyncCouchDBReplicator } from "./lib/src/LiveSyncReplicator.js";
 
 export class SetupLiveSync extends LiveSyncCommands {
     onunload() { }
@@ -50,7 +51,7 @@ export class SetupLiveSync extends LiveSyncCommands {
         const encryptingPassphrase = await askString(this.app, "Encrypt your settings", "The passphrase to encrypt the setup URI", "", true);
         if (encryptingPassphrase === false)
             return;
-        const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" };
+        const setting = { ...this.settings, configPassphraseStore: "", encryptedCouchDBConnection: "", encryptedPassphrase: "" } as Partial<ObsidianLiveSyncSettings>;
         if (stripExtra) {
             delete setting.pluginSyncExtendedSetting;
         }
@@ -311,6 +312,7 @@ Of course, we are able to disable these features.`
     }
     async suspendReflectingDatabase() {
         if (this.plugin.settings.doNotSuspendOnFetching) return;
+        if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
         Logger(`Suspending reflection: Database and storage changes will not be reflected in each other until completely finished the fetching.`, LOG_LEVEL_NOTICE);
         this.plugin.settings.suspendParseReplicationResult = true;
         this.plugin.settings.suspendFileWatching = true;
@@ -318,6 +320,7 @@ Of course, we are able to disable these features.`
     }
     async resumeReflectingDatabase() {
         if (this.plugin.settings.doNotSuspendOnFetching) return;
+        if (this.plugin.settings.remoteType == REMOTE_MINIO) return;
         Logger(`Database and storage reflection has been resumed!`, LOG_LEVEL_NOTICE);
         this.plugin.settings.suspendParseReplicationResult = false;
         this.plugin.settings.suspendFileWatching = false;
@@ -348,9 +351,10 @@ Of course, we are able to disable these features.`
         await this.plugin.resetLocalDatabase();
     }
     async fetchRemoteChunks() {
-        if (!this.plugin.settings.doNotSuspendOnFetching && this.plugin.settings.readChunksOnline) {
+        if (!this.plugin.settings.doNotSuspendOnFetching && this.plugin.settings.readChunksOnline && this.plugin.settings.remoteType == REMOTE_COUCHDB) {
             Logger(`Fetching chunks`, LOG_LEVEL_NOTICE);
-            const remoteDB = await this.plugin.getReplicator().connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true);
+            const replicator = this.plugin.getReplicator() as LiveSyncCouchDBReplicator;
+            const remoteDB = await replicator.connectRemoteCouchDBWithSetting(this.settings, this.plugin.getIsMobile(), true);
             if (typeof remoteDB == "string") {
                 Logger(remoteDB, LOG_LEVEL_NOTICE);
             } else {
@@ -377,9 +381,6 @@ Of course, we are able to disable these features.`
         await this.plugin.replicateAllFromServer(true);
         await delay(1000);
         await this.plugin.replicateAllFromServer(true);
-        // if (!tryLessFetching) {
-        //     await this.fetchRemoteChunks();
-        // }
         await this.resumeReflectingDatabase();
         await this.askHiddenFileConfiguration({ enableFetch: true });
     }
```
```diff
@@ -25,7 +25,7 @@ function readDocument(w: LoadedEntry) {
     if (isImage(w.path)) {
         return new Uint8Array(decodeBinary(w.data));
     }
-    if (w.data == "plain") return getDocData(w.data);
+    if (w.type == "plain" || w.datatype == "plain") return getDocData(w.data);
     if (isComparableTextDecode(w.path)) return readString(new Uint8Array(decodeBinary(w.data)));
     if (isComparableText(w.path)) return getDocData(w.data);
     try {
```
```diff
@@ -2,7 +2,7 @@
 import ObsidianLiveSyncPlugin from "./main";
 import { onDestroy, onMount } from "svelte";
 import type { AnyEntry, FilePathWithPrefix } from "./lib/src/types";
-import { getDocData, isDocContentSame, readAsBlob } from "./lib/src/utils";
+import { getDocData, isAnyNote, isDocContentSame, readAsBlob } from "./lib/src/utils";
 import { diff_match_patch } from "./deps";
 import { DocumentHistoryModal } from "./DocumentHistoryModal";
 import { isPlainText, stripAllPrefixes } from "./lib/src/path";
@@ -30,7 +30,7 @@
 
 type HistoryData = {
     id: string;
-    rev: string;
+    rev?: string;
     path: string;
     dirname: string;
     filename: string;
@@ -53,12 +53,12 @@
     if (docA.mtime < range_from_epoch) {
         continue;
     }
-    if (docA.type != "newnote" && docA.type != "plain") continue;
+    if (!isAnyNote(docA)) continue;
     const path = plugin.getPath(docA as AnyEntry);
     const isPlain = isPlainText(docA.path);
     const revs = await db.getRaw(docA._id, { revs_info: true });
-    let p: string = undefined;
-    const reversedRevs = revs._revs_info.reverse();
+    let p: string | undefined = undefined;
+    const reversedRevs = (revs._revs_info ?? []).reverse();
     const DIFF_DELETE = -1;
 
     const DIFF_EQUAL = 0;
@@ -177,7 +177,7 @@
     onDestroy(() => {});
 
     function showHistory(file: string, rev: string) {
-        new DocumentHistoryModal(plugin.app, plugin, file as unknown as FilePathWithPrefix, null, rev).open();
+        new DocumentHistoryModal(plugin.app, plugin, file as unknown as FilePathWithPrefix, undefined, rev).open();
     }
     function openFile(file: string) {
         plugin.app.workspace.openLinkText(file, file);
@@ -232,7 +232,7 @@
 <td>
     <span class="rev">
         {#if entry.isPlain}
-            <a on:click={() => showHistory(entry.path, entry.rev)}>{entry.rev}</a>
+            <a on:click={() => showHistory(entry.path, entry?.rev || "")}>{entry.rev}</a>
         {:else}
             {entry.rev}
         {/if}
```
```diff
@@ -6,15 +6,15 @@
 import { mergeObject } from "./utils";
 
 export let docs: LoadedEntry[] = [];
-export let callback: (keepRev: string, mergedStr?: string) => Promise<void> = async (_, __) => {
+export let callback: (keepRev?: string, mergedStr?: string) => Promise<void> = async (_, __) => {
     Promise.resolve();
 };
 export let filename: FilePath = "" as FilePath;
 export let nameA: string = "A";
 export let nameB: string = "B";
 export let defaultSelect: string = "";
-let docA: LoadedEntry = undefined;
-let docB: LoadedEntry = undefined;
+let docA: LoadedEntry;
+let docB: LoadedEntry;
 let docAContent = "";
 let docBContent = "";
 let objA: any = {};
@@ -28,7 +28,8 @@
 function docToString(doc: LoadedEntry) {
     return doc.datatype == "plain" ? getDocData(doc.data) : readString(new Uint8Array(decodeBinary(doc.data)));
 }
-function revStringToRevNumber(rev: string) {
+function revStringToRevNumber(rev?: string) {
+    if (!rev) return "";
     return rev.split("-")[0];
 }
 
@@ -44,15 +45,15 @@
 }
 function apply() {
     if (docA._id == docB._id) {
-        if (mode == "A") return callback(docA._rev, null);
-        if (mode == "B") return callback(docB._rev, null);
+        if (mode == "A") return callback(docA._rev!, undefined);
+        if (mode == "B") return callback(docB._rev!, undefined);
     } else {
-        if (mode == "A") return callback(null, docToString(docA));
-        if (mode == "B") return callback(null, docToString(docB));
+        if (mode == "A") return callback(undefined, docToString(docA));
+        if (mode == "B") return callback(undefined, docToString(docB));
     }
-    if (mode == "BA") return callback(null, JSON.stringify(objBA, null, 2));
-    if (mode == "AB") return callback(null, JSON.stringify(objAB, null, 2));
-    callback(null, null);
+    if (mode == "BA") return callback(undefined, JSON.stringify(objBA, null, 2));
+    if (mode == "AB") return callback(undefined, JSON.stringify(objAB, null, 2));
+    callback(undefined, undefined);
 }
 $: {
     if (docs && docs.length >= 1) {
@@ -133,13 +134,17 @@
 {/if}
 <div>
     {nameA}
-    {#if docA._id == docB._id} Rev:{revStringToRevNumber(docA._rev)} {/if} ,{new Date(docA.mtime).toLocaleString()}
+    {#if docA._id == docB._id}
+        Rev:{revStringToRevNumber(docA._rev)}
+    {/if} ,{new Date(docA.mtime).toLocaleString()}
     {docAContent.length} letters
 </div>
 
 <div>
     {nameB}
-    {#if docA._id == docB._id} Rev:{revStringToRevNumber(docB._rev)} {/if} ,{new Date(docB.mtime).toLocaleString()}
+    {#if docA._id == docB._id}
+        Rev:{revStringToRevNumber(docB._rev)}
+    {/if} ,{new Date(docB.mtime).toLocaleString()}
     {docBContent.length} letters
 </div>
```
```diff
@@ -1,12 +1,12 @@
 import { deleteDB, type IDBPDatabase, openDB } from "idb";
 export interface KeyValueDatabase {
-    get<T>(key: string): Promise<T>;
-    set<T>(key: string, value: T): Promise<IDBValidKey>;
-    del(key: string): Promise<void>;
+    get<T>(key: IDBValidKey): Promise<T>;
+    set<T>(key: IDBValidKey, value: T): Promise<IDBValidKey>;
+    del(key: IDBValidKey): Promise<void>;
     clear(): Promise<void>;
     keys(query?: IDBValidKey | IDBKeyRange, count?: number): Promise<IDBValidKey[]>;
     close(): void;
-    destroy(): void;
+    destroy(): Promise<void>;
 }
 const databaseCache: { [key: string]: IDBPDatabase<any> } = {};
 export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatabase> => {
@@ -20,24 +20,23 @@ export const OpenKeyValueDatabase = async (dbKey: string): Promise<KeyValueDatab
         db.createObjectStore(storeKey);
     },
 });
-let db: IDBPDatabase<any> = null;
-db = await dbPromise;
+const db = await dbPromise;
 databaseCache[dbKey] = db;
 return {
-    get<T>(key: string): Promise<T> {
-        return db.get(storeKey, key);
+    async get<T>(key: IDBValidKey): Promise<T> {
+        return await db.get(storeKey, key);
     },
-    set<T>(key: string, value: T) {
-        return db.put(storeKey, value, key);
+    async set<T>(key: IDBValidKey, value: T) {
+        return await db.put(storeKey, value, key);
     },
-    del(key: string) {
-        return db.delete(storeKey, key);
+    async del(key: IDBValidKey) {
+        return await db.delete(storeKey, key);
     },
-    clear() {
-        return db.clear(storeKey);
+    async clear() {
+        return await db.clear(storeKey);
     },
-    keys(query?: IDBValidKey | IDBKeyRange, count?: number) {
-        return db.getAllKeys(storeKey, query, count);
+    async keys(query?: IDBValidKey | IDBKeyRange, count?: number) {
+        return await db.getAllKeys(storeKey, query, count);
     },
     close() {
         delete databaseCache[dbKey];
```
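With the revised interface above, keys widen from `string` to any `IDBValidKey` (so composite array keys become legal) and every operation is async. A usage sketch; the database name, import path, and values are illustrative:

```ts
import { OpenKeyValueDatabase } from "./KeyValueDB"; // path assumed for illustration

async function example() {
    const kv = await OpenKeyValueDatabase("obsidian-livesync-example");
    await kv.set(["chunk", 42], { hash: "xxh:deadbeef" }); // arrays are valid IDBValidKeys
    const chunk = await kv.get<{ hash: string }>(["chunk", 42]);
    const firstTenKeys = await kv.keys(undefined, 10);
    kv.close(); // close() stays synchronous; destroy() is now awaitable
}
```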
src/MultipleRegExpControl.svelte (new file, +83 lines)

```svelte
<script lang="ts">
    export let patterns = [] as string[];
    export let originals = [] as string[];

    export let apply: (args: string[]) => Promise<void> = (_: string[]) => Promise.resolve();
    function revert() {
        patterns = [...originals];
    }
    const CHECK_OK = "✔";
    const CHECK_NG = "⚠";
    const MARK_MODIFIED = "✏ ";
    function checkRegExp(pattern: string) {
        if (pattern.trim() == "") return "";
        try {
            const _ = new RegExp(pattern);
            return CHECK_OK;
        } catch (ex) {
            return CHECK_NG;
        }
    }
    $: status = patterns.map((e) => checkRegExp(e));
    $: modified = patterns.map((e, i) => (e != (originals?.[i] ?? "") ? MARK_MODIFIED : ""));

    function remove(idx: number) {
        patterns[idx] = "";
    }
    function add() {
        patterns = [...patterns, ""];
    }
</script>

<ul>
    {#each patterns as pattern, idx}
        <li><label>{modified[idx]}{status[idx]}</label><input type="text" bind:value={pattern} class={modified[idx]} /><button class="iconbutton" on:click={() => remove(idx)}>🗑</button></li>
    {/each}
    <li>
        <label><button on:click={() => add()}>Add</button></label>
    </li>
    <li class="buttons">
        <button on:click={() => apply(patterns)} disabled={status.some((e) => e == CHECK_NG) || modified.every((e) => e == "")}>Apply</button>
        <button on:click={() => revert()} disabled={status.some((e) => e == CHECK_NG) || modified.every((e) => e == "")}>Revert</button>
    </li>
</ul>

<style>
    label {
        min-width: 4em;
        width: 4em;
        display: inline-flex;
        flex-direction: row;
        justify-content: flex-end;
    }
    ul {
        flex-grow: 1;
        display: inline-flex;
        flex-direction: column;
        list-style-type: none;
        margin-block-start: 0;
        margin-block-end: 0;
        margin-inline-start: 0px;
        margin-inline-end: 0px;
        padding-inline-start: 0;
    }
    li {
        padding: var(--size-2-1) var(--size-4-1);
        display: inline-flex;
        flex-grow: 1;
        align-items: center;
        justify-content: flex-end;
        gap: var(--size-4-2);
    }
    li input {
        min-width: 10em;
    }
    li.buttons {
    }
    button.iconbutton {
        max-width: 4em;
    }
    span.spacer {
        flex-grow: 1;
    }
</style>
```
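This is the "handy UI" for regular expression filters mentioned in the 0.22.17 notes below: each row validates its pattern with `new RegExp(...)` and marks it OK, broken, or modified. For context, a parent could mount the control like this (Svelte 3/4 component API; `containerEl`, the patterns, and the persistence callback are placeholders, and the plugin's actual settings-pane wiring may differ):

```ts
import MultipleRegExpControl from "./MultipleRegExpControl.svelte";

const containerEl = document.createElement("div"); // provided by the settings pane in practice

const control = new MultipleRegExpControl({
    target: containerEl,
    props: {
        patterns: ["\\.tmp$", "^assets/"],   // the editable copies
        originals: ["\\.tmp$", "^assets/"],  // what "Revert" restores
        apply: async (newPatterns: string[]) => {
            // persist the validated patterns here
            console.log("applied", newPatterns);
        },
    },
});
```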
src/ObsHttpHandler.ts (new file, +133 lines)

```ts
// This file is based on a file that was published by @remotely-save, under the Apache 2 License.
// I would love to express my deepest gratitude to the original authors for their hard work and dedication. Without their contributions, this project would not have been possible.
//
// Original implementation is here: https://github.com/remotely-save/remotely-save/blob/28b99557a864ef59c19d2ad96101196e401718f0/src/remoteForS3.ts

import {
    FetchHttpHandler,
    type FetchHttpHandlerOptions,
} from "@smithy/fetch-http-handler";
import { HttpRequest, HttpResponse, type HttpHandlerOptions } from "@smithy/protocol-http";
//@ts-ignore
import { requestTimeout } from "@smithy/fetch-http-handler/dist-es/request-timeout";
import { buildQueryString } from "@smithy/querystring-builder";
import { requestUrl, type RequestUrlParam } from "./deps";

////////////////////////////////////////////////////////////////////////////////
// special handler using Obsidian requestUrl
////////////////////////////////////////////////////////////////////////////////

/**
 * This is close to the origin implementation of FetchHttpHandler
 * https://github.com/aws/aws-sdk-js-v3/blob/main/packages/fetch-http-handler/src/fetch-http-handler.ts
 * that is released under the Apache 2 License.
 * But this uses Obsidian requestUrl instead.
 */
export class ObsHttpHandler extends FetchHttpHandler {
    requestTimeoutInMs: number | undefined;
    reverseProxyNoSignUrl: string | undefined;
    constructor(
        options?: FetchHttpHandlerOptions,
        reverseProxyNoSignUrl?: string
    ) {
        super(options);
        this.requestTimeoutInMs =
            options === undefined ? undefined : options.requestTimeout;
        this.reverseProxyNoSignUrl = reverseProxyNoSignUrl;
    }
    async handle(
        request: HttpRequest,
        { abortSignal }: HttpHandlerOptions = {}
    ): Promise<{ response: HttpResponse }> {
        if (abortSignal?.aborted) {
            const abortError = new Error("Request aborted");
            abortError.name = "AbortError";
            return Promise.reject(abortError);
        }

        let path = request.path;
        if (request.query) {
            const queryString = buildQueryString(request.query);
            if (queryString) {
                path += `?${queryString}`;
            }
        }

        const { port, method } = request;
        let url = `${request.protocol}//${request.hostname}${port ? `:${port}` : ""}${path}`;
        if (
            this.reverseProxyNoSignUrl !== undefined &&
            this.reverseProxyNoSignUrl !== ""
        ) {
            const urlObj = new URL(url);
            urlObj.host = this.reverseProxyNoSignUrl;
            url = urlObj.href;
        }
        const body =
            method === "GET" || method === "HEAD" ? undefined : request.body;

        const transformedHeaders: Record<string, string> = {};
        for (const key of Object.keys(request.headers)) {
            const keyLower = key.toLowerCase();
            if (keyLower === "host" || keyLower === "content-length") {
                continue;
            }
            transformedHeaders[keyLower] = request.headers[key];
        }

        let contentType: string | undefined = undefined;
        if (transformedHeaders["content-type"] !== undefined) {
            contentType = transformedHeaders["content-type"];
        }

        let transformedBody: any = body;
        if (ArrayBuffer.isView(body)) {
            transformedBody = new Uint8Array(body.buffer).buffer;
        }

        const param: RequestUrlParam = {
            body: transformedBody,
            headers: transformedHeaders,
            method: method,
            url: url,
            contentType: contentType,
        };

        const raceOfPromises = [
            requestUrl(param).then((rsp) => {
                const headers = rsp.headers;
                const headersLower: Record<string, string> = {};
                for (const key of Object.keys(headers)) {
                    headersLower[key.toLowerCase()] = headers[key];
                }
                const stream = new ReadableStream<Uint8Array>({
                    start(controller) {
                        controller.enqueue(new Uint8Array(rsp.arrayBuffer));
                        controller.close();
                    },
                });
                return {
                    response: new HttpResponse({
                        headers: headersLower,
                        statusCode: rsp.status,
                        body: stream,
                    }),
                };
            }),
            requestTimeout(this.requestTimeoutInMs),
        ];

        if (abortSignal) {
            raceOfPromises.push(
                new Promise<never>((resolve, reject) => {
                    abortSignal.onabort = () => {
                        const abortError = new Error("Request aborted");
                        abortError.name = "AbortError";
                        reject(abortError);
                    };
                })
            );
        }
        return Promise.race(raceOfPromises);
    }
}
```
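Presumably this handler is what the `Use Custom HTTP Handler` toggle in the Quick Setup swaps in, so that S3 requests travel through Obsidian's CORS-exempt `requestUrl` instead of the browser `fetch`. A sketch of how such a handler would plug into the S3 client; endpoint and credentials are placeholders, and the plugin's actual wiring may differ:

```ts
import { S3Client } from "@aws-sdk/client-s3";
import { ObsHttpHandler } from "./ObsHttpHandler";

const client = new S3Client({
    endpoint: "https://minio.example.com", // an Object Storage that cannot fully configure CORS
    region: "us-east-1",
    forcePathStyle: true,
    credentials: { accessKeyId: "ACCESS_KEY", secretAccessKey: "SECRET_KEY" },
    // Route every SDK request through Obsidian's requestUrl via the handler above.
    requestHandler: new ObsHttpHandler({ requestTimeout: 60_000 }),
});
```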
File diff suppressed because it is too large.
```diff
@@ -2,7 +2,7 @@ import type { SerializedFileAccess } from "./SerializedFileAccess";
 import { Plugin, TAbstractFile, TFile, TFolder } from "./deps";
 import { Logger } from "./lib/src/logger";
 import { shouldBeIgnored } from "./lib/src/path";
-import type { KeyedQueueProcessor } from "./lib/src/processor";
+import type { QueueProcessor } from "./lib/src/processor";
 import { LOG_LEVEL_NOTICE, type FilePath, type ObsidianLiveSyncSettings } from "./lib/src/types";
 import { delay } from "./lib/src/utils";
 import { type FileEventItem, type FileEventType, type FileInfo, type InternalFileInfo } from "./types";
@@ -19,7 +19,7 @@ type LiveSyncForStorageEventManager = Plugin &
     vaultAccess: SerializedFileAccess
 } & {
     isTargetFile: (file: string | TAbstractFile) => Promise<boolean>,
-    fileEventQueue: KeyedQueueProcessor<FileEventItem, any>,
+    fileEventQueue: QueueProcessor<FileEventItem, any>,
     isFileSizeExceeded: (size: number) => boolean;
 };
 
@@ -133,8 +133,7 @@ export class StorageEventManagerObsidian extends StorageEventManager {
     path: file.path,
     size: file.stat.size
 } as FileInfo : file as InternalFileInfo;
-
-this.plugin.fileEventQueue.enqueueWithKey(`file-${fileInfo.path}`, {
+this.plugin.fileEventQueue.enqueue({
     type,
     args: {
         file: fileInfo,
```
src/lib (submodule updated: 29e23f5763...da470ddc41)

src/main.ts (533 lines changed; file diff suppressed because it is too large)
```diff
@@ -451,7 +451,11 @@ export function isMarkedAsSameChanges(file: TFile | AnyEntry | string, mtimes: n
         return EVEN;
     }
 }
-export function compareFileFreshness(baseFile: TFile | AnyEntry, checkTarget: TFile | AnyEntry): typeof BASE_IS_NEW | typeof TARGET_IS_NEW | typeof EVEN {
+export function compareFileFreshness(baseFile: TFile | AnyEntry | undefined, checkTarget: TFile | AnyEntry | undefined): typeof BASE_IS_NEW | typeof TARGET_IS_NEW | typeof EVEN {
+    if (baseFile === undefined && checkTarget == undefined) return EVEN;
+    if (baseFile == undefined) return TARGET_IS_NEW;
+    if (checkTarget == undefined) return BASE_IS_NEW;
+
     const modifiedBase = baseFile instanceof TFile ? baseFile?.stat?.mtime ?? 0 : baseFile?.mtime ?? 0;
     const modifiedTarget = checkTarget instanceof TFile ? checkTarget?.stat?.mtime ?? 0 : checkTarget?.mtime ?? 0;
```
```diff
@@ -103,6 +103,9 @@
 .canvas-wrapper::before,
 .empty-state::before {
     content: var(--sls-log-text, "");
+    font-variant-numeric: tabular-nums;
+    font-variant-emoji: emoji;
+    tab-size: 4;
     text-align: right;
     white-space: pre-wrap;
     position: absolute;
```
updates.md (77 lines changed)

```diff
@@ -1,52 +1,41 @@
-### 0.22.0
-A few years have passed since Self-hosted LiveSync was born, and our codebase had become very complicated. This could be tolerated for now, but it would become a tremendous hurt.
-Therefore at v0.22.0, for future maintainability, I totally refined the task scheduling logic.
-
-Of course, I think this would be our suffering in some cases. However, I would love to ask you for your cooperation and contribution.
-
-Sorry for being absent so long. And thank you for your patience!
-
-Note: we got a great performance improvement.
-Note at 0.22.2: **Now, to rescue mobile devices, the maximum file size is set to 50 by default**. Please configure the limit as you need. If you do not want to limit the sizes, set it to zero manually, please.
+### 0.23.0
+Incredibly new features!
+
+Now, we can use object storage (MinIO, S3, R2 or anything you like) for synchronising! Moreover, despite that, we can use all the features as if we were using CouchDB.
+Note: As this is a pretty experimental feature, we have some limitations.
+- This is built on an append-only architecture. It will not shrink used storage unless we perform a rebuild.
+- A bit fragile. However, our version x.yy.0 always is.
+- During the first synchronisation, the entire history to date is transferred. For this reason, it is preferable to do this over a Wi-Fi network.
+- Do not worry: from the second synchronisation, we always transfer only differences.
+
+I hope this feature empowers users to maintain independence and self-host their data, offering an alternative for those who prefer to manage their own storage solutions and avoid being stuck on the wrong side of a sudden change in business model.
+
+Of course, I use self-hosted MinIO for testing and recommend it. It is for the same reason as using CouchDB -- open, controllable, auditable, and indeed already audited by numerous eyes.
+
+Let me write one more acknowledgement.
+
+I have a lot of respect for remotely-save, even though it is sometimes treated as if it were a competitor. I think it is a great architecture that embodies a different approach from mine of recreating history. This time, with all due respect, I have used some of its code as a reference.
+Hooray for open source, generous licences, and the sharing of knowledge by experts.
 
 #### Version history
+- 0.23.2
+  - Sorry for all the fixes to experimental features. (These things were also critical for dogfooding.) The next release will be the main fixes! Thank you for your patience and understanding!
+  - Fixed:
+    - Initialisation by `Fetch` is now performed smoothly, and there will be fewer conflicts.
+    - No longer stuck while handling transferred or initialised documents.
+    - Journal Sync will not hang up during big replications, especially the initial one.
+    - All changes which have been replicated while rebuilding will not be postponed (the previous behaviour).
+  - Improved:
+    - Journal Sync now works efficiently, in download-and-parse and in pack-and-upload.
+    - Less server storage and faster packing/unpacking, thanks to the new chunk format.
+- 0.23.1
+  - Fixed:
+    - `Verify and repair all files` is no longer broken.
+    - Journal synchronisation now considers what remains untransferred, both sent and received.
+    - Journal sync now handles retrying.
+    - Journal synchronisation no longer treats the synchronisation of chunks as revision updates (they are simply ignored).
+    - Journal sync now splits the journal pack to prevent mobile devices from rebooting.
+    - Maintenance menus which had been on the command palette are now back in the maintenance pane of the setting dialogue.
+  - Improved:
+    - Now all changes which have been replicated while rebuilding will be postponed.
+- 0.23.0
+  - New feature:
+    - Now we can use Object Storage.
-- 0.22.14:
-  - New feature:
-    - We can disable the status bar in the setting dialogue.
-  - Improved:
-    - Now some files are handled as the correct data type.
-    - Customisation sync now uses the digest of each file for better performance.
-    - The status display in the editor now performs better.
-  - Refactored:
-    - Common functions have been prepared and the codebase has been organised.
-    - Stricter type checking following TypeScript updates.
-    - Removed an old iOS workaround for simplicity and performance.
-- 0.22.13:
-  - Improved:
-    - Using HTTP for the remote database URI now raises a warning (on mobile) or a notice (on desktop).
-  - Refactored:
-    - Dependencies have been polished.
-- 0.22.12:
-  - Changed:
-    - The default settings have been changed.
-  - Improved:
-    - Default and preferred settings are applied on completion of the wizard.
-- 0.22.11:
-  - Fixed:
-    - `Verify and repair all files` is no longer broken.
-  - New feature:
-    - Now `Verify and repair all files` is able to...
-      - Restore a file if it exists only in the local database.
-      - Show the history.
-  - Improved:
-    - Performance improved.
-- 0.22.10
-  - Fixed:
-    - Unchanged hidden files and customisations are no longer saved and transferred.
-    - File integrity in the vault history now indicates the integrity correctly.
-  - Improved:
-    - In the report, the schema of the remote database URI is now printed.
 ... To continue on to `updates_old.md`.
```
```diff
@@ -10,6 +10,90 @@ Note: we got a great performance improvement.
 Note at 0.22.2: **Now, to rescue mobile devices, the maximum file size is set to 50 by default**. Please configure the limit as you need. If you do not want to limit the sizes, set it to zero manually, please.
 
 #### Version history
+- 0.22.19
+  - Fixed:
+    - No more data corruption due to false BASE64 detections.
+  - Improved:
+    - A bit more efficiency in automatic data compression.
+- 0.22.18
+  - New feature (Very Experimental):
+    - Now we can use `Automatic data compression` to reduce the amount of traffic and the usage of the remote database.
+      - Please make sure all devices are updated to v0.22.18 before trying this feature.
+      - If you are using some other utilities connected to your vault, please make sure that they are compatible.
+      - Note: Setting `File Compression` on the remote database works to shrink the size of the remote database. Please refer to the [Doc](https://docs.couchdb.org/en/stable/config/couchdb.html#couchdb/file_compression).
+- 0.22.17:
+  - Fixed:
+    - Error handling on booting now works fine.
+    - Replication is now started automatically in LiveSync mode.
+    - Batch database update is now disabled in LiveSync mode.
+    - No more automatic reconnection while off-focused.
+    - Status saves are thinned out.
+    - Now Self-hosted LiveSync waits until all files between the local database and storage have surely been checked.
+  - Improved:
+    - The job scheduler is now more robust and stable.
+    - The status indicator no longer flickers and keeps zero for a while.
+    - No more meaningless frequent updates of status indicators.
+    - Now we can configure regular expression filters in a handy UI. Thank you so much, @eth-p!
+    - `Fetch` or `Rebuild everything` is now performed more safely.
+  - Minor things
+    - Some utility functions have been added.
+    - Customisation sync now emits fewer wrong messages.
+    - Digging the weeds for eradication of type errors.
+- 0.22.16:
+  - Fixed:
+    - Fixed the issue that binary files were sometimes corrupted.
+    - Fixed possible corruption of customisation sync data.
+  - Improved:
+    - Now the remote database costs less memory.
+      - This release requires a brief wait on the first synchronisation, to track the latest changeset again.
+    - Description added for the `Device name`.
+  - Refactored:
+    - Many type errors have been resolved.
+    - An obsolete file has been deleted.
+- 0.22.15:
+  - Improved:
+    - Faster start-up by removing too many logs which indicated normality.
+    - Streamlined scanning of customisation sync: extra phases have been deleted.
 ... To continue on to `updates_old.md`.
 - 0.22.14:
   - New feature:
     - We can disable the status bar in the setting dialogue.
   - Improved:
     - Now some files are handled as the correct data type.
     - Customisation sync now uses the digest of each file for better performance.
     - The status display in the editor now performs better.
   - Refactored:
     - Common functions have been prepared and the codebase has been organised.
     - Stricter type checking following TypeScript updates.
     - Removed an old iOS workaround for simplicity and performance.
 - 0.22.13:
   - Improved:
     - Using HTTP for the remote database URI now raises a warning (on mobile) or a notice (on desktop).
   - Refactored:
     - Dependencies have been polished.
 - 0.22.12:
   - Changed:
     - The default settings have been changed.
   - Improved:
     - Default and preferred settings are applied on completion of the wizard.
   - Fixed:
     - Initialisation by `Fetch` is now performed smoothly, and there will be fewer conflicts.
     - No longer stuck while handling transferred or initialised documents.
 - 0.22.11:
   - Fixed:
     - `Verify and repair all files` is no longer broken.
   - New feature:
     - Now `Verify and repair all files` is able to...
       - Restore a file if it exists only in the local database.
       - Show the history.
   - Improved:
     - Performance improved.
 - 0.22.10
   - Fixed:
     - Unchanged hidden files and customisations are no longer saved and transferred.
     - File integrity in the vault history now indicates the integrity correctly.
   - Improved:
     - In the report, the schema of the remote database URI is now printed.
 - 0.22.9
   - Fixed:
     - Fixed a bug on `fetch chunks on demand` that could not fetch the chunks on demand.
```