Compare commits

..

18 Commits

Author SHA1 Message Date
vorotamoroz
6da982b54c WIP: Implement a feature that saves settings remotely and applies them automatically 2026-05-06 21:53:59 +09:00
vorotamoroz
70e709ec9a Write design docs
Co-authored-by: Copilot <copilot@github.com>
2026-05-06 12:10:50 +09:00
vorotamoroz
fa7ef62302 Fix: adjusting help
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:42:54 +09:00
vorotamoroz
81d8224330 bump
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:39:48 +09:00
vorotamoroz
cc466a4b3c ### Fixed
- Now larger settings can be exported and imported via QR code without issues. (#595)

- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:37:44 +09:00
vorotamoroz
ceebca7de9 Merge pull request #862 from fabiomanz/main
chore: remove obsolete `version` attribute from docker-compose.yml
2026-04-29 17:30:35 +09:00
Fabio
c2f696d0a4 chore: attribute version is obsolete 2026-04-29 07:07:45 +00:00
vorotamoroz
1aa7c45794 Fix the readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:55:34 +09:00
vorotamoroz
faefa80cbd Fix again 2026-04-29 12:40:40 +09:00
vorotamoroz
3737eacffd Fix readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:39:42 +09:00
vorotamoroz
4c0af0b608 Fixed(cli):
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:22:00 +09:00
vorotamoroz
bb69eb13e7 bump 2026-04-27 11:15:07 +09:00
vorotamoroz
7c9db6376f Fixed:
- The setup wizard no longer drops the username and password silently. (#865)
- Setup URI is now correctly imported (#859).
- A French translation has been added.
2026-04-27 11:14:06 +09:00
vorotamoroz
4c04e4e676 Merge pull request #863 from koteitan/fix/859-strip-trailing-slash-from-uri
fix: strip trailing slash from couchDB_URI to avoid double-slash 401
2026-04-27 11:08:11 +09:00
koteitan
14ec35b257 fix: strip trailing slash from couchDB_URI to avoid double-slash 401
When couchDB_URI ends with a trailing slash (e.g. https://host/), the
database name concatenation produces a double-slash path
(https://host//obsidiannotes), which causes CouchDB to reject requests
with 401 "Name or password is incorrect".

Strip trailing slashes from couchDB_URI / baseUri at the path
concatenation sites in:
- src/common/utils.ts (_requestToCouchDBFetch, _requestToCouchDB)
- src/features/LocalDatabaseMainte/CmdLocalDatabaseMainte.ts

The companion fix for the replication path is in the livesync-commonlib
submodule.

Ref: #859
2026-04-27 00:12:57 +09:00
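In code, the fix described in this commit amounts to normalising the base URI before the database name is appended. A minimal TypeScript sketch (the helper names are illustrative, not the actual functions in `src/common/utils.ts`):

```typescript
// Remove any trailing slashes so that `${base}/${db}` never yields "//".
function stripTrailingSlashes(uri: string): string {
    return uri.replace(/\/+$/, "");
}

// Build the database URL from a base URI that may or may not end in "/".
function buildDatabaseUrl(couchDB_URI: string, dbName: string): string {
    return `${stripTrailingSlashes(couchDB_URI)}/${dbName}`;
}
```

With this normalisation, `https://host/` and `https://host` both resolve to the same request path, avoiding the spurious 401.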
vorotamoroz
b609e4973c Merge remote-tracking branch 'refs/remotes/origin/main' 2026-04-25 20:37:08 +09:00
vorotamoroz
16804ed34c Merge pull request #842 from kdavh/patch-1
Update README.md, fix webpeer link
2026-04-25 19:07:56 +09:00
kdavh
12f04f6cf7 Update README.md, fix webpeer link 2026-03-28 12:47:28 -04:00
27 changed files with 624 additions and 188 deletions


@@ -24,7 +24,7 @@ Additionally, it supports peer-to-peer synchronisation using WebRTC now (experim
 - WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
 - Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
   - [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
-  - [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
+  - [webpeer](https://github.com/vrtmrz/obsidian-livesync/tree/main/src/apps/webpeer): A pseudo-client for receiving and sending data between devices.
 - A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
 - For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).

aggregator.html Normal file

@@ -0,0 +1,92 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Self-hosted LiveSync Setup QR Aggregator</title>
<style>
body { font-family: sans-serif; display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100vh; margin: 0; background-color: #f4f4f9; color: #333; }
.container { background: white; padding: 2rem; border-radius: 8px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); text-align: center; max-width: 90%; }
.progress { margin: 20px 0; font-size: 1.2rem; font-weight: bold; }
.status { margin-bottom: 20px; color: #666; }
.btn { display: inline-block; padding: 12px 24px; background-color: #7c4dff; color: white; text-decoration: none; border-radius: 4px; font-weight: bold; transition: background-color 0.2s; border: none; cursor: pointer; }
.btn:hover { background-color: #651fff; }
.btn:disabled { background-color: #ccc; cursor: not-allowed; }
.grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(40px, 1fr)); gap: 8px; margin: 20px 0; }
.tile { width: 40px; height: 40px; border: 2px solid #ddd; border-radius: 4px; display: flex; align-items: center; justify-content: center; font-size: 0.8rem; }
.tile.filled { background-color: #7c4dff; color: white; border-color: #7c4dff; }
</style>
</head>
<body>
<div class="container">
<h1>LiveSync Setup</h1>
<div id="app">
<p>Checking hash data...</p>
</div>
</div>
<script>
function updateUI() {
const hash = window.location.hash.substring(1);
const params = new URLSearchParams(hash);
const id = params.get('id');
const total = parseInt(params.get('n') || '0');
const index = parseInt(params.get('i') || '-1');
const data = params.get('d');
const app = document.getElementById('app');
if (!id || total <= 0 || index === -1 || !data) {
app.innerHTML = '<p class="status">Invalid setup URL. Please scan the QR code correctly.</p>';
return;
}
// Get session data
const storageKey = 'ls_agg_' + id;
let session = JSON.parse(localStorage.getItem(storageKey) || '{}');
// Save current data
session[index] = data;
localStorage.setItem(storageKey, JSON.stringify(session));
const receivedIndexes = Object.keys(session).map(Number);
const count = receivedIndexes.length;
let html = `
<div class="status">Session ID: ${id}</div>
<div class="progress">${count} / ${total} Loaded</div>
<div class="grid">
`;
for (let i = 0; i < total; i++) {
const isFilled = session[i] !== undefined;
html += `<div class="tile ${isFilled ? 'filled' : ''}">${i + 1}</div>`;
}
html += `</div>`;
if (count === total) {
const sortedData = Array.from({length: total}, (_, i) => session[i]).join('');
// Use the correct protocol for settings
const obsidianUri = `obsidian://setuplivesync?settingsQR=${sortedData}`;
html += `
<p>All parts have been collected!</p>
<a href="${obsidianUri}" class="btn">Open Obsidian to complete setup</a>
<p style="margin-top:20px; font-size:0.8rem; color: #999;">Note: If the button does not respond, please ensure you are opening this in a browser that can trigger Obsidian.</p>
`;
} else {
html += `
<p class="status">Please scan the next QR code.</p>
<button class="btn" disabled>Waiting...</button>
`;
}
app.innerHTML = html;
}
window.addEventListener('hashchange', updateUI);
updateUI();
</script>
</body>
</html>
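The page above reassembles a payload that was split across several QR codes, each encoding a URL fragment of the form `#id=<session>&n=<total>&i=<index>&d=<part>` (the parameters `updateUI()` parses). A sketch of the corresponding splitter side (hypothetical; the plugin's actual QR generation may differ):

```typescript
// Split an encoded settings string into `parts` chunks and build one
// aggregator URL fragment per chunk, matching what updateUI() expects.
function buildFragments(baseUrl: string, sessionId: string, data: string, parts: number): string[] {
    const chunkSize = Math.ceil(data.length / parts);
    const urls: string[] = [];
    for (let i = 0; i < parts; i++) {
        const d = data.slice(i * chunkSize, (i + 1) * chunkSize);
        // URLSearchParams handles the percent-encoding of each part.
        const params = new URLSearchParams({ id: sessionId, n: String(parts), i: String(i), d });
        urls.push(`${baseUrl}#${params.toString()}`);
    }
    return urls;
}
```

Each returned URL would then be rendered as one QR code; scanning them in any order fills the tile grid until all `n` parts are present.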


@@ -1,7 +1,6 @@
 # For details and other explanations about this file refer to:
 # https://github.com/vrtmrz/obsidian-livesync/blob/main/docs/setup_own_server.md#traefik
-version: "2.1"
 services:
   couchdb:
     image: couchdb:latest


@@ -0,0 +1,51 @@
# Auto Configuration via Remote Database
## Goal
To prevent fatal synchronisation issues and data corruption caused by misaligned settings across devices by introducing a mechanism that automatically fetches and applies shared configurations from the remote database.
## Motivation
In Obsidian LiveSync, inconsistencies in certain settings across devices (e.g., encryption algorithms, chunk splitting rules) lead to severe issues such as decryption failures or structural breakdowns (e.g., conflicting update loops, wasted storage).
To resolve this, we will introduce an "Auto Configuration" feature. Once database access is established, the plugin will fetch the "Shared Settings" stored on the remote database and automatically keep the local settings up to date.
## Prerequisites
* The configuration parameters must be strictly categorised into synchronised and non-synchronised scopes.
* The feature must be opt-in to prevent unexpected setting overwrites.
* The remote configuration must act as the Single Source of Truth.
## Outlined Methods and Implementation Plans
### 1. Scope of Synchronised Settings
Settings are strictly divided into "shared targets" (defined as constants) and "excluded targets".
* **Synchronised Targets (Centrally managed on the Remote):**
* **⚡ Efficiency-affecting Settings:** `hashAlg`, `chunkSplitterVersion`, `enableChunkSplitterV2`, `useSegmenter`, `minimumChunkSize`, `customChunkSize`
* *Note: To ensure resilience against future expansions, these target keys must be defined as constants (e.g., an array) within the codebase, allowing for flexible programmatic processing.*
* **Excluded Targets (Kept locally):**
* **🔴 Rebuild-Requiring Settings (Incompatible Changes):** `encrypt`, `usePathObfuscation`, `E2EEAlgorithm`, `useDynamicIterationCount`. Changing these requires a full local database rebuild to maintain integrity. Therefore, they are excluded from silent "Auto Configuration" and will continue to rely entirely on the explicit "Tweak Mismatch" resolution dialogue flow.
* **🛑 Environment Blockers:** `handleFilenameCaseSensitive`. This setting depends inherently on the OS's file system (e.g., Windows/macOS being typically case-insensitive). Attempting to auto-configure this to a state unsupported by the local environment will cause silent corruption. Therefore, it is strictly excluded from Auto Configuration. If a mismatch is detected, the plugin must explicitly block synchronisation with a fatal error.
* **"Chicken-and-egg" Settings:** `couchDB_URI`, `passphrase`, `remoteType`, and Bucket configurations—settings inherently required to connect to and decrypt the remote database in the first place.
* **🟢 Client-specific Behaviour & UX:** UI options (e.g., `showVerboseLog`), batch sizes, synchronisation trigger settings, and local file rules, as these are expected to vary per device.
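The design note above calls for the shared target keys to be defined as constants for programmatic processing. A sketch of how that could look (names are hypothetical; the actual constants in the codebase may differ):

```typescript
// Keys that are centrally managed on the remote (efficiency-affecting settings).
const SHARED_SETTING_KEYS = [
    "hashAlg",
    "chunkSplitterVersion",
    "enableChunkSplitterV2",
    "useSegmenter",
    "minimumChunkSize",
    "customChunkSize",
] as const;

type SharedKey = (typeof SHARED_SETTING_KEYS)[number];

// Extract only the shared subset of a full settings object for upload;
// excluded targets (encryption, environment blockers, UX options) never pass through.
function pickSharedSettings(settings: Record<string, unknown>): Partial<Record<SharedKey, unknown>> {
    const shared: Partial<Record<SharedKey, unknown>> = {};
    for (const key of SHARED_SETTING_KEYS) {
        if (key in settings) shared[key] = settings[key];
    }
    return shared;
}
```

Because the key list is a single constant, adding a future shared setting is a one-line change, which is the resilience property the note asks for.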
### 2. Opt-in, Initialisation, and Activation Process
To prevent accidents where settings are unexpectedly altered, this feature is strictly "opt-in".
When enabling the feature, the plugin secures user consent via the following dialogue flow, depending on the state of the Remote DB:
1. The user turns on "Auto Configuration".
2. The plugin attempts to fetch the configuration document from the Remote database.
3. **If the configuration document does NOT exist on the Remote:**
* Dialogue: "No shared configuration was found on the remote database. Would you like to save this device's current settings to the remote as the standard configuration and enable auto-configuration?"
* Upon consent: The plugin writes the current local settings (only the target keys) to the Remote, appending a timestamp.
4. **If the configuration document exists on the Remote:**
* Dialogue: "A shared configuration was found on the remote database. Would you like to overwrite this device's settings with the remote standard and enable auto-configuration?"
* Upon consent: The plugin fetches the settings from the Remote and applies them locally.
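The two consent branches above can be sketched as a single function (all names are hypothetical; the real plugin performs these steps asynchronously against the remote database and its dialogue system):

```typescript
// Hypothetical shape of the shared configuration document on the remote.
interface SharedConfigDoc {
    settings: Record<string, unknown>;
    modified: number; // last-modified timestamp (epoch milliseconds)
}

// Synchronous sketch of the opt-in flow described in steps 1-4.
function enableAutoConfiguration(
    fetchRemoteConfig: () => SharedConfigDoc | undefined,
    askUser: (message: string) => boolean,
    localSettings: Record<string, unknown>,
    applyLocally: (settings: Record<string, unknown>) => void,
    writeRemote: (doc: SharedConfigDoc) => void
): boolean {
    const remote = fetchRemoteConfig();
    if (remote === undefined) {
        // Step 3: nothing on the remote yet; offer to publish this device's settings.
        if (!askUser("No shared configuration found. Save this device's settings as the standard?")) return false;
        writeRemote({ settings: localSettings, modified: Date.now() });
    } else {
        // Step 4: a shared configuration exists; offer to adopt it.
        if (!askUser("A shared configuration was found. Overwrite this device's settings?")) return false;
        applyLocally(remote.settings);
    }
    return true;
}
```

Keeping both branches behind an explicit `askUser` call is what makes the feature opt-in: no settings move in either direction without consent.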
### 3. Version Control (Timestamps) and Continuous Synchronisation
* **Single Source of Truth:** The configuration document stored in the Remote database is always treated as the definitive master record.
* **Timestamp Management:** The configuration document saved on the Remote holds a last-modified timestamp (or version number).
* **Update Flow:** When a user alters and saves settings locally, the plugin communicates with the Remote to ensure it possesses the latest state before applying changes. It then writes back the updated settings to the Remote, assigning a newer timestamp.
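The update flow above can be reduced to a compare-then-write rule over the timestamp (a sketch under the stated "remote wins" assumption; the real implementation must additionally handle CouchDB document revisions and conflicts):

```typescript
interface VersionedConfig {
    settings: Record<string, unknown>;
    modified: number; // epoch milliseconds
}

// Decide what the new master record should be, treating the remote
// document as the Single Source of Truth.
function reconcile(local: VersionedConfig, remote: VersionedConfig): VersionedConfig {
    // If the remote is newer than the state the local edit was based on,
    // the remote wins and the local change is discarded.
    if (remote.modified > local.modified) return remote;
    // Otherwise the local edit becomes the new master, with a strictly newer timestamp.
    return { settings: local.settings, modified: Math.max(local.modified, remote.modified) + 1 };
}
```

This is why the plugin must fetch the latest remote state before saving: a stale local edit is detected purely by comparing `modified` values.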
### 4. UX Considerations for Offline Scenarios
If a user opens the settings screen while offline and attempts to edit shared settings, the plugin must explicitly communicate the limitations:
* **Warning Notice:** "Unable to fetch the latest settings from the server. If you modify settings now, they might be overwritten by the server's configuration the next time you connect (Alternatively, shared settings can only be saved whilst online)."
* *Rationale:* This prevents user confusion and wasted effort, mitigating frustrations such as "I changed my settings, but they reverted themselves".

docs/setting_spec.md Normal file

@@ -0,0 +1,40 @@
# Synchronisation Settings Consistency: Impact Categorisation
Categorisation of impacts when synchronisation settings are inconsistent across devices.
## Exceptions
* **DB & Remote Connection:** `couchDB_URI`, `couchDB_DBNAME`, `remoteType`, `useJWT`, various bucket settings (`accessKey`, `bucket`, `endpoint`, etc.)
These settings are required to establish a connection to the remote storage. If they do not match, the user should be asked to resolve the inconsistency. Hence, these settings are not categorised below; they require a separate handling process.
## 💀 Environment Blockers (Unsyncable Fatal Inconsistencies)
If this setting is inconsistent with the device's physical capabilities, it is physically impossible to sync. This cannot be auto-configured and must explicitly block synchronisation.
* **File System Constraint:** `handleFilenameCaseSensitive` (If the remote expects case sensitivity but the local filesystem is inherently case-insensitive, they cannot safely merge and will result in silent file corruption.)
## 🔴 Impossible to function if inconsistent (Synchronisation Failure / Data Corruption / Logical Breakdown)
If these do not match, it causes fatal issues such as decryption failure, architecture breakdown due to chunk hash mismatches, or unintended overwriting loops. These items must match perfectly.
* **Encryption Settings:** `encrypt`, `passphrase`, `E2EEAlgorithm`, `usePathObfuscation`
## 🟡 Slightly inefficient but no corruption (Systemic Inefficiency)
Synchronisation completes without corruption, but systemic inefficiencies arise, such as increased storage consumption due to ineffective deduplication.
* **Chunk Algorithms:** `hashAlg`, `chunkSplitterVersion`, `enableChunkSplitterV2`, `useSegmenter`
* **Chunk Size:** `minimumChunkSize`, `customChunkSize`
* **Cache & Tuning:** `useEden`, `maxChunksInEden`, `maxTotalLengthInEden`, `enableCompression`
## 🟢 No problem (Client-specific behaviour / UX / Performance)
Differences only affect device-specific processing timing or user experience, and do not lead to DB corruption or fatal synchronisation loops.
* **UI, Logs & Notifications:** `showVerboseLog`, `showStatusOnEditor`, `networkWarningStyle`, `displayLanguage`, `hideFileWarningNotice`, `writeLogToTheFile`, etc.
* **Synchronisation Triggers:** `liveSync`, `syncOnSave`, `syncOnStart`, `syncOnFileOpen`, `syncMinimumInterval`
* **Local File Rules:** `trashInsteadDelete`, `doNotDeleteFolder`
* **Target Filtering:** `syncOnlyRegEx`, `syncIgnoreRegEx`, `syncInternalFiles`
* **Conflict Resolution & Merging:** `resolveConflictsByNewerFile`, `disableMarkdownAutoMerge`, `checkConflictOnlyOnOpen`, `showMergeDialogOnlyOnActive`
* **Plugin Synchronisation:** `usePluginSync`, `showOwnPlugins`, `autoSweepPlugins`
* **Setting Check Mechanism:** `disableCheckingConfigMismatch`
* **Performance Adjustments (Client-side considerations):**
* Transfer/Save Batching: `batch_size`, `batches_limit`, `batchSave`, `batchSaveMinimumDelay`, `batchSaveMaximumDelay`
* Fetch Speed: `concurrencyOfReadChunksOnline`, `minimumIntervalOfReadChunksOnline`
* Cache & Tuning: `processSmallFilesInUIThread`, `disableWorkerForGeneratingChunks`
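A mismatch checker built on this categorisation could map each key to a severity and report the worst one found (key lists abbreviated; all names hypothetical):

```typescript
type Severity = "blocker" | "fatal" | "inefficient" | "harmless";

// Abbreviated mapping following the categories above.
const SETTING_SEVERITY: Record<string, Severity> = {
    handleFilenameCaseSensitive: "blocker", // 💀 Environment Blocker
    encrypt: "fatal",                        // 🔴 decryption failure if mismatched
    passphrase: "fatal",
    E2EEAlgorithm: "fatal",
    usePathObfuscation: "fatal",
    hashAlg: "inefficient",                  // 🟡 deduplication degrades
    minimumChunkSize: "inefficient",
    showVerboseLog: "harmless",              // 🟢 client-specific
    liveSync: "harmless",
};

// Return the worst severity among mismatched keys; a "blocker" result
// would require synchronisation to stop with a fatal error.
function worstMismatch(mismatchedKeys: string[]): Severity {
    const order: Severity[] = ["harmless", "inefficient", "fatal", "blocker"];
    let worst: Severity = "harmless";
    for (const key of mismatchedKeys) {
        const s = SETTING_SEVERITY[key] ?? "harmless";
        if (order.indexOf(s) > order.indexOf(worst)) worst = s;
    }
    return worst;
}
```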


@@ -230,7 +230,6 @@ And, be sure to check the server log and be careful of malicious access.
 If you are using Traefik, this [docker-compose.yml](https://github.com/vrtmrz/obsidian-livesync/blob/main/docker-compose.traefik.yml) file (also pasted below) has all the right CORS parameters set. It assumes you have an external network called `proxy`.
 ```yaml
-version: "2.1"
 services:
   couchdb:
     image: couchdb:latest


@@ -71,7 +71,6 @@ obsidian-livesync
 You can edit `docker-compose.yml` by referring to the following:
 ```yaml
-version: "2.1"
 services:
   couchdb:
     image: couchdb


@@ -1,7 +1,7 @@
 {
   "id": "obsidian-livesync",
   "name": "Self-hosted LiveSync",
-  "version": "0.25.58",
+  "version": "0.25.60",
   "minAppVersion": "0.9.12",
   "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
   "author": "vorotamoroz",

package-lock.json generated

@@ -1,12 +1,12 @@
 {
   "name": "obsidian-livesync",
-  "version": "0.25.58",
+  "version": "0.25.60",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "obsidian-livesync",
-      "version": "0.25.58",
+      "version": "0.25.60",
       "license": "MIT",
       "dependencies": {
         "@aws-sdk/client-s3": "^3.808.0",


@@ -1,6 +1,6 @@
 {
   "name": "obsidian-livesync",
-  "version": "0.25.58",
+  "version": "0.25.60",
   "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
   "main": "main.js",
   "type": "module",


@@ -13,6 +13,7 @@ import type { CheckPointInfo } from "./lib/src/replication/journal/JournalSyncTy
 import type { LiveSyncJournalReplicatorEnv } from "./lib/src/replication/journal/LiveSyncJournalReplicatorEnv";
 import type { LiveSyncReplicatorEnv } from "./lib/src/replication/LiveSyncAbstractReplicator";
 import { useTargetFilters } from "./lib/src/serviceFeatures/targetFilter";
+import { useSharedConfigFeature } from "./lib/src/serviceFeatures/sharedConfig";
 import { useRemoteConfigurationMigration } from "./lib/src/serviceFeatures/remoteConfig";
 import type { ServiceContext } from "./lib/src/services/base/ServiceBase";
 import type { InjectableServiceHub } from "./lib/src/services/InjectableServices";
@@ -275,6 +276,8 @@ export class LiveSyncBaseCore<
         usePrepareDatabaseForUse(this);
         // Migration to multiple remote configurations
         useRemoteConfigurationMigration(this);
+        // Shared Configuration
+        useSharedConfigFeature(this);
     }
 }


@@ -45,11 +45,73 @@ CLI Main
 - Settings management (JSON file)
 - Graceful shutdown handling
-## Something I realised later that could lead to misunderstandings
-The term `vault` in this README refers to the directory containing your local database and settings file. Not the actual files you want to sync. I will fix this later, but please be mind this for now.
-## Docker
+## Usage
+The CLI operates on a **database directory** which contains PouchDB data and settings.
+> [!NOTE]
+> `livesync-cli` is the alias for the CLI executable. Please replace with the actual command of your installation (e.g. `npm run --silent cli --` or `docker run ...`).
+```bash
+livesync-cli [database-path] [command] [args...]
+```
+### Arguments
+- `database-path`: Path to the directory where `.livesync` folder and `settings.json` are (or will be) located.
+  - Note: In previous versions, this was referred to as the "vault" path. Now it is clearly distinguished from the actual vault (the directory containing your `.md` files).
+### Commands
+- `sync`: Run one replication cycle with the remote CouchDB.
+- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
+  - If `vault-path` is provided, the CLI will synchronise the database with files in the vault directory.
+  - If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
+  - Use this command to keep your local `.md` files in sync with the database.
+- `ls [prefix]`: List files currently stored in the local database.
+- `push <src> <dst>`: Push a local file `<src>` into the database at path `<dst>`.
+- `pull <src> <dst>`: Pull a file `<src>` from the database into local file `<dst>`.
+- `cat <src>`: Read a file from the database and write to stdout.
+- `put <dst>`: Read from stdin and write to the database path `<dst>`.
+- `init-settings [file]`: Create a default settings file.
+### Examples
+```bash
+# Basic sync with remote
+livesync-cli ./my-db sync
+# Mirroring to your actual Obsidian vault
+livesync-cli ./my-db mirror /path/to/obsidian-vault
+# Manual file operations
+livesync-cli ./my-db push ./note.md folder/note.md
+livesync-cli ./my-db pull folder/note.md ./note.md
+```
+## Installation
+### Build from source
+```bash
+# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
+# due to shared dependencies with webapp and main library
+npm install
+# Build the project (ensure you are in `src/apps/cli` directory)
+npm run build
+```
+Run the CLI:
+```bash
+# Run with npm script (from repository root)
+npm run --silent cli -- [database-path] [command] [args...]
+# Run the built executable directly
+node src/apps/cli/dist/index.cjs [database-path] [command] [args...]
+```
+### Docker
 A Docker image is provided for headless / server deployments. Build from the repository root:
@@ -61,28 +123,28 @@ Run:
 ```bash
 # Sync with CouchDB
-docker run --rm -v /path/to/your/vault:/data livesync-cli sync
+docker run --rm -v /path/to/your/db:/data livesync-cli sync
+# Mirror to a specific vault directory
+docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault
 # List files in the local database
-docker run --rm -v /path/to/your/vault:/data livesync-cli ls
+docker run --rm -v /path/to/your/db:/data livesync-cli ls
+# Generate a default settings file
+docker run --rm -v /path/to/your/db:/data livesync-cli init-settings
 ```
-The vault directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
+The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
-### P2P (WebRTC) and Docker networking
+#### P2P (WebRTC) and Docker networking
 The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates
 three kinds of ICE candidates. The default Docker bridge network affects which
 candidates are usable:
 | Candidate type | Description | Bridge network |
-|---|---|---|
+| -------------- | ---------------------------------- | -------------------------- |
 | `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
 | `srflx` | Host public IP via STUN reflection | Works over the internet |
 | `relay` | Traffic relayed via TURN server | Always reachable |
 **LAN P2P on Linux** — use `--network host` so that the real host IP is
 advertised as the `host` candidate:
@@ -91,6 +153,8 @@ advertised as the `host` candidate:
 docker run --rm --network host -v /path/to/your/vault:/data livesync-cli p2p-host
 ```
+Note: also fix the alias to include `--network host` if you want to use `livesync-cli` for P2P commands.
 > `--network host` is not available on Docker Desktop for macOS or Windows.
 **LAN P2P on macOS / Windows Docker Desktop** — configure a TURN server in the
@@ -103,16 +167,35 @@ candidate carries the host's public IP and peers can connect normally.
 **CouchDB sync only (no P2P)** — no special network configuration is required.
-## Installation
-```bash
-# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
-# due to shared dependencies with webapp and main library
-npm install
-# Build the project (ensure you are in `src/apps/cli` directory)
-npm run build
-```
+### Adding `livesync-cli` alias
+To use the `livesync-cli` command globally, you can add an alias to your shell configuration file (e.g., `.zshrc` or `.bashrc`).
+If you are using `npm run`, add the following line:
+```bash
+alias livesync-cli='npm run --silent --prefix /path/to/repository/src/apps/cli cli --'
+# or
+alias livesync-cli="npm run --silent --prefix $PWD cli --"
+```
+Alternatively, if you want to use the built executable directly:
+```bash
+alias livesync-cli='node /path/to/repository/src/apps/cli/dist/index.cjs'
+# or
+alias livesync-cli="node $PWD/dist/index.cjs"
+```
+If you prefer using Docker:
+```bash
+alias livesync-cli='docker run --rm -v /path/to/your/db:/data livesync-cli'
+```
+After adding the alias, restart your shell or run `source ~/.zshrc` (or `.bashrc`).
 ## Usage
 ### Basic Usage
@@ -121,43 +204,43 @@ As you know, the CLI is designed to be used in a headless environment. Hence all
 ```bash
 # Sync local database with CouchDB (no files will be changed).
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json sync
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json sync
 # Push files to local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
 # Pull files from local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
 # Verbose logging
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json --verbose
 # Apply setup URI to settings file (settings only; does not run synchronisation)
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
 # Put text from stdin into local database
-echo "Hello from stdin" | npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
+echo "Hello from stdin" | livesync-cli /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
 # Output a file from local database to stdout
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
 # Output a specific revision of a file from local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
 # Pull a specific revision of a file from local database to local storage
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
 # List files in local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
 # Show metadata for a file in local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
 # Mark a file as deleted in local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
 # Resolve conflict by keeping a specific revision
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
 ```
 ### Configuration
@@ -192,7 +275,8 @@ The CLI uses the same settings format as the Obsidian plugin. Create a `.livesyn
```
Usage:
  livesync-cli <database-path> [options] <command> [command-args]
  livesync-cli init-settings [path]

Arguments:
  database-path    Path to the local database directory (required except for init-settings)
@@ -201,7 +285,8 @@ Options:
  --settings, -s <path>    Path to settings file (default: .livesync/settings.json in local database directory)
  --force, -f              Overwrite existing file on init-settings
  --verbose, -v            Enable verbose logging
  --debug, -d              Enable debug logging (includes verbose)
  --help, -h               Show help message

Commands:
  init-settings [path]     Create settings JSON from DEFAULT_SETTINGS
@@ -211,16 +296,16 @@ Commands:
  p2p-host                      Start P2P host mode and wait until interrupted (Ctrl+C)
  push <src> <dst>              Push local file <src> into local database path <dst>
  pull <src> <dst>              Pull file <src> from local database into local file <dst>
  pull-rev <src> <dst> <rev>    Pull specific revision <rev> into local file <dst>
  setup <setupURI>              Apply setup URI to settings file
  put <dst>                     Read text from standard input and write to local database path <dst>
  cat <src>                     Write latest file content from local database to standard output
  cat-rev <src> <rev>           Write specific revision <rev> content from local database to standard output
  ls [prefix]                   List files as path<TAB>size<TAB>mtime<TAB>revision[*]
  info <path>                   Show file metadata including current and past revisions, conflicts, and chunk list
  rm <path>                     Mark file as deleted in local database
  resolve <path> <rev>          Resolve conflict by keeping the specified revision
  mirror [vaultPath]            Mirror database contents to the local file system (vaultPath defaults to database-path)
```

Run via npm script:
@@ -300,11 +385,11 @@ In other words, it performs the following actions:
5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):

| Group | Condition | Action |
|---|---|---|
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |

6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.
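The grouping above can be sketched as a small decision function. This is an illustrative sketch only; the `Entry` shape and the `categorise` name are assumptions for exposition, not the plugin's actual types or implementation.

```typescript
// Hypothetical minimal shape for one side's view of a file; the real
// document type in the plugin is richer than this.
type Entry = { mtime: number; deleted?: boolean; conflicted?: boolean };

// Decide the action for one path, given the optional storage and database entries.
function categorise(storage?: Entry, db?: Entry): string {
    if (storage && !db) return "STORAGE -> DB"; // UPDATE DATABASE: storage only
    if (!storage && db) {
        // UPDATE STORAGE, unless the entry is deleted or conflicted
        return db.deleted || db.conflicted ? "SKIP" : "STORAGE <- DB";
    }
    if (storage && db) {
        if (db.conflicted) return "SKIP"; // conflicted documents are always skipped
        if (storage.mtime > db.mtime) return "STORAGE -> DB"; // storage is newer
        if (storage.mtime < db.mtime) return "STORAGE <- DB"; // database is newer
        return "SKIP"; // equal mtime: do nothing
    }
    return "SKIP";
}
```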
@@ -323,9 +408,9 @@ Note: `mirror` does not respect file deletions. If a file is deleted in storage,
Create default settings, apply a setup URI, then run one sync cycle.

```bash
livesync-cli init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | livesync-cli /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
livesync-cli /data/vault --settings /data/livesync-settings.json sync
```

### 2. Scripted import and export
@@ -333,8 +418,8 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
Push local files into the database from automation, and pull them back for export or backup.

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```

### 3. Revision inspection and restore
@@ -342,9 +427,9 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull
List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```

### 4. Conflict and cleanup workflow
@@ -352,9 +437,9 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull
Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```

### 5. CI smoke test for content round-trip
@@ -362,8 +447,8 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm n
Validate that `put`/`cat` behaves as expected in a pipeline.

```bash
echo "hello-ci" | livesync-cli /data/vault --settings /data/livesync-settings.json put ci/test.md
livesync-cli /data/vault --settings /data/livesync-settings.json cat ci/test.md
```
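The smoke test above can be hardened into a guard that fails the CI job when the round-trip diverges. A minimal sketch; the paths and settings file mirror the example above and should be adjusted for your environment:

```shell
# Fail the CI job when the put/cat round-trip diverges.
# Database path and settings file are taken from the example above.
roundtrip_check() {
    expected="$1"
    printf '%s\n' "$expected" | livesync-cli /data/vault --settings /data/livesync-settings.json put ci/test.md
    actual="$(livesync-cli /data/vault --settings /data/livesync-settings.json cat ci/test.md)"
    # $(...) strips the trailing newline, so a plain string comparison suffices.
    [ "$actual" = "$expected" ] || { echo "round-trip mismatch" >&2; return 1; }
}
```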
## Development


@@ -5,13 +5,13 @@ import { configURIBase } from "@lib/common/models/shared.const";
import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import type { CLICommandContext, CLIOptions } from "./types";
import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils";
import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p";
import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";

export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
    const { databasePath, core, settingsPath } = context;
    await core.services.control.activated;

    if (options.command === "daemon") {
@@ -77,16 +77,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
            throw new Error("push requires two arguments: <src> <dst>");
        }
        const sourcePath = path.resolve(options.commandArgs[0]);
        const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath);
        const sourceData = await fs.readFile(sourcePath);
        const sourceStat = await fs.stat(sourcePath);
        console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);
        await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
            mtime: sourceStat.mtimeMs,
            ctime: sourceStat.ctimeMs,
        });
        const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
        const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);
        return stored;
    }
@@ -95,16 +95,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 2) {
            throw new Error("pull requires two arguments: <src> <dst>");
        }
        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
        const destinationPath = path.resolve(options.commandArgs[1]);
        console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`);
        const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix;
        const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true);
        if (!restored) {
            return false;
        }
        const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath);
        await fs.mkdir(path.dirname(destinationPath), { recursive: true });
        if (typeof data === "string") {
            await fs.writeFile(destinationPath, data, "utf-8");
@@ -118,16 +118,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 3) {
            throw new Error("pull-rev requires three arguments: <src> <dst> <rev>");
        }
        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
        const destinationPath = path.resolve(options.commandArgs[1]);
        const rev = options.commandArgs[2].trim();
        if (!rev) {
            throw new Error("pull-rev requires a non-empty revision");
        }
        console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`);
        const source = await core.serviceModules.databaseFileAccess.fetch(
            sourceDatabasePath as FilePathWithPrefix,
            rev,
            true
        );
@@ -175,11 +175,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 1) {
            throw new Error("put requires one argument: <dst>");
        }
        const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
        const content = await readStdinAsUtf8();
        console.log(`[Command] put stdin -> ${destinationDatabasePath}`);
        return await core.serviceModules.databaseFileAccess.storeContent(
            destinationDatabasePath as FilePathWithPrefix,
            content
        );
    }
@@ -188,10 +188,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 1) {
            throw new Error("cat requires one argument: <src>");
        }
        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
        console.error(`[Command] cat ${sourceDatabasePath}`);
        const source = await core.serviceModules.databaseFileAccess.fetch(
            sourceDatabasePath as FilePathWithPrefix,
            undefined,
            true
        );
@@ -212,14 +212,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 2) {
            throw new Error("cat-rev requires two arguments: <src> <rev>");
        }
        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
        const rev = options.commandArgs[1].trim();
        if (!rev) {
            throw new Error("cat-rev requires a non-empty revision");
        }
        console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`);
        const source = await core.serviceModules.databaseFileAccess.fetch(
            sourceDatabasePath as FilePathWithPrefix,
            rev,
            true
        );
@@ -239,7 +239,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.command === "ls") {
        const prefix =
            options.commandArgs.length > 0 && options.commandArgs[0].trim() !== ""
                ? toDatabaseRelativePath(options.commandArgs[0], databasePath)
                : "";

        const rows: { path: string; line: string }[] = [];
@@ -261,6 +261,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        rows.sort((a, b) => a.path.localeCompare(b.path));

        if (rows.length > 0) {
            process.stdout.write(rows.map((e) => e.line).join("\n") + "\n");
        } else {
            process.stderr.write("[Info] No documents found in the local database.\n");
        }
        return true;
    }
@@ -269,7 +271,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 1) {
            throw new Error("info requires one argument: <path>");
        }
        const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);

        for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) {
            if (doc._deleted || doc.deleted) continue;
@@ -313,7 +315,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 1) {
            throw new Error("rm requires one argument: <path>");
        }
        const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
        console.error(`[Command] rm ${targetPath}`);
        return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix);
    }
@@ -322,7 +324,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        if (options.commandArgs.length < 2) {
            throw new Error("resolve requires two arguments: <path> <revision-to-keep>");
        }
        const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix;
        const revisionToKeep = options.commandArgs[1].trim();
        if (revisionToKeep === "") {
            throw new Error("resolve requires a non-empty revision-to-keep");


@@ -58,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise<string> {
describe("runCommand abnormal cases", () => {
    const context = {
        databasePath: "/tmp/vault",
        settingsPath: "/tmp/vault/.livesync/settings.json",
    } as any;


@@ -32,7 +32,7 @@ export interface CLIOptions {
}

export interface CLICommandContext {
    databasePath: string;
    core: LiveSyncBaseCore<ServiceContext, any>;
    settingsPath: string;
}


@@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer {
    return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
}

export function toDatabaseRelativePath(inputPath: string, databasePath: string): string {
    const stripped = inputPath.replace(/^[/\\]+/, "");
    if (!path.isAbsolute(inputPath)) {
        const normalized = stripped.replace(/\\/g, "/");
        const resolved = path.resolve(databasePath, normalized);
        const rel = path.relative(databasePath, resolved);
        if (rel.startsWith("..") || path.isAbsolute(rel)) {
            throw new Error(`Path ${inputPath} is outside of the local database directory`);
        }
        return rel.replace(/\\/g, "/");
    }
    const resolved = path.resolve(inputPath);
    const rel = path.relative(databasePath, resolved);
    if (rel.startsWith("..") || path.isAbsolute(rel)) {
        throw new Error(`Path ${inputPath} is outside of the local database directory`);
    }
@@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin
}

export async function readStdinAsUtf8(): Promise<string> {
    const chunks = [];
    for await (const chunk of process.stdin) {
        if (typeof chunk === "string") {
            chunks.push(Buffer.from(chunk, "utf-8"));
        } else {
            chunks.push(chunk as Buffer);
        }
    }
    return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
}

export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise<string> {


@@ -1,29 +1,33 @@
import * as path from "path";
import { describe, expect, it } from "vitest";
import { toDatabaseRelativePath } from "./utils";

describe("toDatabaseRelativePath", () => {
    const databasePath = path.resolve("/tmp/livesync-vault");

    it("rejects absolute paths outside vault", () => {
        expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow(
            "outside of the local database directory"
        );
    });
    it("normalizes leading slash for absolute path inside vault", () => {
        const absoluteInsideVault = path.join(databasePath, "notes", "foo.md");
        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md");
    });

    it("normalizes Windows-style separators", () => {
        expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md");
    });

    it("returns vault-relative path for another absolute path inside vault", () => {
        const absoluteInsideVault = path.join(databasePath, "docs", "inside.md");
        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md");
    });

    it("rejects relative path traversal that escapes vault", () => {
        expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow(
            "outside of the local database directory"
        );
    });
});


@@ -36,14 +36,15 @@ function printHelp(): void {
Self-hosted LiveSync CLI

Usage:
  livesync-cli <database-path> [options] <command> [command-args]
  livesync-cli init-settings [path]

Arguments:
  database-path    Path to the local database directory

Commands:
  sync                    Run one replication cycle and exit
  p2p-peers <timeout>     Show discovered peers as [peer]\t<peer-id>\t<peer-name>
  p2p-sync <peer> <timeout>
                          Sync with the specified peer-id or peer-name
  p2p-host                Start P2P host mode and wait until interrupted
@@ -54,28 +55,29 @@ Commands:
  put <dst>                Read UTF-8 content from stdin and write to local database path <dst>
  cat <src>                Read file <src> from local database and write to stdout
  cat-rev <src> <rev>      Read file <src> at specific revision <rev> and write to stdout
- ls [prefix]              List DB files as path<TAB>size<TAB>mtime<TAB>revision[*]
+ ls [prefix]              List DB files as path\tsize\tmtime\trevision[*]
  info <path>              Show detailed metadata for a file (ID, revision, conflicts, chunks)
  rm <path>                Mark a file as deleted in local database
  resolve <path> <rev>     Resolve conflicts by keeping <rev> and deleting others
+ mirror [vault-path]      Mirror database contents to the local file system (vault-path defaults to database-path)
Examples:
  livesync-cli ./my-database sync
  livesync-cli ./my-database p2p-peers 5
  livesync-cli ./my-database p2p-sync my-peer-name 15
  livesync-cli ./my-database p2p-host
  livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
  livesync-cli ./my-database pull folder/note.md ./exports/note.md
  livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
  livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
  echo "Hello" | livesync-cli ./my-database put notes/hello.md
  livesync-cli ./my-database cat notes/hello.md
  livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef
  livesync-cli ./my-database ls notes/
  livesync-cli ./my-database info notes/hello.md
  livesync-cli ./my-database rm notes/hello.md
  livesync-cli ./my-database resolve notes/hello.md 3-abcdef
  livesync-cli init-settings ./data.json
  livesync-cli ./my-database --verbose
`);
}
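The help-text change above replaces the literal `<TAB>` placeholders with `\t`, making the `ls` output explicitly tab-separated (`path\tsize\tmtime\trevision[*]`). A minimal consumer sketch for that format; the `LsEntry` shape and the reading of the trailing `*` as a conflict marker are assumptions for illustration, not taken from the codebase:

```typescript
// Parse one line of `livesync-cli ... ls` output.
// Documented format (help text above): path\tsize\tmtime\trevision[*]
// Assumption: the optional trailing "*" flags an entry with conflicts.
interface LsEntry {
    path: string;
    size: number;
    mtime: string;
    revision: string;
    conflicted: boolean;
}

function parseLsLine(line: string): LsEntry {
    const [entryPath, size, mtime, rev] = line.split("\t");
    const conflicted = rev.endsWith("*");
    return {
        path: entryPath,
        size: Number(size),
        mtime,
        revision: conflicted ? rev.slice(0, -1) : rev,
        conflicted,
    };
}
```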
@@ -112,6 +114,7 @@ export function parseArgs(): CLIOptions {
    case "-d":
        // debugging automatically enables verbose logging, as it is intended for debugging issues.
        debug = true;
+       // falls through
    case "--verbose":
    case "-v":
        verbose = true;
@@ -220,34 +223,34 @@ export async function main() {
        return;
    }
-   // Resolve vault path
+   // Resolve database path
-   const vaultPath = path.resolve(options.databasePath!);
+   const databasePath = path.resolve(options.databasePath!);
-   // Check if vault directory exists
+   // Check if database directory exists
    try {
-       const stat = await fs.stat(vaultPath);
+       const stat = await fs.stat(databasePath);
        if (!stat.isDirectory()) {
-           console.error(`Error: ${vaultPath} is not a directory`);
+           console.error(`Error: ${databasePath} is not a directory`);
            process.exit(1);
        }
    } catch (error) {
-       console.error(`Error: Vault directory ${vaultPath} does not exist`);
+       console.error(`Error: Database directory ${databasePath} does not exist`);
        process.exit(1);
    }
    // Resolve settings path
    const settingsPath = options.settingsPath
        ? path.resolve(options.settingsPath)
-       : path.join(vaultPath, SETTINGS_FILE);
+       : path.join(databasePath, SETTINGS_FILE);
-   configureNodeLocalStorage(path.join(vaultPath, ".livesync", "runtime", "local-storage.json"));
+   configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json"));
    infoLog(`Self-hosted LiveSync CLI`);
-   infoLog(`Vault: ${vaultPath}`);
+   infoLog(`Database Path: ${databasePath}`);
    infoLog(`Settings: ${settingsPath}`);
    infoLog("");
    // Create service context and hub
-   const context = new NodeServiceContext(vaultPath);
+   const context = new NodeServiceContext(databasePath);
-   const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context);
+   const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
    serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
        let levelStr = "";
        switch (level) {
@@ -321,7 +324,11 @@ export async function main() {
    const core = new LiveSyncBaseCore(
        serviceHubInstance,
        (core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
-           return initialiseServiceModulesCLI(vaultPath, core, serviceHub);
+           const mirrorVaultPath =
+               options.command === "mirror" && options.commandArgs[0]
+                   ? path.resolve(options.commandArgs[0])
+                   : databasePath;
+           return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub);
        },
        (core) => [
            // No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts
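The hunk above lets `mirror` take an explicit vault directory as its first argument, falling back to the database directory when omitted (the previous, compatible behaviour). A sketch of that resolution in isolation; the `resolveMirrorVaultPath` helper name is hypothetical, introduced here only for illustration:

```typescript
import * as path from "path";

// Hypothetical helper mirroring the logic above: when the `mirror` command
// is given an explicit vault path argument, resolve it against the current
// working directory; otherwise use the database directory (compatibility mode).
function resolveMirrorVaultPath(
    command: string | undefined,
    commandArgs: string[],
    databasePath: string
): string {
    return command === "mirror" && commandArgs[0] ? path.resolve(commandArgs[0]) : databasePath;
}
```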
@@ -331,8 +338,8 @@ export async function main() {
    (core) => {
        // Add a target filter to prevent internal files from being handled
        core.services.vault.isTargetFile.addHandler(async (target) => {
-           const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target));
+           const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
-           const parts = vaultPath.split(path.sep);
+           const parts = targetPath.split(path.sep);
            // If some part of the path starts with a dot, treat it as an internal file and ignore it.
            if (parts.some((part) => part.startsWith("."))) {
                return await Promise.resolve(false);
@@ -393,7 +400,7 @@ export async function main() {
        infoLog("");
    }
-   const result = await runCommand(options, { vaultPath, core, settingsPath });
+   const result = await runCommand(options, { databasePath, core, settingsPath });
    if (!result) {
        console.error(`[Error] Command '${options.command}' failed`);
        process.exitCode = 1;


@@ -17,7 +17,7 @@ describe("CLI parseArgs", () => {
    });
    it("exits 1 when --settings has no value", () => {
-       process.argv = ["node", "livesync-cli", "./vault", "--settings"];
+       process.argv = ["node", "livesync-cli", "./databasePath", "--settings"];
        const exitMock = mockProcessExit();
        const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -37,7 +37,7 @@ describe("CLI parseArgs", () => {
    });
    it("exits 1 for unknown command after database-path", () => {
-       process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"];
+       process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"];
        const exitMock = mockProcessExit();
        const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -56,32 +56,32 @@ describe("CLI parseArgs", () => {
        expect(stdout).toHaveBeenCalled();
        const combined = stdout.mock.calls.flat().join("\n");
        expect(combined).toContain("Usage:");
-       expect(combined).toContain("livesync-cli [database-path]");
+       expect(combined).toContain("livesync-cli <database-path> [options] <command> [command-args]");
    });
    it("parses p2p-peers command and timeout", () => {
-       process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"];
+       process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"];
        const parsed = parseArgs();
-       expect(parsed.databasePath).toBe("./vault");
+       expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-peers");
        expect(parsed.commandArgs).toEqual(["5"]);
    });
    it("parses p2p-sync command with peer and timeout", () => {
-       process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"];
+       process.argv = ["node", "livesync-cli", "./databasePath", "p2p-sync", "peer-1", "12"];
        const parsed = parseArgs();
-       expect(parsed.databasePath).toBe("./vault");
+       expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-sync");
        expect(parsed.commandArgs).toEqual(["peer-1", "12"]);
    });
    it("parses p2p-host command", () => {
-       process.argv = ["node", "livesync-cli", "./vault", "p2p-host"];
+       process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"];
        const parsed = parseArgs();
-       expect(parsed.databasePath).toBe("./vault");
+       expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-host");
        expect(parsed.commandArgs).toEqual([]);
    });


@@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService";
import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types";
export class NodeServiceContext extends ServiceContext {
-   vaultPath: string;
+   databasePath: string;
-   constructor(vaultPath: string) {
+   constructor(databasePath: string) {
        super();
-       this.vaultPath = vaultPath;
+       this.databasePath = databasePath;
    }
}
@@ -64,7 +64,7 @@ class NodeDatabaseService<T extends NodeServiceContext> extends DatabaseService<
    ): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } {
        const optionPass = {
            ...options,
-           prefix: this.context.vaultPath + nodePath.sep,
+           prefix: this.context.databasePath + nodePath.sep,
        };
        const passSettings = { ...settings, useIndexedDBAdapter: false };
        return super.modifyDatabaseOptions(passSettings, name, optionPass);


@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info "Test for Issue #860: Empty output from ls and mirror"
RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
npm run build
fi
echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"
# 1. Test 'ls' on empty database
echo "[INFO] Testing 'ls' on empty database..."
LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls)
if [[ -z "$LS_OUTPUT" ]]; then
echo "[REPRODUCED] 'ls' returned empty output for empty database."
else
echo "[INFO] 'ls' output: $LS_OUTPUT"
fi
# 2. Test 'mirror' on empty vault
echo "[INFO] Testing 'mirror' on empty vault..."
MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1)
if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then
# Note: currently it prints [Command] mirror to stderr.
# Let's see if it prints anything else.
echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)."
else
echo "[INFO] 'mirror' output: $MIRROR_OUTPUT"
fi
echo "[DONE] finished repro-860 test"

src/apps/cli/test/test-mirror-linux.sh Normal file → Executable file

@@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
+ DB_DIR="$WORK_DIR/db"
mkdir -p "$VAULT_DIR/test"
+ mkdir -p "$DB_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then
  echo "[INFO] building CLI..."
@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
# isConfigured=true is required for mirror (canProceedScan checks this)
cli_test_mark_settings_configured "$SETTINGS_FILE"
+ # Preparation: share the settings between the database and vault paths
+ DB_SETTINGS="$DB_DIR/settings.json"
+ cp "$SETTINGS_FILE" "$DB_SETTINGS"
+ # Helper for the standard run (separated paths)
+ run_mirror_test() {
+   run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
+ }
+ # Helper for the compatibility run (same path)
+ run_mirror_compat() {
+   run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+ }
PASS=0
FAIL=0
@@ -78,19 +94,27 @@ portable_touch_timestamp() {
# Case 1: File exists only in storage → should be synced into DB after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
- echo "=== Case 1: storage-only → DB ==="
+ echo "=== Case 1: storage-only → DB (Separated Paths) ==="
printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+ echo "[DEBUG] DB_DIR: $DB_DIR"
+ echo "[DEBUG] VAULT_DIR: $VAULT_DIR"
+ run_mirror_test
RESULT_FILE="$WORK_DIR/case1-cat.txt"
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"
+ # Try 'ls' first to see what's in the DB
+ echo "--- DB contents ---"
+ run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
+ echo "-------------------"
+ run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"
if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
-   assert_pass "storage-only file was synced into DB"
+   assert_pass "storage-only file was synced into DB using separated paths"
else
-   assert_fail "storage-only file NOT synced into DB"
+   assert_fail "storage-only file NOT synced into DB with separated paths"
  echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
  echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
fi
@@ -99,9 +123,9 @@ fi
# Case 2: File exists only in DB → should be restored to storage after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
- echo "=== Case 2: DB-only → storage ==="
+ echo "=== Case 2: DB-only → storage (Separated Paths) ==="
- printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md
+ printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
  assert_fail "db-only.md unexpectedly exists in storage before mirror"
@@ -109,7 +133,7 @@ else
  echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
fi
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+ run_mirror_test
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
  STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
    assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
  fi
else
-   assert_fail "DB-only file was NOT restored to storage"
+   assert_fail "DB-only file NOT restored to storage after mirror"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 3: File deleted in DB → should NOT be created in storage
# ─────────────────────────────────────────────────────────────────────────────
echo ""
- echo "=== Case 3: DB-deleted → storage untouched ==="
+ echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="
- printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md
+ printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md
+ run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+ run_mirror_test
if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
  assert_pass "deleted DB entry was not restored to storage"
@@ -143,19 +167,19 @@ fi
# Case 4: Both exist, storage is newer → DB should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
- echo "=== Case 4: storage newer → DB updated ==="
+ echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="
# Seed DB with old content (mtime ≈ now)
- printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md
+ printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md
# Write new content to storage with a timestamp 1 hour in the future
printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+ run_mirror_test
DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
+ run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
  assert_pass "DB updated to match newer storage file"
else
@@ -168,16 +192,16 @@ fi
# Case 5: Both exist, DB is newer → storage should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
- echo "=== Case 5: DB newer → storage updated ==="
+ echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="
# Write old content to storage with a timestamp 1 hour in the past
printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"
# Write new content to DB only (mtime ≈ now, newer than the storage file)
- printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md
+ printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md
- run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+ run_mirror_test
STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
@@ -186,6 +210,25 @@ else
  assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
fi
+ # ─────────────────────────────────────────────────────────────────────────────
+ # Case 6: Compatibility test - omitted vault-path
+ # ─────────────────────────────────────────────────────────────────────────────
+ echo ""
+ echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="
+ # We use VAULT_DIR as the "main" database path for this part.
+ printf 'compat-content\n' > "$VAULT_DIR/compat.md"
+ run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+ # In compat mode, the file should be found in the DB at the root
+ CAT_RESULT="$WORK_DIR/compat-cat.txt"
+ run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
+ if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
+   assert_pass "Compatibility mode works (omitted vault-path)"
+ else
+   assert_fail "Compatibility mode failed to sync file into DB"
+ fi
# ─────────────────────────────────────────────────────────────────────────────
# Summary
# ─────────────────────────────────────────────────────────────────────────────

@@ -138,7 +138,7 @@ export const _requestToCouchDBFetch = async (
        authorization: authHeader,
        "content-type": "application/json",
    };
-   const uri = `${baseUri}/${path}`;
+   const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
    const requestParam = {
        url: uri,
        method: method || (body ? "PUT" : "GET"),
@@ -162,7 +162,7 @@ export const _requestToCouchDB = async (
    const authHeaderGen = new AuthorizationHeaderGenerator();
    const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
    const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
-   const uri = `${baseUri}/${path}`;
+   const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
    const requestParam: RequestUrlParam = {
        url: uri,
        method: method || (body ? "PUT" : "GET"),


@@ -781,7 +781,8 @@ Success: ${successCount}, Errored: ${errored}`;
    const credential = generateCredentialObject(this.settings);
    const request = async (path: string, method: string = "GET", body: any = undefined) => {
        const req = await _requestToCouchDB(
-           this.settings.couchDB_URI + (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
+           this.settings.couchDB_URI.replace(/\/+$/, "") +
+               (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
            credential,
            window.origin,
            path,

Submodule src/lib updated: 5dc3b21d36...293d3c9c17


@@ -8,6 +8,7 @@ import { Logger } from "../../../lib/src/common/logger.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import { EVENT_REQUEST_COPY_SETUP_URI, eventHub } from "../../../common/events.ts";
+ import { triggerEnableAutoConfiguration } from "../../../lib/src/serviceFeatures/sharedConfig.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
import type { PageFunctions } from "./SettingPane.ts";
import { visibleOnly } from "./SettingPane.ts";
@@ -189,6 +190,34 @@ export function paneSyncSettings(
    new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncOnFileOpen", { onUpdate: onlyOnNonLiveSync });
    new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncOnStart", { onUpdate: onlyOnNonLiveSync });
    new Setting(paneEl).setClass("wizardHidden").autoWireToggle("syncAfterMerge", { onUpdate: onlyOnNonLiveSync });
+   new Setting(paneEl)
+       .setClass("wizardHidden")
+       .setName($msg("obsidianLiveSyncSettingTab.autoConfigByRemote"))
+       .setDesc($msg("obsidianLiveSyncSettingTab.autoConfigByRemoteDesc"))
+       .addToggle((toggle) => {
+           toggle.setValue(this.editingSettings.useAutoConfig);
+           toggle.onChange(async (val) => {
+               if (val) {
+                   const enableRes = await triggerEnableAutoConfiguration(this.plugin.core as any);
+                   if (enableRes) {
+                       // Copy settings from service to editingSettings so dialogue is consistent
+                       this.editingSettings.useAutoConfig = true;
+                       const copiedSettings = this.services.setting.settings;
+                       this.editingSettings.hashAlg = copiedSettings.hashAlg;
+                       this.editingSettings.chunkSplitterVersion = copiedSettings.chunkSplitterVersion;
+                       this.editingSettings.enableChunkSplitterV2 = copiedSettings.enableChunkSplitterV2;
+                       this.editingSettings.useSegmenter = copiedSettings.useSegmenter;
+                       this.editingSettings.minimumChunkSize = copiedSettings.minimumChunkSize;
+                       this.editingSettings.customChunkSize = copiedSettings.customChunkSize;
+                   }
+               } else {
+                   this.editingSettings.useAutoConfig = false;
+                   await this.saveSettings(["liveSync"]);
+               }
+               this.display(); // re-render
+           });
+       });
});
void addPanel(


@@ -3,6 +3,39 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 received a lot of updates, compatibility has thankfully been kept and no breaking changes are needed! In other words, once things are stable enough, the next version will be v1.0.0. That is my hope, at least.
## 0.25.60
29th April, 2026
### Fixed
- Now larger settings can be exported and imported via QR code without issues. (#595)
- When the settings data exceeds the QR code capacity, it is now split into multiple QR codes.
- These QR codes are reassembled by the aggregator page, which collects the split data and reconstructs the original settings.
- The aggregator page is available at `https://vrtmrz.github.io/obsidian-livesync/aggregator.html`, and the file is also included in the repository.
- We never send the settings data to any server. The QR code data is generated and processed entirely on the client side, ensuring that your settings remain private and secure. However, please remain mindful of your network environment.
- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.
### Fixed (CLI)
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- Improved the clarity of CLI command logs by including the total count of processed items.
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.
## 0.25.59
### Fixed
- The setup wizard no longer silently drops the username and password. (#865)
- Thank you so much, @koteitan!
- The setup URI is now correctly imported (#859).
- Again, thank you so much, @koteitan!
### Improved
- A French translation has been added by @foXaCe! Thank you so much!
## 0.25.58
### Fixed