From 4c0af0b608e43d9143407c59d76baf4d7179f7a5 Mon Sep 17 00:00:00 2001
From: vorotamoroz
Date: Wed, 29 Apr 2026 12:22:00 +0900
Subject: [PATCH] Fixed(cli):
 - `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
 - The command-line argument `vault` has been renamed to a more appropriate name, `database-path`.
 - The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.

Co-authored-by: Copilot
---
 src/apps/cli/README.md                        | 73 ++++++++++++----
 src/apps/cli/commands/runCommand.ts           | 54 ++++++------
 src/apps/cli/commands/runCommand.unit.spec.ts |  2 +-
 src/apps/cli/commands/types.ts                |  2 +-
 src/apps/cli/commands/utils.ts                | 14 ++--
 src/apps/cli/commands/utils.unit.spec.ts      | 24 +++---
 src/apps/cli/main.ts                          | 36 ++++----
 src/apps/cli/main.unit.spec.ts                | 16 ++--
 src/apps/cli/services/NodeServiceHub.ts       |  8 +-
 src/apps/cli/test/repro-issue-860.sh          | 49 +++++++++++
 src/apps/cli/test/test-mirror-linux.sh        | 83 ++++++++++++++----
 src/lib                                       |  2 +-
 updates.md                                    |  9 ++
 13 files changed, 261 insertions(+), 111 deletions(-)
 create mode 100755 src/apps/cli/test/repro-issue-860.sh
 mode change 100644 => 100755 src/apps/cli/test/test-mirror-linux.sh

diff --git a/src/apps/cli/README.md b/src/apps/cli/README.md
index 8ea8db4..02dba18 100644
--- a/src/apps/cli/README.md
+++ b/src/apps/cli/README.md
@@ -45,9 +45,46 @@ CLI Main
 - Settings management (JSON file)
 - Graceful shutdown handling
 
-## Something I realised later that could lead to misunderstandings
+## Usage
 
-The term `vault` in this README refers to the directory containing your local database and settings file. Not the actual files you want to sync. I will fix this later, but please be mind this for now.
+The CLI operates on a **database directory**, which contains the PouchDB data and settings.
+
+```bash
+lsync [database-path] [command] [args...]
+```
+
+### Arguments
+
+- `database-path`: Path to the directory where the `.livesync` folder and `settings.json` are (or will be) located.
+    - Note: In previous versions, this was referred to as the "vault" path. Now it is clearly distinguished from the actual vault (the directory containing your `.md` files).
+
+### Commands
+
+- `sync`: Run one replication cycle with the remote CouchDB.
+- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
+    - If `vault-path` is provided, the CLI will synchronise the database with files in that directory.
+    - If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
+    - Use this command to keep your local `.md` files in sync with the database.
+- `ls [prefix]`: List files currently stored in the local database.
+- `push <local-file> <db-path>`: Push a local file `<local-file>` into the database at path `<db-path>`.
+- `pull <db-path> <local-file>`: Pull a file `<db-path>` from the database into local file `<local-file>`.
+- `cat <db-path>`: Read a file from the database and write to stdout.
+- `put <db-path>`: Read from stdin and write to the database path `<db-path>`.
+- `init-settings [file]`: Create a default settings file.
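The `vault-path` defaulting for `mirror` can be sketched as follows. This is a minimal TypeScript sketch; `resolveMirrorVaultPath` is an illustrative name, not part of the CLI's actual API — the real resolution happens inside `main.ts`:

```typescript
import * as path from "node:path";

// Illustrative sketch of the defaulting rule: an explicit vault-path wins,
// otherwise the database directory itself is used (compatibility mode).
function resolveMirrorVaultPath(databasePath: string, commandArgs: string[]): string {
    return commandArgs.length > 0 && commandArgs[0]
        ? path.resolve(commandArgs[0])
        : path.resolve(databasePath);
}
```

Either way, `mirror` always resolves to an absolute directory before any file operations start.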
+
+### Examples
+
+```bash
+# Basic sync with remote
+lsync ./my-db sync
+
+# Mirroring to your actual Obsidian vault
+lsync ./my-db mirror /path/to/obsidian-vault
+
+# Manual file operations
+lsync ./my-db push ./note.md folder/note.md
+lsync ./my-db pull folder/note.md ./note.md
+```
 
 ## Docker
 
@@ -61,16 +98,16 @@ Run:
 
 ```bash
 # Sync with CouchDB
-docker run --rm -v /path/to/your/vault:/data livesync-cli sync
+docker run --rm -v /path/to/your/db:/data livesync-cli sync
+
+# Mirror to a specific vault directory
+docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault
 
 # List files in the local database
-docker run --rm -v /path/to/your/vault:/data livesync-cli ls
-
-# Generate a default settings file
-docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
+docker run --rm -v /path/to/your/db:/data livesync-cli ls
 ```
 
-The vault directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
+The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
 
 ### P2P (WebRTC) and Docker networking
 
@@ -78,11 +115,11 @@ The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates
The default Docker bridge network affects which candidates are usable: -| Candidate type | Description | Bridge network | -|---|---|---| -| `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers | -| `srflx` | Host public IP via STUN reflection | Works over the internet | -| `relay` | Traffic relayed via TURN server | Always reachable | +| Candidate type | Description | Bridge network | +| -------------- | ---------------------------------- | -------------------------- | +| `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers | +| `srflx` | Host public IP via STUN reflection | Works over the internet | +| `relay` | Traffic relayed via TURN server | Always reachable | **LAN P2P on Linux** — use `--network host` so that the real host IP is advertised as the `host` candidate: @@ -300,11 +337,11 @@ In other words, it performs the following actions: 5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time): - | Group | Condition | Action | - |---|---|---| - | **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. | - | **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. | - | **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. 
| + | Group | Condition | Action | + | ----------------------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. | + | **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. | + | **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. | 6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2. 
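The three-way split in step 5 can be sketched as a pure decision function. This is a minimal TypeScript sketch under assumed types — `categorise`, `DbEntry`, and the action labels are illustrative, not the actual implementation (size-limit handling is omitted for brevity):

```typescript
// Illustrative action labels mirroring the table above.
type MirrorAction =
    | "UPDATE DATABASE" // storage-only: store the file into the local database
    | "UPDATE STORAGE"  // db-only: restore the file from the database to storage
    | "STORAGE -> DB"   // both exist, storage is newer
    | "STORAGE <- DB"   // both exist, database is newer
    | "SKIP";           // deleted, conflicted, or identical mtime

// Hypothetical shape of a database entry, reduced to what the decision needs.
interface DbEntry {
    mtime: number;
    deleted?: boolean;
    conflicted?: boolean;
}

function categorise(storageMtime: number | undefined, dbEntry: DbEntry | undefined): MirrorAction {
    if (dbEntry === undefined) {
        // File exists in storage only.
        return storageMtime !== undefined ? "UPDATE DATABASE" : "SKIP";
    }
    if (storageMtime === undefined) {
        // File exists in database only; deleted or conflicted entries are skipped.
        return dbEntry.deleted || dbEntry.conflicted ? "SKIP" : "UPDATE STORAGE";
    }
    // File exists in both: compare mtime freshness; conflicted documents are always skipped.
    if (dbEntry.conflicted) return "SKIP";
    if (storageMtime > dbEntry.mtime) return "STORAGE -> DB";
    if (storageMtime < dbEntry.mtime) return "STORAGE <- DB";
    return "SKIP";
}
```

Each file in the union of both sets is pushed through a decision like this, and the resulting actions are executed concurrently.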
diff --git a/src/apps/cli/commands/runCommand.ts b/src/apps/cli/commands/runCommand.ts index 12a315b..e188c23 100644 --- a/src/apps/cli/commands/runCommand.ts +++ b/src/apps/cli/commands/runCommand.ts @@ -5,13 +5,13 @@ import { configURIBase } from "@lib/common/models/shared.const"; import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types"; import { stripAllPrefixes } from "@lib/string_and_binary/path"; import type { CLICommandContext, CLIOptions } from "./types"; -import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils"; +import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils"; import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p"; import { performFullScan } from "@lib/serviceFeatures/offlineScanner"; import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager"; export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise { - const { vaultPath, core, settingsPath } = context; + const { databasePath, core, settingsPath } = context; await core.services.control.activated; if (options.command === "daemon") { @@ -77,16 +77,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext throw new Error("push requires two arguments: "); } const sourcePath = path.resolve(options.commandArgs[0]); - const destinationVaultPath = toVaultRelativePath(options.commandArgs[1], vaultPath); + const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath); const sourceData = await fs.readFile(sourcePath); const sourceStat = await fs.stat(sourcePath); - console.log(`[Command] push ${sourcePath} -> ${destinationVaultPath}`); + console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`); - await core.serviceModules.storageAccess.writeFileAuto(destinationVaultPath, toArrayBuffer(sourceData), { + await 
core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), { mtime: sourceStat.mtimeMs, ctime: sourceStat.ctimeMs, }); - const destinationPathWithPrefix = destinationVaultPath as FilePathWithPrefix; + const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix; const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true); return stored; } @@ -95,16 +95,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if (options.commandArgs.length < 2) { throw new Error("pull requires two arguments: "); } - const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath); + const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath); const destinationPath = path.resolve(options.commandArgs[1]); - console.log(`[Command] pull ${sourceVaultPath} -> ${destinationPath}`); + console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`); - const sourcePathWithPrefix = sourceVaultPath as FilePathWithPrefix; + const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix; const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true); if (!restored) { return false; } - const data = await core.serviceModules.storageAccess.readFileAuto(sourceVaultPath); + const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath); await fs.mkdir(path.dirname(destinationPath), { recursive: true }); if (typeof data === "string") { await fs.writeFile(destinationPath, data, "utf-8"); @@ -118,16 +118,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if (options.commandArgs.length < 3) { throw new Error("pull-rev requires three arguments: "); } - const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath); + const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath); 
const destinationPath = path.resolve(options.commandArgs[1]); const rev = options.commandArgs[2].trim(); if (!rev) { throw new Error("pull-rev requires a non-empty revision"); } - console.log(`[Command] pull-rev ${sourceVaultPath}@${rev} -> ${destinationPath}`); + console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`); const source = await core.serviceModules.databaseFileAccess.fetch( - sourceVaultPath as FilePathWithPrefix, + sourceDatabasePath as FilePathWithPrefix, rev, true ); @@ -175,11 +175,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if (options.commandArgs.length < 1) { throw new Error("put requires one argument: "); } - const destinationVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath); + const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath); const content = await readStdinAsUtf8(); - console.log(`[Command] put stdin -> ${destinationVaultPath}`); + console.log(`[Command] put stdin -> ${destinationDatabasePath}`); return await core.serviceModules.databaseFileAccess.storeContent( - destinationVaultPath as FilePathWithPrefix, + destinationDatabasePath as FilePathWithPrefix, content ); } @@ -188,10 +188,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if (options.commandArgs.length < 1) { throw new Error("cat requires one argument: "); } - const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath); - console.error(`[Command] cat ${sourceVaultPath}`); + const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath); + console.error(`[Command] cat ${sourceDatabasePath}`); const source = await core.serviceModules.databaseFileAccess.fetch( - sourceVaultPath as FilePathWithPrefix, + sourceDatabasePath as FilePathWithPrefix, undefined, true ); @@ -212,14 +212,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if 
(options.commandArgs.length < 2) { throw new Error("cat-rev requires two arguments: "); } - const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath); + const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath); const rev = options.commandArgs[1].trim(); if (!rev) { throw new Error("cat-rev requires a non-empty revision"); } - console.error(`[Command] cat-rev ${sourceVaultPath} @ ${rev}`); + console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`); const source = await core.serviceModules.databaseFileAccess.fetch( - sourceVaultPath as FilePathWithPrefix, + sourceDatabasePath as FilePathWithPrefix, rev, true ); @@ -239,7 +239,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if (options.command === "ls") { const prefix = options.commandArgs.length > 0 && options.commandArgs[0].trim() !== "" - ? toVaultRelativePath(options.commandArgs[0], vaultPath) + ? toDatabaseRelativePath(options.commandArgs[0], databasePath) : ""; const rows: { path: string; line: string }[] = []; @@ -261,6 +261,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext rows.sort((a, b) => a.path.localeCompare(b.path)); if (rows.length > 0) { process.stdout.write(rows.map((e) => e.line).join("\n") + "\n"); + } else { + process.stderr.write("[Info] No documents found in the local database.\n"); } return true; } @@ -269,7 +271,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if (options.commandArgs.length < 1) { throw new Error("info requires one argument: "); } - const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath); + const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath); for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) { if (doc._deleted || doc.deleted) continue; @@ -313,7 +315,7 @@ export async function runCommand(options: CLIOptions, 
context: CLICommandContext if (options.commandArgs.length < 1) { throw new Error("rm requires one argument: "); } - const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath); + const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath); console.error(`[Command] rm ${targetPath}`); return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix); } @@ -322,7 +324,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext if (options.commandArgs.length < 2) { throw new Error("resolve requires two arguments: "); } - const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath) as FilePathWithPrefix; + const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix; const revisionToKeep = options.commandArgs[1].trim(); if (revisionToKeep === "") { throw new Error("resolve requires a non-empty revision-to-keep"); diff --git a/src/apps/cli/commands/runCommand.unit.spec.ts b/src/apps/cli/commands/runCommand.unit.spec.ts index 1a5b3da..85a91b8 100644 --- a/src/apps/cli/commands/runCommand.unit.spec.ts +++ b/src/apps/cli/commands/runCommand.unit.spec.ts @@ -58,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise { describe("runCommand abnormal cases", () => { const context = { - vaultPath: "/tmp/vault", + databasePath: "/tmp/vault", settingsPath: "/tmp/vault/.livesync/settings.json", } as any; diff --git a/src/apps/cli/commands/types.ts b/src/apps/cli/commands/types.ts index 01ea118..f63f751 100644 --- a/src/apps/cli/commands/types.ts +++ b/src/apps/cli/commands/types.ts @@ -32,7 +32,7 @@ export interface CLIOptions { } export interface CLICommandContext { - vaultPath: string; + databasePath: string; core: LiveSyncBaseCore; settingsPath: string; } diff --git a/src/apps/cli/commands/utils.ts b/src/apps/cli/commands/utils.ts index f56940f..b596a71 100644 --- a/src/apps/cli/commands/utils.ts +++ 
b/src/apps/cli/commands/utils.ts @@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer { return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer; } -export function toVaultRelativePath(inputPath: string, vaultPath: string): string { +export function toDatabaseRelativePath(inputPath: string, databasePath: string): string { const stripped = inputPath.replace(/^[/\\]+/, ""); if (!path.isAbsolute(inputPath)) { const normalized = stripped.replace(/\\/g, "/"); - const resolved = path.resolve(vaultPath, normalized); - const rel = path.relative(vaultPath, resolved); + const resolved = path.resolve(databasePath, normalized); + const rel = path.relative(databasePath, resolved); if (rel.startsWith("..") || path.isAbsolute(rel)) { throw new Error(`Path ${inputPath} is outside of the local database directory`); } return rel.replace(/\\/g, "/"); } const resolved = path.resolve(inputPath); - const rel = path.relative(vaultPath, resolved); + const rel = path.relative(databasePath, resolved); if (rel.startsWith("..") || path.isAbsolute(rel)) { throw new Error(`Path ${inputPath} is outside of the local database directory`); } @@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin } export async function readStdinAsUtf8(): Promise { - const chunks: Buffer[] = []; + const chunks = []; for await (const chunk of process.stdin) { if (typeof chunk === "string") { chunks.push(Buffer.from(chunk, "utf-8")); } else { - chunks.push(chunk); + chunks.push(chunk as Buffer); } } - return Buffer.concat(chunks).toString("utf-8"); + return Buffer.concat(chunks as Uint8Array[]).toString("utf-8"); } export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise { diff --git a/src/apps/cli/commands/utils.unit.spec.ts b/src/apps/cli/commands/utils.unit.spec.ts index e209bf7..5d5f77a 100644 --- a/src/apps/cli/commands/utils.unit.spec.ts +++ 
b/src/apps/cli/commands/utils.unit.spec.ts @@ -1,29 +1,33 @@ import * as path from "path"; import { describe, expect, it } from "vitest"; -import { toVaultRelativePath } from "./utils"; +import { toDatabaseRelativePath } from "./utils"; -describe("toVaultRelativePath", () => { - const vaultPath = path.resolve("/tmp/livesync-vault"); +describe("toDatabaseRelativePath", () => { + const databasePath = path.resolve("/tmp/livesync-vault"); it("rejects absolute paths outside vault", () => { - expect(() => toVaultRelativePath("/etc/passwd", vaultPath)).toThrow("outside of the local database directory"); + expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow( + "outside of the local database directory" + ); }); it("normalizes leading slash for absolute path inside vault", () => { - const absoluteInsideVault = path.join(vaultPath, "notes", "foo.md"); - expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("notes/foo.md"); + const absoluteInsideVault = path.join(databasePath, "notes", "foo.md"); + expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md"); }); it("normalizes Windows-style separators", () => { - expect(toVaultRelativePath("notes\\daily\\2026-03-12.md", vaultPath)).toBe("notes/daily/2026-03-12.md"); + expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md"); }); it("returns vault-relative path for another absolute path inside vault", () => { - const absoluteInsideVault = path.join(vaultPath, "docs", "inside.md"); - expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("docs/inside.md"); + const absoluteInsideVault = path.join(databasePath, "docs", "inside.md"); + expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md"); }); it("rejects relative path traversal that escapes vault", () => { - expect(() => toVaultRelativePath("../escape.md", vaultPath)).toThrow("outside of the local database directory"); + 
expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow( + "outside of the local database directory" + ); }); }); diff --git a/src/apps/cli/main.ts b/src/apps/cli/main.ts index ecd8afa..29a6ec4 100644 --- a/src/apps/cli/main.ts +++ b/src/apps/cli/main.ts @@ -58,6 +58,7 @@ Commands: info Show detailed metadata for a file (ID, revision, conflicts, chunks) rm Mark a file as deleted in local database resolve Resolve conflicts by keeping and deleting others + mirror [vault-path] Mirror database contents to the local file system (vault-path defaults to database-path) Examples: livesync-cli ./my-database sync livesync-cli ./my-database p2p-peers 5 @@ -112,6 +113,7 @@ export function parseArgs(): CLIOptions { case "-d": // debugging automatically enables verbose logging, as it is intended for debugging issues. debug = true; + // falls through case "--verbose": case "-v": verbose = true; @@ -220,34 +222,34 @@ export async function main() { return; } - // Resolve vault path - const vaultPath = path.resolve(options.databasePath!); - // Check if vault directory exists + // Resolve database path + const databasePath = path.resolve(options.databasePath!); + // Check if database directory exists try { - const stat = await fs.stat(vaultPath); + const stat = await fs.stat(databasePath); if (!stat.isDirectory()) { - console.error(`Error: ${vaultPath} is not a directory`); + console.error(`Error: ${databasePath} is not a directory`); process.exit(1); } } catch (error) { - console.error(`Error: Vault directory ${vaultPath} does not exist`); + console.error(`Error: Database directory ${databasePath} does not exist`); process.exit(1); } // Resolve settings path const settingsPath = options.settingsPath ? 
path.resolve(options.settingsPath) - : path.join(vaultPath, SETTINGS_FILE); - configureNodeLocalStorage(path.join(vaultPath, ".livesync", "runtime", "local-storage.json")); + : path.join(databasePath, SETTINGS_FILE); + configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json")); infoLog(`Self-hosted LiveSync CLI`); - infoLog(`Vault: ${vaultPath}`); + infoLog(`Database Path: ${databasePath}`); infoLog(`Settings: ${settingsPath}`); infoLog(""); // Create service context and hub - const context = new NodeServiceContext(vaultPath); - const serviceHubInstance = new NodeServiceHub(vaultPath, context); + const context = new NodeServiceContext(databasePath); + const serviceHubInstance = new NodeServiceHub(databasePath, context); serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => { let levelStr = ""; switch (level) { @@ -321,7 +323,11 @@ export async function main() { const core = new LiveSyncBaseCore( serviceHubInstance, (core: LiveSyncBaseCore, serviceHub: InjectableServiceHub) => { - return initialiseServiceModulesCLI(vaultPath, core, serviceHub); + const mirrorVaultPath = + options.command === "mirror" && options.commandArgs[0] + ? path.resolve(options.commandArgs[0]) + : databasePath; + return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub); }, (core) => [ // No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts @@ -331,8 +337,8 @@ export async function main() { (core) => { // Add target filter to prevent internal files are handled core.services.vault.isTargetFile.addHandler(async (target) => { - const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target)); - const parts = vaultPath.split(path.sep); + const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target)); + const parts = targetPath.split(path.sep); // if some part of the path starts with dot, treat it as internal file and ignore. 
if (parts.some((part) => part.startsWith("."))) { return await Promise.resolve(false); @@ -393,7 +399,7 @@ export async function main() { infoLog(""); } - const result = await runCommand(options, { vaultPath, core, settingsPath }); + const result = await runCommand(options, { databasePath, core, settingsPath }); if (!result) { console.error(`[Error] Command '${options.command}' failed`); process.exitCode = 1; diff --git a/src/apps/cli/main.unit.spec.ts b/src/apps/cli/main.unit.spec.ts index 8206f03..83c3177 100644 --- a/src/apps/cli/main.unit.spec.ts +++ b/src/apps/cli/main.unit.spec.ts @@ -17,7 +17,7 @@ describe("CLI parseArgs", () => { }); it("exits 1 when --settings has no value", () => { - process.argv = ["node", "livesync-cli", "./vault", "--settings"]; + process.argv = ["node", "livesync-cli", "./databasePath", "--settings"]; const exitMock = mockProcessExit(); const stderr = vi.spyOn(console, "error").mockImplementation(() => {}); @@ -37,7 +37,7 @@ describe("CLI parseArgs", () => { }); it("exits 1 for unknown command after database-path", () => { - process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"]; + process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"]; const exitMock = mockProcessExit(); const stderr = vi.spyOn(console, "error").mockImplementation(() => {}); @@ -60,28 +60,28 @@ describe("CLI parseArgs", () => { }); it("parses p2p-peers command and timeout", () => { - process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"]; + process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"]; const parsed = parseArgs(); - expect(parsed.databasePath).toBe("./vault"); + expect(parsed.databasePath).toBe("./databasePath"); expect(parsed.command).toBe("p2p-peers"); expect(parsed.commandArgs).toEqual(["5"]); }); it("parses p2p-sync command with peer and timeout", () => { - process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"]; + process.argv = ["node", "livesync-cli", 
"./databasePath", "p2p-sync", "peer-1", "12"]; const parsed = parseArgs(); - expect(parsed.databasePath).toBe("./vault"); + expect(parsed.databasePath).toBe("./databasePath"); expect(parsed.command).toBe("p2p-sync"); expect(parsed.commandArgs).toEqual(["peer-1", "12"]); }); it("parses p2p-host command", () => { - process.argv = ["node", "livesync-cli", "./vault", "p2p-host"]; + process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"]; const parsed = parseArgs(); - expect(parsed.databasePath).toBe("./vault"); + expect(parsed.databasePath).toBe("./databasePath"); expect(parsed.command).toBe("p2p-host"); expect(parsed.commandArgs).toEqual([]); }); diff --git a/src/apps/cli/services/NodeServiceHub.ts b/src/apps/cli/services/NodeServiceHub.ts index 9815f42..eb2ad69 100644 --- a/src/apps/cli/services/NodeServiceHub.ts +++ b/src/apps/cli/services/NodeServiceHub.ts @@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService"; import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types"; export class NodeServiceContext extends ServiceContext { - vaultPath: string; - constructor(vaultPath: string) { + databasePath: string; + constructor(databasePath: string) { super(); - this.vaultPath = vaultPath; + this.databasePath = databasePath; } } @@ -64,7 +64,7 @@ class NodeDatabaseService extends DatabaseService< ): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } { const optionPass = { ...options, - prefix: this.context.vaultPath + nodePath.sep, + prefix: this.context.databasePath + nodePath.sep, }; const passSettings = { ...settings, useIndexedDBAdapter: false }; return super.modifyDatabaseOptions(passSettings, name, optionPass); diff --git a/src/apps/cli/test/repro-issue-860.sh b/src/apps/cli/test/repro-issue-860.sh new file mode 100755 index 0000000..801e53c --- /dev/null +++ b/src/apps/cli/test/repro-issue-860.sh @@ -0,0 +1,49 @@ +#!/usr/bin/env bash +set -euo pipefail + +SCRIPT_DIR="$(cd -- "$(dirname -- 
"${BASH_SOURCE[0]}")" && pwd)" +CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)" +cd "$CLI_DIR" +source "$SCRIPT_DIR/test-helpers.sh" + +display_test_info "Test for Issue #860: Empty output from ls and mirror" + +RUN_BUILD="${RUN_BUILD:-1}" +cli_test_init_cli_cmd + +WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")" +trap 'rm -rf "$WORK_DIR"' EXIT + +SETTINGS_FILE="$WORK_DIR/data.json" +VAULT_DIR="$WORK_DIR/vault" +mkdir -p "$VAULT_DIR" + +if [[ "$RUN_BUILD" == "1" ]]; then + echo "[INFO] building CLI..." + npm run build +fi + +echo "[INFO] generating settings -> $SETTINGS_FILE" +cli_test_init_settings_file "$SETTINGS_FILE" + +# 1. Test 'ls' on empty database +echo "[INFO] Testing 'ls' on empty database..." +LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls) +if [[ -z "$LS_OUTPUT" ]]; then + echo "[REPRODUCED] 'ls' returned empty output for empty database." +else + echo "[INFO] 'ls' output: $LS_OUTPUT" +fi + +# 2. Test 'mirror' on empty vault +echo "[INFO] Testing 'mirror' on empty vault..." +MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1) +if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then + # Note: currently it prints [Command] mirror to stderr. + # Let's see if it prints anything else. + echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)." +else + echo "[INFO] 'mirror' output: $MIRROR_OUTPUT" +fi + +echo "[DONE] finished repro-860 test" diff --git a/src/apps/cli/test/test-mirror-linux.sh b/src/apps/cli/test/test-mirror-linux.sh old mode 100644 new mode 100755 index 389cf00..21a24d3 --- a/src/apps/cli/test/test-mirror-linux.sh +++ b/src/apps/cli/test/test-mirror-linux.sh @@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT SETTINGS_FILE="$WORK_DIR/data.json" VAULT_DIR="$WORK_DIR/vault" +DB_DIR="$WORK_DIR/db" mkdir -p "$VAULT_DIR/test" +mkdir -p "$DB_DIR" if [[ "$RUN_BUILD" == "1" ]]; then echo "[INFO] building CLI..." 
@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
 
 # isConfigured=true is required for mirror (canProceedScan checks this)
 cli_test_mark_settings_configured "$SETTINGS_FILE"
 
+# Preparation: Sync settings and files logic
+DB_SETTINGS="$DB_DIR/settings.json"
+cp "$SETTINGS_FILE" "$DB_SETTINGS"
+
+# Helper for standard run (Separated paths)
+run_mirror_test() {
+    run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
+}
+
+# Helper for compatibility run (Same path)
+run_mirror_compat() {
+    run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+}
+
 PASS=0
 FAIL=0
@@ -78,19 +94,27 @@ portable_touch_timestamp() {
 # Case 1: File exists only in storage → should be synced into DB after mirror
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 1: storage-only → DB ==="
+echo "=== Case 1: storage-only → DB (Separated Paths) ==="
 
 printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+echo "[DEBUG] DB_DIR: $DB_DIR"
+echo "[DEBUG] VAULT_DIR: $VAULT_DIR"
+
+run_mirror_test
 
 RESULT_FILE="$WORK_DIR/case1-cat.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"
+# Try 'ls' first to see what's in the DB
+echo "--- DB contents ---"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
+echo "-------------------"
+
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"
 
 if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
-    assert_pass "storage-only file was synced into DB"
+    assert_pass "storage-only file was synced into DB using separated paths"
 else
-    assert_fail "storage-only file NOT synced into DB"
+    assert_fail "storage-only file NOT synced into DB with separated paths"
     echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
     echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
 fi
@@ -99,9 +123,9 @@ fi
 # Case 2: File exists only in DB → should be restored to storage after mirror
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 2: DB-only → storage ==="
+echo "=== Case 2: DB-only → storage (Separated Paths) ==="
 
-printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md
+printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md
 
 if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
     assert_fail "db-only.md unexpectedly exists in storage before mirror"
@@ -109,7 +133,7 @@ else
     echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
 fi
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
     STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
         assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
     fi
 else
-    assert_fail "DB-only file was NOT restored to storage"
+    assert_fail "DB-only file NOT restored to storage after mirror"
 fi
 
 # ─────────────────────────────────────────────────────────────────────────────
 # Case 3: File deleted in DB → should NOT be created in storage
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 3: DB-deleted → storage untouched ==="
+echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="
 
-printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md
+printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
     assert_pass "deleted DB entry was not restored to storage"
@@ -143,19 +167,19 @@ fi
 # Case 4: Both exist, storage is newer → DB should be updated
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 4: storage newer → DB updated ==="
+echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="
 
 # Seed DB with old content (mtime ≈ now)
-printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md
+printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md
 
 # Write new content to storage with a timestamp 1 hour in the future
 printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
 touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
 
 if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
     assert_pass "DB updated to match newer storage file"
 else
@@ -168,16 +192,16 @@ fi
 # Case 5: Both exist, DB is newer → storage should be updated
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 5: DB newer → storage updated ==="
+echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="
 
 # Write old content to storage with a timestamp 1 hour in the past
 printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
 touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"
 
 # Write new content to DB only (mtime ≈ now, newer than the storage file)
-printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md
+printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
 if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
@@ -186,6 +210,25 @@ else
     assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
 fi
 
+# ─────────────────────────────────────────────────────────────────────────────
+# Case 6: Compatibility test - omitted vault-path
+# ─────────────────────────────────────────────────────────────────────────────
+echo ""
+echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="
+
+# We use VAULT_DIR as the "main" database path for this part.
+printf 'compat-content\n' > "$VAULT_DIR/compat.md"
+run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+
+# In compat mode, it should find it in the DB at root
+CAT_RESULT="$WORK_DIR/compat-cat.txt"
+run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
+if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
+    assert_pass "Compatibility mode works (omitted vault-path)"
+else
+    assert_fail "Compatibility mode failed to sync file into DB"
+fi
+
 # ─────────────────────────────────────────────────────────────────────────────
 # Summary
 # ─────────────────────────────────────────────────────────────────────────────
diff --git a/src/lib b/src/lib
index e4380cc..57fb114 160000
--- a/src/lib
+++ b/src/lib
@@ -1 +1 @@
-Subproject commit e4380ccd000278ed99c3de767f2e872087cca1db
+Subproject commit 57fb114d27bd9edf477c173a301a9dbf87d5bfd4
diff --git a/updates.md b/updates.md
index 576f123..c2ab3de 100644
--- a/updates.md
+++ b/updates.md
@@ -3,6 +3,15 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
 
 The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md).
 Because 0.25 got a lot of updates, thankfully, compatibility is kept and we do not need breaking changes! In other words, when get enough stabled. The next version will be v1.0.0. Even though it my hope.
 
+## Untagged (29th April, 2026)
+
+### Fixed (CLI)
+
+- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
+    - Improved the clarity of CLI command logs by including the total count of processed items.
+- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
+- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.
+
 ## 0.25.59
 
 ### Fixed
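
Note: the compatibility rule above (an omitted `vault-path` falls back to `database-path`, as exercised by Case 6 of the mirror test) boils down to a default-parameter expansion. A minimal shell sketch; `resolve_vault_dir` is a hypothetical helper for illustration, not part of the actual CLI (the real resolution happens in the TypeScript argument parsing):

```shell
# Sketch of the vault-path fallback rule (illustrative only).
resolve_vault_dir() {
    local database_dir="$1"
    # If the second argument (vault-path) is omitted or empty,
    # fall back to the database directory (compatibility mode).
    local vault_dir="${2:-$database_dir}"
    printf '%s\n' "$vault_dir"
}

resolve_vault_dir /data/db /data/vault   # separated paths -> /data/vault
resolve_vault_dir /data/db               # compatibility   -> /data/db
```

This matches the documented behaviour of `mirror [vault-path]`: pass both directories to keep the database and the actual vault separate, or pass only the database directory to retain the pre-rename behaviour.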