mirror of
https://github.com/vrtmrz/obsidian-livesync.git
synced 2026-05-15 20:11:17 +00:00
Fixed(cli):
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.

Co-authored-by: Copilot <copilot@github.com>
@@ -45,9 +45,46 @@ CLI Main
- Settings management (JSON file)
- Graceful shutdown handling

## Something I realised later that could lead to misunderstandings

The term `vault` in this README refers to the directory containing your local database and settings file, not the actual files you want to sync. I will fix this later, but please keep this in mind for now.

## Usage

The CLI operates on a **database directory** which contains PouchDB data and settings.

```bash
lsync [database-path] [command] [args...]
```

### Arguments

- `database-path`: Path to the directory where `.livesync` folder and `settings.json` are (or will be) located.
    - Note: In previous versions, this was referred to as the "vault" path. Now it is clearly distinguished from the actual vault (the directory containing your `.md` files).

### Commands

- `sync`: Run one replication cycle with the remote CouchDB.
- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
    - If `vault-path` is provided, the CLI will synchronise the database with files in that directory.
    - If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
    - Use this command to keep your local `.md` files in sync with the database.
- `ls [prefix]`: List files currently stored in the local database.
- `push <src> <dst>`: Push a local file `<src>` into the database at path `<dst>`.
- `pull <src> <dst>`: Pull a file `<src>` from the database into local file `<dst>`.
- `cat <src>`: Read a file from the database and write to stdout.
- `put <dst>`: Read from stdin and write to the database path `<dst>`.
- `init-settings [file]`: Create a default settings file.

### Examples

```bash
# Basic sync with remote
lsync ./my-db sync

# Mirroring to your actual Obsidian vault
lsync ./my-db mirror /path/to/obsidian-vault

# Manual file operations
lsync ./my-db push ./note.md folder/note.md
lsync ./my-db pull folder/note.md ./note.md
```

## Docker

@@ -61,16 +98,16 @@ Run:

```bash
# Sync with CouchDB
-docker run --rm -v /path/to/your/vault:/data livesync-cli sync
+docker run --rm -v /path/to/your/db:/data livesync-cli sync

+# Mirror to a specific vault directory
+docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault

# List files in the local database
-docker run --rm -v /path/to/your/vault:/data livesync-cli ls
+docker run --rm -v /path/to/your/db:/data livesync-cli ls

-# Generate a default settings file
-docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
```

-The vault directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
+The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.

### P2P (WebRTC) and Docker networking

@@ -78,11 +115,11 @@ The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generat
three kinds of ICE candidates. The default Docker bridge network affects which
candidates are usable:

| Candidate type | Description                        | Bridge network             |
| -------------- | ---------------------------------- | -------------------------- |
| `host`         | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
| `srflx`        | Host public IP via STUN reflection | Works over the internet    |
| `relay`        | Traffic relayed via TURN server    | Always reachable           |

**LAN P2P on Linux** — use `--network host` so that the real host IP is
advertised as the `host` candidate:

@@ -300,11 +337,11 @@ In other words, it performs the following actions:

5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):

| Group | Condition | Action |
| --- | --- | --- |
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |

6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.

@@ -5,13 +5,13 @@ import { configURIBase } from "@lib/common/models/shared.const";
import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import type { CLICommandContext, CLIOptions } from "./types";
-import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils";
+import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils";
import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p";
import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";

export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
-    const { vaultPath, core, settingsPath } = context;
+    const { databasePath, core, settingsPath } = context;

    await core.services.control.activated;
    if (options.command === "daemon") {

@@ -77,16 +77,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        throw new Error("push requires two arguments: <src> <dst>");
    }
    const sourcePath = path.resolve(options.commandArgs[0]);
-    const destinationVaultPath = toVaultRelativePath(options.commandArgs[1], vaultPath);
+    const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath);
    const sourceData = await fs.readFile(sourcePath);
    const sourceStat = await fs.stat(sourcePath);
-    console.log(`[Command] push ${sourcePath} -> ${destinationVaultPath}`);
+    console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);

-    await core.serviceModules.storageAccess.writeFileAuto(destinationVaultPath, toArrayBuffer(sourceData), {
+    await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
        mtime: sourceStat.mtimeMs,
        ctime: sourceStat.ctimeMs,
    });
-    const destinationPathWithPrefix = destinationVaultPath as FilePathWithPrefix;
+    const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
    const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);
    return stored;
}

@@ -95,16 +95,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 2) {
        throw new Error("pull requires two arguments: <src> <dst>");
    }
-    const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+    const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
    const destinationPath = path.resolve(options.commandArgs[1]);
-    console.log(`[Command] pull ${sourceVaultPath} -> ${destinationPath}`);
+    console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`);

-    const sourcePathWithPrefix = sourceVaultPath as FilePathWithPrefix;
+    const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix;
    const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true);
    if (!restored) {
        return false;
    }
-    const data = await core.serviceModules.storageAccess.readFileAuto(sourceVaultPath);
+    const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath);
    await fs.mkdir(path.dirname(destinationPath), { recursive: true });
    if (typeof data === "string") {
        await fs.writeFile(destinationPath, data, "utf-8");

@@ -118,16 +118,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 3) {
        throw new Error("pull-rev requires three arguments: <src> <dst> <rev>");
    }
-    const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+    const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
    const destinationPath = path.resolve(options.commandArgs[1]);
    const rev = options.commandArgs[2].trim();
    if (!rev) {
        throw new Error("pull-rev requires a non-empty revision");
    }
-    console.log(`[Command] pull-rev ${sourceVaultPath}@${rev} -> ${destinationPath}`);
+    console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`);

    const source = await core.serviceModules.databaseFileAccess.fetch(
-        sourceVaultPath as FilePathWithPrefix,
+        sourceDatabasePath as FilePathWithPrefix,
        rev,
        true
    );

@@ -175,11 +175,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 1) {
        throw new Error("put requires one argument: <dst>");
    }
-    const destinationVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+    const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
    const content = await readStdinAsUtf8();
-    console.log(`[Command] put stdin -> ${destinationVaultPath}`);
+    console.log(`[Command] put stdin -> ${destinationDatabasePath}`);
    return await core.serviceModules.databaseFileAccess.storeContent(
-        destinationVaultPath as FilePathWithPrefix,
+        destinationDatabasePath as FilePathWithPrefix,
        content
    );
}

@@ -188,10 +188,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 1) {
        throw new Error("cat requires one argument: <src>");
    }
-    const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
-    console.error(`[Command] cat ${sourceVaultPath}`);
+    const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
+    console.error(`[Command] cat ${sourceDatabasePath}`);
    const source = await core.serviceModules.databaseFileAccess.fetch(
-        sourceVaultPath as FilePathWithPrefix,
+        sourceDatabasePath as FilePathWithPrefix,
        undefined,
        true
    );

@@ -212,14 +212,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 2) {
        throw new Error("cat-rev requires two arguments: <src> <rev>");
    }
-    const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+    const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
    const rev = options.commandArgs[1].trim();
    if (!rev) {
        throw new Error("cat-rev requires a non-empty revision");
    }
-    console.error(`[Command] cat-rev ${sourceVaultPath} @ ${rev}`);
+    console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`);
    const source = await core.serviceModules.databaseFileAccess.fetch(
-        sourceVaultPath as FilePathWithPrefix,
+        sourceDatabasePath as FilePathWithPrefix,
        rev,
        true
    );

@@ -239,7 +239,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.command === "ls") {
        const prefix =
            options.commandArgs.length > 0 && options.commandArgs[0].trim() !== ""
-                ? toVaultRelativePath(options.commandArgs[0], vaultPath)
+                ? toDatabaseRelativePath(options.commandArgs[0], databasePath)
                : "";
        const rows: { path: string; line: string }[] = [];

@@ -261,6 +261,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        rows.sort((a, b) => a.path.localeCompare(b.path));
        if (rows.length > 0) {
            process.stdout.write(rows.map((e) => e.line).join("\n") + "\n");
+        } else {
+            process.stderr.write("[Info] No documents found in the local database.\n");
        }
        return true;
    }

@@ -269,7 +271,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 1) {
        throw new Error("info requires one argument: <path>");
    }
-    const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+    const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);

    for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) {
        if (doc._deleted || doc.deleted) continue;

@@ -313,7 +315,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 1) {
        throw new Error("rm requires one argument: <path>");
    }
-    const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+    const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
    console.error(`[Command] rm ${targetPath}`);
    return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix);
}

@@ -322,7 +324,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
    if (options.commandArgs.length < 2) {
        throw new Error("resolve requires two arguments: <path> <revision-to-keep>");
    }
-    const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath) as FilePathWithPrefix;
+    const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix;
    const revisionToKeep = options.commandArgs[1].trim();
    if (revisionToKeep === "") {
        throw new Error("resolve requires a non-empty revision-to-keep");

@@ -58,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise<string> {

describe("runCommand abnormal cases", () => {
    const context = {
-        vaultPath: "/tmp/vault",
+        databasePath: "/tmp/vault",
        settingsPath: "/tmp/vault/.livesync/settings.json",
    } as any;

@@ -32,7 +32,7 @@ export interface CLIOptions {
}

export interface CLICommandContext {
-    vaultPath: string;
+    databasePath: string;
    core: LiveSyncBaseCore<ServiceContext, any>;
    settingsPath: string;
}

@@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer {
    return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
}

-export function toVaultRelativePath(inputPath: string, vaultPath: string): string {
+export function toDatabaseRelativePath(inputPath: string, databasePath: string): string {
    const stripped = inputPath.replace(/^[/\\]+/, "");
    if (!path.isAbsolute(inputPath)) {
        const normalized = stripped.replace(/\\/g, "/");
-        const resolved = path.resolve(vaultPath, normalized);
-        const rel = path.relative(vaultPath, resolved);
+        const resolved = path.resolve(databasePath, normalized);
+        const rel = path.relative(databasePath, resolved);
        if (rel.startsWith("..") || path.isAbsolute(rel)) {
            throw new Error(`Path ${inputPath} is outside of the local database directory`);
        }
        return rel.replace(/\\/g, "/");
    }
    const resolved = path.resolve(inputPath);
-    const rel = path.relative(vaultPath, resolved);
+    const rel = path.relative(databasePath, resolved);
    if (rel.startsWith("..") || path.isAbsolute(rel)) {
        throw new Error(`Path ${inputPath} is outside of the local database directory`);
    }

@@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin
}

export async function readStdinAsUtf8(): Promise<string> {
-    const chunks: Buffer[] = [];
+    const chunks = [];
    for await (const chunk of process.stdin) {
        if (typeof chunk === "string") {
            chunks.push(Buffer.from(chunk, "utf-8"));
        } else {
-            chunks.push(chunk);
+            chunks.push(chunk as Buffer);
        }
    }
-    return Buffer.concat(chunks).toString("utf-8");
+    return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
}

export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise<string> {

@@ -1,29 +1,33 @@
import * as path from "path";
import { describe, expect, it } from "vitest";
-import { toVaultRelativePath } from "./utils";
+import { toDatabaseRelativePath } from "./utils";

-describe("toVaultRelativePath", () => {
-    const vaultPath = path.resolve("/tmp/livesync-vault");
+describe("toDatabaseRelativePath", () => {
+    const databasePath = path.resolve("/tmp/livesync-vault");

    it("rejects absolute paths outside vault", () => {
-        expect(() => toVaultRelativePath("/etc/passwd", vaultPath)).toThrow("outside of the local database directory");
+        expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow(
+            "outside of the local database directory"
+        );
    });

    it("normalizes leading slash for absolute path inside vault", () => {
-        const absoluteInsideVault = path.join(vaultPath, "notes", "foo.md");
-        expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("notes/foo.md");
+        const absoluteInsideVault = path.join(databasePath, "notes", "foo.md");
+        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md");
    });

    it("normalizes Windows-style separators", () => {
-        expect(toVaultRelativePath("notes\\daily\\2026-03-12.md", vaultPath)).toBe("notes/daily/2026-03-12.md");
+        expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md");
    });

    it("returns vault-relative path for another absolute path inside vault", () => {
-        const absoluteInsideVault = path.join(vaultPath, "docs", "inside.md");
-        expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("docs/inside.md");
+        const absoluteInsideVault = path.join(databasePath, "docs", "inside.md");
+        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md");
    });

    it("rejects relative path traversal that escapes vault", () => {
-        expect(() => toVaultRelativePath("../escape.md", vaultPath)).toThrow("outside of the local database directory");
+        expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow(
+            "outside of the local database directory"
+        );
    });
});

@@ -58,6 +58,7 @@ Commands:
info <path>          Show detailed metadata for a file (ID, revision, conflicts, chunks)
rm <path>            Mark a file as deleted in local database
resolve <path> <rev> Resolve conflicts by keeping <rev> and deleting others
+mirror [vault-path]  Mirror database contents to the local file system (vault-path defaults to database-path)

Examples:
livesync-cli ./my-database sync
livesync-cli ./my-database p2p-peers 5

@@ -112,6 +113,7 @@ export function parseArgs(): CLIOptions {
            case "-d":
                // debugging automatically enables verbose logging, as it is intended for debugging issues.
                debug = true;
            // falls through
            case "--verbose":
            case "-v":
                verbose = true;

@@ -220,34 +222,34 @@ export async function main() {
        return;
    }

-    // Resolve vault path
-    const vaultPath = path.resolve(options.databasePath!);
-    // Check if vault directory exists
+    // Resolve database path
+    const databasePath = path.resolve(options.databasePath!);
+    // Check if database directory exists
    try {
-        const stat = await fs.stat(vaultPath);
+        const stat = await fs.stat(databasePath);
        if (!stat.isDirectory()) {
-            console.error(`Error: ${vaultPath} is not a directory`);
+            console.error(`Error: ${databasePath} is not a directory`);
            process.exit(1);
        }
    } catch (error) {
-        console.error(`Error: Vault directory ${vaultPath} does not exist`);
+        console.error(`Error: Database directory ${databasePath} does not exist`);
        process.exit(1);
    }

    // Resolve settings path
    const settingsPath = options.settingsPath
        ? path.resolve(options.settingsPath)
-        : path.join(vaultPath, SETTINGS_FILE);
-    configureNodeLocalStorage(path.join(vaultPath, ".livesync", "runtime", "local-storage.json"));
+        : path.join(databasePath, SETTINGS_FILE);
+    configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json"));

    infoLog(`Self-hosted LiveSync CLI`);
-    infoLog(`Vault: ${vaultPath}`);
+    infoLog(`Database Path: ${databasePath}`);
    infoLog(`Settings: ${settingsPath}`);
    infoLog("");

    // Create service context and hub
-    const context = new NodeServiceContext(vaultPath);
-    const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context);
+    const context = new NodeServiceContext(databasePath);
+    const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
    serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
        let levelStr = "";
        switch (level) {

@@ -321,7 +323,11 @@ export async function main() {
    const core = new LiveSyncBaseCore(
        serviceHubInstance,
        (core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
-            return initialiseServiceModulesCLI(vaultPath, core, serviceHub);
+            const mirrorVaultPath =
+                options.command === "mirror" && options.commandArgs[0]
+                    ? path.resolve(options.commandArgs[0])
+                    : databasePath;
+            return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub);
        },
        (core) => [
            // No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts

@@ -331,8 +337,8 @@ export async function main() {
        (core) => {
            // Add target filter to prevent internal files are handled
            core.services.vault.isTargetFile.addHandler(async (target) => {
-                const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target));
-                const parts = vaultPath.split(path.sep);
+                const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
+                const parts = targetPath.split(path.sep);
                // if some part of the path starts with dot, treat it as internal file and ignore.
                if (parts.some((part) => part.startsWith("."))) {
                    return await Promise.resolve(false);

@@ -393,7 +399,7 @@ export async function main() {
        infoLog("");
    }

-    const result = await runCommand(options, { vaultPath, core, settingsPath });
+    const result = await runCommand(options, { databasePath, core, settingsPath });
    if (!result) {
        console.error(`[Error] Command '${options.command}' failed`);
        process.exitCode = 1;

@@ -17,7 +17,7 @@ describe("CLI parseArgs", () => {
    });

    it("exits 1 when --settings has no value", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "--settings"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "--settings"];
        const exitMock = mockProcessExit();
        const stderr = vi.spyOn(console, "error").mockImplementation(() => {});

@@ -37,7 +37,7 @@ describe("CLI parseArgs", () => {
    });

    it("exits 1 for unknown command after database-path", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"];
        const exitMock = mockProcessExit();
        const stderr = vi.spyOn(console, "error").mockImplementation(() => {});

@@ -60,28 +60,28 @@ describe("CLI parseArgs", () => {
    });

    it("parses p2p-peers command and timeout", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"];
        const parsed = parseArgs();

-        expect(parsed.databasePath).toBe("./vault");
+        expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-peers");
        expect(parsed.commandArgs).toEqual(["5"]);
    });

    it("parses p2p-sync command with peer and timeout", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-sync", "peer-1", "12"];
        const parsed = parseArgs();

-        expect(parsed.databasePath).toBe("./vault");
+        expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-sync");
        expect(parsed.commandArgs).toEqual(["peer-1", "12"]);
    });

    it("parses p2p-host command", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "p2p-host"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"];
        const parsed = parseArgs();

-        expect(parsed.databasePath).toBe("./vault");
+        expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-host");
        expect(parsed.commandArgs).toEqual([]);
    });

@@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService";
import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types";

export class NodeServiceContext extends ServiceContext {
-    vaultPath: string;
-    constructor(vaultPath: string) {
+    databasePath: string;
+    constructor(databasePath: string) {
        super();
-        this.vaultPath = vaultPath;
+        this.databasePath = databasePath;
    }
}

@@ -64,7 +64,7 @@ class NodeDatabaseService<T extends NodeServiceContext> extends DatabaseService<
    ): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } {
        const optionPass = {
            ...options,
-            prefix: this.context.vaultPath + nodePath.sep,
+            prefix: this.context.databasePath + nodePath.sep,
        };
        const passSettings = { ...settings, useIndexedDBAdapter: false };
        return super.modifyDatabaseOptions(passSettings, name, optionPass);

src/apps/cli/test/repro-issue-860.sh (new executable file, 49 lines)
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"

display_test_info "Test for Issue #860: Empty output from ls and mirror"

RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT

SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR"

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."
    npm run build
fi

echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"

# 1. Test 'ls' on empty database
echo "[INFO] Testing 'ls' on empty database..."
LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls)
if [[ -z "$LS_OUTPUT" ]]; then
    echo "[REPRODUCED] 'ls' returned empty output for empty database."
else
    echo "[INFO] 'ls' output: $LS_OUTPUT"
fi

# 2. Test 'mirror' on empty vault
echo "[INFO] Testing 'mirror' on empty vault..."
MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1)
if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then
    # Note: currently it prints [Command] mirror to stderr.
    # Let's see if it prints anything else.
    echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)."
else
    echo "[INFO] 'mirror' output: $MIRROR_OUTPUT"
fi

echo "[DONE] finished repro-860 test"
83	src/apps/cli/test/test-mirror-linux.sh	(Normal file → Executable file)
@@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT
 
 SETTINGS_FILE="$WORK_DIR/data.json"
 VAULT_DIR="$WORK_DIR/vault"
+DB_DIR="$WORK_DIR/db"
 mkdir -p "$VAULT_DIR/test"
+mkdir -p "$DB_DIR"
 
 if [[ "$RUN_BUILD" == "1" ]]; then
     echo "[INFO] building CLI..."
@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
 # isConfigured=true is required for mirror (canProceedScan checks this)
 cli_test_mark_settings_configured "$SETTINGS_FILE"
 
+# Preparation: Sync settings and files logic
+DB_SETTINGS="$DB_DIR/settings.json"
+cp "$SETTINGS_FILE" "$DB_SETTINGS"
+
+# Helper for standard run (Separated paths)
+run_mirror_test() {
+    run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
+}
+
+# Helper for compatibility run (Same path)
+run_mirror_compat() {
+    run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+}
+
 PASS=0
 FAIL=0
 
@@ -78,19 +94,27 @@ portable_touch_timestamp() {
 # Case 1: File exists only in storage → should be synced into DB after mirror
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 1: storage-only → DB ==="
+echo "=== Case 1: storage-only → DB (Separated Paths) ==="
 
 printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+echo "[DEBUG] DB_DIR: $DB_DIR"
+echo "[DEBUG] VAULT_DIR: $VAULT_DIR"
+
+run_mirror_test
 
 RESULT_FILE="$WORK_DIR/case1-cat.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"
+# Try 'ls' first to see what's in the DB
+echo "--- DB contents ---"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
+echo "-------------------"
+
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"
 
 if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
-    assert_pass "storage-only file was synced into DB"
+    assert_pass "storage-only file was synced into DB using separated paths"
 else
-    assert_fail "storage-only file NOT synced into DB"
+    assert_fail "storage-only file NOT synced into DB with separated paths"
     echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
     echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
 fi
@@ -99,9 +123,9 @@ fi
 # Case 2: File exists only in DB → should be restored to storage after mirror
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 2: DB-only → storage ==="
+echo "=== Case 2: DB-only → storage (Separated Paths) ==="
 
-printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md
+printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md
 
 if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
     assert_fail "db-only.md unexpectedly exists in storage before mirror"
@@ -109,7 +133,7 @@ else
     echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
 fi
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
     STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
         assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
     fi
 else
-    assert_fail "DB-only file was NOT restored to storage"
+    assert_fail "DB-only file NOT restored to storage after mirror"
 fi
 
 # ─────────────────────────────────────────────────────────────────────────────
 # Case 3: File deleted in DB → should NOT be created in storage
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 3: DB-deleted → storage untouched ==="
+echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="
 
-printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md
+printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
     assert_pass "deleted DB entry was not restored to storage"
@@ -143,19 +167,19 @@ fi
 # Case 4: Both exist, storage is newer → DB should be updated
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 4: storage newer → DB updated ==="
+echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="
 
 # Seed DB with old content (mtime ≈ now)
-printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md
+printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md
 
 # Write new content to storage with a timestamp 1 hour in the future
 printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
 touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
 if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
     assert_pass "DB updated to match newer storage file"
 else
@@ -168,16 +192,16 @@ fi
 # Case 5: Both exist, DB is newer → storage should be updated
 # ─────────────────────────────────────────────────────────────────────────────
 echo ""
-echo "=== Case 5: DB newer → storage updated ==="
+echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="
 
 # Write old content to storage with a timestamp 1 hour in the past
 printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
 touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"
 
 # Write new content to DB only (mtime ≈ now, newer than the storage file)
-printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md
+printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md
 
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
 
 STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
 if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
@@ -186,6 +210,25 @@ else
     assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
 fi
 
+# ─────────────────────────────────────────────────────────────────────────────
+# Case 6: Compatibility test - omitted vault-path
+# ─────────────────────────────────────────────────────────────────────────────
+echo ""
+echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="
+
+# We use VAULT_DIR as the "main" database path for this part.
+printf 'compat-content\n' > "$VAULT_DIR/compat.md"
+run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+
+# In compat mode, it should find it in the DB at root
+CAT_RESULT="$WORK_DIR/compat-cat.txt"
+run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
+if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
+    assert_pass "Compatibility mode works (omitted vault-path)"
+else
+    assert_fail "Compatibility mode failed to sync file into DB"
+fi
+
 # ─────────────────────────────────────────────────────────────────────────────
 # Summary
 # ─────────────────────────────────────────────────────────────────────────────