Mirror of https://github.com/vrtmrz/obsidian-livesync.git
Synced 2026-03-13 21:38:48 +00:00

Compare commits: 0.25.52-pa ... beta (10 commits)
| Author | SHA1 | Date |
|---|---|---|
| | bf93bddbdd | |
| | 44890a34e8 | |
| | a14aa201a8 | |
| | 338a9ba9fa | |
| | 0c65b5add9 | |
| | 29ce9a5df4 | |
| | 10f5cb8b42 | |
| | 8aad3716d4 | |
| | d45f41500a | |
| | 4cc0a11d86 | |
.github/workflows/cli-e2e.yml (vendored, new file, +84)
@@ -0,0 +1,84 @@
# Run CLI E2E tests
name: cli-e2e

on:
  workflow_dispatch:
    inputs:
      suite:
        description: 'CLI E2E suite to run'
        type: choice
        options:
          - two-vaults-matrix
          - two-vaults-couchdb
          - two-vaults-minio
        default: two-vaults-matrix
  push:
    branches:
      - main
      - beta
    paths:
      - '.github/workflows/cli-e2e.yml'
      - 'src/apps/cli/**'
      - 'src/lib/src/API/processSetting.ts'
      - 'package.json'
      - 'package-lock.json'
  pull_request:
    paths:
      - '.github/workflows/cli-e2e.yml'
      - 'src/apps/cli/**'
      - 'src/lib/src/API/processSetting.ts'
      - 'package.json'
      - 'package-lock.json'

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 45
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24.x'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run CLI E2E suite
        working-directory: src/apps/cli
        env:
          CI: true
          TEST_SUITE: ${{ github.event_name == 'workflow_dispatch' && inputs.suite || 'two-vaults-matrix' }}
        run: |
          set -euo pipefail
          echo "[INFO] Running CLI E2E suite: $TEST_SUITE"
          case "$TEST_SUITE" in
            two-vaults-matrix)
              npm run test:e2e:two-vaults:matrix
              ;;
            two-vaults-couchdb)
              REMOTE_TYPE=COUCHDB ENCRYPT=0 npm run test:e2e:two-vaults
              ;;
            two-vaults-minio)
              REMOTE_TYPE=MINIO ENCRYPT=0 npm run test:e2e:two-vaults
              ;;
            *)
              echo "[ERROR] Unknown suite: $TEST_SUITE" >&2
              exit 1
              ;;
          esac

      - name: Stop test containers
        if: always()
        working-directory: src/apps/cli
        run: |
          bash ./util/couchdb-stop.sh >/dev/null 2>&1 || true
          bash ./util/minio-stop.sh >/dev/null 2>&1 || true
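The workflow's suite dispatch can be exercised locally with the same `case` statement. This is a minimal sketch that only selects and prints the command it would run; the npm script names are taken from the workflow above:

```bash
set -euo pipefail
# Pick a suite; the workflow defaults to two-vaults-matrix.
TEST_SUITE="two-vaults-couchdb"
case "$TEST_SUITE" in
    two-vaults-matrix) cmd="npm run test:e2e:two-vaults:matrix" ;;
    two-vaults-couchdb) cmd="REMOTE_TYPE=COUCHDB ENCRYPT=0 npm run test:e2e:two-vaults" ;;
    two-vaults-minio) cmd="REMOTE_TYPE=MINIO ENCRYPT=0 npm run test:e2e:two-vaults" ;;
    *) echo "[ERROR] Unknown suite: $TEST_SUITE" >&2; exit 1 ;;
esac
echo "[INFO] Would run: $cmd"
```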
.github/workflows/unit-ci.yml (vendored, 12 lines changed)
@@ -7,6 +7,18 @@ on:
    branches:
      - main
      - beta
+   paths:
+     - 'src/**'
+     - 'test/**'
+     - 'lib/**'
+     - 'package.json'
+     - 'package-lock.json'
+     - 'tsconfig.json'
+     - 'vite.config.ts'
+     - 'vitest.config*.ts'
+     - 'esbuild.config.mjs'
+     - 'eslint.config.mjs'
+     - '.github/workflows/unit-ci.yml'

permissions:
  contents: read
@@ -63,43 +63,43 @@ As you know, the CLI is designed to be used in a headless environment. Hence all

```bash
# Sync local database with CouchDB (no files will be changed).
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json sync
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json sync

# Push files to local database
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md

# Pull files from local database
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md

# Verbose logging
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose

# Apply setup URI to settings file (settings only; does not run synchronisation)
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."

# Put text from stdin into local database
-echo "Hello from stdin" | npm run cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
+echo "Hello from stdin" | npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md

# Output a file from local database to stdout
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md

# Output a specific revision of a file from local database
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef

# Pull a specific revision of a file from local database to local storage
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef

# List files in local database
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/

# Show metadata for a file in local database
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md

# Mark a file as deleted in local database
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md

# Resolve conflict by keeping a specific revision
-npm run cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
+npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```

### Configuration
@@ -159,14 +159,26 @@ Commands:
info <vaultPath>                  Show file metadata including current and past revisions, conflicts, and chunk list
rm <vaultPath>                    Mark file as deleted in local database
resolve <vaultPath> <revision>    Resolve conflict by keeping the specified revision
+mirror <storagePath> <vaultPath>  Mirror local file into local database.
```

Run via npm script:

```bash
-npm run cli -- [database-path] [options] [command] [command-args]
+npm run --silent cli -- [database-path] [options] [command] [command-args]
```

#### Detailed Command Descriptions

##### ls

`ls` lists files in the local database with optional prefix filtering. Output format is:

```
vault/path/file.md<TAB>size<TAB>mtime<TAB>revision[*]
```

Note: a trailing `*` indicates that the file has conflicts.
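As a sketch of consuming this output, assuming the TAB-separated format described above (the file name and revision here are made-up sample data, not real CLI output):

```bash
# Hypothetical ls output line, in the documented format.
line=$'notes/todo.md\t1024\t1700000000\t3-abcdef*'
# Fields are TAB-separated; cut's default delimiter is TAB.
path="$(cut -f1 <<<"$line")"
rev="$(cut -f4 <<<"$line")"
# A trailing '*' on the revision marks a conflicted file.
if [[ "$rev" == *'*' ]]; then
    echo "$path has conflicts (rev ${rev%\*})"
fi
```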

##### info

`info` output fields:

- `id`: Document ID

@@ -179,9 +191,39 @@ npm run cli -- [database-path] [options] [command] [command-args]
- `chunks`: Number of chunk IDs
- `children`: Chunk ID list

-### Planned options:
+##### mirror

-TODO: Conflict and resolution checks for real local databases.
+`mirror` is a command that synchronises your local storage with your local database. It is essentially the process that runs on startup in Obsidian.

In other words, it performs the following actions:

1. **Precondition checks** — Aborts early if any of the following conditions are not met:
   - Settings must be configured (`isConfigured: true`).
   - File watching must not be suspended (`suspendFileWatching: false`).
   - Remediation mode must be inactive (`maxMTimeForReflectEvents: 0`).

2. **State restoration** — On subsequent runs (after the first successful scan), restores the previous storage state before proceeding.

3. **Expired deletion cleanup** — If `automaticallyDeleteMetadataOfDeletedFiles` is set to a positive number of days, any document that is marked deleted and whose `mtime` is older than the retention period is permanently removed from the local database.

4. **File collection** — Enumerates files from two sources:
   - **Storage**: all files under the vault path that pass `isTargetFile`.
   - **Local database**: all normal documents (fetched with conflict information) whose paths are valid and pass `isTargetFile`.
   - Both collections build case-insensitive ↔ case-sensitive path maps, controlled by `handleFilenameCaseSensitive`.

5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):

| Group | Condition | Action |
|---|---|---|
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |

6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.

Note: `mirror` does not respect file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run. To delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour (if `mirror` handled deletions automatically, it would have to contend with a ton of edge cases).
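The freshness comparison in step 5 can be sketched in plain shell. This is illustrative only, with made-up file names and mtimes standing in for the real storage and database scans; it is not the actual implementation:

```bash
#!/usr/bin/env bash
set -euo pipefail
# Sample mtimes standing in for a storage scan and a database scan.
declare -A storage_mtime=( ["a.md"]=100 ["b.md"]=200 )
declare -A db_mtime=( ["b.md"]=150 ["c.md"]=300 )

result=""
# Iterate over the union of both file sets, sorted for stable output.
while IFS= read -r f; do
    s="${storage_mtime[$f]:-}"
    d="${db_mtime[$f]:-}"
    if [[ -n "$s" && -z "$d" ]]; then
        verdict="UPDATE DATABASE"    # exists in storage only
    elif [[ -z "$s" && -n "$d" ]]; then
        verdict="UPDATE STORAGE"     # exists in database only
    elif (( s > d )); then
        verdict="STORAGE -> DB"      # storage is newer
    elif (( s < d )); then
        verdict="STORAGE <- DB"      # database is newer
    else
        verdict="in sync"            # equal mtimes: do nothing
    fi
    result+="$f: $verdict"$'\n'
done < <(printf '%s\n' "${!storage_mtime[@]}" "${!db_mtime[@]}" | sort -u)
printf '%s' "$result"
```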

+### Planned options:

- `--immediate`: Perform sync after the command (e.g. `push`, `pull`, `put`, `rm`).
- `serve`: Start CLI in server mode, exposing REST APIs for remote and batch operations.
@@ -194,9 +236,9 @@ TODO: Conflict and resolution checks for real local databases.

Create default settings, apply a setup URI, then run one sync cycle.

```bash
-npm run cli -- init-settings /data/livesync-settings.json
-printf '%s\n' "$SETUP_PASSPHRASE" | npm run cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
-npm run cli -- /data/vault --settings /data/livesync-settings.json sync
+npm run --silent cli -- init-settings /data/livesync-settings.json
+printf '%s\n' "$SETUP_PASSPHRASE" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
```
### 2. Scripted import and export

@@ -204,8 +246,8 @@ npm run cli -- /data/vault --settings /data/livesync-settings.json sync

Push local files into the database from automation, and pull them back for export or backup.

```bash
-npm run cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
-npm run cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```
### 3. Revision inspection and restore

@@ -213,9 +255,9 @@ npm run cli -- /data/vault --settings /data/livesync-settings.json pull notes/no

List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).

```bash
-npm run cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
-npm run cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
-npm run cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```
### 4. Conflict and cleanup workflow

@@ -223,9 +265,9 @@ npm run cli -- /data/vault --settings /data/livesync-settings.json pull-rev note

Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.

```bash
-npm run cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
-npm run cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
-npm run cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```
### 5. CI smoke test for content round-trip

@@ -233,8 +275,8 @@ npm run cli -- /data/vault --settings /data/livesync-settings.json rm notes/obso

Validate that `put`/`cat` is behaving as expected in a pipeline.

```bash
-echo "hello-ci" | npm run cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
-npm run cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
+echo "hello-ci" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
+npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
```
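The round-trip above can be made self-verifying with a small comparison. This is a sketch of the assertion pattern only; `echo` stands in for the `cat` invocation, which requires a configured vault and settings file:

```bash
set -euo pipefail
expected="hello-ci"
# Stand-in for: npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
actual="$(echo "hello-ci")"
if [[ "$expected" != "$actual" ]]; then
    echo "[FAIL] round-trip mismatch: expected '$expected', got '$actual'" >&2
    exit 1
fi
echo "[OK] put/cat round-trip verified"
```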

## Development

@@ -6,6 +6,8 @@ import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSetting
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import type { CLICommandContext, CLIOptions } from "./types";
import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils";
+import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
+import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";

export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
    const { vaultPath, core, settingsPath } = context;
@@ -309,5 +311,12 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
        return true;
    }

+    if (options.command === "mirror") {
+        console.error("[Command] mirror");
+        const log = (msg: unknown) => console.error(`[Mirror] ${msg}`);
+        const errorManager = new UnresolvedErrorManager(core.services.appLifecycle);
+        return await performFullScan(core as any, log, errorManager, false, true);
+    }
+
    throw new Error(`Unsupported command: ${options.command}`);
}
@@ -15,12 +15,14 @@ export type CLICommand =
    | "info"
    | "rm"
    | "resolve"
+   | "mirror"
    | "init-settings";

export interface CLIOptions {
    databasePath?: string;
    settingsPath?: string;
    verbose?: boolean;
    debug?: boolean;
+   force?: boolean;
    command: CLICommand;
    commandArgs: string[];
@@ -45,5 +47,6 @@ export const VALID_COMMANDS = new Set([
    "info",
    "rm",
    "resolve",
+   "mirror",
    "init-settings",
] as const);
@@ -27,10 +27,12 @@ import { initialiseServiceModulesCLI } from "./serviceModules/CLIServiceModules"
import { DEFAULT_SETTINGS, LOG_LEVEL_VERBOSE, type LOG_LEVEL, type ObsidianLiveSyncSettings } from "@lib/common/types";
import type { InjectableServiceHub } from "@lib/services/implements/injectable/InjectableServiceHub";
import type { InjectableSettingService } from "@/lib/src/services/implements/injectable/InjectableSettingService";
-import { LOG_LEVEL_DEBUG, setGlobalLogFunction, defaultLoggerEnv } from "octagonal-wheels/common/logger";
+import { LOG_LEVEL_DEBUG, setGlobalLogFunction, defaultLoggerEnv, LOG_LEVEL_INFO, LOG_LEVEL_URGENT, LOG_LEVEL_NOTICE } from "octagonal-wheels/common/logger";
import { runCommand } from "./commands/runCommand";
import { VALID_COMMANDS } from "./commands/types";
import type { CLICommand, CLIOptions } from "./commands/types";
+import { getPathFromUXFileInfo } from "@lib/common/typeUtils";
+import { stripAllPrefixes } from "@lib/string_and_binary/path";

const SETTINGS_FILE = ".livesync/settings.json";
defaultLoggerEnv.minLogLevel = LOG_LEVEL_DEBUG;
@@ -45,12 +47,12 @@ defaultLoggerEnv.minLogLevel = LOG_LEVEL_DEBUG;
//     recentLogEntries.value = [...recentLogEntries.value, entry];
// };

-setGlobalLogFunction((msg, level) => {
-    console.error(`[${level}] ${typeof msg === "string" ? msg : JSON.stringify(msg)}`);
-    if (msg instanceof Error) {
-        console.error(msg);
-    }
-});
+// setGlobalLogFunction((msg, level) => {
+//     console.error(`[${level}] ${typeof msg === "string" ? msg : JSON.stringify(msg)}`);
+//     if (msg instanceof Error) {
+//         console.error(msg);
+//     }
+// });
function printHelp(): void {
    console.log(`
Self-hosted LiveSync CLI
@@ -103,6 +105,7 @@ export function parseArgs(): CLIOptions {
    let databasePath: string | undefined;
    let settingsPath: string | undefined;
    let verbose = false;
    let debug = false;
+   let force = false;
    let command: CLICommand = "daemon";
    const commandArgs: string[] = [];
@@ -120,6 +123,10 @@ export function parseArgs(): CLIOptions {
            settingsPath = args[i];
            break;
        }
+       case "--debug":
+       case "-d":
+           // Debugging automatically enables verbose logging, as it is intended for debugging issues.
+           debug = true;
        case "--verbose":
        case "-v":
            verbose = true;
@@ -165,6 +172,7 @@ export function parseArgs(): CLIOptions {
        databasePath,
        settingsPath,
        verbose,
        debug,
+       force,
        command,
        commandArgs,
@@ -209,7 +217,18 @@ export async function main() {
        options.command === "rm" ||
        options.command === "resolve";
    const infoLog = avoidStdoutNoise ? console.error : console.log;

+   if (options.debug) {
+       setGlobalLogFunction((msg, level) => {
+           console.error(`[${level}] ${typeof msg === "string" ? msg : JSON.stringify(msg)}`);
+           if (msg instanceof Error) {
+               console.error(msg);
+           }
+       });
+   } else {
+       setGlobalLogFunction((msg, level) => {
+           // NO OP, leave it to logFunction
+       });
+   }
    if (options.command === "init-settings") {
        await createDefaultSettingsFile(options);
        return;
@@ -243,8 +262,28 @@ export async function main() {
    const context = new NodeServiceContext(vaultPath);
    const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context);
    serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
-       const prefix = `[${level}]`;
-       if (level <= LOG_LEVEL_VERBOSE) {
+       let levelStr = "";
+       switch (level) {
+           case LOG_LEVEL_DEBUG:
+               levelStr = "debug";
+               break;
+           case LOG_LEVEL_VERBOSE:
+               levelStr = "Verbose";
+               break;
+           case LOG_LEVEL_INFO:
+               levelStr = "Info";
+               break;
+           case LOG_LEVEL_NOTICE:
+               levelStr = "Notice";
+               break;
+           case LOG_LEVEL_URGENT:
+               levelStr = "Urgent";
+               break;
+           default:
+               levelStr = `${level}`;
+       }
+       const prefix = `(${levelStr})`;
+       if (level <= LOG_LEVEL_INFO) {
            if (!options.verbose) return;
        }
        console.error(`${prefix} ${message}`);
@@ -254,6 +293,7 @@ export async function main() {
        console.error(`[Info] Replication result received, but not processed automatically in CLI mode.`);
        return await Promise.resolve(true);
    }, -100);

    // Setup settings handlers
    const settingService = serviceHubInstance.setting;
@@ -298,7 +338,18 @@ export async function main() {
        },
        () => [], // No extra modules
        () => [], // No add-ons
-       () => [] // No serviceFeatures
+       () => [], // No serviceFeatures
+       (core) => {
+           // Add a target filter to prevent internal files from being handled.
+           core.services.vault.isTargetFile.addHandler(async (target) => {
+               const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target));
+               const parts = vaultPath.split(path.sep);
+               // If any part of the path starts with a dot, treat it as an internal file and ignore it.
+               if (parts.some((part) => part.startsWith("."))) {
+                   return await Promise.resolve(false);
+               }
+               return await Promise.resolve(true);
+           }, -1 /* highest priority */);
+       }
    );

    // Setup signal handlers for graceful shutdown
@@ -66,9 +66,9 @@ class CLIStatusAdapter implements IStorageEventStatusAdapter {
        const now = Date.now();
        if (now - this.lastUpdate > this.updateInterval) {
            if (status.totalQueued > 0 || status.processing > 0) {
-               console.log(
-                   `[StorageEventManager] Batched: ${status.batched}, Processing: ${status.processing}, Total Queued: ${status.totalQueued}`
-               );
+               // console.log(
+               //     `[StorageEventManager] Batched: ${status.batched}, Processing: ${status.processing}, Total Queued: ${status.totalQueued}`
+               // );
            }
            this.lastUpdate = now;
        }
@@ -108,7 +108,7 @@ class CLIWatchAdapter implements IStorageEventWatchAdapter {
    async beginWatch(handlers: IStorageEventWatchHandlers): Promise<void> {
        // File watching is not activated in the CLI,
        // because the CLI is designed for push/pull operations, not real-time sync.
-       console.error("[CLIWatchAdapter] File watching is not enabled in CLI version");
+       // console.error("[CLIWatchAdapter] File watching is not enabled in CLI version");
        return Promise.resolve();
    }
}
@@ -18,7 +18,9 @@
        "test:e2e:push-pull": "bash test/test-push-pull-linux.sh",
        "test:e2e:setup-put-cat": "bash test/test-setup-put-cat-linux.sh",
        "test:e2e:sync-two-local": "bash test/test-sync-two-local-databases-linux.sh",
-       "test:e2e:all": "npm run test:e2e:two-vaults && npm run test:e2e:push-pull && npm run test:e2e:setup-put-cat && npm run test:e2e:sync-two-local"
+       "test:e2e:mirror": "bash test/test-mirror-linux.sh",
+       "pretest:e2e:all": "npm run build",
+       "test:e2e:all": "export RUN_BUILD=0 && npm run test:e2e:setup-put-cat && npm run test:e2e:push-pull && npm run test:e2e:sync-two-local && npm run test:e2e:mirror && npm run test:e2e:two-vaults"
    },
    "dependencies": {},
    "devDependencies": {}
@@ -10,6 +10,78 @@ import { createInstanceLogFunction } from "@lib/services/lib/logUtils";
import * as nodeFs from "node:fs";
import * as nodePath from "node:path";

+const NODE_KV_TYPED_KEY = "__nodeKvType";
+const NODE_KV_VALUES_KEY = "values";
+
+type SerializableContainer =
+    | {
+          [NODE_KV_TYPED_KEY]: "Set";
+          [NODE_KV_VALUES_KEY]: unknown[];
+      }
+    | {
+          [NODE_KV_TYPED_KEY]: "Uint8Array";
+          [NODE_KV_VALUES_KEY]: number[];
+      }
+    | {
+          [NODE_KV_TYPED_KEY]: "ArrayBuffer";
+          [NODE_KV_VALUES_KEY]: number[];
+      };
+
+function isRecord(value: unknown): value is Record<string, unknown> {
+    return typeof value === "object" && value !== null;
+}
+
+function serializeForNodeKV(value: unknown): unknown {
+    if (value instanceof Set) {
+        return {
+            [NODE_KV_TYPED_KEY]: "Set",
+            [NODE_KV_VALUES_KEY]: [...value].map((entry) => serializeForNodeKV(entry)),
+        } satisfies SerializableContainer;
+    }
+    if (value instanceof Uint8Array) {
+        return {
+            [NODE_KV_TYPED_KEY]: "Uint8Array",
+            [NODE_KV_VALUES_KEY]: Array.from(value),
+        } satisfies SerializableContainer;
+    }
+    if (value instanceof ArrayBuffer) {
+        return {
+            [NODE_KV_TYPED_KEY]: "ArrayBuffer",
+            [NODE_KV_VALUES_KEY]: Array.from(new Uint8Array(value)),
+        } satisfies SerializableContainer;
+    }
+    if (Array.isArray(value)) {
+        return value.map((entry) => serializeForNodeKV(entry));
+    }
+    if (isRecord(value)) {
+        return Object.fromEntries(Object.entries(value).map(([k, v]) => [k, serializeForNodeKV(v)]));
+    }
+    return value;
+}
+
+function deserializeFromNodeKV(value: unknown): unknown {
+    if (Array.isArray(value)) {
+        return value.map((entry) => deserializeFromNodeKV(entry));
+    }
+    if (!isRecord(value)) {
+        return value;
+    }
+
+    const taggedType = value[NODE_KV_TYPED_KEY];
+    const taggedValues = value[NODE_KV_VALUES_KEY];
+    if (taggedType === "Set" && Array.isArray(taggedValues)) {
+        return new Set(taggedValues.map((entry) => deserializeFromNodeKV(entry)));
+    }
+    if (taggedType === "Uint8Array" && Array.isArray(taggedValues)) {
+        return Uint8Array.from(taggedValues);
+    }
+    if (taggedType === "ArrayBuffer" && Array.isArray(taggedValues)) {
+        return Uint8Array.from(taggedValues).buffer;
+    }
+
+    return Object.fromEntries(Object.entries(value).map(([k, v]) => [k, deserializeFromNodeKV(v)]));
+}
+
class NodeFileKeyValueDatabase implements KeyValueDatabase {
    private filePath: string;
    private data = new Map<string, unknown>();
@@ -29,7 +101,9 @@ class NodeFileKeyValueDatabase implements KeyValueDatabase {
    private load() {
        try {
            const loaded = JSON.parse(nodeFs.readFileSync(this.filePath, "utf-8")) as Record<string, unknown>;
-           this.data = new Map(Object.entries(loaded));
+           this.data = new Map(
+               Object.entries(loaded).map(([key, value]) => [key, deserializeFromNodeKV(value)])
+           );
        } catch {
            this.data = new Map();
        }
@@ -37,7 +111,10 @@ class NodeFileKeyValueDatabase implements KeyValueDatabase {

    private flush() {
        nodeFs.mkdirSync(nodePath.dirname(this.filePath), { recursive: true });
-       nodeFs.writeFileSync(this.filePath, JSON.stringify(Object.fromEntries(this.data), null, 2), "utf-8");
+       const serializable = Object.fromEntries(
+           [...this.data.entries()].map(([key, value]) => [key, serializeForNodeKV(value)])
+       );
+       nodeFs.writeFileSync(this.filePath, JSON.stringify(serializable, null, 2), "utf-8");
    }

    async get<T>(key: IDBValidKey): Promise<T> {
@@ -4,8 +4,9 @@ set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"

-CLI_CMD=(npm --silent run cli -- -v)
+source "$SCRIPT_DIR/test-helpers.sh"
VERBOSE_TEST_LOGGING="${VERBOSE_TEST_LOGGING:-0}"
+cli_test_init_cli_cmd
RUN_BUILD="${RUN_BUILD:-1}"
KEEP_TEST_DATA="${KEEP_TEST_DATA:-0}"
TEST_ENV_FILE="${TEST_ENV_FILE:-$CLI_DIR/.test.env}"
@@ -36,27 +37,24 @@ COUCHDB_URI=""
COUCHDB_DBNAME=""
MINIO_BUCKET=""

-require_env() {
-    local var_name="$1"
-    if [[ -z "${!var_name:-}" ]]; then
-        echo "[ERROR] required variable '$var_name' is missing in $TEST_ENV_FILE" >&2
-        exit 1
-    fi
-}
-
if [[ "$REMOTE_TYPE" == "COUCHDB" ]]; then
-    require_env hostname
-    require_env dbname
-    require_env username
-    require_env password
+    cli_test_require_env hostname "$TEST_ENV_FILE"
+    cli_test_require_env dbname "$TEST_ENV_FILE"
+    cli_test_require_env username "$TEST_ENV_FILE"
+    cli_test_require_env password "$TEST_ENV_FILE"
    COUCHDB_URI="${hostname%/}"
    COUCHDB_DBNAME="${dbname}-${DB_SUFFIX}"
    COUCHDB_USER="${username:-}"
    COUCHDB_PASSWORD="${password:-}"
elif [[ "$REMOTE_TYPE" == "MINIO" ]]; then
-    require_env accessKey
-    require_env secretKey
-    require_env minioEndpoint
-    require_env bucketName
+    cli_test_require_env accessKey "$TEST_ENV_FILE"
+    cli_test_require_env secretKey "$TEST_ENV_FILE"
+    cli_test_require_env minioEndpoint "$TEST_ENV_FILE"
+    cli_test_require_env bucketName "$TEST_ENV_FILE"
    MINIO_BUCKET="${bucketName}-${DB_SUFFIX}"
    MINIO_ENDPOINT="${minioEndpoint:-}"
    MINIO_ACCESS_KEY="${accessKey:-}"
    MINIO_SECRET_KEY="${secretKey:-}"
else
    echo "[ERROR] unsupported REMOTE_TYPE: $REMOTE_TYPE (use COUCHDB or MINIO)" >&2
    exit 1
@@ -65,9 +63,9 @@ fi
cleanup() {
    local exit_code=$?
    if [[ "$REMOTE_TYPE" == "COUCHDB" ]]; then
-        bash "$CLI_DIR/util/couchdb-stop.sh" >/dev/null 2>&1 || true
+        cli_test_stop_couchdb
    else
-        bash "$CLI_DIR/util/minio-stop.sh" >/dev/null 2>&1 || true
+        cli_test_stop_minio
    fi

    if [[ "$KEEP_TEST_DATA" != "1" ]]; then
@@ -83,10 +81,6 @@ cleanup() {
}
trap cleanup EXIT

-run_cli() {
-    "${CLI_CMD[@]}" "$@"
-}
-
run_cli_a() {
    run_cli "$VAULT_A" --settings "$SETTINGS_A" "$@"
}
@@ -95,191 +89,28 @@ run_cli_b() {
    run_cli "$VAULT_B" --settings "$SETTINGS_B" "$@"
}

assert_contains() {
    local haystack="$1"
    local needle="$2"
    local message="$3"
    if ! grep -Fq "$needle" <<< "$haystack"; then
        echo "[FAIL] $message" >&2
        echo "[FAIL] expected to find: $needle" >&2
        echo "[FAIL] actual output:" >&2
        echo "$haystack" >&2
        exit 1
    fi
}

assert_equal() {
    local expected="$1"
    local actual="$2"
    local message="$3"
    if [[ "$expected" != "$actual" ]]; then
        echo "[FAIL] $message" >&2
        echo "[FAIL] expected: $expected" >&2
        echo "[FAIL] actual: $actual" >&2
        exit 1
    fi
}

assert_command_fails() {
    local message="$1"
    shift
    set +e
    "$@" >"$WORK_DIR/failed-command.log" 2>&1
    local exit_code=$?
    set -e
    if [[ "$exit_code" -eq 0 ]]; then
        echo "[FAIL] $message" >&2
        cat "$WORK_DIR/failed-command.log" >&2
        exit 1
    fi
}

assert_files_equal() {
    local expected_file="$1"
    local actual_file="$2"
    local message="$3"
    if ! cmp -s "$expected_file" "$actual_file"; then
        echo "[FAIL] $message" >&2
        echo "[FAIL] expected sha256: $(sha256sum "$expected_file" | awk '{print $1}')" >&2
        echo "[FAIL] actual sha256: $(sha256sum "$actual_file" | awk '{print $1}')" >&2
        exit 1
    fi
}

sanitise_cat_stdout() {
    sed '/^\[CLIWatchAdapter\] File watching is not enabled in CLI version$/d'
}

extract_json_string_field() {
    local field_name="$1"
    node -e '
        const fs = require("node:fs");
        const fieldName = process.argv[1];
        const data = JSON.parse(fs.readFileSync(0, "utf-8"));
        const value = data[fieldName];
        if (typeof value === "string") {
            process.stdout.write(value);
        }
    ' "$field_name"
}

sync_both() {
    run_cli_a sync >/dev/null
    run_cli_b sync >/dev/null
}

curl_json() {
    curl -4 -sS --fail --connect-timeout 3 --max-time 15 "$@"
}

configure_remote_settings() {
    local settings_file="$1"
    SETTINGS_FILE="$settings_file" \
        REMOTE_TYPE="$REMOTE_TYPE" \
|
||||
COUCHDB_URI="$COUCHDB_URI" \
|
||||
COUCHDB_USER="${username:-}" \
|
||||
COUCHDB_PASSWORD="${password:-}" \
|
||||
COUCHDB_DBNAME="$COUCHDB_DBNAME" \
|
||||
MINIO_ENDPOINT="${minioEndpoint:-}" \
|
||||
MINIO_BUCKET="$MINIO_BUCKET" \
|
||||
MINIO_ACCESS_KEY="${accessKey:-}" \
|
||||
MINIO_SECRET_KEY="${secretKey:-}" \
|
||||
ENCRYPT="$ENCRYPT" \
|
||||
E2E_PASSPHRASE="$E2E_PASSPHRASE" \
|
||||
node <<'NODE'
|
||||
const fs = require("node:fs");
|
||||
const settingsPath = process.env.SETTINGS_FILE;
|
||||
const data = JSON.parse(fs.readFileSync(settingsPath, "utf-8"));
|
||||
|
||||
const remoteType = process.env.REMOTE_TYPE;
|
||||
if (remoteType === "COUCHDB") {
|
||||
data.remoteType = "";
|
||||
data.couchDB_URI = process.env.COUCHDB_URI;
|
||||
data.couchDB_USER = process.env.COUCHDB_USER;
|
||||
data.couchDB_PASSWORD = process.env.COUCHDB_PASSWORD;
|
||||
data.couchDB_DBNAME = process.env.COUCHDB_DBNAME;
|
||||
} else if (remoteType === "MINIO") {
|
||||
data.remoteType = "MINIO";
|
||||
data.bucket = process.env.MINIO_BUCKET;
|
||||
data.endpoint = process.env.MINIO_ENDPOINT;
|
||||
data.accessKey = process.env.MINIO_ACCESS_KEY;
|
||||
data.secretKey = process.env.MINIO_SECRET_KEY;
|
||||
data.region = "auto";
|
||||
data.forcePathStyle = true;
|
||||
}
|
||||
|
||||
data.liveSync = true;
|
||||
data.syncOnStart = false;
|
||||
data.syncOnSave = false;
|
||||
data.usePluginSync = false;
|
||||
|
||||
data.encrypt = process.env.ENCRYPT === "1";
|
||||
data.passphrase = data.encrypt ? process.env.E2E_PASSPHRASE : "";
|
||||
|
||||
data.isConfigured = true;
|
||||
|
||||
fs.writeFileSync(settingsPath, JSON.stringify(data, null, 2), "utf-8");
|
||||
NODE
|
||||
cli_test_apply_remote_sync_settings "$settings_file"
|
||||
}
|
||||
|
||||
init_settings() {
|
||||
local settings_file="$1"
|
||||
run_cli init-settings --force "$settings_file" >/dev/null
|
||||
cli_test_init_settings_file "$settings_file"
|
||||
configure_remote_settings "$settings_file"
|
||||
cat "$settings_file"
|
||||
}
|
||||
|
||||
wait_for_minio_bucket() {
|
||||
local retries=30
|
||||
local delay_sec=2
|
||||
local i
|
||||
for ((i = 1; i <= retries; i++)); do
|
||||
if docker run --rm --network host --entrypoint=/bin/sh minio/mc -c "mc alias set myminio $minioEndpoint $accessKey $secretKey >/dev/null 2>&1 && mc ls myminio/$MINIO_BUCKET >/dev/null 2>&1"; then
|
||||
return 0
|
||||
fi
|
||||
bucketName="$MINIO_BUCKET" bash "$CLI_DIR/util/minio-init.sh" >/dev/null 2>&1 || true
|
||||
sleep "$delay_sec"
|
||||
done
|
||||
return 1
|
||||
}
|
||||
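`wait_for_minio_bucket` follows a common probe-with-retry shape: attempt a readiness check up to a fixed number of times, sleeping between attempts, and report failure only after exhausting retries. A minimal, self-contained sketch of that shape (`retry_until` and `probe` are hypothetical names for illustration, not part of this repository):

```shell
#!/usr/bin/env bash
# Generic form of the retry loop used by wait_for_minio_bucket: run a probe
# command up to N times with a fixed delay, succeeding on the first pass.
retry_until() {
    local retries="$1"
    local delay_sec="$2"
    shift 2
    local i
    for ((i = 1; i <= retries; i++)); do
        if "$@"; then
            return 0
        fi
        sleep "$delay_sec"
    done
    return 1
}

# Demo: a probe that succeeds on its third invocation.
attempts=0
probe() {
    attempts=$((attempts + 1))
    [[ "$attempts" -ge 3 ]]
}
retry_until 5 0 probe && echo "ready after $attempts attempts"
# → ready after 3 attempts
```

The real helper differs only in its probe (a `mc ls` via docker) and in re-running `minio-init.sh` between attempts.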

start_remote() {
    if [[ "$REMOTE_TYPE" == "COUCHDB" ]]; then
        echo "[INFO] stopping leftover CouchDB container if present"
        bash "$CLI_DIR/util/couchdb-stop.sh" >/dev/null 2>&1 || true

        echo "[INFO] starting CouchDB test container"
        bash "$CLI_DIR/util/couchdb-start.sh"

        echo "[INFO] initialising CouchDB test container"
        bash "$CLI_DIR/util/couchdb-init.sh"

        echo "[INFO] CouchDB create test database: $COUCHDB_DBNAME"
        until (curl_json -X PUT --user "${username}:${password}" "${hostname}/${COUCHDB_DBNAME}"); do sleep 5; done
        cli_test_start_couchdb "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME"
    else
        echo "[INFO] stopping leftover MinIO container if present"
        bash "$CLI_DIR/util/minio-stop.sh" >/dev/null 2>&1 || true

        echo "[INFO] starting MinIO test container"
        bucketName="$MINIO_BUCKET" bash "$CLI_DIR/util/minio-start.sh"

        echo "[INFO] initialising MinIO test bucket: $MINIO_BUCKET"
        local minio_init_ok=0
        for _ in 1 2 3 4 5; do
            if bucketName="$MINIO_BUCKET" bash "$CLI_DIR/util/minio-init.sh"; then
                minio_init_ok=1
                break
            fi
            sleep 2
        done
        if [[ "$minio_init_ok" != "1" ]]; then
            echo "[FAIL] could not initialise MinIO bucket after retries: $MINIO_BUCKET" >&2
            exit 1
        fi
        if ! wait_for_minio_bucket; then
            echo "[FAIL] MinIO bucket not ready: $MINIO_BUCKET" >&2
            exit 1
        fi
        cli_test_start_minio "$MINIO_ENDPOINT" "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY" "$MINIO_BUCKET"
    fi
}

@@ -313,14 +144,14 @@ TARGET_CONFLICT="e2e/conflict.md"
echo "[CASE] A puts and A can get info"
printf 'alpha-from-a\n' | run_cli_a put "$TARGET_A_ONLY" >/dev/null
INFO_A_ONLY="$(run_cli_a info "$TARGET_A_ONLY")"
assert_contains "$INFO_A_ONLY" "\"path\": \"$TARGET_A_ONLY\"" "A info should include path after put"
cli_test_assert_contains "$INFO_A_ONLY" "\"path\": \"$TARGET_A_ONLY\"" "A info should include path after put"
echo "[PASS] A put/info"

echo "[CASE] A puts, both sync, and B can get info"
printf 'visible-after-sync\n' | run_cli_a put "$TARGET_SYNC" >/dev/null
sync_both
INFO_B_SYNC="$(run_cli_b info "$TARGET_SYNC")"
assert_contains "$INFO_B_SYNC" "\"path\": \"$TARGET_SYNC\"" "B info should include path after sync"
cli_test_assert_contains "$INFO_B_SYNC" "\"path\": \"$TARGET_SYNC\"" "B info should include path after sync"
echo "[PASS] sync A->B and B info"

echo "[CASE] A pushes and puts, both sync, and B can pull and cat"
@@ -331,9 +162,9 @@ run_cli_a push "$PUSH_SRC" "$TARGET_PUSH" >/dev/null
printf 'put-content-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_PUT" >/dev/null
sync_both
run_cli_b pull "$TARGET_PUSH" "$PULL_DST" >/dev/null
assert_files_equal "$PUSH_SRC" "$PULL_DST" "B pull result does not match pushed source"
CAT_B_PUT="$(run_cli_b cat "$TARGET_PUT" | sanitise_cat_stdout)"
assert_equal "put-content-$DB_SUFFIX" "$CAT_B_PUT" "B cat should return A put content"
cli_test_assert_files_equal "$PUSH_SRC" "$PULL_DST" "B pull result does not match pushed source"
CAT_B_PUT="$(run_cli_b cat "$TARGET_PUT" | cli_test_sanitise_cat_stdout)"
cli_test_assert_equal "put-content-$DB_SUFFIX" "$CAT_B_PUT" "B cat should return A put content"
echo "[PASS] push/pull and put/cat across vaults"

echo "[CASE] A pushes binary, both sync, and B can pull identical bytes"
@@ -343,31 +174,44 @@ head -c 4096 /dev/urandom > "$PUSH_BINARY_SRC"
run_cli_a push "$PUSH_BINARY_SRC" "$TARGET_PUSH_BINARY" >/dev/null
sync_both
run_cli_b pull "$TARGET_PUSH_BINARY" "$PULL_BINARY_DST" >/dev/null
assert_files_equal "$PUSH_BINARY_SRC" "$PULL_BINARY_DST" "B pull result does not match pushed binary source"
cli_test_assert_files_equal "$PUSH_BINARY_SRC" "$PULL_BINARY_DST" "B pull result does not match pushed binary source"
echo "[PASS] binary push/pull across vaults"

echo "[CASE] A removes, both sync, and B can no longer cat"
run_cli_a rm "$TARGET_PUT" >/dev/null
sync_both
assert_command_fails "B cat should fail after A removed the file and synced" run_cli_b cat "$TARGET_PUT"
cli_test_assert_command_fails "B cat should fail after A removed the file and synced" "$WORK_DIR/failed-command.log" run_cli_b cat "$TARGET_PUT"
echo "[PASS] rm is replicated"

echo "[CASE] verify conflict detection"
printf 'conflict-base\n' | run_cli_a put "$TARGET_CONFLICT" >/dev/null
sync_both
INFO_B_BASE="$(run_cli_b info "$TARGET_CONFLICT")"
assert_contains "$INFO_B_BASE" "\"path\": \"$TARGET_CONFLICT\"" "B should be able to info before creating conflict"
cli_test_assert_contains "$INFO_B_BASE" "\"path\": \"$TARGET_CONFLICT\"" "B should be able to info before creating conflict"

printf 'conflict-from-a-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_CONFLICT" >/dev/null
printf 'conflict-from-b-%s\n' "$DB_SUFFIX" | run_cli_b put "$TARGET_CONFLICT" >/dev/null

run_cli_a sync >/dev/null
run_cli_b sync >/dev/null
run_cli_a sync >/dev/null
INFO_A_CONFLICT=""
INFO_B_CONFLICT=""
CONFLICT_DETECTED=0

INFO_A_CONFLICT="$(run_cli_a info "$TARGET_CONFLICT")"
INFO_B_CONFLICT="$(run_cli_b info "$TARGET_CONFLICT")"
if grep -qF '"conflicts": "N/A"' <<< "$INFO_A_CONFLICT" && grep -qF '"conflicts": "N/A"' <<< "$INFO_B_CONFLICT"; then
for side in a b a; do
    if [[ "$side" == "a" ]]; then
        run_cli_a sync >/dev/null
    else
        run_cli_b sync >/dev/null
    fi

    INFO_A_CONFLICT="$(run_cli_a info "$TARGET_CONFLICT")"
    INFO_B_CONFLICT="$(run_cli_b info "$TARGET_CONFLICT")"
    if ! grep -qF '"conflicts": "N/A"' <<< "$INFO_A_CONFLICT" || ! grep -qF '"conflicts": "N/A"' <<< "$INFO_B_CONFLICT"; then
        CONFLICT_DETECTED=1
        break
    fi
done

if [[ "$CONFLICT_DETECTED" != "1" ]]; then
    echo "[FAIL] conflict was expected but both A and B show Conflicts: N/A" >&2
    echo "--- A info ---" >&2
    echo "$INFO_A_CONFLICT" >&2
@@ -399,7 +243,7 @@ fi
echo "[PASS] ls marks conflicts"

echo "[CASE] resolve conflict on A and verify both vaults are clean"
KEEP_REVISION="$(printf '%s' "$INFO_A_CONFLICT" | extract_json_string_field revision)"
KEEP_REVISION="$(printf '%s' "$INFO_A_CONFLICT" | cli_test_json_string_field_from_stdin revision)"
if [[ -z "$KEEP_REVISION" ]]; then
    echo "[FAIL] could not extract current revision from A info output" >&2
    echo "$INFO_A_CONFLICT" >&2
@@ -411,7 +255,7 @@ run_cli_a resolve "$TARGET_CONFLICT" "$KEEP_REVISION" >/dev/null
INFO_A_RESOLVED=""
INFO_B_RESOLVED=""
RESOLVE_PROPAGATED=0
for _ in 1 2 3 4 5; do
for _ in 1 2 3 4 5 6; do
    sync_both
    INFO_A_RESOLVED="$(run_cli_a info "$TARGET_CONFLICT")"
    INFO_B_RESOLVED="$(run_cli_b info "$TARGET_CONFLICT")"
@@ -419,19 +263,15 @@ for _ in 1 2 3 4 5; do
        RESOLVE_PROPAGATED=1
        break
    fi
done
if [[ "$RESOLVE_PROPAGATED" != "1" ]]; then
    KEEP_REVISION_B="$(printf '%s' "$INFO_B_RESOLVED" | extract_json_string_field revision)"
    if [[ -n "$KEEP_REVISION_B" ]]; then
        run_cli_b resolve "$TARGET_CONFLICT" "$KEEP_REVISION_B" >/dev/null
        sync_both
        INFO_A_RESOLVED="$(run_cli_a info "$TARGET_CONFLICT")"
        INFO_B_RESOLVED="$(run_cli_b info "$TARGET_CONFLICT")"
        if grep -qF '"conflicts": "N/A"' <<< "$INFO_A_RESOLVED" && grep -qF '"conflicts": "N/A"' <<< "$INFO_B_RESOLVED"; then
            RESOLVE_PROPAGATED=1

    # Retry from A only when conflict remains due to eventual consistency.
    if ! grep -qF '"conflicts": "N/A"' <<< "$INFO_A_RESOLVED"; then
        KEEP_REVISION_A="$(printf '%s' "$INFO_A_RESOLVED" | cli_test_json_string_field_from_stdin revision)"
        if [[ -n "$KEEP_REVISION_A" ]]; then
            run_cli_a resolve "$TARGET_CONFLICT" "$KEEP_REVISION_A" >/dev/null || true
        fi
    fi
fi
done

if [[ "$RESOLVE_PROPAGATED" != "1" ]]; then
    echo "[FAIL] conflicts should be resolved on both vaults" >&2
@@ -453,9 +293,9 @@ if [[ "$LS_A_RESOLVED_REV" == *"*" || "$LS_B_RESOLVED_REV" == *"*" ]]; then
    exit 1
fi

CAT_A_RESOLVED="$(run_cli_a cat "$TARGET_CONFLICT" | sanitise_cat_stdout)"
CAT_B_RESOLVED="$(run_cli_b cat "$TARGET_CONFLICT" | sanitise_cat_stdout)"
assert_equal "$CAT_A_RESOLVED" "$CAT_B_RESOLVED" "resolved content should match across both vaults"
CAT_A_RESOLVED="$(run_cli_a cat "$TARGET_CONFLICT" | cli_test_sanitise_cat_stdout)"
CAT_B_RESOLVED="$(run_cli_b cat "$TARGET_CONFLICT" | cli_test_sanitise_cat_stdout)"
cli_test_assert_equal "$CAT_A_RESOLVED" "$CAT_B_RESOLVED" "resolved content should match across both vaults"
echo "[PASS] resolve is replicated and ls reflects resolved state"

echo "[PASS] all requested E2E scenarios completed (${TEST_LABEL})"
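Before any `cat` output is compared above, it is piped through a sed helper that strips the CLI's watch-adapter notice so content comparisons are not polluted by a log line. A standalone sketch of that filtering step, using the same sed expression as the script:

```shell
#!/usr/bin/env bash
# Drop the CLI's watch-adapter notice from piped output before comparing
# `cat` results, as the E2E script does (same sed expression).
sanitise_cat_stdout() {
    sed '/^\[CLIWatchAdapter\] File watching is not enabled in CLI version$/d'
}

printf '%s\n' \
    '[CLIWatchAdapter] File watching is not enabled in CLI version' \
    'real file content' | sanitise_cat_stdout
# → real file content
```

Anchoring the pattern with `^` and `$` ensures only the exact notice line is removed, never file content that happens to mention the adapter.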
295  src/apps/cli/test/test-helpers.sh  Normal file
@@ -0,0 +1,295 @@
#!/usr/bin/env bash

cli_test_init_cli_cmd() {
    if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then
        CLI_CMD=(npm --silent run cli -- -v)
    else
        CLI_CMD=(npm --silent run cli --)
    fi
}

run_cli() {
    "${CLI_CMD[@]}" "$@"
}

cli_test_require_env() {
    local var_name="$1"
    local env_file="${2:-${TEST_ENV_FILE:-environment}}"
    if [[ -z "${!var_name:-}" ]]; then
        echo "[ERROR] required variable '$var_name' is missing in $env_file" >&2
        exit 1
    fi
}

cli_test_assert_contains() {
    local haystack="$1"
    local needle="$2"
    local message="$3"
    if ! grep -Fq "$needle" <<< "$haystack"; then
        echo "[FAIL] $message" >&2
        echo "[FAIL] expected to find: $needle" >&2
        echo "[FAIL] actual output:" >&2
        echo "$haystack" >&2
        exit 1
    fi
}

cli_test_assert_equal() {
    local expected="$1"
    local actual="$2"
    local message="$3"
    if [[ "$expected" != "$actual" ]]; then
        echo "[FAIL] $message" >&2
        echo "[FAIL] expected: $expected" >&2
        echo "[FAIL] actual: $actual" >&2
        exit 1
    fi
}

cli_test_assert_command_fails() {
    local message="$1"
    local log_file="$2"
    shift 2
    set +e
    "$@" >"$log_file" 2>&1
    local exit_code=$?
    set -e
    if [[ "$exit_code" -eq 0 ]]; then
        echo "[FAIL] $message" >&2
        cat "$log_file" >&2
        exit 1
    fi
}

cli_test_assert_files_equal() {
    local expected_file="$1"
    local actual_file="$2"
    local message="$3"
    if ! cmp -s "$expected_file" "$actual_file"; then
        echo "[FAIL] $message" >&2
        echo "[FAIL] expected sha256: $(sha256sum "$expected_file" | awk '{print $1}')" >&2
        echo "[FAIL] actual sha256: $(sha256sum "$actual_file" | awk '{print $1}')" >&2
        exit 1
    fi
}

cli_test_sanitise_cat_stdout() {
    sed '/^\[CLIWatchAdapter\] File watching is not enabled in CLI version$/d'
}

cli_test_json_string_field_from_stdin() {
    local field_name="$1"
    node -e '
        const fs = require("node:fs");
        const fieldName = process.argv[1];
        const data = JSON.parse(fs.readFileSync(0, "utf-8"));
        const value = data[fieldName];
        if (typeof value === "string") {
            process.stdout.write(value);
        }
    ' "$field_name"
}

cli_test_json_string_field_from_file() {
    local json_file="$1"
    local field_name="$2"
    node -e '
        const fs = require("node:fs");
        const jsonFile = process.argv[1];
        const fieldName = process.argv[2];
        const data = JSON.parse(fs.readFileSync(jsonFile, "utf-8"));
        const value = data[fieldName];
        if (typeof value === "string") {
            process.stdout.write(value);
        }
    ' "$json_file" "$field_name"
}

cli_test_json_field_is_na() {
    local json_file="$1"
    local field_name="$2"
    [[ "$(cli_test_json_string_field_from_file "$json_file" "$field_name")" == "N/A" ]]
}

cli_test_curl_json() {
    curl -4 -sS --fail --connect-timeout 3 --max-time 15 "$@"
}

cli_test_init_settings_file() {
    local settings_file="$1"
    run_cli init-settings --force "$settings_file" >/dev/null
}

cli_test_mark_settings_configured() {
    local settings_file="$1"
    SETTINGS_FILE="$settings_file" node <<'NODE'
const fs = require("node:fs");
const settingsPath = process.env.SETTINGS_FILE;
const data = JSON.parse(fs.readFileSync(settingsPath, "utf-8"));
data.isConfigured = true;
fs.writeFileSync(settingsPath, JSON.stringify(data, null, 2), "utf-8");
NODE
}

cli_test_apply_couchdb_settings() {
    local settings_file="$1"
    local couchdb_uri="$2"
    local couchdb_user="$3"
    local couchdb_password="$4"
    local couchdb_dbname="$5"
    local live_sync="${6:-0}"
    SETTINGS_FILE="$settings_file" \
    COUCHDB_URI="$couchdb_uri" \
    COUCHDB_USER="$couchdb_user" \
    COUCHDB_PASSWORD="$couchdb_password" \
    COUCHDB_DBNAME="$couchdb_dbname" \
    LIVE_SYNC="$live_sync" \
    node <<'NODE'
const fs = require("node:fs");
const settingsPath = process.env.SETTINGS_FILE;
const data = JSON.parse(fs.readFileSync(settingsPath, "utf-8"));
data.couchDB_URI = process.env.COUCHDB_URI;
data.couchDB_USER = process.env.COUCHDB_USER;
data.couchDB_PASSWORD = process.env.COUCHDB_PASSWORD;
data.couchDB_DBNAME = process.env.COUCHDB_DBNAME;
if (process.env.LIVE_SYNC === "1") {
    data.liveSync = true;
    data.syncOnStart = false;
    data.syncOnSave = false;
    data.usePluginSync = false;
}
data.isConfigured = true;
fs.writeFileSync(settingsPath, JSON.stringify(data, null, 2), "utf-8");
NODE
}

cli_test_apply_remote_sync_settings() {
    local settings_file="$1"
    SETTINGS_FILE="$settings_file" \
    REMOTE_TYPE="$REMOTE_TYPE" \
    COUCHDB_URI="$COUCHDB_URI" \
    COUCHDB_USER="${COUCHDB_USER:-}" \
    COUCHDB_PASSWORD="${COUCHDB_PASSWORD:-}" \
    COUCHDB_DBNAME="$COUCHDB_DBNAME" \
    MINIO_ENDPOINT="${MINIO_ENDPOINT:-}" \
    MINIO_BUCKET="$MINIO_BUCKET" \
    MINIO_ACCESS_KEY="${MINIO_ACCESS_KEY:-}" \
    MINIO_SECRET_KEY="${MINIO_SECRET_KEY:-}" \
    ENCRYPT="${ENCRYPT:-0}" \
    E2E_PASSPHRASE="${E2E_PASSPHRASE:-}" \
    node <<'NODE'
const fs = require("node:fs");
const settingsPath = process.env.SETTINGS_FILE;
const data = JSON.parse(fs.readFileSync(settingsPath, "utf-8"));

const remoteType = process.env.REMOTE_TYPE;
if (remoteType === "COUCHDB") {
    data.remoteType = "";
    data.couchDB_URI = process.env.COUCHDB_URI;
    data.couchDB_USER = process.env.COUCHDB_USER;
    data.couchDB_PASSWORD = process.env.COUCHDB_PASSWORD;
    data.couchDB_DBNAME = process.env.COUCHDB_DBNAME;
} else if (remoteType === "MINIO") {
    data.remoteType = "MINIO";
    data.bucket = process.env.MINIO_BUCKET;
    data.endpoint = process.env.MINIO_ENDPOINT;
    data.accessKey = process.env.MINIO_ACCESS_KEY;
    data.secretKey = process.env.MINIO_SECRET_KEY;
    data.region = "auto";
    data.forcePathStyle = true;
}

data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
data.encrypt = process.env.ENCRYPT === "1";
data.passphrase = data.encrypt ? process.env.E2E_PASSPHRASE : "";
data.isConfigured = true;

fs.writeFileSync(settingsPath, JSON.stringify(data, null, 2), "utf-8");
NODE
}

cli_test_stop_couchdb() {
    bash "$CLI_DIR/util/couchdb-stop.sh" >/dev/null 2>&1 || true
}

cli_test_start_couchdb() {
    local couchdb_uri="$1"
    local couchdb_user="$2"
    local couchdb_password="$3"
    local couchdb_dbname="$4"
    echo "[INFO] stopping leftover CouchDB container if present"
    cli_test_stop_couchdb

    echo "[INFO] starting CouchDB test container"
    bash "$CLI_DIR/util/couchdb-start.sh"

    echo "[INFO] initialising CouchDB test container"
    bash "$CLI_DIR/util/couchdb-init.sh"

    echo "[INFO] CouchDB create test database: $couchdb_dbname"
    until (cli_test_curl_json -X PUT --user "${couchdb_user}:${couchdb_password}" "${couchdb_uri}/${couchdb_dbname}"); do sleep 5; done
}

cli_test_stop_minio() {
    bash "$CLI_DIR/util/minio-stop.sh" >/dev/null 2>&1 || true
}

cli_test_wait_for_minio_bucket() {
    local minio_endpoint="$1"
    local minio_access_key="$2"
    local minio_secret_key="$3"
    local minio_bucket="$4"
    local retries=30
    local delay_sec=2
    local i
    for ((i = 1; i <= retries; i++)); do
        if docker run --rm --network host --entrypoint=/bin/sh minio/mc -c "mc alias set myminio $minio_endpoint $minio_access_key $minio_secret_key >/dev/null 2>&1 && mc ls myminio/$minio_bucket >/dev/null 2>&1"; then
            return 0
        fi
        bucketName="$minio_bucket" bash "$CLI_DIR/util/minio-init.sh" >/dev/null 2>&1 || true
        sleep "$delay_sec"
    done
    return 1
}

cli_test_start_minio() {
    local minio_endpoint="$1"
    local minio_access_key="$2"
    local minio_secret_key="$3"
    local minio_bucket="$4"
    local minio_init_ok=0

    echo "[INFO] stopping leftover MinIO container if present"
    cli_test_stop_minio

    echo "[INFO] starting MinIO test container"
    bucketName="$minio_bucket" bash "$CLI_DIR/util/minio-start.sh"

    echo "[INFO] initialising MinIO test bucket: $minio_bucket"
    for _ in 1 2 3 4 5; do
        if bucketName="$minio_bucket" bash "$CLI_DIR/util/minio-init.sh"; then
            minio_init_ok=1
            break
        fi
        sleep 2
    done
    if [[ "$minio_init_ok" != "1" ]]; then
        echo "[FAIL] could not initialise MinIO bucket after retries: $minio_bucket" >&2
        exit 1
    fi
    if ! cli_test_wait_for_minio_bucket "$minio_endpoint" "$minio_access_key" "$minio_secret_key" "$minio_bucket"; then
        echo "[FAIL] MinIO bucket not ready: $minio_bucket" >&2
        exit 1
    fi
}

display_test_info(){
    echo "======================"
    echo "Script: ${BASH_SOURCE[1]:-$0}"
    echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    echo "Git commit: $(git -C "$SCRIPT_DIR/.." rev-parse --short HEAD 2>/dev/null || echo "N/A")"
    echo "======================"
}
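`cli_test_require_env` checks a variable by name rather than by value, which in Bash requires indirect expansion (`${!var_name}`). A standalone sketch of the same check (`require_env_demo` is a hypothetical name; it uses `return` instead of the helper's `exit 1` so both outcomes can be observed in one shell):

```shell
#!/usr/bin/env bash
# Check that the variable named by $1 is set and non-empty, via Bash
# indirect expansion — the mechanism cli_test_require_env uses (with
# `return 1` substituted for `exit 1` for demonstration purposes).
require_env_demo() {
    local var_name="$1"
    if [[ -z "${!var_name:-}" ]]; then
        echo "[ERROR] required variable '$var_name' is missing" >&2
        return 1
    fi
}

dbname="testdb"
require_env_demo dbname && echo "dbname is set"
require_env_demo password || echo "password is missing"
```

The `:-` default inside the indirect expansion keeps the check safe under `set -u`, which the test scripts enable via `set -euo pipefail`.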
169  src/apps/cli/test/test-mirror-linux.sh  Executable file
@@ -0,0 +1,169 @@
#!/usr/bin/env bash
# Test: mirror command — storage <-> local database synchronisation
#
# Covered cases:
# 1. Storage-only file → synced into DB (UPDATE DATABASE)
# 2. DB-only file → restored to storage (UPDATE STORAGE)
# 3. DB-deleted file → NOT restored to storage (UPDATE STORAGE skip)
# 4. Both, storage newer → DB updated (SYNC: STORAGE → DB)
# 5. Both, DB newer → storage updated (SYNC: DB → STORAGE)
#
# Not covered (require precise mtime control or artificial conflict injection):
# - Both, equal mtime → no-op (EVEN)
# - Conflicted entry → skipped
#
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info

RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-test.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT

SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR/test"

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."
    npm run build
fi

echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"

# isConfigured=true is required for mirror (canProceedScan checks this)
cli_test_mark_settings_configured "$SETTINGS_FILE"

PASS=0
FAIL=0

assert_pass() { echo "[PASS] $1"; PASS=$((PASS + 1)); }
assert_fail() { echo "[FAIL] $1" >&2; FAIL=$((FAIL + 1)); }

# ─────────────────────────────────────────────────────────────────────────────
# Case 1: File exists only in storage → should be synced into DB after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 1: storage-only → DB ==="

printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror

RESULT_FILE="$WORK_DIR/case1-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"

if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
    assert_pass "storage-only file was synced into DB"
else
    assert_fail "storage-only file NOT synced into DB"
    echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
    echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
fi

# ─────────────────────────────────────────────────────────────────────────────
# Case 2: File exists only in DB → should be restored to storage after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 2: DB-only → storage ==="

printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md

if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
    assert_fail "db-only.md unexpectedly exists in storage before mirror"
else
    echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
fi

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror

if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
    STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
    if [[ "$STORAGE_CONTENT" == "db-only content" ]]; then
        assert_pass "DB-only file was restored to storage"
    else
        assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
    fi
else
    assert_fail "DB-only file was NOT restored to storage"
fi

# ─────────────────────────────────────────────────────────────────────────────
# Case 3: File deleted in DB → should NOT be created in storage
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 3: DB-deleted → storage untouched ==="

printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror

if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
    assert_pass "deleted DB entry was not restored to storage"
else
    assert_fail "deleted DB entry was incorrectly restored to storage"
fi

# ─────────────────────────────────────────────────────────────────────────────
# Case 4: Both exist, storage is newer → DB should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 4: storage newer → DB updated ==="

# Seed DB with old content (mtime ≈ now)
printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md

# Write new content to storage with a timestamp 1 hour in the future
printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
touch -t "$(date -d '+1 hour' +%Y%m%d%H%M)" "$VAULT_DIR/test/sync-storage-newer.md"

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror

DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
    assert_pass "DB updated to match newer storage file"
else
    assert_fail "DB NOT updated to match newer storage file"
    echo "--- expected(storage) ---" >&2; cat "$VAULT_DIR/test/sync-storage-newer.md" >&2
    echo "--- pulled(from db) ---" >&2; cat "$DB_RESULT_FILE" >&2
fi

# ─────────────────────────────────────────────────────────────────────────────
# Case 5: Both exist, DB is newer → storage should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 5: DB newer → storage updated ==="

# Write old content to storage with a timestamp 1 hour in the past
printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
touch -t "$(date -d '-1 hour' +%Y%m%d%H%M)" "$VAULT_DIR/test/sync-db-newer.md"

# Write new content to DB only (mtime ≈ now, newer than the storage file)
printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror

STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
    assert_pass "storage updated to match newer DB entry"
else
    assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
fi

# ─────────────────────────────────────────────────────────────────────────────
# Summary
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "Results: PASS=$PASS FAIL=$FAIL"
if [[ "$FAIL" -gt 0 ]]; then
    exit 1
fi
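Unlike the E2E script, which exits on the first failed assertion, the mirror test tallies results and fails only at the end, so one broken case does not hide the others. A minimal sketch of that tally pattern:

```shell
#!/usr/bin/env bash
# Tally-style assertions: record every result, print a summary, and fail
# the run only at the end — the pattern test-mirror-linux.sh uses.
PASS=0
FAIL=0
assert_pass() { echo "[PASS] $1"; PASS=$((PASS + 1)); }
assert_fail() { echo "[FAIL] $1" >&2; FAIL=$((FAIL + 1)); }

# Two sample checks: one that holds, one that does not.
if [[ "hello" == "hello" ]]; then assert_pass "strings match"; else assert_fail "strings match"; fi
if [[ -n "" ]]; then assert_pass "non-empty"; else assert_fail "non-empty"; fi

echo "Results: PASS=$PASS FAIL=$FAIL"
# → Results: PASS=1 FAIL=1
```

Note that this pattern only works without `set -e` aborting on the failing test commands, which is why the checks are wrapped in `if` rather than run bare.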
@@ -4,10 +4,12 @@ set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info

CLI_CMD=(npm run cli --)
RUN_BUILD="${RUN_BUILD:-1}"
REMOTE_PATH="${REMOTE_PATH:-test/push-pull.txt}"
cli_test_init_cli_cmd

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-test.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
@@ -19,26 +21,12 @@ if [[ "$RUN_BUILD" == "1" ]]; then
    npm run build
fi

run_cli() {
    "${CLI_CMD[@]}" "$@"
}

echo "[INFO] generating settings from DEFAULT_SETTINGS -> $SETTINGS_FILE"
run_cli init-settings --force "$SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"

if [[ -n "${COUCHDB_URI:-}" && -n "${COUCHDB_USER:-}" && -n "${COUCHDB_PASSWORD:-}" && -n "${COUCHDB_DBNAME:-}" ]]; then
    echo "[INFO] applying CouchDB env vars to generated settings"
    SETTINGS_FILE="$SETTINGS_FILE" node <<'NODE'
const fs = require("node:fs");
const settingsPath = process.env.SETTINGS_FILE;
const data = JSON.parse(fs.readFileSync(settingsPath, "utf-8"));
data.couchDB_URI = process.env.COUCHDB_URI;
data.couchDB_USER = process.env.COUCHDB_USER;
data.couchDB_PASSWORD = process.env.COUCHDB_PASSWORD;
data.couchDB_DBNAME = process.env.COUCHDB_DBNAME;
data.isConfigured = true;
fs.writeFileSync(settingsPath, JSON.stringify(data, null, 2), "utf-8");
NODE
    cli_test_apply_couchdb_settings "$SETTINGS_FILE" "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME"
else
    echo "[WARN] CouchDB env vars are not fully set. push/pull may fail unless generated settings are updated."
fi

@@ -5,11 +5,13 @@ SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
REPO_ROOT="$(cd -- "$CLI_DIR/../../.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info

CLI_CMD=(npm run cli --)
RUN_BUILD="${RUN_BUILD:-1}"
REMOTE_PATH="${REMOTE_PATH:-test/setup-put-cat.txt}"
SETUP_PASSPHRASE="${SETUP_PASSPHRASE:-setup-passphrase}"
cli_test_init_cli_cmd

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-test.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
@@ -21,12 +23,8 @@ if [[ "$RUN_BUILD" == "1" ]]; then
    npm run build
fi

run_cli() {
    "${CLI_CMD[@]}" "$@"
}

echo "[INFO] generating settings from DEFAULT_SETTINGS -> $SETTINGS_FILE"
run_cli init-settings --force "$SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"

echo "[INFO] creating setup URI from settings"
SETUP_URI="$(
@@ -84,7 +82,7 @@ CAT_OUTPUT="$WORK_DIR/cat-output.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" cat "$REMOTE_PATH" > "$CAT_OUTPUT"

CAT_OUTPUT_CLEAN="$WORK_DIR/cat-output-clean.txt"
grep -v '^\[CLIWatchAdapter\] File watching is not enabled in CLI version$' "$CAT_OUTPUT" > "$CAT_OUTPUT_CLEAN" || true
cli_test_sanitise_cat_stdout < "$CAT_OUTPUT" > "$CAT_OUTPUT_CLEAN"

if cmp -s "$SRC_FILE" "$CAT_OUTPUT_CLEAN"; then
    echo "[PASS] setup/put/cat roundtrip matched"
@@ -175,48 +173,52 @@ echo "[INFO] info $REMOTE_PATH"
INFO_OUTPUT="$WORK_DIR/info-output.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" info "$REMOTE_PATH" > "$INFO_OUTPUT"

# Check required label lines
for label in "ID:" "Revision:" "Conflicts:" "Filename:" "Path:" "Size:" "Chunks:"; do
    if ! grep -q "^$label" "$INFO_OUTPUT"; then
        echo "[FAIL] info output missing label: $label" >&2
        cat "$INFO_OUTPUT" >&2
        exit 1
    fi
done

# Path value must match
INFO_PATH="$(grep '^Path:' "$INFO_OUTPUT" | sed 's/^Path:[[:space:]]*//')"
if [[ "$INFO_PATH" != "$REMOTE_PATH" ]]; then
    echo "[FAIL] info Path mismatch: $INFO_PATH" >&2
    exit 1
fi

# Filename must be the basename
INFO_FILENAME="$(grep '^Filename:' "$INFO_OUTPUT" | sed 's/^Filename:[[:space:]]*//')"
EXPECTED_FILENAME="$(basename "$REMOTE_PATH")"
if [[ "$INFO_FILENAME" != "$EXPECTED_FILENAME" ]]; then
    echo "[FAIL] info Filename mismatch: $INFO_FILENAME != $EXPECTED_FILENAME" >&2
    exit 1
fi
set +e
INFO_JSON_CHECK="$(
INFO_OUTPUT="$INFO_OUTPUT" REMOTE_PATH="$REMOTE_PATH" EXPECTED_FILENAME="$EXPECTED_FILENAME" node - <<'NODE'
const fs = require("node:fs");

# Size must be numeric
INFO_SIZE="$(grep '^Size:' "$INFO_OUTPUT" | sed 's/^Size:[[:space:]]*//')"
if [[ ! "$INFO_SIZE" =~ ^[0-9]+$ ]]; then
    echo "[FAIL] info Size is not numeric: $INFO_SIZE" >&2
    exit 1
fi
const content = fs.readFileSync(process.env.INFO_OUTPUT, "utf-8");
let data;
try {
    data = JSON.parse(content);
} catch (ex) {
    console.error("invalid-json");
    process.exit(1);
}

# Chunks count must be numeric and ≥1
INFO_CHUNKS="$(grep '^Chunks:' "$INFO_OUTPUT" | sed 's/^Chunks:[[:space:]]*//')"
if [[ ! "$INFO_CHUNKS" =~ ^[0-9]+$ ]] || [[ "$INFO_CHUNKS" -lt 1 ]]; then
    echo "[FAIL] info Chunks is not a positive integer: $INFO_CHUNKS" >&2
    exit 1
fi

# Conflicts should be N/A (no live CouchDB)
INFO_CONFLICTS="$(grep '^Conflicts:' "$INFO_OUTPUT" | sed 's/^Conflicts:[[:space:]]*//')"
if [[ "$INFO_CONFLICTS" != "N/A" ]]; then
    echo "[FAIL] info Conflicts expected N/A, got: $INFO_CONFLICTS" >&2
if (!data || typeof data !== "object") {
    console.error("invalid-payload");
    process.exit(1);
}
if (data.path !== process.env.REMOTE_PATH) {
    console.error(`path-mismatch:${String(data.path)}`);
    process.exit(1);
}
if (data.filename !== process.env.EXPECTED_FILENAME) {
    console.error(`filename-mismatch:${String(data.filename)}`);
    process.exit(1);
}
if (!Number.isInteger(data.size) || data.size < 0) {
    console.error(`size-invalid:${String(data.size)}`);
    process.exit(1);
}
if (!Number.isInteger(data.chunks) || data.chunks < 1) {
    console.error(`chunks-invalid:${String(data.chunks)}`);
    process.exit(1);
}
if (data.conflicts !== "N/A") {
    console.error(`conflicts-invalid:${String(data.conflicts)}`);
    process.exit(1);
}
NODE
)"
INFO_JSON_EXIT=$?
set -e
if [[ "$INFO_JSON_EXIT" -ne 0 ]]; then
    echo "[FAIL] info JSON output validation failed: $INFO_JSON_CHECK" >&2
    cat "$INFO_OUTPUT" >&2
    exit 1
fi

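The replacement validation above pipes the `info` output through an embedded Node script that receives its expectations via environment variables and a quoted heredoc. A minimal self-contained sketch of that pattern (the payload is a fabricated example, not real `info` output):

```shell
# Env-var + quoted-heredoc Node validation, as used in the tests above.
# The quoted 'NODE' delimiter prevents shell expansion inside the script.
PAYLOAD='{"path":"test/a.md","filename":"a.md","size":12,"chunks":1,"conflicts":"N/A"}'
RESULT="$(
    PAYLOAD="$PAYLOAD" EXPECTED_PATH="test/a.md" node - <<'NODE'
const data = JSON.parse(process.env.PAYLOAD);
if (data.path !== process.env.EXPECTED_PATH) process.exit(1);
if (!Number.isInteger(data.chunks) || data.chunks < 1) process.exit(1);
if (data.conflicts !== "N/A") process.exit(1);
process.stdout.write("valid");
NODE
)"
echo "$RESULT"
```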
@@ -292,8 +294,30 @@ echo "[INFO] info $REV_PATH (past revisions)"
REV_INFO_OUTPUT="$WORK_DIR/rev-info-output.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" info "$REV_PATH" > "$REV_INFO_OUTPUT"

PAST_REV="$(grep '^ rev: ' "$REV_INFO_OUTPUT" | head -n 1 | sed 's/^ rev: //')"
if [[ -z "$PAST_REV" ]]; then
set +e
PAST_REV="$(
REV_INFO_OUTPUT="$REV_INFO_OUTPUT" node - <<'NODE'
const fs = require("node:fs");

const content = fs.readFileSync(process.env.REV_INFO_OUTPUT, "utf-8");
let data;
try {
    data = JSON.parse(content);
} catch {
    process.exit(1);
}

const revisions = Array.isArray(data?.revisions) ? data.revisions : [];
const revision = revisions.find((rev) => typeof rev === "string" && rev !== "N/A");
if (!revision) {
    process.exit(1);
}
process.stdout.write(revision);
NODE
)"
PAST_REV_EXIT=$?
set -e
if [[ "$PAST_REV_EXIT" -ne 0 ]] || [[ -z "$PAST_REV" ]]; then
    echo "[FAIL] info output did not include any past revision" >&2
    cat "$REV_INFO_OUTPUT" >&2
    exit 1

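Both validation hunks above rely on a temporary `set +e` window to capture the Node script's exit status without aborting a script running under `set -euo pipefail`. The pattern in isolation (the `grep` here is just a stand-in for the validation command):

```shell
# Capture a failing command's exit status under `set -euo pipefail`.
set -euo pipefail
set +e
OUT="$(printf '%s' 'payload' | grep -c 'missing')"   # grep prints 0 and exits 1 on no match
STATUS=$?                                            # exit status of the substitution
set -e
echo "status=$STATUS out=$OUT"
```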
@@ -1,39 +1,66 @@
#!/usr/bin/env bash
## TODO: test this script. I would love to go to my bed today (3a.m.) However, I am so excited about the new CLI that I want to at least get this skeleton in place. Delightful days!
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info

CLI_CMD=(npm run cli --)
RUN_BUILD="${RUN_BUILD:-1}"
COUCHDB_URI="${COUCHDB_URI:-}"
COUCHDB_USER="${COUCHDB_USER:-}"
COUCHDB_PASSWORD="${COUCHDB_PASSWORD:-}"
COUCHDB_DBNAME_BASE="${COUCHDB_DBNAME:-livesync-cli-e2e}"
TEST_ENV_FILE="${TEST_ENV_FILE:-$CLI_DIR/.test.env}"
cli_test_init_cli_cmd

if [[ ! -f "$TEST_ENV_FILE" ]]; then
    echo "[ERROR] test env file not found: $TEST_ENV_FILE" >&2
    exit 1
fi

set -a
source "$TEST_ENV_FILE"
set +a


WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-two-db-test.XXXXXX")"

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."
    npm run build
fi
DB_SUFFIX="$(date +%s)-$RANDOM"

COUCHDB_URI="${hostname%/}"
COUCHDB_DBNAME="${dbname}-${DB_SUFFIX}"
COUCHDB_USER="${username:-}"
COUCHDB_PASSWORD="${password:-}"

if [[ -z "$COUCHDB_URI" || -z "$COUCHDB_USER" || -z "$COUCHDB_PASSWORD" ]]; then
    echo "[ERROR] COUCHDB_URI, COUCHDB_USER, COUCHDB_PASSWORD are required" >&2
    exit 1
fi

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-two-db-test.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."
    npm run build
fi
cleanup() {
    local exit_code=$?
    cli_test_stop_couchdb

run_cli() {
    "${CLI_CMD[@]}" "$@"
    rm -rf "$WORK_DIR"

    # Note: we do not attempt to delete the test database, as it may cause issues if the test failed in a way that leaves the database in an inconsistent state. The test database is named with a unique suffix, so it should not interfere with other tests.
    echo "[INFO] test completed with exit code $exit_code. Test database '$COUCHDB_DBNAME' is not deleted for debugging purposes."
    exit "$exit_code"
}
trap cleanup EXIT


start_remote() {
    cli_test_start_couchdb "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME"
}

DB_SUFFIX="$(date +%s)-$RANDOM"
COUCHDB_DBNAME="${COUCHDB_DBNAME_BASE}-${DB_SUFFIX}"


echo "[INFO] using CouchDB database: $COUCHDB_DBNAME"
start_remote

VAULT_A="$WORK_DIR/vault-a"
VAULT_B="$WORK_DIR/vault-b"
@@ -41,31 +68,12 @@ SETTINGS_A="$WORK_DIR/a-settings.json"
SETTINGS_B="$WORK_DIR/b-settings.json"
mkdir -p "$VAULT_A" "$VAULT_B"

run_cli init-settings --force "$SETTINGS_A" >/dev/null
run_cli init-settings --force "$SETTINGS_B" >/dev/null
cli_test_init_settings_file "$SETTINGS_A"
cli_test_init_settings_file "$SETTINGS_B"

apply_settings() {
    local settings_file="$1"
    SETTINGS_FILE="$settings_file" \
    COUCHDB_URI="$COUCHDB_URI" \
    COUCHDB_USER="$COUCHDB_USER" \
    COUCHDB_PASSWORD="$COUCHDB_PASSWORD" \
    COUCHDB_DBNAME="$COUCHDB_DBNAME" \
    node <<'NODE'
const fs = require("node:fs");
const settingsPath = process.env.SETTINGS_FILE;
const data = JSON.parse(fs.readFileSync(settingsPath, "utf-8"));
data.couchDB_URI = process.env.COUCHDB_URI;
data.couchDB_USER = process.env.COUCHDB_USER;
data.couchDB_PASSWORD = process.env.COUCHDB_PASSWORD;
data.couchDB_DBNAME = process.env.COUCHDB_DBNAME;
data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
data.isConfigured = true;
fs.writeFileSync(settingsPath, JSON.stringify(data, null, 2), "utf-8");
NODE
    cli_test_apply_couchdb_settings "$settings_file" "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME" 1
}

apply_settings "$SETTINGS_A"
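The script above isolates each run's remote by deriving a unique database name from the epoch time plus `$RANDOM`; since the database is deliberately kept after a run, this also keeps leftovers from colliding. The suffix pattern in isolation (base name taken from `COUCHDB_DBNAME_BASE` above):

```shell
# Unique per-run database name: epoch seconds plus a random component.
DB_SUFFIX="$(date +%s)-$RANDOM"
COUCHDB_DBNAME="livesync-cli-e2e-${DB_SUFFIX}"
echo "$COUCHDB_DBNAME"
```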
@@ -95,24 +103,12 @@ cat_b() {
    run_cli_b cat "$1"
}

assert_equal() {
    local expected="$1"
    local actual="$2"
    local message="$3"
    if [[ "$expected" != "$actual" ]]; then
        echo "[FAIL] $message" >&2
        echo "expected: $expected" >&2
        echo "actual: $actual" >&2
        exit 1
    fi
}

echo "[INFO] case1: A creates file, B can read after sync"
printf 'from-a\n' | run_cli_a put shared/from-a.txt >/dev/null
sync_a
sync_b
VALUE_FROM_B="$(cat_b shared/from-a.txt)"
assert_equal "from-a" "$VALUE_FROM_B" "B could not read file created on A"
cli_test_assert_equal "from-a" "$VALUE_FROM_B" "B could not read file created on A"
echo "[PASS] case1 passed"

echo "[INFO] case2: B creates file, A can read after sync"
@@ -120,7 +116,7 @@ printf 'from-b\n' | run_cli_b put shared/from-b.txt >/dev/null
sync_b
sync_a
VALUE_FROM_A="$(cat_a shared/from-b.txt)"
assert_equal "from-b" "$VALUE_FROM_A" "A could not read file created on B"
cli_test_assert_equal "from-b" "$VALUE_FROM_A" "A could not read file created on B"
echo "[PASS] case2 passed"

echo "[INFO] case3: concurrent edits create conflict"
@@ -131,15 +127,25 @@ sync_b
printf 'edit-from-a\n' | run_cli_a put shared/conflicted.txt >/dev/null
printf 'edit-from-b\n' | run_cli_b put shared/conflicted.txt >/dev/null

sync_a
sync_b

INFO_A="$WORK_DIR/info-a.txt"
INFO_B="$WORK_DIR/info-b.txt"
run_cli_a info shared/conflicted.txt > "$INFO_A"
run_cli_b info shared/conflicted.txt > "$INFO_B"
CONFLICT_DETECTED=0
for side in a b; do
    if [[ "$side" == "a" ]]; then
        sync_a
    else
        sync_b
    fi

if grep -q '^Conflicts: N/A$' "$INFO_A" && grep -q '^Conflicts: N/A$' "$INFO_B"; then
    run_cli_a info shared/conflicted.txt > "$INFO_A"
    run_cli_b info shared/conflicted.txt > "$INFO_B"
    if ! cli_test_json_field_is_na "$INFO_A" conflicts || ! cli_test_json_field_is_na "$INFO_B" conflicts; then
        CONFLICT_DETECTED=1
        break
    fi
done

if [[ "$CONFLICT_DETECTED" != "1" ]]; then
    echo "[FAIL] expected conflict after concurrent edits, but both sides show N/A" >&2
    echo "--- A info ---" >&2
    cat "$INFO_A" >&2
@@ -150,21 +156,60 @@ fi
echo "[PASS] case3 conflict detected"

echo "[INFO] case4: resolve on A, sync, and verify B has no conflict"
KEEP_REV="$(sed -n 's/^Revision:[[:space:]]*//p' "$INFO_A" | head -n 1)"
INFO_A_AFTER="$WORK_DIR/info-a-after-resolve.txt"
INFO_B_AFTER="$WORK_DIR/info-b-after-resolve.txt"

# Ensure A sees the conflict before resolving; otherwise resolve may be a no-op.
for _ in 1 2 3 4 5; do
    run_cli_a info shared/conflicted.txt > "$INFO_A_AFTER"
    if ! cli_test_json_field_is_na "$INFO_A_AFTER" conflicts; then
        break
    fi
    sync_b
    sync_a
done

run_cli_a info shared/conflicted.txt > "$INFO_A_AFTER"
if cli_test_json_field_is_na "$INFO_A_AFTER" conflicts; then
    echo "[FAIL] A does not see conflict, cannot resolve from A only" >&2
    cat "$INFO_A_AFTER" >&2
    exit 1
fi

KEEP_REV="$(cli_test_json_string_field_from_file "$INFO_A_AFTER" revision)"
if [[ -z "$KEEP_REV" ]]; then
    echo "[FAIL] could not read Revision from A info output" >&2
    cat "$INFO_A" >&2
    echo "[FAIL] could not read revision from A info output" >&2
    cat "$INFO_A_AFTER" >&2
    exit 1
fi

run_cli_a resolve shared/conflicted.txt "$KEEP_REV" >/dev/null
sync_a
sync_b

INFO_B_AFTER="$WORK_DIR/info-b-after-resolve.txt"
run_cli_b info shared/conflicted.txt > "$INFO_B_AFTER"
if ! grep -q '^Conflicts: N/A$' "$INFO_B_AFTER"; then
    echo "[FAIL] B still has conflicts after resolving on A and syncing" >&2
RESOLVE_PROPAGATED=0
for _ in 1 2 3 4 5 6; do
    sync_a
    sync_b
    run_cli_a info shared/conflicted.txt > "$INFO_A_AFTER"
    run_cli_b info shared/conflicted.txt > "$INFO_B_AFTER"
    if cli_test_json_field_is_na "$INFO_A_AFTER" conflicts && cli_test_json_field_is_na "$INFO_B_AFTER" conflicts; then
        RESOLVE_PROPAGATED=1
        break
    fi

    # Retry resolve from A only when conflict remains due to eventual consistency.
    if ! cli_test_json_field_is_na "$INFO_A_AFTER" conflicts; then
        KEEP_REV_A="$(cli_test_json_string_field_from_file "$INFO_A_AFTER" revision)"
        if [[ -n "$KEEP_REV_A" ]]; then
            run_cli_a resolve shared/conflicted.txt "$KEEP_REV_A" >/dev/null || true
        fi
    fi
done

if [[ "$RESOLVE_PROPAGATED" != "1" ]]; then
    echo "[FAIL] conflicts should be resolved on both A and B" >&2
    echo "--- A info after resolve ---" >&2
    cat "$INFO_A_AFTER" >&2
    echo "--- B info after resolve ---" >&2
    cat "$INFO_B_AFTER" >&2
    exit 1
fi

@@ -89,7 +89,7 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
        (this.services.API as BrowserAPIService<ServiceContext>).getSystemVaultName.setHandler(
            () => "p2p-livesync-web-peer"
        );
        this.services.API.addLog.setHandler(Logger);
        // this.services.API.addLog.setHandler(Logger);
        const repStore = SimpleStoreIDBv2.open<any>("p2p-livesync-web-peer");
        this._simpleStore = repStore;
        let _settings = { ...P2P_DEFAULT_SETTINGS, additionalSuffixOfDatabaseName: "" } as ObsidianLiveSyncSettings;

Submodule src/lib updated: 35df9a1192...f77404c926

updates.md
@@ -3,6 +3,12 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)

The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 got a lot of updates, thankfully, compatibility is kept and we do not need breaking changes! In other words, when get enough stabled. The next version will be v1.0.0. Even though it my hope.

## -- unreleased --

### New features

- `mirror` command has been added to the CLI. This command is intended to mirror the storage to the local database.

## 0.25.52-patched-1

12th March, 2026
@@ -19,10 +25,10 @@ The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsid
- Separated `ObsidianLiveSyncPlugin` into `ObsidianLiveSyncPlugin` and `LiveSyncBaseCore`.
- Now `LiveSyncCore` indicates the type specified version of `LiveSyncBaseCore`.
- Referencing `plugin.xxx` has been rewritten to referencing the corresponding service or `core.xxx`.
- Offline change scanner and the local database preparation has been separated.
- Set default priority for processFileEvent and processSynchroniseResult for the place for adding hooks.
- Offline change scanner and the local database preparation have been separated.
- Set default priority for processFileEvent and processSynchroniseResult for the place to add hooks.
- ControlService now provides the readiness for processing operations.
- DatabaseService now able to modify database opening options on derived classes.
- DatabaseService is now able to modify database opening options on derived classes.
- Now `useOfflineScanner`, `useCheckRemoteSize`, and `useRedFlagFeatures` are set from `main.ts`, instead of `LiveSyncBaseCore`.

### Internal API changes
@@ -32,9 +38,9 @@ The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsid

### CLI

We have previously developed FileSystem LiveSync and various other components in a separate repository, but updates have been significantly delayed and we have been plagued by compatibility issues. Now, a CLI tool using the same core logic is emerging. This does not directly manipulate the file system, but it offers a more convenient way of working and can also communicate with Object Storage. We can also resolve conflicts. Please refer to the code in `src/apps/cli` for the [self-hosted-livesync-cli](./src/apps/cli/README.md) for more details.
We have previously developed FileSystem LiveSync and various other components in a separate repository, but updates have been significantly delayed, and we have been plagued by compatibility issues. Now, a CLI tool using the same core logic is emerging. This does not directly manipulate the file system, but it offers a more convenient way of working and can also communicate with Object Storage. We can also resolve conflicts. Please refer to the code in `src/apps/cli` for the [self-hosted-livesync-cli](./src/apps/cli/README.md) for more details.

- Add `self-hosted-livesync-cli` to `src/apps/cli` as a headless, and a dedicated version.
- Add `self-hosted-livesync-cli` to `src/apps/cli` as a headless and dedicated version.
- Add more tests.
- Object Storage support has also been confirmed (and fixed) in CLI.
- Yes, we have finally managed to 'get one file'.
Block a user