Compare commits

...

33 Commits

Author SHA1 Message Date
vorotamoroz
c45aca4794 fixed: fixed subrepo pointer
I thought I’d rebased it, but it turns out everything had been merged.
2026-05-13 13:45:09 +01:00
vorotamoroz
e2c54aaf43 Update lib to fix P2P problems 2026-05-13 13:31:11 +01:00
vorotamoroz
8dda24a689 Merge pull request #882 from vrtmrz/p2p-rpc
feat: use new p2p-rpc wrapper
2026-05-13 19:24:26 +09:00
vorotamoroz
fbbb63906a Merge branch 'main' into p2p-rpc 2026-05-13 19:22:03 +09:00
vorotamoroz
1e66a7f144 Merge pull request #894 from vrtmrz/fix_unexpected_error_on_startup
fixed: fixed unexpected error during startup
2026-05-13 19:16:18 +09:00
vorotamoroz
df79d81475 fixed: fixed unexpected error during startup 2026-05-13 10:14:47 +00:00
vorotamoroz
ad71355859 Merge pull request #893 from brian-spackman/fix-fractional-mtime-on-linux
fix: truncate sub-millisecond CLI mtimes to prevent mobile crash
2026-05-13 19:12:56 +09:00
vorotamoroz
95dc079fad Merge pull request #843 from andrewleech/daemon-sync
cli: implement continuous sync daemon mode
2026-05-13 18:53:59 +09:00
Andrew Leech
67996f6d0a cli: fix stale stat.size in NodeVaultAdapter causing corrupted file errors
chokidar stats are captured at poll time and may not reflect the file's
final byte length by the time vault.read() is called. The downstream
integrity check compares stat.size to content length; a mismatch causes
other LiveSync clients to reject the file as corrupted.

Fix by updating file.stat.size from the actual content in read() and
readBinary().

Co-authored-by: Joysimple <Joysimple@users.noreply.github.com>
2026-05-13 16:56:08 +10:00
Andrew Leech
4ab2e41d18 cli daemon: set disableCheckingConfigMismatch for headless operation
The config mismatch dialog's defaultAction is "Dismiss" which blocks
replication. Since the daemon cannot resolve mismatches interactively,
skip the check entirely and accept the remote configuration as-is.
2026-05-13 11:21:06 +10:00
Andrew Leech
c0ad8ee15a cli: add configurable ignore rules and deployment artifacts
IgnoreRules (src/apps/cli/serviceModules/IgnoreRules.ts):
- Reads .livesync/ignore for user-defined glob patterns
- Applies gitignore matchBase semantics: patterns without / get **/ prefix,
  patterns ending with / get ** appended for directory contents
- Supports `import: .gitignore` directive to merge gitignore patterns
- Rejects negation patterns with a warning (not fully supportable)
- Integrated into both daemon and mirror commands via isTargetFile handler

Wiring:
- IgnoreRules loaded before LiveSyncBaseCore construction so beginWatch()
  receives rules when it fires during onLoad/onFirstInitialise
- Passed through initialiseServiceModulesCLI -> StorageEventManagerCLI ->
  CLIStorageEventManagerAdapter -> CLIWatchAdapter

Deployment:
- src/apps/cli/deploy/livesync-cli.service - systemd unit template
- src/apps/cli/deploy/install.sh - user/system install script

Testing:
- src/apps/cli/test/test-daemon-linux.sh - e2e tests for ignore rules
- src/apps/cli/serviceModules/IgnoreRules.unit.spec.ts - 15 unit tests
- src/apps/cli/commands/daemonCommand.unit.spec.ts - 7 unit tests
2026-05-13 11:21:06 +10:00
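The normalisation described above can be sketched roughly as follows (an illustration only, not the actual `IgnoreRules.ts` source; it assumes `micromatch`, which this change set adds as a dependency, is used for matching):

```ts
// Hedged sketch of the matchBase-style normalisation described in the commit above.
// Not the shipped IgnoreRules implementation; helper names are illustrative.
import micromatch from "micromatch";

function normalisePattern(raw: string): string {
    let pattern = raw.trim();
    // gitignore matchBase: a pattern with no slash matches at any depth.
    if (!pattern.includes("/")) pattern = `**/${pattern}`;
    // A trailing slash means "this directory and everything inside it".
    if (pattern.endsWith("/")) pattern = `${pattern}**`;
    return pattern;
}

function shouldIgnore(relativePath: string, rawPatterns: string[]): boolean {
    return micromatch.isMatch(relativePath, rawPatterns.map(normalisePattern));
}

shouldIgnore("notes/scratch.tmp", ["*.tmp"]); // true: "*.tmp" becomes "**/*.tmp"
shouldIgnore("build/out/app.js", ["build/"]); // true: "build/" becomes "build/**"
```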
Andrew Leech
e6ae516493 cli: implement local→CouchDB file watching via chokidar
- Add chokidar ^4.0.0 as dependency (root package.json, runtime-package.json)
- Mark chokidar as external in vite.config.ts (not bundled, loaded at runtime)
- Implement CLIWatchAdapter.beginWatch() with chokidar:
  - ignoreInitial: true (startup files handled by mirror scan)
  - awaitWriteFinish to prevent partial-write events
  - Excludes dotfiles and .livesync/ directory at watcher level
  - Maps add/change/unlink/addDir/unlinkDir to IStorageEventWatchHandlers
  - Fatal error handler: logs clearly and releases watcher resources
- Add close() to CLIWatchAdapter, StorageEventManagerCLI for clean shutdown
- Register onUnload hook in CLIServiceModules to close watcher on shutdown
2026-05-13 11:21:06 +10:00
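In rough shape (a hedged sketch, not the actual `CLIWatchAdapter` source; handler names and thresholds here are placeholders):

```ts
// Sketch of the chokidar wiring described in the commit above; illustrative only.
import { watch, type FSWatcher } from "chokidar";
import path from "node:path";

interface WatchHandlers {
    onCreate(p: string): void;
    onModify(p: string): void;
    onDelete(p: string): void;
}

function beginWatch(vaultPath: string, handlers: WatchHandlers): FSWatcher {
    const watcher = watch(vaultPath, {
        ignoreInitial: true, // startup files are handled by the mirror scan instead
        awaitWriteFinish: { stabilityThreshold: 2000, pollInterval: 100 }, // skip partial writes
        // Exclude dotfiles (and therefore .livesync/) at the watcher level.
        ignored: (target) =>
            path.relative(vaultPath, target).split(path.sep).some((part) => part.startsWith(".")),
    });
    watcher.on("add", handlers.onCreate);
    watcher.on("change", handlers.onModify);
    watcher.on("unlink", handlers.onDelete);
    watcher.on("error", (err) => console.error("[Watcher] fatal error:", err));
    return watcher; // await watcher.close() on shutdown for a clean release
}
```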
Andrew Leech
a4d5ef4620 cli: implement daemon startup sequence and CouchDB→local sync
- Add daemon command to help text and --interval/-i flag for polling mode
- Capture original sync settings before suspendAllSync() clobbers them
- Implement daemon startup: mirror scan → restore settings → applySettings()
  which triggers the full suspend/resume lifecycle and starts the _changes feed
- Guard processSynchroniseResult no-op to non-daemon commands so default
  handler writes incoming CouchDB changes to the local filesystem
- Polling mode: restore settings + clearInterval-safe try/catch error handling
- Warn when both liveSync and syncOnStart are false after restore (no-op config)
- Fix: only block indefinitely if daemon startup succeeded
2026-05-13 11:21:06 +10:00
Brian Spackman
3f7bb047ac fix: floor sub-millisecond CLI mtimes to prevent mobile crash
On Linux, fs.Stats.mtimeMs and ctimeMs return floats with sub-millisecond
precision derived from the kernel's nanosecond filesystem mtime. Stored
raw, this produces document timestamps like 1778511180024.462 in CouchDB
rather than integer milliseconds.

Mobile clients running LiveSync 0.25.60 have been observed to crash when
processing change-feed updates carrying non-integer millisecond timestamps
from CLI-written documents. Desktop and mobile GUI plugins write integer
milliseconds, so the crash only manifests when the headless CLI on Linux
is the source. Whether the issue was introduced in 0.25.60 or had been
latent in earlier versions hasn't been investigated; 0.25.60 is the
version where the crash was confirmed and the fix verified.

Floor the values at every stat-read site (six across three adapters and
one command) so CLI-written documents carry integer-millisecond
timestamps consistent with the rest of the mesh.
2026-05-12 18:00:25 -06:00
vorotamoroz
b6b153c0de Merge pull request #887 from vrtmrz/add_ignore_to_eslint
chore: Change eslint config to ignore _tools
2026-05-11 21:01:47 +09:00
vorotamoroz
eca6a6e0ba chore: Change eslint config to ignore _tools 2026-05-11 13:00:32 +01:00
vorotamoroz
ca43d96c46 Merge pull request #886 from vrtmrz/fix_prettier
Fix prettier config
2026-05-11 20:34:22 +09:00
vorotamoroz
112e3c8b1d Fix prettier config 2026-05-11 12:33:32 +01:00
vorotamoroz
d1eb105801 Merge pull request #872 from OriBoharon/make-cli-onboarding-easier
added documentation and a hook build script to make onboarding easier when trying to build the cli app
2026-05-11 18:43:00 +09:00
vorotamoroz
d5b93e89cd Change the default issue report label from 'bug' to 'uncategorised' 2026-05-11 17:55:31 +09:00
vorotamoroz
e96fe7cde1 Merge pull request #885 from vrtmrz/tidy_file
(chore) remove obsoleted file
2026-05-11 17:52:22 +09:00
vorotamoroz
68e0610f1d (chore) remove obsoleted file 2026-05-11 09:49:32 +01:00
vorotamoroz
a6be20695a feat: use new p2p-rpc wrapper 2026-05-11 03:49:35 +01:00
vorotamoroz
772b6ecf26 Merge pull request #871 from SeleiXi/feat/diff-navigation-buttons
feat: Add diff navigation buttons for Document History
2026-05-09 22:51:04 +09:00
SeleiXi
81dc7f604b feat: auto navigation to diff 2026-05-09 14:07:08 +08:00
vorotamoroz
a9c87fa52e - Add default test environment
- Fixed to use environment by APIs
- Make test parallel
2026-05-08 03:04:14 +00:00
vorotamoroz
e81f023943 Add default test env 2026-05-08 03:01:22 +00:00
vorotamoroz
2afe12ad2d fix pattern 2026-05-07 11:28:01 +01:00
vorotamoroz
4a9d6c1349 Add ci 2026-05-07 11:23:51 +01:00
vorotamoroz
279fc8876e feat(tests): enhance push/pull test with Docker integration and improved environment variable handling
style(test): format comment
2026-05-07 11:22:56 +01:00
vorotamoroz
cc3d30dbcf feat(tests): add Deno-based tests for checking CLI functionality in the same-codebase between platforms. 2026-05-07 11:06:12 +01:00
bori
7a4b76a550 added documentation and a hook build script to make onboarding easier 2026-05-02 18:51:07 +03:00
SeleiXi
f9294446ba feat: add diff block navigation to Document History modal
Add prev/next buttons to jump between diff blocks in the
Document History view. Includes position indicator and
auto-scroll with visual focus highlighting.
2026-05-02 22:18:43 +08:00
61 changed files with 5271 additions and 345 deletions

View File

@@ -2,7 +2,7 @@
name: Issue report
about: Create a report to help us improve
title: ''
labels: 'bug'
labels: 'uncategorised'
assignees: ''
---

114
.github/workflows/cli-deno-tests.yml vendored Normal file
View File

@@ -0,0 +1,114 @@
name: cli-deno-tests
on:
workflow_dispatch:
inputs:
test_task:
description: 'Deno test task to run'
type: choice
options:
- test
- test:local
- test:e2e-matrix
- test:p2p-sync
default: test
permissions:
contents: read
jobs:
prepare:
runs-on: ubuntu-latest
outputs:
task_matrix: ${{ steps.select.outputs.task_matrix }}
steps:
- name: Select task matrix
id: select
shell: bash
run: |
set -euo pipefail
SELECTED_TASK="${{ github.event_name == 'workflow_dispatch' && inputs.test_task || 'test' }}"
echo "[INFO] Selected task set: $SELECTED_TASK"
case "$SELECTED_TASK" in
test)
TASK_MATRIX='["test:setup-put-cat","test:mirror","test:push-pull","test:sync-two-local","test:sync-locked-remote","test:p2p-host","test:p2p-peers","test:p2p-sync","test:p2p-three-nodes","test:p2p-upload-download","test:e2e-couchdb","test:e2e-matrix"]'
;;
test:local)
TASK_MATRIX='["test:setup-put-cat","test:mirror"]'
;;
test:e2e-matrix)
TASK_MATRIX='["test:e2e-matrix"]'
;;
test:p2p-sync)
TASK_MATRIX='["test:p2p-sync"]'
;;
*)
echo "[ERROR] Unknown task set: $SELECTED_TASK" >&2
exit 1
;;
esac
echo "task_matrix=$TASK_MATRIX" >> "$GITHUB_OUTPUT"
test:
needs: prepare
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
task: ${{ fromJson(needs.prepare.outputs.task_matrix) }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '24.x'
cache: 'npm'
- name: Setup Deno
uses: denoland/setup-deno@v2
with:
deno-version: v2.x
- name: Install dependencies
run: npm ci
- name: Build CLI
working-directory: src/apps/cli
run: npm run build
- name: Create .test.env
working-directory: src/apps/cli
run: |
cat <<EOF > .test.env
hostname=http://127.0.0.1:5989/
dbname=livesync-test-db-ci
username=admin
password=testpassword
minioEndpoint=http://127.0.0.1:9000
accessKey=minioadmin
secretKey=minioadmin
bucketName=livesync-test-bucket-ci
EOF
- name: Run Deno tests
working-directory: src/apps/cli/testdeno
env:
LIVESYNC_DOCKER_MODE: native
LIVESYNC_CLI_RETRY: 3
run: |
TASK="${{ matrix.task }}"
echo "[INFO] Running Deno task: $TASK"
deno task "$TASK"
- name: Stop leftover containers
if: always()
run: |
docker stop couchdb-test minio-test relay-test >/dev/null 2>&1 || true
docker rm couchdb-test minio-test relay-test >/dev/null 2>&1 || true

View File

@@ -13,7 +13,7 @@ const prettierConfig = {
tabWidth: 4,
printWidth: 120,
semi: true,
endOfLine: "cr",
endOfLine: "lf",
...localPrettierConfig,
};

View File

@@ -63,6 +63,9 @@ npm test # Run vitest tests (requires Docker services)
### Environment Setup
- Clone with submodules: `git clone --recurse-submodules <repository-url>`
- If you already cloned without them, run: `git submodule update --init --recursive`
- The shared common library is provided by the `src/lib` submodule, and builds will fail if it is missing
- Create `.env` file with `PATHS_TEST_INSTALL` pointing to test vault plug-in directories (`:` separated on Unix, `;` on Windows)
- Development builds auto-copy to these paths on build
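For illustration, a `.env` of this shape satisfies the bullet above (the vault locations and plug-in folder name are placeholders; adjust to your own test vaults):

```
PATHS_TEST_INSTALL=/home/you/vault-a/.obsidian/plugins/obsidian-livesync:/home/you/vault-b/.obsidian/plugins/obsidian-livesync
```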

View File

@@ -38,6 +38,7 @@ export default [
"modules/octagonal-wheels/rollup.config.js",
"modules/octagonal-wheels/dist/**/*",
"src/lib/test",
"src/lib/_tools",
"src/lib/src/cli",
"**/main.js",
"src/apps/**/*",

61
package-lock.json generated
View File

@@ -16,11 +16,13 @@
"@smithy/protocol-http": "^5.3.9",
"@smithy/querystring-builder": "^4.2.9",
"@trystero-p2p/nostr": "^0.23.0",
"chokidar": "^4.0.0",
"commander": "^14.0.3",
"diff-match-patch": "^1.0.5",
"fflate": "^0.8.2",
"idb": "^8.0.3",
"markdown-it": "^14.1.1",
"micromatch": "^4.0.0",
"minimatch": "^10.2.2",
"octagonal-wheels": "^0.1.45",
"pouchdb-adapter-leveldb": "^9.0.0",
@@ -38,6 +40,7 @@
"@types/deno": "^2.5.0",
"@types/diff-match-patch": "^1.0.36",
"@types/markdown-it": "^14.1.2",
"@types/micromatch": "^4.0.10",
"@types/node": "^24.10.13",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
@@ -984,7 +987,6 @@
"integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@babel/code-frame": "^7.29.0",
"@babel/generator": "^7.29.0",
@@ -2378,7 +2380,8 @@
"resolved": "https://registry.npmjs.org/@marijn/find-cluster-break/-/find-cluster-break-1.0.2.tgz",
"integrity": "sha512-l0h88YhZFyKdXIFNfSWpyjStDjGHwZ/U7iobcK1cQQD8sejsONdQtTVU+1wVN1PBw40PiiHB1vA5S7VTfQiP9g==",
"dev": true,
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/@minhducsun2002/leb128": {
"version": "1.0.0",
@@ -4224,7 +4227,6 @@
"integrity": "sha512-ou/d51QSdTyN26D7h6dSpusAKaZkAiGM55/AKYi+9AGZw7q85hElbjK3kEyzXHhLSnRISHOYzVge6x0jRZ7DXA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@sveltejs/vite-plugin-svelte-inspector": "^5.0.0",
"deepmerge": "^4.3.1",
@@ -4298,6 +4300,13 @@
"@babel/types": "^7.0.0"
}
},
"node_modules/@types/braces": {
"version": "3.0.5",
"resolved": "https://registry.npmjs.org/@types/braces/-/braces-3.0.5.tgz",
"integrity": "sha512-SQFof9H+LXeWNz8wDe7oN5zu7ket0qwMu5vZubW4GCJ8Kkeh6nBWUz87+KTz/G3Kqsrp0j/W253XJb3KMEeg3w==",
"dev": true,
"license": "MIT"
},
"node_modules/@types/chai": {
"version": "5.2.3",
"resolved": "https://registry.npmjs.org/@types/chai/-/chai-5.2.3.tgz",
@@ -4417,6 +4426,16 @@
"dev": true,
"license": "MIT"
},
"node_modules/@types/micromatch": {
"version": "4.0.10",
"resolved": "https://registry.npmjs.org/@types/micromatch/-/micromatch-4.0.10.tgz",
"integrity": "sha512-5jOhFDElqr4DKTrTEbnW8DZ4Hz5LRUEmyrGpCMrD/NphYv3nUnaF08xmSLx1rGGnyEs/kFnhiw6dCgcDqMr5PQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@types/braces": "*"
}
},
"node_modules/@types/minimatch": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/@types/minimatch/-/minimatch-5.1.2.tgz",
@@ -4738,7 +4757,6 @@
"integrity": "sha512-klQbnPAAiGYFyI02+znpBRLyjL4/BrBd0nyWkdC0s/6xFLkXYQ8OoRrSkqacS1ddVxf/LDyODIKbQ5TgKAf/Fg==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@typescript-eslint/scope-manager": "8.56.1",
"@typescript-eslint/types": "8.56.1",
@@ -4943,7 +4961,6 @@
"integrity": "sha512-gjjrFC4+kPVK/fN9URDJWrssU5Gqh8Az8pKG/NSfQ2V+ky8b/y1BgBg0Ug13+hOGp5pzInonmGRPn7vOgSLgzA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@blazediff/core": "1.9.1",
"@vitest/mocker": "4.1.1",
@@ -4967,7 +4984,6 @@
"integrity": "sha512-dtVSBZZha2k/7P7EAXXrEAoxuIKl8Yv9f2Dk4GN/DGfmhf4DQvkvu+57okR2wq/gan1xppKjL/aBxK/kbYrbGw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@vitest/browser": "4.1.1",
"@vitest/mocker": "4.1.1",
@@ -5409,7 +5425,6 @@
"integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -6123,7 +6138,6 @@
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz",
"integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==",
"dev": true,
"license": "MIT",
"dependencies": {
"fill-range": "^7.1.1"
@@ -6152,7 +6166,6 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"baseline-browser-mapping": "^2.9.0",
"caniuse-lite": "^1.0.30001759",
@@ -6385,7 +6398,6 @@
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz",
"integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==",
"dev": true,
"license": "MIT",
"dependencies": {
"readdirp": "^4.0.1"
@@ -6648,7 +6660,8 @@
"resolved": "https://registry.npmjs.org/crelt/-/crelt-1.0.6.tgz",
"integrity": "sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g==",
"dev": true,
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/cross-spawn": {
"version": "7.0.6",
@@ -7441,7 +7454,6 @@
"dev": true,
"hasInstallScript": true,
"license": "MIT",
"peer": true,
"bin": {
"esbuild": "bin/esbuild"
},
@@ -7555,7 +7567,6 @@
"integrity": "sha512-XoMjdBOwe/esVgEvLmNsD3IRHkm7fbKIUGvrleloJXUZgDHig2IPWNniv+GwjyJXzuNqVjlr5+4yVUZjycJwfQ==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.8.0",
"@eslint-community/regexpp": "^4.12.1",
@@ -8255,7 +8266,6 @@
"version": "7.1.1",
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
"integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==",
"dev": true,
"license": "MIT",
"dependencies": {
"to-regex-range": "^5.0.1"
@@ -9358,7 +9368,6 @@
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
"integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=0.12.0"
@@ -9695,7 +9704,6 @@
"integrity": "sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"jiti": "lib/jiti-cli.mjs"
}
@@ -10409,7 +10417,6 @@
"version": "4.0.8",
"resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz",
"integrity": "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA==",
"dev": true,
"license": "MIT",
"dependencies": {
"braces": "^3.0.3",
@@ -11119,7 +11126,6 @@
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.2.tgz",
"integrity": "sha512-V7+vQEJ06Z+c5tSye8S+nHUfI51xoXIXjHQ99cQtKUkQqqO1kO/KCJUfZXuB47h/YBlDhah2H3hdUGXn8ie0oA==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=8.6"
@@ -11203,7 +11209,6 @@
"integrity": "sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==",
"dev": true,
"license": "Apache-2.0",
"peer": true,
"dependencies": {
"playwright-core": "1.58.2"
},
@@ -11270,7 +11275,6 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"nanoid": "^3.3.11",
"picocolors": "^1.1.1",
@@ -11296,7 +11300,6 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"lilconfig": "^3.1.1"
},
@@ -11943,7 +11946,6 @@
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz",
"integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 14.18.0"
@@ -12956,7 +12958,8 @@
"resolved": "https://registry.npmjs.org/style-mod/-/style-mod-4.1.3.tgz",
"integrity": "sha512-i/n8VsZydrugj3Iuzll8+x/00GH2vnYsk1eomD8QiRrSAeW6ItbCQDtfXCeJHd0iwiNagqjQkvpvREEPtW3IoQ==",
"dev": true,
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/sublevel-pouchdb": {
"version": "9.0.0",
@@ -13025,7 +13028,6 @@
"integrity": "sha512-0a/huwc8e2es+7KFi70esqsReRfRbrT8h1cJSY/+z1lF0yKM6TT+//HYu28Yxstr50H7ifaqZRDGd0KuKDxP7w==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@jridgewell/remapping": "^2.3.4",
"@jridgewell/sourcemap-codec": "^1.5.0",
@@ -13336,7 +13338,6 @@
"integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -13358,7 +13359,6 @@
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
"integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"is-number": "^7.0.0"
@@ -13455,7 +13455,6 @@
"integrity": "sha512-5C1sg4USs1lfG0GFb2RLXsdpXqBSEhAaA/0kPL01wxzpMqLILNxIxIOKiILz+cdg/pLnOUxFYOR5yhHU666wbw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"esbuild": "~0.27.0",
"get-tsconfig": "^4.7.5"
@@ -14086,7 +14085,6 @@
"integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
"dev": true,
"license": "Apache-2.0",
"peer": true,
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"
@@ -14236,7 +14234,6 @@
"integrity": "sha512-Bby3NOsna2jsjfLVOHKes8sGwgl4TT0E6vvpYgnAYDIF/tie7MRaFthmKuHx1NSXjiTueXH3do80FMQgvEktRg==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"esbuild": "^0.27.0",
"fdir": "^6.5.0",
@@ -14873,7 +14870,6 @@
"integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -14907,7 +14903,6 @@
"integrity": "sha512-yF+o4POL41rpAzj5KVILUxm1GCjKnELvaqmU9TLLUbMfDzuN0UpUR9uaDs+mCtjPe+uYPksXDRLQGGPvj1cTmA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@vitest/expect": "4.1.1",
"@vitest/mocker": "4.1.1",
@@ -15015,7 +15010,8 @@
"resolved": "https://registry.npmjs.org/w3c-keyname/-/w3c-keyname-2.2.8.tgz",
"integrity": "sha512-dpojBhNsCNN7T82Tm7k26A6G9ML3NkhDsnw9n/eoxSRlVBB4CEtIQ/KTCLI2Fwf3ataSXRhYFkQi3SlnFwPvPQ==",
"dev": true,
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/wait-port": {
"version": "1.1.0",
@@ -15667,7 +15663,6 @@
"integrity": "sha512-AvbaCLOO2Otw/lW5bmh9d/WEdcDFdQp2Z2ZUH3pX9U2ihyUY0nvLv7J6TrWowklRGPYbB/IuIMfYgxaCPg5Bpg==",
"dev": true,
"license": "ISC",
"peer": true,
"bin": {
"yaml": "bin.mjs"
},

View File

@@ -69,6 +69,7 @@
"@types/deno": "^2.5.0",
"@types/diff-match-patch": "^1.0.36",
"@types/markdown-it": "^14.1.2",
"@types/micromatch": "^4.0.10",
"@types/node": "^24.10.13",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
@@ -133,11 +134,13 @@
"@smithy/protocol-http": "^5.3.9",
"@smithy/querystring-builder": "^4.2.9",
"@trystero-p2p/nostr": "^0.23.0",
"chokidar": "^4.0.0",
"commander": "^14.0.3",
"diff-match-patch": "^1.0.5",
"fflate": "^0.8.2",
"idb": "^8.0.3",
"markdown-it": "^14.1.1",
"micromatch": "^4.0.0",
"minimatch": "^10.2.2",
"octagonal-wheels": "^0.1.45",
"pouchdb-adapter-leveldb": "^9.0.0",

View File

@@ -3,4 +3,6 @@ test/*
!test/*.sh
test/test-init.local.sh
node_modules
.*.json
.*.json
*.env
!.test.env

View File

@@ -95,13 +95,24 @@ livesync-cli ./my-db pull folder/note.md ./note.md
### Build from source
```bash
# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
# due to shared dependencies with webapp and main library
# Clone with submodules, because the shared core lives in src/lib
git clone --recurse-submodules <repository-url>
cd obsidian-livesync
# If you already cloned without submodules, run this once instead
git submodule update --init --recursive
# Install dependencies from the repository root
npm install
# Build the project (ensure you are in `src/apps/cli` directory)
# Build the CLI from its package directory
cd src/apps/cli
npm run build
```
If `src/lib` is missing, `npm run build` now stops early with a targeted message
instead of a low-level Vite `ENOENT` error.
Run the CLI:
```bash
@@ -286,9 +297,11 @@ Options:
--force, -f Overwrite existing file on init-settings
--verbose, -v Enable verbose logging
--debug, -d Enable debug logging (includes verbose)
--help, -h Show help message
--interval <N>, -i <N> (daemon only) Poll CouchDB every N seconds instead of using the _changes feed
--help, -h Show this help message
Commands:
daemon (default) Run mirror scan then continuously sync CouchDB <-> local filesystem
init-settings [path] Create settings JSON from DEFAULT_SETTINGS
sync Run one replication cycle and exit
p2p-peers <timeout> Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
@@ -395,6 +408,86 @@ In other words, it performs the following actions:
Note: `mirror` does not respect file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run. To delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour (if `mirror` handled deletions automatically, it would have to contend with a ton of edge cases).
##### daemon
`daemon` is the default command when no command is specified. It runs an initial mirror scan and then continuously syncs changes in both directions:
- **CouchDB → local filesystem**: via the `_changes` feed (LiveSync mode, default) or periodic polling (`--interval N`).
- **local filesystem → CouchDB**: via chokidar file watching. Any file created, modified, or deleted in the vault directory is pushed to CouchDB.
In **LiveSync mode** the `_changes` feed delivers remote changes as they arrive, with sub-second latency. In **polling mode** (`--interval N`) the CLI polls CouchDB every N seconds. Use polling mode if your CouchDB instance does not support long-lived HTTP connections, or if you need predictable network usage.
The daemon exits cleanly on `SIGINT` or `SIGTERM`.
```bash
# LiveSync mode (default — _changes feed, near-real-time)
livesync-cli /path/to/vault
# Polling mode — poll every 60 seconds
livesync-cli /path/to/vault --interval 60
```
### .livesync/ignore
Place a `.livesync/ignore` file in your vault root to exclude files from sync in both directions (local → CouchDB and CouchDB → local).
**Format:**
- Lines beginning with `#` are comments.
- Blank lines are ignored.
- All other lines are [minimatch](https://github.com/isaacs/minimatch) glob patterns, relative to the vault root.
- The directive `import: .gitignore` (exactly this string) reads `.gitignore` from the vault root and merges its non-comment, non-blank lines into the ignore rules.
- Negation patterns (lines starting with `!`) are not supported and will cause an error on load.
**Example `.livesync/ignore`:**
```
# Ignore temporary files
*.tmp
*.swp
# Ignore build output
build/
dist/
# Merge patterns from .gitignore
import: .gitignore
```
Patterns apply in both directions: the chokidar watcher will not emit events for matched files, and the `isTargetFile` filter will exclude them from CouchDB → local sync.
Changes to this file require a daemon restart to take effect.
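For a quick check of how the example above behaves (illustrative only; this assumes the patterns are expanded with the matchBase semantics described earlier and ignores the `import: .gitignore` line for brevity, so it is not necessarily the exact shipped logic):

```ts
// Illustrative checks against the example .livesync/ignore above, assuming
// matchBase-style expansion to "**/*.tmp", "**/*.swp", "build/**", "dist/**".
import micromatch from "micromatch";

const expanded = ["**/*.tmp", "**/*.swp", "build/**", "dist/**"];
micromatch.isMatch("notes/draft.tmp", expanded);           // true: excluded in both directions
micromatch.isMatch("build/output/index.js", expanded);     // true: excluded
micromatch.isMatch("notes/daily/2026-05-13.md", expanded); // false: synced normally
```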
### Systemd Installation
The `deploy/` directory contains a systemd unit template and an install script.
**Automated install (user service, recommended):**
```bash
bash src/apps/cli/deploy/install.sh --vault /path/to/vault
```
**With polling interval:**
```bash
bash src/apps/cli/deploy/install.sh --vault /path/to/vault --interval 60
```
**System-wide install** (requires root / sudo for `/etc/systemd/system/`):
```bash
bash src/apps/cli/deploy/install.sh --system --vault /path/to/vault
```
The script:
1. Builds the CLI (`npm install` + `npm run build`).
2. Installs the binary to `~/.local/bin/livesync-cli` (user) or `/usr/local/bin/livesync-cli` (system).
3. Writes the unit file to `~/.config/systemd/user/livesync-cli.service` (user) or `/etc/systemd/system/livesync-cli.service` (system).
4. Runs `systemctl [--user] daemon-reload && systemctl [--user] enable --now livesync-cli`.
**Manual setup** — if you prefer to manage the unit yourself, copy `deploy/livesync-cli.service`, replace `LIVESYNC_BIN` and `LIVESYNC_VAULT_PATH` with the actual binary path and vault path, then install to the appropriate systemd directory.
### Planned options:
- `--immediate`: Perform sync after the command (e.g. `push`, `pull`, `put`, `rm`).

View File

@@ -39,12 +39,6 @@ export class NodeFileSystemAdapter implements IFileSystemAdapter<NodeFile, NodeF
async getAbstractFileByPath(p: FilePath | string): Promise<NodeFile | null> {
const pathStr = this.normalisePath(p);
const cached = this.fileCache.get(pathStr);
if (cached) {
return cached;
}
return await this.refreshFile(pathStr);
}
@@ -104,14 +98,15 @@ export class NodeFileSystemAdapter implements IFileSystemAdapter<NodeFile, NodeF
path: pathStr as FilePath,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};
this.fileCache.set(pathStr, file);
return file;
} catch {
// Evict so a deleted file is not returned by subsequent cache scans.
this.fileCache.delete(pathStr);
return null;
}
@@ -137,8 +132,8 @@ export class NodeFileSystemAdapter implements IFileSystemAdapter<NodeFile, NodeF
path: entryRelativePath as FilePath,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};

View File

@@ -28,8 +28,8 @@ export class NodeStorageAdapter implements IStorageAdapter<NodeStat> {
const stat = await fs.stat(this.resolvePath(p));
return {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: stat.isDirectory() ? "folder" : "file",
};
} catch {

View File

@@ -15,7 +15,12 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
}
async read(file: NodeFile): Promise<string> {
return await fs.readFile(this.resolvePath(file.path), "utf-8");
const content = await fs.readFile(this.resolvePath(file.path), "utf-8");
// Correct stale stat.size — chokidar stats may be from a poll before the final write.
// The downstream document integrity check compares stat.size to content length, so
// they must agree or other clients reject the file as corrupted.
file.stat.size = Buffer.byteLength(content, "utf-8");
return content;
}
async cachedRead(file: NodeFile): Promise<string> {
@@ -25,6 +30,8 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
async readBinary(file: NodeFile): Promise<ArrayBuffer> {
const buffer = await fs.readFile(this.resolvePath(file.path));
// Same correction as read() — ensure stat.size matches actual byte length.
file.stat.size = buffer.length;
return buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength) as ArrayBuffer;
}
@@ -66,8 +73,8 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
path: p as any,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};
@@ -89,8 +96,8 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
path: p as any,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};

View File

@@ -0,0 +1,312 @@
import { describe, expect, it, vi, beforeEach, afterEach } from "vitest";
import { runCommand } from "./runCommand";
import type { CLIOptions } from "./types";
// Mock performFullScan so daemon tests don't require a real CouchDB connection.
vi.mock("@lib/serviceFeatures/offlineScanner", () => ({
performFullScan: vi.fn(async () => true),
}));
// Mock UnresolvedErrorManager to avoid event-hub side effects.
vi.mock("@lib/services/base/UnresolvedErrorManager", () => ({
UnresolvedErrorManager: class UnresolvedErrorManager {
showError() {}
clearError() {}
clearErrors() {}
},
}));
import * as offlineScanner from "@lib/serviceFeatures/offlineScanner";
function createCoreMock() {
return {
services: {
control: {
activated: Promise.resolve(),
applySettings: vi.fn(async () => {}),
},
setting: {
applyPartial: vi.fn(async () => {}),
currentSettings: vi.fn(() => ({ liveSync: true, syncOnStart: false })),
},
replication: {
replicate: vi.fn(async () => true),
},
appLifecycle: {
onUnload: {
addHandler: vi.fn(),
},
},
},
serviceModules: {
fileHandler: {
dbToStorage: vi.fn(async () => true),
storeFileToDB: vi.fn(async () => true),
},
storageAccess: {
readFileAuto: vi.fn(async () => ""),
writeFileAuto: vi.fn(async () => {}),
},
databaseFileAccess: {
fetch: vi.fn(async () => undefined),
},
},
} as any;
}
function makeDaemonOptions(interval?: number): CLIOptions {
return {
command: "daemon",
commandArgs: [],
databasePath: "/tmp/vault",
verbose: false,
force: false,
interval,
};
}
const baseContext = {
vaultPath: "/tmp/vault",
settingsPath: "/tmp/vault/.livesync/settings.json",
originalSyncSettings: {
liveSync: true,
syncOnStart: false,
periodicReplication: false,
syncOnSave: false,
syncOnEditorSave: false,
syncOnFileOpen: false,
syncAfterMerge: false,
},
} as any;
describe("daemon command", () => {
beforeEach(() => {
vi.restoreAllMocks();
vi.useFakeTimers();
});
afterEach(() => {
vi.useRealTimers();
});
it("calls performFullScan during startup", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(offlineScanner.performFullScan).toHaveBeenCalledTimes(1);
});
it("returns false when performFullScan fails", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(false);
const result = await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(result).toBe(false);
});
it("polling mode: calls setTimeout when interval option is set", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const setTimeoutSpy = vi.spyOn(globalThis, "setTimeout");
await runCommand(makeDaemonOptions(30), { ...baseContext, core });
expect(setTimeoutSpy).toHaveBeenCalledTimes(1);
// Interval should be in milliseconds (30s → 30000ms)
expect(setTimeoutSpy).toHaveBeenCalledWith(expect.any(Function), 30000);
});
it("polling mode: applies settings with suspendFileWatching=false before setting interval", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(10), { ...baseContext, core });
expect(core.services.setting.applyPartial).toHaveBeenCalledWith(
expect.objectContaining({ suspendFileWatching: false }),
true
);
expect(core.services.control.applySettings).toHaveBeenCalledTimes(1);
});
it("liveSync mode: calls applyPartial and applySettings", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(core.services.setting.applyPartial).toHaveBeenCalledWith(
expect.objectContaining({
...baseContext.originalSyncSettings,
suspendFileWatching: false,
}),
true
);
expect(core.services.control.applySettings).toHaveBeenCalledTimes(1);
});
it("liveSync mode: logs warning when both liveSync and syncOnStart are false", async () => {
const core = createCoreMock();
core.services.setting.currentSettings = vi.fn(() => ({
liveSync: false,
syncOnStart: false,
}));
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
const result = await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(result).toBe(true);
const warningCalls = consoleSpy.mock.calls.filter(
(args) => typeof args[0] === "string" && args[0].includes("liveSync and syncOnStart are both disabled")
);
expect(warningCalls.length).toBeGreaterThan(0);
});
it("liveSync mode: no warning when liveSync is true", async () => {
const core = createCoreMock();
core.services.setting.currentSettings = vi.fn(() => ({
liveSync: true,
syncOnStart: false,
}));
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
await runCommand(makeDaemonOptions(), { ...baseContext, core });
const warningCalls = consoleSpy.mock.calls.filter(
(args) => typeof args[0] === "string" && args[0].includes("liveSync and syncOnStart are both disabled")
);
expect(warningCalls.length).toBe(0);
});
it("calls replicate before performFullScan", async () => {
const core = createCoreMock();
const callOrder: string[] = [];
core.services.replication.replicate = vi.fn(async () => {
callOrder.push("replicate");
return true;
});
vi.mocked(offlineScanner.performFullScan).mockImplementation(async () => {
callOrder.push("performFullScan");
return true;
});
await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(callOrder).toEqual(["replicate", "performFullScan"]);
});
it("returns false when initial replication fails", async () => {
const core = createCoreMock();
core.services.replication.replicate = vi.fn(async () => false);
vi.mocked(offlineScanner.performFullScan).mockClear();
const result = await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(result).toBe(false);
// performFullScan should NOT have been called
expect(offlineScanner.performFullScan).not.toHaveBeenCalled();
});
it("polling mode: registers onUnload handler that clears timeout", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(10), { ...baseContext, core });
// onUnload handler should have been registered
expect(core.services.appLifecycle.onUnload.addHandler).toHaveBeenCalledTimes(1);
const handler = core.services.appLifecycle.onUnload.addHandler.mock.calls[0][0];
// Get the timeout ID that was created
const clearTimeoutSpy = vi.spyOn(globalThis, "clearTimeout");
await handler();
expect(clearTimeoutSpy).toHaveBeenCalledTimes(1);
});
it("polling backoff: interval escalates on failure, caps at 300000ms, then halves on recovery", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
vi.spyOn(console, "error").mockImplementation(() => {});
// startup replicate (call 1) succeeds; poll calls 2-7 fail; call 8 succeeds.
let callCount = 0;
core.services.replication.replicate = vi.fn(async () => {
callCount++;
if (callCount === 1) return true; // initial startup replicate
if (callCount <= 7) throw new Error("network failure");
return true; // recovery
});
const baseMs = 30 * 1000;
const setTimeoutSpy = vi.spyOn(globalThis, "setTimeout");
await runCommand(makeDaemonOptions(30), { ...baseContext, core });
// After runCommand returns the first setTimeout has been scheduled.
// setTimeoutSpy.mock.calls[0] is the initial schedule (baseMs).
expect(setTimeoutSpy.mock.calls[0][1]).toBe(baseMs);
// Advance through 6 failure polls. After each failure the next setTimeout
// should be scheduled with a larger (or capped) interval.
// formula: min(base * 2^n, 300000). base=30000ms.
// failure 1: 30000*2=60000, failure 2: 30000*4=120000,
// failure 3: 30000*8=240000, failure 4: 30000*16=480000→capped, 5→cap, 6→cap
const expectedIntervals = [
baseMs * 2, // after failure 1: 60000
baseMs * 4, // after failure 2: 120000
baseMs * 8, // after failure 3: 240000
300_000, // after failure 4 (would be 480000, capped)
300_000, // after failure 5 (cap)
300_000, // after failure 6 (cap)
];
for (const expected of expectedIntervals) {
const prevCallCount = setTimeoutSpy.mock.calls.length;
await vi.advanceTimersByTimeAsync(setTimeoutSpy.mock.calls[prevCallCount - 1][1] as number);
const newCallCount = setTimeoutSpy.mock.calls.length;
expect(newCallCount).toBeGreaterThan(prevCallCount);
expect(setTimeoutSpy.mock.calls[newCallCount - 1][1]).toBe(expected);
}
// Now trigger the success poll — interval should halve each time toward base.
// After failure 6, consecutiveFailures=6, currentIntervalMs=300000.
// On success: consecutiveFailures=5, currentIntervalMs=150000.
const prevCallCount = setTimeoutSpy.mock.calls.length;
await vi.advanceTimersByTimeAsync(setTimeoutSpy.mock.calls[prevCallCount - 1][1] as number);
const afterSuccessCallCount = setTimeoutSpy.mock.calls.length;
expect(afterSuccessCallCount).toBeGreaterThan(prevCallCount);
// The interval after one success should be halved (300000 / 2 = 150000).
expect(setTimeoutSpy.mock.calls[afterSuccessCallCount - 1][1]).toBe(150_000);
});
it("polling error handling: replicate rejection is caught and console.error is called", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
// Make replicate succeed on the initial call (startup), then fail on the poll.
let callCount = 0;
core.services.replication.replicate = vi.fn(async () => {
callCount++;
if (callCount === 1) return true; // startup replicate
throw new Error("network failure");
});
const intervalMs = 30 * 1000;
await runCommand(makeDaemonOptions(30), { ...baseContext, core });
// Advance time to trigger the first poll callback and flush its async work.
await vi.advanceTimersByTimeAsync(intervalMs);
// No unhandled rejection — the error was caught internally.
const errorCalls = consoleSpy.mock.calls.filter(
(args) => typeof args[0] === "string" && args[0].includes("Poll error")
);
expect(errorCalls.length).toBeGreaterThan(0);
});
});

View File

@@ -15,6 +15,96 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
await core.services.control.activated;
if (options.command === "daemon") {
const log = (msg: unknown) => console.error(`[Daemon] ${msg}`);
// Skip the config mismatch dialog — the daemon cannot resolve it interactively
// and the default "Dismiss" action would block replication. The daemon should
// accept whatever configuration the remote has.
await core.services.setting.applyPartial({ disableCheckingConfigMismatch: true }, true);
// 1. Replicate CouchDB → local PouchDB so the mirror scan has content to work with.
log("Replicating from CouchDB...");
const replResult = await core.services.replication.replicate(true);
if (!replResult) {
console.error("[Daemon] Initial CouchDB replication failed, cannot continue");
return false;
}
log("CouchDB replication complete");
// 2. Mirror scan to reconcile PouchDB ↔ local filesystem.
const errorManager = new UnresolvedErrorManager(core.services.appLifecycle);
log("Running mirror scan...");
const scanOk = await performFullScan(core as any, log, errorManager, false, true);
if (!scanOk) {
console.error("[Daemon] Mirror scan failed, cannot continue");
return false;
}
log("Mirror scan complete");
// 3. Re-enable sync.
const restoreSyncSettings = async () => {
await core.services.setting.applyPartial({
...context.originalSyncSettings,
suspendFileWatching: false,
}, true);
// applySettings fires the full lifecycle: onSuspending → onResumed.
// ModuleReplicatorCouchDB starts continuous replication on onResumed
// via fireAndForget.
await core.services.control.applySettings();
// Lifecycle events (onSuspending) may re-enable suspension flags.
// Clear them explicitly after the lifecycle completes. applyPartial
// with true is a direct store write — it does not re-trigger lifecycle.
await core.services.setting.applyPartial({
suspendFileWatching: false,
suspendParseReplicationResult: false,
}, true);
};
if (options.interval) {
log(`Polling mode: syncing every ${options.interval}s`);
await restoreSyncSettings();
const baseIntervalMs = options.interval * 1000;
let currentIntervalMs = baseIntervalMs;
let consecutiveFailures = 0;
const maxIntervalMs = 5 * 60 * 1000; // 5 minutes cap
const poll = async () => {
try {
await core.services.replication.replicate(true);
if (consecutiveFailures > 0) {
consecutiveFailures--;
currentIntervalMs = Math.max(currentIntervalMs / 2, baseIntervalMs);
log(`Replication recovered`);
}
} catch (err) {
consecutiveFailures++;
currentIntervalMs = Math.min(baseIntervalMs * Math.pow(2, consecutiveFailures), maxIntervalMs);
console.error(`[Daemon] Poll error (${consecutiveFailures} consecutive):`, err);
if (consecutiveFailures >= 5) {
console.error(`[Daemon] Warning: ${consecutiveFailures} consecutive failures, backing off to ${Math.round(currentIntervalMs / 1000)}s`);
}
}
pollTimer = setTimeout(poll, currentIntervalMs);
};
let pollTimer: ReturnType<typeof setTimeout> = setTimeout(poll, currentIntervalMs);
core.services.appLifecycle.onUnload.addHandler(async () => {
clearTimeout(pollTimer);
return true;
});
} else {
log("LiveSync mode: restoring sync settings and starting _changes feed");
await restoreSyncSettings();
// The applySettings() lifecycle fires onResumed → ModuleReplicatorCouchDB which
// starts continuous replication via fireAndForget(openReplication). Don't call
// openReplication directly — it races with the handler and causes dedup/termination.
log("LiveSync active");
const currentSettings = core.services.setting.currentSettings();
if (!currentSettings.liveSync && !currentSettings.syncOnStart) {
console.error("[Daemon] Warning: liveSync and syncOnStart are both disabled in settings. " +
"No sync will occur. Set liveSync=true in your settings file for continuous sync, " +
"or use --interval for polling mode.");
}
}
return true;
}
@@ -83,8 +173,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);
await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
mtime: sourceStat.mtimeMs,
ctime: sourceStat.ctimeMs,
mtime: Math.floor(sourceStat.mtimeMs),
ctime: Math.floor(sourceStat.ctimeMs),
});
const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);

View File

@@ -1,5 +1,6 @@
import { LiveSyncBaseCore } from "../../../LiveSyncBaseCore";
import { ServiceContext } from "@lib/services/base/ServiceBase";
import type { ObsidianLiveSyncSettings } from "@lib/common/types";
export type CLICommand =
| "daemon"
@@ -29,15 +30,18 @@ export interface CLIOptions {
force?: boolean;
command: CLICommand;
commandArgs: string[];
interval?: number;
}
export interface CLICommandContext {
databasePath: string;
core: LiveSyncBaseCore<ServiceContext, any>;
settingsPath: string;
originalSyncSettings: Pick<ObsidianLiveSyncSettings, "liveSync" | "syncOnStart" | "periodicReplication" | "syncOnSave" | "syncOnEditorSave" | "syncOnFileOpen" | "syncAfterMerge">;
}
export const VALID_COMMANDS = new Set([
"daemon",
"sync",
"p2p-peers",
"p2p-sync",

187
src/apps/cli/deploy/install.sh Executable file
View File

@@ -0,0 +1,187 @@
#!/usr/bin/env bash
# install.sh — install livesync-cli as a systemd service
#
# Usage:
# install.sh [--user] [--system] [--vault <path>] [--interval <N>]
#
# Defaults: user install, prompts for vault path if not supplied.
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "$SCRIPT_DIR/../../.." && pwd)"
CLI_DIR="$REPO_ROOT/src/apps/cli"
SERVICE_TEMPLATE="$SCRIPT_DIR/livesync-cli.service"
# ── Argument parsing ────────────────────────────────────────────────────────
INSTALL_MODE="user"
VAULT_PATH=""
INTERVAL=""
FORCE=0
while [[ $# -gt 0 ]]; do
case "$1" in
--user)
INSTALL_MODE="user"
shift
;;
--system)
INSTALL_MODE="system"
shift
;;
--vault)
if [[ -z "${2:-}" ]]; then
echo "Error: --vault requires a path argument" >&2
exit 1
fi
VAULT_PATH="$2"
shift 2
;;
--interval)
if [[ -z "${2:-}" ]]; then
echo "Error: --interval requires a numeric argument" >&2
exit 1
fi
INTERVAL="$2"
if ! [[ "$INTERVAL" =~ ^[1-9][0-9]*$ ]]; then
echo "Error: --interval requires a positive integer, got '$INTERVAL'" >&2
exit 1
fi
shift 2
;;
--force|-f)
FORCE=1
shift
;;
--help|-h)
cat <<EOF
Usage: install.sh [--user|--system] [--vault <path>] [--interval <N>] [--force]
--user Install as a user systemd service (default, ~/.config/systemd/user/)
--system Install as a system systemd service (/etc/systemd/system/)
--vault Path to the vault directory (prompted if omitted)
--interval Poll CouchDB every N seconds instead of using the _changes feed
--force Overwrite existing service unit without prompting
EOF
exit 0
;;
*)
echo "Error: Unknown argument: $1" >&2
exit 1
;;
esac
done
# ── Vault path ──────────────────────────────────────────────────────────────
if [[ -z "$VAULT_PATH" ]]; then
if [ ! -t 0 ]; then
echo "Error: --vault is required in non-interactive mode" >&2
exit 1
fi
printf 'Vault path: '
read -r VAULT_PATH
fi
_orig_vault="$VAULT_PATH"
if ! VAULT_PATH="$(cd -- "$VAULT_PATH" 2>/dev/null && pwd)"; then
echo "Error: vault directory does not exist: $_orig_vault" >&2
exit 1
fi
echo "[INFO] Vault: $VAULT_PATH"
echo "[INFO] Install mode: $INSTALL_MODE"
# ── Build ────────────────────────────────────────────────────────────────────
echo "[INFO] Building CLI from $REPO_ROOT..."
(cd "$REPO_ROOT" && npm install --silent)
(cd "$CLI_DIR" && npm run build)
BUILT_CJS="$CLI_DIR/dist/index.cjs"
if [[ ! -f "$BUILT_CJS" ]]; then
echo "Error: build output not found: $BUILT_CJS" >&2
exit 1
fi
# ── Install binary ───────────────────────────────────────────────────────────
if [[ "$INSTALL_MODE" == "user" ]]; then
BIN_DIR="$HOME/.local/bin"
UNIT_DIR="$HOME/.config/systemd/user"
SYSTEMCTL_FLAGS="--user"
else
BIN_DIR="/usr/local/bin"
UNIT_DIR="/etc/systemd/system"
SYSTEMCTL_FLAGS=""
fi
mkdir -p "$BIN_DIR"
LIVESYNC_BIN="$BIN_DIR/livesync-cli"
LIVESYNC_JS="$BIN_DIR/livesync-cli.js"
# Copy the CJS bundle so the wrapper is self-contained and independent of the
# build directory location.
cp "$BUILT_CJS" "$LIVESYNC_JS"
# Write a bash wrapper that invokes node on the installed bundle.
cat > "$LIVESYNC_BIN" <<WRAPPER
#!/usr/bin/env bash
exec node "$LIVESYNC_JS" "\$@"
WRAPPER
chmod +x "$LIVESYNC_BIN"
echo "[INFO] Installed bundle: $LIVESYNC_JS"
echo "[INFO] Installed binary: $LIVESYNC_BIN"
# ── Write systemd unit ───────────────────────────────────────────────────────
mkdir -p "$UNIT_DIR"
UNIT_PATH="$UNIT_DIR/livesync-cli.service"
EXEC_START="\"$LIVESYNC_BIN\" \"$VAULT_PATH\""
if [[ -n "$INTERVAL" ]]; then
EXEC_START="\"$LIVESYNC_BIN\" \"$VAULT_PATH\" --interval $INTERVAL"
fi
# Check for existing service and offer to overwrite.
if [[ -f "$UNIT_PATH" ]] && [[ "$FORCE" -eq 0 ]]; then
if [ ! -t 0 ]; then
echo "Error: service unit already exists at $UNIT_PATH; use --force to overwrite" >&2
exit 1
fi
printf 'Service unit already exists at %s. Overwrite? [y/N]: ' "$UNIT_PATH"
read -r CONFIRM
case "$CONFIRM" in
[yY]|[yY][eE][sS]) : ;;
*)
echo "[INFO] Aborted. Existing unit left in place."
exit 0
;;
esac
fi
# In awk gsub(), '&' in the replacement means "matched text"; escape any literal '&'
# in path variables before passing them as awk replacement strings.
AWK_BIN="${LIVESYNC_BIN//&/\\&}"
AWK_VAULT="${VAULT_PATH//&/\\&}"
awk -v bin="$AWK_BIN" -v vault="$AWK_VAULT" -v exec_start="ExecStart=$EXEC_START" \
'/^ExecStart=/ { print exec_start; next } {gsub("LIVESYNC_BIN", bin); gsub("LIVESYNC_VAULT_PATH", vault); print}' \
"$SERVICE_TEMPLATE" > "$UNIT_PATH"
echo "[INFO] Installed unit: $UNIT_PATH"
# ── Enable service ───────────────────────────────────────────────────────────
if ! command -v systemctl >/dev/null 2>&1; then
echo "[WARN] systemctl not found — skipping service activation"
echo "[INFO] To enable manually, copy $UNIT_PATH to the correct systemd directory and run:"
echo " systemctl $SYSTEMCTL_FLAGS daemon-reload"
echo " systemctl $SYSTEMCTL_FLAGS enable --now livesync-cli"
exit 0
fi
# shellcheck disable=SC2086
systemctl $SYSTEMCTL_FLAGS daemon-reload
# shellcheck disable=SC2086
systemctl $SYSTEMCTL_FLAGS enable --now livesync-cli
echo ""
echo "[Done] livesync-cli service installed and started."
echo ""
# shellcheck disable=SC2086
systemctl $SYSTEMCTL_FLAGS status livesync-cli --no-pager || true

View File

@@ -0,0 +1,17 @@
[Unit]
Description=Self-hosted LiveSync CLI Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=LIVESYNC_BIN LIVESYNC_VAULT_PATH
Restart=on-failure
RestartSec=10
TimeoutStartSec=300
StandardOutput=journal
StandardError=journal
LimitNOFILE=65536
[Install]
WantedBy=default.target

View File

@@ -26,6 +26,7 @@ import { VALID_COMMANDS } from "./commands/types";
import type { CLICommand, CLIOptions } from "./commands/types";
import { getPathFromUXFileInfo } from "@lib/common/typeUtils";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import { IgnoreRules } from "./serviceModules/IgnoreRules";
const SETTINGS_FILE = ".livesync/settings.json";
ensureGlobalNodeLocalStorage();
@@ -43,7 +44,8 @@ Arguments:
database-path Path to the local database directory
Commands:
sync Run one replication cycle and exit
daemon (default) Run mirror scan then continuously sync CouchDB <-> local filesystem
sync Run one replication cycle and exit
p2p-peers <timeout> Show discovered peers as [peer]\t<peer-id>\t<peer-name>
p2p-sync <peer> <timeout>
Sync with the specified peer-id or peer-name
@@ -60,24 +62,30 @@ Commands:
rm <path> Mark a file as deleted in local database
resolve <path> <rev> Resolve conflicts by keeping <rev> and deleting others
mirror [vault-path] Mirror database contents to the local file system (vault-path defaults to database-path)
Options:
--interval <N>, -i <N> (daemon only) Poll CouchDB every N seconds instead of using the _changes feed
Examples:
livesync-cli ./my-database sync
livesync-cli ./my-database p2p-peers 5
livesync-cli ./my-database p2p-sync my-peer-name 15
livesync-cli ./my-database p2p-host
livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
livesync-cli ./my-database pull folder/note.md ./exports/note.md
livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
echo "Hello" | livesync-cli ./my-database put notes/hello.md
livesync-cli ./my-database cat notes/hello.md
livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef
livesync-cli ./my-database ls notes/
livesync-cli ./my-database info notes/hello.md
livesync-cli ./my-database rm notes/hello.md
livesync-cli ./my-database resolve notes/hello.md 3-abcdef
livesync-cli init-settings ./data.json
livesync-cli ./my-database --verbose
livesync-cli ./my-database Run daemon (LiveSync mode)
livesync-cli ./my-database --interval 30 Run daemon (polling every 30s)
livesync-cli ./my-database sync
livesync-cli ./my-database p2p-peers 5
livesync-cli ./my-database p2p-sync my-peer-name 15
livesync-cli ./my-database p2p-host
livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
livesync-cli ./my-database pull folder/note.md ./exports/note.md
livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
echo "Hello" | livesync-cli ./my-database put notes/hello.md
livesync-cli ./my-database cat notes/hello.md
livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef
livesync-cli ./my-database ls notes/
livesync-cli ./my-database info notes/hello.md
livesync-cli ./my-database rm notes/hello.md
livesync-cli ./my-database resolve notes/hello.md 3-abcdef
livesync-cli init-settings ./data.json
livesync-cli ./my-database --verbose
`);
}
@@ -94,6 +102,7 @@ export function parseArgs(): CLIOptions {
let verbose = false;
let debug = false;
let force = false;
let interval: number | undefined;
let command: CLICommand = "daemon";
const commandArgs: string[] = [];
@@ -110,6 +119,21 @@ export function parseArgs(): CLIOptions {
settingsPath = args[i];
break;
}
case "--interval":
case "-i": {
i++;
if (!args[i]) {
console.error(`Error: Missing value for ${token}`);
process.exit(1);
}
const n = parseInt(args[i], 10);
if (!Number.isInteger(n) || n <= 0) {
console.error(`Error: --interval requires a positive integer, got '${args[i]}'`);
process.exit(1);
}
interval = n;
break;
}
case "--debug":
case "-d":
// debugging automatically enables verbose logging, as it is intended for debugging issues.
@@ -164,6 +188,7 @@ export function parseArgs(): CLIOptions {
force,
command,
commandArgs,
interval,
};
}
@@ -197,6 +222,9 @@ async function createDefaultSettingsFile(options: CLIOptions) {
export async function main() {
const options = parseArgs();
if (options.interval && options.command !== "daemon") {
console.error(`Warning: --interval is only used in daemon mode, ignored for '${options.command}'`);
}
const avoidStdoutNoise =
options.command === "cat" ||
options.command === "cat-rev" ||
@@ -248,6 +276,20 @@ export async function main() {
infoLog(`Settings: ${settingsPath}`);
infoLog("");
// For daemon and mirror mode, load ignore rules before the core is constructed so that
// chokidar's ignored option is populated when beginWatch() fires during onLoad().
const watchEnabled = options.command === "daemon";
const vaultPath =
options.command === "mirror" && options.commandArgs[0]
? path.resolve(options.commandArgs[0])
: databasePath;
let ignoreRules: IgnoreRules | undefined;
if (options.command === "daemon" || options.command === "mirror") {
ignoreRules = new IgnoreRules(vaultPath);
await ignoreRules.load();
}
// Create service context and hub
const context = new NodeServiceContext(databasePath);
const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
@@ -278,11 +320,14 @@ export async function main() {
}
console.error(`${prefix} ${message}`);
});
// Prevent replication result to be processed automatically.
serviceHubInstance.replication.processSynchroniseResult.addHandler(async () => {
console.error(`[Info] Replication result received, but not processed automatically in CLI mode.`);
return await Promise.resolve(true);
}, -100);
// Prevent replication result from being processed automatically in non-daemon commands.
// In daemon mode the default handler must run so changes are applied to the filesystem.
if (options.command !== "daemon") {
serviceHubInstance.replication.processSynchroniseResult.addHandler(async () => {
console.error(`[Info] Replication result received, but not processed automatically in CLI mode.`);
return await Promise.resolve(true);
}, -100);
}
// Setup settings handlers
const settingService = serviceHubInstance.setting;
@@ -324,11 +369,7 @@ export async function main() {
const core = new LiveSyncBaseCore(
serviceHubInstance,
(core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
const mirrorVaultPath =
options.command === "mirror" && options.commandArgs[0]
? path.resolve(options.commandArgs[0])
: databasePath;
return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub);
return initialiseServiceModulesCLI(vaultPath, core, serviceHub, ignoreRules, watchEnabled);
},
(core) => [
// No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts
@@ -344,8 +385,25 @@ export async function main() {
if (parts.some((part) => part.startsWith("."))) {
return await Promise.resolve(false);
}
// PouchDB LevelDB database directory lives in the vault directory.
if (parts[0]?.endsWith("-livesync-v2")) {
return await Promise.resolve(false);
}
return await Promise.resolve(true);
}, -1 /* highest priority */);
// Apply user-defined ignore rules for daemon mode (lower priority, runs after dotfile check).
if (ignoreRules) {
const rules = ignoreRules;
core.services.vault.isTargetFile.addHandler(async (target) => {
const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
if (rules.shouldIgnore(targetPath)) {
return false;
}
// undefined = pass through to next handler in chain
return undefined;
}, 0);
}
}
);
@@ -366,6 +424,25 @@ export async function main() {
process.on("SIGINT", () => shutdown("SIGINT"));
process.on("SIGTERM", () => shutdown("SIGTERM"));
// Save the settings file before any lifecycle events can mutate and persist them.
// suspendAllSync and other lifecycle hooks clobber sync settings in memory, and
// various code paths persist the clobbered state to disk. We restore on shutdown.
const settingsBackup = await fs.readFile(settingsPath, "utf-8").catch(() => null);
// Restore settings file on any exit to undo lifecycle mutations.
// Write to a temp path first so a crash mid-write doesn't leave a truncated file.
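// Node 'exit' handlers cannot await, so the synchronous fs API is used here.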
process.on("exit", () => {
if (settingsBackup) {
const tmpPath = settingsPath + ".tmp";
try {
require("fs").writeFileSync(tmpPath, settingsBackup, "utf-8");
require("fs").renameSync(tmpPath, settingsPath);
} catch (err) {
console.error("[Settings] Failed to restore settings on exit:", err);
}
}
});
// Start the core
try {
infoLog(`[Starting] Initializing LiveSync...`);
@@ -375,6 +452,18 @@ export async function main() {
console.error(`[Error] Failed to initialize LiveSync`);
process.exit(1);
}
// Capture sync settings before suspendAllSync() clobbers them.
// Used by daemon mode to restore the correct sync behaviour after the mirror scan.
const settingsBeforeSuspend = core.services.setting.currentSettings();
const originalSyncSettings = {
liveSync: settingsBeforeSuspend.liveSync,
syncOnStart: settingsBeforeSuspend.syncOnStart,
periodicReplication: settingsBeforeSuspend.periodicReplication,
syncOnSave: settingsBeforeSuspend.syncOnSave,
syncOnEditorSave: settingsBeforeSuspend.syncOnEditorSave,
syncOnFileOpen: settingsBeforeSuspend.syncOnFileOpen,
syncAfterMerge: settingsBeforeSuspend.syncAfterMerge,
};
await core.services.setting.suspendAllSync();
await core.services.control.onReady();
@@ -400,7 +489,7 @@ export async function main() {
infoLog("");
}
const result = await runCommand(options, { databasePath, core, settingsPath });
const result = await runCommand(options, { databasePath, core, settingsPath, originalSyncSettings });
if (!result) {
console.error(`[Error] Command '${options.command}' failed`);
process.exitCode = 1;
@@ -408,7 +497,7 @@ export async function main() {
infoLog(`[Done] Command '${options.command}' completed`);
}
if (options.command === "daemon") {
if (options.command === "daemon" && result) {
// Keep the process running
await new Promise(() => {});
} else {

View File

@@ -85,4 +85,67 @@ describe("CLI parseArgs", () => {
expect(parsed.command).toBe("p2p-host");
expect(parsed.commandArgs).toEqual([]);
});
it("parses --interval flag with valid integer", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "30"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
expect(parsed.interval).toBe(30);
});
it("parses -i shorthand for --interval", () => {
process.argv = ["node", "livesync-cli", "./vault", "-i", "10"];
const parsed = parseArgs();
expect(parsed.interval).toBe(10);
});
it("exits 1 when --interval has no value", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
expect(exitMock).toHaveBeenCalledWith(1);
});
it("exits 1 when --interval is not a positive integer", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "0"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
expect(exitMock).toHaveBeenCalledWith(1);
});
it("exits 1 when --interval is negative", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "-5"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
});
it("exits 1 when --interval is not numeric", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "abc"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
});
it("parses explicit daemon command", () => {
process.argv = ["node", "livesync-cli", "./vault", "daemon"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
expect(parsed.databasePath).toBe("./vault");
});
it("defaults to daemon when no command specified", () => {
process.argv = ["node", "livesync-cli", "./vault"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
});
it("parses explicit daemon command with --interval", () => {
process.argv = ["node", "livesync-cli", "./vault", "daemon", "--interval", "30"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
expect(parsed.interval).toBe(30);
});
});

View File

@@ -11,8 +11,11 @@ import type {
} from "@lib/managers/adapters";
import type { FileEventItemSentinel } from "@lib/managers/StorageEventManager";
import type { NodeFile, NodeFolder } from "../adapters/NodeTypes";
import type { Stats } from "fs";
import * as fs from "fs/promises";
import * as path from "path";
import { watch as chokidarWatch, type FSWatcher } from "chokidar";
import type { IgnoreRules } from "../serviceModules/IgnoreRules";
/**
* CLI-specific type guard adapter
@@ -56,22 +59,11 @@ class CLIPersistenceAdapter implements IStorageEventPersistenceAdapter {
}
/**
* CLI-specific status adapter (console logging)
* CLI-specific status adapter (no-op — daemon uses journald for status)
*/
class CLIStatusAdapter implements IStorageEventStatusAdapter {
private lastUpdate = 0;
private updateInterval = 5000; // Update every 5 seconds
updateStatus(status: { batched: number; processing: number; totalQueued: number }): void {
const now = Date.now();
if (now - this.lastUpdate > this.updateInterval) {
if (status.totalQueued > 0 || status.processing > 0) {
// console.log(
// `[StorageEventManager] Batched: ${status.batched}, Processing: ${status.processing}, Total Queued: ${status.totalQueued}`
// );
}
this.lastUpdate = now;
}
updateStatus(_status: { batched: number; processing: number; totalQueued: number }): void {
// intentional no-op
}
}
@@ -100,15 +92,97 @@ class CLIConverterAdapter implements IStorageEventConverterAdapter<NodeFile> {
}
/**
* CLI-specific watch adapter (optional file watching with chokidar)
* CLI-specific watch adapter using chokidar for real-time filesystem monitoring.
*/
class CLIWatchAdapter implements IStorageEventWatchAdapter {
constructor(private basePath: string) {}
private _watcher: FSWatcher | undefined;
constructor(private basePath: string, private ignoreRules?: IgnoreRules, private watchEnabled: boolean = false) {}
private _toNodeFile(filePath: string, stats: Stats | undefined): NodeFile {
return {
path: path.relative(this.basePath, filePath) as FilePath,
stat: {
ctime: stats?.ctimeMs ?? Date.now(),
mtime: stats?.mtimeMs ?? Date.now(),
size: stats?.size ?? 0,
type: "file",
},
};
}
private _toNodeFolder(dirPath: string): NodeFolder {
return {
path: path.relative(this.basePath, dirPath) as FilePath,
isFolder: true,
};
}
async beginWatch(handlers: IStorageEventWatchHandlers): Promise<void> {
// File watching is not activated in the CLI.
// Because the CLI is designed for push/pull operations, not real-time sync.
// console.error("[CLIWatchAdapter] File watching is not enabled in CLI version");
if (!this.watchEnabled) return;
const baseIgnored: Array<RegExp | string | ((p: string) => boolean)> = [
/(^|[/\\])\./,
/(^|[/\\])[^/\\]*-livesync-v2([/\\]|$)/,
];
// chokidar v4 dropped glob-string support for `ignored`, so user rules are applied
// through a match function. Bind the rules to a local const so TypeScript narrows
// the optional before it is captured by the closure.
const rules = this.ignoreRules;
const ignored = rules
? [...baseIgnored, (p: string) => rules.shouldIgnore(path.relative(this.basePath, p))]
: baseIgnored;
const watcher = chokidarWatch(this.basePath, {
ignored,
ignoreInitial: true,
persistent: true,
awaitWriteFinish: {
stabilityThreshold: 500,
pollInterval: 100,
},
});
watcher.on("add", (filePath, stats) => {
const nodeFile = this._toNodeFile(filePath, stats);
handlers.onCreate(nodeFile);
});
watcher.on("change", (filePath, stats) => {
const nodeFile = this._toNodeFile(filePath, stats);
handlers.onChange(nodeFile);
});
watcher.on("unlink", (filePath) => {
const nodeFile = this._toNodeFile(filePath, undefined);
handlers.onDelete(nodeFile);
});
watcher.on("addDir", (dirPath) => {
const nodeFolder = this._toNodeFolder(dirPath);
handlers.onCreate(nodeFolder);
});
watcher.on("unlinkDir", (dirPath) => {
const nodeFolder = this._toNodeFolder(dirPath);
handlers.onDelete(nodeFolder);
});
watcher.on("error", (err) => {
console.error("[CLIWatchAdapter] Fatal watcher error — file watching stopped:", err);
console.error("[CLIWatchAdapter] Exiting for systemd restart.");
void watcher.close();
this._watcher = undefined;
// Use exit(1) rather than SIGTERM so systemd Restart=on-failure engages.
process.exit(1);
});
await new Promise<void>((resolve) => watcher.once("ready", resolve));
this._watcher = watcher;
}
close(): Promise<void> {
if (this._watcher) {
return this._watcher.close();
}
return Promise.resolve();
}
}
@@ -123,11 +197,15 @@ export class CLIStorageEventManagerAdapter implements IStorageEventManagerAdapte
readonly status: CLIStatusAdapter;
readonly converter: CLIConverterAdapter;
constructor(basePath: string) {
constructor(basePath: string, ignoreRules?: IgnoreRules, watchEnabled: boolean = false) {
this.typeGuard = new CLITypeGuardAdapter();
this.persistence = new CLIPersistenceAdapter(basePath);
this.watch = new CLIWatchAdapter(basePath);
this.watch = new CLIWatchAdapter(basePath, ignoreRules, watchEnabled);
this.status = new CLIStatusAdapter();
this.converter = new CLIConverterAdapter();
}
close(): Promise<void> {
return this.watch.close();
}
}

View File

@@ -0,0 +1,126 @@
import { describe, expect, it, vi, beforeEach } from "vitest";
import type { IStorageEventWatchHandlers } from "@lib/managers/adapters";
import type { NodeFile } from "../adapters/NodeTypes";
// ── chokidar mock ──────────────────────────────────────────────────────────────
// Must be hoisted before imports that pull in chokidar.
const mockWatcher = {
on: vi.fn().mockReturnThis(),
once: vi.fn((event: string, cb: () => void) => {
if (event === "ready") cb();
return mockWatcher;
}),
close: vi.fn(() => Promise.resolve()),
};
vi.mock("chokidar", () => ({
watch: vi.fn(() => mockWatcher),
}));
import * as chokidar from "chokidar";
import { CLIStorageEventManagerAdapter } from "./CLIStorageEventManagerAdapter";
// ── helpers ───────────────────────────────────────────────────────────────────
function makeHandlers(): IStorageEventWatchHandlers {
return {
onCreate: vi.fn(),
onChange: vi.fn(),
onDelete: vi.fn(),
onRename: vi.fn(),
} as any;
}
// ── tests ─────────────────────────────────────────────────────────────────────
describe("CLIStorageEventManagerAdapter", () => {
beforeEach(() => {
vi.clearAllMocks();
// Restore the default once() behaviour (ready fires synchronously).
mockWatcher.once.mockImplementation((event: string, cb: () => void) => {
if (event === "ready") cb();
return mockWatcher;
});
});
it("beginWatch is no-op when watchEnabled=false", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, false);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
expect(chokidar.watch).not.toHaveBeenCalled();
});
it("beginWatch calls chokidar.watch when watchEnabled=true", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
expect(chokidar.watch).toHaveBeenCalledTimes(1);
expect(chokidar.watch).toHaveBeenCalledWith(
"/base",
expect.objectContaining({ ignoreInitial: true })
);
});
it("add event produces NodeFile with correct relative path via onCreate", async () => {
const basePath = "/vault/base";
const adapter = new CLIStorageEventManagerAdapter(basePath, undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
// Find the callback registered for the "add" event.
const addCall = mockWatcher.on.mock.calls.find(([event]) => event === "add");
expect(addCall).toBeDefined();
const addCallback = addCall![1] as (filePath: string, stats: any) => void;
const fakeStats = { ctimeMs: 1000, mtimeMs: 2000, size: 42 };
addCallback(`${basePath}/subdir/note.md`, fakeStats);
expect(handlers.onCreate).toHaveBeenCalledTimes(1);
const created = (handlers.onCreate as ReturnType<typeof vi.fn>).mock.calls[0][0] as NodeFile;
expect(created.path).toBe("subdir/note.md");
expect(created.stat?.size).toBe(42);
});
it("close() calls watcher.close()", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
await adapter.close();
expect(mockWatcher.close).toHaveBeenCalledTimes(1);
});
it("close() is safe when no watcher was started", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, false);
// Should not throw.
await expect(adapter.close()).resolves.toBeUndefined();
expect(mockWatcher.close).not.toHaveBeenCalled();
});
it("error event triggers process.exit(1)", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
const processExitSpy = vi.spyOn(process, "exit").mockImplementation((() => {}) as any);
const errorCall = mockWatcher.on.mock.calls.find(([event]) => event === "error");
expect(errorCall).toBeDefined();
const errorCallback = errorCall![1] as (err: Error) => void;
errorCallback(new Error("disk failure"));
expect(processExitSpy).toHaveBeenCalledWith(1);
processExitSpy.mockRestore();
});
});

View File

@@ -2,6 +2,7 @@ import { StorageEventManagerBase, type StorageEventManagerBaseDependencies } fro
import { CLIStorageEventManagerAdapter } from "./CLIStorageEventManagerAdapter";
import type { IMinimumLiveSyncCommands, LiveSyncBaseCore } from "../../../LiveSyncBaseCore";
import type { ServiceContext } from "@lib/services/base/ServiceBase";
import type { IgnoreRules } from "../serviceModules/IgnoreRules";
// import type { IMinimumLiveSyncCommands } from "@lib/services/base/IService";
export class StorageEventManagerCLI extends StorageEventManagerBase<CLIStorageEventManagerAdapter> {
@@ -10,9 +11,11 @@ export class StorageEventManagerCLI extends StorageEventManagerBase<CLIStorageEv
constructor(
basePath: string,
core: LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands>,
dependencies: StorageEventManagerBaseDependencies
dependencies: StorageEventManagerBaseDependencies,
ignoreRules?: IgnoreRules,
watchEnabled?: boolean
) {
const adapter = new CLIStorageEventManagerAdapter(basePath);
const adapter = new CLIStorageEventManagerAdapter(basePath, ignoreRules, watchEnabled);
super(adapter, dependencies);
this.core = core;
}
@@ -25,4 +28,11 @@ export class StorageEventManagerCLI extends StorageEventManagerBase<CLIStorageEv
// No-op in CLI version
// Internal file handling is not needed
}
/**
* Close the file watcher. Call this during graceful shutdown.
*/
close(): Promise<void> {
return this.adapter.close();
}
}

View File

@@ -6,6 +6,7 @@
"type": "module",
"scripts": {
"dev": "vite",
"prebuild": "node scripts/check-submodule.mjs",
"build": "vite build",
"preview": "vite preview",
"cli": "node dist/index.cjs",

View File

@@ -4,6 +4,7 @@
"version": "0.0.0",
"description": "Runtime dependencies for Self-hosted LiveSync CLI Docker image",
"dependencies": {
"chokidar": "^4.0.0",
"commander": "^14.0.3",
"werift": "^0.22.9",
"pouchdb-adapter-http": "^9.0.0",

View File

@@ -0,0 +1,36 @@
import fs from "node:fs";
import path from "node:path";
import process from "node:process";
const cliDir = process.cwd();
const repoRoot = path.resolve(cliDir, "../../..");
const requiredFiles = [
path.join(repoRoot, "src/lib/src/common/types.ts"),
];
const missingFiles = requiredFiles.filter((filePath) => !fs.existsSync(filePath));
if (missingFiles.length === 0) {
process.exit(0);
}
console.error("[CLI Build Error] Required shared sources were not found.");
console.error("This repository uses Git submodules, and the CLI depends on src/lib.");
console.error("");
console.error("Missing file(s):");
for (const filePath of missingFiles) {
console.error(` - ${path.relative(repoRoot, filePath)}`);
}
console.error("");
console.error("Initialize submodules, then retry the CLI build:");
console.error(" git submodule update --init --recursive");
console.error("");
console.error("For a fresh clone, prefer:");
console.error(" git clone --recurse-submodules <repository-url>");
console.error("");
console.error("Then run:");
console.error(" npm install");
console.error(" cd src/apps/cli");
console.error(" npm run build");
process.exit(1);

View File

@@ -9,6 +9,7 @@ import { ServiceFileAccessCLI } from "./ServiceFileAccessImpl";
import { ServiceDatabaseFileAccessCLI } from "./DatabaseFileAccess";
import { StorageEventManagerCLI } from "../managers/StorageEventManagerCLI";
import type { ServiceModules } from "@lib/interfaces/ServiceModule";
import type { IgnoreRules } from "./IgnoreRules";
/**
* Initialize service modules for CLI version
@@ -22,7 +23,9 @@ import type { ServiceModules } from "@lib/interfaces/ServiceModule";
export function initialiseServiceModulesCLI(
basePath: string,
core: LiveSyncBaseCore<ServiceContext, any>,
services: InjectableServiceHub<ServiceContext>
services: InjectableServiceHub<ServiceContext>,
ignoreRules?: IgnoreRules,
watchEnabled: boolean = false,
): ServiceModules {
const storageAccessManager = new StorageAccessManager();
@@ -42,6 +45,12 @@ export function initialiseServiceModulesCLI(
vaultService: services.vault,
storageAccessManager: storageAccessManager,
APIService: services.API,
}, ignoreRules, watchEnabled);
// Close the file watcher during graceful shutdown so the process can exit cleanly.
services.appLifecycle.onUnload.addHandler(async () => {
await storageEventManager.close();
return true;
});
// Storage access using CLI file system adapter

View File

@@ -0,0 +1,129 @@
import * as fs from "fs/promises";
import * as path from "path";
import { minimatch } from "minimatch";
/**
* Loads and evaluates ignore rules from `.livesync/ignore` inside the vault.
*
* File format:
* - Lines starting with `#` are comments.
* - Blank lines are ignored.
* - `import: .gitignore` (exactly) — merges patterns from the vault's `.gitignore`.
* - All other lines are minimatch glob patterns relative to the vault root.
*
* Negation patterns (lines starting with `!`) are not supported. Loading a
* ruleset containing them throws an error — use separate include/exclude files
* instead.
*
* Missing files (`.livesync/ignore` or `.gitignore`) are silently skipped.
*/
export class IgnoreRules {
private patterns: string[] = [];
constructor(private vaultPath: string) {}
/**
* Reads `.livesync/ignore` (and optionally `.gitignore`) and populates the
* pattern list. Safe to call multiple times — each call replaces the
* previous state. Does not throw if files are absent.
*
* @throws if any pattern line begins with `!` (negation is unsupported).
*/
async load(): Promise<void> {
this.patterns = [];
const ignorePath = path.join(this.vaultPath, ".livesync", "ignore");
let rawLines: string[];
try {
const content = await fs.readFile(ignorePath, "utf-8");
rawLines = content.split(/\r?\n/);
} catch {
// File absent or unreadable — treat as empty ruleset.
return;
}
for (const line of rawLines) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) {
continue;
}
// NOTE: Only the exact string "import: .gitignore" is recognised.
// Any future generalisation of this directive must validate that
// the resolved path stays within the vault directory.
if (trimmed === "import: .gitignore") {
await this._importGitignore();
continue;
}
if (trimmed.startsWith("import:")) {
console.error(`[IgnoreRules] Warning: unrecognised directive '${trimmed}' — only 'import: .gitignore' is supported`);
continue;
}
this._addPattern(trimmed);
}
if (this.patterns.length > 0) {
console.error(`[IgnoreRules] Loaded ${this.patterns.length} ignore patterns`);
}
}
// Normalises a single gitignore-style pattern:
// - Patterns ending with `/` (directory patterns like `build/`) become
//   `**/build/**`, matching everything inside such a directory wherever it
//   appears in the vault.
// - Patterns without a `/` are prefixed with `**/` to give them matchBase
// semantics (e.g. `*.tmp` → `**/*.tmp`), matching the basename in any
// subdirectory as gitignore does.
// - Patterns that already contain a `/` (but don't end with one) are
// path-specific and used as-is.
private _normalisePattern(pattern: string): string {
if (pattern.endsWith("/")) {
return "**/" + pattern + "**";
} else if (!pattern.includes("/")) {
return "**/" + pattern;
}
return pattern;
}
private async _importGitignore(): Promise<void> {
const gitignorePath = path.join(this.vaultPath, ".gitignore");
let content: string;
try {
content = await fs.readFile(gitignorePath, "utf-8");
} catch {
return;
}
this._parseLines(content);
}
private _parseLines(content: string): void {
for (const line of content.split(/\r?\n/)) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
this._addPattern(trimmed);
}
}
private _addPattern(raw: string): void {
if (raw.startsWith("!")) {
throw new Error(
`[IgnoreRules] Negation pattern '${raw}' is not supported. ` +
`Remove it from .livesync/ignore or use a separate include/exclude file.`
);
}
this.patterns.push(this._normalisePattern(raw));
}
/**
* Returns `true` if the given vault-relative path matches any loaded
* ignore pattern.
*
* @param relativePath - Path relative to the vault root, using forward
* slashes or the OS separator.
*/
shouldIgnore(relativePath: string): boolean {
if (this.patterns.length === 0) {
return false;
}
// Normalise to forward slashes for minimatch.
const normalised = relativePath.replace(/\\/g, "/");
return this.patterns.some((p) => minimatch(normalised, p, { dot: true }));
}
}

View File

@@ -0,0 +1,172 @@
import * as fs from "node:fs/promises";
import * as os from "node:os";
import * as path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { IgnoreRules } from "./IgnoreRules";
describe("IgnoreRules", () => {
const tempDirs: string[] = [];
async function createVault(): Promise<string> {
const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "livesync-ignorerules-"));
tempDirs.push(tempDir);
return tempDir;
}
async function writeIgnoreFile(vaultPath: string, content: string): Promise<void> {
const ignoreDir = path.join(vaultPath, ".livesync");
await fs.mkdir(ignoreDir, { recursive: true });
await fs.writeFile(path.join(ignoreDir, "ignore"), content, "utf-8");
}
afterEach(async () => {
await Promise.all(tempDirs.splice(0).map((dir) => fs.rm(dir, { recursive: true, force: true })));
});
describe("pattern normalisation", () => {
it("adds **/ prefix to basename patterns (no slash)", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("notes/scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("deep/nested/file.tmp")).toBe(true);
});
it("appends ** to directory patterns ending with / and prepends **/", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "build/\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("build/output.js")).toBe(true);
expect(rules.shouldIgnore("build/nested/file.js")).toBe(true);
expect(rules.shouldIgnore("subproject/build/output.js")).toBe(true);
});
it("leaves patterns containing / as-is", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "docs/private.md\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("docs/private.md")).toBe(true);
expect(rules.shouldIgnore("other/docs/private.md")).toBe(false);
});
});
describe("shouldIgnore", () => {
it("matches **/*.tmp against notes/scratch.tmp", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("notes/scratch.tmp")).toBe(true);
});
it("does not match notes/readme.md against **/*.tmp", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("notes/readme.md")).toBe(false);
});
it("returns false when no patterns are loaded", async () => {
const vaultPath = await createVault();
const rules = new IgnoreRules(vaultPath);
// No load() call — patterns are empty
expect(rules.shouldIgnore("anything.md")).toBe(false);
});
});
describe("negation patterns", () => {
it("throws when a negation pattern is encountered", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n!important.tmp\n");
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).rejects.toThrow(/Negation pattern/);
});
it("throws when a .gitignore imported via directive contains negation", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "import: .gitignore\n");
await fs.writeFile(path.join(vaultPath, ".gitignore"), "*.log\n!keep.log\n", "utf-8");
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).rejects.toThrow(/Negation pattern/);
});
});
describe("unrecognised import: directives", () => {
it("warns and skips unrecognised import: forms (does not add as literal pattern)", async () => {
const vaultPath = await createVault();
// Typo: "import:.gitignore" instead of "import: .gitignore"
await writeIgnoreFile(vaultPath, "*.tmp\nimport:.gitignore\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
// *.tmp still loaded; import:.gitignore is skipped (not treated as a literal pattern)
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("import:.gitignore")).toBe(false);
});
});
describe("load() with missing file", () => {
it("returns without error when .livesync/ignore is absent", async () => {
const vaultPath = await createVault();
// No ignore file created
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).resolves.toBeUndefined();
expect(rules.shouldIgnore("anything.md")).toBe(false);
});
});
describe("load() with comments and blank lines", () => {
it("skips # comment lines and blank lines", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(
vaultPath,
"# This is a comment\n\n \n*.tmp\n# another comment\nbuild/\n"
);
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("build/output.js")).toBe(true);
expect(rules.shouldIgnore("readme.md")).toBe(false);
});
});
describe("import: .gitignore directive", () => {
it("reads and normalises patterns from .gitignore", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "import: .gitignore\n");
await fs.writeFile(path.join(vaultPath, ".gitignore"), "*.log\nnode_modules/\n", "utf-8");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("app.log")).toBe(true);
expect(rules.shouldIgnore("node_modules/package.json")).toBe(true);
expect(rules.shouldIgnore("src/node_modules/package.json")).toBe(true);
expect(rules.shouldIgnore("src/index.ts")).toBe(false);
});
it("merges .gitignore patterns with other patterns", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\nimport: .gitignore\n");
await fs.writeFile(path.join(vaultPath, ".gitignore"), "*.log\n", "utf-8");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("error.log")).toBe(true);
});
});
describe("import: .gitignore with missing .gitignore", () => {
it("does not throw when .gitignore is absent", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\nimport: .gitignore\n");
// No .gitignore created
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).resolves.toBeUndefined();
// The *.tmp pattern from the ignore file still works
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
});
});
});

View File

@@ -0,0 +1,166 @@
#!/usr/bin/env bash
# Test: daemon-related ignore rules behaviour
#
# Tests that are runnable without a long-running daemon process are exercised
# here using the `mirror` command, which calls the same `isTargetFile` handler
# stack that the daemon uses.
#
# Covered cases:
# 1. .livesync/ignore with *.tmp pattern → ignored file is NOT synced to DB
# 2. .livesync/ignore missing → no error, normal sync continues
# 3. import: .gitignore directive → patterns from .gitignore are merged
#
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info
RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-daemon-test.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR/notes"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
npm run build
fi
echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"
cli_test_mark_settings_configured "$SETTINGS_FILE"
PASS=0
FAIL=0
assert_pass() { echo "[PASS] $1"; PASS=$((PASS + 1)); }
assert_fail() { echo "[FAIL] $1" >&2; FAIL=$((FAIL + 1)); }
# ─────────────────────────────────────────────────────────────────────────────
# Case 1: .livesync/ignore with *.tmp → matched file should NOT appear in DB
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 1: .livesync/ignore *.tmp → ignored file not synced to DB ==="
mkdir -p "$VAULT_DIR/.livesync"
printf '*.tmp\n' > "$VAULT_DIR/.livesync/ignore"
# Also write a normal file so we can confirm mirror ran at all.
printf 'normal content\n' > "$VAULT_DIR/notes/normal.md"
# Write the file that should be ignored.
printf 'tmp content\n' > "$VAULT_DIR/notes/scratch.tmp"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
# The normal file should be in the DB.
RESULT_NORMAL="$WORK_DIR/case1-normal.txt"
if run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull notes/normal.md "$RESULT_NORMAL" 2>/dev/null; then
if cmp -s "$VAULT_DIR/notes/normal.md" "$RESULT_NORMAL"; then
assert_pass "normal.md was synced to DB"
else
assert_fail "normal.md content mismatch after mirror"
fi
else
assert_fail "normal.md was not found in DB after mirror"
fi
# The .tmp file should NOT be in the DB.
DB_LIST="$WORK_DIR/case1-ls.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls > "$DB_LIST"
if grep -q "scratch.tmp" "$DB_LIST"; then
assert_fail "scratch.tmp (ignored) was unexpectedly synced to DB"
echo "--- DB listing ---" >&2; cat "$DB_LIST" >&2
else
assert_pass "scratch.tmp (*.tmp pattern) was NOT synced to DB"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 2: .livesync/ignore absent → no error, normal sync continues
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 2: .livesync/ignore absent → no error, sync continues ==="
VAULT_DIR2="$WORK_DIR/vault2"
mkdir -p "$VAULT_DIR2/notes"
SETTINGS_FILE2="$WORK_DIR/data2.json"
cli_test_init_settings_file "$SETTINGS_FILE2"
cli_test_mark_settings_configured "$SETTINGS_FILE2"
# No .livesync directory at all.
printf 'hello\n' > "$VAULT_DIR2/notes/hello.md"
# mirror should succeed without error.
set +e
MIRROR_OUTPUT="$WORK_DIR/case2-mirror.txt"
run_cli "$VAULT_DIR2" --settings "$SETTINGS_FILE2" mirror >"$MIRROR_OUTPUT" 2>&1
MIRROR_EXIT=$?
set -e
if [[ "$MIRROR_EXIT" -ne 0 ]]; then
assert_fail "mirror exited non-zero ($MIRROR_EXIT) when .livesync/ignore is absent"
cat "$MIRROR_OUTPUT" >&2
else
assert_pass "mirror succeeded when .livesync/ignore is absent"
fi
# The normal file should have been synced.
RESULT_HELLO="$WORK_DIR/case2-hello.txt"
if run_cli "$VAULT_DIR2" --settings "$SETTINGS_FILE2" pull notes/hello.md "$RESULT_HELLO" 2>/dev/null; then
assert_pass "file synced normally when .livesync/ignore is absent"
else
assert_fail "file was not synced when .livesync/ignore is absent"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 3: import: .gitignore merges patterns
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 3: import: .gitignore directive merges patterns ==="
VAULT_DIR3="$WORK_DIR/vault3"
mkdir -p "$VAULT_DIR3/notes"
SETTINGS_FILE3="$WORK_DIR/data3.json"
cli_test_init_settings_file "$SETTINGS_FILE3"
cli_test_mark_settings_configured "$SETTINGS_FILE3"
mkdir -p "$VAULT_DIR3/.livesync"
printf 'import: .gitignore\n' > "$VAULT_DIR3/.livesync/ignore"
printf '# gitignore comment\n*.log\nbuild/\n' > "$VAULT_DIR3/.gitignore"
printf 'regular note\n' > "$VAULT_DIR3/notes/regular.md"
printf 'log content\n' > "$VAULT_DIR3/notes/debug.log"
run_cli "$VAULT_DIR3" --settings "$SETTINGS_FILE3" mirror
DB_LIST3="$WORK_DIR/case3-ls.txt"
run_cli "$VAULT_DIR3" --settings "$SETTINGS_FILE3" ls > "$DB_LIST3"
if grep -q "debug.log" "$DB_LIST3"; then
assert_fail "debug.log (ignored via .gitignore import) was unexpectedly synced to DB"
echo "--- DB listing ---" >&2; cat "$DB_LIST3" >&2
else
assert_pass "debug.log (*.log from imported .gitignore) was NOT synced to DB"
fi
# regular.md should still be present.
if grep -q "regular.md" "$DB_LIST3"; then
assert_pass "regular.md was synced normally alongside .gitignore import rules"
else
assert_fail "regular.md was NOT synced — .gitignore import may have been too broad"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Summary
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "Results: PASS=$PASS FAIL=$FAIL"
if [[ "$FAIL" -gt 0 ]]; then
exit 1
fi

View File

@@ -0,0 +1,9 @@
hostname=http://127.0.0.1:5989/
dbname=livesync-test-db-ci
username=admin
password=testpassword
minioEndpoint=http://127.0.0.1:9000
accessKey=minioadmin
secretKey=minioadmin
bucketName=livesync-test-bucket-ci
LIVESYNC_TEST_TEE=1

View File

@@ -0,0 +1,150 @@
# Writing CLI Tests on Deno
This guide explains how to add or update tests under `src/apps/cli/testdeno/`.
New tests should be added to the Deno suite rather than the existing bash suite, because the Deno suite runs cross-platform and benefits from TypeScript tooling.
## Scope
The Deno suite is designed for cross-platform execution, with a strong focus on Windows compatibility while keeping behaviour equivalent to existing bash tests.
## Principles
- Keep one scenario per file when practical.
- Reuse helpers from `helpers/` rather than duplicating process, Docker, or settings logic.
- Prefer deterministic data over random inputs unless randomness is explicitly required.
- Ensure every test can clean up automatically.
- Keep assertions actionable with clear failure messages.
## Directory structure
```
src/apps/cli/testdeno/
helpers/
backgroundCli.ts
cli.ts
docker.ts
env.ts
p2p.ts
settings.ts
temp.ts
test-*.ts
deno.json
```
## Test file naming
- Use `test-<feature>.ts`.
- Use names aligned with existing bash tests when porting, for example:
- `test-sync-locked-remote.ts`
- `test-p2p-sync.ts`
## Core helper usage
### Temporary workspace
Use `TempDir` and `await using` so cleanup is automatic:
```ts
await using workDir = await TempDir.create("livesync-cli-my-test");
```
### CLI execution
- `runCli(...)`: returns code and combined output.
- `runCliOrFail(...)`: throws on non-zero exit.
- `runCliWithInputOrFail(input, ...)`: for `put` and stdin-driven commands.
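A minimal sketch of a put/cat round trip with these runners (the database path, note name, and the `./helpers/cli.ts` import path are illustrative):
```ts
import { assertContains, runCli, runCliOrFail, runCliWithInputOrFail } from "./helpers/cli.ts";

// Write a note via stdin, then read it back.
await runCliWithInputOrFail("Hello", "./my-database", "put", "notes/hello.md");
const content = await runCliOrFail("./my-database", "cat", "notes/hello.md");
assertContains(content, "Hello", "cat should return the note body");

// runCli does not throw on failure; inspect `.code` yourself.
const missing = await runCli("./my-database", "cat", "notes/does-not-exist.md");
if (missing.code !== 0) {
  console.error(missing.stderr);
}
```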
### Settings
- `initSettingsFile(...)`: creates a baseline settings file.
- `applyCouchdbSettings(...)`: applies CouchDB fields.
- `applyRemoteSyncSettings(...)`: applies remote and encryption fields.
- `applyP2pSettings(...)`: applies P2P fields.
- `applyP2pTestTweaks(...)`: enables P2P-only test profile.
### Docker services
- `startCouchdb(...)`, `stopCouchdb()`
- `startP2pRelay()`, `stopP2pRelay()`
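For example, a CouchDB-backed test can bracket its body like this (the connection values mirror the `.test.env` defaults added in this change; the import path is assumed):
```ts
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";

// Start (or recreate) the CouchDB test container and create the test database.
await startCouchdb("http://127.0.0.1:5989/", "admin", "testpassword", "livesync-test-db-ci");
try {
  // ... run CLI commands against the container ...
} finally {
  await stopCouchdb();
}
```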
### P2P discovery
- `discoverPeer(...)`
- `maybeStartLocalRelay(...)`
- `stopLocalRelayIfStarted(...)`
### Background host process
Use `startCliInBackground(...)` for long-running host mode such as `p2p-host`.
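A sketch of driving a long-running host (the readiness marker string is illustrative; match it to whatever the host actually logs):
```ts
import { startCliInBackground } from "./helpers/backgroundCli.ts";

const host = startCliInBackground("./host-db", "p2p-host");
try {
  // Block until a log line signals the host is up, then exercise peers against it.
  await host.waitUntilContains("ready", 30000);
  // ... run peer commands ...
} finally {
  await host.stop();
}
```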
## Recommended test structure
1. Arrange
2. Act
3. Assert
4. Cleanup in `finally`
Example skeleton:
```ts
Deno.test("feature: behaviour", async () => {
await using workDir = await TempDir.create("example");
// Arrange
try {
// Act
// Assert
} finally {
// Optional explicit cleanup
}
});
```
## Reliability guidelines
- Use explicit waits only when needed for eventual consistency.
- Re-run sync operations where the protocol is eventually consistent.
- For network-sensitive commands, use `LIVESYNC_CLI_RETRY` during debugging.
- Keep Docker container reuse disabled by default unless debugging.
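For eventually consistent flows, a bounded re-run loop is usually more robust than a single fixed sleep. A sketch (command names follow the CLI usage above; the attempt count is arbitrary):
```ts
import { runCli, runCliOrFail } from "./helpers/cli.ts";

// Sync both vaults repeatedly until the note propagates, up to five attempts.
let synced = false;
for (let attempt = 0; attempt < 5 && !synced; attempt++) {
  await runCliOrFail("./vault-a", "sync");
  await runCliOrFail("./vault-b", "sync");
  const r = await runCli("./vault-b", "cat", "notes/hello.md");
  synced = r.code === 0 && r.stdout.includes("Hello");
}
if (!synced) throw new Error("note did not propagate within five sync attempts");
```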
## Environment variables
Common variables:
- `LIVESYNC_DOCKER_MODE`
- `LIVESYNC_DOCKER_COMMAND`
- `LIVESYNC_TEST_TEE`
- `LIVESYNC_DOCKER_TEE`
- `LIVESYNC_CLI_DEBUG`
- `LIVESYNC_CLI_VERBOSE`
- `LIVESYNC_CLI_RETRY`
- `LIVESYNC_DEBUG_KEEP_DOCKER`
P2P variables:
- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY`
## Adding a new test task
1. Add the test file under `src/apps/cli/testdeno/`.
2. Add a task in `src/apps/cli/testdeno/deno.json`.
3. Update `src/apps/cli/testdeno/test_dev_deno.md`.
4. Run the new task locally.
## Validation checklist
- The test passes on a clean workspace.
- The test does not leave persistent artefacts unless explicitly requested.
- Failure messages identify both expected and actual behaviour.
- The corresponding task is documented.
## Out of scope for this suite
- One-off reproduction scripts that are not intended as stable regression tests.

View File

@@ -0,0 +1,22 @@
{
"tasks": {
"test": "deno test --env-file=.test.env -A --no-check test-*.ts",
"test:local": "deno test --env-file=.test.env -A --no-check test-setup-put-cat.ts test-mirror.ts",
"test:push-pull": "deno test --env-file=.test.env -A --no-check test-push-pull.ts",
"test:setup-put-cat": "deno test --env-file=.test.env -A --no-check test-setup-put-cat.ts",
"test:mirror": "deno test --env-file=.test.env -A --no-check test-mirror.ts",
"test:sync-two-local": "deno test --env-file=.test.env -A --no-check test-sync-two-local-databases.ts",
"test:sync-locked-remote": "deno test --env-file=.test.env -A --no-check test-sync-locked-remote.ts",
"test:p2p-host": "deno test --env-file=.test.env -A --no-check test-p2p-host.ts",
"test:p2p-peers": "deno test --env-file=.test.env -A --no-check test-p2p-peers-local-relay.ts",
"test:p2p-sync": "deno test --env-file=.test.env -A --no-check test-p2p-sync.ts",
"test:p2p-three-nodes": "deno test --env-file=.test.env -A --no-check test-p2p-three-nodes-conflict.ts",
"test:p2p-upload-download": "deno test --env-file=.test.env -A --no-check test-p2p-upload-download-repro.ts",
"test:e2e-couchdb": "deno test --env-file=.test.env -A --no-check test-e2e-two-vaults-couchdb.ts",
"test:e2e-matrix": "deno test --env-file=.test.env -A --no-check test-e2e-two-vaults-matrix.ts"
},
"imports": {
"@std/assert": "jsr:@std/assert@^1.0.13",
"@std/path": "jsr:@std/path@^1.0.9"
}
}

src/apps/cli/testdeno/deno.lock (generated, 31 lines)
View File

@@ -0,0 +1,31 @@
{
"version": "5",
"specifiers": {
"jsr:@std/assert@^1.0.13": "1.0.19",
"jsr:@std/internal@^1.0.12": "1.0.12",
"jsr:@std/path@^1.0.9": "1.1.4"
},
"jsr": {
"@std/assert@1.0.19": {
"integrity": "eaada96ee120cb980bc47e040f82814d786fe8162ecc53c91d8df60b8755991e",
"dependencies": [
"jsr:@std/internal"
]
},
"@std/internal@1.0.12": {
"integrity": "972a634fd5bc34b242024402972cd5143eac68d8dffaca5eaa4dba30ce17b027"
},
"@std/path@1.1.4": {
"integrity": "1d2d43f39efb1b42f0b1882a25486647cb851481862dc7313390b2bb044314b5",
"dependencies": [
"jsr:@std/internal"
]
}
},
"workspace": {
"dependencies": [
"jsr:@std/assert@^1.0.13",
"jsr:@std/path@^1.0.9"
]
}
}

View File

@@ -0,0 +1,112 @@
import { CLI_DIR } from "./cli.ts";
import { join } from "@std/path";
const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";
function decorateArgs(args: string[]): string[] {
return DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
}
async function pump(
stream: ReadableStream<Uint8Array>,
sink: (text: string) => void,
teeTarget: WritableStream<Uint8Array> | null
): Promise<void> {
const reader = stream.getReader();
const writer = teeTarget?.getWriter();
const dec = new TextDecoder();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (!value) continue;
sink(dec.decode(value, { stream: true }));
if (writer) {
await writer.write(value);
}
}
} finally {
if (writer) writer.releaseLock();
reader.releaseLock();
}
}
export class BackgroundCliProcess {
#stdout = "";
#stderr = "";
#stdoutDone: Promise<void>;
#stderrDone: Promise<void>;
constructor(
readonly child: Deno.ChildProcess,
readonly args: string[]
) {
this.#stdoutDone = pump(
child.stdout,
(text) => {
this.#stdout += text;
},
null
);
this.#stderrDone = pump(
child.stderr,
(text) => {
this.#stderr += text;
},
null
);
}
get stdout(): string {
return this.#stdout;
}
get stderr(): string {
return this.#stderr;
}
get combined(): string {
return this.#stdout + this.#stderr;
}
async waitUntilContains(needle: string, timeoutMs = 15000): Promise<void> {
const started = Date.now();
while (Date.now() - started < timeoutMs) {
if (this.combined.includes(needle)) return;
const status = await Promise.race([
this.child.status.then((s) => ({ type: "status" as const, status: s })),
new Promise<{ type: "tick" }>((resolve) => setTimeout(() => resolve({ type: "tick" }), 100)),
]);
if (status.type === "status") {
throw new Error(
`Background CLI exited before '${needle}' appeared (code ${status.status.code})\n${this.combined}`
);
}
}
throw new Error(`Timed out waiting for '${needle}'\n${this.combined}`);
}
async stop(): Promise<number> {
try {
this.child.kill("SIGTERM");
} catch {
// ignore already-exited processes
}
const status = await this.child.status;
await Promise.all([this.#stdoutDone, this.#stderrDone]);
return status.code;
}
}
export function startCliInBackground(...args: string[]): BackgroundCliProcess {
const child = new Deno.Command("node", {
args: [CLI_DIST, ...decorateArgs(args)],
cwd: CLI_DIR,
stdin: "null",
stdout: "piped",
stderr: "piped",
}).spawn();
return new BackgroundCliProcess(child, args);
}

View File

@@ -0,0 +1,231 @@
import { join } from "@std/path";
// ---------------------------------------------------------------------------
// Path resolution
// ---------------------------------------------------------------------------
// This file lives at: src/apps/cli/testdeno/helpers/cli.ts
// CLI root (src/apps/cli/) is two levels up.
// import.meta.dirname is available in Deno 1.40+ as an OS-native path string.
export const CLI_DIR: string = join(import.meta.dirname!, "..", "..");
const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");
// ---------------------------------------------------------------------------
// Result type
// ---------------------------------------------------------------------------
export interface CliResult {
stdout: string;
stderr: string;
/** stdout + stderr concatenated — useful for assertion messages. */
combined: string;
code: number;
}
const TEE_ENABLED = Deno.env.get("LIVESYNC_TEST_TEE") === "1";
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
function concatChunks(chunks: Uint8Array[]): Uint8Array {
const total = chunks.reduce((n, c) => n + c.length, 0);
const out = new Uint8Array(total);
let offset = 0;
for (const c of chunks) {
out.set(c, offset);
offset += c.length;
}
return out;
}
async function collectStream(
stream: ReadableStream<Uint8Array>,
teeTarget: WritableStream<Uint8Array> | null
): Promise<Uint8Array> {
const reader = stream.getReader();
const chunks: Uint8Array[] = [];
const writer = teeTarget?.getWriter();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (value) {
chunks.push(value);
if (writer) {
await writer.write(value);
}
}
}
} finally {
if (writer) {
writer.releaseLock();
}
reader.releaseLock();
}
return concatChunks(chunks);
}
async function runNodeCommand(args: string[], stdinData?: Uint8Array): Promise<CliResult> {
const cliArgs = DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
const child = new Deno.Command("node", {
args: [CLI_DIST, ...cliArgs],
cwd: CLI_DIR,
stdin: stdinData ? "piped" : "null",
stdout: "piped",
stderr: "piped",
}).spawn();
const stdoutPromise = collectStream(child.stdout, TEE_ENABLED ? Deno.stdout.writable : null);
const stderrPromise = collectStream(child.stderr, TEE_ENABLED ? Deno.stderr.writable : null);
if (stdinData) {
const w = child.stdin.getWriter();
await w.write(stdinData);
await w.close();
}
const [status, stdout, stderr] = await Promise.all([child.status, stdoutPromise, stderrPromise]);
const dec = new TextDecoder();
const out = dec.decode(stdout);
const err = dec.decode(stderr);
return { stdout: out, stderr: err, combined: out + err, code: status.code };
}
function isTransientNetworkError(message: string): boolean {
const m = message.toLowerCase();
return (
m.includes("fetch failed") ||
m.includes("econnreset") ||
m.includes("econnrefused") ||
m.includes("und_err_socket") ||
m.includes("other side closed")
);
}
// ---------------------------------------------------------------------------
// Core runners
// ---------------------------------------------------------------------------
/**
* Run the CLI (node dist/index.cjs) with the supplied arguments.
* Pass the vault / DB path as the first argument, exactly as the bash helpers
* do. Does NOT throw on non-zero exit — check `.code` yourself.
*/
export async function runCli(...args: string[]): Promise<CliResult> {
const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
for (let attempt = 0; ; attempt++) {
const result = await runNodeCommand(args);
if (result.code === 0) return result;
if (attempt >= retries || !isTransientNetworkError(result.combined)) {
return result;
}
const waitMs = 400 * (attempt + 1);
console.warn(`[WARN] transient CLI failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
await sleep(waitMs);
}
}
/**
* Run the CLI and throw if it exits non-zero. Returns stdout.
*/
export async function runCliOrFail(...args: string[]): Promise<string> {
const r = await runCli(...args);
if (r.code !== 0) {
throw new Error(`CLI exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
}
return r.stdout;
}
/**
* Run the CLI with data piped to stdin (equivalent to `echo … | run_cli …`
* or `cat file | run_cli …`).
*/
export async function runCliWithInput(input: string | Uint8Array, ...args: string[]): Promise<CliResult> {
const data = typeof input === "string" ? new TextEncoder().encode(input) : input;
const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
for (let attempt = 0; ; attempt++) {
const result = await runNodeCommand(args, data);
if (result.code === 0) return result;
if (attempt >= retries || !isTransientNetworkError(result.combined)) {
return result;
}
const waitMs = 400 * (attempt + 1);
console.warn(`[WARN] transient CLI(stdin) failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
await sleep(waitMs);
}
}
/**
* runCliWithInput — throws on non-zero exit, returns stdout.
*/
export async function runCliWithInputOrFail(input: string | Uint8Array, ...args: string[]): Promise<string> {
const r = await runCliWithInput(input, ...args);
if (r.code !== 0) {
throw new Error(`CLI (with stdin) exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
}
return r.stdout;
}
// ---------------------------------------------------------------------------
// Output helpers
// ---------------------------------------------------------------------------
/** Strip the CLIWatchAdapter banner line that `cat` emits. */
export function sanitiseCatStdout(raw: string): string {
return raw
.split("\n")
.filter((l) => l !== "[CLIWatchAdapter] File watching is not enabled in CLI version")
.join("\n");
}
// ---------------------------------------------------------------------------
// Assertions (parity with test-helpers.sh)
// ---------------------------------------------------------------------------
export function assertContains(haystack: string, needle: string, message: string): void {
if (!haystack.includes(needle)) {
throw new Error(`[FAIL] ${message}\nExpected to find: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
}
}
export function assertNotContains(haystack: string, needle: string, message: string): void {
if (haystack.includes(needle)) {
throw new Error(`[FAIL] ${message}\nDid NOT expect: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
}
}
export async function assertFilesEqual(expectedPath: string, actualPath: string, message: string): Promise<void> {
const [expected, actual] = await Promise.all([Deno.readFile(expectedPath), Deno.readFile(actualPath)]);
if (expected.length !== actual.length || expected.some((b, i) => b !== actual[i])) {
const hex = async (d: Uint8Array<ArrayBuffer>) => {
const h = await crypto.subtle.digest("SHA-256", d);
return [...new Uint8Array(h)].map((b) => b.toString(16).padStart(2, "0")).join("");
};
throw new Error(
`[FAIL] ${message}\nexpected SHA-256: ${await hex(expected)}\nactual SHA-256: ${await hex(actual)}`
);
}
}
// ---------------------------------------------------------------------------
// JSON helpers
// ---------------------------------------------------------------------------
export async function readJsonFile<T = Record<string, unknown>>(filePath: string): Promise<T> {
return JSON.parse(await Deno.readTextFile(filePath)) as T;
}
export function jsonStringField(jsonText: string, field: string): string {
const data = JSON.parse(jsonText) as Record<string, unknown>;
const value = data[field];
return typeof value === "string" ? value : "";
}
export function jsonFieldIsNa(data: Record<string, unknown>, field: string): boolean {
return data[field] === "N/A";
}

View File

@@ -0,0 +1,530 @@
/**
* Docker service management for tests.
*
* CouchDB start/stop/init is implemented directly using `docker` CLI commands
* and the Fetch API, so it works on any platform where Docker (Desktop) is
* available — including Windows — without needing bash.
*/
type DockerInvoker = {
bin: string;
prefix: string[];
label: string;
};
let dockerInvokerPromise: Promise<DockerInvoker> | null = null;
const DOCKER_TEE = Deno.env.get("LIVESYNC_DOCKER_TEE") === "1" || Deno.env.get("LIVESYNC_TEST_TEE") === "1";
// ---------------------------------------------------------------------------
// Low-level docker wrapper
// ---------------------------------------------------------------------------
function parseCommand(command: string): { bin: string; prefix: string[] } {
const parts = command.trim().split(/\s+/).filter(Boolean);
if (parts.length === 0) {
throw new Error("LIVESYNC_DOCKER_COMMAND is empty");
}
return { bin: parts[0], prefix: parts.slice(1) };
}
async function runCommand(bin: string, args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
const cmd = new Deno.Command(bin, {
args,
stdin: "null",
stdout: "piped",
stderr: "piped",
});
try {
const { code, stdout, stderr } = await cmd.output();
const dec = new TextDecoder();
const result = {
code,
stdout: dec.decode(stdout),
stderr: dec.decode(stderr),
};
if (DOCKER_TEE) {
if (result.stdout.trim().length > 0) {
console.log(`[docker:${bin}] ${result.stdout.trimEnd()}`);
}
if (result.stderr.trim().length > 0) {
console.error(`[docker:${bin}] ${result.stderr.trimEnd()}`);
}
}
return result;
} catch (err) {
if (err instanceof Deno.errors.NotFound) {
return {
code: 127,
stdout: "",
stderr: `Command not found: ${bin}`,
};
}
throw err;
}
}
async function resolveDockerInvoker(): Promise<DockerInvoker> {
const custom = Deno.env.get("LIVESYNC_DOCKER_COMMAND")?.trim();
if (custom) {
const parsed = parseCommand(custom);
const runner: DockerInvoker = {
...parsed,
label: `custom(${custom})`,
};
// Validate custom command eagerly so misconfiguration fails fast.
// Shape the check the same way docker() shapes real invocations, so values such as
// 'wsl docker' validate without doubling the `docker` token.
const checkArgs = runner.prefix.length === 0 ? (runner.bin === "wsl" ? ["docker", "--version"] : ["--version"]) : [...runner.prefix, "--version"];
const check = await runCommand(runner.bin, checkArgs);
if (check.code !== 0) {
throw new Error(`LIVESYNC_DOCKER_COMMAND is not usable: ${custom}\n${check.stderr || check.stdout}`);
}
return runner;
}
const mode = (Deno.env.get("LIVESYNC_DOCKER_MODE") ?? "auto").toLowerCase();
const onWindows = Deno.build.os === "windows";
const native: DockerInvoker = { bin: "docker", prefix: [], label: "docker" };
const wsl: DockerInvoker = { bin: "wsl", prefix: [], label: "wsl docker" };
if (mode === "native") {
return native;
}
if (mode === "wsl") {
return wsl;
}
if (mode !== "auto") {
throw new Error(`Unsupported LIVESYNC_DOCKER_MODE='${mode}'. Use auto, native, or wsl.`);
}
// On Windows we prefer `wsl docker` first, then native docker.
// This typically works better in setups where Docker is installed only in
// WSL and not exposed as docker.exe on PATH.
const candidates = onWindows ? [wsl, native] : [native, wsl];
for (const c of candidates) {
if (c.bin === "docker") {
const r = await runCommand("docker", ["--version"]);
if (r.code === 0) return c;
continue;
}
const r = await runCommand("wsl", ["docker", "--version"]);
if (r.code === 0) return c;
}
throw new Error(
[
"Docker command is not available.",
"Set one of:",
"- LIVESYNC_DOCKER_MODE=native",
"- LIVESYNC_DOCKER_MODE=wsl",
"- LIVESYNC_DOCKER_COMMAND='docker'",
"- LIVESYNC_DOCKER_COMMAND='wsl docker'",
].join("\n")
);
}
async function getDockerInvoker(): Promise<DockerInvoker> {
if (!dockerInvokerPromise) {
dockerInvokerPromise = resolveDockerInvoker().then((r) => {
console.log(`[INFO] docker runner: ${r.label}`);
return r;
});
}
return await dockerInvokerPromise;
}
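/**
 * Run a docker subcommand through the resolved invoker.
 *
 * @example
 * ```ts
 * // Roughly `docker ps -a` natively, or `wsl docker ps -a` when running via WSL:
 * const { code, stdout } = await docker("ps", "-a");
 * ```
 */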
async function docker(...args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
const invoker = await getDockerInvoker();
// Either:
// docker <args>
// Or:
// wsl docker <args>
const finalArgs =
invoker.prefix.length === 0
? invoker.bin === "wsl"
? ["docker", ...args]
: args
: [...invoker.prefix, ...args];
const r = await runCommand(invoker.bin, finalArgs);
return { code: r.code, stdout: r.stdout, stderr: r.stderr };
}
async function dockerOrFail(...args: string[]): Promise<string> {
const r = await docker(...args);
if (r.code !== 0) {
throw new Error(`docker ${args[0]} failed (code ${r.code}): ${r.stderr.trim()}`);
}
return r.stdout;
}
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
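/**
 * Poll CouchDB's `/_up` endpoint until it answers OK three times in a row.
 * Gives up after 30 attempts spaced 500 ms apart (each request has a 3 s timeout).
 */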
async function waitForCouchdbStable(hostname: string, user: string, password: string): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = { Authorization: `Basic ${auth}` };
let consecutive = 0;
for (let i = 0; i < 30; i++) {
try {
const r = await fetch(`${h}/_up`, {
headers,
signal: AbortSignal.timeout(3000),
});
if (r.ok) {
consecutive++;
if (consecutive >= 3) return;
} else {
consecutive = 0;
}
} catch {
consecutive = 0;
}
await sleep(500);
}
throw new Error("CouchDB did not become stable in time");
}
// ---------------------------------------------------------------------------
// Fetch with retry (mirrors cli_test_curl_json() retry loop)
// ---------------------------------------------------------------------------
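/**
 * Call `fetch` repeatedly until the response is OK (or its status is listed in
 * `allowStatus`), retrying on network errors and non-allowed statuses.
 *
 * @example
 * ```ts
 * // e.g. treat 412 (database already exists) as success when creating a DB:
 * await fetchRetry(`${h}/${dbname}`, { method: "PUT", headers }, 30, 2000, [412]);
 * ```
 */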
async function fetchRetry(
url: string,
init: RequestInit,
retries = 30,
delayMs = 2000,
allowStatus: number[] = []
): Promise<void> {
let lastError: unknown;
let lastStatus: number | undefined;
for (let i = 0; i < retries; i++) {
try {
const r = await fetch(url, {
signal: AbortSignal.timeout(5000),
...init,
});
lastStatus = r.status;
await r.body?.cancel().catch(() => {});
if (r.ok || allowStatus.includes(r.status)) return;
lastError = `HTTP ${r.status}`;
} catch (e) {
lastError = e;
}
await sleep(delayMs);
}
throw new Error(
`Could not reach ${url} after ${retries} retries: ${lastError} (last status: ${lastStatus ?? "N/A"})`
);
}
// ---------------------------------------------------------------------------
// CouchDB
// ---------------------------------------------------------------------------
//
// TODO: these values could be configurable via environment variables.
//
const COUCHDB_CONTAINER = "couchdb-test";
const COUCHDB_IMAGE = "couchdb:3.5.0";
const MINIO_CONTAINER = "minio-test";
const MINIO_IMAGE = "minio/minio";
const MINIO_MC_IMAGE = "minio/mc";
export async function stopCouchdb(): Promise<void> {
await docker("stop", COUCHDB_CONTAINER);
await docker("rm", COUCHDB_CONTAINER);
}
/**
* Start a CouchDB test container, initialise it, and create the test DB.
* Mirrors cli_test_start_couchdb() from test-helpers.sh, using direct
* docker / fetch calls instead of the bash util scripts.
*/
export async function startCouchdb(couchdbUri: string, user: string, password: string, dbname: string): Promise<void> {
console.log("[INFO] stopping leftover CouchDB container if present");
await stopCouchdb().catch(() => {});
console.log("[INFO] starting CouchDB test container");
await dockerOrFail(
"run",
"-d",
"--name",
COUCHDB_CONTAINER,
"-p",
// TODO: port mapping should be configurable.
"5989:5984",
"-e",
`COUCHDB_USER=${user}`,
"-e",
`COUCHDB_PASSWORD=${password}`,
"-e",
"COUCHDB_SINGLE_NODE=y",
COUCHDB_IMAGE
);
console.log("[INFO] initialising CouchDB");
await initCouchdb(couchdbUri, user, password);
console.log("[INFO] waiting for CouchDB to become stable");
await waitForCouchdbStable(couchdbUri, user, password);
console.log(`[INFO] creating test database: ${dbname}`);
await createCouchdbDatabase(couchdbUri, user, password, dbname);
}
/**
* Mirror couchdb-init.sh: configure single-node CouchDB via its REST API.
*/
async function initCouchdb(hostname: string, user: string, password: string, node = "_local"): Promise<void> {
// Podman environments often resolve localhost to ::1; use 127.0.0.1 like
// the bash script does.
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = {
"Content-Type": "application/json",
Authorization: `Basic ${auth}`,
};
const calls: Array<[string, string, string]> = [
[
"POST",
`${h}/_cluster_setup`,
JSON.stringify({
action: "enable_single_node",
username: user,
password,
bind_address: "0.0.0.0",
port: 5984,
singlenode: true,
}),
],
["PUT", `${h}/_node/${node}/_config/chttpd/require_valid_user`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd_auth/require_valid_user`, '"true"'],
["PUT", `${h}/_node/${node}/_config/httpd/WWW-Authenticate`, '"Basic realm=\\"couchdb\\""'],
["PUT", `${h}/_node/${node}/_config/httpd/enable_cors`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd/enable_cors`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd/max_http_request_size`, '"4294967296"'],
["PUT", `${h}/_node/${node}/_config/couchdb/max_document_size`, '"50000000"'],
["PUT", `${h}/_node/${node}/_config/cors/credentials`, '"true"'],
["PUT", `${h}/_node/${node}/_config/cors/origins`, '"*"'],
];
for (const [method, url, body] of calls) {
await fetchRetry(url, { method, headers, body });
}
}
export async function createCouchdbDatabase(
hostname: string,
user: string,
password: string,
dbname: string
): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
await fetchRetry(`${h}/${dbname}`, {
method: "PUT",
headers: { Authorization: `Basic ${auth}` },
});
}
/** Fetch a CouchDB document, apply `updater` to it, and PUT the result back. */
export async function updateCouchdbDoc(
hostname: string,
user: string,
password: string,
docUrl: string,
updater: (doc: Record<string, unknown>) => Record<string, unknown>
): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = {
"Content-Type": "application/json",
Authorization: `Basic ${auth}`,
};
const getRes = await fetch(`${h}/${docUrl}`, { headers });
const current = (await getRes.json()) as Record<string, unknown>;
const updated = updater(current);
await fetchRetry(`${h}/${docUrl}`, {
method: "PUT",
headers,
body: JSON.stringify(updated),
});
}
// ---------------------------------------------------------------------------
// MinIO
// ---------------------------------------------------------------------------
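/** Quote a value for POSIX `sh`: wrap it in single quotes, escaping embedded quotes as '"'"'. */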
function shQuote(value: string): string {
return `'${value.split("'").join(`'"'"'`)}'`;
}
export async function stopMinio(): Promise<void> {
await docker("stop", MINIO_CONTAINER);
await docker("rm", MINIO_CONTAINER);
}
async function initMinioBucket(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<boolean> {
const cmd =
`mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
`mc mb --ignore-existing myminio/${shQuote(bucket)} >/dev/null 2>&1`;
const r = await docker("run", "--rm", "--network", "host", "--entrypoint", "/bin/sh", MINIO_MC_IMAGE, "-c", cmd);
return r.code === 0;
}
async function waitForMinioBucket(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<void> {
for (let i = 0; i < 30; i++) {
const checkCmd =
`mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
`mc ls myminio/${shQuote(bucket)} >/dev/null 2>&1`;
const check = await docker(
"run",
"--rm",
"--network",
// Host networking lets the mc container reach MinIO via localhost in some
// environments (e.g. Docker Desktop on Windows). A more portable approach
// that works across all environments is still needed.
"host",
"--entrypoint",
"/bin/sh",
MINIO_MC_IMAGE,
"-c",
checkCmd
);
if (check.code === 0) {
return;
}
await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
await sleep(2000);
}
throw new Error(`MinIO bucket not ready: ${bucket}`);
}
export async function startMinio(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<void> {
console.log("[INFO] stopping leftover MinIO container if present");
await stopMinio().catch(() => {});
console.log("[INFO] starting MinIO test container");
await dockerOrFail(
"run",
"-d",
"--name",
MINIO_CONTAINER,
// TODO: Ports should be configurable.
"-p",
"9000:9000",
"-p",
"9001:9001",
"-e",
`MINIO_ROOT_USER=${accessKey}`,
"-e",
`MINIO_ROOT_PASSWORD=${secretKey}`,
"-e",
`MINIO_SERVER_URL=${minioEndpoint}`,
MINIO_IMAGE,
"server",
"/data",
"--console-address",
":9001"
);
console.log(`[INFO] initialising MinIO test bucket: ${bucket}`);
let initialised = false;
for (let i = 0; i < 5; i++) {
if (await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket)) {
initialised = true;
break;
}
await sleep(2000);
}
if (!initialised) {
throw new Error(`Could not initialise MinIO bucket after retries: ${bucket}`);
}
await waitForMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
}
// ---------------------------------------------------------------------------
// P2P relay (strfry)
// ---------------------------------------------------------------------------
// TODO: these values could be configurable via environment variables.
const P2P_RELAY_CONTAINER = "relay-test";
const P2P_RELAY_IMAGE = "ghcr.io/hoytech/strfry:latest";
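// Minimal strfry configuration written inside the container at startup.
// The relay listens on port 7777, which is published on host port 4000 below.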
const STRFRY_BOOTSTRAP_SH = String.raw`cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"
relay {
bind = "0.0.0.0"
port = 7777
nofiles = 100000
info {
name = "livesync test relay"
description = "local relay for livesync p2p tests"
}
maxWebsocketPayloadSize = 131072
autoPingSeconds = 55
writePolicy {
plugin = ""
}
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay`;
export async function stopP2pRelay(): Promise<void> {
await docker("stop", P2P_RELAY_CONTAINER);
await docker("rm", P2P_RELAY_CONTAINER);
}
/**
* Start the local P2P relay container through the same docker runner used
* by CouchDB helpers. This keeps process ownership consistent across
* start/stop on Windows, WSL, and native Linux/macOS.
*/
export async function startP2pRelay(): Promise<void> {
console.log("[INFO] stopping leftover P2P relay container if present");
await stopP2pRelay().catch(() => {});
console.log("[INFO] starting local P2P relay container");
await dockerOrFail(
"run",
"-d",
"--name",
P2P_RELAY_CONTAINER,
"-p",
//TODO: port mapping should be configurable.
"4000:7777",
"--tmpfs",
"/app/strfry-db:rw,size=256m",
"--entrypoint",
"sh",
P2P_RELAY_IMAGE,
"-lc",
STRFRY_BOOTSTRAP_SH
);
}
export function isLocalP2pRelay(relayUrl: string): boolean {
return relayUrl === "ws://localhost:4000" || relayUrl === "ws://localhost:4000/";
}

View File

@@ -0,0 +1,26 @@
/**
* Load a .env-style file (KEY=value per line) into a plain object.
* Equivalent to `source $TEST_ENV_FILE; set -a` in bash.
* A dedicated dotenv library might be preferable; for now this is a minimal implementation that covers our use cases.
*
* Supported value formats:
* KEY=value
* KEY='single quoted'
* KEY="double quoted"
* # comment lines are ignored
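*
* Usage:
* const env = await loadEnvFile(".test.env");
* const uri = env["COUCHDB_URI"];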
*/
export async function loadEnvFile(filePath: string): Promise<Record<string, string>> {
const text = await Deno.readTextFile(filePath);
const result: Record<string, string> = {};
for (const line of text.split("\n")) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
const idx = trimmed.indexOf("=");
if (idx < 0) continue;
const key = trimmed.slice(0, idx).trim();
const raw = trimmed.slice(idx + 1).trim();
// Strip surrounding single or double quotes
result[key] = raw.replace(/^(['"])(.*)\1$/, "$2");
}
return result;
}

View File

@@ -0,0 +1,52 @@
import { runCli } from "./cli.ts";
import { isLocalP2pRelay, startP2pRelay, stopP2pRelay } from "./docker.ts";
export type PeerEntry = {
id: string;
name: string;
};
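/**
 * Parse the tab-separated peer list printed by `p2p-peers`.
 * Only lines of the form "[peer]\t<id>\t<name>" are kept; everything else is ignored.
 */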
export function parsePeerLines(output: string): PeerEntry[] {
return output
.split(/\r?\n/)
.map((line) => line.split("\t"))
.filter((parts) => parts.length >= 3 && parts[0] === "[peer]")
.map((parts) => ({ id: parts[1], name: parts[2] }));
}
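/**
 * Run `p2p-peers` against the given vault/settings and return the matching peer
 * (by id or name) when `targetPeer` is given, otherwise the first peer found.
 * Falls back to the "Advertisement from ..." log line when no peer lines appear.
 */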
export async function discoverPeer(
vaultDir: string,
settingsFile: string,
timeoutSeconds: number,
targetPeer?: string
): Promise<PeerEntry> {
const result = await runCli(vaultDir, "--settings", settingsFile, "p2p-peers", String(timeoutSeconds));
if (result.code !== 0) {
throw new Error(`p2p-peers failed\n${result.combined}`);
}
const peers = parsePeerLines(result.stdout);
if (targetPeer) {
const matched = peers.find((peer) => peer.id === targetPeer || peer.name === targetPeer);
if (matched) return matched;
}
if (peers.length === 0) {
const fallback = result.combined.match(/Advertisement from\s+([^\s]+)/);
if (fallback?.[1]) {
return { id: fallback[1], name: fallback[1] };
}
throw new Error(`No peers discovered\n${result.combined}`);
}
return peers[0];
}
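/** Start the bundled relay container only when `relay` points at the local test relay (ws://localhost:4000). */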
export async function maybeStartLocalRelay(relay: string): Promise<boolean> {
if (!isLocalP2pRelay(relay)) return false;
await startP2pRelay();
return true;
}
export async function stopLocalRelayIfStarted(started: boolean): Promise<void> {
if (started) {
await stopP2pRelay().catch(() => {});
}
}

View File

@@ -0,0 +1,205 @@
import { join } from "@std/path";
import { CLI_DIR, runCliOrFail } from "./cli.ts";
// ---------------------------------------------------------------------------
// Settings file initialisation
// ---------------------------------------------------------------------------
/** Generate a default settings file using the CLI's init-settings command. */
export async function initSettingsFile(settingsFile: string): Promise<void> {
await runCliOrFail("init-settings", "--force", settingsFile);
}
/**
* Generate a full setup URI from a settings file via src/lib API.
* Mirrors the bash flow in test-setup-put-cat-linux.sh.
*/
export async function generateSetupUriFromSettings(settingsFile: string, setupPassphrase: string): Promise<string> {
const repoRoot = join(CLI_DIR, "..", "..", "..");
const script = [
"import fs from 'node:fs';",
"import { pathToFileURL } from 'node:url';",
"(async () => {",
" const modulePath = process.env.REPO_ROOT + '/src/lib/src/API/processSetting.ts';",
" const moduleUrl = pathToFileURL(modulePath).href;",
" const { encodeSettingsToSetupURI } = await import(moduleUrl);",
" const settingsPath = process.env.SETTINGS_FILE;",
" const passphrase = process.env.SETUP_PASSPHRASE;",
" const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf-8'));",
" settings.couchDB_DBNAME = 'setup-put-cat-db';",
" settings.couchDB_URI = 'http://127.0.0.1:5999';",
" settings.couchDB_USER = 'dummy';",
" settings.couchDB_PASSWORD = 'dummy';",
" settings.liveSync = false;",
" settings.syncOnStart = false;",
" settings.syncOnSave = false;",
" const uri = await encodeSettingsToSetupURI(settings, passphrase);",
" process.stdout.write(uri.trim());",
"})();",
].join("\n");
const scriptPath = await Deno.makeTempFile({
prefix: "livesync-setup-uri-",
suffix: ".mts",
});
await Deno.writeTextFile(scriptPath, script);
try {
const cmd = new Deno.Command("npx", {
args: ["tsx", scriptPath],
cwd: CLI_DIR,
env: {
REPO_ROOT: repoRoot,
SETTINGS_FILE: settingsFile,
SETUP_PASSPHRASE: setupPassphrase,
},
stdin: "null",
stdout: "piped",
stderr: "piped",
});
const { code, stdout, stderr } = await cmd.output();
const dec = new TextDecoder();
if (code !== 0) {
throw new Error(
`Failed to generate setup URI (code ${code})\nstdout: ${dec.decode(stdout)}\nstderr: ${dec.decode(stderr)}`
);
}
const uri = dec.decode(stdout).trim();
if (!uri) {
throw new Error("Failed to generate setup URI: output is empty");
}
return uri;
} finally {
await Deno.remove(scriptPath).catch(() => {});
}
}
/** Set isConfigured=true in a settings file (required for mirror / scan). */
export async function markSettingsConfigured(settingsFile: string): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
// ---------------------------------------------------------------------------
// CouchDB remote settings
// ---------------------------------------------------------------------------
/**
* Apply CouchDB connection details to a settings file.
* Mirrors cli_test_apply_couchdb_settings() from test-helpers.sh.
*/
export async function applyCouchdbSettings(
settingsFile: string,
couchdbUri: string,
couchdbUser: string,
couchdbPassword: string,
couchdbDbname: string,
liveSync = false
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.couchDB_URI = couchdbUri;
data.couchDB_USER = couchdbUser;
data.couchDB_PASSWORD = couchdbPassword;
data.couchDB_DBNAME = couchdbDbname;
if (liveSync) {
data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
}
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
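/**
 * Apply either CouchDB or MinIO (S3-compatible) connection details plus the
 * common sync/encryption flags to a settings file. For CouchDB the remoteType
 * field is left empty; for MinIO it is set to "MINIO".
 */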
export async function applyRemoteSyncSettings(
settingsFile: string,
options: {
remoteType: "COUCHDB" | "MINIO";
couchdbUri?: string;
couchdbUser?: string;
couchdbPassword?: string;
couchdbDbname?: string;
minioBucket?: string;
minioEndpoint?: string;
minioAccessKey?: string;
minioSecretKey?: string;
encrypt?: boolean;
passphrase?: string;
}
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
if (options.remoteType === "COUCHDB") {
data.remoteType = "";
data.couchDB_URI = options.couchdbUri;
data.couchDB_USER = options.couchdbUser;
data.couchDB_PASSWORD = options.couchdbPassword;
data.couchDB_DBNAME = options.couchdbDbname;
} else {
data.remoteType = "MINIO";
data.bucket = options.minioBucket;
data.endpoint = options.minioEndpoint;
data.accessKey = options.minioAccessKey;
data.secretKey = options.minioSecretKey;
data.region = "auto";
data.forcePathStyle = true;
}
data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
data.encrypt = options.encrypt === true;
data.passphrase = options.encrypt ? (options.passphrase ?? "") : "";
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
// ---------------------------------------------------------------------------
// P2P settings
// ---------------------------------------------------------------------------
/**
* Apply P2P connection details to a settings file.
* Mirrors cli_test_apply_p2p_settings() from test-helpers.sh.
*/
export async function applyP2pSettings(
settingsFile: string,
roomId: string,
passphrase: string,
appId = "self-hosted-livesync-cli-tests",
relays = "ws://localhost:4000/",
autoAccept = "~.*"
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.P2P_Enabled = true;
data.P2P_AutoStart = false;
data.P2P_AutoBroadcast = false;
data.P2P_AppID = appId;
data.P2P_roomID = roomId;
data.P2P_passphrase = passphrase;
data.P2P_relays = relays;
data.P2P_AutoAcceptingPeers = autoAccept;
data.P2P_AutoDenyingPeers = "";
data.P2P_IsHeadless = true;
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
export async function applyP2pTestTweaks(settingsFile: string, deviceName: string, passphrase: string): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.remoteType = "ONLY_P2P";
data.encrypt = true;
data.passphrase = passphrase;
data.usePathObfuscation = true;
data.handleFilenameCaseSensitive = false;
data.customChunkSize = 50;
data.usePluginSyncV2 = true;
data.doNotUseFixedRevisionForChunks = false;
data.P2P_DevicePeerName = deviceName;
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}

View File

@@ -0,0 +1,33 @@
import { join } from "@std/path";
/**
* A temporary directory that cleans itself up via `await using`.
* Requires TypeScript 5.2+ / Deno 1.40+ for the AsyncDisposable protocol.
*
* @example
* ```ts
* await using tmp = await TempDir.create();
* const file = tmp.join("data.json");
* ```
*/
export class TempDir implements AsyncDisposable {
readonly path: string;
private constructor(path: string) {
this.path = path;
}
static async create(prefix = "livesync-deno-test"): Promise<TempDir> {
const path = await Deno.makeTempDir({ prefix: `${prefix}.` });
return new TempDir(path);
}
/** Return an OS path joined to the temp directory root. */
join(...parts: string[]): string {
return join(this.path, ...parts);
}
async [Symbol.asyncDispose](): Promise<void> {
await Deno.remove(this.path, { recursive: true }).catch(() => {});
}
}

View File

@@ -0,0 +1,276 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import {
runCli,
runCliOrFail,
runCliWithInputOrFail,
sanitiseCatStdout,
assertFilesEqual,
jsonStringField,
} from "./helpers/cli.ts";
import { applyRemoteSyncSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, startMinio, stopCouchdb, stopMinio } from "./helpers/docker.ts";
type RemoteType = "COUCHDB" | "MINIO";
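/** Return the first non-empty value among the given environment variables, or throw if none is set. */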
function requireEnv(...keys: string[]): string {
for (const key of keys) {
const value = Deno.env.get(key)?.trim();
if (value) return value;
}
throw new Error(`Required env var is missing: ${keys.join(" or ")}`);
}
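/**
 * End-to-end scenario: two vaults (A and B) sharing one remote.
 * Covers put/info/sync/push/pull/cat/rm plus conflict creation and resolution,
 * against either a CouchDB or a MinIO remote, optionally with encryption.
 */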
export async function runScenario(remoteType: RemoteType, encrypt: boolean): Promise<void> {
const dbSuffix = `${Date.now()}-${Math.floor(Math.random() * 100000)}`;
const couchdbUri = remoteType === "COUCHDB" ? requireEnv("COUCHDB_URI", "hostname").replace(/\/$/, "") : "";
const couchdbUser = remoteType === "COUCHDB" ? requireEnv("COUCHDB_USER", "username") : "";
const couchdbPassword = remoteType === "COUCHDB" ? requireEnv("COUCHDB_PASSWORD", "password") : "";
const dbPrefix = remoteType === "COUCHDB" ? requireEnv("COUCHDB_DBNAME", "dbname") : "";
const dbname = remoteType === "COUCHDB" ? `${dbPrefix}-${dbSuffix}` : "";
const minioEndpoint = remoteType === "MINIO" ? requireEnv("MINIO_ENDPOINT", "minioEndpoint").replace(/\/$/, "") : "";
const minioAccessKey = remoteType === "MINIO" ? requireEnv("MINIO_ACCESS_KEY", "accessKey") : "";
const minioSecretKey = remoteType === "MINIO" ? requireEnv("MINIO_SECRET_KEY", "secretKey") : "";
const minioBucketBase = remoteType === "MINIO" ? requireEnv("MINIO_BUCKET_NAME", "bucketName") : "";
const minioBucket = remoteType === "MINIO" ? `${minioBucketBase}-${dbSuffix}` : "";
const passphrase = "e2e-passphrase";
await using workDir = await TempDir.create(
`livesync-cli-e2e-${remoteType.toLowerCase()}-${encrypt ? "enc1" : "enc0"}`
);
const vaultA = workDir.join("testvault_a");
const vaultB = workDir.join("testvault_b");
const settingsA = workDir.join("test-settings-a.json");
const settingsB = workDir.join("test-settings-b.json");
const pushSrc = workDir.join("push-source.txt");
const pullDst = workDir.join("pull-destination.txt");
const pushBinarySrc = workDir.join("push-source.bin");
const pullBinaryDst = workDir.join("pull-destination.bin");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (remoteType === "COUCHDB") {
await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
} else {
await startMinio(minioEndpoint, minioAccessKey, minioSecretKey, minioBucket);
}
try {
await initSettingsFile(settingsA);
await initSettingsFile(settingsB);
await applyRemoteSyncSettings(settingsA, {
remoteType,
couchdbUri,
couchdbUser,
couchdbPassword,
couchdbDbname: dbname,
minioBucket,
minioEndpoint,
minioAccessKey,
minioSecretKey,
encrypt,
passphrase,
});
await applyRemoteSyncSettings(settingsB, {
remoteType,
couchdbUri,
couchdbUser,
couchdbPassword,
couchdbDbname: dbname,
minioBucket,
minioEndpoint,
minioAccessKey,
minioSecretKey,
encrypt,
passphrase,
});
const syncBoth = async () => {
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
};
const targetAOnly = "e2e/a-only-info.md";
const targetSync = "e2e/sync-info.md";
const targetSyncTwiceFirst = "e2e/sync-twice-first.md";
const targetSyncTwiceSecond = "e2e/sync-twice-second.md";
const targetPush = "e2e/pushed-from-a.md";
const targetPut = "e2e/put-from-a.md";
const targetPushBinary = "e2e/pushed-from-a.bin";
const targetConflict = "e2e/conflict.md";
await runCliWithInputOrFail("alpha-from-a\n", vaultA, "--settings", settingsA, "put", targetAOnly);
const infoAOnly = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetAOnly);
assert(infoAOnly.includes(`"path": "${targetAOnly}"`));
await runCliWithInputOrFail("visible-after-sync\n", vaultA, "--settings", settingsA, "put", targetSync);
await syncBoth();
const infoBSync = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetSync);
assert(infoBSync.includes(`"path": "${targetSync}"`));
await runCliWithInputOrFail(
`first-sync-round-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetSyncTwiceFirst
);
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
const firstVisible = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceFirst)
).trimEnd();
assert(firstVisible === `first-sync-round-${dbSuffix}`);
await runCliWithInputOrFail(
`second-sync-round-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetSyncTwiceSecond
);
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
const secondVisible = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceSecond)
).trimEnd();
assert(secondVisible === `second-sync-round-${dbSuffix}`);
await Deno.writeTextFile(pushSrc, `pushed-content-${dbSuffix}\n`);
await runCliOrFail(vaultA, "--settings", settingsA, "push", pushSrc, targetPush);
await runCliWithInputOrFail(`put-content-${dbSuffix}\n`, vaultA, "--settings", settingsA, "put", targetPut);
await syncBoth();
await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPush, pullDst);
await assertFilesEqual(pushSrc, pullDst, "B pull result does not match pushed source");
const catBPut = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPut)
).trimEnd();
assert(catBPut === `put-content-${dbSuffix}`);
const binary = new Uint8Array(4096);
binary.fill(0x61);
await Deno.writeFile(pushBinarySrc, binary);
await runCliOrFail(vaultA, "--settings", settingsA, "push", pushBinarySrc, targetPushBinary);
await syncBoth();
await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPushBinary, pullBinaryDst);
await assertFilesEqual(pushBinarySrc, pullBinaryDst, "B pull result does not match pushed binary source");
await runCliOrFail(vaultA, "--settings", settingsA, "rm", targetPut);
await syncBoth();
const removed = await runCli(vaultB, "--settings", settingsB, "cat", targetPut);
assert(removed.code !== 0, `B cat should fail after A removed the file\n${removed.combined}`);
await runCliWithInputOrFail("conflict-base\n", vaultA, "--settings", settingsA, "put", targetConflict);
await syncBoth();
await runCliWithInputOrFail(
`conflict-from-a-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetConflict
);
await runCliWithInputOrFail(
`conflict-from-b-${dbSuffix}\n`,
vaultB,
"--settings",
settingsB,
"put",
targetConflict
);
let infoAConflict = "";
let infoBConflict = "";
let conflictDetected = false;
for (const side of ["a", "b", "a"] as const) {
await runCliOrFail(
side === "a" ? vaultA : vaultB,
"--settings",
side === "a" ? settingsA : settingsB,
"sync"
);
infoAConflict = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
infoBConflict = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
if (
jsonStringField(infoAConflict, "conflicts") !== "N/A" ||
jsonStringField(infoBConflict, "conflicts") !== "N/A"
) {
conflictDetected = true;
break;
}
}
assert(conflictDetected, `conflict was expected\nA: ${infoAConflict}\nB: ${infoBConflict}`);
const lsAConflict =
(await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const lsBConflict =
(await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const revA = lsAConflict.split("\t")[3] ?? "";
const revB = lsBConflict.split("\t")[3] ?? "";
assert(
revA.includes("*") || revB.includes("*"),
`conflicted entry should be marked with '*'\nA: ${lsAConflict}\nB: ${lsBConflict}`
);
const keepRevision = jsonStringField(infoAConflict, "revision");
assert(keepRevision.length > 0, `could not extract revision\n${infoAConflict}`);
await runCliOrFail(vaultA, "--settings", settingsA, "resolve", targetConflict, keepRevision);
let resolved = false;
let infoAResolved = "";
let infoBResolved = "";
for (let i = 0; i < 6; i++) {
await syncBoth();
infoAResolved = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
infoBResolved = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
if (
jsonStringField(infoAResolved, "conflicts") === "N/A" &&
jsonStringField(infoBResolved, "conflicts") === "N/A"
) {
resolved = true;
break;
}
const retryRevision = jsonStringField(infoAResolved, "revision");
if (retryRevision) {
await runCli(vaultA, "--settings", settingsA, "resolve", targetConflict, retryRevision);
}
}
assert(resolved, `conflicts should be resolved\nA: ${infoAResolved}\nB: ${infoBResolved}`);
const lsAResolved =
(await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const lsBResolved =
(await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
assert(!(lsAResolved.split("\t")[3] ?? "").includes("*"));
assert(!(lsBResolved.split("\t")[3] ?? "").includes("*"));
const catAResolved = sanitiseCatStdout(
await runCliOrFail(vaultA, "--settings", settingsA, "cat", targetConflict)
).trimEnd();
const catBResolved = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetConflict)
).trimEnd();
assert(catAResolved === catBResolved, `resolved content should match\nA: ${catAResolved}\nB: ${catBResolved}`);
} finally {
if (!keepDocker) {
if (remoteType === "COUCHDB") {
await stopCouchdb().catch(() => {});
} else {
await stopMinio().catch(() => {});
}
}
}
}
Deno.test("e2e: two vaults over CouchDB without encryption", async () => {
await runScenario("COUCHDB", false);
});
Deno.test("e2e: two vaults over CouchDB with encryption", async () => {
await runScenario("COUCHDB", true);
});

View File

@@ -0,0 +1,20 @@
import { runScenario } from "./test-e2e-two-vaults-couchdb.ts";
type MatrixCase = {
remoteType: "COUCHDB" | "MINIO";
encrypt: boolean;
label: string;
};
const matrixCases: MatrixCase[] = [
{ remoteType: "COUCHDB", encrypt: false, label: "COUCHDB-enc0" },
{ remoteType: "COUCHDB", encrypt: true, label: "COUCHDB-enc1" },
{ remoteType: "MINIO", encrypt: false, label: "MINIO-enc0" },
{ remoteType: "MINIO", encrypt: true, label: "MINIO-enc1" },
];
for (const tc of matrixCases) {
Deno.test(`e2e matrix: ${tc.label}`, async () => {
await runScenario(tc.remoteType, tc.encrypt);
});
}

View File

@@ -0,0 +1,196 @@
/**
* Deno port of test-mirror-linux.sh
*
* Tests the `mirror` command — bidirectional synchronisation between a local
* storage directory (vault) and an in-process database.
*
* Covered cases (identical to the bash test):
* 1. Storage-only file -> synced into DB (UPDATE DATABASE)
* 2. DB-only file -> restored to storage (UPDATE STORAGE)
* 3. DB-deleted file -> NOT restored to storage (UPDATE STORAGE skip)
* 4. Both, storage newer -> DB updated (SYNC: STORAGE -> DB)
* 5. Both, DB newer -> storage updated (SYNC: DB -> STORAGE)
* 6. Compatibility mode -> omitted vault-path works (same DB + vault path)
*
* No external services are required.
*
* Run:
* deno test -A test-mirror.ts
*/
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { initSettingsFile, markSettingsConfigured } from "./helpers/settings.ts";
Deno.test("mirror: storage <-> DB synchronisation", async (t) => {
await using workDir = await TempDir.create("livesync-cli-mirror");
// -------------------------------------------------------------------
// Shared setup
// -------------------------------------------------------------------
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
const dbDir = workDir.join("db");
await Deno.mkdir(workDir.join("vault", "test"), { recursive: true });
await Deno.mkdir(dbDir, { recursive: true });
await initSettingsFile(settingsFile);
// isConfigured=true is required for canProceedScan in the mirror command.
await markSettingsConfigured(settingsFile);
// Copy settings to the DB directory (separated-path mode)
const dbSettings = workDir.join("db", "settings.json");
await Deno.copyFile(settingsFile, dbSettings);
/** Run mirror in separated-path mode: DB dir ≠ vault dir. */
const runMirror = () => runCliOrFail(dbDir, "--settings", dbSettings, "mirror", vaultDir);
/** Run mirror in compatibility mode: DB path = vault path. */
const runMirrorCompat = () => runCliOrFail(vaultDir, "--settings", settingsFile, "mirror");
// Helper wrappers
const dbRun = (...args: string[]) => runCliOrFail(dbDir, "--settings", dbSettings, ...args);
const compatRun = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);
// -------------------------------------------------------------------
// Case 1: storage-only -> DB (UPDATE DATABASE)
// -------------------------------------------------------------------
await t.step("case 1: storage-only file is synced into DB", async () => {
const storageFile = workDir.join("vault", "test", "storage-only.md");
await Deno.writeTextFile(storageFile, "storage-only content\n");
await runMirror();
const resultFile = workDir.join("case1-pull.txt");
await dbRun("pull", "test/storage-only.md", resultFile);
const storageContent = await Deno.readTextFile(storageFile);
const pulledContent = await Deno.readTextFile(resultFile);
assert(
storageContent === pulledContent,
`storage-only file NOT synced into DB\nexpected: ${storageContent}\ngot: ${pulledContent}`
);
console.log("[PASS] case 1: storage-only file was synced into DB");
});
// -------------------------------------------------------------------
// Case 2: DB-only -> storage (UPDATE STORAGE)
// -------------------------------------------------------------------
await t.step("case 2: DB-only file is restored to storage", async () => {
// `push` takes a file path, so create a temporary source file and push it.
const dbOnlySrc = workDir.join("db-only-src.txt");
await Deno.writeTextFile(dbOnlySrc, "db-only content\n");
await dbRun("push", dbOnlySrc, "test/db-only.md");
const storagePath = workDir.join("vault", "test", "db-only.md");
assert(!(await exists(storagePath)), "db-only.md unexpectedly exists in storage before mirror");
await runMirror();
assert(await exists(storagePath), "DB-only file NOT restored to storage after mirror");
const content = await Deno.readTextFile(storagePath);
assert(content === "db-only content\n", `DB-only file restored but content mismatch: '${content}'`);
console.log("[PASS] case 2: DB-only file was restored to storage");
});
// -------------------------------------------------------------------
// Case 3: DB-deleted -> storage untouched
// -------------------------------------------------------------------
await t.step("case 3: DB-deleted entry is NOT restored to storage", async () => {
const deletedSrc = workDir.join("deleted-src.txt");
await Deno.writeTextFile(deletedSrc, "to-be-deleted\n");
await dbRun("push", deletedSrc, "test/deleted.md");
await dbRun("rm", "test/deleted.md");
await runMirror();
const storagePath = workDir.join("vault", "test", "deleted.md");
assert(!(await exists(storagePath)), "deleted DB entry was incorrectly restored to storage");
console.log("[PASS] case 3: deleted DB entry was NOT restored to storage");
});
// -------------------------------------------------------------------
// Case 4: storage newer -> DB updated (SYNC: STORAGE -> DB)
// -------------------------------------------------------------------
await t.step("case 4: storage newer than DB -> DB is updated", async () => {
// Seed DB with old content (mtime ~ now)
const seedFile = workDir.join("case4-seed.txt");
await Deno.writeTextFile(seedFile, "old content\n");
await dbRun("push", seedFile, "test/sync-storage-newer.md");
// Write new content to storage with a timestamp 1 hour in the future
const storageFile = workDir.join("vault", "test", "sync-storage-newer.md");
await Deno.writeTextFile(storageFile, "new content\n");
await Deno.utime(storageFile, new Date(), new Date(Date.now() + 3600_000));
await runMirror();
const resultFile = workDir.join("case4-pull.txt");
await dbRun("pull", "test/sync-storage-newer.md", resultFile);
const storageContent = await Deno.readTextFile(storageFile);
const pulledContent = await Deno.readTextFile(resultFile);
assert(
storageContent === pulledContent,
`DB NOT updated to match newer storage file\nexpected: ${storageContent}\ngot: ${pulledContent}`
);
console.log("[PASS] case 4: DB updated to match newer storage file");
});
// -------------------------------------------------------------------
// Case 5: DB newer -> storage updated (SYNC: DB -> STORAGE)
// -------------------------------------------------------------------
await t.step("case 5: DB newer than storage -> storage is updated", async () => {
// Write old content to storage with a timestamp 1 hour in the past
const storageFile = workDir.join("vault", "test", "sync-db-newer.md");
await Deno.writeTextFile(storageFile, "old storage content\n");
await Deno.utime(storageFile, new Date(), new Date(Date.now() - 3600_000));
// Write new content to DB only (mtime ~ now, newer than the storage file)
const dbNewFile = workDir.join("case5-db-new.txt");
await Deno.writeTextFile(dbNewFile, "new db content\n");
await dbRun("push", dbNewFile, "test/sync-db-newer.md");
await runMirror();
const content = await Deno.readTextFile(storageFile);
assert(content === "new db content\n", `storage NOT updated to match newer DB entry (got: '${content}')`);
console.log("[PASS] case 5: storage updated to match newer DB entry");
});
// -------------------------------------------------------------------
// Case 6: compatibility mode (vault path = DB path)
// -------------------------------------------------------------------
await t.step("case 6: compatibility mode (omitted vault-path)", async () => {
const compatFile = workDir.join("vault", "compat.md");
await Deno.writeTextFile(compatFile, "compat-content\n");
await runMirrorCompat();
const resultFile = workDir.join("case6-pull.txt");
await compatRun("pull", "compat.md", resultFile);
const pulled = await Deno.readTextFile(resultFile);
assert(pulled === "compat-content\n", `Compatibility mode failed to sync file into DB (got: '${pulled}')`);
console.log("[PASS] case 6: compatibility mode works");
});
});
// ---------------------------------------------------------------------------
// Utility
// ---------------------------------------------------------------------------
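/** Return true when the path exists (Deno.stat succeeds), false otherwise. */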
async function exists(path: string): Promise<boolean> {
try {
await Deno.stat(path);
return true;
} catch {
return false;
}
}

View File

@@ -0,0 +1,40 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings } from "./helpers/settings.ts";
import { startP2pRelay, stopP2pRelay, isLocalP2pRelay } from "./helpers/docker.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
Deno.test("p2p-host: starts and becomes ready", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const useInternalRelay = Deno.env.get("USE_INTERNAL_RELAY") !== "0";
await using workDir = await TempDir.create("livesync-cli-p2p-host");
const vaultDir = workDir.join("vault-host");
const settingsFile = workDir.join("settings-host.json");
await Deno.mkdir(vaultDir, { recursive: true });
let relayStarted = false;
if (useInternalRelay && isLocalP2pRelay(relay)) {
await startP2pRelay();
relayStarted = true;
}
try {
await initSettingsFile(settingsFile);
await applyP2pSettings(settingsFile, roomId, passphrase, appId, relay);
const host = startCliInBackground(vaultDir, "--settings", settingsFile, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
assert(host.combined.includes("P2P host is running"));
} finally {
await host.stop();
}
} finally {
if (relayStarted) {
await stopP2pRelay().catch(() => {});
}
}
});

View File

@@ -0,0 +1,42 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
Deno.test("p2p-peers: discovers host through local relay", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const timeoutSeconds = Number(Deno.env.get("TIMEOUT_SECONDS") ?? "8");
await using workDir = await TempDir.create("livesync-cli-p2p-peers-local-relay");
const hostVault = workDir.join("vault-host");
const hostSettings = workDir.join("settings-host.json");
const clientVault = workDir.join("vault");
const clientSettings = workDir.join("settings.json");
await Deno.mkdir(hostVault, { recursive: true });
await Deno.mkdir(clientVault, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
await initSettingsFile(hostSettings);
await initSettingsFile(clientSettings);
await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);
const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peer = await discoverPeer(clientVault, clientSettings, timeoutSeconds);
assert(peer.id.length > 0);
assert(peer.name.length > 0);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,59 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { runCli } from "./helpers/cli.ts";
Deno.test("p2p-sync: discovers peer and completes sync", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "12");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");
await using workDir = await TempDir.create("livesync-cli-p2p-sync");
const hostVault = workDir.join("vault-host");
const hostSettings = workDir.join("settings-host.json");
const clientVault = workDir.join("vault-sync");
const clientSettings = workDir.join("settings-sync.json");
await Deno.mkdir(hostVault, { recursive: true });
await Deno.mkdir(clientVault, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
await initSettingsFile(hostSettings);
await initSettingsFile(clientSettings);
await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);
const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peer = await discoverPeer(
clientVault,
clientSettings,
peersTimeout,
Deno.env.get("TARGET_PEER") ?? undefined
);
const syncResult = await runCli(
clientVault,
"--settings",
clientSettings,
"p2p-sync",
peer.id,
String(syncTimeout)
);
assert(
syncResult.code === 0,
`p2p-sync failed\nstdout: ${syncResult.stdout}\nstderr: ${syncResult.stderr}`
);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,118 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { jsonStringField, runCliOrFail, runCliWithInputOrFail, sanitiseCatStdout } from "./helpers/cli.ts";
Deno.test("p2p: three nodes detect and resolve conflicts", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = `${Deno.env.get("ROOM_ID_PREFIX") ?? "p2p-room"}-${Date.now()}`;
const passphrase = `${Deno.env.get("PASSPHRASE_PREFIX") ?? "p2p-pass"}-${Date.now()}`;
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "10");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");
await using workDir = await TempDir.create("livesync-cli-p2p-3nodes");
const vaultA = workDir.join("vault-a");
const vaultB = workDir.join("vault-b");
const vaultC = workDir.join("vault-c");
const settingsA = workDir.join("settings-a.json");
const settingsB = workDir.join("settings-b.json");
const settingsC = workDir.join("settings-c.json");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
await Deno.mkdir(vaultC, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
for (const settings of [settingsA, settingsB, settingsC]) {
await initSettingsFile(settings);
await applyP2pSettings(settings, roomId, passphrase, appId, relay);
}
const host = startCliInBackground(vaultA, "--settings", settingsA, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peerFromB = await discoverPeer(vaultB, settingsB, peersTimeout);
const peerFromC = await discoverPeer(vaultC, settingsC, peersTimeout);
const targetPath = "p2p/conflicted-from-two-clients.txt";
await runCliWithInputOrFail("from-client-b-v1\n", vaultB, "--settings", settingsB, "put", targetPath);
await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
let visibleOnC = "";
for (let i = 0; i < 5; i++) {
try {
visibleOnC = sanitiseCatStdout(
await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
).trimEnd();
if (visibleOnC === "from-client-b-v1") break;
} catch {
// retry below
}
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
}
assert(visibleOnC === "from-client-b-v1", `C should see file created by B, got: ${visibleOnC}`);
await runCliWithInputOrFail("from-client-b-v2\n", vaultB, "--settings", settingsB, "put", targetPath);
await runCliWithInputOrFail("from-client-c-v2\n", vaultC, "--settings", settingsC, "put", targetPath);
const [syncB, syncC] = await Promise.all([
runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout)),
runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout)),
]);
void syncB;
void syncC;
await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
const infoBBefore = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
const conflictsBBefore = jsonStringField(infoBBefore, "conflicts");
const keepRevB = jsonStringField(infoBBefore, "revision");
assert(
conflictsBBefore !== "N/A" && conflictsBBefore.length > 0,
`expected conflicts on B\n${infoBBefore}`
);
assert(keepRevB.length > 0, `could not read revision on B\n${infoBBefore}`);
const infoCBefore = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
const conflictsCBefore = jsonStringField(infoCBefore, "conflicts");
const keepRevC = jsonStringField(infoCBefore, "revision");
assert(
conflictsCBefore !== "N/A" && conflictsCBefore.length > 0,
`expected conflicts on C\n${infoCBefore}`
);
assert(keepRevC.length > 0, `could not read revision on C\n${infoCBefore}`);
await runCliOrFail(vaultB, "--settings", settingsB, "resolve", targetPath, keepRevB);
await runCliOrFail(vaultC, "--settings", settingsC, "resolve", targetPath, keepRevC);
const infoBAfter = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
const infoCAfter = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
assert(jsonStringField(infoBAfter, "conflicts") === "N/A", `conflict still remains on B\n${infoBAfter}`);
assert(jsonStringField(infoCAfter, "conflicts") === "N/A", `conflict still remains on C\n${infoCAfter}`);
const finalContentB = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPath)
).trimEnd();
const finalContentC = sanitiseCatStdout(
await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
).trimEnd();
assert(
finalContentB === "from-client-b-v2" || finalContentB === "from-client-c-v2",
`unexpected final content on B: ${finalContentB}`
);
assert(
finalContentC === "from-client-b-v2" || finalContentC === "from-client-c-v2",
`unexpected final content on C: ${finalContentC}`
);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,111 @@
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, applyP2pTestTweaks, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { assertFilesEqual, runCliOrFail } from "./helpers/cli.ts";
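/** Write `size` bytes of the same value to `path`; used to build binary test payloads. */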
async function writeFilledFile(path: string, size: number, byte: number): Promise<void> {
const data = new Uint8Array(size);
data.fill(byte);
await Deno.writeFile(path, data);
}
Deno.test("p2p: upload/download reproduction scenario", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "20");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "240");
const roomId = `p2p-room-${Date.now()}`;
const passphrase = `p2p-pass-${Date.now()}`;
await using workDir = await TempDir.create("livesync-cli-p2p-upload-download");
const vaultHost = workDir.join("vault-host");
const vaultUp = workDir.join("vault-up");
const vaultDown = workDir.join("vault-down");
const settingsHost = workDir.join("settings-host.json");
const settingsUp = workDir.join("settings-up.json");
const settingsDown = workDir.join("settings-down.json");
for (const dir of [vaultHost, vaultUp, vaultDown]) {
await Deno.mkdir(dir, { recursive: true });
}
const relayStarted = await maybeStartLocalRelay(relay);
try {
for (const settings of [settingsHost, settingsUp, settingsDown]) {
await initSettingsFile(settings);
await applyP2pSettings(settings, roomId, passphrase, appId, relay, "~.*");
}
await applyP2pTestTweaks(settingsHost, "p2p-cli-host", passphrase);
await applyP2pTestTweaks(settingsUp, `p2p-cli-upload-${Date.now()}`, passphrase);
await applyP2pTestTweaks(settingsDown, `p2p-cli-download-${Date.now()}`, passphrase);
const host = startCliInBackground(vaultHost, "--settings", settingsHost, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const uploadPeer = await discoverPeer(vaultUp, settingsUp, peersTimeout);
const storeText = workDir.join("store-file.md");
const diffA = workDir.join("test-diff-1.md");
const diffB = workDir.join("test-diff-2.md");
const diffC = workDir.join("test-diff-3.md");
await Deno.writeTextFile(storeText, "Hello, World!\n");
await Deno.writeTextFile(diffA, "Content A\n");
await Deno.writeTextFile(diffB, "Content B\n");
await Deno.writeTextFile(diffC, "Content C\n");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", storeText, "p2p/store-file.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffA, "p2p/test-diff-1.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffB, "p2p/test-diff-2.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffC, "p2p/test-diff-3.md");
const large100k = workDir.join("large-100k.txt");
const large1m = workDir.join("large-1m.txt");
const binary100k = workDir.join("binary-100k.bin");
const binary5m = workDir.join("binary-5m.bin");
await Deno.writeTextFile(large100k, "a".repeat(100000));
await Deno.writeTextFile(large1m, "b".repeat(1000000));
await writeFilledFile(binary100k, 100000, 0x5a);
await writeFilledFile(binary5m, 5000000, 0x7c);
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large100k, "p2p/large-100000.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large1m, "p2p/large-1000000.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary100k, "p2p/binary-100000.bin");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary5m, "p2p/binary-5000000.bin");
await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));
await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));
const downloadPeer = await discoverPeer(vaultDown, settingsDown, peersTimeout);
await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));
await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));
const downStoreText = workDir.join("down-store-file.md");
const downDiffA = workDir.join("down-test-diff-1.md");
const downDiffB = workDir.join("down-test-diff-2.md");
const downDiffC = workDir.join("down-test-diff-3.md");
const downLarge100k = workDir.join("down-large-100k.txt");
const downLarge1m = workDir.join("down-large-1m.txt");
const downBinary100k = workDir.join("down-binary-100k.bin");
const downBinary5m = workDir.join("down-binary-5m.bin");
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/store-file.md", downStoreText);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-1.md", downDiffA);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-2.md", downDiffB);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-3.md", downDiffC);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-100000.md", downLarge100k);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-1000000.md", downLarge1m);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-100000.bin", downBinary100k);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-5000000.bin", downBinary5m);
await assertFilesEqual(storeText, downStoreText, "store-file mismatch");
await assertFilesEqual(diffA, downDiffA, "test-diff-1 mismatch");
await assertFilesEqual(diffB, downDiffB, "test-diff-2 mismatch");
await assertFilesEqual(diffC, downDiffC, "test-diff-3 mismatch");
await assertFilesEqual(large100k, downLarge100k, "large-100000 mismatch");
await assertFilesEqual(large1m, downLarge1m, "large-1000000 mismatch");
await assertFilesEqual(binary100k, downBinary100k, "binary-100000 mismatch");
await assertFilesEqual(binary5m, downBinary5m, "binary-5000000 mismatch");
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,78 @@
/**
* Deno port of test-push-pull-linux.sh
*
* Requires CouchDB connection details either via environment variables or a
* .test.env file. If neither is present, the test logs a warning and the
* CLI will likely fail at the push step.
*
* Run:
* deno test -A test-push-pull.ts
*
* With explicit CouchDB:
* COUCHDB_URI=http://127.0.0.1:5984 \
* COUCHDB_USER=admin \
* COUCHDB_PASSWORD=password \
* COUCHDB_DBNAME=livesync-test \
* deno test -A test-push-pull.ts
*/
import { join } from "@std/path";
import { assertEquals } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";
const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/push-pull.txt";
Deno.test("push/pull roundtrip", async () => {
await using workDir = await TempDir.create("livesync-cli-push-pull");
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
await Deno.mkdir(join(vaultDir, "test"), { recursive: true });
const uri = Deno.env.get("COUCHDB_URI") ?? "http://127.0.0.1:5989/";
const user = Deno.env.get("COUCHDB_USER") ?? "admin";
const password = Deno.env.get("COUCHDB_PASSWORD") ?? "testpassword";
const dbname = Deno.env.get("COUCHDB_DBNAME") ?? `push-pull-${Date.now()}`;
const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (shouldStartDocker) {
await startCouchdb(uri, user, password, dbname);
}
try {
await initSettingsFile(settingsFile);
if (uri && user && password && dbname) {
console.log("[INFO] applying CouchDB env vars to settings");
await applyCouchdbSettings(settingsFile, uri, user, password, dbname);
} else {
console.warn(
"[WARN] CouchDB env vars not fully set — push/pull may fail unless the generated settings already contain connection details"
);
}
const srcFile = workDir.join("push-source.txt");
const pulledFile = workDir.join("pull-result.txt");
const content = `push-pull-test ${new Date().toISOString()}\n`;
await Deno.writeTextFile(srcFile, content);
console.log(`[INFO] push -> ${REMOTE_PATH}`);
await runCliOrFail(vaultDir, "--settings", settingsFile, "push", srcFile, REMOTE_PATH);
console.log(`[INFO] pull <- ${REMOTE_PATH}`);
await runCliOrFail(vaultDir, "--settings", settingsFile, "pull", REMOTE_PATH, pulledFile);
const pulled = await Deno.readTextFile(pulledFile);
assertEquals(content, pulled, "push/pull roundtrip content mismatch");
console.log("[PASS] push/pull roundtrip matched");
} finally {
if (shouldStartDocker && !keepDocker) {
await stopCouchdb().catch(() => {});
}
}
});

View File

@@ -0,0 +1,214 @@
/**
* Deno port of test-setup-put-cat-linux.sh
*
* Tests all local-DB file operations that require no external remote:
* setup /
* push / cat / ls / info / rm / resolve / cat-rev / pull-rev
*
* Run (no external services needed):
* deno test -A test-setup-put-cat.ts
*/
import { join } from "@std/path";
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCli, runCliOrFail, runCliWithInput, sanitiseCatStdout } from "./helpers/cli.ts";
import { generateSetupUriFromSettings, initSettingsFile } from "./helpers/settings.ts";
const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/setup-put-cat.txt";
const SETUP_PASSPHRASE = Deno.env.get("SETUP_PASSPHRASE") ?? "setup-passphrase";
Deno.test("CLI file operations: push / cat / ls / info / rm / resolve / cat-rev / pull-rev", async (t) => {
await using workDir = await TempDir.create("livesync-cli-setup-put-cat");
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
await Deno.mkdir(join(vaultDir, "test"), { recursive: true });
await initSettingsFile(settingsFile);
const setupUri = await generateSetupUriFromSettings(settingsFile, SETUP_PASSPHRASE);
const setupResult = await runCliWithInput(
`${SETUP_PASSPHRASE}\n`,
vaultDir,
"--settings",
settingsFile,
"setup",
setupUri
);
assert(setupResult.code === 0, `setup command exited with ${setupResult.code}\n${setupResult.combined}`);
assert(
setupResult.combined.includes("[Command] setup ->"),
`setup command did not execute expected code path\n${setupResult.combined}`
);
const run = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);
// ------------------------------------------------------------------
// push / cat roundtrip
// ------------------------------------------------------------------
await t.step("push/cat roundtrip", async () => {
const srcFile = workDir.join("put-source.txt");
const content = `setup-put-cat-test ${new Date().toISOString()}\nline-2\n`;
await Deno.writeTextFile(srcFile, content);
console.log(`[INFO] push -> ${REMOTE_PATH}`);
await runCliWithInput(content, vaultDir, "--settings", settingsFile, "put", REMOTE_PATH);
console.log(`[INFO] cat <- ${REMOTE_PATH}`);
const rawOutput = await run("cat", REMOTE_PATH);
const catOutput = sanitiseCatStdout(rawOutput);
assertEquals(content, catOutput, "push/cat roundtrip content mismatch");
console.log("[PASS] push/cat roundtrip matched");
});
// ------------------------------------------------------------------
// ls: single file
// ------------------------------------------------------------------
await t.step("ls output format (single file)", async () => {
const lsOutput = await run("ls", REMOTE_PATH);
const line = lsOutput
.trim()
.split("\n")
.find((l) => l.startsWith(REMOTE_PATH + "\t"));
assert(line, `ls output did not include ${REMOTE_PATH}`);
const [lsPath, lsSize, lsMtime, lsRev] = line.split("\t");
assertEquals(lsPath, REMOTE_PATH, "ls path column mismatch");
assert(/^\d+$/.test(lsSize), `ls size not numeric: ${lsSize}`);
assert(/^\d+$/.test(lsMtime), `ls mtime not numeric: ${lsMtime}`);
assert(lsRev?.length > 0, "ls revision column is empty");
console.log("[PASS] ls output format matched");
});
// ------------------------------------------------------------------
// ls: prefix filter and sort order
// ------------------------------------------------------------------
await t.step("ls prefix filter and sort order", async () => {
await runCliWithInput("file-a\n", vaultDir, "--settings", settingsFile, "put", "test/a-first.txt");
await runCliWithInput("file-z\n", vaultDir, "--settings", settingsFile, "put", "test/z-last.txt");
const lsOut = await run("ls", "test/");
const lines = lsOut.trim().split("\n").filter(Boolean);
assert(lines.length >= 3, "ls prefix output expected at least 3 rows");
// Verify sorted ascending by path
const paths = lines.map((l) => l.split("\t")[0]);
for (let i = 1; i < paths.length; i++) {
assert(paths[i - 1] <= paths[i], `ls output not sorted: ${paths[i - 1]} > ${paths[i]}`);
}
assert(
lines.some((l) => l.startsWith("test/a-first.txt\t")),
"ls prefix output missing test/a-first.txt"
);
assert(
lines.some((l) => l.startsWith("test/z-last.txt\t")),
"ls prefix output missing test/z-last.txt"
);
console.log("[PASS] ls prefix and sorting matched");
});
// ------------------------------------------------------------------
// ls: no-match prefix returns empty output
// ------------------------------------------------------------------
await t.step("ls no-match prefix returns empty", async () => {
const lsOut = await run("ls", "no-such-prefix/");
assertEquals(lsOut.trim(), "", "ls no-match prefix should produce empty output");
console.log("[PASS] ls no-match prefix matched");
});
// ------------------------------------------------------------------
// info: JSON output format
// ------------------------------------------------------------------
await t.step("info output JSON format", async () => {
const infoOut = await run("info", REMOTE_PATH);
let data: Record<string, unknown>;
try {
data = JSON.parse(infoOut);
} catch {
throw new Error(`info output is not valid JSON:\n${infoOut}`);
}
assertEquals(data.path, REMOTE_PATH, "info .path mismatch");
assertEquals(data.filename, REMOTE_PATH.split("/").at(-1), "info .filename mismatch");
assert(typeof data.size === "number" && data.size >= 0, `info .size invalid: ${data.size}`);
assert(typeof data.chunks === "number" && (data.chunks as number) >= 1, `info .chunks invalid: ${data.chunks}`);
assertEquals(data.conflicts, "N/A", "info .conflicts should be N/A");
console.log("[PASS] info output format matched");
});
// ------------------------------------------------------------------
// info: non-existent path exits non-zero
// ------------------------------------------------------------------
await t.step("info non-existent path returns non-zero", async () => {
const r = await runCli(vaultDir, "--settings", settingsFile, "info", "no-such-file.md");
assert(r.code !== 0, "info on non-existent file should exit non-zero");
console.log("[PASS] info non-existent path returns non-zero");
});
// ------------------------------------------------------------------
// rm: removes file from ls and makes cat fail
// ------------------------------------------------------------------
await t.step("rm removes target from ls and cat", async () => {
await run("rm", "test/z-last.txt");
const catResult = await runCli(vaultDir, "--settings", settingsFile, "cat", "test/z-last.txt");
assert(catResult.code !== 0, "rm target should not be readable by cat");
const lsOut = await run("ls", "test/");
assert(!lsOut.includes("test/z-last.txt\t"), "rm target should not appear in ls output");
console.log("[PASS] rm removed target from visible entries");
});
// ------------------------------------------------------------------
// resolve: accepts current revision, rejects invalid revision
// ------------------------------------------------------------------
await t.step("resolve: valid and invalid revisions", async () => {
const lsLine = (await run("ls", "test/a-first.txt")).trim().split("\n")[0];
assert(lsLine, "could not fetch revision for resolve test");
const rev = lsLine.split("\t")[3];
assert(rev?.length > 0, "revision was empty for resolve test");
await run("resolve", "test/a-first.txt", rev);
console.log("[PASS] resolve accepted current revision");
const badR = await runCli(vaultDir, "--settings", settingsFile, "resolve", "test/a-first.txt", "9-no-such-rev");
assert(badR.code !== 0, "resolve with non-existent revision should exit non-zero");
console.log("[PASS] resolve non-existent revision returns non-zero");
});
// ------------------------------------------------------------------
// cat-rev / pull-rev: retrieve a past revision
// ------------------------------------------------------------------
await t.step("cat-rev / pull-rev: retrieve past revision", async () => {
const revPath = "test/revision-history.txt";
await runCliWithInput("revision-v1\n", vaultDir, "--settings", settingsFile, "put", revPath);
await runCliWithInput("revision-v2\n", vaultDir, "--settings", settingsFile, "put", revPath);
await runCliWithInput("revision-v3\n", vaultDir, "--settings", settingsFile, "put", revPath);
const infoOut = await run("info", revPath);
const infoData = JSON.parse(infoOut) as {
revisions?: string[];
};
const revisions = Array.isArray(infoData.revisions) ? infoData.revisions : [];
const pastRev = revisions.find((r): r is string => typeof r === "string" && r !== "N/A");
assert(pastRev, "info output did not include any past revision");
const catRevOut = await run("cat-rev", revPath, pastRev);
const catRevClean = sanitiseCatStdout(catRevOut);
assert(
catRevClean === "revision-v1\n" || catRevClean === "revision-v2\n",
`cat-rev output did not match expected past revision:\n${catRevClean}`
);
console.log("[PASS] cat-rev matched one of the past revisions from info");
const pullRevFile = workDir.join("rev-pull-output.txt");
await run("pull-rev", revPath, pullRevFile, pastRev);
const pullRevContent = await Deno.readTextFile(pullRevFile);
assert(
pullRevContent === "revision-v1\n" || pullRevContent === "revision-v2\n",
`pull-rev output did not match expected past revision:\n${pullRevContent}`
);
console.log("[PASS] pull-rev matched one of the past revisions from info");
});
});

View File

@@ -0,0 +1,93 @@
/**
* Deno port of test-sync-locked-remote-linux.sh
*
* Verifies CLI sync behaviour when the remote milestone document is unlocked
* versus locked.
*/
import { assert, assertStringIncludes } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCli } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { createCouchdbDatabase, startCouchdb, stopCouchdb, updateCouchdbDoc } from "./helpers/docker.ts";
const MILESTONE_DOC = "_local/obsydian_livesync_milestone";
function requireEnv(...keys: string[]): string {
for (const key of keys) {
const value = Deno.env.get(key)?.trim();
if (value) return value;
}
throw new Error(`Required env var is missing: ${keys.join(" or ")}`);
}
Deno.test("sync: actionable error against locked remote DB", async () => {
const couchdbUri = requireEnv("COUCHDB_URI", "hostname").replace(/\/$/, "");
const couchdbUser = requireEnv("COUCHDB_USER", "username");
const couchdbPassword = requireEnv("COUCHDB_PASSWORD", "password");
const dbPrefix = requireEnv("COUCHDB_DBNAME", "dbname");
const dbname = `${dbPrefix}-locked-${Date.now()}-${Math.floor(Math.random() * 100000)}`;
await using workDir = await TempDir.create("livesync-cli-locked-test");
const vaultDir = workDir.join("vault");
const settingsFile = workDir.join("settings.json");
await Deno.mkdir(vaultDir, { recursive: true });
const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (shouldStartDocker) {
console.log(`[INFO] starting CouchDB and creating test database: ${dbname}`);
await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
} else {
console.log(`[INFO] using existing CouchDB and creating test database: ${dbname}`);
await createCouchdbDatabase(couchdbUri, couchdbUser, couchdbPassword, dbname);
}
try {
await initSettingsFile(settingsFile);
await applyCouchdbSettings(settingsFile, couchdbUri, couchdbUser, couchdbPassword, dbname, true);
console.log("[CASE] initial sync to create milestone document");
const initialSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
initialSync.code === 0,
`initial sync failed\nstdout: ${initialSync.stdout}\nstderr: ${initialSync.stderr}`
);
const updateMilestone = async (locked: boolean) => {
await updateCouchdbDoc(couchdbUri, couchdbUser, couchdbPassword, `${dbname}/${MILESTONE_DOC}`, (doc) => ({
...doc,
locked,
accepted_nodes: [],
}));
};
console.log("[CASE] sync should succeed when remote is not locked");
await updateMilestone(false);
const unlockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
unlockedSync.code === 0,
`sync should succeed when remote is not locked\nstdout: ${unlockedSync.stdout}\nstderr: ${unlockedSync.stderr}`
);
assert(
!unlockedSync.combined.includes("The remote database is locked"),
`locked error should not appear when remote is not locked\n${unlockedSync.combined}`
);
console.log("[PASS] unlocked remote DB syncs successfully");
console.log("[CASE] sync should fail with actionable error when remote is locked");
await updateMilestone(true);
const lockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
lockedSync.code !== 0,
`sync should fail when remote is locked\nstdout: ${lockedSync.stdout}\nstderr: ${lockedSync.stderr}`
);
assertStringIncludes(lockedSync.combined, "The remote database is locked and this device is not yet accepted");
console.log("[PASS] locked remote DB produces actionable CLI error");
} finally {
if (shouldStartDocker && !keepDocker) {
await stopCouchdb().catch(() => {});
}
}
});

View File

@@ -0,0 +1,272 @@
/**
* Deno port of test-sync-two-local-databases-linux.sh
*
* Tests two-vault synchronisation via CouchDB including conflict detection
* and resolution.
*
* Requires CouchDB connection details. Provide them via environment variables
* OR place a .test.env file at src/apps/cli/.test.env.
*
* By default, a CouchDB Docker container is started automatically
* (LIVESYNC_START_DOCKER=1). Set LIVESYNC_START_DOCKER=0 to use an existing
* CouchDB instance instead.
*
* Run:
* deno test -A test-sync-two-local-databases.ts
*
* With an existing CouchDB:
* COUCHDB_URI=http://127.0.0.1:5984 \
* COUCHDB_USER=admin \
* COUCHDB_PASSWORD=password \
* COUCHDB_DBNAME=livesync-test \
* LIVESYNC_START_DOCKER=0 \
* deno test -A test-sync-two-local-databases.ts
*/
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail, jsonFieldIsNa } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";
// ---------------------------------------------------------------------------
// Load configuration
// ---------------------------------------------------------------------------
async function resolveConfig(): Promise<{
uri: string;
user: string;
password: string;
baseDbname: string;
} | null> {
const env = Deno.env.toObject();
const uri = (env["COUCHDB_URI"] ?? env["hostname"] ?? "").replace(/\/$/, "");
const user = env["COUCHDB_USER"] ?? env["username"] ?? "";
const password = env["COUCHDB_PASSWORD"] ?? env["password"] ?? "";
const baseDbname = env["COUCHDB_DBNAME"] ?? env["dbname"] ?? "livesync-test";
if (!uri || !user || !password) return null;
return { uri, user, password, baseDbname };
}
const config = await resolveConfig();
const START_DOCKER = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const KEEP_DOCKER = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
const SYNC_RETRY = Number(Deno.env.get("LIVESYNC_SYNC_RETRY") ?? "8");
// Provide a sane default for flaky remote connectivity in Docker-on-WSL
// environments. Users can override explicitly if needed.
if (!Deno.env.has("LIVESYNC_CLI_RETRY")) {
Deno.env.set("LIVESYNC_CLI_RETRY", "2");
}
// ---------------------------------------------------------------------------
// Test suite
// ---------------------------------------------------------------------------
Deno.test(
{
name: "sync two local databases: sync + conflict detection + resolution",
ignore: config === null,
},
async (t) => {
if (!config) return; // narrowing for TypeScript
const suffix = `${Date.now()}-${Math.floor(Math.random() * 65535)}`;
const dbname = `${config.baseDbname}-${suffix}`;
await using workDir = await TempDir.create("livesync-cli-two-db-test");
// ------------------------------------------------------------------
// Docker lifecycle
// ------------------------------------------------------------------
if (START_DOCKER) {
await startCouchdb(config.uri, config.user, config.password, dbname);
}
try {
await runSuite(t, workDir, config, dbname);
} finally {
if (START_DOCKER && !KEEP_DOCKER) {
await stopCouchdb().catch(() => {});
}
if (START_DOCKER && KEEP_DOCKER) {
console.log("[INFO] LIVESYNC_DEBUG_KEEP_DOCKER=1, keeping couchdb-test container");
}
console.log(`[INFO] test database '${dbname}' is preserved for debugging.`);
}
}
);
// ---------------------------------------------------------------------------
// Suite implementation
// ---------------------------------------------------------------------------
async function runSuite(
t: Deno.TestContext,
workDir: TempDir,
config: { uri: string; user: string; password: string },
dbname: string
): Promise<void> {
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
const runWithRetry = async <T>(label: string, fn: () => Promise<T>, retries = SYNC_RETRY): Promise<T> => {
let lastErr: unknown;
for (let i = 0; i <= retries; i++) {
try {
return await fn();
} catch (err) {
lastErr = err;
if (i === retries) break;
const delayMs = 500 * (i + 1);
console.warn(`[WARN] ${label} failed, retrying (${i + 1}/${retries}) in ${delayMs}ms`);
await sleep(delayMs);
}
}
throw lastErr;
};
const vaultA = workDir.join("vault-a");
const vaultB = workDir.join("vault-b");
const settingsA = workDir.join("a-settings.json");
const settingsB = workDir.join("b-settings.json");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
await initSettingsFile(settingsA);
await initSettingsFile(settingsB);
const applySettings = async (f: string) =>
applyCouchdbSettings(f, config.uri, config.user, config.password, dbname, /* liveSync */ true);
await applySettings(settingsA);
await applySettings(settingsB);
const runA = (...args: string[]) => runCliOrFail(vaultA, "--settings", settingsA, ...args);
const runB = (...args: string[]) => runCliOrFail(vaultB, "--settings", settingsB, ...args);
const syncA = () => runWithRetry("syncA", () => runA("sync"));
const syncB = () => runWithRetry("syncB", () => runB("sync"));
const catA = (path: string) => runA("cat", path);
const catB = (path: string) => runB("cat", path);
// ------------------------------------------------------------------
// Case 1: A creates file, B reads after sync
// ------------------------------------------------------------------
await t.step("case 1: A creates file -> B can read after sync", async () => {
const srcA = workDir.join("from-a-src.txt");
await Deno.writeTextFile(srcA, "from-a\n");
await runA("push", srcA, "shared/from-a.txt");
await syncA();
await syncB();
const value = (await catB("shared/from-a.txt")).replace(/\r\n/g, "\n").trimEnd();
assertEquals(value, "from-a", "B could not read file created on A");
console.log("[PASS] case 1 passed");
});
// ------------------------------------------------------------------
// Case 2: B creates file, A reads after sync
// ------------------------------------------------------------------
await t.step("case 2: B creates file -> A can read after sync", async () => {
const srcB = workDir.join("from-b-src.txt");
await Deno.writeTextFile(srcB, "from-b\n");
await runB("push", srcB, "shared/from-b.txt");
await syncB();
await syncA();
const value = (await catA("shared/from-b.txt")).replace(/\r\n/g, "\n").trimEnd();
assertEquals(value, "from-b", "A could not read file created on B");
console.log("[PASS] case 2 passed");
});
// ------------------------------------------------------------------
// Case 3: concurrent edits create a conflict
// ------------------------------------------------------------------
await t.step("case 3: concurrent edits create conflict", async () => {
const baseSrc = workDir.join("base-src.txt");
await Deno.writeTextFile(baseSrc, "base\n");
await runA("push", baseSrc, "shared/conflicted.txt");
await syncA();
await syncB();
const aEdit = workDir.join("edit-a.txt");
const bEdit = workDir.join("edit-b.txt");
await Deno.writeTextFile(aEdit, "edit-from-a\n");
await Deno.writeTextFile(bEdit, "edit-from-b\n");
await runA("push", aEdit, "shared/conflicted.txt");
await runB("push", bEdit, "shared/conflicted.txt");
const infoFileA = workDir.join("info-a.json");
const infoFileB = workDir.join("info-b.json");
let conflictDetected = false;
for (const side of ["a", "b"] as const) {
if (side === "a") await syncA();
else await syncB();
await Deno.writeTextFile(infoFileA, await runA("info", "shared/conflicted.txt"));
await Deno.writeTextFile(infoFileB, await runB("info", "shared/conflicted.txt"));
const da = JSON.parse(await Deno.readTextFile(infoFileA)) as Record<string, unknown>;
const db = JSON.parse(await Deno.readTextFile(infoFileB)) as Record<string, unknown>;
if (!jsonFieldIsNa(da, "conflicts") || !jsonFieldIsNa(db, "conflicts")) {
conflictDetected = true;
break;
}
}
assert(conflictDetected, "expected conflict after concurrent edits, but both sides show N/A");
console.log("[PASS] case 3 conflict detected");
});
// ------------------------------------------------------------------
// Case 4: resolve on A, verify B has no conflict after sync
// ------------------------------------------------------------------
await t.step("case 4: resolve on A propagates to B", async () => {
const infoFileA = workDir.join("info-a-resolve.json");
const infoFileB = workDir.join("info-b-resolve.json");
// Ensure A sees the conflict
for (let i = 0; i < 5; i++) {
const raw = await runA("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, raw);
const da = JSON.parse(raw) as Record<string, unknown>;
if (!jsonFieldIsNa(da, "conflicts")) break;
await syncB();
await syncA();
}
const rawA = await runA("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, rawA);
const dataA = JSON.parse(rawA) as Record<string, unknown>;
assert(!jsonFieldIsNa(dataA, "conflicts"), "A does not see conflict, cannot resolve from A only");
const keepRev = dataA["revision"] as string;
assert(keepRev?.length > 0, "could not read revision from A info output");
await runA("resolve", "shared/conflicted.txt", keepRev);
let resolved = false;
for (let i = 0; i < 6; i++) {
await syncA();
await syncB();
const rawA2 = await runA("info", "shared/conflicted.txt");
const rawB2 = await runB("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, rawA2);
await Deno.writeTextFile(infoFileB, rawB2);
const da2 = JSON.parse(rawA2) as Record<string, unknown>;
const db2 = JSON.parse(rawB2) as Record<string, unknown>;
if (jsonFieldIsNa(da2, "conflicts") && jsonFieldIsNa(db2, "conflicts")) {
resolved = true;
break;
}
// If A still sees a conflict, resolve it again
if (!jsonFieldIsNa(da2, "conflicts")) {
const rev2 = da2["revision"] as string;
if (rev2) await runA("resolve", "shared/conflicted.txt", rev2).catch(() => {});
}
}
assert(resolved, "conflicts should be resolved on both A and B");
const contentA = (await catA("shared/conflicted.txt")).replace(/\r\n/g, "\n");
const contentB = (await catB("shared/conflicted.txt")).replace(/\r\n/g, "\n");
assertEquals(contentA, contentB, "resolved content mismatch between A and B");
console.log("[PASS] case 4 passed");
console.log("[PASS] all sync/resolve scenarios passed");
});
}

View File

@@ -0,0 +1,298 @@
# CLI Deno Test Development Notes
This document provides an overview of the Deno-based compatibility tests under `src/apps/cli/testdeno/`.
The existing bash tests under `src/apps/cli/test/` are preserved, while a Windows-friendly suite is maintained in parallel.
---
## Goals
- Keep existing bash tests intact.
- Provide direct execution from Windows PowerShell.
- Establish a TypeScript (Deno) foundation for core end-to-end and integration scenarios.
---
## Directory structure
```
src/apps/cli/testdeno/
deno.json
CONTRIBUTING_TESTS.md
helpers/
backgroundCli.ts
cli.ts
docker.ts
env.ts
p2p.ts
settings.ts
temp.ts
test-e2e-two-vaults-couchdb.ts
test-push-pull.ts
test-p2p-host.ts
test-p2p-peers-local-relay.ts
test-p2p-sync.ts
test-p2p-three-nodes-conflict.ts
test-p2p-upload-download-repro.ts
test-e2e-two-vaults-matrix.ts
test-setup-put-cat.ts
test-mirror.ts
test-sync-two-local-databases.ts
test-sync-locked-remote.ts
```
---
## Key files
### `deno.json`
- Defines Deno tasks.
- Defines import maps for `@std/assert` and `@std/path`.
Main tasks:
- `deno task test`
- `deno task test:local`
- `deno task test:push-pull`
- `deno task test:setup-put-cat`
- `deno task test:mirror`
- `deno task test:sync-two-local`
- `deno task test:sync-locked-remote`
- `deno task test:p2p-host`
- `deno task test:p2p-peers`
- `deno task test:p2p-sync`
- `deno task test:p2p-three-nodes`
- `deno task test:p2p-upload-download`
- `deno task test:e2e-couchdb`
- `deno task test:e2e-matrix`
### `helpers/cli.ts`
- CLI execution wrappers.
- `runCli`, `runCliOrFail`, `runCliWithInput`.
- Output normalisation via `sanitiseCatStdout`.
- Comparison utilities, including `assertFilesEqual`.
This file corresponds to `run_cli` and common assertions in `test-helpers.sh`.
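For orientation, a minimal usage sketch combining these wrappers (signatures as exercised by the existing Deno tests; the paths are placeholders):
```ts
import { assertEquals } from "@std/assert";
import { runCli, runCliOrFail, sanitiseCatStdout } from "./helpers/cli.ts";

// Placeholder paths; real tests derive these from TempDir.
const vaultDir = "/tmp/example-vault";
const settingsFile = "/tmp/example-data.json";

// runCliOrFail throws on a non-zero exit code and resolves with the captured output.
const raw = await runCliOrFail(vaultDir, "--settings", settingsFile, "cat", "test/a.txt");
assertEquals(sanitiseCatStdout(raw), "expected\n");

// runCli never throws; inspect the exit code and combined output directly.
const result = await runCli(vaultDir, "--settings", settingsFile, "info", "no-such-file.md");
if (result.code !== 0) console.log("expected failure:", result.combined);
```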
### `helpers/settings.ts`
- Executes `init-settings --force`.
- Marks `isConfigured = true`.
- Applies CouchDB and P2P settings.
- Applies remote synchronisation settings and P2P test tweaks.
This file corresponds to settings helpers in `test-helpers.sh`.
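A typical preparation sequence, following the argument order used by the tests in this directory (values below are placeholders):
```ts
import { applyCouchdbSettings, generateSetupUriFromSettings, initSettingsFile } from "./helpers/settings.ts";

const settingsFile = "/tmp/example-data.json"; // placeholder path
// Runs `init-settings --force` and marks the settings as configured.
await initSettingsFile(settingsFile);
// Applies CouchDB connection details; the trailing flag enables LiveSync-style replication.
await applyCouchdbSettings(settingsFile, "http://127.0.0.1:5984", "admin", "password", "livesync-test", true);
// For setup-URI based tests, derive a URI from the prepared settings file.
const setupUri = await generateSetupUriFromSettings(settingsFile, "setup-passphrase");
console.log(setupUri);
```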
### `helpers/docker.ts`
- Starts, stops, and initialises CouchDB directly from Deno.
- Configures CouchDB via `fetch + retry`.
- Starts and stops the P2P relay through the same Docker runner.
Both CouchDB and P2P relay flows are bash-independent.
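The lifecycle the CouchDB-backed tests follow, sketched with these helpers (connection values are placeholders):
```ts
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";

const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (shouldStartDocker) {
    // Starts the couchdb-test container and prepares the target database.
    await startCouchdb("http://127.0.0.1:5984", "admin", "password", "livesync-test");
}
try {
    // ... run the test body against the database ...
} finally {
    if (shouldStartDocker && !keepDocker) {
        await stopCouchdb().catch(() => {}); // best-effort teardown, as in the tests
    }
}
```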
### `helpers/backgroundCli.ts`
- Starts long-running commands such as `p2p-host` in the background.
- Waits for readiness logs and handles termination.
### `helpers/p2p.ts`
- Determines whether a local relay should be started.
- Parses `p2p-peers` output.
- Discovers peer IDs with a fallback based on advertisement logs.
### `helpers/env.ts`
- Loads `.test.env`.
- Supports `KEY=value`, single-quoted values, and double-quoted values.
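The quoting rules amount to stripping one matching pair of surrounding quotes; a self-contained sketch of those semantics (illustrative only, not the helper's actual implementation):
```ts
// Illustrative sketch of the `.test.env` line format; not helpers/env.ts itself.
function parseTestEnvLine(line: string): [key: string, value: string] | null {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) return null; // this sketch ignores blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq < 0) return null;
    const key = trimmed.slice(0, eq).trim();
    let value = trimmed.slice(eq + 1).trim();
    // KEY=value, KEY='value', and KEY="value" are all accepted.
    if (
        (value.startsWith('"') && value.endsWith('"') && value.length >= 2) ||
        (value.startsWith("'") && value.endsWith("'") && value.length >= 2)
    ) {
        value = value.slice(1, -1);
    }
    return [key, value];
}
```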
### `helpers/temp.ts`
- Provides `TempDir`.
- Uses `await using` to auto-clean temporary directories.
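Typical usage inside a test, matching how the suites in this directory create their working directories:
```ts
import { join } from "@std/path";
import { TempDir } from "./helpers/temp.ts";

Deno.test("TempDir example", async () => {
    // `await using` removes the directory automatically when the scope ends.
    await using workDir = await TempDir.create("livesync-cli-example");
    const vaultDir = workDir.join("vault");
    await Deno.mkdir(join(vaultDir, "test"), { recursive: true });
    // ... exercise the CLI against vaultDir ...
});
```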
---
## Implemented tests
### `test-push-pull.ts`
- Verifies push and pull round trips.
- Uses environment variables or `.test.env` for CouchDB values.
### `test-setup-put-cat.ts`
- Verifies `setup` with full setup URI generation via `encodeSettingsToSetupURI`.
- Verifies `push`, `cat`, `ls`, `info`, `rm`, `resolve`, `cat-rev`, and `pull-rev`.
- Does not require an external remote.
### `test-mirror.ts`
- Verifies six core mirror scenarios.
- Does not require an external remote.
### `test-sync-two-local-databases.ts`
- Verifies sync between two vaults and CouchDB.
- Verifies conflict detection and resolve propagation.
- Starts Docker CouchDB by default when `LIVESYNC_START_DOCKER != 0`.
### `test-sync-locked-remote.ts`
- Updates the CouchDB milestone `locked` flag.
- Verifies sync success when unlocked.
- Verifies actionable CLI error when locked.
### `test-p2p-host.ts`
- Verifies that `p2p-host` starts and emits readiness output.
### `test-p2p-peers-local-relay.ts`
- Verifies peer discovery through a local relay.
### `test-p2p-sync.ts`
- Verifies that `p2p-sync` completes after peer discovery.
### `test-p2p-three-nodes-conflict.ts`
- Uses one host and two clients.
- Verifies conflict creation, detection via `info`, and resolution via `resolve`.
### `test-p2p-upload-download-repro.ts`
- Uses host, upload, and download nodes.
- Verifies transfer of text files and binary files, including larger files.
### `test-e2e-two-vaults-couchdb.ts`
- Verifies two-vault end-to-end scenarios on CouchDB.
- Runs both encryption-off and encryption-on cases.
- Includes conflict marker checks in `ls` and resolve propagation checks.
### `test-e2e-two-vaults-matrix.ts`
- Verifies the matrix equivalent of the bash script.
- Runs four combinations:
- `COUCHDB-enc0`
- `COUCHDB-enc1`
- `MINIO-enc0`
- `MINIO-enc1`
---
## Running tests (PowerShell)
From `src/apps/cli/testdeno`:
```powershell
cd src/apps/cli/testdeno
# Local-only set
deno task test:local
# Individual tests
deno task test:setup-put-cat
deno task test:mirror
deno task test:push-pull
deno task test:sync-locked-remote
# CouchDB-based tests
deno task test:sync-two-local
deno task test:e2e-couchdb
# P2P-based tests
deno task test:p2p-host
deno task test:p2p-peers
deno task test:p2p-sync
deno task test:p2p-three-nodes
deno task test:p2p-upload-download
deno task test:e2e-matrix
```
---
## Environment variables
### CouchDB
- `COUCHDB_URI`
- `COUCHDB_USER`
- `COUCHDB_PASSWORD`
- `COUCHDB_DBNAME`
Equivalent keys in `src/apps/cli/.test.env`:
- `hostname`
- `username`
- `password`
- `dbname`
### Behaviour switches
- `LIVESYNC_START_DOCKER=0`: use existing CouchDB.
- `REMOTE_PATH`: override target path for selected tests.
- `LIVESYNC_TEST_TEE=1`: stream CLI stdout and stderr during execution.
- `LIVESYNC_DOCKER_TEE=1`: stream Docker stdout and stderr.
- `LIVESYNC_CLI_RETRY=<n>`: retry transient network failures.
- `LIVESYNC_DEBUG_KEEP_DOCKER=1`: keep `couchdb-test` after test completion.
### Docker command selection
`helpers/docker.ts` supports command selection via environment variables.
- `LIVESYNC_DOCKER_MODE=auto` (default)
- Windows: tries `wsl docker` first, then `docker`.
- Non-Windows: tries `docker` first, then `wsl docker`.
- `LIVESYNC_DOCKER_MODE=native`: always uses `docker`.
- `LIVESYNC_DOCKER_MODE=wsl`: always uses `wsl docker`.
- `LIVESYNC_DOCKER_COMMAND="..."`: custom command, for example `wsl docker`.
`LIVESYNC_DOCKER_COMMAND` has priority over `LIVESYNC_DOCKER_MODE`.
PowerShell examples:
```powershell
# Use Docker in WSL explicitly
$env:LIVESYNC_DOCKER_MODE = "wsl"
deno task test:sync-two-local
# Full custom command
$env:LIVESYNC_DOCKER_COMMAND = "wsl docker"
deno task test:sync-two-local
```
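For reference, the precedence described above boils down to roughly the following (an illustration only; the actual resolution lives in `helpers/docker.ts`):
```ts
// Illustration of the documented precedence; not the real helper code.
function resolveDockerCommand(): string[] {
    const custom = Deno.env.get("LIVESYNC_DOCKER_COMMAND");
    if (custom) return custom.split(" "); // highest priority, e.g. "wsl docker"
    switch (Deno.env.get("LIVESYNC_DOCKER_MODE") ?? "auto") {
        case "native":
            return ["docker"];
        case "wsl":
            return ["wsl", "docker"];
        default:
            // "auto": prefer `wsl docker` on Windows and plain `docker` elsewhere,
            // falling back to the other candidate if the first is unavailable.
            return Deno.build.os === "windows" ? ["wsl", "docker"] : ["docker"];
    }
}
```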
### P2P
- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY=0|1`
- `TIMEOUT_SECONDS`
---
## Continuous Integration
The GitHub Actions workflow `.github/workflows/cli-deno-tests.yml` runs these tests automatically on pushes and pull requests that affect the CLI.
---
## Current limitations
- MinIO startup and matrix coverage have been ported. The remaining limitations lie elsewhere; setup URI generation is not one of them.
---
## Maintenance policy
- Existing bash tests remain available.
- Deno tests are expanded in parallel for cross-platform usage.
- New scenarios should be added through reusable helpers in `helpers/`.

View File

@@ -11,11 +11,54 @@ const defaultExternal = [
"crypto",
"pouchdb-adapter-leveldb",
"commander",
"chokidar",
"punycode",
"werift",
];
// Polyfill FileReader at the very top of the CJS bundle. octagonal-wheels uses
// FileReader for base64 conversion when Uint8Array.toBase64 (TC39 proposal) is
// unavailable. Node.js has neither, so we inject a minimal FileReader shim before
// any module-scope code evaluates.
const fileReaderPolyfillBanner = `
if (typeof globalThis.FileReader === "undefined") {
globalThis.FileReader = class FileReader {
constructor() { this.result = null; this.onload = null; this.onerror = null; }
readAsDataURL(blob) {
blob.arrayBuffer().then((buf) => {
var b64 = require("buffer").Buffer.from(buf).toString("base64");
this.result = "data:" + (blob.type || "application/octet-stream") + ";base64," + b64;
if (this.onload) this.onload({ target: this });
}).catch((err) => { if (this.onerror) this.onerror({ target: this, error: err }); });
}
readAsArrayBuffer() { throw new Error("FileReader.readAsArrayBuffer is not implemented in this polyfill"); }
readAsBinaryString() { throw new Error("FileReader.readAsBinaryString is not implemented in this polyfill"); }
readAsText() { throw new Error("FileReader.readAsText is not implemented in this polyfill"); }
abort() { throw new Error("FileReader.abort is not implemented in this polyfill"); }
};
}
`;
function injectBanner(): import("vite").Plugin {
return {
name: "inject-banner",
generateBundle(_options, bundle) {
for (const chunk of Object.values(bundle)) {
if (chunk.type === "chunk" && chunk.fileName.startsWith("entrypoint")) {
// Insert after the shebang line if present, otherwise at the top.
if (chunk.code.startsWith("#!")) {
const newline = chunk.code.indexOf("\n");
chunk.code = chunk.code.slice(0, newline + 1) + fileReaderPolyfillBanner + chunk.code.slice(newline + 1);
} else {
chunk.code = fileReaderPolyfillBanner + chunk.code;
}
}
}
},
};
}
export default defineConfig({
plugins: [svelte()],
plugins: [svelte(), injectBanner()],
resolve: {
alias: {
"@lib/worker/bgWorker.ts": "../../lib/src/worker/bgWorker.mock.ts",

Submodule src/lib updated: 97530553a6...adcfe42522

View File

@@ -66,6 +66,11 @@ export class DocumentHistoryModal extends Modal {
currentDeleted = false;
initialRev?: string;
// Diff navigation state
currentDiffIndex = -1;
diffNavContainer!: HTMLDivElement;
diffNavIndicator!: HTMLSpanElement;
constructor(
app: App,
core: LiveSyncBaseCore,
@@ -216,6 +221,64 @@ export class DocumentHistoryModal extends Modal {
this.contentView.innerHTML =
(this.currentDeleted ? "(At this revision, the file has been deleted)\n" : "") + result;
}
// Reset diff navigation after content changes
this.resetDiffNavigation();
if (this.showDiff) {
this.navigateDiff("next");
}
}
/**
* Navigate to the previous or next diff block in the content view.
* Only effective when diff highlighting is enabled.
*/
navigateDiff(direction: "prev" | "next") {
const diffElements = this.contentView.querySelectorAll(".history-added, .history-deleted");
if (diffElements.length === 0) return;
// Remove previous focus highlight
const prevFocused = this.contentView.querySelector(".diff-focused");
if (prevFocused) {
prevFocused.classList.remove("diff-focused");
}
if (direction === "next") {
this.currentDiffIndex = (this.currentDiffIndex + 1) % diffElements.length;
} else {
this.currentDiffIndex =
this.currentDiffIndex <= 0 ? diffElements.length - 1 : this.currentDiffIndex - 1;
}
const target = diffElements[this.currentDiffIndex];
target.classList.add("diff-focused");
target.scrollIntoView({ behavior: "smooth", block: "center" });
this.diffNavIndicator.textContent = `${this.currentDiffIndex + 1}/${diffElements.length}`;
}
/**
* Reset the diff navigation index and update the indicator.
*/
resetDiffNavigation() {
this.currentDiffIndex = -1;
if (this.diffNavIndicator) {
if (this.showDiff) {
const diffElements = this.contentView.querySelectorAll(".history-added, .history-deleted");
this.diffNavIndicator.textContent = diffElements.length > 0 ? `0/${diffElements.length}` : "\u2014";
} else {
this.diffNavIndicator.textContent = "\u2014";
}
}
this.updateDiffNavVisibility();
}
/**
* Show or hide the diff navigation buttons based on the showDiff state.
*/
updateDiffNavVisibility() {
if (this.diffNavContainer) {
this.diffNavContainer.style.display = this.showDiff ? "flex" : "none";
}
}
override onOpen() {
@@ -236,25 +299,47 @@ export class DocumentHistoryModal extends Modal {
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
});
contentEl
.createDiv("", (e) => {
e.createEl("label", {}, (label) => {
label.appendChild(
createEl("input", { type: "checkbox" }, (checkbox) => {
if (this.showDiff) {
checkbox.checked = true;
}
checkbox.addEventListener("input", (evt: any) => {
this.showDiff = checkbox.checked;
localStorage.setItem("ols-history-highlightdiff", this.showDiff == true ? "1" : "");
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
})
);
label.appendText("Highlight diff");
});
})
.addClass("op-info");
const diffOptionsRow = contentEl.createDiv("");
diffOptionsRow.addClass("op-info");
diffOptionsRow.addClass("diff-options-row");
diffOptionsRow.createEl("label", {}, (label) => {
label.appendChild(
createEl("input", { type: "checkbox" }, (checkbox) => {
if (this.showDiff) {
checkbox.checked = true;
}
checkbox.addEventListener("input", (evt: any) => {
this.showDiff = checkbox.checked;
localStorage.setItem("ols-history-highlightdiff", this.showDiff == true ? "1" : "");
this.updateDiffNavVisibility();
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
})
);
label.appendText("Highlight diff");
});
// Diff navigation buttons
this.diffNavContainer = diffOptionsRow.createDiv("");
this.diffNavContainer.addClass("diff-nav");
this.diffNavContainer.style.display = this.showDiff ? "flex" : "none";
this.diffNavContainer.createEl("button", { text: "\u25B2 Prev" }, (e) => {
e.addClass("diff-nav-btn");
e.addEventListener("click", () => {
this.navigateDiff("prev");
});
});
this.diffNavContainer.createEl("button", { text: "\u25BC Next" }, (e) => {
e.addClass("diff-nav-btn");
e.addEventListener("click", () => {
this.navigateDiff("next");
});
});
this.diffNavIndicator = this.diffNavContainer.createEl("span", { text: "\u2014" });
this.diffNavIndicator.addClass("diff-nav-indicator");
this.info = contentEl.createDiv("");
this.info.addClass("op-info");
fireAndForget(async () => await this.loadFile(this.initialRev));

View File

@@ -1,208 +0,0 @@
import { type ObsidianLiveSyncSettings, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "../../lib/src/common/types.ts";
import { configURIBase } from "../../common/types.ts";
// import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser.js";
import { fireAndForget } from "../../lib/src/common/utils.ts";
import {
EVENT_REQUEST_COPY_SETUP_URI,
EVENT_REQUEST_OPEN_P2P_SETTINGS,
EVENT_REQUEST_OPEN_SETUP_URI,
EVENT_REQUEST_SHOW_SETUP_QR,
eventHub,
} from "../../common/events.ts";
import { $msg } from "../../lib/src/common/i18n.ts";
// import { performDoctorConsultation, RebuildOptions } from "@/lib/src/common/configForDoc.ts";
import type { LiveSyncCore } from "../../main.ts";
import {
encodeQR,
encodeSettingsToQRCodeData,
encodeSettingsToSetupURI,
OutputFormat,
} from "../../lib/src/API/processSetting.ts";
import { SetupManager, UserMode } from "./SetupManager.ts";
import { AbstractModule } from "../AbstractModule.ts";
export class ModuleSetupObsidian extends AbstractModule {
private _setupManager!: SetupManager;
private _everyOnload(): Promise<boolean> {
this._setupManager = this.core.getModule(SetupManager);
try {
this.registerObsidianProtocolHandler("setuplivesync", async (conf: any) => {
if (conf.settings) {
await this._setupManager.onUseSetupURI(
UserMode.Unknown,
`${configURIBase}${encodeURIComponent(conf.settings)}`
);
} else if (conf.settingsQR) {
await this._setupManager.decodeQR(conf.settingsQR);
}
});
} catch (e) {
this._log(
"Failed to register protocol handler. This feature may not work in some environments.",
LOG_LEVEL_NOTICE
);
this._log(e, LOG_LEVEL_VERBOSE);
}
this.addCommand({
id: "livesync-setting-qr",
name: "Show settings as a QR code",
callback: () => fireAndForget(this.encodeQR()),
});
this.addCommand({
id: "livesync-copysetupuri",
name: "Copy settings as a new setup URI",
callback: () => fireAndForget(this.command_copySetupURI()),
});
this.addCommand({
id: "livesync-copysetupuri-short",
name: "Copy settings as a new setup URI (With customization sync)",
callback: () => fireAndForget(this.command_copySetupURIWithSync()),
});
this.addCommand({
id: "livesync-copysetupurifull",
name: "Copy settings as a new setup URI (Full)",
callback: () => fireAndForget(this.command_copySetupURIFull()),
});
this.addCommand({
id: "livesync-opensetupuri",
name: "Use the copied setup URI (Formerly Open setup URI)",
callback: () => fireAndForget(this.command_openSetupURI()),
});
eventHub.onEvent(EVENT_REQUEST_OPEN_SETUP_URI, () => fireAndForget(() => this.command_openSetupURI()));
eventHub.onEvent(EVENT_REQUEST_COPY_SETUP_URI, () => fireAndForget(() => this.command_copySetupURI()));
eventHub.onEvent(EVENT_REQUEST_SHOW_SETUP_QR, () => fireAndForget(() => this.encodeQR()));
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P_SETTINGS, () =>
fireAndForget(() => {
return this._setupManager.onP2PManualSetup(UserMode.Update, this.settings, false);
})
);
return Promise.resolve(true);
}
async encodeQR() {
const settingString = encodeSettingsToQRCodeData(this.settings);
const codeSVG = encodeQR(settingString, OutputFormat.SVG);
if (codeSVG == "") {
return "";
}
const msg = $msg("Setup.QRCode", { qr_image: codeSVG });
await this.core.confirm.confirmWithMessage("Settings QR Code", msg, ["OK"], "OK");
return await Promise.resolve(codeSVG);
}
async askEncryptingPassphrase(): Promise<string | false> {
const encryptingPassphrase = await this.core.confirm.askString(
"Encrypt your settings",
"The passphrase to encrypt the setup URI",
"",
true
);
return encryptingPassphrase;
}
async command_copySetupURI(stripExtra = true) {
const encryptingPassphrase = await this.askEncryptingPassphrase();
if (encryptingPassphrase === false) return;
const encryptedURI = await encodeSettingsToSetupURI(
this.settings,
encryptingPassphrase,
[...((stripExtra ? ["pluginSyncExtendedSetting"] : []) as (keyof ObsidianLiveSyncSettings)[])],
true
);
if (await this.services.UI.promptCopyToClipboard("Setup URI", encryptedURI)) {
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
}
// await navigator.clipboard.writeText(encryptedURI);
}
async command_copySetupURIFull() {
const encryptingPassphrase = await this.askEncryptingPassphrase();
if (encryptingPassphrase === false) return;
const encryptedURI = await encodeSettingsToSetupURI(this.settings, encryptingPassphrase, [], false);
await navigator.clipboard.writeText(encryptedURI);
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
}
async command_copySetupURIWithSync() {
await this.command_copySetupURI(false);
}
async command_openSetupURI() {
await this._setupManager.onUseSetupURI(UserMode.Unknown);
}
// TODO: Where to implement these?
// async askSyncWithRemoteConfig(tryingSettings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
// const buttons = {
// fetch: $msg("Setup.FetchRemoteConf.Buttons.Fetch"),
// no: $msg("Setup.FetchRemoteConf.Buttons.Skip"),
// } as const;
// const fetchRemoteConf = await this.core.confirm.askSelectStringDialogue(
// $msg("Setup.FetchRemoteConf.Message"),
// Object.values(buttons),
// { defaultAction: buttons.fetch, timeout: 0, title: $msg("Setup.FetchRemoteConf.Title") }
// );
// if (fetchRemoteConf == buttons.no) {
// return tryingSettings;
// }
// const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
// const remoteConfig = await this.services.tweakValue.fetchRemotePreferred(newSettings);
// if (remoteConfig) {
// this._log("Remote configuration found.", LOG_LEVEL_NOTICE);
// const resultSettings = {
// ...DEFAULT_SETTINGS,
// ...tryingSettings,
// ...remoteConfig,
// } satisfies ObsidianLiveSyncSettings;
// return resultSettings;
// } else {
// this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
// return {
// ...DEFAULT_SETTINGS,
// ...tryingSettings,
// } satisfies ObsidianLiveSyncSettings;
// }
// }
// async askPerformDoctor(
// tryingSettings: ObsidianLiveSyncSettings
// ): Promise<{ settings: ObsidianLiveSyncSettings; shouldRebuild: boolean; isModified: boolean }> {
// const buttons = {
// yes: $msg("Setup.Doctor.Buttons.Yes"),
// no: $msg("Setup.Doctor.Buttons.No"),
// } as const;
// const performDoctor = await this.core.confirm.askSelectStringDialogue(
// $msg("Setup.Doctor.Message"),
// Object.values(buttons),
// { defaultAction: buttons.yes, timeout: 0, title: $msg("Setup.Doctor.Title") }
// );
// if (performDoctor == buttons.no) {
// return { settings: tryingSettings, shouldRebuild: false, isModified: false };
// }
// const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
// const { settings, shouldRebuild, isModified } = await performDoctorConsultation(this.core, newSettings, {
// localRebuild: RebuildOptions.AutomaticAcceptable, // Because we are in the setup wizard, we can skip the confirmation.
// remoteRebuild: RebuildOptions.SkipEvenIfRequired,
// activateReason: "New settings from URI",
// });
// if (isModified) {
// this._log("Doctor has fixed some issues!", LOG_LEVEL_NOTICE);
// return {
// settings,
// shouldRebuild,
// isModified,
// };
// } else {
// this._log("Doctor detected no issues!", LOG_LEVEL_NOTICE);
// return { settings: tryingSettings, shouldRebuild: false, isModified: false };
// }
// }
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.onLoaded.addHandler(this._everyOnload.bind(this));
}
}

View File

@@ -61,10 +61,12 @@ export class ModuleLiveSyncMain extends AbstractModule {
eventHub.onEvent(EVENT_SETTING_SAVED, (settings: ObsidianLiveSyncSettings) => {
fireAndForget(async () => {
try {
await this.core.services.control.applySettings();
const lang = this.core.services.setting.currentSettings()?.displayLanguage ?? undefined;
const lang = this.core.services.setting.currentSettings()?.displayLanguage;
if (lang !== undefined) {
setLang(this.core.services.setting.currentSettings()?.displayLanguage);
setLang(lang);
}
if (this.core.services.database.isDatabaseReady()) {
await this.core.services.control.applySettings();
}
eventHub.emitEvent(EVENT_REQUEST_RELOAD_SETTING_TAB);
} catch (e) {

View File

@@ -484,4 +484,45 @@ div.workspace-leaf-content[data-type=bases] .livesync-status {
white-space: pre-wrap;
word-break: break-all;
}
/* Diff navigation */
.diff-options-row {
display: flex;
align-items: center;
gap: 8px;
}
.diff-nav {
display: flex;
align-items: center;
gap: 4px;
margin-left: auto;
}
.diff-nav-btn {
padding: 2px 8px;
font-size: 0.85em;
cursor: pointer;
border: 1px solid var(--background-modifier-border);
border-radius: 4px;
background-color: var(--background-secondary);
color: var(--text-normal);
}
.diff-nav-btn:hover {
background-color: var(--background-modifier-hover);
}
.diff-nav-indicator {
font-size: 0.85em;
color: var(--text-muted);
min-width: 3em;
text-align: center;
}
.diff-focused {
outline: 2px solid var(--interactive-accent);
outline-offset: 1px;
border-radius: 2px;
}

View File

@@ -3,6 +3,18 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 received a lot of updates and, thankfully, compatibility has been kept, we do not need breaking changes! In other words, once things are stable enough, the next version will be v1.0.0. At least, that is my hope.
## Unreleased
### Improved
- P2P synchronisation has been made more robust
The foundation for P2P synchronisation has been rewritten and unit tests have been added. It is now separated into a transport layer, a signalling-and-connection layer, and an RPC layer, each of which is unit-tested. As a result, P2P synchronisation now runs through a robust shim that performs RPC-based PouchDB synchronisation, in contrast to the previous implementation.
This P2P synchronisation is not compatible with previous versions in terms of connectivity. All devices must be updated.
### Fixed
- Baffling errors no longer occur when a settings update is triggered during the early stage of initialisation.
## 0.25.60
29th April, 2026

vitest.config.rpc-unit.ts Normal file
View File

@@ -0,0 +1,30 @@
import { defineConfig, mergeConfig } from "vitest/config";
import viteConfig from "./vitest.config.common";
export default mergeConfig(
viteConfig,
defineConfig({
resolve: {
alias: {
obsidian: "",
},
},
test: {
name: "rpc-unit-tests",
include: ["src/lib/src/rpc/**/*.unit.spec.ts"],
exclude: ["test/**"],
coverage: {
include: ["src/lib/src/rpc/**/*.ts"],
exclude: ["**/*.unit.spec.ts", "**/index.ts"],
provider: "v8",
reporter: ["text", "json", "html", ["text", { file: "coverage-rpc-text.txt" }]],
thresholds: {
lines: 90,
functions: 90,
branches: 75,
statements: 90,
},
},
},
})
);