Compare commits


75 Commits

Author SHA1 Message Date
vorotamoroz
538130aa91 Fix package-lock 2026-05-13 11:36:01 +01:00
vorotamoroz
c9d0357fec Merge branch 'address_community_review' of https://github.com/vrtmrz/obsidian-livesync into address_community_review 2026-05-13 11:35:01 +01:00
vorotamoroz
d05c76da36 Update eslint config to ignore file,
fix some type error on LiveSyncBaseCore
2026-05-13 11:33:46 +01:00
vorotamoroz
d2eb6ecbaf Update for review once 2026-05-13 11:33:46 +01:00
vorotamoroz
25a6fde212 chore: Package modernise, update linter 2026-05-13 11:33:45 +01:00
vorotamoroz
e8f8b680ef prettify 2026-05-13 11:33:03 +01:00
vorotamoroz
6c30f2b863 (chore): removing DOM Operation 2026-05-13 11:33:03 +01:00
vorotamoroz
8dda24a689 Merge pull request #882 from vrtmrz/p2p-rpc
feat: use new p2p-rpc wrapper
2026-05-13 19:24:26 +09:00
vorotamoroz
fbbb63906a Merge branch 'main' into p2p-rpc 2026-05-13 19:22:03 +09:00
vorotamoroz
1e66a7f144 Merge pull request #894 from vrtmrz/fix_unexpected_error_on_startup
fixed: fixed unexpected error during startup
2026-05-13 19:16:18 +09:00
vorotamoroz
df79d81475 fixed: fixed unexpected error during startup 2026-05-13 10:14:47 +00:00
vorotamoroz
ad71355859 Merge pull request #893 from brian-spackman/fix-fractional-mtime-on-linux
fix: truncate sub-millisecond CLI mtimes to prevent mobile crash
2026-05-13 19:12:56 +09:00
vorotamoroz
95dc079fad Merge pull request #843 from andrewleech/daemon-sync
cli: implement continuous sync daemon mode
2026-05-13 18:53:59 +09:00
vorotamoroz
770d4af4a0 Update eslint config to ignore file,
fix some type error on LiveSyncBaseCore
2026-05-13 10:15:45 +01:00
vorotamoroz
3b311248cb Update for review once 2026-05-13 08:02:50 +01:00
Andrew Leech
67996f6d0a cli: fix stale stat.size in NodeVaultAdapter causing corrupted file errors
chokidar stats are captured at poll time and may not reflect the file's
final byte length by the time vault.read() is called. The downstream
integrity check compares stat.size to content length; a mismatch causes
other LiveSync clients to reject the file as corrupted.

Fix by updating file.stat.size from the actual content in read() and
readBinary().

Co-authored-by: Joysimple <Joysimple@users.noreply.github.com>
2026-05-13 16:56:08 +10:00
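A minimal sketch of the idea (the class shape, `FileInfo` type, and method signatures below are illustrative assumptions, not the actual `NodeVaultAdapter` code):

```ts
import { promises as fs } from "node:fs";

// Hypothetical stand-in for the adapter's file handle type.
type FileInfo = { path: string; stat: { size: number; mtime: number } };

class NodeVaultAdapterSketch {
    async read(file: FileInfo): Promise<string> {
        const content = await fs.readFile(file.path, "utf-8");
        // Refresh the size from the bytes actually read, so the downstream
        // integrity check never compares content against a stale chokidar stat.
        file.stat.size = Buffer.byteLength(content, "utf-8");
        return content;
    }

    async readBinary(file: FileInfo): Promise<Uint8Array> {
        const buf = await fs.readFile(file.path);
        file.stat.size = buf.byteLength;
        return new Uint8Array(buf);
    }
}
```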
vorotamoroz
5772811a45 chore: Package modernise, update linter 2026-05-13 04:40:32 +01:00
vorotamoroz
55529cd71e prettify 2026-05-13 03:58:08 +01:00
vorotamoroz
2e9b8b7b62 (chore): removing DOM Operation 2026-05-13 03:55:11 +01:00
Andrew Leech
4ab2e41d18 cli daemon: set disableCheckingConfigMismatch for headless operation
The config mismatch dialog's defaultAction is "Dismiss" which blocks
replication. Since the daemon cannot resolve mismatches interactively,
skip the check entirely and accept the remote configuration as-is.
2026-05-13 11:21:06 +10:00
Andrew Leech
c0ad8ee15a cli: add configurable ignore rules and deployment artifacts
IgnoreRules (src/apps/cli/serviceModules/IgnoreRules.ts):
- Reads .livesync/ignore for user-defined glob patterns
- Applies gitignore matchBase semantics: patterns without / get **/ prefix,
  patterns ending with / get ** appended for directory contents
- Supports `import: .gitignore` directive to merge gitignore patterns
- Rejects negation patterns with a warning (not fully supportable)
- Integrated into both daemon and mirror commands via isTargetFile handler

Wiring:
- IgnoreRules loaded before LiveSyncBaseCore construction so beginWatch()
  receives rules when it fires during onLoad/onFirstInitialise
- Passed through initialiseServiceModulesCLI -> StorageEventManagerCLI ->
  CLIStorageEventManagerAdapter -> CLIWatchAdapter

Deployment:
- src/apps/cli/deploy/livesync-cli.service - systemd unit template
- src/apps/cli/deploy/install.sh - user/system install script

Testing:
- src/apps/cli/test/test-daemon-linux.sh - e2e tests for ignore rules
- src/apps/cli/serviceModules/IgnoreRules.unit.spec.ts - 15 unit tests
- src/apps/cli/commands/daemonCommand.unit.spec.ts - 7 unit tests
2026-05-13 11:21:06 +10:00
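A rough sketch of the pattern normalisation described above (function name and exact handling are assumptions, not the actual `IgnoreRules` implementation):

```ts
// Normalise one line from .livesync/ignore into a minimatch pattern.
function normalisePattern(raw: string): string | undefined {
    const pattern = raw.trim();
    if (pattern === "" || pattern.startsWith("#")) return undefined; // blank line or comment
    if (pattern.startsWith("!")) return undefined; // negation: rejected with a warning upstream
    let result = pattern;
    if (result.endsWith("/")) result += "**"; // "build/" -> "build/**": match directory contents
    if (!pattern.includes("/")) result = `**/${result}`; // "*.tmp" -> "**/*.tmp": match at any depth
    return result;
}
```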
Andrew Leech
e6ae516493 cli: implement local→CouchDB file watching via chokidar
- Add chokidar ^4.0.0 as dependency (root package.json, runtime-package.json)
- Mark chokidar as external in vite.config.ts (not bundled, loaded at runtime)
- Implement CLIWatchAdapter.beginWatch() with chokidar:
  - ignoreInitial: true (startup files handled by mirror scan)
  - awaitWriteFinish to prevent partial-write events
  - Excludes dotfiles and .livesync/ directory at watcher level
  - Maps add/change/unlink/addDir/unlinkDir to IStorageEventWatchHandlers
  - Fatal error handler: logs clearly and releases watcher resources
- Add close() to CLIWatchAdapter, StorageEventManagerCLI for clean shutdown
- Register onUnload hook in CLIServiceModules to close watcher on shutdown
2026-05-13 11:21:06 +10:00
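A hedged sketch of the watcher wiring (the `awaitWriteFinish` thresholds, placeholder path, and logging handlers are assumptions; the real `CLIWatchAdapter` maps these events onto `IStorageEventWatchHandlers`):

```ts
import { watch } from "chokidar";

const vaultPath = "/path/to/vault"; // placeholder
const log = (kind: string, p: string) => console.log(`[watch] ${kind}: ${p}`);

const watcher = watch(vaultPath, {
    ignoreInitial: true, // files present at startup are handled by the mirror scan
    awaitWriteFinish: { stabilityThreshold: 500, pollInterval: 100 }, // skip partial writes
    ignored: (p: string) => p.includes("/.livesync/") || /(^|[\\/])\.[^\\/]+$/.test(p), // dotfiles, .livesync/
});

watcher
    .on("add", (p) => log("created", p))
    .on("change", (p) => log("modified", p))
    .on("unlink", (p) => log("deleted", p))
    .on("addDir", (p) => log("dir created", p))
    .on("unlinkDir", (p) => log("dir deleted", p))
    .on("error", (err) => {
        console.error("watcher fatal error:", err);
        void watcher.close(); // release watcher resources on fatal errors
    });
```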
Andrew Leech
a4d5ef4620 cli: implement daemon startup sequence and CouchDB→local sync
- Add daemon command to help text and --interval/-i flag for polling mode
- Capture original sync settings before suspendAllSync() clobbers them
- Implement daemon startup: mirror scan → restore settings → applySettings()
  which triggers the full suspend/resume lifecycle and starts the _changes feed
- Guard processSynchroniseResult no-op to non-daemon commands so default
  handler writes incoming CouchDB changes to the local filesystem
- Polling mode: restore settings + clearInterval-safe try/catch error handling
- Warn when both liveSync and syncOnStart are false after restore (no-op config)
- Fix: only block indefinitely if daemon startup succeeded
2026-05-13 11:21:06 +10:00
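The polling branch can be sketched roughly like this (`runOneReplication` is a placeholder, not the CLI's actual function; the point is the clearInterval-safe try/catch the commit mentions):

```ts
const intervalSeconds = 60; // from --interval/-i
let timer: ReturnType<typeof setInterval> | undefined;

async function runOneReplication(): Promise<void> {
    // placeholder for one CouchDB replication cycle
}

timer = setInterval(() => {
    void (async () => {
        try {
            await runOneReplication();
        } catch (err) {
            // a failed cycle is logged and retried on the next tick
            console.error("replication failed, will retry:", err);
        }
    })();
}, intervalSeconds * 1000);

process.on("SIGINT", () => {
    if (timer) clearInterval(timer); // stop polling before exit
    process.exit(0);
});
```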
Brian Spackman
3f7bb047ac fix: floor sub-millisecond CLI mtimes to prevent mobile crash
On Linux, fs.Stats.mtimeMs and ctimeMs return floats with sub-millisecond
precision derived from the kernel's nanosecond filesystem mtime. Stored
raw, this produces document timestamps like 1778511180024.462 in CouchDB
rather than integer milliseconds.

Mobile clients running LiveSync 0.25.60 have been observed to crash when
processing change-feed updates carrying non-integer millisecond timestamps
from CLI-written documents. Desktop and mobile GUI plugins write integer
milliseconds, so the crash only manifests when the headless CLI on Linux
is the source. Whether the issue was introduced in 0.25.60 or had been
latent in earlier versions hasn't been investigated; 0.25.60 is the
version where the crash was confirmed and the fix verified.

Floor the values at every stat-read site (six across three adapters and
one command) so CLI-written documents carry integer-millisecond
timestamps consistent with the rest of the mesh.
2026-05-12 18:00:25 -06:00
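The change itself is tiny; a minimal sketch of what each stat-read site now does:

```ts
import { statSync } from "node:fs";

const stat = statSync("note.md");
// e.g. 1778511180024.462 -> 1778511180024
const mtime = Math.floor(stat.mtimeMs);
const ctime = Math.floor(stat.ctimeMs);
```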
vorotamoroz
b6b153c0de Merge pull request #887 from vrtmrz/add_ignore_to_eslint
chore: Change eslint config to ignore _tools
2026-05-11 21:01:47 +09:00
vorotamoroz
eca6a6e0ba chore: Change eslint config to ignore _tools 2026-05-11 13:00:32 +01:00
vorotamoroz
ca43d96c46 Merge pull request #886 from vrtmrz/fix_prettier
Fix prettier config
2026-05-11 20:34:22 +09:00
vorotamoroz
112e3c8b1d Fix prettier config 2026-05-11 12:33:32 +01:00
vorotamoroz
d1eb105801 Merge pull request #872 from OriBoharon/make-cli-onboarding-easier
added documentation and a hook build script to make onboarding easier when trying to build the CLI app
2026-05-11 18:43:00 +09:00
vorotamoroz
d5b93e89cd Change the default issue report label from 'bug' to 'uncategorised' 2026-05-11 17:55:31 +09:00
vorotamoroz
e96fe7cde1 Merge pull request #885 from vrtmrz/tidy_file
(chore) remove obsoleted file
2026-05-11 17:52:22 +09:00
vorotamoroz
68e0610f1d (chore) remove obsoleted file 2026-05-11 09:49:32 +01:00
vorotamoroz
a6be20695a feat: use new p2p-rpc wrapper 2026-05-11 03:49:35 +01:00
vorotamoroz
772b6ecf26 Merge pull request #871 from SeleiXi/feat/diff-navigation-buttons
feat: Add diff navigation buttons for Document History
2026-05-09 22:51:04 +09:00
SeleiXi
81dc7f604b feat: auto navigation to diff 2026-05-09 14:07:08 +08:00
vorotamoroz
a9c87fa52e - Add default test environment
- Fixed to use environment by APIs
- Make test parallel
2026-05-08 03:04:14 +00:00
vorotamoroz
e81f023943 Add default test env 2026-05-08 03:01:22 +00:00
vorotamoroz
2afe12ad2d fix pattern 2026-05-07 11:28:01 +01:00
vorotamoroz
4a9d6c1349 Add ci 2026-05-07 11:23:51 +01:00
vorotamoroz
279fc8876e feat(tests): enhance push/pull test with Docker integration and improved environment variable handling
style(test): format comment
2026-05-07 11:22:56 +01:00
vorotamoroz
cc3d30dbcf feat(tests): add Deno-based tests for checking CLI functionality in the same-codebase between platforms. 2026-05-07 11:06:12 +01:00
vorotamoroz
39e82cc8a1 Fixed: Fix timing issue during test 2026-05-06 21:56:13 +09:00
bori
7a4b76a550 added documentation and a hook build script to make onboarding easier 2026-05-02 18:51:07 +03:00
SeleiXi
f9294446ba feat: add diff block navigation to Document History modal
Add prev/next buttons to jump between diff blocks in the
Document History view. Includes position indicator and
auto-scroll with visual focus highlighting.
2026-05-02 22:18:43 +08:00
vorotamoroz
fa7ef62302 Fix: adjusting help
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:42:54 +09:00
vorotamoroz
81d8224330 bump
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:39:48 +09:00
vorotamoroz
cc466a4b3c ### Fixed
- Now larger settings can be exported and imported via QR code without issues. (#595)

- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:37:44 +09:00
vorotamoroz
ceebca7de9 Merge pull request #862 from fabiomanz/main
chore: remove obsolete `version` attribute from docker-compose.yml
2026-04-29 17:30:35 +09:00
Fabio
c2f696d0a4 chore: attribute version is obsolete 2026-04-29 07:07:45 +00:00
vorotamoroz
1aa7c45794 Fix the readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:55:34 +09:00
vorotamoroz
faefa80cbd Fix again 2026-04-29 12:40:40 +09:00
vorotamoroz
3737eacffd Fix readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:39:42 +09:00
vorotamoroz
4c0af0b608 Fixed(cli):
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:22:00 +09:00
vorotamoroz
bb69eb13e7 bump 2026-04-27 11:15:07 +09:00
vorotamoroz
7c9db6376f Fixed:
- The setup wizard no longer drops the username and password silently. (#865)
- Setup URI is now correctly imported (#859).
- A French translation is now added.
2026-04-27 11:14:06 +09:00
vorotamoroz
4c04e4e676 Merge pull request #863 from koteitan/fix/859-strip-trailing-slash-from-uri
fix: strip trailing slash from couchDB_URI to avoid double-slash 401
2026-04-27 11:08:11 +09:00
koteitan
14ec35b257 fix: strip trailing slash from couchDB_URI to avoid double-slash 401
When couchDB_URI ends with a trailing slash (e.g. https://host/), the
database name concatenation produces a double-slash path
(https://host//obsidiannotes), which causes CouchDB to reject requests
with 401 "Name or password is incorrect".

Strip trailing slashes from couchDB_URI / baseUri at the path
concatenation sites in:
- src/common/utils.ts (_requestToCouchDBFetch, _requestToCouchDB)
- src/features/LocalDatabaseMainte/CmdLocalDatabaseMainte.ts

The companion fix for the replication path is in the livesync-commonlib
submodule.

Ref: #859
2026-04-27 00:12:57 +09:00
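A minimal sketch of the concatenation fix (variable names are illustrative):

```ts
const couchDB_URI = "https://host/";
const dbName = "obsidiannotes";
// Strip trailing slashes before joining, so the result has no double slash:
const requestURI = `${couchDB_URI.replace(/\/+$/, "")}/${dbName}`; // "https://host/obsidiannotes"
```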
vorotamoroz
b609e4973c Merge remote-tracking branch 'refs/remotes/origin/main' 2026-04-25 20:37:08 +09:00
vorotamoroz
354f0be9a3 bump
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 20:16:50 +09:00
vorotamoroz
16804ed34c Merge pull request #842 from kdavh/patch-1
Update README.md, fix webpeer link
2026-04-25 19:07:56 +09:00
vorotamoroz
31bd270869 Fixed: Hidden file JSON conflicts no longer keep re-opening and dismissing the merge dialogue before we can act, which fixes persistent unresolvable data.json conflicts in plug-in settings sync (related: #850).
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 17:22:25 +09:00
vorotamoroz
b5d054f259 Fixed: Issue report generation now redacts remoteConfigurations connection strings and keeps only the scheme (e.g. sls+https://), so credentials are not exposed in reports.
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 17:09:43 +09:00
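A hedged sketch of that redaction (the real report generator may differ; only a scheme prefix such as `sls+https://` is kept):

```ts
function redactConnectionString(uri: string): string {
    const m = uri.match(/^([a-z0-9+.-]+:\/\/)/i);
    return m ? `${m[1]}<redacted>` : "<redacted>";
}

redactConnectionString("sls+https://user:pass@example.com/bucket"); // "sls+https://<redacted>"
```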
vorotamoroz
1ef2955d00 - Fixed a worker-side recursion issue that could raise Maximum call stack size exceeded during chunk splitting (related: #855).
- Improved background worker crash cleanup so pending split/encryption tasks are released cleanly instead of being left in a waiting state (related: #855).
- On start-up, the selected remote configuration is now applied to runtime connection fields as well, reducing intermittent authentication failures caused by stale runtime settings (related: #855).

Co-authored-by: Copilot <copilot@github.com>
2026-04-25 16:51:37 +09:00
vorotamoroz
6ef56063b3 Fixed: Credentials are no longer broken during object storage configuration (related: #852).
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 15:03:38 +09:00
vorotamoroz
a912585800 Improve issue template
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 14:01:18 +09:00
vorotamoroz
7a863625bc bump 2026-04-09 04:32:34 +01:00
vorotamoroz
99b4037820 Fixed
- Packing a batch during the journal sync now continues even if the batch contains no items to upload.
2026-04-06 12:51:09 +01:00
vorotamoroz
d59b5dc2f9 Fixed
- Remote configuration URIs are now correctly encrypted when saved after editing in the settings dialogue.
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
2026-04-06 11:47:19 +01:00
vorotamoroz
4d0203e4ca Update: beta tagging 2026-04-06 11:46:51 +01:00
vorotamoroz
3e4db571cd Fixed
- Remote configuration URIs are now correctly encrypted when saved after editing in the settings dialogue.
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
2026-04-06 11:45:26 +01:00
vorotamoroz
b0a9bd84d6 bump 2026-04-05 18:21:38 +09:00
vorotamoroz
8c4e62e7c1 ### Fixed
- Remote configurations are now reliably editable in the settings dialogue.
- We can fetch remote settings from the remote and apply them to the local settings for each remote configuration entry.
- No longer layout breaking occurs when the description of a remote configuration entry is too long.
2026-04-05 18:20:56 +09:00
vorotamoroz
3e03d1dbd5 bump 2026-04-05 17:48:00 +09:00
vorotamoroz
0dbf4cface bump 2026-04-05 17:47:45 +09:00
kdavh
12f04f6cf7 Update README.md, fix webpeer link 2026-03-28 12:47:28 -04:00
96 changed files with 7965 additions and 1088 deletions

View File

@@ -2,77 +2,104 @@
name: Issue report
about: Create a report to help us improve
title: ''
labels: ''
labels: 'uncategorised'
assignees: ''
---
Thank you for taking the time to report this issue!
To improve the process, I would like to ask you to let me know the information in advance.
Before filling in this form, please read: [How to report an issue](../docs/to_issue_reporting.md).
All instructions, examples, and empty entries can be deleted.
Just for your information, a [filled example](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Issue+example) is also written.
Issues with sufficient information will be prioritised.
## Abstract
The synchronisation hung up immediately after connecting.
---
## Expected behaviour
- Synchronisation ends with the message `Replication completed`
- Everything synchronised
## Required
## Actually happened
- Synchronisation has been cancelled with the message `TypeError ... ` (captured in the attached log, around LL.10-LL.12)
- No files synchronised
### Abstract
<!-- Briefly describe the problem in one or two sentences. -->
## Reproducing procedure
### Expected behaviour
<!-- What did you expect to happen? -->
1. Configure LiveSync as in the attached material.
2. Click the replication button on the ribbon.
3. Synchronising has begun.
4. About two or three seconds later, we got the error `TypeError ... `.
5. Replication has been stopped. No files synchronised.
### Actually happened
<!-- What actually happened? Include any error messages. -->
Note: If you do not catch the reproducing procedure, please let me know the frequency and signs.
## Report materials
If the information is not available, do not hesitate to report it as it is. You can also of course omit it if you think this is indeed unnecessary. If it is necessary, I will ask you.
### Report from the LiveSync
For more information, please refer to [Making the report](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Making+the+report).
<details>
<summary>Report from hatch</summary>
```
<!-- paste here -->
```
</details>
### Reproducing procedure
<!-- Step-by-step instructions to reproduce the issue. If you cannot reproduce it reliably, please describe the frequency and any signs you noticed. -->
### Obsidian debug info
Please provide debug info for **each device involved**. The primary device (where the issue occurred) is required; others are strongly recommended. If your issue involves synchronisation between devices, debug info from relevant devices is very helpful.
To get it: open the command palette → "Show debug info".
<details>
<summary>Debug info</summary>
<summary>Device 1 (primary)</summary>
```
<!-- paste here -->
```
</details>
<details>
<summary>Device 2 (if applicable)</summary>
```
<!-- paste here -->
```
</details>
### LiveSync version
The hatch report (below) includes version information. If you cannot provide the report, please fill in the version here.
- Self-hosted LiveSync version: <!-- e.g. 0.23.0 — find it in Obsidian Settings → Community Plugins -->
### Report from LiveSync
Open the `Hatch` pane in LiveSync settings and press `Make report`. Paste here or upload to [Gist](https://gist.github.com/) and share the link.
<details>
<summary>Report from hatch (primary)</summary>
```
<!-- paste here or link to Gist -->
```
</details>
<details>
<summary>Report from hatch (if applicable)</summary>
```
<!-- paste here or link to Gist -->
```
</details>
### Plug-in log
We can see the log by tapping the Document box icon. If you noticed something suspicious, please let me know.
Note: **Please enable `Verbose Log`**. For detail, refer to [Logging](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Logging), please.
Enable `Verbose Log` in General Settings first, then reproduce the issue and copy the log (tap the document box icon in the ribbon).
Paste here or upload to [Gist](https://gist.github.com/) and share the link.
<details>
<summary>Plug-in log</summary>
<summary>Plug-in log (primary)</summary>
```
<!-- paste here -->
<!-- paste here or link to Gist -->
```
</details>
### Network log
Network logs displayed in DevTools will possibly help with connection-related issues. To capture that, please refer to [DevTools](https://docs.vrtmrz.net/LiveSync/hintandtrivia/DevTools).
<details>
<summary>Plug-in log (if applicable)</summary>
```
<!-- paste here or link to Gist -->
```
</details>
---
## Optional
### Screenshots
If applicable, please add screenshots to help explain your problem.
### Other information, insights and intuition.
### Other information, insights and intuition
Please provide any additional context or information about the problem.

114
.github/workflows/cli-deno-tests.yml vendored Normal file
View File

@@ -0,0 +1,114 @@
name: cli-deno-tests
on:
    workflow_dispatch:
        inputs:
            test_task:
                description: 'Deno test task to run'
                type: choice
                options:
                    - test
                    - test:local
                    - test:e2e-matrix
                    - test:p2p-sync
                default: test
permissions:
    contents: read
jobs:
    prepare:
        runs-on: ubuntu-latest
        outputs:
            task_matrix: ${{ steps.select.outputs.task_matrix }}
        steps:
            - name: Select task matrix
              id: select
              shell: bash
              run: |
                  set -euo pipefail
                  SELECTED_TASK="${{ github.event_name == 'workflow_dispatch' && inputs.test_task || 'test' }}"
                  echo "[INFO] Selected task set: $SELECTED_TASK"
                  case "$SELECTED_TASK" in
                      test)
                          TASK_MATRIX='["test:setup-put-cat","test:mirror","test:push-pull","test:sync-two-local","test:sync-locked-remote","test:p2p-host","test:p2p-peers","test:p2p-sync","test:p2p-three-nodes","test:p2p-upload-download","test:e2e-couchdb","test:e2e-matrix"]'
                          ;;
                      test:local)
                          TASK_MATRIX='["test:setup-put-cat","test:mirror"]'
                          ;;
                      test:e2e-matrix)
                          TASK_MATRIX='["test:e2e-matrix"]'
                          ;;
                      test:p2p-sync)
                          TASK_MATRIX='["test:p2p-sync"]'
                          ;;
                      *)
                          echo "[ERROR] Unknown task set: $SELECTED_TASK" >&2
                          exit 1
                          ;;
                  esac
                  echo "task_matrix=$TASK_MATRIX" >> "$GITHUB_OUTPUT"
    test:
        needs: prepare
        runs-on: ubuntu-latest
        timeout-minutes: 60
        strategy:
            fail-fast: false
            matrix:
                task: ${{ fromJson(needs.prepare.outputs.task_matrix) }}
        steps:
            - name: Checkout
              uses: actions/checkout@v4
              with:
                  submodules: recursive
            - name: Setup Node.js
              uses: actions/setup-node@v4
              with:
                  node-version: '24.x'
                  cache: 'npm'
            - name: Setup Deno
              uses: denoland/setup-deno@v2
              with:
                  deno-version: v2.x
            - name: Install dependencies
              run: npm ci
            - name: Build CLI
              working-directory: src/apps/cli
              run: npm run build
            - name: Create .test.env
              working-directory: src/apps/cli
              run: |
                  cat <<EOF > .test.env
                  hostname=http://127.0.0.1:5989/
                  dbname=livesync-test-db-ci
                  username=admin
                  password=testpassword
                  minioEndpoint=http://127.0.0.1:9000
                  accessKey=minioadmin
                  secretKey=minioadmin
                  bucketName=livesync-test-bucket-ci
                  EOF
            - name: Run Deno tests
              working-directory: src/apps/cli/testdeno
              env:
                  LIVESYNC_DOCKER_MODE: native
                  LIVESYNC_CLI_RETRY: 3
              run: |
                  TASK="${{ matrix.task }}"
                  echo "[INFO] Running Deno task: $TASK"
                  deno task "$TASK"
            - name: Stop leftover containers
              if: always()
              run: |
                  docker stop couchdb-test minio-test relay-test >/dev/null 2>&1 || true
                  docker rm couchdb-test minio-test relay-test >/dev/null 2>&1 || true

View File

@@ -13,7 +13,7 @@ const prettierConfig = {
tabWidth: 4,
printWidth: 120,
semi: true,
endOfLine: "cr",
endOfLine: "lf",
...localPrettierConfig,
};

View File

@@ -24,7 +24,7 @@ Additionally, it supports peer-to-peer synchronisation using WebRTC now (experim
- WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
- Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
- [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
- [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
- [webpeer](https://github.com/vrtmrz/obsidian-livesync/tree/main/src/apps/webpeer): A pseudo-client for receiving and sending data between devices.
- A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
- For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).

92
aggregator.html Normal file
View File

@@ -0,0 +1,92 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Self-hosted LiveSync Setup QR Aggregator</title>
<style>
body { font-family: sans-serif; display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100vh; margin: 0; background-color: #f4f4f9; color: #333; }
.container { background: white; padding: 2rem; border-radius: 8px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); text-align: center; max-width: 90%; }
.progress { margin: 20px 0; font-size: 1.2rem; font-weight: bold; }
.status { margin-bottom: 20px; color: #666; }
.btn { display: inline-block; padding: 12px 24px; background-color: #7c4dff; color: white; text-decoration: none; border-radius: 4px; font-weight: bold; transition: background-color 0.2s; border: none; cursor: pointer; }
.btn:hover { background-color: #651fff; }
.btn:disabled { background-color: #ccc; cursor: not-allowed; }
.grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(40px, 1fr)); gap: 8px; margin: 20px 0; }
.tile { width: 40px; height: 40px; border: 2px solid #ddd; border-radius: 4px; display: flex; align-items: center; justify-content: center; font-size: 0.8rem; }
.tile.filled { background-color: #7c4dff; color: white; border-color: #7c4dff; }
</style>
</head>
<body>
<div class="container">
<h1>LiveSync Setup</h1>
<div id="app">
<p>Checking hash data...</p>
</div>
</div>
<script>
function updateUI() {
const hash = window.location.hash.substring(1);
const params = new URLSearchParams(hash);
const id = params.get('id');
const total = parseInt(params.get('n') || '0');
const index = parseInt(params.get('i') || '-1');
const data = params.get('d');
const app = document.getElementById('app');
if (!id || total <= 0 || index === -1 || !data) {
app.innerHTML = '<p class="status">Invalid setup URL. Please scan the QR code correctly.</p>';
return;
}
// Get session data
const storageKey = 'ls_agg_' + id;
let session = JSON.parse(localStorage.getItem(storageKey) || '{}');
// Save current data
session[index] = data;
localStorage.setItem(storageKey, JSON.stringify(session));
const receivedIndexes = Object.keys(session).map(Number);
const count = receivedIndexes.length;
let html = `
<div class="status">Session ID: ${id}</div>
<div class="progress">${count} / ${total} Loaded</div>
<div class="grid">
`;
for (let i = 0; i < total; i++) {
const isFilled = session[i] !== undefined;
html += `<div class="tile ${isFilled ? 'filled' : ''}">${i + 1}</div>`;
}
html += `</div>`;
if (count === total) {
const sortedData = Array.from({length: total}, (_, i) => session[i]).join('');
// Use the correct protocol for settings
const obsidianUri = `obsidian://setuplivesync?settingsQR=${sortedData}`;
html += `
<p>All parts have been collected!</p>
<a href="${obsidianUri}" class="btn">Open Obsidian to complete setup</a>
<p style="margin-top:20px; font-size:0.8rem; color: #999;">Note: If the button does not respond, please ensure you are opening this in a browser that can trigger Obsidian.</p>
`;
} else {
html += `
<p class="status">Please scan the next QR code.</p>
<button class="btn" disabled>Waiting...</button>
`;
}
app.innerHTML = html;
}
window.addEventListener('hashchange', updateUI);
updateUI();
</script>
</body>
</html>

13
devs.md
View File

@@ -63,6 +63,9 @@ npm test # Run vitest tests (requires Docker services)
### Environment Setup
- Clone with submodules: `git clone --recurse-submodules <repository-url>`
- If you already cloned without them, run: `git submodule update --init --recursive`
- The shared common library is provided by the `src/lib` submodule, and builds will fail if it is missing
- Create `.env` file with `PATHS_TEST_INSTALL` pointing to test vault plug-in directories (`:` separated on Unix, `;` on Windows)
- Development builds auto-copy to these paths on build
@@ -153,17 +156,17 @@ export class ModuleExample extends AbstractObsidianModule {
## Beta Policy
- Beta versions are denoted by appending `-patched-N` to the base version number.
- Beta versions are denoted by appending `+patchedN` to the base version number.
- `The base version` mostly corresponds to the stable release version.
- e.g., v0.25.41-patched-1 is equivalent to v0.25.42-beta1.
- e.g., v0.25.41+patched1 is equivalent to v0.25.42-beta1.
- This notation is due to SemVer incompatibility of Obsidian's plugin system.
- Hence, this release is `0.25.41-patched-1`.
- Hence, this release is `0.25.41+patched1`.
- Each beta version may include larger changes, but bug fixes will often not be included.
- I think that in most cases, bug fixes will result in stable releases.
- They will not be released per branch or backported; they will simply be released.
- Bug fixes for previous versions will be applied to the latest beta version.
This means, if xx.yy.02-patched-1 exists and there is a defect in xx.yy.01, a fix is applied to xx.yy.02-patched-1 and yields xx.yy.02-patched-2.
If the fix is required immediately, it is released as xx.yy.02 (with xx.yy.01-patched-1).
This means, if xx.yy.02+patched1 exists and there is a defect in xx.yy.01, a fix is applied to xx.yy.02+patched1 and yields xx.yy.02+patched2.
If the fix is required immediately, it is released as xx.yy.02 (with xx.yy.01+patched1).
- This procedure remains unchanged from the current one.
- At the very least, I am using the latest beta.
- However, I will not be using a beta continuously for a week after it has been released. It is probably closer to an RC in nature.

View File

@@ -1,7 +1,6 @@
# For details and other explanations about this file refer to:
# https://github.com/vrtmrz/obsidian-livesync/blob/main/docs/setup_own_server.md#traefik
version: "2.1"
services:
    couchdb:
        image: couchdb:latest

View File

@@ -230,7 +230,6 @@ And, be sure to check the server log and be careful of malicious access.
If you are using Traefik, this [docker-compose.yml](https://github.com/vrtmrz/obsidian-livesync/blob/main/docker-compose.traefik.yml) file (also pasted below) has all the right CORS parameters set. It assumes you have an external network called `proxy`.
```yaml
version: "2.1"
services:
    couchdb:
        image: couchdb:latest

View File

@@ -71,7 +71,6 @@ obsidian-livesync
可以参照以下内容编辑 `docker-compose.yml`:
```yaml
version: "2.1"
services:
    couchdb:
        image: couchdb

145
docs/to_issue_reporting.md Normal file
View File

@@ -0,0 +1,145 @@
# How to report an issue
Thank you for helping improve Self-hosted LiveSync!
This document explains how to collect the information needed for an issue report. Issues with sufficient information will be prioritised.
---
## Filled example
Here is an example of a well-filled report for reference.
### Abstract
The synchronisation hung up immediately after connecting.
### Expected behaviour
- Synchronisation ends with the message `Replication completed`
- Everything synchronised
### Actually happened
- Synchronisation was cancelled with the message `TypeError: Failed to fetch` (visible in the plug-in log around lines 10-12)
- No files synchronised
### Reproducing procedure
1. Configure LiveSync with the settings shown in the attached report.
2. Click the sync button on the ribbon.
3. Synchronisation begins.
4. About two or three seconds later, the error `TypeError: Failed to fetch` appears.
5. Replication stops. No files synchronised.
### Obsidian debug info (Device 1 — Windows desktop)
```
SYSTEM INFO:
Obsidian version: v1.2.8
Installer version: v1.1.15
Operating system: Windows 10 Pro 10.0.19044
Login status: logged in
Catalyst license: supporter
Insider build toggle: off
Community theme: Minimal v6.1.11
Snippets enabled: 3
Restricted mode: off
Plugins installed: 35
Plugins enabled: 11
1: Self-hosted LiveSync v0.19.4
...
```
### Report from LiveSync
```
----remote config----
cors:
credentials: "true"
...
---- Plug-in config ---
couchDB_URI: self-hosted
couchDB_USER: 𝑅𝐸𝐷𝐴𝐶𝑇𝐸𝐷
...
```
### Plug-in log
```
2023/5/24 10:50:33->HTTP:GET to:/ -> failed
2023/5/24 10:50:33->TypeError:Failed to fetch
2023/5/24 10:50:33->could not connect to https://example.com/ : your vault
(TypeError:Failed to fetch)
```
---
## How to collect each piece of information
### Obsidian debug info
Open the command palette (`Ctrl/Cmd + P`) and run **"Show debug info"**. Copy the output and paste it into the issue.
If multiple devices are involved in the problem (e.g., sync between a phone and a desktop), please provide the debug info for each device. The device where the issue occurred is required; information from other devices is strongly recommended.
### Report from LiveSync (hatch report)
1. Open LiveSync settings.
2. Go to the **Hatch** pane.
3. Press the **Make report** button.
The report will be copied to your clipboard. It contains your LiveSync configuration and the remote server configuration, with credentials automatically redacted.
**Tip:** For large reports, consider uploading to [GitHub Gist](https://gist.github.com/) and sharing the link instead of pasting directly into the issue. This makes it easier to manage, and if you accidentally leave sensitive data in, a Gist can be deleted.
If you paste directly, wrap it in a `<details>` tag to keep the issue readable:
```
<details>
<summary>Report from hatch</summary>
```
----remote config----
:
```
</details>
```
### Plug-in log
The plug-in log is volatile by default (not saved to disk) and shown only in the log dialogue, which can be opened by tapping the **document box icon** in the ribbon.
#### Enable verbose log
Before reproducing the issue, enable **Verbose Log** in LiveSync's **General Settings** pane. Without this, many diagnostic messages will be suppressed.
#### Persist the log to a file (optional)
If you need to capture a log across a restart, enable **"Write logs into the file"** in General Settings. Note that log files may contain sensitive information — use this option only for troubleshooting, and disable it afterwards.
As with the hatch report, consider uploading large logs to [GitHub Gist](https://gist.github.com/).
### Network log (for connection-related issues only)
If the issue is related to network connectivity (e.g., cannot connect to the server, authentication errors), a network log captured from browser DevTools can be very helpful. You do not need to include this for non-connection issues.
#### Opening DevTools
| Platform | Shortcut |
|----------|----------|
| Windows / Linux | `Ctrl + Shift + I` |
| macOS | `Cmd + Shift + I` |
| Android | Use [Chrome remote debugging](https://developer.chrome.com/docs/devtools/remote-debugging/) |
| iOS | Use [Safari Web Inspector](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios) on a Mac |
#### What to capture
1. Open the **Network** pane in DevTools.
2. Reproduce the issue.
3. Look for requests marked in red.
4. Capture screenshots of the **Headers**, **Payload**, and **Response** tabs for those requests.
**Important — redact before sharing:**
- Headers: conceal the request URL path, Remote Address, `authority`, and `authorization` values.
- Payload / Response: the `_id` field contains your file paths — redact if needed.

View File

@@ -2,7 +2,6 @@
import esbuild from "esbuild";
import process from "process";
import builtins from "builtin-modules";
import sveltePlugin from "esbuild-svelte";
import { sveltePreprocess } from "svelte-preprocess";
import fs from "node:fs";

View File

@@ -1,102 +1,83 @@
import typescriptEslint from "@typescript-eslint/eslint-plugin";
import svelte from "eslint-plugin-svelte";
import _import from "eslint-plugin-import";
import { fixupPluginRules } from "@eslint/compat";
import tsParser from "@typescript-eslint/parser";
import path from "node:path";
import { fileURLToPath } from "node:url";
import js from "@eslint/js";
import { FlatCompat } from "@eslint/eslintrc";
import obsidianmd from "eslint-plugin-obsidianmd";
import globals from "globals";
import { defineConfig, globalIgnores } from "eslint/config";
import * as sveltePlugin from "eslint-plugin-svelte";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const compat = new FlatCompat({
baseDirectory: __dirname,
recommendedConfig: js.configs.recommended,
allConfig: js.configs.all,
});
export default [
export default defineConfig([
globalIgnores([
"**/node_modules/*",
"**/jest.config.js",
"src/lib/coverage",
"src/lib/browsertest",
"**/test.ts",
"**/tests.ts",
"**/**test.ts",
"**/**.test.ts",
"**/*.unit.spec.ts",
"**/esbuild.*.mjs",
"**/terser.*.mjs",
"**/node_modules",
"**/build",
"**/.eslintrc.js.bak",
"src/lib/src/patches/pouchdb-utils",
"**/esbuild.config.mjs",
"**/rollup.config.js",
"modules/octagonal-wheels/rollup.config.js",
"modules/octagonal-wheels/dist/**/*",
"src/lib/test",
"src/lib/_tools",
"src/lib/src/cli",
"**/main.js",
"src/apps/**/*",
".prettierrc.*.mjs",
".prettierrc.mjs",
"*.config.mjs",
"src/apps/**/*",
"src/lib/src/services/implements/browser/**",
"src/lib/src/services/implements/headless/**",
"src/lib/src/API",
]),
...sveltePlugin.configs["flat/base"],
...obsidianmd.configs.recommended,
{
ignores: [
"**/node_modules/*",
"**/jest.config.js",
"src/lib/coverage",
"src/lib/browsertest",
"**/test.ts",
"**/tests.ts",
"**/**test.ts",
"**/**.test.ts",
"**/esbuild.*.mjs",
"**/terser.*.mjs",
"**/node_modules",
"**/build",
"**/.eslintrc.js.bak",
"src/lib/src/patches/pouchdb-utils",
"**/esbuild.config.mjs",
"**/rollup.config.js",
"modules/octagonal-wheels/rollup.config.js",
"modules/octagonal-wheels/dist/**/*",
"src/lib/test",
"src/lib/src/cli",
"**/main.js",
"src/apps/**/*",
".prettierrc.*.mjs",
".prettierrc.mjs",
"*.config.mjs"
],
},
...compat.extends(
"eslint:recommended",
"plugin:@typescript-eslint/eslint-recommended",
"plugin:@typescript-eslint/recommended"
),
{
plugins: {
"@typescript-eslint": typescriptEslint,
svelte,
import: fixupPluginRules(_import),
},
files: ["**/*.ts"],
languageOptions: {
globals: { ...globals.browser },
parser: tsParser,
ecmaVersion: 5,
sourceType: "module",
parserOptions: {
project: ["tsconfig.json"],
project: "./tsconfig.json",
},
},
rules: {
"no-unused-vars": "off",
"@typescript-eslint/no-unused-vars": [
"error",
{
args: "none",
},
],
"@typescript-eslint/no-unused-vars": ["error", { args: "none" }],
"no-unused-labels": "off",
"@typescript-eslint/ban-ts-comment": "off",
"no-prototype-builtins": "off",
"@typescript-eslint/no-empty-function": "off",
"require-await": "error",
"obsidianmd/rule-custom-message": "off", // Temporary
"obsidianmd/ui/sentence-case": "off", // Temporary
"@typescript-eslint/require-await": "warn",
"@typescript-eslint/no-misused-promises": "warn",
"@typescript-eslint/no-floating-promises": "warn",
"no-async-promise-executor": "warn",
"@typescript-eslint/no-explicit-any": "off",
"@typescript-eslint/no-unnecessary-type-assertion": "error",
"no-constant-condition": [
"error",
{
checkLoops: false,
},
],
"no-constant-condition": ["error", { checkLoops: false }],
},
},
];
{
files: ["**/*.svelte"],
languageOptions: {
parserOptions: {
parser: tsParser,
},
},
rules: {
"no-unused-vars": ["error", { argsIgnorePattern: "^_", varsIgnorePattern: "^_" }],
"obsidianmd/no-plugin-as-component": "off", // Temporary
},
},
]);

View File

@@ -1,8 +1,8 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.25.56",
"minAppVersion": "0.9.12",
"version": "0.25.60",
"minAppVersion": "1.7.2",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",
"authorUrl": "https://github.com/vrtmrz",

1950
package-lock.json generated

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "obsidian-livesync",
"version": "0.25.56",
"version": "0.25.60",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
@@ -61,14 +61,13 @@
"license": "MIT",
"devDependencies": {
"@chialab/esbuild-plugin-worker": "^0.19.0",
"@eslint/compat": "^2.0.2",
"@eslint/eslintrc": "^3.3.4",
"@eslint/js": "^9.39.3",
"@sveltejs/vite-plugin-svelte": "^6.2.4",
"@tsconfig/svelte": "^5.0.8",
"@types/deno": "^2.5.0",
"@types/diff-match-patch": "^1.0.36",
"@types/markdown-it": "^14.1.2",
"@types/micromatch": "^4.0.10",
"@types/node": "^24.10.13",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
@@ -83,18 +82,15 @@
"@vitest/browser": "^4.1.1",
"@vitest/browser-playwright": "^4.1.1",
"@vitest/coverage-v8": "^4.1.1",
"builtin-modules": "5.0.0",
"dotenv": "^17.3.1",
"dotenv-cli": "^11.0.0",
"esbuild": "0.25.0",
"esbuild-plugin-inline-worker": "^0.1.1",
"esbuild-svelte": "^0.9.4",
"eslint": "^9.39.3",
"eslint-plugin-import": "^2.32.0",
"eslint-plugin-obsidianmd": "^0.3.0",
"eslint-plugin-svelte": "^3.15.0",
"events": "^3.3.0",
"glob": "^13.0.6",
"obsidian": "^1.12.3",
"globals": "^14.0.0",
"playwright": "^1.58.2",
"postcss": "^8.5.6",
"postcss-load-config": "^6.0.1",
@@ -115,6 +111,7 @@
"svelte-check": "^4.4.3",
"svelte-preprocess": "^6.0.3",
"terser": "^5.39.0",
"tinyglobby": "^0.2.15",
"transform-pouch": "^2.0.0",
"tslib": "^2.8.1",
"tsx": "^4.21.0",
@@ -133,11 +130,14 @@
"@smithy/protocol-http": "^5.3.9",
"@smithy/querystring-builder": "^4.2.9",
"@trystero-p2p/nostr": "^0.23.0",
"chokidar": "^4.0.0",
"commander": "^14.0.3",
"obsidian": "^1.12.3",
"diff-match-patch": "^1.0.5",
"fflate": "^0.8.2",
"idb": "^8.0.3",
"markdown-it": "^14.1.1",
"micromatch": "^4.0.0",
"minimatch": "^10.2.2",
"octagonal-wheels": "^0.1.45",
"pouchdb-adapter-leveldb": "^9.0.0",

View File

@@ -1,4 +1,5 @@
import { LOG_LEVEL_INFO } from "octagonal-wheels/common/logger";
import type PouchDB from "pouchdb-core";
import type { SimpleStore } from "octagonal-wheels/databases/SimpleStoreBase";
import type { HasSettings, ObsidianLiveSyncSettings, EntryDoc } from "./lib/src/common/types";
import { __$checkInstanceBinding } from "./lib/src/dev/checks";
@@ -34,12 +35,11 @@ export class LiveSyncBaseCore<
TCommands extends IMinimumLiveSyncCommands = IMinimumLiveSyncCommands,
>
implements
LiveSyncLocalDBEnv,
LiveSyncReplicatorEnv,
LiveSyncJournalReplicatorEnv,
LiveSyncCouchDBReplicatorEnv,
HasSettings<ObsidianLiveSyncSettings>
{
LiveSyncLocalDBEnv,
LiveSyncReplicatorEnv,
LiveSyncJournalReplicatorEnv,
LiveSyncCouchDBReplicatorEnv,
HasSettings<ObsidianLiveSyncSettings> {
addOns = [] as TCommands[];
/**
@@ -123,7 +123,7 @@ export class LiveSyncBaseCore<
for (const module of this.modules) {
if (module.constructor === constructor) return module as T;
}
throw new Error(`Module ${constructor} not found or not loaded.`);
throw new Error(`Module ${constructor.name} not found or not loaded.`);
}
/**
@@ -160,8 +160,10 @@ export class LiveSyncBaseCore<
module.onBindFunction(this, this.services);
__$checkInstanceBinding(module); // Check if all functions are properly bound, and log warnings if not.
} else {
// module should not be never.
const moduleName = (module as unknown)?.constructor?.name ?? "unknown";
this.services.API.addLog(
`Module ${(module as any)?.constructor?.name ?? "unknown"} does not have onBindFunction, skipping binding.`,
`Module ${moduleName} does not have onBindFunction, skipping binding.`,
LOG_LEVEL_INFO
);
}

View File

@@ -3,4 +3,6 @@ test/*
!test/*.sh
test/test-init.local.sh
node_modules
.*.json
.*.json
*.env
!.test.env

View File

@@ -45,11 +45,84 @@ CLI Main
- Settings management (JSON file)
- Graceful shutdown handling
## Something I realised later that could lead to misunderstandings
## Usage
The term `vault` in this README refers to the directory containing your local database and settings file. Not the actual files you want to sync. I will fix this later, but please be mind this for now.
The CLI operates on a **database directory** which contains PouchDB data and settings.
## Docker
> [!NOTE]
> `livesync-cli` is the alias for the CLI executable. Please replace with the actual command of your installation (e.g. `npm run --silent cli --` or `docker run ...`).
```bash
livesync-cli [database-path] [command] [args...]
```
### Arguments
- `database-path`: Path to the directory where `.livesync` folder and `settings.json` are (or will be) located.
- Note: In previous versions, this was referred to as the "vault" path. Now it is clearly distinguished from the actual vault (the directory containing your `.md` files).
### Commands
- `sync`: Run one replication cycle with the remote CouchDB.
- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
- If `vault-path` is provided, the CLI will synchronise the database with files in the vault directory.
- If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
- Use this command to keep your local `.md` files in sync with the database.
- `ls [prefix]`: List files currently stored in the local database.
- `push <src> <dst>`: Push a local file `<src>` into the database at path `<dst>`.
- `pull <src> <dst>`: Pull a file `<src>` from the database into local file `<dst>`.
- `cat <src>`: Read a file from the database and write to stdout.
- `put <dst>`: Read from stdin and write to the database path `<dst>`.
- `init-settings [file]`: Create a default settings file.
### Examples
```bash
# Basic sync with remote
livesync-cli ./my-db sync
# Mirroring to your actual Obsidian vault
livesync-cli ./my-db mirror /path/to/obsidian-vault
# Manual file operations
livesync-cli ./my-db push ./note.md folder/note.md
livesync-cli ./my-db pull folder/note.md ./note.md
```
## Installation
### Build from source
```bash
# Clone with submodules, because the shared core lives in src/lib
git clone --recurse-submodules <repository-url>
cd obsidian-livesync
# If you already cloned without submodules, run this once instead
git submodule update --init --recursive
# Install dependencies from the repository root
npm install
# Build the CLI from its package directory
cd src/apps/cli
npm run build
```
If `src/lib` is missing, `npm run build` now stops early with a targeted message
instead of a low-level Vite `ENOENT` error.
Run the CLI:
```bash
# Run with npm script (from repository root)
npm run --silent cli -- [database-path] [command] [args...]
# Run the built executable directly
node src/apps/cli/dist/index.cjs [database-path] [command] [args...]
```
### Docker
A Docker image is provided for headless / server deployments. Build from the repository root:
@@ -61,28 +134,28 @@ Run:
```bash
# Sync with CouchDB
docker run --rm -v /path/to/your/vault:/data livesync-cli sync
docker run --rm -v /path/to/your/db:/data livesync-cli sync
# Mirror to a specific vault directory
docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault
# List files in the local database
docker run --rm -v /path/to/your/vault:/data livesync-cli ls
# Generate a default settings file
docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
docker run --rm -v /path/to/your/db:/data livesync-cli ls
```
The vault directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
### P2P (WebRTC) and Docker networking
#### P2P (WebRTC) and Docker networking
The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates
three kinds of ICE candidates. The default Docker bridge network affects which
candidates are usable:
| Candidate type | Description | Bridge network |
|---|---|---|
| `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
| `srflx` | Host public IP via STUN reflection | Works over the internet |
| `relay` | Traffic relayed via TURN server | Always reachable |
| Candidate type | Description | Bridge network |
| -------------- | ---------------------------------- | -------------------------- |
| `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
| `srflx` | Host public IP via STUN reflection | Works over the internet |
| `relay` | Traffic relayed via TURN server | Always reachable |
**LAN P2P on Linux** — use `--network host` so that the real host IP is
advertised as the `host` candidate:
@@ -91,6 +164,8 @@ advertised as the `host` candidate:
docker run --rm --network host -v /path/to/your/vault:/data livesync-cli p2p-host
```
Note: also fix the alias to include `--network host` if you want to use `livesync-cli` for P2P commands.
> `--network host` is not available on Docker Desktop for macOS or Windows.
**LAN P2P on macOS / Windows Docker Desktop** — configure a TURN server in the
@@ -103,16 +178,35 @@ candidate carries the host's public IP and peers can connect normally.
**CouchDB sync only (no P2P)** — no special network configuration is required.
## Installation
### Adding `livesync-cli` alias
To use the `livesync-cli` command globally, you can add an alias to your shell configuration file (e.g., `.zshrc` or `.bashrc`).
If you are using `npm run`, add the following line:
```bash
# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
# due to shared dependencies with webapp and main library
npm install
# Build the project (ensure you are in `src/apps/cli` directory)
npm run build
alias livesync-cli='npm run --silent --prefix /path/to/repository/src/apps/cli cli --'
# or
alias livesync-cli="npm run --silent --prefix $PWD cli --"
```
Alternatively, if you want to use the built executable directly:
```bash
alias livesync-cli='node /path/to/repository/src/apps/cli/dist/index.cjs'
# or
alias livesync-cli="node $PWD/dist/index.cjs"
```
If you prefer using Docker:
```bash
alias livesync-cli='docker run --rm -v /path/to/your/db:/data livesync-cli'
```
After adding the alias, restart your shell or run `source ~/.zshrc` (or `.bashrc`).
## Usage
### Basic Usage
@@ -121,43 +215,43 @@ As you know, the CLI is designed to be used in a headless environment. Hence all
```bash
# Sync local database with CouchDB (no files will be changed).
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json sync
livesync-cli /path/to/your-local-database --settings /path/to/settings.json sync
# Push files to local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
livesync-cli /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
# Pull files from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
# Verbose logging
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose
livesync-cli /path/to/your-local-database --settings /path/to/settings.json --verbose
# Apply setup URI to settings file (settings only; does not run synchronisation)
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
livesync-cli /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
# Put text from stdin into local database
echo "Hello from stdin" | npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
echo "Hello from stdin" | livesync-cli /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
# Output a file from local database to stdout
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
# Output a specific revision of a file from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
# Pull a specific revision of a file from local database to local storage
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
# List files in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
livesync-cli /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
# Show metadata for a file in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
livesync-cli /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
# Mark a file as deleted in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
livesync-cli /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
# Resolve conflict by keeping a specific revision
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
livesync-cli /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```
### Configuration
@@ -192,7 +286,8 @@ The CLI uses the same settings format as the Obsidian plugin. Create a `.livesyn
```
Usage:
livesync-cli [database-path] [options] [command] [command-args]
livesync-cli <database-path> [options] <command> [command-args]
livesync-cli init-settings [path]
Arguments:
database-path Path to the local database directory (required except for init-settings)
@@ -201,9 +296,12 @@ Options:
--settings, -s <path> Path to settings file (default: .livesync/settings.json in local database directory)
--force, -f Overwrite existing file on init-settings
--verbose, -v Enable verbose logging
--debug, -d Enable debug logging (includes verbose)
--interval <N>, -i <N> (daemon only) Poll CouchDB every N seconds instead of using the _changes feed
--help, -h Show this help message
Commands:
daemon (default) Run mirror scan then continuously sync CouchDB <-> local filesystem
init-settings [path] Create settings JSON from DEFAULT_SETTINGS
sync Run one replication cycle and exit
p2p-peers <timeout> Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
@@ -211,16 +309,16 @@ Commands:
p2p-host Start P2P host mode and wait until interrupted (Ctrl+C)
push <src> <dst> Push local file <src> into local database path <dst>
pull <src> <dst> Pull file <src> from local database into local file <dst>
pull-rev <src> <dst> <revision> Pull specific revision into local file <dst>
pull-rev <src> <dst> <rev> Pull specific revision <rev> into local file <dst>
setup <setupURI> Apply setup URI to settings file
put <vaultPath> Read text from standard input and write to local database
cat <vaultPath> Write latest file content from local database to standard output
cat-rev <vaultPath> <revision> Write specific revision content from local database to standard output
put <dst> Read text from standard input and write to local database path <dst>
cat <src> Write latest file content from local database to standard output
cat-rev <src> <rev> Write specific revision <rev> content from local database to standard output
ls [prefix] List files as path<TAB>size<TAB>mtime<TAB>revision[*]
info <vaultPath> Show file metadata including current and past revisions, conflicts, and chunk list
rm <vaultPath> Mark file as deleted in local database
resolve <vaultPath> <revision> Resolve conflict by keeping the specified revision
mirror <storagePath> <vaultPath> Mirror local file into local database.
info <path> Show file metadata including current and past revisions, conflicts, and chunk list
rm <path> Mark file as deleted in local database
resolve <path> <rev> Resolve conflict by keeping the specified revision
mirror [vaultPath] Mirror database contents to the local file system (vaultPath defaults to database-path)
```
Run via npm script:
@@ -300,16 +398,96 @@ In other words, it performs the following actions:
5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):
| Group | Condition | Action |
|---|---|---|
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |
| Group | Condition | Action |
| ----------------------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |
6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.
Note: `mirror` does not respect file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run. To delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour (handling deletions automatically in `mirror` would mean contending with a large number of edge cases).
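For illustration, here is a minimal TypeScript sketch of the three-way categorisation described above. The `Entry` shape and the storage/database maps are hypothetical; the real scan also applies the concurrency limit, size limit, and the actual mtime comparison.
```typescript
// Illustrative sketch only: hypothetical Entry shape and maps, not the actual CLI code.
type Entry = { path: string; mtime: number; deleted?: boolean; conflicted?: boolean };

function categorise(storage: Map<string, Entry>, database: Map<string, Entry>) {
    const updateDatabase: string[] = []; // UPDATE DATABASE: exists in storage only
    const updateStorage: string[] = []; // UPDATE STORAGE: exists in database only
    const syncBoth: string[] = []; // SYNC DATABASE AND STORAGE: exists in both
    const allPaths = new Set([...storage.keys(), ...database.keys()]);
    for (const p of allPaths) {
        const s = storage.get(p);
        const d = database.get(p);
        if (s && !d) {
            updateDatabase.push(p);
        } else if (!s && d) {
            // Deleted or conflicted database entries are skipped.
            if (!d.deleted && !d.conflicted) updateStorage.push(p);
        } else if (s && d) {
            // mtime freshness decides the direction; equal mtimes are a no-op.
            syncBoth.push(p);
        }
    }
    return { updateDatabase, updateStorage, syncBoth };
}
```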
##### daemon
`daemon` is the default command when no command is specified. It runs an initial mirror scan and then continuously syncs changes in both directions:
- **CouchDB → local filesystem**: via the `_changes` feed (LiveSync mode, default) or periodic polling (`--interval N`).
- **local filesystem → CouchDB**: via chokidar file watching. Any file created, modified, or deleted in the vault directory is pushed to CouchDB.
In **LiveSync mode** the `_changes` feed delivers remote changes as they arrive, with sub-second latency. In **polling mode** (`--interval N`) the CLI polls CouchDB every N seconds. Use polling mode if your CouchDB instance does not support long-lived HTTP connections, or if you need predictable network usage.
The daemon exits cleanly on `SIGINT` or `SIGTERM`.
```bash
# LiveSync mode (default — _changes feed, near-real-time)
livesync-cli /path/to/vault
# Polling mode — poll every 60 seconds
livesync-cli /path/to/vault --interval 60
```
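The local filesystem → CouchDB direction can be pictured as a plain chokidar watcher. The sketch below is illustrative only: `pushToDatabase` and `markDeleted` are hypothetical stand-ins for the CLI's real store and delete plumbing, and the watcher options shown are assumptions rather than the daemon's actual configuration.
```typescript
import { watch } from "chokidar";

// Hypothetical stand-ins; the real daemon routes these through its service modules.
async function pushToDatabase(p: string): Promise<void> {
    console.log(`push ${p}`);
}
async function markDeleted(p: string): Promise<void> {
    console.log(`delete ${p}`);
}

export function watchVaultSketch(vaultPath: string) {
    const watcher = watch(vaultPath, {
        ignoreInitial: true, // the initial state is reconciled by the mirror scan
        ignored: /(^|[\/\\])\../, // skip dotfiles, mirroring the CLI's internal-file filter
        awaitWriteFinish: true, // wait until writes settle before reading the file
    });
    watcher.on("add", (p) => void pushToDatabase(p));
    watcher.on("change", (p) => void pushToDatabase(p));
    watcher.on("unlink", (p) => void markDeleted(p));
    return watcher;
}
```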
### .livesync/ignore
Place a `.livesync/ignore` file in your vault root to exclude files from sync in both directions (local → CouchDB and CouchDB → local).
**Format:**
- Lines beginning with `#` are comments.
- Blank lines are ignored.
- All other lines are [minimatch](https://github.com/isaacs/minimatch) glob patterns, relative to the vault root.
- The directive `import: .gitignore` (exactly this string) reads `.gitignore` from the vault root and merges its non-comment, non-blank lines into the ignore rules.
- Negation patterns (lines starting with `!`) are not supported and will cause an error on load.
**Example `.livesync/ignore`:**
```
# Ignore temporary files
*.tmp
*.swp
# Ignore build output
build/
dist/
# Merge patterns from .gitignore
import: .gitignore
```
Patterns apply in both directions: the chokidar watcher will not emit events for matched files, and the `isTargetFile` filter will exclude them from CouchDB → local sync.
Changes to this file require a daemon restart to take effect.
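A minimal sketch of how such a file could be parsed and applied with minimatch follows. This is not the actual `IgnoreRules` module: `loadIgnorePatterns` and `shouldIgnore` are illustrative helpers, and the matching options are assumptions.
```typescript
import * as fs from "fs/promises";
import * as path from "path";
import { minimatch } from "minimatch"; // named export in recent minimatch versions

// Sketch: honour comments, blank lines, the `import: .gitignore` directive,
// and reject negation patterns as described above.
export async function loadIgnorePatterns(vaultRoot: string): Promise<string[]> {
    const raw = await fs.readFile(path.join(vaultRoot, ".livesync", "ignore"), "utf-8");
    const patterns: string[] = [];
    for (const line of raw.split(/\r?\n/)) {
        const trimmed = line.trim();
        if (trimmed === "" || trimmed.startsWith("#")) continue;
        if (trimmed.startsWith("!")) throw new Error(`Negation patterns are not supported: ${trimmed}`);
        if (trimmed === "import: .gitignore") {
            const gitignore = await fs.readFile(path.join(vaultRoot, ".gitignore"), "utf-8");
            for (const g of gitignore.split(/\r?\n/)) {
                const t = g.trim();
                if (t !== "" && !t.startsWith("#")) patterns.push(t);
            }
            continue;
        }
        patterns.push(trimmed);
    }
    return patterns;
}

// Paths are vault-relative, matching the pattern semantics described above.
export function shouldIgnore(relativePath: string, patterns: string[]): boolean {
    return patterns.some((pattern) => minimatch(relativePath, pattern, { dot: true }));
}
```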
### Systemd Installation
The `deploy/` directory contains a systemd unit template and an install script.
**Automated install (user service, recommended):**
```bash
bash src/apps/cli/deploy/install.sh --vault /path/to/vault
```
**With polling interval:**
```bash
bash src/apps/cli/deploy/install.sh --vault /path/to/vault --interval 60
```
**System-wide install** (requires root / sudo for `/etc/systemd/system/`):
```bash
bash src/apps/cli/deploy/install.sh --system --vault /path/to/vault
```
The script:
1. Builds the CLI (`npm install` + `npm run build`).
2. Installs the binary to `~/.local/bin/livesync-cli` (user) or `/usr/local/bin/livesync-cli` (system).
3. Writes the unit file to `~/.config/systemd/user/livesync-cli.service` (user) or `/etc/systemd/system/livesync-cli.service` (system).
4. Runs `systemctl [--user] daemon-reload && systemctl [--user] enable --now livesync-cli`.
**Manual setup** — if you prefer to manage the unit yourself, copy `deploy/livesync-cli.service`, replace `LIVESYNC_BIN` and `LIVESYNC_VAULT_PATH` with the actual binary path and vault path, then install to the appropriate systemd directory.
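For a user service, the substitution can be done with `sed`; the paths below are illustrative and should be adjusted to your install:
```bash
sed -e "s|LIVESYNC_BIN|$HOME/.local/bin/livesync-cli|" \
    -e "s|LIVESYNC_VAULT_PATH|/path/to/vault|" \
    src/apps/cli/deploy/livesync-cli.service \
    > ~/.config/systemd/user/livesync-cli.service
systemctl --user daemon-reload
systemctl --user enable --now livesync-cli
```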
### Planned options:
- `--immediate`: Perform sync after the command (e.g. `push`, `pull`, `put`, `rm`).
@@ -323,9 +501,9 @@ Note: `mirror` does not respect file deletions. If a file is deleted in storage,
Create default settings, apply a setup URI, then run one sync cycle.
```bash
npm run --silent cli -- init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
livesync-cli -- init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | livesync-cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
livesync-cli -- /data/vault --settings /data/livesync-settings.json sync
```
### 2. Scripted import and export
@@ -333,8 +511,8 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
Push local files into the database from automation, and pull them back for export or backup.
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
livesync-cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
livesync-cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```
### 3. Revision inspection and restore
@@ -342,9 +520,9 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull
List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
livesync-cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
livesync-cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```
### 4. Conflict and cleanup workflow
@@ -352,9 +530,9 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull
Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
livesync-cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
livesync-cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```
### 5. CI smoke test for content round-trip
@@ -362,8 +540,8 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm n
Validate that `put`/`cat` is behaving as expected in a pipeline.
```bash
echo "hello-ci" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
echo "hello-ci" | livesync-cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
livesync-cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
```
## Development

View File

@@ -39,12 +39,6 @@ export class NodeFileSystemAdapter implements IFileSystemAdapter<NodeFile, NodeF
async getAbstractFileByPath(p: FilePath | string): Promise<NodeFile | null> {
const pathStr = this.normalisePath(p);
const cached = this.fileCache.get(pathStr);
if (cached) {
return cached;
}
return await this.refreshFile(pathStr);
}
@@ -104,14 +98,15 @@ export class NodeFileSystemAdapter implements IFileSystemAdapter<NodeFile, NodeF
path: pathStr as FilePath,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};
this.fileCache.set(pathStr, file);
return file;
} catch {
// Evict so a deleted file is not returned by subsequent cache scans.
this.fileCache.delete(pathStr);
return null;
}
@@ -137,8 +132,8 @@ export class NodeFileSystemAdapter implements IFileSystemAdapter<NodeFile, NodeF
path: entryRelativePath as FilePath,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};

View File

@@ -28,8 +28,8 @@ export class NodeStorageAdapter implements IStorageAdapter<NodeStat> {
const stat = await fs.stat(this.resolvePath(p));
return {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: stat.isDirectory() ? "folder" : "file",
};
} catch {

View File

@@ -15,7 +15,12 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
}
async read(file: NodeFile): Promise<string> {
return await fs.readFile(this.resolvePath(file.path), "utf-8");
const content = await fs.readFile(this.resolvePath(file.path), "utf-8");
// Correct stale stat.size — chokidar stats may be from a poll before the final write.
// The downstream document integrity check compares stat.size to content length, so
// they must agree or other clients reject the file as corrupted.
file.stat.size = Buffer.byteLength(content, "utf-8");
return content;
}
async cachedRead(file: NodeFile): Promise<string> {
@@ -25,6 +30,8 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
async readBinary(file: NodeFile): Promise<ArrayBuffer> {
const buffer = await fs.readFile(this.resolvePath(file.path));
// Same correction as read() — ensure stat.size matches actual byte length.
file.stat.size = buffer.length;
return buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength) as ArrayBuffer;
}
@@ -66,8 +73,8 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
path: p as any,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};
@@ -89,8 +96,8 @@ export class NodeVaultAdapter implements IVaultAdapter<NodeFile> {
path: p as any,
stat: {
size: stat.size,
mtime: stat.mtimeMs,
ctime: stat.ctimeMs,
mtime: Math.floor(stat.mtimeMs),
ctime: Math.floor(stat.ctimeMs),
type: "file",
},
};

View File

@@ -0,0 +1,312 @@
import { describe, expect, it, vi, beforeEach, afterEach } from "vitest";
import { runCommand } from "./runCommand";
import type { CLIOptions } from "./types";
// Mock performFullScan so daemon tests don't require a real CouchDB connection.
vi.mock("@lib/serviceFeatures/offlineScanner", () => ({
performFullScan: vi.fn(async () => true),
}));
// Mock UnresolvedErrorManager to avoid event-hub side effects.
vi.mock("@lib/services/base/UnresolvedErrorManager", () => ({
UnresolvedErrorManager: class UnresolvedErrorManager {
showError() {}
clearError() {}
clearErrors() {}
},
}));
import * as offlineScanner from "@lib/serviceFeatures/offlineScanner";
function createCoreMock() {
return {
services: {
control: {
activated: Promise.resolve(),
applySettings: vi.fn(async () => {}),
},
setting: {
applyPartial: vi.fn(async () => {}),
currentSettings: vi.fn(() => ({ liveSync: true, syncOnStart: false })),
},
replication: {
replicate: vi.fn(async () => true),
},
appLifecycle: {
onUnload: {
addHandler: vi.fn(),
},
},
},
serviceModules: {
fileHandler: {
dbToStorage: vi.fn(async () => true),
storeFileToDB: vi.fn(async () => true),
},
storageAccess: {
readFileAuto: vi.fn(async () => ""),
writeFileAuto: vi.fn(async () => {}),
},
databaseFileAccess: {
fetch: vi.fn(async () => undefined),
},
},
} as any;
}
function makeDaemonOptions(interval?: number): CLIOptions {
return {
command: "daemon",
commandArgs: [],
databasePath: "/tmp/vault",
verbose: false,
force: false,
interval,
};
}
const baseContext = {
vaultPath: "/tmp/vault",
settingsPath: "/tmp/vault/.livesync/settings.json",
originalSyncSettings: {
liveSync: true,
syncOnStart: false,
periodicReplication: false,
syncOnSave: false,
syncOnEditorSave: false,
syncOnFileOpen: false,
syncAfterMerge: false,
},
} as any;
describe("daemon command", () => {
beforeEach(() => {
vi.restoreAllMocks();
vi.useFakeTimers();
});
afterEach(() => {
vi.useRealTimers();
});
it("calls performFullScan during startup", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(offlineScanner.performFullScan).toHaveBeenCalledTimes(1);
});
it("returns false when performFullScan fails", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(false);
const result = await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(result).toBe(false);
});
it("polling mode: calls setTimeout when interval option is set", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const setTimeoutSpy = vi.spyOn(globalThis, "setTimeout");
await runCommand(makeDaemonOptions(30), { ...baseContext, core });
expect(setTimeoutSpy).toHaveBeenCalledTimes(1);
// Interval should be in milliseconds (30s → 30000ms)
expect(setTimeoutSpy).toHaveBeenCalledWith(expect.any(Function), 30000);
});
it("polling mode: applies settings with suspendFileWatching=false before setting interval", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(10), { ...baseContext, core });
expect(core.services.setting.applyPartial).toHaveBeenCalledWith(
expect.objectContaining({ suspendFileWatching: false }),
true
);
expect(core.services.control.applySettings).toHaveBeenCalledTimes(1);
});
it("liveSync mode: calls applyPartial and applySettings", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(core.services.setting.applyPartial).toHaveBeenCalledWith(
expect.objectContaining({
...baseContext.originalSyncSettings,
suspendFileWatching: false,
}),
true
);
expect(core.services.control.applySettings).toHaveBeenCalledTimes(1);
});
it("liveSync mode: logs warning when both liveSync and syncOnStart are false", async () => {
const core = createCoreMock();
core.services.setting.currentSettings = vi.fn(() => ({
liveSync: false,
syncOnStart: false,
}));
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
const result = await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(result).toBe(true);
const warningCalls = consoleSpy.mock.calls.filter(
(args) => typeof args[0] === "string" && args[0].includes("liveSync and syncOnStart are both disabled")
);
expect(warningCalls.length).toBeGreaterThan(0);
});
it("liveSync mode: no warning when liveSync is true", async () => {
const core = createCoreMock();
core.services.setting.currentSettings = vi.fn(() => ({
liveSync: true,
syncOnStart: false,
}));
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
await runCommand(makeDaemonOptions(), { ...baseContext, core });
const warningCalls = consoleSpy.mock.calls.filter(
(args) => typeof args[0] === "string" && args[0].includes("liveSync and syncOnStart are both disabled")
);
expect(warningCalls.length).toBe(0);
});
it("calls replicate before performFullScan", async () => {
const core = createCoreMock();
const callOrder: string[] = [];
core.services.replication.replicate = vi.fn(async () => {
callOrder.push("replicate");
return true;
});
vi.mocked(offlineScanner.performFullScan).mockImplementation(async () => {
callOrder.push("performFullScan");
return true;
});
await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(callOrder).toEqual(["replicate", "performFullScan"]);
});
it("returns false when initial replication fails", async () => {
const core = createCoreMock();
core.services.replication.replicate = vi.fn(async () => false);
vi.mocked(offlineScanner.performFullScan).mockClear();
const result = await runCommand(makeDaemonOptions(), { ...baseContext, core });
expect(result).toBe(false);
// performFullScan should NOT have been called
expect(offlineScanner.performFullScan).not.toHaveBeenCalled();
});
it("polling mode: registers onUnload handler that clears timeout", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
await runCommand(makeDaemonOptions(10), { ...baseContext, core });
// onUnload handler should have been registered
expect(core.services.appLifecycle.onUnload.addHandler).toHaveBeenCalledTimes(1);
const handler = core.services.appLifecycle.onUnload.addHandler.mock.calls[0][0];
// Get the timeout ID that was created
const clearTimeoutSpy = vi.spyOn(globalThis, "clearTimeout");
await handler();
expect(clearTimeoutSpy).toHaveBeenCalledTimes(1);
});
it("polling backoff: interval escalates on failure, caps at 300000ms, then halves on recovery", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
vi.spyOn(console, "error").mockImplementation(() => {});
// startup replicate (call 1) succeeds; poll calls 2-7 fail; call 8 succeeds.
let callCount = 0;
core.services.replication.replicate = vi.fn(async () => {
callCount++;
if (callCount === 1) return true; // initial startup replicate
if (callCount <= 7) throw new Error("network failure");
return true; // recovery
});
const baseMs = 30 * 1000;
const setTimeoutSpy = vi.spyOn(globalThis, "setTimeout");
await runCommand(makeDaemonOptions(30), { ...baseContext, core });
// After runCommand returns, the first setTimeout has been scheduled.
// setTimeoutSpy.mock.calls[0] is the initial schedule (baseMs).
expect(setTimeoutSpy.mock.calls[0][1]).toBe(baseMs);
// Advance through 6 failure polls. After each failure the next setTimeout
// should be scheduled with a larger (or capped) interval.
// formula: min(base * 2^n, 300000). base=30000ms.
// failure 1: 30000*2=60000, failure 2: 30000*4=120000,
// failure 3: 30000*8=240000, failure 4: 30000*16=480000→capped, 5→cap, 6→cap
const expectedIntervals = [
baseMs * 2, // after failure 1: 60000
baseMs * 4, // after failure 2: 120000
baseMs * 8, // after failure 3: 240000
300_000, // after failure 4 (would be 480000, capped)
300_000, // after failure 5 (cap)
300_000, // after failure 6 (cap)
];
for (const expected of expectedIntervals) {
const prevCallCount = setTimeoutSpy.mock.calls.length;
await vi.advanceTimersByTimeAsync(setTimeoutSpy.mock.calls[prevCallCount - 1][1] as number);
const newCallCount = setTimeoutSpy.mock.calls.length;
expect(newCallCount).toBeGreaterThan(prevCallCount);
expect(setTimeoutSpy.mock.calls[newCallCount - 1][1]).toBe(expected);
}
// Now trigger the success poll — interval should halve each time toward base.
// After failure 6, consecutiveFailures=6, currentIntervalMs=300000.
// On success: consecutiveFailures=5, currentIntervalMs=150000.
const prevCallCount = setTimeoutSpy.mock.calls.length;
await vi.advanceTimersByTimeAsync(setTimeoutSpy.mock.calls[prevCallCount - 1][1] as number);
const afterSuccessCallCount = setTimeoutSpy.mock.calls.length;
expect(afterSuccessCallCount).toBeGreaterThan(prevCallCount);
// The interval after one success should be halved (300000 / 2 = 150000).
expect(setTimeoutSpy.mock.calls[afterSuccessCallCount - 1][1]).toBe(150_000);
});
it("polling error handling: replicate rejection is caught and console.error is called", async () => {
const core = createCoreMock();
vi.mocked(offlineScanner.performFullScan).mockResolvedValue(true);
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
// Make replicate succeed on the initial call (startup), then fail on the poll.
let callCount = 0;
core.services.replication.replicate = vi.fn(async () => {
callCount++;
if (callCount === 1) return true; // startup replicate
throw new Error("network failure");
});
const intervalMs = 30 * 1000;
await runCommand(makeDaemonOptions(30), { ...baseContext, core });
// Advance time to trigger the first poll callback and flush its async work.
await vi.advanceTimersByTimeAsync(intervalMs);
// No unhandled rejection — the error was caught internally.
const errorCalls = consoleSpy.mock.calls.filter(
(args) => typeof args[0] === "string" && args[0].includes("Poll error")
);
expect(errorCalls.length).toBeGreaterThan(0);
});
});

View File

@@ -5,16 +5,106 @@ import { configURIBase } from "@lib/common/models/shared.const";
import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import type { CLICommandContext, CLIOptions } from "./types";
import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils";
import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils";
import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p";
import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";
export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
const { vaultPath, core, settingsPath } = context;
const { databasePath, core, settingsPath } = context;
await core.services.control.activated;
if (options.command === "daemon") {
const log = (msg: unknown) => console.error(`[Daemon] ${msg}`);
// Skip the config mismatch dialog — the daemon cannot resolve it interactively
// and the default "Dismiss" action would block replication. The daemon should
// accept whatever configuration the remote has.
await core.services.setting.applyPartial({ disableCheckingConfigMismatch: true }, true);
// 1. Replicate CouchDB → local PouchDB so the mirror scan has content to work with.
log("Replicating from CouchDB...");
const replResult = await core.services.replication.replicate(true);
if (!replResult) {
console.error("[Daemon] Initial CouchDB replication failed, cannot continue");
return false;
}
log("CouchDB replication complete");
// 2. Mirror scan to reconcile PouchDB ↔ local filesystem.
const errorManager = new UnresolvedErrorManager(core.services.appLifecycle);
log("Running mirror scan...");
const scanOk = await performFullScan(core as any, log, errorManager, false, true);
if (!scanOk) {
console.error("[Daemon] Mirror scan failed, cannot continue");
return false;
}
log("Mirror scan complete");
// 3. Re-enable sync.
const restoreSyncSettings = async () => {
await core.services.setting.applyPartial({
...context.originalSyncSettings,
suspendFileWatching: false,
}, true);
// applySettings fires the full lifecycle: onSuspending → onResumed.
// ModuleReplicatorCouchDB starts continuous replication on onResumed
// via fireAndForget.
await core.services.control.applySettings();
// Lifecycle events (onSuspending) may re-enable suspension flags.
// Clear them explicitly after the lifecycle completes. applyPartial
// with true is a direct store write — it does not re-trigger lifecycle.
await core.services.setting.applyPartial({
suspendFileWatching: false,
suspendParseReplicationResult: false,
}, true);
};
if (options.interval) {
log(`Polling mode: syncing every ${options.interval}s`);
await restoreSyncSettings();
const baseIntervalMs = options.interval * 1000;
let currentIntervalMs = baseIntervalMs;
let consecutiveFailures = 0;
const maxIntervalMs = 5 * 60 * 1000; // 5 minutes cap
const poll = async () => {
try {
await core.services.replication.replicate(true);
if (consecutiveFailures > 0) {
consecutiveFailures--;
currentIntervalMs = Math.max(currentIntervalMs / 2, baseIntervalMs);
log(`Replication recovered`);
}
} catch (err) {
consecutiveFailures++;
currentIntervalMs = Math.min(baseIntervalMs * Math.pow(2, consecutiveFailures), maxIntervalMs);
console.error(`[Daemon] Poll error (${consecutiveFailures} consecutive):`, err);
if (consecutiveFailures >= 5) {
console.error(`[Daemon] Warning: ${consecutiveFailures} consecutive failures, backing off to ${Math.round(currentIntervalMs / 1000)}s`);
}
}
pollTimer = setTimeout(poll, currentIntervalMs);
};
let pollTimer: ReturnType<typeof setTimeout> = setTimeout(poll, currentIntervalMs);
core.services.appLifecycle.onUnload.addHandler(async () => {
clearTimeout(pollTimer);
return true;
});
} else {
log("LiveSync mode: restoring sync settings and starting _changes feed");
await restoreSyncSettings();
// The applySettings() lifecycle fires onResumed → ModuleReplicatorCouchDB which
// starts continuous replication via fireAndForget(openReplication). Don't call
// openReplication directly — it races with the handler and causes dedup/termination.
log("LiveSync active");
const currentSettings = core.services.setting.currentSettings();
if (!currentSettings.liveSync && !currentSettings.syncOnStart) {
console.error("[Daemon] Warning: liveSync and syncOnStart are both disabled in settings. " +
"No sync will occur. Set liveSync=true in your settings file for continuous sync, " +
"or use --interval for polling mode.");
}
}
return true;
}
@@ -77,16 +167,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
throw new Error("push requires two arguments: <src> <dst>");
}
const sourcePath = path.resolve(options.commandArgs[0]);
const destinationVaultPath = toVaultRelativePath(options.commandArgs[1], vaultPath);
const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath);
const sourceData = await fs.readFile(sourcePath);
const sourceStat = await fs.stat(sourcePath);
console.log(`[Command] push ${sourcePath} -> ${destinationVaultPath}`);
console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);
await core.serviceModules.storageAccess.writeFileAuto(destinationVaultPath, toArrayBuffer(sourceData), {
mtime: sourceStat.mtimeMs,
ctime: sourceStat.ctimeMs,
await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
mtime: Math.floor(sourceStat.mtimeMs),
ctime: Math.floor(sourceStat.ctimeMs),
});
const destinationPathWithPrefix = destinationVaultPath as FilePathWithPrefix;
const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);
return stored;
}
@@ -95,16 +185,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("pull requires two arguments: <src> <dst>");
}
const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const destinationPath = path.resolve(options.commandArgs[1]);
console.log(`[Command] pull ${sourceVaultPath} -> ${destinationPath}`);
console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`);
const sourcePathWithPrefix = sourceVaultPath as FilePathWithPrefix;
const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix;
const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true);
if (!restored) {
return false;
}
const data = await core.serviceModules.storageAccess.readFileAuto(sourceVaultPath);
const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath);
await fs.mkdir(path.dirname(destinationPath), { recursive: true });
if (typeof data === "string") {
await fs.writeFile(destinationPath, data, "utf-8");
@@ -118,16 +208,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 3) {
throw new Error("pull-rev requires three arguments: <src> <dst> <rev>");
}
const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const destinationPath = path.resolve(options.commandArgs[1]);
const rev = options.commandArgs[2].trim();
if (!rev) {
throw new Error("pull-rev requires a non-empty revision");
}
console.log(`[Command] pull-rev ${sourceVaultPath}@${rev} -> ${destinationPath}`);
console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
sourceVaultPath as FilePathWithPrefix,
sourceDatabasePath as FilePathWithPrefix,
rev,
true
);
@@ -175,11 +265,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("put requires one argument: <dst>");
}
const destinationVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const content = await readStdinAsUtf8();
console.log(`[Command] put stdin -> ${destinationVaultPath}`);
console.log(`[Command] put stdin -> ${destinationDatabasePath}`);
return await core.serviceModules.databaseFileAccess.storeContent(
destinationVaultPath as FilePathWithPrefix,
destinationDatabasePath as FilePathWithPrefix,
content
);
}
@@ -188,10 +278,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("cat requires one argument: <src>");
}
const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
console.error(`[Command] cat ${sourceVaultPath}`);
const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
console.error(`[Command] cat ${sourceDatabasePath}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
sourceVaultPath as FilePathWithPrefix,
sourceDatabasePath as FilePathWithPrefix,
undefined,
true
);
@@ -212,14 +302,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("cat-rev requires two arguments: <src> <rev>");
}
const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const rev = options.commandArgs[1].trim();
if (!rev) {
throw new Error("cat-rev requires a non-empty revision");
}
console.error(`[Command] cat-rev ${sourceVaultPath} @ ${rev}`);
console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
sourceVaultPath as FilePathWithPrefix,
sourceDatabasePath as FilePathWithPrefix,
rev,
true
);
@@ -239,7 +329,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.command === "ls") {
const prefix =
options.commandArgs.length > 0 && options.commandArgs[0].trim() !== ""
? toVaultRelativePath(options.commandArgs[0], vaultPath)
? toDatabaseRelativePath(options.commandArgs[0], databasePath)
: "";
const rows: { path: string; line: string }[] = [];
@@ -261,6 +351,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
rows.sort((a, b) => a.path.localeCompare(b.path));
if (rows.length > 0) {
process.stdout.write(rows.map((e) => e.line).join("\n") + "\n");
} else {
process.stderr.write("[Info] No documents found in the local database.\n");
}
return true;
}
@@ -269,7 +361,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("info requires one argument: <path>");
}
const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) {
if (doc._deleted || doc.deleted) continue;
@@ -313,7 +405,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("rm requires one argument: <path>");
}
const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
console.error(`[Command] rm ${targetPath}`);
return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix);
}
@@ -322,7 +414,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("resolve requires two arguments: <path> <revision-to-keep>");
}
const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath) as FilePathWithPrefix;
const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix;
const revisionToKeep = options.commandArgs[1].trim();
if (revisionToKeep === "") {
throw new Error("resolve requires a non-empty revision-to-keep");

View File

@@ -58,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise<string> {
describe("runCommand abnormal cases", () => {
const context = {
vaultPath: "/tmp/vault",
databasePath: "/tmp/vault",
settingsPath: "/tmp/vault/.livesync/settings.json",
} as any;

View File

@@ -1,5 +1,6 @@
import { LiveSyncBaseCore } from "../../../LiveSyncBaseCore";
import { ServiceContext } from "@lib/services/base/ServiceBase";
import type { ObsidianLiveSyncSettings } from "@lib/common/types";
export type CLICommand =
| "daemon"
@@ -29,15 +30,18 @@ export interface CLIOptions {
force?: boolean;
command: CLICommand;
commandArgs: string[];
interval?: number;
}
export interface CLICommandContext {
vaultPath: string;
databasePath: string;
core: LiveSyncBaseCore<ServiceContext, any>;
settingsPath: string;
originalSyncSettings: Pick<ObsidianLiveSyncSettings, "liveSync" | "syncOnStart" | "periodicReplication" | "syncOnSave" | "syncOnEditorSave" | "syncOnFileOpen" | "syncAfterMerge">;
}
export const VALID_COMMANDS = new Set([
"daemon",
"sync",
"p2p-peers",
"p2p-sync",

View File

@@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer {
return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
}
export function toVaultRelativePath(inputPath: string, vaultPath: string): string {
export function toDatabaseRelativePath(inputPath: string, databasePath: string): string {
const stripped = inputPath.replace(/^[/\\]+/, "");
if (!path.isAbsolute(inputPath)) {
const normalized = stripped.replace(/\\/g, "/");
const resolved = path.resolve(vaultPath, normalized);
const rel = path.relative(vaultPath, resolved);
const resolved = path.resolve(databasePath, normalized);
const rel = path.relative(databasePath, resolved);
if (rel.startsWith("..") || path.isAbsolute(rel)) {
throw new Error(`Path ${inputPath} is outside of the local database directory`);
}
return rel.replace(/\\/g, "/");
}
const resolved = path.resolve(inputPath);
const rel = path.relative(vaultPath, resolved);
const rel = path.relative(databasePath, resolved);
if (rel.startsWith("..") || path.isAbsolute(rel)) {
throw new Error(`Path ${inputPath} is outside of the local database directory`);
}
@@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin
}
export async function readStdinAsUtf8(): Promise<string> {
const chunks: Buffer[] = [];
const chunks = [];
for await (const chunk of process.stdin) {
if (typeof chunk === "string") {
chunks.push(Buffer.from(chunk, "utf-8"));
} else {
chunks.push(chunk);
chunks.push(chunk as Buffer);
}
}
return Buffer.concat(chunks).toString("utf-8");
return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
}
export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise<string> {

View File

@@ -1,29 +1,33 @@
import * as path from "path";
import { describe, expect, it } from "vitest";
import { toVaultRelativePath } from "./utils";
import { toDatabaseRelativePath } from "./utils";
describe("toVaultRelativePath", () => {
const vaultPath = path.resolve("/tmp/livesync-vault");
describe("toDatabaseRelativePath", () => {
const databasePath = path.resolve("/tmp/livesync-vault");
it("rejects absolute paths outside vault", () => {
expect(() => toVaultRelativePath("/etc/passwd", vaultPath)).toThrow("outside of the local database directory");
expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow(
"outside of the local database directory"
);
});
it("normalizes leading slash for absolute path inside vault", () => {
const absoluteInsideVault = path.join(vaultPath, "notes", "foo.md");
expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("notes/foo.md");
const absoluteInsideVault = path.join(databasePath, "notes", "foo.md");
expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md");
});
it("normalizes Windows-style separators", () => {
expect(toVaultRelativePath("notes\\daily\\2026-03-12.md", vaultPath)).toBe("notes/daily/2026-03-12.md");
expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md");
});
it("returns vault-relative path for another absolute path inside vault", () => {
const absoluteInsideVault = path.join(vaultPath, "docs", "inside.md");
expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("docs/inside.md");
const absoluteInsideVault = path.join(databasePath, "docs", "inside.md");
expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md");
});
it("rejects relative path traversal that escapes vault", () => {
expect(() => toVaultRelativePath("../escape.md", vaultPath)).toThrow("outside of the local database directory");
expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow(
"outside of the local database directory"
);
});
});

src/apps/cli/deploy/install.sh Executable file
View File

@@ -0,0 +1,187 @@
#!/usr/bin/env bash
# install.sh — install livesync-cli as a systemd service
#
# Usage:
# install.sh [--user] [--system] [--vault <path>] [--interval <N>]
#
# Defaults: user install, prompts for vault path if not supplied.
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd -- "$SCRIPT_DIR/../../.." && pwd)"
CLI_DIR="$REPO_ROOT/src/apps/cli"
SERVICE_TEMPLATE="$SCRIPT_DIR/livesync-cli.service"
# ── Argument parsing ────────────────────────────────────────────────────────
INSTALL_MODE="user"
VAULT_PATH=""
INTERVAL=""
FORCE=0
while [[ $# -gt 0 ]]; do
case "$1" in
--user)
INSTALL_MODE="user"
shift
;;
--system)
INSTALL_MODE="system"
shift
;;
--vault)
if [[ -z "${2:-}" ]]; then
echo "Error: --vault requires a path argument" >&2
exit 1
fi
VAULT_PATH="$2"
shift 2
;;
--interval)
if [[ -z "${2:-}" ]]; then
echo "Error: --interval requires a numeric argument" >&2
exit 1
fi
INTERVAL="$2"
if ! [[ "$INTERVAL" =~ ^[1-9][0-9]*$ ]]; then
echo "Error: --interval requires a positive integer, got '$INTERVAL'" >&2
exit 1
fi
shift 2
;;
--force|-f)
FORCE=1
shift
;;
--help|-h)
cat <<EOF
Usage: install.sh [--user|--system] [--vault <path>] [--interval <N>] [--force]
--user Install as a user systemd service (default, ~/.config/systemd/user/)
--system Install as a system systemd service (/etc/systemd/system/)
--vault Path to the vault directory (prompted if omitted)
--interval Poll CouchDB every N seconds instead of using the _changes feed
--force Overwrite existing service unit without prompting
EOF
exit 0
;;
*)
echo "Error: Unknown argument: $1" >&2
exit 1
;;
esac
done
# ── Vault path ──────────────────────────────────────────────────────────────
if [[ -z "$VAULT_PATH" ]]; then
if [ ! -t 0 ]; then
echo "Error: --vault is required in non-interactive mode" >&2
exit 1
fi
printf 'Vault path: '
read -r VAULT_PATH
fi
_orig_vault="$VAULT_PATH"
if ! VAULT_PATH="$(cd -- "$VAULT_PATH" 2>/dev/null && pwd)"; then
echo "Error: vault directory does not exist: $_orig_vault" >&2
exit 1
fi
echo "[INFO] Vault: $VAULT_PATH"
echo "[INFO] Install mode: $INSTALL_MODE"
# ── Build ────────────────────────────────────────────────────────────────────
echo "[INFO] Building CLI from $REPO_ROOT..."
(cd "$REPO_ROOT" && npm install --silent)
(cd "$CLI_DIR" && npm run build)
BUILT_CJS="$CLI_DIR/dist/index.cjs"
if [[ ! -f "$BUILT_CJS" ]]; then
echo "Error: build output not found: $BUILT_CJS" >&2
exit 1
fi
# ── Install binary ───────────────────────────────────────────────────────────
if [[ "$INSTALL_MODE" == "user" ]]; then
BIN_DIR="$HOME/.local/bin"
UNIT_DIR="$HOME/.config/systemd/user"
SYSTEMCTL_FLAGS="--user"
else
BIN_DIR="/usr/local/bin"
UNIT_DIR="/etc/systemd/system"
SYSTEMCTL_FLAGS=""
fi
mkdir -p "$BIN_DIR"
LIVESYNC_BIN="$BIN_DIR/livesync-cli"
LIVESYNC_JS="$BIN_DIR/livesync-cli.js"
# Copy the CJS bundle so the wrapper is self-contained and independent of the
# build directory location.
cp "$BUILT_CJS" "$LIVESYNC_JS"
# Write a bash wrapper that invokes node on the installed bundle.
cat > "$LIVESYNC_BIN" <<WRAPPER
#!/usr/bin/env bash
exec node "$LIVESYNC_JS" "\$@"
WRAPPER
chmod +x "$LIVESYNC_BIN"
echo "[INFO] Installed bundle: $LIVESYNC_JS"
echo "[INFO] Installed binary: $LIVESYNC_BIN"
# ── Write systemd unit ───────────────────────────────────────────────────────
mkdir -p "$UNIT_DIR"
UNIT_PATH="$UNIT_DIR/livesync-cli.service"
EXEC_START="\"$LIVESYNC_BIN\" \"$VAULT_PATH\""
if [[ -n "$INTERVAL" ]]; then
EXEC_START="\"$LIVESYNC_BIN\" \"$VAULT_PATH\" --interval $INTERVAL"
fi
# Check for existing service and offer to overwrite.
if [[ -f "$UNIT_PATH" ]] && [[ "$FORCE" -eq 0 ]]; then
if [ ! -t 0 ]; then
echo "Error: service unit already exists at $UNIT_PATH; use --force to overwrite" >&2
exit 1
fi
printf 'Service unit already exists at %s. Overwrite? [y/N]: ' "$UNIT_PATH"
read -r CONFIRM
case "$CONFIRM" in
[yY]|[yY][eE][sS]) : ;;
*)
echo "[INFO] Aborted. Existing unit left in place."
exit 0
;;
esac
fi
# In awk gsub(), '&' in the replacement means "matched text"; escape any literal '&'
# in path variables before passing them as awk replacement strings.
AWK_BIN="${LIVESYNC_BIN//&/\\&}"
AWK_VAULT="${VAULT_PATH//&/\\&}"
awk -v bin="$AWK_BIN" -v vault="$AWK_VAULT" -v exec_start="ExecStart=$EXEC_START" \
'/^ExecStart=/ { print exec_start; next } {gsub("LIVESYNC_BIN", bin); gsub("LIVESYNC_VAULT_PATH", vault); print}' \
"$SERVICE_TEMPLATE" > "$UNIT_PATH"
echo "[INFO] Installed unit: $UNIT_PATH"
# ── Enable service ───────────────────────────────────────────────────────────
if ! command -v systemctl >/dev/null 2>&1; then
echo "[WARN] systemctl not found — skipping service activation"
echo "[INFO] To enable manually, copy $UNIT_PATH to the correct systemd directory and run:"
echo " systemctl $SYSTEMCTL_FLAGS daemon-reload"
echo " systemctl $SYSTEMCTL_FLAGS enable --now livesync-cli"
exit 0
fi
# shellcheck disable=SC2086
systemctl $SYSTEMCTL_FLAGS daemon-reload
# shellcheck disable=SC2086
systemctl $SYSTEMCTL_FLAGS enable --now livesync-cli
echo ""
echo "[Done] livesync-cli service installed and started."
echo ""
# shellcheck disable=SC2086
systemctl $SYSTEMCTL_FLAGS status livesync-cli --no-pager || true

View File

@@ -0,0 +1,17 @@
[Unit]
Description=Self-hosted LiveSync CLI Daemon
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=LIVESYNC_BIN LIVESYNC_VAULT_PATH
Restart=on-failure
RestartSec=10
TimeoutStartSec=300
StandardOutput=journal
StandardError=journal
LimitNOFILE=65536
[Install]
WantedBy=default.target

View File

@@ -26,6 +26,7 @@ import { VALID_COMMANDS } from "./commands/types";
import type { CLICommand, CLIOptions } from "./commands/types";
import { getPathFromUXFileInfo } from "@lib/common/typeUtils";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import { IgnoreRules } from "./serviceModules/IgnoreRules";
const SETTINGS_FILE = ".livesync/settings.json";
ensureGlobalNodeLocalStorage();
@@ -36,14 +37,16 @@ function printHelp(): void {
Self-hosted LiveSync CLI
Usage:
livesync-cli [database-path] [options] [command] [command-args]
livesync-cli <database-path> [options] <command> [command-args]
livesync-cli init-settings [path]
Arguments:
database-path Path to the local database directory (required)
database-path Path to the local database directory
Commands:
sync Run one replication cycle and exit
p2p-peers <timeout> Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
daemon (default) Run mirror scan then continuously sync CouchDB <-> local filesystem
sync Run one replication cycle and exit
p2p-peers <timeout> Show discovered peers as [peer]\t<peer-id>\t<peer-name>
p2p-sync <peer> <timeout>
Sync with the specified peer-id or peer-name
p2p-host Start P2P host mode and wait until interrupted
@@ -54,11 +57,18 @@ Commands:
put <dst> Read UTF-8 content from stdin and write to local database path <dst>
cat <src> Read file <src> from local database and write to stdout
cat-rev <src> <rev> Read file <src> at specific revision <rev> and write to stdout
ls [prefix] List DB files as path<TAB>size<TAB>mtime<TAB>revision[*]
ls [prefix] List DB files as path\tsize\tmtime\trevision[*]
info <path> Show detailed metadata for a file (ID, revision, conflicts, chunks)
rm <path> Mark a file as deleted in local database
resolve <path> <rev> Resolve conflicts by keeping <rev> and deleting others
mirror [vault-path] Mirror database contents to the local file system (vault-path defaults to database-path)
Options:
--interval <N>, -i <N> (daemon only) Poll CouchDB every N seconds instead of using the _changes feed
Examples:
livesync-cli ./my-database Run daemon (LiveSync mode)
livesync-cli ./my-database --interval 30 Run daemon (polling every 30s)
livesync-cli ./my-database sync
livesync-cli ./my-database p2p-peers 5
livesync-cli ./my-database p2p-sync my-peer-name 15
@@ -92,6 +102,7 @@ export function parseArgs(): CLIOptions {
let verbose = false;
let debug = false;
let force = false;
let interval: number | undefined;
let command: CLICommand = "daemon";
const commandArgs: string[] = [];
@@ -108,10 +119,26 @@ export function parseArgs(): CLIOptions {
settingsPath = args[i];
break;
}
case "--interval":
case "-i": {
i++;
if (!args[i]) {
console.error(`Error: Missing value for ${token}`);
process.exit(1);
}
const n = parseInt(args[i], 10);
if (!Number.isInteger(n) || n <= 0) {
console.error(`Error: --interval requires a positive integer, got '${args[i]}'`);
process.exit(1);
}
interval = n;
break;
}
case "--debug":
case "-d":
// debugging automatically enables verbose logging, as it is intended for debugging issues.
debug = true;
// falls through
case "--verbose":
case "-v":
verbose = true;
@@ -161,6 +188,7 @@ export function parseArgs(): CLIOptions {
force,
command,
commandArgs,
interval,
};
}
@@ -194,6 +222,9 @@ async function createDefaultSettingsFile(options: CLIOptions) {
export async function main() {
const options = parseArgs();
if (options.interval && options.command !== "daemon") {
console.error(`Warning: --interval is only used in daemon mode, ignored for '${options.command}'`);
}
const avoidStdoutNoise =
options.command === "cat" ||
options.command === "cat-rev" ||
@@ -220,34 +251,48 @@ export async function main() {
return;
}
// Resolve vault path
const vaultPath = path.resolve(options.databasePath!);
// Check if vault directory exists
// Resolve database path
const databasePath = path.resolve(options.databasePath!);
// Check if database directory exists
try {
const stat = await fs.stat(vaultPath);
const stat = await fs.stat(databasePath);
if (!stat.isDirectory()) {
console.error(`Error: ${vaultPath} is not a directory`);
console.error(`Error: ${databasePath} is not a directory`);
process.exit(1);
}
} catch (error) {
console.error(`Error: Vault directory ${vaultPath} does not exist`);
console.error(`Error: Database directory ${databasePath} does not exist`);
process.exit(1);
}
// Resolve settings path
const settingsPath = options.settingsPath
? path.resolve(options.settingsPath)
: path.join(vaultPath, SETTINGS_FILE);
configureNodeLocalStorage(path.join(vaultPath, ".livesync", "runtime", "local-storage.json"));
: path.join(databasePath, SETTINGS_FILE);
configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json"));
infoLog(`Self-hosted LiveSync CLI`);
infoLog(`Vault: ${vaultPath}`);
infoLog(`Database Path: ${databasePath}`);
infoLog(`Settings: ${settingsPath}`);
infoLog("");
// For daemon and mirror mode, load ignore rules before the core is constructed so that
// chokidar's ignored option is populated when beginWatch() fires during onLoad().
const watchEnabled = options.command === "daemon";
const vaultPath =
options.command === "mirror" && options.commandArgs[0]
? path.resolve(options.commandArgs[0])
: databasePath;
let ignoreRules: IgnoreRules | undefined;
if (options.command === "daemon" || options.command === "mirror") {
ignoreRules = new IgnoreRules(vaultPath);
await ignoreRules.load();
}
// Create service context and hub
const context = new NodeServiceContext(vaultPath);
const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context);
const context = new NodeServiceContext(databasePath);
const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
let levelStr = "";
switch (level) {
@@ -275,11 +320,14 @@ export async function main() {
}
console.error(`${prefix} ${message}`);
});
// Prevent replication result to be processed automatically.
serviceHubInstance.replication.processSynchroniseResult.addHandler(async () => {
console.error(`[Info] Replication result received, but not processed automatically in CLI mode.`);
return await Promise.resolve(true);
}, -100);
// Prevent replication result from being processed automatically in non-daemon commands.
// In daemon mode the default handler must run so changes are applied to the filesystem.
if (options.command !== "daemon") {
serviceHubInstance.replication.processSynchroniseResult.addHandler(async () => {
console.error(`[Info] Replication result received, but not processed automatically in CLI mode.`);
return await Promise.resolve(true);
}, -100);
}
// Setup settings handlers
const settingService = serviceHubInstance.setting;
@@ -321,7 +369,7 @@ export async function main() {
const core = new LiveSyncBaseCore(
serviceHubInstance,
(core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
return initialiseServiceModulesCLI(vaultPath, core, serviceHub);
return initialiseServiceModulesCLI(vaultPath, core, serviceHub, ignoreRules, watchEnabled);
},
(core) => [
// No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts
@@ -331,14 +379,31 @@ export async function main() {
(core) => {
// Add target filter to prevent internal files from being handled
core.services.vault.isTargetFile.addHandler(async (target) => {
const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target));
const parts = vaultPath.split(path.sep);
const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
const parts = targetPath.split(path.sep);
// if some part of the path starts with dot, treat it as internal file and ignore.
if (parts.some((part) => part.startsWith("."))) {
return await Promise.resolve(false);
}
// PouchDB LevelDB database directory lives in the vault directory.
if (parts[0]?.endsWith("-livesync-v2")) {
return await Promise.resolve(false);
}
return await Promise.resolve(true);
}, -1 /* highest priority */);
// Apply user-defined ignore rules for daemon mode (lower priority, runs after dotfile check).
if (ignoreRules) {
const rules = ignoreRules;
core.services.vault.isTargetFile.addHandler(async (target) => {
const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
if (rules.shouldIgnore(targetPath)) {
return false;
}
// undefined = pass through to next handler in chain
return undefined;
}, 0);
}
}
);
@@ -359,6 +424,25 @@ export async function main() {
process.on("SIGINT", () => shutdown("SIGINT"));
process.on("SIGTERM", () => shutdown("SIGTERM"));
// Save the settings file before any lifecycle events can mutate and persist them.
// suspendAllSync and other lifecycle hooks clobber sync settings in memory, and
// various code paths persist the clobbered state to disk. We restore on shutdown.
const settingsBackup = await fs.readFile(settingsPath, "utf-8").catch(() => null);
// Restore settings file on any exit to undo lifecycle mutations.
// Write to a temp path first so a crash mid-write doesn't leave a truncated file.
process.on("exit", () => {
if (settingsBackup) {
const tmpPath = settingsPath + ".tmp";
try {
require("fs").writeFileSync(tmpPath, settingsBackup, "utf-8");
require("fs").renameSync(tmpPath, settingsPath);
} catch (err) {
console.error("[Settings] Failed to restore settings on exit:", err);
}
}
});
// Start the core
try {
infoLog(`[Starting] Initializing LiveSync...`);
@@ -368,6 +452,18 @@ export async function main() {
console.error(`[Error] Failed to initialize LiveSync`);
process.exit(1);
}
// Capture sync settings before suspendAllSync() clobbers them.
// Used by daemon mode to restore the correct sync behaviour after the mirror scan.
const settingsBeforeSuspend = core.services.setting.currentSettings();
const originalSyncSettings = {
liveSync: settingsBeforeSuspend.liveSync,
syncOnStart: settingsBeforeSuspend.syncOnStart,
periodicReplication: settingsBeforeSuspend.periodicReplication,
syncOnSave: settingsBeforeSuspend.syncOnSave,
syncOnEditorSave: settingsBeforeSuspend.syncOnEditorSave,
syncOnFileOpen: settingsBeforeSuspend.syncOnFileOpen,
syncAfterMerge: settingsBeforeSuspend.syncAfterMerge,
};
await core.services.setting.suspendAllSync();
await core.services.control.onReady();
@@ -393,7 +489,7 @@ export async function main() {
infoLog("");
}
const result = await runCommand(options, { vaultPath, core, settingsPath });
const result = await runCommand(options, { databasePath, core, settingsPath, originalSyncSettings });
if (!result) {
console.error(`[Error] Command '${options.command}' failed`);
process.exitCode = 1;
@@ -401,7 +497,7 @@ export async function main() {
infoLog(`[Done] Command '${options.command}' completed`);
}
if (options.command === "daemon") {
if (options.command === "daemon" && result) {
// Keep the process running
await new Promise(() => {});
} else {

View File

@@ -17,7 +17,7 @@ describe("CLI parseArgs", () => {
});
it("exits 1 when --settings has no value", () => {
process.argv = ["node", "livesync-cli", "./vault", "--settings"];
process.argv = ["node", "livesync-cli", "./databasePath", "--settings"];
const exitMock = mockProcessExit();
const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -37,7 +37,7 @@ describe("CLI parseArgs", () => {
});
it("exits 1 for unknown command after database-path", () => {
process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"];
process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"];
const exitMock = mockProcessExit();
const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -56,33 +56,96 @@ describe("CLI parseArgs", () => {
expect(stdout).toHaveBeenCalled();
const combined = stdout.mock.calls.flat().join("\n");
expect(combined).toContain("Usage:");
expect(combined).toContain("livesync-cli [database-path]");
expect(combined).toContain("livesync-cli <database-path> [options] <command> [command-args]");
});
it("parses p2p-peers command and timeout", () => {
process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"];
process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"];
const parsed = parseArgs();
expect(parsed.databasePath).toBe("./vault");
expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-peers");
expect(parsed.commandArgs).toEqual(["5"]);
});
it("parses p2p-sync command with peer and timeout", () => {
process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"];
process.argv = ["node", "livesync-cli", "./databasePath", "p2p-sync", "peer-1", "12"];
const parsed = parseArgs();
expect(parsed.databasePath).toBe("./vault");
expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-sync");
expect(parsed.commandArgs).toEqual(["peer-1", "12"]);
});
it("parses p2p-host command", () => {
process.argv = ["node", "livesync-cli", "./vault", "p2p-host"];
process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"];
const parsed = parseArgs();
expect(parsed.databasePath).toBe("./vault");
expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-host");
expect(parsed.commandArgs).toEqual([]);
});
it("parses --interval flag with valid integer", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "30"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
expect(parsed.interval).toBe(30);
});
it("parses -i shorthand for --interval", () => {
process.argv = ["node", "livesync-cli", "./vault", "-i", "10"];
const parsed = parseArgs();
expect(parsed.interval).toBe(10);
});
it("exits 1 when --interval has no value", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
expect(exitMock).toHaveBeenCalledWith(1);
});
it("exits 1 when --interval is not a positive integer", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "0"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
expect(exitMock).toHaveBeenCalledWith(1);
});
it("exits 1 when --interval is negative", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "-5"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
});
it("exits 1 when --interval is not numeric", () => {
process.argv = ["node", "livesync-cli", "./vault", "--interval", "abc"];
const exitMock = mockProcessExit();
vi.spyOn(console, "error").mockImplementation(() => {});
expect(() => parseArgs()).toThrowError("__EXIT__:1");
});
it("parses explicit daemon command", () => {
process.argv = ["node", "livesync-cli", "./vault", "daemon"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
expect(parsed.databasePath).toBe("./vault");
});
it("defaults to daemon when no command specified", () => {
process.argv = ["node", "livesync-cli", "./vault"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
});
it("parses explicit daemon command with --interval", () => {
process.argv = ["node", "livesync-cli", "./vault", "daemon", "--interval", "30"];
const parsed = parseArgs();
expect(parsed.command).toBe("daemon");
expect(parsed.interval).toBe(30);
});
});

View File

@@ -11,8 +11,11 @@ import type {
} from "@lib/managers/adapters";
import type { FileEventItemSentinel } from "@lib/managers/StorageEventManager";
import type { NodeFile, NodeFolder } from "../adapters/NodeTypes";
import type { Stats } from "fs";
import * as fs from "fs/promises";
import * as path from "path";
import { watch as chokidarWatch, type FSWatcher } from "chokidar";
import type { IgnoreRules } from "../serviceModules/IgnoreRules";
/**
* CLI-specific type guard adapter
@@ -56,22 +59,11 @@ class CLIPersistenceAdapter implements IStorageEventPersistenceAdapter {
}
/**
* CLI-specific status adapter (console logging)
* CLI-specific status adapter (no-op — daemon uses journald for status)
*/
class CLIStatusAdapter implements IStorageEventStatusAdapter {
private lastUpdate = 0;
private updateInterval = 5000; // Update every 5 seconds
updateStatus(status: { batched: number; processing: number; totalQueued: number }): void {
const now = Date.now();
if (now - this.lastUpdate > this.updateInterval) {
if (status.totalQueued > 0 || status.processing > 0) {
// console.log(
// `[StorageEventManager] Batched: ${status.batched}, Processing: ${status.processing}, Total Queued: ${status.totalQueued}`
// );
}
this.lastUpdate = now;
}
updateStatus(_status: { batched: number; processing: number; totalQueued: number }): void {
// intentional no-op
}
}
@@ -100,15 +92,97 @@ class CLIConverterAdapter implements IStorageEventConverterAdapter<NodeFile> {
}
/**
* CLI-specific watch adapter (optional file watching with chokidar)
* CLI-specific watch adapter using chokidar for real-time filesystem monitoring.
*/
class CLIWatchAdapter implements IStorageEventWatchAdapter {
constructor(private basePath: string) {}
private _watcher: FSWatcher | undefined;
constructor(private basePath: string, private ignoreRules?: IgnoreRules, private watchEnabled: boolean = false) {}
private _toNodeFile(filePath: string, stats: Stats | undefined): NodeFile {
return {
path: path.relative(this.basePath, filePath) as FilePath,
stat: {
ctime: stats?.ctimeMs ?? Date.now(),
mtime: stats?.mtimeMs ?? Date.now(),
size: stats?.size ?? 0,
type: "file",
},
};
}
private _toNodeFolder(dirPath: string): NodeFolder {
return {
path: path.relative(this.basePath, dirPath) as FilePath,
isFolder: true,
};
}
async beginWatch(handlers: IStorageEventWatchHandlers): Promise<void> {
// File watching is not activated in the CLI.
// Because the CLI is designed for push/pull operations, not real-time sync.
// console.error("[CLIWatchAdapter] File watching is not enabled in CLI version");
if (!this.watchEnabled) return;
const baseIgnored: Array<RegExp | string | ((p: string) => boolean)> = [
/(^|[/\\])\./,
/(^|[/\\])[^/\\]*-livesync-v2([/\\]|$)/,
];
// Bind the rules to a local const so the closure below captures a non-optional value.
// chokidar v4 no longer accepts glob strings for custom patterns, so the rules are
// applied through a MatchFunction instead.
const rules = this.ignoreRules;
const ignored = rules
? [...baseIgnored, (p: string) => rules.shouldIgnore(path.relative(this.basePath, p))]
: baseIgnored;
const watcher = chokidarWatch(this.basePath, {
ignored,
ignoreInitial: true,
persistent: true,
awaitWriteFinish: {
stabilityThreshold: 500,
pollInterval: 100,
},
});
watcher.on("add", (filePath, stats) => {
const nodeFile = this._toNodeFile(filePath, stats);
handlers.onCreate(nodeFile);
});
watcher.on("change", (filePath, stats) => {
const nodeFile = this._toNodeFile(filePath, stats);
handlers.onChange(nodeFile);
});
watcher.on("unlink", (filePath) => {
const nodeFile = this._toNodeFile(filePath, undefined);
handlers.onDelete(nodeFile);
});
watcher.on("addDir", (dirPath) => {
const nodeFolder = this._toNodeFolder(dirPath);
handlers.onCreate(nodeFolder);
});
watcher.on("unlinkDir", (dirPath) => {
const nodeFolder = this._toNodeFolder(dirPath);
handlers.onDelete(nodeFolder);
});
watcher.on("error", (err) => {
console.error("[CLIWatchAdapter] Fatal watcher error — file watching stopped:", err);
console.error("[CLIWatchAdapter] Exiting for systemd restart.");
void watcher.close();
this._watcher = undefined;
// Use exit(1) rather than SIGTERM so systemd Restart=on-failure engages.
process.exit(1);
});
await new Promise<void>((resolve) => watcher.once("ready", resolve));
this._watcher = watcher;
}
close(): Promise<void> {
if (this._watcher) {
return this._watcher.close();
}
return Promise.resolve();
}
}
@@ -123,11 +197,15 @@ export class CLIStorageEventManagerAdapter implements IStorageEventManagerAdapte
readonly status: CLIStatusAdapter;
readonly converter: CLIConverterAdapter;
constructor(basePath: string) {
constructor(basePath: string, ignoreRules?: IgnoreRules, watchEnabled: boolean = false) {
this.typeGuard = new CLITypeGuardAdapter();
this.persistence = new CLIPersistenceAdapter(basePath);
this.watch = new CLIWatchAdapter(basePath);
this.watch = new CLIWatchAdapter(basePath, ignoreRules, watchEnabled);
this.status = new CLIStatusAdapter();
this.converter = new CLIConverterAdapter();
}
close(): Promise<void> {
return this.watch.close();
}
}

View File

@@ -0,0 +1,126 @@
import { describe, expect, it, vi, beforeEach } from "vitest";
import type { IStorageEventWatchHandlers } from "@lib/managers/adapters";
import type { NodeFile } from "../adapters/NodeTypes";
// ── chokidar mock ──────────────────────────────────────────────────────────────
// Must be hoisted before imports that pull in chokidar.
const mockWatcher = {
on: vi.fn().mockReturnThis(),
once: vi.fn((event: string, cb: () => void) => {
if (event === "ready") cb();
return mockWatcher;
}),
close: vi.fn(() => Promise.resolve()),
};
vi.mock("chokidar", () => ({
watch: vi.fn(() => mockWatcher),
}));
import * as chokidar from "chokidar";
import { CLIStorageEventManagerAdapter } from "./CLIStorageEventManagerAdapter";
// ── helpers ───────────────────────────────────────────────────────────────────
function makeHandlers(): IStorageEventWatchHandlers {
return {
onCreate: vi.fn(),
onChange: vi.fn(),
onDelete: vi.fn(),
onRename: vi.fn(),
} as any;
}
// ── tests ─────────────────────────────────────────────────────────────────────
describe("CLIStorageEventManagerAdapter", () => {
beforeEach(() => {
vi.clearAllMocks();
// Restore the default once() behaviour (ready fires synchronously).
mockWatcher.once.mockImplementation((event: string, cb: () => void) => {
if (event === "ready") cb();
return mockWatcher;
});
});
it("beginWatch is no-op when watchEnabled=false", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, false);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
expect(chokidar.watch).not.toHaveBeenCalled();
});
it("beginWatch calls chokidar.watch when watchEnabled=true", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
expect(chokidar.watch).toHaveBeenCalledTimes(1);
expect(chokidar.watch).toHaveBeenCalledWith(
"/base",
expect.objectContaining({ ignoreInitial: true })
);
});
it("add event produces NodeFile with correct relative path via onCreate", async () => {
const basePath = "/vault/base";
const adapter = new CLIStorageEventManagerAdapter(basePath, undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
// Find the callback registered for the "add" event.
const addCall = mockWatcher.on.mock.calls.find(([event]) => event === "add");
expect(addCall).toBeDefined();
const addCallback = addCall![1] as (filePath: string, stats: any) => void;
const fakeStats = { ctimeMs: 1000, mtimeMs: 2000, size: 42 };
addCallback(`${basePath}/subdir/note.md`, fakeStats);
expect(handlers.onCreate).toHaveBeenCalledTimes(1);
const created = (handlers.onCreate as ReturnType<typeof vi.fn>).mock.calls[0][0] as NodeFile;
expect(created.path).toBe("subdir/note.md");
expect(created.stat?.size).toBe(42);
});
it("close() calls watcher.close()", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
await adapter.close();
expect(mockWatcher.close).toHaveBeenCalledTimes(1);
});
it("close() is safe when no watcher was started", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, false);
// Should not throw.
await expect(adapter.close()).resolves.toBeUndefined();
expect(mockWatcher.close).not.toHaveBeenCalled();
});
it("error event triggers process.exit(1)", async () => {
const adapter = new CLIStorageEventManagerAdapter("/base", undefined, true);
const handlers = makeHandlers();
await adapter.watch.beginWatch(handlers);
const processExitSpy = vi.spyOn(process, "exit").mockImplementation((() => {}) as any);
const errorCall = mockWatcher.on.mock.calls.find(([event]) => event === "error");
expect(errorCall).toBeDefined();
const errorCallback = errorCall![1] as (err: Error) => void;
errorCallback(new Error("disk failure"));
expect(processExitSpy).toHaveBeenCalledWith(1);
processExitSpy.mockRestore();
});
});

View File

@@ -2,6 +2,7 @@ import { StorageEventManagerBase, type StorageEventManagerBaseDependencies } fro
import { CLIStorageEventManagerAdapter } from "./CLIStorageEventManagerAdapter";
import type { IMinimumLiveSyncCommands, LiveSyncBaseCore } from "../../../LiveSyncBaseCore";
import type { ServiceContext } from "@lib/services/base/ServiceBase";
import type { IgnoreRules } from "../serviceModules/IgnoreRules";
// import type { IMinimumLiveSyncCommands } from "@lib/services/base/IService";
export class StorageEventManagerCLI extends StorageEventManagerBase<CLIStorageEventManagerAdapter> {
@@ -10,9 +11,11 @@ export class StorageEventManagerCLI extends StorageEventManagerBase<CLIStorageEv
constructor(
basePath: string,
core: LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands>,
dependencies: StorageEventManagerBaseDependencies
dependencies: StorageEventManagerBaseDependencies,
ignoreRules?: IgnoreRules,
watchEnabled?: boolean
) {
const adapter = new CLIStorageEventManagerAdapter(basePath);
const adapter = new CLIStorageEventManagerAdapter(basePath, ignoreRules, watchEnabled);
super(adapter, dependencies);
this.core = core;
}
@@ -25,4 +28,11 @@ export class StorageEventManagerCLI extends StorageEventManagerBase<CLIStorageEv
// No-op in CLI version
// Internal file handling is not needed
}
/**
* Close the file watcher. Call this during graceful shutdown.
*/
close(): Promise<void> {
return this.adapter.close();
}
}

View File

@@ -6,6 +6,7 @@
"type": "module",
"scripts": {
"dev": "vite",
"prebuild": "node scripts/check-submodule.mjs",
"build": "vite build",
"preview": "vite preview",
"cli": "node dist/index.cjs",

View File

@@ -4,6 +4,7 @@
"version": "0.0.0",
"description": "Runtime dependencies for Self-hosted LiveSync CLI Docker image",
"dependencies": {
"chokidar": "^4.0.0",
"commander": "^14.0.3",
"werift": "^0.22.9",
"pouchdb-adapter-http": "^9.0.0",

View File

@@ -0,0 +1,36 @@
import fs from "node:fs";
import path from "node:path";
import process from "node:process";
const cliDir = process.cwd();
const repoRoot = path.resolve(cliDir, "../../..");
const requiredFiles = [
path.join(repoRoot, "src/lib/src/common/types.ts"),
];
const missingFiles = requiredFiles.filter((filePath) => !fs.existsSync(filePath));
if (missingFiles.length === 0) {
process.exit(0);
}
console.error("[CLI Build Error] Required shared sources were not found.");
console.error("This repository uses Git submodules, and the CLI depends on src/lib.");
console.error("");
console.error("Missing file(s):");
for (const filePath of missingFiles) {
console.error(` - ${path.relative(repoRoot, filePath)}`);
}
console.error("");
console.error("Initialize submodules, then retry the CLI build:");
console.error(" git submodule update --init --recursive");
console.error("");
console.error("For a fresh clone, prefer:");
console.error(" git clone --recurse-submodules <repository-url>");
console.error("");
console.error("Then run:");
console.error(" npm install");
console.error(" cd src/apps/cli");
console.error(" npm run build");
process.exit(1);

View File

@@ -9,6 +9,7 @@ import { ServiceFileAccessCLI } from "./ServiceFileAccessImpl";
import { ServiceDatabaseFileAccessCLI } from "./DatabaseFileAccess";
import { StorageEventManagerCLI } from "../managers/StorageEventManagerCLI";
import type { ServiceModules } from "@lib/interfaces/ServiceModule";
import type { IgnoreRules } from "./IgnoreRules";
/**
* Initialize service modules for CLI version
@@ -22,7 +23,9 @@ import type { ServiceModules } from "@lib/interfaces/ServiceModule";
export function initialiseServiceModulesCLI(
basePath: string,
core: LiveSyncBaseCore<ServiceContext, any>,
services: InjectableServiceHub<ServiceContext>
services: InjectableServiceHub<ServiceContext>,
ignoreRules?: IgnoreRules,
watchEnabled: boolean = false,
): ServiceModules {
const storageAccessManager = new StorageAccessManager();
@@ -42,6 +45,12 @@ export function initialiseServiceModulesCLI(
vaultService: services.vault,
storageAccessManager: storageAccessManager,
APIService: services.API,
}, ignoreRules, watchEnabled);
// Close the file watcher during graceful shutdown so the process can exit cleanly.
services.appLifecycle.onUnload.addHandler(async () => {
await storageEventManager.close();
return true;
});
// Storage access using CLI file system adapter

View File

@@ -0,0 +1,129 @@
import * as fs from "fs/promises";
import * as path from "path";
import { minimatch } from "minimatch";
/**
* Loads and evaluates ignore rules from `.livesync/ignore` inside the vault.
*
* File format:
* - Lines starting with `#` are comments.
* - Blank lines are ignored.
* - `import: .gitignore` (exactly) — merges patterns from the vault's `.gitignore`.
* - All other lines are minimatch glob patterns relative to the vault root.
*
* Negation patterns (lines starting with `!`) are not supported. Loading a
* ruleset containing them throws an error — use separate include/exclude files
* instead.
*
* Missing files (`.livesync/ignore` or `.gitignore`) are silently skipped.
*/
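// An illustrative `.livesync/ignore` in the format described above (a made-up
// example, not drawn from any real vault):
//
//     # editor and build artefacts
//     *.tmp
//     build/
//     docs/private.md
//     import: .gitignore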
export class IgnoreRules {
private patterns: string[] = [];
constructor(private vaultPath: string) {}
/**
* Reads `.livesync/ignore` (and optionally `.gitignore`) and populates the
* pattern list. Safe to call multiple times — each call replaces the
* previous state. Does not throw if files are absent.
*
* @throws if any pattern line begins with `!` (negation is unsupported).
*/
async load(): Promise<void> {
this.patterns = [];
const ignorePath = path.join(this.vaultPath, ".livesync", "ignore");
let rawLines: string[];
try {
const content = await fs.readFile(ignorePath, "utf-8");
rawLines = content.split(/\r?\n/);
} catch {
// File absent or unreadable — treat as empty ruleset.
return;
}
for (const line of rawLines) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) {
continue;
}
// NOTE: Only the exact string "import: .gitignore" is recognised.
// Any future generalisation of this directive must validate that
// the resolved path stays within the vault directory.
if (trimmed === "import: .gitignore") {
await this._importGitignore();
continue;
}
if (trimmed.startsWith("import:")) {
console.error(`[IgnoreRules] Warning: unrecognised directive '${trimmed}' — only 'import: .gitignore' is supported`);
continue;
}
this._addPattern(trimmed);
}
if (this.patterns.length > 0) {
console.error(`[IgnoreRules] Loaded ${this.patterns.length} ignore patterns`);
}
}
// Normalises a single gitignore-style pattern:
// - Patterns ending with `/` (directory patterns like `build/`) are
//   converted to `**/build/**` so they match everything inside that
//   directory, at any depth and wherever the directory appears.
// - Patterns without a `/` are prefixed with `**/` to give them matchBase
// semantics (e.g. `*.tmp` → `**/*.tmp`), matching the basename in any
// subdirectory as gitignore does.
// - Patterns that already contain a `/` (but don't end with one) are
// path-specific and used as-is.
private _normalisePattern(pattern: string): string {
if (pattern.endsWith("/")) {
return "**/" + pattern + "**";
} else if (!pattern.includes("/")) {
return "**/" + pattern;
}
return pattern;
}
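// Concrete normalisations implied by the rules above (these mirror the
// expectations in the accompanying tests):
//   "*.tmp"           -> "**/*.tmp"
//   "build/"          -> "**/build/**"
//   "docs/private.md" -> "docs/private.md"  (unchanged)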
private async _importGitignore(): Promise<void> {
const gitignorePath = path.join(this.vaultPath, ".gitignore");
let content: string;
try {
content = await fs.readFile(gitignorePath, "utf-8");
} catch {
return;
}
this._parseLines(content);
}
private _parseLines(content: string): void {
for (const line of content.split(/\r?\n/)) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
this._addPattern(trimmed);
}
}
private _addPattern(raw: string): void {
if (raw.startsWith("!")) {
throw new Error(
`[IgnoreRules] Negation pattern '${raw}' is not supported. ` +
`Remove it from .livesync/ignore or use a separate include/exclude file.`
);
}
this.patterns.push(this._normalisePattern(raw));
}
/**
* Returns `true` if the given vault-relative path matches any loaded
* ignore pattern.
*
* @param relativePath - Path relative to the vault root, using forward
* slashes or the OS separator.
*/
shouldIgnore(relativePath: string): boolean {
if (this.patterns.length === 0) {
return false;
}
// Normalise to forward slashes for minimatch.
const normalised = relativePath.replace(/\\/g, "/");
return this.patterns.some((p) => minimatch(normalised, p, { dot: true }));
}
}

View File

@@ -0,0 +1,172 @@
import * as fs from "node:fs/promises";
import * as os from "node:os";
import * as path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { IgnoreRules } from "./IgnoreRules";
describe("IgnoreRules", () => {
const tempDirs: string[] = [];
async function createVault(): Promise<string> {
const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), "livesync-ignorerules-"));
tempDirs.push(tempDir);
return tempDir;
}
async function writeIgnoreFile(vaultPath: string, content: string): Promise<void> {
const ignoreDir = path.join(vaultPath, ".livesync");
await fs.mkdir(ignoreDir, { recursive: true });
await fs.writeFile(path.join(ignoreDir, "ignore"), content, "utf-8");
}
afterEach(async () => {
await Promise.all(tempDirs.splice(0).map((dir) => fs.rm(dir, { recursive: true, force: true })));
});
describe("pattern normalisation", () => {
it("adds **/ prefix to basename patterns (no slash)", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("notes/scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("deep/nested/file.tmp")).toBe(true);
});
it("appends ** to directory patterns ending with / and prepends **/", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "build/\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("build/output.js")).toBe(true);
expect(rules.shouldIgnore("build/nested/file.js")).toBe(true);
expect(rules.shouldIgnore("subproject/build/output.js")).toBe(true);
});
it("leaves patterns containing / as-is", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "docs/private.md\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("docs/private.md")).toBe(true);
expect(rules.shouldIgnore("other/docs/private.md")).toBe(false);
});
});
describe("shouldIgnore", () => {
it("matches **/*.tmp against notes/scratch.tmp", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("notes/scratch.tmp")).toBe(true);
});
it("does not match notes/readme.md against **/*.tmp", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("notes/readme.md")).toBe(false);
});
it("returns false when no patterns are loaded", async () => {
const vaultPath = await createVault();
const rules = new IgnoreRules(vaultPath);
// No load() call — patterns are empty
expect(rules.shouldIgnore("anything.md")).toBe(false);
});
});
describe("negation patterns", () => {
it("throws when a negation pattern is encountered", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\n!important.tmp\n");
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).rejects.toThrow(/Negation pattern/);
});
it("throws when a .gitignore imported via directive contains negation", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "import: .gitignore\n");
await fs.writeFile(path.join(vaultPath, ".gitignore"), "*.log\n!keep.log\n", "utf-8");
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).rejects.toThrow(/Negation pattern/);
});
});
describe("unrecognised import: directives", () => {
it("warns and skips unrecognised import: forms (does not add as literal pattern)", async () => {
const vaultPath = await createVault();
// Typo: "import:.gitignore" instead of "import: .gitignore"
await writeIgnoreFile(vaultPath, "*.tmp\nimport:.gitignore\n");
const rules = new IgnoreRules(vaultPath);
await rules.load();
// *.tmp still loaded; import:.gitignore is skipped (not treated as a literal pattern)
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("import:.gitignore")).toBe(false);
});
});
describe("load() with missing file", () => {
it("returns without error when .livesync/ignore is absent", async () => {
const vaultPath = await createVault();
// No ignore file created
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).resolves.toBeUndefined();
expect(rules.shouldIgnore("anything.md")).toBe(false);
});
});
describe("load() with comments and blank lines", () => {
it("skips # comment lines and blank lines", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(
vaultPath,
"# This is a comment\n\n \n*.tmp\n# another comment\nbuild/\n"
);
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("build/output.js")).toBe(true);
expect(rules.shouldIgnore("readme.md")).toBe(false);
});
});
describe("import: .gitignore directive", () => {
it("reads and normalises patterns from .gitignore", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "import: .gitignore\n");
await fs.writeFile(path.join(vaultPath, ".gitignore"), "*.log\nnode_modules/\n", "utf-8");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("app.log")).toBe(true);
expect(rules.shouldIgnore("node_modules/package.json")).toBe(true);
expect(rules.shouldIgnore("src/node_modules/package.json")).toBe(true);
expect(rules.shouldIgnore("src/index.ts")).toBe(false);
});
it("merges .gitignore patterns with other patterns", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\nimport: .gitignore\n");
await fs.writeFile(path.join(vaultPath, ".gitignore"), "*.log\n", "utf-8");
const rules = new IgnoreRules(vaultPath);
await rules.load();
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
expect(rules.shouldIgnore("error.log")).toBe(true);
});
});
describe("import: .gitignore with missing .gitignore", () => {
it("does not throw when .gitignore is absent", async () => {
const vaultPath = await createVault();
await writeIgnoreFile(vaultPath, "*.tmp\nimport: .gitignore\n");
// No .gitignore created
const rules = new IgnoreRules(vaultPath);
await expect(rules.load()).resolves.toBeUndefined();
// The *.tmp pattern from the ignore file still works
expect(rules.shouldIgnore("scratch.tmp")).toBe(true);
});
});
});

View File

@@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService";
import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types";
export class NodeServiceContext extends ServiceContext {
vaultPath: string;
constructor(vaultPath: string) {
databasePath: string;
constructor(databasePath: string) {
super();
this.vaultPath = vaultPath;
this.databasePath = databasePath;
}
}
@@ -64,7 +64,7 @@ class NodeDatabaseService<T extends NodeServiceContext> extends DatabaseService<
): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } {
const optionPass = {
...options,
prefix: this.context.vaultPath + nodePath.sep,
prefix: this.context.databasePath + nodePath.sep,
};
const passSettings = { ...settings, useIndexedDBAdapter: false };
return super.modifyDatabaseOptions(passSettings, name, optionPass);

View File

@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info "Test for Issue #860: Empty output from ls and mirror"
RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
npm run build
fi
echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"
# 1. Test 'ls' on empty database
echo "[INFO] Testing 'ls' on empty database..."
LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls)
if [[ -z "$LS_OUTPUT" ]]; then
echo "[REPRODUCED] 'ls' returned empty output for empty database."
else
echo "[INFO] 'ls' output: $LS_OUTPUT"
fi
# 2. Test 'mirror' on empty vault
echo "[INFO] Testing 'mirror' on empty vault..."
MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1)
if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then
# Note: currently it prints [Command] mirror to stderr.
# Let's see if it prints anything else.
echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)."
else
echo "[INFO] 'mirror' output: $MIRROR_OUTPUT"
fi
echo "[DONE] finished repro-860 test"

View File

@@ -0,0 +1,166 @@
#!/usr/bin/env bash
# Test: daemon-related ignore rules behaviour
#
# Tests that are runnable without a long-running daemon process are exercised
# here using the `mirror` command, which calls the same `isTargetFile` handler
# stack that the daemon uses.
#
# Covered cases:
# 1. .livesync/ignore with *.tmp pattern → ignored file is NOT synced to DB
# 2. .livesync/ignore missing → no error, normal sync continues
# 3. import: .gitignore directive → patterns from .gitignore are merged
#
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info
RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-daemon-test.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR/notes"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
npm run build
fi
echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"
cli_test_mark_settings_configured "$SETTINGS_FILE"
PASS=0
FAIL=0
assert_pass() { echo "[PASS] $1"; PASS=$((PASS + 1)); }
assert_fail() { echo "[FAIL] $1" >&2; FAIL=$((FAIL + 1)); }
# ─────────────────────────────────────────────────────────────────────────────
# Case 1: .livesync/ignore with *.tmp → matched file should NOT appear in DB
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 1: .livesync/ignore *.tmp → ignored file not synced to DB ==="
mkdir -p "$VAULT_DIR/.livesync"
printf '*.tmp\n' > "$VAULT_DIR/.livesync/ignore"
# Also write a normal file so we can confirm mirror ran at all.
printf 'normal content\n' > "$VAULT_DIR/notes/normal.md"
# Write the file that should be ignored.
printf 'tmp content\n' > "$VAULT_DIR/notes/scratch.tmp"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
# The normal file should be in the DB.
RESULT_NORMAL="$WORK_DIR/case1-normal.txt"
if run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull notes/normal.md "$RESULT_NORMAL" 2>/dev/null; then
if cmp -s "$VAULT_DIR/notes/normal.md" "$RESULT_NORMAL"; then
assert_pass "normal.md was synced to DB"
else
assert_fail "normal.md content mismatch after mirror"
fi
else
assert_fail "normal.md was not found in DB after mirror"
fi
# The .tmp file should NOT be in the DB.
DB_LIST="$WORK_DIR/case1-ls.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls > "$DB_LIST"
if grep -q "scratch.tmp" "$DB_LIST"; then
assert_fail "scratch.tmp (ignored) was unexpectedly synced to DB"
echo "--- DB listing ---" >&2; cat "$DB_LIST" >&2
else
assert_pass "scratch.tmp (*.tmp pattern) was NOT synced to DB"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 2: .livesync/ignore absent → no error, normal sync continues
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 2: .livesync/ignore absent → no error, sync continues ==="
VAULT_DIR2="$WORK_DIR/vault2"
mkdir -p "$VAULT_DIR2/notes"
SETTINGS_FILE2="$WORK_DIR/data2.json"
cli_test_init_settings_file "$SETTINGS_FILE2"
cli_test_mark_settings_configured "$SETTINGS_FILE2"
# No .livesync directory at all.
printf 'hello\n' > "$VAULT_DIR2/notes/hello.md"
# mirror should succeed without error.
set +e
MIRROR_OUTPUT="$WORK_DIR/case2-mirror.txt"
run_cli "$VAULT_DIR2" --settings "$SETTINGS_FILE2" mirror >"$MIRROR_OUTPUT" 2>&1
MIRROR_EXIT=$?
set -e
if [[ "$MIRROR_EXIT" -ne 0 ]]; then
assert_fail "mirror exited non-zero ($MIRROR_EXIT) when .livesync/ignore is absent"
cat "$MIRROR_OUTPUT" >&2
else
assert_pass "mirror succeeded when .livesync/ignore is absent"
fi
# The normal file should have been synced.
RESULT_HELLO="$WORK_DIR/case2-hello.txt"
if run_cli "$VAULT_DIR2" --settings "$SETTINGS_FILE2" pull notes/hello.md "$RESULT_HELLO" 2>/dev/null; then
assert_pass "file synced normally when .livesync/ignore is absent"
else
assert_fail "file was not synced when .livesync/ignore is absent"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 3: import: .gitignore merges patterns
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 3: import: .gitignore directive merges patterns ==="
VAULT_DIR3="$WORK_DIR/vault3"
mkdir -p "$VAULT_DIR3/notes"
SETTINGS_FILE3="$WORK_DIR/data3.json"
cli_test_init_settings_file "$SETTINGS_FILE3"
cli_test_mark_settings_configured "$SETTINGS_FILE3"
mkdir -p "$VAULT_DIR3/.livesync"
printf 'import: .gitignore\n' > "$VAULT_DIR3/.livesync/ignore"
printf '# gitignore comment\n*.log\nbuild/\n' > "$VAULT_DIR3/.gitignore"
printf 'regular note\n' > "$VAULT_DIR3/notes/regular.md"
printf 'log content\n' > "$VAULT_DIR3/notes/debug.log"
run_cli "$VAULT_DIR3" --settings "$SETTINGS_FILE3" mirror
DB_LIST3="$WORK_DIR/case3-ls.txt"
run_cli "$VAULT_DIR3" --settings "$SETTINGS_FILE3" ls > "$DB_LIST3"
if grep -q "debug.log" "$DB_LIST3"; then
assert_fail "debug.log (ignored via .gitignore import) was unexpectedly synced to DB"
echo "--- DB listing ---" >&2; cat "$DB_LIST3" >&2
else
assert_pass "debug.log (*.log from imported .gitignore) was NOT synced to DB"
fi
# regular.md should still be present.
if grep -q "regular.md" "$DB_LIST3"; then
assert_pass "regular.md was synced normally alongside .gitignore import rules"
else
assert_fail "regular.md was NOT synced — .gitignore import may have been too broad"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Summary
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "Results: PASS=$PASS FAIL=$FAIL"
if [[ "$FAIL" -gt 0 ]]; then
exit 1
fi

src/apps/cli/test/test-mirror-linux.sh Normal file → Executable file
View File

@@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
DB_DIR="$WORK_DIR/db"
mkdir -p "$VAULT_DIR/test"
mkdir -p "$DB_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
# isConfigured=true is required for mirror (canProceedScan checks this)
cli_test_mark_settings_configured "$SETTINGS_FILE"
# Preparation: Sync settings and files logic
DB_SETTINGS="$DB_DIR/settings.json"
cp "$SETTINGS_FILE" "$DB_SETTINGS"
# Helper for standard run (Separated paths)
run_mirror_test() {
run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
}
# Helper for compatibility run (Same path)
run_mirror_compat() {
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
}
PASS=0
FAIL=0
@@ -78,19 +94,27 @@ portable_touch_timestamp() {
# Case 1: File exists only in storage → should be synced into DB after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 1: storage-only → DB ==="
echo "=== Case 1: storage-only → DB (Separated Paths) ==="
printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
echo "[DEBUG] DB_DIR: $DB_DIR"
echo "[DEBUG] VAULT_DIR: $VAULT_DIR"
run_mirror_test
RESULT_FILE="$WORK_DIR/case1-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"
# Try 'ls' first to see what's in the DB
echo "--- DB contents ---"
run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
echo "-------------------"
run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"
if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
assert_pass "storage-only file was synced into DB"
assert_pass "storage-only file was synced into DB using separated paths"
else
assert_fail "storage-only file NOT synced into DB"
assert_fail "storage-only file NOT synced into DB with separated paths"
echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
fi
@@ -99,9 +123,9 @@ fi
# Case 2: File exists only in DB → should be restored to storage after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 2: DB-only → storage ==="
echo "=== Case 2: DB-only → storage (Separated Paths) ==="
printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md
printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
assert_fail "db-only.md unexpectedly exists in storage before mirror"
@@ -109,7 +133,7 @@ else
echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
fi
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
fi
else
assert_fail "DB-only file was NOT restored to storage"
assert_fail "DB-only file NOT restored to storage after mirror"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 3: File deleted in DB → should NOT be created in storage
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 3: DB-deleted → storage untouched ==="
echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="
printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md
printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test
if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
assert_pass "deleted DB entry was not restored to storage"
@@ -143,19 +167,19 @@ fi
# Case 4: Both exist, storage is newer → DB should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 4: storage newer → DB updated ==="
echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="
# Seed DB with old content (mtime ≈ now)
printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md
printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md
# Write new content to storage with a timestamp 1 hour in the future
printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test
DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
assert_pass "DB updated to match newer storage file"
else
@@ -168,16 +192,16 @@ fi
# Case 5: Both exist, DB is newer → storage should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 5: DB newer → storage updated ==="
echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="
# Write old content to storage with a timestamp 1 hour in the past
printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"
# Write new content to DB only (mtime ≈ now, newer than the storage file)
printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md
printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test
STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
@@ -186,6 +210,25 @@ else
assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 6: Compatibility test - omitted vault-path
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="
# We use VAULT_DIR as the "main" database path for this part.
printf 'compat-content\n' > "$VAULT_DIR/compat.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
# In compat mode, it should find it in the DB at root
CAT_RESULT="$WORK_DIR/compat-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
assert_pass "Compatibility mode works (omitted vault-path)"
else
assert_fail "Compatibility mode failed to sync file into DB"
fi
# ─────────────────────────────────────────────────────────────────────────────
# Summary
# ─────────────────────────────────────────────────────────────────────────────

View File

@@ -0,0 +1,9 @@
hostname=http://127.0.0.1:5989/
dbname=livesync-test-db-ci
username=admin
password=testpassword
minioEndpoint=http://127.0.0.1:9000
accessKey=minioadmin
secretKey=minioadmin
bucketName=livesync-test-bucket-ci
LIVESYNC_TEST_TEE=1

View File

@@ -0,0 +1,150 @@
# Writing CLI Tests on Deno
This guide explains how to add or update tests under `src/apps/cli/testdeno/`.
Note that new tests should be added to the Deno suite rather than the existing bash suite: the Deno suite runs cross-platform and benefits from being written in TypeScript.
## Scope
The Deno suite is designed for cross-platform execution, with a strong focus on Windows compatibility while keeping behaviour equivalent to existing bash tests.
## Principles
- Keep one scenario per file when practical.
- Reuse helpers from `helpers/` rather than duplicating process, Docker, or settings logic.
- Prefer deterministic data over random inputs unless randomness is explicitly required.
- Ensure every test can clean up automatically.
- Keep assertions actionable with clear failure messages.
## Directory structure
```
src/apps/cli/testdeno/
helpers/
backgroundCli.ts
cli.ts
docker.ts
env.ts
p2p.ts
settings.ts
temp.ts
test-*.ts
deno.json
```
## Test file naming
- Use `test-<feature>.ts`.
- Use names aligned with existing bash tests when porting, for example:
- `test-sync-locked-remote.ts`
- `test-p2p-sync.ts`
## Core helper usage
### Temporary workspace
Use `TempDir` and `await using` so cleanup is automatic:
```ts
await using workDir = await TempDir.create("livesync-cli-my-test");
```
### CLI execution
- `runCli(...)`: returns code and combined output.
- `runCliOrFail(...)`: throws on non-zero exit.
- `runCliWithInputOrFail(input, ...)`: for `put` and stdin-driven commands.
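A minimal sketch of a stdin-driven round trip, assuming a test file directly under `testdeno/` (the database path, settings file, and note path below are placeholders):
```ts
import { runCliOrFail, runCliWithInputOrFail } from "./helpers/cli.ts";

// Push a note via stdin, then list what ended up in the local database.
const db = "./work/db"; // placeholder database path
const settings = "./work/data.json"; // placeholder settings file
await runCliWithInputOrFail("hello\n", db, "--settings", settings, "put", "notes/hello.md");
const listing = await runCliOrFail(db, "--settings", settings, "ls");
console.log(listing);
```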
### Settings
- `initSettingsFile(...)`: creates a baseline settings file.
- `applyCouchdbSettings(...)`: applies CouchDB fields.
- `applyRemoteSyncSettings(...)`: applies remote and encryption fields.
- `applyP2pSettings(...)`: applies P2P fields.
- `applyP2pTestTweaks(...)`: enables P2P-only test profile.
### Docker services
- `startCouchdb(...)`, `stopCouchdb()`
- `startP2pRelay()`, `stopP2pRelay()`
### P2P discovery
- `discoverPeer(...)`
- `maybeStartLocalRelay(...)`
- `stopLocalRelayIfStarted(...)`
### Background host process
Use `startCliInBackground(...)` for long-running host mode such as `p2p-host`.
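A sketch of driving a long-running host, assuming placeholder paths and a placeholder readiness marker (match whatever the host actually logs):
```ts
import { startCliInBackground } from "./helpers/backgroundCli.ts";

const host = startCliInBackground("./work/db", "--settings", "./work/data.json", "p2p-host");
try {
  // The marker string is a placeholder; wait for a line the host is known to print.
  await host.waitUntilContains("p2p", 30_000);
  // ... run client-side CLI commands against the host here ...
} finally {
  const code = await host.stop(); // sends SIGTERM and waits for exit
  console.log(`p2p-host exited with code ${code}`);
}
```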
## Recommended test structure
1. Arrange
2. Act
3. Assert
4. Cleanup in `finally`
Example skeleton:
```ts
Deno.test("feature: behaviour", async () => {
await using workDir = await TempDir.create("example");
// Arrange
try {
// Act
// Assert
} finally {
// Optional explicit cleanup
}
});
```
## Reliability guidelines
- Use explicit waits only when needed for eventual consistency.
- Re-run sync operations where the protocol is eventually consistent.
- For network-sensitive commands, use `LIVESYNC_CLI_RETRY` during debugging.
- Keep Docker container reuse disabled by default unless debugging.
## Environment variables
Common variables:
- `LIVESYNC_DOCKER_MODE`
- `LIVESYNC_DOCKER_COMMAND`
- `LIVESYNC_TEST_TEE`
- `LIVESYNC_DOCKER_TEE`
- `LIVESYNC_CLI_DEBUG`
- `LIVESYNC_CLI_VERBOSE`
- `LIVESYNC_CLI_RETRY`
- `LIVESYNC_DEBUG_KEEP_DOCKER`
P2P variables:
- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY`
## Adding a new test task
1. Add the test file under `src/apps/cli/testdeno/`.
2. Add a task in `src/apps/cli/testdeno/deno.json`.
3. Update `src/apps/cli/testdeno/test_dev_deno.md`.
4. Run the new task locally.
## Validation checklist
- The test passes on a clean workspace.
- The test does not leave persistent artefacts unless explicitly requested.
- Failure messages identify both expected and actual behaviour.
- The corresponding task is documented.
## Out of scope for this suite
- One-off reproduction scripts that are not intended as stable regression tests.

View File

@@ -0,0 +1,22 @@
{
"tasks": {
"test": "deno test --env-file=.test.env -A --no-check test-*.ts",
"test:local": "deno test --env-file=.test.env -A --no-check test-setup-put-cat.ts test-mirror.ts",
"test:push-pull": "deno test --env-file=.test.env -A --no-check test-push-pull.ts",
"test:setup-put-cat": "deno test --env-file=.test.env -A --no-check test-setup-put-cat.ts",
"test:mirror": "deno test --env-file=.test.env -A --no-check test-mirror.ts",
"test:sync-two-local": "deno test --env-file=.test.env -A --no-check test-sync-two-local-databases.ts",
"test:sync-locked-remote": "deno test --env-file=.test.env -A --no-check test-sync-locked-remote.ts",
"test:p2p-host": "deno test --env-file=.test.env -A --no-check test-p2p-host.ts",
"test:p2p-peers": "deno test --env-file=.test.env -A --no-check test-p2p-peers-local-relay.ts",
"test:p2p-sync": "deno test --env-file=.test.env -A --no-check test-p2p-sync.ts",
"test:p2p-three-nodes": "deno test --env-file=.test.env -A --no-check test-p2p-three-nodes-conflict.ts",
"test:p2p-upload-download": "deno test --env-file=.test.env -A --no-check test-p2p-upload-download-repro.ts",
"test:e2e-couchdb": "deno test --env-file=.test.env -A --no-check test-e2e-two-vaults-couchdb.ts",
"test:e2e-matrix": "deno test --env-file=.test.env -A --no-check test-e2e-two-vaults-matrix.ts"
},
"imports": {
"@std/assert": "jsr:@std/assert@^1.0.13",
"@std/path": "jsr:@std/path@^1.0.9"
}
}

src/apps/cli/testdeno/deno.lock generated Normal file
View File

@@ -0,0 +1,31 @@
{
"version": "5",
"specifiers": {
"jsr:@std/assert@^1.0.13": "1.0.19",
"jsr:@std/internal@^1.0.12": "1.0.12",
"jsr:@std/path@^1.0.9": "1.1.4"
},
"jsr": {
"@std/assert@1.0.19": {
"integrity": "eaada96ee120cb980bc47e040f82814d786fe8162ecc53c91d8df60b8755991e",
"dependencies": [
"jsr:@std/internal"
]
},
"@std/internal@1.0.12": {
"integrity": "972a634fd5bc34b242024402972cd5143eac68d8dffaca5eaa4dba30ce17b027"
},
"@std/path@1.1.4": {
"integrity": "1d2d43f39efb1b42f0b1882a25486647cb851481862dc7313390b2bb044314b5",
"dependencies": [
"jsr:@std/internal"
]
}
},
"workspace": {
"dependencies": [
"jsr:@std/assert@^1.0.13",
"jsr:@std/path@^1.0.9"
]
}
}

View File

@@ -0,0 +1,112 @@
import { CLI_DIR } from "./cli.ts";
import { join } from "@std/path";
const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";
function decorateArgs(args: string[]): string[] {
return DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
}
async function pump(
stream: ReadableStream<Uint8Array>,
sink: (text: string) => void,
teeTarget: WritableStream<Uint8Array> | null
): Promise<void> {
const reader = stream.getReader();
const writer = teeTarget?.getWriter();
const dec = new TextDecoder();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (!value) continue;
sink(dec.decode(value, { stream: true }));
if (writer) {
await writer.write(value);
}
}
} finally {
if (writer) writer.releaseLock();
reader.releaseLock();
}
}
export class BackgroundCliProcess {
#stdout = "";
#stderr = "";
#stdoutDone: Promise<void>;
#stderrDone: Promise<void>;
constructor(
readonly child: Deno.ChildProcess,
readonly args: string[]
) {
this.#stdoutDone = pump(
child.stdout,
(text) => {
this.#stdout += text;
},
null
);
this.#stderrDone = pump(
child.stderr,
(text) => {
this.#stderr += text;
},
null
);
}
get stdout(): string {
return this.#stdout;
}
get stderr(): string {
return this.#stderr;
}
get combined(): string {
return this.#stdout + this.#stderr;
}
async waitUntilContains(needle: string, timeoutMs = 15000): Promise<void> {
const started = Date.now();
while (Date.now() - started < timeoutMs) {
if (this.combined.includes(needle)) return;
const status = await Promise.race([
this.child.status.then((s) => ({ type: "status" as const, status: s })),
new Promise<{ type: "tick" }>((resolve) => setTimeout(() => resolve({ type: "tick" }), 100)),
]);
if (status.type === "status") {
throw new Error(
`Background CLI exited before '${needle}' appeared (code ${status.status.code})\n${this.combined}`
);
}
}
throw new Error(`Timed out waiting for '${needle}'\n${this.combined}`);
}
async stop(): Promise<number> {
try {
this.child.kill("SIGTERM");
} catch {
// ignore already-exited processes
}
const status = await this.child.status;
await Promise.all([this.#stdoutDone, this.#stderrDone]);
return status.code;
}
}
export function startCliInBackground(...args: string[]): BackgroundCliProcess {
const child = new Deno.Command("node", {
args: [CLI_DIST, ...decorateArgs(args)],
cwd: CLI_DIR,
stdin: "null",
stdout: "piped",
stderr: "piped",
}).spawn();
return new BackgroundCliProcess(child, args);
}

View File

@@ -0,0 +1,231 @@
import { join } from "@std/path";
// ---------------------------------------------------------------------------
// Path resolution
// ---------------------------------------------------------------------------
// This file lives at: src/apps/cli/testdeno/helpers/cli.ts
// CLI root (src/apps/cli/) is two levels up.
// import.meta.dirname is available in Deno 1.40+ as an OS-native path string.
export const CLI_DIR: string = join(import.meta.dirname!, "..", "..");
const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");
// ---------------------------------------------------------------------------
// Result type
// ---------------------------------------------------------------------------
export interface CliResult {
stdout: string;
stderr: string;
/** stdout + stderr concatenated — useful for assertion messages. */
combined: string;
code: number;
}
const TEE_ENABLED = Deno.env.get("LIVESYNC_TEST_TEE") === "1";
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
function concatChunks(chunks: Uint8Array[]): Uint8Array {
const total = chunks.reduce((n, c) => n + c.length, 0);
const out = new Uint8Array(total);
let offset = 0;
for (const c of chunks) {
out.set(c, offset);
offset += c.length;
}
return out;
}
async function collectStream(
stream: ReadableStream<Uint8Array>,
teeTarget: WritableStream<Uint8Array> | null
): Promise<Uint8Array> {
const reader = stream.getReader();
const chunks: Uint8Array[] = [];
const writer = teeTarget?.getWriter();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (value) {
chunks.push(value);
if (writer) {
await writer.write(value);
}
}
}
} finally {
if (writer) {
writer.releaseLock();
}
reader.releaseLock();
}
return concatChunks(chunks);
}
async function runNodeCommand(args: string[], stdinData?: Uint8Array): Promise<CliResult> {
const cliArgs = DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
const child = new Deno.Command("node", {
args: [CLI_DIST, ...cliArgs],
cwd: CLI_DIR,
stdin: stdinData ? "piped" : "null",
stdout: "piped",
stderr: "piped",
}).spawn();
const stdoutPromise = collectStream(child.stdout, TEE_ENABLED ? Deno.stdout.writable : null);
const stderrPromise = collectStream(child.stderr, TEE_ENABLED ? Deno.stderr.writable : null);
if (stdinData) {
const w = child.stdin.getWriter();
await w.write(stdinData);
await w.close();
}
const [status, stdout, stderr] = await Promise.all([child.status, stdoutPromise, stderrPromise]);
const dec = new TextDecoder();
const out = dec.decode(stdout);
const err = dec.decode(stderr);
return { stdout: out, stderr: err, combined: out + err, code: status.code };
}
function isTransientNetworkError(message: string): boolean {
const m = message.toLowerCase();
return (
m.includes("fetch failed") ||
m.includes("econnreset") ||
m.includes("econnrefused") ||
m.includes("und_err_socket") ||
m.includes("other side closed")
);
}
// ---------------------------------------------------------------------------
// Core runners
// ---------------------------------------------------------------------------
/**
* Run the CLI (node dist/index.cjs) with the supplied arguments.
* Pass the vault / DB path as the first argument, exactly as the bash helpers
* do. Does NOT throw on non-zero exit — check `.code` yourself.
*/
export async function runCli(...args: string[]): Promise<CliResult> {
const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
for (let attempt = 0; ; attempt++) {
const result = await runNodeCommand(args);
if (result.code === 0) return result;
if (attempt >= retries || !isTransientNetworkError(result.combined)) {
return result;
}
const waitMs = 400 * (attempt + 1);
console.warn(`[WARN] transient CLI failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
await sleep(waitMs);
}
}
/**
* Run the CLI and throw if it exits non-zero. Returns stdout.
*/
export async function runCliOrFail(...args: string[]): Promise<string> {
const r = await runCli(...args);
if (r.code !== 0) {
throw new Error(`CLI exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
}
return r.stdout;
}
/**
* Run the CLI with data piped to stdin (equivalent to `echo … | run_cli …`
* or `cat file | run_cli …`).
*/
export async function runCliWithInput(input: string | Uint8Array, ...args: string[]): Promise<CliResult> {
const data = typeof input === "string" ? new TextEncoder().encode(input) : input;
const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
for (let attempt = 0; ; attempt++) {
const result = await runNodeCommand(args, data);
if (result.code === 0) return result;
if (attempt >= retries || !isTransientNetworkError(result.combined)) {
return result;
}
const waitMs = 400 * (attempt + 1);
console.warn(`[WARN] transient CLI(stdin) failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
await sleep(waitMs);
}
}
/**
* runCliWithInput — throws on non-zero exit, returns stdout.
*/
export async function runCliWithInputOrFail(input: string | Uint8Array, ...args: string[]): Promise<string> {
const r = await runCliWithInput(input, ...args);
if (r.code !== 0) {
throw new Error(`CLI (with stdin) exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
}
return r.stdout;
}
// ---------------------------------------------------------------------------
// Output helpers
// ---------------------------------------------------------------------------
/** Strip the CLIWatchAdapter banner line that `cat` emits. */
export function sanitiseCatStdout(raw: string): string {
return raw
.split("\n")
.filter((l) => l !== "[CLIWatchAdapter] File watching is not enabled in CLI version")
.join("\n");
}
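// e.g. sanitiseCatStdout("[CLIWatchAdapter] File watching is not enabled in CLI version\nhello\n")
// yields "hello\n"; only the banner line is dropped, everything else is kept as-is.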
// ---------------------------------------------------------------------------
// Assertions (parity with test-helpers.sh)
// ---------------------------------------------------------------------------
export function assertContains(haystack: string, needle: string, message: string): void {
if (!haystack.includes(needle)) {
throw new Error(`[FAIL] ${message}\nExpected to find: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
}
}
export function assertNotContains(haystack: string, needle: string, message: string): void {
if (haystack.includes(needle)) {
throw new Error(`[FAIL] ${message}\nDid NOT expect: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
}
}
export async function assertFilesEqual(expectedPath: string, actualPath: string, message: string): Promise<void> {
const [expected, actual] = await Promise.all([Deno.readFile(expectedPath), Deno.readFile(actualPath)]);
if (expected.length !== actual.length || expected.some((b, i) => b !== actual[i])) {
const hex = async (d: Uint8Array<ArrayBuffer>) => {
const h = await crypto.subtle.digest("SHA-256", d);
return [...new Uint8Array(h)].map((b) => b.toString(16).padStart(2, "0")).join("");
};
throw new Error(
`[FAIL] ${message}\nexpected SHA-256: ${await hex(expected)}\nactual SHA-256: ${await hex(actual)}`
);
}
}
// ---------------------------------------------------------------------------
// JSON helpers
// ---------------------------------------------------------------------------
export async function readJsonFile<T = Record<string, unknown>>(filePath: string): Promise<T> {
return JSON.parse(await Deno.readTextFile(filePath)) as T;
}
export function jsonStringField(jsonText: string, field: string): string {
const data = JSON.parse(jsonText) as Record<string, unknown>;
const value = data[field];
return typeof value === "string" ? value : "";
}
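// e.g. jsonStringField('{"revision":"3-abc","conflicts":["2-def"]}', "revision") yields "3-abc";
// missing or non-string fields (such as the "conflicts" array above) yield "".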
export function jsonFieldIsNa(data: Record<string, unknown>, field: string): boolean {
return data[field] === "N/A";
}

View File

@@ -0,0 +1,530 @@
/**
* Docker service management for tests.
*
* CouchDB start/stop/init is implemented directly using `docker` CLI commands
* and the Fetch API, so it works on any platform where Docker (Desktop) is
* available — including Windows — without needing bash.
*/
type DockerInvoker = {
bin: string;
prefix: string[];
label: string;
};
let dockerInvokerPromise: Promise<DockerInvoker> | null = null;
const DOCKER_TEE = Deno.env.get("LIVESYNC_DOCKER_TEE") === "1" || Deno.env.get("LIVESYNC_TEST_TEE") === "1";
// ---------------------------------------------------------------------------
// Low-level docker wrapper
// ---------------------------------------------------------------------------
function parseCommand(command: string): { bin: string; prefix: string[] } {
const parts = command.trim().split(/\s+/).filter(Boolean);
if (parts.length === 0) {
throw new Error("LIVESYNC_DOCKER_COMMAND is empty");
}
return { bin: parts[0], prefix: parts.slice(1) };
}
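// e.g. parseCommand("wsl -d Ubuntu docker") yields { bin: "wsl", prefix: ["-d", "Ubuntu", "docker"] }.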
async function runCommand(bin: string, args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
const cmd = new Deno.Command(bin, {
args,
stdin: "null",
stdout: "piped",
stderr: "piped",
});
try {
const { code, stdout, stderr } = await cmd.output();
const dec = new TextDecoder();
const result = {
code,
stdout: dec.decode(stdout),
stderr: dec.decode(stderr),
};
if (DOCKER_TEE) {
if (result.stdout.trim().length > 0) {
console.log(`[docker:${bin}] ${result.stdout.trimEnd()}`);
}
if (result.stderr.trim().length > 0) {
console.error(`[docker:${bin}] ${result.stderr.trimEnd()}`);
}
}
return result;
} catch (err) {
if (err instanceof Deno.errors.NotFound) {
return {
code: 127,
stdout: "",
stderr: `Command not found: ${bin}`,
};
}
throw err;
}
}
async function resolveDockerInvoker(): Promise<DockerInvoker> {
const custom = Deno.env.get("LIVESYNC_DOCKER_COMMAND")?.trim();
if (custom) {
const parsed = parseCommand(custom);
const runner: DockerInvoker = {
...parsed,
label: `custom(${custom})`,
};
// Validate the custom command eagerly so misconfiguration fails fast.
// The configured command already names the docker binary (e.g. 'wsl docker'),
// so only "--version" is appended here; appending "docker" again would break
// commands such as 'wsl docker'.
const check = await runCommand(runner.bin, [...runner.prefix, "--version"]);
if (check.code !== 0) {
throw new Error(`LIVESYNC_DOCKER_COMMAND is not usable: ${custom}\n${check.stderr || check.stdout}`);
}
return runner;
}
const mode = (Deno.env.get("LIVESYNC_DOCKER_MODE") ?? "auto").toLowerCase();
const onWindows = Deno.build.os === "windows";
const native: DockerInvoker = { bin: "docker", prefix: [], label: "docker" };
const wsl: DockerInvoker = { bin: "wsl", prefix: [], label: "wsl docker" };
if (mode === "native") {
return native;
}
if (mode === "wsl") {
return wsl;
}
if (mode !== "auto") {
throw new Error(`Unsupported LIVESYNC_DOCKER_MODE='${mode}'. Use auto, native, or wsl.`);
}
// On Windows we prefer `wsl docker` first, then native docker.
// This typically works better in setups where Docker is installed only in
// WSL and not exposed as docker.exe on PATH.
const candidates = onWindows ? [wsl, native] : [native, wsl];
for (const c of candidates) {
if (c.bin === "docker") {
const r = await runCommand("docker", ["--version"]);
if (r.code === 0) return c;
continue;
}
const r = await runCommand("wsl", ["docker", "--version"]);
if (r.code === 0) return c;
}
throw new Error(
[
"Docker command is not available.",
"Set one of:",
"- LIVESYNC_DOCKER_MODE=native",
"- LIVESYNC_DOCKER_MODE=wsl",
"- LIVESYNC_DOCKER_COMMAND='docker'",
"- LIVESYNC_DOCKER_COMMAND='wsl docker'",
].join("\n")
);
}
async function getDockerInvoker(): Promise<DockerInvoker> {
if (!dockerInvokerPromise) {
dockerInvokerPromise = resolveDockerInvoker().then((r) => {
console.log(`[INFO] docker runner: ${r.label}`);
return r;
});
}
return await dockerInvokerPromise;
}
async function docker(...args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
const invoker = await getDockerInvoker();
// Build the final argument list. Depending on the resolved invoker, this runs one of:
//   docker <args>
//   wsl docker <args>
//   <custom command ...> <args>   (when LIVESYNC_DOCKER_COMMAND is set)
const finalArgs =
invoker.prefix.length === 0
? invoker.bin === "wsl"
? ["docker", ...args]
: args
: [...invoker.prefix, ...args];
const r = await runCommand(invoker.bin, finalArgs);
return { code: r.code, stdout: r.stdout, stderr: r.stderr };
}
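// e.g. `await docker("ps", "--filter", "name=couchdb-test")` runs either
// `docker ps --filter name=couchdb-test` or `wsl docker ps --filter name=couchdb-test`
// (or the custom command), depending on the resolved invoker.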
async function dockerOrFail(...args: string[]): Promise<string> {
const r = await docker(...args);
if (r.code !== 0) {
throw new Error(`docker ${args[0]} failed (code ${r.code}): ${r.stderr.trim()}`);
}
return r.stdout;
}
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
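/**
* Poll CouchDB's /_up endpoint until it answers OK three times in a row,
* giving up after 30 attempts (roughly 15 seconds plus request time).
*/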
async function waitForCouchdbStable(hostname: string, user: string, password: string): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = { Authorization: `Basic ${auth}` };
let consecutive = 0;
for (let i = 0; i < 30; i++) {
try {
const r = await fetch(`${h}/_up`, {
headers,
signal: AbortSignal.timeout(3000),
});
if (r.ok) {
consecutive++;
if (consecutive >= 3) return;
} else {
consecutive = 0;
}
} catch {
consecutive = 0;
}
await sleep(500);
}
throw new Error("CouchDB did not become stable in time");
}
// ---------------------------------------------------------------------------
// Fetch with retry (mirrors cli_test_curl_json() retry loop)
// ---------------------------------------------------------------------------
async function fetchRetry(
url: string,
init: RequestInit,
retries = 30,
delayMs = 2000,
allowStatus: number[] = []
): Promise<void> {
let lastError: unknown;
let lastStatus: number | undefined;
for (let i = 0; i < retries; i++) {
try {
const r = await fetch(url, {
signal: AbortSignal.timeout(5000),
...init,
});
lastStatus = r.status;
await r.body?.cancel().catch(() => {});
if (r.ok || allowStatus.includes(r.status)) return;
lastError = `HTTP ${r.status}`;
} catch (e) {
lastError = e;
}
await sleep(delayMs);
}
throw new Error(
`Could not reach ${url} after ${retries} retries: ${lastError} (last status: ${lastStatus ?? "N/A"})`
);
}
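// A usage sketch (URL and headers are illustrative): `allowStatus` lets callers treat
// specific non-2xx responses as success, e.g. an "already exists" status when creating a database.
//   await fetchRetry("http://127.0.0.1:5989/mydb", { method: "PUT", headers }, 30, 2000, [412]);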
// ---------------------------------------------------------------------------
// CouchDB
// ---------------------------------------------------------------------------
//
// TODO: these values could be configurable via environment variables.
//
const COUCHDB_CONTAINER = "couchdb-test";
const COUCHDB_IMAGE = "couchdb:3.5.0";
const MINIO_CONTAINER = "minio-test";
const MINIO_IMAGE = "minio/minio";
const MINIO_MC_IMAGE = "minio/mc";
export async function stopCouchdb(): Promise<void> {
await docker("stop", COUCHDB_CONTAINER);
await docker("rm", COUCHDB_CONTAINER);
}
/**
* Start a CouchDB test container, initialise it, and create the test DB.
* Mirrors cli_test_start_couchdb() from test-helpers.sh, using direct
* docker / fetch calls instead of the bash util scripts.
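*
* @example A call sketch matching the hard-coded 5989 port mapping below (credentials illustrative):
* ```ts
* await startCouchdb("http://localhost:5989", "admin", "password", "e2e-db");
* ```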
*/
export async function startCouchdb(couchdbUri: string, user: string, password: string, dbname: string): Promise<void> {
console.log("[INFO] stopping leftover CouchDB container if present");
await stopCouchdb().catch(() => {});
console.log("[INFO] starting CouchDB test container");
await dockerOrFail(
"run",
"-d",
"--name",
COUCHDB_CONTAINER,
"-p",
// TODO: port mapping should be configurable.
"5989:5984",
"-e",
`COUCHDB_USER=${user}`,
"-e",
`COUCHDB_PASSWORD=${password}`,
"-e",
"COUCHDB_SINGLE_NODE=y",
COUCHDB_IMAGE
);
console.log("[INFO] initialising CouchDB");
await initCouchdb(couchdbUri, user, password);
console.log("[INFO] waiting for CouchDB to become stable");
await waitForCouchdbStable(couchdbUri, user, password);
console.log(`[INFO] creating test database: ${dbname}`);
await createCouchdbDatabase(couchdbUri, user, password, dbname);
}
/**
* Mirror couchdb-init.sh: configure single-node CouchDB via its REST API.
*/
async function initCouchdb(hostname: string, user: string, password: string, node = "_local"): Promise<void> {
// Podman environments often resolve localhost to ::1; use 127.0.0.1 like
// the bash script does.
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = {
"Content-Type": "application/json",
Authorization: `Basic ${auth}`,
};
const calls: Array<[string, string, string]> = [
[
"POST",
`${h}/_cluster_setup`,
JSON.stringify({
action: "enable_single_node",
username: user,
password,
bind_address: "0.0.0.0",
port: 5984,
singlenode: true,
}),
],
["PUT", `${h}/_node/${node}/_config/chttpd/require_valid_user`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd_auth/require_valid_user`, '"true"'],
["PUT", `${h}/_node/${node}/_config/httpd/WWW-Authenticate`, '"Basic realm=\\"couchdb\\""'],
["PUT", `${h}/_node/${node}/_config/httpd/enable_cors`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd/enable_cors`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd/max_http_request_size`, '"4294967296"'],
["PUT", `${h}/_node/${node}/_config/couchdb/max_document_size`, '"50000000"'],
["PUT", `${h}/_node/${node}/_config/cors/credentials`, '"true"'],
["PUT", `${h}/_node/${node}/_config/cors/origins`, '"*"'],
];
for (const [method, url, body] of calls) {
await fetchRetry(url, { method, headers, body });
}
}
export async function createCouchdbDatabase(
hostname: string,
user: string,
password: string,
dbname: string
): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
await fetchRetry(`${h}/${dbname}`, {
method: "PUT",
headers: { Authorization: `Basic ${auth}` },
});
}
/** Update a CouchDB document by fetching it, applying the updater, and writing it back via PUT. */
export async function updateCouchdbDoc(
hostname: string,
user: string,
password: string,
docUrl: string,
updater: (doc: Record<string, unknown>) => Record<string, unknown>
): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = {
"Content-Type": "application/json",
Authorization: `Basic ${auth}`,
};
const getRes = await fetch(`${h}/${docUrl}`, { headers });
const current = (await getRes.json()) as Record<string, unknown>;
const updated = updater(current);
await fetchRetry(`${h}/${docUrl}`, {
method: "PUT",
headers,
body: JSON.stringify(updated),
});
}
// ---------------------------------------------------------------------------
// MinIO
// ---------------------------------------------------------------------------
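/**
* Quote a value for a POSIX `sh -c` command line using single quotes.
* e.g. shQuote("it's") yields `'it'"'"'s'`.
*/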
function shQuote(value: string): string {
return `'${value.split("'").join(`'"'"'`)}'`;
}
export async function stopMinio(): Promise<void> {
await docker("stop", MINIO_CONTAINER);
await docker("rm", MINIO_CONTAINER);
}
async function initMinioBucket(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<boolean> {
const cmd =
`mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
`mc mb --ignore-existing myminio/${shQuote(bucket)} >/dev/null 2>&1`;
const r = await docker("run", "--rm", "--network", "host", "--entrypoint", "/bin/sh", MINIO_MC_IMAGE, "-c", cmd);
return r.code === 0;
}
async function waitForMinioBucket(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<void> {
for (let i = 0; i < 30; i++) {
const checkCmd =
`mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
`mc ls myminio/${shQuote(bucket)} >/dev/null 2>&1`;
const check = await docker(
"run",
"--rm",
"--network",
// Host networking is used here so the mc container can reach the MinIO port via
// localhost in environments such as Docker Desktop on Windows. A more portable
// approach that works across all environments is still needed.
"host",
"--entrypoint",
"/bin/sh",
MINIO_MC_IMAGE,
"-c",
checkCmd
);
if (check.code === 0) {
return;
}
await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
await sleep(2000);
}
throw new Error(`MinIO bucket not ready: ${bucket}`);
}
export async function startMinio(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<void> {
console.log("[INFO] stopping leftover MinIO container if present");
await stopMinio().catch(() => {});
console.log("[INFO] starting MinIO test container");
await dockerOrFail(
"run",
"-d",
"--name",
MINIO_CONTAINER,
// TODO: Ports should be configurable.
"-p",
"9000:9000",
"-p",
"9001:9001",
"-e",
`MINIO_ROOT_USER=${accessKey}`,
"-e",
`MINIO_ROOT_PASSWORD=${secretKey}`,
"-e",
`MINIO_SERVER_URL=${minioEndpoint}`,
MINIO_IMAGE,
"server",
"/data",
"--console-address",
":9001"
);
console.log(`[INFO] initialising MinIO test bucket: ${bucket}`);
let initialised = false;
for (let i = 0; i < 5; i++) {
if (await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket)) {
initialised = true;
break;
}
await sleep(2000);
}
if (!initialised) {
throw new Error(`Could not initialise MinIO bucket after retries: ${bucket}`);
}
await waitForMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
}
// ---------------------------------------------------------------------------
// P2P relay (strfry)
// ---------------------------------------------------------------------------
// TODO: these values could be configurable via environment variables.
const P2P_RELAY_CONTAINER = "relay-test";
const P2P_RELAY_IMAGE = "ghcr.io/hoytech/strfry:latest";
const STRFRY_BOOTSTRAP_SH = String.raw`cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"
relay {
bind = "0.0.0.0"
port = 7777
nofiles = 100000
info {
name = "livesync test relay"
description = "local relay for livesync p2p tests"
}
maxWebsocketPayloadSize = 131072
autoPingSeconds = 55
writePolicy {
plugin = ""
}
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay`;
export async function stopP2pRelay(): Promise<void> {
await docker("stop", P2P_RELAY_CONTAINER);
await docker("rm", P2P_RELAY_CONTAINER);
}
/**
* Start the local P2P relay container through the same docker runner used
* by CouchDB helpers. This keeps process ownership consistent across
* start/stop on Windows, WSL, and native Linux/macOS.
*/
export async function startP2pRelay(): Promise<void> {
console.log("[INFO] stopping leftover P2P relay container if present");
await stopP2pRelay().catch(() => {});
console.log("[INFO] starting local P2P relay container");
await dockerOrFail(
"run",
"-d",
"--name",
P2P_RELAY_CONTAINER,
"-p",
// TODO: port mapping should be configurable.
"4000:7777",
"--tmpfs",
"/app/strfry-db:rw,size=256m",
"--entrypoint",
"sh",
P2P_RELAY_IMAGE,
"-lc",
STRFRY_BOOTSTRAP_SH
);
}
export function isLocalP2pRelay(relayUrl: string): boolean {
return relayUrl === "ws://localhost:4000" || relayUrl === "ws://localhost:4000/";
}

View File

@@ -0,0 +1,26 @@
/**
* Load a .env-style file (KEY=value per line) into a plain object.
* Equivalent to `source $TEST_ENV_FILE; set -a` in bash.
* A dedicated parser library might be preferable; for now this is a minimal implementation that covers our use cases.
*
* Supported value formats:
* KEY=value
* KEY='single quoted'
* KEY="double quoted"
* # comment lines are ignored
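*
* @example A sketch (file contents illustrative): a line `COUCHDB_URI="http://127.0.0.1:5989"`
* loads as { COUCHDB_URI: "http://127.0.0.1:5989" }.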
*/
export async function loadEnvFile(filePath: string): Promise<Record<string, string>> {
const text = await Deno.readTextFile(filePath);
const result: Record<string, string> = {};
for (const line of text.split("\n")) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
const idx = trimmed.indexOf("=");
if (idx < 0) continue;
const key = trimmed.slice(0, idx).trim();
const raw = trimmed.slice(idx + 1).trim();
// Strip surrounding single or double quotes
result[key] = raw.replace(/^(['"])(.*)\1$/, "$2");
}
return result;
}

View File

@@ -0,0 +1,52 @@
import { runCli } from "./cli.ts";
import { isLocalP2pRelay, startP2pRelay, stopP2pRelay } from "./docker.ts";
export type PeerEntry = {
id: string;
name: string;
};
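/**
* Parse peer entries from `p2p-peers` output. Only tab-separated lines of the
* form "[peer]\t<id>\t<name>" are accepted; e.g. "[peer]\tabc123\tmy-laptop"
* yields { id: "abc123", name: "my-laptop" }.
*/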
export function parsePeerLines(output: string): PeerEntry[] {
return output
.split(/\r?\n/)
.map((line) => line.split("\t"))
.filter((parts) => parts.length >= 3 && parts[0] === "[peer]")
.map((parts) => ({ id: parts[1], name: parts[2] }));
}
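/**
* Discover a peer via `p2p-peers`. Returns the requested peer when `targetPeer`
* matches an id or name, otherwise the first peer found; falls back to an
* "Advertisement from <id>" log line when no peer lines were printed.
*/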
export async function discoverPeer(
vaultDir: string,
settingsFile: string,
timeoutSeconds: number,
targetPeer?: string
): Promise<PeerEntry> {
const result = await runCli(vaultDir, "--settings", settingsFile, "p2p-peers", String(timeoutSeconds));
if (result.code !== 0) {
throw new Error(`p2p-peers failed\n${result.combined}`);
}
const peers = parsePeerLines(result.stdout);
if (targetPeer) {
const matched = peers.find((peer) => peer.id === targetPeer || peer.name === targetPeer);
if (matched) return matched;
}
if (peers.length === 0) {
const fallback = result.combined.match(/Advertisement from\s+([^\s]+)/);
if (fallback?.[1]) {
return { id: fallback[1], name: fallback[1] };
}
throw new Error(`No peers discovered\n${result.combined}`);
}
return peers[0];
}
export async function maybeStartLocalRelay(relay: string): Promise<boolean> {
if (!isLocalP2pRelay(relay)) return false;
await startP2pRelay();
return true;
}
export async function stopLocalRelayIfStarted(started: boolean): Promise<void> {
if (started) {
await stopP2pRelay().catch(() => {});
}
}

View File

@@ -0,0 +1,205 @@
import { join } from "@std/path";
import { CLI_DIR, runCliOrFail } from "./cli.ts";
// ---------------------------------------------------------------------------
// Settings file initialisation
// ---------------------------------------------------------------------------
/** Generate a default settings file using the CLI's init-settings command. */
export async function initSettingsFile(settingsFile: string): Promise<void> {
await runCliOrFail("init-settings", "--force", settingsFile);
}
/**
* Generate a full setup URI from a settings file via the src/lib API.
* Mirrors the bash flow in test-setup-put-cat-linux.sh.
*/
export async function generateSetupUriFromSettings(settingsFile: string, setupPassphrase: string): Promise<string> {
const repoRoot = join(CLI_DIR, "..", "..", "..");
const script = [
"import fs from 'node:fs';",
"import { pathToFileURL } from 'node:url';",
"(async () => {",
" const modulePath = process.env.REPO_ROOT + '/src/lib/src/API/processSetting.ts';",
" const moduleUrl = pathToFileURL(modulePath).href;",
" const { encodeSettingsToSetupURI } = await import(moduleUrl);",
" const settingsPath = process.env.SETTINGS_FILE;",
" const passphrase = process.env.SETUP_PASSPHRASE;",
" const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf-8'));",
" settings.couchDB_DBNAME = 'setup-put-cat-db';",
" settings.couchDB_URI = 'http://127.0.0.1:5999';",
" settings.couchDB_USER = 'dummy';",
" settings.couchDB_PASSWORD = 'dummy';",
" settings.liveSync = false;",
" settings.syncOnStart = false;",
" settings.syncOnSave = false;",
" const uri = await encodeSettingsToSetupURI(settings, passphrase);",
" process.stdout.write(uri.trim());",
"})();",
].join("\n");
const scriptPath = await Deno.makeTempFile({
prefix: "livesync-setup-uri-",
suffix: ".mts",
});
await Deno.writeTextFile(scriptPath, script);
try {
const cmd = new Deno.Command("npx", {
args: ["tsx", scriptPath],
cwd: CLI_DIR,
env: {
REPO_ROOT: repoRoot,
SETTINGS_FILE: settingsFile,
SETUP_PASSPHRASE: setupPassphrase,
},
stdin: "null",
stdout: "piped",
stderr: "piped",
});
const { code, stdout, stderr } = await cmd.output();
const dec = new TextDecoder();
if (code !== 0) {
throw new Error(
`Failed to generate setup URI (code ${code})\nstdout: ${dec.decode(stdout)}\nstderr: ${dec.decode(stderr)}`
);
}
const uri = dec.decode(stdout).trim();
if (!uri) {
throw new Error("Failed to generate setup URI: output is empty");
}
return uri;
} finally {
await Deno.remove(scriptPath).catch(() => {});
}
}
/** Set isConfigured=true in a settings file (required for mirror / scan). */
export async function markSettingsConfigured(settingsFile: string): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
// ---------------------------------------------------------------------------
// CouchDB remote settings
// ---------------------------------------------------------------------------
/**
* Apply CouchDB connection details to a settings file.
* Mirrors cli_test_apply_couchdb_settings() from test-helpers.sh.
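*
* @example A call sketch (values illustrative):
* ```ts
* await applyCouchdbSettings("/tmp/settings.json", "http://localhost:5989", "admin", "password", "e2e-db");
* ```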
*/
export async function applyCouchdbSettings(
settingsFile: string,
couchdbUri: string,
couchdbUser: string,
couchdbPassword: string,
couchdbDbname: string,
liveSync = false
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.couchDB_URI = couchdbUri;
data.couchDB_USER = couchdbUser;
data.couchDB_PASSWORD = couchdbPassword;
data.couchDB_DBNAME = couchdbDbname;
if (liveSync) {
data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
}
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
export async function applyRemoteSyncSettings(
settingsFile: string,
options: {
remoteType: "COUCHDB" | "MINIO";
couchdbUri?: string;
couchdbUser?: string;
couchdbPassword?: string;
couchdbDbname?: string;
minioBucket?: string;
minioEndpoint?: string;
minioAccessKey?: string;
minioSecretKey?: string;
encrypt?: boolean;
passphrase?: string;
}
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
if (options.remoteType === "COUCHDB") {
data.remoteType = "";
data.couchDB_URI = options.couchdbUri;
data.couchDB_USER = options.couchdbUser;
data.couchDB_PASSWORD = options.couchdbPassword;
data.couchDB_DBNAME = options.couchdbDbname;
} else {
data.remoteType = "MINIO";
data.bucket = options.minioBucket;
data.endpoint = options.minioEndpoint;
data.accessKey = options.minioAccessKey;
data.secretKey = options.minioSecretKey;
data.region = "auto";
data.forcePathStyle = true;
}
data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
data.encrypt = options.encrypt === true;
data.passphrase = options.encrypt ? (options.passphrase ?? "") : "";
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
// ---------------------------------------------------------------------------
// P2P settings
// ---------------------------------------------------------------------------
/**
* Apply P2P connection details to a settings file.
* Mirrors cli_test_apply_p2p_settings() from test-helpers.sh.
*/
export async function applyP2pSettings(
settingsFile: string,
roomId: string,
passphrase: string,
appId = "self-hosted-livesync-cli-tests",
relays = "ws://localhost:4000/",
autoAccept = "~.*"
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.P2P_Enabled = true;
data.P2P_AutoStart = false;
data.P2P_AutoBroadcast = false;
data.P2P_AppID = appId;
data.P2P_roomID = roomId;
data.P2P_passphrase = passphrase;
data.P2P_relays = relays;
data.P2P_AutoAcceptingPeers = autoAccept;
data.P2P_AutoDenyingPeers = "";
data.P2P_IsHeadless = true;
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
export async function applyP2pTestTweaks(settingsFile: string, deviceName: string, passphrase: string): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.remoteType = "ONLY_P2P";
data.encrypt = true;
data.passphrase = passphrase;
data.usePathObfuscation = true;
data.handleFilenameCaseSensitive = false;
data.customChunkSize = 50;
data.usePluginSyncV2 = true;
data.doNotUseFixedRevisionForChunks = false;
data.P2P_DevicePeerName = deviceName;
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}

View File

@@ -0,0 +1,33 @@
import { join } from "@std/path";
/**
* A temporary directory that cleans itself up via `await using`.
* Requires TypeScript 5.2+ / Deno 1.40+ for the AsyncDisposable protocol.
*
* @example
* ```ts
* await using tmp = await TempDir.create();
* const file = tmp.join("data.json");
* ```
*/
export class TempDir implements AsyncDisposable {
readonly path: string;
private constructor(path: string) {
this.path = path;
}
static async create(prefix = "livesync-deno-test"): Promise<TempDir> {
const path = await Deno.makeTempDir({ prefix: `${prefix}.` });
return new TempDir(path);
}
/** Return an OS path joined to the temp directory root. */
join(...parts: string[]): string {
return join(this.path, ...parts);
}
async [Symbol.asyncDispose](): Promise<void> {
await Deno.remove(this.path, { recursive: true }).catch(() => {});
}
}

View File

@@ -0,0 +1,277 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import {
runCli,
runCliOrFail,
runCliWithInputOrFail,
sanitiseCatStdout,
assertFilesEqual,
jsonStringField,
} from "./helpers/cli.ts";
import { applyRemoteSyncSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, startMinio, stopCouchdb, stopMinio } from "./helpers/docker.ts";
type RemoteType = "COUCHDB" | "MINIO";
function requireEnv(...keys: string[]): string {
for (const key of keys) {
const value = Deno.env.get(key)?.trim();
if (value) return value;
}
throw new Error(`Required env var is missing: ${keys.join(" or ")}`);
}
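// e.g. requireEnv("COUCHDB_URI", "hostname") returns whichever variable is set
// (and non-empty) first, and throws if neither is.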
export async function runScenario(remoteType: RemoteType, encrypt: boolean): Promise<void> {
const dbSuffix = `${Date.now()}-${Math.floor(Math.random() * 100000)}`;
const couchdbUri = remoteType === "COUCHDB" ? requireEnv("COUCHDB_URI", "hostname").replace(/\/$/, "") : "";
const couchdbUser = remoteType === "COUCHDB" ? requireEnv("COUCHDB_USER", "username") : "";
const couchdbPassword = remoteType === "COUCHDB" ? requireEnv("COUCHDB_PASSWORD", "password") : "";
const dbPrefix = remoteType === "COUCHDB" ? requireEnv("COUCHDB_DBNAME", "dbname") : "";
const dbname = remoteType === "COUCHDB" ? `${dbPrefix}-${dbSuffix}` : "";
const minioEndpoint =
remoteType === "MINIO" ? requireEnv("MINIO_ENDPOINT", "minioEndpoint").replace(/\/$/, "") : "";
const minioAccessKey = remoteType === "MINIO" ? requireEnv("MINIO_ACCESS_KEY", "accessKey") : "";
const minioSecretKey = remoteType === "MINIO" ? requireEnv("MINIO_SECRET_KEY", "secretKey") : "";
const minioBucketBase = remoteType === "MINIO" ? requireEnv("MINIO_BUCKET_NAME", "bucketName") : "";
const minioBucket = remoteType === "MINIO" ? `${minioBucketBase}-${dbSuffix}` : "";
const passphrase = "e2e-passphrase";
await using workDir = await TempDir.create(
`livesync-cli-e2e-${remoteType.toLowerCase()}-${encrypt ? "enc1" : "enc0"}`
);
const vaultA = workDir.join("testvault_a");
const vaultB = workDir.join("testvault_b");
const settingsA = workDir.join("test-settings-a.json");
const settingsB = workDir.join("test-settings-b.json");
const pushSrc = workDir.join("push-source.txt");
const pullDst = workDir.join("pull-destination.txt");
const pushBinarySrc = workDir.join("push-source.bin");
const pullBinaryDst = workDir.join("pull-destination.bin");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (remoteType === "COUCHDB") {
await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
} else {
await startMinio(minioEndpoint, minioAccessKey, minioSecretKey, minioBucket);
}
try {
await initSettingsFile(settingsA);
await initSettingsFile(settingsB);
await applyRemoteSyncSettings(settingsA, {
remoteType,
couchdbUri,
couchdbUser,
couchdbPassword,
couchdbDbname: dbname,
minioBucket,
minioEndpoint,
minioAccessKey,
minioSecretKey,
encrypt,
passphrase,
});
await applyRemoteSyncSettings(settingsB, {
remoteType,
couchdbUri,
couchdbUser,
couchdbPassword,
couchdbDbname: dbname,
minioBucket,
minioEndpoint,
minioAccessKey,
minioSecretKey,
encrypt,
passphrase,
});
const syncBoth = async () => {
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
};
const targetAOnly = "e2e/a-only-info.md";
const targetSync = "e2e/sync-info.md";
const targetSyncTwiceFirst = "e2e/sync-twice-first.md";
const targetSyncTwiceSecond = "e2e/sync-twice-second.md";
const targetPush = "e2e/pushed-from-a.md";
const targetPut = "e2e/put-from-a.md";
const targetPushBinary = "e2e/pushed-from-a.bin";
const targetConflict = "e2e/conflict.md";
await runCliWithInputOrFail("alpha-from-a\n", vaultA, "--settings", settingsA, "put", targetAOnly);
const infoAOnly = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetAOnly);
assert(infoAOnly.includes(`"path": "${targetAOnly}"`));
await runCliWithInputOrFail("visible-after-sync\n", vaultA, "--settings", settingsA, "put", targetSync);
await syncBoth();
const infoBSync = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetSync);
assert(infoBSync.includes(`"path": "${targetSync}"`));
await runCliWithInputOrFail(
`first-sync-round-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetSyncTwiceFirst
);
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
const firstVisible = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceFirst)
).trimEnd();
assert(firstVisible === `first-sync-round-${dbSuffix}`);
await runCliWithInputOrFail(
`second-sync-round-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetSyncTwiceSecond
);
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
const secondVisible = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceSecond)
).trimEnd();
assert(secondVisible === `second-sync-round-${dbSuffix}`);
await Deno.writeTextFile(pushSrc, `pushed-content-${dbSuffix}\n`);
await runCliOrFail(vaultA, "--settings", settingsA, "push", pushSrc, targetPush);
await runCliWithInputOrFail(`put-content-${dbSuffix}\n`, vaultA, "--settings", settingsA, "put", targetPut);
await syncBoth();
await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPush, pullDst);
await assertFilesEqual(pushSrc, pullDst, "B pull result does not match pushed source");
const catBPut = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPut)
).trimEnd();
assert(catBPut === `put-content-${dbSuffix}`);
const binary = new Uint8Array(4096);
binary.fill(0x61);
await Deno.writeFile(pushBinarySrc, binary);
await runCliOrFail(vaultA, "--settings", settingsA, "push", pushBinarySrc, targetPushBinary);
await syncBoth();
await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPushBinary, pullBinaryDst);
await assertFilesEqual(pushBinarySrc, pullBinaryDst, "B pull result does not match pushed binary source");
await runCliOrFail(vaultA, "--settings", settingsA, "rm", targetPut);
await syncBoth();
const removed = await runCli(vaultB, "--settings", settingsB, "cat", targetPut);
assert(removed.code !== 0, `B cat should fail after A removed the file\n${removed.combined}`);
await runCliWithInputOrFail("conflict-base\n", vaultA, "--settings", settingsA, "put", targetConflict);
await syncBoth();
await runCliWithInputOrFail(
`conflict-from-a-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetConflict
);
await runCliWithInputOrFail(
`conflict-from-b-${dbSuffix}\n`,
vaultB,
"--settings",
settingsB,
"put",
targetConflict
);
let infoAConflict = "";
let infoBConflict = "";
let conflictDetected = false;
for (const side of ["a", "b", "a"] as const) {
await runCliOrFail(
side === "a" ? vaultA : vaultB,
"--settings",
side === "a" ? settingsA : settingsB,
"sync"
);
infoAConflict = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
infoBConflict = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
if (
jsonStringField(infoAConflict, "conflicts") !== "N/A" ||
jsonStringField(infoBConflict, "conflicts") !== "N/A"
) {
conflictDetected = true;
break;
}
}
assert(conflictDetected, `conflict was expected\nA: ${infoAConflict}\nB: ${infoBConflict}`);
const lsAConflict =
(await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const lsBConflict =
(await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const revA = lsAConflict.split("\t")[3] ?? "";
const revB = lsBConflict.split("\t")[3] ?? "";
assert(
revA.includes("*") || revB.includes("*"),
`conflicted entry should be marked with '*'\nA: ${lsAConflict}\nB: ${lsBConflict}`
);
const keepRevision = jsonStringField(infoAConflict, "revision");
assert(keepRevision.length > 0, `could not extract revision\n${infoAConflict}`);
await runCliOrFail(vaultA, "--settings", settingsA, "resolve", targetConflict, keepRevision);
let resolved = false;
let infoAResolved = "";
let infoBResolved = "";
for (let i = 0; i < 6; i++) {
await syncBoth();
infoAResolved = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
infoBResolved = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
if (
jsonStringField(infoAResolved, "conflicts") === "N/A" &&
jsonStringField(infoBResolved, "conflicts") === "N/A"
) {
resolved = true;
break;
}
const retryRevision = jsonStringField(infoAResolved, "revision");
if (retryRevision) {
await runCli(vaultA, "--settings", settingsA, "resolve", targetConflict, retryRevision);
}
}
assert(resolved, `conflicts should be resolved\nA: ${infoAResolved}\nB: ${infoBResolved}`);
const lsAResolved =
(await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const lsBResolved =
(await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
assert(!(lsAResolved.split("\t")[3] ?? "").includes("*"));
assert(!(lsBResolved.split("\t")[3] ?? "").includes("*"));
const catAResolved = sanitiseCatStdout(
await runCliOrFail(vaultA, "--settings", settingsA, "cat", targetConflict)
).trimEnd();
const catBResolved = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetConflict)
).trimEnd();
assert(catAResolved === catBResolved, `resolved content should match\nA: ${catAResolved}\nB: ${catBResolved}`);
} finally {
if (!keepDocker) {
if (remoteType === "COUCHDB") {
await stopCouchdb().catch(() => {});
} else {
await stopMinio().catch(() => {});
}
}
}
}
Deno.test("e2e: two vaults over CouchDB without encryption", async () => {
await runScenario("COUCHDB", false);
});
Deno.test("e2e: two vaults over CouchDB with encryption", async () => {
await runScenario("COUCHDB", true);
});

View File

@@ -0,0 +1,20 @@
import { runScenario } from "./test-e2e-two-vaults-couchdb.ts";
type MatrixCase = {
remoteType: "COUCHDB" | "MINIO";
encrypt: boolean;
label: string;
};
const matrixCases: MatrixCase[] = [
{ remoteType: "COUCHDB", encrypt: false, label: "COUCHDB-enc0" },
{ remoteType: "COUCHDB", encrypt: true, label: "COUCHDB-enc1" },
{ remoteType: "MINIO", encrypt: false, label: "MINIO-enc0" },
{ remoteType: "MINIO", encrypt: true, label: "MINIO-enc1" },
];
for (const tc of matrixCases) {
Deno.test(`e2e matrix: ${tc.label}`, async () => {
await runScenario(tc.remoteType, tc.encrypt);
});
}

View File

@@ -0,0 +1,196 @@
/**
* Deno port of test-mirror-linux.sh
*
* Tests the `mirror` command — bidirectional synchronisation between a local
* storage directory (vault) and an in-process database.
*
* Covered cases (identical to the bash test):
* 1. Storage-only file -> synced into DB (UPDATE DATABASE)
* 2. DB-only file -> restored to storage (UPDATE STORAGE)
* 3. DB-deleted file -> NOT restored to storage (UPDATE STORAGE skip)
* 4. Both, storage newer -> DB updated (SYNC: STORAGE -> DB)
* 5. Both, DB newer -> storage updated (SYNC: DB -> STORAGE)
* 6. Compatibility mode -> omitted vault-path works (same DB + vault path)
*
* No external services are required.
*
* Run:
* deno test -A test-mirror.ts
*/
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { initSettingsFile, markSettingsConfigured } from "./helpers/settings.ts";
Deno.test("mirror: storage <-> DB synchronisation", async (t) => {
await using workDir = await TempDir.create("livesync-cli-mirror");
// -------------------------------------------------------------------
// Shared setup
// -------------------------------------------------------------------
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
const dbDir = workDir.join("db");
await Deno.mkdir(workDir.join("vault", "test"), { recursive: true });
await Deno.mkdir(dbDir, { recursive: true });
await initSettingsFile(settingsFile);
// isConfigured=true is required for canProceedScan in the mirror command.
await markSettingsConfigured(settingsFile);
// Copy settings to the DB directory (separated-path mode)
const dbSettings = workDir.join("db", "settings.json");
await Deno.copyFile(settingsFile, dbSettings);
/** Run mirror in separated-path mode: DB dir ≠ vault dir. */
const runMirror = () => runCliOrFail(dbDir, "--settings", dbSettings, "mirror", vaultDir);
/** Run mirror in compatibility mode: DB path = vault path. */
const runMirrorCompat = () => runCliOrFail(vaultDir, "--settings", settingsFile, "mirror");
// Helper wrappers
const dbRun = (...args: string[]) => runCliOrFail(dbDir, "--settings", dbSettings, ...args);
const compatRun = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);
// -------------------------------------------------------------------
// Case 1: storage-only -> DB (UPDATE DATABASE)
// -------------------------------------------------------------------
await t.step("case 1: storage-only file is synced into DB", async () => {
const storageFile = workDir.join("vault", "test", "storage-only.md");
await Deno.writeTextFile(storageFile, "storage-only content\n");
await runMirror();
const resultFile = workDir.join("case1-pull.txt");
await dbRun("pull", "test/storage-only.md", resultFile);
const storageContent = await Deno.readTextFile(storageFile);
const pulledContent = await Deno.readTextFile(resultFile);
assert(
storageContent === pulledContent,
`storage-only file NOT synced into DB\nexpected: ${storageContent}\ngot: ${pulledContent}`
);
console.log("[PASS] case 1: storage-only file was synced into DB");
});
// -------------------------------------------------------------------
// Case 2: DB-only -> storage (UPDATE STORAGE)
// -------------------------------------------------------------------
await t.step("case 2: DB-only file is restored to storage", async () => {
// push takes a file path rather than stdin, so create the source file first and push it.
const dbOnlySrc = workDir.join("db-only-src.txt");
await Deno.writeTextFile(dbOnlySrc, "db-only content\n");
await dbRun("push", dbOnlySrc, "test/db-only.md");
const storagePath = workDir.join("vault", "test", "db-only.md");
assert(!(await exists(storagePath)), "db-only.md unexpectedly exists in storage before mirror");
await runMirror();
assert(await exists(storagePath), "DB-only file NOT restored to storage after mirror");
const content = await Deno.readTextFile(storagePath);
assert(content === "db-only content\n", `DB-only file restored but content mismatch: '${content}'`);
console.log("[PASS] case 2: DB-only file was restored to storage");
});
// -------------------------------------------------------------------
// Case 3: DB-deleted -> storage untouched
// -------------------------------------------------------------------
await t.step("case 3: DB-deleted entry is NOT restored to storage", async () => {
const deletedSrc = workDir.join("deleted-src.txt");
await Deno.writeTextFile(deletedSrc, "to-be-deleted\n");
await dbRun("push", deletedSrc, "test/deleted.md");
await dbRun("rm", "test/deleted.md");
await runMirror();
const storagePath = workDir.join("vault", "test", "deleted.md");
assert(!(await exists(storagePath)), "deleted DB entry was incorrectly restored to storage");
console.log("[PASS] case 3: deleted DB entry was NOT restored to storage");
});
// -------------------------------------------------------------------
// Case 4: storage newer -> DB updated (SYNC: STORAGE -> DB)
// -------------------------------------------------------------------
await t.step("case 4: storage newer than DB -> DB is updated", async () => {
// Seed DB with old content (mtime ~ now)
const seedFile = workDir.join("case4-seed.txt");
await Deno.writeTextFile(seedFile, "old content\n");
await dbRun("push", seedFile, "test/sync-storage-newer.md");
// Write new content to storage with a timestamp 1 hour in the future
const storageFile = workDir.join("vault", "test", "sync-storage-newer.md");
await Deno.writeTextFile(storageFile, "new content\n");
await Deno.utime(storageFile, new Date(), new Date(Date.now() + 3600_000));
await runMirror();
const resultFile = workDir.join("case4-pull.txt");
await dbRun("pull", "test/sync-storage-newer.md", resultFile);
const storageContent = await Deno.readTextFile(storageFile);
const pulledContent = await Deno.readTextFile(resultFile);
assert(
storageContent === pulledContent,
`DB NOT updated to match newer storage file\nexpected: ${storageContent}\ngot: ${pulledContent}`
);
console.log("[PASS] case 4: DB updated to match newer storage file");
});
// -------------------------------------------------------------------
// Case 5: DB newer -> storage updated (SYNC: DB -> STORAGE)
// -------------------------------------------------------------------
await t.step("case 5: DB newer than storage -> storage is updated", async () => {
// Write old content to storage with a timestamp 1 hour in the past
const storageFile = workDir.join("vault", "test", "sync-db-newer.md");
await Deno.writeTextFile(storageFile, "old storage content\n");
await Deno.utime(storageFile, new Date(), new Date(Date.now() - 3600_000));
// Write new content to DB only (mtime ~ now, newer than the storage file)
const dbNewFile = workDir.join("case5-db-new.txt");
await Deno.writeTextFile(dbNewFile, "new db content\n");
await dbRun("push", dbNewFile, "test/sync-db-newer.md");
await runMirror();
const content = await Deno.readTextFile(storageFile);
assert(content === "new db content\n", `storage NOT updated to match newer DB entry (got: '${content}')`);
console.log("[PASS] case 5: storage updated to match newer DB entry");
});
// -------------------------------------------------------------------
// Case 6: compatibility mode (vault path = DB path)
// -------------------------------------------------------------------
await t.step("case 6: compatibility mode (omitted vault-path)", async () => {
const compatFile = workDir.join("vault", "compat.md");
await Deno.writeTextFile(compatFile, "compat-content\n");
await runMirrorCompat();
const resultFile = workDir.join("case6-pull.txt");
await compatRun("pull", "compat.md", resultFile);
const pulled = await Deno.readTextFile(resultFile);
assert(pulled === "compat-content\n", `Compatibility mode failed to sync file into DB (got: '${pulled}')`);
console.log("[PASS] case 6: compatibility mode works");
});
});
// ---------------------------------------------------------------------------
// Utility
// ---------------------------------------------------------------------------
async function exists(path: string): Promise<boolean> {
try {
await Deno.stat(path);
return true;
} catch {
return false;
}
}

View File

@@ -0,0 +1,40 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings } from "./helpers/settings.ts";
import { startP2pRelay, stopP2pRelay, isLocalP2pRelay } from "./helpers/docker.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
Deno.test("p2p-host: starts and becomes ready", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const useInternalRelay = Deno.env.get("USE_INTERNAL_RELAY") !== "0";
await using workDir = await TempDir.create("livesync-cli-p2p-host");
const vaultDir = workDir.join("vault-host");
const settingsFile = workDir.join("settings-host.json");
await Deno.mkdir(vaultDir, { recursive: true });
let relayStarted = false;
if (useInternalRelay && isLocalP2pRelay(relay)) {
await startP2pRelay();
relayStarted = true;
}
try {
await initSettingsFile(settingsFile);
await applyP2pSettings(settingsFile, roomId, passphrase, appId, relay);
const host = startCliInBackground(vaultDir, "--settings", settingsFile, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
assert(host.combined.includes("P2P host is running"));
} finally {
await host.stop();
}
} finally {
if (relayStarted) {
await stopP2pRelay().catch(() => {});
}
}
});

View File

@@ -0,0 +1,42 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
Deno.test("p2p-peers: discovers host through local relay", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const timeoutSeconds = Number(Deno.env.get("TIMEOUT_SECONDS") ?? "8");
await using workDir = await TempDir.create("livesync-cli-p2p-peers-local-relay");
const hostVault = workDir.join("vault-host");
const hostSettings = workDir.join("settings-host.json");
const clientVault = workDir.join("vault");
const clientSettings = workDir.join("settings.json");
await Deno.mkdir(hostVault, { recursive: true });
await Deno.mkdir(clientVault, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
await initSettingsFile(hostSettings);
await initSettingsFile(clientSettings);
await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);
const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peer = await discoverPeer(clientVault, clientSettings, timeoutSeconds);
assert(peer.id.length > 0);
assert(peer.name.length > 0);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,59 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { runCli } from "./helpers/cli.ts";
Deno.test("p2p-sync: discovers peer and completes sync", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "12");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");
await using workDir = await TempDir.create("livesync-cli-p2p-sync");
const hostVault = workDir.join("vault-host");
const hostSettings = workDir.join("settings-host.json");
const clientVault = workDir.join("vault-sync");
const clientSettings = workDir.join("settings-sync.json");
await Deno.mkdir(hostVault, { recursive: true });
await Deno.mkdir(clientVault, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
await initSettingsFile(hostSettings);
await initSettingsFile(clientSettings);
await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);
const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peer = await discoverPeer(
clientVault,
clientSettings,
peersTimeout,
Deno.env.get("TARGET_PEER") ?? undefined
);
const syncResult = await runCli(
clientVault,
"--settings",
clientSettings,
"p2p-sync",
peer.id,
String(syncTimeout)
);
assert(
syncResult.code === 0,
`p2p-sync failed\nstdout: ${syncResult.stdout}\nstderr: ${syncResult.stderr}`
);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,118 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { jsonStringField, runCliOrFail, runCliWithInputOrFail, sanitiseCatStdout } from "./helpers/cli.ts";
Deno.test("p2p: three nodes detect and resolve conflicts", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = `${Deno.env.get("ROOM_ID_PREFIX") ?? "p2p-room"}-${Date.now()}`;
const passphrase = `${Deno.env.get("PASSPHRASE_PREFIX") ?? "p2p-pass"}-${Date.now()}`;
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "10");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");
await using workDir = await TempDir.create("livesync-cli-p2p-3nodes");
const vaultA = workDir.join("vault-a");
const vaultB = workDir.join("vault-b");
const vaultC = workDir.join("vault-c");
const settingsA = workDir.join("settings-a.json");
const settingsB = workDir.join("settings-b.json");
const settingsC = workDir.join("settings-c.json");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
await Deno.mkdir(vaultC, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
for (const settings of [settingsA, settingsB, settingsC]) {
await initSettingsFile(settings);
await applyP2pSettings(settings, roomId, passphrase, appId, relay);
}
const host = startCliInBackground(vaultA, "--settings", settingsA, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peerFromB = await discoverPeer(vaultB, settingsB, peersTimeout);
const peerFromC = await discoverPeer(vaultC, settingsC, peersTimeout);
const targetPath = "p2p/conflicted-from-two-clients.txt";
await runCliWithInputOrFail("from-client-b-v1\n", vaultB, "--settings", settingsB, "put", targetPath);
await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
let visibleOnC = "";
for (let i = 0; i < 5; i++) {
try {
visibleOnC = sanitiseCatStdout(
await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
).trimEnd();
if (visibleOnC === "from-client-b-v1") break;
} catch {
// retry below
}
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
}
assert(visibleOnC === "from-client-b-v1", `C should see file created by B, got: ${visibleOnC}`);
await runCliWithInputOrFail("from-client-b-v2\n", vaultB, "--settings", settingsB, "put", targetPath);
await runCliWithInputOrFail("from-client-c-v2\n", vaultC, "--settings", settingsC, "put", targetPath);
const [syncB, syncC] = await Promise.all([
runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout)),
runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout)),
]);
void syncB;
void syncC;
await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
const infoBBefore = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
const conflictsBBefore = jsonStringField(infoBBefore, "conflicts");
const keepRevB = jsonStringField(infoBBefore, "revision");
assert(
conflictsBBefore !== "N/A" && conflictsBBefore.length > 0,
`expected conflicts on B\n${infoBBefore}`
);
assert(keepRevB.length > 0, `could not read revision on B\n${infoBBefore}`);
const infoCBefore = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
const conflictsCBefore = jsonStringField(infoCBefore, "conflicts");
const keepRevC = jsonStringField(infoCBefore, "revision");
assert(
conflictsCBefore !== "N/A" && conflictsCBefore.length > 0,
`expected conflicts on C\n${infoCBefore}`
);
assert(keepRevC.length > 0, `could not read revision on C\n${infoCBefore}`);
await runCliOrFail(vaultB, "--settings", settingsB, "resolve", targetPath, keepRevB);
await runCliOrFail(vaultC, "--settings", settingsC, "resolve", targetPath, keepRevC);
const infoBAfter = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
const infoCAfter = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
assert(jsonStringField(infoBAfter, "conflicts") === "N/A", `conflict still remains on B\n${infoBAfter}`);
assert(jsonStringField(infoCAfter, "conflicts") === "N/A", `conflict still remains on C\n${infoCAfter}`);
const finalContentB = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPath)
).trimEnd();
const finalContentC = sanitiseCatStdout(
await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
).trimEnd();
assert(
finalContentB === "from-client-b-v2" || finalContentB === "from-client-c-v2",
`unexpected final content on B: ${finalContentB}`
);
assert(
finalContentC === "from-client-b-v2" || finalContentC === "from-client-c-v2",
`unexpected final content on C: ${finalContentC}`
);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,111 @@
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, applyP2pTestTweaks, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { assertFilesEqual, runCliOrFail } from "./helpers/cli.ts";
async function writeFilledFile(path: string, size: number, byte: number): Promise<void> {
const data = new Uint8Array(size);
data.fill(byte);
await Deno.writeFile(path, data);
}
Deno.test("p2p: upload/download reproduction scenario", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "20");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "240");
const roomId = `p2p-room-${Date.now()}`;
const passphrase = `p2p-pass-${Date.now()}`;
await using workDir = await TempDir.create("livesync-cli-p2p-upload-download");
const vaultHost = workDir.join("vault-host");
const vaultUp = workDir.join("vault-up");
const vaultDown = workDir.join("vault-down");
const settingsHost = workDir.join("settings-host.json");
const settingsUp = workDir.join("settings-up.json");
const settingsDown = workDir.join("settings-down.json");
for (const dir of [vaultHost, vaultUp, vaultDown]) {
await Deno.mkdir(dir, { recursive: true });
}
const relayStarted = await maybeStartLocalRelay(relay);
try {
for (const settings of [settingsHost, settingsUp, settingsDown]) {
await initSettingsFile(settings);
await applyP2pSettings(settings, roomId, passphrase, appId, relay, "~.*");
}
await applyP2pTestTweaks(settingsHost, "p2p-cli-host", passphrase);
await applyP2pTestTweaks(settingsUp, `p2p-cli-upload-${Date.now()}`, passphrase);
await applyP2pTestTweaks(settingsDown, `p2p-cli-download-${Date.now()}`, passphrase);
const host = startCliInBackground(vaultHost, "--settings", settingsHost, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const uploadPeer = await discoverPeer(vaultUp, settingsUp, peersTimeout);
const storeText = workDir.join("store-file.md");
const diffA = workDir.join("test-diff-1.md");
const diffB = workDir.join("test-diff-2.md");
const diffC = workDir.join("test-diff-3.md");
await Deno.writeTextFile(storeText, "Hello, World!\n");
await Deno.writeTextFile(diffA, "Content A\n");
await Deno.writeTextFile(diffB, "Content B\n");
await Deno.writeTextFile(diffC, "Content C\n");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", storeText, "p2p/store-file.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffA, "p2p/test-diff-1.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffB, "p2p/test-diff-2.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffC, "p2p/test-diff-3.md");
const large100k = workDir.join("large-100k.txt");
const large1m = workDir.join("large-1m.txt");
const binary100k = workDir.join("binary-100k.bin");
const binary5m = workDir.join("binary-5m.bin");
await Deno.writeTextFile(large100k, "a".repeat(100000));
await Deno.writeTextFile(large1m, "b".repeat(1000000));
await writeFilledFile(binary100k, 100000, 0x5a);
await writeFilledFile(binary5m, 5000000, 0x7c);
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large100k, "p2p/large-100000.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large1m, "p2p/large-1000000.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary100k, "p2p/binary-100000.bin");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary5m, "p2p/binary-5000000.bin");
await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));
await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));
const downloadPeer = await discoverPeer(vaultDown, settingsDown, peersTimeout);
await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));
await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));
const downStoreText = workDir.join("down-store-file.md");
const downDiffA = workDir.join("down-test-diff-1.md");
const downDiffB = workDir.join("down-test-diff-2.md");
const downDiffC = workDir.join("down-test-diff-3.md");
const downLarge100k = workDir.join("down-large-100k.txt");
const downLarge1m = workDir.join("down-large-1m.txt");
const downBinary100k = workDir.join("down-binary-100k.bin");
const downBinary5m = workDir.join("down-binary-5m.bin");
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/store-file.md", downStoreText);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-1.md", downDiffA);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-2.md", downDiffB);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-3.md", downDiffC);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-100000.md", downLarge100k);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-1000000.md", downLarge1m);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-100000.bin", downBinary100k);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-5000000.bin", downBinary5m);
await assertFilesEqual(storeText, downStoreText, "store-file mismatch");
await assertFilesEqual(diffA, downDiffA, "test-diff-1 mismatch");
await assertFilesEqual(diffB, downDiffB, "test-diff-2 mismatch");
await assertFilesEqual(diffC, downDiffC, "test-diff-3 mismatch");
await assertFilesEqual(large100k, downLarge100k, "large-100000 mismatch");
await assertFilesEqual(large1m, downLarge1m, "large-1000000 mismatch");
await assertFilesEqual(binary100k, downBinary100k, "binary-100000 mismatch");
await assertFilesEqual(binary5m, downBinary5m, "binary-5000000 mismatch");
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,78 @@
/**
* Deno port of test-push-pull-linux.sh
*
* Requires CouchDB connection details either via environment variables or a
* .test.env file. If neither is present the test logs a warning and the
* CLI will likely fail at the push step.
*
* Run:
* deno test -A test-push-pull.ts
*
* With explicit CouchDB:
* COUCHDB_URI=http://127.0.0.1:5984 \
* COUCHDB_USER=admin \
* COUCHDB_PASSWORD=password \
* COUCHDB_DBNAME=livesync-test \
* deno test -A test-push-pull.ts
*/
import { join } from "@std/path";
import { assertEquals } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";
const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/push-pull.txt";
Deno.test("push/pull roundtrip", async () => {
await using workDir = await TempDir.create("livesync-cli-push-pull");
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
await Deno.mkdir(join(vaultDir, "test"), { recursive: true });
const uri = Deno.env.get("COUCHDB_URI") ?? "http://127.0.0.1:5989/";
const user = Deno.env.get("COUCHDB_USER") ?? "admin";
const password = Deno.env.get("COUCHDB_PASSWORD") ?? "testpassword";
const dbname = Deno.env.get("COUCHDB_DBNAME") ?? `push-pull-${Date.now()}`;
const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (shouldStartDocker) {
await startCouchdb(uri, user, password, dbname);
}
try {
await initSettingsFile(settingsFile);
if (uri && user && password && dbname) {
console.log("[INFO] applying CouchDB env vars to settings");
await applyCouchdbSettings(settingsFile, uri, user, password, dbname);
} else {
console.warn(
"[WARN] CouchDB env vars not fully set — push/pull may fail unless the generated settings already contain connection details"
);
}
const srcFile = workDir.join("push-source.txt");
const pulledFile = workDir.join("pull-result.txt");
const content = `push-pull-test ${new Date().toISOString()}\n`;
await Deno.writeTextFile(srcFile, content);
console.log(`[INFO] push -> ${REMOTE_PATH}`);
await runCliOrFail(vaultDir, "--settings", settingsFile, "push", srcFile, REMOTE_PATH);
console.log(`[INFO] pull <- ${REMOTE_PATH}`);
await runCliOrFail(vaultDir, "--settings", settingsFile, "pull", REMOTE_PATH, pulledFile);
const pulled = await Deno.readTextFile(pulledFile);
assertEquals(content, pulled, "push/pull roundtrip content mismatch");
console.log("[PASS] push/pull roundtrip matched");
} finally {
if (shouldStartDocker && !keepDocker) {
await stopCouchdb().catch(() => {});
}
}
});

View File

@@ -0,0 +1,214 @@
/**
* Deno port of test-setup-put-cat-linux.sh
*
* Tests all local-DB file operations that require no external remote:
* setup /
* put / cat / ls / info / rm / resolve / cat-rev / pull-rev
*
* Run (no external services needed):
* deno test -A test-setup-put-cat.ts
*/
import { join } from "@std/path";
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCli, runCliOrFail, runCliWithInput, sanitiseCatStdout } from "./helpers/cli.ts";
import { generateSetupUriFromSettings, initSettingsFile } from "./helpers/settings.ts";
const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/setup-put-cat.txt";
const SETUP_PASSPHRASE = Deno.env.get("SETUP_PASSPHRASE") ?? "setup-passphrase";
Deno.test("CLI file operations: push / cat / ls / info / rm / resolve / cat-rev / pull-rev", async (t) => {
await using workDir = await TempDir.create("livesync-cli-setup-put-cat");
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
await Deno.mkdir(join(vaultDir, "test"), { recursive: true });
await initSettingsFile(settingsFile);
const setupUri = await generateSetupUriFromSettings(settingsFile, SETUP_PASSPHRASE);
const setupResult = await runCliWithInput(
`${SETUP_PASSPHRASE}\n`,
vaultDir,
"--settings",
settingsFile,
"setup",
setupUri
);
assert(setupResult.code === 0, `setup command exited with ${setupResult.code}\n${setupResult.combined}`);
assert(
setupResult.combined.includes("[Command] setup ->"),
`setup command did not execute expected code path\n${setupResult.combined}`
);
const run = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);
// ------------------------------------------------------------------
// put / cat roundtrip
// ------------------------------------------------------------------
await t.step("put/cat roundtrip", async () => {
const srcFile = workDir.join("put-source.txt");
const content = `setup-put-cat-test ${new Date().toISOString()}\nline-2\n`;
await Deno.writeTextFile(srcFile, content);
console.log(`[INFO] put -> ${REMOTE_PATH}`);
await runCliWithInput(content, vaultDir, "--settings", settingsFile, "put", REMOTE_PATH);
console.log(`[INFO] cat <- ${REMOTE_PATH}`);
const rawOutput = await run("cat", REMOTE_PATH);
const catOutput = sanitiseCatStdout(rawOutput);
assertEquals(content, catOutput, "put/cat roundtrip content mismatch");
console.log("[PASS] put/cat roundtrip matched");
});
// ------------------------------------------------------------------
// ls: single file
// ------------------------------------------------------------------
await t.step("ls output format (single file)", async () => {
const lsOutput = await run("ls", REMOTE_PATH);
const line = lsOutput
.trim()
.split("\n")
.find((l) => l.startsWith(REMOTE_PATH + "\t"));
assert(line, `ls output did not include ${REMOTE_PATH}`);
const [lsPath, lsSize, lsMtime, lsRev] = line.split("\t");
assertEquals(lsPath, REMOTE_PATH, "ls path column mismatch");
assert(/^\d+$/.test(lsSize), `ls size not numeric: ${lsSize}`);
assert(/^\d+$/.test(lsMtime), `ls mtime not numeric: ${lsMtime}`);
assert(lsRev?.length > 0, "ls revision column is empty");
console.log("[PASS] ls output format matched");
});
// ------------------------------------------------------------------
// ls: prefix filter and sort order
// ------------------------------------------------------------------
await t.step("ls prefix filter and sort order", async () => {
await runCliWithInput("file-a\n", vaultDir, "--settings", settingsFile, "put", "test/a-first.txt");
await runCliWithInput("file-z\n", vaultDir, "--settings", settingsFile, "put", "test/z-last.txt");
const lsOut = await run("ls", "test/");
const lines = lsOut.trim().split("\n").filter(Boolean);
assert(lines.length >= 3, "ls prefix output expected at least 3 rows");
// Verify sorted ascending by path
const paths = lines.map((l) => l.split("\t")[0]);
for (let i = 1; i < paths.length; i++) {
assert(paths[i - 1] <= paths[i], `ls output not sorted: ${paths[i - 1]} > ${paths[i]}`);
}
assert(
lines.some((l) => l.startsWith("test/a-first.txt\t")),
"ls prefix output missing test/a-first.txt"
);
assert(
lines.some((l) => l.startsWith("test/z-last.txt\t")),
"ls prefix output missing test/z-last.txt"
);
console.log("[PASS] ls prefix and sorting matched");
});
// ------------------------------------------------------------------
// ls: no-match prefix returns empty output
// ------------------------------------------------------------------
await t.step("ls no-match prefix returns empty", async () => {
const lsOut = await run("ls", "no-such-prefix/");
assertEquals(lsOut.trim(), "", "ls no-match prefix should produce empty output");
console.log("[PASS] ls no-match prefix matched");
});
// ------------------------------------------------------------------
// info: JSON output format
// ------------------------------------------------------------------
await t.step("info output JSON format", async () => {
const infoOut = await run("info", REMOTE_PATH);
let data: Record<string, unknown>;
try {
data = JSON.parse(infoOut);
} catch {
throw new Error(`info output is not valid JSON:\n${infoOut}`);
}
assertEquals(data.path, REMOTE_PATH, "info .path mismatch");
assertEquals(data.filename, REMOTE_PATH.split("/").at(-1), "info .filename mismatch");
assert(typeof data.size === "number" && data.size >= 0, `info .size invalid: ${data.size}`);
assert(typeof data.chunks === "number" && (data.chunks as number) >= 1, `info .chunks invalid: ${data.chunks}`);
assertEquals(data.conflicts, "N/A", "info .conflicts should be N/A");
console.log("[PASS] info output format matched");
});
// ------------------------------------------------------------------
// info: non-existent path exits non-zero
// ------------------------------------------------------------------
await t.step("info non-existent path returns non-zero", async () => {
const r = await runCli(vaultDir, "--settings", settingsFile, "info", "no-such-file.md");
assert(r.code !== 0, "info on non-existent file should exit non-zero");
console.log("[PASS] info non-existent path returns non-zero");
});
// ------------------------------------------------------------------
// rm: removes file from ls and makes cat fail
// ------------------------------------------------------------------
await t.step("rm removes target from ls and cat", async () => {
await run("rm", "test/z-last.txt");
const catResult = await runCli(vaultDir, "--settings", settingsFile, "cat", "test/z-last.txt");
assert(catResult.code !== 0, "rm target should not be readable by cat");
const lsOut = await run("ls", "test/");
assert(!lsOut.includes("test/z-last.txt\t"), "rm target should not appear in ls output");
console.log("[PASS] rm removed target from visible entries");
});
// ------------------------------------------------------------------
// resolve: accepts current revision, rejects invalid revision
// ------------------------------------------------------------------
await t.step("resolve: valid and invalid revisions", async () => {
const lsLine = (await run("ls", "test/a-first.txt")).trim().split("\n")[0];
assert(lsLine, "could not fetch revision for resolve test");
const rev = lsLine.split("\t")[3];
assert(rev?.length > 0, "revision was empty for resolve test");
await run("resolve", "test/a-first.txt", rev);
console.log("[PASS] resolve accepted current revision");
const badR = await runCli(vaultDir, "--settings", settingsFile, "resolve", "test/a-first.txt", "9-no-such-rev");
assert(badR.code !== 0, "resolve with non-existent revision should exit non-zero");
console.log("[PASS] resolve non-existent revision returns non-zero");
});
// ------------------------------------------------------------------
// cat-rev / pull-rev: retrieve a past revision
// ------------------------------------------------------------------
await t.step("cat-rev / pull-rev: retrieve past revision", async () => {
const revPath = "test/revision-history.txt";
await runCliWithInput("revision-v1\n", vaultDir, "--settings", settingsFile, "put", revPath);
await runCliWithInput("revision-v2\n", vaultDir, "--settings", settingsFile, "put", revPath);
await runCliWithInput("revision-v3\n", vaultDir, "--settings", settingsFile, "put", revPath);
const infoOut = await run("info", revPath);
const infoData = JSON.parse(infoOut) as {
revisions?: string[];
};
const revisions = Array.isArray(infoData.revisions) ? infoData.revisions : [];
const pastRev = revisions.find((r): r is string => typeof r === "string" && r !== "N/A");
assert(pastRev, "info output did not include any past revision");
const catRevOut = await run("cat-rev", revPath, pastRev);
const catRevClean = sanitiseCatStdout(catRevOut);
assert(
catRevClean === "revision-v1\n" || catRevClean === "revision-v2\n",
`cat-rev output did not match expected past revision:\n${catRevClean}`
);
console.log("[PASS] cat-rev matched one of the past revisions from info");
const pullRevFile = workDir.join("rev-pull-output.txt");
await run("pull-rev", revPath, pullRevFile, pastRev);
const pullRevContent = await Deno.readTextFile(pullRevFile);
assert(
pullRevContent === "revision-v1\n" || pullRevContent === "revision-v2\n",
`pull-rev output did not match expected past revision:\n${pullRevContent}`
);
console.log("[PASS] pull-rev matched one of the past revisions from info");
});
});

View File

@@ -0,0 +1,93 @@
/**
* Deno port of test-sync-locked-remote-linux.sh
*
* Verifies CLI sync behaviour when the remote milestone document is unlocked
* versus locked.
*/
import { assert, assertStringIncludes } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCli } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { createCouchdbDatabase, startCouchdb, stopCouchdb, updateCouchdbDoc } from "./helpers/docker.ts";
const MILESTONE_DOC = "_local/obsydian_livesync_milestone";
function requireEnv(...keys: string[]): string {
for (const key of keys) {
const value = Deno.env.get(key)?.trim();
if (value) return value;
}
throw new Error(`Required env var is missing: ${keys.join(" or ")}`);
}
Deno.test("sync: actionable error against locked remote DB", async () => {
const couchdbUri = requireEnv("COUCHDB_URI", "hostname").replace(/\/$/, "");
const couchdbUser = requireEnv("COUCHDB_USER", "username");
const couchdbPassword = requireEnv("COUCHDB_PASSWORD", "password");
const dbPrefix = requireEnv("COUCHDB_DBNAME", "dbname");
const dbname = `${dbPrefix}-locked-${Date.now()}-${Math.floor(Math.random() * 100000)}`;
await using workDir = await TempDir.create("livesync-cli-locked-test");
const vaultDir = workDir.join("vault");
const settingsFile = workDir.join("settings.json");
await Deno.mkdir(vaultDir, { recursive: true });
const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (shouldStartDocker) {
console.log(`[INFO] starting CouchDB and creating test database: ${dbname}`);
await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
} else {
console.log(`[INFO] using existing CouchDB and creating test database: ${dbname}`);
await createCouchdbDatabase(couchdbUri, couchdbUser, couchdbPassword, dbname);
}
try {
await initSettingsFile(settingsFile);
await applyCouchdbSettings(settingsFile, couchdbUri, couchdbUser, couchdbPassword, dbname, true);
console.log("[CASE] initial sync to create milestone document");
const initialSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
initialSync.code === 0,
`initial sync failed\nstdout: ${initialSync.stdout}\nstderr: ${initialSync.stderr}`
);
const updateMilestone = async (locked: boolean) => {
await updateCouchdbDoc(couchdbUri, couchdbUser, couchdbPassword, `${dbname}/${MILESTONE_DOC}`, (doc) => ({
...doc,
locked,
accepted_nodes: [],
}));
};
console.log("[CASE] sync should succeed when remote is not locked");
await updateMilestone(false);
const unlockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
unlockedSync.code === 0,
`sync should succeed when remote is not locked\nstdout: ${unlockedSync.stdout}\nstderr: ${unlockedSync.stderr}`
);
assert(
!unlockedSync.combined.includes("The remote database is locked"),
`locked error should not appear when remote is not locked\n${unlockedSync.combined}`
);
console.log("[PASS] unlocked remote DB syncs successfully");
console.log("[CASE] sync should fail with actionable error when remote is locked");
await updateMilestone(true);
const lockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
lockedSync.code !== 0,
`sync should fail when remote is locked\nstdout: ${lockedSync.stdout}\nstderr: ${lockedSync.stderr}`
);
assertStringIncludes(lockedSync.combined, "The remote database is locked and this device is not yet accepted");
console.log("[PASS] locked remote DB produces actionable CLI error");
} finally {
if (shouldStartDocker && !keepDocker) {
await stopCouchdb().catch(() => {});
}
}
});

View File

@@ -0,0 +1,272 @@
/**
* Deno port of test-sync-two-local-databases-linux.sh
*
* Tests two-vault synchronisation via CouchDB including conflict detection
* and resolution.
*
* Requires CouchDB connection details. Provide them via environment variables
* OR place a .test.env file at src/apps/cli/.test.env.
*
* By default, a CouchDB Docker container is started automatically
* (LIVESYNC_START_DOCKER=1). Set LIVESYNC_START_DOCKER=0 to use an existing
* CouchDB instance instead.
*
* Run:
* deno test -A test-sync-two-local-databases.ts
*
* With an existing CouchDB:
* COUCHDB_URI=http://127.0.0.1:5984 \
* COUCHDB_USER=admin \
* COUCHDB_PASSWORD=password \
* COUCHDB_DBNAME=livesync-test \
* LIVESYNC_START_DOCKER=0 \
* deno test -A test-sync-two-local-databases.ts
*/
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail, jsonFieldIsNa } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";
// ---------------------------------------------------------------------------
// Load configuration
// ---------------------------------------------------------------------------
async function resolveConfig(): Promise<{
uri: string;
user: string;
password: string;
baseDbname: string;
} | null> {
const env = Deno.env.toObject();
const uri = (env["COUCHDB_URI"] ?? env["hostname"] ?? "").replace(/\/$/, "");
const user = env["COUCHDB_USER"] ?? env["username"] ?? "";
const password = env["COUCHDB_PASSWORD"] ?? env["password"] ?? "";
const baseDbname = env["COUCHDB_DBNAME"] ?? env["dbname"] ?? "livesync-test";
if (!uri || !user || !password) return null;
return { uri, user, password, baseDbname };
}
const config = await resolveConfig();
const START_DOCKER = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const KEEP_DOCKER = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
const SYNC_RETRY = Number(Deno.env.get("LIVESYNC_SYNC_RETRY") ?? "8");
// Provide a sane default for flaky remote connectivity in Docker-on-WSL
// environments. Users can override explicitly if needed.
if (!Deno.env.has("LIVESYNC_CLI_RETRY")) {
Deno.env.set("LIVESYNC_CLI_RETRY", "2");
}
// ---------------------------------------------------------------------------
// Test suite
// ---------------------------------------------------------------------------
Deno.test(
{
name: "sync two local databases: sync + conflict detection + resolution",
ignore: config === null,
},
async (t) => {
if (!config) return; // narrowing for TypeScript
const suffix = `${Date.now()}-${Math.floor(Math.random() * 65535)}`;
const dbname = `${config.baseDbname}-${suffix}`;
await using workDir = await TempDir.create("livesync-cli-two-db-test");
// ------------------------------------------------------------------
// Docker lifecycle
// ------------------------------------------------------------------
if (START_DOCKER) {
await startCouchdb(config.uri, config.user, config.password, dbname);
}
try {
await runSuite(t, workDir, config, dbname);
} finally {
if (START_DOCKER && !KEEP_DOCKER) {
await stopCouchdb().catch(() => {});
} else {
if (START_DOCKER && KEEP_DOCKER) {
console.log("[INFO] LIVESYNC_DEBUG_KEEP_DOCKER=1, keeping couchdb-test container");
}
console.log(`[INFO] test database '${dbname}' is preserved for debugging.`);
}
}
}
);
// ---------------------------------------------------------------------------
// Suite implementation
// ---------------------------------------------------------------------------
async function runSuite(
t: Deno.TestContext,
workDir: TempDir,
config: { uri: string; user: string; password: string },
dbname: string
): Promise<void> {
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
const runWithRetry = async <T>(label: string, fn: () => Promise<T>, retries = SYNC_RETRY): Promise<T> => {
let lastErr: unknown;
for (let i = 0; i <= retries; i++) {
try {
return await fn();
} catch (err) {
lastErr = err;
if (i === retries) break;
const delayMs = 500 * (i + 1);
console.warn(`[WARN] ${label} failed, retrying (${i + 1}/${retries}) in ${delayMs}ms`);
await sleep(delayMs);
}
}
throw lastErr;
};
const vaultA = workDir.join("vault-a");
const vaultB = workDir.join("vault-b");
const settingsA = workDir.join("a-settings.json");
const settingsB = workDir.join("b-settings.json");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
await initSettingsFile(settingsA);
await initSettingsFile(settingsB);
const applySettings = async (f: string) =>
applyCouchdbSettings(f, config.uri, config.user, config.password, dbname, /* liveSync */ true);
await applySettings(settingsA);
await applySettings(settingsB);
const runA = (...args: string[]) => runCliOrFail(vaultA, "--settings", settingsA, ...args);
const runB = (...args: string[]) => runCliOrFail(vaultB, "--settings", settingsB, ...args);
const syncA = () => runWithRetry("syncA", () => runA("sync"));
const syncB = () => runWithRetry("syncB", () => runB("sync"));
const catA = (path: string) => runA("cat", path);
const catB = (path: string) => runB("cat", path);
// ------------------------------------------------------------------
// Case 1: A creates file, B reads after sync
// ------------------------------------------------------------------
await t.step("case 1: A creates file -> B can read after sync", async () => {
const srcA = workDir.join("from-a-src.txt");
await Deno.writeTextFile(srcA, "from-a\n");
await runA("push", srcA, "shared/from-a.txt");
await syncA();
await syncB();
const value = (await catB("shared/from-a.txt")).replace(/\r\n/g, "\n").trimEnd();
assertEquals(value, "from-a", "B could not read file created on A");
console.log("[PASS] case 1 passed");
});
// ------------------------------------------------------------------
// Case 2: B creates file, A reads after sync
// ------------------------------------------------------------------
await t.step("case 2: B creates file -> A can read after sync", async () => {
const srcB = workDir.join("from-b-src.txt");
await Deno.writeTextFile(srcB, "from-b\n");
await runB("push", srcB, "shared/from-b.txt");
await syncB();
await syncA();
const value = (await catA("shared/from-b.txt")).replace(/\r\n/g, "\n").trimEnd();
assertEquals(value, "from-b", "A could not read file created on B");
console.log("[PASS] case 2 passed");
});
// ------------------------------------------------------------------
// Case 3: concurrent edits create a conflict
// ------------------------------------------------------------------
await t.step("case 3: concurrent edits create conflict", async () => {
const baseSrc = workDir.join("base-src.txt");
await Deno.writeTextFile(baseSrc, "base\n");
await runA("push", baseSrc, "shared/conflicted.txt");
await syncA();
await syncB();
const aEdit = workDir.join("edit-a.txt");
const bEdit = workDir.join("edit-b.txt");
await Deno.writeTextFile(aEdit, "edit-from-a\n");
await Deno.writeTextFile(bEdit, "edit-from-b\n");
await runA("push", aEdit, "shared/conflicted.txt");
await runB("push", bEdit, "shared/conflicted.txt");
const infoFileA = workDir.join("info-a.json");
const infoFileB = workDir.join("info-b.json");
let conflictDetected = false;
for (const side of ["a", "b"] as const) {
if (side === "a") await syncA();
else await syncB();
await Deno.writeTextFile(infoFileA, await runA("info", "shared/conflicted.txt"));
await Deno.writeTextFile(infoFileB, await runB("info", "shared/conflicted.txt"));
const da = JSON.parse(await Deno.readTextFile(infoFileA)) as Record<string, unknown>;
const db = JSON.parse(await Deno.readTextFile(infoFileB)) as Record<string, unknown>;
if (!jsonFieldIsNa(da, "conflicts") || !jsonFieldIsNa(db, "conflicts")) {
conflictDetected = true;
break;
}
}
assert(conflictDetected, "expected conflict after concurrent edits, but both sides show N/A");
console.log("[PASS] case 3 conflict detected");
});
// ------------------------------------------------------------------
// Case 4: resolve on A, verify B has no conflict after sync
// ------------------------------------------------------------------
await t.step("case 4: resolve on A propagates to B", async () => {
const infoFileA = workDir.join("info-a-resolve.json");
const infoFileB = workDir.join("info-b-resolve.json");
// Ensure A sees the conflict
for (let i = 0; i < 5; i++) {
const raw = await runA("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, raw);
const da = JSON.parse(raw) as Record<string, unknown>;
if (!jsonFieldIsNa(da, "conflicts")) break;
await syncB();
await syncA();
}
const rawA = await runA("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, rawA);
const dataA = JSON.parse(rawA) as Record<string, unknown>;
assert(!jsonFieldIsNa(dataA, "conflicts"), "A does not see conflict, cannot resolve from A only");
const keepRev = dataA["revision"] as string;
assert(keepRev?.length > 0, "could not read revision from A info output");
await runA("resolve", "shared/conflicted.txt", keepRev);
let resolved = false;
for (let i = 0; i < 6; i++) {
await syncA();
await syncB();
const rawA2 = await runA("info", "shared/conflicted.txt");
const rawB2 = await runB("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, rawA2);
await Deno.writeTextFile(infoFileB, rawB2);
const da2 = JSON.parse(rawA2) as Record<string, unknown>;
const db2 = JSON.parse(rawB2) as Record<string, unknown>;
if (jsonFieldIsNa(da2, "conflicts") && jsonFieldIsNa(db2, "conflicts")) {
resolved = true;
break;
}
// If A still sees a conflict, resolve it again
if (!jsonFieldIsNa(da2, "conflicts")) {
const rev2 = da2["revision"] as string;
if (rev2) await runA("resolve", "shared/conflicted.txt", rev2).catch(() => {});
}
}
assert(resolved, "conflicts should be resolved on both A and B");
const contentA = (await catA("shared/conflicted.txt")).replace(/\r\n/g, "\n");
const contentB = (await catB("shared/conflicted.txt")).replace(/\r\n/g, "\n");
assertEquals(contentA, contentB, "resolved content mismatch between A and B");
console.log("[PASS] case 4 passed");
console.log("[PASS] all sync/resolve scenarios passed");
});
}

View File

@@ -0,0 +1,298 @@
# CLI Deno Test Development Notes
This document provides an overview of the Deno-based compatibility tests under `src/apps/cli/testdeno/`.
The existing bash tests under `src/apps/cli/test/` are preserved, while a Windows-friendly suite is maintained in parallel.
---
## Goals
- Keep existing bash tests intact.
- Provide direct execution from Windows PowerShell.
- Establish a TypeScript (Deno) foundation for core end-to-end and integration scenarios.
---
## Directory structure
```
src/apps/cli/testdeno/
    deno.json
    CONTRIBUTING_TESTS.md
    helpers/
        backgroundCli.ts
        cli.ts
        docker.ts
        env.ts
        p2p.ts
        settings.ts
        temp.ts
    test-e2e-two-vaults-couchdb.ts
    test-push-pull.ts
    test-p2p-host.ts
    test-p2p-peers-local-relay.ts
    test-p2p-sync.ts
    test-p2p-three-nodes-conflict.ts
    test-p2p-upload-download-repro.ts
    test-e2e-two-vaults-matrix.ts
    test-setup-put-cat.ts
    test-mirror.ts
    test-sync-two-local-databases.ts
    test-sync-locked-remote.ts
```
---
## Key files
### `deno.json`
- Defines Deno tasks.
- Defines import maps for `@std/assert` and `@std/path`.
Main tasks:
- `deno task test`
- `deno task test:local`
- `deno task test:push-pull`
- `deno task test:setup-put-cat`
- `deno task test:mirror`
- `deno task test:sync-two-local`
- `deno task test:sync-locked-remote`
- `deno task test:p2p-host`
- `deno task test:p2p-peers`
- `deno task test:p2p-sync`
- `deno task test:p2p-three-nodes`
- `deno task test:p2p-upload-download`
- `deno task test:e2e-couchdb`
- `deno task test:e2e-matrix`
### `helpers/cli.ts`
- CLI execution wrappers.
- `runCli`, `runCliOrFail`, `runCliWithInput`.
- Output normalisation via `sanitiseCatStdout`.
- Comparison utilities, including `assertFilesEqual`.
This file corresponds to `run_cli` and common assertions in `test-helpers.sh`.
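A minimal usage sketch, following how the suites above call these wrappers (the vault directory, settings path, and remote key below are placeholders):
```ts
import { assert } from "@std/assert";
import { runCli, runCliOrFail, sanitiseCatStdout } from "./helpers/cli.ts";

Deno.test("sketch: positive and negative CLI invocations", async () => {
    // `runCliOrFail` throws on a non-zero exit and returns stdout;
    // `runCli` returns { code, stdout, stderr, combined } for failure-path checks.
    const content = sanitiseCatStdout(
        await runCliOrFail("./vault", "--settings", "./data.json", "cat", "test/example.txt")
    ).trimEnd();
    assert(content.length > 0, "expected some content from cat");
    const missing = await runCli("./vault", "--settings", "./data.json", "cat", "no-such-file.md");
    assert(missing.code !== 0, "cat on a missing path should exit non-zero");
});
```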
### `helpers/settings.ts`
- Executes `init-settings --force`.
- Marks `isConfigured = true`.
- Applies CouchDB and P2P settings.
- Applies remote synchronisation settings and P2P test tweaks.
This file corresponds to settings helpers in `test-helpers.sh`.
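A sketch of the usual bootstrap, using the same call shape as the CouchDB suites above (the connection values are placeholders; the trailing `true` enables LiveSync-style replication):
```ts
import { TempDir } from "./helpers/temp.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";

Deno.test("sketch: bootstrap a settings file", async () => {
    await using workDir = await TempDir.create("livesync-cli-settings-sketch");
    const settingsFile = workDir.join("data.json");
    // Runs init-settings --force, marks isConfigured, then points the file at CouchDB.
    await initSettingsFile(settingsFile);
    await applyCouchdbSettings(settingsFile, "http://127.0.0.1:5984", "admin", "password", "livesync-test", true);
});
```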
### `helpers/docker.ts`
- Starts, stops, and initialises CouchDB directly from Deno.
- Configures CouchDB via `fetch + retry`.
- Starts and stops the P2P relay through the same Docker runner.
Both CouchDB and P2P relay flows are bash-independent.
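The CouchDB lifecycle follows the same try/finally shape the suites above already use; condensed (connection values are placeholders):
```ts
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";

// Start the test container and create the database, then always attempt a
// teardown, ignoring errors from an already-stopped container.
await startCouchdb("http://127.0.0.1:5984", "admin", "password", `docker-sketch-${Date.now()}`);
try {
    // ... run CLI commands against the database here ...
} finally {
    await stopCouchdb().catch(() => {});
}
```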
### `helpers/backgroundCli.ts`
- Starts long-running commands such as `p2p-host` in the background.
- Waits for readiness logs and handles termination.
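A sketch mirroring how the P2P suites drive `p2p-host`; the readiness string and timeout match the tests above, while the vault and settings paths are placeholders:
```ts
import { startCliInBackground } from "./helpers/backgroundCli.ts";

// Launch the long-running command, block until its readiness log appears,
// and always stop it so the test does not leak a child process.
const host = startCliInBackground("./vault-host", "--settings", "./settings-host.json", "p2p-host");
try {
    await host.waitUntilContains("P2P host is running", 20000);
    // ... exercise peers against the host here ...
} finally {
    await host.stop();
}
```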
### `helpers/p2p.ts`
- Determines whether a local relay should be started.
- Parses `p2p-peers` output.
- Discovers peer IDs with a fallback based on advertisement logs.
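Relay management and peer discovery compose as in the P2P suites; a condensed sketch (the relay URL and timeouts are the defaults those suites use, the vault and settings paths are placeholders):
```ts
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { runCliOrFail } from "./helpers/cli.ts";

// Start a local relay only if the helper decides one is needed for this URL,
// find the host's peer id for this vault/settings pair, sync, then clean up.
const relayStarted = await maybeStartLocalRelay("ws://localhost:4000/");
try {
    const peer = await discoverPeer("./vault-client", "./settings-client.json", 20);
    await runCliOrFail("./vault-client", "--settings", "./settings-client.json", "p2p-sync", peer.id, "240");
} finally {
    await stopLocalRelayIfStarted(relayStarted);
}
```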
### `helpers/env.ts`
- Loads `.test.env`.
- Supports `KEY=value`, single-quoted values, and double-quoted values.
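The real parser lives in `helpers/env.ts`; the function below is only a hypothetical illustration of the accepted line shapes, not the helper's API:
```ts
// Hypothetical sketch of the accepted formats: KEY=value, KEY='value', KEY="value".
function parseTestEnvLine(line: string): [string, string] | null {
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)$/);
    if (!match) return null;
    let value = match[2].trim();
    const quoted =
        (value.startsWith('"') && value.endsWith('"')) || (value.startsWith("'") && value.endsWith("'"));
    if (quoted && value.length >= 2) value = value.slice(1, -1);
    return [match[1], value];
}
```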
### `helpers/temp.ts`
- Provides `TempDir`.
- Uses `await using` to auto-clean temporary directories.
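The pattern every suite above opens with, relying on explicit resource management:
```ts
import { TempDir } from "./helpers/temp.ts";

Deno.test("sketch: temporary working directory", async () => {
    // `await using` invokes the async disposer when the scope exits,
    // removing the directory even if the test body throws.
    await using workDir = await TempDir.create("livesync-cli-temp-sketch");
    const vaultDir = workDir.join("vault");
    await Deno.mkdir(vaultDir, { recursive: true });
});
```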
---
## Implemented tests
### `test-push-pull.ts`
- Verifies push and pull round trips.
- Uses environment variables or `.test.env` for CouchDB values.
### `test-setup-put-cat.ts`
- Verifies `setup` with full setup URI generation via `encodeSettingsToSetupURI`.
- Verifies `put`, `cat`, `ls`, `info`, `rm`, `resolve`, `cat-rev`, and `pull-rev`.
- Does not require an external remote.
### `test-mirror.ts`
- Verifies six core mirror scenarios.
- Does not require an external remote.
### `test-sync-two-local-databases.ts`
- Verifies sync between two vaults and CouchDB.
- Verifies conflict detection and resolve propagation.
- Starts Docker CouchDB by default when `LIVESYNC_START_DOCKER != 0`.
### `test-sync-locked-remote.ts`
- Updates the CouchDB milestone `locked` flag.
- Verifies sync success when unlocked.
- Verifies actionable CLI error when locked.
### `test-p2p-host.ts`
- Verifies that `p2p-host` starts and emits readiness output.
### `test-p2p-peers-local-relay.ts`
- Verifies peer discovery through a local relay.
### `test-p2p-sync.ts`
- Verifies that `p2p-sync` completes after peer discovery.
### `test-p2p-three-nodes-conflict.ts`
- Uses one host and two clients.
- Verifies conflict creation, detection via `info`, and resolution via `resolve`.
### `test-p2p-upload-download-repro.ts`
- Uses host, upload, and download nodes.
- Verifies transfer of text files and binary files, including larger files.
### `test-e2e-two-vaults-couchdb.ts`
- Verifies two-vault end-to-end scenarios on CouchDB.
- Runs both encryption-off and encryption-on cases.
- Includes conflict marker checks in `ls` and resolve propagation checks.
### `test-e2e-two-vaults-matrix.ts`
- Verifies the matrix equivalent of the bash script.
- Runs four combinations:
- `COUCHDB-enc0`
- `COUCHDB-enc1`
- `MINIO-enc0`
- `MINIO-enc1`
---
## Running tests (PowerShell)
From `src/apps/cli/testdeno`:
```powershell
cd src/apps/cli/testdeno
# Local-only set
deno task test:local
# Individual tests
deno task test:setup-put-cat
deno task test:mirror
deno task test:push-pull
deno task test:sync-locked-remote
# CouchDB-based tests
deno task test:sync-two-local
deno task test:e2e-couchdb
# P2P-based tests
deno task test:p2p-host
deno task test:p2p-peers
deno task test:p2p-sync
deno task test:p2p-three-nodes
deno task test:p2p-upload-download
deno task test:e2e-matrix
```
---
## Environment variables
### CouchDB
- `COUCHDB_URI`
- `COUCHDB_USER`
- `COUCHDB_PASSWORD`
- `COUCHDB_DBNAME`
Equivalent keys in `src/apps/cli/.test.env`:
- `hostname`
- `username`
- `password`
- `dbname`
### Behaviour switches
- `LIVESYNC_START_DOCKER=0`: use existing CouchDB.
- `REMOTE_PATH`: override target path for selected tests.
- `LIVESYNC_TEST_TEE=1`: stream CLI stdout and stderr during execution.
- `LIVESYNC_DOCKER_TEE=1`: stream Docker stdout and stderr.
- `LIVESYNC_CLI_RETRY=<n>`: retry transient network failures.
- `LIVESYNC_DEBUG_KEEP_DOCKER=1`: keep `couchdb-test` after test completion.
### Docker command selection
`helpers/docker.ts` supports command selection via environment variables.
- `LIVESYNC_DOCKER_MODE=auto` (default)
- Windows: tries `wsl docker` first, then `docker`.
- Non-Windows: tries `docker` first, then `wsl docker`.
- `LIVESYNC_DOCKER_MODE=native`: always uses `docker`.
- `LIVESYNC_DOCKER_MODE=wsl`: always uses `wsl docker`.
- `LIVESYNC_DOCKER_COMMAND="..."`: custom command, for example `wsl docker`.
`LIVESYNC_DOCKER_COMMAND` has priority over `LIVESYNC_DOCKER_MODE`.
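Roughly, the precedence can be pictured like this (a simplified sketch, not the helper's exact implementation):
```ts
// Sketch of the documented precedence: explicit command > mode > platform-aware auto.
function resolveDockerCommand(): string[] {
    const custom = Deno.env.get("LIVESYNC_DOCKER_COMMAND");
    if (custom) return custom.split(/\s+/);
    switch (Deno.env.get("LIVESYNC_DOCKER_MODE") ?? "auto") {
        case "native":
            return ["docker"];
        case "wsl":
            return ["wsl", "docker"];
        default:
            // "auto": prefer `wsl docker` on Windows and `docker` elsewhere; the real
            // helper also falls back to the other candidate if the first is unavailable.
            return Deno.build.os === "windows" ? ["wsl", "docker"] : ["docker"];
    }
}
```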
PowerShell examples:
```powershell
# Use Docker in WSL explicitly
$env:LIVESYNC_DOCKER_MODE = "wsl"
deno task test:sync-two-local
# Full custom command
$env:LIVESYNC_DOCKER_COMMAND = "wsl docker"
deno task test:sync-two-local
```
### P2P
- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY=0|1`
- `TIMEOUT_SECONDS`
---
## Continuous Integration
The GitHub Actions workflow `.github/workflows/cli-deno-tests.yml` runs these tests automatically on pushes and on pull requests that affect the CLI.
---
## Current limitations
- MinIO startup and matrix coverage have been ported; the remaining limitations lie elsewhere, not in setup URI generation.
---
## Maintenance policy
- Existing bash tests remain available.
- Deno tests are expanded in parallel for cross-platform usage.
- New scenarios should be added through reusable helpers in `helpers/`.
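A skeleton for such a new scenario, composed only from the helpers documented above (the remote key and content are placeholders; scenarios with a remote would additionally call `applyCouchdbSettings` or the P2P helpers):
```ts
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail, runCliWithInput, sanitiseCatStdout } from "./helpers/cli.ts";
import { initSettingsFile } from "./helpers/settings.ts";

Deno.test("new scenario: put/cat against the local database", async () => {
    await using workDir = await TempDir.create("livesync-cli-new-scenario");
    const settingsFile = workDir.join("data.json");
    const vaultDir = workDir.join("vault");
    await Deno.mkdir(vaultDir, { recursive: true });
    await initSettingsFile(settingsFile);
    // Store content through stdin, then read it back and compare.
    await runCliWithInput("hello\n", vaultDir, "--settings", settingsFile, "put", "scratch/hello.txt");
    const out = sanitiseCatStdout(await runCliOrFail(vaultDir, "--settings", settingsFile, "cat", "scratch/hello.txt"));
    assert(out === "hello\n", `unexpected content: ${out}`);
});
```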

View File

@@ -11,11 +11,54 @@ const defaultExternal = [
"crypto",
"pouchdb-adapter-leveldb",
"commander",
"chokidar",
"punycode",
"werift",
];
// Polyfill FileReader at the very top of the CJS bundle. octagonal-wheels uses
// FileReader for base64 conversion when Uint8Array.toBase64 (TC39 proposal) is
// unavailable. Node.js has neither, so we inject a minimal FileReader shim before
// any module-scope code evaluates.
const fileReaderPolyfillBanner = `
if (typeof globalThis.FileReader === "undefined") {
globalThis.FileReader = class FileReader {
constructor() { this.result = null; this.onload = null; this.onerror = null; }
readAsDataURL(blob) {
blob.arrayBuffer().then((buf) => {
var b64 = require("buffer").Buffer.from(buf).toString("base64");
this.result = "data:" + (blob.type || "application/octet-stream") + ";base64," + b64;
if (this.onload) this.onload({ target: this });
}).catch((err) => { if (this.onerror) this.onerror({ target: this, error: err }); });
}
readAsArrayBuffer() { throw new Error("FileReader.readAsArrayBuffer is not implemented in this polyfill"); }
readAsBinaryString() { throw new Error("FileReader.readAsBinaryString is not implemented in this polyfill"); }
readAsText() { throw new Error("FileReader.readAsText is not implemented in this polyfill"); }
abort() { throw new Error("FileReader.abort is not implemented in this polyfill"); }
};
}
`;
function injectBanner(): import("vite").Plugin {
return {
name: "inject-banner",
generateBundle(_options, bundle) {
for (const chunk of Object.values(bundle)) {
if (chunk.type === "chunk" && chunk.fileName.startsWith("entrypoint")) {
// Insert after the shebang line if present, otherwise at the top.
if (chunk.code.startsWith("#!")) {
const newline = chunk.code.indexOf("\n");
chunk.code = chunk.code.slice(0, newline + 1) + fileReaderPolyfillBanner + chunk.code.slice(newline + 1);
} else {
chunk.code = fileReaderPolyfillBanner + chunk.code;
}
}
}
},
};
}
export default defineConfig({
plugins: [svelte()],
plugins: [svelte(), injectBanner()],
resolve: {
alias: {
"@lib/worker/bgWorker.ts": "../../lib/src/worker/bgWorker.mock.ts",

View File

@@ -41,7 +41,7 @@ async function renderHistoryList(): Promise<VaultHistoryItem[]> {
const [items, lastUsedId] = await Promise.all([historyStore.getVaultHistory(), historyStore.getLastUsedVaultId()]);
listEl.innerHTML = "";
listEl.replaceChildren();
emptyEl.classList.toggle("is-hidden", items.length > 0);
for (const item of items) {

View File

@@ -138,7 +138,7 @@ export const _requestToCouchDBFetch = async (
authorization: authHeader,
"content-type": "application/json",
};
const uri = `${baseUri}/${path}`;
const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
const requestParam = {
url: uri,
method: method || (body ? "PUT" : "GET"),
@@ -162,7 +162,7 @@ export const _requestToCouchDB = async (
const authHeaderGen = new AuthorizationHeaderGenerator();
const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
const uri = `${baseUri}/${path}`;
const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
const requestParam: RequestUrlParam = {
url: uri,
method: method || (body ? "PUT" : "GET"),

View File

@@ -30,7 +30,8 @@
type JSONData = Record<string | number | symbol, any> | [any];
const docsArray = $derived.by(() => {
if (docs && docs.length >= 1) {
// The merge pane compares two revisions, so guard against incomplete input before reading docs[1].
if (docs && docs.length >= 2) {
if (keepOrder || docs[0].mtime < docs[1].mtime) {
return { a: docs[0], b: docs[1] } as const;
} else {

View File

@@ -636,10 +636,24 @@ Offline Changed files: ${processFiles.length}`;
// --> Conflict processing
// Keep one in-flight conflict check per path so repeated sync events do not close the active merge dialogue.
pendingConflictChecks = new Set<FilePathWithPrefix>();
queueConflictCheck(path: FilePathWithPrefix) {
if (this.pendingConflictChecks.has(path)) return;
this.pendingConflictChecks.add(path);
this.conflictResolutionProcessor.enqueue(path);
}
finishConflictCheck(path: FilePathWithPrefix) {
this.pendingConflictChecks.delete(path);
}
requeueConflictCheck(path: FilePathWithPrefix) {
this.finishConflictCheck(path);
this.queueConflictCheck(path);
}
async resolveConflictOnInternalFiles() {
// Scan all conflicted internal files
const conflicted = this.localDatabase.findEntries(ICHeader, ICHeaderEnd, { conflicts: true });
@@ -648,7 +662,7 @@ Offline Changed files: ${processFiles.length}`;
for await (const doc of conflicted) {
if (!("_conflicts" in doc)) continue;
if (isInternalMetadata(doc._id)) {
this.conflictResolutionProcessor.enqueue(doc.path);
this.queueConflictCheck(doc.path);
}
}
} catch (ex) {
@@ -679,21 +693,27 @@ Offline Changed files: ${processFiles.length}`;
const cc = await this.localDatabase.getRaw(id, { conflicts: true });
if (cc._conflicts?.length === 0) {
await this.extractInternalFileFromDatabase(stripAllPrefixes(path));
this.finishConflictCheck(path);
} else {
this.conflictResolutionProcessor.enqueue(path);
this.requeueConflictCheck(path);
}
// check the file again
}
conflictResolutionProcessor = new QueueProcessor(
async (paths: FilePathWithPrefix[]) => {
const path = paths[0];
sendSignal(`cancel-internal-conflict:${path}`);
try {
// Retrieve data
const id = await this.path2id(path, ICHeader);
const doc = await this.localDatabase.getRaw<MetaEntry>(id, { conflicts: true });
if (doc._conflicts === undefined) return [];
if (doc._conflicts.length == 0) return [];
if (doc._conflicts === undefined) {
this.finishConflictCheck(path);
return [];
}
if (doc._conflicts.length == 0) {
this.finishConflictCheck(path);
return [];
}
this._log(`Hidden file conflicted:${path}`);
const conflicts = doc._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
const revA = doc._rev;
@@ -725,7 +745,7 @@ Offline Changed files: ${processFiles.length}`;
await this.storeInternalFileToDatabase({ path: filename, ...stat });
await this.extractInternalFileFromDatabase(filename);
await this.localDatabase.removeRevision(id, revB);
this.conflictResolutionProcessor.enqueue(path);
this.requeueConflictCheck(path);
return [];
} else {
this._log(`Object merge is not applicable.`, LOG_LEVEL_VERBOSE);
@@ -743,6 +763,7 @@ Offline Changed files: ${processFiles.length}`;
await this.resolveByNewerEntry(id, path, doc, revA, revB);
return [];
} catch (ex) {
this.finishConflictCheck(path);
this._log(`Failed to resolve conflict (Hidden): ${path}`);
this._log(ex, LOG_LEVEL_VERBOSE);
return [];
@@ -761,15 +782,22 @@ Offline Changed files: ${processFiles.length}`;
const prefixedPath = addPrefix(path, ICHeader);
const docAMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revA });
const docBMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revB });
if (docAMerge != false && docBMerge != false) {
if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
// Again for other conflicted revisions.
this.conflictResolutionProcessor.enqueue(path);
try {
if (docAMerge != false && docBMerge != false) {
if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
// Again for other conflicted revisions.
this.requeueConflictCheck(path);
} else {
this.finishConflictCheck(path);
}
return;
} else {
// If either revision could not be read, force resolving by the newer one.
await this.resolveByNewerEntry(id, path, doc, revA, revB);
}
return;
} else {
// If either revision could not read, force resolving by the newer one.
await this.resolveByNewerEntry(id, path, doc, revA, revB);
} catch (ex) {
this.finishConflictCheck(path);
throw ex;
}
},
{
@@ -793,6 +821,8 @@ Offline Changed files: ${processFiles.length}`;
const storeFilePath = strippedPath;
const displayFilename = `${storeFilePath}`;
// const path = this.prefixedConfigDir2configDir(stripAllPrefixes(docA.path)) || docA.path;
// Cancel only when replacing an existing dialogue for the same path, not on every queue pass.
sendSignal(`cancel-internal-conflict:${docA.path}`);
const modal = new JsonResolveModal(this.app, storageFilePath, [docA, docB], async (keep, result) => {
// modal.close();
try {
@@ -1164,7 +1194,7 @@ Offline Changed files: ${files.length}`;
// Check if the file is conflicted, and if so, enqueue to resolve.
// Until the conflict is resolved, the file will not be processed.
if (docMeta._conflicts && docMeta._conflicts.length > 0) {
this.conflictResolutionProcessor.enqueue(path);
this.queueConflictCheck(path);
this._log(`${headerLine} Hidden file conflicted, enqueued to resolve`);
return true;
}

View File

@@ -781,7 +781,8 @@ Success: ${successCount}, Errored: ${errored}`;
const credential = generateCredentialObject(this.settings);
const request = async (path: string, method: string = "GET", body: any = undefined) => {
const req = await _requestToCouchDB(
this.settings.couchDB_URI + (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
this.settings.couchDB_URI.replace(/\/+$/, "") +
(this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
credential,
window.origin,
path,

Submodule src/lib updated: 37b8e2813e...6c53e748eb

View File

@@ -1,6 +1,6 @@
import { TFile, Modal, App, DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT, diff_match_patch } from "../../../deps.ts";
import { getPathFromTFile, isValidPath } from "../../../common/utils.ts";
import { decodeBinary, escapeStringToHTML, readString } from "../../../lib/src/string_and_binary/convert.ts";
import { decodeBinary, readString } from "../../../lib/src/string_and_binary/convert.ts";
import ObsidianLiveSyncPlugin from "../../../main.ts";
import {
type DocumentID,
@@ -66,6 +66,11 @@ export class DocumentHistoryModal extends Modal {
currentDeleted = false;
initialRev?: string;
// Diff navigation state
currentDiffIndex = -1;
diffNavContainer!: HTMLDivElement;
diffNavIndicator!: HTMLSpanElement;
constructor(
app: App,
core: LiveSyncBaseCore,
@@ -140,22 +145,66 @@ export class DocumentHistoryModal extends Modal {
return v;
}
prepareContentView(usePreformatted = true) {
this.contentView.empty();
this.contentView.toggleClass("op-pre", usePreformatted);
}
appendTextDiff(diff: [number, string][]) {
for (const [operation, text] of diff) {
if (operation == DIFF_DELETE) {
this.contentView.createSpan({ text, cls: "history-deleted" });
} else if (operation == DIFF_EQUAL) {
this.contentView.createSpan({ text, cls: "history-normal" });
} else if (operation == DIFF_INSERT) {
this.contentView.createSpan({ text, cls: "history-added" });
}
}
}
appendImageDiff(baseSrc: string, overlaySrc?: string) {
const wrap = this.contentView.createDiv({ cls: "ls-imgdiff-wrap" });
const overlay = wrap.createDiv({ cls: "overlay" });
overlay.createEl("img", { cls: "img-base" }, (img) => {
img.src = baseSrc;
});
if (overlaySrc) {
overlay.createEl("img", { cls: "img-overlay" }, (img) => {
img.src = overlaySrc;
});
}
}
appendDeletedNotice(usePreformatted = true) {
const notice = "(At this revision, the file has been deleted)";
if (usePreformatted) {
this.contentView.appendText(`${notice}\n`);
} else {
this.contentView.createDiv({ text: notice });
}
}
async showExactRev(rev: string) {
const db = this.core.localDatabase;
const w = await db.getDBEntry(this.file, { rev: rev }, false, false, true);
this.currentText = "";
this.currentDeleted = false;
this.prepareContentView();
if (w === false) {
this.currentDeleted = true;
this.info.innerHTML = "";
this.contentView.innerHTML = `Could not read this revision<br>(${rev})`;
this.info.empty();
this.contentView.appendText("Could not read this revision");
this.contentView.createEl("br");
this.contentView.appendText(`(${rev})`);
} else {
this.currentDoc = w;
this.info.innerHTML = `Modified:${new Date(w.mtime).toLocaleString()}`;
let result = undefined;
this.info.setText(`Modified:${new Date(w.mtime).toLocaleString()}`);
const w1data = readDocument(w);
this.currentDeleted = !!w.deleted;
// this.currentText = w1data;
if (typeof w1data == "string") {
this.currentText = w1data;
}
let rendered = false;
if (this.showDiff) {
const prevRevIdx = this.revs_info.length - 1 - ((this.range.value as any) / 1 - 1);
if (prevRevIdx >= 0 && prevRevIdx < this.revs_info.length) {
@@ -163,58 +212,112 @@ export class DocumentHistoryModal extends Modal {
const w2 = await db.getDBEntry(this.file, { rev: oldRev }, false, false, true);
if (w2 != false) {
if (typeof w1data == "string") {
result = "";
const dmp = new diff_match_patch();
const w2data = readDocument(w2) as string;
const diff = dmp.diff_main(w2data, w1data);
dmp.diff_cleanupSemantic(diff);
for (const v of diff) {
const x1 = v[0];
const x2 = v[1];
if (x1 == DIFF_DELETE) {
result += "<span class='history-deleted'>" + escapeStringToHTML(x2) + "</span>";
} else if (x1 == DIFF_EQUAL) {
result += "<span class='history-normal'>" + escapeStringToHTML(x2) + "</span>";
} else if (x1 == DIFF_INSERT) {
result += "<span class='history-added'>" + escapeStringToHTML(x2) + "</span>";
const w2data = readDocument(w2);
if (typeof w2data == "string") {
const dmp = new diff_match_patch();
const diff = dmp.diff_main(w2data, w1data);
dmp.diff_cleanupSemantic(diff);
if (this.currentDeleted) {
this.appendDeletedNotice();
}
this.appendTextDiff(diff);
rendered = true;
}
result = result.replace(/\n/g, "<br>");
} else if (isImage(this.file)) {
const src = this.generateBlobURL("base", w1data);
const overlay = this.generateBlobURL(
"overlay",
readDocument(w2) as Uint8Array<ArrayBuffer>
);
result = `<div class='ls-imgdiff-wrap'>
<div class='overlay'>
<img class='img-base' src="${src}">
<img class='img-overlay' src='${overlay}'>
</div>
</div>`;
this.contentView.removeClass("op-pre");
this.prepareContentView(false);
if (this.currentDeleted) {
this.appendDeletedNotice(false);
}
this.appendImageDiff(src, overlay);
rendered = true;
}
}
}
}
if (result == undefined) {
if (!rendered) {
if (typeof w1data != "string") {
if (isImage(this.file)) {
const src = this.generateBlobURL("base", w1data);
result = `<div class='ls-imgdiff-wrap'>
<div class='overlay'>
<img class='img-base' src="${src}">
</div>
</div>`;
this.contentView.removeClass("op-pre");
this.prepareContentView(false);
if (this.currentDeleted) {
this.appendDeletedNotice(false);
}
this.appendImageDiff(src);
} else {
if (this.currentDeleted) {
this.appendDeletedNotice();
}
this.contentView.appendText("Binary file");
}
} else {
result = escapeStringToHTML(w1data);
if (this.currentDeleted) {
this.appendDeletedNotice();
}
this.contentView.appendText(w1data);
}
}
if (result == undefined) result = typeof w1data == "string" ? escapeStringToHTML(w1data) : "Binary file";
this.contentView.innerHTML =
(this.currentDeleted ? "(At this revision, the file has been deleted)\n" : "") + result;
}
// Reset diff navigation after content changes
this.resetDiffNavigation();
if (this.showDiff) {
this.navigateDiff("next");
}
}
/**
* Navigate to the previous or next diff block in the content view.
* Only effective when diff highlighting is enabled.
*/
navigateDiff(direction: "prev" | "next") {
const diffElements = this.contentView.querySelectorAll(".history-added, .history-deleted");
if (diffElements.length === 0) return;
// Remove previous focus highlight
const prevFocused = this.contentView.querySelector(".diff-focused");
if (prevFocused) {
prevFocused.classList.remove("diff-focused");
}
if (direction === "next") {
this.currentDiffIndex = (this.currentDiffIndex + 1) % diffElements.length;
} else {
this.currentDiffIndex = this.currentDiffIndex <= 0 ? diffElements.length - 1 : this.currentDiffIndex - 1;
}
const target = diffElements[this.currentDiffIndex];
target.classList.add("diff-focused");
target.scrollIntoView({ behavior: "smooth", block: "center" });
this.diffNavIndicator.textContent = `${this.currentDiffIndex + 1}/${diffElements.length}`;
}
/**
* Reset the diff navigation index and update the indicator.
*/
resetDiffNavigation() {
this.currentDiffIndex = -1;
if (this.diffNavIndicator) {
if (this.showDiff) {
const diffElements = this.contentView.querySelectorAll(".history-added, .history-deleted");
this.diffNavIndicator.textContent = diffElements.length > 0 ? `0/${diffElements.length}` : "\u2014";
} else {
this.diffNavIndicator.textContent = "\u2014";
}
}
this.updateDiffNavVisibility();
}
/**
* Show or hide the diff navigation buttons based on the showDiff state.
*/
updateDiffNavVisibility() {
if (this.diffNavContainer) {
this.diffNavContainer.style.display = this.showDiff ? "flex" : "none";
}
}
@@ -236,25 +339,47 @@ export class DocumentHistoryModal extends Modal {
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
});
contentEl
.createDiv("", (e) => {
e.createEl("label", {}, (label) => {
label.appendChild(
createEl("input", { type: "checkbox" }, (checkbox) => {
if (this.showDiff) {
checkbox.checked = true;
}
checkbox.addEventListener("input", (evt: any) => {
this.showDiff = checkbox.checked;
localStorage.setItem("ols-history-highlightdiff", this.showDiff == true ? "1" : "");
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
})
);
label.appendText("Highlight diff");
});
})
.addClass("op-info");
const diffOptionsRow = contentEl.createDiv("");
diffOptionsRow.addClass("op-info");
diffOptionsRow.addClass("diff-options-row");
diffOptionsRow.createEl("label", {}, (label) => {
label.appendChild(
createEl("input", { type: "checkbox" }, (checkbox) => {
if (this.showDiff) {
checkbox.checked = true;
}
checkbox.addEventListener("input", (evt: any) => {
this.showDiff = checkbox.checked;
localStorage.setItem("ols-history-highlightdiff", this.showDiff == true ? "1" : "");
this.updateDiffNavVisibility();
void scheduleOnceIfDuplicated("loadRevs", () => this.loadRevs());
});
})
);
label.appendText("Highlight diff");
});
// Diff navigation buttons
this.diffNavContainer = diffOptionsRow.createDiv("");
this.diffNavContainer.addClass("diff-nav");
this.diffNavContainer.style.display = this.showDiff ? "flex" : "none";
this.diffNavContainer.createEl("button", { text: "\u25B2 Prev" }, (e) => {
e.addClass("diff-nav-btn");
e.addEventListener("click", () => {
this.navigateDiff("prev");
});
});
this.diffNavContainer.createEl("button", { text: "\u25BC Next" }, (e) => {
e.addClass("diff-nav-btn");
e.addEventListener("click", () => {
this.navigateDiff("next");
});
});
this.diffNavIndicator = this.diffNavContainer.createEl("span", { text: "\u2014" });
this.diffNavIndicator.addClass("diff-nav-indicator");
this.info = contentEl.createDiv("");
this.info.addClass("op-info");
fireAndForget(async () => await this.loadFile(this.initialRev));

View File

@@ -1,7 +1,6 @@
import { App, Modal } from "../../../deps.ts";
import { DIFF_DELETE, DIFF_EQUAL, DIFF_INSERT } from "diff-match-patch";
import { CANCELLED, LEAVE_TO_SUBSEQUENT, type diff_result } from "../../../lib/src/common/types.ts";
import { escapeStringToHTML } from "../../../lib/src/string_and_binary/convert.ts";
import { delay } from "../../../lib/src/common/utils.ts";
import { eventHub } from "../../../common/events.ts";
import { globalSlipBoard } from "../../../lib/src/bureau/bureau.ts";
@@ -44,6 +43,25 @@ export class ConflictResolveModal extends Modal {
// sendValue("close-resolve-conflict:" + this.filename, false);
}
appendDiffFragment(container: HTMLDivElement, text: string, cls: string) {
const lines = text.split("\n");
lines.forEach((line, index) => {
const span = container.createSpan({ cls });
span.textContent = line;
if (index < lines.length - 1) {
container.createSpan({ cls: "ls-mark-cr" });
container.createEl("br");
}
});
}
appendVersionInfo(container: HTMLDivElement, cls: string, name: string, date: string) {
const line = container.createSpan({ cls });
line.createSpan({ text: name, cls: "conflict-dev-name" });
line.appendText(`: ${date}`);
container.createEl("br");
}
override onOpen() {
const { contentEl } = this;
// Send cancel signal for the previous merge dialogue
@@ -64,25 +82,21 @@ export class ConflictResolveModal extends Modal {
const div = contentEl.createDiv("");
div.addClass("op-scrollable");
div.addClass("ls-dialog");
let diff = "";
let diffLength = 0;
for (const v of this.result.diff) {
const x1 = v[0];
const x2 = v[1];
diffLength += x2.length;
if (diffLength > 100 * 1024) {
continue;
}
if (x1 == DIFF_DELETE) {
diff +=
"<span class='deleted'>" +
escapeStringToHTML(x2).replace(/\n/g, "<span class='ls-mark-cr'></span>\n") +
"</span>";
this.appendDiffFragment(div, x2, "deleted");
div.createEl("span", { text: x2, cls: "deleted normal conflict-dev-name" });
} else if (x1 == DIFF_EQUAL) {
diff +=
"<span class='normal'>" +
escapeStringToHTML(x2).replace(/\n/g, "<span class='ls-mark-cr'></span>\n") +
"</span>";
this.appendDiffFragment(div, x2, "normal");
} else if (x1 == DIFF_INSERT) {
diff +=
"<span class='added'>" +
escapeStringToHTML(x2).replace(/\n/g, "<span class='ls-mark-cr'></span>\n") +
"</span>";
this.appendDiffFragment(div, x2, "added");
}
}
@@ -92,8 +106,8 @@ export class ConflictResolveModal extends Modal {
new Date(this.result.left.mtime).toLocaleString() + (this.result.left.deleted ? " (Deleted)" : "");
const date2 =
new Date(this.result.right.mtime).toLocaleString() + (this.result.right.deleted ? " (Deleted)" : "");
div2.innerHTML = `<span class='deleted'><span class='conflict-dev-name'>${this.localName}</span>: ${date1}</span><br>
<span class='added'><span class='conflict-dev-name'>${this.remoteName}</span>: ${date2}</span><br>`;
this.appendVersionInfo(div2, "deleted", this.localName, date1);
this.appendVersionInfo(div2, "added", this.remoteName, date2);
contentEl.createEl("button", { text: `Use ${this.localName}` }, (e) =>
e.addEventListener("click", () => this.sendResponse(this.result.right.rev))
).style.marginRight = "4px";
@@ -108,11 +122,9 @@ export class ConflictResolveModal extends Modal {
contentEl.createEl("button", { text: !this.pluginPickMode ? "Not now" : "Cancel" }, (e) =>
e.addEventListener("click", () => this.sendResponse(CANCELLED))
).style.marginRight = "4px";
diff = diff.replace(/\n/g, "<br>");
if (diff.length > 100 * 1024) {
if (diffLength > 100 * 1024) {
div.empty();
div.innerText = "(Too large diff to display)";
} else {
div.innerHTML = diff;
}
}

View File

@@ -1,208 +0,0 @@
import { type ObsidianLiveSyncSettings, LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "../../lib/src/common/types.ts";
import { configURIBase } from "../../common/types.ts";
// import { PouchDB } from "../../lib/src/pouchdb/pouchdb-browser.js";
import { fireAndForget } from "../../lib/src/common/utils.ts";
import {
EVENT_REQUEST_COPY_SETUP_URI,
EVENT_REQUEST_OPEN_P2P_SETTINGS,
EVENT_REQUEST_OPEN_SETUP_URI,
EVENT_REQUEST_SHOW_SETUP_QR,
eventHub,
} from "../../common/events.ts";
import { $msg } from "../../lib/src/common/i18n.ts";
// import { performDoctorConsultation, RebuildOptions } from "@/lib/src/common/configForDoc.ts";
import type { LiveSyncCore } from "../../main.ts";
import {
encodeQR,
encodeSettingsToQRCodeData,
encodeSettingsToSetupURI,
OutputFormat,
} from "../../lib/src/API/processSetting.ts";
import { SetupManager, UserMode } from "./SetupManager.ts";
import { AbstractModule } from "../AbstractModule.ts";
export class ModuleSetupObsidian extends AbstractModule {
private _setupManager!: SetupManager;
private _everyOnload(): Promise<boolean> {
this._setupManager = this.core.getModule(SetupManager);
try {
this.registerObsidianProtocolHandler("setuplivesync", async (conf: any) => {
if (conf.settings) {
await this._setupManager.onUseSetupURI(
UserMode.Unknown,
`${configURIBase}${encodeURIComponent(conf.settings)}`
);
} else if (conf.settingsQR) {
await this._setupManager.decodeQR(conf.settingsQR);
}
});
} catch (e) {
this._log(
"Failed to register protocol handler. This feature may not work in some environments.",
LOG_LEVEL_NOTICE
);
this._log(e, LOG_LEVEL_VERBOSE);
}
this.addCommand({
id: "livesync-setting-qr",
name: "Show settings as a QR code",
callback: () => fireAndForget(this.encodeQR()),
});
this.addCommand({
id: "livesync-copysetupuri",
name: "Copy settings as a new setup URI",
callback: () => fireAndForget(this.command_copySetupURI()),
});
this.addCommand({
id: "livesync-copysetupuri-short",
name: "Copy settings as a new setup URI (With customization sync)",
callback: () => fireAndForget(this.command_copySetupURIWithSync()),
});
this.addCommand({
id: "livesync-copysetupurifull",
name: "Copy settings as a new setup URI (Full)",
callback: () => fireAndForget(this.command_copySetupURIFull()),
});
this.addCommand({
id: "livesync-opensetupuri",
name: "Use the copied setup URI (Formerly Open setup URI)",
callback: () => fireAndForget(this.command_openSetupURI()),
});
eventHub.onEvent(EVENT_REQUEST_OPEN_SETUP_URI, () => fireAndForget(() => this.command_openSetupURI()));
eventHub.onEvent(EVENT_REQUEST_COPY_SETUP_URI, () => fireAndForget(() => this.command_copySetupURI()));
eventHub.onEvent(EVENT_REQUEST_SHOW_SETUP_QR, () => fireAndForget(() => this.encodeQR()));
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P_SETTINGS, () =>
fireAndForget(() => {
return this._setupManager.onP2PManualSetup(UserMode.Update, this.settings, false);
})
);
return Promise.resolve(true);
}
async encodeQR() {
const settingString = encodeSettingsToQRCodeData(this.settings);
const codeSVG = encodeQR(settingString, OutputFormat.SVG);
if (codeSVG == "") {
return "";
}
const msg = $msg("Setup.QRCode", { qr_image: codeSVG });
await this.core.confirm.confirmWithMessage("Settings QR Code", msg, ["OK"], "OK");
return await Promise.resolve(codeSVG);
}
async askEncryptingPassphrase(): Promise<string | false> {
const encryptingPassphrase = await this.core.confirm.askString(
"Encrypt your settings",
"The passphrase to encrypt the setup URI",
"",
true
);
return encryptingPassphrase;
}
async command_copySetupURI(stripExtra = true) {
const encryptingPassphrase = await this.askEncryptingPassphrase();
if (encryptingPassphrase === false) return;
const encryptedURI = await encodeSettingsToSetupURI(
this.settings,
encryptingPassphrase,
[...((stripExtra ? ["pluginSyncExtendedSetting"] : []) as (keyof ObsidianLiveSyncSettings)[])],
true
);
if (await this.services.UI.promptCopyToClipboard("Setup URI", encryptedURI)) {
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
}
// await navigator.clipboard.writeText(encryptedURI);
}
async command_copySetupURIFull() {
const encryptingPassphrase = await this.askEncryptingPassphrase();
if (encryptingPassphrase === false) return;
const encryptedURI = await encodeSettingsToSetupURI(this.settings, encryptingPassphrase, [], false);
await navigator.clipboard.writeText(encryptedURI);
this._log("Setup URI copied to clipboard", LOG_LEVEL_NOTICE);
}
async command_copySetupURIWithSync() {
await this.command_copySetupURI(false);
}
async command_openSetupURI() {
await this._setupManager.onUseSetupURI(UserMode.Unknown);
}
// TODO: Where to implement these?
// async askSyncWithRemoteConfig(tryingSettings: ObsidianLiveSyncSettings): Promise<ObsidianLiveSyncSettings> {
// const buttons = {
// fetch: $msg("Setup.FetchRemoteConf.Buttons.Fetch"),
// no: $msg("Setup.FetchRemoteConf.Buttons.Skip"),
// } as const;
// const fetchRemoteConf = await this.core.confirm.askSelectStringDialogue(
// $msg("Setup.FetchRemoteConf.Message"),
// Object.values(buttons),
// { defaultAction: buttons.fetch, timeout: 0, title: $msg("Setup.FetchRemoteConf.Title") }
// );
// if (fetchRemoteConf == buttons.no) {
// return tryingSettings;
// }
// const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
// const remoteConfig = await this.services.tweakValue.fetchRemotePreferred(newSettings);
// if (remoteConfig) {
// this._log("Remote configuration found.", LOG_LEVEL_NOTICE);
// const resultSettings = {
// ...DEFAULT_SETTINGS,
// ...tryingSettings,
// ...remoteConfig,
// } satisfies ObsidianLiveSyncSettings;
// return resultSettings;
// } else {
// this._log("Remote configuration not applied.", LOG_LEVEL_NOTICE);
// return {
// ...DEFAULT_SETTINGS,
// ...tryingSettings,
// } satisfies ObsidianLiveSyncSettings;
// }
// }
// async askPerformDoctor(
// tryingSettings: ObsidianLiveSyncSettings
// ): Promise<{ settings: ObsidianLiveSyncSettings; shouldRebuild: boolean; isModified: boolean }> {
// const buttons = {
// yes: $msg("Setup.Doctor.Buttons.Yes"),
// no: $msg("Setup.Doctor.Buttons.No"),
// } as const;
// const performDoctor = await this.core.confirm.askSelectStringDialogue(
// $msg("Setup.Doctor.Message"),
// Object.values(buttons),
// { defaultAction: buttons.yes, timeout: 0, title: $msg("Setup.Doctor.Title") }
// );
// if (performDoctor == buttons.no) {
// return { settings: tryingSettings, shouldRebuild: false, isModified: false };
// }
// const newSettings = JSON.parse(JSON.stringify(tryingSettings)) as ObsidianLiveSyncSettings;
// const { settings, shouldRebuild, isModified } = await performDoctorConsultation(this.core, newSettings, {
// localRebuild: RebuildOptions.AutomaticAcceptable, // Because we are in the setup wizard, we can skip the confirmation.
// remoteRebuild: RebuildOptions.SkipEvenIfRequired,
// activateReason: "New settings from URI",
// });
// if (isModified) {
// this._log("Doctor has fixed some issues!", LOG_LEVEL_NOTICE);
// return {
// settings,
// shouldRebuild,
// isModified,
// };
// } else {
// this._log("Doctor detected no issues!", LOG_LEVEL_NOTICE);
// return { settings: tryingSettings, shouldRebuild: false, isModified: false };
// }
// }
override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.appLifecycle.onLoaded.addHandler(this._everyOnload.bind(this));
}
}

View File

@@ -43,10 +43,13 @@ export function paneChangeLog(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElem
// tmpDiv.addClass("sls-header-button");
tmpDiv.addClass("op-warn-info");
tmpDiv.innerHTML = `<p>${$msg("obsidianLiveSyncSettingTab.msgNewVersionNote")}</p><button>${$msg("obsidianLiveSyncSettingTab.optionOkReadEverything")}</button>`;
tmpDiv.createEl("p", { text: $msg("obsidianLiveSyncSettingTab.msgNewVersionNote") });
const readEverythingButton = tmpDiv.createEl("button", {
text: $msg("obsidianLiveSyncSettingTab.optionOkReadEverything"),
});
if (lastVersion > (this.editingSettings?.lastReadUpdates || 0)) {
const informationButtonDiv = informationDivEl.appendChild(tmpDiv);
informationButtonDiv.querySelector("button")?.addEventListener("click", () => {
readEverythingButton.addEventListener("click", () => {
fireAndForget(async () => {
this.editingSettings.lastReadUpdates = lastVersion;
await this.saveAllDirtySettings();

View File

@@ -137,6 +137,23 @@ export function paneHatch(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement,
pluginConfig.accessKey = REDACTED;
pluginConfig.secretKey = REDACTED;
const redact = (source: string) => `${REDACTED}(${source.length} letters)`;
const toSchemeOnly = (uri: string) => {
try {
return `${new URL(uri).protocol}//`;
} catch {
const matched = uri.match(/^[A-Za-z][A-Za-z0-9+.-]*:\/\//);
return matched?.[0] ?? REDACTED;
}
};
pluginConfig.remoteConfigurations = Object.fromEntries(
Object.entries(pluginConfig.remoteConfigurations || {}).map(([id, config]) => [
id,
{
...config,
uri: toSchemeOnly(config.uri),
},
])
);
pluginConfig.region = redact(pluginConfig.region);
pluginConfig.bucket = redact(pluginConfig.bucket);
pluginConfig.pluginSyncExtendedSetting = {};

View File

@@ -32,6 +32,7 @@ import SetupRemote from "../SetupWizard/dialogs/SetupRemote.svelte";
import SetupRemoteCouchDB from "../SetupWizard/dialogs/SetupRemoteCouchDB.svelte";
import SetupRemoteBucket from "../SetupWizard/dialogs/SetupRemoteBucket.svelte";
import SetupRemoteP2P from "../SetupWizard/dialogs/SetupRemoteP2P.svelte";
import { syncActivatedRemoteSettings } from "./remoteConfigBuffer.ts";
function getSettingsFromEditingSettings(editingSettings: AllSettings): ObsidianLiveSyncSettings {
const workObj = { ...editingSettings } as ObsidianLiveSyncSettings;
@@ -183,6 +184,11 @@ export function paneRemoteConfig(
}, true);
if (synchroniseActiveRemote) {
// Keep both buffers aligned with the newly activated remote before saving any remaining dirty keys.
syncActivatedRemoteSettings(this.editingSettings, this.core.settings);
if (this.initialSettings) {
syncActivatedRemoteSettings(this.initialSettings, this.core.settings);
}
await this.saveAllDirtySettings();
}
@@ -254,7 +260,7 @@ export function paneRemoteConfig(
id,
name: name.trim() || "New Remote",
uri: serializeRemoteConfiguration(nextSettings),
isEncrypted: nextSettings.encrypt,
isEncrypted: false,
};
this.editingSettings.remoteConfigurations = configs;
if (!this.editingSettings.activeConfigurationId) {
@@ -332,7 +338,16 @@ export function paneRemoteConfig(
row.addButton((btn) =>
setEmojiButton(btn, "🔧", "Configure").onClick(async () => {
const parsed = ConnectionStringParser.parse(config.uri);
let parsed: RemoteConfigurationResult;
try {
parsed = ConnectionStringParser.parse(config.uri);
} catch (ex) {
this.services.API.addLog(
`Failed to parse remote configuration '${config.id}' for editing: ${ex}`,
LOG_LEVEL_NOTICE
);
return;
}
const workSettings = createBaseRemoteSettings();
if (parsed.type === "couchdb") {
workSettings.remoteType = REMOTE_COUCHDB;
@@ -352,7 +367,7 @@ export function paneRemoteConfig(
nextConfigs[config.id] = {
...config,
uri: serializeRemoteConfiguration(nextSettings),
isEncrypted: nextSettings.encrypt,
isEncrypted: false,
};
this.editingSettings.remoteConfigurations = nextConfigs;
await persistRemoteConfigurations(config.id === this.editingSettings.activeConfigurationId);
@@ -430,6 +445,38 @@ export function paneRemoteConfig(
});
})
.addSeparator()
.addItem((item) => {
item.setTitle("📡 Fetch remote settings").onClick(async () => {
let parsed: RemoteConfigurationResult;
try {
parsed = ConnectionStringParser.parse(config.uri);
} catch (ex) {
this.services.API.addLog(
`Failed to parse remote configuration '${config.id}': ${ex}`,
LOG_LEVEL_NOTICE
);
return;
}
const workSettings = createBaseRemoteSettings();
if (parsed.type === "couchdb") {
workSettings.remoteType = REMOTE_COUCHDB;
} else if (parsed.type === "s3") {
workSettings.remoteType = REMOTE_MINIO;
} else {
workSettings.remoteType = REMOTE_P2P;
}
Object.assign(workSettings, parsed.settings);
const newTweaks =
await this.services.tweakValue.checkAndAskUseRemoteConfiguration(
workSettings
);
if (newTweaks.result !== false) {
this.editingSettings = { ...this.editingSettings, ...newTweaks.result };
this.requestUpdate();
}
});
})
.addSeparator()
.addItem((item) => {
item.setTitle("🗑 Delete").onClick(async () => {
const confirmed = await this.services.UI.confirm.askYesNoDialog(

View File

@@ -121,13 +121,13 @@ export function paneSetup(
const repo = "vrtmrz/obsidian-livesync";
const topPath = $msg("obsidianLiveSyncSettingTab.linkTroubleshooting");
const rawRepoURI = `https://raw.githubusercontent.com/${repo}/main`;
this.createEl(
paneEl,
"div",
"",
(el) =>
(el.innerHTML = `<a href='https://github.com/${repo}/blob/main${topPath}' target="_blank">${$msg("obsidianLiveSyncSettingTab.linkOpenInBrowser")}</a>`)
);
this.createEl(paneEl, "div", "", (el) => {
el.createEl("a", { text: $msg("obsidianLiveSyncSettingTab.linkOpenInBrowser") }, (anchor) => {
anchor.href = `https://github.com/${repo}/blob/main${topPath}`;
anchor.target = "_blank";
anchor.rel = "noopener";
});
});
const troubleShootEl = this.createEl(paneEl, "div", {
text: "",
cls: "sls-troubleshoot-preview",

View File

@@ -0,0 +1,17 @@
import { pickBucketSyncSettings, pickCouchDBSyncSettings, pickP2PSyncSettings } from "@lib/common/utils.ts";
import type { ObsidianLiveSyncSettings } from "@lib/common/types.ts";
// Keep the setting dialogue buffer aligned with the current core settings before persisting other dirty keys.
// This also clears stale dirty values left from editing a different remote type before switching active remotes.
export function syncActivatedRemoteSettings(
target: Partial<ObsidianLiveSyncSettings>,
source: ObsidianLiveSyncSettings
): void {
Object.assign(target, {
remoteType: source.remoteType,
activeConfigurationId: source.activeConfigurationId,
...pickBucketSyncSettings(source),
...pickCouchDBSyncSettings(source),
...pickP2PSyncSettings(source),
});
}

View File

@@ -0,0 +1,83 @@
import { describe, expect, it } from "vitest";
import { DEFAULT_SETTINGS, REMOTE_COUCHDB, REMOTE_MINIO } from "../../../lib/src/common/types";
import { syncActivatedRemoteSettings } from "./remoteConfigBuffer";
describe("syncActivatedRemoteSettings", () => {
it("should copy active MinIO credentials into the editing buffer", () => {
const target = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_COUCHDB,
activeConfigurationId: "old-remote",
accessKey: "",
secretKey: "",
endpoint: "",
bucket: "",
region: "",
encrypt: true,
};
const source = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
bucketPrefix: "folder/",
useCustomRequestHandler: false,
forcePathStyle: true,
bucketCustomHeaders: "",
};
syncActivatedRemoteSettings(target, source);
expect(target.remoteType).toBe(REMOTE_MINIO);
expect(target.activeConfigurationId).toBe("remote-s3");
expect(target.accessKey).toBe("access");
expect(target.secretKey).toBe("secret");
expect(target.endpoint).toBe("https://minio.example.test");
expect(target.bucket).toBe("vault");
expect(target.region).toBe("sz-hq");
expect(target.bucketPrefix).toBe("folder/");
expect(target.encrypt).toBe(true);
});
it("should clear stale dirty values from a different remote type", () => {
const target = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
couchDB_URI: "https://edited.invalid",
couchDB_USER: "edited-user",
couchDB_PASSWORD: "edited-pass",
couchDB_DBNAME: "edited-db",
};
const source = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
couchDB_URI: "https://current.example.test",
couchDB_USER: "current-user",
couchDB_PASSWORD: "current-pass",
couchDB_DBNAME: "current-db",
};
syncActivatedRemoteSettings(target, source);
expect(target.couchDB_URI).toBe("https://current.example.test");
expect(target.couchDB_USER).toBe("current-user");
expect(target.couchDB_PASSWORD).toBe("current-pass");
expect(target.couchDB_DBNAME).toBe("current-db");
});
});

View File

@@ -13,7 +13,7 @@ export const checkConfig = async (
Logger($msg("obsidianLiveSyncSettingTab.logCheckingDbConfig"), LOG_LEVEL_INFO);
let isSuccessful = true;
const emptyDiv = createDiv();
emptyDiv.innerHTML = "<span></span>";
emptyDiv.createSpan();
checkResultDiv?.replaceChildren(...[emptyDiv]);
const addResult = (msg: string, classes?: string[]) => {
const tmpDiv = createDiv();
@@ -21,7 +21,7 @@ export const checkConfig = async (
if (classes) {
tmpDiv.addClasses(classes);
}
tmpDiv.innerHTML = `${msg}`;
tmpDiv.textContent = msg;
checkResultDiv?.appendChild(tmpDiv);
};
try {
@@ -47,9 +47,10 @@ export const checkConfig = async (
if (!checkResultDiv) return;
const tmpDiv = createDiv();
tmpDiv.addClass("ob-btn-config-fix");
tmpDiv.innerHTML = `<label>${title}</label><button>${$msg("obsidianLiveSyncSettingTab.btnFix")}</button>`;
tmpDiv.createEl("label", { text: title });
const fixButton = tmpDiv.createEl("button", { text: $msg("obsidianLiveSyncSettingTab.btnFix") });
const x = checkResultDiv.appendChild(tmpDiv);
x.querySelector("button")?.addEventListener("click", () => {
fixButton.addEventListener("click", () => {
fireAndForget(async () => {
Logger($msg("obsidianLiveSyncSettingTab.logCouchDbConfigSet", { title, key, value }));
const res = await requestToCouchDBWithCredentials(

View File

@@ -4,10 +4,10 @@
import Decision from "@/lib/src/UI/components/Decision.svelte";
import Instruction from "@/lib/src/UI/components/Instruction.svelte";
import UserDecisions from "@/lib/src/UI/components/UserDecisions.svelte";
const TYPE_CLOSE = "close";
const TYPE_CLOSE = "close";
type ResultType = typeof TYPE_CLOSE;
type Props = {
setResult: (result: ResultType) => void;
setResult: (_result: ResultType) => void;
};
const { setResult }: Props = $props();
</script>

View File

@@ -61,10 +61,12 @@ export class ModuleLiveSyncMain extends AbstractModule {
eventHub.onEvent(EVENT_SETTING_SAVED, (settings: ObsidianLiveSyncSettings) => {
fireAndForget(async () => {
try {
await this.core.services.control.applySettings();
const lang = this.core.services.setting.currentSettings()?.displayLanguage ?? undefined;
const lang = this.core.services.setting.currentSettings()?.displayLanguage;
if (lang !== undefined) {
setLang(this.core.services.setting.currentSettings()?.displayLanguage);
setLang(lang);
}
if (this.core.services.database.isDatabaseReady()) {
await this.core.services.control.applySettings();
}
eventHub.emitEvent(EVENT_REQUEST_RELOAD_SETTING_TAB);
} catch (e) {

View File

@@ -3,11 +3,22 @@ import { createServiceFeature } from "@lib/interfaces/ServiceModule";
import { SUPPORTED_I18N_LANGS, type I18N_LANGS } from "@lib/common/rosetta";
import { $msg, setLang } from "@lib/common/i18n";
function tryGetLanguage() {
try {
// Note: getLanguage() requires Obsidian 1.8.7+ (released 18 Feb 2025). We want to fall back on earlier versions, so we catch the error here.
// eslint-disable-next-line obsidianmd/no-unsupported-api
return getLanguage();
} catch (e) {
console.error("Failed to get Obsidian language, defaulting to 'def'", e);
return "en";
}
}
export const enableI18nFeature = createServiceFeature(async ({ services: { setting, API } }) => {
let isChanged = false;
const settings = setting.currentSettings();
if (settings.displayLanguage == "") {
const obsidianLanguage = getLanguage();
const obsidianLanguage = tryGetLanguage();
if (
SUPPORTED_I18N_LANGS.indexOf(obsidianLanguage) !== -1 && // Check if the language is supported
obsidianLanguage != settings.displayLanguage // Check if the language is different from the current setting

View File

@@ -5,6 +5,7 @@ import { type UseP2PReplicatorResult } from "@/lib/src/replication/trystero/UseP
import { P2PLogCollector } from "@/lib/src/replication/trystero/P2PLogCollector";
import { P2PReplicatorPaneView, VIEW_TYPE_P2P } from "@/features/P2PSync/P2PReplicator/P2PReplicatorPaneView";
import type { LiveSyncCore } from "@/main";
import type { WorkspaceLeaf } from "@/deps";
/**
* ServiceFeature: P2P Replicator lifecycle management.
@@ -43,7 +44,7 @@ export function useP2PReplicatorUI(
// Register view, commands and ribbon if a view factory is provided
const viewType = VIEW_TYPE_P2P;
const factory = (leaf: any) => {
const factory = (leaf: WorkspaceLeaf) => {
return new P2PReplicatorPaneView(leaf, core, {
replicator: getReplicator(),
p2pLogCollector,

View File

@@ -7,7 +7,7 @@ import type { TFile, App, TFolder } from "obsidian";
* Vault adapter implementation for Obsidian
*/
export class ObsidianVaultAdapter implements IVaultAdapter<TFile> {
constructor(private app: App) {}
constructor(private app: App) { }
async read(file: TFile): Promise<string> {
return await this.app.vault.read(file);
@@ -38,10 +38,20 @@ export class ObsidianVaultAdapter implements IVaultAdapter<TFile> {
}
async delete(file: TFile | TFolder, force = false): Promise<void> {
// if ("trashFile" in this.app.fileManager) {
// // eslint-disable-next-line obsidianmd/no-unsupported-api
// return await this.app.fileManager.trashFile(file);
// }
// TODO: needs fixing
return await this.app.vault.delete(file, force);
}
async trash(file: TFile | TFolder, force = false): Promise<void> {
// if ("trashFile" in this.app.fileManager) {
// // eslint-disable-next-line obsidianmd/no-unsupported-api
// return await this.app.fileManager.trashFile(file);
// }
// TODO: needs fixing
return await this.app.vault.trash(file, force);
}

View File

@@ -73,6 +73,12 @@
overflow-y: scroll;
}
.sls-remote-list .setting-item-description {
white-space: normal;
overflow-wrap: anywhere;
word-break: break-word;
}
.sls-plugins-tbl {
border: 1px solid var(--background-modifier-border);
width: 100%;
@@ -478,4 +484,45 @@ div.workspace-leaf-content[data-type=bases] .livesync-status {
white-space: pre-wrap;
word-break: break-all;
}
/* Diff navigation */
.diff-options-row {
display: flex;
align-items: center;
gap: 8px;
}
.diff-nav {
display: flex;
align-items: center;
gap: 4px;
margin-left: auto;
}
.diff-nav-btn {
padding: 2px 8px;
font-size: 0.85em;
cursor: pointer;
border: 1px solid var(--background-modifier-border);
border-radius: 4px;
background-color: var(--background-secondary);
color: var(--text-normal);
}
.diff-nav-btn:hover {
background-color: var(--background-modifier-hover);
}
.diff-nav-indicator {
font-size: 0.85em;
color: var(--text-muted);
min-width: 3em;
text-align: center;
}
.diff-focused {
outline: 2px solid var(--interactive-accent);
outline-offset: 1px;
border-radius: 2px;
}

View File

@@ -3,32 +3,88 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 received a lot of updates, thankfully, compatibility has been kept and we do not need breaking changes! In other words, once things are stable enough, the next version will be v1.0.0. At least, that is my hope.
## Unreleased 2
## Unreleased
3rd April, 2026
### Improved
As this commit touches rather fragile matters, I shall add a note here.
As you know, untagged updates are not thoroughly tested. Please be careful if you use your own build. In most cases, I check that the warnings have disappeared, that the code compiles without any warnings, and that it runs on the desktop.
- P2P synchronisation has been made more robust
The foundation for P2P synchronisation has been rewritten, and unit tests have been added. The foundation is now separated into a transport layer, a signalling-and-connection layer, and an RPC layer, and each layer has been unit-tested. As a result, P2P synchronisation now uses a robust shim that performs RPC-ed PouchDB synchronisation, in contrast to the previous implementation.
This P2P synchronisation is not compatible with previous versions in terms of connectivity. All devices must be updated.
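As a rough illustration of what such a layered split can look like, here is a minimal TypeScript sketch. All interface and class names below are hypothetical and do not come from the plug-in's source; only the separation into transport, connection, and RPC layers mirrors the note above.

```typescript
// Hypothetical layering sketch: transport, signalling/connection, and RPC are
// separate, individually testable pieces. Names are illustrative only.
type MessageHandler = (peerId: string, payload: Uint8Array) => void;

interface Transport {
    send(peerId: string, payload: Uint8Array): Promise<void>;
    onMessage(handler: MessageHandler): void;
}

interface ConnectionLayer {
    // The signalling layer discovers peers in a room and yields a Transport.
    connect(roomId: string): Promise<Transport>;
}

class RpcLayer {
    constructor(
        private transport: Transport,
        private peerId: string
    ) {}
    // Replication is driven as request/response calls over the transport,
    // instead of streaming raw database internals across the wire.
    async call(method: string, params: unknown): Promise<void> {
        const body = new TextEncoder().encode(JSON.stringify({ method, params }));
        await this.transport.send(this.peerId, body);
        // A real implementation would await and decode the peer's response here.
    }
}

async function replicateWith(connection: ConnectionLayer, roomId: string, peerId: string) {
    const transport = await connection.connect(roomId);
    const rpc = new RpcLayer(transport, peerId);
    await rpc.call("replicate", { live: false, retry: true });
}
```

Keeping the layers behind small interfaces like these is what makes it possible to unit-test each one in isolation, as the note describes.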
### Fixed
- No unexpected error (about a replicator) during the early stage of initialisation.
- Baffling errors no longer occur when a settings update is triggered during the early stage of initialisation.
## 0.25.60
29th April, 2026
### Fixed
- Now larger settings can be exported and imported via QR code without issues. (#595)
- When the settings data exceeds the QR code capacity, it is now split into multiple QR codes.
- These QR codes are reassembled by the aggregator page, which collects the split data and reconstructs the original settings (a rough sketch of the split-and-reassemble idea follows this list).
- The aggregator page is available at `https://vrtmrz.github.io/obsidian-livesync/aggregator.html`, and this file is also included in the repository.
- We will not send the settings data to any server. The QR code data is generated and processed entirely on the client side, ensuring that your settings remain private and secure. HOWEVER, please be mindful of your network environment.
- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.
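The split-and-reassemble idea mentioned above can be sketched roughly as follows. This is not the plug-in's actual encoding; the chunk size and the `index/total` header format are assumptions for illustration.

```typescript
// Illustrative sketch only: split an oversized settings payload into numbered
// chunks that each fit in one QR code, then reassemble them in order.
const MAX_QR_PAYLOAD = 1000; // characters per QR code (assumed limit)

function splitForQR(settingsData: string): string[] {
    const chunks: string[] = [];
    for (let i = 0; i < settingsData.length; i += MAX_QR_PAYLOAD) {
        chunks.push(settingsData.slice(i, i + MAX_QR_PAYLOAD));
    }
    // Prefix each chunk with its position so the aggregator can reorder them.
    return chunks.map((chunk, index) => `${index + 1}/${chunks.length}:${chunk}`);
}

function reassembleFromQR(scanned: string[]): string {
    return scanned
        .map((entry) => {
            const [header, ...rest] = entry.split(":");
            const [index] = header.split("/").map(Number);
            return { index, body: rest.join(":") };
        })
        .sort((a, b) => a.index - b.index)
        .map((part) => part.body)
        .join("");
}
```

Everything here runs on the client side, matching the note above that no settings data is sent to a server.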
### Fixed (CLI)
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- Improved the clarity of CLI command logs by including the total count of processed items.
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.
## 0.25.59
### Fixed
- The setup wizard no longer silently drops the username and password. (#865)
- Thank you so much, @koteitan!
- Setup URI is now correctly imported (#859).
- Again, thank you so much, @koteitan!
### Improved
- A French translation has been added by @foXaCe! Thank you so much!
## 0.25.58
### Fixed
- Credentials are no longer broken during object storage configuration (related: #852).
- Fixed a worker-side recursion issue that could raise `Maximum call stack size exceeded` during chunk splitting (related: #855).
- Improved background worker crash cleanup so pending split/encryption tasks are released cleanly instead of being left in a waiting state (related: #855).
- On start-up, the selected remote configuration is now applied to runtime connection fields as well, reducing intermittent authentication failures caused by stale runtime settings (related: #855).
- Issue report generation now redacts `remoteConfigurations` connection strings and keeps only the scheme (e.g. `sls+https://`), so credentials are not exposed in reports.
- Hidden file JSON conflicts no longer keep re-opening and dismissing the merge dialogue before we can act, which fixes persistent unresolvable `data.json` conflicts in plug-in settings sync (related: #850).
## 0.25.57
9th April, 2026
- Packing a batch during the journal sync now continues even if the batch contains no items to upload.
- No unexpected error (about a replicator) during the early stage of initialisation.
- Error messages are now kept hidden if "Show status inside the editor" is disabled (related: #829).
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
- Each device's local deduplication caches (`knownIDs`, `sentIDs`, `receivedFiles`, `sentFiles`) now track the remote journal epoch (derived from the encryption parameters stored on the remote).
- When the epoch changes, the plugin verifies whether the device's last uploaded file still exists on the remote. If the file is gone, it confirms a remote wipe and automatically clears the stale caches. If the file is still present (e.g. a protocol upgrade without a wipe), the caches are preserved, and only the epoch is updated. This means normal upgrades never cause unnecessary re-processing.
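The decision flow described in the two points above reads roughly like the sketch below. All identifiers (`DedupeCaches`, `getRemoteEpoch`, `remoteFileExists`) are placeholders, not the plug-in's actual API.

```typescript
// Hypothetical sketch of the epoch check. Only the decision flow mirrors the
// changelog description; every identifier here is a placeholder.
interface DedupeCaches {
    knownIDs: Set<string>;
    sentIDs: Set<string>;
    receivedFiles: Set<string>;
    sentFiles: Set<string>;
    epoch: string;
}

async function reconcileEpoch(
    caches: DedupeCaches,
    getRemoteEpoch: () => Promise<string>,
    remoteFileExists: (path: string) => Promise<boolean>,
    lastUploadedFile: string | undefined
): Promise<void> {
    const remoteEpoch = await getRemoteEpoch();
    if (remoteEpoch === caches.epoch) return; // nothing changed; keep the caches

    // The epoch changed: distinguish "remote was wiped" from "protocol upgrade only".
    const stillThere = lastUploadedFile !== undefined && (await remoteFileExists(lastUploadedFile));
    if (!stillThere) {
        // Remote wipe confirmed: stale caches would suppress uploads that are now needed.
        caches.knownIDs.clear();
        caches.sentIDs.clear();
        caches.receivedFiles.clear();
        caches.sentFiles.clear();
    }
    // In both cases, adopt the new epoch so the check is not repeated needlessly.
    caches.epoch = remoteEpoch;
}
```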
### Translations
- Russian translation has been added! Thank you so much for the contribution, @vipka1n! (#845)
### New features
- Now we can configure multiple Remote Databases of the same type, e.g., multiple CouchDBs or S3 remotes.
- A user interface for managing multiple remote databases has been added to the settings dialogue. I think no explanation is needed for the UI, but please let me know if you have any questions.
- We can switch between multiple Remote Databases in the settings dialogue.
## Unreleased
2nd April, 2026
### CLI
#### Fixed
- Replication progress is now correctly saved and restored in the CLI.
- Replication progress is now correctly saved and restored in the CLI (related: #846).
## ~~0.25.55~~ 0.25.56

View File

@@ -1,4 +1,5 @@
{
"0.25.60": "1.7.2",
"1.0.1": "0.9.12",
"1.0.0": "0.9.7"
}

View File

@@ -2,7 +2,8 @@ import { defineConfig, mergeConfig } from "vitest/config";
import { playwright } from "@vitest/browser-playwright";
import viteConfig from "./vitest.config.common";
import path from "path";
import dotenv from "dotenv";
import { existsSync, readFileSync } from "node:fs";
import { parseEnv } from "node:util";
import { grantClipboardPermissions, writeHandoffFile, readHandoffFile } from "./test/lib/commands";
// P2P test environment variables
@@ -22,8 +23,9 @@ import { grantClipboardPermissions, writeHandoffFile, readHandoffFile } from "./
// General test options (also read from env):
// ENABLE_DEBUGGER - Set to "true" to attach a debugger and pause before tests
// ENABLE_UI - Set to "true" to open a visible browser window during tests
const defEnv = dotenv.config({ path: ".env" }).parsed;
const testEnv = dotenv.config({ path: ".test.env" }).parsed;
const loadEnvFile = (path: string) => (existsSync(path) ? parseEnv(readFileSync(path, "utf-8")) : undefined);
const defEnv = loadEnvFile(".env");
const testEnv = loadEnvFile(".test.env");
// Merge: dotenv files < process.env (so shell-injected vars like P2P_TEST_* take precedence)
const p2pEnv: Record<string, string> = {};
if (process.env.P2P_TEST_ROOM_ID) p2pEnv.P2P_TEST_ROOM_ID = process.env.P2P_TEST_ROOM_ID;

vitest.config.rpc-unit.ts Normal file
View File

@@ -0,0 +1,30 @@
import { defineConfig, mergeConfig } from "vitest/config";
import viteConfig from "./vitest.config.common";
export default mergeConfig(
viteConfig,
defineConfig({
resolve: {
alias: {
obsidian: "",
},
},
test: {
name: "rpc-unit-tests",
include: ["src/lib/src/rpc/**/*.unit.spec.ts"],
exclude: ["test/**"],
coverage: {
include: ["src/lib/src/rpc/**/*.ts"],
exclude: ["**/*.unit.spec.ts", "**/index.ts"],
provider: "v8",
reporter: ["text", "json", "html", ["text", { file: "coverage-rpc-text.txt" }]],
thresholds: {
lines: 90,
functions: 90,
branches: 75,
statements: 90,
},
},
},
})
);

View File

@@ -2,10 +2,13 @@ import { defineConfig, mergeConfig } from "vitest/config";
import { playwright } from "@vitest/browser-playwright";
import viteConfig from "./vitest.config.common";
import path from "path";
import dotenv from "dotenv";
import { existsSync, readFileSync } from "node:fs";
import { parseEnv } from "node:util";
import { grantClipboardPermissions, openWebPeer, closeWebPeer, acceptWebPeer } from "./test/lib/commands";
const defEnv = dotenv.config({ path: ".env" }).parsed;
const testEnv = dotenv.config({ path: ".test.env" }).parsed;
const loadEnvFile = (path: string) => (existsSync(path) ? parseEnv(readFileSync(path, "utf-8")) : undefined);
const defEnv = loadEnvFile(".env");
const testEnv = loadEnvFile(".test.env");
const env = Object.assign({}, defEnv, testEnv);
const debuggerEnabled = env?.ENABLE_DEBUGGER === "true";
const enableUI = env?.ENABLE_UI === "true";