Compare commits


29 Commits

Author SHA1 Message Date
vorotamoroz
a6be20695a feat: use new p2p-rpc wrapper 2026-05-11 03:49:35 +01:00
vorotamoroz
2afe12ad2d fix pattern 2026-05-07 11:28:01 +01:00
vorotamoroz
4a9d6c1349 Add ci 2026-05-07 11:23:51 +01:00
vorotamoroz
279fc8876e feat(tests): enhance push/pull test with Docker integration and improved environment variable handling
style(test): format comment
2026-05-07 11:22:56 +01:00
vorotamoroz
cc3d30dbcf feat(tests): add Deno-based tests for checking CLI functionality across platforms using the same codebase. 2026-05-07 11:06:12 +01:00
vorotamoroz
39e82cc8a1 Fixed: timing issue during tests 2026-05-06 21:56:13 +09:00
vorotamoroz
fa7ef62302 Fix: adjusting help
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:42:54 +09:00
vorotamoroz
81d8224330 bump
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:39:48 +09:00
vorotamoroz
cc466a4b3c ### Fixed
- Now larger settings can be exported and imported via QR code without issues. (#595)

- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:37:44 +09:00
vorotamoroz
ceebca7de9 Merge pull request #862 from fabiomanz/main
chore: remove obsolete `version` attribute from docker-compose.yml
2026-04-29 17:30:35 +09:00
Fabio
c2f696d0a4 chore: attribute version is obsolete 2026-04-29 07:07:45 +00:00
vorotamoroz
1aa7c45794 Fix the readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:55:34 +09:00
vorotamoroz
faefa80cbd Fix again 2026-04-29 12:40:40 +09:00
vorotamoroz
3737eacffd Fix readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:39:42 +09:00
vorotamoroz
4c0af0b608 Fixed(cli):
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:22:00 +09:00
vorotamoroz
bb69eb13e7 bump 2026-04-27 11:15:07 +09:00
vorotamoroz
7c9db6376f Fixed:
- The setup wizard no longer drops the username and password silently. (#865)
- Setup URI is now correctly imported (#859).
- A French translation has been added.
2026-04-27 11:14:06 +09:00
vorotamoroz
4c04e4e676 Merge pull request #863 from koteitan/fix/859-strip-trailing-slash-from-uri
fix: strip trailing slash from couchDB_URI to avoid double-slash 401
2026-04-27 11:08:11 +09:00
koteitan
14ec35b257 fix: strip trailing slash from couchDB_URI to avoid double-slash 401
When couchDB_URI ends with a trailing slash (e.g. https://host/), the
database name concatenation produces a double-slash path
(https://host//obsidiannotes), which causes CouchDB to reject requests
with 401 "Name or password is incorrect".

Strip trailing slashes from couchDB_URI / baseUri at the path
concatenation sites in:
- src/common/utils.ts (_requestToCouchDBFetch, _requestToCouchDB)
- src/features/LocalDatabaseMainte/CmdLocalDatabaseMainte.ts

The companion fix for the replication path is in the livesync-commonlib
submodule.

Ref: #859
2026-04-27 00:12:57 +09:00
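
The shape of that fix, as a minimal TypeScript sketch (the helper name is illustrative, not the actual identifier in `src/common/utils.ts`):

```typescript
// Illustrative sketch only: normalise the base URI before appending the
// database name, so "https://host/" and "https://host" behave identically.
function joinDatabasePath(baseUri: string, dbName: string): string {
    // Strip trailing slashes; otherwise "https://host/" + "/" + dbName
    // yields "https://host//dbname", which CouchDB rejects with 401.
    const trimmed = baseUri.replace(/\/+$/, "");
    return `${trimmed}/${dbName}`;
}
```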
vorotamoroz
b609e4973c Merge remote-tracking branch 'refs/remotes/origin/main' 2026-04-25 20:37:08 +09:00
vorotamoroz
354f0be9a3 bump
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 20:16:50 +09:00
vorotamoroz
16804ed34c Merge pull request #842 from kdavh/patch-1
Update README.md, fix webpeer link
2026-04-25 19:07:56 +09:00
vorotamoroz
31bd270869 Fixed: Hidden file JSON conflicts no longer keep re-opening and dismissing the merge dialogue before we can act, which fixes persistent unresolvable data.json conflicts in plug-in settings sync (related: #850).
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 17:22:25 +09:00
vorotamoroz
b5d054f259 Fixed: Issue report generation now redacts remoteConfigurations connection strings and keeps only the scheme (e.g. sls+https://), so credentials are not exposed in reports.
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 17:09:43 +09:00
vorotamoroz
1ef2955d00 - Fixed a worker-side recursion issue that could raise Maximum call stack size exceeded during chunk splitting (related: #855).
- Improved background worker crash cleanup so pending split/encryption tasks are released cleanly instead of being left in a waiting state (related: #855).
- On start-up, the selected remote configuration is now applied to runtime connection fields as well, reducing intermittent authentication failures caused by stale runtime settings (related: #855).

Co-authored-by: Copilot <copilot@github.com>
2026-04-25 16:51:37 +09:00
vorotamoroz
6ef56063b3 Fixed: Credentials are no longer broken during object storage configuration (related: #852).
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 15:03:38 +09:00
vorotamoroz
a912585800 Improve issue template
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 14:01:18 +09:00
vorotamoroz
7a863625bc bump 2026-04-09 04:32:34 +01:00
kdavh
12f04f6cf7 Update README.md, fix webpeer link 2026-03-28 12:47:28 -04:00
56 changed files with 4288 additions and 310 deletions


@@ -2,77 +2,104 @@
 name: Issue report
 about: Create a report to help us improve
 title: ''
-labels: ''
+labels: 'bug'
 assignees: ''
 ---
 Thank you for taking the time to report this issue!
-To improve the process, I would like to ask you to let me know the information in advance.
-All instructions and examples, and empty entries can be deleted.
-Just for your information, a [filled example](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Issue+example) is also written.
+Before filling in this form, please read: [How to report an issue](../docs/to_issue_reporting.md).
+Issues with sufficient information will be prioritised.
 
-## Abstract
-The synchronisation hung up immediately after connecting.
-
-## Expected behaviour
-- Synchronisation ends with the message `Replication completed`
-- Everything synchronised
-
-## Actually happened
-- Synchronisation has been cancelled with the message `TypeError ... ` (captured in the attached log, around LL.10-LL.12)
-- No files synchronised
-
-## Reproducing procedure
-
-1. Configure LiveSync as in the attached material.
-2. Click the replication button on the ribbon.
-3. Synchronising has begun.
-4. About two or three seconds later, we got the error `TypeError ... `.
-5. Replication has been stopped. No files synchronised.
-
-Note: If you do not catch the reproducing procedure, please let me know the frequency and signs.
-
-## Report materials
-If the information is not available, do not hesitate to report it as it is. You can also of course omit it if you think this is indeed unnecessary. If it is necessary, I will ask you.
-
-### Report from the LiveSync
-For more information, please refer to [Making the report](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Making+the+report).
-
-<details>
-<summary>Report from hatch</summary>
-
-```
-<!-- paste here -->
-```
-
-</details>
+---
+
+## Required
+
+### Abstract
+<!-- Briefly describe the problem in one or two sentences. -->
+
+### Expected behaviour
+<!-- What did you expect to happen? -->
+
+### Actually happened
+<!-- What actually happened? Include any error messages. -->
+
+### Reproducing procedure
+<!-- Step-by-step instructions to reproduce the issue. If you cannot reproduce it reliably, please describe the frequency and any signs you noticed. -->
 
 ### Obsidian debug info
+Please provide debug info for **each device involved**. The primary device (where the issue occurred) is required; others are strongly recommended. If your issue involves synchronisation between devices, debug info from relevant devices is very helpful.
+To get it: open the command palette → "Show debug info".
 
 <details>
-<summary>Debug info</summary>
+<summary>Device 1 (primary)</summary>
 
 ```
 <!-- paste here -->
 ```
 
 </details>
+
+<details>
+<summary>Device 2 (if applicable)</summary>
+
+```
+<!-- paste here -->
+```
+
+</details>
+
+### LiveSync version
+The hatch report (below) includes version information. If you cannot provide the report, please fill in the version here.
+- Self-hosted LiveSync version: <!-- e.g. 0.23.0 — find it in Obsidian Settings → Community Plugins -->
+
+### Report from LiveSync
+Open the `Hatch` pane in LiveSync settings and press `Make report`. Paste here or upload to [Gist](https://gist.github.com/) and share the link.
+
+<details>
+<summary>Report from hatch (primary)</summary>
+
+```
+<!-- paste here or link to Gist -->
+```
+
+</details>
+
+<details>
+<summary>Report from hatch (if applicable)</summary>
+
+```
+<!-- paste here or link to Gist -->
+```
+
+</details>
 
 ### Plug-in log
-We can see the log by tapping the Document box icon. If you noticed something suspicious, please let me know.
-Note: **Please enable `Verbose Log`**. For detail, refer to [Logging](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Logging), please.
+Enable `Verbose Log` in General Settings first, then reproduce the issue and copy the log (tap the document box icon in the ribbon).
+Paste here or upload to [Gist](https://gist.github.com/) and share the link.
 
 <details>
-<summary>Plug-in log</summary>
+<summary>Plug-in log (primary)</summary>
 
 ```
-<!-- paste here -->
+<!-- paste here or link to Gist -->
 ```
 
 </details>
+
+<details>
+<summary>Plug-in log (if applicable)</summary>
+
+```
+<!-- paste here or link to Gist -->
+```
+
+</details>
 
-### Network log
-Network logs displayed in DevTools will possibly help with connection-related issues. To capture that, please refer to [DevTools](https://docs.vrtmrz.net/LiveSync/hintandtrivia/DevTools).
+---
+
+## Optional
 
 ### Screenshots
 If applicable, please add screenshots to help explain your problem.
 
-### Other information, insights and intuition.
+### Other information, insights and intuition
 Please provide any additional context or information about the problem.

.github/workflows/cli-deno-tests.yml

@@ -0,0 +1,75 @@
name: cli-deno-tests

on:
  workflow_dispatch:
    inputs:
      test_task:
        description: 'Deno test task to run'
        type: choice
        options:
          - test
          - test:local
          - test:e2e-matrix
          - test:p2p-sync
        default: test

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24.x'
          cache: 'npm'
      - name: Setup Deno
        uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x
      - name: Install dependencies
        run: npm ci
      - name: Build CLI
        working-directory: src/apps/cli
        run: npm run build
      - name: Create .test.env
        working-directory: src/apps/cli
        run: |
          cat <<EOF > .test.env
          hostname=http://127.0.0.1:5989/
          dbname=livesync-test-db-ci
          username=admin
          password=testpassword
          minioEndpoint=http://127.0.0.1:9000
          accessKey=minioadmin
          secretKey=minioadmin
          bucketName=livesync-test-bucket-ci
          EOF
      - name: Run Deno tests
        working-directory: src/apps/cli/testdeno
        env:
          LIVESYNC_DOCKER_MODE: native
          LIVESYNC_CLI_RETRY: 3
        run: |
          TASK="${{ github.event_name == 'workflow_dispatch' && inputs.test_task || 'test' }}"
          echo "[INFO] Running Deno task: $TASK"
          deno task "$TASK"
      - name: Stop leftover containers
        if: always()
        run: |
          docker stop couchdb-test minio-test relay-test >/dev/null 2>&1 || true
          docker rm couchdb-test minio-test relay-test >/dev/null 2>&1 || true
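
Since the workflow is exposed only through `workflow_dispatch`, it has to be started manually, either from the Actions tab or, assuming the GitHub CLI is available, with something like:

```bash
# Hypothetical invocation: the workflow name matches the `name:` field above,
# and test_task is the dispatch input the workflow declares.
gh workflow run cli-deno-tests -f test_task=test:local
```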


@@ -24,7 +24,7 @@ Additionally, it supports peer-to-peer synchronisation using WebRTC now (experim
 - WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
 - Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
     - [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
-    - [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
+    - [webpeer](https://github.com/vrtmrz/obsidian-livesync/tree/main/src/apps/webpeer): A pseudo-client for receiving and sending data between devices.
     - A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
 - For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).

aggregator.html

@@ -0,0 +1,92 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Self-hosted LiveSync Setup QR Aggregator</title>
    <style>
        body { font-family: sans-serif; display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100vh; margin: 0; background-color: #f4f4f9; color: #333; }
        .container { background: white; padding: 2rem; border-radius: 8px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); text-align: center; max-width: 90%; }
        .progress { margin: 20px 0; font-size: 1.2rem; font-weight: bold; }
        .status { margin-bottom: 20px; color: #666; }
        .btn { display: inline-block; padding: 12px 24px; background-color: #7c4dff; color: white; text-decoration: none; border-radius: 4px; font-weight: bold; transition: background-color 0.2s; border: none; cursor: pointer; }
        .btn:hover { background-color: #651fff; }
        .btn:disabled { background-color: #ccc; cursor: not-allowed; }
        .grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(40px, 1fr)); gap: 8px; margin: 20px 0; }
        .tile { width: 40px; height: 40px; border: 2px solid #ddd; border-radius: 4px; display: flex; align-items: center; justify-content: center; font-size: 0.8rem; }
        .tile.filled { background-color: #7c4dff; color: white; border-color: #7c4dff; }
    </style>
</head>
<body>
    <div class="container">
        <h1>LiveSync Setup</h1>
        <div id="app">
            <p>Checking hash data...</p>
        </div>
    </div>
    <script>
        function updateUI() {
            const hash = window.location.hash.substring(1);
            const params = new URLSearchParams(hash);
            const id = params.get('id');
            const total = parseInt(params.get('n') || '0');
            const index = parseInt(params.get('i') || '-1');
            const data = params.get('d');
            const app = document.getElementById('app');

            if (!id || total <= 0 || index === -1 || !data) {
                app.innerHTML = '<p class="status">Invalid setup URL. Please scan the QR code correctly.</p>';
                return;
            }

            // Get session data
            const storageKey = 'ls_agg_' + id;
            let session = JSON.parse(localStorage.getItem(storageKey) || '{}');

            // Save current data
            session[index] = data;
            localStorage.setItem(storageKey, JSON.stringify(session));

            const receivedIndexes = Object.keys(session).map(Number);
            const count = receivedIndexes.length;

            let html = `
                <div class="status">Session ID: ${id}</div>
                <div class="progress">${count} / ${total} Loaded</div>
                <div class="grid">
            `;
            for (let i = 0; i < total; i++) {
                const isFilled = session[i] !== undefined;
                html += `<div class="tile ${isFilled ? 'filled' : ''}">${i + 1}</div>`;
            }
            html += `</div>`;

            if (count === total) {
                const sortedData = Array.from({length: total}, (_, i) => session[i]).join('');
                // Use the correct protocol for settings
                const obsidianUri = `obsidian://setuplivesync?settingsQR=${sortedData}`;
                html += `
                    <p>All parts have been collected!</p>
                    <a href="${obsidianUri}" class="btn">Open Obsidian to complete setup</a>
                    <p style="margin-top:20px; font-size:0.8rem; color: #999;">Note: If the button does not respond, please ensure you are opening this in a browser that can trigger Obsidian.</p>
                `;
            } else {
                html += `
                    <p class="status">Please scan the next QR code.</p>
                    <button class="btn" disabled>Waiting...</button>
                `;
            }
            app.innerHTML = html;
        }

        window.addEventListener('hashchange', updateUI);
        updateUI();
    </script>
</body>
</html>
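
The page above reassembles parts addressed by the fragment parameters `id`, `n`, `i`, and `d`, keyed in `localStorage` under `ls_agg_<id>`. A hypothetical sender-side counterpart (not part of this diff; the function name and chunk size are illustrative) could emit one aggregator URL per QR code like this:

```typescript
// Illustrative sketch only: split an encoded settings string into parts and
// build one URL per QR code, using the fragment parameters the page reads.
function buildAggregatorUrls(pageUrl: string, encodedSettings: string, partSize = 800): string[] {
    const id = Math.random().toString(36).slice(2, 10); // session id shared by all parts
    const parts: string[] = [];
    for (let offset = 0; offset < encodedSettings.length; offset += partSize) {
        parts.push(encodedSettings.slice(offset, offset + partSize));
    }
    return parts.map(
        (part, i) => `${pageUrl}#id=${id}&n=${parts.length}&i=${i}&d=${encodeURIComponent(part)}`
    );
}
```

Once every part has been scanned, the page joins the parts in index order and hands the result to `obsidian://setuplivesync?settingsQR=...`.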


@@ -1,7 +1,6 @@
 # For details and other explanations about this file refer to:
 # https://github.com/vrtmrz/obsidian-livesync/blob/main/docs/setup_own_server.md#traefik
-version: "2.1"
 services:
   couchdb:
     image: couchdb:latest


@@ -230,7 +230,6 @@ And, be sure to check the server log and be careful of malicious access.
 If you are using Traefik, this [docker-compose.yml](https://github.com/vrtmrz/obsidian-livesync/blob/main/docker-compose.traefik.yml) file (also pasted below) has all the right CORS parameters set. It assumes you have an external network called `proxy`.
 ```yaml
-version: "2.1"
 services:
   couchdb:
     image: couchdb:latest


@@ -71,7 +71,6 @@ obsidian-livesync
 You can edit `docker-compose.yml` by referring to the following:
 ```yaml
-version: "2.1"
 services:
   couchdb:
     image: couchdb

docs/to_issue_reporting.md

@@ -0,0 +1,145 @@
# How to report an issue
Thank you for helping improve Self-hosted LiveSync!
This document explains how to collect the information needed for an issue report. Issues with sufficient information will be prioritised.
---
## Filled example
Here is an example of a well-filled report for reference.
### Abstract
The synchronisation hung up immediately after connecting.
### Expected behaviour
- Synchronisation ends with the message `Replication completed`
- Everything synchronised
### Actually happened
- Synchronisation was cancelled with the message `TypeError: Failed to fetch` (visible in the plug-in log around lines 10-12)
- No files synchronised
### Reproducing procedure
1. Configure LiveSync with the settings shown in the attached report.
2. Click the sync button on the ribbon.
3. Synchronisation begins.
4. About two or three seconds later, the error `TypeError: Failed to fetch` appears.
5. Replication stops. No files synchronised.
### Obsidian debug info (Device 1 — Windows desktop)
```
SYSTEM INFO:
Obsidian version: v1.2.8
Installer version: v1.1.15
Operating system: Windows 10 Pro 10.0.19044
Login status: logged in
Catalyst license: supporter
Insider build toggle: off
Community theme: Minimal v6.1.11
Snippets enabled: 3
Restricted mode: off
Plugins installed: 35
Plugins enabled: 11
1: Self-hosted LiveSync v0.19.4
...
```
### Report from LiveSync
```
----remote config----
cors:
credentials: "true"
...
---- Plug-in config ---
couchDB_URI: self-hosted
couchDB_USER: REDACTED
...
```
### Plug-in log
```
2023/5/24 10:50:33->HTTP:GET to:/ -> failed
2023/5/24 10:50:33->TypeError:Failed to fetch
2023/5/24 10:50:33->could not connect to https://example.com/ : your vault
(TypeError:Failed to fetch)
```
---
## How to collect each piece of information
### Obsidian debug info
Open the command palette (`Ctrl/Cmd + P`) and run **"Show debug info"**. Copy the output and paste it into the issue.
If multiple devices are involved in the problem (e.g., sync between a phone and a desktop), please provide the debug info for each device. The device where the issue occurred is required; information from other devices is strongly recommended.
### Report from LiveSync (hatch report)
1. Open LiveSync settings.
2. Go to the **Hatch** pane.
3. Press the **Make report** button.
The report will be copied to your clipboard. It contains your LiveSync configuration and the remote server configuration, with credentials automatically redacted.
**Tip:** For large reports, consider uploading to [GitHub Gist](https://gist.github.com/) and sharing the link instead of pasting directly into the issue. This makes it easier to manage, and if you accidentally leave sensitive data in, a Gist can be deleted.
If you paste directly, wrap it in a `<details>` tag to keep the issue readable:
```
<details>
<summary>Report from hatch</summary>
```
----remote config----
:
```
</details>
```
### Plug-in log
The plug-in log is volatile by default (not saved to disk) and shown only in the log dialogue, which can be opened by tapping the **document box icon** in the ribbon.
#### Enable verbose log
Before reproducing the issue, enable **Verbose Log** in LiveSync's **General Settings** pane. Without this, many diagnostic messages will be suppressed.
#### Persist the log to a file (optional)
If you need to capture a log across a restart, enable **"Write logs into the file"** in General Settings. Note that log files may contain sensitive information — use this option only for troubleshooting, and disable it afterwards.
As with the hatch report, consider uploading large logs to [GitHub Gist](https://gist.github.com/).
### Network log (for connection-related issues only)
If the issue is related to network connectivity (e.g., cannot connect to the server, authentication errors), a network log captured from browser DevTools can be very helpful. You do not need to include this for non-connection issues.
#### Opening DevTools
| Platform | Shortcut |
|----------|----------|
| Windows / Linux | `Ctrl + Shift + I` |
| macOS | `Cmd + Shift + I` |
| Android | Use [Chrome remote debugging](https://developer.chrome.com/docs/devtools/remote-debugging/) |
| iOS | Use [Safari Web Inspector](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios) on a Mac |
#### What to capture
1. Open the **Network** pane in DevTools.
2. Reproduce the issue.
3. Look for requests marked in red.
4. Capture screenshots of the **Headers**, **Payload**, and **Response** tabs for those requests.
**Important — redact before sharing:**
- Headers: conceal the request URL path, Remote Address, and the `authority` and `authorization` values.
- Payload / Response: the `_id` field contains your file paths — redact if needed.


@@ -1,7 +1,7 @@
 {
     "id": "obsidian-livesync",
     "name": "Self-hosted LiveSync",
-    "version": "0.25.56+patched5",
+    "version": "0.25.60",
     "minAppVersion": "0.9.12",
     "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "author": "vorotamoroz",

package-lock.json

@@ -1,12 +1,12 @@
 {
   "name": "obsidian-livesync",
-  "version": "0.25.56+patched5",
+  "version": "0.25.60",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "obsidian-livesync",
-      "version": "0.25.56+patched5",
+      "version": "0.25.60",
       "license": "MIT",
       "dependencies": {
         "@aws-sdk/client-s3": "^3.808.0",
@@ -924,13 +924,13 @@
       }
     },
     "node_modules/@aws-sdk/xml-builder": {
-      "version": "3.972.15",
-      "resolved": "https://registry.npmjs.org/@aws-sdk/xml-builder/-/xml-builder-3.972.15.tgz",
-      "integrity": "sha512-PxMRlCFNiQnke9YR29vjFQwz4jq+6Q04rOVFeTDR2K7Qpv9h9FOWOxG+zJjageimYbWqE3bTuLjmryWHAWbvaA==",
+      "version": "3.972.19",
+      "resolved": "https://registry.npmjs.org/@aws-sdk/xml-builder/-/xml-builder-3.972.19.tgz",
+      "integrity": "sha512-Cw8IOMdBUEIl8ZlhRC3Dc/E64D5B5/8JhV6vhPLiPfJwcRC84S6F8aBOIi/N4vR9ZyA4I5Cc0Ateb/9EHaJXeQ==",
       "license": "Apache-2.0",
       "dependencies": {
-        "@smithy/types": "^4.13.1",
-        "fast-xml-parser": "5.5.8",
+        "@smithy/types": "^4.14.1",
+        "fast-xml-parser": "5.7.1",
         "tslib": "^2.6.2"
       },
       "engines": {
@@ -2668,6 +2668,18 @@
         "url": "https://paulmillr.com/funding/"
       }
     },
+    "node_modules/@nodable/entities": {
+      "version": "2.1.0",
+      "resolved": "https://registry.npmjs.org/@nodable/entities/-/entities-2.1.0.tgz",
+      "integrity": "sha512-nyT7T3nbMyBI/lvr6L5TyWbFJAI9FTgVRakNoBqCD+PmID8DzFrrNdLLtHMwMszOtqZa8PAOV24ZqDnQrhQINA==",
+      "funding": [
+        {
+          "type": "github",
+          "url": "https://github.com/sponsors/nodable"
+        }
+      ],
+      "license": "MIT"
+    },
     "node_modules/@nodelib/fs.scandir": {
       "version": "2.1.5",
       "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz",
@@ -3945,9 +3957,9 @@
       }
     },
     "node_modules/@smithy/types": {
-      "version": "4.13.1",
-      "resolved": "https://registry.npmjs.org/@smithy/types/-/types-4.13.1.tgz",
-      "integrity": "sha512-787F3yzE2UiJIQ+wYW1CVg2odHjmaWLGksnKQHUrK/lYZSEcy1msuLVvxaR/sI2/aDe9U+TBuLsXnr3vod1g0g==",
+      "version": "4.14.1",
+      "resolved": "https://registry.npmjs.org/@smithy/types/-/types-4.14.1.tgz",
+      "integrity": "sha512-59b5HtSVrVR/eYNei3BUj3DCPKD/G7EtDDe7OEJE7i7FtQFugYo6MxbotS8mVJkLNVf8gYaAlEBwwtJ9HzhWSg==",
       "license": "Apache-2.0",
       "dependencies": {
         "tslib": "^2.6.2"
@@ -6073,9 +6085,9 @@
       }
     },
     "node_modules/basic-ftp": {
-      "version": "5.2.0",
-      "resolved": "https://registry.npmjs.org/basic-ftp/-/basic-ftp-5.2.0.tgz",
-      "integrity": "sha512-VoMINM2rqJwJgfdHq6RiUudKt2BV+FY5ZFezP/ypmwayk68+NzzAQy4XXLlqsGD4MCzq3DrmNFD/uUmBJuGoXw==",
+      "version": "5.3.0",
+      "resolved": "https://registry.npmjs.org/basic-ftp/-/basic-ftp-5.3.0.tgz",
+      "integrity": "sha512-5K9eNNn7ywHPsYnFwjKgYH8Hf8B5emh7JKcPaVjjrMJFQQwGpwowEnZNEtHs7DfR7hCZsmaK3VA4HUK0YarT+w==",
       "dev": true,
       "license": "MIT",
       "engines": {
@@ -8155,9 +8167,9 @@
       "license": "MIT"
     },
     "node_modules/fast-xml-builder": {
-      "version": "1.1.4",
-      "resolved": "https://registry.npmjs.org/fast-xml-builder/-/fast-xml-builder-1.1.4.tgz",
-      "integrity": "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg==",
+      "version": "1.1.5",
+      "resolved": "https://registry.npmjs.org/fast-xml-builder/-/fast-xml-builder-1.1.5.tgz",
+      "integrity": "sha512-4TJn/8FKLeslLAH3dnohXqE3QSoxkhvaMzepOIZytwJXZO69Bfz0HBdDHzOTOon6G59Zrk6VQ2bEiv1t61rfkA==",
       "funding": [
         {
           "type": "github",
@@ -8170,9 +8182,9 @@
       }
     },
     "node_modules/fast-xml-parser": {
-      "version": "5.5.8",
-      "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.5.8.tgz",
-      "integrity": "sha512-Z7Fh2nVQSb2d+poDViM063ix2ZGt9jmY1nWhPfHBOK2Hgnb/OW3P4Et3P/81SEej0J7QbWtJqxO05h8QYfK7LQ==",
+      "version": "5.7.1",
+      "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.7.1.tgz",
+      "integrity": "sha512-8Cc3f8GUGUULg34pBch/KGyPLglS+OFs05deyOlY7fL2MTagYPKrVQNmR1fLF/yJ9PH5ZSTd3YDF6pnmeZU+zA==",
       "funding": [
         {
           "type": "github",
@@ -8181,9 +8193,10 @@
       ],
       "license": "MIT",
       "dependencies": {
-        "fast-xml-builder": "^1.1.4",
-        "path-expression-matcher": "^1.2.0",
-        "strnum": "^2.2.0"
+        "@nodable/entities": "^2.1.0",
+        "fast-xml-builder": "^1.1.5",
+        "path-expression-matcher": "^1.5.0",
+        "strnum": "^2.2.3"
       },
       "bin": {
         "fxparser": "src/cli/cli.js"
@@ -11013,9 +11026,9 @@
       }
     },
     "node_modules/path-expression-matcher": {
-      "version": "1.2.0",
-      "resolved": "https://registry.npmjs.org/path-expression-matcher/-/path-expression-matcher-1.2.0.tgz",
-      "integrity": "sha512-DwmPWeFn+tq7TiyJ2CxezCAirXjFxvaiD03npak3cRjlP9+OjTmSy1EpIrEbh+l6JgUundniloMLDQ/6VTdhLQ==",
+      "version": "1.5.0",
+      "resolved": "https://registry.npmjs.org/path-expression-matcher/-/path-expression-matcher-1.5.0.tgz",
+      "integrity": "sha512-cbrerZV+6rvdQrrD+iGMcZFEiiSrbv9Tfdkvnusy6y0x0GKBXREFg/Y65GhIfm0tnLntThhzCnfKwp1WRjeCyQ==",
       "funding": [
         {
           "type": "github",
@@ -11238,9 +11251,9 @@
       }
     },
     "node_modules/postcss": {
-      "version": "8.5.8",
-      "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.8.tgz",
-      "integrity": "sha512-OW/rX8O/jXnm82Ey1k44pObPtdblfiuWnrd8X7GJ7emImCOstunGbXUpp7HdBrFQX6rJzn3sPT397Wp5aCwCHg==",
+      "version": "8.5.10",
+      "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.10.tgz",
+      "integrity": "sha512-pMMHxBOZKFU6HgAZ4eyGnwXF/EvPGGqUr0MnZ5+99485wwW41kW91A4LOGxSHhgugZmSChL5AlElNdwlNgcnLQ==",
       "dev": true,
       "funding": [
         {
@@ -12927,9 +12940,9 @@
       }
     },
     "node_modules/strnum": {
-      "version": "2.2.2",
-      "resolved": "https://registry.npmjs.org/strnum/-/strnum-2.2.2.tgz",
-      "integrity": "sha512-DnR90I+jtXNSTXWdwrEy9FakW7UX+qUZg28gj5fk2vxxl7uS/3bpI4fjFYVmdK9etptYBPNkpahuQnEwhwECqA==",
+      "version": "2.2.3",
+      "resolved": "https://registry.npmjs.org/strnum/-/strnum-2.2.3.tgz",
+      "integrity": "sha512-oKx6RUCuHfT3oyVjtnrmn19H1SiCqgJSg+54XqURKp5aCMbrXrhLjRN9TjuwMjiYstZ0MzDrHqkGZ5dFTKd+zg==",
       "funding": [
         {
           "type": "github",
@@ -14218,9 +14231,9 @@
       }
     },
     "node_modules/vite": {
-      "version": "7.3.1",
-      "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.1.tgz",
-      "integrity": "sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==",
+      "version": "7.3.2",
+      "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.2.tgz",
+      "integrity": "sha512-Bby3NOsna2jsjfLVOHKes8sGwgl4TT0E6vvpYgnAYDIF/tie7MRaFthmKuHx1NSXjiTueXH3do80FMQgvEktRg==",
       "dev": true,
       "license": "MIT",
       "peer": true,


@@ -1,6 +1,6 @@
 {
     "name": "obsidian-livesync",
-    "version": "0.25.56+patched5",
+    "version": "0.25.60",
     "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
     "main": "main.js",
     "type": "module",


@@ -45,11 +45,73 @@ CLI Main
 - Settings management (JSON file)
 - Graceful shutdown handling
 
-## Something I realised later that could lead to misunderstandings
-The term `vault` in this README refers to the directory containing your local database and settings file. Not the actual files you want to sync. I will fix this later, but please be mind this for now.
-
-## Docker
+## Usage
+
+The CLI operates on a **database directory** which contains PouchDB data and settings.
+
+> [!NOTE]
+> `livesync-cli` is the alias for the CLI executable. Please replace with the actual command of your installation (e.g. `npm run --silent cli --` or `docker run ...`).
+
+```bash
+livesync-cli [database-path] [command] [args...]
+```
+
+### Arguments
+
+- `database-path`: Path to the directory where `.livesync` folder and `settings.json` are (or will be) located.
+    - Note: In previous versions, this was referred to as the "vault" path. Now it is clearly distinguished from the actual vault (the directory containing your `.md` files).
+
+### Commands
+
+- `sync`: Run one replication cycle with the remote CouchDB.
+- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
+    - If `vault-path` is provided, the CLI will synchronise the database with files in the vault directory.
+    - If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
+    - Use this command to keep your local `.md` files in sync with the database.
+- `ls [prefix]`: List files currently stored in the local database.
+- `push <src> <dst>`: Push a local file `<src>` into the database at path `<dst>`.
+- `pull <src> <dst>`: Pull a file `<src>` from the database into local file `<dst>`.
+- `cat <src>`: Read a file from the database and write to stdout.
+- `put <dst>`: Read from stdin and write to the database path `<dst>`.
+- `init-settings [file]`: Create a default settings file.
+
+### Examples
+
+```bash
+# Basic sync with remote
+livesync-cli ./my-db sync
+
+# Mirroring to your actual Obsidian vault
+livesync-cli ./my-db mirror /path/to/obsidian-vault
+
+# Manual file operations
+livesync-cli ./my-db push ./note.md folder/note.md
+livesync-cli ./my-db pull folder/note.md ./note.md
+```
+
+## Installation
+
+### Build from source
+
+```bash
+# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
+# due to shared dependencies with webapp and main library
+npm install
+
+# Build the project (ensure you are in `src/apps/cli` directory)
+npm run build
+```
+
+Run the CLI:
+
+```bash
+# Run with npm script (from repository root)
+npm run --silent cli -- [database-path] [command] [args...]
+
+# Run the built executable directly
+node src/apps/cli/dist/index.cjs [database-path] [command] [args...]
+```
+
+### Docker
 
 A Docker image is provided for headless / server deployments. Build from the repository root:
@@ -61,28 +123,28 @@ Run:
 ```bash
 # Sync with CouchDB
-docker run --rm -v /path/to/your/vault:/data livesync-cli sync
+docker run --rm -v /path/to/your/db:/data livesync-cli sync
+
+# Mirror to a specific vault directory
+docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault
 
 # List files in the local database
-docker run --rm -v /path/to/your/vault:/data livesync-cli ls
+docker run --rm -v /path/to/your/db:/data livesync-cli ls
+
+# Generate a default settings file
+docker run --rm -v /path/to/your/db:/data livesync-cli init-settings
 ```
 
-The vault directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
+The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
 
-### P2P (WebRTC) and Docker networking
+#### P2P (WebRTC) and Docker networking
 
 The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates
 three kinds of ICE candidates. The default Docker bridge network affects which
 candidates are usable:
 
 | Candidate type | Description                        | Bridge network             |
-|---|---|---|
+| -------------- | ---------------------------------- | -------------------------- |
 | `host`         | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
 | `srflx`        | Host public IP via STUN reflection | Works over the internet    |
 | `relay`        | Traffic relayed via TURN server    | Always reachable           |
 
 **LAN P2P on Linux** — use `--network host` so that the real host IP is
 advertised as the `host` candidate:
@@ -91,6 +153,8 @@ advertised as the `host` candidate:
 docker run --rm --network host -v /path/to/your/vault:/data livesync-cli p2p-host
 ```
 
+Note: also fix the alias to include `--network host` if you want to use `livesync-cli` for P2P commands.
+
 > `--network host` is not available on Docker Desktop for macOS or Windows.
 
 **LAN P2P on macOS / Windows Docker Desktop** — configure a TURN server in the
@@ -103,16 +167,35 @@ candidate carries the host's public IP and peers can connect normally.
 **CouchDB sync only (no P2P)** — no special network configuration is required.
 
-## Installation
+### Adding `livesync-cli` alias
+
+To use the `livesync-cli` command globally, you can add an alias to your shell configuration file (e.g., `.zshrc` or `.bashrc`).
+
+If you are using `npm run`, add the following line:
 
 ```bash
-# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
-# due to shared dependencies with webapp and main library
-npm install
-
-# Build the project (ensure you are in `src/apps/cli` directory)
-npm run build
+alias livesync-cli='npm run --silent --prefix /path/to/repository/src/apps/cli cli --'
+# or
+alias livesync-cli="npm run --silent --prefix $PWD cli --"
 ```
+
+Alternatively, if you want to use the built executable directly:
+
+```bash
+alias livesync-cli='node /path/to/repository/src/apps/cli/dist/index.cjs'
+# or
+alias livesync-cli="node $PWD/dist/index.cjs"
+```
+
+If you prefer using Docker:
+
+```bash
+alias livesync-cli='docker run --rm -v /path/to/your/db:/data livesync-cli'
+```
+
+After adding the alias, restart your shell or run `source ~/.zshrc` (or `.bashrc`).
 
 ## Usage
 
 ### Basic Usage
@@ -121,43 +204,43 @@ As you know, the CLI is designed to be used in a headless environment. Hence all
 ```bash
 # Sync local database with CouchDB (no files will be changed).
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json sync
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json sync
 
 # Push files to local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
 
 # Pull files from local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
 
 # Verbose logging
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json --verbose
 
 # Apply setup URI to settings file (settings only; does not run synchronisation)
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
 
 # Put text from stdin into local database
-echo "Hello from stdin" | npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
+echo "Hello from stdin" | livesync-cli /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
 
 # Output a file from local database to stdout
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
 
 # Output a specific revision of a file from local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
 
 # Pull a specific revision of a file from local database to local storage
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
 
 # List files in local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
 
 # Show metadata for a file in local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
 
 # Mark a file as deleted in local database
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
 
 # Resolve conflict by keeping a specific revision
-npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
+livesync-cli /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
 ```
 
 ### Configuration
@@ -192,7 +275,8 @@ The CLI uses the same settings format as the Obsidian plugin. Create a `.livesyn
 ```
 Usage:
-  livesync-cli [database-path] [options] [command] [command-args]
+  livesync-cli <database-path> [options] <command> [command-args]
+  livesync-cli init-settings [path]
 
 Arguments:
   database-path    Path to the local database directory (required except for init-settings)
@@ -201,7 +285,8 @@ Options:
   --settings, -s <path>    Path to settings file (default: .livesync/settings.json in local database directory)
   --force, -f              Overwrite existing file on init-settings
   --verbose, -v            Enable verbose logging
-  --help, -h               Show this help message
+  --debug, -d              Enable debug logging (includes verbose)
+  --help, -h               Show help message
 
 Commands:
   init-settings [path]     Create settings JSON from DEFAULT_SETTINGS
@@ -211,16 +296,16 @@ Commands:
   p2p-host                           Start P2P host mode and wait until interrupted (Ctrl+C)
   push <src> <dst>                   Push local file <src> into local database path <dst>
   pull <src> <dst>                   Pull file <src> from local database into local file <dst>
-  pull-rev <src> <dst> <revision>    Pull specific revision into local file <dst>
+  pull-rev <src> <dst> <rev>         Pull specific revision <rev> into local file <dst>
   setup <setupURI>                   Apply setup URI to settings file
-  put <vaultPath>                    Read text from standard input and write to local database
-  cat <vaultPath>                    Write latest file content from local database to standard output
-  cat-rev <vaultPath> <revision>     Write specific revision content from local database to standard output
+  put <dst>                          Read text from standard input and write to local database path <dst>
+  cat <src>                          Write latest file content from local database to standard output
+  cat-rev <src> <rev>                Write specific revision <rev> content from local database to standard output
   ls [prefix]                        List files as path<TAB>size<TAB>mtime<TAB>revision[*]
-  info <vaultPath>                   Show file metadata including current and past revisions, conflicts, and chunk list
-  rm <vaultPath>                     Mark file as deleted in local database
-  resolve <vaultPath> <revision>     Resolve conflict by keeping the specified revision
-  mirror <storagePath> <vaultPath>   Mirror local file into local database.
+  info <path>                        Show file metadata including current and past revisions, conflicts, and chunk list
+  rm <path>                          Mark file as deleted in local database
+  resolve <path> <rev>               Resolve conflict by keeping the specified revision
+  mirror [vaultPath]                 Mirror database contents to the local file system (vaultPath defaults to database-path)
 ```
 
 Run via npm script:
@@ -300,11 +385,11 @@ In other words, it performs the following actions:
 5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):
 
 | Group | Condition | Action |
-|---|---|---|
+| ----------------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **UPDATE DATABASE**           | File exists in storage only  | Store the file into the local database. |
 | **UPDATE STORAGE**            | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
 | **SYNC DATABASE AND STORAGE** | File exists in both          | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |
 
 6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.
@@ -323,9 +408,9 @@ Note: `mirror` does not respect file deletions. If a file is deleted in storage,
 Create default settings, apply a setup URI, then run one sync cycle.
 
 ```bash
-npm run --silent cli -- init-settings /data/livesync-settings.json
-printf '%s\n' "$SETUP_PASSPHRASE" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
+livesync-cli -- init-settings /data/livesync-settings.json
+printf '%s\n' "$SETUP_PASSPHRASE" | livesync-cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
+livesync-cli -- /data/vault --settings /data/livesync-settings.json sync
 ```
 
 ### 2. Scripted import and export
@@ -333,8 +418,8 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
 Push local files into the database from automation, and pull them back for export or backup.
 
 ```bash
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
+livesync-cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
+livesync-cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
 ```
 
 ### 3. Revision inspection and restore
@@ -342,9 +427,9 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull
 List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).
 
 ```bash
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
+livesync-cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
+livesync-cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
+livesync-cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
 ```
 
 ### 4. Conflict and cleanup workflow
@@ -352,9 +437,9 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull
 Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.
 
 ```bash
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
+livesync-cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
+livesync-cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
+livesync-cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
 ```
 
 ### 5. CI smoke test for content round-trip
@@ -362,8 +447,8 @@ npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm n
 Validate that `put`/`cat` is behaving as expected in a pipeline.
 
 ```bash
-echo "hello-ci" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
-npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
+echo "hello-ci" | livesync-cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
+livesync-cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
 ```
 
 ## Development
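
To make the `mirror` categorisation table above concrete, here is a condensed TypeScript sketch of the decision logic it documents (illustrative only; the types and names are not the plug-in's actual implementation):

```typescript
// Illustrative sketch of the three-way mirror categorisation documented above.
type FileState = { mtime: number; deleted?: boolean; conflicted?: boolean };
type Action = "UPDATE_DATABASE" | "UPDATE_STORAGE" | "STORAGE_TO_DB" | "DB_TO_STORAGE" | "SKIP";

function categorise(storage?: FileState, db?: FileState): Action {
    if (storage && !db) return "UPDATE_DATABASE"; // storage only: store into the database
    if (!storage && db) {
        // database only: restore, unless the entry is deleted or conflicted
        return db.deleted || db.conflicted ? "SKIP" : "UPDATE_STORAGE";
    }
    if (storage && db) {
        if (db.conflicted) return "SKIP"; // conflicted documents are always skipped
        if (storage.mtime > db.mtime) return "STORAGE_TO_DB"; // storage is newer
        if (storage.mtime < db.mtime) return "DB_TO_STORAGE"; // database is newer
        return "SKIP"; // equal mtime: do nothing
    }
    return "SKIP"; // neither side present (should not occur for the union set)
}
```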


@@ -5,13 +5,13 @@ import { configURIBase } from "@lib/common/models/shared.const";
import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types"; import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types";
import { stripAllPrefixes } from "@lib/string_and_binary/path"; import { stripAllPrefixes } from "@lib/string_and_binary/path";
import type { CLICommandContext, CLIOptions } from "./types"; import type { CLICommandContext, CLIOptions } from "./types";
import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils"; import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils";
import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p"; import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p";
import { performFullScan } from "@lib/serviceFeatures/offlineScanner"; import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager"; import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";
export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> { export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
const { vaultPath, core, settingsPath } = context; const { databasePath, core, settingsPath } = context;
await core.services.control.activated; await core.services.control.activated;
if (options.command === "daemon") { if (options.command === "daemon") {
@@ -77,16 +77,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
throw new Error("push requires two arguments: <src> <dst>"); throw new Error("push requires two arguments: <src> <dst>");
} }
const sourcePath = path.resolve(options.commandArgs[0]); const sourcePath = path.resolve(options.commandArgs[0]);
const destinationVaultPath = toVaultRelativePath(options.commandArgs[1], vaultPath); const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath);
const sourceData = await fs.readFile(sourcePath); const sourceData = await fs.readFile(sourcePath);
const sourceStat = await fs.stat(sourcePath); const sourceStat = await fs.stat(sourcePath);
console.log(`[Command] push ${sourcePath} -> ${destinationVaultPath}`); console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);
await core.serviceModules.storageAccess.writeFileAuto(destinationVaultPath, toArrayBuffer(sourceData), { await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
mtime: sourceStat.mtimeMs, mtime: sourceStat.mtimeMs,
ctime: sourceStat.ctimeMs, ctime: sourceStat.ctimeMs,
}); });
const destinationPathWithPrefix = destinationVaultPath as FilePathWithPrefix; const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true); const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);
return stored; return stored;
} }
@@ -95,16 +95,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("pull requires two arguments: <src> <dst>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const destinationPath = path.resolve(options.commandArgs[1]);
-console.log(`[Command] pull ${sourceVaultPath} -> ${destinationPath}`);
+console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`);
-const sourcePathWithPrefix = sourceVaultPath as FilePathWithPrefix;
+const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix;
const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true);
if (!restored) {
return false;
}
-const data = await core.serviceModules.storageAccess.readFileAuto(sourceVaultPath);
+const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath);
await fs.mkdir(path.dirname(destinationPath), { recursive: true });
if (typeof data === "string") {
await fs.writeFile(destinationPath, data, "utf-8");
@@ -118,16 +118,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 3) {
throw new Error("pull-rev requires three arguments: <src> <dst> <rev>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const destinationPath = path.resolve(options.commandArgs[1]);
const rev = options.commandArgs[2].trim();
if (!rev) {
throw new Error("pull-rev requires a non-empty revision");
}
-console.log(`[Command] pull-rev ${sourceVaultPath}@${rev} -> ${destinationPath}`);
+console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
-sourceVaultPath as FilePathWithPrefix,
+sourceDatabasePath as FilePathWithPrefix,
rev,
true
);
@@ -175,11 +175,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("put requires one argument: <dst>");
}
-const destinationVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const content = await readStdinAsUtf8();
-console.log(`[Command] put stdin -> ${destinationVaultPath}`);
+console.log(`[Command] put stdin -> ${destinationDatabasePath}`);
return await core.serviceModules.databaseFileAccess.storeContent(
-destinationVaultPath as FilePathWithPrefix,
+destinationDatabasePath as FilePathWithPrefix,
content
);
}
@@ -188,10 +188,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("cat requires one argument: <src>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
-console.error(`[Command] cat ${sourceVaultPath}`);
+console.error(`[Command] cat ${sourceDatabasePath}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
-sourceVaultPath as FilePathWithPrefix,
+sourceDatabasePath as FilePathWithPrefix,
undefined,
true
);
@@ -212,14 +212,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("cat-rev requires two arguments: <src> <rev>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const rev = options.commandArgs[1].trim();
if (!rev) {
throw new Error("cat-rev requires a non-empty revision");
}
-console.error(`[Command] cat-rev ${sourceVaultPath} @ ${rev}`);
+console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
-sourceVaultPath as FilePathWithPrefix,
+sourceDatabasePath as FilePathWithPrefix,
rev,
true
);
@@ -239,7 +239,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.command === "ls") { if (options.command === "ls") {
const prefix = const prefix =
options.commandArgs.length > 0 && options.commandArgs[0].trim() !== "" options.commandArgs.length > 0 && options.commandArgs[0].trim() !== ""
? toVaultRelativePath(options.commandArgs[0], vaultPath) ? toDatabaseRelativePath(options.commandArgs[0], databasePath)
: ""; : "";
const rows: { path: string; line: string }[] = []; const rows: { path: string; line: string }[] = [];
@@ -261,6 +261,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
rows.sort((a, b) => a.path.localeCompare(b.path));
if (rows.length > 0) {
process.stdout.write(rows.map((e) => e.line).join("\n") + "\n");
+} else {
+process.stderr.write("[Info] No documents found in the local database.\n");
}
return true;
}
@@ -269,7 +271,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("info requires one argument: <path>");
}
-const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) {
if (doc._deleted || doc.deleted) continue;
@@ -313,7 +315,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("rm requires one argument: <path>");
}
-const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
console.error(`[Command] rm ${targetPath}`);
return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix);
}
@@ -322,7 +324,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("resolve requires two arguments: <path> <revision-to-keep>");
}
-const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath) as FilePathWithPrefix;
+const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix;
const revisionToKeep = options.commandArgs[1].trim();
if (revisionToKeep === "") {
throw new Error("resolve requires a non-empty revision-to-keep");

View File

@@ -58,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise<string> {
describe("runCommand abnormal cases", () => { describe("runCommand abnormal cases", () => {
const context = { const context = {
vaultPath: "/tmp/vault", databasePath: "/tmp/vault",
settingsPath: "/tmp/vault/.livesync/settings.json", settingsPath: "/tmp/vault/.livesync/settings.json",
} as any; } as any;

View File

@@ -32,7 +32,7 @@ export interface CLIOptions {
}
export interface CLICommandContext {
-vaultPath: string;
+databasePath: string;
core: LiveSyncBaseCore<ServiceContext, any>;
settingsPath: string;
}

View File

@@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer {
return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
}
-export function toVaultRelativePath(inputPath: string, vaultPath: string): string {
+export function toDatabaseRelativePath(inputPath: string, databasePath: string): string {
const stripped = inputPath.replace(/^[/\\]+/, "");
if (!path.isAbsolute(inputPath)) {
const normalized = stripped.replace(/\\/g, "/");
-const resolved = path.resolve(vaultPath, normalized);
+const resolved = path.resolve(databasePath, normalized);
-const rel = path.relative(vaultPath, resolved);
+const rel = path.relative(databasePath, resolved);
if (rel.startsWith("..") || path.isAbsolute(rel)) {
throw new Error(`Path ${inputPath} is outside of the local database directory`);
}
return rel.replace(/\\/g, "/");
}
const resolved = path.resolve(inputPath);
-const rel = path.relative(vaultPath, resolved);
+const rel = path.relative(databasePath, resolved);
if (rel.startsWith("..") || path.isAbsolute(rel)) {
throw new Error(`Path ${inputPath} is outside of the local database directory`);
}
@@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin
}
export async function readStdinAsUtf8(): Promise<string> {
-const chunks: Buffer[] = [];
+const chunks = [];
for await (const chunk of process.stdin) {
if (typeof chunk === "string") {
chunks.push(Buffer.from(chunk, "utf-8"));
} else {
-chunks.push(chunk);
+chunks.push(chunk as Buffer);
}
}
-return Buffer.concat(chunks).toString("utf-8");
+return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
}
export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise<string> {

View File

@@ -1,29 +1,33 @@
import * as path from "path"; import * as path from "path";
import { describe, expect, it } from "vitest"; import { describe, expect, it } from "vitest";
import { toVaultRelativePath } from "./utils"; import { toDatabaseRelativePath } from "./utils";
describe("toVaultRelativePath", () => { describe("toDatabaseRelativePath", () => {
const vaultPath = path.resolve("/tmp/livesync-vault"); const databasePath = path.resolve("/tmp/livesync-vault");
it("rejects absolute paths outside vault", () => { it("rejects absolute paths outside vault", () => {
expect(() => toVaultRelativePath("/etc/passwd", vaultPath)).toThrow("outside of the local database directory"); expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow(
"outside of the local database directory"
);
}); });
it("normalizes leading slash for absolute path inside vault", () => { it("normalizes leading slash for absolute path inside vault", () => {
const absoluteInsideVault = path.join(vaultPath, "notes", "foo.md"); const absoluteInsideVault = path.join(databasePath, "notes", "foo.md");
expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("notes/foo.md"); expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md");
}); });
it("normalizes Windows-style separators", () => { it("normalizes Windows-style separators", () => {
expect(toVaultRelativePath("notes\\daily\\2026-03-12.md", vaultPath)).toBe("notes/daily/2026-03-12.md"); expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md");
}); });
it("returns vault-relative path for another absolute path inside vault", () => { it("returns vault-relative path for another absolute path inside vault", () => {
const absoluteInsideVault = path.join(vaultPath, "docs", "inside.md"); const absoluteInsideVault = path.join(databasePath, "docs", "inside.md");
expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("docs/inside.md"); expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md");
}); });
it("rejects relative path traversal that escapes vault", () => { it("rejects relative path traversal that escapes vault", () => {
expect(() => toVaultRelativePath("../escape.md", vaultPath)).toThrow("outside of the local database directory"); expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow(
"outside of the local database directory"
);
}); });
}); });

View File

@@ -36,14 +36,15 @@ function printHelp(): void {
Self-hosted LiveSync CLI
Usage:
-livesync-cli [database-path] [options] [command] [command-args]
+livesync-cli <database-path> [options] <command> [command-args]
+livesync-cli init-settings [path]
Arguments:
-database-path Path to the local database directory (required)
+database-path Path to the local database directory
Commands:
sync Run one replication cycle and exit
-p2p-peers <timeout> Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
+p2p-peers <timeout> Show discovered peers as [peer]\t<peer-id>\t<peer-name>
p2p-sync <peer> <timeout>
Sync with the specified peer-id or peer-name
p2p-host Start P2P host mode and wait until interrupted
@@ -54,28 +55,29 @@ Commands:
put <dst> Read UTF-8 content from stdin and write to local database path <dst>
cat <src> Read file <src> from local database and write to stdout
cat-rev <src> <rev> Read file <src> at specific revision <rev> and write to stdout
-ls [prefix] List DB files as path<TAB>size<TAB>mtime<TAB>revision[*]
+ls [prefix] List DB files as path\tsize\tmtime\trevision[*]
info <path> Show detailed metadata for a file (ID, revision, conflicts, chunks)
rm <path> Mark a file as deleted in local database
resolve <path> <rev> Resolve conflicts by keeping <rev> and deleting others
+mirror [vault-path] Mirror database contents to the local file system (vault-path defaults to database-path)
Examples:
livesync-cli ./my-database sync
livesync-cli ./my-database p2p-peers 5
livesync-cli ./my-database p2p-sync my-peer-name 15
livesync-cli ./my-database p2p-host
livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
livesync-cli ./my-database pull folder/note.md ./exports/note.md
livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
echo "Hello" | livesync-cli ./my-database put notes/hello.md
livesync-cli ./my-database cat notes/hello.md
livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef
livesync-cli ./my-database ls notes/
livesync-cli ./my-database info notes/hello.md
livesync-cli ./my-database rm notes/hello.md
livesync-cli ./my-database resolve notes/hello.md 3-abcdef
livesync-cli init-settings ./data.json
livesync-cli ./my-database --verbose
`);
}
@@ -112,6 +114,7 @@ export function parseArgs(): CLIOptions {
case "-d": case "-d":
// debugging automatically enables verbose logging, as it is intended for debugging issues. // debugging automatically enables verbose logging, as it is intended for debugging issues.
debug = true; debug = true;
// falls through
case "--verbose": case "--verbose":
case "-v": case "-v":
verbose = true; verbose = true;
@@ -220,34 +223,34 @@ export async function main() {
return;
}
-// Resolve vault path
+// Resolve database path
-const vaultPath = path.resolve(options.databasePath!);
+const databasePath = path.resolve(options.databasePath!);
-// Check if vault directory exists
+// Check if database directory exists
try {
-const stat = await fs.stat(vaultPath);
+const stat = await fs.stat(databasePath);
if (!stat.isDirectory()) {
-console.error(`Error: ${vaultPath} is not a directory`);
+console.error(`Error: ${databasePath} is not a directory`);
process.exit(1);
}
} catch (error) {
-console.error(`Error: Vault directory ${vaultPath} does not exist`);
+console.error(`Error: Database directory ${databasePath} does not exist`);
process.exit(1);
}
// Resolve settings path
const settingsPath = options.settingsPath
? path.resolve(options.settingsPath)
-: path.join(vaultPath, SETTINGS_FILE);
+: path.join(databasePath, SETTINGS_FILE);
-configureNodeLocalStorage(path.join(vaultPath, ".livesync", "runtime", "local-storage.json"));
+configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json"));
infoLog(`Self-hosted LiveSync CLI`);
-infoLog(`Vault: ${vaultPath}`);
+infoLog(`Database Path: ${databasePath}`);
infoLog(`Settings: ${settingsPath}`);
infoLog("");
// Create service context and hub
-const context = new NodeServiceContext(vaultPath);
+const context = new NodeServiceContext(databasePath);
-const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context);
+const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
let levelStr = "";
switch (level) {
@@ -321,7 +324,11 @@ export async function main() {
const core = new LiveSyncBaseCore(
serviceHubInstance,
(core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
-return initialiseServiceModulesCLI(vaultPath, core, serviceHub);
+const mirrorVaultPath =
+options.command === "mirror" && options.commandArgs[0]
+? path.resolve(options.commandArgs[0])
+: databasePath;
+return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub);
},
(core) => [
// No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts
@@ -331,8 +338,8 @@ export async function main() {
(core) => {
// Add target filter to prevent internal files are handled
core.services.vault.isTargetFile.addHandler(async (target) => {
-const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target));
+const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
-const parts = vaultPath.split(path.sep);
+const parts = targetPath.split(path.sep);
// if some part of the path starts with dot, treat it as internal file and ignore.
if (parts.some((part) => part.startsWith("."))) {
return await Promise.resolve(false);
@@ -393,7 +400,7 @@ export async function main() {
infoLog(""); infoLog("");
} }
const result = await runCommand(options, { vaultPath, core, settingsPath }); const result = await runCommand(options, { databasePath, core, settingsPath });
if (!result) { if (!result) {
console.error(`[Error] Command '${options.command}' failed`); console.error(`[Error] Command '${options.command}' failed`);
process.exitCode = 1; process.exitCode = 1;

View File

@@ -17,7 +17,7 @@ describe("CLI parseArgs", () => {
});
it("exits 1 when --settings has no value", () => {
-process.argv = ["node", "livesync-cli", "./vault", "--settings"];
+process.argv = ["node", "livesync-cli", "./databasePath", "--settings"];
const exitMock = mockProcessExit();
const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -37,7 +37,7 @@ describe("CLI parseArgs", () => {
});
it("exits 1 for unknown command after database-path", () => {
-process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"];
+process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"];
const exitMock = mockProcessExit();
const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -56,32 +56,32 @@ describe("CLI parseArgs", () => {
expect(stdout).toHaveBeenCalled();
const combined = stdout.mock.calls.flat().join("\n");
expect(combined).toContain("Usage:");
-expect(combined).toContain("livesync-cli [database-path]");
+expect(combined).toContain("livesync-cli <database-path> [options] <command> [command-args]");
});
it("parses p2p-peers command and timeout", () => {
-process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"];
+process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"];
const parsed = parseArgs();
-expect(parsed.databasePath).toBe("./vault");
+expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-peers");
expect(parsed.commandArgs).toEqual(["5"]);
});
it("parses p2p-sync command with peer and timeout", () => {
-process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"];
+process.argv = ["node", "livesync-cli", "./databasePath", "p2p-sync", "peer-1", "12"];
const parsed = parseArgs();
-expect(parsed.databasePath).toBe("./vault");
+expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-sync");
expect(parsed.commandArgs).toEqual(["peer-1", "12"]);
});
it("parses p2p-host command", () => {
-process.argv = ["node", "livesync-cli", "./vault", "p2p-host"];
+process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"];
const parsed = parseArgs();
-expect(parsed.databasePath).toBe("./vault");
+expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-host");
expect(parsed.commandArgs).toEqual([]);
});

View File

@@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService";
import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types";
export class NodeServiceContext extends ServiceContext {
-vaultPath: string;
+databasePath: string;
-constructor(vaultPath: string) {
+constructor(databasePath: string) {
super();
-this.vaultPath = vaultPath;
+this.databasePath = databasePath;
}
}
@@ -64,7 +64,7 @@ class NodeDatabaseService<T extends NodeServiceContext> extends DatabaseService<
): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } {
const optionPass = {
...options,
-prefix: this.context.vaultPath + nodePath.sep,
+prefix: this.context.databasePath + nodePath.sep,
};
const passSettings = { ...settings, useIndexedDBAdapter: false };
return super.modifyDatabaseOptions(passSettings, name, optionPass);

View File

@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info "Test for Issue #860: Empty output from ls and mirror"
RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
npm run build
fi
echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"
# 1. Test 'ls' on empty database
echo "[INFO] Testing 'ls' on empty database..."
LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls)
if [[ -z "$LS_OUTPUT" ]]; then
echo "[REPRODUCED] 'ls' returned empty output for empty database."
else
echo "[INFO] 'ls' output: $LS_OUTPUT"
fi
# 2. Test 'mirror' on empty vault
echo "[INFO] Testing 'mirror' on empty vault..."
MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1)
if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then
# Note: currently it prints [Command] mirror to stderr.
# Let's see if it prints anything else.
echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)."
else
echo "[INFO] 'mirror' output: $MIRROR_OUTPUT"
fi
echo "[DONE] finished repro-860 test"

src/apps/cli/test/test-mirror-linux.sh Normal file → Executable file
View File

@@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
+DB_DIR="$WORK_DIR/db"
mkdir -p "$VAULT_DIR/test"
+mkdir -p "$DB_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
# isConfigured=true is required for mirror (canProceedScan checks this)
cli_test_mark_settings_configured "$SETTINGS_FILE"
+# Preparation: Sync settings and files logic
+DB_SETTINGS="$DB_DIR/settings.json"
+cp "$SETTINGS_FILE" "$DB_SETTINGS"
+# Helper for standard run (Separated paths)
+run_mirror_test() {
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
+}
+# Helper for compatibility run (Same path)
+run_mirror_compat() {
+run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+}
PASS=0
FAIL=0
@@ -78,19 +94,27 @@ portable_touch_timestamp() {
# Case 1: File exists only in storage → should be synced into DB after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 1: storage-only → DB ==="
+echo "=== Case 1: storage-only → DB (Separated Paths) ==="
printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+echo "[DEBUG] DB_DIR: $DB_DIR"
+echo "[DEBUG] VAULT_DIR: $VAULT_DIR"
+run_mirror_test
RESULT_FILE="$WORK_DIR/case1-cat.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"
+# Try 'ls' first to see what's in the DB
+echo "--- DB contents ---"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
+echo "-------------------"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"
if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
-assert_pass "storage-only file was synced into DB"
+assert_pass "storage-only file was synced into DB using separated paths"
else
-assert_fail "storage-only file NOT synced into DB"
+assert_fail "storage-only file NOT synced into DB with separated paths"
echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
fi
@@ -99,9 +123,9 @@ fi
# Case 2: File exists only in DB → should be restored to storage after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 2: DB-only → storage ==="
+echo "=== Case 2: DB-only → storage (Separated Paths) ==="
-printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md
+printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
assert_fail "db-only.md unexpectedly exists in storage before mirror"
@@ -109,7 +133,7 @@ else
echo "[INFO] confirmed: test/db-only.md not in storage before mirror" echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
fi fi
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror run_mirror_test
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")" STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')" assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
fi fi
else else
assert_fail "DB-only file was NOT restored to storage" assert_fail "DB-only file NOT restored to storage after mirror"
fi fi
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
# Case 3: File deleted in DB → should NOT be created in storage # Case 3: File deleted in DB → should NOT be created in storage
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
echo "" echo ""
echo "=== Case 3: DB-deleted → storage untouched ===" echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="
printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror run_mirror_test
if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
assert_pass "deleted DB entry was not restored to storage" assert_pass "deleted DB entry was not restored to storage"
@@ -143,19 +167,19 @@ fi
# Case 4: Both exist, storage is newer → DB should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 4: storage newer → DB updated ==="
+echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="
# Seed DB with old content (mtime ≈ now)
-printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md
+printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md
# Write new content to storage with a timestamp 1 hour in the future
printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
assert_pass "DB updated to match newer storage file"
else
@@ -168,16 +192,16 @@ fi
# Case 5: Both exist, DB is newer → storage should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 5: DB newer → storage updated ==="
+echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="
# Write old content to storage with a timestamp 1 hour in the past
printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"
# Write new content to DB only (mtime ≈ now, newer than the storage file)
-printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md
+printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test
STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
@@ -186,6 +210,25 @@ else
assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')" assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
fi fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 6: Compatibility test - omitted vault-path
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="
# We use VAULT_DIR as the "main" database path for this part.
printf 'compat-content\n' > "$VAULT_DIR/compat.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
# In compat mode, it should find it in the DB at root
CAT_RESULT="$WORK_DIR/compat-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
assert_pass "Compatibility mode works (omitted vault-path)"
else
assert_fail "Compatibility mode failed to sync file into DB"
fi
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
# Summary # Summary
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────

View File

@@ -0,0 +1,150 @@
# Writing CLI Tests on Deno
This guide explains how to add or update tests under `src/apps/cli/testdeno/`.
New tests should be added to the Deno suite rather than the existing bash suite, as it runs cross-platform and benefits from TypeScript.
## Scope
The Deno suite is designed for cross-platform execution, with a strong focus on Windows compatibility while keeping behaviour equivalent to existing bash tests.
## Principles
- Keep one scenario per file when practical.
- Reuse helpers from `helpers/` rather than duplicating process, Docker, or settings logic.
- Prefer deterministic data over random inputs unless randomness is explicitly required.
- Ensure every test can clean up automatically.
- Keep assertions actionable with clear failure messages.
## Directory structure
```
src/apps/cli/testdeno/
helpers/
backgroundCli.ts
cli.ts
docker.ts
env.ts
p2p.ts
settings.ts
temp.ts
test-*.ts
deno.json
```
## Test file naming
- Use `test-<feature>.ts`.
- Use names aligned with existing bash tests when porting, for example:
- `test-sync-locked-remote.ts`
- `test-p2p-sync.ts`
## Core helper usage
### Temporary workspace
Use `TempDir` and `await using` so cleanup is automatic:
```ts
await using workDir = await TempDir.create("livesync-cli-my-test");
```
### CLI execution
- `runCli(...)`: returns code and combined output.
- `runCliOrFail(...)`: throws on non-zero exit.
- `runCliWithInputOrFail(input, ...)`: for `put` and stdin-driven commands.
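A minimal sketch of driving `put` and `cat` through these helpers; the database and settings paths below are placeholders, not fixtures from the suite:
```ts
import { runCliOrFail, runCliWithInputOrFail, sanitiseCatStdout } from "./helpers/cli.ts";

// Placeholder paths; a real test derives these from a TempDir workspace.
const db = "/tmp/example-db";
const settings = "/tmp/example-db/settings.json";
// `put` reads its content from stdin, so use the stdin-aware runner.
await runCliWithInputOrFail("Hello\n", db, "--settings", settings, "put", "notes/hello.md");
// `cat` writes the document to stdout; strip the watcher banner before asserting.
const raw = await runCliOrFail(db, "--settings", settings, "cat", "notes/hello.md");
console.log(sanitiseCatStdout(raw));
```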
### Settings
- `initSettingsFile(...)`: creates a baseline settings file.
- `applyCouchdbSettings(...)`: applies CouchDB fields.
- `applyRemoteSyncSettings(...)`: applies remote and encryption fields.
- `applyP2pSettings(...)`: applies P2P fields.
- `applyP2pTestTweaks(...)`: enables P2P-only test profile.
### Docker services
- `startCouchdb(...)`, `stopCouchdb()`
- `startP2pRelay()`, `stopP2pRelay()`
### P2P discovery
- `discoverPeer(...)`
- `maybeStartLocalRelay(...)`
- `stopLocalRelayIfStarted(...)`
### Background host process
Use `startCliInBackground(...)` for long-running host mode such as `p2p-host`.
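A hedged sketch of the typical lifecycle; the marker string and timeout are illustrative, not guaranteed host output:
```ts
import { startCliInBackground } from "./helpers/backgroundCli.ts";

const host = startCliInBackground("/tmp/example-db", "p2p-host");
try {
  // Block until an expected log line appears (assumed marker), or fail fast if the host exits early.
  await host.waitUntilContains("p2p-host", 30000);
  // ... drive the peer side of the scenario here ...
} finally {
  // stop() sends SIGTERM, awaits exit, and drains the output pumps.
  const code = await host.stop();
  console.log(`host exited with code ${code}`);
}
```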
## Recommended test structure
1. Arrange
2. Act
3. Assert
4. Cleanup in `finally`
Example skeleton:
```ts
Deno.test("feature: behaviour", async () => {
await using workDir = await TempDir.create("example");
// Arrange
try {
// Act
// Assert
} finally {
// Optional explicit cleanup
}
});
```
## Reliability guidelines
- Use explicit waits only when needed for eventual consistency (see the sketch after this list).
- Re-run sync operations where the protocol is eventually consistent.
- For network-sensitive commands, use `LIVESYNC_CLI_RETRY` during debugging.
- Keep Docker container reuse disabled by default unless debugging.
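Where no dedicated helper exists yet, an explicit wait can be built on `runCli`; a sketch, with an arbitrary polling interval and deadline:
```ts
import { runCli } from "./helpers/cli.ts";

// Poll `cat` until the document becomes readable or the deadline passes.
async function waitForDoc(db: string, settings: string, doc: string, timeoutMs = 15000): Promise<string> {
  const started = Date.now();
  while (Date.now() - started < timeoutMs) {
    const result = await runCli(db, "--settings", settings, "cat", doc);
    if (result.code === 0) return result.stdout;
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  throw new Error(`Timed out waiting for ${doc}`);
}
```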
## Environment variables
Common variables:
- `LIVESYNC_DOCKER_MODE`
- `LIVESYNC_DOCKER_COMMAND`
- `LIVESYNC_TEST_TEE`
- `LIVESYNC_DOCKER_TEE`
- `LIVESYNC_CLI_DEBUG`
- `LIVESYNC_CLI_VERBOSE`
- `LIVESYNC_CLI_RETRY`
- `LIVESYNC_DEBUG_KEEP_DOCKER`
P2P variables:
- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY`
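The helpers read these switches directly from the environment, so individual tests rarely need to; for reference, the pattern used in `helpers/cli.ts` looks like:
```ts
// Flags are opt-in via "1"; the retry count defaults to zero.
const verbose = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const debug = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";
const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
```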
## Adding a new test task
1. Add the test file under `src/apps/cli/testdeno/`.
2. Add a task in `src/apps/cli/testdeno/deno.json` (see the snippet after this list).
3. Update `src/apps/cli/testdeno/test_dev_deno.md`.
4. Run the new task locally.
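For step 2, the task entry follows the existing naming pattern; `my-feature` here is a hypothetical name:
```json
{
  "tasks": {
    "test:my-feature": "deno test -A --no-check test-my-feature.ts"
  }
}
```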
## Validation checklist
- The test passes on a clean workspace.
- The test does not leave persistent artefacts unless explicitly requested.
- Failure messages identify both expected and actual behaviour.
- The corresponding task is documented.
## Out of scope for this suite
- One-off reproduction scripts that are not intended as stable regression tests.

View File

@@ -0,0 +1,22 @@
{
"tasks": {
"test": "deno test -A --no-check test-*.ts",
"test:local": "deno test -A --no-check test-setup-put-cat.ts test-mirror.ts",
"test:push-pull": "deno test -A --no-check test-push-pull.ts",
"test:setup-put-cat": "deno test -A --no-check test-setup-put-cat.ts",
"test:mirror": "deno test -A --no-check test-mirror.ts",
"test:sync-two-local": "deno test -A --no-check test-sync-two-local-databases.ts",
"test:sync-locked-remote": "deno test -A --no-check test-sync-locked-remote.ts",
"test:p2p-host": "deno test -A --no-check test-p2p-host.ts",
"test:p2p-peers": "deno test -A --no-check test-p2p-peers-local-relay.ts",
"test:p2p-sync": "deno test -A --no-check test-p2p-sync.ts",
"test:p2p-three-nodes": "deno test -A --no-check test-p2p-three-nodes-conflict.ts",
"test:p2p-upload-download": "deno test -A --no-check test-p2p-upload-download-repro.ts",
"test:e2e-couchdb": "deno test -A --no-check test-e2e-two-vaults-couchdb.ts",
"test:e2e-matrix": "deno test -A --no-check test-e2e-two-vaults-matrix.ts"
},
"imports": {
"@std/assert": "jsr:@std/assert@^1.0.13",
"@std/path": "jsr:@std/path@^1.0.9"
}
}

src/apps/cli/testdeno/deno.lock generated Normal file
View File

@@ -0,0 +1,31 @@
{
"version": "5",
"specifiers": {
"jsr:@std/assert@^1.0.13": "1.0.19",
"jsr:@std/internal@^1.0.12": "1.0.12",
"jsr:@std/path@^1.0.9": "1.1.4"
},
"jsr": {
"@std/assert@1.0.19": {
"integrity": "eaada96ee120cb980bc47e040f82814d786fe8162ecc53c91d8df60b8755991e",
"dependencies": [
"jsr:@std/internal"
]
},
"@std/internal@1.0.12": {
"integrity": "972a634fd5bc34b242024402972cd5143eac68d8dffaca5eaa4dba30ce17b027"
},
"@std/path@1.1.4": {
"integrity": "1d2d43f39efb1b42f0b1882a25486647cb851481862dc7313390b2bb044314b5",
"dependencies": [
"jsr:@std/internal"
]
}
},
"workspace": {
"dependencies": [
"jsr:@std/assert@^1.0.13",
"jsr:@std/path@^1.0.9"
]
}
}

View File

@@ -0,0 +1,112 @@
import { CLI_DIR } from "./cli.ts";
import { join } from "@std/path";
const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";
function decorateArgs(args: string[]): string[] {
return DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
}
async function pump(
stream: ReadableStream<Uint8Array>,
sink: (text: string) => void,
teeTarget: WritableStream<Uint8Array> | null
): Promise<void> {
const reader = stream.getReader();
const writer = teeTarget?.getWriter();
const dec = new TextDecoder();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (!value) continue;
sink(dec.decode(value, { stream: true }));
if (writer) {
await writer.write(value);
}
}
} finally {
if (writer) writer.releaseLock();
reader.releaseLock();
}
}
export class BackgroundCliProcess {
#stdout = "";
#stderr = "";
#stdoutDone: Promise<void>;
#stderrDone: Promise<void>;
constructor(
readonly child: Deno.ChildProcess,
readonly args: string[]
) {
this.#stdoutDone = pump(
child.stdout,
(text) => {
this.#stdout += text;
},
null
);
this.#stderrDone = pump(
child.stderr,
(text) => {
this.#stderr += text;
},
null
);
}
get stdout(): string {
return this.#stdout;
}
get stderr(): string {
return this.#stderr;
}
get combined(): string {
return this.#stdout + this.#stderr;
}
async waitUntilContains(needle: string, timeoutMs = 15000): Promise<void> {
const started = Date.now();
while (Date.now() - started < timeoutMs) {
if (this.combined.includes(needle)) return;
const status = await Promise.race([
this.child.status.then((s) => ({ type: "status" as const, status: s })),
new Promise<{ type: "tick" }>((resolve) => setTimeout(() => resolve({ type: "tick" }), 100)),
]);
if (status.type === "status") {
throw new Error(
`Background CLI exited before '${needle}' appeared (code ${status.status.code})\n${this.combined}`
);
}
}
throw new Error(`Timed out waiting for '${needle}'\n${this.combined}`);
}
async stop(): Promise<number> {
try {
this.child.kill("SIGTERM");
} catch {
// ignore already-exited processes
}
const status = await this.child.status;
await Promise.all([this.#stdoutDone, this.#stderrDone]);
return status.code;
}
}
export function startCliInBackground(...args: string[]): BackgroundCliProcess {
const child = new Deno.Command("node", {
args: [CLI_DIST, ...decorateArgs(args)],
cwd: CLI_DIR,
stdin: "null",
stdout: "piped",
stderr: "piped",
}).spawn();
return new BackgroundCliProcess(child, args);
}

View File

@@ -0,0 +1,231 @@
import { join } from "@std/path";
// ---------------------------------------------------------------------------
// Path resolution
// ---------------------------------------------------------------------------
// This file lives at: src/apps/cli/testdeno/helpers/cli.ts
// CLI root (src/apps/cli/) is two levels up.
// import.meta.dirname is available in Deno 1.40+ as an OS-native path string.
export const CLI_DIR: string = join(import.meta.dirname!, "..", "..");
const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");
// ---------------------------------------------------------------------------
// Result type
// ---------------------------------------------------------------------------
export interface CliResult {
stdout: string;
stderr: string;
/** stdout + stderr concatenated — useful for assertion messages. */
combined: string;
code: number;
}
const TEE_ENABLED = Deno.env.get("LIVESYNC_TEST_TEE") === "1";
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
function concatChunks(chunks: Uint8Array[]): Uint8Array {
const total = chunks.reduce((n, c) => n + c.length, 0);
const out = new Uint8Array(total);
let offset = 0;
for (const c of chunks) {
out.set(c, offset);
offset += c.length;
}
return out;
}
async function collectStream(
stream: ReadableStream<Uint8Array>,
teeTarget: WritableStream<Uint8Array> | null
): Promise<Uint8Array> {
const reader = stream.getReader();
const chunks: Uint8Array[] = [];
const writer = teeTarget?.getWriter();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (value) {
chunks.push(value);
if (writer) {
await writer.write(value);
}
}
}
} finally {
if (writer) {
writer.releaseLock();
}
reader.releaseLock();
}
return concatChunks(chunks);
}
async function runNodeCommand(args: string[], stdinData?: Uint8Array): Promise<CliResult> {
const cliArgs = DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
const child = new Deno.Command("node", {
args: [CLI_DIST, ...cliArgs],
cwd: CLI_DIR,
stdin: stdinData ? "piped" : "null",
stdout: "piped",
stderr: "piped",
}).spawn();
const stdoutPromise = collectStream(child.stdout, TEE_ENABLED ? Deno.stdout.writable : null);
const stderrPromise = collectStream(child.stderr, TEE_ENABLED ? Deno.stderr.writable : null);
if (stdinData) {
const w = child.stdin.getWriter();
await w.write(stdinData);
await w.close();
}
const [status, stdout, stderr] = await Promise.all([child.status, stdoutPromise, stderrPromise]);
const dec = new TextDecoder();
const out = dec.decode(stdout);
const err = dec.decode(stderr);
return { stdout: out, stderr: err, combined: out + err, code: status.code };
}
function isTransientNetworkError(message: string): boolean {
const m = message.toLowerCase();
return (
m.includes("fetch failed") ||
m.includes("econnreset") ||
m.includes("econnrefused") ||
m.includes("und_err_socket") ||
m.includes("other side closed")
);
}
// ---------------------------------------------------------------------------
// Core runners
// ---------------------------------------------------------------------------
/**
* Run the CLI (node dist/index.cjs) with the supplied arguments.
* Pass the vault / DB path as the first argument, exactly as the bash helpers
* do. Does NOT throw on non-zero exit — check `.code` yourself.
*/
export async function runCli(...args: string[]): Promise<CliResult> {
const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
for (let attempt = 0; ; attempt++) {
const result = await runNodeCommand(args);
if (result.code === 0) return result;
if (attempt >= retries || !isTransientNetworkError(result.combined)) {
return result;
}
const waitMs = 400 * (attempt + 1);
console.warn(`[WARN] transient CLI failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
await sleep(waitMs);
}
}
/**
* Run the CLI and throw if it exits non-zero. Returns stdout.
*/
export async function runCliOrFail(...args: string[]): Promise<string> {
const r = await runCli(...args);
if (r.code !== 0) {
throw new Error(`CLI exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
}
return r.stdout;
}
/**
* Run the CLI with data piped to stdin (equivalent to `echo … | run_cli …`
* or `cat file | run_cli …`).
*/
export async function runCliWithInput(input: string | Uint8Array, ...args: string[]): Promise<CliResult> {
const data = typeof input === "string" ? new TextEncoder().encode(input) : input;
const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
for (let attempt = 0; ; attempt++) {
const result = await runNodeCommand(args, data);
if (result.code === 0) return result;
if (attempt >= retries || !isTransientNetworkError(result.combined)) {
return result;
}
const waitMs = 400 * (attempt + 1);
console.warn(`[WARN] transient CLI(stdin) failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
await sleep(waitMs);
}
}
/**
* runCliWithInput — throws on non-zero exit, returns stdout.
*/
export async function runCliWithInputOrFail(input: string | Uint8Array, ...args: string[]): Promise<string> {
const r = await runCliWithInput(input, ...args);
if (r.code !== 0) {
throw new Error(`CLI (with stdin) exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
}
return r.stdout;
}
// ---------------------------------------------------------------------------
// Output helpers
// ---------------------------------------------------------------------------
/** Strip the CLIWatchAdapter banner line that `cat` emits. */
export function sanitiseCatStdout(raw: string): string {
return raw
.split("\n")
.filter((l) => l !== "[CLIWatchAdapter] File watching is not enabled in CLI version")
.join("\n");
}
// ---------------------------------------------------------------------------
// Assertions (parity with test-helpers.sh)
// ---------------------------------------------------------------------------
export function assertContains(haystack: string, needle: string, message: string): void {
if (!haystack.includes(needle)) {
throw new Error(`[FAIL] ${message}\nExpected to find: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
}
}
export function assertNotContains(haystack: string, needle: string, message: string): void {
if (haystack.includes(needle)) {
throw new Error(`[FAIL] ${message}\nDid NOT expect: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
}
}
export async function assertFilesEqual(expectedPath: string, actualPath: string, message: string): Promise<void> {
const [expected, actual] = await Promise.all([Deno.readFile(expectedPath), Deno.readFile(actualPath)]);
if (expected.length !== actual.length || expected.some((b, i) => b !== actual[i])) {
const hex = async (d: Uint8Array<ArrayBuffer>) => {
const h = await crypto.subtle.digest("SHA-256", d);
return [...new Uint8Array(h)].map((b) => b.toString(16).padStart(2, "0")).join("");
};
throw new Error(
`[FAIL] ${message}\nexpected SHA-256: ${await hex(expected)}\nactual SHA-256: ${await hex(actual)}`
);
}
}
// ---------------------------------------------------------------------------
// JSON helpers
// ---------------------------------------------------------------------------
export async function readJsonFile<T = Record<string, unknown>>(filePath: string): Promise<T> {
return JSON.parse(await Deno.readTextFile(filePath)) as T;
}
export function jsonStringField(jsonText: string, field: string): string {
const data = JSON.parse(jsonText) as Record<string, unknown>;
const value = data[field];
return typeof value === "string" ? value : "";
}
export function jsonFieldIsNa(data: Record<string, unknown>, field: string): boolean {
return data[field] === "N/A";
}

View File

@@ -0,0 +1,530 @@
/**
* Docker service management for tests.
*
* CouchDB start/stop/init is implemented directly using `docker` CLI commands
* and the Fetch API, so it works on any platform where Docker (Desktop) is
* available — including Windows — without needing bash.
*/
type DockerInvoker = {
bin: string;
prefix: string[];
label: string;
};
let dockerInvokerPromise: Promise<DockerInvoker> | null = null;
const DOCKER_TEE = Deno.env.get("LIVESYNC_DOCKER_TEE") === "1" || Deno.env.get("LIVESYNC_TEST_TEE") === "1";
// ---------------------------------------------------------------------------
// Low-level docker wrapper
// ---------------------------------------------------------------------------
function parseCommand(command: string): { bin: string; prefix: string[] } {
const parts = command.trim().split(/\s+/).filter(Boolean);
if (parts.length === 0) {
throw new Error("LIVESYNC_DOCKER_COMMAND is empty");
}
return { bin: parts[0], prefix: parts.slice(1) };
}
async function runCommand(bin: string, args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
const cmd = new Deno.Command(bin, {
args,
stdin: "null",
stdout: "piped",
stderr: "piped",
});
try {
const { code, stdout, stderr } = await cmd.output();
const dec = new TextDecoder();
const result = {
code,
stdout: dec.decode(stdout),
stderr: dec.decode(stderr),
};
if (DOCKER_TEE) {
if (result.stdout.trim().length > 0) {
console.log(`[docker:${bin}] ${result.stdout.trimEnd()}`);
}
if (result.stderr.trim().length > 0) {
console.error(`[docker:${bin}] ${result.stderr.trimEnd()}`);
}
}
return result;
} catch (err) {
if (err instanceof Deno.errors.NotFound) {
return {
code: 127,
stdout: "",
stderr: `Command not found: ${bin}`,
};
}
throw err;
}
}
async function resolveDockerInvoker(): Promise<DockerInvoker> {
const custom = Deno.env.get("LIVESYNC_DOCKER_COMMAND")?.trim();
if (custom) {
const parsed = parseCommand(custom);
const runner: DockerInvoker = {
...parsed,
label: `custom(${custom})`,
};
// Validate custom command eagerly so misconfiguration fails fast.
// Mirror the dispatch in docker(): a bare `wsl` needs `docker` appended;
// any other custom command already names the docker binary itself.
const checkArgs =
runner.prefix.length === 0 && runner.bin === "wsl"
? ["docker", "--version"]
: [...runner.prefix, "--version"];
const check = await runCommand(runner.bin, checkArgs);
if (check.code !== 0) {
throw new Error(`LIVESYNC_DOCKER_COMMAND is not usable: ${custom}\n${check.stderr || check.stdout}`);
}
return runner;
}
const mode = (Deno.env.get("LIVESYNC_DOCKER_MODE") ?? "auto").toLowerCase();
const onWindows = Deno.build.os === "windows";
const native: DockerInvoker = { bin: "docker", prefix: [], label: "docker" };
const wsl: DockerInvoker = { bin: "wsl", prefix: [], label: "wsl docker" };
if (mode === "native") {
return native;
}
if (mode === "wsl") {
return wsl;
}
if (mode !== "auto") {
throw new Error(`Unsupported LIVESYNC_DOCKER_MODE='${mode}'. Use auto, native, or wsl.`);
}
// On Windows we prefer `wsl docker` first, then native docker.
// This typically works better in setups where Docker is installed only in
// WSL and not exposed as docker.exe on PATH.
const candidates = onWindows ? [wsl, native] : [native, wsl];
for (const c of candidates) {
if (c.bin === "docker") {
const r = await runCommand("docker", ["--version"]);
if (r.code === 0) return c;
continue;
}
const r = await runCommand("wsl", ["docker", "--version"]);
if (r.code === 0) return c;
}
throw new Error(
[
"Docker command is not available.",
"Set one of:",
"- LIVESYNC_DOCKER_MODE=native",
"- LIVESYNC_DOCKER_MODE=wsl",
"- LIVESYNC_DOCKER_COMMAND='docker'",
"- LIVESYNC_DOCKER_COMMAND='wsl docker'",
].join("\n")
);
}
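// Configuration examples for the resolver above (values are illustrative):
//   LIVESYNC_DOCKER_MODE=wsl               -> runs `wsl docker <args>`
//   LIVESYNC_DOCKER_COMMAND='wsl docker'   -> same invocation, validated eagerly
//   LIVESYNC_DOCKER_COMMAND='podman'       -> assumes a docker-compatible CLI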
async function getDockerInvoker(): Promise<DockerInvoker> {
if (!dockerInvokerPromise) {
dockerInvokerPromise = resolveDockerInvoker().then((r) => {
console.log(`[INFO] docker runner: ${r.label}`);
return r;
});
}
return await dockerInvokerPromise;
}
async function docker(...args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
const invoker = await getDockerInvoker();
// Either:
// docker <args>
// Or:
// wsl docker <args>
const finalArgs =
invoker.prefix.length === 0
? invoker.bin === "wsl"
? ["docker", ...args]
: args
: [...invoker.prefix, ...args];
const r = await runCommand(invoker.bin, finalArgs);
return { code: r.code, stdout: r.stdout, stderr: r.stderr };
}
async function dockerOrFail(...args: string[]): Promise<string> {
const r = await docker(...args);
if (r.code !== 0) {
throw new Error(`docker ${args[0]} failed (code ${r.code}): ${r.stderr.trim()}`);
}
return r.stdout;
}
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
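/**
* Poll CouchDB's `/_up` endpoint until it responds OK three times in a row
* (up to 30 attempts at 500ms intervals), so that a node which is only
* momentarily reachable is not treated as stable.
*/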
async function waitForCouchdbStable(hostname: string, user: string, password: string): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = { Authorization: `Basic ${auth}` };
let consecutive = 0;
for (let i = 0; i < 30; i++) {
try {
const r = await fetch(`${h}/_up`, {
headers,
signal: AbortSignal.timeout(3000),
});
if (r.ok) {
consecutive++;
if (consecutive >= 3) return;
} else {
consecutive = 0;
}
} catch {
consecutive = 0;
}
await sleep(500);
}
throw new Error("CouchDB did not become stable in time");
}
// ---------------------------------------------------------------------------
// Fetch with retry (mirrors cli_test_curl_json() retry loop)
// ---------------------------------------------------------------------------
async function fetchRetry(
url: string,
init: RequestInit,
retries = 30,
delayMs = 2000,
allowStatus: number[] = []
): Promise<void> {
let lastError: unknown;
let lastStatus: number | undefined;
for (let i = 0; i < retries; i++) {
try {
const r = await fetch(url, {
signal: AbortSignal.timeout(5000),
...init,
});
lastStatus = r.status;
await r.body?.cancel().catch(() => {});
if (r.ok || allowStatus.includes(r.status)) return;
lastError = `HTTP ${r.status}`;
} catch (e) {
lastError = e;
}
await sleep(delayMs);
}
throw new Error(
`Could not reach ${url} after ${retries} retries: ${lastError} (last status: ${lastStatus ?? "N/A"})`
);
}
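// Usage sketch: creating a database retries until CouchDB accepts the PUT;
// HTTP 412 (database already exists) could be tolerated via `allowStatus`:
//   await fetchRetry(`${h}/${dbname}`, { method: "PUT", headers }, 30, 2000, [412]);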
// ---------------------------------------------------------------------------
// CouchDB
// ---------------------------------------------------------------------------
//
// TODO: these values could be configurable via environment variables.
//
const COUCHDB_CONTAINER = "couchdb-test";
const COUCHDB_IMAGE = "couchdb:3.5.0";
const MINIO_CONTAINER = "minio-test";
const MINIO_IMAGE = "minio/minio";
const MINIO_MC_IMAGE = "minio/mc";
export async function stopCouchdb(): Promise<void> {
await docker("stop", COUCHDB_CONTAINER);
await docker("rm", COUCHDB_CONTAINER);
}
/**
* Start a CouchDB test container, initialise it, and create the test DB.
* Mirrors cli_test_start_couchdb() from test-helpers.sh, using direct
* docker / fetch calls instead of the bash util scripts.
*/
export async function startCouchdb(couchdbUri: string, user: string, password: string, dbname: string): Promise<void> {
console.log("[INFO] stopping leftover CouchDB container if present");
await stopCouchdb().catch(() => {});
console.log("[INFO] starting CouchDB test container");
await dockerOrFail(
"run",
"-d",
"--name",
COUCHDB_CONTAINER,
"-p",
// TODO: port mapping should be configurable.
"5989:5984",
"-e",
`COUCHDB_USER=${user}`,
"-e",
`COUCHDB_PASSWORD=${password}`,
"-e",
"COUCHDB_SINGLE_NODE=y",
COUCHDB_IMAGE
);
console.log("[INFO] initialising CouchDB");
await initCouchdb(couchdbUri, user, password);
console.log("[INFO] waiting for CouchDB to become stable");
await waitForCouchdbStable(couchdbUri, user, password);
console.log(`[INFO] creating test database: ${dbname}`);
await createCouchdbDatabase(couchdbUri, user, password, dbname);
}
/**
* Mirror couchdb-init.sh: configure single-node CouchDB via its REST API.
*/
async function initCouchdb(hostname: string, user: string, password: string, node = "_local"): Promise<void> {
// Podman environments often resolve localhost to ::1; use 127.0.0.1 like
// the bash script does.
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = {
"Content-Type": "application/json",
Authorization: `Basic ${auth}`,
};
const calls: Array<[string, string, string]> = [
[
"POST",
`${h}/_cluster_setup`,
JSON.stringify({
action: "enable_single_node",
username: user,
password,
bind_address: "0.0.0.0",
port: 5984,
singlenode: true,
}),
],
["PUT", `${h}/_node/${node}/_config/chttpd/require_valid_user`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd_auth/require_valid_user`, '"true"'],
["PUT", `${h}/_node/${node}/_config/httpd/WWW-Authenticate`, '"Basic realm=\\"couchdb\\""'],
["PUT", `${h}/_node/${node}/_config/httpd/enable_cors`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd/enable_cors`, '"true"'],
["PUT", `${h}/_node/${node}/_config/chttpd/max_http_request_size`, '"4294967296"'],
["PUT", `${h}/_node/${node}/_config/couchdb/max_document_size`, '"50000000"'],
["PUT", `${h}/_node/${node}/_config/cors/credentials`, '"true"'],
["PUT", `${h}/_node/${node}/_config/cors/origins`, '"*"'],
];
for (const [method, url, body] of calls) {
await fetchRetry(url, { method, headers, body });
}
}
export async function createCouchdbDatabase(
hostname: string,
user: string,
password: string,
dbname: string
): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
await fetchRetry(`${h}/${dbname}`, {
method: "PUT",
headers: { Authorization: `Basic ${auth}` },
});
}
/** Update a CouchDB document: GET the current revision, apply the updater, and PUT the result back. */
export async function updateCouchdbDoc(
hostname: string,
user: string,
password: string,
docUrl: string,
updater: (doc: Record<string, unknown>) => Record<string, unknown>
): Promise<void> {
const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
const auth = btoa(`${user}:${password}`);
const headers = {
"Content-Type": "application/json",
Authorization: `Basic ${auth}`,
};
const getRes = await fetch(`${h}/${docUrl}`, { headers });
if (!getRes.ok) {
throw new Error(`GET ${docUrl} failed: HTTP ${getRes.status}`);
}
const current = (await getRes.json()) as Record<string, unknown>;
const updated = updater(current);
await fetchRetry(`${h}/${docUrl}`, {
method: "PUT",
headers,
body: JSON.stringify(updated),
});
}
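// Usage sketch (docUrl and updater are illustrative):
//   await updateCouchdbDoc(uri, user, pass, `${dbname}/${docId}`,
//     (doc) => ({ ...doc, mtime: Date.now() }));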
// ---------------------------------------------------------------------------
// MinIO
// ---------------------------------------------------------------------------
function shQuote(value: string): string {
return `'${value.split("'").join(`'"'"'`)}'`;
}
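// Worked example: shQuote(`it's`) returns `'it'"'"'s'`: the single-quoted
// segment is closed, a double-quoted ' is emitted, then quoting reopens.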
export async function stopMinio(): Promise<void> {
await docker("stop", MINIO_CONTAINER);
await docker("rm", MINIO_CONTAINER);
}
async function initMinioBucket(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<boolean> {
const cmd =
`mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
`mc mb --ignore-existing myminio/${shQuote(bucket)} >/dev/null 2>&1`;
const r = await docker("run", "--rm", "--network", "host", "--entrypoint", "/bin/sh", MINIO_MC_IMAGE, "-c", cmd);
return r.code === 0;
}
async function waitForMinioBucket(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<void> {
for (let i = 0; i < 30; i++) {
const checkCmd =
`mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
`mc ls myminio/${shQuote(bucket)} >/dev/null 2>&1`;
const check = await docker(
"run",
"--rm",
"--network",
// Host networking is used so the mc container can reach MinIO via localhost
// in some environments (e.g. Docker Desktop on Windows).
// TODO: find an approach that works reliably across all environments.
"host",
"--entrypoint",
"/bin/sh",
MINIO_MC_IMAGE,
"-c",
checkCmd
);
if (check.code === 0) {
return;
}
await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
await sleep(2000);
}
throw new Error(`MinIO bucket not ready: ${bucket}`);
}
export async function startMinio(
minioEndpoint: string,
accessKey: string,
secretKey: string,
bucket: string
): Promise<void> {
console.log("[INFO] stopping leftover MinIO container if present");
await stopMinio().catch(() => {});
console.log("[INFO] starting MinIO test container");
await dockerOrFail(
"run",
"-d",
"--name",
MINIO_CONTAINER,
// TODO: Ports should be configurable.
"-p",
"9000:9000",
"-p",
"9001:9001",
"-e",
`MINIO_ROOT_USER=${accessKey}`,
"-e",
`MINIO_ROOT_PASSWORD=${secretKey}`,
"-e",
`MINIO_SERVER_URL=${minioEndpoint}`,
MINIO_IMAGE,
"server",
"/data",
"--console-address",
":9001"
);
console.log(`[INFO] initialising MinIO test bucket: ${bucket}`);
let initialised = false;
for (let i = 0; i < 5; i++) {
if (await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket)) {
initialised = true;
break;
}
await sleep(2000);
}
if (!initialised) {
throw new Error(`Could not initialise MinIO bucket after retries: ${bucket}`);
}
await waitForMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
}
// ---------------------------------------------------------------------------
// P2P relay (strfry)
// ---------------------------------------------------------------------------
// TODO: these values could be configurable via environment variables.
const P2P_RELAY_CONTAINER = "relay-test";
const P2P_RELAY_IMAGE = "ghcr.io/hoytech/strfry:latest";
const STRFRY_BOOTSTRAP_SH = String.raw`cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"
relay {
bind = "0.0.0.0"
port = 7777
nofiles = 100000
info {
name = "livesync test relay"
description = "local relay for livesync p2p tests"
}
maxWebsocketPayloadSize = 131072
autoPingSeconds = 55
writePolicy {
plugin = ""
}
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay`;
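// The relay listens on port 7777 inside the container; startP2pRelay below
// maps it to host port 4000 (`-p 4000:7777`), matching the default
// ws://localhost:4000 relay URL used by the tests.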
export async function stopP2pRelay(): Promise<void> {
await docker("stop", P2P_RELAY_CONTAINER);
await docker("rm", P2P_RELAY_CONTAINER);
}
/**
* Start the local P2P relay container through the same docker runner used
* by CouchDB helpers. This keeps process ownership consistent across
* start/stop on Windows, WSL, and native Linux/macOS.
*/
export async function startP2pRelay(): Promise<void> {
console.log("[INFO] stopping leftover P2P relay container if present");
await stopP2pRelay().catch(() => {});
console.log("[INFO] starting local P2P relay container");
await dockerOrFail(
"run",
"-d",
"--name",
P2P_RELAY_CONTAINER,
"-p",
// TODO: port mapping should be configurable.
"4000:7777",
"--tmpfs",
"/app/strfry-db:rw,size=256m",
"--entrypoint",
"sh",
P2P_RELAY_IMAGE,
"-lc",
STRFRY_BOOTSTRAP_SH
);
}
export function isLocalP2pRelay(relayUrl: string): boolean {
return relayUrl === "ws://localhost:4000" || relayUrl === "ws://localhost:4000/";
}


@@ -0,0 +1,26 @@
/**
* Load a .env-style file (KEY=value per line) into a plain object.
* Equivalent to `source $TEST_ENV_FILE; set -a` in bash.
* We could adopt a library for this; for now, this minimal implementation covers our use cases.
*
* Supported value formats:
* KEY=value
* KEY='single quoted'
* KEY="double quoted"
* # comment lines are ignored
*/
export async function loadEnvFile(filePath: string): Promise<Record<string, string>> {
const text = await Deno.readTextFile(filePath);
const result: Record<string, string> = {};
for (const line of text.split("\n")) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
const idx = trimmed.indexOf("=");
if (idx < 0) continue;
const key = trimmed.slice(0, idx).trim();
const raw = trimmed.slice(idx + 1).trim();
// Strip surrounding single or double quotes
result[key] = raw.replace(/^(['"])(.*)\1$/, "$2");
}
return result;
}
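// Example .test.env (keys match what runScenario() in the e2e test reads;
// values are illustrative):
//   hostname=http://127.0.0.1:5989
//   username=admin
//   password=password
//   dbname=livesync-test
//   minioEndpoint=http://127.0.0.1:9000
//   accessKey=minioadmin
//   secretKey=minioadmin
//   bucketName=livesync-bucket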


@@ -0,0 +1,52 @@
import { runCli } from "./cli.ts";
import { isLocalP2pRelay, startP2pRelay, stopP2pRelay } from "./docker.ts";
export type PeerEntry = {
id: string;
name: string;
};
export function parsePeerLines(output: string): PeerEntry[] {
return output
.split(/\r?\n/)
.map((line) => line.split("\t"))
.filter((parts) => parts.length >= 3 && parts[0] === "[peer]")
.map((parts) => ({ id: parts[1], name: parts[2] }));
}
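// parsePeerLines expects tab-separated lines of the form "[peer]\t<id>\t<name>";
// e.g. parsePeerLines("[peer]\tabc123\tlaptop") yields [{ id: "abc123", name: "laptop" }].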
export async function discoverPeer(
vaultDir: string,
settingsFile: string,
timeoutSeconds: number,
targetPeer?: string
): Promise<PeerEntry> {
const result = await runCli(vaultDir, "--settings", settingsFile, "p2p-peers", String(timeoutSeconds));
if (result.code !== 0) {
throw new Error(`p2p-peers failed\n${result.combined}`);
}
const peers = parsePeerLines(result.stdout);
if (targetPeer) {
const matched = peers.find((peer) => peer.id === targetPeer || peer.name === targetPeer);
if (matched) return matched;
}
if (peers.length === 0) {
const fallback = result.combined.match(/Advertisement from\s+([^\s]+)/);
if (fallback?.[1]) {
return { id: fallback[1], name: fallback[1] };
}
throw new Error(`No peers discovered\n${result.combined}`);
}
return peers[0];
}
export async function maybeStartLocalRelay(relay: string): Promise<boolean> {
if (!isLocalP2pRelay(relay)) return false;
await startP2pRelay();
return true;
}
export async function stopLocalRelayIfStarted(started: boolean): Promise<void> {
if (started) {
await stopP2pRelay().catch(() => {});
}
}


@@ -0,0 +1,205 @@
import { join } from "@std/path";
import { CLI_DIR, runCliOrFail } from "./cli.ts";
// ---------------------------------------------------------------------------
// Settings file initialisation
// ---------------------------------------------------------------------------
/** Generate a default settings file using the CLI's init-settings command. */
export async function initSettingsFile(settingsFile: string): Promise<void> {
await runCliOrFail("init-settings", "--force", settingsFile);
}
/**
* Generate a full setup URI from a settings file via src/lib API.
* Mirrors the bash flow in test-setup-put-cat-linux.sh.
*/
export async function generateSetupUriFromSettings(settingsFile: string, setupPassphrase: string): Promise<string> {
const repoRoot = join(CLI_DIR, "..", "..", "..");
const script = [
"import fs from 'node:fs';",
"import { pathToFileURL } from 'node:url';",
"(async () => {",
" const modulePath = process.env.REPO_ROOT + '/src/lib/src/API/processSetting.ts';",
" const moduleUrl = pathToFileURL(modulePath).href;",
" const { encodeSettingsToSetupURI } = await import(moduleUrl);",
" const settingsPath = process.env.SETTINGS_FILE;",
" const passphrase = process.env.SETUP_PASSPHRASE;",
" const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf-8'));",
" settings.couchDB_DBNAME = 'setup-put-cat-db';",
" settings.couchDB_URI = 'http://127.0.0.1:5999';",
" settings.couchDB_USER = 'dummy';",
" settings.couchDB_PASSWORD = 'dummy';",
" settings.liveSync = false;",
" settings.syncOnStart = false;",
" settings.syncOnSave = false;",
" const uri = await encodeSettingsToSetupURI(settings, passphrase);",
" process.stdout.write(uri.trim());",
"})();",
].join("\n");
const scriptPath = await Deno.makeTempFile({
prefix: "livesync-setup-uri-",
suffix: ".mts",
});
await Deno.writeTextFile(scriptPath, script);
try {
const cmd = new Deno.Command("npx", {
args: ["tsx", scriptPath],
cwd: CLI_DIR,
env: {
REPO_ROOT: repoRoot,
SETTINGS_FILE: settingsFile,
SETUP_PASSPHRASE: setupPassphrase,
},
stdin: "null",
stdout: "piped",
stderr: "piped",
});
const { code, stdout, stderr } = await cmd.output();
const dec = new TextDecoder();
if (code !== 0) {
throw new Error(
`Failed to generate setup URI (code ${code})\nstdout: ${dec.decode(stdout)}\nstderr: ${dec.decode(stderr)}`
);
}
const uri = dec.decode(stdout).trim();
if (!uri) {
throw new Error("Failed to generate setup URI: output is empty");
}
return uri;
} finally {
await Deno.remove(scriptPath).catch(() => {});
}
}
/** Set isConfigured=true in a settings file (required for mirror / scan). */
export async function markSettingsConfigured(settingsFile: string): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
// ---------------------------------------------------------------------------
// CouchDB remote settings
// ---------------------------------------------------------------------------
/**
* Apply CouchDB connection details to a settings file.
* Mirrors cli_test_apply_couchdb_settings() from test-helpers.sh.
*/
export async function applyCouchdbSettings(
settingsFile: string,
couchdbUri: string,
couchdbUser: string,
couchdbPassword: string,
couchdbDbname: string,
liveSync = false
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.couchDB_URI = couchdbUri;
data.couchDB_USER = couchdbUser;
data.couchDB_PASSWORD = couchdbPassword;
data.couchDB_DBNAME = couchdbDbname;
if (liveSync) {
data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
}
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
export async function applyRemoteSyncSettings(
settingsFile: string,
options: {
remoteType: "COUCHDB" | "MINIO";
couchdbUri?: string;
couchdbUser?: string;
couchdbPassword?: string;
couchdbDbname?: string;
minioBucket?: string;
minioEndpoint?: string;
minioAccessKey?: string;
minioSecretKey?: string;
encrypt?: boolean;
passphrase?: string;
}
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
if (options.remoteType === "COUCHDB") {
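// The empty string appears to be the legacy value that selects the default CouchDB backend.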
data.remoteType = "";
data.couchDB_URI = options.couchdbUri;
data.couchDB_USER = options.couchdbUser;
data.couchDB_PASSWORD = options.couchdbPassword;
data.couchDB_DBNAME = options.couchdbDbname;
} else {
data.remoteType = "MINIO";
data.bucket = options.minioBucket;
data.endpoint = options.minioEndpoint;
data.accessKey = options.minioAccessKey;
data.secretKey = options.minioSecretKey;
data.region = "auto";
data.forcePathStyle = true;
}
data.liveSync = true;
data.syncOnStart = false;
data.syncOnSave = false;
data.usePluginSync = false;
data.encrypt = options.encrypt === true;
data.passphrase = options.encrypt ? (options.passphrase ?? "") : "";
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
// ---------------------------------------------------------------------------
// P2P settings
// ---------------------------------------------------------------------------
/**
* Apply P2P connection details to a settings file.
* Mirrors cli_test_apply_p2p_settings() from test-helpers.sh.
*/
export async function applyP2pSettings(
settingsFile: string,
roomId: string,
passphrase: string,
appId = "self-hosted-livesync-cli-tests",
relays = "ws://localhost:4000/",
autoAccept = "~.*"
): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.P2P_Enabled = true;
data.P2P_AutoStart = false;
data.P2P_AutoBroadcast = false;
data.P2P_AppID = appId;
data.P2P_roomID = roomId;
data.P2P_passphrase = passphrase;
data.P2P_relays = relays;
data.P2P_AutoAcceptingPeers = autoAccept;
data.P2P_AutoDenyingPeers = "";
data.P2P_IsHeadless = true;
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
export async function applyP2pTestTweaks(settingsFile: string, deviceName: string, passphrase: string): Promise<void> {
const data = JSON.parse(await Deno.readTextFile(settingsFile));
data.remoteType = "ONLY_P2P";
data.encrypt = true;
data.passphrase = passphrase;
data.usePathObfuscation = true;
data.handleFilenameCaseSensitive = false;
data.customChunkSize = 50;
data.usePluginSyncV2 = true;
data.doNotUseFixedRevisionForChunks = false;
data.P2P_DevicePeerName = deviceName;
data.isConfigured = true;
await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}


@@ -0,0 +1,33 @@
import { join } from "@std/path";
/**
* A temporary directory that cleans itself up via `await using`.
* Requires TypeScript 5.2+ / Deno 1.40+ for the AsyncDisposable protocol.
*
* @example
* ```ts
* await using tmp = await TempDir.create();
* const file = tmp.join("data.json");
* ```
*/
export class TempDir implements AsyncDisposable {
readonly path: string;
private constructor(path: string) {
this.path = path;
}
static async create(prefix = "livesync-deno-test"): Promise<TempDir> {
const path = await Deno.makeTempDir({ prefix: `${prefix}.` });
return new TempDir(path);
}
/** Return an OS path joined to the temp directory root. */
join(...parts: string[]): string {
return join(this.path, ...parts);
}
async [Symbol.asyncDispose](): Promise<void> {
await Deno.remove(this.path, { recursive: true }).catch(() => {});
}
}


@@ -0,0 +1,279 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { loadEnvFile } from "./helpers/env.ts";
import {
runCli,
runCliOrFail,
runCliWithInputOrFail,
sanitiseCatStdout,
assertFilesEqual,
jsonStringField,
} from "./helpers/cli.ts";
import { applyRemoteSyncSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, startMinio, stopCouchdb, stopMinio } from "./helpers/docker.ts";
import { join } from "@std/path";
const TEST_ENV = join(import.meta.dirname!, "..", ".test.env");
type RemoteType = "COUCHDB" | "MINIO";
function requireEnv(env: Record<string, string>, key: string): string {
const value = env[key]?.trim();
if (!value) throw new Error(`Required env var is missing: ${key}`);
return value;
}
export async function runScenario(remoteType: RemoteType, encrypt: boolean): Promise<void> {
const env = await loadEnvFile(TEST_ENV);
const dbSuffix = `${Date.now()}-${Math.floor(Math.random() * 100000)}`;
const couchdbUri = remoteType === "COUCHDB" ? requireEnv(env, "hostname").replace(/\/$/, "") : "";
const couchdbUser = remoteType === "COUCHDB" ? requireEnv(env, "username") : "";
const couchdbPassword = remoteType === "COUCHDB" ? requireEnv(env, "password") : "";
const dbPrefix = remoteType === "COUCHDB" ? requireEnv(env, "dbname") : "";
const dbname = remoteType === "COUCHDB" ? `${dbPrefix}-${dbSuffix}` : "";
const minioEndpoint = remoteType === "MINIO" ? requireEnv(env, "minioEndpoint").replace(/\/$/, "") : "";
const minioAccessKey = remoteType === "MINIO" ? requireEnv(env, "accessKey") : "";
const minioSecretKey = remoteType === "MINIO" ? requireEnv(env, "secretKey") : "";
const minioBucketBase = remoteType === "MINIO" ? requireEnv(env, "bucketName") : "";
const minioBucket = remoteType === "MINIO" ? `${minioBucketBase}-${dbSuffix}` : "";
const passphrase = "e2e-passphrase";
await using workDir = await TempDir.create(
`livesync-cli-e2e-${remoteType.toLowerCase()}-${encrypt ? "enc1" : "enc0"}`
);
const vaultA = workDir.join("testvault_a");
const vaultB = workDir.join("testvault_b");
const settingsA = workDir.join("test-settings-a.json");
const settingsB = workDir.join("test-settings-b.json");
const pushSrc = workDir.join("push-source.txt");
const pullDst = workDir.join("pull-destination.txt");
const pushBinarySrc = workDir.join("push-source.bin");
const pullBinaryDst = workDir.join("pull-destination.bin");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (remoteType === "COUCHDB") {
await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
} else {
await startMinio(minioEndpoint, minioAccessKey, minioSecretKey, minioBucket);
}
try {
await initSettingsFile(settingsA);
await initSettingsFile(settingsB);
await applyRemoteSyncSettings(settingsA, {
remoteType,
couchdbUri,
couchdbUser,
couchdbPassword,
couchdbDbname: dbname,
minioBucket,
minioEndpoint,
minioAccessKey,
minioSecretKey,
encrypt,
passphrase,
});
await applyRemoteSyncSettings(settingsB, {
remoteType,
couchdbUri,
couchdbUser,
couchdbPassword,
couchdbDbname: dbname,
minioBucket,
minioEndpoint,
minioAccessKey,
minioSecretKey,
encrypt,
passphrase,
});
const syncBoth = async () => {
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
};
const targetAOnly = "e2e/a-only-info.md";
const targetSync = "e2e/sync-info.md";
const targetSyncTwiceFirst = "e2e/sync-twice-first.md";
const targetSyncTwiceSecond = "e2e/sync-twice-second.md";
const targetPush = "e2e/pushed-from-a.md";
const targetPut = "e2e/put-from-a.md";
const targetPushBinary = "e2e/pushed-from-a.bin";
const targetConflict = "e2e/conflict.md";
await runCliWithInputOrFail("alpha-from-a\n", vaultA, "--settings", settingsA, "put", targetAOnly);
const infoAOnly = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetAOnly);
assert(infoAOnly.includes(`"path": "${targetAOnly}"`));
await runCliWithInputOrFail("visible-after-sync\n", vaultA, "--settings", settingsA, "put", targetSync);
await syncBoth();
const infoBSync = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetSync);
assert(infoBSync.includes(`"path": "${targetSync}"`));
await runCliWithInputOrFail(
`first-sync-round-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetSyncTwiceFirst
);
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
const firstVisible = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceFirst)
).trimEnd();
assert(firstVisible === `first-sync-round-${dbSuffix}`);
await runCliWithInputOrFail(
`second-sync-round-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetSyncTwiceSecond
);
await runCliOrFail(vaultA, "--settings", settingsA, "sync");
await runCliOrFail(vaultB, "--settings", settingsB, "sync");
const secondVisible = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceSecond)
).trimEnd();
assert(secondVisible === `second-sync-round-${dbSuffix}`);
await Deno.writeTextFile(pushSrc, `pushed-content-${dbSuffix}\n`);
await runCliOrFail(vaultA, "--settings", settingsA, "push", pushSrc, targetPush);
await runCliWithInputOrFail(`put-content-${dbSuffix}\n`, vaultA, "--settings", settingsA, "put", targetPut);
await syncBoth();
await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPush, pullDst);
await assertFilesEqual(pushSrc, pullDst, "B pull result does not match pushed source");
const catBPut = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPut)
).trimEnd();
assert(catBPut === `put-content-${dbSuffix}`);
const binary = new Uint8Array(4096);
binary.fill(0x61);
await Deno.writeFile(pushBinarySrc, binary);
await runCliOrFail(vaultA, "--settings", settingsA, "push", pushBinarySrc, targetPushBinary);
await syncBoth();
await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPushBinary, pullBinaryDst);
await assertFilesEqual(pushBinarySrc, pullBinaryDst, "B pull result does not match pushed binary source");
await runCliOrFail(vaultA, "--settings", settingsA, "rm", targetPut);
await syncBoth();
const removed = await runCli(vaultB, "--settings", settingsB, "cat", targetPut);
assert(removed.code !== 0, `B cat should fail after A removed the file\n${removed.combined}`);
await runCliWithInputOrFail("conflict-base\n", vaultA, "--settings", settingsA, "put", targetConflict);
await syncBoth();
await runCliWithInputOrFail(
`conflict-from-a-${dbSuffix}\n`,
vaultA,
"--settings",
settingsA,
"put",
targetConflict
);
await runCliWithInputOrFail(
`conflict-from-b-${dbSuffix}\n`,
vaultB,
"--settings",
settingsB,
"put",
targetConflict
);
let infoAConflict = "";
let infoBConflict = "";
let conflictDetected = false;
for (const side of ["a", "b", "a"] as const) {
await runCliOrFail(
side === "a" ? vaultA : vaultB,
"--settings",
side === "a" ? settingsA : settingsB,
"sync"
);
infoAConflict = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
infoBConflict = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
if (
jsonStringField(infoAConflict, "conflicts") !== "N/A" ||
jsonStringField(infoBConflict, "conflicts") !== "N/A"
) {
conflictDetected = true;
break;
}
}
assert(conflictDetected, `conflict was expected\nA: ${infoAConflict}\nB: ${infoBConflict}`);
const lsAConflict =
(await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const lsBConflict =
(await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const revA = lsAConflict.split("\t")[3] ?? "";
const revB = lsBConflict.split("\t")[3] ?? "";
assert(
revA.includes("*") || revB.includes("*"),
`conflicted entry should be marked with '*'\nA: ${lsAConflict}\nB: ${lsBConflict}`
);
const keepRevision = jsonStringField(infoAConflict, "revision");
assert(keepRevision.length > 0, `could not extract revision\n${infoAConflict}`);
await runCliOrFail(vaultA, "--settings", settingsA, "resolve", targetConflict, keepRevision);
let resolved = false;
let infoAResolved = "";
let infoBResolved = "";
for (let i = 0; i < 6; i++) {
await syncBoth();
infoAResolved = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
infoBResolved = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
if (
jsonStringField(infoAResolved, "conflicts") === "N/A" &&
jsonStringField(infoBResolved, "conflicts") === "N/A"
) {
resolved = true;
break;
}
const retryRevision = jsonStringField(infoAResolved, "revision");
if (retryRevision) {
await runCli(vaultA, "--settings", settingsA, "resolve", targetConflict, retryRevision);
}
}
assert(resolved, `conflicts should be resolved\nA: ${infoAResolved}\nB: ${infoBResolved}`);
const lsAResolved =
(await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
const lsBResolved =
(await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
assert(!(lsAResolved.split("\t")[3] ?? "").includes("*"));
assert(!(lsBResolved.split("\t")[3] ?? "").includes("*"));
const catAResolved = sanitiseCatStdout(
await runCliOrFail(vaultA, "--settings", settingsA, "cat", targetConflict)
).trimEnd();
const catBResolved = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetConflict)
).trimEnd();
assert(catAResolved === catBResolved, `resolved content should match\nA: ${catAResolved}\nB: ${catBResolved}`);
} finally {
if (!keepDocker) {
if (remoteType === "COUCHDB") {
await stopCouchdb().catch(() => {});
} else {
await stopMinio().catch(() => {});
}
}
}
}
Deno.test("e2e: two vaults over CouchDB without encryption", async () => {
await runScenario("COUCHDB", false);
});
Deno.test("e2e: two vaults over CouchDB with encryption", async () => {
await runScenario("COUCHDB", true);
});


@@ -0,0 +1,20 @@
import { runScenario } from "./test-e2e-two-vaults-couchdb.ts";
type MatrixCase = {
remoteType: "COUCHDB" | "MINIO";
encrypt: boolean;
label: string;
};
const matrixCases: MatrixCase[] = [
{ remoteType: "COUCHDB", encrypt: false, label: "COUCHDB-enc0" },
{ remoteType: "COUCHDB", encrypt: true, label: "COUCHDB-enc1" },
{ remoteType: "MINIO", encrypt: false, label: "MINIO-enc0" },
{ remoteType: "MINIO", encrypt: true, label: "MINIO-enc1" },
];
for (const tc of matrixCases) {
Deno.test(`e2e matrix: ${tc.label}`, async () => {
await runScenario(tc.remoteType, tc.encrypt);
});
}


@@ -0,0 +1,196 @@
/**
* Deno port of test-mirror-linux.sh
*
* Tests the `mirror` command — bidirectional synchronisation between a local
* storage directory (vault) and an in-process database.
*
* Covered cases (identical to the bash test):
* 1. Storage-only file -> synced into DB (UPDATE DATABASE)
* 2. DB-only file -> restored to storage (UPDATE STORAGE)
* 3. DB-deleted file -> NOT restored to storage (UPDATE STORAGE skip)
* 4. Both, storage newer -> DB updated (SYNC: STORAGE -> DB)
* 5. Both, DB newer -> storage updated (SYNC: DB -> STORAGE)
* 6. Compatibility mode -> omitted vault-path works (same DB + vault path)
*
* No external services are required.
*
* Run:
* deno test -A test-mirror.ts
*/
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { initSettingsFile, markSettingsConfigured } from "./helpers/settings.ts";
Deno.test("mirror: storage <-> DB synchronisation", async (t) => {
await using workDir = await TempDir.create("livesync-cli-mirror");
// -------------------------------------------------------------------
// Shared setup
// -------------------------------------------------------------------
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
const dbDir = workDir.join("db");
await Deno.mkdir(workDir.join("vault", "test"), { recursive: true });
await Deno.mkdir(dbDir, { recursive: true });
await initSettingsFile(settingsFile);
// isConfigured=true is required for canProceedScan in the mirror command.
await markSettingsConfigured(settingsFile);
// Copy settings to the DB directory (separated-path mode)
const dbSettings = workDir.join("db", "settings.json");
await Deno.copyFile(settingsFile, dbSettings);
/** Run mirror in separated-path mode: DB dir ≠ vault dir. */
const runMirror = () => runCliOrFail(dbDir, "--settings", dbSettings, "mirror", vaultDir);
/** Run mirror in compatibility mode: DB path = vault path. */
const runMirrorCompat = () => runCliOrFail(vaultDir, "--settings", settingsFile, "mirror");
// Helper wrappers
const dbRun = (...args: string[]) => runCliOrFail(dbDir, "--settings", dbSettings, ...args);
const compatRun = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);
// -------------------------------------------------------------------
// Case 1: storage-only -> DB (UPDATE DATABASE)
// -------------------------------------------------------------------
await t.step("case 1: storage-only file is synced into DB", async () => {
const storageFile = workDir.join("vault", "test", "storage-only.md");
await Deno.writeTextFile(storageFile, "storage-only content\n");
await runMirror();
const resultFile = workDir.join("case1-pull.txt");
await dbRun("pull", "test/storage-only.md", resultFile);
const storageContent = await Deno.readTextFile(storageFile);
const pulledContent = await Deno.readTextFile(resultFile);
assert(
storageContent === pulledContent,
`storage-only file NOT synced into DB\nexpected: ${storageContent}\ngot: ${pulledContent}`
);
console.log("[PASS] case 1: storage-only file was synced into DB");
});
// -------------------------------------------------------------------
// Case 2: DB-only -> storage (UPDATE STORAGE)
// -------------------------------------------------------------------
await t.step("case 2: DB-only file is restored to storage", async () => {
// `push` takes a file path (no pipe needed), so create a temp source file
// with the content first and push that.
const dbOnlySrc = workDir.join("db-only-src.txt");
await Deno.writeTextFile(dbOnlySrc, "db-only content\n");
await dbRun("push", dbOnlySrc, "test/db-only.md");
const storagePath = workDir.join("vault", "test", "db-only.md");
assert(!(await exists(storagePath)), "db-only.md unexpectedly exists in storage before mirror");
await runMirror();
assert(await exists(storagePath), "DB-only file NOT restored to storage after mirror");
const content = await Deno.readTextFile(storagePath);
assert(content === "db-only content\n", `DB-only file restored but content mismatch: '${content}'`);
console.log("[PASS] case 2: DB-only file was restored to storage");
});
// -------------------------------------------------------------------
// Case 3: DB-deleted -> storage untouched
// -------------------------------------------------------------------
await t.step("case 3: DB-deleted entry is NOT restored to storage", async () => {
const deletedSrc = workDir.join("deleted-src.txt");
await Deno.writeTextFile(deletedSrc, "to-be-deleted\n");
await dbRun("push", deletedSrc, "test/deleted.md");
await dbRun("rm", "test/deleted.md");
await runMirror();
const storagePath = workDir.join("vault", "test", "deleted.md");
assert(!(await exists(storagePath)), "deleted DB entry was incorrectly restored to storage");
console.log("[PASS] case 3: deleted DB entry was NOT restored to storage");
});
// -------------------------------------------------------------------
// Case 4: storage newer -> DB updated (SYNC: STORAGE -> DB)
// -------------------------------------------------------------------
await t.step("case 4: storage newer than DB -> DB is updated", async () => {
// Seed DB with old content (mtime ~ now)
const seedFile = workDir.join("case4-seed.txt");
await Deno.writeTextFile(seedFile, "old content\n");
await dbRun("push", seedFile, "test/sync-storage-newer.md");
// Write new content to storage with a timestamp 1 hour in the future
const storageFile = workDir.join("vault", "test", "sync-storage-newer.md");
await Deno.writeTextFile(storageFile, "new content\n");
await Deno.utime(storageFile, new Date(), new Date(Date.now() + 3600_000));
await runMirror();
const resultFile = workDir.join("case4-pull.txt");
await dbRun("pull", "test/sync-storage-newer.md", resultFile);
const storageContent = await Deno.readTextFile(storageFile);
const pulledContent = await Deno.readTextFile(resultFile);
assert(
storageContent === pulledContent,
`DB NOT updated to match newer storage file\nexpected: ${storageContent}\ngot: ${pulledContent}`
);
console.log("[PASS] case 4: DB updated to match newer storage file");
});
// -------------------------------------------------------------------
// Case 5: DB newer -> storage updated (SYNC: DB -> STORAGE)
// -------------------------------------------------------------------
await t.step("case 5: DB newer than storage -> storage is updated", async () => {
// Write old content to storage with a timestamp 1 hour in the past
const storageFile = workDir.join("vault", "test", "sync-db-newer.md");
await Deno.writeTextFile(storageFile, "old storage content\n");
await Deno.utime(storageFile, new Date(), new Date(Date.now() - 3600_000));
// Write new content to DB only (mtime ~ now, newer than the storage file)
const dbNewFile = workDir.join("case5-db-new.txt");
await Deno.writeTextFile(dbNewFile, "new db content\n");
await dbRun("push", dbNewFile, "test/sync-db-newer.md");
await runMirror();
const content = await Deno.readTextFile(storageFile);
assert(content === "new db content\n", `storage NOT updated to match newer DB entry (got: '${content}')`);
console.log("[PASS] case 5: storage updated to match newer DB entry");
});
// -------------------------------------------------------------------
// Case 6: compatibility mode (vault path = DB path)
// -------------------------------------------------------------------
await t.step("case 6: compatibility mode (omitted vault-path)", async () => {
const compatFile = workDir.join("vault", "compat.md");
await Deno.writeTextFile(compatFile, "compat-content\n");
await runMirrorCompat();
const resultFile = workDir.join("case6-pull.txt");
await compatRun("pull", "compat.md", resultFile);
const pulled = await Deno.readTextFile(resultFile);
assert(pulled === "compat-content\n", `Compatibility mode failed to sync file into DB (got: '${pulled}')`);
console.log("[PASS] case 6: compatibility mode works");
});
});
// ---------------------------------------------------------------------------
// Utility
// ---------------------------------------------------------------------------
async function exists(path: string): Promise<boolean> {
try {
await Deno.stat(path);
return true;
} catch {
return false;
}
}


@@ -0,0 +1,40 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings } from "./helpers/settings.ts";
import { startP2pRelay, stopP2pRelay, isLocalP2pRelay } from "./helpers/docker.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
Deno.test("p2p-host: starts and becomes ready", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const useInternalRelay = Deno.env.get("USE_INTERNAL_RELAY") !== "0";
await using workDir = await TempDir.create("livesync-cli-p2p-host");
const vaultDir = workDir.join("vault-host");
const settingsFile = workDir.join("settings-host.json");
await Deno.mkdir(vaultDir, { recursive: true });
let relayStarted = false;
if (useInternalRelay && isLocalP2pRelay(relay)) {
await startP2pRelay();
relayStarted = true;
}
try {
await initSettingsFile(settingsFile);
await applyP2pSettings(settingsFile, roomId, passphrase, appId, relay);
const host = startCliInBackground(vaultDir, "--settings", settingsFile, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
assert(host.combined.includes("P2P host is running"));
} finally {
await host.stop();
}
} finally {
if (relayStarted) {
await stopP2pRelay().catch(() => {});
}
}
});


@@ -0,0 +1,42 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
Deno.test("p2p-peers: discovers host through local relay", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const timeoutSeconds = Number(Deno.env.get("TIMEOUT_SECONDS") ?? "8");
await using workDir = await TempDir.create("livesync-cli-p2p-peers-local-relay");
const hostVault = workDir.join("vault-host");
const hostSettings = workDir.join("settings-host.json");
const clientVault = workDir.join("vault");
const clientSettings = workDir.join("settings.json");
await Deno.mkdir(hostVault, { recursive: true });
await Deno.mkdir(clientVault, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
await initSettingsFile(hostSettings);
await initSettingsFile(clientSettings);
await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);
const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peer = await discoverPeer(clientVault, clientSettings, timeoutSeconds);
assert(peer.id.length > 0);
assert(peer.name.length > 0);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});


@@ -0,0 +1,59 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { runCli } from "./helpers/cli.ts";
Deno.test("p2p-sync: discovers peer and completes sync", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "12");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");
await using workDir = await TempDir.create("livesync-cli-p2p-sync");
const hostVault = workDir.join("vault-host");
const hostSettings = workDir.join("settings-host.json");
const clientVault = workDir.join("vault-sync");
const clientSettings = workDir.join("settings-sync.json");
await Deno.mkdir(hostVault, { recursive: true });
await Deno.mkdir(clientVault, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
await initSettingsFile(hostSettings);
await initSettingsFile(clientSettings);
await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);
const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peer = await discoverPeer(
clientVault,
clientSettings,
peersTimeout,
Deno.env.get("TARGET_PEER") ?? undefined
);
const syncResult = await runCli(
clientVault,
"--settings",
clientSettings,
"p2p-sync",
peer.id,
String(syncTimeout)
);
assert(
syncResult.code === 0,
`p2p-sync failed\nstdout: ${syncResult.stdout}\nstderr: ${syncResult.stderr}`
);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});


@@ -0,0 +1,118 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { jsonStringField, runCliOrFail, runCliWithInputOrFail, sanitiseCatStdout } from "./helpers/cli.ts";
Deno.test("p2p: three nodes detect and resolve conflicts", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const roomId = `${Deno.env.get("ROOM_ID_PREFIX") ?? "p2p-room"}-${Date.now()}`;
const passphrase = `${Deno.env.get("PASSPHRASE_PREFIX") ?? "p2p-pass"}-${Date.now()}`;
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "10");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");
await using workDir = await TempDir.create("livesync-cli-p2p-3nodes");
const vaultA = workDir.join("vault-a");
const vaultB = workDir.join("vault-b");
const vaultC = workDir.join("vault-c");
const settingsA = workDir.join("settings-a.json");
const settingsB = workDir.join("settings-b.json");
const settingsC = workDir.join("settings-c.json");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
await Deno.mkdir(vaultC, { recursive: true });
const relayStarted = await maybeStartLocalRelay(relay);
try {
for (const settings of [settingsA, settingsB, settingsC]) {
await initSettingsFile(settings);
await applyP2pSettings(settings, roomId, passphrase, appId, relay);
}
const host = startCliInBackground(vaultA, "--settings", settingsA, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const peerFromB = await discoverPeer(vaultB, settingsB, peersTimeout);
const peerFromC = await discoverPeer(vaultC, settingsC, peersTimeout);
const targetPath = "p2p/conflicted-from-two-clients.txt";
await runCliWithInputOrFail("from-client-b-v1\n", vaultB, "--settings", settingsB, "put", targetPath);
await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
let visibleOnC = "";
for (let i = 0; i < 5; i++) {
try {
visibleOnC = sanitiseCatStdout(
await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
).trimEnd();
if (visibleOnC === "from-client-b-v1") break;
} catch {
// retry below
}
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
}
assert(visibleOnC === "from-client-b-v1", `C should see file created by B, got: ${visibleOnC}`);
await runCliWithInputOrFail("from-client-b-v2\n", vaultB, "--settings", settingsB, "put", targetPath);
await runCliWithInputOrFail("from-client-c-v2\n", vaultC, "--settings", settingsC, "put", targetPath);
await Promise.all([
runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout)),
runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout)),
]);
await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
const infoBBefore = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
const conflictsBBefore = jsonStringField(infoBBefore, "conflicts");
const keepRevB = jsonStringField(infoBBefore, "revision");
assert(
conflictsBBefore !== "N/A" && conflictsBBefore.length > 0,
`expected conflicts on B\n${infoBBefore}`
);
assert(keepRevB.length > 0, `could not read revision on B\n${infoBBefore}`);
const infoCBefore = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
const conflictsCBefore = jsonStringField(infoCBefore, "conflicts");
const keepRevC = jsonStringField(infoCBefore, "revision");
assert(
conflictsCBefore !== "N/A" && conflictsCBefore.length > 0,
`expected conflicts on C\n${infoCBefore}`
);
assert(keepRevC.length > 0, `could not read revision on C\n${infoCBefore}`);
await runCliOrFail(vaultB, "--settings", settingsB, "resolve", targetPath, keepRevB);
await runCliOrFail(vaultC, "--settings", settingsC, "resolve", targetPath, keepRevC);
const infoBAfter = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
const infoCAfter = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
assert(jsonStringField(infoBAfter, "conflicts") === "N/A", `conflict still remains on B\n${infoBAfter}`);
assert(jsonStringField(infoCAfter, "conflicts") === "N/A", `conflict still remains on C\n${infoCAfter}`);
const finalContentB = sanitiseCatStdout(
await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPath)
).trimEnd();
const finalContentC = sanitiseCatStdout(
await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
).trimEnd();
assert(
finalContentB === "from-client-b-v2" || finalContentB === "from-client-c-v2",
`unexpected final content on B: ${finalContentB}`
);
assert(
finalContentC === "from-client-b-v2" || finalContentC === "from-client-c-v2",
`unexpected final content on C: ${finalContentC}`
);
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});


@@ -0,0 +1,111 @@
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, applyP2pTestTweaks, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { assertFilesEqual, runCliOrFail } from "./helpers/cli.ts";
async function writeFilledFile(path: string, size: number, byte: number): Promise<void> {
const data = new Uint8Array(size);
data.fill(byte);
await Deno.writeFile(path, data);
}
Deno.test("p2p: upload/download reproduction scenario", async () => {
const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "20");
const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "240");
const roomId = `p2p-room-${Date.now()}`;
const passphrase = `p2p-pass-${Date.now()}`;
await using workDir = await TempDir.create("livesync-cli-p2p-upload-download");
const vaultHost = workDir.join("vault-host");
const vaultUp = workDir.join("vault-up");
const vaultDown = workDir.join("vault-down");
const settingsHost = workDir.join("settings-host.json");
const settingsUp = workDir.join("settings-up.json");
const settingsDown = workDir.join("settings-down.json");
for (const dir of [vaultHost, vaultUp, vaultDown]) {
await Deno.mkdir(dir, { recursive: true });
}
const relayStarted = await maybeStartLocalRelay(relay);
try {
for (const settings of [settingsHost, settingsUp, settingsDown]) {
await initSettingsFile(settings);
await applyP2pSettings(settings, roomId, passphrase, appId, relay, "~.*");
}
await applyP2pTestTweaks(settingsHost, "p2p-cli-host", passphrase);
await applyP2pTestTweaks(settingsUp, `p2p-cli-upload-${Date.now()}`, passphrase);
await applyP2pTestTweaks(settingsDown, `p2p-cli-download-${Date.now()}`, passphrase);
const host = startCliInBackground(vaultHost, "--settings", settingsHost, "p2p-host");
try {
await host.waitUntilContains("P2P host is running", 20000);
const uploadPeer = await discoverPeer(vaultUp, settingsUp, peersTimeout);
const storeText = workDir.join("store-file.md");
const diffA = workDir.join("test-diff-1.md");
const diffB = workDir.join("test-diff-2.md");
const diffC = workDir.join("test-diff-3.md");
await Deno.writeTextFile(storeText, "Hello, World!\n");
await Deno.writeTextFile(diffA, "Content A\n");
await Deno.writeTextFile(diffB, "Content B\n");
await Deno.writeTextFile(diffC, "Content C\n");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", storeText, "p2p/store-file.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffA, "p2p/test-diff-1.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffB, "p2p/test-diff-2.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffC, "p2p/test-diff-3.md");
const large100k = workDir.join("large-100k.txt");
const large1m = workDir.join("large-1m.txt");
const binary100k = workDir.join("binary-100k.bin");
const binary5m = workDir.join("binary-5m.bin");
await Deno.writeTextFile(large100k, "a".repeat(100000));
await Deno.writeTextFile(large1m, "b".repeat(1000000));
await writeFilledFile(binary100k, 100000, 0x5a);
await writeFilledFile(binary5m, 5000000, 0x7c);
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large100k, "p2p/large-100000.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large1m, "p2p/large-1000000.md");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary100k, "p2p/binary-100000.bin");
await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary5m, "p2p/binary-5000000.bin");
await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));
await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));
const downloadPeer = await discoverPeer(vaultDown, settingsDown, peersTimeout);
await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));
await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));
const downStoreText = workDir.join("down-store-file.md");
const downDiffA = workDir.join("down-test-diff-1.md");
const downDiffB = workDir.join("down-test-diff-2.md");
const downDiffC = workDir.join("down-test-diff-3.md");
const downLarge100k = workDir.join("down-large-100k.txt");
const downLarge1m = workDir.join("down-large-1m.txt");
const downBinary100k = workDir.join("down-binary-100k.bin");
const downBinary5m = workDir.join("down-binary-5m.bin");
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/store-file.md", downStoreText);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-1.md", downDiffA);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-2.md", downDiffB);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-3.md", downDiffC);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-100000.md", downLarge100k);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-1000000.md", downLarge1m);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-100000.bin", downBinary100k);
await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-5000000.bin", downBinary5m);
await assertFilesEqual(storeText, downStoreText, "store-file mismatch");
await assertFilesEqual(diffA, downDiffA, "test-diff-1 mismatch");
await assertFilesEqual(diffB, downDiffB, "test-diff-2 mismatch");
await assertFilesEqual(diffC, downDiffC, "test-diff-3 mismatch");
await assertFilesEqual(large100k, downLarge100k, "large-100000 mismatch");
await assertFilesEqual(large1m, downLarge1m, "large-1000000 mismatch");
await assertFilesEqual(binary100k, downBinary100k, "binary-100000 mismatch");
await assertFilesEqual(binary5m, downBinary5m, "binary-5000000 mismatch");
} finally {
await host.stop();
}
} finally {
await stopLocalRelayIfStarted(relayStarted);
}
});

View File

@@ -0,0 +1,78 @@
/**
* Deno port of test-push-pull-linux.sh
*
* Requires CouchDB connection details either via environment variables or a
* .test.env file. If neither is present the test logs a warning and the
* CLI will likely fail at the push step.
*
* Run:
* deno test -A test-push-pull.ts
*
* With explicit CouchDB:
* COUCHDB_URI=http://127.0.0.1:5984 \
* COUCHDB_USER=admin \
* COUCHDB_PASSWORD=password \
* COUCHDB_DBNAME=livesync-test \
* deno test -A test-push-pull.ts
*/
import { join } from "@std/path";
import { assertEquals } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";
const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/push-pull.txt";
Deno.test("push/pull roundtrip", async () => {
await using workDir = await TempDir.create("livesync-cli-push-pull");
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
await Deno.mkdir(join(vaultDir, "test"), { recursive: true });
const uri = Deno.env.get("COUCHDB_URI") ?? "http://127.0.0.1:5989/";
const user = Deno.env.get("COUCHDB_USER") ?? "admin";
const password = Deno.env.get("COUCHDB_PASSWORD") ?? "testpassword";
const dbname = Deno.env.get("COUCHDB_DBNAME") ?? `push-pull-${Date.now()}`;
const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (shouldStartDocker) {
await startCouchdb(uri, user, password, dbname);
}
try {
await initSettingsFile(settingsFile);
if (uri && user && password && dbname) {
console.log("[INFO] applying CouchDB env vars to settings");
await applyCouchdbSettings(settingsFile, uri, user, password, dbname);
} else {
console.warn(
"[WARN] CouchDB env vars not fully set — push/pull may fail unless the generated settings already contain connection details"
);
}
const srcFile = workDir.join("push-source.txt");
const pulledFile = workDir.join("pull-result.txt");
const content = `push-pull-test ${new Date().toISOString()}\n`;
await Deno.writeTextFile(srcFile, content);
console.log(`[INFO] push -> ${REMOTE_PATH}`);
await runCliOrFail(vaultDir, "--settings", settingsFile, "push", srcFile, REMOTE_PATH);
console.log(`[INFO] pull <- ${REMOTE_PATH}`);
await runCliOrFail(vaultDir, "--settings", settingsFile, "pull", REMOTE_PATH, pulledFile);
const pulled = await Deno.readTextFile(pulledFile);
assertEquals(content, pulled, "push/pull roundtrip content mismatch");
console.log("[PASS] push/pull roundtrip matched");
} finally {
if (shouldStartDocker && !keepDocker) {
await stopCouchdb().catch(() => {});
}
}
});

View File

@@ -0,0 +1,214 @@
/**
* Deno port of test-setup-put-cat-linux.sh
*
* Tests all local-DB file operations that require no external remote:
* setup /
* push / cat / ls / info / rm / resolve / cat-rev / pull-rev
*
* Run (no external services needed):
* deno test -A test-setup-put-cat.ts
*/
import { join } from "@std/path";
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCli, runCliOrFail, runCliWithInput, sanitiseCatStdout } from "./helpers/cli.ts";
import { generateSetupUriFromSettings, initSettingsFile } from "./helpers/settings.ts";
const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/setup-put-cat.txt";
const SETUP_PASSPHRASE = Deno.env.get("SETUP_PASSPHRASE") ?? "setup-passphrase";
Deno.test("CLI file operations: push / cat / ls / info / rm / resolve / cat-rev / pull-rev", async (t) => {
await using workDir = await TempDir.create("livesync-cli-setup-put-cat");
const settingsFile = workDir.join("data.json");
const vaultDir = workDir.join("vault");
await Deno.mkdir(join(vaultDir, "test"), { recursive: true });
await initSettingsFile(settingsFile);
const setupUri = await generateSetupUriFromSettings(settingsFile, SETUP_PASSPHRASE);
const setupResult = await runCliWithInput(
`${SETUP_PASSPHRASE}\n`,
vaultDir,
"--settings",
settingsFile,
"setup",
setupUri
);
assert(setupResult.code === 0, `setup command exited with ${setupResult.code}\n${setupResult.combined}`);
assert(
setupResult.combined.includes("[Command] setup ->"),
`setup command did not execute expected code path\n${setupResult.combined}`
);
const run = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);
// ------------------------------------------------------------------
// push / cat roundtrip
// ------------------------------------------------------------------
await t.step("push/cat roundtrip", async () => {
const srcFile = workDir.join("put-source.txt");
const content = `setup-put-cat-test ${new Date().toISOString()}\nline-2\n`;
await Deno.writeTextFile(srcFile, content);
console.log(`[INFO] push -> ${REMOTE_PATH}`);
await runCliWithInput(content, vaultDir, "--settings", settingsFile, "put", REMOTE_PATH);
console.log(`[INFO] cat <- ${REMOTE_PATH}`);
const rawOutput = await run("cat", REMOTE_PATH);
const catOutput = sanitiseCatStdout(rawOutput);
assertEquals(content, catOutput, "push/cat roundtrip content mismatch");
console.log("[PASS] push/cat roundtrip matched");
});
// ------------------------------------------------------------------
// ls: single file
// ------------------------------------------------------------------
await t.step("ls output format (single file)", async () => {
const lsOutput = await run("ls", REMOTE_PATH);
const line = lsOutput
.trim()
.split("\n")
.find((l) => l.startsWith(REMOTE_PATH + "\t"));
assert(line, `ls output did not include ${REMOTE_PATH}`);
const [lsPath, lsSize, lsMtime, lsRev] = line.split("\t");
assertEquals(lsPath, REMOTE_PATH, "ls path column mismatch");
assert(/^\d+$/.test(lsSize), `ls size not numeric: ${lsSize}`);
assert(/^\d+$/.test(lsMtime), `ls mtime not numeric: ${lsMtime}`);
assert(lsRev?.length > 0, "ls revision column is empty");
console.log("[PASS] ls output format matched");
});
// ------------------------------------------------------------------
// ls: prefix filter and sort order
// ------------------------------------------------------------------
await t.step("ls prefix filter and sort order", async () => {
await runCliWithInput("file-a\n", vaultDir, "--settings", settingsFile, "put", "test/a-first.txt");
await runCliWithInput("file-z\n", vaultDir, "--settings", settingsFile, "put", "test/z-last.txt");
const lsOut = await run("ls", "test/");
const lines = lsOut.trim().split("\n").filter(Boolean);
assert(lines.length >= 3, "ls prefix output expected at least 3 rows");
// Verify sorted ascending by path
const paths = lines.map((l) => l.split("\t")[0]);
for (let i = 1; i < paths.length; i++) {
assert(paths[i - 1] <= paths[i], `ls output not sorted: ${paths[i - 1]} > ${paths[i]}`);
}
assert(
lines.some((l) => l.startsWith("test/a-first.txt\t")),
"ls prefix output missing test/a-first.txt"
);
assert(
lines.some((l) => l.startsWith("test/z-last.txt\t")),
"ls prefix output missing test/z-last.txt"
);
console.log("[PASS] ls prefix and sorting matched");
});
// ------------------------------------------------------------------
// ls: no-match prefix returns empty output
// ------------------------------------------------------------------
await t.step("ls no-match prefix returns empty", async () => {
const lsOut = await run("ls", "no-such-prefix/");
assertEquals(lsOut.trim(), "", "ls no-match prefix should produce empty output");
console.log("[PASS] ls no-match prefix matched");
});
// ------------------------------------------------------------------
// info: JSON output format
// ------------------------------------------------------------------
await t.step("info output JSON format", async () => {
const infoOut = await run("info", REMOTE_PATH);
let data: Record<string, unknown>;
try {
data = JSON.parse(infoOut);
} catch {
throw new Error(`info output is not valid JSON:\n${infoOut}`);
}
assertEquals(data.path, REMOTE_PATH, "info .path mismatch");
assertEquals(data.filename, REMOTE_PATH.split("/").at(-1), "info .filename mismatch");
assert(typeof data.size === "number" && data.size >= 0, `info .size invalid: ${data.size}`);
assert(typeof data.chunks === "number" && (data.chunks as number) >= 1, `info .chunks invalid: ${data.chunks}`);
assertEquals(data.conflicts, "N/A", "info .conflicts should be N/A");
console.log("[PASS] info output format matched");
});
// ------------------------------------------------------------------
// info: non-existent path exits non-zero
// ------------------------------------------------------------------
await t.step("info non-existent path returns non-zero", async () => {
const r = await runCli(vaultDir, "--settings", settingsFile, "info", "no-such-file.md");
assert(r.code !== 0, "info on non-existent file should exit non-zero");
console.log("[PASS] info non-existent path returns non-zero");
});
// ------------------------------------------------------------------
// rm: removes file from ls and makes cat fail
// ------------------------------------------------------------------
await t.step("rm removes target from ls and cat", async () => {
await run("rm", "test/z-last.txt");
const catResult = await runCli(vaultDir, "--settings", settingsFile, "cat", "test/z-last.txt");
assert(catResult.code !== 0, "rm target should not be readable by cat");
const lsOut = await run("ls", "test/");
assert(!lsOut.includes("test/z-last.txt\t"), "rm target should not appear in ls output");
console.log("[PASS] rm removed target from visible entries");
});
// ------------------------------------------------------------------
// resolve: accepts current revision, rejects invalid revision
// ------------------------------------------------------------------
await t.step("resolve: valid and invalid revisions", async () => {
const lsLine = (await run("ls", "test/a-first.txt")).trim().split("\n")[0];
assert(lsLine, "could not fetch revision for resolve test");
const rev = lsLine.split("\t")[3];
assert(rev?.length > 0, "revision was empty for resolve test");
await run("resolve", "test/a-first.txt", rev);
console.log("[PASS] resolve accepted current revision");
const badR = await runCli(vaultDir, "--settings", settingsFile, "resolve", "test/a-first.txt", "9-no-such-rev");
assert(badR.code !== 0, "resolve with non-existent revision should exit non-zero");
console.log("[PASS] resolve non-existent revision returns non-zero");
});
// ------------------------------------------------------------------
// cat-rev / pull-rev: retrieve a past revision
// ------------------------------------------------------------------
await t.step("cat-rev / pull-rev: retrieve past revision", async () => {
const revPath = "test/revision-history.txt";
await runCliWithInput("revision-v1\n", vaultDir, "--settings", settingsFile, "put", revPath);
await runCliWithInput("revision-v2\n", vaultDir, "--settings", settingsFile, "put", revPath);
await runCliWithInput("revision-v3\n", vaultDir, "--settings", settingsFile, "put", revPath);
const infoOut = await run("info", revPath);
const infoData = JSON.parse(infoOut) as {
revisions?: string[];
};
const revisions = Array.isArray(infoData.revisions) ? infoData.revisions : [];
const pastRev = revisions.find((r): r is string => typeof r === "string" && r !== "N/A");
assert(pastRev, "info output did not include any past revision");
const catRevOut = await run("cat-rev", revPath, pastRev);
const catRevClean = sanitiseCatStdout(catRevOut);
assert(
catRevClean === "revision-v1\n" || catRevClean === "revision-v2\n",
`cat-rev output did not match expected past revision:\n${catRevClean}`
);
console.log("[PASS] cat-rev matched one of the past revisions from info");
const pullRevFile = workDir.join("rev-pull-output.txt");
await run("pull-rev", revPath, pullRevFile, pastRev);
const pullRevContent = await Deno.readTextFile(pullRevFile);
assert(
pullRevContent === "revision-v1\n" || pullRevContent === "revision-v2\n",
`pull-rev output did not match expected past revision:\n${pullRevContent}`
);
console.log("[PASS] pull-rev matched one of the past revisions from info");
});
});

View File

@@ -0,0 +1,97 @@
/**
* Deno port of test-sync-locked-remote-linux.sh
*
* Verifies CLI sync behaviour when the remote milestone document is unlocked
* versus locked.
*/
import { assert, assertStringIncludes } from "@std/assert";
import { join } from "@std/path";
import { loadEnvFile } from "./helpers/env.ts";
import { TempDir } from "./helpers/temp.ts";
import { runCli } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { createCouchdbDatabase, startCouchdb, stopCouchdb, updateCouchdbDoc } from "./helpers/docker.ts";
const TEST_ENV = join(import.meta.dirname!, "..", ".test.env");
const MILESTONE_DOC = "_local/obsydian_livesync_milestone";
function requireEnv(env: Record<string, string>, key: string): string {
const value = env[key]?.trim();
if (!value) {
throw new Error(`Required env var is missing: ${key}`);
}
return value;
}
Deno.test("sync: actionable error against locked remote DB", async () => {
const env = await loadEnvFile(TEST_ENV);
const couchdbUri = requireEnv(env, "hostname").replace(/\/$/, "");
const couchdbUser = requireEnv(env, "username");
const couchdbPassword = requireEnv(env, "password");
const dbPrefix = requireEnv(env, "dbname");
const dbname = `${dbPrefix}-locked-${Date.now()}-${Math.floor(Math.random() * 100000)}`;
await using workDir = await TempDir.create("livesync-cli-locked-test");
const vaultDir = workDir.join("vault");
const settingsFile = workDir.join("settings.json");
await Deno.mkdir(vaultDir, { recursive: true });
const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
if (shouldStartDocker) {
console.log(`[INFO] starting CouchDB and creating test database: ${dbname}`);
await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
} else {
console.log(`[INFO] using existing CouchDB and creating test database: ${dbname}`);
await createCouchdbDatabase(couchdbUri, couchdbUser, couchdbPassword, dbname);
}
try {
await initSettingsFile(settingsFile);
await applyCouchdbSettings(settingsFile, couchdbUri, couchdbUser, couchdbPassword, dbname, true);
console.log("[CASE] initial sync to create milestone document");
const initialSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
initialSync.code === 0,
`initial sync failed\nstdout: ${initialSync.stdout}\nstderr: ${initialSync.stderr}`
);
const updateMilestone = async (locked: boolean) => {
await updateCouchdbDoc(couchdbUri, couchdbUser, couchdbPassword, `${dbname}/${MILESTONE_DOC}`, (doc) => ({
...doc,
locked,
accepted_nodes: [],
}));
};
console.log("[CASE] sync should succeed when remote is not locked");
await updateMilestone(false);
const unlockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
unlockedSync.code === 0,
`sync should succeed when remote is not locked\nstdout: ${unlockedSync.stdout}\nstderr: ${unlockedSync.stderr}`
);
assert(
!unlockedSync.combined.includes("The remote database is locked"),
`locked error should not appear when remote is not locked\n${unlockedSync.combined}`
);
console.log("[PASS] unlocked remote DB syncs successfully");
console.log("[CASE] sync should fail with actionable error when remote is locked");
await updateMilestone(true);
const lockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
assert(
lockedSync.code !== 0,
`sync should fail when remote is locked\nstdout: ${lockedSync.stdout}\nstderr: ${lockedSync.stderr}`
);
assertStringIncludes(lockedSync.combined, "The remote database is locked and this device is not yet accepted");
console.log("[PASS] locked remote DB produces actionable CLI error");
} finally {
if (shouldStartDocker && !keepDocker) {
await stopCouchdb().catch(() => {});
}
}
});

View File

@@ -0,0 +1,287 @@
/**
* Deno port of test-sync-two-local-databases-linux.sh
*
* Tests two-vault synchronisation via CouchDB including conflict detection
* and resolution.
*
* Requires CouchDB connection details. Provide them via environment variables
* OR place a .test.env file at src/apps/cli/.test.env.
*
* By default, a CouchDB Docker container is started automatically
* (LIVESYNC_START_DOCKER=1). Set LIVESYNC_START_DOCKER=0 to use an existing
* CouchDB instance instead.
*
* Run:
* deno test -A test-sync-two-local-databases.ts
*
* With an existing CouchDB:
* COUCHDB_URI=http://127.0.0.1:5984 \
* COUCHDB_USER=admin \
* COUCHDB_PASSWORD=password \
* COUCHDB_DBNAME=livesync-test \
* LIVESYNC_START_DOCKER=0 \
* deno test -A test-sync-two-local-databases.ts
*/
import { join } from "@std/path";
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { CLI_DIR, runCliOrFail, jsonFieldIsNa } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";
import { loadEnvFile } from "./helpers/env.ts";
// ---------------------------------------------------------------------------
// Load configuration
// ---------------------------------------------------------------------------
async function resolveConfig(): Promise<{
uri: string;
user: string;
password: string;
baseDbname: string;
} | null> {
let env: Record<string, string> = {};
// 1. Explicit environment variables take priority
if (Deno.env.get("COUCHDB_URI")) {
env = Object.fromEntries(Deno.env.toObject());
} else {
// 2. TEST_ENV_FILE env var
const envFile = Deno.env.get("TEST_ENV_FILE") ?? join(CLI_DIR, ".test.env");
try {
env = await loadEnvFile(envFile);
} catch {
return null; // no config available — skip
}
}
const uri = (env["COUCHDB_URI"] ?? env["hostname"] ?? "").replace(/\/$/, "");
const user = env["COUCHDB_USER"] ?? env["username"] ?? "";
const password = env["COUCHDB_PASSWORD"] ?? env["password"] ?? "";
const baseDbname = env["COUCHDB_DBNAME"] ?? env["dbname"] ?? "livesync-test";
if (!uri || !user || !password) return null;
return { uri, user, password, baseDbname };
}
const config = await resolveConfig();
const START_DOCKER = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const KEEP_DOCKER = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
const SYNC_RETRY = Number(Deno.env.get("LIVESYNC_SYNC_RETRY") ?? "8");
// Provide a sane default for flaky remote connectivity in Docker-on-WSL
// environments. Users can override explicitly if needed.
if (!Deno.env.has("LIVESYNC_CLI_RETRY")) {
Deno.env.set("LIVESYNC_CLI_RETRY", "2");
}
// ---------------------------------------------------------------------------
// Test suite
// ---------------------------------------------------------------------------
Deno.test(
{
name: "sync two local databases: sync + conflict detection + resolution",
ignore: config === null,
},
async (t) => {
if (!config) return; // narrowing for TypeScript
const suffix = `${Date.now()}-${Math.floor(Math.random() * 65535)}`;
const dbname = `${config.baseDbname}-${suffix}`;
await using workDir = await TempDir.create("livesync-cli-two-db-test");
// ------------------------------------------------------------------
// Docker lifecycle
// ------------------------------------------------------------------
if (START_DOCKER) {
await startCouchdb(config.uri, config.user, config.password, dbname);
}
try {
await runSuite(t, workDir, config, dbname);
} finally {
if (START_DOCKER && !KEEP_DOCKER) {
await stopCouchdb().catch(() => {});
}
if (START_DOCKER && KEEP_DOCKER) {
console.log("[INFO] LIVESYNC_DEBUG_KEEP_DOCKER=1, keeping couchdb-test container");
}
console.log(`[INFO] test database '${dbname}' is preserved for debugging.`);
}
}
);
// ---------------------------------------------------------------------------
// Suite implementation
// ---------------------------------------------------------------------------
async function runSuite(
t: Deno.TestContext,
workDir: TempDir,
config: { uri: string; user: string; password: string },
dbname: string
): Promise<void> {
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
const runWithRetry = async <T>(label: string, fn: () => Promise<T>, retries = SYNC_RETRY): Promise<T> => {
let lastErr: unknown;
for (let i = 0; i <= retries; i++) {
try {
return await fn();
} catch (err) {
lastErr = err;
if (i === retries) break;
const delayMs = 500 * (i + 1);
console.warn(`[WARN] ${label} failed, retrying (${i + 1}/${retries}) in ${delayMs}ms`);
await sleep(delayMs);
}
}
throw lastErr;
};
const vaultA = workDir.join("vault-a");
const vaultB = workDir.join("vault-b");
const settingsA = workDir.join("a-settings.json");
const settingsB = workDir.join("b-settings.json");
await Deno.mkdir(vaultA, { recursive: true });
await Deno.mkdir(vaultB, { recursive: true });
await initSettingsFile(settingsA);
await initSettingsFile(settingsB);
const applySettings = async (f: string) =>
applyCouchdbSettings(f, config.uri, config.user, config.password, dbname, /* liveSync */ true);
await applySettings(settingsA);
await applySettings(settingsB);
const runA = (...args: string[]) => runCliOrFail(vaultA, "--settings", settingsA, ...args);
const runB = (...args: string[]) => runCliOrFail(vaultB, "--settings", settingsB, ...args);
const syncA = () => runWithRetry("syncA", () => runA("sync"));
const syncB = () => runWithRetry("syncB", () => runB("sync"));
const catA = (path: string) => runA("cat", path);
const catB = (path: string) => runB("cat", path);
// ------------------------------------------------------------------
// Case 1: A creates file, B reads after sync
// ------------------------------------------------------------------
await t.step("case 1: A creates file -> B can read after sync", async () => {
const srcA = workDir.join("from-a-src.txt");
await Deno.writeTextFile(srcA, "from-a\n");
await runA("push", srcA, "shared/from-a.txt");
await syncA();
await syncB();
const value = (await catB("shared/from-a.txt")).replace(/\r\n/g, "\n").trimEnd();
assertEquals(value, "from-a", "B could not read file created on A");
console.log("[PASS] case 1 passed");
});
// ------------------------------------------------------------------
// Case 2: B creates file, A reads after sync
// ------------------------------------------------------------------
await t.step("case 2: B creates file -> A can read after sync", async () => {
const srcB = workDir.join("from-b-src.txt");
await Deno.writeTextFile(srcB, "from-b\n");
await runB("push", srcB, "shared/from-b.txt");
await syncB();
await syncA();
const value = (await catA("shared/from-b.txt")).replace(/\r\n/g, "\n").trimEnd();
assertEquals(value, "from-b", "A could not read file created on B");
console.log("[PASS] case 2 passed");
});
// ------------------------------------------------------------------
// Case 3: concurrent edits create a conflict
// ------------------------------------------------------------------
await t.step("case 3: concurrent edits create conflict", async () => {
const baseSrc = workDir.join("base-src.txt");
await Deno.writeTextFile(baseSrc, "base\n");
await runA("push", baseSrc, "shared/conflicted.txt");
await syncA();
await syncB();
const aEdit = workDir.join("edit-a.txt");
const bEdit = workDir.join("edit-b.txt");
await Deno.writeTextFile(aEdit, "edit-from-a\n");
await Deno.writeTextFile(bEdit, "edit-from-b\n");
await runA("push", aEdit, "shared/conflicted.txt");
await runB("push", bEdit, "shared/conflicted.txt");
const infoFileA = workDir.join("info-a.json");
const infoFileB = workDir.join("info-b.json");
let conflictDetected = false;
for (const side of ["a", "b"] as const) {
if (side === "a") await syncA();
else await syncB();
await Deno.writeTextFile(infoFileA, await runA("info", "shared/conflicted.txt"));
await Deno.writeTextFile(infoFileB, await runB("info", "shared/conflicted.txt"));
const da = JSON.parse(await Deno.readTextFile(infoFileA)) as Record<string, unknown>;
const db = JSON.parse(await Deno.readTextFile(infoFileB)) as Record<string, unknown>;
if (!jsonFieldIsNa(da, "conflicts") || !jsonFieldIsNa(db, "conflicts")) {
conflictDetected = true;
break;
}
}
assert(conflictDetected, "expected conflict after concurrent edits, but both sides show N/A");
console.log("[PASS] case 3 conflict detected");
});
// ------------------------------------------------------------------
// Case 4: resolve on A, verify B has no conflict after sync
// ------------------------------------------------------------------
await t.step("case 4: resolve on A propagates to B", async () => {
const infoFileA = workDir.join("info-a-resolve.json");
const infoFileB = workDir.join("info-b-resolve.json");
// Ensure A sees the conflict
for (let i = 0; i < 5; i++) {
const raw = await runA("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, raw);
const da = JSON.parse(raw) as Record<string, unknown>;
if (!jsonFieldIsNa(da, "conflicts")) break;
await syncB();
await syncA();
}
const rawA = await runA("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, rawA);
const dataA = JSON.parse(rawA) as Record<string, unknown>;
assert(!jsonFieldIsNa(dataA, "conflicts"), "A does not see conflict, cannot resolve from A only");
const keepRev = dataA["revision"] as string;
assert(keepRev?.length > 0, "could not read revision from A info output");
await runA("resolve", "shared/conflicted.txt", keepRev);
let resolved = false;
for (let i = 0; i < 6; i++) {
await syncA();
await syncB();
const rawA2 = await runA("info", "shared/conflicted.txt");
const rawB2 = await runB("info", "shared/conflicted.txt");
await Deno.writeTextFile(infoFileA, rawA2);
await Deno.writeTextFile(infoFileB, rawB2);
const da2 = JSON.parse(rawA2) as Record<string, unknown>;
const db2 = JSON.parse(rawB2) as Record<string, unknown>;
if (jsonFieldIsNa(da2, "conflicts") && jsonFieldIsNa(db2, "conflicts")) {
resolved = true;
break;
}
// If A still sees a conflict, resolve it again
if (!jsonFieldIsNa(da2, "conflicts")) {
const rev2 = da2["revision"] as string;
if (rev2) await runA("resolve", "shared/conflicted.txt", rev2).catch(() => {});
}
}
assert(resolved, "conflicts should be resolved on both A and B");
const contentA = (await catA("shared/conflicted.txt")).replace(/\r\n/g, "\n");
const contentB = (await catB("shared/conflicted.txt")).replace(/\r\n/g, "\n");
assertEquals(contentA, contentB, "resolved content mismatch between A and B");
console.log("[PASS] case 4 passed");
console.log("[PASS] all sync/resolve scenarios passed");
});
}

View File

@@ -0,0 +1,298 @@
# CLI Deno Test Development Notes
This document provides an overview of the Deno-based compatibility tests under `src/apps/cli/testdeno/`.
The existing bash tests under `src/apps/cli/test/` are preserved, while a Windows-friendly suite is maintained in parallel.
---
## Goals
- Keep existing bash tests intact.
- Provide direct execution from Windows PowerShell.
- Establish a TypeScript (Deno) foundation for core end-to-end and integration scenarios.
---
## Directory structure
```
src/apps/cli/testdeno/
deno.json
CONTRIBUTING_TESTS.md
helpers/
backgroundCli.ts
cli.ts
docker.ts
env.ts
p2p.ts
settings.ts
temp.ts
test-e2e-two-vaults-couchdb.ts
test-push-pull.ts
test-p2p-host.ts
test-p2p-peers-local-relay.ts
test-p2p-sync.ts
test-p2p-three-nodes-conflict.ts
test-p2p-upload-download-repro.ts
test-e2e-two-vaults-matrix.ts
test-setup-put-cat.ts
test-mirror.ts
test-sync-two-local-databases.ts
test-sync-locked-remote.ts
```
---
## Key files
### `deno.json`
- Defines Deno tasks.
- Defines import maps for `@std/assert` and `@std/path`.
Main tasks:
- `deno task test`
- `deno task test:local`
- `deno task test:push-pull`
- `deno task test:setup-put-cat`
- `deno task test:mirror`
- `deno task test:sync-two-local`
- `deno task test:sync-locked-remote`
- `deno task test:p2p-host`
- `deno task test:p2p-peers`
- `deno task test:p2p-sync`
- `deno task test:p2p-three-nodes`
- `deno task test:p2p-upload-download`
- `deno task test:e2e-couchdb`
- `deno task test:e2e-matrix`
### `helpers/cli.ts`
- CLI execution wrappers.
- `runCli`, `runCliOrFail`, `runCliWithInput`.
- Output normalisation via `sanitiseCatStdout`.
- Comparison utilities, including `assertFilesEqual`.
This file corresponds to `run_cli` and common assertions in `test-helpers.sh`.
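A minimal sketch of how the tests drive these wrappers (the vault and settings paths are placeholders that a real test creates via `TempDir`):
```ts
import { runCliOrFail, runCliWithInput, sanitiseCatStdout } from "./helpers/cli.ts";

const vaultDir = "/tmp/example-vault"; // placeholder
const settingsFile = "/tmp/example-settings.json"; // placeholder

// `push` copies a local file into the local database under a remote path.
await runCliOrFail(vaultDir, "--settings", settingsFile, "push", "./note.md", "test/note.md");

// `put` reads the content from stdin instead of a file.
await runCliWithInput("hello\n", vaultDir, "--settings", settingsFile, "put", "test/hello.txt");

// `cat` prints the stored content; sanitiseCatStdout strips surrounding log lines.
const raw = await runCliOrFail(vaultDir, "--settings", settingsFile, "cat", "test/note.md");
console.log(sanitiseCatStdout(raw));
```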
### `helpers/settings.ts`
- Executes `init-settings --force`.
- Marks `isConfigured = true`.
- Applies CouchDB and P2P settings.
- Applies remote synchronisation settings and P2P test tweaks.
This file corresponds to settings helpers in `test-helpers.sh`.
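For example, the CouchDB-backed tests prepare a settings file like this (connection values are placeholders; the trailing `true` enables LiveSync mode, as in the two-database test):
```ts
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";

const settingsFile = "/tmp/example-settings.json"; // placeholder
await initSettingsFile(settingsFile); // runs `init-settings --force` and marks isConfigured
await applyCouchdbSettings(settingsFile, "http://127.0.0.1:5984", "admin", "password", "livesync-test", true);
```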
### `helpers/docker.ts`
- Starts, stops, and initialises CouchDB directly from Deno.
- Configures CouchDB via `fetch + retry`.
- Starts and stops the P2P relay through the same Docker runner.
Both CouchDB and P2P relay flows are bash-independent.
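The lifecycle the CouchDB-based tests follow, sketched with the tests' default values (the container is stopped afterwards unless `LIVESYNC_DEBUG_KEEP_DOCKER=1` is set):
```ts
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";

const uri = "http://127.0.0.1:5989/"; // test default
try {
    await startCouchdb(uri, "admin", "testpassword", `example-${Date.now()}`);
    // ... run CLI commands against the container ...
} finally {
    await stopCouchdb().catch(() => {}); // tolerate an already-stopped container
}
```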
### `helpers/backgroundCli.ts`
- Starts long-running commands such as `p2p-host` in the background.
- Waits for readiness logs and handles termination.
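As used by the P2P tests (the readiness string and timeout below are the ones the tests wait on):
```ts
import { startCliInBackground } from "./helpers/backgroundCli.ts";

const host = startCliInBackground("/tmp/vault-host", "--settings", "/tmp/settings-host.json", "p2p-host");
try {
    await host.waitUntilContains("P2P host is running", 20000); // block until ready, or time out
    // ... exercise the host from other peers ...
} finally {
    await host.stop();
}
```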
### `helpers/p2p.ts`
- Determines whether a local relay should be started.
- Parses `p2p-peers` output.
- Discovers peer IDs with a fallback based on advertisement logs.
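The discovery flow shared by the P2P tests, sketched with the tests' default relay URL and timeouts (paths are placeholders):
```ts
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { runCliOrFail } from "./helpers/cli.ts";

const relayStarted = await maybeStartLocalRelay("ws://localhost:4000/");
try {
    const peer = await discoverPeer("/tmp/vault-up", "/tmp/settings-up.json", 20);
    await runCliOrFail("/tmp/vault-up", "--settings", "/tmp/settings-up.json", "p2p-sync", peer.id, "240");
} finally {
    await stopLocalRelayIfStarted(relayStarted);
}
```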
### `helpers/env.ts`
- Loads `.test.env`.
- Supports `KEY=value`, single-quoted values, and double-quoted values.
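As in `test-sync-locked-remote.ts`; note that `.test.env` uses the bash-test key names rather than the `COUCHDB_*` variables:
```ts
import { loadEnvFile } from "./helpers/env.ts";

const env = await loadEnvFile("../.test.env"); // returns Record<string, string>
const couchdbUri = env["hostname"]?.replace(/\/$/, ""); // strip a trailing slash
const couchdbUser = env["username"];
```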
### `helpers/temp.ts`
- Provides `TempDir`.
- Uses `await using` to auto-clean temporary directories.
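Typical usage; `await using` disposes of (and deletes) the directory when the enclosing scope exits, even if the test fails:
```ts
import { TempDir } from "./helpers/temp.ts";

await using workDir = await TempDir.create("livesync-cli-example");
const vaultDir = workDir.join("vault");
await Deno.mkdir(vaultDir, { recursive: true });
```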
---
## Implemented tests
### `test-push-pull.ts`
- Verifies push and pull round trips.
- Uses environment variables or `.test.env` for CouchDB values.
### `test-setup-put-cat.ts`
- Verifies `setup` with full setup URI generation via `encodeSettingsToSetupURI`.
- Verifies `push`, `cat`, `ls`, `info`, `rm`, `resolve`, `cat-rev`, and `pull-rev`.
- Does not require an external remote.
### `test-mirror.ts`
- Verifies six core mirror scenarios.
- Does not require an external remote.
### `test-sync-two-local-databases.ts`
- Verifies sync between two vaults and CouchDB.
- Verifies conflict detection and resolve propagation.
- Starts Docker CouchDB by default when `LIVESYNC_START_DOCKER != 0`.
### `test-sync-locked-remote.ts`
- Updates the CouchDB milestone `locked` flag.
- Verifies sync success when unlocked.
- Verifies actionable CLI error when locked.
### `test-p2p-host.ts`
- Verifies that `p2p-host` starts and emits readiness output.
### `test-p2p-peers-local-relay.ts`
- Verifies peer discovery through a local relay.
### `test-p2p-sync.ts`
- Verifies that `p2p-sync` completes after peer discovery.
### `test-p2p-three-nodes-conflict.ts`
- Uses one host and two clients.
- Verifies conflict creation, detection via `info`, and resolution via `resolve`.
### `test-p2p-upload-download-repro.ts`
- Uses host, upload, and download nodes.
- Verifies transfer of text files and binary files, including larger files.
### `test-e2e-two-vaults-couchdb.ts`
- Verifies two-vault end-to-end scenarios on CouchDB.
- Runs both encryption-off and encryption-on cases.
- Includes conflict marker checks in `ls` and resolve propagation checks.
### `test-e2e-two-vaults-matrix.ts`
- Verifies the matrix equivalent of the bash script.
- Runs four combinations:
- `COUCHDB-enc0`
- `COUCHDB-enc1`
- `MINIO-enc0`
- `MINIO-enc1`
---
## Running tests (PowerShell)
From `src/apps/cli/testdeno`:
```powershell
cd src/apps/cli/testdeno
# Local-only set
deno task test:local
# Individual tests
deno task test:setup-put-cat
deno task test:mirror
deno task test:push-pull
deno task test:sync-locked-remote
# CouchDB-based tests
deno task test:sync-two-local
deno task test:e2e-couchdb
# P2P-based tests
deno task test:p2p-host
deno task test:p2p-peers
deno task test:p2p-sync
deno task test:p2p-three-nodes
deno task test:p2p-upload-download
# Matrix test (CouchDB and MinIO combinations)
deno task test:e2e-matrix
```
---
## Environment variables
### CouchDB
- `COUCHDB_URI`
- `COUCHDB_USER`
- `COUCHDB_PASSWORD`
- `COUCHDB_DBNAME`
Equivalent keys in `src/apps/cli/.test.env`:
- `hostname`
- `username`
- `password`
- `dbname`
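A minimal `.test.env`, assuming a local CouchDB with the values the Docker helper uses by default (adjust to your own instance):
```
hostname=http://127.0.0.1:5989/
username=admin
password=testpassword
dbname=livesync-test
```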
### Behaviour switches
- `LIVESYNC_START_DOCKER=0`: use existing CouchDB.
- `REMOTE_PATH`: override target path for selected tests.
- `LIVESYNC_TEST_TEE=1`: stream CLI stdout and stderr during execution.
- `LIVESYNC_DOCKER_TEE=1`: stream Docker stdout and stderr.
- `LIVESYNC_CLI_RETRY=<n>`: retry transient network failures.
- `LIVESYNC_DEBUG_KEEP_DOCKER=1`: keep `couchdb-test` after test completion.
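For example, to run against an already-running CouchDB while streaming CLI output:
```powershell
$env:LIVESYNC_START_DOCKER = "0"
$env:LIVESYNC_TEST_TEE = "1"
deno task test:push-pull
```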
### Docker command selection
`helpers/docker.ts` supports command selection via environment variables.
- `LIVESYNC_DOCKER_MODE=auto` (default)
- Windows: tries `wsl docker` first, then `docker`.
- Non-Windows: tries `docker` first, then `wsl docker`.
- `LIVESYNC_DOCKER_MODE=native`: always uses `docker`.
- `LIVESYNC_DOCKER_MODE=wsl`: always uses `wsl docker`.
- `LIVESYNC_DOCKER_COMMAND="..."`: custom command, for example `wsl docker`.
`LIVESYNC_DOCKER_COMMAND` has priority over `LIVESYNC_DOCKER_MODE`.
PowerShell examples:
```powershell
# Use Docker in WSL explicitly
$env:LIVESYNC_DOCKER_MODE = "wsl"
deno task test:sync-two-local
# Full custom command
$env:LIVESYNC_DOCKER_COMMAND = "wsl docker"
deno task test:sync-two-local
```
### P2P
- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY=0|1`
- `TIMEOUT_SECONDS`
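For example, to point the P2P tests at an external relay instead of the internal one (the URL is whatever your relay listens on):
```powershell
$env:USE_INTERNAL_RELAY = "0"
$env:RELAY = "ws://localhost:4000/"
deno task test:p2p-sync
```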
---
## Continuous Integration
The GitHub Actions workflow `.github/workflows/cli-deno-tests.yml` runs these tests automatically on pushes and on pull requests that affect the CLI.
---
## Current limitations
- MinIO startup and matrix coverage have now been ported, and setup URI generation is no longer a limitation; the remaining gaps lie elsewhere.
---
## Maintenance policy
- Existing bash tests remain available.
- Deno tests are expanded in parallel for cross-platform usage.
- New scenarios should be added through reusable helpers in `helpers/`.

View File

@@ -138,7 +138,7 @@ export const _requestToCouchDBFetch = async (
         authorization: authHeader,
         "content-type": "application/json",
     };
-    const uri = `${baseUri}/${path}`;
+    const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
     const requestParam = {
         url: uri,
         method: method || (body ? "PUT" : "GET"),
@@ -162,7 +162,7 @@ export const _requestToCouchDB = async (
     const authHeaderGen = new AuthorizationHeaderGenerator();
     const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
     const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
-    const uri = `${baseUri}/${path}`;
+    const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
     const requestParam: RequestUrlParam = {
         url: uri,
         method: method || (body ? "PUT" : "GET"),

View File

@@ -30,7 +30,8 @@
     type JSONData = Record<string | number | symbol, any> | [any];
     const docsArray = $derived.by(() => {
-        if (docs && docs.length >= 1) {
+        // The merge pane compares two revisions, so guard against incomplete input before reading docs[1].
+        if (docs && docs.length >= 2) {
             if (keepOrder || docs[0].mtime < docs[1].mtime) {
                 return { a: docs[0], b: docs[1] } as const;
             } else {

View File

@@ -636,10 +636,24 @@ Offline Changed files: ${processFiles.length}`;
     // --> Conflict processing
+    // Keep one in-flight conflict check per path so repeated sync events do not close the active merge dialogue.
+    pendingConflictChecks = new Set<FilePathWithPrefix>();
     queueConflictCheck(path: FilePathWithPrefix) {
+        if (this.pendingConflictChecks.has(path)) return;
+        this.pendingConflictChecks.add(path);
         this.conflictResolutionProcessor.enqueue(path);
     }
+    finishConflictCheck(path: FilePathWithPrefix) {
+        this.pendingConflictChecks.delete(path);
+    }
+    requeueConflictCheck(path: FilePathWithPrefix) {
+        this.finishConflictCheck(path);
+        this.queueConflictCheck(path);
+    }
     async resolveConflictOnInternalFiles() {
         // Scan all conflicted internal files
         const conflicted = this.localDatabase.findEntries(ICHeader, ICHeaderEnd, { conflicts: true });
@@ -648,7 +662,7 @@ Offline Changed files: ${processFiles.length}`;
             for await (const doc of conflicted) {
                 if (!("_conflicts" in doc)) continue;
                 if (isInternalMetadata(doc._id)) {
-                    this.conflictResolutionProcessor.enqueue(doc.path);
+                    this.queueConflictCheck(doc.path);
                 }
             }
         } catch (ex) {
@@ -679,21 +693,27 @@ Offline Changed files: ${processFiles.length}`;
         const cc = await this.localDatabase.getRaw(id, { conflicts: true });
         if (cc._conflicts?.length === 0) {
             await this.extractInternalFileFromDatabase(stripAllPrefixes(path));
+            this.finishConflictCheck(path);
         } else {
-            this.conflictResolutionProcessor.enqueue(path);
+            this.requeueConflictCheck(path);
         }
         // check the file again
     }
     conflictResolutionProcessor = new QueueProcessor(
         async (paths: FilePathWithPrefix[]) => {
             const path = paths[0];
-            sendSignal(`cancel-internal-conflict:${path}`);
             try {
                 // Retrieve data
                 const id = await this.path2id(path, ICHeader);
                 const doc = await this.localDatabase.getRaw<MetaEntry>(id, { conflicts: true });
-                if (doc._conflicts === undefined) return [];
-                if (doc._conflicts.length == 0) return [];
+                if (doc._conflicts === undefined) {
+                    this.finishConflictCheck(path);
+                    return [];
+                }
+                if (doc._conflicts.length == 0) {
+                    this.finishConflictCheck(path);
+                    return [];
+                }
                 this._log(`Hidden file conflicted:${path}`);
                 const conflicts = doc._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
                 const revA = doc._rev;
@@ -725,7 +745,7 @@ Offline Changed files: ${processFiles.length}`;
                     await this.storeInternalFileToDatabase({ path: filename, ...stat });
                     await this.extractInternalFileFromDatabase(filename);
                     await this.localDatabase.removeRevision(id, revB);
-                    this.conflictResolutionProcessor.enqueue(path);
+                    this.requeueConflictCheck(path);
                     return [];
                 } else {
                     this._log(`Object merge is not applicable.`, LOG_LEVEL_VERBOSE);
@@ -743,6 +763,7 @@ Offline Changed files: ${processFiles.length}`;
                 await this.resolveByNewerEntry(id, path, doc, revA, revB);
                 return [];
             } catch (ex) {
+                this.finishConflictCheck(path);
                 this._log(`Failed to resolve conflict (Hidden): ${path}`);
                 this._log(ex, LOG_LEVEL_VERBOSE);
                 return [];
@@ -761,15 +782,22 @@ Offline Changed files: ${processFiles.length}`;
         const prefixedPath = addPrefix(path, ICHeader);
         const docAMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revA });
         const docBMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revB });
-        if (docAMerge != false && docBMerge != false) {
-            if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
-                // Again for other conflicted revisions.
-                this.conflictResolutionProcessor.enqueue(path);
+        try {
+            if (docAMerge != false && docBMerge != false) {
+                if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
+                    // Again for other conflicted revisions.
+                    this.requeueConflictCheck(path);
+                } else {
+                    this.finishConflictCheck(path);
+                }
+                return;
+            } else {
+                // If either revision could not read, force resolving by the newer one.
+                await this.resolveByNewerEntry(id, path, doc, revA, revB);
             }
-            return;
-        } else {
-            // If either revision could not read, force resolving by the newer one.
-            await this.resolveByNewerEntry(id, path, doc, revA, revB);
+        } catch (ex) {
+            this.finishConflictCheck(path);
+            throw ex;
        }
     },
     {
@@ -793,6 +821,8 @@ Offline Changed files: ${processFiles.length}`;
         const storeFilePath = strippedPath;
         const displayFilename = `${storeFilePath}`;
         // const path = this.prefixedConfigDir2configDir(stripAllPrefixes(docA.path)) || docA.path;
+        // Cancel only when replacing an existing dialogue for the same path, not on every queue pass.
+        sendSignal(`cancel-internal-conflict:${docA.path}`);
         const modal = new JsonResolveModal(this.app, storageFilePath, [docA, docB], async (keep, result) => {
             // modal.close();
             try {
@@ -1164,7 +1194,7 @@ Offline Changed files: ${files.length}`;
             // Check if the file is conflicted, and if so, enqueue to resolve.
             // Until the conflict is resolved, the file will not be processed.
             if (docMeta._conflicts && docMeta._conflicts.length > 0) {
-                this.conflictResolutionProcessor.enqueue(path);
+                this.queueConflictCheck(path);
                 this._log(`${headerLine} Hidden file conflicted, enqueued to resolve`);
                 return true;
             }

View File

@@ -781,7 +781,8 @@ Success: ${successCount}, Errored: ${errored}`;
         const credential = generateCredentialObject(this.settings);
         const request = async (path: string, method: string = "GET", body: any = undefined) => {
             const req = await _requestToCouchDB(
-                this.settings.couchDB_URI + (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
+                this.settings.couchDB_URI.replace(/\/+$/, "") +
+                    (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
                 credential,
                 window.origin,
                 path,

Submodule src/lib updated: dd48d97e71...6a2dc6777f

View File

@@ -137,6 +137,23 @@ export function paneHatch(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement,
     pluginConfig.accessKey = REDACTED;
     pluginConfig.secretKey = REDACTED;
     const redact = (source: string) => `${REDACTED}(${source.length} letters)`;
+    const toSchemeOnly = (uri: string) => {
+        try {
+            return `${new URL(uri).protocol}//`;
+        } catch {
+            const matched = uri.match(/^[A-Za-z][A-Za-z0-9+.-]*:\/\//);
+            return matched?.[0] ?? REDACTED;
+        }
+    };
+    pluginConfig.remoteConfigurations = Object.fromEntries(
+        Object.entries(pluginConfig.remoteConfigurations || {}).map(([id, config]) => [
+            id,
+            {
+                ...config,
+                uri: toSchemeOnly(config.uri),
+            },
+        ])
+    );
     pluginConfig.region = redact(pluginConfig.region);
     pluginConfig.bucket = redact(pluginConfig.bucket);
     pluginConfig.pluginSyncExtendedSetting = {};

View File

@@ -32,6 +32,7 @@ import SetupRemote from "../SetupWizard/dialogs/SetupRemote.svelte";
 import SetupRemoteCouchDB from "../SetupWizard/dialogs/SetupRemoteCouchDB.svelte";
 import SetupRemoteBucket from "../SetupWizard/dialogs/SetupRemoteBucket.svelte";
 import SetupRemoteP2P from "../SetupWizard/dialogs/SetupRemoteP2P.svelte";
+import { syncActivatedRemoteSettings } from "./remoteConfigBuffer.ts";

 function getSettingsFromEditingSettings(editingSettings: AllSettings): ObsidianLiveSyncSettings {
     const workObj = { ...editingSettings } as ObsidianLiveSyncSettings;
@@ -183,6 +184,11 @@
             }, true);
             if (synchroniseActiveRemote) {
+                // Keep both buffers aligned with the newly activated remote before saving any remaining dirty keys.
+                syncActivatedRemoteSettings(this.editingSettings, this.core.settings);
+                if (this.initialSettings) {
+                    syncActivatedRemoteSettings(this.initialSettings, this.core.settings);
+                }
                 await this.saveAllDirtySettings();
             }

View File

@@ -0,0 +1,17 @@
import { pickBucketSyncSettings, pickCouchDBSyncSettings, pickP2PSyncSettings } from "@lib/common/utils.ts";
import type { ObsidianLiveSyncSettings } from "@lib/common/types.ts";
// Keep the setting dialogue buffer aligned with the current core settings before persisting other dirty keys.
// This also clears stale dirty values left from editing a different remote type before switching active remotes.
export function syncActivatedRemoteSettings(
target: Partial<ObsidianLiveSyncSettings>,
source: ObsidianLiveSyncSettings
): void {
Object.assign(target, {
remoteType: source.remoteType,
activeConfigurationId: source.activeConfigurationId,
...pickBucketSyncSettings(source),
...pickCouchDBSyncSettings(source),
...pickP2PSyncSettings(source),
});
}

View File

@@ -0,0 +1,83 @@
import { describe, expect, it } from "vitest";
import { DEFAULT_SETTINGS, REMOTE_COUCHDB, REMOTE_MINIO } from "../../../lib/src/common/types";
import { syncActivatedRemoteSettings } from "./remoteConfigBuffer";
describe("syncActivatedRemoteSettings", () => {
it("should copy active MinIO credentials into the editing buffer", () => {
const target = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_COUCHDB,
activeConfigurationId: "old-remote",
accessKey: "",
secretKey: "",
endpoint: "",
bucket: "",
region: "",
encrypt: true,
};
const source = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
bucketPrefix: "folder/",
useCustomRequestHandler: false,
forcePathStyle: true,
bucketCustomHeaders: "",
};
syncActivatedRemoteSettings(target, source);
expect(target.remoteType).toBe(REMOTE_MINIO);
expect(target.activeConfigurationId).toBe("remote-s3");
expect(target.accessKey).toBe("access");
expect(target.secretKey).toBe("secret");
expect(target.endpoint).toBe("https://minio.example.test");
expect(target.bucket).toBe("vault");
expect(target.region).toBe("sz-hq");
expect(target.bucketPrefix).toBe("folder/");
expect(target.encrypt).toBe(true);
});
it("should clear stale dirty values from a different remote type", () => {
const target = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
couchDB_URI: "https://edited.invalid",
couchDB_USER: "edited-user",
couchDB_PASSWORD: "edited-pass",
couchDB_DBNAME: "edited-db",
};
const source = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
couchDB_URI: "https://current.example.test",
couchDB_USER: "current-user",
couchDB_PASSWORD: "current-pass",
couchDB_DBNAME: "current-db",
};
syncActivatedRemoteSettings(target, source);
expect(target.couchDB_URI).toBe("https://current.example.test");
expect(target.couchDB_USER).toBe("current-user");
expect(target.couchDB_PASSWORD).toBe("current-pass");
expect(target.couchDB_DBNAME).toBe("current-db");
});
});

View File

@@ -3,60 +3,83 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 got a lot of updates, thankfully, compatibility is kept and we do not need breaking changes! In other words, when get enough stabled. The next version will be v1.0.0. Even though it my hope. The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 got a lot of updates, thankfully, compatibility is kept and we do not need breaking changes! In other words, when get enough stabled. The next version will be v1.0.0. Even though it my hope.
## 0.25.56+patched5 ## Unreleased
6th April, 2026 ### P2P Synchronisation
Now the foundation for P2P synchronisation has been rewritten, and the unit tests have been added. The foundation has been separated into the transport layer, signalling-and-connection layer, and, an RPC layers. And each layer has been unit-tested. As the result, the P2P synchronisation now uses the robust shim that uses RPC-ed PouchDB synchronisation in contrast to previous implementation.
This P2P synchronisation is not compatible with previous versions in terms of connectivity. All devices must be updated.
## 0.25.60
29th April, 2026
### Fixed ### Fixed
- Now larger settings can be exported and imported via QR code without issues. (#595)
- When the settings data exceeds the QR code capacity, it is now split into multiple QR codes.
- These QR codes are reassembled by the aggregator page, which collects the split data and reconstructs the original settings.
- Aggregator page is available at `https://vrtmrz.github.io/obsidian-livesync/aggregator.html`, and this file is also included in the repository.
- We will not send the settings data to any server. The QR code data is generated and processed entirely on the client side, ensuring that your settings remain private and secure. HOWEVER, please be careful your network environment.
- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.
### Fixed (CLI)
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- Improved the clarity of CLI command logs by including the total count of processed items.
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.
## 0.25.59
### Fixed
- No longer Setup-wizard drops username and password silently. (#865)
- Thank you so much for @koteitan !
- Setup URI is now correctly imported (#859).
- Also thank you so much for @koteitan !
### Improved
- now French translation is added by @foXaCe ! Thank you so much!
## 0.25.58
### Fixed
- No longer credentials are broken during object storage configuration (related: #852).
- Fixed a worker-side recursion issue that could raise `Maximum call stack size exceeded` during chunk splitting (related: #855).
- Improved background worker crash cleanup so pending split/encryption tasks are released cleanly instead of being left in a waiting state (related: #855).
- On start-up, the selected remote configuration is now applied to runtime connection fields as well, reducing intermittent authentication failures caused by stale runtime settings (related: #855).
- Issue report generation now redacts `remoteConfigurations` connection strings and keeps only the scheme (e.g. `sls+https://`), so credentials are not exposed in reports.
- Hidden file JSON conflicts no longer keep re-opening and dismissing the merge dialogue before we can act, which fixes persistent unresolvable `data.json` conflicts in plug-in settings sync (related: #850).
## 0.25.57
9th April, 2026
- Packing a batch during the journal sync now continues even if the batch contains no items to upload. - Packing a batch during the journal sync now continues even if the batch contains no items to upload.
- No unexpected error (about a replicator) during the early stage of initialisation.
## 0.25.56+patched4 - Now error messages are kept hidden if the show status inside the editor is disabled (related: #829).
6th April, 2026
### Fixed
- Remote configuration URIs are now correctly encrypted when saved after editing in the settings dialogue.
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
    - Each device's local deduplication caches (`knownIDs`, `sentIDs`, `receivedFiles`, `sentFiles`) now track the remote journal epoch (derived from the encryption parameters stored on the remote).
    - When the epoch changes, the plugin verifies whether the device's last uploaded file still exists on the remote. If the file is gone, it confirms a remote wipe and automatically clears the stale caches. If the file is still present (e.g. a protocol upgrade without a wipe), the caches are preserved, and only the epoch is updated. This means normal upgrades never cause unnecessary re-processing.
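A minimal sketch of this epoch check, with hypothetical helper and cache names (`remoteFileExists`, `DedupCaches`); the actual implementation differs:

```ts
// A sketch only: names and cache shape are assumptions for illustration.
interface DedupCaches {
    epoch: string;
    knownIDs: Set<string>;
    sentIDs: Set<string>;
    receivedFiles: Set<string>;
    sentFiles: Set<string>;
    lastUploadedFile?: string;
}

async function reconcileEpoch(
    caches: DedupCaches,
    remoteEpoch: string,
    remoteFileExists: (name: string) => Promise<boolean>
): Promise<void> {
    if (caches.epoch === remoteEpoch) return; // nothing changed
    const stillThere =
        caches.lastUploadedFile !== undefined &&
        (await remoteFileExists(caches.lastUploadedFile));
    if (!stillThere) {
        // Remote was wiped: drop the stale deduplication state.
        caches.knownIDs.clear();
        caches.sentIDs.clear();
        caches.receivedFiles.clear();
        caches.sentFiles.clear();
    }
    // Either way, follow the remote's new epoch.
    caches.epoch = remoteEpoch;
}
```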
## 0.25.56+patched3
5th April, 2026
### Fixed
- Remote configurations are now reliably editable in the settings dialogue.
- We can now fetch settings from the remote and apply them to the local settings for each remote configuration entry.
- The layout no longer breaks when the description of a remote configuration entry is too long.
## 0.25.56+patched2
5th April, 2026
Beta release tagging has now changed to +patched1, +patched2, and so on.
### Translations
- Russian translation has been added! Thank you so much for the contribution, @vipka1n! (#845)
### Fixed
- No more unexpected errors (about a replicator) during the early stage of initialisation.
- Now error messages are kept hidden if the show status inside the editor is disabled.
### New features
- Now we can configure multiple Remote Databases of the same type, e.g., multiple CouchDBs or S3 remotes (a hypothetical settings shape is sketched after this list).
- A user interface for managing multiple remote databases has been added to the settings dialogue. I think no explanation is needed for the UI, but please let me know if you have any questions.
- We can switch between multiple Remote Databases in the settings dialogue.
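As a rough picture only, a settings shape for several remotes might look like this (hypothetical field names; the real schema may differ):

```ts
// A sketch of one plausible shape, not the plugin's actual settings schema.
interface RemoteConfigurationEntry {
    name: string;        // user-visible label shown in the settings dialogue
    type: "couchdb" | "objectstorage";
    uri: string;         // connection string (encrypted at rest)
    description?: string;
}

interface MultiRemoteSettings {
    remoteConfigurations: RemoteConfigurationEntry[];
    activeRemote: string; // `name` of the entry currently in use
}
```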
### CLI
#### Fixed
- Replication progress is now correctly saved and restored in the CLI (related: #846).
## ~~0.25.55~~ 0.25.56

vitest.config.rpc-unit.ts Normal file

@@ -0,0 +1,30 @@
import { defineConfig, mergeConfig } from "vitest/config";
import viteConfig from "./vitest.config.common";

export default mergeConfig(
    viteConfig,
    defineConfig({
        resolve: {
            alias: {
                // The `obsidian` module only exists inside the Obsidian app;
                // alias it away so the RPC unit tests can run standalone.
                obsidian: "",
            },
        },
        test: {
            name: "rpc-unit-tests",
            // Only the RPC layer's unit specs belong to this suite.
            include: ["src/lib/src/rpc/**/*.unit.spec.ts"],
            exclude: ["test/**"],
            coverage: {
                include: ["src/lib/src/rpc/**/*.ts"],
                exclude: ["**/*.unit.spec.ts", "**/index.ts"],
                provider: "v8",
                // Emit the text report to a file as well as to the console.
                reporter: ["text", "json", "html", ["text", { file: "coverage-rpc-text.txt" }]],
                // Coverage gates for the new RPC foundation.
                thresholds: {
                    lines: 90,
                    functions: 90,
                    branches: 75,
                    statements: 90,
                },
            },
        },
    })
);
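For reference, a dedicated configuration like this is typically run by pointing vitest at it explicitly, e.g. `npx vitest run --config vitest.config.rpc-unit.ts` (assuming vitest is installed in the repository).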