Mirror of https://github.com/vrtmrz/obsidian-livesync.git (synced 2026-05-11 10:11:54 +00:00)

Compare commits: 83 commits, `0.25.52-pa` ... `feat-userh`
| SHA1 |
|---|
| 0c51081566 |
| 2afe12ad2d |
| 4a9d6c1349 |
| 279fc8876e |
| cc3d30dbcf |
| 39e82cc8a1 |
| fa7ef62302 |
| 81d8224330 |
| cc466a4b3c |
| ceebca7de9 |
| c2f696d0a4 |
| 1aa7c45794 |
| faefa80cbd |
| 3737eacffd |
| 4c0af0b608 |
| bb69eb13e7 |
| 7c9db6376f |
| 4c04e4e676 |
| 14ec35b257 |
| b609e4973c |
| 354f0be9a3 |
| 16804ed34c |
| 31bd270869 |
| b5d054f259 |
| 1ef2955d00 |
| 6ef56063b3 |
| a912585800 |
| 7a863625bc |
| 99b4037820 |
| d59b5dc2f9 |
| 4d0203e4ca |
| 3e4db571cd |
| b0a9bd84d6 |
| 8c4e62e7c1 |
| 3e03d1dbd5 |
| 0dbf4cface |
| bc22d61a3a |
| d709bcc1d0 |
| d7088be8af |
| f17f1ecd93 |
| bf556bd9f4 |
| 8b40969fa3 |
| 6cce931a88 |
| 216861f2c3 |
| 6ce724afb4 |
| 2e3e106fb2 |
| 00f2606a2f |
| 3c94a44285 |
| 4c0908acde |
| cda27fb7f8 |
| 837a828cec |
| 4c8e13ccb9 |
| 1ae4eaab02 |
| b1efbf74c7 |
| 12f04f6cf7 |
| a937feed3f |
| 2de9899a99 |
| a0af6201a5 |
| 9c7c6c8859 |
| 38d7cae1bc |
| fee34f0dcb |
| e01f7f4d92 |
| 985004bc0e |
| 967a78d657 |
| 2ff60dd5ac |
| c3341da242 |
| c2bfaeb5a9 |
| c454616e1c |
| c88e73b7d3 |
| 3a29818612 |
| ee69085830 |
| 3963f7c971 |
| 602fcef949 |
| 075d260fdd |
| 0717093d81 |
| 1f87a9fd3d |
| fdd3a3aecb |
| d8281390c4 |
| 08b1712f39 |
| 6c69547cef |
| 89bf0488c3 |
| 653cf8dfbe |
| 310496d0b8 |
.dockerignore (new file, 31 lines)

```gitignore
# Git history
.git/
.gitignore

# Dependencies — re-installed inside Docker
node_modules/
src/apps/cli/node_modules/

# Pre-built CLI output — rebuilt inside Docker
src/apps/cli/dist/

# Obsidian plugin build outputs
main.js
main_org.js
pouchdb-browser.js
production/

# Test coverage and reports
coverage/

# Local environment / secrets
.env
*.env
.test.env

# local config files
*.local

# OS artefacts
.DS_Store
Thumbs.db
```
.gitattributes (vendored, new file, 1 line)

```
*.sh text eol=lf
```
.github/ISSUE_TEMPLATE/issue-report.md (vendored, modified)

````markdown
@@ -2,77 +2,104 @@
name: Issue report
about: Create a report to help us improve
title: ''
labels: ''
labels: 'bug'
assignees: ''

---

Thank you for taking the time to report this issue!
To improve the process, I would like to ask you to let me know the information in advance.
Before filling in this form, please read: [How to report an issue](../docs/to_issue_reporting.md).

All instructions and examples, and empty entries can be deleted.
Just for your information, a [filled example](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Issue+example) is also written.
Issues with sufficient information will be prioritised.

## Abstract
The synchronisation hung up immediately after connecting.
---

## Expected behaviour
- Synchronisation ends with the message `Replication completed`
- Everything synchronised
## Required

## Actually happened
- Synchronisation has been cancelled with the message `TypeError ... ` (captured in the attached log, around LL.10-LL.12)
- No files synchronised
### Abstract
<!-- Briefly describe the problem in one or two sentences. -->

## Reproducing procedure
### Expected behaviour
<!-- What did you expect to happen? -->

1. Configure LiveSync as in the attached material.
2. Click the replication button on the ribbon.
3. Synchronising has begun.
4. About two or three seconds later, we got the error `TypeError ... `.
5. Replication has been stopped. No files synchronised.
### Actually happened
<!-- What actually happened? Include any error messages. -->

Note: If you do not catch the reproducing procedure, please let me know the frequency and signs.

## Report materials
If the information is not available, do not hesitate to report it as it is. You can also of course omit it if you think this is indeed unnecessary. If it is necessary, I will ask you.

### Report from the LiveSync
For more information, please refer to [Making the report](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Making+the+report).
<details>
<summary>Report from hatch</summary>

```
<!-- paste here -->
```
</details>
### Reproducing procedure
<!-- Step-by-step instructions to reproduce the issue. If you cannot reproduce it reliably, please describe the frequency and any signs you noticed. -->

### Obsidian debug info
Please provide debug info for **each device involved**. The primary device (where the issue occurred) is required; others are strongly recommended. If your issue involves synchronisation between devices, debug info from relevant devices is very helpful.
To get it: open the command palette → "Show debug info".

<details>
<summary>Debug info</summary>
<summary>Device 1 (primary)</summary>

```
<!-- paste here -->
```
</details>

<details>
<summary>Device 2 (if applicable)</summary>

```
<!-- paste here -->
```
</details>

### LiveSync version
The hatch report (below) includes version information. If you cannot provide the report, please fill in the version here.

- Self-hosted LiveSync version: <!-- e.g. 0.23.0 — find it in Obsidian Settings → Community Plugins -->

### Report from LiveSync
Open the `Hatch` pane in LiveSync settings and press `Make report`. Paste here or upload to [Gist](https://gist.github.com/) and share the link.

<details>
<summary>Report from hatch (primary)</summary>

```
<!-- paste here or link to Gist -->
```
</details>

<details>
<summary>Report from hatch (if applicable)</summary>

```
<!-- paste here or link to Gist -->
```
</details>


### Plug-in log
We can see the log by tapping the Document box icon. If you noticed something suspicious, please let me know.
Note: **Please enable `Verbose Log`**. For detail, refer to [Logging](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Logging), please.
Enable `Verbose Log` in General Settings first, then reproduce the issue and copy the log (tap the document box icon in the ribbon).
Paste here or upload to [Gist](https://gist.github.com/) and share the link.

<details>
<summary>Plug-in log</summary>
<summary>Plug-in log (primary)</summary>

```
<!-- paste here -->
<!-- paste here or link to Gist -->
```
</details>

### Network log
Network logs displayed in DevTools will possibly help with connection-related issues. To capture that, please refer to [DevTools](https://docs.vrtmrz.net/LiveSync/hintandtrivia/DevTools).

<details>
<summary>Plug-in log (if applicable)</summary>

```
<!-- paste here or link to Gist -->
```
</details>

---

## Optional

### Screenshots
If applicable, please add screenshots to help explain your problem.

### Other information, insights and intuition.
### Other information, insights and intuition
Please provide any additional context or information about the problem.
````
.github/workflows/cli-deno-tests.yml (vendored, new file, 75 lines)

```yaml
name: cli-deno-tests

on:
  workflow_dispatch:
    inputs:
      test_task:
        description: 'Deno test task to run'
        type: choice
        options:
          - test
          - test:local
          - test:e2e-matrix
          - test:p2p-sync
        default: test

permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '24.x'
          cache: 'npm'

      - name: Setup Deno
        uses: denoland/setup-deno@v2
        with:
          deno-version: v2.x

      - name: Install dependencies
        run: npm ci

      - name: Build CLI
        working-directory: src/apps/cli
        run: npm run build

      - name: Create .test.env
        working-directory: src/apps/cli
        run: |
          cat <<EOF > .test.env
          hostname=http://127.0.0.1:5989/
          dbname=livesync-test-db-ci
          username=admin
          password=testpassword
          minioEndpoint=http://127.0.0.1:9000
          accessKey=minioadmin
          secretKey=minioadmin
          bucketName=livesync-test-bucket-ci
          EOF

      - name: Run Deno tests
        working-directory: src/apps/cli/testdeno
        env:
          LIVESYNC_DOCKER_MODE: native
          LIVESYNC_CLI_RETRY: 3
        run: |
          TASK="${{ github.event_name == 'workflow_dispatch' && inputs.test_task || 'test' }}"
          echo "[INFO] Running Deno task: $TASK"
          deno task "$TASK"

      - name: Stop leftover containers
        if: always()
        run: |
          docker stop couchdb-test minio-test relay-test >/dev/null 2>&1 || true
          docker rm couchdb-test minio-test relay-test >/dev/null 2>&1 || true
```
.github/workflows/cli-docker.yml (vendored, new file, 101 lines)

```yaml
# Build and push the CLI Docker image to GitHub Container Registry (GHCR).
# Image tag format: <manifest-version>-<unix-epoch>-cli
# Example: 0.25.56-1743500000-cli
#
# The image is also tagged 'latest' for convenience.
# Image name: ghcr.io/<owner>/livesync-cli
name: Build and Push CLI Docker Image

on:
  push:
    tags:
      - "*.*.*-cli"
  workflow_dispatch:
    inputs:
      dry_run:
        description: Build only (do not push image to GHCR)
        required: false
        type: boolean
        default: true
      force:
        description: Continue to build/push even if CLI E2E fails (workflow_dispatch only)
        required: false
        type: boolean
        default: false

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    timeout-minutes: 90
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive

      - name: Derive image tag
        id: meta
        run: |
          VERSION=$(jq -r '.version' manifest.json)
          EPOCH=$(date +%s)
          TAG="${VERSION}-${EPOCH}-cli"
          IMAGE="ghcr.io/${{ github.repository_owner }}/livesync-cli"
          echo "tag=${TAG}" >> $GITHUB_OUTPUT
          echo "image=${IMAGE}" >> $GITHUB_OUTPUT
          echo "full=${IMAGE}:${TAG}" >> $GITHUB_OUTPUT
          echo "version=${IMAGE}:${VERSION}-cli" >> $GITHUB_OUTPUT
          echo "latest=${IMAGE}:latest" >> $GITHUB_OUTPUT

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "24.x"
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Run CLI E2E (docker)
        id: e2e
        continue-on-error: ${{ github.event_name == 'workflow_dispatch' && inputs.force }}
        working-directory: src/apps/cli
        env:
          CI: true
        run: npm run test:e2e:docker:all

      - name: Stop test containers (safety net)
        if: always()
        working-directory: src/apps/cli
        run: |
          # Keep this as a safety net for future suites/steps that may leave containers running.
          bash ./util/couchdb-stop.sh >/dev/null 2>&1 || true
          bash ./util/minio-stop.sh >/dev/null 2>&1 || true
          bash ./util/p2p-stop.sh >/dev/null 2>&1 || true

      - name: Build and push
        if: ${{ steps.e2e.outcome == 'success' || (github.event_name == 'workflow_dispatch' && inputs.force) }}
        uses: docker/build-push-action@v6
        with:
          context: .
          file: src/apps/cli/Dockerfile
          push: ${{ !(github.event_name == 'workflow_dispatch' && inputs.dry_run) }}
          tags: |
            ${{ steps.meta.outputs.full }}
            ${{ steps.meta.outputs.version }}
            ${{ steps.meta.outputs.latest }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
.github/workflows/harness-ci.yml (vendored, modified)

```diff
@@ -56,7 +56,7 @@ jobs:
       if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suitep2p/' }}
       env:
         CI: true
-      run: npm run test suitep2p/
+      run: npm run test:p2p
     - name: Stop test services (CouchDB)
       run: npm run test:docker-couchdb:stop
       if: ${{ inputs.testsuite == '' || inputs.testsuite == 'suite/' }}
```
```diff
@@ -24,7 +24,7 @@ Additionally, it supports peer-to-peer synchronisation using WebRTC now (experim
 - WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
 - Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
   - [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
-  - [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
+  - [webpeer](https://github.com/vrtmrz/obsidian-livesync/tree/main/src/apps/webpeer): A pseudo-client for receiving and sending data between devices.
   - A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
 - For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).
```
aggregator.html (new file, 92 lines)

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Self-hosted LiveSync Setup QR Aggregator</title>
    <style>
        body { font-family: sans-serif; display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100vh; margin: 0; background-color: #f4f4f9; color: #333; }
        .container { background: white; padding: 2rem; border-radius: 8px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); text-align: center; max-width: 90%; }
        .progress { margin: 20px 0; font-size: 1.2rem; font-weight: bold; }
        .status { margin-bottom: 20px; color: #666; }
        .btn { display: inline-block; padding: 12px 24px; background-color: #7c4dff; color: white; text-decoration: none; border-radius: 4px; font-weight: bold; transition: background-color 0.2s; border: none; cursor: pointer; }
        .btn:hover { background-color: #651fff; }
        .btn:disabled { background-color: #ccc; cursor: not-allowed; }
        .grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(40px, 1fr)); gap: 8px; margin: 20px 0; }
        .tile { width: 40px; height: 40px; border: 2px solid #ddd; border-radius: 4px; display: flex; align-items: center; justify-content: center; font-size: 0.8rem; }
        .tile.filled { background-color: #7c4dff; color: white; border-color: #7c4dff; }
    </style>
</head>
<body>
    <div class="container">
        <h1>LiveSync Setup</h1>
        <div id="app">
            <p>Checking hash data...</p>
        </div>
    </div>

    <script>
        function updateUI() {
            const hash = window.location.hash.substring(1);
            const params = new URLSearchParams(hash);

            const id = params.get('id');
            const total = parseInt(params.get('n') || '0');
            const index = parseInt(params.get('i') || '-1');
            const data = params.get('d');

            const app = document.getElementById('app');

            if (!id || total <= 0 || index === -1 || !data) {
                app.innerHTML = '<p class="status">Invalid setup URL. Please scan the QR code correctly.</p>';
                return;
            }

            // Get session data
            const storageKey = 'ls_agg_' + id;
            let session = JSON.parse(localStorage.getItem(storageKey) || '{}');

            // Save current data
            session[index] = data;
            localStorage.setItem(storageKey, JSON.stringify(session));

            const receivedIndexes = Object.keys(session).map(Number);
            const count = receivedIndexes.length;

            let html = `
                <div class="status">Session ID: ${id}</div>
                <div class="progress">${count} / ${total} Loaded</div>
                <div class="grid">
            `;

            for (let i = 0; i < total; i++) {
                const isFilled = session[i] !== undefined;
                html += `<div class="tile ${isFilled ? 'filled' : ''}">${i + 1}</div>`;
            }
            html += `</div>`;

            if (count === total) {
                const sortedData = Array.from({length: total}, (_, i) => session[i]).join('');
                // Use the correct protocol for settings
                const obsidianUri = `obsidian://setuplivesync?settingsQR=${sortedData}`;

                html += `
                    <p>All parts have been collected!</p>
                    <a href="${obsidianUri}" class="btn">Open Obsidian to complete setup</a>
                    <p style="margin-top:20px; font-size:0.8rem; color: #999;">Note: If the button does not respond, please ensure you are opening this in a browser that can trigger Obsidian.</p>
                `;
            } else {
                html += `
                    <p class="status">Please scan the next QR code.</p>
                    <button class="btn" disabled>Waiting...</button>
                `;
            }

            app.innerHTML = html;
        }

        window.addEventListener('hashchange', updateUI);
        updateUI();
    </script>
</body>
</html>
```
devs.md (modified)

```diff
@@ -153,17 +153,17 @@ export class ModuleExample extends AbstractObsidianModule {

 ## Beta Policy

-- Beta versions are denoted by appending `-patched-N` to the base version number.
+- Beta versions are denoted by appending `+patchedN` to the base version number.
   - `The base version` mostly corresponds to the stable release version.
-  - e.g., v0.25.41-patched-1 is equivalent to v0.25.42-beta1.
+  - e.g., v0.25.41+patched1 is equivalent to v0.25.42-beta1.
   - This notation is due to SemVer incompatibility of Obsidian's plugin system.
-  - Hence, this release is `0.25.41-patched-1`.
+  - Hence, this release is `0.25.41+patched1`.
 - Each beta version may include larger changes, but bug fixes will often not be included.
   - I think that in most cases, bug fixes will cause the stable releases.
 - They will not be released per branch or backported; they will simply be released.
 - Bug fixes for previous versions will be applied to the latest beta version.
-  This means, if xx.yy.02-patched-1 exists and there is a defect in xx.yy.01, a fix is applied to xx.yy.02-patched-1 and yields xx.yy.02-patched-2.
-  If the fix is required immediately, it is released as xx.yy.02 (with xx.yy.01-patched-1).
+  This means, if xx.yy.02+patched1 exists and there is a defect in xx.yy.01, a fix is applied to xx.yy.02+patched1 and yields xx.yy.02+patched2.
+  If the fix is required immediately, it is released as xx.yy.02 (with xx.yy.01+patched1).
 - This procedure remains unchanged from the current one.
 - At the very least, I am using the latest beta.
   - However, I will not be using a beta continuously for a week after it has been released. It is probably closer to an RC in nature.
```
```diff
@@ -1,7 +1,6 @@
 # For details and other explanations about this file refer to:
 # https://github.com/vrtmrz/obsidian-livesync/blob/main/docs/setup_own_server.md#traefik

-version: "2.1"
 services:
   couchdb:
     image: couchdb:latest
```
docs/design_docs_of_remote_configurations.md (new file, 206 lines)

# The design document of remote configuration management

## Goal

- Allow us to manage multiple remote connections in a single vault.
- Keep the existing synchronisation implementations working without requiring a large rewrite.
- Provide a safe migration path from the previous single-remote configuration model.
- Allow connections to be imported and exported in a compact and reusable format.

## Motivation

Historically, Self-hosted LiveSync stored one effective remote configuration directly in the main settings. This was simple, but it had several limitations.

- We could only keep one CouchDB, one bucket, or one Peer-to-Peer target as the effective configuration at a time.
- Switching between remotes of the same type required manually rewriting the active settings.
- Setup URI, QR code, CLI setup, and similar entry points all restored settings differently, which made migration logic easy to miss.
- The internal settings shape had gradually become a mix of user-facing settings, transport-specific credentials, and compatibility-oriented values.

Once multiple remotes of the same type became desirable, the previous model no longer scaled well enough. We therefore needed a structure that could store many remotes, still expose one effective remote to the replication logic, and keep migration and import behaviour consistent.

## Prerequisite

- Existing synchronisation features must continue to read an effective remote configuration from the current settings.
- Existing vaults must continue to work without requiring manual reconfiguration.
- Setup URI, QR code, CLI setup, protocol handlers, and other imported settings must be normalised in the same way.
- Import and export must be compact enough to be shared easily.
- We must be explicit that exported connection strings may contain credentials or secrets.

## Outlined methods and implementation plans

### Abstract

The current settings now have two layers for remote configuration.

1. A stored collection of named remotes.
2. One active remote projected into the legacy flat settings fields.

This means the replication and database layers can continue to read the effective remote from the existing settings fields, while the settings dialogue and migration logic can manage many stored remotes.

In short, the list is the source of truth for saved remotes, and the legacy fields remain the runtime compatibility layer.

### Data model

The main settings now contain the following properties.

```typescript
type RemoteConfiguration = {
    id: string;
    name: string;
    uri: string;
    isEncrypted: boolean;
};

type RemoteConfigurations = {
    remoteConfigurations: Record<string, RemoteConfiguration>;
    activeConfigurationId: string;
};
```

Each entry stores a connection string in `uri`.

- `sls+http://` or `sls+https://` for CouchDB-compatible remotes
- `sls+s3://` for bucket-style remotes
- `sls+p2p://` for Peer-to-Peer remotes

This structure allows multiple remotes of the same type to be stored without adding a large number of duplicated settings fields.
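As an illustration, a stored collection under this shape might look like the following sketch. The ids, names, and URIs are invented for the example, not taken from a real vault.

```typescript
// Illustrative example of the stored remote list described above.
type RemoteConfiguration = {
    id: string;
    name: string;
    uri: string;
    isEncrypted: boolean;
};

type RemoteConfigurations = {
    remoteConfigurations: Record<string, RemoteConfiguration>;
    activeConfigurationId: string;
};

const example: RemoteConfigurations = {
    remoteConfigurations: {
        "home-couchdb": {
            id: "home-couchdb",
            name: "Home CouchDB",
            uri: "sls+https://user:secret@couch.example.com:5984/vault",
            isEncrypted: false,
        },
        "backup-bucket": {
            id: "backup-bucket",
            name: "Backup bucket",
            uri: "sls+s3://AKIAEXAMPLE:secret@s3.example.com/livesync-bucket",
            isEncrypted: false,
        },
    },
    activeConfigurationId: "home-couchdb",
};

// The active entry is the one that gets projected into the legacy flat fields.
const active = example.remoteConfigurations[example.activeConfigurationId];
```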
### Runtime compatibility

The replication logic still reads the effective remote from legacy flat settings such as the following.

- `remoteType`
- `couchDB_URI`, `couchDB_USER`, `couchDB_PASSWORD`, `couchDB_DBNAME`
- `endpoint`, `bucket`, `accessKey`, `secretKey`, and related bucket fields
- `P2P_roomID`, `P2P_passphrase`, and related Peer-to-Peer fields

When a remote is activated, its connection string is parsed and projected into these legacy fields. Therefore, existing services do not need to know whether the remote came from an old vault, a Setup URI, or the new remote list.

This projection is intentionally one-way at runtime. The stored remote list is the persistent catalogue, while the flat fields describe the remote currently in use.
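A minimal sketch of this projection for a CouchDB-compatible connection string, assuming the `sls+` prefix simply wraps a standard URL; this is an illustration, not the plugin's actual parser.

```typescript
// Hypothetical sketch: project an active remote's connection string into the
// legacy flat settings fields. Field names follow the list above.
type LegacyCouchDbFields = {
    remoteType: string;
    couchDB_URI: string;
    couchDB_USER: string;
    couchDB_PASSWORD: string;
    couchDB_DBNAME: string;
};

function projectCouchDbRemote(uri: string): LegacyCouchDbFields {
    if (!uri.startsWith("sls+http://") && !uri.startsWith("sls+https://")) {
        throw new Error("not a CouchDB-compatible connection string");
    }
    // Drop the "sls+" prefix so the standard URL parser can handle the rest.
    const url = new URL(uri.slice("sls+".length));
    return {
        remoteType: "couchdb",
        couchDB_URI: `${url.protocol}//${url.host}`,
        couchDB_USER: decodeURIComponent(url.username),
        couchDB_PASSWORD: decodeURIComponent(url.password),
        couchDB_DBNAME: url.pathname.replace(/^\//, ""),
    };
}
```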
### Connection string format

The connection string is the transport-neutral storage format for a remote entry.

Benefits:

- It is compact enough for clipboard-based workflows.
- It can be used for import and export in the settings dialogue.
- It avoids introducing a separate serialisation format only for the remote list.
- It can be parsed into the legacy settings shape whenever the active remote changes.

This is not equivalent to Setup URI.

- Setup URI represents a broader settings transfer workflow.
- A remote connection string represents one remote only.

### Import and export

The settings dialogue now supports the following workflows.

- Add connection: create a new remote by using the remote setup dialogues.
- Import connection: paste a connection string, validate it, and save it as a named remote.
- Export: copy a stored remote connection string to the clipboard.

Import normalises the string by parsing and serialising it again before saving. This ensures that equivalent but differently formatted URIs are saved in a canonical form.

Export is intentionally simple. It copies the connection string itself, because this is the most direct representation of one remote entry.
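The parse-and-serialise round trip can be sketched as follows; the scheme check and the reliance on the standard URL parser are simplifying assumptions for the example, not the actual implementation.

```typescript
// Hypothetical sketch of import-time canonicalisation: parse the connection
// string and serialise it again, so equivalent inputs are stored in one form.
function canonicaliseConnectionString(input: string): string {
    const trimmed = input.trim();
    if (!/^sls\+(https?):\/\//i.test(trimmed)) {
        throw new Error("unsupported connection string");
    }
    // Re-serialising through URL lowercases the scheme and host, drops
    // default ports, and normalises the path.
    const url = new URL(trimmed.slice("sls+".length));
    return `sls+${url.toString()}`;
}
```

Canonicalisation is idempotent: feeding the output back in returns the same string, which is what makes it safe to run on every import.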
### Security note

Connection strings may include credentials, secrets, JWT-related values, or Peer-to-Peer passphrases.

Therefore:

- Export is a deliberate clipboard operation.
- Import trusts the supplied connection string as-is after parsing.
- We should regard exported connection strings as sensitive information, much like Setup URI or a credentials-bearing configuration file.

The `isEncrypted` field is currently reserved for future expansion. At present, the connection string itself is stored plainly inside the settings data, in the same sense that the effective runtime configuration can contain usable remote credentials.

### Migration strategy

Older vaults store only one effective remote in the flat settings fields. The migration creates a first remote list from those values.

Rules:

- If no remote list exists and the legacy fields contain a CouchDB configuration, create `legacy-couchdb`.
- If no remote list exists and the legacy fields contain a bucket configuration, create `legacy-s3`.
- If no remote list exists and the legacy fields contain a Peer-to-Peer configuration, create `legacy-p2p`.
- If more than one legacy remote is populated, create all possible entries and select the active one according to `remoteType`.

This migration is intentionally additive. It does not remove the flat fields because they remain necessary as the active runtime projection.
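The rules above can be sketched as a function. The settings shape is abbreviated and the way each legacy field is turned into a URI is illustrative; only the entry names (`legacy-couchdb`, `legacy-s3`, `legacy-p2p`) and the `remoteType`-based selection come from the rules themselves.

```typescript
// Hypothetical sketch of the additive migration described above.
type LegacySettings = {
    remoteType: "couchdb" | "s3" | "p2p";
    couchDB_URI?: string;
    endpoint?: string;
    P2P_roomID?: string;
    remoteConfigurations?: Record<string, { id: string; uri: string }>;
    activeConfigurationId?: string;
};

function migrateToRemoteList(s: LegacySettings): LegacySettings {
    if (s.remoteConfigurations) return s; // A list already exists: nothing to do.
    const list: Record<string, { id: string; uri: string }> = {};
    if (s.couchDB_URI) list["legacy-couchdb"] = { id: "legacy-couchdb", uri: `sls+${s.couchDB_URI}` };
    if (s.endpoint) list["legacy-s3"] = { id: "legacy-s3", uri: `sls+s3://${s.endpoint}` };
    if (s.P2P_roomID) list["legacy-p2p"] = { id: "legacy-p2p", uri: `sls+p2p://${s.P2P_roomID}` };
    // Select the active entry according to remoteType; keep the flat fields intact.
    const active = { couchdb: "legacy-couchdb", s3: "legacy-s3", p2p: "legacy-p2p" }[s.remoteType];
    return { ...s, remoteConfigurations: list, activeConfigurationId: active };
}
```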
### Normalisation and application paths

One important design lesson from this work is that migration cannot rely only on loading `data.json`.

Settings may enter the system from several routes:

- normal settings load
- Setup URI
- QR code
- protocol handler
- CLI setup
- Peer-to-Peer remote configuration retrieval
- red flag based remote adjustment
- settings markdown import

To keep behaviour consistent, normalisation is centralised in the settings service.

- `adjustSettings` is responsible for in-place normalisation and migration of a settings object.
- `applyExternalSettings` is responsible for applying imported or externally supplied settings after passing them through the same normalisation flow.

This ensures that imported settings can migrate to the current remote list model even if they never passed through the ordinary `loadSettings` path.
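A sketch of how the two responsibilities can be layered so that no external route bypasses normalisation. Only the function names come from the document; their bodies here are illustrative stubs.

```typescript
// Abbreviated, hypothetical settings shape for the sketch.
type Settings = { remoteConfigurations?: Record<string, unknown>; [k: string]: unknown };

// Single normalisation step: every settings object passes through here,
// whether it came from disk, a Setup URI, a QR code, or the CLI.
function adjustSettings(s: Settings): Settings {
    if (!s.remoteConfigurations) {
        // Stand-in for the real migration: ensure the remote list exists.
        return { ...s, remoteConfigurations: {} };
    }
    return s;
}

// External routes apply settings via this helper, so they cannot skip
// the migration performed by adjustSettings.
function applyExternalSettings(current: Settings, imported: Partial<Settings>): Settings {
    return adjustSettings({ ...current, ...imported });
}
```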
|
||||
|
||||
### Why not store only the remote list
|
||||
|
||||
It would be possible to let all consumers parse the active remote every time and stop using the flat fields entirely. However, this would require broader changes across replication, diagnostics, and compatibility layers.
|
||||
|
||||
The current design keeps the change set limited.
|
||||
|
||||
- The remote list improves storage and UX.
|
||||
- The flat fields preserve compatibility and reduce migration risk.
|
||||
|
||||
This is a pragmatic transitional architecture, not an accidental duplication.
|
||||
|
||||
## Test strategy
|
||||
|
||||
The feature should be tested from four viewpoints.
|
||||
|
||||
1. Migration from old settings.
|
||||
- A vault with only legacy flat remote settings should gain a remote list automatically.
|
||||
- The correct active remote should be selected according to `remoteType`.
|
||||
|
||||
2. Runtime activation.
|
||||
- Activating a stored remote should correctly project its values into the effective flat settings.
|
||||
|
||||
3. External import paths.
|
||||
- Setup URI, QR code, CLI setup, Peer-to-Peer remote config, red flag adjustment, and settings markdown import should all pass through the same normalisation path.
|
||||
|
||||
4. Import and export.
|
||||
- Imported connection strings should be parsed, canonicalised, named, and stored correctly.
|
||||
- Export should copy the exact saved connection string.
|
||||
|
||||
## Documentation strategy

- This document explains the design and compatibility model of remote configuration management.
- User-facing setup documents should explain only how to add, import, export, and activate remotes.
- Release notes may refer to this document when changes in remote handling are significant.

## Outlook

Import/export configuration strings should also be encrypted in the future, but this is a separate feature that can be added on top of the current design.

## Consideration and conclusion

The remote configuration list solves the practical need to manage multiple remotes without forcing the whole codebase to abandon the previous effective-settings model at once.

Its core idea is modest but effective.

- Store named remotes as connection strings.
- Select one active remote.
- Project it into the legacy settings for runtime use.
- Normalise every imported settings object through the same path.

This keeps the implementation understandable and migration-friendly. It also opens the door for future work, such as encrypted per-remote storage, richer remote metadata, or remote-scoped options, without forcing another large redesign of how remotes are represented.
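The projection step listed above can be sketched as follows. The helper name `parseConnectionString`, the field names, and the URL-based connection-string format are assumptions for illustration, not the plugin's real implementation:

```typescript
// Hypothetical sketch: project the active remote's connection string into
// the legacy flat fields that the rest of the codebase still reads.
interface FlatRemoteFields {
    couchDB_URI: string;
    couchDB_DBNAME: string;
}

function parseConnectionString(cs: string): FlatRemoteFields {
    // e.g. "https://example.com:5984/obsidian-livesync"
    const url = new URL(cs);
    const dbName = url.pathname.replace(/^\//, "");
    return { couchDB_URI: url.origin, couchDB_DBNAME: dbName };
}

function projectActiveRemote<T extends object>(settings: T, connectionString: string): T & FlatRemoteFields {
    // The spread keeps every other effective setting untouched.
    return { ...settings, ...parseConnectionString(connectionString) };
}

const effective = projectActiveRemote({ liveSync: true }, "https://example.com:5984/obsidian-livesync");
console.log(effective.couchDB_DBNAME); // obsidian-livesync
```

Because the projection is a pure overlay, switching the active remote is just re-running it with a different stored connection string.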
@@ -64,6 +64,10 @@ Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)

# Creating the save data & configuration directories.
mkdir couchdb-data
mkdir couchdb-etc

# Changing perms to user 5984.
chown -R 5984:5984 ./couchdb-data
chown -R 5984:5984 ./couchdb-etc
```

#### 2. Create a `docker-compose.yml` file with the following added to it

@@ -226,7 +230,6 @@ And, be sure to check the server log and be careful of malicious access.

If you are using Traefik, this [docker-compose.yml](https://github.com/vrtmrz/obsidian-livesync/blob/main/docker-compose.traefik.yml) file (also pasted below) has all the right CORS parameters set. It assumes you have an external network called `proxy`.

```yaml
version: "2.1"
services:
  couchdb:
    image: couchdb:latest

@@ -71,7 +71,6 @@ obsidian-livesync

You can edit `docker-compose.yml` by referring to the following:
```yaml
version: "2.1"
services:
  couchdb:
    image: couchdb

docs/to_issue_reporting.md (new file, 145 lines)
@@ -0,0 +1,145 @@

# How to report an issue

Thank you for helping improve Self-hosted LiveSync!

This document explains how to collect the information needed for an issue report. Issues with sufficient information will be prioritised.

---

## Filled example

Here is an example of a well-filled report for reference.

### Abstract

The synchronisation hung up immediately after connecting.

### Expected behaviour

- Synchronisation ends with the message `Replication completed`
- Everything synchronised

### Actually happened

- Synchronisation was cancelled with the message `TypeError: Failed to fetch` (visible in the plug-in log around lines 10–12)
- No files synchronised

### Reproducing procedure

1. Configure LiveSync with the settings shown in the attached report.
2. Click the sync button on the ribbon.
3. Synchronisation begins.
4. About two or three seconds later, the error `TypeError: Failed to fetch` appears.
5. Replication stops. No files synchronised.

### Obsidian debug info (Device 1 — Windows desktop)

```
SYSTEM INFO:
Obsidian version: v1.2.8
Installer version: v1.1.15
Operating system: Windows 10 Pro 10.0.19044
Login status: logged in
Catalyst license: supporter
Insider build toggle: off
Community theme: Minimal v6.1.11
Snippets enabled: 3
Restricted mode: off
Plugins installed: 35
Plugins enabled: 11
1: Self-hosted LiveSync v0.19.4
...
```

### Report from LiveSync

```
----remote config----
cors:
credentials: "true"
...
---- Plug-in config ---
couchDB_URI: self-hosted
couchDB_USER: 𝑅𝐸𝐷𝐴𝐶𝑇𝐸𝐷
...
```

### Plug-in log

```
2023/5/24 10:50:33->HTTP:GET to:/ -> failed
2023/5/24 10:50:33->TypeError:Failed to fetch
2023/5/24 10:50:33->could not connect to https://example.com/ : your vault
(TypeError:Failed to fetch)
```

---

## How to collect each piece of information

### Obsidian debug info

Open the command palette (`Ctrl/Cmd + P`) and run **"Show debug info"**. Copy the output and paste it into the issue.

If multiple devices are involved in the problem (e.g., sync between a phone and a desktop), please provide the debug info for each device. The device where the issue occurred is required; information from other devices is strongly recommended.

### Report from LiveSync (hatch report)

1. Open LiveSync settings.
2. Go to the **Hatch** pane.
3. Press the **Make report** button.

The report will be copied to your clipboard. It contains your LiveSync configuration and the remote server configuration, with credentials automatically redacted.

**Tip:** For large reports, consider uploading to [GitHub Gist](https://gist.github.com/) and sharing the link instead of pasting directly into the issue. This makes it easier to manage, and if you accidentally leave sensitive data in, a Gist can be deleted.

If you paste directly, wrap it in a `<details>` tag to keep the issue readable:

````
<details>
<summary>Report from hatch</summary>

```
----remote config----
:
```
</details>
````

### Plug-in log

The plug-in log is volatile by default (not saved to disk) and shown only in the log dialogue, which can be opened by tapping the **document box icon** in the ribbon.

#### Enable verbose log

Before reproducing the issue, enable **Verbose Log** in LiveSync's **General Settings** pane. Without this, many diagnostic messages will be suppressed.

#### Persist the log to a file (optional)

If you need to capture a log across a restart, enable **"Write logs into the file"** in General Settings. Note that log files may contain sensitive information — use this option only for troubleshooting, and disable it afterwards.

As with the hatch report, consider uploading large logs to [GitHub Gist](https://gist.github.com/).

### Network log (for connection-related issues only)

If the issue is related to network connectivity (e.g., cannot connect to the server, authentication errors), a network log captured from browser DevTools can be very helpful. You do not need to include this for non-connection issues.

#### Opening DevTools

| Platform | Shortcut |
|----------|----------|
| Windows / Linux | `Ctrl + Shift + I` |
| macOS | `Cmd + Shift + I` |
| Android | Use [Chrome remote debugging](https://developer.chrome.com/docs/devtools/remote-debugging/) |
| iOS | Use [Safari Web Inspector](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios) on a Mac |

#### What to capture

1. Open the **Network** pane in DevTools.
2. Reproduce the issue.
3. Look for requests marked in red.
4. Capture screenshots of the **Headers**, **Payload**, and **Response** tabs for those requests.

**Important — redact before sharing:**
- Headers: conceal the request URL path, Remote Address, `authority`, and `authorization` values.
- Payload / Response: the `_id` field contains your file paths — redact if needed.
@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.25.52-patched-2",
"version": "0.25.60",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

package-lock.json (generated, 19766 lines; diff suppressed because it is too large)
package.json (22 lines changed)
@@ -1,6 +1,6 @@
{
"name": "obsidian-livesync",
"version": "0.25.52-patched-2",
"version": "0.25.60",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
@@ -53,7 +53,8 @@
"test:docker-all:down": "npm run test:docker-couchdb:down ; npm run test:docker-s3:down ; npm run test:docker-p2p:down",
"test:docker-all:start": "npm run test:docker-all:up && sleep 5 && npm run test:docker-all:init",
"test:docker-all:stop": "npm run test:docker-all:down",
"test:full": "npm run test:docker-all:start && vitest run --coverage && npm run test:docker-all:stop"
"test:full": "npm run test:docker-all:start && vitest run --coverage && npm run test:docker-all:stop",
"test:p2p": "bash test/suitep2p/run-p2p-tests.sh"
},
"keywords": [],
"author": "vorotamoroz",
@@ -67,6 +68,7 @@
"@tsconfig/svelte": "^5.0.8",
"@types/deno": "^2.5.0",
"@types/diff-match-patch": "^1.0.36",
"@types/markdown-it": "^14.1.2",
"@types/node": "^24.10.13",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
@@ -78,9 +80,9 @@
"@types/transform-pouch": "^1.0.6",
"@typescript-eslint/eslint-plugin": "8.56.1",
"@typescript-eslint/parser": "8.56.1",
"@vitest/browser": "^4.0.16",
"@vitest/browser-playwright": "^4.0.16",
"@vitest/coverage-v8": "^4.0.16",
"@vitest/browser": "^4.1.1",
"@vitest/browser-playwright": "^4.1.1",
"@vitest/coverage-v8": "^4.1.1",
"builtin-modules": "5.0.0",
"dotenv": "^17.3.1",
"dotenv-cli": "^11.0.0",
@@ -118,8 +120,9 @@
"tsx": "^4.21.0",
"typescript": "5.9.3",
"vite": "^7.3.1",
"vitest": "^4.0.16",
"webdriverio": "^9.24.0",
"vite-plugin-istanbul": "^8.0.0",
"vitest": "^4.1.1",
"webdriverio": "^9.27.0",
"yaml": "^2.8.2"
},
"dependencies": {
@@ -129,16 +132,17 @@
"@smithy/middleware-apply-body-checksum": "^4.3.9",
"@smithy/protocol-http": "^5.3.9",
"@smithy/querystring-builder": "^4.2.9",
"@trystero-p2p/nostr": "^0.23.0",
"commander": "^14.0.3",
"diff-match-patch": "^1.0.5",
"fflate": "^0.8.2",
"idb": "^8.0.3",
"markdown-it": "^14.1.1",
"minimatch": "^10.2.2",
"node-datachannel": "^0.32.1",
"octagonal-wheels": "^0.1.45",
"pouchdb-adapter-leveldb": "^9.0.0",
"qrcode-generator": "^1.4.4",
"trystero": "^0.22.0",
"werift": "^0.22.9",
"xxhash-wasm-102": "npm:xxhash-wasm@^1.0.2"
}
}
@@ -13,6 +13,7 @@ import type { CheckPointInfo } from "./lib/src/replication/journal/JournalSyncTy
import type { LiveSyncJournalReplicatorEnv } from "./lib/src/replication/journal/LiveSyncJournalReplicatorEnv";
import type { LiveSyncReplicatorEnv } from "./lib/src/replication/LiveSyncAbstractReplicator";
import { useTargetFilters } from "./lib/src/serviceFeatures/targetFilter";
import { useRemoteConfigurationMigration } from "./lib/src/serviceFeatures/remoteConfig";
import type { ServiceContext } from "./lib/src/services/base/ServiceBase";
import type { InjectableServiceHub } from "./lib/src/services/InjectableServices";
import { AbstractModule } from "./modules/AbstractModule";
@@ -272,6 +273,8 @@ export class LiveSyncBaseCore<
useTargetFilters(this);
// enable target filter feature.
usePrepareDatabaseForUse(this);
// Migration to multiple remote configurations
useRemoteConfigurationMigration(this);
}
}
src/apps/cli/.gitignore (vendored, 10 lines changed)
@@ -1,4 +1,6 @@
.livesync
test/*
!test/*.sh
node_modules
.livesync
test/*
!test/*.sh
test/test-init.local.sh
node_modules
.*.json

src/apps/cli/Dockerfile (new file, 111 lines)
@@ -0,0 +1,111 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync CLI — Docker image
#
# Build (from the repository root):
#   docker build -f src/apps/cli/Dockerfile -t livesync-cli .
#
# Run:
#   docker run --rm -v /path/to/your/vault:/data livesync-cli sync
#   docker run --rm -v /path/to/your/vault:/data livesync-cli ls
#   docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
#   docker run --rm -v /path/to/your/vault:/data livesync-cli --help
#
# The first positional argument (database-path) is automatically set to /data.
# Mount your vault at /data, or override with: -e LIVESYNC_DB_PATH=/other/path
#
# P2P (WebRTC) networking — important notes
# -----------------------------------------
# The P2P replicator (p2p-host / p2p-sync / p2p-peers) uses WebRTC, which
# generates ICE candidates of three kinds:
#
#   host  — the container's bridge IP (172.17.x.x). Unreachable from outside
#           the Docker bridge, so LAN peers cannot connect via this candidate.
#   srflx — the host's public IP, obtained via STUN reflection. Works fine
#           over the internet even with the default bridge network.
#   relay — traffic relayed through a TURN server. Always reachable regardless
#           of network mode.
#
# Recommended network modes per use-case:
#
#   LAN P2P (Linux only)
#     docker run --network host ...
#     This exposes the real host IP as the 'host' candidate so LAN peers can
#     connect directly. --network host is not available on Docker Desktop for
#     macOS or Windows.
#
#   LAN P2P (macOS / Windows Docker Desktop)
#     Configure a TURN server in settings (P2P_turnServers / P2P_turnUsername /
#     P2P_turnCredential). All data is then relayed through the TURN server,
#     bypassing the bridge-network limitation.
#
#   Internet P2P
#     Default bridge network is sufficient; the srflx candidate carries the
#     host's public IP and peers can connect normally.
#
#   CouchDB sync only (no P2P)
#     Default bridge network. No special configuration required.

# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to compile native modules and bundle the CLI.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder

# Build tools required by native Node.js addons (mainly leveldown)
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 make g++ \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /build

# Install workspace dependencies first (layer-cache friendly)
COPY package.json ./
RUN npm install

# Copy the full source tree and build the CLI bundle
COPY . .
RUN cd src/apps/cli && npm run build

# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime-deps
# Install only the external (unbundled) packages that the CLI requires at
# runtime. Native addons are compiled here against the same base image that
# the final runtime stage uses.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS runtime-deps

# Build tools required to compile native addons
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 make g++ \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /deps

# runtime-package.json lists only the packages that Vite leaves external
COPY src/apps/cli/runtime-package.json ./package.json
RUN npm install --omit=dev

# ─────────────────────────────────────────────────────────────────────────────
# Stage 3 — runtime
# Minimal image: pre-compiled native modules + CLI bundle only.
# No build tools are included, keeping the image small.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim

WORKDIR /app

# Copy pre-compiled external node_modules from runtime-deps stage
COPY --from=runtime-deps /deps/node_modules ./node_modules

# Copy the built CLI bundle from builder stage
COPY --from=builder /build/src/apps/cli/dist ./dist

# Install entrypoint wrapper
COPY src/apps/cli/docker-entrypoint.sh /usr/local/bin/livesync-cli
RUN chmod +x /usr/local/bin/livesync-cli

# Mount your vault / local database directory here
VOLUME ["/data"]

ENTRYPOINT ["livesync-cli"]
@@ -1,362 +1,505 @@
# Self-hosted LiveSync CLI
Command-line version of the Self-hosted LiveSync plugin for syncing vaults without Obsidian.

## Features

- ✅ Sync Obsidian vaults using CouchDB without running Obsidian
- ✅ Compatible with Self-hosted LiveSync plugin settings
- ✅ Supports all core sync features (encryption, conflict resolution, etc.)
- ✅ Lightweight and headless operation
- ✅ Cross-platform (Windows, macOS, Linux)

## Architecture

This CLI version is built using the same core as the Obsidian plugin:

```
CLI Main
 └─ LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands>
     ├─ NodeServiceHub (All services without Obsidian dependencies)
     └─ ServiceModules (wired by initialiseServiceModulesCLI)
         ├─ FileAccessCLI (Node.js FileSystemAdapter)
         ├─ StorageEventManagerCLI
         ├─ ServiceFileAccessCLI
         ├─ ServiceDatabaseFileAccessCLI
         ├─ ServiceFileHandler
         └─ ServiceRebuilder
```

### Key Components

1. **Node.js FileSystem Adapter** (`adapters/`)
    - Platform-agnostic file operations using Node.js `fs/promises`
    - Implements the same interface as Obsidian's file system

2. **Service Modules** (`serviceModules/`)
    - Initialised by `initialiseServiceModulesCLI`
    - All core sync functionality preserved

3. **Service Hub and Settings Services** (`services/`)
    - `NodeServiceHub` provides the CLI service context
    - Node-specific settings and key-value services are provided without Obsidian dependencies

4. **Main Entry Point** (`main.ts`)
    - Command-line interface
    - Settings management (JSON file)
    - Graceful shutdown handling

## Installation

```bash
# Install dependencies (run from the repository root, not src/apps/cli,
# because dependencies are shared with the webapp and the main library)
npm install
# Build the project (run from the `src/apps/cli` directory)
npm run build
```

## Usage

### Basic Usage

The CLI is designed for headless environments, so all operations are performed against a local vault directory and a settings file. Here are some example commands:

```bash
# Sync local database with CouchDB (no files will be changed).
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json sync

# Push files to local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md

# Pull files from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md

# Verbose logging
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose

# Apply setup URI to settings file (settings only; does not run synchronisation)
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."

# Put text from stdin into local database
echo "Hello from stdin" | npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md

# Output a file from local database to stdout
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md

# Output a specific revision of a file from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef

# Pull a specific revision of a file from local database to local storage
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef

# List files in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/

# Show metadata for a file in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md

# Mark a file as deleted in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md

# Resolve conflict by keeping a specific revision
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```

### Configuration

The CLI uses the same settings format as the Obsidian plugin. Create a `.livesync/settings.json` file in your vault directory:

```json
{
    "couchDB_URI": "http://localhost:5984",
    "couchDB_USER": "admin",
    "couchDB_PASSWORD": "password",
    "couchDB_DBNAME": "obsidian-livesync",
    "liveSync": true,
    "syncOnSave": true,
    "syncOnStart": true,
    "encrypt": true,
    "passphrase": "your-encryption-passphrase",
    "usePluginSync": false,
    "isConfigured": true
}
```

**Minimum required settings:**

- `couchDB_URI`: CouchDB server URL
- `couchDB_USER`: CouchDB username
- `couchDB_PASSWORD`: CouchDB password
- `couchDB_DBNAME`: Database name
- `isConfigured`: Set to `true` after configuration
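A settings file can be checked against the minimum required keys before starting the CLI. This is a hypothetical sketch (the `MinimumSettings` type and `findMissingSettings` helper are illustrations, not part of the CLI):

```typescript
// Illustrative check for the minimum settings listed above.
interface MinimumSettings {
    couchDB_URI: string;
    couchDB_USER: string;
    couchDB_PASSWORD: string;
    couchDB_DBNAME: string;
    isConfigured: boolean;
}

function findMissingSettings(raw: Partial<MinimumSettings>): string[] {
    const missing: string[] = [];
    const required: (keyof MinimumSettings)[] = [
        "couchDB_URI", "couchDB_USER", "couchDB_PASSWORD", "couchDB_DBNAME",
    ];
    for (const key of required) {
        if (!raw[key]) missing.push(key);
    }
    // `isConfigured` must be explicitly true, not merely present.
    if (raw.isConfigured !== true) missing.push("isConfigured");
    return missing;
}

// Prints the keys still missing from a partial settings object.
console.log(findMissingSettings({ couchDB_URI: "http://localhost:5984" }));
```

Running such a check in a wrapper script gives a clearer error than a failed connection attempt.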
### Command-line Reference

```
Usage:
  livesync-cli [database-path] [options] [command] [command-args]

Arguments:
  database-path    Path to the local database directory (required except for init-settings)

Options:
  --settings, -s <path>    Path to settings file (default: .livesync/settings.json in local database directory)
  --force, -f              Overwrite existing file on init-settings
  --verbose, -v            Enable verbose logging
  --help, -h               Show this help message

Commands:
  init-settings [path]              Create settings JSON from DEFAULT_SETTINGS
  sync                              Run one replication cycle and exit
  p2p-peers <timeout>               Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
  p2p-sync <peer> <timeout>         Synchronise with specified peer-id or peer-name
  p2p-host                          Start P2P host mode and wait until interrupted (Ctrl+C)
  push <src> <dst>                  Push local file <src> into local database path <dst>
  pull <src> <dst>                  Pull file <src> from local database into local file <dst>
  pull-rev <src> <dst> <revision>   Pull specific revision into local file <dst>
  setup <setupURI>                  Apply setup URI to settings file
  put <vaultPath>                   Read text from standard input and write to local database
  cat <vaultPath>                   Write latest file content from local database to standard output
  cat-rev <vaultPath> <revision>    Write specific revision content from local database to standard output
  ls [prefix]                       List files as path<TAB>size<TAB>mtime<TAB>revision[*]
  info <vaultPath>                  Show file metadata including current and past revisions, conflicts, and chunk list
  rm <vaultPath>                    Mark file as deleted in local database
  resolve <vaultPath> <revision>    Resolve conflict by keeping the specified revision
  mirror <storagePath> <vaultPath>  Mirror local file into local database.
```

Run via npm script:

```bash
npm run --silent cli -- [database-path] [options] [command] [command-args]
```

#### Detailed Command Descriptions

##### ls

`ls` lists files in the local database with optional prefix filtering. The output format is:

```text
vault/path/file.md<TAB>size<TAB>mtime<TAB>revision[*]
```

Note: a trailing `*` indicates that the file has conflicts.
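Since `ls` emits one tab-separated record per line, it is easy to consume from scripts. A sketch of a parser for the format above (the `LsEntry` type is an illustration, not CLI code):

```typescript
// Parse one line of `ls` output: path<TAB>size<TAB>mtime<TAB>revision[*]
interface LsEntry {
    path: string;
    size: number;
    mtime: string;
    revision: string;
    conflicted: boolean;
}

function parseLsLine(line: string): LsEntry {
    const [path, size, mtime, rev] = line.split("\t");
    // A trailing "*" on the revision marks a conflicted file.
    const conflicted = rev.endsWith("*");
    return {
        path,
        size: Number(size),
        mtime,
        revision: conflicted ? rev.slice(0, -1) : rev,
        conflicted,
    };
}

const entry = parseLsLine("notes/note.md\t1234\t2024-01-01T00:00:00Z\t3-abcdef*");
console.log(entry.conflicted); // true
```

A script can filter on `conflicted` to find files that still need `resolve`.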
##### p2p-peers

`p2p-peers <timeout>` waits for the specified number of seconds, then prints each discovered peer on a separate line:

```text
[peer]<TAB><peer-id><TAB><peer-name>
```

Use this command to select a target for `p2p-sync`.

##### p2p-sync

`p2p-sync <peer> <timeout>` discovers peers up to the specified timeout and synchronises with the selected peer.

- `<peer>` accepts either `peer-id` or `peer-name` from `p2p-peers` output.
- On success, the command prints a completion message to standard error and exits with status code `0`.
- On failure, the command prints an error message and exits non-zero.

##### p2p-host

`p2p-host` starts the local P2P host and keeps running until interrupted.

- Other peers can discover and synchronise with this host while it is running.
- Stop the host with `Ctrl+C`.
- In CLI mode, behaviour is non-interactive and acceptance follows settings.

##### info

`info` output fields:

- `id`: Document ID
- `revision`: Current revision
- `conflicts`: Conflicted revisions, or `N/A`
- `filename`: Basename of path
- `path`: Vault-relative path
- `size`: Size in bytes
- `revisions`: Available non-current revisions
- `chunks`: Number of chunk IDs
- `children`: Chunk ID list
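For scripting against `info`, the fields above map naturally onto a small record type. A sketch (both the type and the consistency check are illustrative, not the CLI's internal types):

```typescript
// Record type mirroring the `info` output fields listed above.
interface FileInfo {
    id: string;
    revision: string;
    conflicts: string[]; // empty when the CLI prints "N/A"
    filename: string;
    path: string;
    size: number;
    revisions: string[]; // available non-current revisions
    chunks: number;      // number of chunk IDs
    children: string[];  // chunk ID list
}

// `chunks` should agree with the chunk ID list, and `filename`
// should be the basename of `path`.
function isConsistent(info: FileInfo): boolean {
    return info.chunks === info.children.length &&
        info.filename === info.path.split("/").pop();
}

const sample: FileInfo = {
    id: "notes/note.md", revision: "3-abcdef", conflicts: [],
    filename: "note.md", path: "notes/note.md", size: 1234,
    revisions: ["2-bcdefa"], chunks: 2, children: ["h:abc", "h:def"],
};
console.log(isConsistent(sample)); // true
```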
##### mirror

`mirror` is a command that synchronises your storage with your local vault. It is essentially the same process that runs on startup in Obsidian.

In other words, it performs the following actions:

1. **Precondition checks** — Aborts early if any of the following conditions are not met:
    - Settings must be configured (`isConfigured: true`).
    - File watching must not be suspended (`suspendFileWatching: false`).
    - Remediation mode must be inactive (`maxMTimeForReflectEvents: 0`).

2. **State restoration** — On subsequent runs (after the first successful scan), restores the previous storage state before proceeding.

3. **Expired deletion cleanup** — If `automaticallyDeleteMetadataOfDeletedFiles` is set to a positive number of days, any document that is marked deleted and whose `mtime` is older than the retention period is permanently removed from the local database.

4. **File collection** — Enumerates files from two sources:
    - **Storage**: all files under the vault path that pass `isTargetFile`.
    - **Local database**: all normal documents (fetched with conflict information) whose paths are valid and pass `isTargetFile`.
    - Both collections build case-insensitive ↔ case-sensitive path maps, controlled by `handleFilenameCaseSensitive`.

5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):

    | Group | Condition | Action |
    |---|---|---|
    | **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
    | **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
    | **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |

6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.

Note: `mirror` does not respect file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run. To delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour (handling deletions automatically in `mirror` would mean dealing with a ton of edge cases).
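The three-way categorisation in step 5 can be sketched as a pure function over modification times. This is an illustrative model only; it omits size limits and deleted-entry handling, and `categorise` is not the CLI's actual code:

```typescript
// Three-way categorisation of one file path, following the table above.
// `undefined` means the file is absent on that side.
type Action =
    | "UPDATE DATABASE"   // storage only: store into the local database
    | "UPDATE STORAGE"    // database only: restore to storage
    | "STORAGE -> DB"     // both exist, storage is newer
    | "STORAGE <- DB"     // both exist, database is newer
    | "SKIP";             // equal mtime or conflicted

function categorise(
    storageMtime: number | undefined,
    dbMtime: number | undefined,
    conflicted: boolean
): Action {
    // Conflicted documents are always skipped, as in the table above.
    if (conflicted) return "SKIP";
    if (storageMtime !== undefined && dbMtime === undefined) return "UPDATE DATABASE";
    if (storageMtime === undefined && dbMtime !== undefined) return "UPDATE STORAGE";
    if (storageMtime === undefined || dbMtime === undefined) return "SKIP";
    if (storageMtime > dbMtime) return "STORAGE -> DB";
    if (storageMtime < dbMtime) return "STORAGE <- DB";
    return "SKIP"; // equal freshness: nothing to do
}

console.log(categorise(200, 100, false)); // STORAGE -> DB
```

Modelling the decision as a pure function of `(storageMtime, dbMtime, conflicted)` is what makes the concurrent per-file processing in step 5 safe to parallelise.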
### Planned options

- `--immediate`: Perform sync after the command (e.g. `push`, `pull`, `put`, `rm`).
- `serve`: Start the CLI in server mode, exposing REST APIs for remote and batch operations.
- `cause-conflicted <vaultPath>`: Mark a file as conflicted without changing its content, to trigger conflict resolution in Obsidian.

## Use Cases

### 1. Bootstrap a new headless vault

Create default settings, apply a setup URI, then run one sync cycle.

```bash
npm run --silent cli -- init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
```

### 2. Scripted import and export

Push local files into the database from automation, and pull them back for export or backup.

```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```

### 3. Revision inspection and restore

List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).

```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```

### 4. Conflict and cleanup workflow

Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.

```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```

### 5. CI smoke test for content round-trip

Validate that `put`/`cat` round-trips content as expected in a pipeline.

```bash
echo "hello-ci" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
```

## Development

### Project Structure

```
src/apps/cli/
├── commands/           # Command dispatcher and command utilities
│   ├── runCommand.ts
│   ├── runCommand.unit.spec.ts
│   ├── types.ts
│   ├── utils.ts
│   └── utils.unit.spec.ts
├── adapters/           # Node.js FileSystem Adapter
│   ├── NodeConversionAdapter.ts
│   ├── NodeFileSystemAdapter.ts
│   ├── NodePathAdapter.ts
│   ├── NodeStorageAdapter.ts
│   ├── NodeStorageAdapter.unit.spec.ts
│   ├── NodeTypeGuardAdapter.ts
│   ├── NodeTypes.ts
│   └── NodeVaultAdapter.ts
├── lib/
│   └── pouchdb-node.ts
├── managers/           # CLI-specific managers
│   ├── CLIStorageEventManagerAdapter.ts
│   └── StorageEventManagerCLI.ts
├── serviceModules/     # Service modules (ported from main.ts)
│   ├── CLIServiceModules.ts
│   ├── DatabaseFileAccess.ts
│   ├── FileAccessCLI.ts
│   └── ServiceFileAccessImpl.ts
├── services/
│   ├── NodeKeyValueDBService.ts
│   ├── NodeServiceHub.ts
│   └── NodeSettingService.ts
├── test/
│   ├── test-e2e-two-vaults-common.sh
│   ├── test-e2e-two-vaults-matrix.sh
│   ├── test-e2e-two-vaults-with-docker-linux.sh
|
||||
│ ├── test-push-pull-linux.sh
|
||||
│ ├── test-setup-put-cat-linux.sh
|
||||
│ └── test-sync-two-local-databases-linux.sh
|
||||
├── .gitignore
|
||||
├── entrypoint.ts # CLI executable entry point (shebang)
|
||||
├── main.ts # CLI entry point
|
||||
├── main.unit.spec.ts
|
||||
├── package.json
|
||||
├── README.md # This file
|
||||
├── tsconfig.json
|
||||
├── util/ # Test and local utility scripts
|
||||
└── vite.config.ts
|
||||
```
|
||||
# Self-hosted LiveSync CLI

A command-line version of the Self-hosted LiveSync plugin for syncing vaults without Obsidian.

## Features

- ✅ Sync Obsidian vaults using CouchDB without running Obsidian
- ✅ Compatible with Self-hosted LiveSync plugin settings
- ✅ Supports all core sync features (encryption, conflict resolution, etc.)
- ✅ Lightweight and headless operation
- ✅ Cross-platform (Windows, macOS, Linux)
## Architecture

This CLI version is built using the same core as the Obsidian plugin:

```
CLI Main
└─ LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands>
   ├─ NodeServiceHub (all services without Obsidian dependencies)
   └─ ServiceModules (wired by initialiseServiceModulesCLI)
      ├─ FileAccessCLI (Node.js FileSystemAdapter)
      ├─ StorageEventManagerCLI
      ├─ ServiceFileAccessCLI
      ├─ ServiceDatabaseFileAccessCLI
      ├─ ServiceFileHandler
      └─ ServiceRebuilder
```
### Key Components

1. **Node.js FileSystem Adapter** (`adapters/`)
    - Platform-agnostic file operations using Node.js `fs/promises`
    - Implements the same interface as Obsidian's file system

2. **Service Modules** (`serviceModules/`)
    - Initialised by `initialiseServiceModulesCLI`
    - All core sync functionality preserved

3. **Service Hub and Settings Services** (`services/`)
    - `NodeServiceHub` provides the CLI service context
    - Node-specific settings and key-value services are provided without Obsidian dependencies

4. **Main Entry Point** (`main.ts`)
    - Command-line interface
    - Settings management (JSON file)
    - Graceful shutdown handling
## Usage

The CLI operates on a **database directory**, which contains the PouchDB data and settings.

> [!NOTE]
> `livesync-cli` is an alias for the CLI executable. Replace it with the actual command of your installation (e.g. `npm run --silent cli --` or `docker run ...`).

```bash
livesync-cli [database-path] [command] [args...]
```

### Arguments

- `database-path`: Path to the directory where the `.livesync` folder and `settings.json` are (or will be) located.
    - Note: In previous versions, this was referred to as the "vault" path. It is now clearly distinguished from the actual vault (the directory containing your `.md` files).
### Commands

- `sync`: Run one replication cycle with the remote CouchDB.
- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
    - If `vault-path` is provided, the CLI synchronises the database with the files in that directory.
    - If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
    - Use this command to keep your local `.md` files in sync with the database.
- `ls [prefix]`: List files currently stored in the local database.
- `push <src> <dst>`: Push a local file `<src>` into the database at path `<dst>`.
- `pull <src> <dst>`: Pull a file `<src>` from the database into local file `<dst>`.
- `cat <src>`: Read a file from the database and write it to stdout.
- `put <dst>`: Read from stdin and write to the database path `<dst>`.
- `init-settings [file]`: Create a default settings file.
### Examples

```bash
# Basic sync with remote
livesync-cli ./my-db sync

# Mirroring to your actual Obsidian vault
livesync-cli ./my-db mirror /path/to/obsidian-vault

# Manual file operations
livesync-cli ./my-db push ./note.md folder/note.md
livesync-cli ./my-db pull folder/note.md ./note.md
```
## Installation

### Build from source

```bash
# Install dependencies (run from the repository root, not src/apps/cli,
# due to shared dependencies with the webapp and main library)
npm install
# Build the project (run from the src/apps/cli directory)
npm run build
```

Run the CLI:

```bash
# Run with npm script (from repository root)
npm run --silent cli -- [database-path] [command] [args...]
# Run the built executable directly
node src/apps/cli/dist/index.cjs [database-path] [command] [args...]
```
### Docker

A Docker image is provided for headless / server deployments. Build from the repository root:

```bash
docker build -f src/apps/cli/Dockerfile -t livesync-cli .
```

Run:

```bash
# Sync with CouchDB
docker run --rm -v /path/to/your/db:/data livesync-cli sync

# Mirror to a specific vault directory
docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault

# List files in the local database
docker run --rm -v /path/to/your/db:/data livesync-cli ls
```

The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
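For repeated use, the `docker run` invocation can be wrapped in a small shell function. This is only a sketch: the `livesync_docker` name and the `LIVESYNC_DB_DIR` variable are assumptions, and the `livesync-cli` image tag comes from the build step above.

```shell
# Hypothetical wrapper: call the Docker image as if it were a local binary.
# LIVESYNC_DB_DIR selects the database directory (defaults to ./db).
livesync_docker() {
    docker run --rm -v "${LIVESYNC_DB_DIR:-$PWD/db}:/data" livesync-cli "$@"
}
```

Usage: `LIVESYNC_DB_DIR=/srv/livesync livesync_docker sync`.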
#### P2P (WebRTC) and Docker networking

The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates three kinds of ICE candidates. The default Docker bridge network affects which candidates are usable:

| Candidate type | Description                        | Bridge network             |
| -------------- | ---------------------------------- | -------------------------- |
| `host`         | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
| `srflx`        | Host public IP via STUN reflection | Works over the internet    |
| `relay`        | Traffic relayed via TURN server    | Always reachable           |

**LAN P2P on Linux** — use `--network host` so that the real host IP is advertised as the `host` candidate:

```bash
docker run --rm --network host -v /path/to/your/db:/data livesync-cli p2p-host
```

Note: also adjust the alias to include `--network host` if you want to use `livesync-cli` for P2P commands.

> `--network host` is not available on Docker Desktop for macOS or Windows.

**LAN P2P on macOS / Windows Docker Desktop** — configure a TURN server in the settings file (`P2P_turnServers`, `P2P_turnUsername`, `P2P_turnCredential`). All P2P traffic will then be relayed through the TURN server, bypassing the bridge-network limitation.

**Internet P2P** — the default bridge network is sufficient. The `srflx` candidate carries the host's public IP and peers can connect normally.

**CouchDB sync only (no P2P)** — no special network configuration is required.
### Adding `livesync-cli` alias

To use the `livesync-cli` command globally, add an alias to your shell configuration file (e.g., `.zshrc` or `.bashrc`).

If you are using `npm run`, add the following line:

```bash
alias livesync-cli='npm run --silent --prefix /path/to/repository/src/apps/cli cli --'
# or
alias livesync-cli="npm run --silent --prefix $PWD cli --"
```

Alternatively, if you want to use the built executable directly:

```bash
alias livesync-cli='node /path/to/repository/src/apps/cli/dist/index.cjs'
# or
alias livesync-cli="node $PWD/dist/index.cjs"
```

If you prefer using Docker:

```bash
alias livesync-cli='docker run --rm -v /path/to/your/db:/data livesync-cli'
```

After adding the alias, restart your shell or run `source ~/.zshrc` (or `.bashrc`).
## Detailed Usage

### Basic Usage

The CLI is designed for headless environments, so all operations are performed against a local database directory and a settings file. Here are some example commands:

```bash
# Sync local database with CouchDB (no files will be changed).
livesync-cli /path/to/your-local-database --settings /path/to/settings.json sync

# Push files to local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md

# Pull files from local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md

# Verbose logging
livesync-cli /path/to/your-local-database --settings /path/to/settings.json --verbose

# Apply setup URI to settings file (settings only; does not run synchronisation)
livesync-cli /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."

# Put text from stdin into local database
echo "Hello from stdin" | livesync-cli /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md

# Output a file from local database to stdout
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md

# Output a specific revision of a file from local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef

# Pull a specific revision of a file from local database to local storage
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef

# List files in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/

# Show metadata for a file in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md

# Mark a file as deleted in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md

# Resolve conflict by keeping a specific revision
livesync-cli /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```
### Configuration

The CLI uses the same settings format as the Obsidian plugin. Create a `.livesync/settings.json` file in your database directory:

```json
{
    "couchDB_URI": "http://localhost:5984",
    "couchDB_USER": "admin",
    "couchDB_PASSWORD": "password",
    "couchDB_DBNAME": "obsidian-livesync",
    "liveSync": true,
    "syncOnSave": true,
    "syncOnStart": true,
    "encrypt": true,
    "passphrase": "your-encryption-passphrase",
    "usePluginSync": false,
    "isConfigured": true
}
```

**Minimum required settings:**

- `couchDB_URI`: CouchDB server URL
- `couchDB_USER`: CouchDB username
- `couchDB_PASSWORD`: CouchDB password
- `couchDB_DBNAME`: Database name
- `isConfigured`: Set to `true` after configuration
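In automation it can help to verify a settings file before handing it to the CLI. The sketch below is shell-side validation only (not part of the CLI); it merely checks that the minimum required keys appear in the JSON file.

```shell
# Check that a settings file mentions the minimum required keys.
# This is a textual check with grep, not full JSON validation.
check_settings() {
    for key in couchDB_URI couchDB_USER couchDB_PASSWORD couchDB_DBNAME isConfigured; do
        grep -q "\"$key\"" "$1" || { echo "missing: $key" >&2; return 1; }
    done
}
```

Usage: `check_settings /path/to/settings.json && livesync-cli ... sync`.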
### Command-line Reference

```
Usage:
  livesync-cli <database-path> [options] <command> [command-args]
  livesync-cli init-settings [path]

Arguments:
  database-path              Path to the local database directory (required except for init-settings)

Options:
  --settings, -s <path>      Path to settings file (default: .livesync/settings.json in local database directory)
  --force, -f                Overwrite existing file on init-settings
  --verbose, -v              Enable verbose logging
  --debug, -d                Enable debug logging (includes verbose)
  --help, -h                 Show help message

Commands:
  init-settings [path]       Create settings JSON from DEFAULT_SETTINGS
  sync                       Run one replication cycle and exit
  p2p-peers <timeout>        Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
  p2p-sync <peer> <timeout>  Synchronise with specified peer-id or peer-name
  p2p-host                   Start P2P host mode and wait until interrupted (Ctrl+C)
  push <src> <dst>           Push local file <src> into local database path <dst>
  pull <src> <dst>           Pull file <src> from local database into local file <dst>
  pull-rev <src> <dst> <rev> Pull specific revision <rev> into local file <dst>
  setup <setupURI>           Apply setup URI to settings file
  put <dst>                  Read text from standard input and write to local database path <dst>
  cat <src>                  Write latest file content from local database to standard output
  cat-rev <src> <rev>        Write specific revision <rev> content from local database to standard output
  ls [prefix]                List files as path<TAB>size<TAB>mtime<TAB>revision[*]
  info <path>                Show file metadata including current and past revisions, conflicts, and chunk list
  rm <path>                  Mark file as deleted in local database
  resolve <path> <rev>       Resolve conflict by keeping the specified revision
  mirror [vaultPath]         Mirror database contents to the local file system (vaultPath defaults to database-path)
```

Run via npm script:

```bash
npm run --silent cli -- [database-path] [options] [command] [command-args]
```
#### Detailed Command Descriptions

##### ls

`ls` lists files in the local database with optional prefix filtering. The output format is:

```text
vault/path/file.md<TAB>size<TAB>mtime<TAB>revision[*]
```

A trailing `*` on the revision indicates that the file has conflicts.
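Because the output is TAB-separated, it is easy to post-process with `awk`. A small sketch: the sample lines below are hypothetical, but they follow the documented `path<TAB>size<TAB>mtime<TAB>revision[*]` format.

```shell
# Hypothetical captured `ls` output (two files, second one conflicted).
sample=$(printf 'notes/a.md\t120\t1700000000\t3-abc\nnotes/b.md\t88\t1700000100\t5-def*')
# Print only conflicted files: the revision field ends with "*".
printf '%s\n' "$sample" | awk -F '\t' '$4 ~ /\*$/ { print $1 }'
```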
##### p2p-peers

`p2p-peers <timeout>` waits for the specified number of seconds, then prints each discovered peer on a separate line:

```text
[peer]<TAB><peer-id><TAB><peer-name>
```

Use this command to select a target for `p2p-sync`.
##### p2p-sync

`p2p-sync <peer> <timeout>` discovers peers up to the specified timeout and synchronises with the selected peer.

- `<peer>` accepts either a `peer-id` or a `peer-name` from the `p2p-peers` output.
- On success, the command prints a completion message to standard error and exits with status code `0`.
- On failure, the command prints an error message and exits non-zero.
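Scripts that pass a peer-id to `p2p-sync` can resolve it from captured `p2p-peers` output. A sketch with hypothetical peers:

```shell
# Hypothetical `p2p-peers` output: [peer]<TAB><peer-id><TAB><peer-name>.
peers=$(printf '[peer]\tabc123\tlaptop\n[peer]\tdef456\tserver')
# Resolve the peer named "server" to its peer-id.
printf '%s\n' "$peers" | awk -F '\t' -v name="server" '$1 == "[peer]" && $3 == name { print $2 }'
```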
##### p2p-host

`p2p-host` starts the local P2P host and keeps running until interrupted.

- Other peers can discover and synchronise with this host while it is running.
- Stop the host with `Ctrl+C`.
- In CLI mode, behaviour is non-interactive and acceptance follows the settings.
##### info

`info` output fields:

- `id`: Document ID
- `revision`: Current revision
- `conflicts`: Conflicted revisions, or `N/A`
- `filename`: Basename of the path
- `path`: Vault-relative path
- `size`: Size in bytes
- `revisions`: Available non-current revisions
- `chunks`: Number of chunk IDs
- `children`: Chunk ID list
##### mirror

`mirror` synchronises the local database with your vault directory (storage). It is essentially the same process that runs on startup in Obsidian.

It performs the following actions:

1. **Precondition checks** — Aborts early if any of the following conditions are not met:
    - Settings must be configured (`isConfigured: true`).
    - File watching must not be suspended (`suspendFileWatching: false`).
    - Remediation mode must be inactive (`maxMTimeForReflectEvents: 0`).

2. **State restoration** — On subsequent runs (after the first successful scan), restores the previous storage state before proceeding.

3. **Expired deletion cleanup** — If `automaticallyDeleteMetadataOfDeletedFiles` is set to a positive number of days, any document that is marked deleted and whose `mtime` is older than the retention period is permanently removed from the local database.

4. **File collection** — Enumerates files from two sources:
    - **Storage**: all files under the vault path that pass `isTargetFile`.
    - **Local database**: all normal documents (fetched with conflict information) whose paths are valid and pass `isTargetFile`.
    - Both collections build case-insensitive ↔ case-sensitive path maps, controlled by `handleFilenameCaseSensitive`.

5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):

| Group | Condition | Action |
| --- | --- | --- |
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |

6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.

Note: `mirror` does not propagate file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run. To delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour: handling deletions automatically in `mirror` would have to contend with a ton of edge cases.
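The freshness rule in step 5 ("SYNC DATABASE AND STORAGE") can be sketched in shell, using two local files as stand-ins for the storage copy and the database copy. The `decide_direction` helper is hypothetical and only illustrates the mtime comparison, not the CLI's internals.

```shell
# Sketch of the mtime-freshness rule: newer side wins, equal means skip.
decide_direction() {
    # $1 = storage-side file, $2 = database-side file
    if [ "$1" -nt "$2" ]; then echo "STORAGE -> DB"
    elif [ "$2" -nt "$1" ]; then echo "STORAGE <- DB"
    else echo "SKIP"
    fi
}
```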
### Planned options:

- `--immediate`: Perform a sync after the command (e.g. `push`, `pull`, `put`, `rm`).
- `serve`: Start the CLI in server mode, exposing REST APIs for remote and batch operations.
- `cause-conflicted <vaultPath>`: Mark a file as conflicted without changing its content, to trigger conflict resolution in Obsidian.
## Use Cases

### 1. Bootstrap a new headless vault

Create default settings, apply a setup URI, then run one sync cycle.

```bash
livesync-cli init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | livesync-cli /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
livesync-cli /data/vault --settings /data/livesync-settings.json sync
```
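Unattended bootstraps can hit transient network failures, so a small retry wrapper around the one-shot sync is often useful. This is a sketch: `retry_sync` is a hypothetical helper, and `"$@"` stands for your full CLI invocation.

```shell
# Retry a command up to 3 times with a short pause between attempts,
# e.g. retry_sync livesync-cli /data/vault --settings ... sync
retry_sync() {
    attempts=0
    until "$@"; do
        attempts=$((attempts + 1))
        [ "$attempts" -ge 3 ] && return 1
        sleep 1
    done
}
```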
### 2. Scripted import and export

Push local files into the database from automation, and pull them back for export or backup.

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```
### 3. Revision inspection and restore

List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```
### 4. Conflict and cleanup workflow

Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```
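The conflicted revision passed to `resolve` can be extracted from captured `info` output instead of copying it by hand. A sketch with a hypothetical `info` excerpt, using the field names from the `info` section above (`conflicts` is `N/A` when there is nothing to resolve):

```shell
# Hypothetical captured `info` output for a conflicted file.
sample=$(printf 'id: f:notes/note.md\nrevision: 4-123456\nconflicts: 3-abcdef')
# Print the conflicted revision, if any.
printf '%s\n' "$sample" | awk -F ': ' '$1 == "conflicts" && $2 != "N/A" { print $2 }'
```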
### 5. CI smoke test for content round-trip

Validate that `put`/`cat` behave as expected in a pipeline.

```bash
echo "hello-ci" | livesync-cli /data/vault --settings /data/livesync-settings.json put ci/test.md
livesync-cli /data/vault --settings /data/livesync-settings.json cat ci/test.md
```
## Development

### Project Structure

```
src/apps/cli/
├── commands/          # Command dispatcher and command utilities
│   ├── runCommand.ts
│   ├── runCommand.unit.spec.ts
│   ├── types.ts
│   ├── utils.ts
│   └── utils.unit.spec.ts
├── adapters/          # Node.js FileSystem Adapter
│   ├── NodeConversionAdapter.ts
│   ├── NodeFileSystemAdapter.ts
│   ├── NodePathAdapter.ts
│   ├── NodeStorageAdapter.ts
│   ├── NodeStorageAdapter.unit.spec.ts
│   ├── NodeTypeGuardAdapter.ts
│   ├── NodeTypes.ts
│   └── NodeVaultAdapter.ts
├── lib/
│   └── pouchdb-node.ts
├── managers/          # CLI-specific managers
│   ├── CLIStorageEventManagerAdapter.ts
│   └── StorageEventManagerCLI.ts
├── serviceModules/    # Service modules (ported from main.ts)
│   ├── CLIServiceModules.ts
│   ├── DatabaseFileAccess.ts
│   ├── FileAccessCLI.ts
│   └── ServiceFileAccessImpl.ts
├── services/
│   ├── NodeKeyValueDBService.ts
│   ├── NodeServiceHub.ts
│   └── NodeSettingService.ts
├── test/
│   ├── test-e2e-two-vaults-common.sh
│   ├── test-e2e-two-vaults-matrix.sh
│   ├── test-e2e-two-vaults-with-docker-linux.sh
│   ├── test-push-pull-linux.sh
│   ├── test-setup-put-cat-linux.sh
│   └── test-sync-two-local-databases-linux.sh
├── .gitignore
├── entrypoint.ts      # CLI executable entry point (shebang)
├── main.ts            # CLI entry point
├── main.unit.spec.ts
├── package.json
├── README.md          # This file
├── tsconfig.json
├── util/              # Test and local utility scripts
└── vite.config.ts
```
```diff
@@ -1,8 +1,8 @@
 import type { LiveSyncBaseCore } from "../../../LiveSyncBaseCore";
-import { P2P_DEFAULT_SETTINGS, SETTING_KEY_P2P_DEVICE_NAME, type EntryDoc } from "@lib/common/types";
+import { P2P_DEFAULT_SETTINGS } from "@lib/common/types";
 import type { ServiceContext } from "@lib/services/base/ServiceBase";
-import { TrysteroReplicator } from "@lib/replication/trystero/TrysteroReplicator";
-
+import { LiveSyncTrysteroReplicator } from "@lib/replication/trystero/LiveSyncTrysteroReplicator";
+import { addP2PEventHandlers } from "@lib/replication/trystero/addP2PEventHandlers";
 type CLIP2PPeer = {
     peerId: string;
     name: string;
@@ -32,42 +32,14 @@ function validateP2PSettings(core: LiveSyncBaseCore<ServiceContext, any>) {
     settings.P2P_IsHeadless = true;
 }

-async function createReplicator(core: LiveSyncBaseCore<ServiceContext, any>): Promise<TrysteroReplicator> {
+function createReplicator(core: LiveSyncBaseCore<ServiceContext, any>): LiveSyncTrysteroReplicator {
     validateP2PSettings(core);
-    const getSettings = () => core.services.setting.currentSettings();
-    const getDB = () => core.services.database.localDatabase.localDatabase;
-    const getSimpleStore = () => core.services.keyValueDB.openSimpleStore("p2p-sync");
-    const getDeviceName = () =>
-        core.services.config.getSmallConfig(SETTING_KEY_P2P_DEVICE_NAME) || core.services.vault.getVaultName();
-
-    const env = {
-        get settings() {
-            return getSettings();
-        },
-        get db() {
-            return getDB();
-        },
-        get simpleStore() {
-            return getSimpleStore();
-        },
-        get deviceName() {
-            return getDeviceName();
-        },
-        get platform() {
-            return core.services.API.getPlatform();
-        },
-        get confirm() {
-            return core.services.API.confirm;
-        },
-        processReplicatedDocs: async (docs: EntryDoc[]) => {
-            await core.services.replication.parseSynchroniseResult(docs as any);
-        },
-    };
-
-    return new TrysteroReplicator(env as any);
+    const replicator = new LiveSyncTrysteroReplicator({ services: core.services });
+    addP2PEventHandlers(replicator);
+    return replicator;
 }

-function getSortedPeers(replicator: TrysteroReplicator): CLIP2PPeer[] {
+function getSortedPeers(replicator: LiveSyncTrysteroReplicator): CLIP2PPeer[] {
     return [...replicator.knownAdvertisements]
         .map((peer) => ({ peerId: peer.peerId, name: peer.name }))
         .sort((a, b) => a.peerId.localeCompare(b.peerId));
@@ -77,7 +49,7 @@ export async function collectPeers(
     core: LiveSyncBaseCore<ServiceContext, any>,
     timeoutSec: number
 ): Promise<CLIP2PPeer[]> {
-    const replicator = await createReplicator(core);
+    const replicator = createReplicator(core);
     await replicator.open();
     try {
         await delay(timeoutSec * 1000);
@@ -107,7 +79,7 @@ export async function syncWithPeer(
     peerToken: string,
     timeoutSec: number
 ): Promise<CLIP2PPeer> {
-    const replicator = await createReplicator(core);
+    const replicator = createReplicator(core);
     await replicator.open();
     try {
         const timeoutMs = timeoutSec * 1000;
@@ -142,8 +114,8 @@ export async function syncWithPeer(
     }
 }

-export async function openP2PHost(core: LiveSyncBaseCore<ServiceContext, any>): Promise<TrysteroReplicator> {
-    const replicator = await createReplicator(core);
+export async function openP2PHost(core: LiveSyncBaseCore<ServiceContext, any>): Promise<LiveSyncTrysteroReplicator> {
+    const replicator = createReplicator(core);
     await replicator.open();
     return replicator;
 }

@@ -5,13 +5,13 @@ import { configURIBase } from "@lib/common/models/shared.const";
 import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types";
 import { stripAllPrefixes } from "@lib/string_and_binary/path";
 import type { CLICommandContext, CLIOptions } from "./types";
-import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils";
+import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils";
 import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p";
 import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
 import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";

 export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
-    const { vaultPath, core, settingsPath } = context;
+    const { databasePath, core, settingsPath } = context;

     await core.services.control.activated;
     if (options.command === "daemon") {
@@ -21,6 +21,18 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
     if (options.command === "sync") {
         console.log("[Command] sync");
         const result = await core.services.replication.replicate(true);
+        if (!result) {
+            // TODO: Standardise the logic for identifying the cause of replication
+            // failure so that every reason (locked DB, version mismatch, network
+            // error, etc.) is surfaced with a CLI-specific actionable message.
+            const replicator = core.services.replicator.getActiveReplicator();
+            if (replicator?.remoteLockedAndDeviceNotAccepted) {
+                console.error(
+                    `[Error] The remote database is locked and this device is not yet accepted.\n` +
                        `[Error] Please unlock the database from the Obsidian plugin and retry.`
+                );
+            }
+        }
         return !!result;
     }

@@ -65,16 +77,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         throw new Error("push requires two arguments: <src> <dst>");
     }
     const sourcePath = path.resolve(options.commandArgs[0]);
-    const destinationVaultPath = toVaultRelativePath(options.commandArgs[1], vaultPath);
+    const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath);
     const sourceData = await fs.readFile(sourcePath);
     const sourceStat = await fs.stat(sourcePath);
-    console.log(`[Command] push ${sourcePath} -> ${destinationVaultPath}`);
+    console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);

-    await core.serviceModules.storageAccess.writeFileAuto(destinationVaultPath, toArrayBuffer(sourceData), {
+    await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
         mtime: sourceStat.mtimeMs,
         ctime: sourceStat.ctimeMs,
     });
-    const destinationPathWithPrefix = destinationVaultPath as FilePathWithPrefix;
+    const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
     const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);
     return stored;
 }
@@ -83,16 +95,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
     if (options.commandArgs.length < 2) {
         throw new Error("pull requires two arguments: <src> <dst>");
     }
-    const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+    const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
     const destinationPath = path.resolve(options.commandArgs[1]);
-    console.log(`[Command] pull ${sourceVaultPath} -> ${destinationPath}`);
+    console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`);

-    const sourcePathWithPrefix = sourceVaultPath as FilePathWithPrefix;
+    const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix;
     const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true);
     if (!restored) {
         return false;
     }
-    const data = await core.serviceModules.storageAccess.readFileAuto(sourceVaultPath);
+    const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath);
     await fs.mkdir(path.dirname(destinationPath), { recursive: true });
     if (typeof data === "string") {
         await fs.writeFile(destinationPath, data, "utf-8");
```
|
||||
@@ -106,16 +118,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.commandArgs.length < 3) {
|
||||
throw new Error("pull-rev requires three arguments: <src> <dst> <rev>");
|
||||
}
|
||||
const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
|
||||
const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
|
||||
const destinationPath = path.resolve(options.commandArgs[1]);
|
||||
const rev = options.commandArgs[2].trim();
|
||||
if (!rev) {
|
||||
throw new Error("pull-rev requires a non-empty revision");
|
||||
}
|
||||
console.log(`[Command] pull-rev ${sourceVaultPath}@${rev} -> ${destinationPath}`);
|
||||
console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`);
|
||||
|
||||
const source = await core.serviceModules.databaseFileAccess.fetch(
|
||||
sourceVaultPath as FilePathWithPrefix,
|
||||
sourceDatabasePath as FilePathWithPrefix,
|
||||
rev,
|
||||
true
|
||||
);
|
||||
@@ -154,7 +166,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
} as ObsidianLiveSyncSettings;
|
||||
|
||||
console.log(`[Command] setup -> ${settingsPath}`);
|
||||
await core.services.setting.applyPartial(nextSettings, true);
|
||||
await core.services.setting.applyExternalSettings(nextSettings, true);
|
||||
await core.services.control.applySettings();
|
||||
return true;
|
||||
}
|
||||
@@ -163,11 +175,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.commandArgs.length < 1) {
|
||||
throw new Error("put requires one argument: <dst>");
|
||||
}
|
||||
const destinationVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
|
||||
const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
|
||||
const content = await readStdinAsUtf8();
|
||||
console.log(`[Command] put stdin -> ${destinationVaultPath}`);
|
||||
console.log(`[Command] put stdin -> ${destinationDatabasePath}`);
|
||||
return await core.serviceModules.databaseFileAccess.storeContent(
|
||||
destinationVaultPath as FilePathWithPrefix,
|
||||
destinationDatabasePath as FilePathWithPrefix,
|
||||
content
|
||||
);
|
||||
}
|
||||
@@ -176,10 +188,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.commandArgs.length < 1) {
|
||||
throw new Error("cat requires one argument: <src>");
|
||||
}
|
||||
const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
|
||||
console.error(`[Command] cat ${sourceVaultPath}`);
|
||||
const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
|
||||
console.error(`[Command] cat ${sourceDatabasePath}`);
|
||||
const source = await core.serviceModules.databaseFileAccess.fetch(
|
||||
sourceVaultPath as FilePathWithPrefix,
|
||||
sourceDatabasePath as FilePathWithPrefix,
|
||||
undefined,
|
||||
true
|
||||
);
|
||||
@@ -200,14 +212,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.commandArgs.length < 2) {
|
||||
throw new Error("cat-rev requires two arguments: <src> <rev>");
|
||||
}
|
||||
const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
|
||||
const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
|
||||
const rev = options.commandArgs[1].trim();
|
||||
if (!rev) {
|
||||
throw new Error("cat-rev requires a non-empty revision");
|
||||
}
|
||||
console.error(`[Command] cat-rev ${sourceVaultPath} @ ${rev}`);
|
||||
console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`);
|
||||
const source = await core.serviceModules.databaseFileAccess.fetch(
|
||||
sourceVaultPath as FilePathWithPrefix,
|
||||
sourceDatabasePath as FilePathWithPrefix,
|
||||
rev,
|
||||
true
|
||||
);
|
||||
@@ -227,7 +239,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.command === "ls") {
|
||||
const prefix =
|
||||
options.commandArgs.length > 0 && options.commandArgs[0].trim() !== ""
|
||||
? toVaultRelativePath(options.commandArgs[0], vaultPath)
|
||||
? toDatabaseRelativePath(options.commandArgs[0], databasePath)
|
||||
: "";
|
||||
const rows: { path: string; line: string }[] = [];
|
||||
|
||||
@@ -249,6 +261,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
rows.sort((a, b) => a.path.localeCompare(b.path));
|
||||
if (rows.length > 0) {
|
||||
process.stdout.write(rows.map((e) => e.line).join("\n") + "\n");
|
||||
} else {
|
||||
process.stderr.write("[Info] No documents found in the local database.\n");
|
||||
}
|
||||
return true;
|
||||
}
|
||||
@@ -257,7 +271,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.commandArgs.length < 1) {
|
||||
throw new Error("info requires one argument: <path>");
|
||||
}
|
||||
const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
|
||||
const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
|
||||
|
||||
for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) {
|
||||
if (doc._deleted || doc.deleted) continue;
|
||||
@@ -301,7 +315,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.commandArgs.length < 1) {
|
||||
throw new Error("rm requires one argument: <path>");
|
||||
}
|
||||
const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
|
||||
const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
|
||||
console.error(`[Command] rm ${targetPath}`);
|
||||
return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix);
|
||||
}
|
||||
@@ -310,7 +324,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
|
||||
if (options.commandArgs.length < 2) {
|
||||
throw new Error("resolve requires two arguments: <path> <revision-to-keep>");
|
||||
}
|
||||
const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath) as FilePathWithPrefix;
|
||||
const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix;
|
||||
const revisionToKeep = options.commandArgs[1].trim();
|
||||
if (revisionToKeep === "") {
|
||||
throw new Error("resolve requires a non-empty revision-to-keep");
|
||||
|
||||
@@ -14,6 +14,7 @@ function createCoreMock() {
             applySettings: vi.fn(async () => {}),
         },
         setting: {
+            applyExternalSettings: vi.fn(async () => {}),
             applyPartial: vi.fn(async () => {}),
         },
     },
@@ -57,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise<string> {

 describe("runCommand abnormal cases", () => {
     const context = {
-        vaultPath: "/tmp/vault",
+        databasePath: "/tmp/vault",
         settingsPath: "/tmp/vault/.livesync/settings.json",
     } as any;

@@ -176,9 +177,9 @@ describe("runCommand abnormal cases", () => {
         });

         expect(result).toBe(true);
-        expect(core.services.setting.applyPartial).toHaveBeenCalledTimes(1);
+        expect(core.services.setting.applyExternalSettings).toHaveBeenCalledTimes(1);
         expect(core.services.control.applySettings).toHaveBeenCalledTimes(1);
-        const [appliedSettings, saveImmediately] = core.services.setting.applyPartial.mock.calls[0];
+        const [appliedSettings, saveImmediately] = core.services.setting.applyExternalSettings.mock.calls[0];
         expect(saveImmediately).toBe(true);
         expect(appliedSettings.couchDB_URI).toBe("http://127.0.0.1:5984");
         expect(appliedSettings.couchDB_DBNAME).toBe("livesync-test-db");
@@ -198,7 +199,7 @@ describe("runCommand abnormal cases", () => {
             })
         ).rejects.toThrow();

-        expect(core.services.setting.applyPartial).not.toHaveBeenCalled();
+        expect(core.services.setting.applyExternalSettings).not.toHaveBeenCalled();
         expect(core.services.control.applySettings).not.toHaveBeenCalled();
     });
 });
@@ -32,7 +32,7 @@ export interface CLIOptions {
 }

 export interface CLICommandContext {
-    vaultPath: string;
+    databasePath: string;
     core: LiveSyncBaseCore<ServiceContext, any>;
     settingsPath: string;
 }
@@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer {
     return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
 }

-export function toVaultRelativePath(inputPath: string, vaultPath: string): string {
+export function toDatabaseRelativePath(inputPath: string, databasePath: string): string {
     const stripped = inputPath.replace(/^[/\\]+/, "");
     if (!path.isAbsolute(inputPath)) {
         const normalized = stripped.replace(/\\/g, "/");
-        const resolved = path.resolve(vaultPath, normalized);
-        const rel = path.relative(vaultPath, resolved);
+        const resolved = path.resolve(databasePath, normalized);
+        const rel = path.relative(databasePath, resolved);
         if (rel.startsWith("..") || path.isAbsolute(rel)) {
             throw new Error(`Path ${inputPath} is outside of the local database directory`);
         }
         return rel.replace(/\\/g, "/");
     }
     const resolved = path.resolve(inputPath);
-    const rel = path.relative(vaultPath, resolved);
+    const rel = path.relative(databasePath, resolved);
     if (rel.startsWith("..") || path.isAbsolute(rel)) {
         throw new Error(`Path ${inputPath} is outside of the local database directory`);
     }
@@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin
 }

 export async function readStdinAsUtf8(): Promise<string> {
-    const chunks: Buffer[] = [];
+    const chunks = [];
     for await (const chunk of process.stdin) {
         if (typeof chunk === "string") {
             chunks.push(Buffer.from(chunk, "utf-8"));
         } else {
-            chunks.push(chunk);
+            chunks.push(chunk as Buffer);
         }
     }
-    return Buffer.concat(chunks).toString("utf-8");
+    return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
 }

 export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise<string> {
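The containment rule that `toDatabaseRelativePath` enforces above — resolve the candidate against the base, then reject it when `path.relative` escapes — can be exercised in isolation. A minimal sketch (the helper name `isInsideDatabase` is hypothetical; it only mirrors the diff's check, it is not the project's API):

```typescript
import * as path from "node:path";

// Hypothetical standalone version of the diff's containment check: a candidate
// is accepted only when the relative path back from the base neither starts
// with ".." nor is absolute after resolution.
export function isInsideDatabase(databasePath: string, candidate: string): boolean {
    const base = path.resolve(databasePath);
    const resolved = path.resolve(base, candidate);
    const rel = path.relative(base, resolved);
    return !rel.startsWith("..") && !path.isAbsolute(rel);
}
```

As in the diff, this rejects both absolute paths outside the base and relative traversals such as `../escape.md`.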
@@ -1,29 +1,33 @@
 import * as path from "path";
 import { describe, expect, it } from "vitest";
-import { toVaultRelativePath } from "./utils";
+import { toDatabaseRelativePath } from "./utils";

-describe("toVaultRelativePath", () => {
-    const vaultPath = path.resolve("/tmp/livesync-vault");
+describe("toDatabaseRelativePath", () => {
+    const databasePath = path.resolve("/tmp/livesync-vault");

     it("rejects absolute paths outside vault", () => {
-        expect(() => toVaultRelativePath("/etc/passwd", vaultPath)).toThrow("outside of the local database directory");
+        expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow(
+            "outside of the local database directory"
+        );
     });

     it("normalizes leading slash for absolute path inside vault", () => {
-        const absoluteInsideVault = path.join(vaultPath, "notes", "foo.md");
-        expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("notes/foo.md");
+        const absoluteInsideVault = path.join(databasePath, "notes", "foo.md");
+        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md");
     });

     it("normalizes Windows-style separators", () => {
-        expect(toVaultRelativePath("notes\\daily\\2026-03-12.md", vaultPath)).toBe("notes/daily/2026-03-12.md");
+        expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md");
     });

     it("returns vault-relative path for another absolute path inside vault", () => {
-        const absoluteInsideVault = path.join(vaultPath, "docs", "inside.md");
-        expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("docs/inside.md");
+        const absoluteInsideVault = path.join(databasePath, "docs", "inside.md");
+        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md");
     });

     it("rejects relative path traversal that escapes vault", () => {
-        expect(() => toVaultRelativePath("../escape.md", vaultPath)).toThrow("outside of the local database directory");
+        expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow(
+            "outside of the local database directory"
+        );
     });
 });
src/apps/cli/docker-entrypoint.sh (new file, 25 lines)
@@ -0,0 +1,25 @@
+#!/bin/sh
+# Entrypoint wrapper for the Self-hosted LiveSync CLI Docker image.
+#
+# By default, /data is used as the database-path (the vault mount point).
+# Override this via the LIVESYNC_DB_PATH environment variable.
+#
+# Examples:
+#   docker run -v /path/to/vault:/data livesync-cli sync
+#   docker run -v /path/to/vault:/data livesync-cli --settings /data/.livesync/settings.json sync
+#   docker run -v /path/to/vault:/data livesync-cli init-settings
+#   docker run -e LIVESYNC_DB_PATH=/vault -v /path/to/vault:/vault livesync-cli sync
+
+set -e
+
+case "${1:-}" in
+    init-settings | --help | -h | "")
+        # Commands that do not require a leading database-path argument
+        exec node /app/dist/index.cjs "$@"
+        ;;
+    *)
+        # All other commands: prepend the database-path so users only need
+        # to supply the command and its options.
+        exec node /app/dist/index.cjs "${LIVESYNC_DB_PATH:-/data}" "$@"
+        ;;
+esac
@@ -1,10 +1,11 @@
 #!/usr/bin/env node
-import polyfill from "node-datachannel/polyfill";
+import * as polyfill from "werift";
 import { main } from "./main";

-for (const prop in polyfill) {
-    // @ts-ignore Applying polyfill to globalThis
-    globalThis[prop] = (polyfill as any)[prop];
+const rtcPolyfillCtor = (polyfill as any).RTCPeerConnection;
+if (typeof (globalThis as any).RTCPeerConnection === "undefined" && typeof rtcPolyfillCtor === "function") {
+    // Fill only the standard WebRTC global in Node CLI runtime.
+    (globalThis as any).RTCPeerConnection = rtcPolyfillCtor;
 }

 main().catch((error) => {
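The `index.ts` change above replaces blanket copying of every polyfill export onto `globalThis` with a guarded assignment of a single symbol. The underlying pattern — install a global only when the runtime does not already provide it, so a native implementation always wins — can be sketched as (the helper name `installGlobalOnce` is hypothetical):

```typescript
// Install `candidate` as a global named `name` only when that global is
// absent and the candidate is actually a constructor/function; returns
// whether the assignment happened.
export function installGlobalOnce(name: string, candidate: unknown): boolean {
    const host = globalThis as Record<string, unknown>;
    if (typeof host[name] !== "undefined" || typeof candidate !== "function") {
        return false; // existing implementation present, or candidate unusable
    }
    host[name] = candidate;
    return true;
}
```

Repeated calls are no-ops once the global exists, which is the property the diff relies on when a native `RTCPeerConnection` is available.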
@@ -3,26 +3,12 @@
  * Command-line version of Self-hosted LiveSync plugin for syncing vaults without Obsidian
  */

-if (!("localStorage" in globalThis)) {
-    const store = new Map<string, string>();
-    (globalThis as any).localStorage = {
-        getItem: (key: string) => (store.has(key) ? store.get(key)! : null),
-        setItem: (key: string, value: string) => {
-            store.set(key, value);
-        },
-        removeItem: (key: string) => {
-            store.delete(key);
-        },
-        clear: () => {
-            store.clear();
-        },
-    };
-}
-
 import * as fs from "fs/promises";
 import * as path from "path";
 import { NodeServiceContext, NodeServiceHub } from "./services/NodeServiceHub";
+import { configureNodeLocalStorage, ensureGlobalNodeLocalStorage } from "./services/NodeLocalStorage";
 import { LiveSyncBaseCore } from "../../LiveSyncBaseCore";
 import { ModuleReplicatorP2P } from "../../modules/core/ModuleReplicatorP2P";
 import { initialiseServiceModulesCLI } from "./serviceModules/CLIServiceModules";
 import { DEFAULT_SETTINGS, LOG_LEVEL_VERBOSE, type LOG_LEVEL, type ObsidianLiveSyncSettings } from "@lib/common/types";
 import type { InjectableServiceHub } from "@lib/services/implements/injectable/InjectableServiceHub";
@@ -42,69 +28,55 @@ import { getPathFromUXFileInfo } from "@lib/common/typeUtils";
 import { stripAllPrefixes } from "@lib/string_and_binary/path";

 const SETTINGS_FILE = ".livesync/settings.json";
+ensureGlobalNodeLocalStorage();
 defaultLoggerEnv.minLogLevel = LOG_LEVEL_DEBUG;

-// DI the log again.
-// const recentLogEntries = reactiveSource<LogEntry[]>([]);
-// const globalLogFunction = (message: any, level?: number, key?: string) => {
-//     const messageX =
-//         message instanceof Error
-//             ? new LiveSyncError("[Error Logged]: " + message.message, { cause: message })
-//             : message;
-//     const entry = { message: messageX, level, key } as LogEntry;
-//     recentLogEntries.value = [...recentLogEntries.value, entry];
-// };
-
-// setGlobalLogFunction((msg, level) => {
-//     console.error(`[${level}] ${typeof msg === "string" ? msg : JSON.stringify(msg)}`);
-//     if (msg instanceof Error) {
-//         console.error(msg);
-//     }
-// });
 function printHelp(): void {
     console.log(`
 Self-hosted LiveSync CLI

 Usage:
-  livesync-cli [database-path] [options] [command] [command-args]
+  livesync-cli <database-path> [options] <command> [command-args]
   livesync-cli init-settings [path]

 Arguments:
-  database-path             Path to the local database directory (required)
+  database-path             Path to the local database directory

 Commands:
   sync                      Run one replication cycle and exit
-  p2p-peers <timeout>       Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
+  p2p-peers <timeout>       Show discovered peers as [peer]\t<peer-id>\t<peer-name>
   p2p-sync <peer> <timeout>
                             Sync with the specified peer-id or peer-name
   p2p-host                  Start P2P host mode and wait until interrupted
   push <src> <dst>          Push local file <src> into local database path <dst>
   pull <src> <dst>          Pull file <src> from local database into local file <dst>
   pull-rev <src> <dst> <rev>  Pull file <src> at specific revision <rev> into local file <dst>
   setup <setupURI>          Apply setup URI to settings file
   put <dst>                 Read UTF-8 content from stdin and write to local database path <dst>
   cat <src>                 Read file <src> from local database and write to stdout
   cat-rev <src> <rev>       Read file <src> at specific revision <rev> and write to stdout
-  ls [prefix]               List DB files as path<TAB>size<TAB>mtime<TAB>revision[*]
+  ls [prefix]               List DB files as path\tsize\tmtime\trevision[*]
   info <path>               Show detailed metadata for a file (ID, revision, conflicts, chunks)
   rm <path>                 Mark a file as deleted in local database
   resolve <path> <rev>      Resolve conflicts by keeping <rev> and deleting others
+  mirror [vault-path]       Mirror database contents to the local file system (vault-path defaults to database-path)

 Examples:
   livesync-cli ./my-database sync
   livesync-cli ./my-database p2p-peers 5
   livesync-cli ./my-database p2p-sync my-peer-name 15
   livesync-cli ./my-database p2p-host
   livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
   livesync-cli ./my-database pull folder/note.md ./exports/note.md
   livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
   livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
   echo "Hello" | livesync-cli ./my-database put notes/hello.md
   livesync-cli ./my-database cat notes/hello.md
   livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef
   livesync-cli ./my-database ls notes/
   livesync-cli ./my-database info notes/hello.md
   livesync-cli ./my-database rm notes/hello.md
   livesync-cli ./my-database resolve notes/hello.md 3-abcdef
   livesync-cli init-settings ./data.json
+  livesync-cli ./my-database --verbose
 `);
 }
@@ -142,6 +114,7 @@ export function parseArgs(): CLIOptions {
             case "-d":
+                // debugging automatically enables verbose logging, as it is intended for debugging issues.
                 debug = true;
             // falls through
             case "--verbose":
             case "-v":
                 verbose = true;
@@ -250,33 +223,34 @@ export async function main() {
         return;
     }

-    // Resolve vault path
-    const vaultPath = path.resolve(options.databasePath!);
-    // Check if vault directory exists
+    // Resolve database path
+    const databasePath = path.resolve(options.databasePath!);
+    // Check if database directory exists
     try {
-        const stat = await fs.stat(vaultPath);
+        const stat = await fs.stat(databasePath);
         if (!stat.isDirectory()) {
-            console.error(`Error: ${vaultPath} is not a directory`);
+            console.error(`Error: ${databasePath} is not a directory`);
             process.exit(1);
         }
     } catch (error) {
-        console.error(`Error: Vault directory ${vaultPath} does not exist`);
+        console.error(`Error: Database directory ${databasePath} does not exist`);
         process.exit(1);
     }

     // Resolve settings path
     const settingsPath = options.settingsPath
         ? path.resolve(options.settingsPath)
-        : path.join(vaultPath, SETTINGS_FILE);
+        : path.join(databasePath, SETTINGS_FILE);
+    configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json"));

     infoLog(`Self-hosted LiveSync CLI`);
-    infoLog(`Vault: ${vaultPath}`);
+    infoLog(`Database Path: ${databasePath}`);
     infoLog(`Settings: ${settingsPath}`);
     infoLog("");

     // Create service context and hub
-    const context = new NodeServiceContext(vaultPath);
-    const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context);
+    const context = new NodeServiceContext(databasePath);
+    const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
     serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
         let levelStr = "";
         switch (level) {
@@ -350,15 +324,22 @@ export async function main() {
     const core = new LiveSyncBaseCore(
         serviceHubInstance,
         (core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
-            return initialiseServiceModulesCLI(vaultPath, core, serviceHub);
+            const mirrorVaultPath =
+                options.command === "mirror" && options.commandArgs[0]
+                    ? path.resolve(options.commandArgs[0])
+                    : databasePath;
+            return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub);
         },
         () => [], // No extra modules
         (core) => [
             // No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts
             // new ModuleReplicatorP2P(core),
         ],
         () => [], // No add-ons
         (core) => {
             // Add target filter to prevent internal files are handled
             core.services.vault.isTargetFile.addHandler(async (target) => {
-                const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target));
-                const parts = vaultPath.split(path.sep);
+                const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
+                const parts = targetPath.split(path.sep);
                 // if some part of the path starts with dot, treat it as internal file and ignore.
                 if (parts.some((part) => part.startsWith("."))) {
                     return await Promise.resolve(false);
@@ -419,7 +400,7 @@ export async function main() {
         infoLog("");
     }

-    const result = await runCommand(options, { vaultPath, core, settingsPath });
+    const result = await runCommand(options, { databasePath, core, settingsPath });
     if (!result) {
         console.error(`[Error] Command '${options.command}' failed`);
         process.exitCode = 1;
@@ -17,7 +17,7 @@ describe("CLI parseArgs", () => {
     });

     it("exits 1 when --settings has no value", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "--settings"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "--settings"];
         const exitMock = mockProcessExit();
         const stderr = vi.spyOn(console, "error").mockImplementation(() => {});

@@ -37,7 +37,7 @@ describe("CLI parseArgs", () => {
     });

     it("exits 1 for unknown command after database-path", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"];
         const exitMock = mockProcessExit();
         const stderr = vi.spyOn(console, "error").mockImplementation(() => {});

@@ -56,32 +56,32 @@ describe("CLI parseArgs", () => {
         expect(stdout).toHaveBeenCalled();
         const combined = stdout.mock.calls.flat().join("\n");
         expect(combined).toContain("Usage:");
-        expect(combined).toContain("livesync-cli [database-path]");
+        expect(combined).toContain("livesync-cli <database-path> [options] <command> [command-args]");
     });

     it("parses p2p-peers command and timeout", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"];
         const parsed = parseArgs();

-        expect(parsed.databasePath).toBe("./vault");
+        expect(parsed.databasePath).toBe("./databasePath");
         expect(parsed.command).toBe("p2p-peers");
         expect(parsed.commandArgs).toEqual(["5"]);
     });

     it("parses p2p-sync command with peer and timeout", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-sync", "peer-1", "12"];
         const parsed = parseArgs();

-        expect(parsed.databasePath).toBe("./vault");
+        expect(parsed.databasePath).toBe("./databasePath");
         expect(parsed.command).toBe("p2p-sync");
         expect(parsed.commandArgs).toEqual(["peer-1", "12"]);
     });

     it("parses p2p-host command", () => {
-        process.argv = ["node", "livesync-cli", "./vault", "p2p-host"];
+        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"];
         const parsed = parseArgs();

-        expect(parsed.databasePath).toBe("./vault");
+        expect(parsed.databasePath).toBe("./databasePath");
         expect(parsed.command).toBe("p2p-host");
         expect(parsed.commandArgs).toEqual([]);
     });
@@ -10,6 +10,7 @@
         "preview": "vite preview",
         "cli": "node dist/index.cjs",
         "buildRun": "npm run build && npm run cli --",
+        "build:docker": "docker build -f Dockerfile -t livesync-cli ../../..",
         "check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json",
         "test:unit": "cd ../../.. && npx vitest run --config vitest.config.unit.ts src/apps/cli/main.unit.spec.ts src/apps/cli/commands/utils.unit.spec.ts src/apps/cli/commands/runCommand.unit.spec.ts src/apps/cli/commands/p2p.unit.spec.ts",
         "test:e2e:two-vaults": "bash test/test-e2e-two-vaults-with-docker-linux.sh",
@@ -19,12 +20,20 @@
         "test:e2e:setup-put-cat": "bash test/test-setup-put-cat-linux.sh",
         "test:e2e:sync-two-local": "bash test/test-sync-two-local-databases-linux.sh",
         "test:e2e:p2p": "bash test/test-p2p-three-nodes-conflict-linux.sh",
+        "test:e2e:p2p-upload-download-repro": "bash test/test-p2p-upload-download-repro-linux.sh",
         "test:e2e:p2p-host": "bash test/test-p2p-host-linux.sh",
         "test:e2e:p2p-sync": "bash test/test-p2p-sync-linux.sh",
         "test:e2e:p2p-peers:local-relay": "bash test/test-p2p-peers-local-relay.sh",
         "test:e2e:mirror": "bash test/test-mirror-linux.sh",
         "pretest:e2e:all": "npm run build",
-        "test:e2e:all": " export RUN_BUILD=0 && npm run test:e2e:setup-put-cat && npm run test:e2e:push-pull && npm run test:e2e:sync-two-local && npm run test:e2e:p2p && npm run test:e2e:mirror && npm run test:e2e:two-vaults && npm run test:e2e:p2p"
+        "test:e2e:all": " export RUN_BUILD=0 && npm run test:e2e:setup-put-cat && npm run test:e2e:push-pull && npm run test:e2e:sync-two-local && npm run test:e2e:p2p && npm run test:e2e:mirror && npm run test:e2e:two-vaults && npm run test:e2e:p2p",
+        "pretest:e2e:docker:all": "npm run build:docker",
+        "test:e2e:docker:push-pull": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-push-pull-linux.sh",
+        "test:e2e:docker:setup-put-cat": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-setup-put-cat-linux.sh",
+        "test:e2e:docker:mirror": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-mirror-linux.sh",
+        "test:e2e:docker:sync-two-local": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-sync-two-local-databases-linux.sh",
+        "test:e2e:docker:p2p": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-p2p-three-nodes-conflict-linux.sh",
+        "test:e2e:docker:p2p-sync": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-p2p-sync-linux.sh",
+        "test:e2e:docker:all": "export RUN_BUILD=0 && npm run test:e2e:docker:setup-put-cat && npm run test:e2e:docker:push-pull && npm run test:e2e:docker:sync-two-local && npm run test:e2e:docker:mirror"
     },
     "dependencies": {},
     "devDependencies": {}
24  src/apps/cli/runtime-package.json  Normal file
@@ -0,0 +1,24 @@
{
    "name": "livesync-cli-runtime",
    "private": true,
    "version": "0.0.0",
    "description": "Runtime dependencies for Self-hosted LiveSync CLI Docker image",
    "dependencies": {
        "commander": "^14.0.3",
        "werift": "^0.22.9",
        "pouchdb-adapter-http": "^9.0.0",
        "pouchdb-adapter-idb": "^9.0.0",
        "pouchdb-adapter-indexeddb": "^9.0.0",
        "pouchdb-adapter-leveldb": "^9.0.0",
        "pouchdb-adapter-memory": "^9.0.0",
        "pouchdb-core": "^9.0.0",
        "pouchdb-errors": "^9.0.0",
        "pouchdb-find": "^9.0.0",
        "pouchdb-mapreduce": "^9.0.0",
        "pouchdb-merge": "^9.0.0",
        "pouchdb-replication": "^9.0.0",
        "pouchdb-utils": "^9.0.0",
        "pouchdb-wrappers": "*",
        "transform-pouch": "^2.0.0"
    }
}
111  src/apps/cli/services/NodeLocalStorage.ts  Normal file
@@ -0,0 +1,111 @@
import * as nodeFs from "node:fs";
import * as nodePath from "node:path";

type LocalStorageShape = {
    getItem(key: string): string | null;
    setItem(key: string, value: string): void;
    removeItem(key: string): void;
    clear(): void;
};

class PersistentNodeLocalStorage {
    private storagePath: string | undefined;
    private localStore: Record<string, string> = {};

    configure(storagePath: string) {
        if (this.storagePath === storagePath) {
            return;
        }
        this.storagePath = storagePath;
        this.loadFromFile();
    }

    private loadFromFile() {
        if (!this.storagePath) {
            this.localStore = {};
            return;
        }
        try {
            const loaded = JSON.parse(nodeFs.readFileSync(this.storagePath, "utf-8")) as Record<string, string>;
            this.localStore = { ...loaded };
        } catch {
            this.localStore = {};
        }
    }

    private flushToFile() {
        if (!this.storagePath) {
            return;
        }
        nodeFs.mkdirSync(nodePath.dirname(this.storagePath), { recursive: true });
        nodeFs.writeFileSync(this.storagePath, JSON.stringify(this.localStore, null, 2), "utf-8");
    }

    getItem(key: string): string | null {
        return this.localStore[key] ?? null;
    }

    setItem(key: string, value: string) {
        this.localStore[key] = value;
        this.flushToFile();
    }

    removeItem(key: string) {
        if (!(key in this.localStore)) {
            return;
        }
        delete this.localStore[key];
        this.flushToFile();
    }

    clear() {
        this.localStore = {};
        this.flushToFile();
    }
}

const persistentNodeLocalStorage = new PersistentNodeLocalStorage();

function createNodeLocalStorageShim(): LocalStorageShape {
    return {
        getItem(key: string) {
            return persistentNodeLocalStorage.getItem(key);
        },
        setItem(key: string, value: string) {
            persistentNodeLocalStorage.setItem(key, value);
        },
        removeItem(key: string) {
            persistentNodeLocalStorage.removeItem(key);
        },
        clear() {
            persistentNodeLocalStorage.clear();
        },
    };
}

export function ensureGlobalNodeLocalStorage() {
    if (!("localStorage" in globalThis) || typeof (globalThis as any).localStorage?.getItem !== "function") {
        (globalThis as any).localStorage = createNodeLocalStorageShim();
    }
}

export function configureNodeLocalStorage(storagePath: string) {
    persistentNodeLocalStorage.configure(storagePath);
    ensureGlobalNodeLocalStorage();
}

export function getNodeLocalStorageItem(key: string): string {
    return persistentNodeLocalStorage.getItem(key) ?? "";
}

export function setNodeLocalStorageItem(key: string, value: string) {
    persistentNodeLocalStorage.setItem(key, value);
}

export function deleteNodeLocalStorageItem(key: string) {
    persistentNodeLocalStorage.removeItem(key);
}

export function clearNodeLocalStorage() {
    persistentNodeLocalStorage.clear();
}
60  src/apps/cli/services/NodeLocalStorage.unit.spec.ts  Normal file
@@ -0,0 +1,60 @@
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import {
    clearNodeLocalStorage,
    configureNodeLocalStorage,
    ensureGlobalNodeLocalStorage,
    getNodeLocalStorageItem,
    setNodeLocalStorageItem,
} from "./NodeLocalStorage";

describe("NodeLocalStorage", () => {
    const tempDirs: string[] = [];

    afterEach(() => {
        clearNodeLocalStorage();
        for (const tempDir of tempDirs.splice(0)) {
            fs.rmSync(tempDir, { recursive: true, force: true });
        }
    });

    it("persists values to the configured file", () => {
        const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
        tempDirs.push(tempDir);
        const storagePath = path.join(tempDir, "runtime", "local-storage.json");

        configureNodeLocalStorage(storagePath);
        setNodeLocalStorageItem("checkpoint", "42");

        const saved = JSON.parse(fs.readFileSync(storagePath, "utf-8")) as Record<string, string>;
        expect(saved.checkpoint).toBe("42");
    });

    it("reloads persisted values when configured again", () => {
        const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
        tempDirs.push(tempDir);
        const storagePath = path.join(tempDir, "runtime", "local-storage.json");

        fs.mkdirSync(path.dirname(storagePath), { recursive: true });
        fs.writeFileSync(storagePath, JSON.stringify({ persisted: "value" }, null, 2), "utf-8");

        configureNodeLocalStorage(storagePath);

        expect(getNodeLocalStorageItem("persisted")).toBe("value");
    });

    it("installs a global localStorage shim backed by the same store", () => {
        const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
        tempDirs.push(tempDir);
        const storagePath = path.join(tempDir, "runtime", "local-storage.json");

        configureNodeLocalStorage(storagePath);
        ensureGlobalNodeLocalStorage();

        globalThis.localStorage.setItem("shared", "state");

        expect(getNodeLocalStorageItem("shared")).toBe("state");
    });
});
@@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService";
import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types";

export class NodeServiceContext extends ServiceContext {
-    vaultPath: string;
-    constructor(vaultPath: string) {
+    databasePath: string;
+    constructor(databasePath: string) {
        super();
-        this.vaultPath = vaultPath;
+        this.databasePath = databasePath;
    }
}

@@ -64,7 +64,7 @@ class NodeDatabaseService<T extends NodeServiceContext> extends DatabaseService<
    ): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } {
        const optionPass = {
            ...options,
-            prefix: this.context.vaultPath + nodePath.sep,
+            prefix: this.context.databasePath + nodePath.sep,
        };
        const passSettings = { ...settings, useIndexedDBAdapter: false };
        return super.modifyDatabaseOptions(passSettings, name, optionPass);
@@ -5,17 +5,17 @@ import { handlers } from "@lib/services/lib/HandlerUtils";
import type { ObsidianLiveSyncSettings } from "@lib/common/types";
import type { ServiceContext } from "@lib/services/base/ServiceBase";
import { SettingService, type SettingServiceDependencies } from "@lib/services/base/SettingService";
-import * as nodeFs from "node:fs";
-import * as nodePath from "node:path";
+import {
+    configureNodeLocalStorage,
+    deleteNodeLocalStorageItem,
+    getNodeLocalStorageItem,
+    setNodeLocalStorageItem,
+} from "./NodeLocalStorage";

export class NodeSettingService<T extends ServiceContext> extends SettingService<T> {
    private storagePath: string;
-    private localStore: Record<string, string> = {};

    constructor(context: T, dependencies: SettingServiceDependencies, storagePath: string) {
        super(context, dependencies);
        this.storagePath = storagePath;
-        this.loadLocalStoreFromFile();
+        configureNodeLocalStorage(storagePath);
        this.onSettingSaved.addHandler((settings) => {
            eventHub.emitEvent(EVENT_SETTING_SAVED, settings);
            return Promise.resolve(true);
@@ -26,34 +26,16 @@ export class NodeSettingService<T extends ServiceContext> extends SettingService
        });
    }

-    private loadLocalStoreFromFile() {
-        try {
-            const loaded = JSON.parse(nodeFs.readFileSync(this.storagePath, "utf-8")) as Record<string, string>;
-            this.localStore = { ...loaded };
-        } catch {
-            this.localStore = {};
-        }
-    }

-    private flushLocalStoreToFile() {
-        nodeFs.mkdirSync(nodePath.dirname(this.storagePath), { recursive: true });
-        nodeFs.writeFileSync(this.storagePath, JSON.stringify(this.localStore, null, 2), "utf-8");
-    }

    protected setItem(key: string, value: string) {
-        this.localStore[key] = value;
-        this.flushLocalStoreToFile();
+        setNodeLocalStorageItem(key, value);
    }

    protected getItem(key: string): string {
-        return this.localStore[key] ?? "";
+        return getNodeLocalStorageItem(key);
    }

    protected deleteItem(key: string): void {
-        if (key in this.localStore) {
-            delete this.localStore[key];
-            this.flushLocalStoreToFile();
-        }
+        deleteNodeLocalStorageItem(key);
    }

    public saveData = handlers<{ saveData: (data: ObsidianLiveSyncSettings) => Promise<void> }>().binder("saveData");
49  src/apps/cli/test/repro-issue-860.sh  Executable file
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"

display_test_info "Test for Issue #860: Empty output from ls and mirror"

RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT

SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR"

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."
    npm run build
fi

echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"

# 1. Test 'ls' on empty database
echo "[INFO] Testing 'ls' on empty database..."
LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls)
if [[ -z "$LS_OUTPUT" ]]; then
    echo "[REPRODUCED] 'ls' returned empty output for empty database."
else
    echo "[INFO] 'ls' output: $LS_OUTPUT"
fi

# 2. Test 'mirror' on empty vault
echo "[INFO] Testing 'mirror' on empty vault..."
MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1)
if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then
    # Note: currently it prints [Command] mirror to stderr.
    # Let's see if it prints anything else.
    echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)."
else
    echo "[INFO] 'mirror' output: $MIRROR_OUTPUT"
fi

echo "[DONE] finished repro-860 test"
16  src/apps/cli/test/test-e2e-two-vaults-common.sh  Executable file → Normal file
@@ -136,6 +136,8 @@ fi

TARGET_A_ONLY="e2e/a-only-info.md"
TARGET_SYNC="e2e/sync-info.md"
+TARGET_SYNC_TWICE_FIRST="e2e/sync-twice-first.md"
+TARGET_SYNC_TWICE_SECOND="e2e/sync-twice-second.md"
TARGET_PUSH="e2e/pushed-from-a.md"
TARGET_PUT="e2e/put-from-a.md"
TARGET_PUSH_BINARY="e2e/pushed-from-a.bin"
@@ -154,6 +156,20 @@ INFO_B_SYNC="$(run_cli_b info "$TARGET_SYNC")"
cli_test_assert_contains "$INFO_B_SYNC" "\"path\": \"$TARGET_SYNC\"" "B info should include path after sync"
echo "[PASS] sync A->B and B info"

+echo "[CASE] B can sync again after first replication has completed"
+printf 'first-sync-round-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_SYNC_TWICE_FIRST" >/dev/null
+run_cli_a sync >/dev/null
+run_cli_b sync >/dev/null
+CAT_B_SYNC_TWICE_FIRST="$(run_cli_b cat "$TARGET_SYNC_TWICE_FIRST" | cli_test_sanitise_cat_stdout)"
+cli_test_assert_equal "first-sync-round-$DB_SUFFIX" "$CAT_B_SYNC_TWICE_FIRST" "B should receive first update after first sync"
+
+printf 'second-sync-round-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_SYNC_TWICE_SECOND" >/dev/null
+run_cli_a sync >/dev/null
+run_cli_b sync >/dev/null
+CAT_B_SYNC_TWICE_SECOND="$(run_cli_b cat "$TARGET_SYNC_TWICE_SECOND" | cli_test_sanitise_cat_stdout)"
+cli_test_assert_equal "second-sync-round-$DB_SUFFIX" "$CAT_B_SYNC_TWICE_SECOND" "B should receive second update after re-running sync"
+echo "[PASS] second sync after completion works"

echo "[CASE] A pushes and puts, both sync, and B can pull and cat"
PUSH_SRC="$WORK_DIR/push-source.txt"
PULL_DST="$WORK_DIR/pull-destination.txt"
0  src/apps/cli/test/test-e2e-two-vaults-matrix.sh  Executable file → Normal file
0  src/apps/cli/test/test-e2e-two-vaults-with-docker-linux.sh  Executable file → Normal file
150  src/apps/cli/test/test-helpers-docker.sh  Normal file
@@ -0,0 +1,150 @@
#!/usr/bin/env bash
# test-helpers-docker.sh
#
# Docker-mode overrides for test-helpers.sh.
# Sourced automatically at the end of test-helpers.sh when
# LIVESYNC_TEST_DOCKER=1 is set, replacing run_cli (and related helpers)
# with a Docker-based implementation.
#
# The Docker container and the host share a common directory layout:
#   $WORK_DIR (host) <-> /workdir (container)
#   $CLI_DIR (host)  <-> /clidir (container)
#
# Usage (run an existing test against the Docker image):
#   LIVESYNC_TEST_DOCKER=1 bash test/test-push-pull-linux.sh
#   LIVESYNC_TEST_DOCKER=1 bash test/test-mirror-linux.sh
#   LIVESYNC_TEST_DOCKER=1 bash test/test-sync-two-local-databases-linux.sh
#   LIVESYNC_TEST_DOCKER=1 bash test/test-setup-put-cat-linux.sh
#
# Optional environment variables:
#   DOCKER_IMAGE  Image name/tag to use (default: livesync-cli)
#   RUN_BUILD     Set to 1 to rebuild the Docker image before the test
#                 (default: 0 — assumes the image is already built)
#                 Build command: npm run build:docker (from src/apps/cli/)
#
# Notes:
#   - The container is started with --network host so that it can reach
#     CouchDB / P2P relay containers that are also using the host network.
#   - On macOS / Windows Docker Desktop --network host behaves differently
#     (it is not a true host-network bridge); tests that rely on localhost
#     connectivity to other containers may fail on those platforms.

# Ensure Docker-mode tests do not trigger host-side `npm run build` unless
# explicitly requested by the caller.
RUN_BUILD="${RUN_BUILD:-0}"

# Override the standard implementation.
# In Docker mode the CLI_CMD array is a no-op sentinel; run_cli is overridden
# directly.
cli_test_init_cli_cmd() {
    DOCKER_IMAGE="${DOCKER_IMAGE:-livesync-cli}"
    # CLI_CMD is unused in Docker mode; set a sentinel so existing code
    # that references it will not error.
    CLI_CMD=(__docker__)
}

# ─── display_test_info ────────────────────────────────────────────────────────
display_test_info() {
    local image="${DOCKER_IMAGE:-livesync-cli}"
    local image_id
    image_id="$(docker inspect --format='{{slice .Id 7 19}}' "$image" 2>/dev/null || echo "N/A")"
    echo "======================"
    echo "Script: ${BASH_SOURCE[1]:-$0}"
    echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    echo "Commit: $(git -C "${SCRIPT_DIR:-.}" rev-parse --short HEAD 2>/dev/null || echo "N/A")"
    echo "Mode: Docker image=${image} id=${image_id}"
    echo "======================"
}

# ─── _docker_translate_arg ───────────────────────────────────────────────────
# Translate a single host filesystem path to its in-container equivalent.
#   Paths under WORK_DIR → /workdir/...
#   Paths under CLI_DIR  → /clidir/...
# Everything else is returned unchanged (relative paths, URIs, plain names).
_docker_translate_arg() {
    local arg="$1"
    if [[ -n "${WORK_DIR:-}" && "$arg" == "$WORK_DIR"* ]]; then
        printf '%s' "/workdir${arg#$WORK_DIR}"
        return
    fi
    if [[ -n "${CLI_DIR:-}" && "$arg" == "$CLI_DIR"* ]]; then
        printf '%s' "/clidir${arg#$CLI_DIR}"
        return
    fi
    printf '%s' "$arg"
}
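The translation rule above can be sanity-checked in isolation. The following standalone sketch uses hypothetical `WORK_DIR` / `CLI_DIR` values (the real harness computes them from the test script's location):

```shell
#!/usr/bin/env bash
# Standalone sketch of the path-translation rule.
# WORK_DIR / CLI_DIR values here are hypothetical examples only.
WORK_DIR="/tmp/livesync-work"
CLI_DIR="/home/user/repo/src/apps/cli"

_docker_translate_arg() {
    local arg="$1"
    # Host paths under WORK_DIR map to /workdir inside the container.
    if [[ -n "${WORK_DIR:-}" && "$arg" == "$WORK_DIR"* ]]; then
        printf '%s' "/workdir${arg#$WORK_DIR}"
        return
    fi
    # Host paths under CLI_DIR map to /clidir inside the container.
    if [[ -n "${CLI_DIR:-}" && "$arg" == "$CLI_DIR"* ]]; then
        printf '%s' "/clidir${arg#$CLI_DIR}"
        return
    fi
    # Everything else (relative paths, plain names) passes through.
    printf '%s' "$arg"
}

echo "$(_docker_translate_arg "$WORK_DIR/vault/data.json")"   # /workdir/vault/data.json
echo "$(_docker_translate_arg "$CLI_DIR/.livesync/a.md")"     # /clidir/.livesync/a.md
echo "$(_docker_translate_arg "relative/note.md")"            # relative/note.md (unchanged)
```

Because the prefix match runs against the translated string before stripping, only the leading component is rewritten; nested separators inside the path are preserved as-is.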

# ─── run_cli ─────────────────────────────────────────────────────────────────
# Drop-in replacement for run_cli that executes the CLI inside a Docker
# container, translating host paths to container paths automatically.
#
# Calling convention is identical to the native run_cli:
#   run_cli <vault-path> [options] <command> [command-args]
#   run_cli init-settings [options] <settings-file>
#
# The vault path (first positional argument for regular commands) is forwarded
# via the LIVESYNC_DB_PATH environment variable so that docker-entrypoint.sh
# can inject it before the remaining CLI arguments.
run_cli() {
    local args=("$@")

    # ── 1. Translate all host paths to container paths ────────────────────
    local translated=()
    for arg in "${args[@]}"; do
        translated+=("$(_docker_translate_arg "$arg")")
    done

    # ── 2. Split vault path from the rest of the arguments ───────────────
    local first="${translated[0]:-}"
    local env_args=()
    local cli_args=()

    # These tokens are commands or flags that appear before any vault path.
    case "$first" in
        "" | --help | -h \
        | init-settings \
        | -v | --verbose | -d | --debug | -f | --force | -s | --settings)
            # No leading vault path — pass all translated args as-is.
            cli_args=("${translated[@]}")
            ;;
        *)
            # First arg is the vault path; hand it to docker-entrypoint.sh
            # via LIVESYNC_DB_PATH so the entrypoint prepends it correctly.
            env_args+=(-e "LIVESYNC_DB_PATH=$first")
            cli_args=("${translated[@]:1}")
            ;;
    esac

    # ── 3. Inject verbose / debug flags ──────────────────────────────────
    if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then
        cli_args=(-v "${cli_args[@]}")
    fi

    # ── 4. Volume mounts ──────────────────────────────────────────────────
    local vol_args=()
    if [[ -n "${WORK_DIR:-}" ]]; then
        vol_args+=(-v "${WORK_DIR}:/workdir")
    fi
    # Mount CLI_DIR (src/apps/cli) for two-vault tests that store vault data
    # under $CLI_DIR/.livesync/.
    if [[ -n "${CLI_DIR:-}" ]]; then
        vol_args+=(-v "${CLI_DIR}:/clidir")
    fi

    # ── 5. stdin forwarding ───────────────────────────────────────────────
    # Attach stdin only when it is a pipe (the 'put' command reads from stdin).
    # Without -i the pipe data would never reach the container process.
    local stdin_flags=()
    if [[ ! -t 0 ]]; then
        stdin_flags=(-i)
    fi

    docker run --rm \
        "${stdin_flags[@]}" \
        --network host \
        --user "$(id -u):$(id -g)" \
        "${vol_args[@]}" \
        "${env_args[@]}" \
        "${DOCKER_IMAGE:-livesync-cli}" \
        "${cli_args[@]}"
}
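The `LIVESYNC_DB_PATH` handoff relies on the image entrypoint prepending that path before the CLI arguments. The actual `docker-entrypoint.sh` is not shown in this diff; the following is only a hypothetical sketch of the argument-assembly step it would have to perform (`entrypoint_args` is an illustrative name):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the entrypoint's argument assembly: if
# LIVESYNC_DB_PATH is set, it becomes the first CLI argument, followed
# by whatever arguments `docker run` passed to the container.
entrypoint_args() {
    if [ -n "${LIVESYNC_DB_PATH:-}" ]; then
        printf '%s\n' "$LIVESYNC_DB_PATH" "$@"
    else
        printf '%s\n' "$@"
    fi
}

export LIVESYNC_DB_PATH="/workdir/vault"
entrypoint_args --settings /workdir/data.json ls
# prints, one per line:
#   /workdir/vault
#   --settings
#   /workdir/data.json
#   ls
```

Forwarding the vault path through the environment, rather than as a positional argument, keeps the `docker run` command line identical for commands with and without a leading vault path.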
@@ -1,5 +1,15 @@
#!/usr/bin/env bash

+# ─── local init hook ────────────────────────────────────────────────────────
+# If test-init.local.sh exists alongside this file, source it before anything
+# else. Use it to set up your local environment (e.g. activate nvm, set
+# DOCKER_IMAGE, ...). The file is git-ignored so it is safe to put personal
+# or machine-specific configuration there.
+_TEST_HELPERS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+# shellcheck source=/dev/null
+[[ -f "$_TEST_HELPERS_DIR/test-init.local.sh" ]] && source "$_TEST_HELPERS_DIR/test-init.local.sh"
+unset _TEST_HELPERS_DIR

cli_test_init_cli_cmd() {
    if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then
        CLI_CMD=(npm --silent run cli -- -v)
@@ -343,4 +353,10 @@ display_test_info(){
    echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    echo "Git commit: $(git -C "$SCRIPT_DIR/.." rev-parse --short HEAD 2>/dev/null || echo "N/A")"
    echo "======================"
}
+
+# Docker-mode hook — source overrides when LIVESYNC_TEST_DOCKER=1.
+if [[ "${LIVESYNC_TEST_DOCKER:-0}" == "1" ]]; then
+    # shellcheck source=/dev/null
+    source "$(dirname "${BASH_SOURCE[0]}")/test-helpers-docker.sh"
+fi
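For illustration, a personal `test-init.local.sh` could contain something like the following (all values below are examples, not defaults shipped with the repository):

```shell
# test-init.local.sh: personal, git-ignored, machine-specific setup.
# Example values only.
export DOCKER_IMAGE="livesync-cli:dev"   # use a locally tagged image
export VERBOSE_TEST_LOGGING=1            # verbose CLI logging in tests
# Activate a specific Node version, e.g.:
# source "$HOME/.nvm/nvm.sh" && nvm use 20
```

Because the hook runs before anything else in test-helpers.sh, values exported here win over the defaults assigned later with the `${VAR:-default}` pattern.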
@@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT

SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
+DB_DIR="$WORK_DIR/db"
mkdir -p "$VAULT_DIR/test"
+mkdir -p "$DB_DIR"

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."
@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
# isConfigured=true is required for mirror (canProceedScan checks this)
cli_test_mark_settings_configured "$SETTINGS_FILE"

+# Preparation: Sync settings and files logic
+DB_SETTINGS="$DB_DIR/settings.json"
+cp "$SETTINGS_FILE" "$DB_SETTINGS"
+
+# Helper for standard run (Separated paths)
+run_mirror_test() {
+    run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
+}
+
+# Helper for compatibility run (Same path)
+run_mirror_compat() {
+    run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+}
+
PASS=0
FAIL=0

@@ -78,19 +94,27 @@ portable_touch_timestamp() {
# Case 1: File exists only in storage → should be synced into DB after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 1: storage-only → DB ==="
+echo "=== Case 1: storage-only → DB (Separated Paths) ==="

printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"

-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+echo "[DEBUG] DB_DIR: $DB_DIR"
+echo "[DEBUG] VAULT_DIR: $VAULT_DIR"
+
+run_mirror_test

RESULT_FILE="$WORK_DIR/case1-cat.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"
+# Try 'ls' first to see what's in the DB
+echo "--- DB contents ---"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
+echo "-------------------"
+
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"

if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
-    assert_pass "storage-only file was synced into DB"
+    assert_pass "storage-only file was synced into DB using separated paths"
else
-    assert_fail "storage-only file NOT synced into DB"
+    assert_fail "storage-only file NOT synced into DB with separated paths"
    echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
    echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
fi
@@ -99,9 +123,9 @@ fi
# Case 2: File exists only in DB → should be restored to storage after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 2: DB-only → storage ==="
+echo "=== Case 2: DB-only → storage (Separated Paths) ==="

-printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md
+printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md

if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
    assert_fail "db-only.md unexpectedly exists in storage before mirror"
@@ -109,7 +133,7 @@ else
    echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
fi

-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test

if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
    STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
        assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
    fi
else
-    assert_fail "DB-only file was NOT restored to storage"
+    assert_fail "DB-only file NOT restored to storage after mirror"
fi

# ─────────────────────────────────────────────────────────────────────────────
# Case 3: File deleted in DB → should NOT be created in storage
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 3: DB-deleted → storage untouched ==="
+echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="

-printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md
+printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md

-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test

if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
    assert_pass "deleted DB entry was not restored to storage"
@@ -143,19 +167,19 @@ fi
# Case 4: Both exist, storage is newer → DB should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 4: storage newer → DB updated ==="
+echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="

# Seed DB with old content (mtime ≈ now)
-printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md
+printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md

# Write new content to storage with a timestamp 1 hour in the future
printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"

-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test

DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
+run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
    assert_pass "DB updated to match newer storage file"
else
@@ -168,16 +192,16 @@ fi
# Case 5: Both exist, DB is newer → storage should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
-echo "=== Case 5: DB newer → storage updated ==="
+echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="

# Write old content to storage with a timestamp 1 hour in the past
printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"

# Write new content to DB only (mtime ≈ now, newer than the storage file)
-printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md
+printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md

-run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+run_mirror_test

STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
@@ -186,6 +210,25 @@ else
    assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
fi

+# ─────────────────────────────────────────────────────────────────────────────
+# Case 6: Compatibility test - omitted vault-path
+# ─────────────────────────────────────────────────────────────────────────────
+echo ""
+echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="
+
+# We use VAULT_DIR as the "main" database path for this part.
+printf 'compat-content\n' > "$VAULT_DIR/compat.md"
+run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
+
+# In compat mode, it should find it in the DB at root
+CAT_RESULT="$WORK_DIR/compat-cat.txt"
+run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
+if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
+    assert_pass "Compatibility mode works (omitted vault-path)"
+else
+    assert_fail "Compatibility mode failed to sync file into DB"
+fi

# ─────────────────────────────────────────────────────────────────────────────
# Summary
# ─────────────────────────────────────────────────────────────────────────────
228  src/apps/cli/test/test-p2p-upload-download-repro-linux.sh  Normal file
@@ -0,0 +1,228 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info

RUN_BUILD="${RUN_BUILD:-1}"
KEEP_TEST_DATA="${KEEP_TEST_DATA:-1}"
VERBOSE_TEST_LOGGING="${VERBOSE_TEST_LOGGING:-0}"

RELAY="${RELAY:-ws://localhost:4000/}"
USE_INTERNAL_RELAY="${USE_INTERNAL_RELAY:-1}"
APP_ID="${APP_ID:-self-hosted-livesync-cli-tests}"
PEERS_TIMEOUT="${PEERS_TIMEOUT:-20}"
SYNC_TIMEOUT="${SYNC_TIMEOUT:-240}"

ROOM_ID="p2p-room-$(date +%s)-$RANDOM-$RANDOM"
PASSPHRASE="p2p-pass-$(date +%s)-$RANDOM-$RANDOM"

HOST_PEER_NAME="p2p-cli-host"
UPLOAD_PEER_NAME="p2p-cli-upload-$(date +%s)-$RANDOM"
DOWNLOAD_PEER_NAME="p2p-cli-download-$(date +%s)-$RANDOM"

cli_test_init_cli_cmd

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI"
    npm run build
fi

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-p2p-upload-download.XXXXXX")"
VAULT_HOST="$WORK_DIR/vault-host"
VAULT_UP="$WORK_DIR/vault-up"
VAULT_DOWN="$WORK_DIR/vault-down"
SETTINGS_HOST="$WORK_DIR/settings-host.json"
SETTINGS_UP="$WORK_DIR/settings-up.json"
SETTINGS_DOWN="$WORK_DIR/settings-down.json"
HOST_LOG="$WORK_DIR/p2p-host.log"
mkdir -p "$VAULT_HOST" "$VAULT_UP" "$VAULT_DOWN"

cleanup() {
    local exit_code=$?
    if [[ -n "${HOST_PID:-}" ]] && kill -0 "$HOST_PID" >/dev/null 2>&1; then
        kill -TERM "$HOST_PID" >/dev/null 2>&1 || true
        wait "$HOST_PID" >/dev/null 2>&1 || true
    fi
    if [[ "${P2P_RELAY_STARTED:-0}" == "1" ]]; then
        cli_test_stop_p2p_relay
    fi
    if [[ "$KEEP_TEST_DATA" != "1" ]]; then
        rm -rf "$WORK_DIR"
    else
        echo "[INFO] KEEP_TEST_DATA=1, preserving artefacts at $WORK_DIR"
    fi
    exit "$exit_code"
}
trap cleanup EXIT

if [[ "$USE_INTERNAL_RELAY" == "1" ]]; then
    if cli_test_is_local_p2p_relay "$RELAY"; then
        cli_test_start_p2p_relay
        P2P_RELAY_STARTED=1
    else
        echo "[INFO] USE_INTERNAL_RELAY=1 but RELAY is not local ($RELAY), skipping local relay startup"
    fi
fi

run_cli_host() {
    run_cli "$VAULT_HOST" --settings "$SETTINGS_HOST" "$@"
}

run_cli_up() {
    run_cli "$VAULT_UP" --settings "$SETTINGS_UP" "$@"
}

run_cli_down() {
    run_cli "$VAULT_DOWN" --settings "$SETTINGS_DOWN" "$@"
}

apply_p2p_test_tweaks() {
    local settings_file="$1"
    local device_name="$2"
    SETTINGS_FILE="$settings_file" DEVICE_NAME="$device_name" PASSPHRASE_VAL="$PASSPHRASE" node <<'NODE'
const fs = require("node:fs");
const settingsPath = process.env.SETTINGS_FILE;
const data = JSON.parse(fs.readFileSync(settingsPath, "utf-8"));

data.remoteType = "ONLY_P2P";
data.encrypt = true;
data.passphrase = process.env.PASSPHRASE_VAL;
data.usePathObfuscation = true;
data.handleFilenameCaseSensitive = false;
data.customChunkSize = 50;
data.usePluginSyncV2 = true;
data.doNotUseFixedRevisionForChunks = false;
data.P2P_DevicePeerName = process.env.DEVICE_NAME;
data.isConfigured = true;

fs.writeFileSync(settingsPath, JSON.stringify(data, null, 2), "utf-8");
NODE
}

discover_peer_id() {
    local side="$1"
    local output
    local peer_id
    if [[ "$side" == "up" ]]; then
        output="$(run_cli_up p2p-peers "$PEERS_TIMEOUT")"
    else
        output="$(run_cli_down p2p-peers "$PEERS_TIMEOUT")"
    fi
    peer_id="$(awk -F $'\t' 'NF>=3 && $1=="[peer]" {print $2; exit}' <<< "$output")"
    if [[ -z "$peer_id" ]]; then
        echo "[FAIL] ${side} could not discover any peer" >&2
        echo "[FAIL] peers output:" >&2
        echo "$output" >&2
        return 1
    fi
    echo "$peer_id"
}

echo "[INFO] preparing settings"
echo "[INFO] relay=$RELAY room=$ROOM_ID app=$APP_ID"
cli_test_init_settings_file "$SETTINGS_HOST"
cli_test_init_settings_file "$SETTINGS_UP"
cli_test_init_settings_file "$SETTINGS_DOWN"
cli_test_apply_p2p_settings "$SETTINGS_HOST" "$ROOM_ID" "$PASSPHRASE" "$APP_ID" "$RELAY" "~.*"
cli_test_apply_p2p_settings "$SETTINGS_UP" "$ROOM_ID" "$PASSPHRASE" "$APP_ID" "$RELAY" "~.*"
cli_test_apply_p2p_settings "$SETTINGS_DOWN" "$ROOM_ID" "$PASSPHRASE" "$APP_ID" "$RELAY" "~.*"
apply_p2p_test_tweaks "$SETTINGS_HOST" "$HOST_PEER_NAME"
apply_p2p_test_tweaks "$SETTINGS_UP" "$UPLOAD_PEER_NAME"
apply_p2p_test_tweaks "$SETTINGS_DOWN" "$DOWNLOAD_PEER_NAME"

echo "[CASE] start p2p-host"
run_cli_host p2p-host >"$HOST_LOG" 2>&1 &
HOST_PID=$!
for _ in 1 2 3 4 5 6 7 8 9 10 11 12; do
    if grep -Fq "P2P host is running" "$HOST_LOG"; then
        break
    fi
    sleep 1
done
if ! grep -Fq "P2P host is running" "$HOST_LOG"; then
    echo "[FAIL] p2p-host did not become ready" >&2
    cat "$HOST_LOG" >&2
    exit 1
fi
echo "[PASS] p2p-host started"

echo "[CASE] upload peer discovers host"
HOST_PEER_ID_FOR_UP="$(discover_peer_id up)"
echo "[PASS] upload peer discovered host: $HOST_PEER_ID_FOR_UP"

echo "[CASE] upload phase writes source files"
STORE_TEXT="$WORK_DIR/store-file.md"
DIFF_A_TEXT="$WORK_DIR/test-diff-1.md"
DIFF_B_TEXT="$WORK_DIR/test-diff-2.md"
DIFF_C_TEXT="$WORK_DIR/test-diff-3.md"
printf 'Hello, World!\n' > "$STORE_TEXT"
printf 'Content A\n' > "$DIFF_A_TEXT"
printf 'Content B\n' > "$DIFF_B_TEXT"
printf 'Content C\n' > "$DIFF_C_TEXT"
run_cli_up push "$STORE_TEXT" p2p/store-file.md >/dev/null
run_cli_up push "$DIFF_A_TEXT" p2p/test-diff-1.md >/dev/null
run_cli_up push "$DIFF_B_TEXT" p2p/test-diff-2.md >/dev/null
run_cli_up push "$DIFF_C_TEXT" p2p/test-diff-3.md >/dev/null

LARGE_TXT_100K="$WORK_DIR/large-100k.txt"
LARGE_TXT_1M="$WORK_DIR/large-1m.txt"
head -c 100000 /dev/zero | tr '\0' 'a' > "$LARGE_TXT_100K"
head -c 1000000 /dev/zero | tr '\0' 'b' > "$LARGE_TXT_1M"
run_cli_up push "$LARGE_TXT_100K" p2p/large-100000.md >/dev/null
run_cli_up push "$LARGE_TXT_1M" p2p/large-1000000.md >/dev/null

BINARY_100K="$WORK_DIR/binary-100k.bin"
BINARY_5M="$WORK_DIR/binary-5m.bin"
head -c 100000 /dev/urandom > "$BINARY_100K"
head -c 5000000 /dev/urandom > "$BINARY_5M"
run_cli_up push "$BINARY_100K" p2p/binary-100000.bin >/dev/null
run_cli_up push "$BINARY_5M" p2p/binary-5000000.bin >/dev/null
echo "[PASS] upload source files prepared"

echo "[CASE] upload phase syncs to host"
run_cli_up p2p-sync "$HOST_PEER_ID_FOR_UP" "$SYNC_TIMEOUT" >/dev/null
run_cli_up p2p-sync "$HOST_PEER_ID_FOR_UP" "$SYNC_TIMEOUT" >/dev/null
echo "[PASS] upload phase synced"

echo "[CASE] download peer discovers host"
HOST_PEER_ID_FOR_DOWN="$(discover_peer_id down)"
echo "[PASS] download peer discovered host: $HOST_PEER_ID_FOR_DOWN"

echo "[CASE] download phase syncs from host"
run_cli_down p2p-sync "$HOST_PEER_ID_FOR_DOWN" "$SYNC_TIMEOUT" >/dev/null
run_cli_down p2p-sync "$HOST_PEER_ID_FOR_DOWN" "$SYNC_TIMEOUT" >/dev/null
echo "[PASS] download phase synced"

echo "[CASE] verify text files on download peer"
DOWN_STORE_TEXT="$WORK_DIR/down-store-file.md"
DOWN_DIFF_A_TEXT="$WORK_DIR/down-test-diff-1.md"
DOWN_DIFF_B_TEXT="$WORK_DIR/down-test-diff-2.md"
DOWN_DIFF_C_TEXT="$WORK_DIR/down-test-diff-3.md"
run_cli_down pull p2p/store-file.md "$DOWN_STORE_TEXT" >/dev/null
run_cli_down pull p2p/test-diff-1.md "$DOWN_DIFF_A_TEXT" >/dev/null
run_cli_down pull p2p/test-diff-2.md "$DOWN_DIFF_B_TEXT" >/dev/null
run_cli_down pull p2p/test-diff-3.md "$DOWN_DIFF_C_TEXT" >/dev/null
cmp -s "$STORE_TEXT" "$DOWN_STORE_TEXT" || { echo "[FAIL] store-file mismatch" >&2; exit 1; }
cmp -s "$DIFF_A_TEXT" "$DOWN_DIFF_A_TEXT" || { echo "[FAIL] test-diff-1 mismatch" >&2; exit 1; }
cmp -s "$DIFF_B_TEXT" "$DOWN_DIFF_B_TEXT" || { echo "[FAIL] test-diff-2 mismatch" >&2; exit 1; }
cmp -s "$DIFF_C_TEXT" "$DOWN_DIFF_C_TEXT" || { echo "[FAIL] test-diff-3 mismatch" >&2; exit 1; }

echo "[CASE] verify pushed files on download peer"
DOWN_LARGE_100K="$WORK_DIR/down-large-100k.txt"
DOWN_LARGE_1M="$WORK_DIR/down-large-1m.txt"
DOWN_BINARY_100K="$WORK_DIR/down-binary-100k.bin"
DOWN_BINARY_5M="$WORK_DIR/down-binary-5m.bin"
run_cli_down pull p2p/large-100000.md "$DOWN_LARGE_100K" >/dev/null
run_cli_down pull p2p/large-1000000.md "$DOWN_LARGE_1M" >/dev/null
run_cli_down pull p2p/binary-100000.bin "$DOWN_BINARY_100K" >/dev/null
run_cli_down pull p2p/binary-5000000.bin "$DOWN_BINARY_5M" >/dev/null
cmp -s "$LARGE_TXT_100K" "$DOWN_LARGE_100K" || { echo "[FAIL] large-100000 mismatch" >&2; exit 1; }
cmp -s "$LARGE_TXT_1M" "$DOWN_LARGE_1M" || { echo "[FAIL] large-1000000 mismatch" >&2; exit 1; }
cmp -s "$BINARY_100K" "$DOWN_BINARY_100K" || { echo "[FAIL] binary-100000 mismatch" >&2; exit 1; }
cmp -s "$BINARY_5M" "$DOWN_BINARY_5M" || { echo "[FAIL] binary-5000000 mismatch" >&2; exit 1; }

echo "[PASS] CLI P2P upload/download reproduction scenario completed"
0	src/apps/cli/test/test-setup-put-cat-linux.sh	Executable file → Normal file
136	src/apps/cli/test/test-sync-locked-remote-linux.sh	Normal file
@@ -0,0 +1,136 @@
#!/usr/bin/env bash
# Test: CLI sync behaviour against a locked remote database.
#
# Scenario:
# 1. Start CouchDB, create a test database, and perform an initial sync so that
#    the milestone document is created on the remote.
# 2. Unlock the milestone (locked=false, accepted_nodes=[]) and verify sync
#    succeeds without the locked error message.
# 3. Lock the milestone (locked=true, accepted_nodes=[]) and verify sync fails
#    with an actionable error message.
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info

RUN_BUILD="${RUN_BUILD:-1}"
TEST_ENV_FILE="${TEST_ENV_FILE:-$CLI_DIR/.test.env}"
cli_test_init_cli_cmd

if [[ ! -f "$TEST_ENV_FILE" ]]; then
    echo "[ERROR] test env file not found: $TEST_ENV_FILE" >&2
    exit 1
fi

set -a
source "$TEST_ENV_FILE"
set +a

DB_SUFFIX="$(date +%s)-$RANDOM"

COUCHDB_URI="${hostname%/}"
COUCHDB_DBNAME="${dbname}-locked-${DB_SUFFIX}"
COUCHDB_USER="${username:-}"
COUCHDB_PASSWORD="${password:-}"

if [[ -z "$COUCHDB_URI" || -z "$COUCHDB_USER" || -z "$COUCHDB_PASSWORD" ]]; then
    echo "[ERROR] COUCHDB_URI, COUCHDB_USER, COUCHDB_PASSWORD are required" >&2
    exit 1
fi

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-locked-test.XXXXXX")"
VAULT_DIR="$WORK_DIR/vault"
SETTINGS_FILE="$WORK_DIR/settings.json"
mkdir -p "$VAULT_DIR"

cleanup() {
    local exit_code=$?
    cli_test_stop_couchdb
    rm -rf "$WORK_DIR"
    exit "$exit_code"
}
trap cleanup EXIT

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI"
    npm run build
fi

echo "[INFO] starting CouchDB and creating test database: $COUCHDB_DBNAME"
cli_test_start_couchdb "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME"

echo "[INFO] preparing settings"
cli_test_init_settings_file "$SETTINGS_FILE"
cli_test_apply_couchdb_settings "$SETTINGS_FILE" "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME" 1

echo "[INFO] initial sync to create milestone document"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >/dev/null

MILESTONE_ID="_local/obsydian_livesync_milestone"
MILESTONE_URL="${COUCHDB_URI}/${COUCHDB_DBNAME}/${MILESTONE_ID}"

update_milestone() {
    local locked="$1"
    local accepted_nodes="$2"
    local current
    current="$(cli_test_curl_json --user "${COUCHDB_USER}:${COUCHDB_PASSWORD}" "$MILESTONE_URL")"
    local updated
    updated="$(node -e '
        const doc = JSON.parse(process.argv[1]);
        doc.locked = process.argv[2] === "true";
        doc.accepted_nodes = JSON.parse(process.argv[3]);
        process.stdout.write(JSON.stringify(doc));
    ' "$current" "$locked" "$accepted_nodes")"
    cli_test_curl_json -X PUT \
        --user "${COUCHDB_USER}:${COUCHDB_PASSWORD}" \
        -H "Content-Type: application/json" \
        -d "$updated" \
        "$MILESTONE_URL" >/dev/null
}

SYNC_LOG="$WORK_DIR/sync.log"

echo "[CASE] sync should succeed when remote is not locked"
update_milestone "false" "[]"

set +e
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >"$SYNC_LOG" 2>&1
SYNC_EXIT=$?
set -e

if [[ "$SYNC_EXIT" -ne 0 ]]; then
    echo "[FAIL] sync should succeed when remote is not locked" >&2
    cat "$SYNC_LOG" >&2
    exit 1
fi

if grep -Fq "The remote database is locked" "$SYNC_LOG"; then
    echo "[FAIL] locked error should not appear when remote is not locked" >&2
    cat "$SYNC_LOG" >&2
    exit 1
fi

echo "[PASS] unlocked remote DB syncs successfully"

echo "[CASE] sync should fail with actionable error when remote is locked"
update_milestone "true" "[]"

set +e
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >"$SYNC_LOG" 2>&1
SYNC_EXIT=$?
set -e

if [[ "$SYNC_EXIT" -eq 0 ]]; then
    echo "[FAIL] sync should have exited with non-zero when remote is locked" >&2
    cat "$SYNC_LOG" >&2
    exit 1
fi

cli_test_assert_contains "$(cat "$SYNC_LOG")" \
    "The remote database is locked and this device is not yet accepted" \
    "sync output should contain the locked-remote error message"

echo "[PASS] locked remote DB produces actionable CLI error"
0	src/apps/cli/test/test-sync-two-local-databases-linux.sh	Executable file → Normal file
150	src/apps/cli/testdeno/CONTRIBUTING_TESTS.md	Normal file
@@ -0,0 +1,150 @@
# Writing CLI Tests on Deno

This guide explains how to add or update tests under `src/apps/cli/testdeno/`.
New tests should be added to the Deno suite rather than the existing bash suite, because it runs cross-platform and benefits from TypeScript.

## Scope

The Deno suite is designed for cross-platform execution, with a strong focus on Windows compatibility, while keeping behaviour equivalent to the existing bash tests.

## Principles

- Keep one scenario per file when practical.
- Reuse helpers from `helpers/` rather than duplicating process, Docker, or settings logic.
- Prefer deterministic data over random inputs unless randomness is explicitly required.
- Ensure every test can clean up automatically.
- Keep assertions actionable with clear failure messages.

## Directory structure

```
src/apps/cli/testdeno/
  helpers/
    backgroundCli.ts
    cli.ts
    docker.ts
    env.ts
    p2p.ts
    settings.ts
    temp.ts
  test-*.ts
  deno.json
```

## Test file naming

- Use `test-<feature>.ts`.
- Use names aligned with existing bash tests when porting, for example:
  - `test-sync-locked-remote.ts`
  - `test-p2p-sync.ts`

## Core helper usage

### Temporary workspace

Use `TempDir` and `await using` so cleanup is automatic:

```ts
await using workDir = await TempDir.create("livesync-cli-my-test");
```

### CLI execution

- `runCli(...)`: returns code and combined output.
- `runCliOrFail(...)`: throws on non-zero exit.
- `runCliWithInputOrFail(input, ...)`: for `put` and stdin-driven commands.
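
Internally, the runners collect piped stdout/stderr as `Uint8Array` chunks and merge them before decoding into the `combined` string. The chunk-merging step mirrors the `concatChunks` helper in `helpers/cli.ts` and can be sketched as:

```typescript
// Merge collected output chunks into one buffer before decoding.
// This mirrors the concatChunks idea used by the CLI runners.
function concatChunks(chunks: Uint8Array[]): Uint8Array {
    const total = chunks.reduce((n, c) => n + c.length, 0);
    const out = new Uint8Array(total);
    let offset = 0;
    for (const c of chunks) {
        out.set(c, offset);
        offset += c.length;
    }
    return out;
}

const enc = new TextEncoder();
const merged = concatChunks([enc.encode("hel"), enc.encode("lo")]);
console.log(new TextDecoder().decode(merged)); // "hello"
```

Merging once at the end avoids repeated string concatenation while the process is still streaming output.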

### Settings

- `initSettingsFile(...)`: creates a baseline settings file.
- `applyCouchdbSettings(...)`: applies CouchDB fields.
- `applyRemoteSyncSettings(...)`: applies remote and encryption fields.
- `applyP2pSettings(...)`: applies P2P fields.
- `applyP2pTestTweaks(...)`: enables the P2P-only test profile.

### Docker services

- `startCouchdb(...)`, `stopCouchdb()`
- `startP2pRelay()`, `stopP2pRelay()`

### P2P discovery

- `discoverPeer(...)`
- `maybeStartLocalRelay(...)`
- `stopLocalRelayIfStarted(...)`
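
For reference, the bash suite extracts the first peer id from tab-separated `p2p-peers` output with `awk -F $'\t' 'NF>=3 && $1=="[peer]" {print $2; exit}'`. A TypeScript counterpart of that parsing step (a hypothetical helper shown for illustration, not part of the published helper API) looks like:

```typescript
// Parse `p2p-peers` output: peer lines look like "[peer]\t<peer-id>\t<name>".
// Returns the first peer id found, or null when no peer line is present.
function firstPeerId(output: string): string | null {
    for (const line of output.split("\n")) {
        const cols = line.split("\t");
        if (cols.length >= 3 && cols[0] === "[peer]") return cols[1];
    }
    return null;
}

console.log(firstPeerId("[peer]\tabc123\tp2p-cli-host")); // "abc123"
```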

### Background host process

Use `startCliInBackground(...)` for long-running host mode such as `p2p-host`.

## Recommended test structure

1. Arrange
2. Act
3. Assert
4. Cleanup in `finally`

Example skeleton:

```ts
Deno.test("feature: behaviour", async () => {
    await using workDir = await TempDir.create("example");
    // Arrange

    try {
        // Act

        // Assert
    } finally {
        // Optional explicit cleanup
    }
});
```

## Reliability guidelines

- Use explicit waits only when needed for eventual consistency.
- Re-run sync operations where the protocol is eventually consistent.
- For network-sensitive commands, use `LIVESYNC_CLI_RETRY` during debugging.
- Keep Docker container reuse disabled by default unless debugging.
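
`LIVESYNC_CLI_RETRY` only retries failures that look transient; logic failures are surfaced immediately. The classification in `helpers/cli.ts` boils down to case-insensitive substring matching on the combined output:

```typescript
// Retry heuristic used by the CLI runners: treat socket-level hiccups as
// transient, and never retry assertion or logic failures.
function isTransientNetworkError(message: string): boolean {
    const m = message.toLowerCase();
    return (
        m.includes("fetch failed") ||
        m.includes("econnreset") ||
        m.includes("econnrefused") ||
        m.includes("und_err_socket") ||
        m.includes("other side closed")
    );
}
```

The retry loop then backs off linearly (400 ms times the attempt number) between attempts.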

## Environment variables

Common variables:

- `LIVESYNC_DOCKER_MODE`
- `LIVESYNC_DOCKER_COMMAND`
- `LIVESYNC_TEST_TEE`
- `LIVESYNC_DOCKER_TEE`
- `LIVESYNC_CLI_DEBUG`
- `LIVESYNC_CLI_VERBOSE`
- `LIVESYNC_CLI_RETRY`
- `LIVESYNC_DEBUG_KEEP_DOCKER`
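
`LIVESYNC_CLI_DEBUG` and `LIVESYNC_CLI_VERBOSE` are honoured by prepending `-d` or `-v` to every CLI invocation, with debug taking precedence when both are set. Adapted from the `decorateArgs` helper in `helpers/backgroundCli.ts` (parameters replace the module-level environment flags for illustration):

```typescript
// -d (debug) wins over -v (verbose); with neither set, arguments pass
// through untouched.
function decorateArgs(args: string[], debug: boolean, verbose: boolean): string[] {
    return debug ? ["-d", ...args] : verbose ? ["-v", ...args] : args;
}
```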

P2P variables:

- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY`

## Adding a new test task

1. Add the test file under `src/apps/cli/testdeno/`.
2. Add a task in `src/apps/cli/testdeno/deno.json`.
3. Update `src/apps/cli/testdeno/test_dev_deno.md`.
4. Run the new task locally.

## Validation checklist

- The test passes on a clean workspace.
- The test does not leave persistent artefacts unless explicitly requested.
- Failure messages identify both expected and actual behaviour.
- The corresponding task is documented.

## Out of scope for this suite

- One-off reproduction scripts that are not intended as stable regression tests.
22	src/apps/cli/testdeno/deno.json	Normal file
@@ -0,0 +1,22 @@
{
    "tasks": {
        "test": "deno test -A --no-check test-*.ts",
        "test:local": "deno test -A --no-check test-setup-put-cat.ts test-mirror.ts",
        "test:push-pull": "deno test -A --no-check test-push-pull.ts",
        "test:setup-put-cat": "deno test -A --no-check test-setup-put-cat.ts",
        "test:mirror": "deno test -A --no-check test-mirror.ts",
        "test:sync-two-local": "deno test -A --no-check test-sync-two-local-databases.ts",
        "test:sync-locked-remote": "deno test -A --no-check test-sync-locked-remote.ts",
        "test:p2p-host": "deno test -A --no-check test-p2p-host.ts",
        "test:p2p-peers": "deno test -A --no-check test-p2p-peers-local-relay.ts",
        "test:p2p-sync": "deno test -A --no-check test-p2p-sync.ts",
        "test:p2p-three-nodes": "deno test -A --no-check test-p2p-three-nodes-conflict.ts",
        "test:p2p-upload-download": "deno test -A --no-check test-p2p-upload-download-repro.ts",
        "test:e2e-couchdb": "deno test -A --no-check test-e2e-two-vaults-couchdb.ts",
        "test:e2e-matrix": "deno test -A --no-check test-e2e-two-vaults-matrix.ts"
    },
    "imports": {
        "@std/assert": "jsr:@std/assert@^1.0.13",
        "@std/path": "jsr:@std/path@^1.0.9"
    }
}
31	src/apps/cli/testdeno/deno.lock	generated	Normal file
@@ -0,0 +1,31 @@
{
    "version": "5",
    "specifiers": {
        "jsr:@std/assert@^1.0.13": "1.0.19",
        "jsr:@std/internal@^1.0.12": "1.0.12",
        "jsr:@std/path@^1.0.9": "1.1.4"
    },
    "jsr": {
        "@std/assert@1.0.19": {
            "integrity": "eaada96ee120cb980bc47e040f82814d786fe8162ecc53c91d8df60b8755991e",
            "dependencies": [
                "jsr:@std/internal"
            ]
        },
        "@std/internal@1.0.12": {
            "integrity": "972a634fd5bc34b242024402972cd5143eac68d8dffaca5eaa4dba30ce17b027"
        },
        "@std/path@1.1.4": {
            "integrity": "1d2d43f39efb1b42f0b1882a25486647cb851481862dc7313390b2bb044314b5",
            "dependencies": [
                "jsr:@std/internal"
            ]
        }
    },
    "workspace": {
        "dependencies": [
            "jsr:@std/assert@^1.0.13",
            "jsr:@std/path@^1.0.9"
        ]
    }
}
112	src/apps/cli/testdeno/helpers/backgroundCli.ts	Normal file
@@ -0,0 +1,112 @@
import { CLI_DIR } from "./cli.ts";
import { join } from "@std/path";

const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";

function decorateArgs(args: string[]): string[] {
    return DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
}

async function pump(
    stream: ReadableStream<Uint8Array>,
    sink: (text: string) => void,
    teeTarget: WritableStream<Uint8Array> | null
): Promise<void> {
    const reader = stream.getReader();
    const writer = teeTarget?.getWriter();
    const dec = new TextDecoder();
    try {
        while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            if (!value) continue;
            sink(dec.decode(value, { stream: true }));
            if (writer) {
                await writer.write(value);
            }
        }
    } finally {
        if (writer) writer.releaseLock();
        reader.releaseLock();
    }
}

export class BackgroundCliProcess {
    #stdout = "";
    #stderr = "";
    #stdoutDone: Promise<void>;
    #stderrDone: Promise<void>;

    constructor(
        readonly child: Deno.ChildProcess,
        readonly args: string[]
    ) {
        this.#stdoutDone = pump(
            child.stdout,
            (text) => {
                this.#stdout += text;
            },
            null
        );
        this.#stderrDone = pump(
            child.stderr,
            (text) => {
                this.#stderr += text;
            },
            null
        );
    }

    get stdout(): string {
        return this.#stdout;
    }

    get stderr(): string {
        return this.#stderr;
    }

    get combined(): string {
        return this.#stdout + this.#stderr;
    }

    async waitUntilContains(needle: string, timeoutMs = 15000): Promise<void> {
        const started = Date.now();
        while (Date.now() - started < timeoutMs) {
            if (this.combined.includes(needle)) return;
            const status = await Promise.race([
                this.child.status.then((s) => ({ type: "status" as const, status: s })),
                new Promise<{ type: "tick" }>((resolve) => setTimeout(() => resolve({ type: "tick" }), 100)),
            ]);
            if (status.type === "status") {
                throw new Error(
                    `Background CLI exited before '${needle}' appeared (code ${status.status.code})\n${this.combined}`
                );
            }
        }
        throw new Error(`Timed out waiting for '${needle}'\n${this.combined}`);
    }

    async stop(): Promise<number> {
        try {
            this.child.kill("SIGTERM");
        } catch {
            // ignore already-exited processes
        }
        const status = await this.child.status;
        await Promise.all([this.#stdoutDone, this.#stderrDone]);
        return status.code;
    }
}

export function startCliInBackground(...args: string[]): BackgroundCliProcess {
    const child = new Deno.Command("node", {
        args: [CLI_DIST, ...decorateArgs(args)],
        cwd: CLI_DIR,
        stdin: "null",
        stdout: "piped",
        stderr: "piped",
    }).spawn();
    return new BackgroundCliProcess(child, args);
}
231	src/apps/cli/testdeno/helpers/cli.ts	Normal file
@@ -0,0 +1,231 @@
import { join } from "@std/path";

// ---------------------------------------------------------------------------
// Path resolution
// ---------------------------------------------------------------------------
// This file lives at: src/apps/cli/testdeno/helpers/cli.ts
// CLI root (src/apps/cli/) is two levels up.
// import.meta.dirname is available in Deno 1.40+ as an OS-native path string.
export const CLI_DIR: string = join(import.meta.dirname!, "..", "..");
const CLI_DIST = join(CLI_DIR, "dist", "index.cjs");

// ---------------------------------------------------------------------------
// Result type
// ---------------------------------------------------------------------------
export interface CliResult {
    stdout: string;
    stderr: string;
    /** stdout + stderr concatenated — useful for assertion messages. */
    combined: string;
    code: number;
}

const TEE_ENABLED = Deno.env.get("LIVESYNC_TEST_TEE") === "1";
const VERBOSE_ENABLED = Deno.env.get("LIVESYNC_CLI_VERBOSE") === "1";
const DEBUG_ENABLED = Deno.env.get("LIVESYNC_CLI_DEBUG") === "1";

function sleep(ms: number): Promise<void> {
    return new Promise((resolve) => setTimeout(resolve, ms));
}

function concatChunks(chunks: Uint8Array[]): Uint8Array {
    const total = chunks.reduce((n, c) => n + c.length, 0);
    const out = new Uint8Array(total);
    let offset = 0;
    for (const c of chunks) {
        out.set(c, offset);
        offset += c.length;
    }
    return out;
}

async function collectStream(
    stream: ReadableStream<Uint8Array>,
    teeTarget: WritableStream<Uint8Array> | null
): Promise<Uint8Array> {
    const reader = stream.getReader();
    const chunks: Uint8Array[] = [];
    const writer = teeTarget?.getWriter();
    try {
        while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            if (value) {
                chunks.push(value);
                if (writer) {
                    await writer.write(value);
                }
            }
        }
    } finally {
        if (writer) {
            writer.releaseLock();
        }
        reader.releaseLock();
    }
    return concatChunks(chunks);
}

async function runNodeCommand(args: string[], stdinData?: Uint8Array): Promise<CliResult> {
    const cliArgs = DEBUG_ENABLED ? ["-d", ...args] : VERBOSE_ENABLED ? ["-v", ...args] : args;
    const child = new Deno.Command("node", {
        args: [CLI_DIST, ...cliArgs],
        cwd: CLI_DIR,
        stdin: stdinData ? "piped" : "null",
        stdout: "piped",
        stderr: "piped",
    }).spawn();

    const stdoutPromise = collectStream(child.stdout, TEE_ENABLED ? Deno.stdout.writable : null);
    const stderrPromise = collectStream(child.stderr, TEE_ENABLED ? Deno.stderr.writable : null);

    if (stdinData) {
        const w = child.stdin.getWriter();
        await w.write(stdinData);
        await w.close();
    }

    const [status, stdout, stderr] = await Promise.all([child.status, stdoutPromise, stderrPromise]);

    const dec = new TextDecoder();
    const out = dec.decode(stdout);
    const err = dec.decode(stderr);
    return { stdout: out, stderr: err, combined: out + err, code: status.code };
}

function isTransientNetworkError(message: string): boolean {
    const m = message.toLowerCase();
    return (
        m.includes("fetch failed") ||
        m.includes("econnreset") ||
        m.includes("econnrefused") ||
        m.includes("und_err_socket") ||
        m.includes("other side closed")
    );
}

// ---------------------------------------------------------------------------
// Core runners
// ---------------------------------------------------------------------------

/**
 * Run the CLI (node dist/index.cjs) with the supplied arguments.
 * Pass the vault / DB path as the first argument, exactly as the bash helpers
 * do. Does NOT throw on non-zero exit — check `.code` yourself.
 */
export async function runCli(...args: string[]): Promise<CliResult> {
    const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
    for (let attempt = 0; ; attempt++) {
        const result = await runNodeCommand(args);
        if (result.code === 0) return result;

        if (attempt >= retries || !isTransientNetworkError(result.combined)) {
            return result;
        }
        const waitMs = 400 * (attempt + 1);
        console.warn(`[WARN] transient CLI failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
        await sleep(waitMs);
    }
}

/**
 * Run the CLI and throw if it exits non-zero. Returns stdout.
 */
export async function runCliOrFail(...args: string[]): Promise<string> {
    const r = await runCli(...args);
    if (r.code !== 0) {
        throw new Error(`CLI exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
    }
    return r.stdout;
}

/**
 * Run the CLI with data piped to stdin (equivalent to `echo … | run_cli …`
 * or `cat file | run_cli …`).
 */
export async function runCliWithInput(input: string | Uint8Array, ...args: string[]): Promise<CliResult> {
    const data = typeof input === "string" ? new TextEncoder().encode(input) : input;

    const retries = Number(Deno.env.get("LIVESYNC_CLI_RETRY") ?? "0");
    for (let attempt = 0; ; attempt++) {
        const result = await runNodeCommand(args, data);
        if (result.code === 0) return result;

        if (attempt >= retries || !isTransientNetworkError(result.combined)) {
            return result;
        }
        const waitMs = 400 * (attempt + 1);
        console.warn(`[WARN] transient CLI(stdin) failure, retrying (${attempt + 1}/${retries}) in ${waitMs}ms`);
        await sleep(waitMs);
    }
}

/**
 * runCliWithInput — throws on non-zero exit, returns stdout.
 */
export async function runCliWithInputOrFail(input: string | Uint8Array, ...args: string[]): Promise<string> {
    const r = await runCliWithInput(input, ...args);
    if (r.code !== 0) {
        throw new Error(`CLI (with stdin) exited with code ${r.code}\nstdout: ${r.stdout}\nstderr: ${r.stderr}`);
    }
    return r.stdout;
}

// ---------------------------------------------------------------------------
// Output helpers
// ---------------------------------------------------------------------------

/** Strip the CLIWatchAdapter banner line that `cat` emits. */
export function sanitiseCatStdout(raw: string): string {
    return raw
        .split("\n")
        .filter((l) => l !== "[CLIWatchAdapter] File watching is not enabled in CLI version")
        .join("\n");
}

// ---------------------------------------------------------------------------
// Assertions (parity with test-helpers.sh)
// ---------------------------------------------------------------------------

export function assertContains(haystack: string, needle: string, message: string): void {
    if (!haystack.includes(needle)) {
        throw new Error(`[FAIL] ${message}\nExpected to find: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
    }
}

export function assertNotContains(haystack: string, needle: string, message: string): void {
    if (haystack.includes(needle)) {
        throw new Error(`[FAIL] ${message}\nDid NOT expect: ${JSON.stringify(needle)}\nActual output:\n${haystack}`);
|
||||
}
|
||||
}
|
||||
|
||||
export async function assertFilesEqual(expectedPath: string, actualPath: string, message: string): Promise<void> {
|
||||
const [expected, actual] = await Promise.all([Deno.readFile(expectedPath), Deno.readFile(actualPath)]);
|
||||
if (expected.length !== actual.length || expected.some((b, i) => b !== actual[i])) {
|
||||
const hex = async (d: Uint8Array<ArrayBuffer>) => {
|
||||
const h = await crypto.subtle.digest("SHA-256", d);
|
||||
return [...new Uint8Array(h)].map((b) => b.toString(16).padStart(2, "0")).join("");
|
||||
};
|
||||
throw new Error(
|
||||
`[FAIL] ${message}\nexpected SHA-256: ${await hex(expected)}\nactual SHA-256: ${await hex(actual)}`
|
||||
);
|
||||
}
|
||||
}
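On mismatch, assertFilesEqual reports SHA-256 digests formatted as lowercase hex. The byte-to-hex step can be sketched on its own (`toHex` is an illustrative standalone copy of the inline helper):

```typescript
// Format a byte array as lowercase, zero-padded hex, as the digest helper does.
function toHex(bytes: Uint8Array): string {
    return [...bytes].map((b) => b.toString(16).padStart(2, "0")).join("");
}

const hex = toHex(new Uint8Array([0, 15, 255])); // "000fff"
```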
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// JSON helpers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
export async function readJsonFile<T = Record<string, unknown>>(filePath: string): Promise<T> {
|
||||
return JSON.parse(await Deno.readTextFile(filePath)) as T;
|
||||
}
|
||||
|
||||
export function jsonStringField(jsonText: string, field: string): string {
|
||||
const data = JSON.parse(jsonText) as Record<string, unknown>;
|
||||
const value = data[field];
|
||||
return typeof value === "string" ? value : "";
|
||||
}
|
||||
|
||||
export function jsonFieldIsNa(data: Record<string, unknown>, field: string): boolean {
|
||||
return data[field] === "N/A";
|
||||
}
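The JSON helpers deliberately coerce non-string values to `""` so test assertions stay simple. A standalone sketch of that behaviour (`jsonStringFieldDemo` is an illustrative copy of `jsonStringField`):

```typescript
// Standalone copy of jsonStringField, for illustration only.
function jsonStringFieldDemo(jsonText: string, field: string): string {
    const data = JSON.parse(jsonText) as Record<string, unknown>;
    const value = data[field];
    return typeof value === "string" ? value : "";
}

const doc = '{"name":"note.md","size":42,"hash":"N/A"}';
const name = jsonStringFieldDemo(doc, "name"); // "note.md"
const size = jsonStringFieldDemo(doc, "size"); // "" — non-string values yield ""
const isNa = (JSON.parse(doc) as Record<string, unknown>)["hash"] === "N/A"; // true
```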
|
||||
530
src/apps/cli/testdeno/helpers/docker.ts
Normal file
@@ -0,0 +1,530 @@
/**
 * Docker service management for tests.
 *
 * CouchDB start/stop/init is implemented directly using `docker` CLI commands
 * and the Fetch API, so it works on any platform where Docker (Desktop) is
 * available — including Windows — without needing bash.
 */

type DockerInvoker = {
    bin: string;
    prefix: string[];
    label: string;
};

let dockerInvokerPromise: Promise<DockerInvoker> | null = null;
const DOCKER_TEE = Deno.env.get("LIVESYNC_DOCKER_TEE") === "1" || Deno.env.get("LIVESYNC_TEST_TEE") === "1";

// ---------------------------------------------------------------------------
// Low-level docker wrapper
// ---------------------------------------------------------------------------

function parseCommand(command: string): { bin: string; prefix: string[] } {
    const parts = command.trim().split(/\s+/).filter(Boolean);
    if (parts.length === 0) {
        throw new Error("LIVESYNC_DOCKER_COMMAND is empty");
    }
    return { bin: parts[0], prefix: parts.slice(1) };
}

async function runCommand(bin: string, args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
    const cmd = new Deno.Command(bin, {
        args,
        stdin: "null",
        stdout: "piped",
        stderr: "piped",
    });
    try {
        const { code, stdout, stderr } = await cmd.output();
        const dec = new TextDecoder();
        const result = {
            code,
            stdout: dec.decode(stdout),
            stderr: dec.decode(stderr),
        };
        if (DOCKER_TEE) {
            if (result.stdout.trim().length > 0) {
                console.log(`[docker:${bin}] ${result.stdout.trimEnd()}`);
            }
            if (result.stderr.trim().length > 0) {
                console.error(`[docker:${bin}] ${result.stderr.trimEnd()}`);
            }
        }
        return result;
    } catch (err) {
        if (err instanceof Deno.errors.NotFound) {
            return {
                code: 127,
                stdout: "",
                stderr: `Command not found: ${bin}`,
            };
        }
        throw err;
    }
}

async function resolveDockerInvoker(): Promise<DockerInvoker> {
    const custom = Deno.env.get("LIVESYNC_DOCKER_COMMAND")?.trim();
    if (custom) {
        const parsed = parseCommand(custom);
        const runner: DockerInvoker = {
            ...parsed,
            label: `custom(${custom})`,
        };

        // Validate custom command eagerly so misconfiguration fails fast.
        const checkArgs = runner.prefix.length === 0 ? ["--version"] : [...runner.prefix, "docker", "--version"];
        const check = await runCommand(runner.bin, checkArgs);
        if (check.code !== 0) {
            throw new Error(`LIVESYNC_DOCKER_COMMAND is not usable: ${custom}\n${check.stderr || check.stdout}`);
        }
        return runner;
    }

    const mode = (Deno.env.get("LIVESYNC_DOCKER_MODE") ?? "auto").toLowerCase();
    const onWindows = Deno.build.os === "windows";

    const native: DockerInvoker = { bin: "docker", prefix: [], label: "docker" };
    const wsl: DockerInvoker = { bin: "wsl", prefix: [], label: "wsl docker" };

    if (mode === "native") {
        return native;
    }
    if (mode === "wsl") {
        return wsl;
    }
    if (mode !== "auto") {
        throw new Error(`Unsupported LIVESYNC_DOCKER_MODE='${mode}'. Use auto, native, or wsl.`);
    }

    // On Windows we prefer `wsl docker` first, then native docker.
    // This typically works better in setups where Docker is installed only in
    // WSL and not exposed as docker.exe on PATH.
    const candidates = onWindows ? [wsl, native] : [native, wsl];
    for (const c of candidates) {
        if (c.bin === "docker") {
            const r = await runCommand("docker", ["--version"]);
            if (r.code === 0) return c;
            continue;
        }
        const r = await runCommand("wsl", ["docker", "--version"]);
        if (r.code === 0) return c;
    }

    throw new Error(
        [
            "Docker command is not available.",
            "Set one of:",
            "- LIVESYNC_DOCKER_MODE=native",
            "- LIVESYNC_DOCKER_MODE=wsl",
            "- LIVESYNC_DOCKER_COMMAND='docker'",
            "- LIVESYNC_DOCKER_COMMAND='wsl docker'",
        ].join("\n")
    );
}

async function getDockerInvoker(): Promise<DockerInvoker> {
    if (!dockerInvokerPromise) {
        dockerInvokerPromise = resolveDockerInvoker().then((r) => {
            console.log(`[INFO] docker runner: ${r.label}`);
            return r;
        });
    }
    return await dockerInvokerPromise;
}

async function docker(...args: string[]): Promise<{ code: number; stdout: string; stderr: string }> {
    const invoker = await getDockerInvoker();

    // Either:
    //   docker <args>
    // Or:
    //   wsl docker <args>
    const finalArgs =
        invoker.prefix.length === 0
            ? invoker.bin === "wsl"
                ? ["docker", ...args]
                : args
            : [...invoker.prefix, ...args];

    const r = await runCommand(invoker.bin, finalArgs);
    return { code: r.code, stdout: r.stdout, stderr: r.stderr };
}

async function dockerOrFail(...args: string[]): Promise<string> {
    const r = await docker(...args);
    if (r.code !== 0) {
        throw new Error(`docker ${args[0]} failed (code ${r.code}): ${r.stderr.trim()}`);
    }
    return r.stdout;
}

function sleep(ms: number): Promise<void> {
    return new Promise((resolve) => setTimeout(resolve, ms));
}

async function waitForCouchdbStable(hostname: string, user: string, password: string): Promise<void> {
    const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
    const auth = btoa(`${user}:${password}`);
    const headers = { Authorization: `Basic ${auth}` };
    let consecutive = 0;
    for (let i = 0; i < 30; i++) {
        try {
            const r = await fetch(`${h}/_up`, {
                headers,
                signal: AbortSignal.timeout(3000),
            });
            if (r.ok) {
                consecutive++;
                if (consecutive >= 3) return;
            } else {
                consecutive = 0;
            }
        } catch {
            consecutive = 0;
        }
        await sleep(500);
    }
    throw new Error("CouchDB did not become stable in time");
}
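The argv assembly for the three runner shapes (native `docker`, `wsl docker`, custom prefixed command) is a pure transformation, which can be sketched in isolation (`composeArgs` is an illustrative name, not part of the helper):

```typescript
// Compose the final argv for a docker invocation, mirroring the branch above:
// a non-empty prefix wins; otherwise `wsl` gets "docker" prepended.
function composeArgs(bin: string, prefix: string[], args: string[]): string[] {
    if (prefix.length > 0) return [...prefix, ...args];
    return bin === "wsl" ? ["docker", ...args] : args;
}

const nativeArgs = composeArgs("docker", [], ["ps"]); // ["ps"]
const wslArgs = composeArgs("wsl", [], ["ps"]); // ["docker", "ps"]
const customArgs = composeArgs("ssh", ["host", "docker"], ["ps"]); // ["host", "docker", "ps"]
```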
// ---------------------------------------------------------------------------
// Fetch with retry (mirrors cli_test_curl_json() retry loop)
// ---------------------------------------------------------------------------

async function fetchRetry(
    url: string,
    init: RequestInit,
    retries = 30,
    delayMs = 2000,
    allowStatus: number[] = []
): Promise<void> {
    let lastError: unknown;
    let lastStatus: number | undefined;
    for (let i = 0; i < retries; i++) {
        try {
            const r = await fetch(url, {
                signal: AbortSignal.timeout(5000),
                ...init,
            });
            lastStatus = r.status;
            await r.body?.cancel().catch(() => {});
            if (r.ok || allowStatus.includes(r.status)) return;
            lastError = `HTTP ${r.status}`;
        } catch (e) {
            lastError = e;
        }
        await sleep(delayMs);
    }
    throw new Error(
        `Could not reach ${url} after ${retries} retries: ${lastError} (last status: ${lastStatus ?? "N/A"})`
    );
}
// ---------------------------------------------------------------------------
// CouchDB
// ---------------------------------------------------------------------------
//
// TODO: these values could be configurable via environment variables.
//
const COUCHDB_CONTAINER = "couchdb-test";
const COUCHDB_IMAGE = "couchdb:3.5.0";

const MINIO_CONTAINER = "minio-test";
const MINIO_IMAGE = "minio/minio";
const MINIO_MC_IMAGE = "minio/mc";

export async function stopCouchdb(): Promise<void> {
    await docker("stop", COUCHDB_CONTAINER);
    await docker("rm", COUCHDB_CONTAINER);
}

/**
 * Start a CouchDB test container, initialise it, and create the test DB.
 * Mirrors cli_test_start_couchdb() from test-helpers.sh, using direct
 * docker / fetch calls instead of the bash util scripts.
 */
export async function startCouchdb(couchdbUri: string, user: string, password: string, dbname: string): Promise<void> {
    console.log("[INFO] stopping leftover CouchDB container if present");
    await stopCouchdb().catch(() => {});

    console.log("[INFO] starting CouchDB test container");
    await dockerOrFail(
        "run",
        "-d",
        "--name",
        COUCHDB_CONTAINER,
        "-p",
        // TODO: port mapping should be configurable.
        "5989:5984",
        "-e",
        `COUCHDB_USER=${user}`,
        "-e",
        `COUCHDB_PASSWORD=${password}`,
        "-e",
        "COUCHDB_SINGLE_NODE=y",
        COUCHDB_IMAGE
    );

    console.log("[INFO] initialising CouchDB");
    await initCouchdb(couchdbUri, user, password);

    console.log("[INFO] waiting for CouchDB to become stable");
    await waitForCouchdbStable(couchdbUri, user, password);

    console.log(`[INFO] creating test database: ${dbname}`);
    await createCouchdbDatabase(couchdbUri, user, password, dbname);
}

/**
 * Mirror couchdb-init.sh: configure single-node CouchDB via its REST API.
 */
async function initCouchdb(hostname: string, user: string, password: string, node = "_local"): Promise<void> {
    // Podman environments often resolve localhost to ::1; use 127.0.0.1 like
    // the bash script does.
    const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
    const auth = btoa(`${user}:${password}`);
    const headers = {
        "Content-Type": "application/json",
        Authorization: `Basic ${auth}`,
    };

    const calls: Array<[string, string, string]> = [
        [
            "POST",
            `${h}/_cluster_setup`,
            JSON.stringify({
                action: "enable_single_node",
                username: user,
                password,
                bind_address: "0.0.0.0",
                port: 5984,
                singlenode: true,
            }),
        ],
        ["PUT", `${h}/_node/${node}/_config/chttpd/require_valid_user`, '"true"'],
        ["PUT", `${h}/_node/${node}/_config/chttpd_auth/require_valid_user`, '"true"'],
        ["PUT", `${h}/_node/${node}/_config/httpd/WWW-Authenticate`, '"Basic realm=\\"couchdb\\""'],
        ["PUT", `${h}/_node/${node}/_config/httpd/enable_cors`, '"true"'],
        ["PUT", `${h}/_node/${node}/_config/chttpd/enable_cors`, '"true"'],
        ["PUT", `${h}/_node/${node}/_config/chttpd/max_http_request_size`, '"4294967296"'],
        ["PUT", `${h}/_node/${node}/_config/couchdb/max_document_size`, '"50000000"'],
        ["PUT", `${h}/_node/${node}/_config/cors/credentials`, '"true"'],
        ["PUT", `${h}/_node/${node}/_config/cors/origins`, '"*"'],
    ];

    for (const [method, url, body] of calls) {
        await fetchRetry(url, { method, headers, body });
    }
}

export async function createCouchdbDatabase(
    hostname: string,
    user: string,
    password: string,
    dbname: string
): Promise<void> {
    const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
    const auth = btoa(`${user}:${password}`);
    await fetchRetry(`${h}/${dbname}`, {
        method: "PUT",
        headers: { Authorization: `Basic ${auth}` },
    });
}

/** Update a CouchDB document: GET the current revision, apply `updater`, PUT the result back. */
export async function updateCouchdbDoc(
    hostname: string,
    user: string,
    password: string,
    docUrl: string,
    updater: (doc: Record<string, unknown>) => Record<string, unknown>
): Promise<void> {
    const h = hostname.replace(/\/$/, "").replace("localhost", "127.0.0.1");
    const auth = btoa(`${user}:${password}`);
    const headers = {
        "Content-Type": "application/json",
        Authorization: `Basic ${auth}`,
    };
    const getRes = await fetch(`${h}/${docUrl}`, { headers });
    const current = (await getRes.json()) as Record<string, unknown>;
    const updated = updater(current);
    await fetchRetry(`${h}/${docUrl}`, {
        method: "PUT",
        headers,
        body: JSON.stringify(updated),
    });
}
// ---------------------------------------------------------------------------
// MinIO
// ---------------------------------------------------------------------------

function shQuote(value: string): string {
    return `'${value.split("'").join(`'"'"'`)}'`;
}

export async function stopMinio(): Promise<void> {
    await docker("stop", MINIO_CONTAINER);
    await docker("rm", MINIO_CONTAINER);
}

async function initMinioBucket(
    minioEndpoint: string,
    accessKey: string,
    secretKey: string,
    bucket: string
): Promise<boolean> {
    const cmd =
        `mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
        `mc mb --ignore-existing myminio/${shQuote(bucket)} >/dev/null 2>&1`;
    const r = await docker("run", "--rm", "--network", "host", "--entrypoint", "/bin/sh", MINIO_MC_IMAGE, "-c", cmd);
    return r.code === 0;
}

async function waitForMinioBucket(
    minioEndpoint: string,
    accessKey: string,
    secretKey: string,
    bucket: string
): Promise<void> {
    for (let i = 0; i < 30; i++) {
        const checkCmd =
            `mc alias set myminio ${shQuote(minioEndpoint)} ${shQuote(accessKey)} ${shQuote(secretKey)} >/dev/null 2>&1 && ` +
            `mc ls myminio/${shQuote(bucket)} >/dev/null 2>&1`;
        const check = await docker(
            "run",
            "--rm",
            "--network",
            // Host networking is used so the container can reach MinIO via
            // localhost in some environments (Docker Desktop on Windows).
            // TODO: find an approach that works across all environments.
            "host",
            "--entrypoint",
            "/bin/sh",
            MINIO_MC_IMAGE,
            "-c",
            checkCmd
        );
        if (check.code === 0) {
            return;
        }
        await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
        await sleep(2000);
    }
    throw new Error(`MinIO bucket not ready: ${bucket}`);
}

export async function startMinio(
    minioEndpoint: string,
    accessKey: string,
    secretKey: string,
    bucket: string
): Promise<void> {
    console.log("[INFO] stopping leftover MinIO container if present");
    await stopMinio().catch(() => {});

    console.log("[INFO] starting MinIO test container");
    await dockerOrFail(
        "run",
        "-d",
        "--name",
        MINIO_CONTAINER,
        // TODO: Ports should be configurable.
        "-p",
        "9000:9000",
        "-p",
        "9001:9001",
        "-e",
        `MINIO_ROOT_USER=${accessKey}`,
        "-e",
        `MINIO_ROOT_PASSWORD=${secretKey}`,
        "-e",
        `MINIO_SERVER_URL=${minioEndpoint}`,
        MINIO_IMAGE,
        "server",
        "/data",
        "--console-address",
        ":9001"
    );

    console.log(`[INFO] initialising MinIO test bucket: ${bucket}`);
    let initialised = false;
    for (let i = 0; i < 5; i++) {
        if (await initMinioBucket(minioEndpoint, accessKey, secretKey, bucket)) {
            initialised = true;
            break;
        }
        await sleep(2000);
    }
    if (!initialised) {
        throw new Error(`Could not initialise MinIO bucket after retries: ${bucket}`);
    }

    await waitForMinioBucket(minioEndpoint, accessKey, secretKey, bucket);
}
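shQuote wraps a value in single quotes and splices embedded single quotes using the standard POSIX `'"'"'` trick, so arbitrary values survive the `sh -c` string passed to the mc container. A quick standalone check (`shQuoteDemo` is an illustrative copy):

```typescript
// Standalone copy of shQuote, for illustration: wrap in single quotes,
// replacing each embedded ' with the '"'"' splice.
function shQuoteDemo(value: string): string {
    return `'${value.split("'").join(`'"'"'`)}'`;
}

const plain = shQuoteDemo("bucket-1"); // 'bucket-1'
const tricky = shQuoteDemo("it's"); // 'it'"'"'s'
```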
// ---------------------------------------------------------------------------
// P2P relay (strfry)
// ---------------------------------------------------------------------------
// TODO: these values could be configurable via environment variables.
const P2P_RELAY_CONTAINER = "relay-test";
const P2P_RELAY_IMAGE = "ghcr.io/hoytech/strfry:latest";
const STRFRY_BOOTSTRAP_SH = String.raw`cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"

relay {
    bind = "0.0.0.0"
    port = 7777
    nofiles = 100000

    info {
        name = "livesync test relay"
        description = "local relay for livesync p2p tests"
    }

    maxWebsocketPayloadSize = 131072
    autoPingSeconds = 55

    writePolicy {
        plugin = ""
    }
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay`;

export async function stopP2pRelay(): Promise<void> {
    await docker("stop", P2P_RELAY_CONTAINER);
    await docker("rm", P2P_RELAY_CONTAINER);
}

/**
 * Start the local P2P relay container through the same docker runner used
 * by CouchDB helpers. This keeps process ownership consistent across
 * start/stop on Windows, WSL, and native Linux/macOS.
 */
export async function startP2pRelay(): Promise<void> {
    console.log("[INFO] stopping leftover P2P relay container if present");
    await stopP2pRelay().catch(() => {});

    console.log("[INFO] starting local P2P relay container");
    await dockerOrFail(
        "run",
        "-d",
        "--name",
        P2P_RELAY_CONTAINER,
        "-p",
        // TODO: port mapping should be configurable.
        "4000:7777",
        "--tmpfs",
        "/app/strfry-db:rw,size=256m",
        "--entrypoint",
        "sh",
        P2P_RELAY_IMAGE,
        "-lc",
        STRFRY_BOOTSTRAP_SH
    );
}

export function isLocalP2pRelay(relayUrl: string): boolean {
    return relayUrl === "ws://localhost:4000" || relayUrl === "ws://localhost:4000/";
}
26
src/apps/cli/testdeno/helpers/env.ts
Normal file
@@ -0,0 +1,26 @@
/**
 * Load a .env-style file (KEY=value per line) into a plain object.
 * Equivalent to `source $TEST_ENV_FILE; set -a` in bash.
 * This is a deliberately minimal implementation that covers our use cases;
 * a dedicated library could replace it later.
 *
 * Supported value formats:
 *   KEY=value
 *   KEY='single quoted'
 *   KEY="double quoted"
 *   # comment lines are ignored
 */
export async function loadEnvFile(filePath: string): Promise<Record<string, string>> {
    const text = await Deno.readTextFile(filePath);
    const result: Record<string, string> = {};
    for (const line of text.split("\n")) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#")) continue;
        const idx = trimmed.indexOf("=");
        if (idx < 0) continue;
        const key = trimmed.slice(0, idx).trim();
        const raw = trimmed.slice(idx + 1).trim();
        // Strip surrounding single or double quotes
        result[key] = raw.replace(/^(['"])(.*)\1$/, "$2");
    }
    return result;
}
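The quote-stripping regex only removes quotes when the same kind appears at both ends (the `\1` backreference), so mismatched quotes are kept verbatim. A string-based sketch of the same parsing rules (`parseEnvText` is illustrative; the real helper reads from a file):

```typescript
// Parse KEY=value lines from a string, mirroring loadEnvFile's rules:
// skip blanks/comments, split on the first "=", strip matching quotes.
function parseEnvText(text: string): Record<string, string> {
    const result: Record<string, string> = {};
    for (const line of text.split("\n")) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#")) continue;
        const idx = trimmed.indexOf("=");
        if (idx < 0) continue;
        const key = trimmed.slice(0, idx).trim();
        const raw = trimmed.slice(idx + 1).trim();
        result[key] = raw.replace(/^(['"])(.*)\1$/, "$2");
    }
    return result;
}

const text = ["# comment", "A=plain", 'B="double quoted"', "C='single'", `D="mismatched'`].join("\n");
const env = parseEnvText(text);
```

Here `env.D` stays `"mismatched'` untouched, because the opening and closing quotes differ.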
52
src/apps/cli/testdeno/helpers/p2p.ts
Normal file
@@ -0,0 +1,52 @@
import { runCli } from "./cli.ts";
import { isLocalP2pRelay, startP2pRelay, stopP2pRelay } from "./docker.ts";

export type PeerEntry = {
    id: string;
    name: string;
};

export function parsePeerLines(output: string): PeerEntry[] {
    return output
        .split(/\r?\n/)
        .map((line) => line.split("\t"))
        .filter((parts) => parts.length >= 3 && parts[0] === "[peer]")
        .map((parts) => ({ id: parts[1], name: parts[2] }));
}

export async function discoverPeer(
    vaultDir: string,
    settingsFile: string,
    timeoutSeconds: number,
    targetPeer?: string
): Promise<PeerEntry> {
    const result = await runCli(vaultDir, "--settings", settingsFile, "p2p-peers", String(timeoutSeconds));
    if (result.code !== 0) {
        throw new Error(`p2p-peers failed\n${result.combined}`);
    }
    const peers = parsePeerLines(result.stdout);
    if (targetPeer) {
        const matched = peers.find((peer) => peer.id === targetPeer || peer.name === targetPeer);
        if (matched) return matched;
    }
    if (peers.length === 0) {
        const fallback = result.combined.match(/Advertisement from\s+([^\s]+)/);
        if (fallback?.[1]) {
            return { id: fallback[1], name: fallback[1] };
        }
        throw new Error(`No peers discovered\n${result.combined}`);
    }
    return peers[0];
}

export async function maybeStartLocalRelay(relay: string): Promise<boolean> {
    if (!isLocalP2pRelay(relay)) return false;
    await startP2pRelay();
    return true;
}

export async function stopLocalRelayIfStarted(started: boolean): Promise<void> {
    if (started) {
        await stopP2pRelay().catch(() => {});
    }
}
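The CLI emits discovered peers one per line as `[peer]<TAB>id<TAB>name`, interleaved with log lines that the parser discards. A standalone check of that format (`parsePeers` is an illustrative copy of `parsePeerLines`):

```typescript
type Peer = { id: string; name: string };

// Standalone copy of parsePeerLines: keep only "[peer]" tab-separated rows.
function parsePeers(output: string): Peer[] {
    return output
        .split(/\r?\n/)
        .map((line) => line.split("\t"))
        .filter((parts) => parts.length >= 3 && parts[0] === "[peer]")
        .map((parts) => ({ id: parts[1], name: parts[2] }));
}

const sample = "[INFO] scanning\n[peer]\tabc123\tdevice-a\n[peer]\tdef456\tdevice-b\n";
const peers = parsePeers(sample); // two entries; log line is ignored
```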
205
src/apps/cli/testdeno/helpers/settings.ts
Normal file
@@ -0,0 +1,205 @@
import { join } from "@std/path";
import { CLI_DIR, runCliOrFail } from "./cli.ts";

// ---------------------------------------------------------------------------
// Settings file initialisation
// ---------------------------------------------------------------------------

/** Generate a default settings file using the CLI's init-settings command. */
export async function initSettingsFile(settingsFile: string): Promise<void> {
    await runCliOrFail("init-settings", "--force", settingsFile);
}

/**
 * Generate a full setup URI from a settings file via src/lib API.
 * Mirrors the bash flow in test-setup-put-cat-linux.sh.
 */
export async function generateSetupUriFromSettings(settingsFile: string, setupPassphrase: string): Promise<string> {
    const repoRoot = join(CLI_DIR, "..", "..", "..");
    const script = [
        "import fs from 'node:fs';",
        "import { pathToFileURL } from 'node:url';",
        "(async () => {",
        " const modulePath = process.env.REPO_ROOT + '/src/lib/src/API/processSetting.ts';",
        " const moduleUrl = pathToFileURL(modulePath).href;",
        " const { encodeSettingsToSetupURI } = await import(moduleUrl);",
        " const settingsPath = process.env.SETTINGS_FILE;",
        " const passphrase = process.env.SETUP_PASSPHRASE;",
        " const settings = JSON.parse(fs.readFileSync(settingsPath, 'utf-8'));",
        " settings.couchDB_DBNAME = 'setup-put-cat-db';",
        " settings.couchDB_URI = 'http://127.0.0.1:5999';",
        " settings.couchDB_USER = 'dummy';",
        " settings.couchDB_PASSWORD = 'dummy';",
        " settings.liveSync = false;",
        " settings.syncOnStart = false;",
        " settings.syncOnSave = false;",
        " const uri = await encodeSettingsToSetupURI(settings, passphrase);",
        " process.stdout.write(uri.trim());",
        "})();",
    ].join("\n");

    const scriptPath = await Deno.makeTempFile({
        prefix: "livesync-setup-uri-",
        suffix: ".mts",
    });
    await Deno.writeTextFile(scriptPath, script);

    try {
        const cmd = new Deno.Command("npx", {
            args: ["tsx", scriptPath],
            cwd: CLI_DIR,
            env: {
                REPO_ROOT: repoRoot,
                SETTINGS_FILE: settingsFile,
                SETUP_PASSPHRASE: setupPassphrase,
            },
            stdin: "null",
            stdout: "piped",
            stderr: "piped",
        });

        const { code, stdout, stderr } = await cmd.output();
        const dec = new TextDecoder();
        if (code !== 0) {
            throw new Error(
                `Failed to generate setup URI (code ${code})\nstdout: ${dec.decode(stdout)}\nstderr: ${dec.decode(stderr)}`
            );
        }

        const uri = dec.decode(stdout).trim();
        if (!uri) {
            throw new Error("Failed to generate setup URI: output is empty");
        }
        return uri;
    } finally {
        await Deno.remove(scriptPath).catch(() => {});
    }
}
/** Set isConfigured=true in a settings file (required for mirror / scan). */
export async function markSettingsConfigured(settingsFile: string): Promise<void> {
    const data = JSON.parse(await Deno.readTextFile(settingsFile));
    data.isConfigured = true;
    await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}

// ---------------------------------------------------------------------------
// CouchDB remote settings
// ---------------------------------------------------------------------------

/**
 * Apply CouchDB connection details to a settings file.
 * Mirrors cli_test_apply_couchdb_settings() from test-helpers.sh.
 */
export async function applyCouchdbSettings(
    settingsFile: string,
    couchdbUri: string,
    couchdbUser: string,
    couchdbPassword: string,
    couchdbDbname: string,
    liveSync = false
): Promise<void> {
    const data = JSON.parse(await Deno.readTextFile(settingsFile));
    data.couchDB_URI = couchdbUri;
    data.couchDB_USER = couchdbUser;
    data.couchDB_PASSWORD = couchdbPassword;
    data.couchDB_DBNAME = couchdbDbname;
    if (liveSync) {
        data.liveSync = true;
        data.syncOnStart = false;
        data.syncOnSave = false;
        data.usePluginSync = false;
    }
    data.isConfigured = true;
    await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}

export async function applyRemoteSyncSettings(
    settingsFile: string,
    options: {
        remoteType: "COUCHDB" | "MINIO";
        couchdbUri?: string;
        couchdbUser?: string;
        couchdbPassword?: string;
        couchdbDbname?: string;
        minioBucket?: string;
        minioEndpoint?: string;
        minioAccessKey?: string;
        minioSecretKey?: string;
        encrypt?: boolean;
        passphrase?: string;
    }
): Promise<void> {
    const data = JSON.parse(await Deno.readTextFile(settingsFile));

    if (options.remoteType === "COUCHDB") {
        data.remoteType = "";
        data.couchDB_URI = options.couchdbUri;
        data.couchDB_USER = options.couchdbUser;
        data.couchDB_PASSWORD = options.couchdbPassword;
        data.couchDB_DBNAME = options.couchdbDbname;
    } else {
        data.remoteType = "MINIO";
        data.bucket = options.minioBucket;
        data.endpoint = options.minioEndpoint;
        data.accessKey = options.minioAccessKey;
        data.secretKey = options.minioSecretKey;
        data.region = "auto";
        data.forcePathStyle = true;
    }

    data.liveSync = true;
    data.syncOnStart = false;
    data.syncOnSave = false;
    data.usePluginSync = false;
    data.encrypt = options.encrypt === true;
    data.passphrase = options.encrypt ? (options.passphrase ?? "") : "";
    data.isConfigured = true;
    await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}

// ---------------------------------------------------------------------------
// P2P settings
// ---------------------------------------------------------------------------

/**
 * Apply P2P connection details to a settings file.
 * Mirrors cli_test_apply_p2p_settings() from test-helpers.sh.
 */
export async function applyP2pSettings(
    settingsFile: string,
    roomId: string,
    passphrase: string,
    appId = "self-hosted-livesync-cli-tests",
    relays = "ws://localhost:4000/",
    autoAccept = "~.*"
): Promise<void> {
    const data = JSON.parse(await Deno.readTextFile(settingsFile));
    data.P2P_Enabled = true;
    data.P2P_AutoStart = false;
    data.P2P_AutoBroadcast = false;
    data.P2P_AppID = appId;
    data.P2P_roomID = roomId;
    data.P2P_passphrase = passphrase;
    data.P2P_relays = relays;
    data.P2P_AutoAcceptingPeers = autoAccept;
    data.P2P_AutoDenyingPeers = "";
    data.P2P_IsHeadless = true;
    data.isConfigured = true;
    await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}

export async function applyP2pTestTweaks(settingsFile: string, deviceName: string, passphrase: string): Promise<void> {
    const data = JSON.parse(await Deno.readTextFile(settingsFile));
    data.remoteType = "ONLY_P2P";
    data.encrypt = true;
    data.passphrase = passphrase;
    data.usePathObfuscation = true;
    data.handleFilenameCaseSensitive = false;
    data.customChunkSize = 50;
    data.usePluginSyncV2 = true;
    data.doNotUseFixedRevisionForChunks = false;
    data.P2P_DevicePeerName = deviceName;
    data.isConfigured = true;
    await Deno.writeTextFile(settingsFile, JSON.stringify(data, null, 2));
}
src/apps/cli/testdeno/helpers/temp.ts (new file, 33 lines)
@@ -0,0 +1,33 @@
import { join } from "@std/path";

/**
 * A temporary directory that cleans itself up via `await using`.
 * Requires TypeScript 5.2+ / Deno 1.40+ for the AsyncDisposable protocol.
 *
 * @example
 * ```ts
 * await using tmp = await TempDir.create();
 * const file = tmp.join("data.json");
 * ```
 */
export class TempDir implements AsyncDisposable {
    readonly path: string;

    private constructor(path: string) {
        this.path = path;
    }

    static async create(prefix = "livesync-deno-test"): Promise<TempDir> {
        const path = await Deno.makeTempDir({ prefix: `${prefix}.` });
        return new TempDir(path);
    }

    /** Return an OS path joined to the temp directory root. */
    join(...parts: string[]): string {
        return join(this.path, ...parts);
    }

    async [Symbol.asyncDispose](): Promise<void> {
        await Deno.remove(this.path, { recursive: true }).catch(() => {});
    }
}
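TempDir keys its cleanup on `Symbol.asyncDispose`, so `await using` removes the directory automatically at scope exit. A minimal sketch of that pattern, using a stand-in symbol and an explicit call in `finally` so it also runs on runtimes without explicit-resource-management syntax (the class and function names here are hypothetical, not part of the diff):

```typescript
// Stand-in for Symbol.asyncDispose on runtimes that predate it.
const disposeHook = Symbol("asyncDispose-stand-in");

class FakeTempDir {
    disposed = false;
    async [disposeHook](): Promise<void> {
        // The real TempDir does: await Deno.remove(this.path, { recursive: true })
        this.disposed = true;
    }
}

async function withFakeTempDir(): Promise<boolean> {
    const tmp = new FakeTempDir();
    try {
        // ... use the resource here ...
    } finally {
        // `await using tmp = ...` would invoke this hook automatically.
        await tmp[disposeHook]();
    }
    return tmp.disposed;
}
```

Because disposal runs in `finally`, the temp directory is removed even when the test body throws, which is why the tests below never clean up their work directories explicitly.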
src/apps/cli/testdeno/test-e2e-two-vaults-couchdb.ts (new file, 279 lines)
@@ -0,0 +1,279 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { loadEnvFile } from "./helpers/env.ts";
import {
    runCli,
    runCliOrFail,
    runCliWithInputOrFail,
    sanitiseCatStdout,
    assertFilesEqual,
    jsonStringField,
} from "./helpers/cli.ts";
import { applyRemoteSyncSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, startMinio, stopCouchdb, stopMinio } from "./helpers/docker.ts";
import { join } from "@std/path";

const TEST_ENV = join(import.meta.dirname!, "..", ".test.env");
type RemoteType = "COUCHDB" | "MINIO";

function requireEnv(env: Record<string, string>, key: string): string {
    const value = env[key]?.trim();
    if (!value) throw new Error(`Required env var is missing: ${key}`);
    return value;
}

export async function runScenario(remoteType: RemoteType, encrypt: boolean): Promise<void> {
    const env = await loadEnvFile(TEST_ENV);
    const dbSuffix = `${Date.now()}-${Math.floor(Math.random() * 100000)}`;

    const couchdbUri = remoteType === "COUCHDB" ? requireEnv(env, "hostname").replace(/\/$/, "") : "";
    const couchdbUser = remoteType === "COUCHDB" ? requireEnv(env, "username") : "";
    const couchdbPassword = remoteType === "COUCHDB" ? requireEnv(env, "password") : "";
    const dbPrefix = remoteType === "COUCHDB" ? requireEnv(env, "dbname") : "";
    const dbname = remoteType === "COUCHDB" ? `${dbPrefix}-${dbSuffix}` : "";

    const minioEndpoint = remoteType === "MINIO" ? requireEnv(env, "minioEndpoint").replace(/\/$/, "") : "";
    const minioAccessKey = remoteType === "MINIO" ? requireEnv(env, "accessKey") : "";
    const minioSecretKey = remoteType === "MINIO" ? requireEnv(env, "secretKey") : "";
    const minioBucketBase = remoteType === "MINIO" ? requireEnv(env, "bucketName") : "";
    const minioBucket = remoteType === "MINIO" ? `${minioBucketBase}-${dbSuffix}` : "";

    const passphrase = "e2e-passphrase";

    await using workDir = await TempDir.create(
        `livesync-cli-e2e-${remoteType.toLowerCase()}-${encrypt ? "enc1" : "enc0"}`
    );
    const vaultA = workDir.join("testvault_a");
    const vaultB = workDir.join("testvault_b");
    const settingsA = workDir.join("test-settings-a.json");
    const settingsB = workDir.join("test-settings-b.json");
    const pushSrc = workDir.join("push-source.txt");
    const pullDst = workDir.join("pull-destination.txt");
    const pushBinarySrc = workDir.join("push-source.bin");
    const pullBinaryDst = workDir.join("pull-destination.bin");
    await Deno.mkdir(vaultA, { recursive: true });
    await Deno.mkdir(vaultB, { recursive: true });

    const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
    if (remoteType === "COUCHDB") {
        await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
    } else {
        await startMinio(minioEndpoint, minioAccessKey, minioSecretKey, minioBucket);
    }

    try {
        await initSettingsFile(settingsA);
        await initSettingsFile(settingsB);
        await applyRemoteSyncSettings(settingsA, {
            remoteType,
            couchdbUri,
            couchdbUser,
            couchdbPassword,
            couchdbDbname: dbname,
            minioBucket,
            minioEndpoint,
            minioAccessKey,
            minioSecretKey,
            encrypt,
            passphrase,
        });
        await applyRemoteSyncSettings(settingsB, {
            remoteType,
            couchdbUri,
            couchdbUser,
            couchdbPassword,
            couchdbDbname: dbname,
            minioBucket,
            minioEndpoint,
            minioAccessKey,
            minioSecretKey,
            encrypt,
            passphrase,
        });

        const syncBoth = async () => {
            await runCliOrFail(vaultA, "--settings", settingsA, "sync");
            await runCliOrFail(vaultB, "--settings", settingsB, "sync");
        };

        const targetAOnly = "e2e/a-only-info.md";
        const targetSync = "e2e/sync-info.md";
        const targetSyncTwiceFirst = "e2e/sync-twice-first.md";
        const targetSyncTwiceSecond = "e2e/sync-twice-second.md";
        const targetPush = "e2e/pushed-from-a.md";
        const targetPut = "e2e/put-from-a.md";
        const targetPushBinary = "e2e/pushed-from-a.bin";
        const targetConflict = "e2e/conflict.md";

        await runCliWithInputOrFail("alpha-from-a\n", vaultA, "--settings", settingsA, "put", targetAOnly);
        const infoAOnly = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetAOnly);
        assert(infoAOnly.includes(`"path": "${targetAOnly}"`));

        await runCliWithInputOrFail("visible-after-sync\n", vaultA, "--settings", settingsA, "put", targetSync);
        await syncBoth();
        const infoBSync = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetSync);
        assert(infoBSync.includes(`"path": "${targetSync}"`));

        await runCliWithInputOrFail(
            `first-sync-round-${dbSuffix}\n`,
            vaultA,
            "--settings",
            settingsA,
            "put",
            targetSyncTwiceFirst
        );
        await runCliOrFail(vaultA, "--settings", settingsA, "sync");
        await runCliOrFail(vaultB, "--settings", settingsB, "sync");
        const firstVisible = sanitiseCatStdout(
            await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceFirst)
        ).trimEnd();
        assert(firstVisible === `first-sync-round-${dbSuffix}`);

        await runCliWithInputOrFail(
            `second-sync-round-${dbSuffix}\n`,
            vaultA,
            "--settings",
            settingsA,
            "put",
            targetSyncTwiceSecond
        );
        await runCliOrFail(vaultA, "--settings", settingsA, "sync");
        await runCliOrFail(vaultB, "--settings", settingsB, "sync");
        const secondVisible = sanitiseCatStdout(
            await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetSyncTwiceSecond)
        ).trimEnd();
        assert(secondVisible === `second-sync-round-${dbSuffix}`);

        await Deno.writeTextFile(pushSrc, `pushed-content-${dbSuffix}\n`);
        await runCliOrFail(vaultA, "--settings", settingsA, "push", pushSrc, targetPush);
        await runCliWithInputOrFail(`put-content-${dbSuffix}\n`, vaultA, "--settings", settingsA, "put", targetPut);
        await syncBoth();
        await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPush, pullDst);
        await assertFilesEqual(pushSrc, pullDst, "B pull result does not match pushed source");
        const catBPut = sanitiseCatStdout(
            await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPut)
        ).trimEnd();
        assert(catBPut === `put-content-${dbSuffix}`);

        const binary = new Uint8Array(4096);
        binary.fill(0x61);
        await Deno.writeFile(pushBinarySrc, binary);
        await runCliOrFail(vaultA, "--settings", settingsA, "push", pushBinarySrc, targetPushBinary);
        await syncBoth();
        await runCliOrFail(vaultB, "--settings", settingsB, "pull", targetPushBinary, pullBinaryDst);
        await assertFilesEqual(pushBinarySrc, pullBinaryDst, "B pull result does not match pushed binary source");

        await runCliOrFail(vaultA, "--settings", settingsA, "rm", targetPut);
        await syncBoth();
        const removed = await runCli(vaultB, "--settings", settingsB, "cat", targetPut);
        assert(removed.code !== 0, `B cat should fail after A removed the file\n${removed.combined}`);

        await runCliWithInputOrFail("conflict-base\n", vaultA, "--settings", settingsA, "put", targetConflict);
        await syncBoth();
        await runCliWithInputOrFail(
            `conflict-from-a-${dbSuffix}\n`,
            vaultA,
            "--settings",
            settingsA,
            "put",
            targetConflict
        );
        await runCliWithInputOrFail(
            `conflict-from-b-${dbSuffix}\n`,
            vaultB,
            "--settings",
            settingsB,
            "put",
            targetConflict
        );

        let infoAConflict = "";
        let infoBConflict = "";
        let conflictDetected = false;
        for (const side of ["a", "b", "a"] as const) {
            await runCliOrFail(
                side === "a" ? vaultA : vaultB,
                "--settings",
                side === "a" ? settingsA : settingsB,
                "sync"
            );
            infoAConflict = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
            infoBConflict = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
            if (
                jsonStringField(infoAConflict, "conflicts") !== "N/A" ||
                jsonStringField(infoBConflict, "conflicts") !== "N/A"
            ) {
                conflictDetected = true;
                break;
            }
        }
        assert(conflictDetected, `conflict was expected\nA: ${infoAConflict}\nB: ${infoBConflict}`);

        const lsAConflict =
            (await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
        const lsBConflict =
            (await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
        const revA = lsAConflict.split("\t")[3] ?? "";
        const revB = lsBConflict.split("\t")[3] ?? "";
        assert(
            revA.includes("*") || revB.includes("*"),
            `conflicted entry should be marked with '*'\nA: ${lsAConflict}\nB: ${lsBConflict}`
        );

        const keepRevision = jsonStringField(infoAConflict, "revision");
        assert(keepRevision.length > 0, `could not extract revision\n${infoAConflict}`);
        await runCliOrFail(vaultA, "--settings", settingsA, "resolve", targetConflict, keepRevision);

        let resolved = false;
        let infoAResolved = "";
        let infoBResolved = "";
        for (let i = 0; i < 6; i++) {
            await syncBoth();
            infoAResolved = await runCliOrFail(vaultA, "--settings", settingsA, "info", targetConflict);
            infoBResolved = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetConflict);
            if (
                jsonStringField(infoAResolved, "conflicts") === "N/A" &&
                jsonStringField(infoBResolved, "conflicts") === "N/A"
            ) {
                resolved = true;
                break;
            }
            const retryRevision = jsonStringField(infoAResolved, "revision");
            if (retryRevision) {
                await runCli(vaultA, "--settings", settingsA, "resolve", targetConflict, retryRevision);
            }
        }
        assert(resolved, `conflicts should be resolved\nA: ${infoAResolved}\nB: ${infoBResolved}`);

        const lsAResolved =
            (await runCliOrFail(vaultA, "--settings", settingsA, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
        const lsBResolved =
            (await runCliOrFail(vaultB, "--settings", settingsB, "ls", targetConflict)).trim().split(/\r?\n/)[0] ?? "";
        assert(!(lsAResolved.split("\t")[3] ?? "").includes("*"));
        assert(!(lsBResolved.split("\t")[3] ?? "").includes("*"));

        const catAResolved = sanitiseCatStdout(
            await runCliOrFail(vaultA, "--settings", settingsA, "cat", targetConflict)
        ).trimEnd();
        const catBResolved = sanitiseCatStdout(
            await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetConflict)
        ).trimEnd();
        assert(catAResolved === catBResolved, `resolved content should match\nA: ${catAResolved}\nB: ${catBResolved}`);
    } finally {
        if (!keepDocker) {
            if (remoteType === "COUCHDB") {
                await stopCouchdb().catch(() => {});
            } else {
                await stopMinio().catch(() => {});
            }
        }
    }
}

Deno.test("e2e: two vaults over CouchDB without encryption", async () => {
    await runScenario("COUCHDB", false);
});

Deno.test("e2e: two vaults over CouchDB with encryption", async () => {
    await runScenario("COUCHDB", true);
});
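The conflict checks above key on `jsonStringField`, a helper imported from `helpers/cli.ts` whose body is not part of this diff. A plausible sketch of what such a helper might do (the name matches the import, but the signature and behaviour here are assumptions for illustration): extract a string field from CLI output that is ideally JSON but may be mixed with log lines.

```typescript
// Hypothetical sketch of a jsonStringField-style helper (real one is in helpers/cli.ts).
function jsonStringFieldSketch(output: string, field: string): string {
    try {
        // Happy path: the output is a single JSON object.
        const parsed = JSON.parse(output);
        const value = (parsed as Record<string, unknown>)?.[field];
        return typeof value === "string" ? value : "";
    } catch {
        // Fallback: scan for `"field": "value"` when log lines surround the JSON.
        const m = output.match(new RegExp(`"${field}"\\s*:\\s*"([^"]*)"`));
        return m?.[1] ?? "";
    }
}
```

Returning `""` for a missing field keeps call sites like `if (retryRevision) { ... }` simple, since an empty string is falsy.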
src/apps/cli/testdeno/test-e2e-two-vaults-matrix.ts (new file, 20 lines)
@@ -0,0 +1,20 @@
import { runScenario } from "./test-e2e-two-vaults-couchdb.ts";

type MatrixCase = {
    remoteType: "COUCHDB" | "MINIO";
    encrypt: boolean;
    label: string;
};

const matrixCases: MatrixCase[] = [
    { remoteType: "COUCHDB", encrypt: false, label: "COUCHDB-enc0" },
    { remoteType: "COUCHDB", encrypt: true, label: "COUCHDB-enc1" },
    { remoteType: "MINIO", encrypt: false, label: "MINIO-enc0" },
    { remoteType: "MINIO", encrypt: true, label: "MINIO-enc1" },
];

for (const tc of matrixCases) {
    Deno.test(`e2e matrix: ${tc.label}`, async () => {
        await runScenario(tc.remoteType, tc.encrypt);
    });
}
src/apps/cli/testdeno/test-mirror.ts (new file, 196 lines)
@@ -0,0 +1,196 @@
/**
 * Deno port of test-mirror-linux.sh
 *
 * Tests the `mirror` command — bidirectional synchronisation between a local
 * storage directory (vault) and an in-process database.
 *
 * Covered cases (identical to the bash test):
 *   1. Storage-only file   -> synced into DB (UPDATE DATABASE)
 *   2. DB-only file        -> restored to storage (UPDATE STORAGE)
 *   3. DB-deleted file     -> NOT restored to storage (UPDATE STORAGE skip)
 *   4. Both, storage newer -> DB updated (SYNC: STORAGE -> DB)
 *   5. Both, DB newer      -> storage updated (SYNC: DB -> STORAGE)
 *   6. Compatibility mode  -> omitted vault-path works (same DB + vault path)
 *
 * No external services are required.
 *
 * Run:
 *   deno test -A test-mirror.ts
 */

import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { initSettingsFile, markSettingsConfigured } from "./helpers/settings.ts";

Deno.test("mirror: storage <-> DB synchronisation", async (t) => {
    await using workDir = await TempDir.create("livesync-cli-mirror");

    // -------------------------------------------------------------------
    // Shared setup
    // -------------------------------------------------------------------
    const settingsFile = workDir.join("data.json");
    const vaultDir = workDir.join("vault");
    const dbDir = workDir.join("db");
    await Deno.mkdir(workDir.join("vault", "test"), { recursive: true });
    await Deno.mkdir(dbDir, { recursive: true });

    await initSettingsFile(settingsFile);
    // isConfigured=true is required for canProceedScan in the mirror command.
    await markSettingsConfigured(settingsFile);

    // Copy settings to the DB directory (separated-path mode)
    const dbSettings = workDir.join("db", "settings.json");
    await Deno.copyFile(settingsFile, dbSettings);

    /** Run mirror in separated-path mode: DB dir ≠ vault dir. */
    const runMirror = () => runCliOrFail(dbDir, "--settings", dbSettings, "mirror", vaultDir);

    /** Run mirror in compatibility mode: DB path = vault path. */
    const runMirrorCompat = () => runCliOrFail(vaultDir, "--settings", settingsFile, "mirror");

    // Helper wrappers
    const dbRun = (...args: string[]) => runCliOrFail(dbDir, "--settings", dbSettings, ...args);
    const compatRun = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);

    // -------------------------------------------------------------------
    // Case 1: storage-only -> DB (UPDATE DATABASE)
    // -------------------------------------------------------------------
    await t.step("case 1: storage-only file is synced into DB", async () => {
        const storageFile = workDir.join("vault", "test", "storage-only.md");
        await Deno.writeTextFile(storageFile, "storage-only content\n");

        await runMirror();

        const resultFile = workDir.join("case1-pull.txt");
        await dbRun("pull", "test/storage-only.md", resultFile);

        const storageContent = await Deno.readTextFile(storageFile);
        const pulledContent = await Deno.readTextFile(resultFile);
        assert(
            storageContent === pulledContent,
            `storage-only file NOT synced into DB\nexpected: ${storageContent}\ngot: ${pulledContent}`
        );
        console.log("[PASS] case 1: storage-only file was synced into DB");
    });

    // -------------------------------------------------------------------
    // Case 2: DB-only -> storage (UPDATE STORAGE)
    // -------------------------------------------------------------------
    await t.step("case 2: DB-only file is restored to storage", async () => {
        // `push` takes a file path (no piping needed), so write the content to a
        // source file first and push that.
        const dbOnlySrc = workDir.join("db-only-src.txt");
        await Deno.writeTextFile(dbOnlySrc, "db-only content\n");
        await dbRun("push", dbOnlySrc, "test/db-only.md");

        const storagePath = workDir.join("vault", "test", "db-only.md");
        assert(!(await exists(storagePath)), "db-only.md unexpectedly exists in storage before mirror");

        await runMirror();

        assert(await exists(storagePath), "DB-only file NOT restored to storage after mirror");
        const content = await Deno.readTextFile(storagePath);
        assert(content === "db-only content\n", `DB-only file restored but content mismatch: '${content}'`);
        console.log("[PASS] case 2: DB-only file was restored to storage");
    });

    // -------------------------------------------------------------------
    // Case 3: DB-deleted -> storage untouched
    // -------------------------------------------------------------------
    await t.step("case 3: DB-deleted entry is NOT restored to storage", async () => {
        const deletedSrc = workDir.join("deleted-src.txt");
        await Deno.writeTextFile(deletedSrc, "to-be-deleted\n");
        await dbRun("push", deletedSrc, "test/deleted.md");
        await dbRun("rm", "test/deleted.md");

        await runMirror();

        const storagePath = workDir.join("vault", "test", "deleted.md");
        assert(!(await exists(storagePath)), "deleted DB entry was incorrectly restored to storage");
        console.log("[PASS] case 3: deleted DB entry was NOT restored to storage");
    });

    // -------------------------------------------------------------------
    // Case 4: storage newer -> DB updated (SYNC: STORAGE -> DB)
    // -------------------------------------------------------------------
    await t.step("case 4: storage newer than DB -> DB is updated", async () => {
        // Seed DB with old content (mtime ~ now)
        const seedFile = workDir.join("case4-seed.txt");
        await Deno.writeTextFile(seedFile, "old content\n");
        await dbRun("push", seedFile, "test/sync-storage-newer.md");

        // Write new content to storage with a timestamp 1 hour in the future
        const storageFile = workDir.join("vault", "test", "sync-storage-newer.md");
        await Deno.writeTextFile(storageFile, "new content\n");
        await Deno.utime(storageFile, new Date(), new Date(Date.now() + 3600_000));

        await runMirror();

        const resultFile = workDir.join("case4-pull.txt");
        await dbRun("pull", "test/sync-storage-newer.md", resultFile);
        const storageContent = await Deno.readTextFile(storageFile);
        const pulledContent = await Deno.readTextFile(resultFile);
        assert(
            storageContent === pulledContent,
            `DB NOT updated to match newer storage file\nexpected: ${storageContent}\ngot: ${pulledContent}`
        );
        console.log("[PASS] case 4: DB updated to match newer storage file");
    });

    // -------------------------------------------------------------------
    // Case 5: DB newer -> storage updated (SYNC: DB -> STORAGE)
    // -------------------------------------------------------------------
    await t.step("case 5: DB newer than storage -> storage is updated", async () => {
        // Write old content to storage with a timestamp 1 hour in the past
        const storageFile = workDir.join("vault", "test", "sync-db-newer.md");
        await Deno.writeTextFile(storageFile, "old storage content\n");
        await Deno.utime(storageFile, new Date(), new Date(Date.now() - 3600_000));

        // Write new content to DB only (mtime ~ now, newer than the storage file)
        const dbNewFile = workDir.join("case5-db-new.txt");
        await Deno.writeTextFile(dbNewFile, "new db content\n");
        await dbRun("push", dbNewFile, "test/sync-db-newer.md");

        await runMirror();

        const content = await Deno.readTextFile(storageFile);
        assert(content === "new db content\n", `storage NOT updated to match newer DB entry (got: '${content}')`);
        console.log("[PASS] case 5: storage updated to match newer DB entry");
    });

    // -------------------------------------------------------------------
    // Case 6: compatibility mode (vault path = DB path)
    // -------------------------------------------------------------------
    await t.step("case 6: compatibility mode (omitted vault-path)", async () => {
        const compatFile = workDir.join("vault", "compat.md");
        await Deno.writeTextFile(compatFile, "compat-content\n");

        await runMirrorCompat();

        const resultFile = workDir.join("case6-pull.txt");
        await compatRun("pull", "compat.md", resultFile);
        const pulled = await Deno.readTextFile(resultFile);
        assert(pulled === "compat-content\n", `Compatibility mode failed to sync file into DB (got: '${pulled}')`);
        console.log("[PASS] case 6: compatibility mode works");
    });
});

// ---------------------------------------------------------------------------
// Utility
// ---------------------------------------------------------------------------

async function exists(path: string): Promise<boolean> {
    try {
        await Deno.stat(path);
        return true;
    } catch {
        return false;
    }
}
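Cases 1 through 5 above all exercise one underlying decision: which side wins for a given path. A hypothetical sketch of that newer-wins rule, stated as a pure function (the real logic lives inside the CLI's mirror command and is not shown in this diff, so the function and its return labels are illustrative only):

```typescript
// Illustrative only: the CLI's actual mirror decision logic is not in this diff.
type Direction = "UPDATE DATABASE" | "UPDATE STORAGE" | "NOOP";

// mtime is null when the entry is absent (or deleted) on that side.
function decideDirection(storageMtime: number | null, dbMtime: number | null): Direction {
    if (storageMtime !== null && dbMtime === null) return "UPDATE DATABASE"; // case 1: storage-only
    if (storageMtime === null && dbMtime !== null) return "UPDATE STORAGE";  // case 2: DB-only
    if (storageMtime === null || dbMtime === null) return "NOOP";            // case 3: absent/deleted on both
    if (storageMtime > dbMtime) return "UPDATE DATABASE";                    // case 4: storage newer
    if (dbMtime > storageMtime) return "UPDATE STORAGE";                     // case 5: DB newer
    return "NOOP";                                                           // equal timestamps
}
```

This is why cases 4 and 5 manipulate timestamps with `Deno.utime`: shifting one side an hour into the future or past forces a deterministic winner regardless of how fast the test runs.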
src/apps/cli/testdeno/test-p2p-host.ts (new file, 40 lines)
@@ -0,0 +1,40 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings } from "./helpers/settings.ts";
import { startP2pRelay, stopP2pRelay, isLocalP2pRelay } from "./helpers/docker.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";

Deno.test("p2p-host: starts and becomes ready", async () => {
    const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
    const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
    const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
    const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
    const useInternalRelay = Deno.env.get("USE_INTERNAL_RELAY") !== "0";

    await using workDir = await TempDir.create("livesync-cli-p2p-host");
    const vaultDir = workDir.join("vault-host");
    const settingsFile = workDir.join("settings-host.json");
    await Deno.mkdir(vaultDir, { recursive: true });

    let relayStarted = false;
    if (useInternalRelay && isLocalP2pRelay(relay)) {
        await startP2pRelay();
        relayStarted = true;
    }

    try {
        await initSettingsFile(settingsFile);
        await applyP2pSettings(settingsFile, roomId, passphrase, appId, relay);
        const host = startCliInBackground(vaultDir, "--settings", settingsFile, "p2p-host");
        try {
            await host.waitUntilContains("P2P host is running", 20000);
            assert(host.combined.includes("P2P host is running"));
        } finally {
            await host.stop();
        }
    } finally {
        if (relayStarted) {
            await stopP2pRelay().catch(() => {});
        }
    }
});
src/apps/cli/testdeno/test-p2p-peers-local-relay.ts (new file, 42 lines)
@@ -0,0 +1,42 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";

Deno.test("p2p-peers: discovers host through local relay", async () => {
    const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
    const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
    const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
    const timeoutSeconds = Number(Deno.env.get("TIMEOUT_SECONDS") ?? "8");

    await using workDir = await TempDir.create("livesync-cli-p2p-peers-local-relay");
    const hostVault = workDir.join("vault-host");
    const hostSettings = workDir.join("settings-host.json");
    const clientVault = workDir.join("vault");
    const clientSettings = workDir.join("settings.json");
    await Deno.mkdir(hostVault, { recursive: true });
    await Deno.mkdir(clientVault, { recursive: true });

    const relayStarted = await maybeStartLocalRelay(relay);
    try {
        await initSettingsFile(hostSettings);
        await initSettingsFile(clientSettings);
        await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
        await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
        await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
        await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);

        const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
        try {
            await host.waitUntilContains("P2P host is running", 20000);
            const peer = await discoverPeer(clientVault, clientSettings, timeoutSeconds);
            assert(peer.id.length > 0);
            assert(peer.name.length > 0);
        } finally {
            await host.stop();
        }
    } finally {
        await stopLocalRelayIfStarted(relayStarted);
    }
});
src/apps/cli/testdeno/test-p2p-sync.ts (new file, 59 lines)
@@ -0,0 +1,59 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { initSettingsFile, applyP2pSettings, applyP2pTestTweaks } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { runCli } from "./helpers/cli.ts";

Deno.test("p2p-sync: discovers peer and completes sync", async () => {
    const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
    const roomId = Deno.env.get("ROOM_ID") ?? `room-${Date.now()}`;
    const passphrase = Deno.env.get("PASSPHRASE") ?? "test";
    const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "12");
    const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");

    await using workDir = await TempDir.create("livesync-cli-p2p-sync");
    const hostVault = workDir.join("vault-host");
    const hostSettings = workDir.join("settings-host.json");
    const clientVault = workDir.join("vault-sync");
    const clientSettings = workDir.join("settings-sync.json");
    await Deno.mkdir(hostVault, { recursive: true });
    await Deno.mkdir(clientVault, { recursive: true });

    const relayStarted = await maybeStartLocalRelay(relay);
    try {
        await initSettingsFile(hostSettings);
        await initSettingsFile(clientSettings);
        await applyP2pSettings(hostSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
        await applyP2pSettings(clientSettings, roomId, passphrase, "self-hosted-livesync-cli-tests", relay);
        await applyP2pTestTweaks(hostSettings, "p2p-host", passphrase);
        await applyP2pTestTweaks(clientSettings, "p2p-client", passphrase);

        const host = startCliInBackground(hostVault, "--settings", hostSettings, "p2p-host");
        try {
            await host.waitUntilContains("P2P host is running", 20000);
            const peer = await discoverPeer(
                clientVault,
                clientSettings,
                peersTimeout,
                Deno.env.get("TARGET_PEER") ?? undefined
            );
            const syncResult = await runCli(
                clientVault,
                "--settings",
                clientSettings,
                "p2p-sync",
                peer.id,
                String(syncTimeout)
            );
            assert(
                syncResult.code === 0,
                `p2p-sync failed\nstdout: ${syncResult.stdout}\nstderr: ${syncResult.stderr}`
            );
        } finally {
            await host.stop();
        }
    } finally {
        await stopLocalRelayIfStarted(relayStarted);
    }
});
118
src/apps/cli/testdeno/test-p2p-three-nodes-conflict.ts
Normal file
@@ -0,0 +1,118 @@
import { assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { jsonStringField, runCliOrFail, runCliWithInputOrFail, sanitiseCatStdout } from "./helpers/cli.ts";

Deno.test("p2p: three nodes detect and resolve conflicts", async () => {
    const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
    const roomId = `${Deno.env.get("ROOM_ID_PREFIX") ?? "p2p-room"}-${Date.now()}`;
    const passphrase = `${Deno.env.get("PASSPHRASE_PREFIX") ?? "p2p-pass"}-${Date.now()}`;
    const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
    const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "10");
    const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "15");

    await using workDir = await TempDir.create("livesync-cli-p2p-3nodes");
    const vaultA = workDir.join("vault-a");
    const vaultB = workDir.join("vault-b");
    const vaultC = workDir.join("vault-c");
    const settingsA = workDir.join("settings-a.json");
    const settingsB = workDir.join("settings-b.json");
    const settingsC = workDir.join("settings-c.json");
    await Deno.mkdir(vaultA, { recursive: true });
    await Deno.mkdir(vaultB, { recursive: true });
    await Deno.mkdir(vaultC, { recursive: true });

    const relayStarted = await maybeStartLocalRelay(relay);
    try {
        for (const settings of [settingsA, settingsB, settingsC]) {
            await initSettingsFile(settings);
            await applyP2pSettings(settings, roomId, passphrase, appId, relay);
        }

        const host = startCliInBackground(vaultA, "--settings", settingsA, "p2p-host");
        try {
            await host.waitUntilContains("P2P host is running", 20000);
            const peerFromB = await discoverPeer(vaultB, settingsB, peersTimeout);
            const peerFromC = await discoverPeer(vaultC, settingsC, peersTimeout);
            const targetPath = "p2p/conflicted-from-two-clients.txt";

            await runCliWithInputOrFail("from-client-b-v1\n", vaultB, "--settings", settingsB, "put", targetPath);
            await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
            await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));

            let visibleOnC = "";
            for (let i = 0; i < 5; i++) {
                try {
                    visibleOnC = sanitiseCatStdout(
                        await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
                    ).trimEnd();
                    if (visibleOnC === "from-client-b-v1") break;
                } catch {
                    // retry below
                }
                await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));
            }
            assert(visibleOnC === "from-client-b-v1", `C should see file created by B, got: ${visibleOnC}`);

            await runCliWithInputOrFail("from-client-b-v2\n", vaultB, "--settings", settingsB, "put", targetPath);
            await runCliWithInputOrFail("from-client-c-v2\n", vaultC, "--settings", settingsC, "put", targetPath);

            const [syncB, syncC] = await Promise.all([
                runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout)),
                runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout)),
            ]);
            void syncB;
            void syncC;

            await runCliOrFail(vaultB, "--settings", settingsB, "p2p-sync", peerFromB.id, String(syncTimeout));
            await runCliOrFail(vaultC, "--settings", settingsC, "p2p-sync", peerFromC.id, String(syncTimeout));

            const infoBBefore = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
            const conflictsBBefore = jsonStringField(infoBBefore, "conflicts");
            const keepRevB = jsonStringField(infoBBefore, "revision");
            assert(
                conflictsBBefore !== "N/A" && conflictsBBefore.length > 0,
                `expected conflicts on B\n${infoBBefore}`
            );
            assert(keepRevB.length > 0, `could not read revision on B\n${infoBBefore}`);

            const infoCBefore = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
            const conflictsCBefore = jsonStringField(infoCBefore, "conflicts");
            const keepRevC = jsonStringField(infoCBefore, "revision");
            assert(
                conflictsCBefore !== "N/A" && conflictsCBefore.length > 0,
                `expected conflicts on C\n${infoCBefore}`
            );
            assert(keepRevC.length > 0, `could not read revision on C\n${infoCBefore}`);

            await runCliOrFail(vaultB, "--settings", settingsB, "resolve", targetPath, keepRevB);
            await runCliOrFail(vaultC, "--settings", settingsC, "resolve", targetPath, keepRevC);

            const infoBAfter = await runCliOrFail(vaultB, "--settings", settingsB, "info", targetPath);
            const infoCAfter = await runCliOrFail(vaultC, "--settings", settingsC, "info", targetPath);
            assert(jsonStringField(infoBAfter, "conflicts") === "N/A", `conflict still remains on B\n${infoBAfter}`);
            assert(jsonStringField(infoCAfter, "conflicts") === "N/A", `conflict still remains on C\n${infoCAfter}`);

            const finalContentB = sanitiseCatStdout(
                await runCliOrFail(vaultB, "--settings", settingsB, "cat", targetPath)
            ).trimEnd();
            const finalContentC = sanitiseCatStdout(
                await runCliOrFail(vaultC, "--settings", settingsC, "cat", targetPath)
            ).trimEnd();
            assert(
                finalContentB === "from-client-b-v2" || finalContentB === "from-client-c-v2",
                `unexpected final content on B: ${finalContentB}`
            );
            assert(
                finalContentC === "from-client-b-v2" || finalContentC === "from-client-c-v2",
                `unexpected final content on C: ${finalContentC}`
            );
        } finally {
            await host.stop();
        }
    } finally {
        await stopLocalRelayIfStarted(relayStarted);
    }
});
111
src/apps/cli/testdeno/test-p2p-upload-download-repro.ts
Normal file
@@ -0,0 +1,111 @@
import { TempDir } from "./helpers/temp.ts";
import { applyP2pSettings, applyP2pTestTweaks, initSettingsFile } from "./helpers/settings.ts";
import { startCliInBackground } from "./helpers/backgroundCli.ts";
import { discoverPeer, maybeStartLocalRelay, stopLocalRelayIfStarted } from "./helpers/p2p.ts";
import { assertFilesEqual, runCliOrFail } from "./helpers/cli.ts";

async function writeFilledFile(path: string, size: number, byte: number): Promise<void> {
    const data = new Uint8Array(size);
    data.fill(byte);
    await Deno.writeFile(path, data);
}

Deno.test("p2p: upload/download reproduction scenario", async () => {
    const relay = Deno.env.get("RELAY") ?? "ws://localhost:4000/";
    const appId = Deno.env.get("APP_ID") ?? "self-hosted-livesync-cli-tests";
    const peersTimeout = Number(Deno.env.get("PEERS_TIMEOUT") ?? "20");
    const syncTimeout = Number(Deno.env.get("SYNC_TIMEOUT") ?? "240");
    const roomId = `p2p-room-${Date.now()}`;
    const passphrase = `p2p-pass-${Date.now()}`;

    await using workDir = await TempDir.create("livesync-cli-p2p-upload-download");
    const vaultHost = workDir.join("vault-host");
    const vaultUp = workDir.join("vault-up");
    const vaultDown = workDir.join("vault-down");
    const settingsHost = workDir.join("settings-host.json");
    const settingsUp = workDir.join("settings-up.json");
    const settingsDown = workDir.join("settings-down.json");
    for (const dir of [vaultHost, vaultUp, vaultDown]) {
        await Deno.mkdir(dir, { recursive: true });
    }

    const relayStarted = await maybeStartLocalRelay(relay);
    try {
        for (const settings of [settingsHost, settingsUp, settingsDown]) {
            await initSettingsFile(settings);
            await applyP2pSettings(settings, roomId, passphrase, appId, relay, "~.*");
        }
        await applyP2pTestTweaks(settingsHost, "p2p-cli-host", passphrase);
        await applyP2pTestTweaks(settingsUp, `p2p-cli-upload-${Date.now()}`, passphrase);
        await applyP2pTestTweaks(settingsDown, `p2p-cli-download-${Date.now()}`, passphrase);

        const host = startCliInBackground(vaultHost, "--settings", settingsHost, "p2p-host");
        try {
            await host.waitUntilContains("P2P host is running", 20000);
            const uploadPeer = await discoverPeer(vaultUp, settingsUp, peersTimeout);

            const storeText = workDir.join("store-file.md");
            const diffA = workDir.join("test-diff-1.md");
            const diffB = workDir.join("test-diff-2.md");
            const diffC = workDir.join("test-diff-3.md");
            await Deno.writeTextFile(storeText, "Hello, World!\n");
            await Deno.writeTextFile(diffA, "Content A\n");
            await Deno.writeTextFile(diffB, "Content B\n");
            await Deno.writeTextFile(diffC, "Content C\n");
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", storeText, "p2p/store-file.md");
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffA, "p2p/test-diff-1.md");
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffB, "p2p/test-diff-2.md");
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", diffC, "p2p/test-diff-3.md");

            const large100k = workDir.join("large-100k.txt");
            const large1m = workDir.join("large-1m.txt");
            const binary100k = workDir.join("binary-100k.bin");
            const binary5m = workDir.join("binary-5m.bin");
            await Deno.writeTextFile(large100k, "a".repeat(100000));
            await Deno.writeTextFile(large1m, "b".repeat(1000000));
            await writeFilledFile(binary100k, 100000, 0x5a);
            await writeFilledFile(binary5m, 5000000, 0x7c);
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large100k, "p2p/large-100000.md");
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", large1m, "p2p/large-1000000.md");
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary100k, "p2p/binary-100000.bin");
            await runCliOrFail(vaultUp, "--settings", settingsUp, "push", binary5m, "p2p/binary-5000000.bin");

            await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));
            await runCliOrFail(vaultUp, "--settings", settingsUp, "p2p-sync", uploadPeer.id, String(syncTimeout));

            const downloadPeer = await discoverPeer(vaultDown, settingsDown, peersTimeout);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));
            await runCliOrFail(vaultDown, "--settings", settingsDown, "p2p-sync", downloadPeer.id, String(syncTimeout));

            const downStoreText = workDir.join("down-store-file.md");
            const downDiffA = workDir.join("down-test-diff-1.md");
            const downDiffB = workDir.join("down-test-diff-2.md");
            const downDiffC = workDir.join("down-test-diff-3.md");
            const downLarge100k = workDir.join("down-large-100k.txt");
            const downLarge1m = workDir.join("down-large-1m.txt");
            const downBinary100k = workDir.join("down-binary-100k.bin");
            const downBinary5m = workDir.join("down-binary-5m.bin");
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/store-file.md", downStoreText);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-1.md", downDiffA);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-2.md", downDiffB);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/test-diff-3.md", downDiffC);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-100000.md", downLarge100k);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/large-1000000.md", downLarge1m);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-100000.bin", downBinary100k);
            await runCliOrFail(vaultDown, "--settings", settingsDown, "pull", "p2p/binary-5000000.bin", downBinary5m);

            await assertFilesEqual(storeText, downStoreText, "store-file mismatch");
            await assertFilesEqual(diffA, downDiffA, "test-diff-1 mismatch");
            await assertFilesEqual(diffB, downDiffB, "test-diff-2 mismatch");
            await assertFilesEqual(diffC, downDiffC, "test-diff-3 mismatch");
            await assertFilesEqual(large100k, downLarge100k, "large-100000 mismatch");
            await assertFilesEqual(large1m, downLarge1m, "large-1000000 mismatch");
            await assertFilesEqual(binary100k, downBinary100k, "binary-100000 mismatch");
            await assertFilesEqual(binary5m, downBinary5m, "binary-5000000 mismatch");
        } finally {
            await host.stop();
        }
    } finally {
        await stopLocalRelayIfStarted(relayStarted);
    }
});
78
src/apps/cli/testdeno/test-push-pull.ts
Normal file
@@ -0,0 +1,78 @@
/**
 * Deno port of test-push-pull-linux.sh
 *
 * Requires CouchDB connection details either via environment variables or a
 * .test.env file. If neither is present, the test logs a warning and the
 * CLI will likely fail at the push step.
 *
 * Run:
 *   deno test -A test-push-pull.ts
 *
 * With explicit CouchDB:
 *   COUCHDB_URI=http://127.0.0.1:5984 \
 *   COUCHDB_USER=admin \
 *   COUCHDB_PASSWORD=password \
 *   COUCHDB_DBNAME=livesync-test \
 *   deno test -A test-push-pull.ts
 */

import { join } from "@std/path";
import { assertEquals } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCliOrFail } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";

const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/push-pull.txt";

Deno.test("push/pull roundtrip", async () => {
    await using workDir = await TempDir.create("livesync-cli-push-pull");

    const settingsFile = workDir.join("data.json");
    const vaultDir = workDir.join("vault");
    await Deno.mkdir(join(vaultDir, "test"), { recursive: true });

    const uri = Deno.env.get("COUCHDB_URI") ?? "http://127.0.0.1:5989/";
    const user = Deno.env.get("COUCHDB_USER") ?? "admin";
    const password = Deno.env.get("COUCHDB_PASSWORD") ?? "testpassword";
    const dbname = Deno.env.get("COUCHDB_DBNAME") ?? `push-pull-${Date.now()}`;

    const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
    const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";

    if (shouldStartDocker) {
        await startCouchdb(uri, user, password, dbname);
    }

    try {
        await initSettingsFile(settingsFile);

        if (uri && user && password && dbname) {
            console.log("[INFO] applying CouchDB env vars to settings");
            await applyCouchdbSettings(settingsFile, uri, user, password, dbname);
        } else {
            console.warn(
                "[WARN] CouchDB env vars not fully set — push/pull may fail unless the generated settings already contain connection details"
            );
        }

        const srcFile = workDir.join("push-source.txt");
        const pulledFile = workDir.join("pull-result.txt");
        const content = `push-pull-test ${new Date().toISOString()}\n`;
        await Deno.writeTextFile(srcFile, content);

        console.log(`[INFO] push -> ${REMOTE_PATH}`);
        await runCliOrFail(vaultDir, "--settings", settingsFile, "push", srcFile, REMOTE_PATH);

        console.log(`[INFO] pull <- ${REMOTE_PATH}`);
        await runCliOrFail(vaultDir, "--settings", settingsFile, "pull", REMOTE_PATH, pulledFile);

        const pulled = await Deno.readTextFile(pulledFile);
        assertEquals(content, pulled, "push/pull roundtrip content mismatch");
        console.log("[PASS] push/pull roundtrip matched");
    } finally {
        if (shouldStartDocker && !keepDocker) {
            await stopCouchdb().catch(() => {});
        }
    }
});
214
src/apps/cli/testdeno/test-setup-put-cat.ts
Normal file
@@ -0,0 +1,214 @@
/**
 * Deno port of test-setup-put-cat-linux.sh
 *
 * Tests all local-DB file operations that require no external remote:
 *   setup / put / cat / ls / info / rm / resolve / cat-rev / pull-rev
 *
 * Run (no external services needed):
 *   deno test -A test-setup-put-cat.ts
 */

import { join } from "@std/path";
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { runCli, runCliOrFail, runCliWithInput, sanitiseCatStdout } from "./helpers/cli.ts";
import { generateSetupUriFromSettings, initSettingsFile } from "./helpers/settings.ts";

const REMOTE_PATH = Deno.env.get("REMOTE_PATH") ?? "test/setup-put-cat.txt";
const SETUP_PASSPHRASE = Deno.env.get("SETUP_PASSPHRASE") ?? "setup-passphrase";

Deno.test("CLI file operations: put / cat / ls / info / rm / resolve / cat-rev / pull-rev", async (t) => {
    await using workDir = await TempDir.create("livesync-cli-setup-put-cat");

    const settingsFile = workDir.join("data.json");
    const vaultDir = workDir.join("vault");
    await Deno.mkdir(join(vaultDir, "test"), { recursive: true });

    await initSettingsFile(settingsFile);

    const setupUri = await generateSetupUriFromSettings(settingsFile, SETUP_PASSPHRASE);
    const setupResult = await runCliWithInput(
        `${SETUP_PASSPHRASE}\n`,
        vaultDir,
        "--settings",
        settingsFile,
        "setup",
        setupUri
    );
    assert(setupResult.code === 0, `setup command exited with ${setupResult.code}\n${setupResult.combined}`);
    assert(
        setupResult.combined.includes("[Command] setup ->"),
        `setup command did not execute expected code path\n${setupResult.combined}`
    );

    const run = (...args: string[]) => runCliOrFail(vaultDir, "--settings", settingsFile, ...args);

    // ------------------------------------------------------------------
    // put / cat roundtrip
    // ------------------------------------------------------------------
    await t.step("put/cat roundtrip", async () => {
        const srcFile = workDir.join("put-source.txt");
        const content = `setup-put-cat-test ${new Date().toISOString()}\nline-2\n`;
        await Deno.writeTextFile(srcFile, content);

        console.log(`[INFO] put -> ${REMOTE_PATH}`);
        await runCliWithInput(content, vaultDir, "--settings", settingsFile, "put", REMOTE_PATH);

        console.log(`[INFO] cat <- ${REMOTE_PATH}`);
        const rawOutput = await run("cat", REMOTE_PATH);
        const catOutput = sanitiseCatStdout(rawOutput);

        assertEquals(content, catOutput, "put/cat roundtrip content mismatch");
        console.log("[PASS] put/cat roundtrip matched");
    });

    // ------------------------------------------------------------------
    // ls: single file
    // ------------------------------------------------------------------
    await t.step("ls output format (single file)", async () => {
        const lsOutput = await run("ls", REMOTE_PATH);
        const line = lsOutput
            .trim()
            .split("\n")
            .find((l) => l.startsWith(REMOTE_PATH + "\t"));
        assert(line, `ls output did not include ${REMOTE_PATH}`);

        const [lsPath, lsSize, lsMtime, lsRev] = line.split("\t");
        assertEquals(lsPath, REMOTE_PATH, "ls path column mismatch");
        assert(/^\d+$/.test(lsSize), `ls size not numeric: ${lsSize}`);
        assert(/^\d+$/.test(lsMtime), `ls mtime not numeric: ${lsMtime}`);
        assert(lsRev?.length > 0, "ls revision column is empty");
        console.log("[PASS] ls output format matched");
    });

    // ------------------------------------------------------------------
    // ls: prefix filter and sort order
    // ------------------------------------------------------------------
    await t.step("ls prefix filter and sort order", async () => {
        await runCliWithInput("file-a\n", vaultDir, "--settings", settingsFile, "put", "test/a-first.txt");
        await runCliWithInput("file-z\n", vaultDir, "--settings", settingsFile, "put", "test/z-last.txt");

        const lsOut = await run("ls", "test/");
        const lines = lsOut.trim().split("\n").filter(Boolean);
        assert(lines.length >= 3, "ls prefix output expected at least 3 rows");

        // Verify sorted ascending by path
        const paths = lines.map((l) => l.split("\t")[0]);
        for (let i = 1; i < paths.length; i++) {
            assert(paths[i - 1] <= paths[i], `ls output not sorted: ${paths[i - 1]} > ${paths[i]}`);
        }
        assert(
            lines.some((l) => l.startsWith("test/a-first.txt\t")),
            "ls prefix output missing test/a-first.txt"
        );
        assert(
            lines.some((l) => l.startsWith("test/z-last.txt\t")),
            "ls prefix output missing test/z-last.txt"
        );
        console.log("[PASS] ls prefix and sorting matched");
    });

    // ------------------------------------------------------------------
    // ls: no-match prefix returns empty output
    // ------------------------------------------------------------------
    await t.step("ls no-match prefix returns empty", async () => {
        const lsOut = await run("ls", "no-such-prefix/");
        assertEquals(lsOut.trim(), "", "ls no-match prefix should produce empty output");
        console.log("[PASS] ls no-match prefix matched");
    });

    // ------------------------------------------------------------------
    // info: JSON output format
    // ------------------------------------------------------------------
    await t.step("info output JSON format", async () => {
        const infoOut = await run("info", REMOTE_PATH);
        let data: Record<string, unknown>;
        try {
            data = JSON.parse(infoOut);
        } catch {
            throw new Error(`info output is not valid JSON:\n${infoOut}`);
        }
        assertEquals(data.path, REMOTE_PATH, "info .path mismatch");
        assertEquals(data.filename, REMOTE_PATH.split("/").at(-1), "info .filename mismatch");
        assert(typeof data.size === "number" && data.size >= 0, `info .size invalid: ${data.size}`);
        assert(typeof data.chunks === "number" && (data.chunks as number) >= 1, `info .chunks invalid: ${data.chunks}`);
        assertEquals(data.conflicts, "N/A", "info .conflicts should be N/A");
        console.log("[PASS] info output format matched");
    });

    // ------------------------------------------------------------------
    // info: non-existent path exits non-zero
    // ------------------------------------------------------------------
    await t.step("info non-existent path returns non-zero", async () => {
        const r = await runCli(vaultDir, "--settings", settingsFile, "info", "no-such-file.md");
        assert(r.code !== 0, "info on non-existent file should exit non-zero");
        console.log("[PASS] info non-existent path returns non-zero");
    });

    // ------------------------------------------------------------------
    // rm: removes file from ls and makes cat fail
    // ------------------------------------------------------------------
    await t.step("rm removes target from ls and cat", async () => {
        await run("rm", "test/z-last.txt");

        const catResult = await runCli(vaultDir, "--settings", settingsFile, "cat", "test/z-last.txt");
        assert(catResult.code !== 0, "rm target should not be readable by cat");

        const lsOut = await run("ls", "test/");
        assert(!lsOut.includes("test/z-last.txt\t"), "rm target should not appear in ls output");
        console.log("[PASS] rm removed target from visible entries");
    });

    // ------------------------------------------------------------------
    // resolve: accepts current revision, rejects invalid revision
    // ------------------------------------------------------------------
    await t.step("resolve: valid and invalid revisions", async () => {
        const lsLine = (await run("ls", "test/a-first.txt")).trim().split("\n")[0];
        assert(lsLine, "could not fetch revision for resolve test");
        const rev = lsLine.split("\t")[3];
        assert(rev?.length > 0, "revision was empty for resolve test");

        await run("resolve", "test/a-first.txt", rev);
        console.log("[PASS] resolve accepted current revision");

        const badR = await runCli(vaultDir, "--settings", settingsFile, "resolve", "test/a-first.txt", "9-no-such-rev");
        assert(badR.code !== 0, "resolve with non-existent revision should exit non-zero");
        console.log("[PASS] resolve non-existent revision returns non-zero");
    });

    // ------------------------------------------------------------------
    // cat-rev / pull-rev: retrieve a past revision
    // ------------------------------------------------------------------
    await t.step("cat-rev / pull-rev: retrieve past revision", async () => {
        const revPath = "test/revision-history.txt";
        await runCliWithInput("revision-v1\n", vaultDir, "--settings", settingsFile, "put", revPath);
        await runCliWithInput("revision-v2\n", vaultDir, "--settings", settingsFile, "put", revPath);
        await runCliWithInput("revision-v3\n", vaultDir, "--settings", settingsFile, "put", revPath);

        const infoOut = await run("info", revPath);
        const infoData = JSON.parse(infoOut) as {
            revisions?: string[];
        };
        const revisions = Array.isArray(infoData.revisions) ? infoData.revisions : [];
        const pastRev = revisions.find((r): r is string => typeof r === "string" && r !== "N/A");
        assert(pastRev, "info output did not include any past revision");

        const catRevOut = await run("cat-rev", revPath, pastRev);
        const catRevClean = sanitiseCatStdout(catRevOut);
        assert(
            catRevClean === "revision-v1\n" || catRevClean === "revision-v2\n",
            `cat-rev output did not match expected past revision:\n${catRevClean}`
        );
        console.log("[PASS] cat-rev matched one of the past revisions from info");

        const pullRevFile = workDir.join("rev-pull-output.txt");
        await run("pull-rev", revPath, pullRevFile, pastRev);
        const pullRevContent = await Deno.readTextFile(pullRevFile);
        assert(
            pullRevContent === "revision-v1\n" || pullRevContent === "revision-v2\n",
            `pull-rev output did not match expected past revision:\n${pullRevContent}`
        );
        console.log("[PASS] pull-rev matched one of the past revisions from info");
    });
});
97
src/apps/cli/testdeno/test-sync-locked-remote.ts
Normal file
@@ -0,0 +1,97 @@
/**
 * Deno port of test-sync-locked-remote-linux.sh
 *
 * Verifies CLI sync behaviour when the remote milestone document is unlocked
 * versus locked.
 */

import { assert, assertStringIncludes } from "@std/assert";
import { join } from "@std/path";
import { loadEnvFile } from "./helpers/env.ts";
import { TempDir } from "./helpers/temp.ts";
import { runCli } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { createCouchdbDatabase, startCouchdb, stopCouchdb, updateCouchdbDoc } from "./helpers/docker.ts";

const TEST_ENV = join(import.meta.dirname!, "..", ".test.env");
const MILESTONE_DOC = "_local/obsydian_livesync_milestone";

function requireEnv(env: Record<string, string>, key: string): string {
    const value = env[key]?.trim();
    if (!value) {
        throw new Error(`Required env var is missing: ${key}`);
    }
    return value;
}

Deno.test("sync: actionable error against locked remote DB", async () => {
    const env = await loadEnvFile(TEST_ENV);
    const couchdbUri = requireEnv(env, "hostname").replace(/\/$/, "");
    const couchdbUser = requireEnv(env, "username");
    const couchdbPassword = requireEnv(env, "password");
    const dbPrefix = requireEnv(env, "dbname");
    const dbname = `${dbPrefix}-locked-${Date.now()}-${Math.floor(Math.random() * 100000)}`;

    await using workDir = await TempDir.create("livesync-cli-locked-test");
    const vaultDir = workDir.join("vault");
    const settingsFile = workDir.join("settings.json");
    await Deno.mkdir(vaultDir, { recursive: true });

    const shouldStartDocker = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
    const keepDocker = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";

    if (shouldStartDocker) {
        console.log(`[INFO] starting CouchDB and creating test database: ${dbname}`);
        await startCouchdb(couchdbUri, couchdbUser, couchdbPassword, dbname);
    } else {
        console.log(`[INFO] using existing CouchDB and creating test database: ${dbname}`);
        await createCouchdbDatabase(couchdbUri, couchdbUser, couchdbPassword, dbname);
    }

    try {
        await initSettingsFile(settingsFile);
        await applyCouchdbSettings(settingsFile, couchdbUri, couchdbUser, couchdbPassword, dbname, true);

        console.log("[CASE] initial sync to create milestone document");
        const initialSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
        assert(
            initialSync.code === 0,
            `initial sync failed\nstdout: ${initialSync.stdout}\nstderr: ${initialSync.stderr}`
        );

        const updateMilestone = async (locked: boolean) => {
            await updateCouchdbDoc(couchdbUri, couchdbUser, couchdbPassword, `${dbname}/${MILESTONE_DOC}`, (doc) => ({
                ...doc,
                locked,
                accepted_nodes: [],
            }));
        };

        console.log("[CASE] sync should succeed when remote is not locked");
        await updateMilestone(false);
        const unlockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
        assert(
            unlockedSync.code === 0,
            `sync should succeed when remote is not locked\nstdout: ${unlockedSync.stdout}\nstderr: ${unlockedSync.stderr}`
        );
        assert(
            !unlockedSync.combined.includes("The remote database is locked"),
            `locked error should not appear when remote is not locked\n${unlockedSync.combined}`
        );
        console.log("[PASS] unlocked remote DB syncs successfully");

        console.log("[CASE] sync should fail with actionable error when remote is locked");
        await updateMilestone(true);
        const lockedSync = await runCli(vaultDir, "--settings", settingsFile, "sync");
        assert(
            lockedSync.code !== 0,
            `sync should fail when remote is locked\nstdout: ${lockedSync.stdout}\nstderr: ${lockedSync.stderr}`
        );
        assertStringIncludes(lockedSync.combined, "The remote database is locked and this device is not yet accepted");
        console.log("[PASS] locked remote DB produces actionable CLI error");
    } finally {
        if (shouldStartDocker && !keepDocker) {
            await stopCouchdb().catch(() => {});
        }
    }
});
287
src/apps/cli/testdeno/test-sync-two-local-databases.ts
Normal file
@@ -0,0 +1,287 @@
/**
 * Deno port of test-sync-two-local-databases-linux.sh
 *
 * Tests two-vault synchronisation via CouchDB including conflict detection
 * and resolution.
 *
 * Requires CouchDB connection details. Provide them via environment variables
 * OR place a .test.env file at src/apps/cli/.test.env.
 *
 * By default, a CouchDB Docker container is started automatically
 * (LIVESYNC_START_DOCKER=1). Set LIVESYNC_START_DOCKER=0 to use an existing
 * CouchDB instance instead.
 *
 * Run:
 *   deno test -A test-sync-two-local-databases.ts
 *
 * With an existing CouchDB:
 *   COUCHDB_URI=http://127.0.0.1:5984 \
 *   COUCHDB_USER=admin \
 *   COUCHDB_PASSWORD=password \
 *   COUCHDB_DBNAME=livesync-test \
 *   LIVESYNC_START_DOCKER=0 \
 *   deno test -A test-sync-two-local-databases.ts
 */

import { join } from "@std/path";
import { assertEquals, assert } from "@std/assert";
import { TempDir } from "./helpers/temp.ts";
import { CLI_DIR, runCliOrFail, jsonFieldIsNa } from "./helpers/cli.ts";
import { applyCouchdbSettings, initSettingsFile } from "./helpers/settings.ts";
import { startCouchdb, stopCouchdb } from "./helpers/docker.ts";
import { loadEnvFile } from "./helpers/env.ts";

// ---------------------------------------------------------------------------
// Load configuration
// ---------------------------------------------------------------------------

async function resolveConfig(): Promise<{
    uri: string;
    user: string;
    password: string;
    baseDbname: string;
} | null> {
    let env: Record<string, string> = {};

    // 1. Explicit environment variables take priority
    if (Deno.env.get("COUCHDB_URI")) {
        env = Deno.env.toObject();
    } else {
        // 2. TEST_ENV_FILE env var
        const envFile = Deno.env.get("TEST_ENV_FILE") ?? join(CLI_DIR, ".test.env");
        try {
            env = await loadEnvFile(envFile);
        } catch {
            return null; // no config available — skip
        }
    }

    const uri = (env["COUCHDB_URI"] ?? env["hostname"] ?? "").replace(/\/$/, "");
    const user = env["COUCHDB_USER"] ?? env["username"] ?? "";
    const password = env["COUCHDB_PASSWORD"] ?? env["password"] ?? "";
    const baseDbname = env["COUCHDB_DBNAME"] ?? env["dbname"] ?? "livesync-test";

    if (!uri || !user || !password) return null;
    return { uri, user, password, baseDbname };
}

const config = await resolveConfig();
const START_DOCKER = Deno.env.get("LIVESYNC_START_DOCKER") !== "0";
const KEEP_DOCKER = Deno.env.get("LIVESYNC_DEBUG_KEEP_DOCKER") === "1";
const SYNC_RETRY = Number(Deno.env.get("LIVESYNC_SYNC_RETRY") ?? "8");

// Provide a sane default for flaky remote connectivity in Docker-on-WSL
// environments. Users can override explicitly if needed.
if (!Deno.env.has("LIVESYNC_CLI_RETRY")) {
    Deno.env.set("LIVESYNC_CLI_RETRY", "2");
}

// ---------------------------------------------------------------------------
// Test suite
// ---------------------------------------------------------------------------

Deno.test(
    {
        name: "sync two local databases: sync + conflict detection + resolution",
        ignore: config === null,
    },
    async (t) => {
        if (!config) return; // narrowing for TypeScript

        const suffix = `${Date.now()}-${Math.floor(Math.random() * 65535)}`;
        const dbname = `${config.baseDbname}-${suffix}`;

        await using workDir = await TempDir.create("livesync-cli-two-db-test");

        // ------------------------------------------------------------------
        // Docker lifecycle
        // ------------------------------------------------------------------
        if (START_DOCKER) {
            await startCouchdb(config.uri, config.user, config.password, dbname);
        }

        try {
            await runSuite(t, workDir, config, dbname);
        } finally {
            if (START_DOCKER && !KEEP_DOCKER) {
                await stopCouchdb().catch(() => {});
            }
            if (START_DOCKER && KEEP_DOCKER) {
                console.log("[INFO] LIVESYNC_DEBUG_KEEP_DOCKER=1, keeping couchdb-test container");
            }
            console.log(`[INFO] test database '${dbname}' is preserved for debugging.`);
        }
    }
);

// ---------------------------------------------------------------------------
// Suite implementation
// ---------------------------------------------------------------------------

async function runSuite(
    t: Deno.TestContext,
    workDir: TempDir,
    config: { uri: string; user: string; password: string },
    dbname: string
): Promise<void> {
    const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
    const runWithRetry = async <T>(label: string, fn: () => Promise<T>, retries = SYNC_RETRY): Promise<T> => {
        let lastErr: unknown;
        for (let i = 0; i <= retries; i++) {
            try {
                return await fn();
            } catch (err) {
                lastErr = err;
                if (i === retries) break;
                const delayMs = 500 * (i + 1);
                console.warn(`[WARN] ${label} failed, retrying (${i + 1}/${retries}) in ${delayMs}ms`);
                await sleep(delayMs);
            }
        }
        throw lastErr;
    };

    const vaultA = workDir.join("vault-a");
    const vaultB = workDir.join("vault-b");
    const settingsA = workDir.join("a-settings.json");
    const settingsB = workDir.join("b-settings.json");
    await Deno.mkdir(vaultA, { recursive: true });
    await Deno.mkdir(vaultB, { recursive: true });

    await initSettingsFile(settingsA);
    await initSettingsFile(settingsB);

    const applySettings = async (f: string) =>
        applyCouchdbSettings(f, config.uri, config.user, config.password, dbname, /* liveSync */ true);
    await applySettings(settingsA);
    await applySettings(settingsB);

    const runA = (...args: string[]) => runCliOrFail(vaultA, "--settings", settingsA, ...args);
    const runB = (...args: string[]) => runCliOrFail(vaultB, "--settings", settingsB, ...args);

    const syncA = () => runWithRetry("syncA", () => runA("sync"));
    const syncB = () => runWithRetry("syncB", () => runB("sync"));
    const catA = (path: string) => runA("cat", path);
    const catB = (path: string) => runB("cat", path);

    // ------------------------------------------------------------------
    // Case 1: A creates file, B reads after sync
    // ------------------------------------------------------------------
    await t.step("case 1: A creates file -> B can read after sync", async () => {
        const srcA = workDir.join("from-a-src.txt");
        await Deno.writeTextFile(srcA, "from-a\n");
        await runA("push", srcA, "shared/from-a.txt");
        await syncA();
        await syncB();
        const value = (await catB("shared/from-a.txt")).replace(/\r\n/g, "\n").trimEnd();
        assertEquals(value, "from-a", "B could not read file created on A");
        console.log("[PASS] case 1 passed");
    });

    // ------------------------------------------------------------------
    // Case 2: B creates file, A reads after sync
    // ------------------------------------------------------------------
    await t.step("case 2: B creates file -> A can read after sync", async () => {
        const srcB = workDir.join("from-b-src.txt");
        await Deno.writeTextFile(srcB, "from-b\n");
        await runB("push", srcB, "shared/from-b.txt");
        await syncB();
        await syncA();
        const value = (await catA("shared/from-b.txt")).replace(/\r\n/g, "\n").trimEnd();
        assertEquals(value, "from-b", "A could not read file created on B");
        console.log("[PASS] case 2 passed");
    });

    // ------------------------------------------------------------------
    // Case 3: concurrent edits create a conflict
    // ------------------------------------------------------------------
    await t.step("case 3: concurrent edits create conflict", async () => {
        const baseSrc = workDir.join("base-src.txt");
        await Deno.writeTextFile(baseSrc, "base\n");
        await runA("push", baseSrc, "shared/conflicted.txt");
        await syncA();
        await syncB();

        const aEdit = workDir.join("edit-a.txt");
        const bEdit = workDir.join("edit-b.txt");
        await Deno.writeTextFile(aEdit, "edit-from-a\n");
        await Deno.writeTextFile(bEdit, "edit-from-b\n");
        await runA("push", aEdit, "shared/conflicted.txt");
        await runB("push", bEdit, "shared/conflicted.txt");

        const infoFileA = workDir.join("info-a.json");
        const infoFileB = workDir.join("info-b.json");

        let conflictDetected = false;
        for (const side of ["a", "b"] as const) {
            if (side === "a") await syncA();
            else await syncB();
            await Deno.writeTextFile(infoFileA, await runA("info", "shared/conflicted.txt"));
            await Deno.writeTextFile(infoFileB, await runB("info", "shared/conflicted.txt"));
            const da = JSON.parse(await Deno.readTextFile(infoFileA)) as Record<string, unknown>;
            const db = JSON.parse(await Deno.readTextFile(infoFileB)) as Record<string, unknown>;
            if (!jsonFieldIsNa(da, "conflicts") || !jsonFieldIsNa(db, "conflicts")) {
                conflictDetected = true;
                break;
            }
        }
        assert(conflictDetected, "expected conflict after concurrent edits, but both sides show N/A");
        console.log("[PASS] case 3 conflict detected");
    });

    // ------------------------------------------------------------------
    // Case 4: resolve on A, verify B has no conflict after sync
    // ------------------------------------------------------------------
    await t.step("case 4: resolve on A propagates to B", async () => {
        const infoFileA = workDir.join("info-a-resolve.json");
        const infoFileB = workDir.join("info-b-resolve.json");

        // Ensure A sees the conflict
        for (let i = 0; i < 5; i++) {
            const raw = await runA("info", "shared/conflicted.txt");
            await Deno.writeTextFile(infoFileA, raw);
            const da = JSON.parse(raw) as Record<string, unknown>;
            if (!jsonFieldIsNa(da, "conflicts")) break;
            await syncB();
            await syncA();
        }

        const rawA = await runA("info", "shared/conflicted.txt");
        await Deno.writeTextFile(infoFileA, rawA);
        const dataA = JSON.parse(rawA) as Record<string, unknown>;
        assert(!jsonFieldIsNa(dataA, "conflicts"), "A does not see conflict, cannot resolve from A only");

        const keepRev = dataA["revision"] as string;
        assert(keepRev?.length > 0, "could not read revision from A info output");

        await runA("resolve", "shared/conflicted.txt", keepRev);

        let resolved = false;
        for (let i = 0; i < 6; i++) {
            await syncA();
            await syncB();
            const rawA2 = await runA("info", "shared/conflicted.txt");
            const rawB2 = await runB("info", "shared/conflicted.txt");
            await Deno.writeTextFile(infoFileA, rawA2);
            await Deno.writeTextFile(infoFileB, rawB2);
            const da2 = JSON.parse(rawA2) as Record<string, unknown>;
            const db2 = JSON.parse(rawB2) as Record<string, unknown>;
            if (jsonFieldIsNa(da2, "conflicts") && jsonFieldIsNa(db2, "conflicts")) {
                resolved = true;
                break;
            }
            // If A still sees a conflict, resolve it again
            if (!jsonFieldIsNa(da2, "conflicts")) {
                const rev2 = da2["revision"] as string;
                if (rev2) await runA("resolve", "shared/conflicted.txt", rev2).catch(() => {});
            }
        }
        assert(resolved, "conflicts should be resolved on both A and B");

        const contentA = (await catA("shared/conflicted.txt")).replace(/\r\n/g, "\n");
        const contentB = (await catB("shared/conflicted.txt")).replace(/\r\n/g, "\n");
        assertEquals(contentA, contentB, "resolved content mismatch between A and B");
        console.log("[PASS] case 4 passed");
        console.log("[PASS] all sync/resolve scenarios passed");
    });
}
298
src/apps/cli/testdeno/test_dev_deno.md
Normal file
@@ -0,0 +1,298 @@
# CLI Deno Test Development Notes

This document provides an overview of the Deno-based compatibility tests under `src/apps/cli/testdeno/`.
The existing bash tests under `src/apps/cli/test/` are preserved, while a Windows-friendly suite is maintained in parallel.

---

## Goals

- Keep existing bash tests intact.
- Provide direct execution from Windows PowerShell.
- Establish a TypeScript (Deno) foundation for core end-to-end and integration scenarios.

---

## Directory structure

```
src/apps/cli/testdeno/
  deno.json
  CONTRIBUTING_TESTS.md
  helpers/
    backgroundCli.ts
    cli.ts
    docker.ts
    env.ts
    p2p.ts
    settings.ts
    temp.ts
  test-e2e-two-vaults-couchdb.ts
  test-push-pull.ts
  test-p2p-host.ts
  test-p2p-peers-local-relay.ts
  test-p2p-sync.ts
  test-p2p-three-nodes-conflict.ts
  test-p2p-upload-download-repro.ts
  test-e2e-two-vaults-matrix.ts
  test-setup-put-cat.ts
  test-mirror.ts
  test-sync-two-local-databases.ts
  test-sync-locked-remote.ts
```

---

## Key files

### `deno.json`

- Defines Deno tasks.
- Defines import maps for `@std/assert` and `@std/path`.

Main tasks:

- `deno task test`
- `deno task test:local`
- `deno task test:push-pull`
- `deno task test:setup-put-cat`
- `deno task test:mirror`
- `deno task test:sync-two-local`
- `deno task test:sync-locked-remote`
- `deno task test:p2p-host`
- `deno task test:p2p-peers`
- `deno task test:p2p-sync`
- `deno task test:p2p-three-nodes`
- `deno task test:p2p-upload-download`
- `deno task test:e2e-couchdb`
- `deno task test:e2e-matrix`

### `helpers/cli.ts`

- CLI execution wrappers.
- `runCli`, `runCliOrFail`, `runCliWithInput`.
- Output normalisation via `sanitiseCatStdout`.
- Comparison utilities, including `assertFilesEqual`.

This file corresponds to `run_cli` and common assertions in `test-helpers.sh`.

### `helpers/settings.ts`

- Executes `init-settings --force`.
- Marks `isConfigured = true`.
- Applies CouchDB and P2P settings.
- Applies remote synchronisation settings and P2P test tweaks.

This file corresponds to settings helpers in `test-helpers.sh`.

### `helpers/docker.ts`

- Starts, stops, and initialises CouchDB directly from Deno.
- Configures CouchDB via `fetch + retry`.
- Starts and stops the P2P relay through the same Docker runner.

Both CouchDB and P2P relay flows are bash-independent.

### `helpers/backgroundCli.ts`

- Starts long-running commands such as `p2p-host` in the background.
- Waits for readiness logs and handles termination.

### `helpers/p2p.ts`

- Determines whether a local relay should be started.
- Parses `p2p-peers` output.
- Discovers peer IDs with a fallback based on advertisement logs.

### `helpers/env.ts`

- Loads `.test.env`.
- Supports `KEY=value`, single-quoted values, and double-quoted values.
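The parsing rules above can be sketched as a small pure function. This is an illustrative sketch only (the `parseEnv` name and exact behaviour are assumptions, not the actual export of `helpers/env.ts`):

```typescript
// Illustrative sketch of the rules described above; the real helpers/env.ts
// may differ. Handles KEY=value lines, strips matching single or double
// quotes around the value, and skips blank lines, comments, and lines
// without an "=".
function parseEnv(text: string): Record<string, string> {
    const result: Record<string, string> = {};
    for (const rawLine of text.split(/\r?\n/)) {
        const line = rawLine.trim();
        if (line === "" || line.startsWith("#")) continue;
        const eq = line.indexOf("=");
        if (eq < 0) continue;
        const key = line.slice(0, eq).trim();
        let value = line.slice(eq + 1).trim();
        const quoted =
            (value.startsWith('"') && value.endsWith('"') && value.length >= 2) ||
            (value.startsWith("'") && value.endsWith("'") && value.length >= 2);
        if (quoted) {
            value = value.slice(1, -1);
        }
        result[key] = value;
    }
    return result;
}
```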

### `helpers/temp.ts`

- Provides `TempDir`.
- Uses `await using` to auto-clean temporary directories.
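For readers unfamiliar with `await using`: it guarantees async cleanup at scope exit, the same guarantee a `try/finally` gives. A minimal sketch (the class shape is illustrative, not the actual `TempDir` implementation; the disposer is called explicitly here so the example runs on runtimes without the new syntax):

```typescript
// Illustrative sketch, not the actual helpers/temp.ts. With `await using`,
// the runtime invokes an async disposer automatically at scope exit; here
// the equivalent cleanup is done via try/finally.
class SketchTempDir {
    disposed = false;
    constructor(public readonly path: string) {}
    async dispose(): Promise<void> {
        // The real TempDir would remove this.path recursively here.
        this.disposed = true;
    }
}

async function withTempDir<T>(fn: (dir: SketchTempDir) => Promise<T>): Promise<T> {
    const dir = new SketchTempDir("/tmp/livesync-sketch");
    try {
        return await fn(dir);
    } finally {
        await dir.dispose(); // `await using` performs this step implicitly
    }
}
```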

---

## Implemented tests

### `test-push-pull.ts`

- Verifies push and pull round trips.
- Uses environment variables or `.test.env` for CouchDB values.

### `test-setup-put-cat.ts`

- Verifies `setup` with full setup URI generation via `encodeSettingsToSetupURI`.
- Verifies `push`, `cat`, `ls`, `info`, `rm`, `resolve`, `cat-rev`, and `pull-rev`.
- Does not require an external remote.

### `test-mirror.ts`

- Verifies six core mirror scenarios.
- Does not require an external remote.

### `test-sync-two-local-databases.ts`

- Verifies sync between two vaults and CouchDB.
- Verifies conflict detection and resolve propagation.
- Starts Docker CouchDB by default when `LIVESYNC_START_DOCKER != 0`.

### `test-sync-locked-remote.ts`

- Updates the CouchDB milestone `locked` flag.
- Verifies sync success when unlocked.
- Verifies actionable CLI error when locked.

### `test-p2p-host.ts`

- Verifies that `p2p-host` starts and emits readiness output.

### `test-p2p-peers-local-relay.ts`

- Verifies peer discovery through a local relay.

### `test-p2p-sync.ts`

- Verifies that `p2p-sync` completes after peer discovery.

### `test-p2p-three-nodes-conflict.ts`

- Uses one host and two clients.
- Verifies conflict creation, detection via `info`, and resolution via `resolve`.

### `test-p2p-upload-download-repro.ts`

- Uses host, upload, and download nodes.
- Verifies transfer of text files and binary files, including larger files.

### `test-e2e-two-vaults-couchdb.ts`

- Verifies two-vault end-to-end scenarios on CouchDB.
- Runs both encryption-off and encryption-on cases.
- Includes conflict marker checks in `ls` and resolve propagation checks.

### `test-e2e-two-vaults-matrix.ts`

- Verifies the matrix equivalent of the bash script.
- Runs four combinations:
  - `COUCHDB-enc0`
  - `COUCHDB-enc1`
  - `MINIO-enc0`
  - `MINIO-enc1`

---

## Running tests (PowerShell)

From `src/apps/cli/testdeno`:

```powershell
cd src/apps/cli/testdeno

# Local-only set
deno task test:local

# Individual tests
deno task test:setup-put-cat
deno task test:mirror
deno task test:push-pull
deno task test:sync-locked-remote

# CouchDB-based tests
deno task test:sync-two-local
deno task test:e2e-couchdb

# P2P-based tests
deno task test:p2p-host
deno task test:p2p-peers
deno task test:p2p-sync
deno task test:p2p-three-nodes
deno task test:p2p-upload-download
deno task test:e2e-matrix
```

---

## Environment variables

### CouchDB

- `COUCHDB_URI`
- `COUCHDB_USER`
- `COUCHDB_PASSWORD`
- `COUCHDB_DBNAME`

Equivalent keys in `src/apps/cli/.test.env`:

- `hostname`
- `username`
- `password`
- `dbname`
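Assuming the key mapping above, a minimal `.test.env` might look like this (values are placeholders matching the defaults used elsewhere in these notes):

```
hostname=http://127.0.0.1:5984
username=admin
password=password
dbname=livesync-test
```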

### Behaviour switches

- `LIVESYNC_START_DOCKER=0`: use existing CouchDB.
- `REMOTE_PATH`: override target path for selected tests.
- `LIVESYNC_TEST_TEE=1`: stream CLI stdout and stderr during execution.
- `LIVESYNC_DOCKER_TEE=1`: stream Docker stdout and stderr.
- `LIVESYNC_CLI_RETRY=<n>`: retry transient network failures.
- `LIVESYNC_DEBUG_KEEP_DOCKER=1`: keep `couchdb-test` after test completion.
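The retry switches above follow the same pattern as the suite's `runWithRetry` helper: retry with linear backoff up to the configured count. A standalone sketch (names and the 500 ms base delay mirror the test code; treat the exact helper shape as illustrative):

```typescript
// Standalone sketch of the retry behaviour behind LIVESYNC_CLI_RETRY /
// LIVESYNC_SYNC_RETRY. Runs `fn` up to `retries` extra times, waiting
// baseDelayMs * attempt between tries (500ms, 1000ms, 1500ms, ...).
async function retry<T>(
    label: string,
    fn: () => Promise<T>,
    retries: number,
    baseDelayMs = 500
): Promise<T> {
    let lastErr: unknown;
    for (let attempt = 0; attempt <= retries; attempt++) {
        try {
            return await fn();
        } catch (err) {
            lastErr = err;
            if (attempt === retries) break;
            const delayMs = baseDelayMs * (attempt + 1);
            console.warn(`[WARN] ${label} failed, retrying (${attempt + 1}/${retries}) in ${delayMs}ms`);
            await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
    }
    throw lastErr;
}
```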

### Docker command selection

`helpers/docker.ts` supports command selection via environment variables.

- `LIVESYNC_DOCKER_MODE=auto` (default)
  - Windows: tries `wsl docker` first, then `docker`.
  - Non-Windows: tries `docker` first, then `wsl docker`.
- `LIVESYNC_DOCKER_MODE=native`: always uses `docker`.
- `LIVESYNC_DOCKER_MODE=wsl`: always uses `wsl docker`.
- `LIVESYNC_DOCKER_COMMAND="..."`: custom command, for example `wsl docker`.

`LIVESYNC_DOCKER_COMMAND` has priority over `LIVESYNC_DOCKER_MODE`.
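The precedence rules above reduce to a small pure function. This is an illustrative sketch of the logic only, not the actual `helpers/docker.ts` API:

```typescript
// Illustrative sketch of the selection precedence described above:
// LIVESYNC_DOCKER_COMMAND wins, then LIVESYNC_DOCKER_MODE, then a
// platform-dependent "auto" ordering of candidate commands.
function resolveDockerCandidates(
    env: Record<string, string | undefined>,
    isWindows: boolean
): string[][] {
    const custom = env["LIVESYNC_DOCKER_COMMAND"];
    if (custom) return [custom.trim().split(/\s+/)];
    const mode = env["LIVESYNC_DOCKER_MODE"] ?? "auto";
    if (mode === "native") return [["docker"]];
    if (mode === "wsl") return [["wsl", "docker"]];
    // auto: try the platform-preferred runner first, fall back to the other
    return isWindows
        ? [["wsl", "docker"], ["docker"]]
        : [["docker"], ["wsl", "docker"]];
}
```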

PowerShell examples:

```powershell
# Use Docker in WSL explicitly
$env:LIVESYNC_DOCKER_MODE = "wsl"
deno task test:sync-two-local

# Full custom command
$env:LIVESYNC_DOCKER_COMMAND = "wsl docker"
deno task test:sync-two-local
```

### P2P

- `RELAY`
- `ROOM_ID`
- `PASSPHRASE`
- `APP_ID`
- `PEERS_TIMEOUT`
- `SYNC_TIMEOUT`
- `USE_INTERNAL_RELAY=0|1`
- `TIMEOUT_SECONDS`

---

## Continuous Integration

The GitHub Actions workflow `.github/workflows/cli-deno-tests.yml` runs these tests automatically on pushes and pull requests that affect the CLI.

---

## Current limitations

- MinIO startup and the matrix coverage have been ported, and setup URI generation is covered; the remaining limitations lie elsewhere.

---

## Maintenance policy

- Existing bash tests remain available.
- Deno tests are expanded in parallel for cross-platform usage.
- New scenarios should be added through reusable helpers in `helpers/`.
@@ -1,2 +1,30 @@
#!/bin/bash
docker run -d --name relay-test -p 4000:8080 scsibug/nostr-rs-relay:latest
set -e

docker run -d --name relay-test -p 4000:7777 \
    --tmpfs /app/strfry-db:rw,size=256m \
    --entrypoint sh \
    ghcr.io/hoytech/strfry:latest \
    -lc 'cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"

relay {
    bind = "0.0.0.0"
    port = 7777
    nofiles = 100000

    info {
        name = "livesync test relay"
        description = "local relay for livesync p2p tests"
    }

    maxWebsocketPayloadSize = 131072
    autoPingSeconds = 55

    writePolicy {
        plugin = ""
    }
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay'

@@ -12,8 +12,7 @@ const defaultExternal = [
    "pouchdb-adapter-leveldb",
    "commander",
    "punycode",
    "node-datachannel",
    "node-datachannel/polyfill",
    "werift",
];
export default defineConfig({
    plugins: [svelte()],
@@ -52,7 +51,7 @@ export default defineConfig({
            if (id === "fs" || id === "fs/promises" || id === "path" || id === "crypto" || id === "worker_threads")
                return true;
            if (id.startsWith("pouchdb-")) return true;
            if (id.startsWith("node-datachannel")) return true;
            if (id.startsWith("werift")) return true;
            if (id.startsWith("node:")) return true;
            return false;
        },

1
src/apps/webapp/.gitignore
vendored
@@ -2,3 +2,4 @@ node_modules
dist
.DS_Store
*.log
.nyc_output
58
src/apps/webapp/Dockerfile
Normal file
@@ -0,0 +1,58 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync WebApp — Docker image
# Browser-based vault sync using the FileSystem API, served by nginx.
#
# Build (from the repository root):
#   docker build -f src/apps/webapp/Dockerfile -t livesync-webapp .
#
# Run:
#   docker run --rm -p 8080:80 livesync-webapp
#   Then open http://localhost:8080/webapp.html in Chrome/Edge 86+.
#
# Notes:
# - This image serves purely static files; no server-side code is involved.
# - The FileSystem API is a browser feature and requires Chrome/Edge 86+ or
#   Safari 15.2+ (limited). Firefox is not supported.
# - CouchDB / S3 connections are made directly from the browser; the container
#   only serves HTML/JS/CSS assets.

# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to install dependencies and build the Vite bundle.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder

WORKDIR /build

# Install workspace dependencies (all apps share the root package.json)
COPY package.json ./
RUN npm install

# Copy the full source tree and build the WebApp bundle
COPY . .
RUN cd src/apps/webapp && npm run build

# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime
# Minimal nginx image that serves the static build output.
# ─────────────────────────────────────────────────────────────────────────────
FROM nginx:stable-alpine

# Remove the default nginx welcome page
RUN rm -rf /usr/share/nginx/html/*

# Copy the built static assets
COPY --from=builder /build/src/apps/webapp/dist /usr/share/nginx/html

# Redirect the root to webapp.html so the app loads on first visit
RUN printf 'server {\n\
    listen 80;\n\
    root /usr/share/nginx/html;\n\
    index webapp.html;\n\
    location / {\n\
        try_files $uri $uri/ =404;\n\
    }\n\
}\n' > /etc/nginx/conf.d/default.conf

EXPOSE 80
@@ -55,8 +55,8 @@ The built files will be in the `dist` directory.

### Usage

1. Open the webapp in your browser
2. Grant directory access when prompted
1. Open the webapp in your browser (`webapp.html`)
2. Select a vault from history or grant access to a new directory
3. Configure CouchDB connection by editing `.livesync/settings.json` in your vault
    - You can also copy data.json from Obsidian's plug-in folder.

@@ -98,8 +98,11 @@ webapp/
│   ├── ServiceFileAccessImpl.ts
│   ├── DatabaseFileAccess.ts
│   └── FSAPIServiceModules.ts
├── main.ts              # Application entry point
├── index.html           # HTML entry
├── bootstrap.ts         # Vault picker + startup orchestration
├── main.ts              # LiveSync core bootstrap (after vault selected)
├── vaultSelector.ts     # FileSystem handle history and permission flow
├── webapp.html          # Main HTML entry
├── index.html           # Redirect entry for compatibility
├── package.json
├── vite.config.ts
└── README.md

139
src/apps/webapp/bootstrap.ts
Normal file
@@ -0,0 +1,139 @@
import { LiveSyncWebApp } from "./main";
import { VaultHistoryStore, type VaultHistoryItem } from "./vaultSelector";

const historyStore = new VaultHistoryStore();
let app: LiveSyncWebApp | null = null;

function getRequiredElement<T extends HTMLElement>(id: string): T {
    const element = document.getElementById(id);
    if (!element) {
        throw new Error(`Missing element: #${id}`);
    }
    return element as T;
}

function setStatus(kind: "info" | "warning" | "error" | "success", message: string): void {
    const statusEl = getRequiredElement<HTMLDivElement>("status");
    statusEl.className = kind;
    statusEl.textContent = message;
}

function setBusyState(isBusy: boolean): void {
    const pickNewBtn = getRequiredElement<HTMLButtonElement>("pick-new-vault");
    pickNewBtn.disabled = isBusy;

    const historyButtons = document.querySelectorAll<HTMLButtonElement>(".vault-item button");
    historyButtons.forEach((button) => {
        button.disabled = isBusy;
    });
}

function formatLastUsed(unixMillis: number): string {
    if (!unixMillis) {
        return "unknown";
    }
    return new Date(unixMillis).toLocaleString();
}

async function renderHistoryList(): Promise<VaultHistoryItem[]> {
    const listEl = getRequiredElement<HTMLDivElement>("vault-history-list");
    const emptyEl = getRequiredElement<HTMLParagraphElement>("vault-history-empty");

    const [items, lastUsedId] = await Promise.all([historyStore.getVaultHistory(), historyStore.getLastUsedVaultId()]);

    listEl.innerHTML = "";
    emptyEl.classList.toggle("is-hidden", items.length > 0);

    for (const item of items) {
        const row = document.createElement("div");
        row.className = "vault-item";

        const info = document.createElement("div");
        info.className = "vault-item-info";

        const name = document.createElement("div");
        name.className = "vault-item-name";
        name.textContent = item.name;

        const meta = document.createElement("div");
        meta.className = "vault-item-meta";
        const label = item.id === lastUsedId ? "Last used" : "Used";
        meta.textContent = `${label}: ${formatLastUsed(item.lastUsedAt)}`;

        info.append(name, meta);

        const useButton = document.createElement("button");
        useButton.type = "button";
        useButton.textContent = "Use this vault";
        useButton.addEventListener("click", () => {
            void startWithHistory(item);
        });

        row.append(info, useButton);
        listEl.appendChild(row);
    }

    return items;
}

async function startWithHandle(handle: FileSystemDirectoryHandle): Promise<void> {
    setStatus("info", `Starting LiveSync with vault: ${handle.name}`);
    app = new LiveSyncWebApp(handle);
    await app.initialize();

    const selectorEl = getRequiredElement<HTMLDivElement>("vault-selector");
    selectorEl.classList.add("is-hidden");
}

async function startWithHistory(item: VaultHistoryItem): Promise<void> {
    setBusyState(true);
    try {
        const handle = await historyStore.activateHistoryItem(item);
        await startWithHandle(handle);
    } catch (error) {
        console.error("[Directory] Failed to open history vault:", error);
        setStatus("error", `Failed to open saved vault: ${String(error)}`);
        setBusyState(false);
    }
}

async function startWithNewPicker(): Promise<void> {
    setBusyState(true);
    try {
        const handle = await historyStore.pickNewVault();
        await startWithHandle(handle);
    } catch (error) {
        console.error("[Directory] Failed to pick vault:", error);
        setStatus("warning", `Vault selection was cancelled or failed: ${String(error)}`);
        setBusyState(false);
    }
}

async function initializeVaultSelector(): Promise<void> {
    setStatus("info", "Select a vault folder to start LiveSync.");

    const pickNewBtn = getRequiredElement<HTMLButtonElement>("pick-new-vault");
    pickNewBtn.addEventListener("click", () => {
        void startWithNewPicker();
    });

    await renderHistoryList();
}

window.addEventListener("load", async () => {
    try {
        await initializeVaultSelector();
    } catch (error) {
        console.error("Failed to initialize vault selector:", error);
        setStatus("error", `Initialization failed: ${String(error)}`);
    }
});

window.addEventListener("beforeunload", () => {
    void app?.shutdown();
});

(window as any).livesyncApp = {
    getApp: () => app,
    historyStore,
};
@@ -3,207 +3,10 @@
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Self-hosted LiveSync WebApp</title>
    <style>
        * {
            margin: 0;
            padding: 0;
            box-sizing: border-box;
        }

        body {
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            min-height: 100vh;
            display: flex;
            align-items: center;
            justify-content: center;
            padding: 20px;
        }

        .container {
            background: white;
            border-radius: 12px;
            box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
            padding: 40px;
            max-width: 600px;
            width: 100%;
        }

        h1 {
            color: #333;
            margin-bottom: 10px;
            font-size: 28px;
        }

        .subtitle {
            color: #666;
            margin-bottom: 30px;
            font-size: 14px;
        }

        #status {
            padding: 15px;
            border-radius: 8px;
            margin-bottom: 20px;
            font-size: 14px;
            font-weight: 500;
        }

        #status.error {
            background: #fee;
            color: #c33;
            border: 1px solid #fcc;
        }

        #status.warning {
            background: #ffeaa7;
            color: #d63031;
            border: 1px solid #fdcb6e;
        }

        #status.success {
            background: #d4edda;
            color: #155724;
            border: 1px solid #c3e6cb;
        }

        #status.info {
            background: #d1ecf1;
            color: #0c5460;
            border: 1px solid #bee5eb;
        }

        .info-section {
            margin-top: 30px;
            padding: 20px;
            background: #f8f9fa;
            border-radius: 8px;
        }

        .info-section h2 {
            font-size: 18px;
            margin-bottom: 15px;
            color: #333;
        }

        .info-section ul {
            list-style: none;
            padding-left: 0;
        }

        .info-section li {
            padding: 8px 0;
            color: #666;
            font-size: 14px;
        }

        .info-section li::before {
            content: "•";
            color: #667eea;
            font-weight: bold;
            display: inline-block;
            width: 1em;
            margin-left: -1em;
            padding-right: 0.5em;
        }

        .feature-list {
            margin-top: 20px;
        }

        .feature-list h3 {
            font-size: 16px;
            margin-bottom: 10px;
            color: #444;
        }

        code {
            background: #e9ecef;
            padding: 2px 6px;
            border-radius: 4px;
            font-family: 'Courier New', monospace;
            font-size: 13px;
        }

        .footer {
            margin-top: 30px;
            text-align: center;
            color: #999;
            font-size: 12px;
        }

        .footer a {
            color: #667eea;
            text-decoration: none;
        }

        .footer a:hover {
            text-decoration: underline;
        }

        .console-link {
            margin-top: 20px;
            text-align: center;
            font-size: 13px;
            color: #666;
        }

        @media (max-width: 600px) {
            .container {
                padding: 30px 20px;
            }

            h1 {
                font-size: 24px;
            }
        }
    </style>
    <title>Self-hosted LiveSync WebApp Launcher</title>
    <meta http-equiv="refresh" content="0; url=./webapp.html">
</head>
<body>
    <div class="container">
        <h1>🔄 Self-hosted LiveSync</h1>
        <p class="subtitle">Browser-based Self-hosted LiveSync using FileSystem API</p>

        <div id="status" class="info">
            Initialising...
        </div>

        <div class="info-section">
            <h2>About This Application</h2>
            <ul>
                <li>Runs entirely in your browser</li>
                <li>Uses FileSystem API to access your local vault</li>
                <li>Syncs with CouchDB server (like Obsidian plugin)</li>
                <li>Settings stored in <code>.livesync/settings.json</code></li>
                <li>Real-time file watching with FileSystemObserver (Chrome 124+)</li>
            </ul>
        </div>

        <div class="info-section">
            <h2>How to Use</h2>
            <ul>
                <li>Grant directory access when prompted</li>
                <li>Create <code>.livesync/settings.json</code> in your vault folder. (Compatible with Obsidian's Self-hosted LiveSync)</li>
                <li>Add your CouchDB connection details</li>
                <li>Your files will be synced automatically</li>
            </ul>
        </div>

        <div class="console-link">
            💡 Open browser console (F12) for detailed logs
        </div>

        <div class="footer">
            <p>
                Powered by
                <a href="https://github.com/vrtmrz/obsidian-livesync" target="_blank">
                    Self-hosted LiveSync
                </a>
            </p>
        </div>
    </div>

    <script type="module" src="./main.ts"></script>
    <p>Redirecting to <a href="./webapp.html">WebApp</a>...</p>
</body>
</html>

@@ -13,10 +13,12 @@ import type { InjectableSettingService } from "@lib/services/implements/injectab
import { useOfflineScanner } from "@lib/serviceFeatures/offlineScanner";
import { useRedFlagFeatures } from "@/serviceFeatures/redFlag";
import { useCheckRemoteSize } from "@lib/serviceFeatures/checkRemoteSize";
import { useSetupURIFeature } from "@lib/serviceFeatures/setupObsidian/setupUri";
import { useRemoteConfiguration } from "@lib/serviceFeatures/remoteConfig";
import { SetupManager } from "@/modules/features/SetupManager";
// import { ModuleObsidianSettingsAsMarkdown } from "@/modules/features/ModuleObsidianSettingAsMarkdown";
import { ModuleSetupObsidian } from "@/modules/features/ModuleSetupObsidian";
// import { ModuleObsidianMenu } from "@/modules/essentialObsidian/ModuleObsidianMenu";
import { useSetupManagerHandlersFeature } from "@/serviceFeatures/setupObsidian/setupManagerHandlers";
import { useP2PReplicatorCommands } from "@/lib/src/replication/trystero/useP2PReplicatorCommands";
import { useP2PReplicatorFeature } from "@/lib/src/replication/trystero/useP2PReplicatorFeature";

const SETTINGS_DIR = ".livesync";
const SETTINGS_FILE = "settings.json";
@@ -47,21 +49,18 @@ const DEFAULT_SETTINGS: Partial<ObsidianLiveSyncSettings> = {
};

class LiveSyncWebApp {
    private rootHandle: FileSystemDirectoryHandle | null = null;
    private rootHandle: FileSystemDirectoryHandle;
    private core: LiveSyncBaseCore<ServiceContext, any> | null = null;
    private serviceHub: BrowserServiceHub<ServiceContext> | null = null;

    constructor(rootHandle: FileSystemDirectoryHandle) {
        this.rootHandle = rootHandle;
    }

    async initialize() {
        console.log("Self-hosted LiveSync WebApp");
        console.log("Initializing...");

        // Request directory access
        await this.requestDirectoryAccess();

        if (!this.rootHandle) {
            throw new Error("Failed to get directory access");
        }

        console.log(`Vault directory: ${this.rootHandle.name}`);

        // Create service context and hub
@@ -98,18 +97,26 @@ class LiveSyncWebApp {
            return DEFAULT_SETTINGS as ObsidianLiveSyncSettings;
        });

        // App lifecycle handlers
        this.serviceHub.appLifecycle.scheduleRestart.setHandler(async () => {
            console.log("[AppLifecycle] Restart requested");
            await this.shutdown();
            await this.initialize();
            setTimeout(() => {
                window.location.reload();
            }, 1000);
        });

        // Create LiveSync core
        this.core = new LiveSyncBaseCore(
            this.serviceHub,
            (core, serviceHub) => {
                return initialiseServiceModulesFSAPI(this.rootHandle!, core, serviceHub);
                return initialiseServiceModulesFSAPI(this.rootHandle, core, serviceHub);
            },
            (core) => [
                // new ModuleObsidianEvents(this, core),
                // new ModuleObsidianSettingDialogue(this, core),
                // new ModuleObsidianMenu(core),
                new ModuleSetupObsidian(core),
                new SetupManager(core),
                // new ModuleObsidianSettingsAsMarkdown(core),
                // new ModuleLog(this, core),
                // new ModuleObsidianDocumentHistory(this, core),
@@ -118,13 +125,20 @@ class LiveSyncWebApp {
                // new ModuleDev(this, core),
                // new ModuleReplicateTest(this, core),
                // new ModuleIntegratedTest(this, core),
                // new SetupManager(core),
                // new ModuleReplicatorP2P(core), // Register P2P replicator for CLI (useP2PReplicator is not used here)
                new SetupManager(core),
            ],
            () => [], // No add-ons
            (core) => {
                useOfflineScanner(core);
                useRedFlagFeatures(core);
                useCheckRemoteSize(core);
                useRemoteConfiguration(core);
                const replicator = useP2PReplicatorFeature(core);
                useP2PReplicatorCommands(core, replicator);
                const setupManager = core.getModule(SetupManager);
                useSetupManagerHandlersFeature(core, setupManager);
                useSetupURIFeature(core);
            }
        );

@@ -133,8 +147,6 @@ class LiveSyncWebApp {
    }

    private async saveSettingsToFile(data: ObsidianLiveSyncSettings): Promise<void> {
        if (!this.rootHandle) return;

        try {
            // Create .livesync directory if it doesn't exist
            const livesyncDir = await this.rootHandle.getDirectoryHandle(SETTINGS_DIR, { create: true });
@@ -151,8 +163,6 @@ class LiveSyncWebApp {
    }

    private async loadSettingsFromFile(): Promise<Partial<ObsidianLiveSyncSettings> | null> {
        if (!this.rootHandle) return null;

        try {
            const livesyncDir = await this.rootHandle.getDirectoryHandle(SETTINGS_DIR);
            const fileHandle = await livesyncDir.getFileHandle(SETTINGS_FILE);
@@ -165,90 +175,6 @@ class LiveSyncWebApp {
        }
    }

    private async requestDirectoryAccess() {
        try {
            // Check if we have a cached directory handle
            const cached = await this.loadCachedDirectoryHandle();
            if (cached) {
                // Verify permission (cast to any for compatibility)
                try {
                    const permission = await (cached as any).queryPermission({ mode: "readwrite" });
                    if (permission === "granted") {
                        this.rootHandle = cached;
                        console.log("[Directory] Using cached directory handle");
                        return;
                    }
                } catch (e) {
                    // queryPermission might not be supported, try to use anyway
                    console.log("[Directory] Could not verify permission, requesting new access");
                }
            }

            // Request new directory access
            console.log("[Directory] Requesting directory access...");
            this.rootHandle = await (window as any).showDirectoryPicker({
                mode: "readwrite",
                startIn: "documents",
            });

            // Save the handle for next time
            await this.saveCachedDirectoryHandle(this.rootHandle);
            console.log("[Directory] Directory access granted");
        } catch (error) {
            console.error("[Directory] Failed to get directory access:", error);
            throw error;
        }
    }

    private async saveCachedDirectoryHandle(handle: FileSystemDirectoryHandle) {
        try {
            // Use IndexedDB to store the directory handle
            const db = await this.openHandleDB();
            const transaction = db.transaction(["handles"], "readwrite");
            const store = transaction.objectStore("handles");
            await new Promise((resolve, reject) => {
                const request = store.put(handle, "rootHandle");
                request.onsuccess = resolve;
                request.onerror = reject;
            });
            db.close();
        } catch (error) {
            console.error("[Directory] Failed to cache handle:", error);
        }
    }

    private async loadCachedDirectoryHandle(): Promise<FileSystemDirectoryHandle | null> {
        try {
            const db = await this.openHandleDB();
            const transaction = db.transaction(["handles"], "readonly");
            const store = transaction.objectStore("handles");
            const handle = await new Promise<FileSystemDirectoryHandle | null>((resolve, reject) => {
                const request = store.get("rootHandle");
                request.onsuccess = () => resolve(request.result || null);
                request.onerror = reject;
            });
            db.close();
            return handle;
        } catch (error) {
            console.error("[Directory] Failed to load cached handle:", error);
            return null;
        }
    }

    private async openHandleDB(): Promise<IDBDatabase> {
        return new Promise((resolve, reject) => {
            const request = indexedDB.open("livesync-webapp-handles", 1);
            request.onerror = () => reject(request.error);
            request.onsuccess = () => resolve(request.result);
            request.onupgradeneeded = (event) => {
                const db = (event.target as IDBOpenDBRequest).result;
                if (!db.objectStoreNames.contains("handles")) {
                    db.createObjectStore("handles");
                }
            };
        });
    }

    private async start() {
        if (!this.core) {
            throw new Error("Core not initialized");
@@ -333,21 +259,4 @@ class LiveSyncWebApp {
    }
}

// Initialize on load
const app = new LiveSyncWebApp();

window.addEventListener("load", async () => {
    try {
        await app.initialize();
    } catch (error) {
        console.error("Failed to initialize:", error);
    }
});

// Handle page unload
window.addEventListener("beforeunload", () => {
    void app.shutdown();
});

// Export for debugging
(window as any).livesyncApp = app;
export { LiveSyncWebApp };

@@ -7,6 +7,8 @@
    "scripts": {
        "dev": "vite",
        "build": "vite build",
        "build:docker": "docker build -f Dockerfile -t livesync-webapp ../../..",
        "run:docker": "docker run -p 8002:80 livesync-webapp",
        "preview": "vite preview"
    },
    "dependencies": {},

81  src/apps/webapp/playwright.config.ts  Normal file
@@ -0,0 +1,81 @@
import { defineConfig, devices } from "@playwright/test";
import * as path from "path";
import * as fs from "fs";
import { fileURLToPath } from "url";

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// ---------------------------------------------------------------------------
// Load environment variables from .test.env (root) so that CouchDB
// connection details are visible to the test process.
// ---------------------------------------------------------------------------
function loadEnvFile(envPath: string): Record<string, string> {
    const result: Record<string, string> = {};
    if (!fs.existsSync(envPath)) return result;
    const lines = fs.readFileSync(envPath, "utf-8").split("\n");
    for (const line of lines) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#")) continue;
        const eq = trimmed.indexOf("=");
        if (eq < 0) continue;
        const key = trimmed.slice(0, eq).trim();
        const val = trimmed.slice(eq + 1).trim();
        result[key] = val;
    }
    return result;
}

// __dirname is src/apps/webapp — root is three levels up
const ROOT = path.resolve(__dirname, "../../..");
const envVars = {
    ...loadEnvFile(path.join(ROOT, ".env")),
    ...loadEnvFile(path.join(ROOT, ".test.env")),
};

// Make the loaded variables available to all test files via process.env.
for (const [k, v] of Object.entries(envVars)) {
    if (!(k in process.env)) {
        process.env[k] = v;
    }
}

export default defineConfig({
    testDir: "./test",
    // Give each test plenty of time for replication round-trips.
    timeout: 120_000,
    expect: { timeout: 30_000 },
    // Run test files sequentially; the tests themselves manage two contexts.
    fullyParallel: false,
    workers: 1,
    reporter: "list",

    use: {
        baseURL: "http://localhost:3000",
        // Use Chromium for OPFS and FileSystem API support.
        ...devices["Desktop Chrome"],
        headless: true,
        // Launch args to match the main vitest browser config.
        launchOptions: {
            args: ["--js-flags=--expose-gc"],
        },
    },

    projects: [
        {
            name: "chromium",
            use: { ...devices["Desktop Chrome"] },
        },
    ],

    // Start the vite dev server before running the tests.
    webServer: {
        command: "npx vite --port 3000",
        url: "http://localhost:3000",
        // Re-use a running dev server when developing locally.
        reuseExistingServer: !process.env.CI,
        timeout: 30_000,
        // Run from the webapp directory so vite finds its config.
        cwd: __dirname,
    },
});
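As an aside, the `.env` parsing rules used by `loadEnvFile` (skip blank lines and `#` comments, split on the first `=` only, trim both key and value) can be exercised standalone. This is an illustrative sketch, not part of the diff; the function name `parseEnv` and the sample input are invented here:

```typescript
// Illustrative re-implementation of loadEnvFile's line-parsing rules,
// operating on a string instead of a file so it runs anywhere.
function parseEnv(text: string): Record<string, string> {
    const result: Record<string, string> = {};
    for (const line of text.split("\n")) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#")) continue; // blank / comment
        const eq = trimmed.indexOf("=");
        if (eq < 0) continue; // not a key=value line
        result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
    }
    return result;
}

const parsed = parseEnv("# CouchDB\nhostname = http://localhost:5984\npassword=a=b\n\nnoequals\n");
console.log(parsed["hostname"]); // http://localhost:5984
console.log(parsed["password"]); // a=b (only the first "=" splits)
```

Note how the spread in `envVars` above lets a later file win: `.test.env` values overwrite any duplicates loaded from `.env`.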
203  src/apps/webapp/test-entry.ts  Normal file
@@ -0,0 +1,203 @@
/**
 * LiveSync WebApp E2E test entry point.
 *
 * When served by vite dev server (at /test.html), this module wires up
 * `window.livesyncTest`, a plain JS API that Playwright tests can call via
 * `page.evaluate()`. All methods are async and serialisation-safe.
 *
 * Vault storage is backed by OPFS so no `showDirectoryPicker()` interaction
 * is required, making it fully headless-compatible.
 */

import { LiveSyncWebApp } from "./main";
import type { ObsidianLiveSyncSettings } from "@lib/common/types";
import type { FilePathWithPrefix } from "@lib/common/types";

// --------------------------------------------------------------------------
// Internal state – one app instance per page / browser context
// --------------------------------------------------------------------------
let app: LiveSyncWebApp | null = null;

// --------------------------------------------------------------------------
// Helpers
// --------------------------------------------------------------------------

/** Strip the "plain:" / "enc:" / … prefix used internally in PouchDB paths. */
function stripPrefix(raw: string): string {
    return raw.replace(/^[^:]+:/, "");
}

/**
 * Poll every 300 ms until all known processing queues are drained, or until
 * the timeout elapses. Mirrors `waitForIdle` in the existing vitest harness.
 */
async function waitForIdle(core: any, timeoutMs = 60_000): Promise<void> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        const q =
            (core.services?.replication?.databaseQueueCount?.value ?? 0) +
            (core.services?.fileProcessing?.totalQueued?.value ?? 0) +
            (core.services?.fileProcessing?.batched?.value ?? 0) +
            (core.services?.fileProcessing?.processing?.value ?? 0) +
            (core.services?.replication?.storageApplyingCount?.value ?? 0);
        if (q === 0) return;
        await new Promise<void>((r) => setTimeout(r, 300));
    }
    throw new Error(`waitForIdle timed out after ${timeoutMs} ms`);
}

function getCore(): any {
    const core = (app as any)?.core;
    if (!core) throw new Error("Vault not initialised – call livesyncTest.init() first");
    return core;
}

// --------------------------------------------------------------------------
// Public test API
// --------------------------------------------------------------------------

export interface LiveSyncTestAPI {
    /**
     * Initialise a vault in OPFS under the given name and apply `settings`.
     * Any previous contents of the OPFS directory are wiped first so each
     * test run starts clean.
     */
    init(vaultName: string, settings: Partial<ObsidianLiveSyncSettings>): Promise<void>;

    /**
     * Write `content` to the local PouchDB under `vaultPath` (equivalent to
     * the CLI `put` command). Waiting for the DB write to finish is
     * included; you still need to call `replicate()` to push to remote.
     */
    putFile(vaultPath: string, content: string): Promise<boolean>;

    /**
     * Mark `vaultPath` as deleted in the local PouchDB (equivalent to CLI
     * `rm`). Call `replicate()` afterwards to propagate to remote.
     */
    deleteFile(vaultPath: string): Promise<boolean>;

    /**
     * Run one full replication cycle (push + pull) against the remote CouchDB,
     * then wait for the local storage-application queue to drain.
     */
    replicate(): Promise<boolean>;

    /**
     * Wait until all processing queues are idle. Usually not needed after
     * `putFile` / `deleteFile` since those already await, but useful when
     * testing results after `replicate()`.
     */
    waitForIdle(timeoutMs?: number): Promise<void>;

    /**
     * Return metadata for `vaultPath` from the local database, or `null` if
     * not found / deleted.
     */
    getInfo(vaultPath: string): Promise<{
        path: string;
        revision: string;
        conflicts: string[];
        size: number;
        mtime: number;
    } | null>;

    /** Convenience wrapper: returns true when the doc has ≥1 conflict revision. */
    hasConflict(vaultPath: string): Promise<boolean>;

    /** Tear down the current app instance. */
    shutdown(): Promise<void>;
}

// --------------------------------------------------------------------------
// Implementation
// --------------------------------------------------------------------------

const livesyncTest: LiveSyncTestAPI = {
    async init(vaultName: string, settings: Partial<ObsidianLiveSyncSettings>): Promise<void> {
        // Clean up any stale OPFS data from previous runs.
        const opfsRoot = await navigator.storage.getDirectory();
        try {
            await opfsRoot.removeEntry(vaultName, { recursive: true });
        } catch {
            // directory did not exist – that's fine
        }
        const vaultDir = await opfsRoot.getDirectoryHandle(vaultName, { create: true });

        // Pre-write settings so they are loaded during initialise().
        const livesyncDir = await vaultDir.getDirectoryHandle(".livesync", { create: true });
        const settingsFile = await livesyncDir.getFileHandle("settings.json", { create: true });
        const writable = await settingsFile.createWritable();
        await writable.write(JSON.stringify(settings));
        await writable.close();

        app = new LiveSyncWebApp(vaultDir);
        await app.initialize();

        // Give background startup tasks a moment to settle.
        await waitForIdle(getCore(), 30_000);
    },

    async putFile(vaultPath: string, content: string): Promise<boolean> {
        const core = getCore();
        const result = await core.serviceModules.databaseFileAccess.storeContent(
            vaultPath as FilePathWithPrefix,
            content
        );
        await waitForIdle(core);
        return result !== false;
    },

    async deleteFile(vaultPath: string): Promise<boolean> {
        const core = getCore();
        const result = await core.serviceModules.databaseFileAccess.delete(vaultPath as FilePathWithPrefix);
        await waitForIdle(core);
        return result !== false;
    },

    async replicate(): Promise<boolean> {
        const core = getCore();
        const result = await core.services.replication.replicate(true);
        // After replicate() resolves, remote docs may still be queued for
        // local storage application – wait until all queues are drained.
        await waitForIdle(core);
        return result !== false;
    },

    async waitForIdle(timeoutMs?: number): Promise<void> {
        await waitForIdle(getCore(), timeoutMs ?? 60_000);
    },

    async getInfo(vaultPath: string) {
        const core = getCore();
        const db = core.services?.database;
        for await (const doc of db.localDatabase.findAllNormalDocs({ conflicts: true })) {
            if (doc._deleted || doc.deleted) continue;
            const docPath = stripPrefix(doc.path ?? "");
            if (docPath !== vaultPath) continue;
            return {
                path: docPath,
                revision: (doc._rev as string) ?? "",
                conflicts: (doc._conflicts as string[]) ?? [],
                size: (doc.size as number) ?? 0,
                mtime: (doc.mtime as number) ?? 0,
            };
        }
        return null;
    },

    async hasConflict(vaultPath: string): Promise<boolean> {
        const info = await livesyncTest.getInfo(vaultPath);
        return (info?.conflicts?.length ?? 0) > 0;
    },

    async shutdown(): Promise<void> {
        if (app) {
            await app.shutdown();
            app = null;
        }
    },
};

// Expose on window for Playwright page.evaluate() calls.
(window as any).livesyncTest = livesyncTest;
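The queue polling in `waitForIdle` is a general pattern: sum the pending counters, return when the sum hits zero, give up at a deadline. A minimal standalone sketch with the probe abstracted into a callback (the name `waitUntilDrained`, the fake queue, and the timings are illustrative, not from the source):

```typescript
// Poll `probe` until it reports zero pending items, or fail at the deadline.
// Same shape as waitForIdle above, minus the LiveSync-specific counters.
async function waitUntilDrained(probe: () => number, timeoutMs = 1_000, intervalMs = 5): Promise<void> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (probe() === 0) return;
        await new Promise<void>((r) => setTimeout(r, intervalMs));
    }
    throw new Error(`waitUntilDrained timed out after ${timeoutMs} ms`);
}

// Example: a fake queue that drains after three polls.
let pending = 3;
waitUntilDrained(() => Math.max(0, pending--)).then(() => console.log("drained"));
```

Abstracting the probe this way also makes the drain logic unit-testable without spinning up a core instance.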
26  src/apps/webapp/test.html  Normal file
@@ -0,0 +1,26 @@
<!doctype html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1.0" />
        <title>LiveSync WebApp – E2E Test Page</title>
        <style>
            body {
                font-family: monospace;
                padding: 1rem;
            }
            #status {
                margin-top: 1rem;
                padding: 0.5rem;
                border: 1px solid #ccc;
            }
        </style>
    </head>
    <body>
        <h2>LiveSync WebApp E2E</h2>
        <p>This page is used by Playwright tests only. <code>window.livesyncTest</code> is exposed by the script below.</p>
        <!-- status div required by LiveSyncWebApp internal helpers -->
        <div id="status">Loading…</div>
        <script type="module" src="/test-entry.ts"></script>
    </body>
</html>
294  src/apps/webapp/test/e2e.spec.ts  Normal file
@@ -0,0 +1,294 @@
/**
 * WebApp E2E tests – two-vault scenarios.
 *
 * Each vault (A and B) runs in its own browser context so that JavaScript
 * global state (including Trystero's global signalling tables) is fully
 * isolated. The two vaults communicate only through the shared remote
 * CouchDB database.
 *
 * Vault storage is OPFS-backed – no file-picker interaction needed.
 *
 * Prerequisites:
 *   - A reachable CouchDB instance whose connection details are in .test.env
 *     (read automatically by playwright.config.ts).
 *
 * How to run:
 *   cd src/apps/webapp && npm run test:e2e
 */

import { test, expect, type BrowserContext, type Page, type TestInfo } from "@playwright/test";
import type { LiveSyncTestAPI } from "../test-entry";
import { mkdirSync, writeFileSync } from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// ---------------------------------------------------------------------------
// Settings helpers
// ---------------------------------------------------------------------------

function requireEnv(name: string): string {
    const v = process.env[name];
    if (!v) throw new Error(`Missing required env variable: ${name}`);
    return v;
}

async function ensureCouchDbDatabase(uri: string, user: string, pass: string, dbName: string): Promise<void> {
    const base = uri.replace(/\/+$/, "");
    const dbUrl = `${base}/${encodeURIComponent(dbName)}`;
    const auth = Buffer.from(`${user}:${pass}`, "utf-8").toString("base64");
    const response = await fetch(dbUrl, {
        method: "PUT",
        headers: {
            Authorization: `Basic ${auth}`,
        },
    });

    // 201: created, 202: accepted, 412: already exists
    if (response.status === 201 || response.status === 202 || response.status === 412) {
        return;
    }

    const body = await response.text().catch(() => "");
    throw new Error(`Failed to ensure CouchDB database (${response.status}): ${body}`);
}

function buildSettings(dbName: string): Record<string, unknown> {
    return {
        // Remote database (shared between A and B – this is the replication target)
        couchDB_URI: requireEnv("hostname").replace(/\/+$/, ""),
        couchDB_USER: process.env["username"] ?? "",
        couchDB_PASSWORD: process.env["password"] ?? "",
        couchDB_DBNAME: dbName,

        // Core behaviour
        isConfigured: true,
        liveSync: false,
        syncOnSave: false,
        syncOnStart: false,
        periodicReplication: false,
        gcDelay: 0,
        savingDelay: 0,
        notifyThresholdOfRemoteStorageSize: 0,

        // Encryption off for test simplicity
        encrypt: false,

        // Disable plugin/hidden-file sync (not needed in webapp)
        usePluginSync: false,
        autoSweepPlugins: false,
        autoSweepPluginsPeriodic: false,

        // Auto-accept peers
        P2P_AutoAcceptingPeers: "~.*",
    };
}

// ---------------------------------------------------------------------------
// Test-page helpers
// ---------------------------------------------------------------------------

/** Navigate to the test entry page and wait for `window.livesyncTest`. */
async function openTestPage(ctx: BrowserContext): Promise<Page> {
    const page = await ctx.newPage();
    await page.goto("/test.html");
    await page.waitForFunction(() => !!(window as any).livesyncTest, { timeout: 20_000 });
    return page;
}

/** Type-safe wrapper – calls `window.livesyncTest.<method>(...args)` in the page. */
async function call<M extends keyof LiveSyncTestAPI>(
    page: Page,
    method: M,
    ...args: Parameters<LiveSyncTestAPI[M]>
): Promise<Awaited<ReturnType<LiveSyncTestAPI[M]>>> {
    const invoke = () =>
        page.evaluate(([m, a]) => (window as any).livesyncTest[m](...a), [method, args] as [
            string,
            unknown[],
        ]) as Promise<Awaited<ReturnType<LiveSyncTestAPI[M]>>>;

    try {
        return await invoke();
    } catch (ex: any) {
        const message = String(ex?.message ?? ex);
        // Some startup flows may trigger one page reload; recover once.
        if (
            message.includes("Execution context was destroyed") ||
            message.includes("Most likely the page has been closed")
        ) {
            await page.waitForFunction(() => !!(window as any).livesyncTest, { timeout: 20_000 });
            return await invoke();
        }
        throw ex;
    }
}

async function dumpCoverage(page: Page | undefined, label: string, testInfo: TestInfo): Promise<void> {
    if (!process.env.PW_COVERAGE || !page || page.isClosed()) {
        return;
    }
    const cov = await page
        .evaluate(() => {
            const data = (window as any).__coverage__;
            if (!data) return null;
            // Reset between tests to avoid runaway accumulation.
            (window as any).__coverage__ = {};
            return data;
        })
        .catch(() => null!);
    if (!cov) return;
    if (typeof cov === "object" && Object.keys(cov as Record<string, unknown>).length === 0) {
        return;
    }

    const outDir = path.resolve(__dirname, "../.nyc_output");
    mkdirSync(outDir, { recursive: true });
    const name = `${testInfo.testId.replace(/[^a-zA-Z0-9_-]/g, "_")}-${label}.json`;
    writeFileSync(path.join(outDir, name), JSON.stringify(cov), "utf-8");
}

// ---------------------------------------------------------------------------
// Two-vault E2E suite
// ---------------------------------------------------------------------------

test.describe("WebApp two-vault E2E", () => {
|
||||
let ctxA: BrowserContext;
|
||||
let ctxB: BrowserContext;
|
||||
let pageA: Page;
|
||||
let pageB: Page;
|
||||
|
||||
const DB_SUFFIX = `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
|
||||
const dbName = `${requireEnv("dbname")}-${DB_SUFFIX}`;
|
||||
const settings = buildSettings(dbName);
|
||||
|
||||
test.beforeAll(async ({ browser }) => {
|
||||
await ensureCouchDbDatabase(
|
||||
String(settings.couchDB_URI ?? ""),
|
||||
String(settings.couchDB_USER ?? ""),
|
||||
String(settings.couchDB_PASSWORD ?? ""),
|
||||
dbName
|
||||
);
|
||||
|
||||
// Open Vault A and Vault B in completely separate browser contexts.
|
||||
// Each context has its own JS runtime, IndexedDB and OPFS root, so
|
||||
// Trystero global state and PouchDB instance names cannot collide.
|
||||
ctxA = await browser.newContext();
|
||||
ctxB = await browser.newContext();
|
||||
|
||||
pageA = await openTestPage(ctxA);
|
||||
pageB = await openTestPage(ctxB);
|
||||
|
||||
await call(pageA, "init", "testvault_a", settings as any);
|
||||
await call(pageB, "init", "testvault_b", settings as any);
|
||||
});
|
||||
|
||||
test.afterAll(async () => {
|
||||
await call(pageA, "shutdown").catch(() => {});
|
||||
await call(pageB, "shutdown").catch(() => {});
|
||||
await ctxA.close();
|
||||
await ctxB.close();
|
||||
});
|
||||
|
||||
test.afterEach(async ({}, testInfo) => {
|
||||
await dumpCoverage(pageA, "vaultA", testInfo);
|
||||
await dumpCoverage(pageB, "vaultB", testInfo);
|
||||
});
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Case 1: Vault A writes a file and can read its metadata back from the
|
||||
// local database (no replication yet).
|
||||
// -----------------------------------------------------------------------
|
||||
test("Case 1: A writes a file and can get its info", async () => {
|
||||
const FILE = "e2e/case1-a-only.md";
|
||||
const CONTENT = "hello from vault A";
|
||||
|
||||
const ok = await call(pageA, "putFile", FILE, CONTENT);
|
||||
expect(ok).toBe(true);
|
||||
|
||||
const info = await call(pageA, "getInfo", FILE);
|
||||
expect(info).not.toBeNull();
|
||||
expect(info!.path).toBe(FILE);
|
||||
expect(info!.revision).toBeTruthy();
|
||||
expect(info!.conflicts).toHaveLength(0);
|
||||
});
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Case 2: Vault A writes a file, both vaults replicate, and Vault B ends
|
||||
// up with the file in its local database.
|
||||
// -----------------------------------------------------------------------
|
||||
test("Case 2: A writes a file, both replicate, B receives the file", async () => {
|
||||
const FILE = "e2e/case2-sync.md";
|
||||
const CONTENT = "content from A – should appear in B";
|
||||
|
||||
await call(pageA, "putFile", FILE, CONTENT);
|
||||
|
||||
// A pushes to remote, B pulls from remote.
|
||||
await call(pageA, "replicate");
|
||||
await call(pageB, "replicate");
|
||||
|
||||
const infoB = await call(pageB, "getInfo", FILE);
|
||||
expect(infoB).not.toBeNull();
|
||||
expect(infoB!.path).toBe(FILE);
|
||||
});
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Case 3: Vault A deletes the file it synced in case 2. After both
|
||||
// vaults replicate, Vault B no longer sees the file.
|
||||
// -----------------------------------------------------------------------
|
||||
test("Case 3: A deletes the file, both replicate, B no longer sees it", async () => {
|
||||
// This test depends on Case 2 having put e2e/case2-sync.md into both vaults.
|
||||
const FILE = "e2e/case2-sync.md";
|
||||
|
||||
await call(pageA, "deleteFile", FILE);
|
||||
|
||||
await call(pageA, "replicate");
|
||||
await call(pageB, "replicate");
|
||||
|
||||
const infoB = await call(pageB, "getInfo", FILE);
|
||||
// The file should be gone (null means not found or deleted).
|
||||
expect(infoB).toBeNull();
|
||||
});
|
||||
|
||||
// -----------------------------------------------------------------------
|
||||
// Case 4: A and B each independently edit the same file that was already
|
||||
// synced. After both vaults replicate the editing cycle, both
|
||||
// vaults report a conflict on that file.
|
||||
// -----------------------------------------------------------------------
|
||||
test("Case 4: concurrent edits from A and B produce a conflict on both sides", async () => {
|
||||
const FILE = "e2e/case4-conflict.md";
|
||||
|
||||
// 1) Write a baseline and synchronise so both vaults start from the
|
||||
// same revision.
|
||||
await call(pageA, "putFile", FILE, "base content");
|
||||
await call(pageA, "replicate");
|
||||
await call(pageB, "replicate");
|
||||
|
||||
// Confirm B has the base file with no conflicts yet.
|
||||
const baseInfoB = await call(pageB, "getInfo", FILE);
|
||||
expect(baseInfoB).not.toBeNull();
|
||||
expect(baseInfoB!.conflicts).toHaveLength(0);
|
||||
|
||||
// 2) Both vaults write diverging content without syncing in between –
|
||||
// this creates two competing revisions.
|
||||
await call(pageA, "putFile", FILE, "content from A (conflict side)");
|
||||
await call(pageB, "putFile", FILE, "content from B (conflict side)");
|
||||
|
||||
// 3) Run replication on both sides. The order mirrors the pattern
|
||||
// from the CLI two-vault tests (A → remote → B → remote → A).
|
||||
await call(pageA, "replicate");
|
||||
await call(pageB, "replicate");
|
||||
await call(pageA, "replicate"); // re-check from A to pick up B's revision
|
||||
|
||||
// 4) At least one side must report a conflict.
|
||||
const hasConflictA = await call(pageA, "hasConflict", FILE);
|
||||
const hasConflictB = await call(pageB, "hasConflict", FILE);
|
||||
|
||||
expect(
|
||||
hasConflictA || hasConflictB,
|
||||
"Expected a conflict to appear on vault A or vault B after diverging edits"
|
||||
).toBe(true);
|
||||
});
|
||||
});
|
||||
191
src/apps/webapp/vaultSelector.ts
Normal file
@@ -0,0 +1,191 @@
const HANDLE_DB_NAME = "livesync-webapp-handles";
const HANDLE_STORE_NAME = "handles";
const LAST_USED_KEY = "meta:lastUsedVaultId";
const VAULT_KEY_PREFIX = "vault:";
const MAX_HISTORY_COUNT = 10;

export type VaultHistoryItem = {
    id: string;
    name: string;
    handle: FileSystemDirectoryHandle;
    lastUsedAt: number;
};

type VaultHistoryValue = VaultHistoryItem;

function makeVaultKey(id: string): string {
    return `${VAULT_KEY_PREFIX}${id}`;
}

function parseVaultId(key: string): string | null {
    if (!key.startsWith(VAULT_KEY_PREFIX)) {
        return null;
    }
    return key.slice(VAULT_KEY_PREFIX.length);
}

function randomId(): string {
    const n = Math.random().toString(36).slice(2, 10);
    return `${Date.now()}-${n}`;
}

async function hasReadWritePermission(handle: FileSystemDirectoryHandle, requestIfNeeded: boolean): Promise<boolean> {
    const h = handle as any;
    if (typeof h.queryPermission === "function") {
        const queried = await h.queryPermission({ mode: "readwrite" });
        if (queried === "granted") {
            return true;
        }
    }
    if (!requestIfNeeded) {
        return false;
    }
    if (typeof h.requestPermission === "function") {
        const requested = await h.requestPermission({ mode: "readwrite" });
        return requested === "granted";
    }
    return true;
}

export class VaultHistoryStore {
    private async openHandleDB(): Promise<IDBDatabase> {
        return new Promise((resolve, reject) => {
            const request = indexedDB.open(HANDLE_DB_NAME, 1);
            request.onerror = () => reject(request.error);
            request.onsuccess = () => resolve(request.result);
            request.onupgradeneeded = (event) => {
                const db = (event.target as IDBOpenDBRequest).result;
                if (!db.objectStoreNames.contains(HANDLE_STORE_NAME)) {
                    db.createObjectStore(HANDLE_STORE_NAME);
                }
            };
        });
    }

    private async withStore<T>(mode: IDBTransactionMode, task: (store: IDBObjectStore) => Promise<T>): Promise<T> {
        const db = await this.openHandleDB();
        try {
            const tx = db.transaction([HANDLE_STORE_NAME], mode);
            const store = tx.objectStore(HANDLE_STORE_NAME);
            return await task(store);
        } finally {
            db.close();
        }
    }

    private async requestAsPromise<T>(request: IDBRequest<T>): Promise<T> {
        return new Promise((resolve, reject) => {
            request.onsuccess = () => resolve(request.result);
            request.onerror = () => reject(request.error);
        });
    }

    async getLastUsedVaultId(): Promise<string | null> {
        return this.withStore("readonly", async (store) => {
            const value = await this.requestAsPromise(store.get(LAST_USED_KEY));
            return typeof value === "string" ? value : null;
        });
    }

    async getVaultHistory(): Promise<VaultHistoryItem[]> {
        return this.withStore("readonly", async (store) => {
            const keys = (await this.requestAsPromise(store.getAllKeys())) as IDBValidKey[];
            const values = (await this.requestAsPromise(store.getAll())) as unknown[];
            const items: VaultHistoryItem[] = [];
            for (let i = 0; i < keys.length; i++) {
                const key = String(keys[i]);
                const id = parseVaultId(key);
                const value = values[i] as Partial<VaultHistoryValue> | undefined;
                if (!id || !value || !value.handle || !value.name) {
                    continue;
                }
                items.push({
                    id,
                    name: String(value.name),
                    handle: value.handle,
                    lastUsedAt: Number(value.lastUsedAt || 0),
                });
            }
            items.sort((a, b) => b.lastUsedAt - a.lastUsedAt);
            return items;
        });
    }

    async saveSelectedVault(handle: FileSystemDirectoryHandle): Promise<VaultHistoryItem> {
        const now = Date.now();
        const existing = await this.getVaultHistory();

        let matched: VaultHistoryItem | null = null;
        for (const item of existing) {
            try {
                if (await item.handle.isSameEntry(handle)) {
                    matched = item;
                    break;
                }
            } catch {
                // Ignore handles that cannot be compared, keep scanning.
            }
        }

        const item: VaultHistoryItem = {
            id: matched?.id ?? randomId(),
            name: handle.name,
            handle,
            lastUsedAt: now,
        };

        await this.withStore("readwrite", async (store): Promise<void> => {
            await this.requestAsPromise(store.put(item, makeVaultKey(item.id)));
            await this.requestAsPromise(store.put(item.id, LAST_USED_KEY));

            const merged = [...existing.filter((v) => v.id !== item.id), item].sort(
                (a, b) => b.lastUsedAt - a.lastUsedAt
            );
            const stale = merged.slice(MAX_HISTORY_COUNT);
            for (const old of stale) {
                await this.requestAsPromise(store.delete(makeVaultKey(old.id)));
            }
        });

        return item;
    }

    async activateHistoryItem(item: VaultHistoryItem): Promise<FileSystemDirectoryHandle> {
        const granted = await hasReadWritePermission(item.handle, true);
        if (!granted) {
            throw new Error("Vault permissions were not granted");
        }

        const activated: VaultHistoryItem = {
            ...item,
            lastUsedAt: Date.now(),
        };

        await this.withStore("readwrite", async (store): Promise<void> => {
            await this.requestAsPromise(store.put(activated, makeVaultKey(activated.id)));
            await this.requestAsPromise(store.put(activated.id, LAST_USED_KEY));
        });

        return item.handle;
    }

    async pickNewVault(): Promise<FileSystemDirectoryHandle> {
        const picker = (window as any).showDirectoryPicker;
        if (typeof picker !== "function") {
            throw new Error("FileSystem API showDirectoryPicker is not supported in this browser");
        }

        const handle = (await picker({
            mode: "readwrite",
            startIn: "documents",
        })) as FileSystemDirectoryHandle;

        const granted = await hasReadWritePermission(handle, true);
        if (!granted) {
            throw new Error("Vault permissions were not granted");
        }

        await this.saveSelectedVault(handle);
        return handle;
    }
}
@@ -1,16 +1,45 @@
import { defineConfig } from "vite";
import { svelte } from "@sveltejs/vite-plugin-svelte";
import istanbul from "vite-plugin-istanbul";
import path from "node:path";
import { readFileSync } from "node:fs";
const packageJson = JSON.parse(readFileSync("../../../package.json", "utf-8"));
const manifestJson = JSON.parse(readFileSync("../../../manifest.json", "utf-8"));
const enableCoverage = process.env.PW_COVERAGE === "1";
const repoRoot = path.resolve(__dirname, "../../..");
// https://vite.dev/config/
export default defineConfig({
    plugins: [svelte()],
    plugins: [
        svelte(),
        ...(enableCoverage
            ? [
                  istanbul({
                      cwd: repoRoot,
                      include: ["src/**/*.ts", "src/**/*.svelte"],
                      exclude: [
                          "node_modules",
                          "dist",
                          "test",
                          "coverage",
                          "src/apps/webapp/test/**",
                          "playwright.config.ts",
                          "vite.config.ts",
                          "**/*.spec.ts",
                          "**/*.test.ts",
                      ],
                      extension: [".js", ".ts", ".svelte"],
                      requireEnv: false,
                      cypress: false,
                      checkProd: false,
                  }),
              ]
            : []),
    ],
    resolve: {
        alias: {
            "@": path.resolve(__dirname, "../../"),
            "@lib": path.resolve(__dirname, "../../lib/src"),
            obsidian: path.resolve(__dirname, "../../../test/harness/obsidian-mock.ts"),
        },
    },
    base: "./",
@@ -18,14 +47,21 @@ export default defineConfig({
        outDir: "dist",
        emptyOutDir: true,
        rollupOptions: {
            // test.html is used by the Playwright dev-server; include it here
            // so the production build doesn't emit warnings about unused inputs.
            input: {
                index: path.resolve(__dirname, "index.html"),
                webapp: path.resolve(__dirname, "webapp.html"),
                test: path.resolve(__dirname, "test.html"),
            },
            external: ["crypto"],
        },
    },
    define: {
        MANIFEST_VERSION: JSON.stringify(process.env.MANIFEST_VERSION || manifestJson.version || "0.0.0"),
        PACKAGE_VERSION: JSON.stringify(process.env.PACKAGE_VERSION || packageJson.version || "0.0.0"),
        global: "globalThis",
        hostPlatform: JSON.stringify(process.platform || "linux"),
    },
    server: {
        port: 3000,
402
src/apps/webapp/webapp.css
Normal file
@@ -0,0 +1,402 @@
* {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
}

:root {
    --background-primary: #ffffff;
    --background-primary-alt: #667eea;
    --background-secondary: #f0f0f0;
    --background-secondary-alt: #e0e0e0;
    --background-modifier-border: #d0d0d0;
    --text-normal: #333333;
    --text-warning: #d9534f;
    --text-accent: #5bc0de;
    --text-on-accent: #ffffff;
}

body {
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
    min-height: 100vh;
    display: flex;
    align-items: center;
    justify-content: center;
    padding: 20px;
}

.container {
    background: white;
    border-radius: 12px;
    box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
    padding: 40px;
    max-width: 700px;
    width: 100%;
}

h1 {
    color: #333;
    margin-bottom: 10px;
    font-size: 28px;
}

.subtitle {
    color: #666;
    margin-bottom: 24px;
    font-size: 14px;
}

#status {
    padding: 15px;
    border-radius: 8px;
    margin-bottom: 20px;
    font-size: 14px;
    font-weight: 500;
}

#status.error {
    background: #fee;
    color: #c33;
    border: 1px solid #fcc;
}

#status.warning {
    background: #ffeaa7;
    color: #d63031;
    border: 1px solid #fdcb6e;
}

#status.success {
    background: #d4edda;
    color: #155724;
    border: 1px solid #c3e6cb;
}

#status.info {
    background: #d1ecf1;
    color: #0c5460;
    border: 1px solid #bee5eb;
}

.vault-selector {
    border: 1px solid #e6e9f2;
    border-radius: 8px;
    padding: 16px;
    background: #fbfcff;
    margin-bottom: 22px;
}

.vault-selector h2 {
    font-size: 18px;
    margin-bottom: 8px;
    color: #333;
}

.vault-selector p {
    color: #555;
    font-size: 14px;
    margin-bottom: 12px;
}

.vault-list {
    display: flex;
    flex-direction: column;
    gap: 10px;
    margin-bottom: 12px;
}

.vault-item {
    border: 1px solid #d9deee;
    border-radius: 8px;
    padding: 10px 12px;
    display: flex;
    align-items: center;
    justify-content: space-between;
    gap: 12px;
    background: #fff;
}

.vault-item-info {
    min-width: 0;
}

.vault-item-name {
    font-weight: 600;
    color: #1f2a44;
    word-break: break-word;
}

.vault-item-meta {
    margin-top: 2px;
    font-size: 12px;
    color: #63708f;
}

button {
    border: none;
    border-radius: 6px;
    padding: 8px 12px;
    background: #2f5ae5;
    color: #fff;
    cursor: pointer;
    font-weight: 600;
    white-space: nowrap;
}

button:hover {
    background: #1e4ad6;
    opacity: 0.7;
}

button:disabled {
    cursor: not-allowed;
    opacity: 0.6;
}

.empty-note {
    font-size: 13px;
    color: #6c757d;
    margin-bottom: 8px;
}

.empty-note.is-hidden,
.vault-selector.is-hidden {
    display: none;
}

.info-section {
    margin-top: 20px;
    padding: 20px;
    background: #f8f9fa;
    border-radius: 8px;
}

.info-section h2 {
    font-size: 18px;
    margin-bottom: 12px;
    color: #333;
}

.info-section ul {
    list-style: none;
    padding-left: 0;
}

.info-section li {
    padding: 7px 0;
    color: #666;
    font-size: 14px;
}

.info-section li::before {
    content: "•";
    color: #667eea;
    font-weight: bold;
    display: inline-block;
    width: 1em;
    margin-left: -1em;
    padding-right: 0.5em;
}

code {
    background: #e9ecef;
    padding: 2px 6px;
    border-radius: 4px;
    font-family: "Courier New", monospace;
    font-size: 13px;
}

.footer {
    margin-top: 24px;
    text-align: center;
    color: #999;
    font-size: 12px;
}

.footer a {
    color: #667eea;
    text-decoration: none;
}

.footer a:hover {
    text-decoration: underline;
}

body.livesync-log-visible {
    min-height: 100vh;
    padding-bottom: 42vh;
}

#livesync-log-panel {
    position: fixed;
    left: 0;
    right: 0;
    bottom: 0;
    height: 42vh;
    z-index: 900;
    display: flex;
    flex-direction: column;
    background: #0f172a;
    border-top: 1px solid #334155;
}

.livesync-log-header {
    padding: 8px 12px;
    font-size: 12px;
    font-weight: 600;
    color: #e2e8f0;
    background: #111827;
    border-bottom: 1px solid #334155;
}

#livesync-log-viewport {
    flex: 1;
    overflow: auto;
    padding: 8px 12px;
    font-family: ui-monospace, SFMono-Regular, Menlo, Consolas, "Liberation Mono", monospace;
    font-size: 12px;
    line-height: 1.4;
    color: #e2e8f0;
    white-space: pre-wrap;
    word-break: break-word;
}

.livesync-log-line {
    margin-bottom: 2px;
}

#livesync-command-bar {
    position: fixed;
    right: 16px;
    bottom: 16px;
    z-index: 1000;
    display: flex;
    flex-wrap: wrap;
    gap: 8px;
    max-width: 40vw;
    padding: 10px;
    border-radius: 10px;
    background: rgba(255, 255, 255, 0.95);
    box-shadow: 0 4px 16px rgba(0, 0, 0, 0.2);
}

.livesync-command-button {
    border: 1px solid #ddd;
    border-radius: 8px;
    padding: 6px 10px;
    background: #fff;
    color: #111827;
    cursor: pointer;
    font-size: 12px;
    line-height: 1.2;
    white-space: nowrap;
    font-weight: 500;
}

.livesync-command-button:hover:not(:disabled) {
    background: #f3f4f6;
}

.livesync-command-button.is-disabled {
    opacity: 0.55;
}

#livesync-window-root {
    position: fixed;
    top: 16px;
    left: 16px;
    right: 16px;
    bottom: calc(42vh + 16px);
    z-index: 850;
    display: flex;
    flex-direction: column;
    border-radius: 10px;
    background: rgba(255, 255, 255, 0.98);
    box-shadow: 0 4px 16px rgba(0, 0, 0, 0.18);
    overflow: hidden;
}

#livesync-window-tabs {
    display: flex;
    gap: 6px;
    padding: 8px;
    background: #f3f4f6;
    border-bottom: 1px solid #e5e7eb;
}

#livesync-window-body {
    position: relative;
    flex: 1;
    overflow: auto;
    padding: 10px;
}

.livesync-window-tab {
    border: 1px solid #d1d5db;
    background: #fff;
    color: #111827;
    padding: 4px 8px;
    border-radius: 6px;
    cursor: pointer;
    font-size: 12px;
    font-weight: 500;
}

.livesync-window-tab.is-active {
    background: #e0e7ff;
    border-color: #818cf8;
}

.livesync-window-panel {
    display: none;
    width: 100%;
    height: 100%;
    overflow: auto;
}

.livesync-window-panel.is-active {
    display: block;
}

@media (max-width: 600px) {
    .container {
        padding: 28px 18px;
    }

    h1 {
        font-size: 24px;
    }

    .vault-item {
        flex-direction: column;
        align-items: stretch;
    }

    #livesync-command-bar {
        max-width: calc(100vw - 24px);
        right: 12px;
        left: 12px;
        bottom: 12px;
    }
}

popup {
    position: fixed;
    min-width: 80vw;
    max-width: 90vw;
    min-height: 40vh;
    max-height: 80vh;
    background: rgba(255, 255, 255, 0.8);
    padding: 1em;
    box-shadow: 0 8px 24px rgba(0, 0, 0, 0.2);
    overflow-y: auto;
    display: flex;
    align-items: center;
    justify-content: center;
    backdrop-filter: blur(15px);
    /* The later of the duplicated declarations wins in CSS; keep those. */
    border-radius: 10px;
    z-index: 10;
}
45
src/apps/webapp/webapp.html
Normal file
@@ -0,0 +1,45 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Self-hosted LiveSync WebApp</title>
    <link rel="stylesheet" href="./webapp.css">
</head>
<body>
    <div class="container">
        <h1>Self-hosted LiveSync on Web</h1>
        <p class="subtitle">Browser-based Self-hosted LiveSync using FileSystem API</p>

        <div id="status" class="info">Initialising...</div>

        <div id="vault-selector" class="vault-selector">
            <h2>Select Vault Folder</h2>
            <p>Open a vault you already used, or pick a new folder.</p>

            <div id="vault-history-list" class="vault-list"></div>
            <p id="vault-history-empty" class="empty-note">No saved vaults yet.</p>
            <button id="pick-new-vault" type="button">Choose new vault folder</button>
        </div>

        <div class="info-section">
            <h2>How to Use</h2>
            <ul>
                <li>Select a vault folder and grant permission</li>
                <li>Create <code>.livesync/settings.json</code> in your vault folder</li>
                <li>Or use Setup-URI to apply settings</li>
                <li>Your files will be synced after "replicate now"</li>
            </ul>
        </div>

        <div class="footer">
            <p>
                Powered by
                <a href="https://github.com/vrtmrz/obsidian-livesync" target="_blank">Self-hosted LiveSync</a>
            </p>
        </div>
    </div>

    <script type="module" src="./bootstrap.ts"></script>
</body>
</html>
57
src/apps/webpeer/Dockerfile
Normal file
@@ -0,0 +1,57 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync WebPeer — Docker image
# Browser-based P2P peer daemon served by nginx.
#
# Build (from the repository root):
#   docker build -f src/apps/webpeer/Dockerfile -t livesync-webpeer .
#
# Run:
#   docker run --rm -p 8081:80 livesync-webpeer
#   Then open http://localhost:8081/ in any modern browser.
#
# What is WebPeer?
#   WebPeer acts as a pseudo P2P peer that runs entirely in the browser.
#   It can replace a CouchDB remote server by replying to sync requests from
#   other Self-hosted LiveSync instances over the WebRTC P2P channel.
#
# P2P (WebRTC) networking notes
# ─────────────────────────────
# WebRTC connections are initiated by the *browser* visiting this page, not by
# the nginx container itself. Therefore the Docker network mode of this
# container has NO effect on WebRTC connectivity.
# Simply publish port 80 (as above) and the browser handles all ICE/STUN/TURN
# negotiation on its own.
#
# If the browser is running inside another container or a restricted network,
# configuring a TURN server in the WebPeer settings is recommended.

# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to install dependencies and build the Vite bundle.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder

WORKDIR /build

# Install workspace dependencies (all apps share the root package.json)
COPY package.json ./
RUN npm install

# Copy the full source tree and build the WebPeer bundle
COPY . .
RUN cd src/apps/webpeer && npm run build

# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime
# Minimal nginx image that serves the static build output.
# ─────────────────────────────────────────────────────────────────────────────
FROM nginx:stable-alpine

# Remove the default nginx welcome page
RUN rm -rf /usr/share/nginx/html/*

# Copy the built static assets
COPY --from=builder /build/src/apps/webpeer/dist /usr/share/nginx/html

EXPOSE 80
@@ -6,6 +6,8 @@
    "scripts": {
        "dev": "vite",
        "build": "vite build",
        "build:docker": "docker build -f Dockerfile -t livesync-webpeer ../../..",
        "run:docker": "docker run -p 8001:80 livesync-webpeer",
        "preview": "vite preview",
        "check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json"
    },
@@ -1,10 +1,8 @@
|
||||
import { PouchDB } from "@lib/pouchdb/pouchdb-browser";
|
||||
import {
|
||||
type EntryDoc,
|
||||
type LOG_LEVEL,
|
||||
type ObsidianLiveSyncSettings,
|
||||
type P2PSyncSetting,
|
||||
LOG_LEVEL_NOTICE,
|
||||
LOG_LEVEL_VERBOSE,
|
||||
P2P_DEFAULT_SETTINGS,
|
||||
REMOTE_P2P,
|
||||
@@ -12,35 +10,27 @@ import {
|
||||
import { eventHub } from "@lib/hub/hub";
|
||||
|
||||
import type { Confirm } from "@lib/interfaces/Confirm";
|
||||
import { LOG_LEVEL_INFO, Logger } from "@lib/common/logger";
|
||||
import { LOG_LEVEL_NOTICE, Logger } from "@lib/common/logger";
|
||||
import { storeP2PStatusLine } from "./CommandsShim";
|
||||
import {
|
||||
EVENT_P2P_PEER_SHOW_EXTRA_MENU,
|
||||
type CommandShim,
|
||||
type PeerStatus,
|
||||
type PluginShim,
|
||||
} from "@lib/replication/trystero/P2PReplicatorPaneCommon";
|
||||
import {
|
||||
closeP2PReplicator,
|
||||
openP2PReplicator,
|
||||
P2PLogCollector,
|
||||
type P2PReplicatorBase,
|
||||
} from "@lib/replication/trystero/P2PReplicatorCore";
|
||||
import { P2PLogCollector, type P2PReplicatorBase, useP2PReplicator } from "@lib/replication/trystero/P2PReplicatorCore";
|
||||
import type { SimpleStore } from "octagonal-wheels/databases/SimpleStoreBase";
|
||||
import { reactiveSource } from "octagonal-wheels/dataobject/reactive_v2";
|
||||
import { EVENT_SETTING_SAVED } from "@lib/events/coreEvents";
|
||||
import { unique } from "octagonal-wheels/collection";
|
||||
import { BrowserServiceHub } from "@lib/services/BrowserServices";
|
||||
import { TrysteroReplicator } from "@lib/replication/trystero/TrysteroReplicator";
|
||||
import { SETTING_KEY_P2P_DEVICE_NAME } from "@lib/common/types";
|
||||
import { ServiceContext } from "@lib/services/base/ServiceBase";
|
||||
import type { InjectableServiceHub } from "@lib/services/InjectableServices";
|
||||
import { Menu } from "@lib/services/implements/browser/Menu";
|
||||
import type { InjectableVaultServiceCompat } from "@lib/services/implements/injectable/InjectableVaultService";
|
||||
import { SimpleStoreIDBv2 } from "octagonal-wheels/databases/SimpleStoreIDBv2";
|
||||
import type { InjectableAPIService } from "@/lib/src/services/implements/injectable/InjectableAPIService";
|
||||
import type { BrowserAPIService } from "@/lib/src/services/implements/browser/BrowserAPIService";
|
||||
import type { InjectableSettingService } from "@/lib/src/services/implements/injectable/InjectableSettingService";
|
||||
import { LiveSyncTrysteroReplicator } from "@lib/replication/trystero/LiveSyncTrysteroReplicator";
|
||||
|
||||
function addToList(item: string, list: string) {
|
||||
return unique(
|
||||
@@ -60,12 +50,10 @@ function removeFromList(item: string, list: string) {
|
||||
.join(",");
|
||||
}
|
||||
|
||||
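The truncated `addToList`/`removeFromList` helpers in the hunk above maintain peer names inside a comma-separated settings string. A self-contained sketch of that shape (semantics assumed from the visible `unique(...)` and `.join(",")` fragments; a plain `Set` stands in for the library's `unique`):

```typescript
// Assumed semantics: addToList keeps the comma-separated list unique,
// removeFromList filters the item out; both trim entries and re-join with ",".
function addToList(item: string, list: string): string {
    const items = list.split(",").map((e) => e.trim()).filter((e) => e !== "");
    return [...new Set([...items, item])].join(",");
}

function removeFromList(item: string, list: string): string {
    return list
        .split(",")
        .map((e) => e.trim())
        .filter((e) => e !== "" && e !== item)
        .join(",");
}

console.log(addToList("deviceB", "deviceA")); // deviceA,deviceB
console.log(removeFromList("deviceA", "deviceA,deviceB")); // deviceB
```

Storing the list as one string keeps it compatible with the plugin's flat settings object while still behaving like a set.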
export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
export class P2PReplicatorShim implements P2PReplicatorBase {
storeP2PStatusLine = reactiveSource("");
plugin!: PluginShim;
// environment!: IEnvironment;
confirm!: Confirm;
// simpleStoreAPI!: ISimpleStoreAPI;
db?: PouchDB.Database<EntryDoc>;
services: InjectableServiceHub<ServiceContext>;

@@ -76,12 +64,30 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
return this.db;
}
_simpleStore!: SimpleStore<any>;

async closeDB() {
if (this.db) {
await this.db.close();
this.db = undefined;
}
}

private _liveSyncReplicator?: LiveSyncTrysteroReplicator;
p2pLogCollector!: P2PLogCollector;

private _initP2PReplicator() {
const {
replicator,
p2pLogCollector,
storeP2PStatusLine: p2pStatusLine,
} = useP2PReplicator({ services: this.services } as any);
this._liveSyncReplicator = replicator;
this.p2pLogCollector = p2pLogCollector;
p2pLogCollector.p2pReplicationLine.onChanged((line) => {
storeP2PStatusLine.set(line.value);
});
}

constructor() {
const browserServiceHub = new BrowserServiceHub<ServiceContext>();
this.services = browserServiceHub;
@@ -89,7 +95,6 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
(this.services.API as BrowserAPIService<ServiceContext>).getSystemVaultName.setHandler(
() => "p2p-livesync-web-peer"
);
// this.services.API.addLog.setHandler(Logger);
const repStore = SimpleStoreIDBv2.open<any>("p2p-livesync-web-peer");
this._simpleStore = repStore;
let _settings = { ...P2P_DEFAULT_SETTINGS, additionalSuffixOfDatabaseName: "" } as ObsidianLiveSyncSettings;
@@ -103,14 +108,13 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
return settings;
});
}

get settings() {
return this.services.setting.currentSettings() as P2PSyncSetting;
}

async init() {
// const { simpleStoreAPI } = await getWrappedSynchromesh();
// this.confirm = confirm;
this.confirm = this.services.UI.confirm;
// this.environment = environment;

if (this.db) {
try {
@@ -123,30 +127,16 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {

await this.services.setting.loadSettings();
this.plugin = {
// saveSettings: async () => {
// await repStore.set("settings", _settings);
// eventHub.emitEvent(EVENT_SETTING_SAVED, _settings);
// },
// get settings() {
// return _settings;
// },
// set settings(newSettings: P2PSyncSetting) {
// _settings = { ..._settings, ...newSettings };
// },
// rebuilder: null,
// core: {
// settings: this.services.setting.settings,
// },
services: this.services,
core: {
services: this.services,
},
// $$scheduleAppReload: () => {},
// $$getVaultName: () => "p2p-livesync-web-peer",
};
// const deviceName = this.getDeviceName();
const database_name = this.settings.P2P_AppID + "-" + this.settings.P2P_roomID + "p2p-livesync-web-peer";
this.db = new PouchDB<EntryDoc>(database_name);

this._initP2PReplicator();

setTimeout(() => {
if (this.settings.P2P_AutoStart && this.settings.P2P_Enabled) {
void this.open();
@@ -155,7 +145,7 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
return this;
}

_log(msg: any, level?: LOG_LEVEL): void {
_log(msg: any, level?: any): void {
Logger(msg, level);
}
_notice(msg: string, key?: string): void {
@@ -167,14 +157,10 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
simpleStore(): SimpleStore<any> {
return this._simpleStore;
}
handleReplicatedDocuments(docs: EntryDoc[]): Promise<boolean> {
// No op. This is a client and does not need to process the docs
handleReplicatedDocuments(_docs: EntryDoc[]): Promise<boolean> {
return Promise.resolve(true);
}

getPluginShim() {
return {};
}
getConfig(key: string) {
const vaultName = this.services.vault.getVaultName();
const dbKey = `${vaultName}-${key}`;
@@ -189,9 +175,7 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
getDeviceName(): string {
return this.getConfig(SETTING_KEY_P2P_DEVICE_NAME) ?? this.plugin.services.vault.getVaultName();
}
getPlatform(): string {
return "pseudo-replicator";
}

m?: Menu;
afterConstructor(): void {
eventHub.onEvent(EVENT_P2P_PEER_SHOW_EXTRA_MENU, ({ peer, event }) => {
@@ -202,12 +186,6 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
.addItem((item) => item.setTitle("📥 Only Fetch").onClick(() => this.replicateFrom(peer)))
.addItem((item) => item.setTitle("📤 Only Send").onClick(() => this.replicateTo(peer)))
.addSeparator()
// .addItem((item) => {
// item.setTitle("🔧 Get Configuration").onClick(async () => {
// await this.getRemoteConfig(peer);
// });
// })
// .addSeparator()
.addItem((item) => {
const mark = peer.syncOnConnect ? "checkmark" : null;
item.setTitle("Toggle Sync on connect")
@@ -234,97 +212,43 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
});
void this.m.showAtPosition({ x: event.x, y: event.y });
});
this.p2pLogCollector.p2pReplicationLine.onChanged((line) => {
storeP2PStatusLine.set(line.value);
});
}

_replicatorInstance?: TrysteroReplicator;
p2pLogCollector = new P2PLogCollector();
async open() {
await openP2PReplicator(this);
await this._liveSyncReplicator?.open();
}

async close() {
await closeP2PReplicator(this);
await this._liveSyncReplicator?.close();
}

enableBroadcastCastings() {
return this?._replicatorInstance?.enableBroadcastChanges();
return this._liveSyncReplicator?.enableBroadcastChanges();
}
disableBroadcastCastings() {
return this?._replicatorInstance?.disableBroadcastChanges();
}

async initialiseP2PReplicator(): Promise<TrysteroReplicator> {
await this.init();
try {
if (this._replicatorInstance) {
await this._replicatorInstance.close();
this._replicatorInstance = undefined;
}

if (!this.settings.P2P_AppID) {
this.settings.P2P_AppID = P2P_DEFAULT_SETTINGS.P2P_AppID;
}
const getInitialDeviceName = () =>
this.getConfig(SETTING_KEY_P2P_DEVICE_NAME) || this.services.vault.getVaultName();

const getSettings = () => this.settings;
const store = () => this.simpleStore();
const getDB = () => this.getDB();

const getConfirm = () => this.confirm;
const getPlatform = () => this.getPlatform();
const env = {
get db() {
return getDB();
},
get confirm() {
return getConfirm();
},
get deviceName() {
return getInitialDeviceName();
},
get platform() {
return getPlatform();
},
get settings() {
return getSettings();
},
processReplicatedDocs: async (docs: EntryDoc[]): Promise<void> => {
await this.handleReplicatedDocuments(docs);
// No op. This is a client and does not need to process the docs
},
get simpleStore() {
return store();
},
};
this._replicatorInstance = new TrysteroReplicator(env);
return this._replicatorInstance;
} catch (e) {
this._log(
e instanceof Error ? e.message : "Something occurred on Initialising P2P Replicator",
LOG_LEVEL_INFO
);
this._log(e, LOG_LEVEL_VERBOSE);
throw e;
}
return this._liveSyncReplicator?.disableBroadcastChanges();
}

get replicator() {
return this._replicatorInstance!;
return this._liveSyncReplicator;
}

async replicateFrom(peer: PeerStatus) {
await this.replicator.replicateFrom(peer.peerId);
const r = this._liveSyncReplicator;
if (!r) return;
await r.replicateFrom(peer.peerId);
}

async replicateTo(peer: PeerStatus) {
await this.replicator.requestSynchroniseToPeer(peer.peerId);
await this._liveSyncReplicator?.requestSynchroniseToPeer(peer.peerId);
}

async getRemoteConfig(peer: PeerStatus) {
Logger(
`Requesting remote config for ${peer.name}. Please input the passphrase on the remote device`,
LOG_LEVEL_NOTICE
);
const remoteConfig = await this.replicator.getRemoteConfig(peer.peerId);
const remoteConfig = await this._liveSyncReplicator?.getRemoteConfig(peer.peerId);
if (remoteConfig) {
Logger(`Remote config for ${peer.name} is retrieved successfully`);
const DROP = "Yes, and drop local database";
@@ -344,9 +268,7 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
if (remoteConfig.remoteType !== REMOTE_P2P) {
const yn2 = await this.confirm.askYesNoDialog(
`Do you want to set the remote type to "P2P Sync" to rebuild by "P2P replication"?`,
{
title: "Rebuild from remote device",
}
{ title: "Rebuild from remote device" }
);
if (yn2 === "yes") {
remoteConfig.remoteType = REMOTE_P2P;
@@ -354,10 +276,8 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
}
}
}
await this.services.setting.applyPartial(remoteConfig, true);
if (yn === DROP) {
// await this.plugin.rebuilder.scheduleFetch();
} else {
await this.services.setting.applyExternalSettings(remoteConfig, true);
if (yn !== DROP) {
await this.plugin.core.services.appLifecycle.scheduleRestart();
}
} else {
@@ -381,8 +301,6 @@ export class P2PReplicatorShim implements P2PReplicatorBase, CommandShim {
[targetSetting]: currentSettingAll ? currentSettingAll[targetSetting] : "",
};
if (peer[prop]) {
// this.plugin.settings[targetSetting] = removeFromList(peer.name, this.plugin.settings[targetSetting]);
// await this.plugin.saveSettings();
currentSetting[targetSetting] = removeFromList(peer.name, currentSetting[targetSetting]);
} else {
currentSetting[targetSetting] = addToList(peer.name, currentSetting[targetSetting]);

@@ -16,9 +16,6 @@ export const EVENT_REQUEST_RELOAD_SETTING_TAB = "reload-setting-tab";

export const EVENT_REQUEST_OPEN_PLUGIN_SYNC_DIALOG = "request-open-plugin-sync-dialog";

export const EVENT_REQUEST_OPEN_P2P = "request-open-p2p";
export const EVENT_REQUEST_CLOSE_P2P = "request-close-p2p";

export const EVENT_REQUEST_RUN_DOCTOR = "request-run-doctor";
export const EVENT_REQUEST_RUN_FIX_INCOMPLETE = "request-run-fix-incomplete";

@@ -36,8 +33,6 @@ declare global {
[EVENT_REQUEST_OPEN_SETTING_WIZARD]: undefined;
[EVENT_REQUEST_RELOAD_SETTING_TAB]: undefined;
[EVENT_LEAF_ACTIVE_CHANGED]: undefined;
[EVENT_REQUEST_CLOSE_P2P]: undefined;
[EVENT_REQUEST_OPEN_P2P]: undefined;
[EVENT_REQUEST_OPEN_SETUP_URI]: undefined;
[EVENT_REQUEST_COPY_SETUP_URI]: undefined;
[EVENT_REQUEST_SHOW_SETUP_QR]: undefined;

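The `declare global` hunk above trims the P2P entries from a typed event map that keys each event name to its payload type. A minimal standalone sketch of that pattern (this `EventHub` class and its event names are illustrative only, not the plugin's `eventHub` implementation):

```typescript
// Illustrative event-name -> payload-type map, mirroring the
// `declare global` map pattern above: handlers and emitters are
// checked against the payload type of the named event.
type EventMap = {
    "request-open-p2p": undefined;
    "setting-saved": { deviceName: string };
};

class EventHub {
    private handlers = new Map<string, ((payload: any) => void)[]>();

    onEvent<K extends keyof EventMap & string>(name: K, handler: (payload: EventMap[K]) => void): void {
        const list = this.handlers.get(name) ?? [];
        list.push(handler);
        this.handlers.set(name, list);
    }

    emitEvent<K extends keyof EventMap & string>(name: K, payload: EventMap[K]): void {
        for (const h of this.handlers.get(name) ?? []) h(payload);
    }
}

const hub = new EventHub();
let seen = "";
hub.onEvent("setting-saved", (p) => { seen = p.deviceName; });
hub.emitEvent("setting-saved", { deviceName: "laptop" });
console.log(seen); // laptop
```

Keying the map by string constants is what lets removing an event (as the hunk does) surface every stale subscriber as a compile error.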
@@ -138,7 +138,7 @@ export const _requestToCouchDBFetch = async (
authorization: authHeader,
"content-type": "application/json",
};
const uri = `${baseUri}/${path}`;
const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
const requestParam = {
url: uri,
method: method || (body ? "PUT" : "GET"),
@@ -162,7 +162,7 @@ export const _requestToCouchDB = async (
const authHeaderGen = new AuthorizationHeaderGenerator();
const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
const uri = `${baseUri}/${path}`;
const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
const requestParam: RequestUrlParam = {
url: uri,
method: method || (body ? "PUT" : "GET"),

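Both hunks above apply the same normalisation: strip trailing slashes from the configured base URI before appending the path, so a value like `http://host:5984/` does not produce a double slash in the request URL. A minimal standalone sketch of that pattern (the `joinUri` helper name is hypothetical, not part of the codebase):

```typescript
// Hypothetical helper mirroring `${baseUri.replace(/\/+$/, "")}/${path}` above.
// Any run of trailing slashes on the base is stripped before the path is appended.
function joinUri(baseUri: string, path: string): string {
    return `${baseUri.replace(/\/+$/, "")}/${path}`;
}

console.log(joinUri("http://localhost:5984/", "mydb/_all_docs"));
console.log(joinUri("http://localhost:5984", "mydb/_all_docs"));
```

Both calls print `http://localhost:5984/mydb/_all_docs`, which is why the fix makes a user-entered trailing slash harmless.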
@@ -30,7 +30,8 @@
type JSONData = Record<string | number | symbol, any> | [any];

const docsArray = $derived.by(() => {
if (docs && docs.length >= 1) {
// The merge pane compares two revisions, so guard against incomplete input before reading docs[1].
if (docs && docs.length >= 2) {
if (keepOrder || docs[0].mtime < docs[1].mtime) {
return { a: docs[0], b: docs[1] } as const;
} else {

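The guard above tightens `docs.length >= 1` to `>= 2` because both `docs[0]` and `docs[1]` are read immediately afterwards. The same ordering logic in isolation (hypothetical minimal doc shape; `keepOrder` as in the pane):

```typescript
// Hypothetical minimal doc shape: the pane presents the older revision
// first unless keepOrder is set, and yields nothing for fewer than 2 docs.
interface Doc { mtime: number; data: string; }

function pickPair(docs: Doc[] | undefined, keepOrder: boolean) {
    if (docs && docs.length >= 2) {
        if (keepOrder || docs[0].mtime < docs[1].mtime) {
            return { a: docs[0], b: docs[1] } as const;
        }
        return { a: docs[1], b: docs[0] } as const; // swap so the older doc comes first
    }
    return undefined; // 0 or 1 docs: nothing to compare yet
}

const older = { mtime: 1, data: "A" };
const newer = { mtime: 2, data: "B" };
console.log(pickPair([newer, older], false)?.a.data); // older revision lands in slot a
console.log(pickPair([older], false)); // undefined
```

Returning `undefined` for incomplete input is what prevents the crash the original `>= 1` guard allowed.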
@@ -636,10 +636,24 @@ Offline Changed files: ${processFiles.length}`;

// --> Conflict processing

// Keep one in-flight conflict check per path so repeated sync events do not close the active merge dialogue.
pendingConflictChecks = new Set<FilePathWithPrefix>();

queueConflictCheck(path: FilePathWithPrefix) {
if (this.pendingConflictChecks.has(path)) return;
this.pendingConflictChecks.add(path);
this.conflictResolutionProcessor.enqueue(path);
}

finishConflictCheck(path: FilePathWithPrefix) {
this.pendingConflictChecks.delete(path);
}

requeueConflictCheck(path: FilePathWithPrefix) {
this.finishConflictCheck(path);
this.queueConflictCheck(path);
}

async resolveConflictOnInternalFiles() {
// Scan all conflicted internal files
const conflicted = this.localDatabase.findEntries(ICHeader, ICHeaderEnd, { conflicts: true });
@@ -648,7 +662,7 @@ Offline Changed files: ${processFiles.length}`;
for await (const doc of conflicted) {
if (!("_conflicts" in doc)) continue;
if (isInternalMetadata(doc._id)) {
this.conflictResolutionProcessor.enqueue(doc.path);
this.queueConflictCheck(doc.path);
}
}
} catch (ex) {
@@ -679,21 +693,27 @@ Offline Changed files: ${processFiles.length}`;
const cc = await this.localDatabase.getRaw(id, { conflicts: true });
if (cc._conflicts?.length === 0) {
await this.extractInternalFileFromDatabase(stripAllPrefixes(path));
this.finishConflictCheck(path);
} else {
this.conflictResolutionProcessor.enqueue(path);
this.requeueConflictCheck(path);
}
// check the file again
}
conflictResolutionProcessor = new QueueProcessor(
async (paths: FilePathWithPrefix[]) => {
const path = paths[0];
sendSignal(`cancel-internal-conflict:${path}`);
try {
// Retrieve data
const id = await this.path2id(path, ICHeader);
const doc = await this.localDatabase.getRaw<MetaEntry>(id, { conflicts: true });
if (doc._conflicts === undefined) return [];
if (doc._conflicts.length == 0) return [];
if (doc._conflicts === undefined) {
this.finishConflictCheck(path);
return [];
}
if (doc._conflicts.length == 0) {
this.finishConflictCheck(path);
return [];
}
this._log(`Hidden file conflicted:${path}`);
const conflicts = doc._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
const revA = doc._rev;
@@ -725,7 +745,7 @@ Offline Changed files: ${processFiles.length}`;
await this.storeInternalFileToDatabase({ path: filename, ...stat });
await this.extractInternalFileFromDatabase(filename);
await this.localDatabase.removeRevision(id, revB);
this.conflictResolutionProcessor.enqueue(path);
this.requeueConflictCheck(path);
return [];
} else {
this._log(`Object merge is not applicable.`, LOG_LEVEL_VERBOSE);
@@ -743,6 +763,7 @@ Offline Changed files: ${processFiles.length}`;
await this.resolveByNewerEntry(id, path, doc, revA, revB);
return [];
} catch (ex) {
this.finishConflictCheck(path);
this._log(`Failed to resolve conflict (Hidden): ${path}`);
this._log(ex, LOG_LEVEL_VERBOSE);
return [];
@@ -761,15 +782,22 @@ Offline Changed files: ${processFiles.length}`;
const prefixedPath = addPrefix(path, ICHeader);
const docAMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revA });
const docBMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revB });
if (docAMerge != false && docBMerge != false) {
if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
// Again for other conflicted revisions.
this.conflictResolutionProcessor.enqueue(path);
try {
if (docAMerge != false && docBMerge != false) {
if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
// Again for other conflicted revisions.
this.requeueConflictCheck(path);
} else {
this.finishConflictCheck(path);
}
return;
} else {
// If either revision could not read, force resolving by the newer one.
await this.resolveByNewerEntry(id, path, doc, revA, revB);
}
return;
} else {
// If either revision could not read, force resolving by the newer one.
await this.resolveByNewerEntry(id, path, doc, revA, revB);
} catch (ex) {
this.finishConflictCheck(path);
throw ex;
}
},
{
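The rewritten block above wraps the merge in try/catch so a failure still clears the per-path pending flag before rethrowing. The same cleanup-on-error shape in isolation (names are hypothetical):

```typescript
// Cleanup-on-error: release the per-path guard before propagating,
// so the path can be queued again after a failed merge attempt.
const pending = new Set<string>();

function resolveWithCleanup(path: string, merge: () => void): void {
    pending.add(path);
    try {
        merge();
        pending.delete(path); // normal completion releases the guard
    } catch (ex) {
        pending.delete(path); // the guard must not leak on failure
        throw ex;
    }
}

let outcome = "";
try {
    resolveWithCleanup("note.md", () => { throw new Error("merge failed"); });
} catch {
    outcome = pending.has("note.md") ? "leaked" : "cleared";
}
console.log(outcome); // cleared
```

Without the catch-side release, one failed merge would permanently block further conflict checks for that path.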
@@ -793,6 +821,8 @@ Offline Changed files: ${processFiles.length}`;
const storeFilePath = strippedPath;
const displayFilename = `${storeFilePath}`;
// const path = this.prefixedConfigDir2configDir(stripAllPrefixes(docA.path)) || docA.path;
// Cancel only when replacing an existing dialogue for the same path, not on every queue pass.
sendSignal(`cancel-internal-conflict:${docA.path}`);
const modal = new JsonResolveModal(this.app, storageFilePath, [docA, docB], async (keep, result) => {
// modal.close();
try {
@@ -1164,7 +1194,7 @@ Offline Changed files: ${files.length}`;
// Check if the file is conflicted, and if so, enqueue to resolve.
// Until the conflict is resolved, the file will not be processed.
if (docMeta._conflicts && docMeta._conflicts.length > 0) {
this.conflictResolutionProcessor.enqueue(path);
this.queueConflictCheck(path);
this._log(`${headerLine} Hidden file conflicted, enqueued to resolve`);
return true;
}

@@ -781,7 +781,8 @@ Success: ${successCount}, Errored: ${errored}`;
const credential = generateCredentialObject(this.settings);
const request = async (path: string, method: string = "GET", body: any = undefined) => {
const req = await _requestToCouchDB(
this.settings.couchDB_URI + (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
this.settings.couchDB_URI.replace(/\/+$/, "") +
(this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
credential,
window.origin,
path,

@@ -1,278 +0,0 @@
import { P2PReplicatorPaneView, VIEW_TYPE_P2P } from "./P2PReplicator/P2PReplicatorPaneView.ts";
import {
AutoAccepting,
LOG_LEVEL_NOTICE,
P2P_DEFAULT_SETTINGS,
REMOTE_P2P,
type EntryDoc,
type P2PSyncSetting,
type RemoteDBSettings,
} from "../../lib/src/common/types.ts";
import { LiveSyncCommands } from "../LiveSyncCommands.ts";
import {
LiveSyncTrysteroReplicator,
setReplicatorFunc,
} from "../../lib/src/replication/trystero/LiveSyncTrysteroReplicator.ts";
import { EVENT_REQUEST_OPEN_P2P, eventHub } from "../../common/events.ts";
import type { LiveSyncAbstractReplicator } from "../../lib/src/replication/LiveSyncAbstractReplicator.ts";
import { LOG_LEVEL_INFO, LOG_LEVEL_VERBOSE, Logger } from "octagonal-wheels/common/logger";
import type { CommandShim } from "../../lib/src/replication/trystero/P2PReplicatorPaneCommon.ts";
import {
addP2PEventHandlers,
closeP2PReplicator,
openP2PReplicator,
P2PLogCollector,
removeP2PReplicatorInstance,
type P2PReplicatorBase,
} from "../../lib/src/replication/trystero/P2PReplicatorCore.ts";
import { reactiveSource } from "octagonal-wheels/dataobject/reactive_v2";
import type { Confirm } from "../../lib/src/interfaces/Confirm.ts";
import type ObsidianLiveSyncPlugin from "../../main.ts";
import type { SimpleStore } from "octagonal-wheels/databases/SimpleStoreBase";
// import { getPlatformName } from "../../lib/src/PlatformAPIs/obsidian/Environment.ts";
import type { LiveSyncCore } from "../../main.ts";
import { TrysteroReplicator } from "../../lib/src/replication/trystero/TrysteroReplicator.ts";
import { SETTING_KEY_P2P_DEVICE_NAME } from "../../lib/src/common/types.ts";

export class P2PReplicator extends LiveSyncCommands implements P2PReplicatorBase, CommandShim {
storeP2PStatusLine = reactiveSource("");

getSettings(): P2PSyncSetting {
return this.core.settings;
}
getDB() {
return this.core.localDatabase.localDatabase;
}

get confirm(): Confirm {
return this.core.confirm;
}
_simpleStore!: SimpleStore<any>;

simpleStore(): SimpleStore<any> {
return this._simpleStore;
}

constructor(plugin: ObsidianLiveSyncPlugin, core: LiveSyncCore) {
super(plugin, core);
setReplicatorFunc(() => this._replicatorInstance);
addP2PEventHandlers(this);
this.afterConstructor();
// onBindFunction is called in super class
// this.onBindFunction(plugin, plugin.services);
}

async handleReplicatedDocuments(docs: EntryDoc[]): Promise<boolean> {
// console.log("Processing Replicated Docs", docs);
return await this.services.replication.parseSynchroniseResult(
docs as PouchDB.Core.ExistingDocument<EntryDoc>[]
);
}

_anyNewReplicator(settingOverride: Partial<RemoteDBSettings> = {}): Promise<LiveSyncAbstractReplicator> {
const settings = { ...this.settings, ...settingOverride };
if (settings.remoteType == REMOTE_P2P) {
return Promise.resolve(new LiveSyncTrysteroReplicator(this.plugin.core));
}
return undefined!;
}
_replicatorInstance?: TrysteroReplicator;
p2pLogCollector = new P2PLogCollector();

afterConstructor() {
return;
}

async open() {
await openP2PReplicator(this);
}
async close() {
await closeP2PReplicator(this);
}

getConfig(key: string) {
return this.services.config.getSmallConfig(key);
}
setConfig(key: string, value: string) {
return this.services.config.setSmallConfig(key, value);
}
enableBroadcastCastings() {
return this?._replicatorInstance?.enableBroadcastChanges();
}
disableBroadcastCastings() {
return this?._replicatorInstance?.disableBroadcastChanges();
}

init() {
this._simpleStore = this.services.keyValueDB.openSimpleStore("p2p-sync");
return Promise.resolve(this);
}

async initialiseP2PReplicator(): Promise<TrysteroReplicator> {
await this.init();
try {
if (this._replicatorInstance) {
await this._replicatorInstance.close();
this._replicatorInstance = undefined;
}

if (!this.settings.P2P_AppID) {
this.settings.P2P_AppID = P2P_DEFAULT_SETTINGS.P2P_AppID;
}
const getInitialDeviceName = () =>
this.getConfig(SETTING_KEY_P2P_DEVICE_NAME) || this.services.vault.getVaultName();

const getSettings = () => this.settings;
const store = () => this.simpleStore();
const getDB = () => this.getDB();

const getConfirm = () => this.confirm;
const getPlatform = () => this.services.API.getPlatform();
const env = {
get db() {
return getDB();
},
get confirm() {
return getConfirm();
},
get deviceName() {
return getInitialDeviceName();
},
get platform() {
return getPlatform();
},
get settings() {
return getSettings();
},
processReplicatedDocs: async (docs: EntryDoc[]): Promise<void> => {
await this.handleReplicatedDocuments(docs);
// No op. This is a client and does not need to process the docs
},
get simpleStore() {
return store();
},
};
this._replicatorInstance = new TrysteroReplicator(env);
return this._replicatorInstance;
} catch (e) {
this._log(
e instanceof Error ? e.message : "Something occurred on Initialising P2P Replicator",
LOG_LEVEL_INFO
);
this._log(e, LOG_LEVEL_VERBOSE);
throw e;
}
}

onunload(): void {
removeP2PReplicatorInstance();
void this.close();
}

onload(): void | Promise<void> {
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P, () => {
void this.openPane();
});
this.p2pLogCollector.p2pReplicationLine.onChanged((line) => {
this.storeP2PStatusLine.value = line.value;
});
}
async _everyOnInitializeDatabase(): Promise<boolean> {
await this.initialiseP2PReplicator();
return Promise.resolve(true);
}

private async _allSuspendExtraSync() {
this.plugin.core.settings.P2P_Enabled = false;
this.plugin.core.settings.P2P_AutoAccepting = AutoAccepting.NONE;
this.plugin.core.settings.P2P_AutoBroadcast = false;
this.plugin.core.settings.P2P_AutoStart = false;
this.plugin.core.settings.P2P_AutoSyncPeers = "";
this.plugin.core.settings.P2P_AutoWatchPeers = "";
return await Promise.resolve(true);
}

// async $everyOnLoadStart() {
// return await Promise.resolve();
// }

async openPane() {
await this.services.API.showWindow(VIEW_TYPE_P2P);
}

async _everyOnloadStart(): Promise<boolean> {
this.plugin.registerView(
VIEW_TYPE_P2P,
(leaf) => new P2PReplicatorPaneView(leaf, this.plugin.core, this.plugin)
);
this.plugin.addCommand({
id: "open-p2p-replicator",
name: "P2P Sync : Open P2P Replicator",
callback: async () => {
await this.openPane();
},
});
this.plugin.addCommand({
id: "p2p-establish-connection",
name: "P2P Sync : Connect to the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return !(this._replicatorInstance?.server?.isServing ?? false);
}
void this.open();
},
});
this.plugin.addCommand({
id: "p2p-close-connection",
name: "P2P Sync : Disconnect from the Signalling Server",
checkCallback: (isChecking) => {
if (isChecking) {
return this._replicatorInstance?.server?.isServing ?? false;
}
Logger(`Closing P2P Connection`, LOG_LEVEL_NOTICE);
void this.close();
},
});
this.plugin.addCommand({
id: "replicate-now-by-p2p",
name: "Replicate now by P2P",
checkCallback: (isChecking) => {
if (isChecking) {
if (this.settings.remoteType == REMOTE_P2P) return false;
if (!this._replicatorInstance?.server?.isServing) return false;
return true;
}
void this._replicatorInstance?.replicateFromCommand(false);
},
});
this.plugin
.addRibbonIcon("waypoints", "P2P Replicator", async () => {
await this.openPane();
})
.addClass("livesync-ribbon-replicate-p2p");

return await Promise.resolve(true);
}
_everyAfterResumeProcess(): Promise<boolean> {
if (this.settings.P2P_Enabled && this.settings.P2P_AutoStart) {
setTimeout(() => void this.open(), 100);
}
const rep = this._replicatorInstance;
rep?.allowReconnection();
return Promise.resolve(true);
}
_everyBeforeSuspendProcess(): Promise<boolean> {
const rep = this._replicatorInstance;
rep?.disconnectFromServer();
return Promise.resolve(true);
}

override onBindFunction(core: LiveSyncCore, services: typeof core.services): void {
services.replicator.getNewReplicator.addHandler(this._anyNewReplicator.bind(this));
services.databaseEvents.onDatabaseInitialisation.addHandler(this._everyOnInitializeDatabase.bind(this));
services.appLifecycle.onInitialise.addHandler(this._everyOnloadStart.bind(this));
services.appLifecycle.onSuspending.addHandler(this._everyBeforeSuspendProcess.bind(this));
services.appLifecycle.onResumed.addHandler(this._everyAfterResumeProcess.bind(this));
services.setting.suspendExtraSync.addHandler(this._allSuspendExtraSync.bind(this));
}
}
@@ -4,10 +4,9 @@
 import {
     AcceptedStatus,
     ConnectionStatus,
-    type CommandShim,
     type PeerStatus,
-    type PluginShim,
 } from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
+import type { LiveSyncTrysteroReplicator } from "../../../lib/src/replication/trystero/LiveSyncTrysteroReplicator";
 import PeerStatusRow from "../P2PReplicator/PeerStatusRow.svelte";
 import { EVENT_LAYOUT_READY, eventHub } from "../../../common/events";
 import {
@@ -23,7 +22,7 @@
 import type { LiveSyncBaseCore } from "@/LiveSyncBaseCore";
 
 interface Props {
-    cmdSync: CommandShim;
+    cmdSync: LiveSyncTrysteroReplicator;
     core: LiveSyncBaseCore;
 }
 
@@ -95,9 +94,8 @@
         },
         true
     );
-    cmdSync.setConfig(SETTING_KEY_P2P_DEVICE_NAME, eDeviceName);
+    core.services.config.setSmallConfig(SETTING_KEY_P2P_DEVICE_NAME, eDeviceName);
     deviceName = eDeviceName;
-    // await plugin.saveSettings();
 }
 async function revert() {
     eP2PEnabled = settings.P2P_Enabled;
@@ -115,7 +113,7 @@
 const applyLoadSettings = (d: P2PSyncSetting, force: boolean) => {
     if (force) {
         const initDeviceName =
-            cmdSync.getConfig(SETTING_KEY_P2P_DEVICE_NAME) ?? core.services.vault.getVaultName();
+            core.services.config.getSmallConfig(SETTING_KEY_P2P_DEVICE_NAME) ?? core.services.vault.getVaultName();
         deviceName = initDeviceName;
         eDeviceName = initDeviceName;
     }
@@ -239,16 +237,16 @@
     await cmdSync.close();
 }
 function startBroadcasting() {
-    void cmdSync.enableBroadcastCastings();
+    void cmdSync.enableBroadcastChanges();
 }
 function stopBroadcasting() {
-    void cmdSync.disableBroadcastCastings();
+    void cmdSync.disableBroadcastChanges();
 }
 
 const initialDialogStatusKey = `p2p-dialog-status`;
 const getDialogStatus = () => {
     try {
-        const initialDialogStatus = JSON.parse(cmdSync.getConfig(initialDialogStatusKey) ?? "{}") as {
+        const initialDialogStatus = JSON.parse(core.services.config.getSmallConfig(initialDialogStatusKey) ?? "{}") as {
             notice?: boolean;
             setting?: boolean;
         };
@@ -265,7 +263,7 @@
         notice: isNoticeOpened,
         setting: isSettingOpened,
     };
-    cmdSync.setConfig(initialDialogStatusKey, JSON.stringify(dialogStatus));
+    core.services.config.setSmallConfig(initialDialogStatusKey, JSON.stringify(dialogStatus));
 });
 let isObsidian = $derived.by(() => {
     return core.services.API.getPlatform() === "obsidian";
@@ -1,19 +1,15 @@
 import { Menu, WorkspaceLeaf } from "@/deps.ts";
 import ReplicatorPaneComponent from "./P2PReplicatorPane.svelte";
-import type ObsidianLiveSyncPlugin from "../../../main.ts";
 import { mount } from "svelte";
-import { SvelteItemView } from "../../../common/SvelteItemView.ts";
-import { eventHub } from "../../../common/events.ts";
+import { SvelteItemView } from "@/common/SvelteItemView.ts";
+import { eventHub } from "@/common/events.ts";
 
 import { unique } from "octagonal-wheels/collection";
-import { LOG_LEVEL_NOTICE, REMOTE_P2P } from "../../../lib/src/common/types.ts";
-import { Logger } from "../../../lib/src/common/logger.ts";
-import { P2PReplicator } from "../CmdP2PReplicator.ts";
-import {
-    EVENT_P2P_PEER_SHOW_EXTRA_MENU,
-    type PeerStatus,
-} from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon.ts";
+import { LOG_LEVEL_NOTICE, REMOTE_P2P } from "@lib/common/types.ts";
+import { Logger } from "@lib/common/logger.ts";
+import { EVENT_P2P_PEER_SHOW_EXTRA_MENU, type PeerStatus } from "@lib/replication/trystero/P2PReplicatorPaneCommon.ts";
 import type { LiveSyncBaseCore } from "@/LiveSyncBaseCore.ts";
+import type { P2PPaneParams } from "@/lib/src/replication/trystero/UseP2PReplicatorResult";
 export const VIEW_TYPE_P2P = "p2p-replicator";
 
 function addToList(item: string, list: string) {
@@ -35,8 +31,8 @@ function removeFromList(item: string, list: string) {
 }
 
 export class P2PReplicatorPaneView extends SvelteItemView {
-    // plugin: ObsidianLiveSyncPlugin;
     core: LiveSyncBaseCore;
+    private _p2pResult: P2PPaneParams;
     override icon = "waypoints";
     title: string = "";
     override navigation = false;
@@ -45,11 +41,7 @@ export class P2PReplicatorPaneView extends SvelteItemView {
         return "waypoints";
     }
     get replicator() {
-        const r = this.core.getAddOn<P2PReplicator>(P2PReplicator.name);
-        if (!r || !r._replicatorInstance) {
-            throw new Error("Replicator not found");
-        }
-        return r._replicatorInstance;
+        return this._p2pResult.replicator;
     }
     async replicateFrom(peer: PeerStatus) {
         await this.replicator.replicateFrom(peer.peerId);
@@ -95,7 +87,7 @@ And you can also drop the local database to rebuild from the remote device.`,
 
         // this.plugin.settings = remoteConfig;
         // await this.plugin.saveSettings();
-        await this.core.services.setting.applyPartial(remoteConfig);
+        await this.core.services.setting.applyExternalSettings(remoteConfig);
         if (yn === DROP) {
             await this.core.rebuilder.scheduleFetch();
         } else {
@@ -131,10 +123,10 @@ And you can also drop the local database to rebuild from the remote device.`,
         await this.core.services.setting.applyPartial(currentSetting, true);
     }
     m?: Menu;
-    constructor(leaf: WorkspaceLeaf, core: LiveSyncBaseCore, plugin: ObsidianLiveSyncPlugin) {
+    constructor(leaf: WorkspaceLeaf, core: LiveSyncBaseCore, p2pResult: P2PPaneParams) {
         super(leaf);
-        // this.plugin = plugin;
         this.core = core;
+        this._p2pResult = p2pResult;
         eventHub.onEvent(EVENT_P2P_PEER_SHOW_EXTRA_MENU, ({ peer, event }) => {
             if (this.m) {
                 this.m.hide();
@@ -192,14 +184,10 @@ And you can also drop the local database to rebuild from the remote device.`,
         }
     }
     instantiateComponent(target: HTMLElement) {
-        const cmdSync = this.core.getAddOn<P2PReplicator>(P2PReplicator.name);
-        if (!cmdSync) {
-            throw new Error("Replicator not found");
-        }
         return mount(ReplicatorPaneComponent, {
             target: target,
             props: {
-                cmdSync: cmdSync,
+                cmdSync: this._p2pResult.replicator,
                 core: this.core,
             },
         });
@@ -1,7 +1,7 @@
 <script lang="ts">
     import { getContext } from "svelte";
     import { AcceptedStatus, type PeerStatus } from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
-    import type { P2PReplicator } from "../CmdP2PReplicator";
+    import type { LiveSyncTrysteroReplicator } from "../../../lib/src/replication/trystero/LiveSyncTrysteroReplicator";
     import { eventHub } from "../../../common/events";
     import { EVENT_P2P_PEER_SHOW_EXTRA_MENU } from "../../../lib/src/replication/trystero/P2PReplicatorPaneCommon";
 
@@ -57,7 +57,7 @@
     let isNew = $derived.by(() => peer.accepted === AcceptedStatus.UNKNOWN);
 
     function makeDecision(isAccepted: boolean, isTemporary: boolean) {
-        cmdReplicator._replicatorInstance?.server?.makeDecision({
+        replicator.makeDecision({
             peerId: peer.peerId,
             name: peer.name,
             decision: isAccepted,
@@ -65,13 +65,12 @@
         });
     }
     function revokeDecision() {
-        cmdReplicator._replicatorInstance?.server?.revokeDecision({
+        replicator.revokeDecision({
             peerId: peer.peerId,
             name: peer.name,
         });
     }
-    const cmdReplicator = getContext<() => P2PReplicator>("getReplicator")();
-    const replicator = cmdReplicator._replicatorInstance!;
+    const replicator = getContext<() => LiveSyncTrysteroReplicator>("getReplicator")();
 
     const peerAttrLabels = $derived.by(() => {
         const attrs = [];
@@ -87,14 +86,14 @@
         return attrs;
     });
     function startWatching() {
-        replicator.watchPeer(peer.peerId);
+        replicator?.watchPeer(peer.peerId);
     }
     function stopWatching() {
-        replicator.unwatchPeer(peer.peerId);
+        replicator?.unwatchPeer(peer.peerId);
    }
 
     function sync() {
-        replicator.sync(peer.peerId, false);
+        void replicator?.sync(peer.peerId, false);
     }
 
     function moreMenu(evt: MouseEvent) {
Some files were not shown because too many files have changed in this diff.