Compare commits


62 Commits

Author SHA1 Message Date
vorotamoroz
fa7ef62302 Fix: adjusting help
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:42:54 +09:00
vorotamoroz
81d8224330 bump
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:39:48 +09:00
vorotamoroz
cc466a4b3c ### Fixed
- Larger settings can now be exported and imported via QR code without issues (#595).

- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 18:37:44 +09:00
vorotamoroz
ceebca7de9 Merge pull request #862 from fabiomanz/main
chore: remove obsolete `version` attribute from docker-compose.yml
2026-04-29 17:30:35 +09:00
Fabio
c2f696d0a4 chore: attribute version is obsolete 2026-04-29 07:07:45 +00:00
vorotamoroz
1aa7c45794 Fix the readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:55:34 +09:00
vorotamoroz
faefa80cbd Fix again 2026-04-29 12:40:40 +09:00
vorotamoroz
3737eacffd Fix readme
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:39:42 +09:00
vorotamoroz
4c0af0b608 Fixed(cli):
- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- The command-line argument `vault` has been renamed to a more appropriate name, `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.

Co-authored-by: Copilot <copilot@github.com>
2026-04-29 12:22:00 +09:00
vorotamoroz
bb69eb13e7 bump 2026-04-27 11:15:07 +09:00
vorotamoroz
7c9db6376f Fixed:
- The setup wizard no longer drops the username and password silently (#865).
- Setup URI is now correctly imported (#859).
- A French translation has been added.
2026-04-27 11:14:06 +09:00
vorotamoroz
4c04e4e676 Merge pull request #863 from koteitan/fix/859-strip-trailing-slash-from-uri
fix: strip trailing slash from couchDB_URI to avoid double-slash 401
2026-04-27 11:08:11 +09:00
koteitan
14ec35b257 fix: strip trailing slash from couchDB_URI to avoid double-slash 401
When couchDB_URI ends with a trailing slash (e.g. https://host/), the
database name concatenation produces a double-slash path
(https://host//obsidiannotes), which causes CouchDB to reject requests
with 401 "Name or password is incorrect".

Strip trailing slashes from couchDB_URI / baseUri at the path
concatenation sites in:
- src/common/utils.ts (_requestToCouchDBFetch, _requestToCouchDB)
- src/features/LocalDatabaseMainte/CmdLocalDatabaseMainte.ts

The companion fix for the replication path is in the livesync-commonlib
submodule.

Ref: #859
2026-04-27 00:12:57 +09:00
vorotamoroz
b609e4973c Merge remote-tracking branch 'refs/remotes/origin/main' 2026-04-25 20:37:08 +09:00
vorotamoroz
354f0be9a3 bump
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 20:16:50 +09:00
vorotamoroz
16804ed34c Merge pull request #842 from kdavh/patch-1
Update README.md, fix webpeer link
2026-04-25 19:07:56 +09:00
vorotamoroz
31bd270869 Fixed: Hidden file JSON conflicts no longer keep re-opening and dismissing the merge dialogue before we can act, which fixes persistent unresolvable data.json conflicts in plug-in settings sync (related: #850).
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 17:22:25 +09:00
vorotamoroz
b5d054f259 Fixed: Issue report generation now redacts remoteConfigurations connection strings and keeps only the scheme (e.g. sls+https://), so credentials are not exposed in reports.
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 17:09:43 +09:00
vorotamoroz
1ef2955d00 - Fixed a worker-side recursion issue that could raise Maximum call stack size exceeded during chunk splitting (related: #855).
- Improved background worker crash cleanup so pending split/encryption tasks are released cleanly instead of being left in a waiting state (related: #855).
- On start-up, the selected remote configuration is now applied to runtime connection fields as well, reducing intermittent authentication failures caused by stale runtime settings (related: #855).

Co-authored-by: Copilot <copilot@github.com>
2026-04-25 16:51:37 +09:00
vorotamoroz
6ef56063b3 Fixed: Credentials are no longer broken during object storage configuration (related: #852).
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 15:03:38 +09:00
vorotamoroz
a912585800 Improve issue template
Co-authored-by: Copilot <copilot@github.com>
2026-04-25 14:01:18 +09:00
vorotamoroz
7a863625bc bump 2026-04-09 04:32:34 +01:00
vorotamoroz
99b4037820 Fixed
- Packing a batch during the journal sync now continues even if the batch contains no items to upload.
2026-04-06 12:51:09 +01:00
vorotamoroz
d59b5dc2f9 Fixed
- Remote configuration URIs are now correctly encrypted when saved after editing in the settings dialogue.
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
2026-04-06 11:47:19 +01:00
vorotamoroz
4d0203e4ca Update: beta tagging 2026-04-06 11:46:51 +01:00
vorotamoroz
3e4db571cd Fixed
- Remote configuration URIs are now correctly encrypted when saved after editing in the settings dialogue.
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
2026-04-06 11:45:26 +01:00
vorotamoroz
b0a9bd84d6 bump 2026-04-05 18:21:38 +09:00
vorotamoroz
8c4e62e7c1 ### Fixed
- Remote configurations are now reliably editable in the settings dialogue.
- Remote settings can now be fetched from the remote and applied to the local settings for each remote configuration entry.
- The layout no longer breaks when the description of a remote configuration entry is too long.
2026-04-05 18:20:56 +09:00
vorotamoroz
3e03d1dbd5 bump 2026-04-05 17:48:00 +09:00
vorotamoroz
0dbf4cface bump 2026-04-05 17:47:45 +09:00
vorotamoroz
bc22d61a3a Fixed: Error messages are now kept hidden when showing the status inside the editor is disabled. 2026-04-05 17:43:29 +09:00
vorotamoroz
d709bcc1d0 Add encryption for connection management 2026-04-05 16:24:34 +09:00
vorotamoroz
d7088be8af Improved: remote management 2026-04-05 16:00:57 +09:00
vorotamoroz
f17f1ecd93 ### Fixed
- No unexpected error (about a replicator) occurs during the early stage of initialisation.

### New features

- Multiple Remote Databases of the same type can now be configured, e.g., multiple CouchDB or S3 remotes.
- We can switch between multiple Remote Databases in the settings dialogue.
2026-04-03 13:47:56 +01:00
vorotamoroz
bf556bd9f4 Merge pull request #838 from chinhkrb113/contribai/docs/undocumented-test-environment-variables
📝 Docs: Undocumented test environment variables
2026-04-03 13:05:05 +09:00
vorotamoroz
8b40969fa3 Add ru locale 2026-04-02 10:28:58 +00:00
vorotamoroz
6cce931a88 Add test for CLI 2026-04-02 09:58:25 +00:00
vorotamoroz
216861f2c3 Prettified 2026-04-02 10:33:36 +01:00
vorotamoroz
6ce724afb4 Add dockerfiles to webapp and webpeer 2026-04-02 10:33:13 +01:00
vorotamoroz
2e3e106fb2 Fix dockerfile 2026-04-02 10:31:17 +01:00
vorotamoroz
00f2606a2f Added a bit for development on Windows. 2026-04-02 10:31:03 +01:00
vorotamoroz
3c94a44285 Fixed: Replication progress is now correctly saved and restored in the CLI. 2026-04-02 10:30:14 +01:00
vorotamoroz
4c0908acde Add CI Build for cli-docker image 2026-04-02 07:47:10 +01:00
vorotamoroz
cda27fb7f8 - Update trystero to v0.23.0
- Add dockerfile for CLI
- Change relay image for testing on arm64
2026-03-31 07:17:51 +00:00
vorotamoroz
837a828cec Fix: fix update note... 2026-03-30 09:20:01 +01:00
vorotamoroz
4c8e13ccb9 bump 2026-03-30 09:14:49 +01:00
vorotamoroz
1ae4eaab02 Merge pull request #805 from L4z3x/patch-1
[docs] added changing docker compose data and etc folder ownership to user 5984.
2026-03-30 17:04:12 +09:00
chinhkrb113
b1efbf74c7 docs: clarify P2P_TEST_RELAY as Nostr relay 2026-03-30 02:18:25 +07:00
kdavh
12f04f6cf7 Update README.md, fix webpeer link 2026-03-28 12:47:28 -04:00
vorotamoroz
a937feed3f Merge pull request #833 from rewse/fix/cli-sync-locked-error-message
fix(cli): show actionable error when sync fails due to locked remote DB
2026-03-28 23:58:34 +09:00
ChinhLee
2de9899a99 docs: undocumented test environment variables
The P2P test suite relies on several specific environment variables (e.g., `P2P_TEST_ROOM_ID`, `P2P_TEST_PASSPHRASE`, `P2P_TEST_RELAY`) loaded from `.env` or `.test.env`. Because these are not documented anywhere in the repository, new contributors will be unable to configure their local environment to run the P2P tests successfully.

Affected files: vitest.config.p2p.ts

Signed-off-by: ChinhLee <76194645+chinhkrb113@users.noreply.github.com>
2026-03-27 22:26:00 +07:00
vorotamoroz
a0af6201a5 - The error `Peer-to-Peer Sync is not enabled. We cannot open a new connection.` no longer occurs when we have not enabled P2P sync and are not expected to use it (#830). 2026-03-26 13:13:27 +01:00
vorotamoroz
9c7c6c8859 Merge pull request #831 from rewse/fix/cli-entrypoint-polyfill-default
fix(cli): handle incomplete localStorage in Node.js v25+
2026-03-26 20:36:46 +09:00
vorotamoroz
38d7cae1bc update some dependencies and ran npm-update. 2026-03-26 12:15:38 +01:00
vorotamoroz
fee34f0dcb Update dependency: deduplicate 2026-03-26 11:55:06 +01:00
Shibata, Tats
e01f7f4d92 test(cli): add TODO comment and locked-remote-DB test script
- Add inline TODO comment in runCommand.ts about standardising
  replication failure cause identification logic.
- Add test-sync-locked-remote-linux.sh that verifies:
  1. sync succeeds when the remote milestone is not locked.
  2. sync fails with an actionable error when the remote milestone
     has locked=true and accepted_nodes is empty.
2026-03-26 00:58:51 +09:00
Shibata, Tats
985004bc0e fix(cli): show actionable error when sync fails due to locked remote DB
When the remote database is locked and the CLI device is not in the
accepted_nodes list, openReplication returns false with no CLI-specific
guidance. The existing log message ('Fetch rebuilt DB, explicit
unlocking or chunk clean-up is required') is aimed at the Obsidian
plugin UI.

Check the replicator's remoteLockedAndDeviceNotAccepted flag after
sync failure and print a clear message directing the user to unlock
from the Obsidian plugin.

Ref: #832
2026-03-22 12:37:17 +09:00
Shibata, Tats
967a78d657 fix(cli): handle incomplete localStorage in Node.js v25+
Node.js v25 provides a built-in localStorage on globalThis, but without
`--localstorage-file` it is an empty object lacking getItem/setItem.
The existing check `!("localStorage" in globalThis)` passes, so the
polyfill is skipped and the CLI crashes with:

  TypeError: localStorage.getItem is not a function

Check for getItem as well so the polyfill is applied when the native
implementation is incomplete.
2026-03-22 11:57:47 +09:00
vorotamoroz
2ff60dd5ac Add missed files 2026-03-18 12:20:52 +01:00
vorotamoroz
c3341da242 Fix english 2026-03-18 12:05:15 +01:00
vorotamoroz
c2bfaeb5a9 Fixed: wrong import 2026-03-18 12:03:51 +01:00
L4z3x
310496d0b8 Added changing ownership of the docker compose data and etc folders, to prevent permission errors
While trying to follow the docker compose guide, I created the data folders as the root user and got this error when running the stack:
`touch: cannot touch '/opt/couchdb/etc/local.d/docker.ini': Permission denied`

The problem was solved by changing the ownership of the folders to user 5984, the one in the docker compose file.
2026-02-23 22:21:18 +01:00
81 changed files with 6681 additions and 17470 deletions

31
.dockerignore Normal file

@@ -0,0 +1,31 @@
# Git history
.git/
.gitignore
# Dependencies — re-installed inside Docker
node_modules/
src/apps/cli/node_modules/
# Pre-built CLI output — rebuilt inside Docker
src/apps/cli/dist/
# Obsidian plugin build outputs
main.js
main_org.js
pouchdb-browser.js
production/
# Test coverage and reports
coverage/
# Local environment / secrets
.env
*.env
.test.env
# local config files
*.local
# OS artefacts
.DS_Store
Thumbs.db

1
.gitattributes vendored Normal file

@@ -0,0 +1 @@
*.sh text eol=lf


@@ -2,77 +2,104 @@
name: Issue report name: Issue report
about: Create a report to help us improve about: Create a report to help us improve
title: '' title: ''
labels: '' labels: 'bug'
assignees: '' assignees: ''
--- ---
Thank you for taking the time to report this issue! Thank you for taking the time to report this issue!
To improve the process, I would like to ask you to let me know the information in advance. Before filling in this form, please read: [How to report an issue](../docs/to_issue_reporting.md).
All instructions and examples, and empty entries can be deleted. Issues with sufficient information will be prioritised.
Just for your information, a [filled example](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Issue+example) is also written.
## Abstract ---
The synchronisation hung up immediately after connecting.
## Expected behaviour ## Required
- Synchronisation ends with the message `Replication completed`
- Everything synchronised
## Actually happened ### Abstract
- Synchronisation has been cancelled with the message `TypeError ... ` (captured in the attached log, around LL.10-LL.12) <!-- Briefly describe the problem in one or two sentences. -->
- No files synchronised
## Reproducing procedure ### Expected behaviour
<!-- What did you expect to happen? -->
1. Configure LiveSync as in the attached material. ### Actually happened
2. Click the replication button on the ribbon. <!-- What actually happened? Include any error messages. -->
3. Synchronising has begun.
4. About two or three seconds later, we got the error `TypeError ... `.
5. Replication has been stopped. No files synchronised.
Note: If you do not catch the reproducing procedure, please let me know the frequency and signs. ### Reproducing procedure
<!-- Step-by-step instructions to reproduce the issue. If you cannot reproduce it reliably, please describe the frequency and any signs you noticed. -->
## Report materials
If the information is not available, do not hesitate to report it as it is. You can also of course omit it if you think this is indeed unnecessary. If it is necessary, I will ask you.
### Report from the LiveSync
For more information, please refer to [Making the report](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Making+the+report).
<details>
<summary>Report from hatch</summary>
```
<!-- paste here -->
```
</details>
### Obsidian debug info ### Obsidian debug info
Please provide debug info for **each device involved**. The primary device (where the issue occurred) is required; others are strongly recommended. If your issue involves synchronisation between devices, debug info from relevant devices is very helpful.
To get it: open the command palette → "Show debug info".
<details> <details>
<summary>Debug info</summary> <summary>Device 1 (primary)</summary>
``` ```
<!-- paste here --> <!-- paste here -->
``` ```
</details> </details>
<details>
<summary>Device 2 (if applicable)</summary>
```
<!-- paste here -->
```
</details>
### LiveSync version
The hatch report (below) includes version information. If you cannot provide the report, please fill in the version here.
- Self-hosted LiveSync version: <!-- e.g. 0.23.0 — find it in Obsidian Settings → Community Plugins -->
### Report from LiveSync
Open the `Hatch` pane in LiveSync settings and press `Make report`. Paste here or upload to [Gist](https://gist.github.com/) and share the link.
<details>
<summary>Report from hatch (primary)</summary>
```
<!-- paste here or link to Gist -->
```
</details>
<details>
<summary>Report from hatch (if applicable)</summary>
```
<!-- paste here or link to Gist -->
```
</details>
### Plug-in log ### Plug-in log
We can see the log by tapping the Document box icon. If you noticed something suspicious, please let me know. Enable `Verbose Log` in General Settings first, then reproduce the issue and copy the log (tap the document box icon in the ribbon).
Note: **Please enable `Verbose Log`**. For detail, refer to [Logging](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Logging), please. Paste here or upload to [Gist](https://gist.github.com/) and share the link.
<details> <details>
<summary>Plug-in log</summary> <summary>Plug-in log (primary)</summary>
``` ```
<!-- paste here --> <!-- paste here or link to Gist -->
``` ```
</details> </details>
### Network log
Network logs displayed in DevTools will possibly help with connection-related issues. To capture that, please refer to [DevTools](https://docs.vrtmrz.net/LiveSync/hintandtrivia/DevTools). <details>
<summary>Plug-in log (if applicable)</summary>
```
<!-- paste here or link to Gist -->
```
</details>
---
## Optional
### Screenshots ### Screenshots
If applicable, please add screenshots to help explain your problem. If applicable, please add screenshots to help explain your problem.
### Other information, insights and intuition. ### Other information, insights and intuition
Please provide any additional context or information about the problem. Please provide any additional context or information about the problem.

101
.github/workflows/cli-docker.yml vendored Normal file

@@ -0,0 +1,101 @@
# Build and push the CLI Docker image to GitHub Container Registry (GHCR).
#
# Image tag format: <manifest-version>-<unix-epoch>-cli
# Example: 0.25.56-1743500000-cli
#
# The image is also tagged 'latest' for convenience.
# Image name: ghcr.io/<owner>/livesync-cli
name: Build and Push CLI Docker Image
on:
push:
tags:
- "*.*.*-cli"
workflow_dispatch:
inputs:
dry_run:
description: Build only (do not push image to GHCR)
required: false
type: boolean
default: true
force:
description: Continue to build/push even if CLI E2E fails (workflow_dispatch only)
required: false
type: boolean
default: false
jobs:
build-and-push:
runs-on: ubuntu-latest
timeout-minutes: 90
permissions:
contents: read
packages: write
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: recursive
- name: Derive image tag
id: meta
run: |
VERSION=$(jq -r '.version' manifest.json)
EPOCH=$(date +%s)
TAG="${VERSION}-${EPOCH}-cli"
IMAGE="ghcr.io/${{ github.repository_owner }}/livesync-cli"
echo "tag=${TAG}" >> $GITHUB_OUTPUT
echo "image=${IMAGE}" >> $GITHUB_OUTPUT
echo "full=${IMAGE}:${TAG}" >> $GITHUB_OUTPUT
echo "version=${IMAGE}:${VERSION}-cli" >> $GITHUB_OUTPUT
echo "latest=${IMAGE}:latest" >> $GITHUB_OUTPUT
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "24.x"
cache: "npm"
- name: Install dependencies
run: npm ci
- name: Run CLI E2E (docker)
id: e2e
continue-on-error: ${{ github.event_name == 'workflow_dispatch' && inputs.force }}
working-directory: src/apps/cli
env:
CI: true
run: npm run test:e2e:docker:all
- name: Stop test containers (safety net)
if: always()
working-directory: src/apps/cli
run: |
# Keep this as a safety net for future suites/steps that may leave containers running.
bash ./util/couchdb-stop.sh >/dev/null 2>&1 || true
bash ./util/minio-stop.sh >/dev/null 2>&1 || true
bash ./util/p2p-stop.sh >/dev/null 2>&1 || true
- name: Build and push
if: ${{ steps.e2e.outcome == 'success' || (github.event_name == 'workflow_dispatch' && inputs.force) }}
uses: docker/build-push-action@v6
with:
context: .
file: src/apps/cli/Dockerfile
push: ${{ !(github.event_name == 'workflow_dispatch' && inputs.dry_run) }}
tags: |
${{ steps.meta.outputs.full }}
${{ steps.meta.outputs.version }}
${{ steps.meta.outputs.latest }}
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -24,7 +24,7 @@ Additionally, it supports peer-to-peer synchronisation using WebRTC now (experim
 - WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
 - Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
 - [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
-- [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
+- [webpeer](https://github.com/vrtmrz/obsidian-livesync/tree/main/src/apps/webpeer): A pseudo-client for receiving and sending data between devices.
 - A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
 - For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).

92
aggregator.html Normal file

@@ -0,0 +1,92 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Self-hosted LiveSync Setup QR Aggregator</title>
<style>
body { font-family: sans-serif; display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100vh; margin: 0; background-color: #f4f4f9; color: #333; }
.container { background: white; padding: 2rem; border-radius: 8px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); text-align: center; max-width: 90%; }
.progress { margin: 20px 0; font-size: 1.2rem; font-weight: bold; }
.status { margin-bottom: 20px; color: #666; }
.btn { display: inline-block; padding: 12px 24px; background-color: #7c4dff; color: white; text-decoration: none; border-radius: 4px; font-weight: bold; transition: background-color 0.2s; border: none; cursor: pointer; }
.btn:hover { background-color: #651fff; }
.btn:disabled { background-color: #ccc; cursor: not-allowed; }
.grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(40px, 1fr)); gap: 8px; margin: 20px 0; }
.tile { width: 40px; height: 40px; border: 2px solid #ddd; border-radius: 4px; display: flex; align-items: center; justify-content: center; font-size: 0.8rem; }
.tile.filled { background-color: #7c4dff; color: white; border-color: #7c4dff; }
</style>
</head>
<body>
<div class="container">
<h1>LiveSync Setup</h1>
<div id="app">
<p>Checking hash data...</p>
</div>
</div>
<script>
function updateUI() {
const hash = window.location.hash.substring(1);
const params = new URLSearchParams(hash);
const id = params.get('id');
const total = parseInt(params.get('n') || '0');
const index = parseInt(params.get('i') || '-1');
const data = params.get('d');
const app = document.getElementById('app');
if (!id || total <= 0 || index === -1 || !data) {
app.innerHTML = '<p class="status">Invalid setup URL. Please scan the QR code correctly.</p>';
return;
}
// Get session data
const storageKey = 'ls_agg_' + id;
let session = JSON.parse(localStorage.getItem(storageKey) || '{}');
// Save current data
session[index] = data;
localStorage.setItem(storageKey, JSON.stringify(session));
const receivedIndexes = Object.keys(session).map(Number);
const count = receivedIndexes.length;
let html = `
<div class="status">Session ID: ${id}</div>
<div class="progress">${count} / ${total} Loaded</div>
<div class="grid">
`;
for (let i = 0; i < total; i++) {
const isFilled = session[i] !== undefined;
html += `<div class="tile ${isFilled ? 'filled' : ''}">${i + 1}</div>`;
}
html += `</div>`;
if (count === total) {
const sortedData = Array.from({length: total}, (_, i) => session[i]).join('');
// Use the correct protocol for settings
const obsidianUri = `obsidian://setuplivesync?settingsQR=${sortedData}`;
html += `
<p>All parts have been collected!</p>
<a href="${obsidianUri}" class="btn">Open Obsidian to complete setup</a>
<p style="margin-top:20px; font-size:0.8rem; color: #999;">Note: If the button does not respond, please ensure you are opening this in a browser that can trigger Obsidian.</p>
`;
} else {
html += `
<p class="status">Please scan the next QR code.</p>
<button class="btn" disabled>Waiting...</button>
`;
}
app.innerHTML = html;
}
window.addEventListener('hashchange', updateUI);
updateUI();
</script>
</body>
</html>

10
devs.md

@@ -153,17 +153,17 @@ export class ModuleExample extends AbstractObsidianModule {
 ## Beta Policy
-- Beta versions are denoted by appending `-patched-N` to the base version number.
+- Beta versions are denoted by appending `+patchedN` to the base version number.
 - `The base version` mostly corresponds to the stable release version.
-- e.g., v0.25.41-patched-1 is equivalent to v0.25.42-beta1.
+- e.g., v0.25.41+patched1 is equivalent to v0.25.42-beta1.
 - This notation is due to SemVer incompatibility of Obsidian's plugin system.
-- Hence, this release is `0.25.41-patched-1`.
+- Hence, this release is `0.25.41+patched1`.
 - Each beta version may include larger changes, but bug fixes will often not be included.
 - I think that in most cases, bug fixes will cause the stable releases.
 - They will not be released per branch or backported; they will simply be released.
 - Bug fixes for previous versions will be applied to the latest beta version.
-  This means, if xx.yy.02-patched-1 exists and there is a defect in xx.yy.01, a fix is applied to xx.yy.02-patched-1 and yields xx.yy.02-patched-2.
+  This means, if xx.yy.02+patched1 exists and there is a defect in xx.yy.01, a fix is applied to xx.yy.02+patched1 and yields xx.yy.02+patched2.
-  If the fix is required immediately, it is released as xx.yy.02 (with xx.yy.01-patched-1).
+  If the fix is required immediately, it is released as xx.yy.02 (with xx.yy.01+patched1).
 - This procedure remains unchanged from the current one.
 - At the very least, I am using the latest beta.
 - However, I will not be using a beta continuously for a week after it has been released. It is probably closer to an RC in nature.


@@ -1,7 +1,6 @@
 # For details and other explanations about this file refer to:
 # https://github.com/vrtmrz/obsidian-livesync/blob/main/docs/setup_own_server.md#traefik
-version: "2.1"
 services:
 couchdb:
 image: couchdb:latest


@@ -0,0 +1,206 @@
# The design document of remote configuration management
## Goal
- Allow us to manage multiple remote connections in a single vault.
- Keep the existing synchronisation implementations working without requiring a large rewrite.
- Provide a safe migration path from the previous single-remote configuration model.
- Allow connections to be imported and exported in a compact and reusable format.
## Motivation
Historically, Self-hosted LiveSync stored one effective remote configuration directly in the main settings. This was simple, but it had several limitations.
- We could only keep one CouchDB, one bucket, or one Peer-to-Peer target as the effective configuration at a time.
- Switching between same-type-remotes required manually rewriting the active settings.
- Setup URI, QR code, CLI setup, and similar entry points all restored settings differently, which made migration logic easy to miss.
- The internal settings shape had gradually become a mix of user-facing settings, transport-specific credentials, and compatibility-oriented values.
Once multiple remotes of the same type became desirable, the previous model no longer scaled well enough. We therefore needed a structure that could store many remotes, still expose one effective remote to the replication logic, and keep migration and import behaviour consistent.
## Prerequisite
- Existing synchronisation features must continue to read an effective remote configuration from the current settings.
- Existing vaults must continue to work without requiring manual reconfiguration.
- Setup URI, QR code, CLI setup, protocol handlers, and other imported settings must be normalised in the same way.
- Import and export must be compact enough to be shared easily.
- We must be explicit that exported connection strings may contain credentials or secrets.
## Outlined methods and implementation plans
### Abstract
The current settings now have two layers for remote configuration.
1. A stored collection of named remotes.
2. One active remote projected into the legacy flat settings fields.
This means the replication and database layers can continue to read the effective remote from the existing settings fields, while the settings dialogue and migration logic can manage many stored remotes.
In short, the list is the source of truth for saved remotes, and the legacy fields remain the runtime compatibility layer.
### Data model
The main settings now contain the following properties.
```typescript
type RemoteConfiguration = {
    id: string;
    name: string;
    uri: string;
    isEncrypted: boolean;
};
type RemoteConfigurations = {
    remoteConfigurations: Record<string, RemoteConfiguration>;
    activeConfigurationId: string;
};
```
Each entry stores a connection string in `uri`.
- `sls+http://` or `sls+https://` for CouchDB-compatible remotes
- `sls+s3://` for bucket-style remotes
- `sls+p2p://` for Peer-to-Peer remotes
This structure allows multiple remotes of the same type to be stored without adding a large number of duplicated settings fields.
### Runtime compatibility
The replication logic still reads the effective remote from legacy flat settings such as the following.
- `remoteType`
- `couchDB_URI`, `couchDB_USER`, `couchDB_PASSWORD`, `couchDB_DBNAME`
- `endpoint`, `bucket`, `accessKey`, `secretKey`, and related bucket fields
- `P2P_roomID`, `P2P_passphrase`, and related Peer-to-Peer fields
When a remote is activated, its connection string is parsed and projected into these legacy fields. Therefore, existing services do not need to know whether the remote came from an old vault, a Setup URI, or the new remote list.
This projection is intentionally one-way at runtime. The stored remote list is the persistent catalogue, while the flat fields describe the remote currently in use.
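As a rough, hedged illustration of that projection (the helper and field names below are illustrative only; the real implementation lives in livesync-commonlib and differs in detail), activating a CouchDB-type entry amounts to parsing its `sls+`-prefixed URI and copying the parts into the flat fields:
```typescript
// Hypothetical sketch only; not the actual projection code.
type LegacyCouchDbFields = {
    remoteType: string;
    couchDB_URI: string;
    couchDB_USER: string;
    couchDB_PASSWORD: string;
    couchDB_DBNAME: string;
};

function projectCouchDbRemote(active: RemoteConfiguration, settings: LegacyCouchDbFields): void {
    // "sls+https://user:pass@host:5984/notes" becomes a standard URL once the prefix is stripped.
    const url = new URL(active.uri.replace(/^sls\+/, ""));
    settings.remoteType = "couchdb"; // illustrative value; the real enum may differ
    settings.couchDB_URI = `${url.protocol}//${url.host}`;
    settings.couchDB_USER = decodeURIComponent(url.username);
    settings.couchDB_PASSWORD = decodeURIComponent(url.password);
    settings.couchDB_DBNAME = url.pathname.replace(/^\//, "");
}
```
Bucket and Peer-to-Peer entries would be projected into their own flat fields in the same one-way fashion.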
### Connection string format
The connection string is the transport-neutral storage format for a remote entry.
Benefits:
- It is compact enough for clipboard-based workflows.
- It can be used for import and export in the settings dialogue.
- It avoids introducing a separate serialisation format only for the remote list.
- It can be parsed into the legacy settings shape whenever the active remote changes.
This is not equivalent to Setup URI.
- Setup URI represents a broader settings transfer workflow.
- A remote connection string represents one remote only.
### Import and export
The settings dialogue now supports the following workflows.
- Add connection: create a new remote by using the remote setup dialogues.
- Import connection: paste a connection string, validate it, and save it as a named remote.
- Export: copy a stored remote connection string to the clipboard.
Import normalises the string by parsing and serialising it again before saving. This ensures that equivalent but differently formatted URIs are saved in a canonical form.
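A minimal sketch of that round trip, assuming the connection string can be handled by the standard `URL` parser once the `sls+` prefix is stripped (the real parser is more involved):
```typescript
// Minimal sketch, not the real implementation: equivalent but differently
// formatted URIs (case of the host, default ports) end up byte-identical
// after a parse/serialise round trip.
function canonicaliseConnectionString(input: string): string {
    const url = new URL(input.trim().replace(/^sls\+/, ""));
    return "sls+" + url.toString();
}
```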
Export is intentionally simple. It copies the connection string itself, because this is the most direct representation of one remote entry.
### Security note
Connection strings may include credentials, secrets, JWT-related values, or Peer-to-Peer passphrases.
Therefore:
- Export is a deliberate clipboard operation.
- Import trusts the supplied connection string as-is after parsing.
- We should regard exported connection strings as sensitive information, much like Setup URI or a credentials-bearing configuration file.
The `isEncrypted` field is currently reserved for future expansion. At present, the connection string itself is stored plainly inside the settings data, in the same sense that the effective runtime configuration can contain usable remote credentials.
### Migration strategy
Older vaults store only one effective remote in the flat settings fields. The migration creates a first remote list from those values.
Rules:
- If no remote list exists and the legacy fields contain a CouchDB configuration, create `legacy-couchdb`.
- If no remote list exists and the legacy fields contain a bucket configuration, create `legacy-s3`.
- If no remote list exists and the legacy fields contain a Peer-to-Peer configuration, create `legacy-p2p`.
- If more than one legacy remote is populated, create all possible entries and select the active one according to `remoteType`.
This migration is intentionally additive. It does not remove the flat fields because they remain necessary as the active runtime projection.
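A hedged sketch of those rules (the type, the `toUri` serialiser parameter, and the `remoteType` literals are placeholders for this illustration; the actual migration inside `adjustSettings` differs):
```typescript
// Simplified, additive migration sketch; not the actual code.
type LegacyFlatSettings = {
    remoteType: "couchdb" | "bucket" | "p2p"; // placeholder literals
    couchDB_URI: string;
    endpoint: string; // bucket endpoint
    P2P_roomID: string;
    remoteConfigurations?: Record<string, RemoteConfiguration>;
    activeConfigurationId?: string;
};

function migrateLegacyRemotes(s: LegacyFlatSettings, toUri: (kind: "couchdb" | "s3" | "p2p") => string): void {
    // Only run when no remote list exists yet.
    if (s.remoteConfigurations && Object.keys(s.remoteConfigurations).length > 0) return;
    const entries: Record<string, RemoteConfiguration> = {};
    if (s.couchDB_URI) entries["legacy-couchdb"] = { id: "legacy-couchdb", name: "legacy-couchdb", uri: toUri("couchdb"), isEncrypted: false };
    if (s.endpoint) entries["legacy-s3"] = { id: "legacy-s3", name: "legacy-s3", uri: toUri("s3"), isEncrypted: false };
    if (s.P2P_roomID) entries["legacy-p2p"] = { id: "legacy-p2p", name: "legacy-p2p", uri: toUri("p2p"), isEncrypted: false };
    if (Object.keys(entries).length === 0) return;
    s.remoteConfigurations = entries;
    // If several legacy remotes are populated, remoteType decides the active one.
    s.activeConfigurationId =
        s.remoteType === "bucket" ? "legacy-s3" : s.remoteType === "p2p" ? "legacy-p2p" : "legacy-couchdb";
}
```
The flat fields are left in place, matching the additive nature of the migration described above.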
### Normalisation and application paths
One important design lesson from this work is that migration cannot rely only on loading `data.json`.
Settings may enter the system from several routes:
- normal settings load
- Setup URI
- QR code
- protocol handler
- CLI setup
- Peer-to-Peer remote configuration retrieval
- red flag based remote adjustment
- settings markdown import
To keep behaviour consistent, normalisation is centralised in the settings service.
- `adjustSettings` is responsible for in-place normalisation and migration of a settings object.
- `applyExternalSettings` is responsible for applying imported or externally supplied settings after passing them through the same normalisation flow.
This ensures that imported settings can migrate to the current remote list model even if they never passed through the ordinary `loadSettings` path.
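Conceptually (a hedged sketch; the real service methods have richer signatures), every route hands its settings object to the same pair of entry points:
```typescript
// Illustrative signatures only; the actual settings service differs.
interface SettingsNormaliser {
    /** In-place normalisation and migration of a settings object (includes the remote-list migration). */
    adjustSettings(settings: Record<string, unknown>): void;
    /** Normalise externally supplied settings (Setup URI, QR code, CLI, ...) and then apply them. */
    applyExternalSettings(incoming: Record<string, unknown>): Promise<void>;
}

async function importFromSetupUri(service: SettingsNormaliser, decoded: Record<string, unknown>): Promise<void> {
    // No route writes data.json directly; everything funnels through the same
    // normalisation path, so the remote-list migration happens regardless of origin.
    await service.applyExternalSettings(decoded);
}
```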
### Why not store only the remote list
It would be possible to let all consumers parse the active remote every time and stop using the flat fields entirely. However, this would require broader changes across replication, diagnostics, and compatibility layers.
The current design keeps the change set limited.
- The remote list improves storage and UX.
- The flat fields preserve compatibility and reduce migration risk.
This is a pragmatic transitional architecture, not an accidental duplication.
## Test strategy
The feature should be tested from four viewpoints.
1. Migration from old settings.
- A vault with only legacy flat remote settings should gain a remote list automatically.
- The correct active remote should be selected according to `remoteType`.
2. Runtime activation.
- Activating a stored remote should correctly project its values into the effective flat settings.
3. External import paths.
- Setup URI, QR code, CLI setup, Peer-to-Peer remote config, red flag adjustment, and settings markdown import should all pass through the same normalisation path.
4. Import and export.
- Imported connection strings should be parsed, canonicalised, named, and stored correctly.
- Export should copy the exact saved connection string.
## Documentation strategy
- This document explains the design and compatibility model of remote configuration management.
- User-facing setup documents should explain only how to add, import, export, and activate remotes.
- Release notes may refer to this document when changes in remote handling are significant.
## Outlook
Import/export configuration strings should also be encrypted in the future, but this is a separate feature that can be added on top of the current design.
## Consideration and conclusion
The remote configuration list solves the practical need to manage multiple remotes without forcing the whole codebase to abandon the previous effective-settings model at once.
Its core idea is modest but effective.
- Store named remotes as connection strings.
- Select one active remote.
- Project it into the legacy settings for runtime use.
- Normalise every imported settings object through the same path.
This keeps the implementation understandable and migration-friendly. It also opens the door for future work, such as encrypted per-remote storage, richer remote metadata, or remote-scoped options, without forcing another large redesign of how remotes are represented.


@@ -64,6 +64,10 @@ Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
 # Creating the save data & configuration directories.
 mkdir couchdb-data
 mkdir couchdb-etc
+# Changing perms to user 5984.
+chown -R 5984:5984 ./couchdb-data
+chown -R 5984:5984 ./couchdb-etc
 ```
 #### 2. Create a `docker-compose.yml` file with the following added to it
@@ -226,7 +230,6 @@ And, be sure to check the server log and be careful of malicious access.
 If you are using Traefik, this [docker-compose.yml](https://github.com/vrtmrz/obsidian-livesync/blob/main/docker-compose.traefik.yml) file (also pasted below) has all the right CORS parameters set. It assumes you have an external network called `proxy`.
 ```yaml
-version: "2.1"
 services:
 couchdb:
 image: couchdb:latest


@@ -71,7 +71,6 @@ obsidian-livesync
 可以参照以下内容编辑 `docker-compose.yml`:
 ```yaml
-version: "2.1"
 services:
 couchdb:
 image: couchdb

145
docs/to_issue_reporting.md Normal file

@@ -0,0 +1,145 @@
# How to report an issue
Thank you for helping improve Self-hosted LiveSync!
This document explains how to collect the information needed for an issue report. Issues with sufficient information will be prioritised.
---
## Filled example
Here is an example of a well-filled report for reference.
### Abstract
The synchronisation hung up immediately after connecting.
### Expected behaviour
- Synchronisation ends with the message `Replication completed`
- Everything synchronised
### Actually happened
- Synchronisation was cancelled with the message `TypeError: Failed to fetch` (visible in the plug-in log around lines 10-12)
- No files synchronised
### Reproducing procedure
1. Configure LiveSync with the settings shown in the attached report.
2. Click the sync button on the ribbon.
3. Synchronisation begins.
4. About two or three seconds later, the error `TypeError: Failed to fetch` appears.
5. Replication stops. No files synchronised.
### Obsidian debug info (Device 1 — Windows desktop)
```
SYSTEM INFO:
Obsidian version: v1.2.8
Installer version: v1.1.15
Operating system: Windows 10 Pro 10.0.19044
Login status: logged in
Catalyst license: supporter
Insider build toggle: off
Community theme: Minimal v6.1.11
Snippets enabled: 3
Restricted mode: off
Plugins installed: 35
Plugins enabled: 11
1: Self-hosted LiveSync v0.19.4
...
```
### Report from LiveSync
```
----remote config----
cors:
credentials: "true"
...
---- Plug-in config ---
couchDB_URI: self-hosted
couchDB_USER: 𝑅𝐸𝐷𝐴𝐶𝑇𝐸𝐷
...
```
### Plug-in log
```
2023/5/24 10:50:33->HTTP:GET to:/ -> failed
2023/5/24 10:50:33->TypeError:Failed to fetch
2023/5/24 10:50:33->could not connect to https://example.com/ : your vault
(TypeError:Failed to fetch)
```
---
## How to collect each piece of information
### Obsidian debug info
Open the command palette (`Ctrl/Cmd + P`) and run **"Show debug info"**. Copy the output and paste it into the issue.
If multiple devices are involved in the problem (e.g., sync between a phone and a desktop), please provide the debug info for each device. The device where the issue occurred is required; information from other devices is strongly recommended.
### Report from LiveSync (hatch report)
1. Open LiveSync settings.
2. Go to the **Hatch** pane.
3. Press the **Make report** button.
The report will be copied to your clipboard. It contains your LiveSync configuration and the remote server configuration, with credentials automatically redacted.
**Tip:** For large reports, consider uploading to [GitHub Gist](https://gist.github.com/) and sharing the link instead of pasting directly into the issue. This makes it easier to manage, and if you accidentally leave sensitive data in, a Gist can be deleted.
If you paste directly, wrap it in a `<details>` tag to keep the issue readable:
```
<details>
<summary>Report from hatch</summary>
```
----remote config----
:
```
</details>
```
### Plug-in log
The plug-in log is volatile by default (not saved to disk) and shown only in the log dialogue, which can be opened by tapping the **document box icon** in the ribbon.
#### Enable verbose log
Before reproducing the issue, enable **Verbose Log** in LiveSync's **General Settings** pane. Without this, many diagnostic messages will be suppressed.
#### Persist the log to a file (optional)
If you need to capture a log across a restart, enable **"Write logs into the file"** in General Settings. Note that log files may contain sensitive information — use this option only for troubleshooting, and disable it afterwards.
As with the hatch report, consider uploading large logs to [GitHub Gist](https://gist.github.com/).
### Network log (for connection-related issues only)
If the issue is related to network connectivity (e.g., cannot connect to the server, authentication errors), a network log captured from browser DevTools can be very helpful. You do not need to include this for non-connection issues.
#### Opening DevTools
| Platform | Shortcut |
|----------|----------|
| Windows / Linux | `Ctrl + Shift + I` |
| macOS | `Cmd + Shift + I` |
| Android | Use [Chrome remote debugging](https://developer.chrome.com/docs/devtools/remote-debugging/) |
| iOS | Use [Safari Web Inspector](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios) on a Mac |
#### What to capture
1. Open the **Network** pane in DevTools.
2. Reproduce the issue.
3. Look for requests marked in red.
4. Capture screenshots of the **Headers**, **Payload**, and **Response** tabs for those requests.
**Important — redact before sharing:**
- Headers: conceal the request URL path, Remote Address, `authority`, and `authorization` values.
- Payload / Response: the `_id` field contains your file paths — redact if needed.


@@ -1,7 +1,7 @@
 {
 "id": "obsidian-livesync",
 "name": "Self-hosted LiveSync",
-"version": "0.25.54",
+"version": "0.25.60",
 "minAppVersion": "0.9.12",
 "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
 "author": "vorotamoroz",

19558
package-lock.json generated

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
 {
 "name": "obsidian-livesync",
-"version": "0.25.54",
+"version": "0.25.60",
 "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
 "main": "main.js",
 "type": "module",
@@ -80,9 +80,9 @@
 "@types/transform-pouch": "^1.0.6",
 "@typescript-eslint/eslint-plugin": "8.56.1",
 "@typescript-eslint/parser": "8.56.1",
-"@vitest/browser": "^4.0.16",
-"@vitest/browser-playwright": "^4.0.16",
-"@vitest/coverage-v8": "^4.0.16",
+"@vitest/browser": "^4.1.1",
+"@vitest/browser-playwright": "^4.1.1",
+"@vitest/coverage-v8": "^4.1.1",
 "builtin-modules": "5.0.0",
 "dotenv": "^17.3.1",
 "dotenv-cli": "^11.0.0",
@@ -121,8 +121,8 @@
 "typescript": "5.9.3",
 "vite": "^7.3.1",
 "vite-plugin-istanbul": "^8.0.0",
-"vitest": "^4.0.16",
-"webdriverio": "^9.24.0",
+"vitest": "^4.1.1",
+"webdriverio": "^9.27.0",
 "yaml": "^2.8.2"
 },
 "dependencies": {
@@ -132,17 +132,17 @@
 "@smithy/middleware-apply-body-checksum": "^4.3.9",
 "@smithy/protocol-http": "^5.3.9",
 "@smithy/querystring-builder": "^4.2.9",
+"@trystero-p2p/nostr": "^0.23.0",
 "commander": "^14.0.3",
 "diff-match-patch": "^1.0.5",
 "fflate": "^0.8.2",
 "idb": "^8.0.3",
 "markdown-it": "^14.1.1",
 "minimatch": "^10.2.2",
-"node-datachannel": "^0.32.1",
 "octagonal-wheels": "^0.1.45",
 "pouchdb-adapter-leveldb": "^9.0.0",
 "qrcode-generator": "^1.4.4",
-"trystero": "^0.22.0",
+"werift": "^0.22.9",
 "xxhash-wasm-102": "npm:xxhash-wasm@^1.0.2"
 }
 }


@@ -13,6 +13,7 @@ import type { CheckPointInfo } from "./lib/src/replication/journal/JournalSyncTy
 import type { LiveSyncJournalReplicatorEnv } from "./lib/src/replication/journal/LiveSyncJournalReplicatorEnv";
 import type { LiveSyncReplicatorEnv } from "./lib/src/replication/LiveSyncAbstractReplicator";
 import { useTargetFilters } from "./lib/src/serviceFeatures/targetFilter";
+import { useRemoteConfigurationMigration } from "./lib/src/serviceFeatures/remoteConfig";
 import type { ServiceContext } from "./lib/src/services/base/ServiceBase";
 import type { InjectableServiceHub } from "./lib/src/services/InjectableServices";
 import { AbstractModule } from "./modules/AbstractModule";
@@ -272,6 +273,8 @@ export class LiveSyncBaseCore<
 useTargetFilters(this);
 // enable target filter feature.
 usePrepareDatabaseForUse(this);
+// Migration to multiple remote configurations
+useRemoteConfigurationMigration(this);
 }
 }


@@ -1,5 +1,6 @@
 .livesync
 test/*
 !test/*.sh
+test/test-init.local.sh
 node_modules
 .*.json

111
src/apps/cli/Dockerfile Normal file

@@ -0,0 +1,111 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync CLI — Docker image
#
# Build (from the repository root):
# docker build -f src/apps/cli/Dockerfile -t livesync-cli .
#
# Run:
# docker run --rm -v /path/to/your/vault:/data livesync-cli sync
# docker run --rm -v /path/to/your/vault:/data livesync-cli ls
# docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
# docker run --rm -v /path/to/your/vault:/data livesync-cli --help
#
# The first positional argument (database-path) is automatically set to /data.
# Mount your vault at /data, or override with: -e LIVESYNC_DB_PATH=/other/path
#
# P2P (WebRTC) networking — important notes
# -----------------------------------------
# The P2P replicator (p2p-host / p2p-sync / p2p-peers) uses WebRTC, which
# generates ICE candidates of three kinds:
#
# host — the container's bridge IP (172.17.x.x). Unreachable from outside
# the Docker bridge, so LAN peers cannot connect via this candidate.
# srflx — the host's public IP, obtained via STUN reflection. Works fine
# over the internet even with the default bridge network.
# relay — traffic relayed through a TURN server. Always reachable regardless
# of network mode.
#
# Recommended network modes per use-case:
#
# LAN P2P (Linux only)
# docker run --network host ...
# This exposes the real host IP as the 'host' candidate so LAN peers can
# connect directly. --network host is not available on Docker Desktop for
# macOS or Windows.
#
# LAN P2P (macOS / Windows Docker Desktop)
# Configure a TURN server in settings (P2P_turnServers / P2P_turnUsername /
# P2P_turnCredential). All data is then relayed through the TURN server,
# bypassing the bridge-network limitation.
#
# Internet P2P
# Default bridge network is sufficient; the srflx candidate carries the
# host's public IP and peers can connect normally.
#
# CouchDB sync only (no P2P)
# Default bridge network. No special configuration required.
# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to compile native modules and bundle the CLI.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder
# Build tools required by native Node.js addons (mainly leveldown)
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3 make g++ \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
# Install workspace dependencies first (layer-cache friendly)
COPY package.json ./
RUN npm install
# Copy the full source tree and build the CLI bundle
COPY . .
RUN cd src/apps/cli && npm run build
# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime-deps
# Install only the external (unbundled) packages that the CLI requires at
# runtime. Native addons are compiled here against the same base image that
# the final runtime stage uses.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS runtime-deps
# Build tools required to compile native addons
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3 make g++ \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /deps
# runtime-package.json lists only the packages that Vite leaves external
COPY src/apps/cli/runtime-package.json ./package.json
RUN npm install --omit=dev
# ─────────────────────────────────────────────────────────────────────────────
# Stage 3 — runtime
# Minimal image: pre-compiled native modules + CLI bundle only.
# No build tools are included, keeping the image small.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim
WORKDIR /app
# Copy pre-compiled external node_modules from runtime-deps stage
COPY --from=runtime-deps /deps/node_modules ./node_modules
# Copy the built CLI bundle from builder stage
COPY --from=builder /build/src/apps/cli/dist ./dist
# Install entrypoint wrapper
COPY src/apps/cli/docker-entrypoint.sh /usr/local/bin/livesync-cli
RUN chmod +x /usr/local/bin/livesync-cli
# Mount your vault / local database directory here
VOLUME ["/data"]
ENTRYPOINT ["livesync-cli"]


@@ -1,362 +1,505 @@
# Self-hosted LiveSync CLI # Self-hosted LiveSync CLI
Command-line version of Self-hosted LiveSync plugin for syncing vaults without Obsidian. Command-line version of Self-hosted LiveSync plugin for syncing vaults without Obsidian.
## Features ## Features
- ✅ Sync Obsidian vaults using CouchDB without running Obsidian - ✅ Sync Obsidian vaults using CouchDB without running Obsidian
- ✅ Compatible with Self-hosted LiveSync plugin settings - ✅ Compatible with Self-hosted LiveSync plugin settings
- ✅ Supports all core sync features (encryption, conflict resolution, etc.) - ✅ Supports all core sync features (encryption, conflict resolution, etc.)
- ✅ Lightweight and headless operation - ✅ Lightweight and headless operation
- ✅ Cross-platform (Windows, macOS, Linux) - ✅ Cross-platform (Windows, macOS, Linux)
## Architecture ## Architecture
This CLI version is built using the same core as the Obsidian plugin: This CLI version is built using the same core as the Obsidian plugin:
``` ```
CLI Main CLI Main
└─ LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands> └─ LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands>
├─ NodeServiceHub (All services without Obsidian dependencies) ├─ NodeServiceHub (All services without Obsidian dependencies)
└─ ServiceModules (wired by initialiseServiceModulesCLI) └─ ServiceModules (wired by initialiseServiceModulesCLI)
├─ FileAccessCLI (Node.js FileSystemAdapter) ├─ FileAccessCLI (Node.js FileSystemAdapter)
├─ StorageEventManagerCLI ├─ StorageEventManagerCLI
├─ ServiceFileAccessCLI ├─ ServiceFileAccessCLI
├─ ServiceDatabaseFileAccessCLI ├─ ServiceDatabaseFileAccessCLI
├─ ServiceFileHandler ├─ ServiceFileHandler
└─ ServiceRebuilder └─ ServiceRebuilder
``` ```
### Key Components ### Key Components
1. **Node.js FileSystem Adapter** (`adapters/`) 1. **Node.js FileSystem Adapter** (`adapters/`)
- Platform-agnostic file operations using Node.js `fs/promises` - Platform-agnostic file operations using Node.js `fs/promises`
- Implements same interface as Obsidian's file system - Implements same interface as Obsidian's file system
2. **Service Modules** (`serviceModules/`) 2. **Service Modules** (`serviceModules/`)
- Initialised by `initialiseServiceModulesCLI` - Initialised by `initialiseServiceModulesCLI`
- All core sync functionality preserved - All core sync functionality preserved
3. **Service Hub and Settings Services** (`services/`) 3. **Service Hub and Settings Services** (`services/`)
- `NodeServiceHub` provides the CLI service context - `NodeServiceHub` provides the CLI service context
- Node-specific settings and key-value services are provided without Obsidian dependencies - Node-specific settings and key-value services are provided without Obsidian dependencies
4. **Main Entry Point** (`main.ts`) 4. **Main Entry Point** (`main.ts`)
- Command-line interface - Command-line interface
- Settings management (JSON file) - Settings management (JSON file)
- Graceful shutdown handling - Graceful shutdown handling
## Usage

The CLI operates on a **database directory** which contains PouchDB data and settings.

> [!NOTE]
> `livesync-cli` is the alias for the CLI executable. Please replace it with the actual command for your installation (e.g. `npm run --silent cli --` or `docker run ...`).

```bash
livesync-cli [database-path] [command] [args...]
```

### Arguments

- `database-path`: Path to the directory where the `.livesync` folder and `settings.json` are (or will be) located.
  - Note: In previous versions, this was referred to as the "vault" path. It is now clearly distinguished from the actual vault (the directory containing your `.md` files).

### Commands

- `sync`: Run one replication cycle with the remote CouchDB.
- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
  - If `vault-path` is provided, the CLI will synchronise the database with files in the vault directory.
  - If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
  - Use this command to keep your local `.md` files in sync with the database.
- `ls [prefix]`: List files currently stored in the local database.
- `push <src> <dst>`: Push a local file `<src>` into the database at path `<dst>`.
- `pull <src> <dst>`: Pull a file `<src>` from the database into local file `<dst>`.
- `cat <src>`: Read a file from the database and write it to stdout.
- `put <dst>`: Read from stdin and write to the database path `<dst>`.
- `init-settings [file]`: Create a default settings file.

### Examples

```bash
# Basic sync with remote
livesync-cli ./my-db sync

# Mirroring to your actual Obsidian vault
livesync-cli ./my-db mirror /path/to/obsidian-vault

# Manual file operations
livesync-cli ./my-db push ./note.md folder/note.md
livesync-cli ./my-db pull folder/note.md ./note.md
```

## Installation

### Build from source

```bash
# Install dependencies (ensure you are in repository root directory, not src/apps/cli,
# due to shared dependencies with webapp and main library)
npm install
# Build the project (ensure you are in `src/apps/cli` directory)
npm run build
```

Run the CLI:

```bash
# Run with npm script (from repository root)
npm run --silent cli -- [database-path] [command] [args...]
# Run the built executable directly
node src/apps/cli/dist/index.cjs [database-path] [command] [args...]
```

### Docker

A Docker image is provided for headless / server deployments. Build from the repository root:

```bash
docker build -f src/apps/cli/Dockerfile -t livesync-cli .
```

Run:

```bash
# Sync with CouchDB
docker run --rm -v /path/to/your/db:/data livesync-cli sync

# Mirror to a specific vault directory
docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault

# List files in the local database
docker run --rm -v /path/to/your/db:/data livesync-cli ls
```

The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
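For example, to keep the database somewhere other than `/data`, mount it at the path of your choice and point the entrypoint at it. This is a minimal sketch; the host path and mount point are placeholders:

```bash
# Mount the database at /db and tell the entrypoint where to find it
docker run --rm \
  -e LIVESYNC_DB_PATH=/db \
  -v /path/to/your/db:/db \
  livesync-cli sync
```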
#### P2P (WebRTC) and Docker networking

The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates three kinds of ICE candidates. The default Docker bridge network affects which candidates are usable:

| Candidate type | Description                        | Bridge network             |
| -------------- | ---------------------------------- | -------------------------- |
| `host`         | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
| `srflx`        | Host public IP via STUN reflection | Works over the internet    |
| `relay`        | Traffic relayed via TURN server    | Always reachable           |

**LAN P2P on Linux** — use `--network host` so that the real host IP is advertised as the `host` candidate:

```bash
docker run --rm --network host -v /path/to/your/vault:/data livesync-cli p2p-host
```

Note: if you want to use the `livesync-cli` alias for P2P commands, add `--network host` to the alias as well.

> `--network host` is not available on Docker Desktop for macOS or Windows.

**LAN P2P on macOS / Windows Docker Desktop** — configure a TURN server in the settings file (`P2P_turnServers`, `P2P_turnUsername`, `P2P_turnCredential`); a sketch follows below. All P2P traffic will then be relayed through the TURN server, bypassing the bridge-network limitation.

**Internet P2P** — the default bridge network is sufficient. The `srflx` candidate carries the host's public IP and peers can connect normally.

**CouchDB sync only (no P2P)** — no special network configuration is required.
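As a rough illustration, the TURN-related keys can be merged into an existing settings file with `jq`. This is only a sketch: the TURN URL, the credentials, and the exact value format expected for `P2P_turnServers` are assumptions here, so verify them against your plugin settings before relying on it.

```bash
# Hypothetical values; check the expected shape of P2P_turnServers for your version.
jq '. + {
  "P2P_turnServers": "turn:turn.example.com:3478",
  "P2P_turnUsername": "livesync",
  "P2P_turnCredential": "change-me"
}' /path/to/your/db/.livesync/settings.json > /tmp/settings.json \
  && mv /tmp/settings.json /path/to/your/db/.livesync/settings.json
```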
### Adding `livesync-cli` alias

To use the `livesync-cli` command globally, you can add an alias to your shell configuration file (e.g., `.zshrc` or `.bashrc`).

If you are using `npm run`, add the following line:

```bash
alias livesync-cli='npm run --silent --prefix /path/to/repository/src/apps/cli cli --'
# or
alias livesync-cli="npm run --silent --prefix $PWD cli --"
```

Alternatively, if you want to use the built executable directly:

```bash
alias livesync-cli='node /path/to/repository/src/apps/cli/dist/index.cjs'
# or
alias livesync-cli="node $PWD/dist/index.cjs"
```

If you prefer using Docker:

```bash
alias livesync-cli='docker run --rm -v /path/to/your/db:/data livesync-cli'
```

After adding the alias, restart your shell or run `source ~/.zshrc` (or `.bashrc`).
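A quick way to confirm the alias resolves correctly is to print the built-in help:

```bash
livesync-cli --help
```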
## Detailed Usage

### Basic Usage

The CLI is designed to be used in a headless environment, so all operations are performed against a local database directory and a settings file. Here are some example commands:

```bash
# Sync local database with CouchDB (no files will be changed).
livesync-cli /path/to/your-local-database --settings /path/to/settings.json sync

# Push files to local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md

# Pull files from local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md

# Verbose logging
livesync-cli /path/to/your-local-database --settings /path/to/settings.json --verbose

# Apply setup URI to settings file (settings only; does not run synchronisation)
livesync-cli /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."

# Put text from stdin into local database
echo "Hello from stdin" | livesync-cli /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md

# Output a file from local database to stdout
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md

# Output a specific revision of a file from local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef

# Pull a specific revision of a file from local database to local storage
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef

# List files in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/

# Show metadata for a file in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md

# Mark a file as deleted in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md

# Resolve conflict by keeping a specific revision
livesync-cli /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```
### Configuration

The CLI uses the same settings format as the Obsidian plugin. Create a `.livesync/settings.json` file in your database directory:

```json
{
    "couchDB_URI": "http://localhost:5984",
    "couchDB_USER": "admin",
    "couchDB_PASSWORD": "password",
    "couchDB_DBNAME": "obsidian-livesync",
    "liveSync": true,
    "syncOnSave": true,
    "syncOnStart": true,
    "encrypt": true,
    "passphrase": "your-encryption-passphrase",
    "usePluginSync": false,
    "isConfigured": true
}
```

**Minimum required settings:**

- `couchDB_URI`: CouchDB server URL
- `couchDB_USER`: CouchDB username
- `couchDB_PASSWORD`: CouchDB password
- `couchDB_DBNAME`: Database name
- `isConfigured`: Set to `true` after configuration
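Rather than writing the file by hand, you can generate a template with `init-settings` and then fill in the required keys. A small sketch; the paths and the `$EDITOR` step are placeholders:

```bash
# Create a template settings file from the defaults, then edit it
livesync-cli init-settings /path/to/your-local-database/.livesync/settings.json
$EDITOR /path/to/your-local-database/.livesync/settings.json
```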
### Command-line Reference

```
Usage:
  livesync-cli <database-path> [options] <command> [command-args]
  livesync-cli init-settings [path]

Arguments:
  database-path                    Path to the local database directory (required except for init-settings)

Options:
  --settings, -s <path>            Path to settings file (default: .livesync/settings.json in local database directory)
  --force, -f                      Overwrite existing file on init-settings
  --verbose, -v                    Enable verbose logging
  --debug, -d                      Enable debug logging (includes verbose)
  --help, -h                       Show help message

Commands:
  init-settings [path]             Create settings JSON from DEFAULT_SETTINGS
  sync                             Run one replication cycle and exit
  p2p-peers <timeout>              Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
  p2p-sync <peer> <timeout>        Synchronise with specified peer-id or peer-name
  p2p-host                         Start P2P host mode and wait until interrupted (Ctrl+C)
  push <src> <dst>                 Push local file <src> into local database path <dst>
  pull <src> <dst>                 Pull file <src> from local database into local file <dst>
  pull-rev <src> <dst> <rev>       Pull specific revision <rev> into local file <dst>
  setup <setupURI>                 Apply setup URI to settings file
  put <dst>                        Read text from standard input and write to local database path <dst>
  cat <src>                        Write latest file content from local database to standard output
  cat-rev <src> <rev>              Write specific revision <rev> content from local database to standard output
  ls [prefix]                      List files as path<TAB>size<TAB>mtime<TAB>revision[*]
  info <path>                      Show file metadata including current and past revisions, conflicts, and chunk list
  rm <path>                        Mark file as deleted in local database
  resolve <path> <rev>             Resolve conflict by keeping the specified revision
  mirror [vaultPath]               Mirror database contents to the local file system (vaultPath defaults to database-path)
```

Run via npm script:

```bash
npm run --silent cli -- [database-path] [options] [command] [command-args]
```

#### Detailed Command Descriptions

##### ls

`ls` lists files in the local database with optional prefix filtering. Output format is:

```
vault/path/file.md<TAB>size<TAB>mtime<TAB>revision[*]
```

Note: a trailing `*` indicates that the file has conflicts.

##### p2p-peers

`p2p-peers <timeout>` waits for the specified number of seconds, then prints each discovered peer on a separate line:

```text
[peer]<TAB><peer-id><TAB><peer-name>
```

Use this command to select a target for `p2p-sync`.

##### p2p-sync

`p2p-sync <peer> <timeout>` discovers peers up to the specified timeout and synchronises with the selected peer.

- `<peer>` accepts either `peer-id` or `peer-name` from `p2p-peers` output.
- On success, the command prints a completion message to standard error and exits with status code `0`.
- On failure, the command prints an error message and exits non-zero.

##### p2p-host

`p2p-host` starts the local P2P host and keeps running until interrupted.

- Other peers can discover and synchronise with this host while it is running.
- Stop the host with `Ctrl+C`.
- In CLI mode, behaviour is non-interactive and acceptance follows settings.
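Putting the three P2P commands together, a typical manual session might look like the following; the peer name `my-laptop` and the timeouts are placeholders:

```bash
# On the machine that should stay online:
livesync-cli ./my-db p2p-host

# On another machine: discover peers for 10 seconds, then sync with one of them
livesync-cli ./my-db p2p-peers 10
livesync-cli ./my-db p2p-sync my-laptop 30
```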
##### info

`info` output fields:

- `id`: Document ID
- `revision`: Current revision
- `conflicts`: Conflicted revisions, or `N/A`
- `filename`: Basename of path
- `path`: Vault-relative path
- `size`: Size in bytes
- `revisions`: Available non-current revisions
- `chunks`: Number of chunk IDs
- `children`: Chunk ID list
##### mirror
`mirror` synchronises your storage (the actual vault) with the local database. It is essentially the same process that Obsidian runs on startup.
It performs the following actions:
1. **Precondition checks** — Aborts early if any of the following conditions are not met:
- Settings must be configured (`isConfigured: true`).
- File watching must not be suspended (`suspendFileWatching: false`).
- Remediation mode must be inactive (`maxMTimeForReflectEvents: 0`).
2. **State restoration** — On subsequent runs (after the first successful scan), restores the previous storage state before proceeding.
3. **Expired deletion cleanup** — If `automaticallyDeleteMetadataOfDeletedFiles` is set to a positive number of days, any document that is marked deleted and whose `mtime` is older than the retention period is permanently removed from the local database.
4. **File collection** — Enumerates files from two sources:
- **Storage**: all files under the vault path that pass `isTargetFile`.
- **Local database**: all normal documents (fetched with conflict information) whose paths are valid and pass `isTargetFile`.
- Both collections build case-insensitive ↔ case-sensitive path maps, controlled by `handleFilenameCaseSensitive`.
5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):
| Group | Condition | Action |
| ----------------------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |
6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.
Note: `mirror` does not propagate file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run; to delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour (handling deletions automatically in `mirror` would expose us to a ton of edge cases).
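For a headless setup that keeps a server-side copy of the vault up to date, `sync` and `mirror` can simply be chained on a schedule. A minimal sketch, assuming placeholder paths and a cron-capable host:

```bash
#!/bin/sh
# Replicate with the remote CouchDB, then reflect the result into the vault files.
set -e
livesync-cli /data/db --settings /data/db/.livesync/settings.json sync
livesync-cli /data/db --settings /data/db/.livesync/settings.json mirror /data/vault
```

Run it from cron (e.g. `*/15 * * * * /usr/local/bin/livesync-mirror.sh`) or a systemd timer; remember that deletions still need an explicit `rm`, as noted above.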
### Planned options and commands

- `--immediate`: Perform a sync immediately after the command (e.g. `push`, `pull`, `put`, `rm`).
- `serve`: Start the CLI in server mode, exposing REST APIs for remote and batch operations.
- `cause-conflicted <vaultPath>`: Mark a file as conflicted without changing its content, to trigger conflict resolution in Obsidian.
## Use Cases
### 1. Bootstrap a new headless vault
Create default settings, apply a setup URI, then run one sync cycle.
```bash
livesync-cli init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | livesync-cli /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
livesync-cli /data/vault --settings /data/livesync-settings.json sync
```
### 2. Scripted import and export
Push local files into the database from automation, and pull them back for export or backup.
```bash
livesync-cli /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```
### 3. Revision inspection and restore
List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).
```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```
### 4. Conflict and cleanup workflow
Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.
```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```
### 5. CI smoke test for content round-trip
Validate that `put` and `cat` behave as expected in a pipeline.
```bash
echo "hello-ci" | livesync-cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
livesync-cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
```
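To actually fail the pipeline when the round-trip breaks, compare the output against the input; a sketch using the same placeholder paths:

```bash
# Fail the CI job if the stored content does not match what was put
test "$(livesync-cli /data/vault --settings /data/livesync-settings.json cat ci/test.md)" = "hello-ci" \
  || { echo "round-trip mismatch" >&2; exit 1; }
```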
## Development
### Project Structure
```
src/apps/cli/
├── commands/ # Command dispatcher and command utilities
│ ├── runCommand.ts
│ ├── runCommand.unit.spec.ts
│ ├── types.ts
│ ├── utils.ts
│ └── utils.unit.spec.ts
├── adapters/ # Node.js FileSystem Adapter
│ ├── NodeConversionAdapter.ts
│ ├── NodeFileSystemAdapter.ts
│ ├── NodePathAdapter.ts
│ ├── NodeStorageAdapter.ts
│ ├── NodeStorageAdapter.unit.spec.ts
│ ├── NodeTypeGuardAdapter.ts
│ ├── NodeTypes.ts
│ └── NodeVaultAdapter.ts
├── lib/
│ └── pouchdb-node.ts
├── managers/ # CLI-specific managers
│ ├── CLIStorageEventManagerAdapter.ts
│ └── StorageEventManagerCLI.ts
├── serviceModules/ # Service modules (ported from main.ts)
│ ├── CLIServiceModules.ts
│ ├── DatabaseFileAccess.ts
│ ├── FileAccessCLI.ts
│ └── ServiceFileAccessImpl.ts
├── services/
│ ├── NodeKeyValueDBService.ts
│ ├── NodeServiceHub.ts
│ └── NodeSettingService.ts
├── test/
│ ├── test-e2e-two-vaults-common.sh
│ ├── test-e2e-two-vaults-matrix.sh
│ ├── test-e2e-two-vaults-with-docker-linux.sh
│ ├── test-push-pull-linux.sh
│ ├── test-setup-put-cat-linux.sh
│ └── test-sync-two-local-databases-linux.sh
├── .gitignore
├── entrypoint.ts # CLI executable entry point (shebang)
├── main.ts # CLI entry point
├── main.unit.spec.ts
├── package.json
├── README.md # This file
├── tsconfig.json
├── util/ # Test and local utility scripts
└── vite.config.ts
```

View File

@@ -2,8 +2,7 @@ import type { LiveSyncBaseCore } from "../../../LiveSyncBaseCore";
import { P2P_DEFAULT_SETTINGS } from "@lib/common/types";
import type { ServiceContext } from "@lib/services/base/ServiceBase";
import { LiveSyncTrysteroReplicator } from "@lib/replication/trystero/LiveSyncTrysteroReplicator";
-import { addP2PEventHandlers } from "@lib/replication/trystero/P2PReplicatorCore";
+import { addP2PEventHandlers } from "@lib/replication/trystero/addP2PEventHandlers";
type CLIP2PPeer = {
peerId: string;
name: string;

View File

@@ -5,13 +5,13 @@ import { configURIBase } from "@lib/common/models/shared.const";
import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
import type { CLICommandContext, CLIOptions } from "./types";
-import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils";
+import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils";
import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p";
import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";
export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
-const { vaultPath, core, settingsPath } = context;
+const { databasePath, core, settingsPath } = context;
await core.services.control.activated;
if (options.command === "daemon") {
@@ -21,6 +21,18 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.command === "sync") { if (options.command === "sync") {
console.log("[Command] sync"); console.log("[Command] sync");
const result = await core.services.replication.replicate(true); const result = await core.services.replication.replicate(true);
if (!result) {
// TODO: Standardise the logic for identifying the cause of replication
// failure so that every reason (locked DB, version mismatch, network
// error, etc.) is surfaced with a CLI-specific actionable message.
const replicator = core.services.replicator.getActiveReplicator();
if (replicator?.remoteLockedAndDeviceNotAccepted) {
console.error(
`[Error] The remote database is locked and this device is not yet accepted.\n` +
`[Error] Please unlock the database from the Obsidian plugin and retry.`
);
}
}
return !!result;
}
@@ -65,16 +77,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
throw new Error("push requires two arguments: <src> <dst>"); throw new Error("push requires two arguments: <src> <dst>");
} }
const sourcePath = path.resolve(options.commandArgs[0]); const sourcePath = path.resolve(options.commandArgs[0]);
const destinationVaultPath = toVaultRelativePath(options.commandArgs[1], vaultPath); const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath);
const sourceData = await fs.readFile(sourcePath); const sourceData = await fs.readFile(sourcePath);
const sourceStat = await fs.stat(sourcePath); const sourceStat = await fs.stat(sourcePath);
console.log(`[Command] push ${sourcePath} -> ${destinationVaultPath}`); console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);
await core.serviceModules.storageAccess.writeFileAuto(destinationVaultPath, toArrayBuffer(sourceData), { await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
mtime: sourceStat.mtimeMs, mtime: sourceStat.mtimeMs,
ctime: sourceStat.ctimeMs, ctime: sourceStat.ctimeMs,
}); });
const destinationPathWithPrefix = destinationVaultPath as FilePathWithPrefix; const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true); const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);
return stored; return stored;
} }
@@ -83,16 +95,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("pull requires two arguments: <src> <dst>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const destinationPath = path.resolve(options.commandArgs[1]);
-console.log(`[Command] pull ${sourceVaultPath} -> ${destinationPath}`);
+console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`);
-const sourcePathWithPrefix = sourceVaultPath as FilePathWithPrefix;
+const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix;
const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true);
if (!restored) {
return false;
}
-const data = await core.serviceModules.storageAccess.readFileAuto(sourceVaultPath);
+const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath);
await fs.mkdir(path.dirname(destinationPath), { recursive: true });
if (typeof data === "string") {
await fs.writeFile(destinationPath, data, "utf-8");
@@ -106,16 +118,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 3) {
throw new Error("pull-rev requires three arguments: <src> <dst> <rev>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const destinationPath = path.resolve(options.commandArgs[1]);
const rev = options.commandArgs[2].trim();
if (!rev) {
throw new Error("pull-rev requires a non-empty revision");
}
-console.log(`[Command] pull-rev ${sourceVaultPath}@${rev} -> ${destinationPath}`);
+console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
-sourceVaultPath as FilePathWithPrefix,
+sourceDatabasePath as FilePathWithPrefix,
rev,
true
);
@@ -154,7 +166,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
} as ObsidianLiveSyncSettings;
console.log(`[Command] setup -> ${settingsPath}`);
-await core.services.setting.applyPartial(nextSettings, true);
+await core.services.setting.applyExternalSettings(nextSettings, true);
await core.services.control.applySettings();
return true;
}
@@ -163,11 +175,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("put requires one argument: <dst>");
}
-const destinationVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const content = await readStdinAsUtf8();
-console.log(`[Command] put stdin -> ${destinationVaultPath}`);
+console.log(`[Command] put stdin -> ${destinationDatabasePath}`);
return await core.serviceModules.databaseFileAccess.storeContent(
-destinationVaultPath as FilePathWithPrefix,
+destinationDatabasePath as FilePathWithPrefix,
content
);
}
@@ -176,10 +188,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("cat requires one argument: <src>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
-console.error(`[Command] cat ${sourceVaultPath}`);
+console.error(`[Command] cat ${sourceDatabasePath}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
-sourceVaultPath as FilePathWithPrefix,
+sourceDatabasePath as FilePathWithPrefix,
undefined,
true
);
@@ -200,14 +212,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("cat-rev requires two arguments: <src> <rev>");
}
-const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
const rev = options.commandArgs[1].trim();
if (!rev) {
throw new Error("cat-rev requires a non-empty revision");
}
-console.error(`[Command] cat-rev ${sourceVaultPath} @ ${rev}`);
+console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`);
const source = await core.serviceModules.databaseFileAccess.fetch(
-sourceVaultPath as FilePathWithPrefix,
+sourceDatabasePath as FilePathWithPrefix,
rev,
true
);
@@ -227,7 +239,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.command === "ls") { if (options.command === "ls") {
const prefix = const prefix =
options.commandArgs.length > 0 && options.commandArgs[0].trim() !== "" options.commandArgs.length > 0 && options.commandArgs[0].trim() !== ""
? toVaultRelativePath(options.commandArgs[0], vaultPath) ? toDatabaseRelativePath(options.commandArgs[0], databasePath)
: ""; : "";
const rows: { path: string; line: string }[] = []; const rows: { path: string; line: string }[] = [];
@@ -249,6 +261,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
rows.sort((a, b) => a.path.localeCompare(b.path));
if (rows.length > 0) {
process.stdout.write(rows.map((e) => e.line).join("\n") + "\n");
+} else {
+process.stderr.write("[Info] No documents found in the local database.\n");
}
return true;
}
@@ -257,7 +271,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("info requires one argument: <path>");
}
-const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) {
if (doc._deleted || doc.deleted) continue;
@@ -301,7 +315,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 1) {
throw new Error("rm requires one argument: <path>");
}
-const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
console.error(`[Command] rm ${targetPath}`);
return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix);
}
@@ -310,7 +324,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
if (options.commandArgs.length < 2) {
throw new Error("resolve requires two arguments: <path> <revision-to-keep>");
}
-const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath) as FilePathWithPrefix;
+const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix;
const revisionToKeep = options.commandArgs[1].trim();
if (revisionToKeep === "") {
throw new Error("resolve requires a non-empty revision-to-keep");

View File

@@ -14,6 +14,7 @@ function createCoreMock() {
applySettings: vi.fn(async () => {}),
},
setting: {
+applyExternalSettings: vi.fn(async () => {}),
applyPartial: vi.fn(async () => {}),
},
},
@@ -57,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise<string> {
describe("runCommand abnormal cases", () => { describe("runCommand abnormal cases", () => {
const context = { const context = {
vaultPath: "/tmp/vault", databasePath: "/tmp/vault",
settingsPath: "/tmp/vault/.livesync/settings.json", settingsPath: "/tmp/vault/.livesync/settings.json",
} as any; } as any;
@@ -176,9 +177,9 @@ describe("runCommand abnormal cases", () => {
});
expect(result).toBe(true);
-expect(core.services.setting.applyPartial).toHaveBeenCalledTimes(1);
+expect(core.services.setting.applyExternalSettings).toHaveBeenCalledTimes(1);
expect(core.services.control.applySettings).toHaveBeenCalledTimes(1);
-const [appliedSettings, saveImmediately] = core.services.setting.applyPartial.mock.calls[0];
+const [appliedSettings, saveImmediately] = core.services.setting.applyExternalSettings.mock.calls[0];
expect(saveImmediately).toBe(true);
expect(appliedSettings.couchDB_URI).toBe("http://127.0.0.1:5984");
expect(appliedSettings.couchDB_DBNAME).toBe("livesync-test-db");
@@ -198,7 +199,7 @@ describe("runCommand abnormal cases", () => {
})
).rejects.toThrow();
-expect(core.services.setting.applyPartial).not.toHaveBeenCalled();
+expect(core.services.setting.applyExternalSettings).not.toHaveBeenCalled();
expect(core.services.control.applySettings).not.toHaveBeenCalled();
});
});

View File

@@ -32,7 +32,7 @@ export interface CLIOptions {
}
export interface CLICommandContext {
-vaultPath: string;
+databasePath: string;
core: LiveSyncBaseCore<ServiceContext, any>;
settingsPath: string;
}

View File

@@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer {
return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
}
-export function toVaultRelativePath(inputPath: string, vaultPath: string): string {
+export function toDatabaseRelativePath(inputPath: string, databasePath: string): string {
const stripped = inputPath.replace(/^[/\\]+/, "");
if (!path.isAbsolute(inputPath)) {
const normalized = stripped.replace(/\\/g, "/");
-const resolved = path.resolve(vaultPath, normalized);
+const resolved = path.resolve(databasePath, normalized);
-const rel = path.relative(vaultPath, resolved);
+const rel = path.relative(databasePath, resolved);
if (rel.startsWith("..") || path.isAbsolute(rel)) {
throw new Error(`Path ${inputPath} is outside of the local database directory`);
}
return rel.replace(/\\/g, "/");
}
const resolved = path.resolve(inputPath);
-const rel = path.relative(vaultPath, resolved);
+const rel = path.relative(databasePath, resolved);
if (rel.startsWith("..") || path.isAbsolute(rel)) {
throw new Error(`Path ${inputPath} is outside of the local database directory`);
}
@@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin
}
export async function readStdinAsUtf8(): Promise<string> {
-const chunks: Buffer[] = [];
+const chunks = [];
for await (const chunk of process.stdin) {
if (typeof chunk === "string") {
chunks.push(Buffer.from(chunk, "utf-8"));
} else {
-chunks.push(chunk);
+chunks.push(chunk as Buffer);
}
}
-return Buffer.concat(chunks).toString("utf-8");
+return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
}
export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise<string> {

View File

@@ -1,29 +1,33 @@
import * as path from "path"; import * as path from "path";
import { describe, expect, it } from "vitest"; import { describe, expect, it } from "vitest";
import { toVaultRelativePath } from "./utils"; import { toDatabaseRelativePath } from "./utils";
describe("toVaultRelativePath", () => { describe("toDatabaseRelativePath", () => {
const vaultPath = path.resolve("/tmp/livesync-vault"); const databasePath = path.resolve("/tmp/livesync-vault");
it("rejects absolute paths outside vault", () => { it("rejects absolute paths outside vault", () => {
expect(() => toVaultRelativePath("/etc/passwd", vaultPath)).toThrow("outside of the local database directory"); expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow(
"outside of the local database directory"
);
}); });
it("normalizes leading slash for absolute path inside vault", () => { it("normalizes leading slash for absolute path inside vault", () => {
const absoluteInsideVault = path.join(vaultPath, "notes", "foo.md"); const absoluteInsideVault = path.join(databasePath, "notes", "foo.md");
expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("notes/foo.md"); expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md");
}); });
it("normalizes Windows-style separators", () => { it("normalizes Windows-style separators", () => {
expect(toVaultRelativePath("notes\\daily\\2026-03-12.md", vaultPath)).toBe("notes/daily/2026-03-12.md"); expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md");
}); });
it("returns vault-relative path for another absolute path inside vault", () => { it("returns vault-relative path for another absolute path inside vault", () => {
const absoluteInsideVault = path.join(vaultPath, "docs", "inside.md"); const absoluteInsideVault = path.join(databasePath, "docs", "inside.md");
expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("docs/inside.md"); expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md");
}); });
it("rejects relative path traversal that escapes vault", () => { it("rejects relative path traversal that escapes vault", () => {
expect(() => toVaultRelativePath("../escape.md", vaultPath)).toThrow("outside of the local database directory"); expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow(
"outside of the local database directory"
);
}); });
}); });

View File

@@ -0,0 +1,25 @@
#!/bin/sh
# Entrypoint wrapper for the Self-hosted LiveSync CLI Docker image.
#
# By default, /data is used as the database-path (the vault mount point).
# Override this via the LIVESYNC_DB_PATH environment variable.
#
# Examples:
# docker run -v /path/to/vault:/data livesync-cli sync
# docker run -v /path/to/vault:/data livesync-cli --settings /data/.livesync/settings.json sync
# docker run -v /path/to/vault:/data livesync-cli init-settings
# docker run -e LIVESYNC_DB_PATH=/vault -v /path/to/vault:/vault livesync-cli sync
set -e
case "${1:-}" in
init-settings | --help | -h | "")
# Commands that do not require a leading database-path argument
exec node /app/dist/index.cjs "$@"
;;
*)
# All other commands: prepend the database-path so users only need
# to supply the command and its options.
exec node /app/dist/index.cjs "${LIVESYNC_DB_PATH:-/data}" "$@"
;;
esac

View File

@@ -1,10 +1,11 @@
#!/usr/bin/env node
-import polyfill from "node-datachannel/polyfill";
+import * as polyfill from "werift";
import { main } from "./main";
-for (const prop in polyfill) {
-// @ts-ignore Applying polyfill to globalThis
-globalThis[prop] = (polyfill as any)[prop];
+const rtcPolyfillCtor = (polyfill as any).RTCPeerConnection;
+if (typeof (globalThis as any).RTCPeerConnection === "undefined" && typeof rtcPolyfillCtor === "function") {
+// Fill only the standard WebRTC global in Node CLI runtime.
+(globalThis as any).RTCPeerConnection = rtcPolyfillCtor;
}
main().catch((error) => {

View File

@@ -3,25 +3,10 @@
* Command-line version of Self-hosted LiveSync plugin for syncing vaults without Obsidian
*/
-if (!("localStorage" in globalThis)) {
-const store = new Map<string, string>();
-(globalThis as any).localStorage = {
-getItem: (key: string) => (store.has(key) ? store.get(key)! : null),
-setItem: (key: string, value: string) => {
-store.set(key, value);
-},
-removeItem: (key: string) => {
-store.delete(key);
-},
-clear: () => {
-store.clear();
-},
-};
-}
import * as fs from "fs/promises";
import * as path from "path";
import { NodeServiceContext, NodeServiceHub } from "./services/NodeServiceHub";
+import { configureNodeLocalStorage, ensureGlobalNodeLocalStorage } from "./services/NodeLocalStorage";
import { LiveSyncBaseCore } from "../../LiveSyncBaseCore";
import { ModuleReplicatorP2P } from "../../modules/core/ModuleReplicatorP2P";
import { initialiseServiceModulesCLI } from "./serviceModules/CLIServiceModules";
@@ -43,6 +28,7 @@ import { getPathFromUXFileInfo } from "@lib/common/typeUtils";
import { stripAllPrefixes } from "@lib/string_and_binary/path";
const SETTINGS_FILE = ".livesync/settings.json";
+ensureGlobalNodeLocalStorage();
defaultLoggerEnv.minLogLevel = LOG_LEVEL_DEBUG;
function printHelp(): void {
@@ -50,14 +36,15 @@ function printHelp(): void {
Self-hosted LiveSync CLI
Usage:
-livesync-cli [database-path] [options] [command] [command-args]
+livesync-cli <database-path> [options] <command> [command-args]
+livesync-cli init-settings [path]
Arguments:
-database-path Path to the local database directory (required)
+database-path Path to the local database directory
Commands:
sync Run one replication cycle and exit
-p2p-peers <timeout> Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
+p2p-peers <timeout> Show discovered peers as [peer]\t<peer-id>\t<peer-name>
p2p-sync <peer> <timeout>
Sync with the specified peer-id or peer-name
p2p-host Start P2P host mode and wait until interrupted
@@ -68,28 +55,29 @@ Commands:
put <dst> Read UTF-8 content from stdin and write to local database path <dst> put <dst> Read UTF-8 content from stdin and write to local database path <dst>
cat <src> Read file <src> from local database and write to stdout cat <src> Read file <src> from local database and write to stdout
cat-rev <src> <rev> Read file <src> at specific revision <rev> and write to stdout cat-rev <src> <rev> Read file <src> at specific revision <rev> and write to stdout
ls [prefix] List DB files as path<TAB>size<TAB>mtime<TAB>revision[*] ls [prefix] List DB files as path\tsize\tmtime\trevision[*]
info <path> Show detailed metadata for a file (ID, revision, conflicts, chunks) info <path> Show detailed metadata for a file (ID, revision, conflicts, chunks)
rm <path> Mark a file as deleted in local database rm <path> Mark a file as deleted in local database
resolve <path> <rev> Resolve conflicts by keeping <rev> and deleting others resolve <path> <rev> Resolve conflicts by keeping <rev> and deleting others
mirror [vault-path] Mirror database contents to the local file system (vault-path defaults to database-path)
Examples: Examples:
livesync-cli ./my-database sync livesync-cli ./my-database sync
livesync-cli ./my-database p2p-peers 5 livesync-cli ./my-database p2p-peers 5
livesync-cli ./my-database p2p-sync my-peer-name 15 livesync-cli ./my-database p2p-sync my-peer-name 15
livesync-cli ./my-database p2p-host livesync-cli ./my-database p2p-host
livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
livesync-cli ./my-database pull folder/note.md ./exports/note.md livesync-cli ./my-database pull folder/note.md ./exports/note.md
livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..." livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
echo "Hello" | livesync-cli ./my-database put notes/hello.md echo "Hello" | livesync-cli ./my-database put notes/hello.md
livesync-cli ./my-database cat notes/hello.md livesync-cli ./my-database cat notes/hello.md
livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef
livesync-cli ./my-database ls notes/ livesync-cli ./my-database ls notes/
livesync-cli ./my-database info notes/hello.md livesync-cli ./my-database info notes/hello.md
livesync-cli ./my-database rm notes/hello.md livesync-cli ./my-database rm notes/hello.md
livesync-cli ./my-database resolve notes/hello.md 3-abcdef livesync-cli ./my-database resolve notes/hello.md 3-abcdef
livesync-cli init-settings ./data.json livesync-cli init-settings ./data.json
livesync-cli ./my-database --verbose livesync-cli ./my-database --verbose
`); `);
} }
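With the new `mirror [vault-path]` form described above, the database directory and the mirrored vault may live in different places; a usage sketch (paths here are illustrative only):

  livesync-cli ./my-database mirror ./my-vault
  livesync-cli ./my-database mirror              # compatibility form: vault-path defaults to database-path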
@@ -126,6 +114,7 @@ export function parseArgs(): CLIOptions {
case "-d": case "-d":
// debugging automatically enables verbose logging, as it is intended for debugging issues. // debugging automatically enables verbose logging, as it is intended for debugging issues.
debug = true; debug = true;
// falls through
case "--verbose": case "--verbose":
case "-v": case "-v":
verbose = true; verbose = true;
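Because `-d` now falls through into the verbose branch, enabling debug also enables verbose logging; an illustrative invocation (path hypothetical):

  livesync-cli ./my-database -d sync    # implies --verbose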
@@ -234,33 +223,34 @@ export async function main() {
return; return;
} }
// Resolve vault path // Resolve database path
const vaultPath = path.resolve(options.databasePath!); const databasePath = path.resolve(options.databasePath!);
// Check if vault directory exists // Check if database directory exists
try { try {
const stat = await fs.stat(vaultPath); const stat = await fs.stat(databasePath);
if (!stat.isDirectory()) { if (!stat.isDirectory()) {
console.error(`Error: ${vaultPath} is not a directory`); console.error(`Error: ${databasePath} is not a directory`);
process.exit(1); process.exit(1);
} }
} catch (error) { } catch (error) {
console.error(`Error: Vault directory ${vaultPath} does not exist`); console.error(`Error: Database directory ${databasePath} does not exist`);
process.exit(1); process.exit(1);
} }
// Resolve settings path // Resolve settings path
const settingsPath = options.settingsPath const settingsPath = options.settingsPath
? path.resolve(options.settingsPath) ? path.resolve(options.settingsPath)
: path.join(vaultPath, SETTINGS_FILE); : path.join(databasePath, SETTINGS_FILE);
configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json"));
infoLog(`Self-hosted LiveSync CLI`); infoLog(`Self-hosted LiveSync CLI`);
infoLog(`Vault: ${vaultPath}`); infoLog(`Database Path: ${databasePath}`);
infoLog(`Settings: ${settingsPath}`); infoLog(`Settings: ${settingsPath}`);
infoLog(""); infoLog("");
// Create service context and hub // Create service context and hub
const context = new NodeServiceContext(vaultPath); const context = new NodeServiceContext(databasePath);
const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context); const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => { serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
let levelStr = ""; let levelStr = "";
switch (level) { switch (level) {
@@ -334,7 +324,11 @@ export async function main() {
const core = new LiveSyncBaseCore( const core = new LiveSyncBaseCore(
serviceHubInstance, serviceHubInstance,
(core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => { (core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
return initialiseServiceModulesCLI(vaultPath, core, serviceHub); const mirrorVaultPath =
options.command === "mirror" && options.commandArgs[0]
? path.resolve(options.commandArgs[0])
: databasePath;
return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub);
}, },
(core) => [ (core) => [
// No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts // No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts
@@ -344,8 +338,8 @@ export async function main() {
(core) => { (core) => {
// Add a target filter to prevent internal files from being handled // Add a target filter to prevent internal files from being handled
core.services.vault.isTargetFile.addHandler(async (target) => { core.services.vault.isTargetFile.addHandler(async (target) => {
const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target)); const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
const parts = vaultPath.split(path.sep); const parts = targetPath.split(path.sep);
// if some part of the path starts with dot, treat it as internal file and ignore. // if some part of the path starts with dot, treat it as internal file and ignore.
if (parts.some((part) => part.startsWith("."))) { if (parts.some((part) => part.startsWith("."))) {
return await Promise.resolve(false); return await Promise.resolve(false);
@@ -406,7 +400,7 @@ export async function main() {
infoLog(""); infoLog("");
} }
const result = await runCommand(options, { vaultPath, core, settingsPath }); const result = await runCommand(options, { databasePath, core, settingsPath });
if (!result) { if (!result) {
console.error(`[Error] Command '${options.command}' failed`); console.error(`[Error] Command '${options.command}' failed`);
process.exitCode = 1; process.exitCode = 1;

View File

@@ -17,7 +17,7 @@ describe("CLI parseArgs", () => {
}); });
it("exits 1 when --settings has no value", () => { it("exits 1 when --settings has no value", () => {
process.argv = ["node", "livesync-cli", "./vault", "--settings"]; process.argv = ["node", "livesync-cli", "./databasePath", "--settings"];
const exitMock = mockProcessExit(); const exitMock = mockProcessExit();
const stderr = vi.spyOn(console, "error").mockImplementation(() => {}); const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -37,7 +37,7 @@ describe("CLI parseArgs", () => {
}); });
it("exits 1 for unknown command after database-path", () => { it("exits 1 for unknown command after database-path", () => {
process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"]; process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"];
const exitMock = mockProcessExit(); const exitMock = mockProcessExit();
const stderr = vi.spyOn(console, "error").mockImplementation(() => {}); const stderr = vi.spyOn(console, "error").mockImplementation(() => {});
@@ -56,32 +56,32 @@ describe("CLI parseArgs", () => {
expect(stdout).toHaveBeenCalled(); expect(stdout).toHaveBeenCalled();
const combined = stdout.mock.calls.flat().join("\n"); const combined = stdout.mock.calls.flat().join("\n");
expect(combined).toContain("Usage:"); expect(combined).toContain("Usage:");
expect(combined).toContain("livesync-cli [database-path]"); expect(combined).toContain("livesync-cli <database-path> [options] <command> [command-args]");
}); });
it("parses p2p-peers command and timeout", () => { it("parses p2p-peers command and timeout", () => {
process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"]; process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"];
const parsed = parseArgs(); const parsed = parseArgs();
expect(parsed.databasePath).toBe("./vault"); expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-peers"); expect(parsed.command).toBe("p2p-peers");
expect(parsed.commandArgs).toEqual(["5"]); expect(parsed.commandArgs).toEqual(["5"]);
}); });
it("parses p2p-sync command with peer and timeout", () => { it("parses p2p-sync command with peer and timeout", () => {
process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"]; process.argv = ["node", "livesync-cli", "./databasePath", "p2p-sync", "peer-1", "12"];
const parsed = parseArgs(); const parsed = parseArgs();
expect(parsed.databasePath).toBe("./vault"); expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-sync"); expect(parsed.command).toBe("p2p-sync");
expect(parsed.commandArgs).toEqual(["peer-1", "12"]); expect(parsed.commandArgs).toEqual(["peer-1", "12"]);
}); });
it("parses p2p-host command", () => { it("parses p2p-host command", () => {
process.argv = ["node", "livesync-cli", "./vault", "p2p-host"]; process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"];
const parsed = parseArgs(); const parsed = parseArgs();
expect(parsed.databasePath).toBe("./vault"); expect(parsed.databasePath).toBe("./databasePath");
expect(parsed.command).toBe("p2p-host"); expect(parsed.command).toBe("p2p-host");
expect(parsed.commandArgs).toEqual([]); expect(parsed.commandArgs).toEqual([]);
}); });

View File

@@ -10,6 +10,7 @@
"preview": "vite preview", "preview": "vite preview",
"cli": "node dist/index.cjs", "cli": "node dist/index.cjs",
"buildRun": "npm run build && npm run cli --", "buildRun": "npm run build && npm run cli --",
"build:docker": "docker build -f Dockerfile -t livesync-cli ../../..",
"check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json", "check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json",
"test:unit": "cd ../../.. && npx vitest run --config vitest.config.unit.ts src/apps/cli/main.unit.spec.ts src/apps/cli/commands/utils.unit.spec.ts src/apps/cli/commands/runCommand.unit.spec.ts src/apps/cli/commands/p2p.unit.spec.ts", "test:unit": "cd ../../.. && npx vitest run --config vitest.config.unit.ts src/apps/cli/main.unit.spec.ts src/apps/cli/commands/utils.unit.spec.ts src/apps/cli/commands/runCommand.unit.spec.ts src/apps/cli/commands/p2p.unit.spec.ts",
"test:e2e:two-vaults": "bash test/test-e2e-two-vaults-with-docker-linux.sh", "test:e2e:two-vaults": "bash test/test-e2e-two-vaults-with-docker-linux.sh",
@@ -24,7 +25,15 @@
"test:e2e:p2p-sync": "bash test/test-p2p-sync-linux.sh", "test:e2e:p2p-sync": "bash test/test-p2p-sync-linux.sh",
"test:e2e:mirror": "bash test/test-mirror-linux.sh", "test:e2e:mirror": "bash test/test-mirror-linux.sh",
"pretest:e2e:all": "npm run build", "pretest:e2e:all": "npm run build",
"test:e2e:all": " export RUN_BUILD=0 && npm run test:e2e:setup-put-cat && npm run test:e2e:push-pull && npm run test:e2e:sync-two-local && npm run test:e2e:p2p && npm run test:e2e:mirror && npm run test:e2e:two-vaults && npm run test:e2e:p2p" "test:e2e:all": " export RUN_BUILD=0 && npm run test:e2e:setup-put-cat && npm run test:e2e:push-pull && npm run test:e2e:sync-two-local && npm run test:e2e:p2p && npm run test:e2e:mirror && npm run test:e2e:two-vaults && npm run test:e2e:p2p",
"pretest:e2e:docker:all": "npm run build:docker",
"test:e2e:docker:push-pull": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-push-pull-linux.sh",
"test:e2e:docker:setup-put-cat": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-setup-put-cat-linux.sh",
"test:e2e:docker:mirror": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-mirror-linux.sh",
"test:e2e:docker:sync-two-local": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-sync-two-local-databases-linux.sh",
"test:e2e:docker:p2p": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-p2p-three-nodes-conflict-linux.sh",
"test:e2e:docker:p2p-sync": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-p2p-sync-linux.sh",
"test:e2e:docker:all": "export RUN_BUILD=0 && npm run test:e2e:docker:setup-put-cat && npm run test:e2e:docker:push-pull && npm run test:e2e:docker:sync-two-local && npm run test:e2e:docker:mirror"
}, },
"dependencies": {}, "dependencies": {},
"devDependencies": {} "devDependencies": {}

View File

@@ -0,0 +1,24 @@
{
"name": "livesync-cli-runtime",
"private": true,
"version": "0.0.0",
"description": "Runtime dependencies for Self-hosted LiveSync CLI Docker image",
"dependencies": {
"commander": "^14.0.3",
"werift": "^0.22.9",
"pouchdb-adapter-http": "^9.0.0",
"pouchdb-adapter-idb": "^9.0.0",
"pouchdb-adapter-indexeddb": "^9.0.0",
"pouchdb-adapter-leveldb": "^9.0.0",
"pouchdb-adapter-memory": "^9.0.0",
"pouchdb-core": "^9.0.0",
"pouchdb-errors": "^9.0.0",
"pouchdb-find": "^9.0.0",
"pouchdb-mapreduce": "^9.0.0",
"pouchdb-merge": "^9.0.0",
"pouchdb-replication": "^9.0.0",
"pouchdb-utils": "^9.0.0",
"pouchdb-wrappers": "*",
"transform-pouch": "^2.0.0"
}
}

View File

@@ -0,0 +1,111 @@
import * as nodeFs from "node:fs";
import * as nodePath from "node:path";
type LocalStorageShape = {
getItem(key: string): string | null;
setItem(key: string, value: string): void;
removeItem(key: string): void;
clear(): void;
};
class PersistentNodeLocalStorage {
private storagePath: string | undefined;
private localStore: Record<string, string> = {};
configure(storagePath: string) {
if (this.storagePath === storagePath) {
return;
}
this.storagePath = storagePath;
this.loadFromFile();
}
private loadFromFile() {
if (!this.storagePath) {
this.localStore = {};
return;
}
try {
const loaded = JSON.parse(nodeFs.readFileSync(this.storagePath, "utf-8")) as Record<string, string>;
this.localStore = { ...loaded };
} catch {
this.localStore = {};
}
}
private flushToFile() {
if (!this.storagePath) {
return;
}
nodeFs.mkdirSync(nodePath.dirname(this.storagePath), { recursive: true });
nodeFs.writeFileSync(this.storagePath, JSON.stringify(this.localStore, null, 2), "utf-8");
}
getItem(key: string): string | null {
return this.localStore[key] ?? null;
}
setItem(key: string, value: string) {
this.localStore[key] = value;
this.flushToFile();
}
removeItem(key: string) {
if (!(key in this.localStore)) {
return;
}
delete this.localStore[key];
this.flushToFile();
}
clear() {
this.localStore = {};
this.flushToFile();
}
}
const persistentNodeLocalStorage = new PersistentNodeLocalStorage();
function createNodeLocalStorageShim(): LocalStorageShape {
return {
getItem(key: string) {
return persistentNodeLocalStorage.getItem(key);
},
setItem(key: string, value: string) {
persistentNodeLocalStorage.setItem(key, value);
},
removeItem(key: string) {
persistentNodeLocalStorage.removeItem(key);
},
clear() {
persistentNodeLocalStorage.clear();
},
};
}
export function ensureGlobalNodeLocalStorage() {
if (!("localStorage" in globalThis) || typeof (globalThis as any).localStorage?.getItem !== "function") {
(globalThis as any).localStorage = createNodeLocalStorageShim();
}
}
export function configureNodeLocalStorage(storagePath: string) {
persistentNodeLocalStorage.configure(storagePath);
ensureGlobalNodeLocalStorage();
}
export function getNodeLocalStorageItem(key: string): string {
return persistentNodeLocalStorage.getItem(key) ?? "";
}
export function setNodeLocalStorageItem(key: string, value: string) {
persistentNodeLocalStorage.setItem(key, value);
}
export function deleteNodeLocalStorageItem(key: string) {
persistentNodeLocalStorage.removeItem(key);
}
export function clearNodeLocalStorage() {
persistentNodeLocalStorage.clear();
}

View File

@@ -0,0 +1,60 @@
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import {
clearNodeLocalStorage,
configureNodeLocalStorage,
ensureGlobalNodeLocalStorage,
getNodeLocalStorageItem,
setNodeLocalStorageItem,
} from "./NodeLocalStorage";
describe("NodeLocalStorage", () => {
const tempDirs: string[] = [];
afterEach(() => {
clearNodeLocalStorage();
for (const tempDir of tempDirs.splice(0)) {
fs.rmSync(tempDir, { recursive: true, force: true });
}
});
it("persists values to the configured file", () => {
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
tempDirs.push(tempDir);
const storagePath = path.join(tempDir, "runtime", "local-storage.json");
configureNodeLocalStorage(storagePath);
setNodeLocalStorageItem("checkpoint", "42");
const saved = JSON.parse(fs.readFileSync(storagePath, "utf-8")) as Record<string, string>;
expect(saved.checkpoint).toBe("42");
});
it("reloads persisted values when configured again", () => {
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
tempDirs.push(tempDir);
const storagePath = path.join(tempDir, "runtime", "local-storage.json");
fs.mkdirSync(path.dirname(storagePath), { recursive: true });
fs.writeFileSync(storagePath, JSON.stringify({ persisted: "value" }, null, 2), "utf-8");
configureNodeLocalStorage(storagePath);
expect(getNodeLocalStorageItem("persisted")).toBe("value");
});
it("installs a global localStorage shim backed by the same store", () => {
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
tempDirs.push(tempDir);
const storagePath = path.join(tempDir, "runtime", "local-storage.json");
configureNodeLocalStorage(storagePath);
ensureGlobalNodeLocalStorage();
globalThis.localStorage.setItem("shared", "state");
expect(getNodeLocalStorageItem("shared")).toBe("state");
});
});

View File

@@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService";
import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types"; import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types";
export class NodeServiceContext extends ServiceContext { export class NodeServiceContext extends ServiceContext {
vaultPath: string; databasePath: string;
constructor(vaultPath: string) { constructor(databasePath: string) {
super(); super();
this.vaultPath = vaultPath; this.databasePath = databasePath;
} }
} }
@@ -64,7 +64,7 @@ class NodeDatabaseService<T extends NodeServiceContext> extends DatabaseService<
): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } { ): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } {
const optionPass = { const optionPass = {
...options, ...options,
prefix: this.context.vaultPath + nodePath.sep, prefix: this.context.databasePath + nodePath.sep,
}; };
const passSettings = { ...settings, useIndexedDBAdapter: false }; const passSettings = { ...settings, useIndexedDBAdapter: false };
return super.modifyDatabaseOptions(passSettings, name, optionPass); return super.modifyDatabaseOptions(passSettings, name, optionPass);

View File

@@ -5,17 +5,17 @@ import { handlers } from "@lib/services/lib/HandlerUtils";
import type { ObsidianLiveSyncSettings } from "@lib/common/types"; import type { ObsidianLiveSyncSettings } from "@lib/common/types";
import type { ServiceContext } from "@lib/services/base/ServiceBase"; import type { ServiceContext } from "@lib/services/base/ServiceBase";
import { SettingService, type SettingServiceDependencies } from "@lib/services/base/SettingService"; import { SettingService, type SettingServiceDependencies } from "@lib/services/base/SettingService";
import * as nodeFs from "node:fs"; import {
import * as nodePath from "node:path"; configureNodeLocalStorage,
deleteNodeLocalStorageItem,
getNodeLocalStorageItem,
setNodeLocalStorageItem,
} from "./NodeLocalStorage";
export class NodeSettingService<T extends ServiceContext> extends SettingService<T> { export class NodeSettingService<T extends ServiceContext> extends SettingService<T> {
private storagePath: string;
private localStore: Record<string, string> = {};
constructor(context: T, dependencies: SettingServiceDependencies, storagePath: string) { constructor(context: T, dependencies: SettingServiceDependencies, storagePath: string) {
super(context, dependencies); super(context, dependencies);
this.storagePath = storagePath; configureNodeLocalStorage(storagePath);
this.loadLocalStoreFromFile();
this.onSettingSaved.addHandler((settings) => { this.onSettingSaved.addHandler((settings) => {
eventHub.emitEvent(EVENT_SETTING_SAVED, settings); eventHub.emitEvent(EVENT_SETTING_SAVED, settings);
return Promise.resolve(true); return Promise.resolve(true);
@@ -26,34 +26,16 @@ export class NodeSettingService<T extends ServiceContext> extends SettingService
}); });
} }
private loadLocalStoreFromFile() {
try {
const loaded = JSON.parse(nodeFs.readFileSync(this.storagePath, "utf-8")) as Record<string, string>;
this.localStore = { ...loaded };
} catch {
this.localStore = {};
}
}
private flushLocalStoreToFile() {
nodeFs.mkdirSync(nodePath.dirname(this.storagePath), { recursive: true });
nodeFs.writeFileSync(this.storagePath, JSON.stringify(this.localStore, null, 2), "utf-8");
}
protected setItem(key: string, value: string) { protected setItem(key: string, value: string) {
this.localStore[key] = value; setNodeLocalStorageItem(key, value);
this.flushLocalStoreToFile();
} }
protected getItem(key: string): string { protected getItem(key: string): string {
return this.localStore[key] ?? ""; return getNodeLocalStorageItem(key);
} }
protected deleteItem(key: string): void { protected deleteItem(key: string): void {
if (key in this.localStore) { deleteNodeLocalStorageItem(key);
delete this.localStore[key];
this.flushLocalStoreToFile();
}
} }
public saveData = handlers<{ saveData: (data: ObsidianLiveSyncSettings) => Promise<void> }>().binder("saveData"); public saveData = handlers<{ saveData: (data: ObsidianLiveSyncSettings) => Promise<void> }>().binder("saveData");

View File

@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info "Test for Issue #860: Empty output from ls and mirror"
RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..."
npm run build
fi
echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"
# 1. Test 'ls' on empty database
echo "[INFO] Testing 'ls' on empty database..."
LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls)
if [[ -z "$LS_OUTPUT" ]]; then
echo "[REPRODUCED] 'ls' returned empty output for empty database."
else
echo "[INFO] 'ls' output: $LS_OUTPUT"
fi
# 2. Test 'mirror' on empty vault
echo "[INFO] Testing 'mirror' on empty vault..."
MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1)
if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then
# Note: currently it prints [Command] mirror to stderr.
# Let's see if it prints anything else.
echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)."
else
echo "[INFO] 'mirror' output: $MIRROR_OUTPUT"
fi
echo "[DONE] finished repro-860 test"

16
src/apps/cli/test/test-e2e-two-vaults-common.sh Executable file → Normal file
View File

@@ -136,6 +136,8 @@ fi
TARGET_A_ONLY="e2e/a-only-info.md" TARGET_A_ONLY="e2e/a-only-info.md"
TARGET_SYNC="e2e/sync-info.md" TARGET_SYNC="e2e/sync-info.md"
TARGET_SYNC_TWICE_FIRST="e2e/sync-twice-first.md"
TARGET_SYNC_TWICE_SECOND="e2e/sync-twice-second.md"
TARGET_PUSH="e2e/pushed-from-a.md" TARGET_PUSH="e2e/pushed-from-a.md"
TARGET_PUT="e2e/put-from-a.md" TARGET_PUT="e2e/put-from-a.md"
TARGET_PUSH_BINARY="e2e/pushed-from-a.bin" TARGET_PUSH_BINARY="e2e/pushed-from-a.bin"
@@ -154,6 +156,20 @@ INFO_B_SYNC="$(run_cli_b info "$TARGET_SYNC")"
cli_test_assert_contains "$INFO_B_SYNC" "\"path\": \"$TARGET_SYNC\"" "B info should include path after sync" cli_test_assert_contains "$INFO_B_SYNC" "\"path\": \"$TARGET_SYNC\"" "B info should include path after sync"
echo "[PASS] sync A->B and B info" echo "[PASS] sync A->B and B info"
echo "[CASE] B can sync again after first replication has completed"
printf 'first-sync-round-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_SYNC_TWICE_FIRST" >/dev/null
run_cli_a sync >/dev/null
run_cli_b sync >/dev/null
CAT_B_SYNC_TWICE_FIRST="$(run_cli_b cat "$TARGET_SYNC_TWICE_FIRST" | cli_test_sanitise_cat_stdout)"
cli_test_assert_equal "first-sync-round-$DB_SUFFIX" "$CAT_B_SYNC_TWICE_FIRST" "B should receive first update after first sync"
printf 'second-sync-round-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_SYNC_TWICE_SECOND" >/dev/null
run_cli_a sync >/dev/null
run_cli_b sync >/dev/null
CAT_B_SYNC_TWICE_SECOND="$(run_cli_b cat "$TARGET_SYNC_TWICE_SECOND" | cli_test_sanitise_cat_stdout)"
cli_test_assert_equal "second-sync-round-$DB_SUFFIX" "$CAT_B_SYNC_TWICE_SECOND" "B should receive second update after re-running sync"
echo "[PASS] second sync after completion works"
echo "[CASE] A pushes and puts, both sync, and B can pull and cat" echo "[CASE] A pushes and puts, both sync, and B can pull and cat"
PUSH_SRC="$WORK_DIR/push-source.txt" PUSH_SRC="$WORK_DIR/push-source.txt"
PULL_DST="$WORK_DIR/pull-destination.txt" PULL_DST="$WORK_DIR/pull-destination.txt"

0
src/apps/cli/test/test-e2e-two-vaults-matrix.sh Executable file → Normal file
View File

View File

@@ -0,0 +1,150 @@
#!/usr/bin/env bash
# test-helpers-docker.sh
#
# Docker-mode overrides for test-helpers.sh.
# Sourced automatically at the end of test-helpers.sh when
# LIVESYNC_TEST_DOCKER=1 is set, replacing run_cli (and related helpers)
# with a Docker-based implementation.
#
# The Docker container and the host share a common directory layout:
# $WORK_DIR (host) <-> /workdir (container)
# $CLI_DIR (host) <-> /clidir (container)
#
# Usage (run an existing test against the Docker image):
# LIVESYNC_TEST_DOCKER=1 bash test/test-push-pull-linux.sh
# LIVESYNC_TEST_DOCKER=1 bash test/test-mirror-linux.sh
# LIVESYNC_TEST_DOCKER=1 bash test/test-sync-two-local-databases-linux.sh
# LIVESYNC_TEST_DOCKER=1 bash test/test-setup-put-cat-linux.sh
#
# Optional environment variables:
# DOCKER_IMAGE Image name/tag to use (default: livesync-cli)
# RUN_BUILD Set to 1 to rebuild the Docker image before the test
# (default: 0 — assumes the image is already built)
# Build command: npm run build:docker (from src/apps/cli/)
#
# Notes:
# - The container is started with --network host so that it can reach
# CouchDB / P2P relay containers that are also using the host network.
# - On macOS / Windows Docker Desktop --network host behaves differently
# (it is not a true host-network bridge); tests that rely on localhost
# connectivity to other containers may fail on those platforms.
# Ensure Docker-mode tests do not trigger host-side `npm run build` unless
# explicitly requested by the caller.
RUN_BUILD="${RUN_BUILD:-0}"
# Override the standard implementation.
# In Docker mode the CLI_CMD array is a no-op sentinel; run_cli is overridden
# directly.
cli_test_init_cli_cmd() {
DOCKER_IMAGE="${DOCKER_IMAGE:-livesync-cli}"
# CLI_CMD is unused in Docker mode; set a sentinel so existing code
# that references it will not error.
CLI_CMD=(__docker__)
}
# ─── display_test_info ────────────────────────────────────────────────────────
display_test_info() {
local image="${DOCKER_IMAGE:-livesync-cli}"
local image_id
image_id="$(docker inspect --format='{{slice .Id 7 19}}' "$image" 2>/dev/null || echo "N/A")"
echo "======================"
echo "Script: ${BASH_SOURCE[1]:-$0}"
echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "Commit: $(git -C "${SCRIPT_DIR:-.}" rev-parse --short HEAD 2>/dev/null || echo "N/A")"
echo "Mode: Docker image=${image} id=${image_id}"
echo "======================"
}
# ─── _docker_translate_arg ───────────────────────────────────────────────────
# Translate a single host filesystem path to its in-container equivalent.
# Paths under WORK_DIR → /workdir/...
# Paths under CLI_DIR → /clidir/...
# Everything else is returned unchanged (relative paths, URIs, plain names).
_docker_translate_arg() {
local arg="$1"
if [[ -n "${WORK_DIR:-}" && "$arg" == "$WORK_DIR"* ]]; then
printf '%s' "/workdir${arg#$WORK_DIR}"
return
fi
if [[ -n "${CLI_DIR:-}" && "$arg" == "$CLI_DIR"* ]]; then
printf '%s' "/clidir${arg#$CLI_DIR}"
return
fi
printf '%s' "$arg"
}
# ─── run_cli ─────────────────────────────────────────────────────────────────
# Drop-in replacement for run_cli that executes the CLI inside a Docker
# container, translating host paths to container paths automatically.
#
# Calling convention is identical to the native run_cli:
# run_cli <vault-path> [options] <command> [command-args]
# run_cli init-settings [options] <settings-file>
#
# The vault path (first positional argument for regular commands) is forwarded
# via the LIVESYNC_DB_PATH environment variable so that docker-entrypoint.sh
# can inject it before the remaining CLI arguments.
run_cli() {
local args=("$@")
# ── 1. Translate all host paths to container paths ────────────────────
local translated=()
for arg in "${args[@]}"; do
translated+=("$(_docker_translate_arg "$arg")")
done
# ── 2. Split vault path from the rest of the arguments ───────────────
local first="${translated[0]:-}"
local env_args=()
local cli_args=()
# These tokens are commands or flags that appear before any vault path.
case "$first" in
"" | --help | -h \
| init-settings \
| -v | --verbose | -d | --debug | -f | --force | -s | --settings)
# No leading vault path — pass all translated args as-is.
cli_args=("${translated[@]}")
;;
*)
# First arg is the vault path; hand it to docker-entrypoint.sh
# via LIVESYNC_DB_PATH so the entrypoint prepends it correctly.
env_args+=(-e "LIVESYNC_DB_PATH=$first")
cli_args=("${translated[@]:1}")
;;
esac
# ── 3. Inject verbose / debug flags ──────────────────────────────────
if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then
cli_args=(-v "${cli_args[@]}")
fi
# ── 4. Volume mounts ──────────────────────────────────────────────────
local vol_args=()
if [[ -n "${WORK_DIR:-}" ]]; then
vol_args+=(-v "${WORK_DIR}:/workdir")
fi
# Mount CLI_DIR (src/apps/cli) for two-vault tests that store vault data
# under $CLI_DIR/.livesync/.
if [[ -n "${CLI_DIR:-}" ]]; then
vol_args+=(-v "${CLI_DIR}:/clidir")
fi
# ── 5. stdin forwarding ───────────────────────────────────────────────
# Attach stdin only when it is a pipe (the 'put' command reads from stdin).
# Without -i the pipe data would never reach the container process.
local stdin_flags=()
if [[ ! -t 0 ]]; then
stdin_flags=(-i)
fi
docker run --rm \
"${stdin_flags[@]}" \
--network host \
--user "$(id -u):$(id -g)" \
"${vol_args[@]}" \
"${env_args[@]}" \
"${DOCKER_IMAGE:-livesync-cli}" \
"${cli_args[@]}"
}
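Under these overrides an existing helper call keeps its shape; for example a piped `put` (paths illustrative) has its stdin forwarded via `-i` and its database path handed to the entrypoint through `LIVESYNC_DB_PATH`:

  printf 'hello\n' | run_cli "$WORK_DIR/db" --settings "$WORK_DIR/data.json" put notes/hello.md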

View File

@@ -1,5 +1,15 @@
#!/usr/bin/env bash #!/usr/bin/env bash
# ─── local init hook ────────────────────────────────────────────────────────
# If test-init.local.sh exists alongside this file, source it before anything
# else. Use it to set up your local environment (e.g. activate nvm, set
# DOCKER_IMAGE, ...). The file is git-ignored so it is safe to put personal
# or machine-specific configuration there.
_TEST_HELPERS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
[[ -f "$_TEST_HELPERS_DIR/test-init.local.sh" ]] && source "$_TEST_HELPERS_DIR/test-init.local.sh"
unset _TEST_HELPERS_DIR
cli_test_init_cli_cmd() { cli_test_init_cli_cmd() {
if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then
CLI_CMD=(npm --silent run cli -- -v) CLI_CMD=(npm --silent run cli -- -v)
@@ -343,4 +353,10 @@ display_test_info(){
echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)" echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "Git commit: $(git -C "$SCRIPT_DIR/.." rev-parse --short HEAD 2>/dev/null || echo "N/A")" echo "Git commit: $(git -C "$SCRIPT_DIR/.." rev-parse --short HEAD 2>/dev/null || echo "N/A")"
echo "======================" echo "======================"
} }
# Docker-mode hook — source overrides when LIVESYNC_TEST_DOCKER=1.
if [[ "${LIVESYNC_TEST_DOCKER:-0}" == "1" ]]; then
# shellcheck source=/dev/null
source "$(dirname "${BASH_SOURCE[0]}")/test-helpers-docker.sh"
fi
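A minimal test-init.local.sh sketch for the local init hook introduced above (contents are purely illustrative; the file is optional and git-ignored):

  #!/usr/bin/env bash
  export DOCKER_IMAGE=livesync-cli
  export VERBOSE_TEST_LOGGING=1
  # nvm use 22   # uncomment if your default Node version differs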

View File

@@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT
SETTINGS_FILE="$WORK_DIR/data.json" SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault" VAULT_DIR="$WORK_DIR/vault"
DB_DIR="$WORK_DIR/db"
mkdir -p "$VAULT_DIR/test" mkdir -p "$VAULT_DIR/test"
mkdir -p "$DB_DIR"
if [[ "$RUN_BUILD" == "1" ]]; then if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI..." echo "[INFO] building CLI..."
@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
# isConfigured=true is required for mirror (canProceedScan checks this) # isConfigured=true is required for mirror (canProceedScan checks this)
cli_test_mark_settings_configured "$SETTINGS_FILE" cli_test_mark_settings_configured "$SETTINGS_FILE"
# Preparation: settings and run helpers for the separated database/vault paths
DB_SETTINGS="$DB_DIR/settings.json"
cp "$SETTINGS_FILE" "$DB_SETTINGS"
# Helper for standard run (Separated paths)
run_mirror_test() {
run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
}
# Helper for compatibility run (Same path)
run_mirror_compat() {
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
}
PASS=0 PASS=0
FAIL=0 FAIL=0
@@ -78,19 +94,27 @@ portable_touch_timestamp() {
# Case 1: File exists only in storage → should be synced into DB after mirror # Case 1: File exists only in storage → should be synced into DB after mirror
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
echo "" echo ""
echo "=== Case 1: storage-only → DB ===" echo "=== Case 1: storage-only → DB (Separated Paths) ==="
printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md" printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror echo "[DEBUG] DB_DIR: $DB_DIR"
echo "[DEBUG] VAULT_DIR: $VAULT_DIR"
run_mirror_test
RESULT_FILE="$WORK_DIR/case1-cat.txt" RESULT_FILE="$WORK_DIR/case1-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE" # Try 'ls' first to see what's in the DB
echo "--- DB contents ---"
run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
echo "-------------------"
run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"
if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
assert_pass "storage-only file was synced into DB" assert_pass "storage-only file was synced into DB using separated paths"
else else
assert_fail "storage-only file NOT synced into DB" assert_fail "storage-only file NOT synced into DB with separated paths"
echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2 echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2 echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
fi fi
@@ -99,9 +123,9 @@ fi
# Case 2: File exists only in DB → should be restored to storage after mirror # Case 2: File exists only in DB → should be restored to storage after mirror
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
echo "" echo ""
echo "=== Case 2: DB-only → storage ===" echo "=== Case 2: DB-only → storage (Separated Paths) ==="
printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
assert_fail "db-only.md unexpectedly exists in storage before mirror" assert_fail "db-only.md unexpectedly exists in storage before mirror"
@@ -109,7 +133,7 @@ else
echo "[INFO] confirmed: test/db-only.md not in storage before mirror" echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
fi fi
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror run_mirror_test
if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")" STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"
@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')" assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
fi fi
else else
assert_fail "DB-only file was NOT restored to storage" assert_fail "DB-only file NOT restored to storage after mirror"
fi fi
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
# Case 3: File deleted in DB → should NOT be created in storage # Case 3: File deleted in DB → should NOT be created in storage
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
echo "" echo ""
echo "=== Case 3: DB-deleted → storage untouched ===" echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="
printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror run_mirror_test
if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
assert_pass "deleted DB entry was not restored to storage" assert_pass "deleted DB entry was not restored to storage"
@@ -143,19 +167,19 @@ fi
# Case 4: Both exist, storage is newer → DB should be updated # Case 4: Both exist, storage is newer → DB should be updated
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
echo "" echo ""
echo "=== Case 4: storage newer → DB updated ===" echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="
# Seed DB with old content (mtime ≈ now) # Seed DB with old content (mtime ≈ now)
printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md
# Write new content to storage with a timestamp 1 hour in the future # Write new content to storage with a timestamp 1 hour in the future
printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md" printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md" touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror run_mirror_test
DB_RESULT_FILE="$WORK_DIR/case4-pull.txt" DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE" run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
assert_pass "DB updated to match newer storage file" assert_pass "DB updated to match newer storage file"
else else
@@ -168,16 +192,16 @@ fi
# Case 5: Both exist, DB is newer → storage should be updated # Case 5: Both exist, DB is newer → storage should be updated
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
echo "" echo ""
echo "=== Case 5: DB newer → storage updated ===" echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="
# Write old content to storage with a timestamp 1 hour in the past # Write old content to storage with a timestamp 1 hour in the past
printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md" printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md" touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"
# Write new content to DB only (mtime ≈ now, newer than the storage file) # Write new content to DB only (mtime ≈ now, newer than the storage file)
printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror run_mirror_test
STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")" STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
if [[ "$STORAGE_CONTENT" == "new db content" ]]; then if [[ "$STORAGE_CONTENT" == "new db content" ]]; then
@@ -186,6 +210,25 @@ else
assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')" assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
fi fi
# ─────────────────────────────────────────────────────────────────────────────
# Case 6: Compatibility test - omitted vault-path
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="
# We use VAULT_DIR as the "main" database path for this part.
printf 'compat-content\n' > "$VAULT_DIR/compat.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
# In compat mode, the mirrored file should end up in the DB at the vault root
CAT_RESULT="$WORK_DIR/compat-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
assert_pass "Compatibility mode works (omitted vault-path)"
else
assert_fail "Compatibility mode failed to sync file into DB"
fi
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────
# Summary # Summary
# ───────────────────────────────────────────────────────────────────────────── # ─────────────────────────────────────────────────────────────────────────────

View File

0
src/apps/cli/test/test-setup-put-cat-linux.sh Executable file → Normal file
View File

View File

@@ -0,0 +1,136 @@
#!/usr/bin/env bash
# Test: CLI sync behaviour against a locked remote database.
#
# Scenario:
# 1. Start CouchDB, create a test database, and perform an initial sync so that
# the milestone document is created on the remote.
# 2. Unlock the milestone (locked=false, accepted_nodes=[]) and verify sync
# succeeds without the locked error message.
# 3. Lock the milestone (locked=true, accepted_nodes=[]) and verify sync fails
# with an actionable error message.
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info
RUN_BUILD="${RUN_BUILD:-1}"
TEST_ENV_FILE="${TEST_ENV_FILE:-$CLI_DIR/.test.env}"
cli_test_init_cli_cmd
if [[ ! -f "$TEST_ENV_FILE" ]]; then
echo "[ERROR] test env file not found: $TEST_ENV_FILE" >&2
exit 1
fi
set -a
source "$TEST_ENV_FILE"
set +a
DB_SUFFIX="$(date +%s)-$RANDOM"
COUCHDB_URI="${hostname%/}"
COUCHDB_DBNAME="${dbname}-locked-${DB_SUFFIX}"
COUCHDB_USER="${username:-}"
COUCHDB_PASSWORD="${password:-}"
if [[ -z "$COUCHDB_URI" || -z "$COUCHDB_USER" || -z "$COUCHDB_PASSWORD" ]]; then
echo "[ERROR] COUCHDB_URI, COUCHDB_USER, COUCHDB_PASSWORD are required" >&2
exit 1
fi
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-locked-test.XXXXXX")"
VAULT_DIR="$WORK_DIR/vault"
SETTINGS_FILE="$WORK_DIR/settings.json"
mkdir -p "$VAULT_DIR"
cleanup() {
local exit_code=$?
cli_test_stop_couchdb
rm -rf "$WORK_DIR"
exit "$exit_code"
}
trap cleanup EXIT
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI"
npm run build
fi
echo "[INFO] starting CouchDB and creating test database: $COUCHDB_DBNAME"
cli_test_start_couchdb "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME"
echo "[INFO] preparing settings"
cli_test_init_settings_file "$SETTINGS_FILE"
cli_test_apply_couchdb_settings "$SETTINGS_FILE" "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME" 1
echo "[INFO] initial sync to create milestone document"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >/dev/null
MILESTONE_ID="_local/obsydian_livesync_milestone"
MILESTONE_URL="${COUCHDB_URI}/${COUCHDB_DBNAME}/${MILESTONE_ID}"
update_milestone() {
local locked="$1"
local accepted_nodes="$2"
local current
current="$(cli_test_curl_json --user "${COUCHDB_USER}:${COUCHDB_PASSWORD}" "$MILESTONE_URL")"
local updated
updated="$(node -e '
const doc = JSON.parse(process.argv[1]);
doc.locked = process.argv[2] === "true";
doc.accepted_nodes = JSON.parse(process.argv[3]);
process.stdout.write(JSON.stringify(doc));
' "$current" "$locked" "$accepted_nodes")"
cli_test_curl_json -X PUT \
--user "${COUCHDB_USER}:${COUCHDB_PASSWORD}" \
-H "Content-Type: application/json" \
-d "$updated" \
"$MILESTONE_URL" >/dev/null
}
SYNC_LOG="$WORK_DIR/sync.log"
echo "[CASE] sync should succeed when remote is not locked"
update_milestone "false" "[]"
set +e
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >"$SYNC_LOG" 2>&1
SYNC_EXIT=$?
set -e
if [[ "$SYNC_EXIT" -ne 0 ]]; then
echo "[FAIL] sync should succeed when remote is not locked" >&2
cat "$SYNC_LOG" >&2
exit 1
fi
if grep -Fq "The remote database is locked" "$SYNC_LOG"; then
echo "[FAIL] locked error should not appear when remote is not locked" >&2
cat "$SYNC_LOG" >&2
exit 1
fi
echo "[PASS] unlocked remote DB syncs successfully"
echo "[CASE] sync should fail with actionable error when remote is locked"
update_milestone "true" "[]"
set +e
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >"$SYNC_LOG" 2>&1
SYNC_EXIT=$?
set -e
if [[ "$SYNC_EXIT" -eq 0 ]]; then
echo "[FAIL] sync should have exited with non-zero when remote is locked" >&2
cat "$SYNC_LOG" >&2
exit 1
fi
cli_test_assert_contains "$(cat "$SYNC_LOG")" \
"The remote database is locked and this device is not yet accepted" \
"sync output should contain the locked-remote error message"
echo "[PASS] locked remote DB produces actionable CLI error"

View File

@@ -1,2 +1,30 @@
#!/bin/bash #!/bin/bash
docker run -d --name relay-test -p 4000:8080 scsibug/nostr-rs-relay:latest set -e
docker run -d --name relay-test -p 4000:7777 \
--tmpfs /app/strfry-db:rw,size=256m \
--entrypoint sh \
ghcr.io/hoytech/strfry:latest \
-lc 'cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"
relay {
bind = "0.0.0.0"
port = 7777
nofiles = 100000
info {
name = "livesync test relay"
description = "local relay for livesync p2p tests"
}
maxWebsocketPayloadSize = 131072
autoPingSeconds = 55
writePolicy {
plugin = ""
}
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay'
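A quick way to confirm the relay came up before running the p2p tests, using only standard Docker commands (no project-specific assumptions):

  docker logs relay-test     # check that strfry started with the generated config without errors
  docker rm -f relay-test    # clean up after the test run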

View File

@@ -12,8 +12,7 @@ const defaultExternal = [
"pouchdb-adapter-leveldb", "pouchdb-adapter-leveldb",
"commander", "commander",
"punycode", "punycode",
"node-datachannel", "werift",
"node-datachannel/polyfill",
]; ];
export default defineConfig({ export default defineConfig({
plugins: [svelte()], plugins: [svelte()],
@@ -52,7 +51,7 @@ export default defineConfig({
if (id === "fs" || id === "fs/promises" || id === "path" || id === "crypto" || id === "worker_threads") if (id === "fs" || id === "fs/promises" || id === "path" || id === "crypto" || id === "worker_threads")
return true; return true;
if (id.startsWith("pouchdb-")) return true; if (id.startsWith("pouchdb-")) return true;
if (id.startsWith("node-datachannel")) return true; if (id.startsWith("werift")) return true;
if (id.startsWith("node:")) return true; if (id.startsWith("node:")) return true;
return false; return false;
}, },

View File

@@ -0,0 +1,58 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync WebApp — Docker image
# Browser-based vault sync using the FileSystem API, served by nginx.
#
# Build (from the repository root):
# docker build -f src/apps/webapp/Dockerfile -t livesync-webapp .
#
# Run:
# docker run --rm -p 8080:80 livesync-webapp
# Then open http://localhost:8080/webapp.html in Chrome/Edge 86+.
#
# Notes:
# - This image serves purely static files; no server-side code is involved.
# - The FileSystem API is a browser feature and requires Chrome/Edge 86+ or
# Safari 15.2+ (limited). Firefox is not supported.
# - CouchDB / S3 connections are made directly from the browser; the container
# only serves HTML/JS/CSS assets.
# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to install dependencies and build the Vite bundle.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder
WORKDIR /build
# Install workspace dependencies (all apps share the root package.json)
COPY package.json ./
RUN npm install
# Copy the full source tree and build the WebApp bundle
COPY . .
RUN cd src/apps/webapp && npm run build
# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime
# Minimal nginx image that serves the static build output.
# ─────────────────────────────────────────────────────────────────────────────
FROM nginx:stable-alpine
# Remove the default nginx welcome page
RUN rm -rf /usr/share/nginx/html/*
# Copy the built static assets
COPY --from=builder /build/src/apps/webapp/dist /usr/share/nginx/html
# Redirect the root to webapp.html so the app loads on first visit
RUN printf 'server {\n\
listen 80;\n\
root /usr/share/nginx/html;\n\
index webapp.html;\n\
location / {\n\
try_files $uri $uri/ =404;\n\
}\n\
}\n' > /etc/nginx/conf.d/default.conf
EXPOSE 80

View File

@@ -14,6 +14,7 @@ import { useOfflineScanner } from "@lib/serviceFeatures/offlineScanner";
import { useRedFlagFeatures } from "@/serviceFeatures/redFlag"; import { useRedFlagFeatures } from "@/serviceFeatures/redFlag";
import { useCheckRemoteSize } from "@lib/serviceFeatures/checkRemoteSize"; import { useCheckRemoteSize } from "@lib/serviceFeatures/checkRemoteSize";
import { useSetupURIFeature } from "@lib/serviceFeatures/setupObsidian/setupUri"; import { useSetupURIFeature } from "@lib/serviceFeatures/setupObsidian/setupUri";
import { useRemoteConfiguration } from "@lib/serviceFeatures/remoteConfig";
import { SetupManager } from "@/modules/features/SetupManager"; import { SetupManager } from "@/modules/features/SetupManager";
import { useSetupManagerHandlersFeature } from "@/serviceFeatures/setupObsidian/setupManagerHandlers"; import { useSetupManagerHandlersFeature } from "@/serviceFeatures/setupObsidian/setupManagerHandlers";
import { useP2PReplicatorCommands } from "@/lib/src/replication/trystero/useP2PReplicatorCommands"; import { useP2PReplicatorCommands } from "@/lib/src/replication/trystero/useP2PReplicatorCommands";
@@ -132,6 +133,7 @@ class LiveSyncWebApp {
useOfflineScanner(core); useOfflineScanner(core);
useRedFlagFeatures(core); useRedFlagFeatures(core);
useCheckRemoteSize(core); useCheckRemoteSize(core);
useRemoteConfiguration(core);
const replicator = useP2PReplicatorFeature(core); const replicator = useP2PReplicatorFeature(core);
useP2PReplicatorCommands(core, replicator); useP2PReplicatorCommands(core, replicator);
const setupManager = core.getModule(SetupManager); const setupManager = core.getModule(SetupManager);

View File

@@ -7,6 +7,8 @@
"scripts": { "scripts": {
"dev": "vite", "dev": "vite",
"build": "vite build", "build": "vite build",
"build:docker": "docker build -f Dockerfile -t livesync-webapp ../../..",
"run:docker": "docker run -p 8002:80 livesync-webapp",
"preview": "vite preview" "preview": "vite preview"
}, },
"dependencies": {}, "dependencies": {},

View File

@@ -0,0 +1,81 @@
import { defineConfig, devices } from "@playwright/test";
import * as path from "path";
import * as fs from "fs";
import { fileURLToPath } from "url";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// ---------------------------------------------------------------------------
// Load environment variables from .test.env (root) so that CouchDB
// connection details are visible to the test process.
// ---------------------------------------------------------------------------
function loadEnvFile(envPath: string): Record<string, string> {
const result: Record<string, string> = {};
if (!fs.existsSync(envPath)) return result;
const lines = fs.readFileSync(envPath, "utf-8").split("\n");
for (const line of lines) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
const eq = trimmed.indexOf("=");
if (eq < 0) continue;
const key = trimmed.slice(0, eq).trim();
const val = trimmed.slice(eq + 1).trim();
result[key] = val;
}
return result;
}
// __dirname is src/apps/webapp — root is three levels up
const ROOT = path.resolve(__dirname, "../../..");
const envVars = {
...loadEnvFile(path.join(ROOT, ".env")),
...loadEnvFile(path.join(ROOT, ".test.env")),
};
// Make the loaded variables available to all test files via process.env.
for (const [k, v] of Object.entries(envVars)) {
if (!(k in process.env)) {
process.env[k] = v;
}
}
export default defineConfig({
testDir: "./test",
// Give each test plenty of time for replication round-trips.
timeout: 120_000,
expect: { timeout: 30_000 },
// Run test files sequentially; the tests themselves manage two contexts.
fullyParallel: false,
workers: 1,
reporter: "list",
use: {
baseURL: "http://localhost:3000",
// Use Chromium for OPFS and FileSystem API support.
...devices["Desktop Chrome"],
headless: true,
// Launch args to match the main vitest browser config.
launchOptions: {
args: ["--js-flags=--expose-gc"],
},
},
projects: [
{
name: "chromium",
use: { ...devices["Desktop Chrome"] },
},
],
// Start the vite dev server before running the tests.
webServer: {
command: "npx vite --port 3000",
url: "http://localhost:3000",
// Re-use a running dev server when developing locally.
reuseExistingServer: !process.env.CI,
timeout: 30_000,
// Run from the webapp directory so vite finds its config.
cwd: __dirname,
},
});
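Assuming this file is picked up as the default Playwright config in src/apps/webapp, the suite can be run with:

  cd src/apps/webapp && npx playwright test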

View File

@@ -0,0 +1,203 @@
/**
* LiveSync WebApp E2E test entry point.
*
* When served by vite dev server (at /test.html), this module wires up
* `window.livesyncTest`, a plain JS API that Playwright tests can call via
* `page.evaluate()`. All methods are async and serialisation-safe.
*
* Vault storage is backed by OPFS so no `showDirectoryPicker()` interaction
* is required, making it fully headless-compatible.
*/
import { LiveSyncWebApp } from "./main";
import type { ObsidianLiveSyncSettings } from "@lib/common/types";
import type { FilePathWithPrefix } from "@lib/common/types";
// --------------------------------------------------------------------------
// Internal state: one app instance per page / browser context
// --------------------------------------------------------------------------
let app: LiveSyncWebApp | null = null;
// --------------------------------------------------------------------------
// Helpers
// --------------------------------------------------------------------------
/** Strip the "plain:" / "enc:" / … prefix used internally in PouchDB paths. */
function stripPrefix(raw: string): string {
return raw.replace(/^[^:]+:/, "");
}
/**
* Poll every 300 ms until all known processing queues are drained, or until
* the timeout elapses. Mirrors `waitForIdle` in the existing vitest harness.
*/
async function waitForIdle(core: any, timeoutMs = 60_000): Promise<void> {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
const q =
(core.services?.replication?.databaseQueueCount?.value ?? 0) +
(core.services?.fileProcessing?.totalQueued?.value ?? 0) +
(core.services?.fileProcessing?.batched?.value ?? 0) +
(core.services?.fileProcessing?.processing?.value ?? 0) +
(core.services?.replication?.storageApplyingCount?.value ?? 0);
if (q === 0) return;
await new Promise<void>((r) => setTimeout(r, 300));
}
throw new Error(`waitForIdle timed out after ${timeoutMs} ms`);
}
function getCore(): any {
const core = (app as any)?.core;
if (!core) throw new Error("Vault not initialised call livesyncTest.init() first");
return core;
}
// --------------------------------------------------------------------------
// Public test API
// --------------------------------------------------------------------------
export interface LiveSyncTestAPI {
/**
* Initialise a vault in OPFS under the given name and apply `settings`.
* Any previous contents of the OPFS directory are wiped first so each
* test run starts clean.
*/
init(vaultName: string, settings: Partial<ObsidianLiveSyncSettings>): Promise<void>;
/**
* Write `content` to the local PouchDB under `vaultPath` (equivalent to
* the CLI `put` command). Waiting for the DB write to finish is
* included; you still need to call `replicate()` to push to remote.
*/
putFile(vaultPath: string, content: string): Promise<boolean>;
/**
* Mark `vaultPath` as deleted in the local PouchDB (equivalent to CLI
* `rm`). Call `replicate()` afterwards to propagate to remote.
*/
deleteFile(vaultPath: string): Promise<boolean>;
/**
* Run one full replication cycle (push + pull) against the remote CouchDB,
* then wait for the local storage-application queue to drain.
*/
replicate(): Promise<boolean>;
/**
* Wait until all processing queues are idle. Usually not needed after
* `putFile` / `deleteFile` since those already await, but useful when
* testing results after `replicate()`.
*/
waitForIdle(timeoutMs?: number): Promise<void>;
/**
* Return metadata for `vaultPath` from the local database, or `null` if
* not found / deleted.
*/
getInfo(vaultPath: string): Promise<{
path: string;
revision: string;
conflicts: string[];
size: number;
mtime: number;
} | null>;
/** Convenience wrapper: returns true when the doc has ≥1 conflict revision. */
hasConflict(vaultPath: string): Promise<boolean>;
/** Tear down the current app instance. */
shutdown(): Promise<void>;
}
// --------------------------------------------------------------------------
// Implementation
// --------------------------------------------------------------------------
const livesyncTest: LiveSyncTestAPI = {
async init(vaultName: string, settings: Partial<ObsidianLiveSyncSettings>): Promise<void> {
// Clean up any stale OPFS data from previous runs.
const opfsRoot = await navigator.storage.getDirectory();
try {
await opfsRoot.removeEntry(vaultName, { recursive: true });
} catch {
// directory did not exist; that's fine
}
const vaultDir = await opfsRoot.getDirectoryHandle(vaultName, { create: true });
// Pre-write settings so they are loaded during initialise().
const livesyncDir = await vaultDir.getDirectoryHandle(".livesync", { create: true });
const settingsFile = await livesyncDir.getFileHandle("settings.json", { create: true });
const writable = await settingsFile.createWritable();
await writable.write(JSON.stringify(settings));
await writable.close();
app = new LiveSyncWebApp(vaultDir);
await app.initialize();
// Give background startup tasks a moment to settle.
await waitForIdle(getCore(), 30_000);
},
async putFile(vaultPath: string, content: string): Promise<boolean> {
const core = getCore();
const result = await core.serviceModules.databaseFileAccess.storeContent(
vaultPath as FilePathWithPrefix,
content
);
await waitForIdle(core);
return result !== false;
},
async deleteFile(vaultPath: string): Promise<boolean> {
const core = getCore();
const result = await core.serviceModules.databaseFileAccess.delete(vaultPath as FilePathWithPrefix);
await waitForIdle(core);
return result !== false;
},
async replicate(): Promise<boolean> {
const core = getCore();
const result = await core.services.replication.replicate(true);
// After replicate() resolves, remote docs may still be queued for
// local storage application; wait until all queues are drained.
await waitForIdle(core);
return result !== false;
},
async waitForIdle(timeoutMs?: number): Promise<void> {
await waitForIdle(getCore(), timeoutMs ?? 60_000);
},
async getInfo(vaultPath: string) {
const core = getCore();
const db = core.services?.database;
for await (const doc of db.localDatabase.findAllNormalDocs({ conflicts: true })) {
if (doc._deleted || doc.deleted) continue;
const docPath = stripPrefix(doc.path ?? "");
if (docPath !== vaultPath) continue;
return {
path: docPath,
revision: (doc._rev as string) ?? "",
conflicts: (doc._conflicts as string[]) ?? [],
size: (doc.size as number) ?? 0,
mtime: (doc.mtime as number) ?? 0,
};
}
return null;
},
async hasConflict(vaultPath: string): Promise<boolean> {
const info = await livesyncTest.getInfo(vaultPath);
return (info?.conflicts?.length ?? 0) > 0;
},
async shutdown(): Promise<void> {
if (app) {
await app.shutdown();
app = null;
}
},
};
// Expose on window for Playwright page.evaluate() calls.
(window as any).livesyncTest = livesyncTest;
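As a minimal sketch of how a Playwright test can drive this API (the vault name, file path, and settings object below are illustrative; the real suite uses the typed call() helper shown in the spec file later in this diff):

await page.goto("/test.html");
await page.waitForFunction(() => !!(window as any).livesyncTest);
// init() and putFile() only receive serialisation-safe arguments.
await page.evaluate(
    ([name, settings]) => (window as any).livesyncTest.init(name, settings),
    ["demo_vault", { isConfigured: true }] as [string, Record<string, unknown>]
);
const stored = await page.evaluate((p) => (window as any).livesyncTest.putFile(p, "hello"), "notes/demo.md");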

src/apps/webapp/test.html

@@ -0,0 +1,26 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>LiveSync WebApp E2E Test Page</title>
<style>
body {
font-family: monospace;
padding: 1rem;
}
#status {
margin-top: 1rem;
padding: 0.5rem;
border: 1px solid #ccc;
}
</style>
</head>
<body>
<h2>LiveSync WebApp E2E</h2>
<p>This page is used by Playwright tests only. <code>window.livesyncTest</code> is exposed by the script below.</p>
<!-- status div required by LiveSyncWebApp internal helpers -->
<div id="status">Loading…</div>
<script type="module" src="/test-entry.ts"></script>
</body>
</html>


@@ -0,0 +1,294 @@
/**
* WebApp E2E tests two-vault scenarios.
*
* Each vault (A and B) runs in its own browser context so that JavaScript
* global state (including Trystero's global signalling tables) is fully
* isolated. The two vaults communicate only through the shared remote
* CouchDB database.
*
* Vault storage is OPFS-backed, so no file-picker interaction is needed.
*
* Prerequisites:
* - A reachable CouchDB instance whose connection details are in .test.env
* (read automatically by playwright.config.ts).
*
* How to run:
* cd src/apps/webapp && npm run test:e2e
*/
import { test, expect, type BrowserContext, type Page, type TestInfo } from "@playwright/test";
import type { LiveSyncTestAPI } from "../test-entry";
import { mkdirSync, writeFileSync } from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// ---------------------------------------------------------------------------
// Settings helpers
// ---------------------------------------------------------------------------
function requireEnv(name: string): string {
const v = process.env[name];
if (!v) throw new Error(`Missing required env variable: ${name}`);
return v;
}
async function ensureCouchDbDatabase(uri: string, user: string, pass: string, dbName: string): Promise<void> {
const base = uri.replace(/\/+$/, "");
const dbUrl = `${base}/${encodeURIComponent(dbName)}`;
const auth = Buffer.from(`${user}:${pass}`, "utf-8").toString("base64");
const response = await fetch(dbUrl, {
method: "PUT",
headers: {
Authorization: `Basic ${auth}`,
},
});
// 201: created, 202: accepted, 412: already exists
if (response.status === 201 || response.status === 202 || response.status === 412) {
return;
}
const body = await response.text().catch(() => "");
throw new Error(`Failed to ensure CouchDB database (${response.status}): ${body}`);
}
function buildSettings(dbName: string): Record<string, unknown> {
return {
// Remote database (shared between A and B; this is the replication target)
couchDB_URI: requireEnv("hostname").replace(/\/+$/, ""),
couchDB_USER: process.env["username"] ?? "",
couchDB_PASSWORD: process.env["password"] ?? "",
couchDB_DBNAME: dbName,
// Core behaviour
isConfigured: true,
liveSync: false,
syncOnSave: false,
syncOnStart: false,
periodicReplication: false,
gcDelay: 0,
savingDelay: 0,
notifyThresholdOfRemoteStorageSize: 0,
// Encryption off for test simplicity
encrypt: false,
// Disable plugin/hidden-file sync (not needed in webapp)
usePluginSync: false,
autoSweepPlugins: false,
autoSweepPluginsPeriodic: false,
// Auto-accept peers
P2P_AutoAcceptingPeers: "~.*",
};
}
// ---------------------------------------------------------------------------
// Test-page helpers
// ---------------------------------------------------------------------------
/** Navigate to the test entry page and wait for `window.livesyncTest`. */
async function openTestPage(ctx: BrowserContext): Promise<Page> {
const page = await ctx.newPage();
await page.goto("/test.html");
await page.waitForFunction(() => !!(window as any).livesyncTest, { timeout: 20_000 });
return page;
}
/** Type-safe wrapper that calls `window.livesyncTest.<method>(...args)` in the page. */
async function call<M extends keyof LiveSyncTestAPI>(
page: Page,
method: M,
...args: Parameters<LiveSyncTestAPI[M]>
): Promise<Awaited<ReturnType<LiveSyncTestAPI[M]>>> {
const invoke = () =>
page.evaluate(([m, a]) => (window as any).livesyncTest[m](...a), [method, args] as [
string,
unknown[],
]) as Promise<Awaited<ReturnType<LiveSyncTestAPI[M]>>>;
try {
return await invoke();
} catch (ex: any) {
const message = String(ex?.message ?? ex);
// Some startup flows may trigger one page reload; recover once.
if (
message.includes("Execution context was destroyed") ||
message.includes("Most likely the page has been closed")
) {
await page.waitForFunction(() => !!(window as any).livesyncTest, { timeout: 20_000 });
return await invoke();
}
throw ex;
}
}
async function dumpCoverage(page: Page | undefined, label: string, testInfo: TestInfo): Promise<void> {
if (!process.env.PW_COVERAGE || !page || page.isClosed()) {
return;
}
const cov = await page
.evaluate(() => {
const data = (window as any).__coverage__;
if (!data) return null;
// Reset between tests to avoid runaway accumulation.
(window as any).__coverage__ = {};
return data;
})
.catch(() => null!);
if (!cov) return;
if (typeof cov === "object" && Object.keys(cov as Record<string, unknown>).length === 0) {
return;
}
const outDir = path.resolve(__dirname, "../.nyc_output");
mkdirSync(outDir, { recursive: true });
const name = `${testInfo.testId.replace(/[^a-zA-Z0-9_-]/g, "_")}-${label}.json`;
writeFileSync(path.join(outDir, name), JSON.stringify(cov), "utf-8");
}
// ---------------------------------------------------------------------------
// Two-vault E2E suite
// ---------------------------------------------------------------------------
test.describe("WebApp two-vault E2E", () => {
let ctxA: BrowserContext;
let ctxB: BrowserContext;
let pageA: Page;
let pageB: Page;
const DB_SUFFIX = `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
const dbName = `${requireEnv("dbname")}-${DB_SUFFIX}`;
const settings = buildSettings(dbName);
test.beforeAll(async ({ browser }) => {
await ensureCouchDbDatabase(
String(settings.couchDB_URI ?? ""),
String(settings.couchDB_USER ?? ""),
String(settings.couchDB_PASSWORD ?? ""),
dbName
);
// Open Vault A and Vault B in completely separate browser contexts.
// Each context has its own JS runtime, IndexedDB and OPFS root, so
// Trystero global state and PouchDB instance names cannot collide.
ctxA = await browser.newContext();
ctxB = await browser.newContext();
pageA = await openTestPage(ctxA);
pageB = await openTestPage(ctxB);
await call(pageA, "init", "testvault_a", settings as any);
await call(pageB, "init", "testvault_b", settings as any);
});
test.afterAll(async () => {
await call(pageA, "shutdown").catch(() => {});
await call(pageB, "shutdown").catch(() => {});
await ctxA.close();
await ctxB.close();
});
test.afterEach(async ({}, testInfo) => {
await dumpCoverage(pageA, "vaultA", testInfo);
await dumpCoverage(pageB, "vaultB", testInfo);
});
// -----------------------------------------------------------------------
// Case 1: Vault A writes a file and can read its metadata back from the
// local database (no replication yet).
// -----------------------------------------------------------------------
test("Case 1: A writes a file and can get its info", async () => {
const FILE = "e2e/case1-a-only.md";
const CONTENT = "hello from vault A";
const ok = await call(pageA, "putFile", FILE, CONTENT);
expect(ok).toBe(true);
const info = await call(pageA, "getInfo", FILE);
expect(info).not.toBeNull();
expect(info!.path).toBe(FILE);
expect(info!.revision).toBeTruthy();
expect(info!.conflicts).toHaveLength(0);
});
// -----------------------------------------------------------------------
// Case 2: Vault A writes a file, both vaults replicate, and Vault B ends
// up with the file in its local database.
// -----------------------------------------------------------------------
test("Case 2: A writes a file, both replicate, B receives the file", async () => {
const FILE = "e2e/case2-sync.md";
const CONTENT = "content from A should appear in B";
await call(pageA, "putFile", FILE, CONTENT);
// A pushes to remote, B pulls from remote.
await call(pageA, "replicate");
await call(pageB, "replicate");
const infoB = await call(pageB, "getInfo", FILE);
expect(infoB).not.toBeNull();
expect(infoB!.path).toBe(FILE);
});
// -----------------------------------------------------------------------
// Case 3: Vault A deletes the file it synced in case 2. After both
// vaults replicate, Vault B no longer sees the file.
// -----------------------------------------------------------------------
test("Case 3: A deletes the file, both replicate, B no longer sees it", async () => {
// This test depends on Case 2 having put e2e/case2-sync.md into both vaults.
const FILE = "e2e/case2-sync.md";
await call(pageA, "deleteFile", FILE);
await call(pageA, "replicate");
await call(pageB, "replicate");
const infoB = await call(pageB, "getInfo", FILE);
// The file should be gone (null means not found or deleted).
expect(infoB).toBeNull();
});
// -----------------------------------------------------------------------
// Case 4: A and B each independently edit the same file that was already
// synced. After both vaults replicate their diverging edits, at least
// one of them reports a conflict on that file.
// -----------------------------------------------------------------------
test("Case 4: concurrent edits from A and B produce a conflict on both sides", async () => {
const FILE = "e2e/case4-conflict.md";
// 1) Write a baseline and synchronise so both vaults start from the
// same revision.
await call(pageA, "putFile", FILE, "base content");
await call(pageA, "replicate");
await call(pageB, "replicate");
// Confirm B has the base file with no conflicts yet.
const baseInfoB = await call(pageB, "getInfo", FILE);
expect(baseInfoB).not.toBeNull();
expect(baseInfoB!.conflicts).toHaveLength(0);
// 2) Both vaults write diverging content without syncing in between;
// this creates two competing revisions.
await call(pageA, "putFile", FILE, "content from A (conflict side)");
await call(pageB, "putFile", FILE, "content from B (conflict side)");
// 3) Run replication on both sides. The order mirrors the pattern
// from the CLI two-vault tests (A → remote → B → remote → A).
await call(pageA, "replicate");
await call(pageB, "replicate");
await call(pageA, "replicate"); // re-check from A to pick up B's revision
// 4) At least one side must report a conflict.
const hasConflictA = await call(pageA, "hasConflict", FILE);
const hasConflictB = await call(pageB, "hasConflict", FILE);
expect(
hasConflictA || hasConflictB,
"Expected a conflict to appear on vault A or vault B after diverging edits"
).toBe(true);
});
});


@@ -0,0 +1,57 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync WebPeer — Docker image
# Browser-based P2P peer daemon served by nginx.
#
# Build (from the repository root):
# docker build -f src/apps/webpeer/Dockerfile -t livesync-webpeer .
#
# Run:
# docker run --rm -p 8081:80 livesync-webpeer
# Then open http://localhost:8081/ in any modern browser.
#
# What is WebPeer?
# WebPeer acts as a pseudo P2P peer that runs entirely in the browser.
# It can replace a CouchDB remote server by replying to sync requests from
# other Self-hosted LiveSync instances over the WebRTC P2P channel.
#
# P2P (WebRTC) networking notes
# ─────────────────────────────
# WebRTC connections are initiated by the *browser* visiting this page, not by
# the nginx container itself. Therefore the Docker network mode of this
# container has NO effect on WebRTC connectivity.
# Simply publish port 80 (as above) and the browser handles all ICE/STUN/TURN
# negotiation on its own.
#
# If the browser is running inside another container or a restricted network,
# configuring a TURN server in the WebPeer settings is recommended.
# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to install dependencies and build the Vite bundle.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder
WORKDIR /build
# Install workspace dependencies (all apps share the root package.json)
COPY package.json ./
RUN npm install
# Copy the full source tree and build the WebPeer bundle
COPY . .
RUN cd src/apps/webpeer && npm run build
# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime
# Minimal nginx image that serves the static build output.
# ─────────────────────────────────────────────────────────────────────────────
FROM nginx:stable-alpine
# Remove the default nginx welcome page
RUN rm -rf /usr/share/nginx/html/*
# Copy the built static assets
COPY --from=builder /build/src/apps/webpeer/dist /usr/share/nginx/html
EXPOSE 80


@@ -6,6 +6,8 @@
"scripts": { "scripts": {
"dev": "vite", "dev": "vite",
"build": "vite build", "build": "vite build",
"build:docker": "docker build -f Dockerfile -t livesync-webpeer ../../..",
"run:docker": "docker run -p 8001:80 livesync-webpeer",
"preview": "vite preview", "preview": "vite preview",
"check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json" "check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json"
}, },


@@ -276,7 +276,7 @@ export class P2PReplicatorShim implements P2PReplicatorBase {
}
}
}
-await this.services.setting.applyPartial(remoteConfig, true);
+await this.services.setting.applyExternalSettings(remoteConfig, true);
if (yn !== DROP) {
await this.plugin.core.services.appLifecycle.scheduleRestart();
}


@@ -138,7 +138,7 @@ export const _requestToCouchDBFetch = async (
authorization: authHeader,
"content-type": "application/json",
};
-const uri = `${baseUri}/${path}`;
+const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
const requestParam = {
url: uri,
method: method || (body ? "PUT" : "GET"),
@@ -162,7 +162,7 @@ export const _requestToCouchDB = async (
const authHeaderGen = new AuthorizationHeaderGenerator();
const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
-const uri = `${baseUri}/${path}`;
+const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
const requestParam: RequestUrlParam = {
url: uri,
method: method || (body ? "PUT" : "GET"),
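To illustrate the change (the host and database below are made up, not from the codebase): with a trailing slash on the base URI, plain concatenation yields a double slash, which issue #859 reports CouchDB rejecting with a 401; stripping trailing slashes first restores the expected path.

const baseUri = "https://couch.example.com/"; // note the trailing slash
const path = "obsidiannotes";
const naive = `${baseUri}/${path}`;                     // "https://couch.example.com//obsidiannotes"
const fixed = `${baseUri.replace(/\/+$/, "")}/${path}`; // "https://couch.example.com/obsidiannotes"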


@@ -30,7 +30,8 @@
type JSONData = Record<string | number | symbol, any> | [any];
const docsArray = $derived.by(() => {
-if (docs && docs.length >= 1) {
+// The merge pane compares two revisions, so guard against incomplete input before reading docs[1].
+if (docs && docs.length >= 2) {
if (keepOrder || docs[0].mtime < docs[1].mtime) {
return { a: docs[0], b: docs[1] } as const;
} else {


@@ -636,10 +636,24 @@ Offline Changed files: ${processFiles.length}`;
// --> Conflict processing
+// Keep one in-flight conflict check per path so repeated sync events do not close the active merge dialogue.
+pendingConflictChecks = new Set<FilePathWithPrefix>();
queueConflictCheck(path: FilePathWithPrefix) {
+if (this.pendingConflictChecks.has(path)) return;
+this.pendingConflictChecks.add(path);
this.conflictResolutionProcessor.enqueue(path);
}
+finishConflictCheck(path: FilePathWithPrefix) {
+this.pendingConflictChecks.delete(path);
+}
+requeueConflictCheck(path: FilePathWithPrefix) {
+this.finishConflictCheck(path);
+this.queueConflictCheck(path);
+}
async resolveConflictOnInternalFiles() {
// Scan all conflicted internal files
const conflicted = this.localDatabase.findEntries(ICHeader, ICHeaderEnd, { conflicts: true });
@@ -648,7 +662,7 @@ Offline Changed files: ${processFiles.length}`;
for await (const doc of conflicted) {
if (!("_conflicts" in doc)) continue;
if (isInternalMetadata(doc._id)) {
-this.conflictResolutionProcessor.enqueue(doc.path);
+this.queueConflictCheck(doc.path);
}
}
} catch (ex) {
@@ -679,21 +693,27 @@ Offline Changed files: ${processFiles.length}`;
const cc = await this.localDatabase.getRaw(id, { conflicts: true });
if (cc._conflicts?.length === 0) {
await this.extractInternalFileFromDatabase(stripAllPrefixes(path));
+this.finishConflictCheck(path);
} else {
-this.conflictResolutionProcessor.enqueue(path);
+this.requeueConflictCheck(path);
}
// check the file again
}
conflictResolutionProcessor = new QueueProcessor(
async (paths: FilePathWithPrefix[]) => {
const path = paths[0];
-sendSignal(`cancel-internal-conflict:${path}`);
try {
// Retrieve data
const id = await this.path2id(path, ICHeader);
const doc = await this.localDatabase.getRaw<MetaEntry>(id, { conflicts: true });
-if (doc._conflicts === undefined) return [];
-if (doc._conflicts.length == 0) return [];
+if (doc._conflicts === undefined) {
+this.finishConflictCheck(path);
+return [];
+}
+if (doc._conflicts.length == 0) {
+this.finishConflictCheck(path);
+return [];
+}
this._log(`Hidden file conflicted:${path}`);
const conflicts = doc._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
const revA = doc._rev;
@@ -725,7 +745,7 @@ Offline Changed files: ${processFiles.length}`;
await this.storeInternalFileToDatabase({ path: filename, ...stat });
await this.extractInternalFileFromDatabase(filename);
await this.localDatabase.removeRevision(id, revB);
-this.conflictResolutionProcessor.enqueue(path);
+this.requeueConflictCheck(path);
return [];
} else {
this._log(`Object merge is not applicable.`, LOG_LEVEL_VERBOSE);
@@ -743,6 +763,7 @@ Offline Changed files: ${processFiles.length}`;
await this.resolveByNewerEntry(id, path, doc, revA, revB);
return [];
} catch (ex) {
+this.finishConflictCheck(path);
this._log(`Failed to resolve conflict (Hidden): ${path}`);
this._log(ex, LOG_LEVEL_VERBOSE);
return [];
@@ -761,15 +782,22 @@ Offline Changed files: ${processFiles.length}`;
const prefixedPath = addPrefix(path, ICHeader);
const docAMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revA });
const docBMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revB });
-if (docAMerge != false && docBMerge != false) {
-if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
-// Again for other conflicted revisions.
-this.conflictResolutionProcessor.enqueue(path);
-}
-return;
-} else {
-// If either revision could not read, force resolving by the newer one.
-await this.resolveByNewerEntry(id, path, doc, revA, revB);
-}
+try {
+if (docAMerge != false && docBMerge != false) {
+if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
+// Again for other conflicted revisions.
+this.requeueConflictCheck(path);
+} else {
+this.finishConflictCheck(path);
+}
+return;
+} else {
+// If either revision could not read, force resolving by the newer one.
+await this.resolveByNewerEntry(id, path, doc, revA, revB);
+}
+} catch (ex) {
+this.finishConflictCheck(path);
+throw ex;
+}
},
{
@@ -793,6 +821,8 @@ Offline Changed files: ${processFiles.length}`;
const storeFilePath = strippedPath;
const displayFilename = `${storeFilePath}`;
// const path = this.prefixedConfigDir2configDir(stripAllPrefixes(docA.path)) || docA.path;
+// Cancel only when replacing an existing dialogue for the same path, not on every queue pass.
+sendSignal(`cancel-internal-conflict:${docA.path}`);
const modal = new JsonResolveModal(this.app, storageFilePath, [docA, docB], async (keep, result) => {
// modal.close();
try {
@@ -1164,7 +1194,7 @@ Offline Changed files: ${files.length}`;
// Check if the file is conflicted, and if so, enqueue to resolve.
// Until the conflict is resolved, the file will not be processed.
if (docMeta._conflicts && docMeta._conflicts.length > 0) {
-this.conflictResolutionProcessor.enqueue(path);
+this.queueConflictCheck(path);
this._log(`${headerLine} Hidden file conflicted, enqueued to resolve`);
return true;
}
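The guard introduced above boils down to a small "at most one in-flight check per path" pattern. A self-contained sketch with hypothetical names (pending, queueCheck, finishCheck) would look like this:

const pending = new Set<string>();
function queueCheck(path: string, enqueue: (p: string) => void): void {
    if (pending.has(path)) return; // a check for this path is already in flight; ignore the repeat
    pending.add(path);
    enqueue(path);
}
function finishCheck(path: string): void {
    pending.delete(path); // the next event for this path may queue a fresh check
}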


@@ -781,7 +781,8 @@ Success: ${successCount}, Errored: ${errored}`;
const credential = generateCredentialObject(this.settings);
const request = async (path: string, method: string = "GET", body: any = undefined) => {
const req = await _requestToCouchDB(
-this.settings.couchDB_URI + (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
+this.settings.couchDB_URI.replace(/\/+$/, "") +
+(this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
credential,
window.origin,
path,


@@ -87,7 +87,7 @@ And you can also drop the local database to rebuild from the remote device.`,
// this.plugin.settings = remoteConfig;
// await this.plugin.saveSettings();
-await this.core.services.setting.applyPartial(remoteConfig);
+await this.core.services.setting.applyExternalSettings(remoteConfig);
if (yn === DROP) {
await this.core.rebuilder.scheduleFetch();
} else {

Submodule src/lib updated: 202038d19e...91b5981219


@@ -33,6 +33,7 @@ import { SetupManager } from "./modules/features/SetupManager.ts";
import { ModuleMigration } from "./modules/essential/ModuleMigration.ts";
import { enableI18nFeature } from "./serviceFeatures/onLayoutReady/enablei18n.ts";
import { useOfflineScanner } from "@lib/serviceFeatures/offlineScanner.ts";
+import { useRemoteConfiguration } from "@lib/serviceFeatures/remoteConfig.ts";
import { useCheckRemoteSize } from "@lib/serviceFeatures/checkRemoteSize.ts";
import { useRedFlagFeatures } from "./serviceFeatures/redFlag.ts";
import { useSetupProtocolFeature } from "./serviceFeatures/setupObsidian/setupProtocol.ts";
@@ -174,6 +175,9 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const curriedFeature = () => featuresInitialiser(core);
core.services.appLifecycle.onLayoutReady.addHandler(curriedFeature);
const setupManager = core.getModule(SetupManager);
+useRemoteConfiguration(core);
useSetupProtocolFeature(core, setupManager);
useSetupQRCodeFeature(core);
useSetupURIFeature(core);


@@ -277,27 +277,36 @@ export class ModuleLog extends AbstractObsidianModule {
}
async updateMessageArea() {
-if (this.messageArea) {
-const messageLines = [];
-const fileStatus = this.activeFileStatus.value;
-if (fileStatus && !this.settings.hideFileWarningNotice) messageLines.push(fileStatus);
-const messages = (await this.services.appLifecycle.getUnresolvedMessages()).flat().filter((e) => e);
-const stringMessages = messages.filter((m): m is string => typeof m === "string"); // for 'startsWith'
-const networkMessages = stringMessages.filter((m) => m.startsWith(MARK_LOG_NETWORK_ERROR));
-const otherMessages = stringMessages.filter((m) => !m.startsWith(MARK_LOG_NETWORK_ERROR));
-messageLines.push(...otherMessages);
-if (
-this.settings.networkWarningStyle !== NetworkWarningStyles.ICON &&
-this.settings.networkWarningStyle !== NetworkWarningStyles.HIDDEN
-) {
-messageLines.push(...networkMessages);
-} else if (this.settings.networkWarningStyle === NetworkWarningStyles.ICON) {
-if (networkMessages.length > 0) messageLines.push("🔗❌");
-}
-this.messageArea.innerText = messageLines.map((e) => `⚠️ ${e}`).join("\n");
-}
+if (!this.messageArea) return;
+const showStatusOnEditor = this.settings?.showStatusOnEditor ?? false;
+if (this.statusDiv) {
+this.statusDiv.style.display = showStatusOnEditor ? "" : "none";
+}
+if (!showStatusOnEditor) {
+this.messageArea.innerText = "";
+return;
+}
+const messageLines = [];
+const fileStatus = this.activeFileStatus.value;
+if (fileStatus && !this.settings.hideFileWarningNotice) messageLines.push(fileStatus);
+const messages = (await this.services.appLifecycle.getUnresolvedMessages()).flat().filter((e) => e);
+const stringMessages = messages.filter((m): m is string => typeof m === "string"); // for 'startsWith'
+const networkMessages = stringMessages.filter((m) => m.startsWith(MARK_LOG_NETWORK_ERROR));
+const otherMessages = stringMessages.filter((m) => !m.startsWith(MARK_LOG_NETWORK_ERROR));
+messageLines.push(...otherMessages);
+if (
+this.settings.networkWarningStyle !== NetworkWarningStyles.ICON &&
+this.settings.networkWarningStyle !== NetworkWarningStyles.HIDDEN
+) {
+messageLines.push(...networkMessages);
+} else if (this.settings.networkWarningStyle === NetworkWarningStyles.ICON) {
+if (networkMessages.length > 0) messageLines.push("🔗❌");
+}
+this.messageArea.innerText = messageLines.map((e) => `⚠️ ${e}`).join("\n");
}
onActiveLeafChange() {
@@ -326,6 +335,9 @@
}
this.statusBar?.setText(newMsg.split("\n")[0]);
+if (this.statusDiv) {
+this.statusDiv.style.display = this.settings?.showStatusOnEditor ? "" : "none";
+}
if (this.settings?.showStatusOnEditor && this.statusDiv) {
if (this.settings.showLongerLogInsideEditor) {
const now = new Date().getTime();
@@ -402,6 +414,7 @@
this.messageArea = this.statusDiv.createDiv({ cls: "livesync-status-messagearea" });
this.logMessage = this.statusDiv.createDiv({ cls: "livesync-status-logmessage" });
this.logHistory = this.statusDiv.createDiv({ cls: "livesync-status-loghistory" });
+this.statusDiv.style.display = this.settings?.showStatusOnEditor ? "" : "none";
eventHub.onEvent(EVENT_LAYOUT_READY, () => this.adjustStatusDivPosition());
if (this.settings?.showStatusOnStatusbar) {
this.statusBar = this.services.API.addStatusBarItem();


@@ -162,8 +162,8 @@ export class ModuleObsidianSettingsAsMarkdown extends AbstractModule {
result == APPLY_AND_REBUILD ||
result == APPLY_AND_FETCH
) {
-this.core.settings = settingToApply;
-await this.services.setting.saveSettingData();
+await this.services.setting.applyExternalSettings(settingToApply, true);
+this.services.setting.clearUsedPassphrase();
if (result == APPLY_ONLY) {
this._log("Loaded settings have been applied!", LOG_LEVEL_NOTICE);
return;


@@ -21,6 +21,9 @@ export function paneGeneral(
});
this.addOnSaved("displayLanguage", () => this.display());
new Setting(paneEl).autoWireToggle("showStatusOnEditor");
+this.addOnSaved("showStatusOnEditor", () => {
+eventHub.emitEvent(EVENT_ON_UNRESOLVED_ERROR);
+});
new Setting(paneEl).autoWireToggle("showOnlyIconsOnEditor", {
onUpdate: visibleOnly(() => this.isConfiguredAs("showStatusOnEditor", true)),
});


@@ -137,6 +137,23 @@ export function paneHatch(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement,
pluginConfig.accessKey = REDACTED;
pluginConfig.secretKey = REDACTED;
const redact = (source: string) => `${REDACTED}(${source.length} letters)`;
+const toSchemeOnly = (uri: string) => {
+try {
+return `${new URL(uri).protocol}//`;
+} catch {
+const matched = uri.match(/^[A-Za-z][A-Za-z0-9+.-]*:\/\//);
+return matched?.[0] ?? REDACTED;
+}
+};
+pluginConfig.remoteConfigurations = Object.fromEntries(
+Object.entries(pluginConfig.remoteConfigurations || {}).map(([id, config]) => [
+id,
+{
+...config,
+uri: toSchemeOnly(config.uri),
+},
+])
+);
pluginConfig.region = redact(pluginConfig.region);
pluginConfig.bucket = redact(pluginConfig.bucket);
pluginConfig.pluginSyncExtendedSetting = {};
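For reference, hedged examples of what toSchemeOnly() produces (the inputs are invented): a parseable connection string keeps only its scheme, and anything unparseable falls back to the regex match or to REDACTED.

// toSchemeOnly("sls+https://user:pass@couch.example.com/notes") -> "sls+https://"
// toSchemeOnly("https://couch.example.com")                     -> "https://"
// toSchemeOnly("not a connection string")                       -> REDACTED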


@@ -2,8 +2,11 @@ import {
REMOTE_COUCHDB,
REMOTE_MINIO,
REMOTE_P2P,
+DEFAULT_SETTINGS,
+LOG_LEVEL_NOTICE,
type ObsidianLiveSyncSettings,
} from "../../../lib/src/common/types.ts";
+import { Menu } from "@/deps.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
@@ -21,6 +24,15 @@ import {
import { SETTING_KEY_P2P_DEVICE_NAME } from "../../../lib/src/common/types.ts";
import { SetupManager, UserMode } from "../SetupManager.ts";
import { OnDialogSettingsDefault, type AllSettings } from "./settingConstants.ts";
+import { activateRemoteConfiguration } from "../../../lib/src/serviceFeatures/remoteConfig.ts";
+import { ConnectionStringParser } from "../../../lib/src/common/ConnectionString.ts";
+import type { RemoteConfigurationResult } from "../../../lib/src/common/ConnectionString.ts";
+import type { RemoteConfiguration } from "../../../lib/src/common/models/setting.type.ts";
+import SetupRemote from "../SetupWizard/dialogs/SetupRemote.svelte";
+import SetupRemoteCouchDB from "../SetupWizard/dialogs/SetupRemoteCouchDB.svelte";
+import SetupRemoteBucket from "../SetupWizard/dialogs/SetupRemoteBucket.svelte";
+import SetupRemoteP2P from "../SetupWizard/dialogs/SetupRemoteP2P.svelte";
+import { syncActivatedRemoteSettings } from "./remoteConfigBuffer.ts";
function getSettingsFromEditingSettings(editingSettings: AllSettings): ObsidianLiveSyncSettings {
const workObj = { ...editingSettings } as ObsidianLiveSyncSettings;
@@ -39,17 +51,54 @@ const toggleActiveSyncClass = (el: HTMLElement, isActive: () => boolean) => {
return {};
};
function createRemoteConfigurationId(): string {
return `remote-${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 8)}`;
}
function cloneRemoteConfigurations(
configs: Record<string, RemoteConfiguration> | undefined
): Record<string, RemoteConfiguration> {
return Object.fromEntries(Object.entries(configs || {}).map(([id, config]) => [id, { ...config }]));
}
function serializeRemoteConfiguration(settings: ObsidianLiveSyncSettings): string {
if (settings.remoteType === REMOTE_MINIO) {
return ConnectionStringParser.serialize({ type: "s3", settings });
}
if (settings.remoteType === REMOTE_P2P) {
return ConnectionStringParser.serialize({ type: "p2p", settings });
}
return ConnectionStringParser.serialize({ type: "couchdb", settings });
}
function setEmojiButton(button: any, emoji: string, tooltip: string) {
button.setButtonText(emoji);
button.setTooltip(tooltip, { delay: 10, placement: "top" });
// button.buttonEl.addClass("clickable-icon");
button.buttonEl.addClass("mod-muted");
return button;
}
function suggestRemoteConfigurationName(parsed: RemoteConfigurationResult): string {
if (parsed.type === "couchdb") {
try {
const url = new URL(parsed.settings.couchDB_URI);
return `CouchDB ${url.host}`;
} catch {
return "Imported CouchDB";
}
}
if (parsed.type === "s3") {
return `S3 ${parsed.settings.bucket || parsed.settings.endpoint}`;
}
return `P2P ${parsed.settings.P2P_roomID || "Remote"}`;
}
export function paneRemoteConfig(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel, addPane }: PageFunctions
): void {
-const remoteNameMap = {
-[REMOTE_COUCHDB]: $msg("obsidianLiveSyncSettingTab.optionCouchDB"),
-[REMOTE_MINIO]: $msg("obsidianLiveSyncSettingTab.optionMinioS3R2"),
-[REMOTE_P2P]: "Only Peer-to-Peer",
-} as const;
{
/* E2EE */
const E2EEInitialProps = {
@@ -91,24 +140,381 @@ export function paneRemoteConfig(
});
}
{
+// TODO: very WIP. need to refactor the UI.
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleRemoteServer"), () => {}).then((paneEl) => {
-const setting = new Setting(paneEl).setName($msg("Active Remote Configuration"));
-const el = setting.controlEl.createDiv({});
-el.setText(`${remoteNameMap[this.editingSettings.remoteType] || " - "}`);
-setting.addButton((button) =>
-button
-.setButtonText("Change Remote and Setup")
-.setCta()
-.onClick(async () => {
-const setupManager = this.core.getModule(SetupManager);
-const originalSettings = getSettingsFromEditingSettings(this.editingSettings);
-await setupManager.onSelectServer(originalSettings, UserMode.Update);
-})
+const actions = new Setting(paneEl).setName("Remote Databases");
+// actions.addButton((button) =>
+// button
+// .setButtonText("Change Remote and Setup")
+// .setCta()
+// .onClick(async () => {
+// const setupManager = this.core.getModule(SetupManager);
+// const originalSettings = getSettingsFromEditingSettings(this.editingSettings);
+// await setupManager.onSelectServer(originalSettings, UserMode.Update);
+// })
+// );
+// Connection List
+const listContainer = paneEl.createDiv({ cls: "sls-remote-list" });
+const syncRemoteConfigurationBuffers = () => {
+const currentConfigs = cloneRemoteConfigurations(this.core.settings.remoteConfigurations);
+this.editingSettings.remoteConfigurations = currentConfigs;
+this.editingSettings.activeConfigurationId = this.core.settings.activeConfigurationId;
+if (this.initialSettings) {
+this.initialSettings.remoteConfigurations = cloneRemoteConfigurations(currentConfigs);
+this.initialSettings.activeConfigurationId = this.core.settings.activeConfigurationId;
+}
+};
const persistRemoteConfigurations = async (synchroniseActiveRemote: boolean = false) => {
await this.services.setting.updateSettings((currentSettings) => {
currentSettings.remoteConfigurations = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
currentSettings.activeConfigurationId = this.editingSettings.activeConfigurationId;
if (synchroniseActiveRemote && currentSettings.activeConfigurationId) {
const activated = activateRemoteConfiguration(
currentSettings,
currentSettings.activeConfigurationId
);
if (activated) {
return activated;
}
}
return currentSettings;
}, true);
if (synchroniseActiveRemote) {
// Keep both buffers aligned with the newly activated remote before saving any remaining dirty keys.
syncActivatedRemoteSettings(this.editingSettings, this.core.settings);
if (this.initialSettings) {
syncActivatedRemoteSettings(this.initialSettings, this.core.settings);
}
await this.saveAllDirtySettings();
}
syncRemoteConfigurationBuffers();
this.requestUpdate();
};
const runRemoteSetup = async (
baseSettings: ObsidianLiveSyncSettings,
remoteType?: typeof REMOTE_COUCHDB | typeof REMOTE_MINIO | typeof REMOTE_P2P
): Promise<ObsidianLiveSyncSettings | false> => {
const setupManager = this.core.getModule(SetupManager);
const dialogManager = setupManager.dialogManager;
let targetRemoteType = remoteType;
if (targetRemoteType === undefined) {
const method = await dialogManager.openWithExplicitCancel(SetupRemote);
if (method === "cancelled") {
return false;
}
targetRemoteType =
method === "bucket" ? REMOTE_MINIO : method === "p2p" ? REMOTE_P2P : REMOTE_COUCHDB;
}
if (targetRemoteType === REMOTE_MINIO) {
const bucketConf = await dialogManager.openWithExplicitCancel(SetupRemoteBucket, baseSettings);
if (bucketConf === "cancelled" || typeof bucketConf !== "object") {
return false;
}
return { ...baseSettings, ...bucketConf, remoteType: REMOTE_MINIO };
}
if (targetRemoteType === REMOTE_P2P) {
const p2pConf = await dialogManager.openWithExplicitCancel(SetupRemoteP2P, baseSettings);
if (p2pConf === "cancelled" || typeof p2pConf !== "object") {
return false;
}
return { ...baseSettings, ...p2pConf, remoteType: REMOTE_P2P };
}
const couchConf = await dialogManager.openWithExplicitCancel(SetupRemoteCouchDB, baseSettings);
if (couchConf === "cancelled" || typeof couchConf !== "object") {
return false;
}
return { ...baseSettings, ...couchConf, remoteType: REMOTE_COUCHDB };
};
const createBaseRemoteSettings = (): ObsidianLiveSyncSettings => ({
...DEFAULT_SETTINGS,
...getSettingsFromEditingSettings(this.editingSettings),
});
const createNewRemoteSettings = (): ObsidianLiveSyncSettings => ({
...DEFAULT_SETTINGS,
encrypt: this.editingSettings.encrypt,
usePathObfuscation: this.editingSettings.usePathObfuscation,
passphrase: this.editingSettings.passphrase,
configPassphraseStore: this.editingSettings.configPassphraseStore,
});
const addRemoteConfiguration = async () => {
const name = await this.services.UI.confirm.askString("Remote name", "Display name", "New Remote");
if (name === false) {
return;
}
const nextSettings = await runRemoteSetup(createNewRemoteSettings());
if (!nextSettings) {
return;
}
const id = createRemoteConfigurationId();
const configs = cloneRemoteConfigurations(this.editingSettings.remoteConfigurations);
configs[id] = {
id,
name: name.trim() || "New Remote",
uri: serializeRemoteConfiguration(nextSettings),
isEncrypted: false,
};
this.editingSettings.remoteConfigurations = configs;
if (!this.editingSettings.activeConfigurationId) {
this.editingSettings.activeConfigurationId = id;
}
await persistRemoteConfigurations(this.editingSettings.activeConfigurationId === id);
refreshList();
};
const importRemoteConfiguration = async () => {
const importedURI = await this.services.UI.confirm.askString(
"Import connection",
"Paste a connection string",
""
);
if (importedURI === false) {
return;
}
const trimmedURI = importedURI.trim();
if (trimmedURI === "") {
return;
}
let parsed: RemoteConfigurationResult;
try {
parsed = ConnectionStringParser.parse(trimmedURI);
} catch (ex) {
this.services.API.addLog(`Failed to import remote configuration: ${ex}`, LOG_LEVEL_NOTICE);
return;
}
const defaultName = suggestRemoteConfigurationName(parsed);
const name = await this.services.UI.confirm.askString("Remote name", "Display name", defaultName);
if (name === false) {
return;
}
const id = createRemoteConfigurationId();
const configs = cloneRemoteConfigurations(this.editingSettings.remoteConfigurations);
configs[id] = {
id,
name: name.trim() || defaultName,
uri: ConnectionStringParser.serialize(parsed),
isEncrypted: false,
};
this.editingSettings.remoteConfigurations = configs;
if (!this.editingSettings.activeConfigurationId) {
this.editingSettings.activeConfigurationId = id;
}
await persistRemoteConfigurations(this.editingSettings.activeConfigurationId === id);
refreshList();
};
actions.addButton((button) =>
setEmojiButton(button, "", "Add new connection").onClick(async () => {
await addRemoteConfiguration();
})
);
actions.addButton((button) =>
setEmojiButton(button, "📥", "Import connection").onClick(async () => {
await importRemoteConfiguration();
})
);
const refreshList = () => {
listContainer.empty();
const configs = this.editingSettings.remoteConfigurations || {};
for (const config of Object.values(configs)) {
const row = new Setting(listContainer)
.setName(config.name)
.setDesc(config.uri.split("@").pop() || ""); // Show host part for privacy
if (config.id === this.editingSettings.activeConfigurationId) {
row.nameEl.addClass("sls-active-remote-name");
row.nameEl.appendText(" (Active)");
}
row.addButton((btn) =>
setEmojiButton(btn, "🔧", "Configure").onClick(async () => {
let parsed: RemoteConfigurationResult;
try {
parsed = ConnectionStringParser.parse(config.uri);
} catch (ex) {
this.services.API.addLog(
`Failed to parse remote configuration '${config.id}' for editing: ${ex}`,
LOG_LEVEL_NOTICE
);
return;
}
const workSettings = createBaseRemoteSettings();
if (parsed.type === "couchdb") {
workSettings.remoteType = REMOTE_COUCHDB;
} else if (parsed.type === "s3") {
workSettings.remoteType = REMOTE_MINIO;
} else {
workSettings.remoteType = REMOTE_P2P;
}
Object.assign(workSettings, parsed.settings);
const nextSettings = await runRemoteSetup(workSettings, workSettings.remoteType);
if (!nextSettings) {
return;
}
const nextConfigs = cloneRemoteConfigurations(this.editingSettings.remoteConfigurations);
nextConfigs[config.id] = {
...config,
uri: serializeRemoteConfiguration(nextSettings),
isEncrypted: false,
};
this.editingSettings.remoteConfigurations = nextConfigs;
await persistRemoteConfigurations(config.id === this.editingSettings.activeConfigurationId);
refreshList();
})
);
row.addButton((btn) =>
btn
.setButtonText("✅")
.setTooltip("Activate", { delay: 10, placement: "top" })
.setDisabled(config.id === this.editingSettings.activeConfigurationId)
.onClick(async () => {
this.editingSettings.activeConfigurationId = config.id;
await persistRemoteConfigurations(true);
refreshList();
})
);
row.addButton((btn) =>
setEmojiButton(btn, "…", "More actions").onClick(() => {
const menu = new Menu()
.addItem((item) => {
item.setTitle("🪪 Rename").onClick(async () => {
const nextName = await this.services.UI.confirm.askString(
"Remote name",
"Display name",
config.name
);
if (nextName === false) {
return;
}
const nextConfigs = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
nextConfigs[config.id] = {
...config,
name: nextName.trim() || config.name,
};
this.editingSettings.remoteConfigurations = nextConfigs;
await persistRemoteConfigurations();
refreshList();
});
})
.addItem((item) => {
item.setTitle("📤 Export").onClick(async () => {
await this.services.UI.promptCopyToClipboard(
`Remote configuration: ${config.name}`,
config.uri
);
});
})
.addItem((item) => {
item.setTitle("🧬 Duplicate").onClick(async () => {
const nextName = await this.services.UI.confirm.askString(
"Duplicate remote",
"Display name",
`${config.name} (Copy)`
);
if (nextName === false) {
return;
}
const nextId = createRemoteConfigurationId();
const nextConfigs = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
nextConfigs[nextId] = {
...config,
id: nextId,
name: nextName.trim() || `${config.name} (Copy)`,
};
this.editingSettings.remoteConfigurations = nextConfigs;
await persistRemoteConfigurations();
refreshList();
});
})
.addSeparator()
.addItem((item) => {
item.setTitle("📡 Fetch remote settings").onClick(async () => {
let parsed: RemoteConfigurationResult;
try {
parsed = ConnectionStringParser.parse(config.uri);
} catch (ex) {
this.services.API.addLog(
`Failed to parse remote configuration '${config.id}': ${ex}`,
LOG_LEVEL_NOTICE
);
return;
}
const workSettings = createBaseRemoteSettings();
if (parsed.type === "couchdb") {
workSettings.remoteType = REMOTE_COUCHDB;
} else if (parsed.type === "s3") {
workSettings.remoteType = REMOTE_MINIO;
} else {
workSettings.remoteType = REMOTE_P2P;
}
Object.assign(workSettings, parsed.settings);
const newTweaks =
await this.services.tweakValue.checkAndAskUseRemoteConfiguration(
workSettings
);
if (newTweaks.result !== false) {
this.editingSettings = { ...this.editingSettings, ...newTweaks.result };
this.requestUpdate();
}
});
})
.addSeparator()
.addItem((item) => {
item.setTitle("🗑 Delete").onClick(async () => {
const confirmed = await this.services.UI.confirm.askYesNoDialog(
`Delete remote configuration '${config.name}'?`,
{ title: "Delete Remote Configuration", defaultOption: "No" }
);
if (confirmed !== "yes") {
return;
}
const nextConfigs = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
delete nextConfigs[config.id];
this.editingSettings.remoteConfigurations = nextConfigs;
let syncActiveRemote = false;
if (this.editingSettings.activeConfigurationId === config.id) {
const nextActiveId = Object.keys(nextConfigs)[0] || "";
this.editingSettings.activeConfigurationId = nextActiveId;
syncActiveRemote = nextActiveId !== "";
}
await persistRemoteConfigurations(syncActiveRemote);
refreshList();
});
});
const rect = btn.buttonEl.getBoundingClientRect();
menu.showAtPosition({ x: rect.left, y: rect.bottom });
})
);
}
};
refreshList();
}); });
}
-{
+// eslint-disable-next-line no-constant-condition
+if (false) {
const initialProps = {
info: getCouchDBConfigSummary(this.editingSettings),
};
@@ -143,7 +549,8 @@ export function paneRemoteConfig(
);
});
}
-{
+// eslint-disable-next-line no-constant-condition
+if (false) {
const initialProps = {
info: getBucketConfigSummary(this.editingSettings),
};
@@ -178,7 +585,8 @@ export function paneRemoteConfig(
);
});
}
-{
+// eslint-disable-next-line no-constant-condition
+if (false) {
const getDevicePeerId = () => this.services.config.getSmallConfig(SETTING_KEY_P2P_DEVICE_NAME) || "";
const initialProps = {
info: getP2PConfigSummary(this.editingSettings, {
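For orientation, a hedged sketch of the record shape that addRemoteConfiguration() stores per remote; only the field names come from the code above, while the id format and connection-string value are illustrative placeholders.

const exampleEntry: RemoteConfiguration = {
    id: "remote-m1abcd-x1y2z3",                 // createRemoteConfigurationId()-style value (illustrative)
    name: "CouchDB couch.example.com",          // display name chosen by the user
    uri: "sls+https://...connection string...", // output of serializeRemoteConfiguration(); placeholder here
    isEncrypted: false,
};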


@@ -0,0 +1,17 @@
import { pickBucketSyncSettings, pickCouchDBSyncSettings, pickP2PSyncSettings } from "@lib/common/utils.ts";
import type { ObsidianLiveSyncSettings } from "@lib/common/types.ts";
// Keep the setting dialogue buffer aligned with the current core settings before persisting other dirty keys.
// This also clears stale dirty values left from editing a different remote type before switching active remotes.
export function syncActivatedRemoteSettings(
target: Partial<ObsidianLiveSyncSettings>,
source: ObsidianLiveSyncSettings
): void {
Object.assign(target, {
remoteType: source.remoteType,
activeConfigurationId: source.activeConfigurationId,
...pickBucketSyncSettings(source),
...pickCouchDBSyncSettings(source),
...pickP2PSyncSettings(source),
});
}


@@ -0,0 +1,83 @@
import { describe, expect, it } from "vitest";
import { DEFAULT_SETTINGS, REMOTE_COUCHDB, REMOTE_MINIO } from "../../../lib/src/common/types";
import { syncActivatedRemoteSettings } from "./remoteConfigBuffer";
describe("syncActivatedRemoteSettings", () => {
it("should copy active MinIO credentials into the editing buffer", () => {
const target = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_COUCHDB,
activeConfigurationId: "old-remote",
accessKey: "",
secretKey: "",
endpoint: "",
bucket: "",
region: "",
encrypt: true,
};
const source = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
bucketPrefix: "folder/",
useCustomRequestHandler: false,
forcePathStyle: true,
bucketCustomHeaders: "",
};
syncActivatedRemoteSettings(target, source);
expect(target.remoteType).toBe(REMOTE_MINIO);
expect(target.activeConfigurationId).toBe("remote-s3");
expect(target.accessKey).toBe("access");
expect(target.secretKey).toBe("secret");
expect(target.endpoint).toBe("https://minio.example.test");
expect(target.bucket).toBe("vault");
expect(target.region).toBe("sz-hq");
expect(target.bucketPrefix).toBe("folder/");
expect(target.encrypt).toBe(true);
});
it("should clear stale dirty values from a different remote type", () => {
const target = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
couchDB_URI: "https://edited.invalid",
couchDB_USER: "edited-user",
couchDB_PASSWORD: "edited-pass",
couchDB_DBNAME: "edited-db",
};
const source = {
...DEFAULT_SETTINGS,
remoteType: REMOTE_MINIO,
activeConfigurationId: "remote-s3",
accessKey: "access",
secretKey: "secret",
endpoint: "https://minio.example.test",
bucket: "vault",
region: "sz-hq",
couchDB_URI: "https://current.example.test",
couchDB_USER: "current-user",
couchDB_PASSWORD: "current-pass",
couchDB_DBNAME: "current-db",
};
syncActivatedRemoteSettings(target, source);
expect(target.couchDB_URI).toBe("https://current.example.test");
expect(target.couchDB_USER).toBe("current-user");
expect(target.couchDB_PASSWORD).toBe("current-pass");
expect(target.couchDB_DBNAME).toBe("current-db");
});
});

View File

@@ -275,6 +275,10 @@ export class SetupManager extends AbstractModule {
activate: boolean = true,
extra: () => void = () => {}
): Promise<boolean> {
newConf = await this.services.setting.adjustSettings({
...this.settings,
...newConf,
});
let userMode = _userMode;
if (userMode === UserMode.Unknown) {
if (isObjectDifferent(this.settings, newConf, true) === false) {
@@ -368,13 +372,8 @@ export class SetupManager extends AbstractModule {
* @returns Promise that resolves to true if settings applied successfully, false otherwise
*/
async applySetting(newConf: ObsidianLiveSyncSettings, userMode: UserMode) {
const newSetting = {
...this.core.settings,
...newConf,
};
this.core.settings = newSetting;
this.services.setting.clearUsedPassphrase();
await this.services.setting.applyExternalSettings(newConf, true);
return true;
}
}

View File

@@ -0,0 +1,157 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import { DEFAULT_SETTINGS, REMOTE_COUCHDB, type ObsidianLiveSyncSettings } from "../../lib/src/common/types";
import { SettingService } from "../../lib/src/services/base/SettingService";
import { ServiceContext } from "../../lib/src/services/base/ServiceBase";
vi.mock("./SetupWizard/dialogs/Intro.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SelectMethodNewUser.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SelectMethodExisting.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/ScanQRCode.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/UseSetupURI.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/OutroNewUser.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/OutroExistingUser.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/OutroAskUserMode.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemote.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteCouchDB.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteBucket.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteP2P.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteE2EE.svelte", () => ({ default: {} }));
vi.mock("../../lib/src/API/processSetting.ts", () => ({
decodeSettingsFromQRCodeData: vi.fn(),
}));
import { decodeSettingsFromQRCodeData } from "../../lib/src/API/processSetting.ts";
import { SetupManager, UserMode } from "./SetupManager";
class TestSettingService extends SettingService<ServiceContext> {
protected setItem(_key: string, _value: string): void {}
protected getItem(_key: string): string {
return "";
}
protected deleteItem(_key: string): void {}
protected saveData(_setting: ObsidianLiveSyncSettings): Promise<void> {
return Promise.resolve();
}
protected loadData(): Promise<ObsidianLiveSyncSettings | undefined> {
return Promise.resolve(undefined);
}
}
function createLegacyRemoteSetting(): ObsidianLiveSyncSettings {
return {
...DEFAULT_SETTINGS,
remoteConfigurations: {},
activeConfigurationId: "",
remoteType: REMOTE_COUCHDB,
couchDB_URI: "http://localhost:5984",
couchDB_USER: "user",
couchDB_PASSWORD: "password",
couchDB_DBNAME: "vault",
};
}
function createSetupManager() {
const setting = new TestSettingService(new ServiceContext(), {
APIService: {
getSystemVaultName: vi.fn(() => "vault"),
getAppID: vi.fn(() => "app"),
confirm: {
askString: vi.fn(() => Promise.resolve("")),
},
addLog: vi.fn(),
addCommand: vi.fn(),
registerWindow: vi.fn(),
addRibbonIcon: vi.fn(),
registerProtocolHandler: vi.fn(),
} as any,
});
setting.settings = {
...DEFAULT_SETTINGS,
remoteConfigurations: {},
activeConfigurationId: "",
};
vi.spyOn(setting, "saveSettingData").mockResolvedValue();
const dialogManager = {
openWithExplicitCancel: vi.fn(),
open: vi.fn(),
};
const services = {
API: {
addLog: vi.fn(),
addCommand: vi.fn(),
registerWindow: vi.fn(),
addRibbonIcon: vi.fn(),
registerProtocolHandler: vi.fn(),
},
UI: {
dialogManager,
},
setting,
} as any;
const core: any = {
_services: services,
rebuilder: {
scheduleRebuild: vi.fn(() => Promise.resolve()),
scheduleFetch: vi.fn(() => Promise.resolve()),
},
};
Object.defineProperty(core, "services", {
get() {
return services;
},
});
Object.defineProperty(core, "settings", {
get() {
return setting.settings;
},
set(value: ObsidianLiveSyncSettings) {
setting.settings = value;
},
});
return {
manager: new SetupManager(core),
setting,
dialogManager,
core,
};
}
describe("SetupManager", () => {
beforeEach(() => {
vi.clearAllMocks();
vi.restoreAllMocks();
});
it("onUseSetupURI should normalise imported legacy remote settings before applying", async () => {
const { manager, setting, dialogManager } = createSetupManager();
dialogManager.openWithExplicitCancel
.mockResolvedValueOnce(createLegacyRemoteSetting())
.mockResolvedValueOnce("compatible-existing-user");
const result = await manager.onUseSetupURI(UserMode.Unknown, "mock-config://settings");
expect(result).toBe(true);
expect(setting.currentSettings().remoteConfigurations["legacy-couchdb"]?.uri).toContain(
"sls+http://user:password@localhost:5984"
);
expect(setting.currentSettings().activeConfigurationId).toBe("legacy-couchdb");
});
it("decodeQR should normalise imported legacy remote settings before applying", async () => {
const { manager, setting, dialogManager } = createSetupManager();
vi.mocked(decodeSettingsFromQRCodeData).mockReturnValue(createLegacyRemoteSetting());
dialogManager.openWithExplicitCancel.mockResolvedValueOnce("compatible-existing-user");
const result = await manager.decodeQR("qr-data");
expect(result).toBe(true);
expect(decodeSettingsFromQRCodeData).toHaveBeenCalledWith("qr-data");
expect(setting.currentSettings().remoteConfigurations["legacy-couchdb"]?.uri).toContain(
"sls+http://user:password@localhost:5984"
);
expect(setting.currentSettings().activeConfigurationId).toBe("legacy-couchdb");
});
});

View File

@@ -176,7 +176,7 @@ export async function adjustSettingToRemote(
...config,
...Object.fromEntries(differentItems),
} satisfies ObsidianLiveSyncSettings;
await host.services.setting.applyExternalSettings(config, true);
log("Remote configuration applied.", LOG_LEVEL_NOTICE);
canProceed = true;
const updatedConfig = host.services.setting.currentSettings();

View File

@@ -49,6 +49,10 @@ const createSettingServiceMock = () => {
return {
settings,
currentSettings: vi.fn(() => settings),
applyExternalSettings: vi.fn((partial: any, _feedback?: boolean) => {
Object.assign(settings, partial);
return Promise.resolve();
}),
applyPartial: vi.fn((partial: any, _feedback?: boolean) => {
Object.assign(settings, partial);
return Promise.resolve();
@@ -552,7 +556,7 @@ describe("Red Flag Feature", () => {
await adjustSettingToRemote(host as any, createLoggerMock(), config);
expect(host.mocks.ui.confirm.askSelectStringDialogue).toHaveBeenCalled();
expect(host.mocks.setting.applyExternalSettings).toHaveBeenCalled();
}
);
const mismatchAcceptedKeys = Object.keys(TweakValuesRecommendedTemplate).filter(
@@ -579,7 +583,7 @@ describe("Red Flag Feature", () => {
await adjustSettingToRemote(host as any, createLoggerMock(), config);
expect(host.mocks.setting.applyExternalSettings).toHaveBeenCalled();
expect(host.mocks.ui.confirm.askSelectStringDialogue).not.toHaveBeenCalled();
}
);

View File

@@ -73,6 +73,12 @@
overflow-y: scroll;
}
.sls-remote-list .setting-item-description {
white-space: normal;
overflow-wrap: anywhere;
word-break: break-word;
}
.sls-plugins-tbl {
border: 1px solid var(--background-modifier-border);
width: 100%;

View File

@@ -3,6 +3,31 @@ set -e
script_dir=$(dirname "$0")
webpeer_dir=$script_dir/../../src/apps/webpeer
docker run -d --name relay-test -p 4000:7777 \
--tmpfs /app/strfry-db:rw,size=256m \
--entrypoint sh \
ghcr.io/hoytech/strfry:latest \
-lc 'cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"
relay {
bind = "0.0.0.0"
port = 7777
nofiles = 100000
info {
name = "livesync test relay"
description = "local relay for livesync p2p tests"
}
maxWebsocketPayloadSize = 131072
autoPingSeconds = 55
writePolicy {
plugin = ""
}
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay'
npm run --prefix $webpeer_dir build
docker run -d --name webpeer-test -p 8081:8043 -v $webpeer_dir/dist:/srv/http pierrezemb/gostatic

View File

@@ -3,6 +3,90 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 got a lot of updates, thankfully, compatibility has been kept and no breaking changes have been needed! In other words, once it is stable enough, the next version will be v1.0.0. At least, that is my hope.
## 0.25.60
29th April, 2026
### Fixed
- Larger settings can now be exported and imported via QR code without issues (#595).
- When the settings data exceeds the capacity of a single QR code, it is now split into multiple QR codes (a sketch of the idea follows after this list).
- These QR codes are reassembled by the aggregator page, which collects the split data and reconstructs the original settings.
- The aggregator page is available at `https://vrtmrz.github.io/obsidian-livesync/aggregator.html`, and the file is also included in the repository.
- The settings data is never sent to any server; the QR code data is generated and processed entirely on the client side, so your settings remain private and secure. However, please be mindful of your network environment.
- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing or exporting settings via QR code.
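As a rough illustration of the split-and-reassemble idea above (not the plug-in's actual payload format), each chunk could carry a small `part/total` header so the aggregator can reorder and join them:

```ts
// Minimal sketch only. The "part/total|payload" header is an assumed format for illustration;
// the real QR payload format and the aggregator page's logic may differ.
function splitForQR(serialised: string, maxChars = 1000): string[] {
    const total = Math.ceil(serialised.length / maxChars) || 1;
    return Array.from({ length: total }, (_, i) =>
        `${i + 1}/${total}|${serialised.slice(i * maxChars, (i + 1) * maxChars)}`
    );
}

function reassembleFromQR(chunks: string[]): string {
    return chunks
        .map((chunk) => {
            const [header, ...rest] = chunk.split("|");
            const [part] = header.split("/").map(Number);
            return { part, payload: rest.join("|") };
        })
        .sort((a, b) => a.part - b.part)
        .map((c) => c.payload)
        .join("");
}
```

Each QR code then carries one chunk, and scanning the chunks in any order still reconstructs the original serialised settings.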
### Fixed (CLI)
- `ls` and `mirror` commands now provide informative feedback when no documents are found or when filters skip all files, resolving the issue where they would exit silently (#860) (see the sketch after this list).
- Improved the clarity of CLI command logs by including the total count of processed items.
- The command-line argument `vault` has been renamed to the more appropriate `databaseDir`.
- The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility, the previous behaviour is still supported.
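A minimal sketch of the reporting behaviour described above; `DocEntry` and `reportListing` are illustrative names, not the CLI's actual API.

```ts
// Illustrative only: report clearly instead of exiting silently, and include totals in the summary.
type DocEntry = { path: string };

function reportListing(docs: DocEntry[], matches: (doc: DocEntry) => boolean): string[] {
    if (docs.length === 0) {
        return ["No documents found in the database."];
    }
    const shown = docs.filter(matches);
    if (shown.length === 0) {
        return [`All ${docs.length} documents were skipped by the current filters.`];
    }
    return [...shown.map((doc) => doc.path), `${shown.length} of ${docs.length} documents listed.`];
}

// Example: everything filtered out still yields a clear message.
console.log(reportListing([{ path: "note.md" }], () => false).join("\n"));
```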
## 0.25.59
### Fixed
- The setup wizard no longer drops the username and password silently (#865).
- Thank you so much, @koteitan!
- The Setup URI is now imported correctly (#859).
- Thank you again, @koteitan!
### Improved
- A French translation has been added by @foXaCe. Thank you so much!
## 0.25.58
### Fixed
- Credentials are no longer broken during object storage configuration (related: #852).
- Fixed a worker-side recursion issue that could raise `Maximum call stack size exceeded` during chunk splitting (related: #855).
- Improved background worker crash cleanup so pending split/encryption tasks are released cleanly instead of being left in a waiting state (related: #855).
- On start-up, the selected remote configuration is now applied to runtime connection fields as well, reducing intermittent authentication failures caused by stale runtime settings (related: #855).
- Issue report generation now redacts `remoteConfigurations` connection strings and keeps only the scheme (e.g. `sls+https://`), so credentials are not exposed in reports (sketched after this list).
- Hidden file JSON conflicts no longer keep re-opening and dismissing the merge dialogue before we can act, which fixes persistent unresolvable `data.json` conflicts in plug-in settings sync (related: #850).
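For illustration only (not the plug-in's actual code), the redaction mentioned above can be pictured as keeping the URI scheme and dropping everything after it:

```ts
// Sketch of the idea: keep only the scheme so credentials never appear in issue reports.
const redactConnectionString = (uri: string): string =>
    uri.replace(/^([a-z0-9+.-]+:\/\/).*$/i, "$1<redacted>");

redactConnectionString("sls+https://user:password@couch.example.test/vault");
// => "sls+https://<redacted>"
```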
## 0.25.57
9th April, 2026
- Packing a batch during the journal sync now continues even if the batch contains no items to upload.
- No more unexpected errors (about a replicator) during the early stage of initialisation.
- Error messages are now kept hidden if the show-status-inside-the-editor option is disabled (related: #829).
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
- Each device's local deduplication caches (`knownIDs`, `sentIDs`, `receivedFiles`, `sentFiles`) now track the remote journal epoch (derived from the encryption parameters stored on the remote).
- When the epoch changes, the plugin verifies whether the device's last uploaded file still exists on the remote. If the file is gone, it confirms a remote wipe and automatically clears the stale caches. If the file is still present (e.g. a protocol upgrade without a wipe), the caches are preserved and only the epoch is updated. This means normal upgrades never cause unnecessary re-processing (a sketch of this check follows after this list).
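A minimal sketch of the epoch check described above, with illustrative names rather than the plug-in's real types and APIs:

```ts
// Sketch only: when the remote journal epoch changes, probe the last uploaded object to decide
// whether the remote was wiped (clear caches) or merely upgraded (keep caches, bump epoch).
interface DedupCaches {
    knownIDs: Set<string>;
    sentIDs: Set<string>;
    receivedFiles: Set<string>;
    sentFiles: Set<string>;
    epoch: string;
    lastUploadedKey?: string;
}

async function reconcileEpoch(
    caches: DedupCaches,
    remoteEpoch: string,
    remoteHasObject: (key: string) => Promise<boolean>
): Promise<void> {
    if (caches.epoch === remoteEpoch) return; // nothing changed
    const stillThere = caches.lastUploadedKey ? await remoteHasObject(caches.lastUploadedKey) : false;
    if (!stillThere) {
        // Remote was wiped: stale deduplication state must be discarded.
        caches.knownIDs.clear();
        caches.sentIDs.clear();
        caches.receivedFiles.clear();
        caches.sentFiles.clear();
    }
    caches.epoch = remoteEpoch; // upgrade without a wipe: keep caches, only record the new epoch
}
```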
### Translations
- Russian translation has been added! Thank you so much for the contribution, @vipka1n! (#845)
### New features
- We can now configure multiple Remote Databases of the same type, e.g., multiple CouchDB or S3 remotes.
- A user interface for managing multiple remote databases has been added to the settings dialogue. I think no explanation is needed for the UI, but please let me know if you have any questions.
- We can switch between multiple Remote Databases in the settings dialogue (an illustrative configuration shape is sketched after this list).
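For illustration, the configuration map can be pictured roughly as below; the shape is inferred from the tests in this change set and is not the canonical type definition.

```ts
// Illustrative only: two remotes of the same type registered side by side, with one marked active.
const remoteConfigurations = {
    "home-couchdb": { name: "Home CouchDB", uri: "sls+https://user:password@couch.home.example/vault" },
    "office-couchdb": { name: "Office CouchDB", uri: "sls+https://user:password@couch.office.example/vault" },
};
const activeConfigurationId = "home-couchdb"; // switching remotes only changes the active id
```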
### CLI
#### Fixed
- Replication progress is now correctly saved and restored in the CLI (related: #846).
## ~~0.25.55~~ 0.25.56
30th March, 2026
### Fixed
- The `Peer-to-Peer Sync is not enabled. We cannot open a new connection.` error no longer occurs when P2P sync is not enabled and not expected to be used (#830).
### CLI
- Fixed incomplete localStorage support in the CLI (#831). Thank you so much, @rewse!
- Fixed the issue where the CLI could not connect to a remote that had been locked once (#833). Thanks again, @rewse!
## 0.25.54
18th March, 2026
@@ -10,23 +94,23 @@ The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsid
### Fixed
- Remote storage size check now works correctly again (#818).
- Some buttons on the settings dialogue now respond correctly again (#827).
### Refactored
- P2P replicator has been refactored to be a little more robust and easier to understand.
- Deleted items which are no longer used and might cause potential problems.
### CLI
- Fixed the corrupted display of the help message.
- Remove some unnecessary code.
### WebApp
- Fixed the issue where the detail level was not being applied in the log pane.
- Pop-ups are now shown.
- Add coverage for the test.
- Pop-ups are now shown in the web app as well.
## 0.25.53
@@ -193,91 +277,6 @@ As a result of recent refactoring, we are able to write tests more easily now!
- `ModuleObsidianAPI` has been removed and implemented in `APIService` and `RemoteService`.
- Now `APIService` is responsible for the network-online-status, not `databaseService.managers.networkManager`.
## 0.25.44
24th February, 2026
This release represents a significant architectural overhaul of the plug-in, focusing on modularity, testability, and stability. While many changes are internal, they pave the way for more robust features and easier maintenance.
However, as this update is very substantial, please do feel free to let me know if you encounter any issues.
### Fixed
- Ignore files (e.g., `.ignore`) are now handled efficiently.
- Replication & Database:
- Replication statistics are now correctly reset after switching replicators.
- A fix for `File already exists` on .md files has been merged (PR #802). Thanks, @waspeer, for the contribution!
### Improved
- Now we can configure network-error banners as icons, or hide them completely with the new `Network Warning Style` setting in the `General` pane of the settings dialogue. (#770, PR #804)
- Thanks so much to @A-wry!
### Refactored
#### Architectural Overhaul:
- A major transition from Class-based Modules to a Service/Middleware architecture has begun.
- Many modules (for example, `ModulePouchDB`, `ModuleLocalDatabaseObsidian`, `ModuleKeyValueDB`) have been removed or integrated into specific Services (`database`, `keyValueDB`, etc.).
- Reduced reliance on dynamic binding and inverted dependencies; dependencies are now explicit.
- `ObsidianLiveSyncPlugin` properties (`replicator`, `localDatabase`, `storageAccess`, etc.) have been moved to their respective services for better separation of concerns.
- From this refactoring onward, Services will, as a rule, no longer use setHandler (that is, simple lazy binding).
- They will be implemented directly in the service.
- However, not everything will be middlewarised. Modules that maintain state or make decisions based on the results of multiple handlers are permitted.
- Lifecycle:
- Application LifeCycle now starts in `Main` rather than `ServiceHub` or `ObsidianMenuModule`, ensuring smoother startup coordination.
#### New Services & Utilities:
- Added a `control` service to orchestrate other services (for example, handling stop/start logic during settings realisation).
- Added `UnresolvedErrorManager` to handle and display unresolved errors in a unified way.
- Added `logUtils` to unify logging injection and formatting.
- `VaultService.isTargetFile` now uses multiple, distinct checkers for better extensibility.
#### Code Separation:
- Separated Obsidian-specific logic from base logic for `StorageEventManager` and `FileAccess` modules.
- Moved reactive state values and statistics from the main plug-in instance to the services responsible for them.
#### Internal Cleanups:
- Many functions have been renamed for clarity (for example, `_isTargetFileByLocalDB` is now `_isTargetAcceptedByLocalDB`).
- Added `override` keywords to overridden items and removed dynamic binding for clearer code inheritance.
- Moved common functions to the common library.
#### Dependencies:
- Bumped dependencies simply to a point where they can be considered problem-free (by human-powered-artefacts-diff).
- Svelte, terser, and some others will be bumped later; they have a significant impact on the diff and would obscure it entirely.
- You may be surprised, but when I bump a library, I actually check for any unintended code.
## 0.25.43
5th February, 2026
### Fixed
- Encryption/decryption issues when using Object Storage as remote have been fixed.
- Now the plug-in falls back to V1 encryption/decryption when V2 fails (if not configured as ForceV1).
- This may fix the issue reported in #772.
### Notice
Quite a few packages have been updated in this release. Please report if you find any unexpected behaviour after this update.
## 0.25.42
2nd February, 2026
This release is identical to 0.25.41-patched-3, except for the version number.
### Refactored
- Now the service context is `protected` instead of `private` in `ServiceBase`.
- This change allows derived classes to access the context directly.
- Some dynamically bound services have been moved to services for better dependency management.
- `WebPeer` has been moved to the main repository from the sub repository `livesync-commonlib` for correct dependency management.
- Migrated from the outdated, unstable platform abstraction layer to services.
- A few more services will be added in the future for better maintainability.
Full notes are in
[updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md).

View File

@@ -3,6 +3,47 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 got a lot of updates, thankfully, compatibility has been kept and no breaking changes have been needed! In other words, once it is stable enough, the next version will be v1.0.0. At least, that is my hope.
## ~~0.25.55~~ 0.25.56
30th March, 2026
### Fixed
- The `Peer-to-Peer Sync is not enabled. We cannot open a new connection.` error no longer occurs when P2P sync is not enabled and not expected to be used (#830).
### CLI
- Fixed incomplete localStorage support in the CLI (#831). Thank you so much, @rewse!
- Fixed the issue where the CLI could not connect to a remote that had been locked once (#833). Thanks again, @rewse!
## 0.25.54
18th March, 2026
### Fixed
- Remote storage size check now works correctly again (#818).
- Some buttons on the settings dialogue now respond correctly again (#827).
### Refactored
- P2P replicator has been refactored to be a little more robust and easier to understand.
- Deleted items which are no longer used and might cause potential problems.
### CLI
- Fixed the corrupted display of the help message.
- Remove some unnecessary code.
### WebApp
- Fixed the issue where the detail level was not being applied in the log pane.
- Pop-ups are now shown.
- Add coverage for the test.
- Pop-ups are now shown in the web app as well.
## 0.25.53
17th March, 2026

View File

@@ -5,6 +5,23 @@ import path from "path";
import dotenv from "dotenv";
import { grantClipboardPermissions, writeHandoffFile, readHandoffFile } from "./test/lib/commands";
// P2P test environment variables
// Configure these in .env or .test.env, or inject via shell before running tests.
// Shell-injected values take precedence over dotenv files.
//
// Required:
// P2P_TEST_ROOM_ID - Shared room identifier for peers to discover each other
// P2P_TEST_PASSPHRASE - Encryption passphrase shared between test peers
//
// Optional:
// P2P_TEST_HOST_PEER_NAME - Name used to identify the host peer (default varies)
// P2P_TEST_RELAY - Nostr relay server URL used for peer signalling/discovery
// P2P_TEST_APP_ID - Application ID scoping the P2P session
// P2P_TEST_HANDOFF_FILE - File path used to pass state between up/down test phases
//
// General test options (also read from env):
// ENABLE_DEBUGGER - Set to "true" to attach a debugger and pause before tests
// ENABLE_UI - Set to "true" to open a visible browser window during tests
const defEnv = dotenv.config({ path: ".env" }).parsed;
const testEnv = dotenv.config({ path: ".test.env" }).parsed;
// Merge: dotenv files < process.env (so shell-injected vars like P2P_TEST_* take precedence)