Compare commits


41 Commits

Author SHA1 Message Date
vorotamoroz
bc22d61a3a Fixed: Error messages are now kept hidden if the 'show status inside the editor' option is disabled. 2026-04-05 17:43:29 +09:00
vorotamoroz
d709bcc1d0 Add encryption for connection management 2026-04-05 16:24:34 +09:00
vorotamoroz
d7088be8af Improved: remote management 2026-04-05 16:00:57 +09:00
vorotamoroz
f17f1ecd93 ### Fixed
- No unexpected error (about a replicator) during early stage of initialisation.

### New features

- Now we can configure multiple Remote Databases of the same type, e.g., multiple CouchDBs or S3 remotes.
- We can switch between multiple Remote Databases in the settings dialogue.
2026-04-03 13:47:56 +01:00
vorotamoroz
bf556bd9f4 Merge pull request #838 from chinhkrb113/contribai/docs/undocumented-test-environment-variables
📝 Docs: Undocumented test environment variables
2026-04-03 13:05:05 +09:00
vorotamoroz
8b40969fa3 Add ru locale 2026-04-02 10:28:58 +00:00
vorotamoroz
6cce931a88 Add test for CLI 2026-04-02 09:58:25 +00:00
vorotamoroz
216861f2c3 Prettified 2026-04-02 10:33:36 +01:00
vorotamoroz
6ce724afb4 Add dockerfiles to webapp and webpeer 2026-04-02 10:33:13 +01:00
vorotamoroz
2e3e106fb2 Fix dockerfile 2026-04-02 10:31:17 +01:00
vorotamoroz
00f2606a2f Added a bit for development on Windows. 2026-04-02 10:31:03 +01:00
vorotamoroz
3c94a44285 Fixed: Replication progress is now correctly saved and restored in the CLI. 2026-04-02 10:30:14 +01:00
vorotamoroz
4c0908acde Add CI Build for cli-docker image 2026-04-02 07:47:10 +01:00
vorotamoroz
cda27fb7f8 - Update trystero to v0.23.0
- Add dockerfile for CLI
- Change relay image for testing on arm64
2026-03-31 07:17:51 +00:00
vorotamoroz
837a828cec Fix: fix update note... 2026-03-30 09:20:01 +01:00
vorotamoroz
4c8e13ccb9 bump 2026-03-30 09:14:49 +01:00
vorotamoroz
1ae4eaab02 Merge pull request #805 from L4z3x/patch-1
[docs] added changing docker compose data and etc folder ownership to user 5984.
2026-03-30 17:04:12 +09:00
chinhkrb113
b1efbf74c7 docs: clarify P2P_TEST_RELAY as Nostr relay 2026-03-30 02:18:25 +07:00
vorotamoroz
a937feed3f Merge pull request #833 from rewse/fix/cli-sync-locked-error-message
fix(cli): show actionable error when sync fails due to locked remote DB
2026-03-28 23:58:34 +09:00
ChinhLee
2de9899a99 docs: undocumented test environment variables
The P2P test suite relies on several specific environment variables (e.g., `P2P_TEST_ROOM_ID`, `P2P_TEST_PASSPHRASE`, `P2P_TEST_RELAY`) loaded from `.env` or `.test.env`. Because these are not documented anywhere in the repository, new contributors will be unable to configure their local environment to run the P2P tests successfully.

Affected files: vitest.config.p2p.ts

Signed-off-by: ChinhLee <76194645+chinhkrb113@users.noreply.github.com>
2026-03-27 22:26:00 +07:00
vorotamoroz
a0af6201a5 - The `Peer-to-Peer Sync is not enabled. We cannot open a new connection.` error no longer occurs when we have not enabled P2P sync and are not expected to use it (#830). 2026-03-26 13:13:27 +01:00
vorotamoroz
9c7c6c8859 Merge pull request #831 from rewse/fix/cli-entrypoint-polyfill-default
fix(cli): handle incomplete localStorage in Node.js v25+
2026-03-26 20:36:46 +09:00
vorotamoroz
38d7cae1bc update some dependencies and ran npm-update. 2026-03-26 12:15:38 +01:00
vorotamoroz
fee34f0dcb Update dependency: deduplicate 2026-03-26 11:55:06 +01:00
Shibata, Tats
e01f7f4d92 test(cli): add TODO comment and locked-remote-DB test script
- Add inline TODO comment in runCommand.ts about standardising
  replication failure cause identification logic.
- Add test-sync-locked-remote-linux.sh that verifies:
  1. sync succeeds when the remote milestone is not locked.
  2. sync fails with an actionable error when the remote milestone
     has locked=true and accepted_nodes is empty.
2026-03-26 00:58:51 +09:00
Shibata, Tats
985004bc0e fix(cli): show actionable error when sync fails due to locked remote DB
When the remote database is locked and the CLI device is not in the
accepted_nodes list, openReplication returns false with no CLI-specific
guidance. The existing log message ('Fetch rebuilt DB, explicit
unlocking or chunk clean-up is required') is aimed at the Obsidian
plugin UI.

Check the replicator's remoteLockedAndDeviceNotAccepted flag after
sync failure and print a clear message directing the user to unlock
from the Obsidian plugin.

Ref: #832
2026-03-22 12:37:17 +09:00
Shibata, Tats
967a78d657 fix(cli): handle incomplete localStorage in Node.js v25+
Node.js v25 provides a built-in localStorage on globalThis, but without
`--localstorage-file` it is an empty object lacking getItem/setItem.
The existing check `!("localStorage" in globalThis)` passes, so the
polyfill is skipped and the CLI crashes with:

  TypeError: localStorage.getItem is not a function

Check for getItem as well so the polyfill is applied when the native
implementation is incomplete.
2026-03-22 11:57:47 +09:00
vorotamoroz
2ff60dd5ac Add missed files 2026-03-18 12:20:52 +01:00
vorotamoroz
c3341da242 Fix english 2026-03-18 12:05:15 +01:00
vorotamoroz
c2bfaeb5a9 Fixed: wrong import 2026-03-18 12:03:51 +01:00
vorotamoroz
c454616e1c bump 2026-03-18 12:01:57 +01:00
vorotamoroz
c88e73b7d3 Add note 2026-03-18 11:55:50 +01:00
vorotamoroz
3a29818612 - Delete items which are no longer used and might cause potential problems
- Fix Some Imports
- Fix floating promises on tests
2026-03-18 11:54:22 +01:00
vorotamoroz
ee69085830 Fixed: Some buttons on the setting dialogue now respond correctly again (#827). 2026-03-18 11:51:52 +01:00
vorotamoroz
3963f7c971 Refactored: P2P replicator has been refactored to be a little more robust and easier to understand. 2026-03-18 11:49:41 +01:00
vorotamoroz
602fcef949 - Fixed the issue where the detail level was not being applied in the log pane.
- Pop-ups are now shown.
- Add coverage for test.
- Pop-ups are now shown in the web app as well.
2026-03-18 11:48:31 +01:00
vorotamoroz
075d260fdd Fixed:
- Fixed the corrupted display of the help message.
- Remove some unnecessary code.
2026-03-18 11:46:52 +01:00
vorotamoroz
0717093d81 update for npm ci 2026-03-17 20:09:28 +09:00
vorotamoroz
1f87a9fd3d port setupManager, setupProtocol to serviceFeature
remove styles on webapp UI, and add stylesheet
2026-03-17 19:58:12 +09:00
vorotamoroz
fdd3a3aecb Add: vaultSelector (webapp) 2026-03-17 19:51:04 +09:00
L4z3x
310496d0b8 added changing ownership of the docker compose data and etc folders, to prevent permission errors
While trying to follow the docker compose guide, I created the data folders as the root user and got this error when running the stack:
`touch: cannot touch '/opt/couchdb/etc/local.d/docker.ini': Permission denied`

The problem was solved by changing the ownership of the folders to user 5984, the one used in the docker compose file.
2026-02-23 22:21:18 +01:00
75 changed files with 7560 additions and 16944 deletions

.dockerignore Normal file

@@ -0,0 +1,31 @@
# Git history
.git/
.gitignore
# Dependencies — re-installed inside Docker
node_modules/
src/apps/cli/node_modules/
# Pre-built CLI output — rebuilt inside Docker
src/apps/cli/dist/
# Obsidian plugin build outputs
main.js
main_org.js
pouchdb-browser.js
production/
# Test coverage and reports
coverage/
# Local environment / secrets
.env
*.env
.test.env
# local config files
*.local
# OS artefacts
.DS_Store
Thumbs.db

.gitattributes vendored Normal file

@@ -0,0 +1 @@
*.sh text eol=lf

.github/workflows/cli-docker.yml vendored Normal file

@@ -0,0 +1,101 @@
# Build and push the CLI Docker image to GitHub Container Registry (GHCR).
#
# Image tag format: <manifest-version>-<unix-epoch>-cli
# Example: 0.25.56-1743500000-cli
#
# The image is also tagged 'latest' for convenience.
# Image name: ghcr.io/<owner>/livesync-cli
name: Build and Push CLI Docker Image
on:
  push:
    tags:
      - "*.*.*-cli"
  workflow_dispatch:
    inputs:
      dry_run:
        description: Build only (do not push image to GHCR)
        required: false
        type: boolean
        default: true
      force:
        description: Continue to build/push even if CLI E2E fails (workflow_dispatch only)
        required: false
        type: boolean
        default: false
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    timeout-minutes: 90
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
      - name: Derive image tag
        id: meta
        run: |
          VERSION=$(jq -r '.version' manifest.json)
          EPOCH=$(date +%s)
          TAG="${VERSION}-${EPOCH}-cli"
          IMAGE="ghcr.io/${{ github.repository_owner }}/livesync-cli"
          echo "tag=${TAG}" >> $GITHUB_OUTPUT
          echo "image=${IMAGE}" >> $GITHUB_OUTPUT
          echo "full=${IMAGE}:${TAG}" >> $GITHUB_OUTPUT
          echo "version=${IMAGE}:${VERSION}-cli" >> $GITHUB_OUTPUT
          echo "latest=${IMAGE}:latest" >> $GITHUB_OUTPUT
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "24.x"
          cache: "npm"
      - name: Install dependencies
        run: npm ci
      - name: Run CLI E2E (docker)
        id: e2e
        continue-on-error: ${{ github.event_name == 'workflow_dispatch' && inputs.force }}
        working-directory: src/apps/cli
        env:
          CI: true
        run: npm run test:e2e:docker:all
      - name: Stop test containers (safety net)
        if: always()
        working-directory: src/apps/cli
        run: |
          # Keep this as a safety net for future suites/steps that may leave containers running.
          bash ./util/couchdb-stop.sh >/dev/null 2>&1 || true
          bash ./util/minio-stop.sh >/dev/null 2>&1 || true
          bash ./util/p2p-stop.sh >/dev/null 2>&1 || true
      - name: Build and push
        if: ${{ steps.e2e.outcome == 'success' || (github.event_name == 'workflow_dispatch' && inputs.force) }}
        uses: docker/build-push-action@v6
        with:
          context: .
          file: src/apps/cli/Dockerfile
          push: ${{ !(github.event_name == 'workflow_dispatch' && inputs.dry_run) }}
          tags: |
            ${{ steps.meta.outputs.full }}
            ${{ steps.meta.outputs.version }}
            ${{ steps.meta.outputs.latest }}
          cache-from: type=gha
          cache-to: type=gha,mode=max


@@ -0,0 +1,206 @@
# The design document of remote configuration management
## Goal
- Allow us to manage multiple remote connections in a single vault.
- Keep the existing synchronisation implementations working without requiring a large rewrite.
- Provide a safe migration path from the previous single-remote configuration model.
- Allow connections to be imported and exported in a compact and reusable format.
## Motivation
Historically, Self-hosted LiveSync stored one effective remote configuration directly in the main settings. This was simple, but it had several limitations.
- We could only keep one CouchDB, one bucket, or one Peer-to-Peer target as the effective configuration at a time.
- Switching between remotes of the same type required manually rewriting the active settings.
- Setup URI, QR code, CLI setup, and similar entry points all restored settings differently, which made migration logic easy to miss.
- The internal settings shape had gradually become a mix of user-facing settings, transport-specific credentials, and compatibility-oriented values.
Once multiple remotes of the same type became desirable, the previous model no longer scaled well enough. We therefore needed a structure that could store many remotes, still expose one effective remote to the replication logic, and keep migration and import behaviour consistent.
## Prerequisite
- Existing synchronisation features must continue to read an effective remote configuration from the current settings.
- Existing vaults must continue to work without requiring manual reconfiguration.
- Setup URI, QR code, CLI setup, protocol handlers, and other imported settings must be normalised in the same way.
- Import and export must be compact enough to be shared easily.
- We must be explicit that exported connection strings may contain credentials or secrets.
## Outlined methods and implementation plans
### Abstract
The current settings now have two layers for remote configuration.
1. A stored collection of named remotes.
2. One active remote projected into the legacy flat settings fields.
This means the replication and database layers can continue to read the effective remote from the existing settings fields, while the settings dialogue and migration logic can manage many stored remotes.
In short, the list is the source of truth for saved remotes, and the legacy fields remain the runtime compatibility layer.
### Data model
The main settings now contain the following properties.
```typescript
type RemoteConfiguration = {
    id: string;
    name: string;
    uri: string;
    isEncrypted: boolean;
};
type RemoteConfigurations = {
    remoteConfigurations: Record<string, RemoteConfiguration>;
    activeConfigurationId: string;
};
```
Each entry stores a connection string in `uri`.
- `sls+http://` or `sls+https://` for CouchDB-compatible remotes
- `sls+s3://` for bucket-style remotes
- `sls+p2p://` for Peer-to-Peer remotes
This structure allows multiple remotes of the same type to be stored without adding a large number of duplicated settings fields.
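For illustration, a stored catalogue with two entries might look like the following sketch. Only the `sls+` schemes come from this document; every id, name, host, and credential below (and the shape of the URI after the scheme) is hypothetical:

```typescript
// Hypothetical catalogue instance for the data model above.
// The types are re-declared here so the snippet is self-contained.
type RemoteConfiguration = {
    id: string;
    name: string;
    uri: string;
    isEncrypted: boolean;
};
type RemoteConfigurations = {
    remoteConfigurations: Record<string, RemoteConfiguration>;
    activeConfigurationId: string;
};

const stored: RemoteConfigurations = {
    remoteConfigurations: {
        "home-couchdb": {
            id: "home-couchdb",
            name: "Home CouchDB",
            uri: "sls+https://user:pass@couch.example.com/obsidian-livesync",
            isEncrypted: false,
        },
        "backup-s3": {
            id: "backup-s3",
            name: "Backup bucket",
            uri: "sls+s3://accessKey:secretKey@s3.example.com/vault-bucket",
            isEncrypted: false,
        },
    },
    // Only one entry at a time is projected into the legacy flat fields.
    activeConfigurationId: "home-couchdb",
};
```

Switching the active remote then only changes `activeConfigurationId` and re-runs the projection; the catalogue itself is untouched.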
### Runtime compatibility
The replication logic still reads the effective remote from legacy flat settings such as the following.
- `remoteType`
- `couchDB_URI`, `couchDB_USER`, `couchDB_PASSWORD`, `couchDB_DBNAME`
- `endpoint`, `bucket`, `accessKey`, `secretKey`, and related bucket fields
- `P2P_roomID`, `P2P_passphrase`, and related Peer-to-Peer fields
When a remote is activated, its connection string is parsed and projected into these legacy fields. Therefore, existing services do not need to know whether the remote came from an old vault, a Setup URI, or the new remote list.
This projection is intentionally one-way at runtime. The stored remote list is the persistent catalogue, while the flat fields describe the remote currently in use.
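A minimal sketch of this one-way projection, assuming a CouchDB-style connection string that parses as an ordinary URL once the `sls+` prefix is removed. The legacy field names come from the list above, but the parser and the `remoteType` value are assumptions, not the plugin's actual implementation:

```typescript
// Sketch only: the real parser and remoteType values live in the plugin.
type LegacyCouchDBFields = {
    remoteType: string;
    couchDB_URI: string;
    couchDB_USER: string;
    couchDB_PASSWORD: string;
    couchDB_DBNAME: string;
};

function projectCouchDBRemote(connectionString: string): LegacyCouchDBFields {
    // sls+https://user:password@host:port/dbname → strip "sls+" and parse as a URL
    const url = new URL(connectionString.replace(/^sls\+/, ""));
    return {
        remoteType: "couchdb", // illustrative value, not the plugin's actual enum
        couchDB_URI: `${url.protocol}//${url.host}`,
        couchDB_USER: decodeURIComponent(url.username),
        couchDB_PASSWORD: decodeURIComponent(url.password),
        couchDB_DBNAME: url.pathname.replace(/^\//, ""),
    };
}

const fields = projectCouchDBRemote(
    "sls+https://admin:secret@couch.example.com:5984/obsidian-livesync"
);
// fields.couchDB_URI → "https://couch.example.com:5984"
// fields.couchDB_DBNAME → "obsidian-livesync"
```

Because the projection is one-way, nothing here writes back into the stored catalogue; edits made directly to the flat fields would simply be overwritten on the next activation.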
### Connection string format
The connection string is the transport-neutral storage format for a remote entry.
Benefits:
- It is compact enough for clipboard-based workflows.
- It can be used for import and export in the settings dialogue.
- It avoids introducing a separate serialisation format only for the remote list.
- It can be parsed into the legacy settings shape whenever the active remote changes.
This is not equivalent to Setup URI.
- Setup URI represents a broader settings transfer workflow.
- A remote connection string represents one remote only.
### Import and export
The settings dialogue now supports the following workflows.
- Add connection: create a new remote by using the remote setup dialogues.
- Import connection: paste a connection string, validate it, and save it as a named remote.
- Export: copy a stored remote connection string to the clipboard.
Import normalises the string by parsing and serialising it again before saving. This ensures that equivalent but differently formatted URIs are saved in a canonical form.
Export is intentionally simple. It copies the connection string itself, because this is the most direct representation of one remote entry.
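The parse-then-serialise round trip can be sketched like this, assuming the connection string parses as a URL after the `sls+` prefix is removed (the real normaliser may canonicalise more than `URL` does):

```typescript
// Sketch: normalise a connection string by parsing and re-serialising it.
function canonicaliseConnectionString(uri: string): string {
    const url = new URL(uri.replace(/^sls\+/, ""));
    return "sls+" + url.toString();
}

// A mixed-case host and an explicit default port collapse to one canonical form:
const canonical = canonicaliseConnectionString("sls+https://Couch.Example.COM:443/db");
// canonical → "sls+https://couch.example.com/db"
```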
### Security note
Connection strings may include credentials, secrets, JWT-related values, or Peer-to-Peer passphrases.
Therefore:
- Export is a deliberate clipboard operation.
- Import trusts the supplied connection string as-is after parsing.
- We should regard exported connection strings as sensitive information, much like Setup URI or a credentials-bearing configuration file.
The `isEncrypted` field is currently reserved for future expansion. At present, the connection string itself is stored plainly inside the settings data, in the same sense that the effective runtime configuration can contain usable remote credentials.
### Migration strategy
Older vaults store only one effective remote in the flat settings fields. The migration creates a first remote list from those values.
Rules:
- If no remote list exists and the legacy fields contain a CouchDB configuration, create `legacy-couchdb`.
- If no remote list exists and the legacy fields contain a bucket configuration, create `legacy-s3`.
- If no remote list exists and the legacy fields contain a Peer-to-Peer configuration, create `legacy-p2p`.
- If more than one legacy remote is populated, create all possible entries and select the active one according to `remoteType`.
This migration is intentionally additive. It does not remove the flat fields because they remain necessary as the active runtime projection.
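The rules above can be sketched as follows. The `legacy-*` ids and the legacy field names come from this document, while the `remoteType` values and the way each URI is serialised are simplified assumptions:

```typescript
// Additive migration sketch: build a first remote list from legacy flat fields.
type RemoteEntry = { id: string; name: string; uri: string; isEncrypted: boolean };
type Settings = {
    remoteType?: string;
    couchDB_URI?: string;
    endpoint?: string;
    P2P_roomID?: string;
    remoteConfigurations?: Record<string, RemoteEntry>;
    activeConfigurationId?: string;
};

function migrateLegacyRemotes(s: Settings): void {
    if (s.remoteConfigurations) return; // a remote list already exists: nothing to do
    const remotes: Record<string, RemoteEntry> = {};
    const add = (id: string, name: string, uri: string) =>
        (remotes[id] = { id, name, uri, isEncrypted: false });
    // Simplified serialisation; the real one carries credentials and options.
    if (s.couchDB_URI) add("legacy-couchdb", "Legacy CouchDB", "sls+" + s.couchDB_URI);
    if (s.endpoint) add("legacy-s3", "Legacy bucket", "sls+s3://" + s.endpoint);
    if (s.P2P_roomID) add("legacy-p2p", "Legacy P2P", "sls+p2p://" + s.P2P_roomID);
    if (Object.keys(remotes).length === 0) return;
    s.remoteConfigurations = remotes; // additive: the flat fields are left untouched
    // With several populated legacy remotes, remoteType selects the active entry.
    const byType: Record<string, string> = {
        couchdb: "legacy-couchdb", // remoteType keys here are illustrative
        s3: "legacy-s3",
        p2p: "legacy-p2p",
    };
    s.activeConfigurationId = byType[s.remoteType ?? "couchdb"] ?? Object.keys(remotes)[0];
}
```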
### Normalisation and application paths
One important design lesson from this work is that migration cannot rely only on loading `data.json`.
Settings may enter the system from several routes:
- normal settings load
- Setup URI
- QR code
- protocol handler
- CLI setup
- Peer-to-Peer remote configuration retrieval
- red flag based remote adjustment
- settings markdown import
To keep behaviour consistent, normalisation is centralised in the settings service.
- `adjustSettings` is responsible for in-place normalisation and migration of a settings object.
- `applyExternalSettings` is responsible for applying imported or externally supplied settings after passing them through the same normalisation flow.
This ensures that imported settings can migrate to the current remote list model even if they never passed through the ordinary `loadSettings` path.
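In outline, the relationship between the two responsibilities can be sketched as below; the two function names come from this document, but their bodies here are placeholders:

```typescript
// Sketch: one normalisation path shared by every settings entry route.
type Settings = Record<string, unknown>;

function adjustSettings(settings: Settings): Settings {
    // In-place normalisation and migration would happen here,
    // e.g. building the remote list from legacy flat fields.
    return settings;
}

function applyExternalSettings(current: Settings, imported: Partial<Settings>): Settings {
    // Setup URI, QR code, CLI setup, protocol handlers, and the other
    // external routes all funnel through the same normalisation step.
    return adjustSettings({ ...current, ...imported });
}

const merged = applyExternalSettings({ liveSync: true }, { couchDB_DBNAME: "vault" });
```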
### Why not store only the remote list
It would be possible to let all consumers parse the active remote every time and stop using the flat fields entirely. However, this would require broader changes across replication, diagnostics, and compatibility layers.
The current design keeps the change set limited.
- The remote list improves storage and UX.
- The flat fields preserve compatibility and reduce migration risk.
This is a pragmatic transitional architecture, not an accidental duplication.
## Test strategy
The feature should be tested from four viewpoints.
1. Migration from old settings.
- A vault with only legacy flat remote settings should gain a remote list automatically.
- The correct active remote should be selected according to `remoteType`.
2. Runtime activation.
- Activating a stored remote should correctly project its values into the effective flat settings.
3. External import paths.
- Setup URI, QR code, CLI setup, Peer-to-Peer remote config, red flag adjustment, and settings markdown import should all pass through the same normalisation path.
4. Import and export.
- Imported connection strings should be parsed, canonicalised, named, and stored correctly.
- Export should copy the exact saved connection string.
## Documentation strategy
- This document explains the design and compatibility model of remote configuration management.
- User-facing setup documents should explain only how to add, import, export, and activate remotes.
- Release notes may refer to this document when changes in remote handling are significant.
## Outlook
Import/export configuration strings should also be encrypted in the future, but this is a separate feature that can be added on top of the current design.
## Consideration and conclusion
The remote configuration list solves the practical need to manage multiple remotes without forcing the whole codebase to abandon the previous effective-settings model at once.
Its core idea is modest but effective.
- Store named remotes as connection strings.
- Select one active remote.
- Project it into the legacy settings for runtime use.
- Normalise every imported settings object through the same path.
This keeps the implementation understandable and migration-friendly. It also opens the door for future work, such as encrypted per-remote storage, richer remote metadata, or remote-scoped options, without forcing another large redesign of how remotes are represented.


@@ -64,6 +64,10 @@ Congrats, move on to [step 2](#2-run-couchdb-initsh-for-initialise)
# Creating the save data & configuration directories.
mkdir couchdb-data
mkdir couchdb-etc
# Changing perms to user 5984.
chown -R 5984:5984 ./couchdb-data
chown -R 5984:5984 ./couchdb-etc
```
#### 2. Create a `docker-compose.yml` file with the following added to it


@@ -1,7 +1,7 @@
{
"id": "obsidian-livesync",
"name": "Self-hosted LiveSync",
"version": "0.25.53",
"version": "0.25.56",
"minAppVersion": "0.9.12",
"description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"author": "vorotamoroz",

package-lock.json generated

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
{
"name": "obsidian-livesync",
"version": "0.25.53",
"version": "0.25.56",
"description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
"main": "main.js",
"type": "module",
@@ -68,6 +68,7 @@
"@tsconfig/svelte": "^5.0.8",
"@types/deno": "^2.5.0",
"@types/diff-match-patch": "^1.0.36",
"@types/markdown-it": "^14.1.2",
"@types/node": "^24.10.13",
"@types/pouchdb": "^6.4.2",
"@types/pouchdb-adapter-http": "^6.1.6",
@@ -79,9 +80,9 @@
"@types/transform-pouch": "^1.0.6",
"@typescript-eslint/eslint-plugin": "8.56.1",
"@typescript-eslint/parser": "8.56.1",
"@vitest/browser": "^4.0.16",
"@vitest/browser-playwright": "^4.0.16",
"@vitest/coverage-v8": "^4.0.16",
"@vitest/browser": "^4.1.1",
"@vitest/browser-playwright": "^4.1.1",
"@vitest/coverage-v8": "^4.1.1",
"builtin-modules": "5.0.0",
"dotenv": "^17.3.1",
"dotenv-cli": "^11.0.0",
@@ -119,8 +120,9 @@
"tsx": "^4.21.0",
"typescript": "5.9.3",
"vite": "^7.3.1",
"vitest": "^4.0.16",
"webdriverio": "^9.24.0",
"vite-plugin-istanbul": "^8.0.0",
"vitest": "^4.1.1",
"webdriverio": "^9.27.0",
"yaml": "^2.8.2"
},
"dependencies": {
@@ -130,16 +132,17 @@
"@smithy/middleware-apply-body-checksum": "^4.3.9",
"@smithy/protocol-http": "^5.3.9",
"@smithy/querystring-builder": "^4.2.9",
"@trystero-p2p/nostr": "^0.23.0",
"commander": "^14.0.3",
"diff-match-patch": "^1.0.5",
"fflate": "^0.8.2",
"idb": "^8.0.3",
"markdown-it": "^14.1.1",
"minimatch": "^10.2.2",
"node-datachannel": "^0.32.1",
"octagonal-wheels": "^0.1.45",
"pouchdb-adapter-leveldb": "^9.0.0",
"qrcode-generator": "^1.4.4",
"trystero": "^0.22.0",
"werift": "^0.22.9",
"xxhash-wasm-102": "npm:xxhash-wasm@^1.0.2"
}
}


@@ -13,6 +13,7 @@ import type { CheckPointInfo } from "./lib/src/replication/journal/JournalSyncTy
import type { LiveSyncJournalReplicatorEnv } from "./lib/src/replication/journal/LiveSyncJournalReplicatorEnv";
import type { LiveSyncReplicatorEnv } from "./lib/src/replication/LiveSyncAbstractReplicator";
import { useTargetFilters } from "./lib/src/serviceFeatures/targetFilter";
import { useRemoteConfigurationMigration } from "./lib/src/serviceFeatures/remoteConfig";
import type { ServiceContext } from "./lib/src/services/base/ServiceBase";
import type { InjectableServiceHub } from "./lib/src/services/InjectableServices";
import { AbstractModule } from "./modules/AbstractModule";
@@ -272,6 +273,8 @@ export class LiveSyncBaseCore<
useTargetFilters(this);
// enable target filter feature.
usePrepareDatabaseForUse(this);
// Migration to multiple remote configurations
useRemoteConfigurationMigration(this);
}
}


@@ -1,4 +1,6 @@
.livesync
test/*
!test/*.sh
node_modules
.livesync
test/*
!test/*.sh
test/test-init.local.sh
node_modules
.*.json

src/apps/cli/Dockerfile Normal file

@@ -0,0 +1,111 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync CLI — Docker image
#
# Build (from the repository root):
# docker build -f src/apps/cli/Dockerfile -t livesync-cli .
#
# Run:
# docker run --rm -v /path/to/your/vault:/data livesync-cli sync
# docker run --rm -v /path/to/your/vault:/data livesync-cli ls
# docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
# docker run --rm -v /path/to/your/vault:/data livesync-cli --help
#
# The first positional argument (database-path) is automatically set to /data.
# Mount your vault at /data, or override with: -e LIVESYNC_DB_PATH=/other/path
#
# P2P (WebRTC) networking — important notes
# -----------------------------------------
# The P2P replicator (p2p-host / p2p-sync / p2p-peers) uses WebRTC, which
# generates ICE candidates of three kinds:
#
# host — the container's bridge IP (172.17.x.x). Unreachable from outside
# the Docker bridge, so LAN peers cannot connect via this candidate.
# srflx — the host's public IP, obtained via STUN reflection. Works fine
# over the internet even with the default bridge network.
# relay — traffic relayed through a TURN server. Always reachable regardless
# of network mode.
#
# Recommended network modes per use-case:
#
# LAN P2P (Linux only)
# docker run --network host ...
# This exposes the real host IP as the 'host' candidate so LAN peers can
# connect directly. --network host is not available on Docker Desktop for
# macOS or Windows.
#
# LAN P2P (macOS / Windows Docker Desktop)
# Configure a TURN server in settings (P2P_turnServers / P2P_turnUsername /
# P2P_turnCredential). All data is then relayed through the TURN server,
# bypassing the bridge-network limitation.
#
# Internet P2P
# Default bridge network is sufficient; the srflx candidate carries the
# host's public IP and peers can connect normally.
#
# CouchDB sync only (no P2P)
# Default bridge network. No special configuration required.
# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to compile native modules and bundle the CLI.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder
# Build tools required by native Node.js addons (mainly leveldown)
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3 make g++ \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
# Install workspace dependencies first (layer-cache friendly)
COPY package.json ./
RUN npm install
# Copy the full source tree and build the CLI bundle
COPY . .
RUN cd src/apps/cli && npm run build
# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime-deps
# Install only the external (unbundled) packages that the CLI requires at
# runtime. Native addons are compiled here against the same base image that
# the final runtime stage uses.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS runtime-deps
# Build tools required to compile native addons
RUN apt-get update \
&& apt-get install -y --no-install-recommends python3 make g++ \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /deps
# runtime-package.json lists only the packages that Vite leaves external
COPY src/apps/cli/runtime-package.json ./package.json
RUN npm install --omit=dev
# ─────────────────────────────────────────────────────────────────────────────
# Stage 3 — runtime
# Minimal image: pre-compiled native modules + CLI bundle only.
# No build tools are included, keeping the image small.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim
WORKDIR /app
# Copy pre-compiled external node_modules from runtime-deps stage
COPY --from=runtime-deps /deps/node_modules ./node_modules
# Copy the built CLI bundle from builder stage
COPY --from=builder /build/src/apps/cli/dist ./dist
# Install entrypoint wrapper
COPY src/apps/cli/docker-entrypoint.sh /usr/local/bin/livesync-cli
RUN chmod +x /usr/local/bin/livesync-cli
# Mount your vault / local database directory here
VOLUME ["/data"]
ENTRYPOINT ["livesync-cli"]


@@ -1,362 +1,420 @@
# Self-hosted LiveSync CLI
Command-line version of Self-hosted LiveSync plugin for syncing vaults without Obsidian.
## Features
- ✅ Sync Obsidian vaults using CouchDB without running Obsidian
- ✅ Compatible with Self-hosted LiveSync plugin settings
- ✅ Supports all core sync features (encryption, conflict resolution, etc.)
- ✅ Lightweight and headless operation
- ✅ Cross-platform (Windows, macOS, Linux)
## Architecture
This CLI version is built using the same core as the Obsidian plugin:
```
CLI Main
└─ LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands>
   ├─ NodeServiceHub (All services without Obsidian dependencies)
   └─ ServiceModules (wired by initialiseServiceModulesCLI)
      ├─ FileAccessCLI (Node.js FileSystemAdapter)
      ├─ StorageEventManagerCLI
      ├─ ServiceFileAccessCLI
      ├─ ServiceDatabaseFileAccessCLI
      ├─ ServiceFileHandler
      └─ ServiceRebuilder
```
### Key Components
1. **Node.js FileSystem Adapter** (`adapters/`)
- Platform-agnostic file operations using Node.js `fs/promises`
- Implements same interface as Obsidian's file system
2. **Service Modules** (`serviceModules/`)
- Initialised by `initialiseServiceModulesCLI`
- All core sync functionality preserved
3. **Service Hub and Settings Services** (`services/`)
- `NodeServiceHub` provides the CLI service context
- Node-specific settings and key-value services are provided without Obsidian dependencies
4. **Main Entry Point** (`main.ts`)
- Command-line interface
- Settings management (JSON file)
- Graceful shutdown handling
## Installation
```bash
# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
# due to shared dependencies with webapp and main library
npm install
# Build the project (ensure you are in `src/apps/cli` directory)
npm run build
```
## Usage
### Basic Usage
The CLI is designed to be used in a headless environment, so all operations are performed against a local vault directory and a settings file. Here are some example commands:
```bash
# Sync local database with CouchDB (no files will be changed).
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json sync
# Push files to local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
# Pull files from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
# Verbose logging
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose
# Apply setup URI to settings file (settings only; does not run synchronisation)
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
# Put text from stdin into local database
echo "Hello from stdin" | npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
# Output a file from local database to stdout
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
# Output a specific revision of a file from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
# Pull a specific revision of a file from local database to local storage
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
# List files in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
# Show metadata for a file in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
# Mark a file as deleted in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
# Resolve conflict by keeping a specific revision
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```
### Configuration
The CLI uses the same settings format as the Obsidian plugin. Create a `.livesync/settings.json` file in your vault directory:
```json
{
    "couchDB_URI": "http://localhost:5984",
    "couchDB_USER": "admin",
    "couchDB_PASSWORD": "password",
    "couchDB_DBNAME": "obsidian-livesync",
    "liveSync": true,
    "syncOnSave": true,
    "syncOnStart": true,
    "encrypt": true,
    "passphrase": "your-encryption-passphrase",
    "usePluginSync": false,
    "isConfigured": true
}
```
**Minimum required settings:**
- `couchDB_URI`: CouchDB server URL
- `couchDB_USER`: CouchDB username
- `couchDB_PASSWORD`: CouchDB password
- `couchDB_DBNAME`: Database name
- `isConfigured`: Set to `true` after configuration
### Command-line Reference
```
Usage:
livesync-cli [database-path] [options] [command] [command-args]
Arguments:
database-path Path to the local database directory (required except for init-settings)
Options:
--settings, -s <path> Path to settings file (default: .livesync/settings.json in local database directory)
--force, -f Overwrite existing file on init-settings
--verbose, -v Enable verbose logging
--help, -h Show this help message
Commands:
init-settings [path] Create settings JSON from DEFAULT_SETTINGS
sync Run one replication cycle and exit
p2p-peers <timeout> Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
p2p-sync <peer> <timeout> Synchronise with specified peer-id or peer-name
p2p-host Start P2P host mode and wait until interrupted (Ctrl+C)
push <src> <dst> Push local file <src> into local database path <dst>
pull <src> <dst> Pull file <src> from local database into local file <dst>
pull-rev <src> <dst> <revision> Pull specific revision into local file <dst>
setup <setupURI> Apply setup URI to settings file
put <vaultPath> Read text from standard input and write to local database
cat <vaultPath> Write latest file content from local database to standard output
cat-rev <vaultPath> <revision> Write specific revision content from local database to standard output
ls [prefix] List files as path<TAB>size<TAB>mtime<TAB>revision[*]
info <vaultPath> Show file metadata including current and past revisions, conflicts, and chunk list
rm <vaultPath> Mark file as deleted in local database
resolve <vaultPath> <revision> Resolve conflict by keeping the specified revision
mirror <storagePath> <vaultPath> Mirror local file into local database.
```
Run via npm script:
```bash
npm run --silent cli -- [database-path] [options] [command] [command-args]
```
#### Detailed Command Descriptions
##### ls
`ls` lists files in the local database with optional prefix filtering. Output format is:
```vault/path/file.md<TAB>size<TAB>mtime<TAB>revision[*]
```
Note: `*` indicates if the file has conflicts.
##### p2p-peers
`p2p-peers <timeout>` waits for the specified number of seconds, then prints each discovered peer on a separate line:
```text
[peer]<TAB><peer-id><TAB><peer-name>
```
Use this command to select a target for `p2p-sync`.
##### p2p-sync
`p2p-sync <peer> <timeout>` discovers peers up to the specified timeout and synchronises with the selected peer.
- `<peer>` accepts either `peer-id` or `peer-name` from `p2p-peers` output.
- On success, the command prints a completion message to standard error and exits with status code `0`.
- On failure, the command prints an error message and exits non-zero.
##### p2p-host
`p2p-host` starts the local P2P host and keeps running until interrupted.
- Other peers can discover and synchronise with this host while it is running.
- Stop the host with `Ctrl+C`.
- In CLI mode, behaviour is non-interactive and acceptance follows settings.
##### info
`info` output fields:
- `id`: Document ID
- `revision`: Current revision
- `conflicts`: Conflicted revisions, or `N/A`
- `filename`: Basename of path
- `path`: Vault-relative path
- `size`: Size in bytes
- `revisions`: Available non-current revisions
- `chunks`: Number of chunk IDs
- `children`: Chunk ID list
##### mirror
`mirror` is a command that synchronises your storage with your local vault. It is essentially a process that runs upon startup in Obsidian.
In other words, it performs the following actions:
1. **Precondition checks** — Aborts early if any of the following conditions are not met:
- Settings must be configured (`isConfigured: true`).
- File watching must not be suspended (`suspendFileWatching: false`).
- Remediation mode must be inactive (`maxMTimeForReflectEvents: 0`).
2. **State restoration** — On subsequent runs (after the first successful scan), restores the previous storage state before proceeding.
3. **Expired deletion cleanup** — If `automaticallyDeleteMetadataOfDeletedFiles` is set to a positive number of days, any document that is marked deleted and whose `mtime` is older than the retention period is permanently removed from the local database.
4. **File collection** — Enumerates files from two sources:
- **Storage**: all files under the vault path that pass `isTargetFile`.
- **Local database**: all normal documents (fetched with conflict information) whose paths are valid and pass `isTargetFile`.
- Both collections build case-insensitive ↔ case-sensitive path maps, controlled by `handleFilenameCaseSensitive`.
5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):
| Group | Condition | Action |
|---|---|---|
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |
6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.
Note: `mirror` does not respect file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run. To delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour (if we handle this automatically in `mirror`, we should be against a ton of edge cases).
### Planned options:
- `--immediate`: Perform sync after the command (e.g. `push`, `pull`, `put`, `rm`).
- `serve`: Start CLI in server mode, exposing REST APIs for remote, and batch operations.
- `cause-conflicted <vaultPath>`: Mark a file as conflicted without changing its content, to trigger conflict resolution in Obsidian.
## Use Cases
### 1. Bootstrap a new headless vault
Create default settings, apply a setup URI, then run one sync cycle.
```bash
npm run --silent cli -- init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
```
### 2. Scripted import and export
Push local files into the database from automation, and pull them back for export or backup.
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```
### 3. Revision inspection and restore
List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```
### 4. Conflict and cleanup workflow
Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```
### 5. CI smoke test for content round-trip
Validate that `put`/`cat` is behaving as expected in a pipeline.
```bash
echo "hello-ci" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
```
## Development
### Project Structure
```
src/apps/cli/
├── commands/ # Command dispatcher and command utilities
│ ├── runCommand.ts
│ ├── runCommand.unit.spec.ts
│ ├── types.ts
│ ├── utils.ts
│ └── utils.unit.spec.ts
├── adapters/ # Node.js FileSystem Adapter
│ ├── NodeConversionAdapter.ts
│ ├── NodeFileSystemAdapter.ts
│ ├── NodePathAdapter.ts
│ ├── NodeStorageAdapter.ts
│ ├── NodeStorageAdapter.unit.spec.ts
│ ├── NodeTypeGuardAdapter.ts
│ ├── NodeTypes.ts
│ └── NodeVaultAdapter.ts
├── lib/
│ └── pouchdb-node.ts
├── managers/ # CLI-specific managers
│ ├── CLIStorageEventManagerAdapter.ts
│ └── StorageEventManagerCLI.ts
├── serviceModules/ # Service modules (ported from main.ts)
│ ├── CLIServiceModules.ts
│ ├── DatabaseFileAccess.ts
│ ├── FileAccessCLI.ts
│ └── ServiceFileAccessImpl.ts
├── services/
│ ├── NodeKeyValueDBService.ts
│ ├── NodeServiceHub.ts
│ └── NodeSettingService.ts
├── test/
│ ├── test-e2e-two-vaults-common.sh
│ ├── test-e2e-two-vaults-matrix.sh
│ ├── test-e2e-two-vaults-with-docker-linux.sh
│ ├── test-push-pull-linux.sh
│ ├── test-setup-put-cat-linux.sh
│ └── test-sync-two-local-databases-linux.sh
├── .gitignore
├── entrypoint.ts # CLI executable entry point (shebang)
├── main.ts # CLI entry point
├── main.unit.spec.ts
├── package.json
├── README.md # This file
├── tsconfig.json
├── util/ # Test and local utility scripts
└── vite.config.ts
```
# Self-hosted LiveSync CLI
A command-line version of the Self-hosted LiveSync plugin for syncing vaults without running Obsidian.
## Features
- ✅ Sync Obsidian vaults using CouchDB without running Obsidian
- ✅ Compatible with Self-hosted LiveSync plugin settings
- ✅ Supports all core sync features (encryption, conflict resolution, etc.)
- ✅ Lightweight and headless operation
- ✅ Cross-platform (Windows, macOS, Linux)
## Architecture
This CLI version is built using the same core as the Obsidian plugin:
```
CLI Main
└─ LiveSyncBaseCore<ServiceContext, IMinimumLiveSyncCommands>
├─ NodeServiceHub (All services without Obsidian dependencies)
└─ ServiceModules (wired by initialiseServiceModulesCLI)
├─ FileAccessCLI (Node.js FileSystemAdapter)
├─ StorageEventManagerCLI
├─ ServiceFileAccessCLI
├─ ServiceDatabaseFileAccessCLI
├─ ServiceFileHandler
└─ ServiceRebuilder
```
### Key Components
1. **Node.js FileSystem Adapter** (`adapters/`)
- Platform-agnostic file operations using Node.js `fs/promises`
- Implements the same interface as Obsidian's file system
2. **Service Modules** (`serviceModules/`)
- Initialised by `initialiseServiceModulesCLI`
- All core sync functionality preserved
3. **Service Hub and Settings Services** (`services/`)
- `NodeServiceHub` provides the CLI service context
- Node-specific settings and key-value services are provided without Obsidian dependencies
4. **Main Entry Point** (`main.ts`)
- Command-line interface
- Settings management (JSON file)
- Graceful shutdown handling
## Something I realised later that could lead to misunderstandings
The term `vault` in this README refers to the directory containing your local database and settings file, not the actual files you want to sync. I will fix this later, but please bear this in mind for now.
## Docker
A Docker image is provided for headless / server deployments. Build from the repository root:
```bash
docker build -f src/apps/cli/Dockerfile -t livesync-cli .
```
Run:
```bash
# Sync with CouchDB
docker run --rm -v /path/to/your/vault:/data livesync-cli sync
# List files in the local database
docker run --rm -v /path/to/your/vault:/data livesync-cli ls
# Generate a default settings file
docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
```
The vault directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
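For unattended deployments, the one-shot `sync` command can be wrapped in a simple scheduler loop. This is only a sketch: `periodic_sync`, the cycle count, and the interval are assumptions, not part of the CLI; point the trailing command at the `docker run ... sync` invocation shown above.

```shell
#!/bin/sh
# Sketch of an unattended sync loop. `periodic_sync` is a hypothetical helper,
# not part of the CLI.
periodic_sync() {
    cycles="$1"     # number of sync cycles to run (use `while true` for forever)
    interval="$2"   # seconds to wait between cycles
    shift 2
    i=0
    while [ "$i" -lt "$cycles" ]; do
        # Keep going on failure; a transient network error should not stop the loop.
        "$@" || echo "sync failed; will retry next cycle" >&2
        i=$((i + 1))
        if [ "$i" -lt "$cycles" ]; then sleep "$interval"; fi
    done
}

# In production this would be, e.g.:
#   periodic_sync 1000000 300 docker run --rm -v /path/to/your/vault:/data livesync-cli sync
# `true` stands in here so the sketch is runnable anywhere:
periodic_sync 2 0 true && echo "completed"
```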
### P2P (WebRTC) and Docker networking
The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates
three kinds of ICE candidates. The default Docker bridge network affects which
candidates are usable:
| Candidate type | Description | Bridge network |
|---|---|---|
| `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
| `srflx` | Host public IP via STUN reflection | Works over the internet |
| `relay` | Traffic relayed via TURN server | Always reachable |
**LAN P2P on Linux** — use `--network host` so that the real host IP is
advertised as the `host` candidate:
```bash
docker run --rm --network host -v /path/to/your/vault:/data livesync-cli p2p-host
```
> `--network host` is not available on Docker Desktop for macOS or Windows.
**LAN P2P on macOS / Windows Docker Desktop** — configure a TURN server in the
settings file (`P2P_turnServers`, `P2P_turnUsername`, `P2P_turnCredential`).
All P2P traffic will then be relayed through the TURN server, bypassing the
bridge-network limitation.
**Internet P2P** — the default bridge network is sufficient. The `srflx`
candidate carries the host's public IP and peers can connect normally.
**CouchDB sync only (no P2P)** — no special network configuration is required.
## Installation
```bash
# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
# due to shared dependencies with webapp and main library
npm install
# Build the project (ensure you are in `src/apps/cli` directory)
npm run build
```
## Usage
### Basic Usage
The CLI is designed for headless environments, so all operations are performed against a local database directory and a settings file. Here are some example commands:
```bash
# Sync local database with CouchDB (no files will be changed).
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json sync
# Push files to local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md
# Pull files from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md
# Verbose logging
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json --verbose
# Apply setup URI to settings file (settings only; does not run synchronisation)
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."
# Put text from stdin into local database
echo "Hello from stdin" | npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md
# Output a file from local database to stdout
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md
# Output a specific revision of a file from local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef
# Pull a specific revision of a file from local database to local storage
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef
# List files in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/
# Show metadata for a file in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md
# Mark a file as deleted in local database
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md
# Resolve conflict by keeping a specific revision
npm run --silent cli -- /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```
### Configuration
The CLI uses the same settings format as the Obsidian plugin. Create a `.livesync/settings.json` file in your vault directory:
```json
{
"couchDB_URI": "http://localhost:5984",
"couchDB_USER": "admin",
"couchDB_PASSWORD": "password",
"couchDB_DBNAME": "obsidian-livesync",
"liveSync": true,
"syncOnSave": true,
"syncOnStart": true,
"encrypt": true,
"passphrase": "your-encryption-passphrase",
"usePluginSync": false,
"isConfigured": true
}
```
**Minimum required settings:**
- `couchDB_URI`: CouchDB server URL
- `couchDB_USER`: CouchDB username
- `couchDB_PASSWORD`: CouchDB password
- `couchDB_DBNAME`: Database name
- `isConfigured`: Set to `true` after configuration
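The minimum configuration above can be written from a script with a heredoc. The `/tmp` path below is purely illustrative; by default the CLI looks for `.livesync/settings.json` inside the database directory.

```shell
#!/bin/sh
# Write a minimal settings file containing only the required fields listed above.
dir=/tmp/livesync-demo/.livesync
mkdir -p "$dir"
cat > "$dir/settings.json" <<'EOF'
{
    "couchDB_URI": "http://localhost:5984",
    "couchDB_USER": "admin",
    "couchDB_PASSWORD": "password",
    "couchDB_DBNAME": "obsidian-livesync",
    "isConfigured": true
}
EOF
grep -q '"isConfigured": true' "$dir/settings.json" && echo "settings written"
```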
### Command-line Reference
```
Usage:
livesync-cli [database-path] [options] [command] [command-args]
Arguments:
database-path Path to the local database directory (required except for init-settings)
Options:
--settings, -s <path> Path to settings file (default: .livesync/settings.json in local database directory)
--force, -f Overwrite existing file on init-settings
--verbose, -v Enable verbose logging
--help, -h Show this help message
Commands:
init-settings [path] Create settings JSON from DEFAULT_SETTINGS
sync Run one replication cycle and exit
p2p-peers <timeout> Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
p2p-sync <peer> <timeout> Synchronise with specified peer-id or peer-name
p2p-host Start P2P host mode and wait until interrupted (Ctrl+C)
push <src> <dst> Push local file <src> into local database path <dst>
pull <src> <dst> Pull file <src> from local database into local file <dst>
pull-rev <src> <dst> <revision> Pull specific revision into local file <dst>
setup <setupURI> Apply setup URI to settings file
put <vaultPath> Read text from standard input and write to local database
cat <vaultPath> Write latest file content from local database to standard output
cat-rev <vaultPath> <revision> Write specific revision content from local database to standard output
ls [prefix] List files as path<TAB>size<TAB>mtime<TAB>revision[*]
info <vaultPath> Show file metadata including current and past revisions, conflicts, and chunk list
rm <vaultPath> Mark file as deleted in local database
resolve <vaultPath> <revision> Resolve conflict by keeping the specified revision
mirror <storagePath> <vaultPath> Mirror local file into local database.
```
Run via npm script:
```bash
npm run --silent cli -- [database-path] [options] [command] [command-args]
```
#### Detailed Command Descriptions
##### ls
`ls` lists files in the local database with optional prefix filtering. Output format is:
```text
vault/path/file.md<TAB>size<TAB>mtime<TAB>revision[*]
```
Note: a trailing `*` on the revision indicates that the file has conflicts.
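Because the output is tab-separated, it can be filtered with standard tools. The sample lines below are illustrative, not real CLI output; in practice pipe in the output of the `ls` command:

```shell
#!/bin/sh
# Filter `ls`-style output for conflicted files (revision column ends with `*`).
sample=$(printf 'notes/a.md\t120\t1700000000\t3-abc\nnotes/b.md\t64\t1700000001\t5-def*\nnotes/c.md\t10\t1700000002\t1-aaa')
conflicted=$(printf '%s\n' "$sample" | awk -F'\t' '$4 ~ /\*$/ { print $1 }')
echo "$conflicted"
```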
##### p2p-peers
`p2p-peers <timeout>` waits for the specified number of seconds, then prints each discovered peer on a separate line:
```text
[peer]<TAB><peer-id><TAB><peer-name>
```
Use this command to select a target for `p2p-sync`.
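Since each peer is printed as `[peer]<TAB><peer-id><TAB><peer-name>`, a name can be resolved to an id in a script. The sample lines below are made up; feed in the real command output in practice.

```shell
#!/bin/sh
# Resolve a peer-name to its peer-id from `p2p-peers`-style output.
peers=$(printf '[peer]\tabcd1234\tlaptop\n[peer]\tef567890\tnas')
peer_id=$(printf '%s\n' "$peers" | awk -F'\t' -v name="nas" '$1 == "[peer]" && $3 == name { print $2 }')
echo "$peer_id"
```

The resolved id can then be passed to `p2p-sync` (which also accepts the name directly).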
##### p2p-sync
`p2p-sync <peer> <timeout>` discovers peers up to the specified timeout and synchronises with the selected peer.
- `<peer>` accepts either `peer-id` or `peer-name` from `p2p-peers` output.
- On success, the command prints a completion message to standard error and exits with status code `0`.
- On failure, the command prints an error message and exits non-zero.
##### p2p-host
`p2p-host` starts the local P2P host and keeps running until interrupted.
- Other peers can discover and synchronise with this host while it is running.
- Stop the host with `Ctrl+C`.
- In CLI mode, behaviour is non-interactive and acceptance follows settings.
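Since `p2p-host` runs until interrupted, scripted environments may want to bound its lifetime. One common pattern is a background job plus a timed `kill`; this is a generic sketch, and `sleep 60` stands in for the actual `p2p-host` invocation.

```shell
#!/bin/sh
# Run a long-lived command for a bounded time, then stop it.
run_for() {
    seconds="$1"; shift
    "$@" &
    pid=$!
    sleep "$seconds"
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null
    echo "stopped after ${seconds}s"
}
# e.g. run_for 3600 npm run --silent cli -- /data/vault p2p-host
run_for 1 sleep 60
```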
##### info
`info` output fields:
- `id`: Document ID
- `revision`: Current revision
- `conflicts`: Conflicted revisions, or `N/A`
- `filename`: Basename of path
- `path`: Vault-relative path
- `size`: Size in bytes
- `revisions`: Available non-current revisions
- `chunks`: Number of chunk IDs
- `children`: Chunk ID list
##### mirror
`mirror` is a command that synchronises your storage with your local database. It is essentially the same process that runs on startup in Obsidian. Concretely, it performs the following actions:
1. **Precondition checks** — Aborts early if any of the following conditions are not met:
- Settings must be configured (`isConfigured: true`).
- File watching must not be suspended (`suspendFileWatching: false`).
- Remediation mode must be inactive (`maxMTimeForReflectEvents: 0`).
2. **State restoration** — On subsequent runs (after the first successful scan), restores the previous storage state before proceeding.
3. **Expired deletion cleanup** — If `automaticallyDeleteMetadataOfDeletedFiles` is set to a positive number of days, any document that is marked deleted and whose `mtime` is older than the retention period is permanently removed from the local database.
4. **File collection** — Enumerates files from two sources:
- **Storage**: all files under the vault path that pass `isTargetFile`.
- **Local database**: all normal documents (fetched with conflict information) whose paths are valid and pass `isTargetFile`.
- Both collections build case-insensitive ↔ case-sensitive path maps, controlled by `handleFilenameCaseSensitive`.
5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):
| Group | Condition | Action |
|---|---|---|
| **UPDATE DATABASE** | File exists in storage only | Store the file into the local database. |
| **UPDATE STORAGE** | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped. |
| **SYNC DATABASE AND STORAGE** | File exists in both | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |
6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.
Note: `mirror` does not respect file deletions. If a file is deleted in storage, it will be restored on the next `mirror` run. To delete a file, use the `rm` command instead. This is a little inconvenient, but it is intentional behaviour (if we handled deletions automatically in `mirror`, we would have to contend with a ton of edge cases).
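The three-way grouping in step 5 can be modelled with sorted path lists and `comm`. This is only an illustration of the categorisation, not how the CLI implements it (the real implementation works on database documents and compares mtimes); the file names are made up.

```shell
#!/bin/sh
# Model of step 5's three-way grouping using sorted path lists and `comm`.
printf 'a.md\nb.md\nc.md\n' | sort > /tmp/mirror-storage.txt
printf 'b.md\nc.md\nd.md\n' | sort > /tmp/mirror-database.txt
update_database=$(comm -23 /tmp/mirror-storage.txt /tmp/mirror-database.txt)  # storage only
update_storage=$(comm -13 /tmp/mirror-storage.txt /tmp/mirror-database.txt)   # database only
sync_both=$(comm -12 /tmp/mirror-storage.txt /tmp/mirror-database.txt)        # in both: compare mtime
echo "UPDATE DATABASE: $update_database"
echo "UPDATE STORAGE: $update_storage"
echo "SYNC: $sync_both"
```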
### Planned options
- `--immediate`: Perform sync after the command (e.g. `push`, `pull`, `put`, `rm`).
- `serve`: Start the CLI in server mode, exposing REST APIs for remote and batch operations.
- `cause-conflicted <vaultPath>`: Mark a file as conflicted without changing its content, to trigger conflict resolution in Obsidian.
## Use Cases
### 1. Bootstrap a new headless vault
Create default settings, apply a setup URI, then run one sync cycle.
```bash
npm run --silent cli -- init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json sync
```
### 2. Scripted import and export
Push local files into the database from automation, and pull them back for export or backup.
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```
### 3. Revision inspection and restore
List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```
### 4. Conflict and cleanup workflow
Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.
```bash
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json info notes/note.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```
### 5. CI smoke test for content round-trip
Validate that `put`/`cat` is behaving as expected in a pipeline.
```bash
echo "hello-ci" | npm run --silent cli -- /data/vault --settings /data/livesync-settings.json put ci/test.md
npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
```
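A small assertion helper makes the round-trip check fail the pipeline on mismatch. `assert_output_equals` is hypothetical, not part of the CLI; in CI the command would be the `cat ci/test.md` invocation above.

```shell
#!/bin/sh
# Hypothetical CI helper: assert that a command prints exactly the expected text.
assert_output_equals() {
    expected="$1"; shift
    actual="$("$@")"
    if [ "$actual" = "$expected" ]; then
        echo "OK: $*"
    else
        echo "FAIL: expected '$expected' but got '$actual'" >&2
        return 1
    fi
}
# In CI:
#   assert_output_equals "hello-ci" npm run --silent cli -- /data/vault --settings /data/livesync-settings.json cat ci/test.md
# Runnable stand-in:
assert_output_equals "hello-ci" echo "hello-ci"
```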
## Development
### Project Structure
```
src/apps/cli/
├── commands/ # Command dispatcher and command utilities
│ ├── runCommand.ts
│ ├── runCommand.unit.spec.ts
│ ├── types.ts
│ ├── utils.ts
│ └── utils.unit.spec.ts
├── adapters/ # Node.js FileSystem Adapter
│ ├── NodeConversionAdapter.ts
│ ├── NodeFileSystemAdapter.ts
│ ├── NodePathAdapter.ts
│ ├── NodeStorageAdapter.ts
│ ├── NodeStorageAdapter.unit.spec.ts
│ ├── NodeTypeGuardAdapter.ts
│ ├── NodeTypes.ts
│ └── NodeVaultAdapter.ts
├── lib/
│ └── pouchdb-node.ts
├── managers/ # CLI-specific managers
│ ├── CLIStorageEventManagerAdapter.ts
│ └── StorageEventManagerCLI.ts
├── serviceModules/ # Service modules (ported from main.ts)
│ ├── CLIServiceModules.ts
│ ├── DatabaseFileAccess.ts
│ ├── FileAccessCLI.ts
│ └── ServiceFileAccessImpl.ts
├── services/
│ ├── NodeKeyValueDBService.ts
│ ├── NodeServiceHub.ts
│ └── NodeSettingService.ts
├── test/
│ ├── test-e2e-two-vaults-common.sh
│ ├── test-e2e-two-vaults-matrix.sh
│ ├── test-e2e-two-vaults-with-docker-linux.sh
│ ├── test-push-pull-linux.sh
│ ├── test-setup-put-cat-linux.sh
│ └── test-sync-two-local-databases-linux.sh
├── .gitignore
├── entrypoint.ts # CLI executable entry point (shebang)
├── main.ts # CLI entry point
├── main.unit.spec.ts
├── package.json
├── README.md # This file
├── tsconfig.json
├── util/ # Test and local utility scripts
└── vite.config.ts
```

```diff
@@ -2,8 +2,7 @@ import type { LiveSyncBaseCore } from "../../../LiveSyncBaseCore";
 import { P2P_DEFAULT_SETTINGS } from "@lib/common/types";
 import type { ServiceContext } from "@lib/services/base/ServiceBase";
 import { LiveSyncTrysteroReplicator } from "@lib/replication/trystero/LiveSyncTrysteroReplicator";
-import { addP2PEventHandlers } from "@lib/replication/trystero/P2PReplicatorCore";
+import { addP2PEventHandlers } from "@lib/replication/trystero/addP2PEventHandlers";

 type CLIP2PPeer = {
     peerId: string;
     name: string;
```

```diff
@@ -21,6 +21,18 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
     if (options.command === "sync") {
         console.log("[Command] sync");
         const result = await core.services.replication.replicate(true);
+        if (!result) {
+            // TODO: Standardise the logic for identifying the cause of replication
+            // failure so that every reason (locked DB, version mismatch, network
+            // error, etc.) is surfaced with a CLI-specific actionable message.
+            const replicator = core.services.replicator.getActiveReplicator();
+            if (replicator?.remoteLockedAndDeviceNotAccepted) {
+                console.error(
+                    `[Error] The remote database is locked and this device is not yet accepted.\n` +
+                        `[Error] Please unlock the database from the Obsidian plugin and retry.`
+                );
+            }
+        }
         return !!result;
     }
@@ -154,7 +166,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
     } as ObsidianLiveSyncSettings;
     console.log(`[Command] setup -> ${settingsPath}`);
-    await core.services.setting.applyPartial(nextSettings, true);
+    await core.services.setting.applyExternalSettings(nextSettings, true);
     await core.services.control.applySettings();
     return true;
 }
```

```diff
@@ -14,6 +14,7 @@ function createCoreMock() {
             applySettings: vi.fn(async () => {}),
         },
         setting: {
+            applyExternalSettings: vi.fn(async () => {}),
             applyPartial: vi.fn(async () => {}),
         },
     },
@@ -176,9 +177,9 @@ describe("runCommand abnormal cases", () => {
         });
         expect(result).toBe(true);
-        expect(core.services.setting.applyPartial).toHaveBeenCalledTimes(1);
+        expect(core.services.setting.applyExternalSettings).toHaveBeenCalledTimes(1);
         expect(core.services.control.applySettings).toHaveBeenCalledTimes(1);
-        const [appliedSettings, saveImmediately] = core.services.setting.applyPartial.mock.calls[0];
+        const [appliedSettings, saveImmediately] = core.services.setting.applyExternalSettings.mock.calls[0];
         expect(saveImmediately).toBe(true);
         expect(appliedSettings.couchDB_URI).toBe("http://127.0.0.1:5984");
         expect(appliedSettings.couchDB_DBNAME).toBe("livesync-test-db");
@@ -198,7 +199,7 @@ describe("runCommand abnormal cases", () => {
             })
         ).rejects.toThrow();
-        expect(core.services.setting.applyPartial).not.toHaveBeenCalled();
+        expect(core.services.setting.applyExternalSettings).not.toHaveBeenCalled();
         expect(core.services.control.applySettings).not.toHaveBeenCalled();
     });
 });
```

```diff
@@ -0,0 +1,25 @@
+#!/bin/sh
+# Entrypoint wrapper for the Self-hosted LiveSync CLI Docker image.
+#
+# By default, /data is used as the database-path (the vault mount point).
+# Override this via the LIVESYNC_DB_PATH environment variable.
+#
+# Examples:
+#   docker run -v /path/to/vault:/data livesync-cli sync
+#   docker run -v /path/to/vault:/data livesync-cli --settings /data/.livesync/settings.json sync
+#   docker run -v /path/to/vault:/data livesync-cli init-settings
+#   docker run -e LIVESYNC_DB_PATH=/vault -v /path/to/vault:/vault livesync-cli sync
+
+set -e
+
+case "${1:-}" in
+    init-settings | --help | -h | "")
+        # Commands that do not require a leading database-path argument
+        exec node /app/dist/index.cjs "$@"
+        ;;
+    *)
+        # All other commands: prepend the database-path so users only need
+        # to supply the command and its options.
+        exec node /app/dist/index.cjs "${LIVESYNC_DB_PATH:-/data}" "$@"
+        ;;
+esac
```

```diff
@@ -1,10 +1,11 @@
 #!/usr/bin/env node
-import polyfill from "node-datachannel/polyfill";
+import * as polyfill from "werift";
 import { main } from "./main";

-for (const prop in polyfill) {
-    // @ts-ignore Applying polyfill to globalThis
-    globalThis[prop] = (polyfill as any)[prop];
-}
+const rtcPolyfillCtor = (polyfill as any).RTCPeerConnection;
+if (typeof (globalThis as any).RTCPeerConnection === "undefined" && typeof rtcPolyfillCtor === "function") {
+    // Fill only the standard WebRTC global in Node CLI runtime.
+    (globalThis as any).RTCPeerConnection = rtcPolyfillCtor;
+}

 main().catch((error) => {
```

```diff
@@ -3,25 +3,10 @@
  * Command-line version of Self-hosted LiveSync plugin for syncing vaults without Obsidian
  */
-if (!("localStorage" in globalThis)) {
-    const store = new Map<string, string>();
-    (globalThis as any).localStorage = {
-        getItem: (key: string) => (store.has(key) ? store.get(key)! : null),
-        setItem: (key: string, value: string) => {
-            store.set(key, value);
-        },
-        removeItem: (key: string) => {
-            store.delete(key);
-        },
-        clear: () => {
-            store.clear();
-        },
-    };
-}
 import * as fs from "fs/promises";
 import * as path from "path";
 import { NodeServiceContext, NodeServiceHub } from "./services/NodeServiceHub";
+import { configureNodeLocalStorage, ensureGlobalNodeLocalStorage } from "./services/NodeLocalStorage";
 import { LiveSyncBaseCore } from "../../LiveSyncBaseCore";
 import { ModuleReplicatorP2P } from "../../modules/core/ModuleReplicatorP2P";
 import { initialiseServiceModulesCLI } from "./serviceModules/CLIServiceModules";
@@ -43,25 +28,9 @@ import { getPathFromUXFileInfo } from "@lib/common/typeUtils";
 import { stripAllPrefixes } from "@lib/string_and_binary/path";

 const SETTINGS_FILE = ".livesync/settings.json";
+ensureGlobalNodeLocalStorage();
 defaultLoggerEnv.minLogLevel = LOG_LEVEL_DEBUG;
-// DI the log again.
-// const recentLogEntries = reactiveSource<LogEntry[]>([]);
-// const globalLogFunction = (message: any, level?: number, key?: string) => {
-//     const messageX =
-//         message instanceof Error
-//             ? new LiveSyncError("[Error Logged]: " + message.message, { cause: message })
-//             : message;
-//     const entry = { message: messageX, level, key } as LogEntry;
-//     recentLogEntries.value = [...recentLogEntries.value, entry];
-// };
-// setGlobalLogFunction((msg, level) => {
-//     console.error(`[${level}] ${typeof msg === "string" ? msg : JSON.stringify(msg)}`);
-//     if (msg instanceof Error) {
-//         console.error(msg);
-//     }
-// });

 function printHelp(): void {
     console.log(`
 Self-hosted LiveSync CLI
@@ -78,8 +47,8 @@ Commands:
     p2p-sync <peer> <timeout>
                             Sync with the specified peer-id or peer-name
     p2p-host                Start P2P host mode and wait until interrupted
-    push <src> <dst>          Push local file <src> into local database path <dst>
-    pull <src> <dst>          Pull file <src> from local database into local file <dst>
+    push <src> <dst>        Push local file <src> into local database path <dst>
+    pull <src> <dst>        Pull file <src> from local database into local file <dst>
     pull-rev <src> <dst> <rev> Pull file <src> at specific revision <rev> into local file <dst>
     setup <setupURI>        Apply setup URI to settings file
     put <dst>               Read UTF-8 content from stdin and write to local database path <dst>
@@ -90,12 +59,12 @@ Commands:
     rm <path>               Mark a file as deleted in local database
     resolve <path> <rev>    Resolve conflicts by keeping <rev> and deleting others

 Examples:
-    livesync-cli ./my-database  sync
+    livesync-cli ./my-database sync
     livesync-cli ./my-database p2p-peers 5
     livesync-cli ./my-database p2p-sync my-peer-name 15
     livesync-cli ./my-database p2p-host
-    livesync-cli ./my-database --settings ./custom-settings.json  push ./note.md folder/note.md
-    livesync-cli ./my-database pull folder/note.md ./exports/note.md
+    livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
```
livesync-cli ./my-database pull folder/note.md ./exports/note.md
livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
echo "Hello" | livesync-cli ./my-database put notes/hello.md
@@ -106,7 +75,7 @@ Examples:
livesync-cli ./my-database rm notes/hello.md
livesync-cli ./my-database resolve notes/hello.md 3-abcdef
livesync-cli init-settings ./data.json
livesync-cli ./my-database --verbose
livesync-cli ./my-database --verbose
`);
}
@@ -269,6 +238,7 @@ export async function main() {
const settingsPath = options.settingsPath
? path.resolve(options.settingsPath)
: path.join(vaultPath, SETTINGS_FILE);
configureNodeLocalStorage(path.join(vaultPath, ".livesync", "runtime", "local-storage.json"));
infoLog(`Self-hosted LiveSync CLI`);
infoLog(`Vault: ${vaultPath}`);
@@ -353,7 +323,10 @@ export async function main() {
(core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
return initialiseServiceModulesCLI(vaultPath, core, serviceHub);
},
(core) => [new ModuleReplicatorP2P(core)], // Register P2P replicator for CLI (useP2PReplicator is not used here)
(core) => [
// No modules need to be registered for P2P replication in the CLI; replicators are used directly in p2p.ts.
// new ModuleReplicatorP2P(core),
],
() => [], // No add-ons
(core) => {
// Add a target filter to prevent internal files from being handled

View File

@@ -10,6 +10,7 @@
"preview": "vite preview",
"cli": "node dist/index.cjs",
"buildRun": "npm run build && npm run cli --",
"build:docker": "docker build -f Dockerfile -t livesync-cli ../../..",
"check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json",
"test:unit": "cd ../../.. && npx vitest run --config vitest.config.unit.ts src/apps/cli/main.unit.spec.ts src/apps/cli/commands/utils.unit.spec.ts src/apps/cli/commands/runCommand.unit.spec.ts src/apps/cli/commands/p2p.unit.spec.ts",
"test:e2e:two-vaults": "bash test/test-e2e-two-vaults-with-docker-linux.sh",
@@ -22,10 +23,17 @@
"test:e2e:p2p-upload-download-repro": "bash test/test-p2p-upload-download-repro-linux.sh",
"test:e2e:p2p-host": "bash test/test-p2p-host-linux.sh",
"test:e2e:p2p-sync": "bash test/test-p2p-sync-linux.sh",
"test:e2e:p2p-peers:local-relay": "bash test/test-p2p-peers-local-relay.sh",
"test:e2e:mirror": "bash test/test-mirror-linux.sh",
"pretest:e2e:all": "npm run build",
"test:e2e:all": " export RUN_BUILD=0 && npm run test:e2e:setup-put-cat && npm run test:e2e:push-pull && npm run test:e2e:sync-two-local && npm run test:e2e:p2p && npm run test:e2e:mirror && npm run test:e2e:two-vaults && npm run test:e2e:p2p"
"test:e2e:all": " export RUN_BUILD=0 && npm run test:e2e:setup-put-cat && npm run test:e2e:push-pull && npm run test:e2e:sync-two-local && npm run test:e2e:p2p && npm run test:e2e:mirror && npm run test:e2e:two-vaults && npm run test:e2e:p2p",
"pretest:e2e:docker:all": "npm run build:docker",
"test:e2e:docker:push-pull": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-push-pull-linux.sh",
"test:e2e:docker:setup-put-cat": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-setup-put-cat-linux.sh",
"test:e2e:docker:mirror": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-mirror-linux.sh",
"test:e2e:docker:sync-two-local": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-sync-two-local-databases-linux.sh",
"test:e2e:docker:p2p": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-p2p-three-nodes-conflict-linux.sh",
"test:e2e:docker:p2p-sync": "RUN_BUILD=0 LIVESYNC_TEST_DOCKER=1 bash test/test-p2p-sync-linux.sh",
"test:e2e:docker:all": "export RUN_BUILD=0 && npm run test:e2e:docker:setup-put-cat && npm run test:e2e:docker:push-pull && npm run test:e2e:docker:sync-two-local && npm run test:e2e:docker:mirror"
},
"dependencies": {},
"devDependencies": {}

View File

@@ -0,0 +1,24 @@
{
"name": "livesync-cli-runtime",
"private": true,
"version": "0.0.0",
"description": "Runtime dependencies for Self-hosted LiveSync CLI Docker image",
"dependencies": {
"commander": "^14.0.3",
"werift": "^0.22.9",
"pouchdb-adapter-http": "^9.0.0",
"pouchdb-adapter-idb": "^9.0.0",
"pouchdb-adapter-indexeddb": "^9.0.0",
"pouchdb-adapter-leveldb": "^9.0.0",
"pouchdb-adapter-memory": "^9.0.0",
"pouchdb-core": "^9.0.0",
"pouchdb-errors": "^9.0.0",
"pouchdb-find": "^9.0.0",
"pouchdb-mapreduce": "^9.0.0",
"pouchdb-merge": "^9.0.0",
"pouchdb-replication": "^9.0.0",
"pouchdb-utils": "^9.0.0",
"pouchdb-wrappers": "*",
"transform-pouch": "^2.0.0"
}
}

View File

@@ -0,0 +1,111 @@
import * as nodeFs from "node:fs";
import * as nodePath from "node:path";
type LocalStorageShape = {
getItem(key: string): string | null;
setItem(key: string, value: string): void;
removeItem(key: string): void;
clear(): void;
};
class PersistentNodeLocalStorage {
private storagePath: string | undefined;
private localStore: Record<string, string> = {};
configure(storagePath: string) {
if (this.storagePath === storagePath) {
return;
}
this.storagePath = storagePath;
this.loadFromFile();
}
private loadFromFile() {
if (!this.storagePath) {
this.localStore = {};
return;
}
try {
const loaded = JSON.parse(nodeFs.readFileSync(this.storagePath, "utf-8")) as Record<string, string>;
this.localStore = { ...loaded };
} catch {
this.localStore = {};
}
}
private flushToFile() {
if (!this.storagePath) {
return;
}
nodeFs.mkdirSync(nodePath.dirname(this.storagePath), { recursive: true });
nodeFs.writeFileSync(this.storagePath, JSON.stringify(this.localStore, null, 2), "utf-8");
}
getItem(key: string): string | null {
return this.localStore[key] ?? null;
}
setItem(key: string, value: string) {
this.localStore[key] = value;
this.flushToFile();
}
removeItem(key: string) {
if (!(key in this.localStore)) {
return;
}
delete this.localStore[key];
this.flushToFile();
}
clear() {
this.localStore = {};
this.flushToFile();
}
}
const persistentNodeLocalStorage = new PersistentNodeLocalStorage();
function createNodeLocalStorageShim(): LocalStorageShape {
return {
getItem(key: string) {
return persistentNodeLocalStorage.getItem(key);
},
setItem(key: string, value: string) {
persistentNodeLocalStorage.setItem(key, value);
},
removeItem(key: string) {
persistentNodeLocalStorage.removeItem(key);
},
clear() {
persistentNodeLocalStorage.clear();
},
};
}
export function ensureGlobalNodeLocalStorage() {
if (!("localStorage" in globalThis) || typeof (globalThis as any).localStorage?.getItem !== "function") {
(globalThis as any).localStorage = createNodeLocalStorageShim();
}
}
export function configureNodeLocalStorage(storagePath: string) {
persistentNodeLocalStorage.configure(storagePath);
ensureGlobalNodeLocalStorage();
}
export function getNodeLocalStorageItem(key: string): string {
return persistentNodeLocalStorage.getItem(key) ?? "";
}
export function setNodeLocalStorageItem(key: string, value: string) {
persistentNodeLocalStorage.setItem(key, value);
}
export function deleteNodeLocalStorageItem(key: string) {
persistentNodeLocalStorage.removeItem(key);
}
export function clearNodeLocalStorage() {
persistentNodeLocalStorage.clear();
}
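The persistence contract of the store above — every `setItem` rewrites the whole JSON file, so a fresh read of the file sees the latest state — can be sketched standalone. The temp path here is illustrative; the CLI configures `.livesync/runtime/local-storage.json` under the vault:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Minimal standalone sketch of the flush-on-write behaviour.
const storagePath = path.join(fs.mkdtempSync(path.join(os.tmpdir(), "ls-shim-")), "local-storage.json");
const store: Record<string, string> = {};

function setItem(key: string, value: string): void {
    store[key] = value;
    fs.mkdirSync(path.dirname(storagePath), { recursive: true });
    fs.writeFileSync(storagePath, JSON.stringify(store, null, 2), "utf-8");
}

setItem("checkpoint", "42");
const reloaded = JSON.parse(fs.readFileSync(storagePath, "utf-8")) as Record<string, string>;
console.log(reloaded.checkpoint);
// → 42
```

Synchronous writes keep the shim drop-in compatible with the browser `localStorage` interface, at the cost of one file rewrite per mutation.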

View File

@@ -0,0 +1,60 @@
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import {
clearNodeLocalStorage,
configureNodeLocalStorage,
ensureGlobalNodeLocalStorage,
getNodeLocalStorageItem,
setNodeLocalStorageItem,
} from "./NodeLocalStorage";
describe("NodeLocalStorage", () => {
const tempDirs: string[] = [];
afterEach(() => {
clearNodeLocalStorage();
for (const tempDir of tempDirs.splice(0)) {
fs.rmSync(tempDir, { recursive: true, force: true });
}
});
it("persists values to the configured file", () => {
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
tempDirs.push(tempDir);
const storagePath = path.join(tempDir, "runtime", "local-storage.json");
configureNodeLocalStorage(storagePath);
setNodeLocalStorageItem("checkpoint", "42");
const saved = JSON.parse(fs.readFileSync(storagePath, "utf-8")) as Record<string, string>;
expect(saved.checkpoint).toBe("42");
});
it("reloads persisted values when configured again", () => {
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
tempDirs.push(tempDir);
const storagePath = path.join(tempDir, "runtime", "local-storage.json");
fs.mkdirSync(path.dirname(storagePath), { recursive: true });
fs.writeFileSync(storagePath, JSON.stringify({ persisted: "value" }, null, 2), "utf-8");
configureNodeLocalStorage(storagePath);
expect(getNodeLocalStorageItem("persisted")).toBe("value");
});
it("installs a global localStorage shim backed by the same store", () => {
const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "livesync-node-local-storage-"));
tempDirs.push(tempDir);
const storagePath = path.join(tempDir, "runtime", "local-storage.json");
configureNodeLocalStorage(storagePath);
ensureGlobalNodeLocalStorage();
globalThis.localStorage.setItem("shared", "state");
expect(getNodeLocalStorageItem("shared")).toBe("state");
});
});

View File

@@ -5,17 +5,17 @@ import { handlers } from "@lib/services/lib/HandlerUtils";
import type { ObsidianLiveSyncSettings } from "@lib/common/types";
import type { ServiceContext } from "@lib/services/base/ServiceBase";
import { SettingService, type SettingServiceDependencies } from "@lib/services/base/SettingService";
import * as nodeFs from "node:fs";
import * as nodePath from "node:path";
import {
configureNodeLocalStorage,
deleteNodeLocalStorageItem,
getNodeLocalStorageItem,
setNodeLocalStorageItem,
} from "./NodeLocalStorage";
export class NodeSettingService<T extends ServiceContext> extends SettingService<T> {
private storagePath: string;
private localStore: Record<string, string> = {};
constructor(context: T, dependencies: SettingServiceDependencies, storagePath: string) {
super(context, dependencies);
this.storagePath = storagePath;
this.loadLocalStoreFromFile();
configureNodeLocalStorage(storagePath);
this.onSettingSaved.addHandler((settings) => {
eventHub.emitEvent(EVENT_SETTING_SAVED, settings);
return Promise.resolve(true);
@@ -26,34 +26,16 @@ export class NodeSettingService<T extends ServiceContext> extends SettingService
});
}
private loadLocalStoreFromFile() {
try {
const loaded = JSON.parse(nodeFs.readFileSync(this.storagePath, "utf-8")) as Record<string, string>;
this.localStore = { ...loaded };
} catch {
this.localStore = {};
}
}
private flushLocalStoreToFile() {
nodeFs.mkdirSync(nodePath.dirname(this.storagePath), { recursive: true });
nodeFs.writeFileSync(this.storagePath, JSON.stringify(this.localStore, null, 2), "utf-8");
}
protected setItem(key: string, value: string) {
this.localStore[key] = value;
this.flushLocalStoreToFile();
setNodeLocalStorageItem(key, value);
}
protected getItem(key: string): string {
return this.localStore[key] ?? "";
return getNodeLocalStorageItem(key);
}
protected deleteItem(key: string): void {
if (key in this.localStore) {
delete this.localStore[key];
this.flushLocalStoreToFile();
}
deleteNodeLocalStorageItem(key);
}
public saveData = handlers<{ saveData: (data: ObsidianLiveSyncSettings) => Promise<void> }>().binder("saveData");

src/apps/cli/test/test-e2e-two-vaults-common.sh Executable file → Normal file
View File

@@ -136,6 +136,8 @@ fi
TARGET_A_ONLY="e2e/a-only-info.md"
TARGET_SYNC="e2e/sync-info.md"
TARGET_SYNC_TWICE_FIRST="e2e/sync-twice-first.md"
TARGET_SYNC_TWICE_SECOND="e2e/sync-twice-second.md"
TARGET_PUSH="e2e/pushed-from-a.md"
TARGET_PUT="e2e/put-from-a.md"
TARGET_PUSH_BINARY="e2e/pushed-from-a.bin"
@@ -154,6 +156,20 @@ INFO_B_SYNC="$(run_cli_b info "$TARGET_SYNC")"
cli_test_assert_contains "$INFO_B_SYNC" "\"path\": \"$TARGET_SYNC\"" "B info should include path after sync"
echo "[PASS] sync A->B and B info"
echo "[CASE] B can sync again after first replication has completed"
printf 'first-sync-round-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_SYNC_TWICE_FIRST" >/dev/null
run_cli_a sync >/dev/null
run_cli_b sync >/dev/null
CAT_B_SYNC_TWICE_FIRST="$(run_cli_b cat "$TARGET_SYNC_TWICE_FIRST" | cli_test_sanitise_cat_stdout)"
cli_test_assert_equal "first-sync-round-$DB_SUFFIX" "$CAT_B_SYNC_TWICE_FIRST" "B should receive first update after first sync"
printf 'second-sync-round-%s\n' "$DB_SUFFIX" | run_cli_a put "$TARGET_SYNC_TWICE_SECOND" >/dev/null
run_cli_a sync >/dev/null
run_cli_b sync >/dev/null
CAT_B_SYNC_TWICE_SECOND="$(run_cli_b cat "$TARGET_SYNC_TWICE_SECOND" | cli_test_sanitise_cat_stdout)"
cli_test_assert_equal "second-sync-round-$DB_SUFFIX" "$CAT_B_SYNC_TWICE_SECOND" "B should receive second update after re-running sync"
echo "[PASS] second sync after completion works"
echo "[CASE] A pushes and puts, both sync, and B can pull and cat"
PUSH_SRC="$WORK_DIR/push-source.txt"
PULL_DST="$WORK_DIR/pull-destination.txt"

src/apps/cli/test/test-e2e-two-vaults-matrix.sh Executable file → Normal file
View File

View File

View File

@@ -0,0 +1,150 @@
#!/usr/bin/env bash
# test-helpers-docker.sh
#
# Docker-mode overrides for test-helpers.sh.
# Sourced automatically at the end of test-helpers.sh when
# LIVESYNC_TEST_DOCKER=1 is set, replacing run_cli (and related helpers)
# with a Docker-based implementation.
#
# The Docker container and the host share a common directory layout:
# $WORK_DIR (host) <-> /workdir (container)
# $CLI_DIR (host) <-> /clidir (container)
#
# Usage (run an existing test against the Docker image):
# LIVESYNC_TEST_DOCKER=1 bash test/test-push-pull-linux.sh
# LIVESYNC_TEST_DOCKER=1 bash test/test-mirror-linux.sh
# LIVESYNC_TEST_DOCKER=1 bash test/test-sync-two-local-databases-linux.sh
# LIVESYNC_TEST_DOCKER=1 bash test/test-setup-put-cat-linux.sh
#
# Optional environment variables:
# DOCKER_IMAGE Image name/tag to use (default: livesync-cli)
# RUN_BUILD Set to 1 to rebuild the Docker image before the test
# (default: 0 — assumes the image is already built)
# Build command: npm run build:docker (from src/apps/cli/)
#
# Notes:
# - The container is started with --network host so that it can reach
# CouchDB / P2P relay containers that are also using the host network.
# - On macOS / Windows Docker Desktop --network host behaves differently
# (it is not a true host-network bridge); tests that rely on localhost
# connectivity to other containers may fail on those platforms.
# Ensure Docker-mode tests do not trigger host-side `npm run build` unless
# explicitly requested by the caller.
RUN_BUILD="${RUN_BUILD:-0}"
# Override the standard implementation.
# In Docker mode the CLI_CMD array is a no-op sentinel; run_cli is overridden
# directly.
cli_test_init_cli_cmd() {
DOCKER_IMAGE="${DOCKER_IMAGE:-livesync-cli}"
# CLI_CMD is unused in Docker mode; set a sentinel so existing code
# that references it will not error.
CLI_CMD=(__docker__)
}
# ─── display_test_info ────────────────────────────────────────────────────────
display_test_info() {
local image="${DOCKER_IMAGE:-livesync-cli}"
local image_id
image_id="$(docker inspect --format='{{slice .Id 7 19}}' "$image" 2>/dev/null || echo "N/A")"
echo "======================"
echo "Script: ${BASH_SOURCE[1]:-$0}"
echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "Commit: $(git -C "${SCRIPT_DIR:-.}" rev-parse --short HEAD 2>/dev/null || echo "N/A")"
echo "Mode: Docker image=${image} id=${image_id}"
echo "======================"
}
# ─── _docker_translate_arg ───────────────────────────────────────────────────
# Translate a single host filesystem path to its in-container equivalent.
# Paths under WORK_DIR → /workdir/...
# Paths under CLI_DIR → /clidir/...
# Everything else is returned unchanged (relative paths, URIs, plain names).
_docker_translate_arg() {
local arg="$1"
if [[ -n "${WORK_DIR:-}" && "$arg" == "$WORK_DIR"* ]]; then
printf '%s' "/workdir${arg#$WORK_DIR}"
return
fi
if [[ -n "${CLI_DIR:-}" && "$arg" == "$CLI_DIR"* ]]; then
printf '%s' "/clidir${arg#$CLI_DIR}"
return
fi
printf '%s' "$arg"
}
# ─── run_cli ─────────────────────────────────────────────────────────────────
# Drop-in replacement for run_cli that executes the CLI inside a Docker
# container, translating host paths to container paths automatically.
#
# Calling convention is identical to the native run_cli:
# run_cli <vault-path> [options] <command> [command-args]
# run_cli init-settings [options] <settings-file>
#
# The vault path (first positional argument for regular commands) is forwarded
# via the LIVESYNC_DB_PATH environment variable so that docker-entrypoint.sh
# can inject it before the remaining CLI arguments.
run_cli() {
local args=("$@")
# ── 1. Translate all host paths to container paths ────────────────────
local translated=()
for arg in "${args[@]}"; do
translated+=("$(_docker_translate_arg "$arg")")
done
# ── 2. Split vault path from the rest of the arguments ───────────────
local first="${translated[0]:-}"
local env_args=()
local cli_args=()
# These tokens are commands or flags that appear before any vault path.
case "$first" in
"" | --help | -h \
| init-settings \
| -v | --verbose | -d | --debug | -f | --force | -s | --settings)
# No leading vault path — pass all translated args as-is.
cli_args=("${translated[@]}")
;;
*)
# First arg is the vault path; hand it to docker-entrypoint.sh
# via LIVESYNC_DB_PATH so the entrypoint prepends it correctly.
env_args+=(-e "LIVESYNC_DB_PATH=$first")
cli_args=("${translated[@]:1}")
;;
esac
# ── 3. Inject verbose flag ───────────────────────────────────────────
if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then
cli_args=(-v "${cli_args[@]}")
fi
# ── 4. Volume mounts ──────────────────────────────────────────────────
local vol_args=()
if [[ -n "${WORK_DIR:-}" ]]; then
vol_args+=(-v "${WORK_DIR}:/workdir")
fi
# Mount CLI_DIR (src/apps/cli) for two-vault tests that store vault data
# under $CLI_DIR/.livesync/.
if [[ -n "${CLI_DIR:-}" ]]; then
vol_args+=(-v "${CLI_DIR}:/clidir")
fi
# ── 5. stdin forwarding ───────────────────────────────────────────────
# Attach stdin only when it is a pipe (the 'put' command reads from stdin).
# Without -i the pipe data would never reach the container process.
local stdin_flags=()
if [[ ! -t 0 ]]; then
stdin_flags=(-i)
fi
docker run --rm \
"${stdin_flags[@]}" \
--network host \
--user "$(id -u):$(id -g)" \
"${vol_args[@]}" \
"${env_args[@]}" \
"${DOCKER_IMAGE:-livesync-cli}" \
"${cli_args[@]}"
}
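The path-translation rule that `run_cli` applies before invoking Docker can be expressed compactly in TypeScript; the directory values below are illustrative:

```typescript
// Mirrors _docker_translate_arg: host paths under WORK_DIR map to /workdir,
// paths under CLI_DIR map to /clidir, and everything else (relative paths,
// URIs, plain names) passes through unchanged.
function translateArg(arg: string, workDir: string, cliDir: string): string {
    if (workDir !== "" && arg.startsWith(workDir)) return "/workdir" + arg.slice(workDir.length);
    if (cliDir !== "" && arg.startsWith(cliDir)) return "/clidir" + arg.slice(cliDir.length);
    return arg;
}

console.log(translateArg("/tmp/work/push-source.txt", "/tmp/work", "/repo/src/apps/cli"));
// → /workdir/push-source.txt
console.log(translateArg("folder/note.md", "/tmp/work", "/repo/src/apps/cli"));
// → folder/note.md
```

Because the translation is a pure prefix substitution, it only works if the corresponding volume mounts (`WORK_DIR:/workdir`, `CLI_DIR:/clidir`) are in place, which is exactly what step 4 of `run_cli` sets up.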

View File

@@ -1,5 +1,15 @@
#!/usr/bin/env bash
# ─── local init hook ────────────────────────────────────────────────────────
# If test-init.local.sh exists alongside this file, source it before anything
# else. Use it to set up your local environment (e.g. activate nvm, set
# DOCKER_IMAGE, ...). The file is git-ignored so it is safe to put personal
# or machine-specific configuration there.
_TEST_HELPERS_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=/dev/null
[[ -f "$_TEST_HELPERS_DIR/test-init.local.sh" ]] && source "$_TEST_HELPERS_DIR/test-init.local.sh"
unset _TEST_HELPERS_DIR
cli_test_init_cli_cmd() {
if [[ "${VERBOSE_TEST_LOGGING:-0}" == "1" ]]; then
CLI_CMD=(npm --silent run cli -- -v)
@@ -343,4 +353,10 @@ display_test_info(){
echo "Date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "Git commit: $(git -C "$SCRIPT_DIR/.." rev-parse --short HEAD 2>/dev/null || echo "N/A")"
echo "======================"
}
}
# Docker-mode hook — source overrides when LIVESYNC_TEST_DOCKER=1.
if [[ "${LIVESYNC_TEST_DOCKER:-0}" == "1" ]]; then
# shellcheck source=/dev/null
source "$(dirname "${BASH_SOURCE[0]}")/test-helpers-docker.sh"
fi

src/apps/cli/test/test-mirror-linux.sh Executable file → Normal file
View File

View File

src/apps/cli/test/test-setup-put-cat-linux.sh Executable file → Normal file
View File

View File

@@ -0,0 +1,136 @@
#!/usr/bin/env bash
# Test: CLI sync behaviour against a locked remote database.
#
# Scenario:
# 1. Start CouchDB, create a test database, and perform an initial sync so that
# the milestone document is created on the remote.
# 2. Unlock the milestone (locked=false, accepted_nodes=[]) and verify sync
# succeeds without the locked error message.
# 3. Lock the milestone (locked=true, accepted_nodes=[]) and verify sync fails
# with an actionable error message.
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"
display_test_info
RUN_BUILD="${RUN_BUILD:-1}"
TEST_ENV_FILE="${TEST_ENV_FILE:-$CLI_DIR/.test.env}"
cli_test_init_cli_cmd
if [[ ! -f "$TEST_ENV_FILE" ]]; then
echo "[ERROR] test env file not found: $TEST_ENV_FILE" >&2
exit 1
fi
set -a
source "$TEST_ENV_FILE"
set +a
DB_SUFFIX="$(date +%s)-$RANDOM"
COUCHDB_URI="${hostname%/}"
COUCHDB_DBNAME="${dbname}-locked-${DB_SUFFIX}"
COUCHDB_USER="${username:-}"
COUCHDB_PASSWORD="${password:-}"
if [[ -z "$COUCHDB_URI" || -z "$COUCHDB_USER" || -z "$COUCHDB_PASSWORD" ]]; then
echo "[ERROR] COUCHDB_URI, COUCHDB_USER, COUCHDB_PASSWORD are required" >&2
exit 1
fi
WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-cli-locked-test.XXXXXX")"
VAULT_DIR="$WORK_DIR/vault"
SETTINGS_FILE="$WORK_DIR/settings.json"
mkdir -p "$VAULT_DIR"
cleanup() {
local exit_code=$?
cli_test_stop_couchdb
rm -rf "$WORK_DIR"
exit "$exit_code"
}
trap cleanup EXIT
if [[ "$RUN_BUILD" == "1" ]]; then
echo "[INFO] building CLI"
npm run build
fi
echo "[INFO] starting CouchDB and creating test database: $COUCHDB_DBNAME"
cli_test_start_couchdb "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME"
echo "[INFO] preparing settings"
cli_test_init_settings_file "$SETTINGS_FILE"
cli_test_apply_couchdb_settings "$SETTINGS_FILE" "$COUCHDB_URI" "$COUCHDB_USER" "$COUCHDB_PASSWORD" "$COUCHDB_DBNAME" 1
echo "[INFO] initial sync to create milestone document"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >/dev/null
MILESTONE_ID="_local/obsydian_livesync_milestone"
MILESTONE_URL="${COUCHDB_URI}/${COUCHDB_DBNAME}/${MILESTONE_ID}"
update_milestone() {
local locked="$1"
local accepted_nodes="$2"
local current
current="$(cli_test_curl_json --user "${COUCHDB_USER}:${COUCHDB_PASSWORD}" "$MILESTONE_URL")"
local updated
updated="$(node -e '
const doc = JSON.parse(process.argv[1]);
doc.locked = process.argv[2] === "true";
doc.accepted_nodes = JSON.parse(process.argv[3]);
process.stdout.write(JSON.stringify(doc));
' "$current" "$locked" "$accepted_nodes")"
cli_test_curl_json -X PUT \
--user "${COUCHDB_USER}:${COUCHDB_PASSWORD}" \
-H "Content-Type: application/json" \
-d "$updated" \
"$MILESTONE_URL" >/dev/null
}
SYNC_LOG="$WORK_DIR/sync.log"
echo "[CASE] sync should succeed when remote is not locked"
update_milestone "false" "[]"
set +e
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >"$SYNC_LOG" 2>&1
SYNC_EXIT=$?
set -e
if [[ "$SYNC_EXIT" -ne 0 ]]; then
echo "[FAIL] sync should succeed when remote is not locked" >&2
cat "$SYNC_LOG" >&2
exit 1
fi
if grep -Fq "The remote database is locked" "$SYNC_LOG"; then
echo "[FAIL] locked error should not appear when remote is not locked" >&2
cat "$SYNC_LOG" >&2
exit 1
fi
echo "[PASS] unlocked remote DB syncs successfully"
echo "[CASE] sync should fail with actionable error when remote is locked"
update_milestone "true" "[]"
set +e
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" sync >"$SYNC_LOG" 2>&1
SYNC_EXIT=$?
set -e
if [[ "$SYNC_EXIT" -eq 0 ]]; then
echo "[FAIL] sync should have exited with non-zero when remote is locked" >&2
cat "$SYNC_LOG" >&2
exit 1
fi
cli_test_assert_contains "$(cat "$SYNC_LOG")" \
"The remote database is locked and this device is not yet accepted" \
"sync output should contain the locked-remote error message"
echo "[PASS] locked remote DB produces actionable CLI error"

View File

View File

@@ -1,2 +1,30 @@
#!/bin/bash
docker run -d --name relay-test -p 4000:8080 scsibug/nostr-rs-relay:latest
set -e
docker run -d --name relay-test -p 4000:7777 \
--tmpfs /app/strfry-db:rw,size=256m \
--entrypoint sh \
ghcr.io/hoytech/strfry:latest \
-lc 'cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"
relay {
bind = "0.0.0.0"
port = 7777
nofiles = 100000
info {
name = "livesync test relay"
description = "local relay for livesync p2p tests"
}
maxWebsocketPayloadSize = 131072
autoPingSeconds = 55
writePolicy {
plugin = ""
}
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay'

View File

@@ -12,8 +12,7 @@ const defaultExternal = [
"pouchdb-adapter-leveldb",
"commander",
"punycode",
"node-datachannel",
"node-datachannel/polyfill",
"werift",
];
export default defineConfig({
plugins: [svelte()],
@@ -52,7 +51,7 @@ export default defineConfig({
if (id === "fs" || id === "fs/promises" || id === "path" || id === "crypto" || id === "worker_threads")
return true;
if (id.startsWith("pouchdb-")) return true;
if (id.startsWith("node-datachannel")) return true;
if (id.startsWith("werift")) return true;
if (id.startsWith("node:")) return true;
return false;
},

View File

@@ -2,3 +2,4 @@ node_modules
dist
.DS_Store
*.log
.nyc_output

View File

@@ -0,0 +1,58 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync WebApp — Docker image
# Browser-based vault sync using the File System Access API, served by nginx.
#
# Build (from the repository root):
# docker build -f src/apps/webapp/Dockerfile -t livesync-webapp .
#
# Run:
# docker run --rm -p 8080:80 livesync-webapp
# Then open http://localhost:8080/webapp.html in Chrome/Edge 86+.
#
# Notes:
# - This image serves purely static files; no server-side code is involved.
# - The File System Access API is a browser feature and requires Chrome/Edge 86+
#   or Safari 15.2+ (limited support). Firefox is not supported.
# - CouchDB / S3 connections are made directly from the browser; the container
# only serves HTML/JS/CSS assets.
# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to install dependencies and build the Vite bundle.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder
WORKDIR /build
# Install workspace dependencies (all apps share the root package.json)
COPY package.json ./
RUN npm install
# Copy the full source tree and build the WebApp bundle
COPY . .
RUN cd src/apps/webapp && npm run build
# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime
# Minimal nginx image that serves the static build output.
# ─────────────────────────────────────────────────────────────────────────────
FROM nginx:stable-alpine
# Remove the default nginx welcome page
RUN rm -rf /usr/share/nginx/html/*
# Copy the built static assets
COPY --from=builder /build/src/apps/webapp/dist /usr/share/nginx/html
# Redirect the root to webapp.html so the app loads on first visit
RUN printf 'server {\n\
listen 80;\n\
root /usr/share/nginx/html;\n\
index webapp.html;\n\
location / {\n\
try_files $uri $uri/ =404;\n\
}\n\
}\n' > /etc/nginx/conf.d/default.conf
EXPOSE 80

View File

@@ -55,8 +55,8 @@ The built files will be in the `dist` directory.
### Usage
1. Open the webapp in your browser
2. Grant directory access when prompted
1. Open the webapp in your browser (`webapp.html`)
2. Select a vault from history or grant access to a new directory
3. Configure CouchDB connection by editing `.livesync/settings.json` in your vault
- You can also copy data.json from Obsidian's plug-in folder.
@@ -98,8 +98,11 @@ webapp/
│ ├── ServiceFileAccessImpl.ts
│ ├── DatabaseFileAccess.ts
│ └── FSAPIServiceModules.ts
├── main.ts # Application entry point
├── index.html # HTML entry
├── bootstrap.ts # Vault picker + startup orchestration
├── main.ts # LiveSync core bootstrap (after vault selected)
├── vaultSelector.ts # FileSystem handle history and permission flow
├── webapp.html # Main HTML entry
├── index.html # Redirect entry for compatibility
├── package.json
├── vite.config.ts
└── README.md

View File

@@ -0,0 +1,139 @@
import { LiveSyncWebApp } from "./main";
import { VaultHistoryStore, type VaultHistoryItem } from "./vaultSelector";
const historyStore = new VaultHistoryStore();
let app: LiveSyncWebApp | null = null;
function getRequiredElement<T extends HTMLElement>(id: string): T {
const element = document.getElementById(id);
if (!element) {
throw new Error(`Missing element: #${id}`);
}
return element as T;
}
function setStatus(kind: "info" | "warning" | "error" | "success", message: string): void {
const statusEl = getRequiredElement<HTMLDivElement>("status");
statusEl.className = kind;
statusEl.textContent = message;
}
function setBusyState(isBusy: boolean): void {
const pickNewBtn = getRequiredElement<HTMLButtonElement>("pick-new-vault");
pickNewBtn.disabled = isBusy;
const historyButtons = document.querySelectorAll<HTMLButtonElement>(".vault-item button");
historyButtons.forEach((button) => {
button.disabled = isBusy;
});
}
function formatLastUsed(unixMillis: number): string {
if (!unixMillis) {
return "unknown";
}
return new Date(unixMillis).toLocaleString();
}
async function renderHistoryList(): Promise<VaultHistoryItem[]> {
const listEl = getRequiredElement<HTMLDivElement>("vault-history-list");
const emptyEl = getRequiredElement<HTMLParagraphElement>("vault-history-empty");
const [items, lastUsedId] = await Promise.all([historyStore.getVaultHistory(), historyStore.getLastUsedVaultId()]);
listEl.innerHTML = "";
emptyEl.classList.toggle("is-hidden", items.length > 0);
for (const item of items) {
const row = document.createElement("div");
row.className = "vault-item";
const info = document.createElement("div");
info.className = "vault-item-info";
const name = document.createElement("div");
name.className = "vault-item-name";
name.textContent = item.name;
const meta = document.createElement("div");
meta.className = "vault-item-meta";
const label = item.id === lastUsedId ? "Last used" : "Used";
meta.textContent = `${label}: ${formatLastUsed(item.lastUsedAt)}`;
info.append(name, meta);
const useButton = document.createElement("button");
useButton.type = "button";
useButton.textContent = "Use this vault";
useButton.addEventListener("click", () => {
void startWithHistory(item);
});
row.append(info, useButton);
listEl.appendChild(row);
}
return items;
}
async function startWithHandle(handle: FileSystemDirectoryHandle): Promise<void> {
setStatus("info", `Starting LiveSync with vault: ${handle.name}`);
app = new LiveSyncWebApp(handle);
await app.initialize();
const selectorEl = getRequiredElement<HTMLDivElement>("vault-selector");
selectorEl.classList.add("is-hidden");
}
async function startWithHistory(item: VaultHistoryItem): Promise<void> {
setBusyState(true);
try {
const handle = await historyStore.activateHistoryItem(item);
await startWithHandle(handle);
} catch (error) {
console.error("[Directory] Failed to open history vault:", error);
setStatus("error", `Failed to open saved vault: ${String(error)}`);
setBusyState(false);
}
}
async function startWithNewPicker(): Promise<void> {
setBusyState(true);
try {
const handle = await historyStore.pickNewVault();
await startWithHandle(handle);
} catch (error) {
console.error("[Directory] Failed to pick vault:", error);
setStatus("warning", `Vault selection was cancelled or failed: ${String(error)}`);
setBusyState(false);
}
}
async function initializeVaultSelector(): Promise<void> {
setStatus("info", "Select a vault folder to start LiveSync.");
const pickNewBtn = getRequiredElement<HTMLButtonElement>("pick-new-vault");
pickNewBtn.addEventListener("click", () => {
void startWithNewPicker();
});
await renderHistoryList();
}
window.addEventListener("load", async () => {
try {
await initializeVaultSelector();
} catch (error) {
console.error("Failed to initialize vault selector:", error);
setStatus("error", `Initialization failed: ${String(error)}`);
}
});
window.addEventListener("beforeunload", () => {
void app?.shutdown();
});
(window as any).livesyncApp = {
getApp: () => app,
historyStore,
};


@@ -3,207 +3,10 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Self-hosted LiveSync WebApp</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
padding: 20px;
}
.container {
background: white;
border-radius: 12px;
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
padding: 40px;
max-width: 600px;
width: 100%;
}
h1 {
color: #333;
margin-bottom: 10px;
font-size: 28px;
}
.subtitle {
color: #666;
margin-bottom: 30px;
font-size: 14px;
}
#status {
padding: 15px;
border-radius: 8px;
margin-bottom: 20px;
font-size: 14px;
font-weight: 500;
}
#status.error {
background: #fee;
color: #c33;
border: 1px solid #fcc;
}
#status.warning {
background: #ffeaa7;
color: #d63031;
border: 1px solid #fdcb6e;
}
#status.success {
background: #d4edda;
color: #155724;
border: 1px solid #c3e6cb;
}
#status.info {
background: #d1ecf1;
color: #0c5460;
border: 1px solid #bee5eb;
}
.info-section {
margin-top: 30px;
padding: 20px;
background: #f8f9fa;
border-radius: 8px;
}
.info-section h2 {
font-size: 18px;
margin-bottom: 15px;
color: #333;
}
.info-section ul {
list-style: none;
padding-left: 0;
}
.info-section li {
padding: 8px 0;
color: #666;
font-size: 14px;
}
.info-section li::before {
content: "•";
color: #667eea;
font-weight: bold;
display: inline-block;
width: 1em;
margin-left: -1em;
padding-right: 0.5em;
}
.feature-list {
margin-top: 20px;
}
.feature-list h3 {
font-size: 16px;
margin-bottom: 10px;
color: #444;
}
code {
background: #e9ecef;
padding: 2px 6px;
border-radius: 4px;
font-family: 'Courier New', monospace;
font-size: 13px;
}
.footer {
margin-top: 30px;
text-align: center;
color: #999;
font-size: 12px;
}
.footer a {
color: #667eea;
text-decoration: none;
}
.footer a:hover {
text-decoration: underline;
}
.console-link {
margin-top: 20px;
text-align: center;
font-size: 13px;
color: #666;
}
@media (max-width: 600px) {
.container {
padding: 30px 20px;
}
h1 {
font-size: 24px;
}
}
</style>
<title>Self-hosted LiveSync WebApp Launcher</title>
<meta http-equiv="refresh" content="0; url=./webapp.html">
</head>
<body>
<div class="container">
<h1>🔄 Self-hosted LiveSync</h1>
<p class="subtitle">Browser-based Self-hosted LiveSync using FileSystem API</p>
<div id="status" class="info">
Initialising...
</div>
<div class="info-section">
<h2>About This Application</h2>
<ul>
<li>Runs entirely in your browser</li>
<li>Uses FileSystem API to access your local vault</li>
<li>Syncs with CouchDB server (like Obsidian plugin)</li>
<li>Settings stored in <code>.livesync/settings.json</code></li>
<li>Real-time file watching with FileSystemObserver (Chrome 124+)</li>
</ul>
</div>
<div class="info-section">
<h2>How to Use</h2>
<ul>
<li>Grant directory access when prompted</li>
<li>Create <code>.livesync/settings.json</code> in your vault folder. (Compatible with Obsidian's Self-hosted LiveSync)</li>
<li>Add your CouchDB connection details</li>
<li>Your files will be synced automatically</li>
</ul>
</div>
<div class="console-link">
💡 Open browser console (F12) for detailed logs
</div>
<div class="footer">
<p>
Powered by
<a href="https://github.com/vrtmrz/obsidian-livesync" target="_blank">
Self-hosted LiveSync
</a>
</p>
</div>
</div>
<script type="module" src="./main.ts"></script>
<p>Redirecting to <a href="./webapp.html">WebApp</a>...</p>
</body>
</html>


@@ -13,10 +13,12 @@ import type { InjectableSettingService } from "@lib/services/implements/injectab
import { useOfflineScanner } from "@lib/serviceFeatures/offlineScanner";
import { useRedFlagFeatures } from "@/serviceFeatures/redFlag";
import { useCheckRemoteSize } from "@lib/serviceFeatures/checkRemoteSize";
import { useSetupURIFeature } from "@lib/serviceFeatures/setupObsidian/setupUri";
import { useRemoteConfiguration } from "@lib/serviceFeatures/remoteConfig";
import { SetupManager } from "@/modules/features/SetupManager";
// import { ModuleObsidianSettingsAsMarkdown } from "@/modules/features/ModuleObsidianSettingAsMarkdown";
import { ModuleSetupObsidian } from "@/modules/features/ModuleSetupObsidian";
// import { ModuleObsidianMenu } from "@/modules/essentialObsidian/ModuleObsidianMenu";
import { useSetupManagerHandlersFeature } from "@/serviceFeatures/setupObsidian/setupManagerHandlers";
import { useP2PReplicatorCommands } from "@/lib/src/replication/trystero/useP2PReplicatorCommands";
import { useP2PReplicatorFeature } from "@/lib/src/replication/trystero/useP2PReplicatorFeature";
const SETTINGS_DIR = ".livesync";
const SETTINGS_FILE = "settings.json";
@@ -47,21 +49,18 @@ const DEFAULT_SETTINGS: Partial<ObsidianLiveSyncSettings> = {
};
class LiveSyncWebApp {
private rootHandle: FileSystemDirectoryHandle | null = null;
private rootHandle: FileSystemDirectoryHandle;
private core: LiveSyncBaseCore<ServiceContext, any> | null = null;
private serviceHub: BrowserServiceHub<ServiceContext> | null = null;
constructor(rootHandle: FileSystemDirectoryHandle) {
this.rootHandle = rootHandle;
}
async initialize() {
console.log("Self-hosted LiveSync WebApp");
console.log("Initializing...");
// Request directory access
await this.requestDirectoryAccess();
if (!this.rootHandle) {
throw new Error("Failed to get directory access");
}
console.log(`Vault directory: ${this.rootHandle.name}`);
// Create service context and hub
@@ -98,18 +97,26 @@ class LiveSyncWebApp {
return DEFAULT_SETTINGS as ObsidianLiveSyncSettings;
});
// App lifecycle handlers
this.serviceHub.appLifecycle.scheduleRestart.setHandler(async () => {
console.log("[AppLifecycle] Restart requested");
await this.shutdown();
await this.initialize();
setTimeout(() => {
window.location.reload();
}, 1000);
});
// Create LiveSync core
this.core = new LiveSyncBaseCore(
this.serviceHub,
(core, serviceHub) => {
return initialiseServiceModulesFSAPI(this.rootHandle!, core, serviceHub);
return initialiseServiceModulesFSAPI(this.rootHandle, core, serviceHub);
},
(core) => [
// new ModuleObsidianEvents(this, core),
// new ModuleObsidianSettingDialogue(this, core),
// new ModuleObsidianMenu(core),
new ModuleSetupObsidian(core),
new SetupManager(core),
// new ModuleObsidianSettingsAsMarkdown(core),
// new ModuleLog(this, core),
// new ModuleObsidianDocumentHistory(this, core),
@@ -118,13 +125,20 @@ class LiveSyncWebApp {
// new ModuleDev(this, core),
// new ModuleReplicateTest(this, core),
// new ModuleIntegratedTest(this, core),
// new SetupManager(core),
// new ModuleReplicatorP2P(core), // Register P2P replicator for CLI (useP2PReplicator is not used here)
new SetupManager(core),
],
() => [], // No add-ons
(core) => {
useOfflineScanner(core);
useRedFlagFeatures(core);
useCheckRemoteSize(core);
useRemoteConfiguration(core);
const replicator = useP2PReplicatorFeature(core);
useP2PReplicatorCommands(core, replicator);
const setupManager = core.getModule(SetupManager);
useSetupManagerHandlersFeature(core, setupManager);
useSetupURIFeature(core);
}
);
@@ -133,8 +147,6 @@ class LiveSyncWebApp {
}
private async saveSettingsToFile(data: ObsidianLiveSyncSettings): Promise<void> {
if (!this.rootHandle) return;
try {
// Create .livesync directory if it doesn't exist
const livesyncDir = await this.rootHandle.getDirectoryHandle(SETTINGS_DIR, { create: true });
@@ -151,8 +163,6 @@ class LiveSyncWebApp {
}
private async loadSettingsFromFile(): Promise<Partial<ObsidianLiveSyncSettings> | null> {
if (!this.rootHandle) return null;
try {
const livesyncDir = await this.rootHandle.getDirectoryHandle(SETTINGS_DIR);
const fileHandle = await livesyncDir.getFileHandle(SETTINGS_FILE);
@@ -165,90 +175,6 @@ class LiveSyncWebApp {
}
}
private async requestDirectoryAccess() {
try {
// Check if we have a cached directory handle
const cached = await this.loadCachedDirectoryHandle();
if (cached) {
// Verify permission (cast to any for compatibility)
try {
const permission = await (cached as any).queryPermission({ mode: "readwrite" });
if (permission === "granted") {
this.rootHandle = cached;
console.log("[Directory] Using cached directory handle");
return;
}
} catch (e) {
// queryPermission might not be supported, try to use anyway
console.log("[Directory] Could not verify permission, requesting new access");
}
}
// Request new directory access
console.log("[Directory] Requesting directory access...");
this.rootHandle = await (window as any).showDirectoryPicker({
mode: "readwrite",
startIn: "documents",
});
// Save the handle for next time
await this.saveCachedDirectoryHandle(this.rootHandle);
console.log("[Directory] Directory access granted");
} catch (error) {
console.error("[Directory] Failed to get directory access:", error);
throw error;
}
}
private async saveCachedDirectoryHandle(handle: FileSystemDirectoryHandle) {
try {
// Use IndexedDB to store the directory handle
const db = await this.openHandleDB();
const transaction = db.transaction(["handles"], "readwrite");
const store = transaction.objectStore("handles");
await new Promise((resolve, reject) => {
const request = store.put(handle, "rootHandle");
request.onsuccess = resolve;
request.onerror = reject;
});
db.close();
} catch (error) {
console.error("[Directory] Failed to cache handle:", error);
}
}
private async loadCachedDirectoryHandle(): Promise<FileSystemDirectoryHandle | null> {
try {
const db = await this.openHandleDB();
const transaction = db.transaction(["handles"], "readonly");
const store = transaction.objectStore("handles");
const handle = await new Promise<FileSystemDirectoryHandle | null>((resolve, reject) => {
const request = store.get("rootHandle");
request.onsuccess = () => resolve(request.result || null);
request.onerror = reject;
});
db.close();
return handle;
} catch (error) {
console.error("[Directory] Failed to load cached handle:", error);
return null;
}
}
private async openHandleDB(): Promise<IDBDatabase> {
return new Promise((resolve, reject) => {
const request = indexedDB.open("livesync-webapp-handles", 1);
request.onerror = () => reject(request.error);
request.onsuccess = () => resolve(request.result);
request.onupgradeneeded = (event) => {
const db = (event.target as IDBOpenDBRequest).result;
if (!db.objectStoreNames.contains("handles")) {
db.createObjectStore("handles");
}
};
});
}
private async start() {
if (!this.core) {
throw new Error("Core not initialized");
@@ -333,21 +259,4 @@ class LiveSyncWebApp {
}
}
// Initialize on load
const app = new LiveSyncWebApp();
window.addEventListener("load", async () => {
try {
await app.initialize();
} catch (error) {
console.error("Failed to initialize:", error);
}
});
// Handle page unload
window.addEventListener("beforeunload", () => {
void app.shutdown();
});
// Export for debugging
(window as any).livesyncApp = app;
export { LiveSyncWebApp };


@@ -7,6 +7,8 @@
"scripts": {
"dev": "vite",
"build": "vite build",
"build:docker": "docker build -f Dockerfile -t livesync-webapp ../../..",
"run:docker": "docker run -p 8002:80 livesync-webapp",
"preview": "vite preview"
},
"dependencies": {},


@@ -0,0 +1,81 @@
import { defineConfig, devices } from "@playwright/test";
import * as path from "path";
import * as fs from "fs";
import { fileURLToPath } from "url";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// ---------------------------------------------------------------------------
// Load environment variables from .test.env (root) so that CouchDB
// connection details are visible to the test process.
// ---------------------------------------------------------------------------
function loadEnvFile(envPath: string): Record<string, string> {
const result: Record<string, string> = {};
if (!fs.existsSync(envPath)) return result;
const lines = fs.readFileSync(envPath, "utf-8").split("\n");
for (const line of lines) {
const trimmed = line.trim();
if (!trimmed || trimmed.startsWith("#")) continue;
const eq = trimmed.indexOf("=");
if (eq < 0) continue;
const key = trimmed.slice(0, eq).trim();
const val = trimmed.slice(eq + 1).trim();
result[key] = val;
}
return result;
}
// __dirname is src/apps/webapp — root is three levels up
const ROOT = path.resolve(__dirname, "../../..");
const envVars = {
...loadEnvFile(path.join(ROOT, ".env")),
...loadEnvFile(path.join(ROOT, ".test.env")),
};
// Make the loaded variables available to all test files via process.env.
for (const [k, v] of Object.entries(envVars)) {
if (!(k in process.env)) {
process.env[k] = v;
}
}
export default defineConfig({
testDir: "./test",
// Give each test plenty of time for replication round-trips.
timeout: 120_000,
expect: { timeout: 30_000 },
// Run test files sequentially; the tests themselves manage two contexts.
fullyParallel: false,
workers: 1,
reporter: "list",
use: {
baseURL: "http://localhost:3000",
// Use Chromium for OPFS and FileSystem API support.
...devices["Desktop Chrome"],
headless: true,
// Launch args to match the main vitest browser config.
launchOptions: {
args: ["--js-flags=--expose-gc"],
},
},
projects: [
{
name: "chromium",
use: { ...devices["Desktop Chrome"] },
},
],
// Start the vite dev server before running the tests.
webServer: {
command: "npx vite --port 3000",
url: "http://localhost:3000",
// Re-use a running dev server when developing locally.
reuseExistingServer: !process.env.CI,
timeout: 30_000,
// Run from the webapp directory so vite finds its config.
cwd: __dirname,
},
});
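The `.env` loading in the config above follows three simple rules: skip blank lines and `#` comments, split each remaining line on the first `=`, and let later files override earlier ones via object spread (so `.test.env` wins over `.env`). A standalone sketch of the same parsing, using a hypothetical `parseEnv` helper that takes the file text directly instead of a path:

```typescript
// Hypothetical helper restating loadEnvFile's parsing rules on raw text.
function parseEnv(text: string): Record<string, string> {
    const result: Record<string, string> = {};
    for (const line of text.split("\n")) {
        const trimmed = line.trim();
        if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
        const eq = trimmed.indexOf("=");
        if (eq < 0) continue; // lines without '=' are ignored
        result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
    }
    return result;
}

const base = parseEnv("hostname=http://localhost:5984\n# comment\nusername=admin");
const override = parseEnv("username=ci-user");
// Spread order mirrors the config: the later file's values win.
const merged = { ...base, ...override };
console.log(merged.hostname, merged.username); // → http://localhost:5984 ci-user
```

Note that values are not unquoted or interpolated; the real config applies the same literal treatment, which is fine for simple CouchDB credentials.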


@@ -0,0 +1,203 @@
/**
* LiveSync WebApp E2E test entry point.
*
* When served by vite dev server (at /test.html), this module wires up
* `window.livesyncTest`, a plain JS API that Playwright tests can call via
* `page.evaluate()`. All methods are async and serialisation-safe.
*
* Vault storage is backed by OPFS so no `showDirectoryPicker()` interaction
* is required, making it fully headless-compatible.
*/
import { LiveSyncWebApp } from "./main";
import type { ObsidianLiveSyncSettings } from "@lib/common/types";
import type { FilePathWithPrefix } from "@lib/common/types";
// --------------------------------------------------------------------------
// Internal state: one app instance per page / browser context
// --------------------------------------------------------------------------
let app: LiveSyncWebApp | null = null;
// --------------------------------------------------------------------------
// Helpers
// --------------------------------------------------------------------------
/** Strip the "plain:" / "enc:" / … prefix used internally in PouchDB paths. */
function stripPrefix(raw: string): string {
return raw.replace(/^[^:]+:/, "");
}
/**
* Poll every 300 ms until all known processing queues are drained, or until
* the timeout elapses. Mirrors `waitForIdle` in the existing vitest harness.
*/
async function waitForIdle(core: any, timeoutMs = 60_000): Promise<void> {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
const q =
(core.services?.replication?.databaseQueueCount?.value ?? 0) +
(core.services?.fileProcessing?.totalQueued?.value ?? 0) +
(core.services?.fileProcessing?.batched?.value ?? 0) +
(core.services?.fileProcessing?.processing?.value ?? 0) +
(core.services?.replication?.storageApplyingCount?.value ?? 0);
if (q === 0) return;
await new Promise<void>((r) => setTimeout(r, 300));
}
throw new Error(`waitForIdle timed out after ${timeoutMs} ms`);
}
function getCore(): any {
const core = (app as any)?.core;
if (!core) throw new Error("Vault not initialised; call livesyncTest.init() first");
return core;
}
// --------------------------------------------------------------------------
// Public test API
// --------------------------------------------------------------------------
export interface LiveSyncTestAPI {
/**
* Initialise a vault in OPFS under the given name and apply `settings`.
* Any previous contents of the OPFS directory are wiped first so each
* test run starts clean.
*/
init(vaultName: string, settings: Partial<ObsidianLiveSyncSettings>): Promise<void>;
/**
* Write `content` to the local PouchDB under `vaultPath` (equivalent to
* the CLI `put` command). Waiting for the DB write to finish is
* included; you still need to call `replicate()` to push to remote.
*/
putFile(vaultPath: string, content: string): Promise<boolean>;
/**
* Mark `vaultPath` as deleted in the local PouchDB (equivalent to CLI
* `rm`). Call `replicate()` afterwards to propagate to remote.
*/
deleteFile(vaultPath: string): Promise<boolean>;
/**
* Run one full replication cycle (push + pull) against the remote CouchDB,
* then wait for the local storage-application queue to drain.
*/
replicate(): Promise<boolean>;
/**
* Wait until all processing queues are idle. Usually not needed after
* `putFile` / `deleteFile` since those already await, but useful when
* testing results after `replicate()`.
*/
waitForIdle(timeoutMs?: number): Promise<void>;
/**
* Return metadata for `vaultPath` from the local database, or `null` if
* not found / deleted.
*/
getInfo(vaultPath: string): Promise<{
path: string;
revision: string;
conflicts: string[];
size: number;
mtime: number;
} | null>;
/** Convenience wrapper: returns true when the doc has ≥1 conflict revision. */
hasConflict(vaultPath: string): Promise<boolean>;
/** Tear down the current app instance. */
shutdown(): Promise<void>;
}
// --------------------------------------------------------------------------
// Implementation
// --------------------------------------------------------------------------
const livesyncTest: LiveSyncTestAPI = {
async init(vaultName: string, settings: Partial<ObsidianLiveSyncSettings>): Promise<void> {
// Clean up any stale OPFS data from previous runs.
const opfsRoot = await navigator.storage.getDirectory();
try {
await opfsRoot.removeEntry(vaultName, { recursive: true });
} catch {
// directory did not exist; that's fine
}
const vaultDir = await opfsRoot.getDirectoryHandle(vaultName, { create: true });
// Pre-write settings so they are loaded during initialise().
const livesyncDir = await vaultDir.getDirectoryHandle(".livesync", { create: true });
const settingsFile = await livesyncDir.getFileHandle("settings.json", { create: true });
const writable = await settingsFile.createWritable();
await writable.write(JSON.stringify(settings));
await writable.close();
app = new LiveSyncWebApp(vaultDir);
await app.initialize();
// Give background startup tasks a moment to settle.
await waitForIdle(getCore(), 30_000);
},
async putFile(vaultPath: string, content: string): Promise<boolean> {
const core = getCore();
const result = await core.serviceModules.databaseFileAccess.storeContent(
vaultPath as FilePathWithPrefix,
content
);
await waitForIdle(core);
return result !== false;
},
async deleteFile(vaultPath: string): Promise<boolean> {
const core = getCore();
const result = await core.serviceModules.databaseFileAccess.delete(vaultPath as FilePathWithPrefix);
await waitForIdle(core);
return result !== false;
},
async replicate(): Promise<boolean> {
const core = getCore();
const result = await core.services.replication.replicate(true);
// After replicate() resolves, remote docs may still be queued for
// local storage application; wait until all queues are drained.
await waitForIdle(core);
return result !== false;
},
async waitForIdle(timeoutMs?: number): Promise<void> {
await waitForIdle(getCore(), timeoutMs ?? 60_000);
},
async getInfo(vaultPath: string) {
const core = getCore();
const db = core.services?.database;
for await (const doc of db.localDatabase.findAllNormalDocs({ conflicts: true })) {
if (doc._deleted || doc.deleted) continue;
const docPath = stripPrefix(doc.path ?? "");
if (docPath !== vaultPath) continue;
return {
path: docPath,
revision: (doc._rev as string) ?? "",
conflicts: (doc._conflicts as string[]) ?? [],
size: (doc.size as number) ?? 0,
mtime: (doc.mtime as number) ?? 0,
};
}
return null;
},
async hasConflict(vaultPath: string): Promise<boolean> {
const info = await livesyncTest.getInfo(vaultPath);
return (info?.conflicts?.length ?? 0) > 0;
},
async shutdown(): Promise<void> {
if (app) {
await app.shutdown();
app = null;
}
},
};
// Expose on window for Playwright page.evaluate() calls.
(window as any).livesyncTest = livesyncTest;

src/apps/webapp/test.html

@@ -0,0 +1,26 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>LiveSync WebApp E2E Test Page</title>
<style>
body {
font-family: monospace;
padding: 1rem;
}
#status {
margin-top: 1rem;
padding: 0.5rem;
border: 1px solid #ccc;
}
</style>
</head>
<body>
<h2>LiveSync WebApp E2E</h2>
<p>This page is used by Playwright tests only. <code>window.livesyncTest</code> is exposed by the script below.</p>
<!-- status div required by LiveSyncWebApp internal helpers -->
<div id="status">Loading…</div>
<script type="module" src="/test-entry.ts"></script>
</body>
</html>


@@ -0,0 +1,294 @@
/**
* WebApp E2E tests: two-vault scenarios.
*
* Each vault (A and B) runs in its own browser context so that JavaScript
* global state (including Trystero's global signalling tables) is fully
* isolated. The two vaults communicate only through the shared remote
* CouchDB database.
*
* Vault storage is OPFS-backed; no file-picker interaction is needed.
*
* Prerequisites:
* - A reachable CouchDB instance whose connection details are in .test.env
* (read automatically by playwright.config.ts).
*
* How to run:
* cd src/apps/webapp && npm run test:e2e
*/
import { test, expect, type BrowserContext, type Page, type TestInfo } from "@playwright/test";
import type { LiveSyncTestAPI } from "../test-entry";
import { mkdirSync, writeFileSync } from "node:fs";
import path from "node:path";
import { fileURLToPath } from "node:url";
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
// ---------------------------------------------------------------------------
// Settings helpers
// ---------------------------------------------------------------------------
function requireEnv(name: string): string {
const v = process.env[name];
if (!v) throw new Error(`Missing required env variable: ${name}`);
return v;
}
async function ensureCouchDbDatabase(uri: string, user: string, pass: string, dbName: string): Promise<void> {
const base = uri.replace(/\/+$/, "");
const dbUrl = `${base}/${encodeURIComponent(dbName)}`;
const auth = Buffer.from(`${user}:${pass}`, "utf-8").toString("base64");
const response = await fetch(dbUrl, {
method: "PUT",
headers: {
Authorization: `Basic ${auth}`,
},
});
// 201: created, 202: accepted, 412: already exists
if (response.status === 201 || response.status === 202 || response.status === 412) {
return;
}
const body = await response.text().catch(() => "");
throw new Error(`Failed to ensure CouchDB database (${response.status}): ${body}`);
}
function buildSettings(dbName: string): Record<string, unknown> {
return {
// Remote database (shared between A and B; this is the replication target)
couchDB_URI: requireEnv("hostname").replace(/\/+$/, ""),
couchDB_USER: process.env["username"] ?? "",
couchDB_PASSWORD: process.env["password"] ?? "",
couchDB_DBNAME: dbName,
// Core behaviour
isConfigured: true,
liveSync: false,
syncOnSave: false,
syncOnStart: false,
periodicReplication: false,
gcDelay: 0,
savingDelay: 0,
notifyThresholdOfRemoteStorageSize: 0,
// Encryption off for test simplicity
encrypt: false,
// Disable plugin/hidden-file sync (not needed in webapp)
usePluginSync: false,
autoSweepPlugins: false,
autoSweepPluginsPeriodic: false,
// Auto-accept peers
P2P_AutoAcceptingPeers: "~.*",
};
}
// ---------------------------------------------------------------------------
// Test-page helpers
// ---------------------------------------------------------------------------
/** Navigate to the test entry page and wait for `window.livesyncTest`. */
async function openTestPage(ctx: BrowserContext): Promise<Page> {
const page = await ctx.newPage();
await page.goto("/test.html");
await page.waitForFunction(() => !!(window as any).livesyncTest, { timeout: 20_000 });
return page;
}
/** Type-safe wrapper: calls `window.livesyncTest.<method>(...args)` in the page. */
async function call<M extends keyof LiveSyncTestAPI>(
page: Page,
method: M,
...args: Parameters<LiveSyncTestAPI[M]>
): Promise<Awaited<ReturnType<LiveSyncTestAPI[M]>>> {
const invoke = () =>
page.evaluate(([m, a]) => (window as any).livesyncTest[m](...a), [method, args] as [
string,
unknown[],
]) as Promise<Awaited<ReturnType<LiveSyncTestAPI[M]>>>;
try {
return await invoke();
} catch (ex: any) {
const message = String(ex?.message ?? ex);
// Some startup flows may trigger one page reload; recover once.
if (
message.includes("Execution context was destroyed") ||
message.includes("Most likely the page has been closed")
) {
await page.waitForFunction(() => !!(window as any).livesyncTest, { timeout: 20_000 });
return await invoke();
}
throw ex;
}
}
async function dumpCoverage(page: Page | undefined, label: string, testInfo: TestInfo): Promise<void> {
if (!process.env.PW_COVERAGE || !page || page.isClosed()) {
return;
}
const cov = await page
.evaluate(() => {
const data = (window as any).__coverage__;
if (!data) return null;
// Reset between tests to avoid runaway accumulation.
(window as any).__coverage__ = {};
return data;
})
.catch(() => null!);
if (!cov) return;
if (typeof cov === "object" && Object.keys(cov as Record<string, unknown>).length === 0) {
return;
}
const outDir = path.resolve(__dirname, "../.nyc_output");
mkdirSync(outDir, { recursive: true });
const name = `${testInfo.testId.replace(/[^a-zA-Z0-9_-]/g, "_")}-${label}.json`;
writeFileSync(path.join(outDir, name), JSON.stringify(cov), "utf-8");
}
// ---------------------------------------------------------------------------
// Two-vault E2E suite
// ---------------------------------------------------------------------------
test.describe("WebApp two-vault E2E", () => {
let ctxA: BrowserContext;
let ctxB: BrowserContext;
let pageA: Page;
let pageB: Page;
const DB_SUFFIX = `${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
const dbName = `${requireEnv("dbname")}-${DB_SUFFIX}`;
const settings = buildSettings(dbName);
test.beforeAll(async ({ browser }) => {
await ensureCouchDbDatabase(
String(settings.couchDB_URI ?? ""),
String(settings.couchDB_USER ?? ""),
String(settings.couchDB_PASSWORD ?? ""),
dbName
);
// Open Vault A and Vault B in completely separate browser contexts.
// Each context has its own JS runtime, IndexedDB and OPFS root, so
// Trystero global state and PouchDB instance names cannot collide.
ctxA = await browser.newContext();
ctxB = await browser.newContext();
pageA = await openTestPage(ctxA);
pageB = await openTestPage(ctxB);
await call(pageA, "init", "testvault_a", settings as any);
await call(pageB, "init", "testvault_b", settings as any);
});
test.afterAll(async () => {
await call(pageA, "shutdown").catch(() => {});
await call(pageB, "shutdown").catch(() => {});
await ctxA.close();
await ctxB.close();
});
test.afterEach(async ({}, testInfo) => {
await dumpCoverage(pageA, "vaultA", testInfo);
await dumpCoverage(pageB, "vaultB", testInfo);
});
// -----------------------------------------------------------------------
// Case 1: Vault A writes a file and can read its metadata back from the
// local database (no replication yet).
// -----------------------------------------------------------------------
test("Case 1: A writes a file and can get its info", async () => {
const FILE = "e2e/case1-a-only.md";
const CONTENT = "hello from vault A";
const ok = await call(pageA, "putFile", FILE, CONTENT);
expect(ok).toBe(true);
const info = await call(pageA, "getInfo", FILE);
expect(info).not.toBeNull();
expect(info!.path).toBe(FILE);
expect(info!.revision).toBeTruthy();
expect(info!.conflicts).toHaveLength(0);
});
// -----------------------------------------------------------------------
// Case 2: Vault A writes a file, both vaults replicate, and Vault B ends
// up with the file in its local database.
// -----------------------------------------------------------------------
test("Case 2: A writes a file, both replicate, B receives the file", async () => {
const FILE = "e2e/case2-sync.md";
const CONTENT = "content from A should appear in B";
await call(pageA, "putFile", FILE, CONTENT);
// A pushes to remote, B pulls from remote.
await call(pageA, "replicate");
await call(pageB, "replicate");
const infoB = await call(pageB, "getInfo", FILE);
expect(infoB).not.toBeNull();
expect(infoB!.path).toBe(FILE);
});
// -----------------------------------------------------------------------
// Case 3: Vault A deletes the file it synced in case 2. After both
// vaults replicate, Vault B no longer sees the file.
// -----------------------------------------------------------------------
test("Case 3: A deletes the file, both replicate, B no longer sees it", async () => {
// This test depends on Case 2 having put e2e/case2-sync.md into both vaults.
const FILE = "e2e/case2-sync.md";
await call(pageA, "deleteFile", FILE);
await call(pageA, "replicate");
await call(pageB, "replicate");
const infoB = await call(pageB, "getInfo", FILE);
// The file should be gone (null means not found or deleted).
expect(infoB).toBeNull();
});
// -----------------------------------------------------------------------
// Case 4: A and B each independently edit the same file that was already
// synced. After both vaults replicate the editing cycle, at least one
// vault reports a conflict on that file.
// -----------------------------------------------------------------------
test("Case 4: concurrent edits from A and B produce a conflict on both sides", async () => {
const FILE = "e2e/case4-conflict.md";
// 1) Write a baseline and synchronise so both vaults start from the
// same revision.
await call(pageA, "putFile", FILE, "base content");
await call(pageA, "replicate");
await call(pageB, "replicate");
// Confirm B has the base file with no conflicts yet.
const baseInfoB = await call(pageB, "getInfo", FILE);
expect(baseInfoB).not.toBeNull();
expect(baseInfoB!.conflicts).toHaveLength(0);
// 2) Both vaults write diverging content without syncing in between;
// this creates two competing revisions.
await call(pageA, "putFile", FILE, "content from A (conflict side)");
await call(pageB, "putFile", FILE, "content from B (conflict side)");
// 3) Run replication on both sides. The order mirrors the pattern
// from the CLI two-vault tests (A → remote → B → remote → A).
await call(pageA, "replicate");
await call(pageB, "replicate");
await call(pageA, "replicate"); // re-check from A to pick up B's revision
// 4) At least one side must report a conflict.
const hasConflictA = await call(pageA, "hasConflict", FILE);
const hasConflictB = await call(pageB, "hasConflict", FILE);
expect(
hasConflictA || hasConflictB,
"Expected a conflict to appear on vault A or vault B after diverging edits"
).toBe(true);
});
});
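The replication choreography used throughout these cases (each `replicate` pushes the vault's revisions to the shared remote and pulls the remote's back) can be modelled with plain maps to see why the extra pass on A is needed. This is a toy illustration, not the real replicator:

```typescript
type Store = Map<string, Set<string>>;

// Toy model: a vault maps file paths to a set of revisions; replicate()
// merges the vault with the remote in both directions.
function replicate(vault: Store, remote: Store): void {
    // Push: merge the vault's revisions into the remote, then mirror back.
    for (const [file, revs] of vault) {
        const merged = new Set([...(remote.get(file) ?? []), ...revs]);
        remote.set(file, merged);
        vault.set(file, new Set(merged));
    }
    // Pull: adopt files that only the remote knows about.
    for (const [file, revs] of remote) {
        if (!vault.has(file)) vault.set(file, new Set(revs));
    }
}

const remote: Store = new Map();
const vaultA: Store = new Map([["note.md", new Set(["revA"])]]);
const vaultB: Store = new Map([["note.md", new Set(["revB"])]]);

replicate(vaultA, remote); // A pushes revA
replicate(vaultB, remote); // B pulls revA and pushes revB
replicate(vaultA, remote); // the extra pass lets A pick up B's revision

console.log(vaultA.get("note.md")!.size); // 2
```

After only the first two passes, A would still hold a single revision; the third pass is what surfaces the competing revision on A, matching the A → remote → B → remote → A ordering in Case 4.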

View File

@@ -0,0 +1,191 @@
const HANDLE_DB_NAME = "livesync-webapp-handles";
const HANDLE_STORE_NAME = "handles";
const LAST_USED_KEY = "meta:lastUsedVaultId";
const VAULT_KEY_PREFIX = "vault:";
const MAX_HISTORY_COUNT = 10;
export type VaultHistoryItem = {
id: string;
name: string;
handle: FileSystemDirectoryHandle;
lastUsedAt: number;
};
type VaultHistoryValue = VaultHistoryItem;
function makeVaultKey(id: string): string {
return `${VAULT_KEY_PREFIX}${id}`;
}
function parseVaultId(key: string): string | null {
if (!key.startsWith(VAULT_KEY_PREFIX)) {
return null;
}
return key.slice(VAULT_KEY_PREFIX.length);
}
function randomId(): string {
const n = Math.random().toString(36).slice(2, 10);
return `${Date.now()}-${n}`;
}
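The key helpers above are pure string functions; a self-contained sketch (helpers re-declared locally so the snippet stands alone) shows the round-trip and why meta keys such as `meta:lastUsedVaultId` are skipped:

```typescript
// Local re-declarations mirroring makeVaultKey/parseVaultId above.
const PREFIX = "vault:";
const makeVaultKey = (id: string): string => `${PREFIX}${id}`;
const parseVaultId = (key: string): string | null =>
    key.startsWith(PREFIX) ? key.slice(PREFIX.length) : null;

console.log(parseVaultId(makeVaultKey("abc-123"))); // "abc-123"
console.log(parseVaultId("meta:lastUsedVaultId")); // null — non-vault keys are ignored
```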
async function hasReadWritePermission(handle: FileSystemDirectoryHandle, requestIfNeeded: boolean): Promise<boolean> {
const h = handle as any;
if (typeof h.queryPermission === "function") {
const queried = await h.queryPermission({ mode: "readwrite" });
if (queried === "granted") {
return true;
}
}
if (!requestIfNeeded) {
return false;
}
if (typeof h.requestPermission === "function") {
const requested = await h.requestPermission({ mode: "readwrite" });
return requested === "granted";
}
// No requestPermission implementation: assume the handle remains usable.
return true;
}
export class VaultHistoryStore {
private async openHandleDB(): Promise<IDBDatabase> {
return new Promise((resolve, reject) => {
const request = indexedDB.open(HANDLE_DB_NAME, 1);
request.onerror = () => reject(request.error);
request.onsuccess = () => resolve(request.result);
request.onupgradeneeded = (event) => {
const db = (event.target as IDBOpenDBRequest).result;
if (!db.objectStoreNames.contains(HANDLE_STORE_NAME)) {
db.createObjectStore(HANDLE_STORE_NAME);
}
};
});
}
private async withStore<T>(mode: IDBTransactionMode, task: (store: IDBObjectStore) => Promise<T>): Promise<T> {
const db = await this.openHandleDB();
try {
const tx = db.transaction([HANDLE_STORE_NAME], mode);
const store = tx.objectStore(HANDLE_STORE_NAME);
return await task(store);
} finally {
db.close();
}
}
private async requestAsPromise<T>(request: IDBRequest<T>): Promise<T> {
return new Promise((resolve, reject) => {
request.onsuccess = () => resolve(request.result);
request.onerror = () => reject(request.error);
});
}
async getLastUsedVaultId(): Promise<string | null> {
return this.withStore("readonly", async (store) => {
const value = await this.requestAsPromise(store.get(LAST_USED_KEY));
return typeof value === "string" ? value : null;
});
}
async getVaultHistory(): Promise<VaultHistoryItem[]> {
return this.withStore("readonly", async (store) => {
const keys = (await this.requestAsPromise(store.getAllKeys())) as IDBValidKey[];
const values = (await this.requestAsPromise(store.getAll())) as unknown[];
const items: VaultHistoryItem[] = [];
for (let i = 0; i < keys.length; i++) {
const key = String(keys[i]);
const id = parseVaultId(key);
const value = values[i] as Partial<VaultHistoryValue> | undefined;
if (!id || !value || !value.handle || !value.name) {
continue;
}
items.push({
id,
name: String(value.name),
handle: value.handle,
lastUsedAt: Number(value.lastUsedAt || 0),
});
}
items.sort((a, b) => b.lastUsedAt - a.lastUsedAt);
return items;
});
}
async saveSelectedVault(handle: FileSystemDirectoryHandle): Promise<VaultHistoryItem> {
const now = Date.now();
const existing = await this.getVaultHistory();
let matched: VaultHistoryItem | null = null;
for (const item of existing) {
try {
if (await item.handle.isSameEntry(handle)) {
matched = item;
break;
}
} catch {
// Ignore handles that cannot be compared; keep scanning.
}
}
const item: VaultHistoryItem = {
id: matched?.id ?? randomId(),
name: handle.name,
handle,
lastUsedAt: now,
};
await this.withStore("readwrite", async (store): Promise<void> => {
await this.requestAsPromise(store.put(item, makeVaultKey(item.id)));
await this.requestAsPromise(store.put(item.id, LAST_USED_KEY));
const merged = [...existing.filter((v) => v.id !== item.id), item].sort(
(a, b) => b.lastUsedAt - a.lastUsedAt
);
const stale = merged.slice(MAX_HISTORY_COUNT);
for (const old of stale) {
await this.requestAsPromise(store.delete(makeVaultKey(old.id)));
}
});
return item;
}
async activateHistoryItem(item: VaultHistoryItem): Promise<FileSystemDirectoryHandle> {
const granted = await hasReadWritePermission(item.handle, true);
if (!granted) {
throw new Error("Vault permissions were not granted");
}
const activated: VaultHistoryItem = {
...item,
lastUsedAt: Date.now(),
};
await this.withStore("readwrite", async (store): Promise<void> => {
await this.requestAsPromise(store.put(activated, makeVaultKey(activated.id)));
await this.requestAsPromise(store.put(activated.id, LAST_USED_KEY));
});
return item.handle;
}
async pickNewVault(): Promise<FileSystemDirectoryHandle> {
const picker = (window as any).showDirectoryPicker;
if (typeof picker !== "function") {
throw new Error("FileSystem API showDirectoryPicker is not supported in this browser");
}
const handle = (await picker({
mode: "readwrite",
startIn: "documents",
})) as FileSystemDirectoryHandle;
const granted = await hasReadWritePermission(handle, true);
if (!granted) {
throw new Error("Vault permissions were not granted");
}
await this.saveSelectedVault(handle);
return handle;
}
}
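The pruning rule inside `saveSelectedVault` (keep the `MAX_HISTORY_COUNT` most recently used entries, delete the rest) can be exercised in isolation. The types below are simplified stand-ins for `VaultHistoryItem`, without the IndexedDB plumbing:

```typescript
type Item = { id: string; lastUsedAt: number };

// Mirrors the merge-sort-slice logic: dedupe the updated item, sort newest
// first, and everything past the cap is stale and gets deleted.
function findStale(existing: Item[], updated: Item, max: number): Item[] {
    const merged = [...existing.filter((v) => v.id !== updated.id), updated].sort(
        (a, b) => b.lastUsedAt - a.lastUsedAt
    );
    return merged.slice(max);
}

// Ten saved vaults, then an eleventh selection: the least recently used falls off.
const existing = Array.from({ length: 10 }, (_, i) => ({ id: `v${i}`, lastUsedAt: i }));
const stale = findStale(existing, { id: "new", lastUsedAt: 100 }, 10);
console.log(stale.map((s) => s.id)); // [ "v0" ]
```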

@@ -1,16 +1,45 @@
import { defineConfig } from "vite";
import { svelte } from "@sveltejs/vite-plugin-svelte";
import istanbul from "vite-plugin-istanbul";
import path from "node:path";
import { readFileSync } from "node:fs";
const packageJson = JSON.parse(readFileSync("../../../package.json", "utf-8"));
const manifestJson = JSON.parse(readFileSync("../../../manifest.json", "utf-8"));
const enableCoverage = process.env.PW_COVERAGE === "1";
const repoRoot = path.resolve(__dirname, "../../..");
// https://vite.dev/config/
export default defineConfig({
plugins: [svelte()],
plugins: [
svelte(),
...(enableCoverage
? [
istanbul({
cwd: repoRoot,
include: ["src/**/*.ts", "src/**/*.svelte"],
exclude: [
"node_modules",
"dist",
"test",
"coverage",
"src/apps/webapp/test/**",
"playwright.config.ts",
"vite.config.ts",
"**/*.spec.ts",
"**/*.test.ts",
],
extension: [".js", ".ts", ".svelte"],
requireEnv: false,
cypress: false,
checkProd: false,
}),
]
: []),
],
resolve: {
alias: {
"@": path.resolve(__dirname, "../../"),
"@lib": path.resolve(__dirname, "../../lib/src"),
obsidian: path.resolve(__dirname, "../../../test/harness/obsidian-mock.ts"),
},
},
base: "./",
@@ -18,14 +47,21 @@ export default defineConfig({
outDir: "dist",
emptyOutDir: true,
rollupOptions: {
// test.html is used by the Playwright dev-server; include it here
// so the production build doesn't emit warnings about unused inputs.
input: {
index: path.resolve(__dirname, "index.html"),
webapp: path.resolve(__dirname, "webapp.html"),
test: path.resolve(__dirname, "test.html"),
},
external: ["crypto"],
},
},
define: {
MANIFEST_VERSION: JSON.stringify(process.env.MANIFEST_VERSION || manifestJson.version || "0.0.0"),
PACKAGE_VERSION: JSON.stringify(process.env.PACKAGE_VERSION || packageJson.version || "0.0.0"),
global: "globalThis",
hostPlatform: JSON.stringify(process.platform || "linux"),
},
server: {
port: 3000,

src/apps/webapp/webapp.css Normal file

@@ -0,0 +1,402 @@
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
:root {
--background-primary: #ffffff;
--background-primary-alt: #667eea;
--background-secondary: #f0f0f0;
--background-secondary-alt: #e0e0e0;
--background-modifier-border: #d0d0d0;
--text-normal: #333333;
--text-warning: #d9534f;
--text-accent: #5bc0de;
--text-on-accent: #ffffff;
}
body {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
padding: 20px;
}
.container {
background: white;
border-radius: 12px;
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
padding: 40px;
max-width: 700px;
width: 100%;
}
h1 {
color: #333;
margin-bottom: 10px;
font-size: 28px;
}
.subtitle {
color: #666;
margin-bottom: 24px;
font-size: 14px;
}
#status {
padding: 15px;
border-radius: 8px;
margin-bottom: 20px;
font-size: 14px;
font-weight: 500;
}
#status.error {
background: #fee;
color: #c33;
border: 1px solid #fcc;
}
#status.warning {
background: #ffeaa7;
color: #d63031;
border: 1px solid #fdcb6e;
}
#status.success {
background: #d4edda;
color: #155724;
border: 1px solid #c3e6cb;
}
#status.info {
background: #d1ecf1;
color: #0c5460;
border: 1px solid #bee5eb;
}
.vault-selector {
border: 1px solid #e6e9f2;
border-radius: 8px;
padding: 16px;
background: #fbfcff;
margin-bottom: 22px;
}
.vault-selector h2 {
font-size: 18px;
margin-bottom: 8px;
color: #333;
}
.vault-selector p {
color: #555;
font-size: 14px;
margin-bottom: 12px;
}
.vault-list {
display: flex;
flex-direction: column;
gap: 10px;
margin-bottom: 12px;
}
.vault-item {
border: 1px solid #d9deee;
border-radius: 8px;
padding: 10px 12px;
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
background: #fff;
}
.vault-item-info {
min-width: 0;
}
.vault-item-name {
font-weight: 600;
color: #1f2a44;
word-break: break-word;
}
.vault-item-meta {
margin-top: 2px;
font-size: 12px;
color: #63708f;
}
button {
border: none;
border-radius: 6px;
padding: 8px 12px;
background: #2f5ae5;
color: #fff;
cursor: pointer;
font-weight: 600;
white-space: nowrap;
}
button:hover {
background: #1e4ad6;
opacity: 0.7;
}
button:disabled {
cursor: not-allowed;
opacity: 0.6;
}
.empty-note {
font-size: 13px;
color: #6c757d;
margin-bottom: 8px;
}
.empty-note.is-hidden,
.vault-selector.is-hidden {
display: none;
}
.info-section {
margin-top: 20px;
padding: 20px;
background: #f8f9fa;
border-radius: 8px;
}
.info-section h2 {
font-size: 18px;
margin-bottom: 12px;
color: #333;
}
.info-section ul {
list-style: none;
padding-left: 0;
}
.info-section li {
padding: 7px 0;
color: #666;
font-size: 14px;
}
.info-section li::before {
content: "•";
color: #667eea;
font-weight: bold;
display: inline-block;
width: 1em;
margin-left: -1em;
padding-right: 0.5em;
}
code {
background: #e9ecef;
padding: 2px 6px;
border-radius: 4px;
font-family: "Courier New", monospace;
font-size: 13px;
}
.footer {
margin-top: 24px;
text-align: center;
color: #999;
font-size: 12px;
}
.footer a {
color: #667eea;
text-decoration: none;
}
.footer a:hover {
text-decoration: underline;
}
body.livesync-log-visible {
min-height: 100vh;
padding-bottom: 42vh;
}
#livesync-log-panel {
position: fixed;
left: 0;
right: 0;
bottom: 0;
height: 42vh;
z-index: 900;
display: flex;
flex-direction: column;
background: #0f172a;
border-top: 1px solid #334155;
}
.livesync-log-header {
padding: 8px 12px;
font-size: 12px;
font-weight: 600;
color: #e2e8f0;
background: #111827;
border-bottom: 1px solid #334155;
}
#livesync-log-viewport {
flex: 1;
overflow: auto;
padding: 8px 12px;
font-family: ui-monospace, SFMono-Regular, Menlo, Consolas, "Liberation Mono", monospace;
font-size: 12px;
line-height: 1.4;
color: #e2e8f0;
white-space: pre-wrap;
word-break: break-word;
}
.livesync-log-line {
margin-bottom: 2px;
}
#livesync-command-bar {
position: fixed;
right: 16px;
bottom: 16px;
z-index: 1000;
display: flex;
flex-wrap: wrap;
gap: 8px;
max-width: 40vw;
padding: 10px;
border-radius: 10px;
background: rgba(255, 255, 255, 0.95);
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.2);
}
.livesync-command-button {
border: 1px solid #ddd;
border-radius: 8px;
padding: 6px 10px;
background: #fff;
color: #111827;
cursor: pointer;
font-size: 12px;
line-height: 1.2;
white-space: nowrap;
font-weight: 500;
}
.livesync-command-button:hover:not(:disabled) {
background: #f3f4f6;
}
.livesync-command-button.is-disabled {
opacity: 0.55;
}
#livesync-window-root {
position: fixed;
top: 16px;
left: 16px;
right: 16px;
bottom: calc(42vh + 16px);
z-index: 850;
display: flex;
flex-direction: column;
border-radius: 10px;
background: rgba(255, 255, 255, 0.98);
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.18);
overflow: hidden;
}
#livesync-window-tabs {
display: flex;
gap: 6px;
padding: 8px;
background: #f3f4f6;
border-bottom: 1px solid #e5e7eb;
}
#livesync-window-body {
position: relative;
flex: 1;
overflow: auto;
padding: 10px;
}
.livesync-window-tab {
border: 1px solid #d1d5db;
background: #fff;
color: #111827;
padding: 4px 8px;
border-radius: 6px;
cursor: pointer;
font-size: 12px;
font-weight: 500;
}
.livesync-window-tab.is-active {
background: #e0e7ff;
border-color: #818cf8;
}
.livesync-window-panel {
display: none;
width: 100%;
height: 100%;
overflow: auto;
}
.livesync-window-panel.is-active {
display: block;
}
@media (max-width: 600px) {
.container {
padding: 28px 18px;
}
h1 {
font-size: 24px;
}
.vault-item {
flex-direction: column;
align-items: stretch;
}
#livesync-command-bar {
max-width: calc(100vw - 24px);
right: 12px;
left: 12px;
bottom: 12px;
}
}
popup {
position: fixed;
min-width: 80vw;
max-width: 90vw;
min-height: 40vh;
max-height: 80vh;
background: rgba(255, 255, 255, 0.8);
padding: 1em;
border-radius: 10px;
box-shadow: 0 8px 24px rgba(0, 0, 0, 0.2);
z-index: 10000;
overflow-y: auto;
display: flex;
align-items: center;
justify-content: center;
backdrop-filter: blur(15px);
}

@@ -0,0 +1,45 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Self-hosted LiveSync WebApp</title>
<link rel="stylesheet" href="./webapp.css">
</head>
<body>
<div class="container">
<h1>Self-hosted LiveSync on Web</h1>
<p class="subtitle">Browser-based Self-hosted LiveSync using the File System Access API</p>
<div id="status" class="info">Initialising...</div>
<div id="vault-selector" class="vault-selector">
<h2>Select Vault Folder</h2>
<p>Open a vault you already used, or pick a new folder.</p>
<div id="vault-history-list" class="vault-list"></div>
<p id="vault-history-empty" class="empty-note">No saved vaults yet.</p>
<button id="pick-new-vault" type="button">Choose new vault folder</button>
</div>
<div class="info-section">
<h2>How to Use</h2>
<ul>
<li>Select a vault folder and grant permission</li>
<li>Create <code>.livesync/settings.json</code> in your vault folder</li>
<li>Or use Setup-URI to apply settings</li>
<li>Your files will be synced after "replicate now"</li>
</ul>
</div>
<div class="footer">
<p>
Powered by
<a href="https://github.com/vrtmrz/obsidian-livesync" target="_blank">Self-hosted LiveSync</a>
</p>
</div>
</div>
<script type="module" src="./bootstrap.ts"></script>
</body>
</html>

@@ -0,0 +1,57 @@
# syntax=docker/dockerfile:1
#
# Self-hosted LiveSync WebPeer — Docker image
# Browser-based P2P peer daemon served by nginx.
#
# Build (from the repository root):
# docker build -f src/apps/webpeer/Dockerfile -t livesync-webpeer .
#
# Run:
# docker run --rm -p 8081:80 livesync-webpeer
# Then open http://localhost:8081/ in any modern browser.
#
# What is WebPeer?
# WebPeer acts as a pseudo P2P peer that runs entirely in the browser.
# It can replace a CouchDB remote server by replying to sync requests from
# other Self-hosted LiveSync instances over the WebRTC P2P channel.
#
# P2P (WebRTC) networking notes
# ─────────────────────────────
# WebRTC connections are initiated by the *browser* visiting this page, not by
# the nginx container itself. Therefore the Docker network mode of this
# container has NO effect on WebRTC connectivity.
# Simply publish port 80 (as above) and the browser handles all ICE/STUN/TURN
# negotiation on its own.
#
# If the browser is running inside another container or a restricted network,
# configuring a TURN server in the WebPeer settings is recommended.
# ─────────────────────────────────────────────────────────────────────────────
# Stage 1 — builder
# Full Node.js environment to install dependencies and build the Vite bundle.
# ─────────────────────────────────────────────────────────────────────────────
FROM node:22-slim AS builder
WORKDIR /build
# Install workspace dependencies (all apps share the root package.json)
COPY package.json ./
RUN npm install
# Copy the full source tree and build the WebPeer bundle
COPY . .
RUN cd src/apps/webpeer && npm run build
# ─────────────────────────────────────────────────────────────────────────────
# Stage 2 — runtime
# Minimal nginx image that serves the static build output.
# ─────────────────────────────────────────────────────────────────────────────
FROM nginx:stable-alpine
# Remove the default nginx welcome page
RUN rm -rf /usr/share/nginx/html/*
# Copy the built static assets
COPY --from=builder /build/src/apps/webpeer/dist /usr/share/nginx/html
EXPOSE 80

@@ -6,6 +6,8 @@
"scripts": {
"dev": "vite",
"build": "vite build",
"build:docker": "docker build -f Dockerfile -t livesync-webpeer ../../..",
"run:docker": "docker run -p 8001:80 livesync-webpeer",
"preview": "vite preview",
"check": "svelte-check --tsconfig ./tsconfig.app.json && tsc -p tsconfig.node.json"
},

@@ -276,7 +276,7 @@ export class P2PReplicatorShim implements P2PReplicatorBase {
}
}
}
await this.services.setting.applyPartial(remoteConfig, true);
await this.services.setting.applyExternalSettings(remoteConfig, true);
if (yn !== DROP) {
await this.plugin.core.services.appLifecycle.scheduleRestart();
}

@@ -9,7 +9,7 @@ import { LOG_LEVEL_NOTICE, REMOTE_P2P } from "@lib/common/types.ts";
import { Logger } from "@lib/common/logger.ts";
import { EVENT_P2P_PEER_SHOW_EXTRA_MENU, type PeerStatus } from "@lib/replication/trystero/P2PReplicatorPaneCommon.ts";
import type { LiveSyncBaseCore } from "@/LiveSyncBaseCore.ts";
import type { UseP2PReplicatorResult } from "@lib/replication/trystero/P2PReplicatorCore.ts";
import type { P2PPaneParams } from "@/lib/src/replication/trystero/UseP2PReplicatorResult";
export const VIEW_TYPE_P2P = "p2p-replicator";
function addToList(item: string, list: string) {
@@ -32,7 +32,7 @@ function removeFromList(item: string, list: string) {
export class P2PReplicatorPaneView extends SvelteItemView {
core: LiveSyncBaseCore;
private _p2pResult: UseP2PReplicatorResult;
private _p2pResult: P2PPaneParams;
override icon = "waypoints";
title: string = "";
override navigation = false;
@@ -87,7 +87,7 @@ And you can also drop the local database to rebuild from the remote device.`,
// this.plugin.settings = remoteConfig;
// await this.plugin.saveSettings();
await this.core.services.setting.applyPartial(remoteConfig);
await this.core.services.setting.applyExternalSettings(remoteConfig);
if (yn === DROP) {
await this.core.rebuilder.scheduleFetch();
} else {
@@ -123,7 +123,7 @@ And you can also drop the local database to rebuild from the remote device.`,
await this.core.services.setting.applyPartial(currentSetting, true);
}
m?: Menu;
constructor(leaf: WorkspaceLeaf, core: LiveSyncBaseCore, p2pResult: UseP2PReplicatorResult) {
constructor(leaf: WorkspaceLeaf, core: LiveSyncBaseCore, p2pResult: P2PPaneParams) {
super(leaf);
this.core = core;
this._p2pResult = p2pResult;

Submodule src/lib updated: 9145013efa...37b8e2813e

@@ -14,9 +14,7 @@ import { ModuleObsidianGlobalHistory } from "./modules/features/ModuleGlobalHist
import { ModuleIntegratedTest } from "./modules/extras/ModuleIntegratedTest.ts";
import { ModuleReplicateTest } from "./modules/extras/ModuleReplicateTest.ts";
import { LocalDatabaseMaintenance } from "./features/LocalDatabaseMainte/CmdLocalDatabaseMainte.ts";
import { P2PReplicatorPaneView, VIEW_TYPE_P2P } from "./features/P2PSync/P2PReplicator/P2PReplicatorPaneView.ts";
import { useP2PReplicator } from "@lib/replication/trystero/P2PReplicatorCore.ts";
import type { InjectableServiceHub } from "./lib/src/services/implements/injectable/InjectableServiceHub.ts";
import type { InjectableServiceHub } from "@lib/services/implements/injectable/InjectableServiceHub.ts";
import { ObsidianServiceHub } from "./modules/services/ObsidianServiceHub.ts";
import { ServiceRebuilder } from "@lib/serviceModules/Rebuilder.ts";
import { ServiceDatabaseFileAccess } from "@/serviceModules/DatabaseFileAccess.ts";
@@ -27,17 +25,24 @@ import { FileAccessObsidian } from "./serviceModules/FileAccessObsidian.ts";
import { StorageEventManagerObsidian } from "./managers/StorageEventManagerObsidian.ts";
import type { ServiceModules } from "./types.ts";
import { setNoticeClass } from "@lib/mock_and_interop/wrapper.ts";
import type { ObsidianServiceContext } from "./lib/src/services/implements/obsidian/ObsidianServiceContext.ts";
import type { ObsidianServiceContext } from "@lib/services/implements/obsidian/ObsidianServiceContext.ts";
import { LiveSyncBaseCore } from "./LiveSyncBaseCore.ts";
import { ModuleSetupObsidian } from "./modules/features/ModuleSetupObsidian.ts";
import { ModuleObsidianMenu } from "./modules/essentialObsidian/ModuleObsidianMenu.ts";
import { ModuleObsidianSettingsAsMarkdown } from "./modules/features/ModuleObsidianSettingAsMarkdown.ts";
import { SetupManager } from "./modules/features/SetupManager.ts";
import { ModuleMigration } from "./modules/essential/ModuleMigration.ts";
import { enableI18nFeature } from "./serviceFeatures/onLayoutReady/enablei18n.ts";
import { useOfflineScanner } from "./lib/src/serviceFeatures/offlineScanner.ts";
import { useCheckRemoteSize } from "./lib/src/serviceFeatures/checkRemoteSize.ts";
import { useOfflineScanner } from "@lib/serviceFeatures/offlineScanner.ts";
import { useRemoteConfiguration } from "@lib/serviceFeatures/remoteConfig.ts";
import { useCheckRemoteSize } from "@lib/serviceFeatures/checkRemoteSize.ts";
import { useRedFlagFeatures } from "./serviceFeatures/redFlag.ts";
import { useSetupProtocolFeature } from "./serviceFeatures/setupObsidian/setupProtocol.ts";
import { useSetupQRCodeFeature } from "@lib/serviceFeatures/setupObsidian/qrCode";
import { useSetupURIFeature } from "@lib/serviceFeatures/setupObsidian/setupUri";
import { useSetupManagerHandlersFeature } from "./serviceFeatures/setupObsidian/setupManagerHandlers.ts";
import { useP2PReplicatorFeature } from "@lib/replication/trystero/useP2PReplicatorFeature.ts";
import { useP2PReplicatorCommands } from "@lib/replication/trystero/useP2PReplicatorCommands.ts";
import { useP2PReplicatorUI } from "./serviceFeatures/useP2PReplicatorUI.ts";
export type LiveSyncCore = LiveSyncBaseCore<ObsidianServiceContext, LiveSyncCommands>;
export default class ObsidianLiveSyncPlugin extends Plugin {
core: LiveSyncCore;
@@ -133,10 +138,6 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const serviceHub = new ObsidianServiceHub(this);
// Capture useP2PReplicator result so it can be passed to the P2PReplicator addon
// TODO: Dependency fix: bit hacky
let p2pReplicatorResult: ReturnType<typeof useP2PReplicator> | undefined;
this.core = new LiveSyncBaseCore(
serviceHub,
(core, serviceHub) => {
@@ -147,7 +148,6 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
new ModuleObsidianEvents(this, core),
new ModuleObsidianSettingDialogue(this, core),
new ModuleObsidianMenu(core),
new ModuleSetupObsidian(core),
new ModuleObsidianSettingsAsMarkdown(core),
new ModuleLog(this, core),
new ModuleObsidianDocumentHistory(this, core),
@@ -174,13 +174,24 @@ export default class ObsidianLiveSyncPlugin extends Plugin {
const featuresInitialiser = enableI18nFeature;
const curriedFeature = () => featuresInitialiser(core);
core.services.appLifecycle.onLayoutReady.addHandler(curriedFeature);
const setupManager = core.getModule(SetupManager);
useRemoteConfiguration(core);
useSetupProtocolFeature(core, setupManager);
useSetupQRCodeFeature(core);
useSetupURIFeature(core);
useSetupManagerHandlersFeature(core, setupManager);
useOfflineScanner(core);
useRedFlagFeatures(core);
useCheckRemoteSize(core);
p2pReplicatorResult = useP2PReplicator(core, [
VIEW_TYPE_P2P,
(leaf: any) => new P2PReplicatorPaneView(leaf, core, p2pReplicatorResult!),
]);
// p2pReplicatorResult = useP2PReplicator(core, [
// VIEW_TYPE_P2P,
// (leaf: any) => new P2PReplicatorPaneView(leaf, core, p2pReplicatorResult!),
// ]);
const replicator = useP2PReplicatorFeature(core);
useP2PReplicatorCommands(core, replicator);
useP2PReplicatorUI(core, core, replicator);
}
);
}

@@ -30,7 +30,7 @@ import { LOG_LEVEL_NOTICE, setGlobalLogFunction } from "octagonal-wheels/common/
import { LogPaneView, VIEW_TYPE_LOG } from "./Log/LogPaneView.ts";
import { serialized } from "octagonal-wheels/concurrency/lock";
import { $msg } from "src/lib/src/common/i18n.ts";
import { P2PLogCollector } from "../../lib/src/replication/trystero/P2PReplicatorCore.ts";
import { P2PLogCollector } from "@/lib/src/replication/trystero/P2PLogCollector.ts";
import type { LiveSyncCore } from "../../main.ts";
import { LiveSyncError } from "@lib/common/LSError.ts";
import { isValidPath } from "@/common/utils.ts";
@@ -277,27 +277,36 @@ export class ModuleLog extends AbstractObsidianModule {
}
async updateMessageArea() {
if (this.messageArea) {
const messageLines = [];
const fileStatus = this.activeFileStatus.value;
if (fileStatus && !this.settings.hideFileWarningNotice) messageLines.push(fileStatus);
const messages = (await this.services.appLifecycle.getUnresolvedMessages()).flat().filter((e) => e);
const stringMessages = messages.filter((m): m is string => typeof m === "string"); // for 'startsWith'
const networkMessages = stringMessages.filter((m) => m.startsWith(MARK_LOG_NETWORK_ERROR));
const otherMessages = stringMessages.filter((m) => !m.startsWith(MARK_LOG_NETWORK_ERROR));
if (!this.messageArea) return;
messageLines.push(...otherMessages);
if (
this.settings.networkWarningStyle !== NetworkWarningStyles.ICON &&
this.settings.networkWarningStyle !== NetworkWarningStyles.HIDDEN
) {
messageLines.push(...networkMessages);
} else if (this.settings.networkWarningStyle === NetworkWarningStyles.ICON) {
if (networkMessages.length > 0) messageLines.push("🔗❌");
}
this.messageArea.innerText = messageLines.map((e) => `⚠️ ${e}`).join("\n");
const showStatusOnEditor = this.settings?.showStatusOnEditor ?? false;
if (this.statusDiv) {
this.statusDiv.style.display = showStatusOnEditor ? "" : "none";
}
if (!showStatusOnEditor) {
this.messageArea.innerText = "";
return;
}
const messageLines = [];
const fileStatus = this.activeFileStatus.value;
if (fileStatus && !this.settings.hideFileWarningNotice) messageLines.push(fileStatus);
const messages = (await this.services.appLifecycle.getUnresolvedMessages()).flat().filter((e) => e);
const stringMessages = messages.filter((m): m is string => typeof m === "string"); // for 'startsWith'
const networkMessages = stringMessages.filter((m) => m.startsWith(MARK_LOG_NETWORK_ERROR));
const otherMessages = stringMessages.filter((m) => !m.startsWith(MARK_LOG_NETWORK_ERROR));
messageLines.push(...otherMessages);
if (
this.settings.networkWarningStyle !== NetworkWarningStyles.ICON &&
this.settings.networkWarningStyle !== NetworkWarningStyles.HIDDEN
) {
messageLines.push(...networkMessages);
} else if (this.settings.networkWarningStyle === NetworkWarningStyles.ICON) {
if (networkMessages.length > 0) messageLines.push("🔗❌");
}
this.messageArea.innerText = messageLines.map((e) => `⚠️ ${e}`).join("\n");
}
onActiveLeafChange() {
@@ -326,6 +335,9 @@ export class ModuleLog extends AbstractObsidianModule {
}
this.statusBar?.setText(newMsg.split("\n")[0]);
if (this.statusDiv) {
this.statusDiv.style.display = this.settings?.showStatusOnEditor ? "" : "none";
}
if (this.settings?.showStatusOnEditor && this.statusDiv) {
if (this.settings.showLongerLogInsideEditor) {
const now = new Date().getTime();
@@ -402,6 +414,7 @@ export class ModuleLog extends AbstractObsidianModule {
this.messageArea = this.statusDiv.createDiv({ cls: "livesync-status-messagearea" });
this.logMessage = this.statusDiv.createDiv({ cls: "livesync-status-logmessage" });
this.logHistory = this.statusDiv.createDiv({ cls: "livesync-status-loghistory" });
this.statusDiv.style.display = this.settings?.showStatusOnEditor ? "" : "none";
eventHub.onEvent(EVENT_LAYOUT_READY, () => this.adjustStatusDivPosition());
if (this.settings?.showStatusOnStatusbar) {
this.statusBar = this.services.API.addStatusBarItem();

@@ -162,8 +162,8 @@ export class ModuleObsidianSettingsAsMarkdown extends AbstractModule {
result == APPLY_AND_REBUILD ||
result == APPLY_AND_FETCH
) {
this.core.settings = settingToApply;
await this.services.setting.saveSettingData();
await this.services.setting.applyExternalSettings(settingToApply, true);
this.services.setting.clearUsedPassphrase();
if (result == APPLY_ONLY) {
this._log("Loaded settings have been applied!", LOG_LEVEL_NOTICE);
return;

@@ -321,8 +321,8 @@ export class ObsidianLiveSyncSettingTab extends PluginSettingTab {
}
closeSetting() {
// @ts-ignore
this.core.app.setting.close();
//@ts-ignore :
this.plugin.app.setting.close();
}
handleElement(element: HTMLElement, func: OnUpdateFunc) {

@@ -21,6 +21,9 @@ export function paneGeneral(
});
this.addOnSaved("displayLanguage", () => this.display());
new Setting(paneEl).autoWireToggle("showStatusOnEditor");
this.addOnSaved("showStatusOnEditor", () => {
eventHub.emitEvent(EVENT_ON_UNRESOLVED_ERROR);
});
new Setting(paneEl).autoWireToggle("showOnlyIconsOnEditor", {
onUpdate: visibleOnly(() => this.isConfiguredAs("showStatusOnEditor", true)),
});

@@ -2,8 +2,11 @@ import {
REMOTE_COUCHDB,
REMOTE_MINIO,
REMOTE_P2P,
DEFAULT_SETTINGS,
LOG_LEVEL_NOTICE,
type ObsidianLiveSyncSettings,
} from "../../../lib/src/common/types.ts";
import { Menu } from "@/deps.ts";
import { $msg } from "../../../lib/src/common/i18n.ts";
import { LiveSyncSetting as Setting } from "./LiveSyncSetting.ts";
import type { ObsidianLiveSyncSettingTab } from "./ObsidianLiveSyncSettingTab.ts";
@@ -21,6 +24,14 @@ import {
import { SETTING_KEY_P2P_DEVICE_NAME } from "../../../lib/src/common/types.ts";
import { SetupManager, UserMode } from "../SetupManager.ts";
import { OnDialogSettingsDefault, type AllSettings } from "./settingConstants.ts";
import { activateRemoteConfiguration } from "../../../lib/src/serviceFeatures/remoteConfig.ts";
import { ConnectionStringParser } from "../../../lib/src/common/ConnectionString.ts";
import type { RemoteConfigurationResult } from "../../../lib/src/common/ConnectionString.ts";
import type { RemoteConfiguration } from "../../../lib/src/common/models/setting.type.ts";
import SetupRemote from "../SetupWizard/dialogs/SetupRemote.svelte";
import SetupRemoteCouchDB from "../SetupWizard/dialogs/SetupRemoteCouchDB.svelte";
import SetupRemoteBucket from "../SetupWizard/dialogs/SetupRemoteBucket.svelte";
import SetupRemoteP2P from "../SetupWizard/dialogs/SetupRemoteP2P.svelte";
function getSettingsFromEditingSettings(editingSettings: AllSettings): ObsidianLiveSyncSettings {
const workObj = { ...editingSettings } as ObsidianLiveSyncSettings;
@@ -39,17 +50,54 @@ const toggleActiveSyncClass = (el: HTMLElement, isActive: () => boolean) => {
return {};
};
function createRemoteConfigurationId(): string {
return `remote-${Date.now().toString(36)}-${Math.random().toString(36).slice(2, 8)}`;
}
function cloneRemoteConfigurations(
configs: Record<string, RemoteConfiguration> | undefined
): Record<string, RemoteConfiguration> {
return Object.fromEntries(Object.entries(configs || {}).map(([id, config]) => [id, { ...config }]));
}
function serializeRemoteConfiguration(settings: ObsidianLiveSyncSettings): string {
if (settings.remoteType === REMOTE_MINIO) {
return ConnectionStringParser.serialize({ type: "s3", settings });
}
if (settings.remoteType === REMOTE_P2P) {
return ConnectionStringParser.serialize({ type: "p2p", settings });
}
return ConnectionStringParser.serialize({ type: "couchdb", settings });
}
function setEmojiButton(button: any, emoji: string, tooltip: string) {
button.setButtonText(emoji);
button.setTooltip(tooltip, { delay: 10, placement: "top" });
// button.buttonEl.addClass("clickable-icon");
button.buttonEl.addClass("mod-muted");
return button;
}
function suggestRemoteConfigurationName(parsed: RemoteConfigurationResult): string {
if (parsed.type === "couchdb") {
try {
const url = new URL(parsed.settings.couchDB_URI);
return `CouchDB ${url.host}`;
} catch {
return "Imported CouchDB";
}
}
if (parsed.type === "s3") {
return `S3 ${parsed.settings.bucket || parsed.settings.endpoint}`;
}
return `P2P ${parsed.settings.P2P_roomID || "Remote"}`;
}
export function paneRemoteConfig(
this: ObsidianLiveSyncSettingTab,
paneEl: HTMLElement,
{ addPanel, addPane }: PageFunctions
): void {
const remoteNameMap = {
[REMOTE_COUCHDB]: $msg("obsidianLiveSyncSettingTab.optionCouchDB"),
[REMOTE_MINIO]: $msg("obsidianLiveSyncSettingTab.optionMinioS3R2"),
[REMOTE_P2P]: "Only Peer-to-Peer",
} as const;
{
/* E2EE */
const E2EEInitialProps = {
@@ -91,24 +139,335 @@ export function paneRemoteConfig(
});
}
{
// TODO: very WIP. need to refactor the UI.
void addPanel(paneEl, $msg("obsidianLiveSyncSettingTab.titleRemoteServer"), () => {}).then((paneEl) => {
const setting = new Setting(paneEl).setName($msg("Active Remote Configuration"));
const actions = new Setting(paneEl).setName("Remote Databases");
// actions.addButton((button) =>
// button
// .setButtonText("Change Remote and Setup")
// .setCta()
// .onClick(async () => {
// const setupManager = this.core.getModule(SetupManager);
// const originalSettings = getSettingsFromEditingSettings(this.editingSettings);
// await setupManager.onSelectServer(originalSettings, UserMode.Update);
// })
// );
const el = setting.controlEl.createDiv({});
el.setText(`${remoteNameMap[this.editingSettings.remoteType] || " - "}`);
setting.addButton((button) =>
button
.setButtonText("Change Remote and Setup")
.setCta()
.onClick(async () => {
const setupManager = this.core.getModule(SetupManager);
const originalSettings = getSettingsFromEditingSettings(this.editingSettings);
await setupManager.onSelectServer(originalSettings, UserMode.Update);
})
);
// Connection List
const listContainer = paneEl.createDiv({ cls: "sls-remote-list" });
const syncRemoteConfigurationBuffers = () => {
const currentConfigs = cloneRemoteConfigurations(this.core.settings.remoteConfigurations);
this.editingSettings.remoteConfigurations = currentConfigs;
this.editingSettings.activeConfigurationId = this.core.settings.activeConfigurationId;
if (this.initialSettings) {
this.initialSettings.remoteConfigurations = cloneRemoteConfigurations(currentConfigs);
this.initialSettings.activeConfigurationId = this.core.settings.activeConfigurationId;
}
};
const persistRemoteConfigurations = async (synchroniseActiveRemote: boolean = false) => {
await this.services.setting.updateSettings((currentSettings) => {
currentSettings.remoteConfigurations = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
currentSettings.activeConfigurationId = this.editingSettings.activeConfigurationId;
if (synchroniseActiveRemote && currentSettings.activeConfigurationId) {
const activated = activateRemoteConfiguration(
currentSettings,
currentSettings.activeConfigurationId
);
if (activated) {
return activated;
}
}
return currentSettings;
}, true);
if (synchroniseActiveRemote) {
await this.saveAllDirtySettings();
}
syncRemoteConfigurationBuffers();
this.requestUpdate();
};
const runRemoteSetup = async (
baseSettings: ObsidianLiveSyncSettings,
remoteType?: typeof REMOTE_COUCHDB | typeof REMOTE_MINIO | typeof REMOTE_P2P
): Promise<ObsidianLiveSyncSettings | false> => {
const setupManager = this.core.getModule(SetupManager);
const dialogManager = setupManager.dialogManager;
let targetRemoteType = remoteType;
if (targetRemoteType === undefined) {
const method = await dialogManager.openWithExplicitCancel(SetupRemote);
if (method === "cancelled") {
return false;
}
targetRemoteType =
method === "bucket" ? REMOTE_MINIO : method === "p2p" ? REMOTE_P2P : REMOTE_COUCHDB;
}
if (targetRemoteType === REMOTE_MINIO) {
const bucketConf = await dialogManager.openWithExplicitCancel(SetupRemoteBucket, baseSettings);
if (bucketConf === "cancelled" || typeof bucketConf !== "object") {
return false;
}
return { ...baseSettings, ...bucketConf, remoteType: REMOTE_MINIO };
}
if (targetRemoteType === REMOTE_P2P) {
const p2pConf = await dialogManager.openWithExplicitCancel(SetupRemoteP2P, baseSettings);
if (p2pConf === "cancelled" || typeof p2pConf !== "object") {
return false;
}
return { ...baseSettings, ...p2pConf, remoteType: REMOTE_P2P };
}
const couchConf = await dialogManager.openWithExplicitCancel(SetupRemoteCouchDB, baseSettings);
if (couchConf === "cancelled" || typeof couchConf !== "object") {
return false;
}
return { ...baseSettings, ...couchConf, remoteType: REMOTE_COUCHDB };
};
const createBaseRemoteSettings = (): ObsidianLiveSyncSettings => ({
...DEFAULT_SETTINGS,
...getSettingsFromEditingSettings(this.editingSettings),
});
const createNewRemoteSettings = (): ObsidianLiveSyncSettings => ({
...DEFAULT_SETTINGS,
encrypt: this.editingSettings.encrypt,
usePathObfuscation: this.editingSettings.usePathObfuscation,
passphrase: this.editingSettings.passphrase,
configPassphraseStore: this.editingSettings.configPassphraseStore,
});
const addRemoteConfiguration = async () => {
const name = await this.services.UI.confirm.askString("Remote name", "Display name", "New Remote");
if (name === false) {
return;
}
const nextSettings = await runRemoteSetup(createNewRemoteSettings());
if (!nextSettings) {
return;
}
const id = createRemoteConfigurationId();
const configs = cloneRemoteConfigurations(this.editingSettings.remoteConfigurations);
configs[id] = {
id,
name: name.trim() || "New Remote",
uri: serializeRemoteConfiguration(nextSettings),
isEncrypted: nextSettings.encrypt,
};
this.editingSettings.remoteConfigurations = configs;
if (!this.editingSettings.activeConfigurationId) {
this.editingSettings.activeConfigurationId = id;
}
await persistRemoteConfigurations(this.editingSettings.activeConfigurationId === id);
refreshList();
};
const importRemoteConfiguration = async () => {
const importedURI = await this.services.UI.confirm.askString(
"Import connection",
"Paste a connection string",
""
);
if (importedURI === false) {
return;
}
const trimmedURI = importedURI.trim();
if (trimmedURI === "") {
return;
}
let parsed: RemoteConfigurationResult;
try {
parsed = ConnectionStringParser.parse(trimmedURI);
} catch (ex) {
this.services.API.addLog(`Failed to import remote configuration: ${ex}`, LOG_LEVEL_NOTICE);
return;
}
const defaultName = suggestRemoteConfigurationName(parsed);
const name = await this.services.UI.confirm.askString("Remote name", "Display name", defaultName);
if (name === false) {
return;
}
const id = createRemoteConfigurationId();
const configs = cloneRemoteConfigurations(this.editingSettings.remoteConfigurations);
configs[id] = {
id,
name: name.trim() || defaultName,
uri: ConnectionStringParser.serialize(parsed),
isEncrypted: false,
};
this.editingSettings.remoteConfigurations = configs;
if (!this.editingSettings.activeConfigurationId) {
this.editingSettings.activeConfigurationId = id;
}
await persistRemoteConfigurations(this.editingSettings.activeConfigurationId === id);
refreshList();
};
actions.addButton((button) =>
setEmojiButton(button, "➕", "Add new connection").onClick(async () => {
await addRemoteConfiguration();
})
);
actions.addButton((button) =>
setEmojiButton(button, "📥", "Import connection").onClick(async () => {
await importRemoteConfiguration();
})
);
const refreshList = () => {
listContainer.empty();
const configs = this.editingSettings.remoteConfigurations || {};
for (const config of Object.values(configs)) {
const row = new Setting(listContainer)
.setName(config.name)
.setDesc(config.uri.split("@").pop() || ""); // Show host part for privacy
if (config.id === this.editingSettings.activeConfigurationId) {
row.nameEl.addClass("sls-active-remote-name");
row.nameEl.appendText(" (Active)");
}
row.addButton((btn) =>
setEmojiButton(btn, "🔧", "Configure").onClick(async () => {
const parsed = ConnectionStringParser.parse(config.uri);
const workSettings = createBaseRemoteSettings();
if (parsed.type === "couchdb") {
workSettings.remoteType = REMOTE_COUCHDB;
} else if (parsed.type === "s3") {
workSettings.remoteType = REMOTE_MINIO;
} else {
workSettings.remoteType = REMOTE_P2P;
}
Object.assign(workSettings, parsed.settings);
const nextSettings = await runRemoteSetup(workSettings, workSettings.remoteType);
if (!nextSettings) {
return;
}
const nextConfigs = cloneRemoteConfigurations(this.editingSettings.remoteConfigurations);
nextConfigs[config.id] = {
...config,
uri: serializeRemoteConfiguration(nextSettings),
isEncrypted: nextSettings.encrypt,
};
this.editingSettings.remoteConfigurations = nextConfigs;
await persistRemoteConfigurations(config.id === this.editingSettings.activeConfigurationId);
refreshList();
})
);
row.addButton((btn) =>
btn
.setButtonText("✅")
.setTooltip("Activate", { delay: 10, placement: "top" })
.setDisabled(config.id === this.editingSettings.activeConfigurationId)
.onClick(async () => {
this.editingSettings.activeConfigurationId = config.id;
await persistRemoteConfigurations(true);
refreshList();
})
);
row.addButton((btn) =>
setEmojiButton(btn, "…", "More actions").onClick(() => {
const menu = new Menu()
.addItem((item) => {
item.setTitle("🪪 Rename").onClick(async () => {
const nextName = await this.services.UI.confirm.askString(
"Remote name",
"Display name",
config.name
);
if (nextName === false) {
return;
}
const nextConfigs = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
nextConfigs[config.id] = {
...config,
name: nextName.trim() || config.name,
};
this.editingSettings.remoteConfigurations = nextConfigs;
await persistRemoteConfigurations();
refreshList();
});
})
.addItem((item) => {
item.setTitle("📤 Export").onClick(async () => {
await this.services.UI.promptCopyToClipboard(
`Remote configuration: ${config.name}`,
config.uri
);
});
})
.addItem((item) => {
item.setTitle("🧬 Duplicate").onClick(async () => {
const nextName = await this.services.UI.confirm.askString(
"Duplicate remote",
"Display name",
`${config.name} (Copy)`
);
if (nextName === false) {
return;
}
const nextId = createRemoteConfigurationId();
const nextConfigs = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
nextConfigs[nextId] = {
...config,
id: nextId,
name: nextName.trim() || `${config.name} (Copy)`,
};
this.editingSettings.remoteConfigurations = nextConfigs;
await persistRemoteConfigurations();
refreshList();
});
})
.addSeparator()
.addItem((item) => {
item.setTitle("🗑 Delete").onClick(async () => {
const confirmed = await this.services.UI.confirm.askYesNoDialog(
`Delete remote configuration '${config.name}'?`,
{ title: "Delete Remote Configuration", defaultOption: "No" }
);
if (confirmed !== "yes") {
return;
}
const nextConfigs = cloneRemoteConfigurations(
this.editingSettings.remoteConfigurations
);
delete nextConfigs[config.id];
this.editingSettings.remoteConfigurations = nextConfigs;
let syncActiveRemote = false;
if (this.editingSettings.activeConfigurationId === config.id) {
const nextActiveId = Object.keys(nextConfigs)[0] || "";
this.editingSettings.activeConfigurationId = nextActiveId;
syncActiveRemote = nextActiveId !== "";
}
await persistRemoteConfigurations(syncActiveRemote);
refreshList();
});
});
const rect = btn.buttonEl.getBoundingClientRect();
menu.showAtPosition({ x: rect.left, y: rect.bottom });
})
);
}
};
refreshList();
});
}
{
// eslint-disable-next-line no-constant-condition
if (false) {
const initialProps = {
info: getCouchDBConfigSummary(this.editingSettings),
};
@@ -143,7 +502,8 @@ export function paneRemoteConfig(
);
});
}
{
// eslint-disable-next-line no-constant-condition
if (false) {
const initialProps = {
info: getBucketConfigSummary(this.editingSettings),
};
@@ -178,7 +538,8 @@ export function paneRemoteConfig(
);
});
}
{
// eslint-disable-next-line no-constant-condition
if (false) {
const getDevicePeerId = () => this.services.config.getSmallConfig(SETTING_KEY_P2P_DEVICE_NAME) || "";
const initialProps = {
info: getP2PConfigSummary(this.editingSettings, {


@@ -275,6 +275,10 @@ export class SetupManager extends AbstractModule {
activate: boolean = true,
extra: () => void = () => {}
): Promise<boolean> {
newConf = await this.services.setting.adjustSettings({
...this.settings,
...newConf,
});
let userMode = _userMode;
if (userMode === UserMode.Unknown) {
if (isObjectDifferent(this.settings, newConf, true) === false) {
@@ -368,13 +372,8 @@ export class SetupManager extends AbstractModule {
* @returns Promise that resolves to true if settings applied successfully, false otherwise
*/
async applySetting(newConf: ObsidianLiveSyncSettings, userMode: UserMode) {
const newSetting = {
...this.core.settings,
...newConf,
};
this.core.settings = newSetting;
this.services.setting.clearUsedPassphrase();
await this.services.setting.saveSettingData();
await this.services.setting.applyExternalSettings(newConf, true);
return true;
}
}


@@ -0,0 +1,157 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import { DEFAULT_SETTINGS, REMOTE_COUCHDB, type ObsidianLiveSyncSettings } from "../../lib/src/common/types";
import { SettingService } from "../../lib/src/services/base/SettingService";
import { ServiceContext } from "../../lib/src/services/base/ServiceBase";
vi.mock("./SetupWizard/dialogs/Intro.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SelectMethodNewUser.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SelectMethodExisting.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/ScanQRCode.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/UseSetupURI.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/OutroNewUser.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/OutroExistingUser.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/OutroAskUserMode.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemote.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteCouchDB.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteBucket.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteP2P.svelte", () => ({ default: {} }));
vi.mock("./SetupWizard/dialogs/SetupRemoteE2EE.svelte", () => ({ default: {} }));
vi.mock("../../lib/src/API/processSetting.ts", () => ({
decodeSettingsFromQRCodeData: vi.fn(),
}));
import { decodeSettingsFromQRCodeData } from "../../lib/src/API/processSetting.ts";
import { SetupManager, UserMode } from "./SetupManager";
class TestSettingService extends SettingService<ServiceContext> {
protected setItem(_key: string, _value: string): void {}
protected getItem(_key: string): string {
return "";
}
protected deleteItem(_key: string): void {}
protected saveData(_setting: ObsidianLiveSyncSettings): Promise<void> {
return Promise.resolve();
}
protected loadData(): Promise<ObsidianLiveSyncSettings | undefined> {
return Promise.resolve(undefined);
}
}
function createLegacyRemoteSetting(): ObsidianLiveSyncSettings {
return {
...DEFAULT_SETTINGS,
remoteConfigurations: {},
activeConfigurationId: "",
remoteType: REMOTE_COUCHDB,
couchDB_URI: "http://localhost:5984",
couchDB_USER: "user",
couchDB_PASSWORD: "password",
couchDB_DBNAME: "vault",
};
}
function createSetupManager() {
const setting = new TestSettingService(new ServiceContext(), {
APIService: {
getSystemVaultName: vi.fn(() => "vault"),
getAppID: vi.fn(() => "app"),
confirm: {
askString: vi.fn(() => Promise.resolve("")),
},
addLog: vi.fn(),
addCommand: vi.fn(),
registerWindow: vi.fn(),
addRibbonIcon: vi.fn(),
registerProtocolHandler: vi.fn(),
} as any,
});
setting.settings = {
...DEFAULT_SETTINGS,
remoteConfigurations: {},
activeConfigurationId: "",
};
vi.spyOn(setting, "saveSettingData").mockResolvedValue();
const dialogManager = {
openWithExplicitCancel: vi.fn(),
open: vi.fn(),
};
const services = {
API: {
addLog: vi.fn(),
addCommand: vi.fn(),
registerWindow: vi.fn(),
addRibbonIcon: vi.fn(),
registerProtocolHandler: vi.fn(),
},
UI: {
dialogManager,
},
setting,
} as any;
const core: any = {
_services: services,
rebuilder: {
scheduleRebuild: vi.fn(() => Promise.resolve()),
scheduleFetch: vi.fn(() => Promise.resolve()),
},
};
Object.defineProperty(core, "services", {
get() {
return services;
},
});
Object.defineProperty(core, "settings", {
get() {
return setting.settings;
},
set(value: ObsidianLiveSyncSettings) {
setting.settings = value;
},
});
return {
manager: new SetupManager(core),
setting,
dialogManager,
core,
};
}
describe("SetupManager", () => {
beforeEach(() => {
vi.clearAllMocks();
vi.restoreAllMocks();
});
it("onUseSetupURI should normalise imported legacy remote settings before applying", async () => {
const { manager, setting, dialogManager } = createSetupManager();
dialogManager.openWithExplicitCancel
.mockResolvedValueOnce(createLegacyRemoteSetting())
.mockResolvedValueOnce("compatible-existing-user");
const result = await manager.onUseSetupURI(UserMode.Unknown, "mock-config://settings");
expect(result).toBe(true);
expect(setting.currentSettings().remoteConfigurations["legacy-couchdb"]?.uri).toContain(
"sls+http://user:password@localhost:5984"
);
expect(setting.currentSettings().activeConfigurationId).toBe("legacy-couchdb");
});
it("decodeQR should normalise imported legacy remote settings before applying", async () => {
const { manager, setting, dialogManager } = createSetupManager();
vi.mocked(decodeSettingsFromQRCodeData).mockReturnValue(createLegacyRemoteSetting());
dialogManager.openWithExplicitCancel.mockResolvedValueOnce("compatible-existing-user");
const result = await manager.decodeQR("qr-data");
expect(result).toBe(true);
expect(decodeSettingsFromQRCodeData).toHaveBeenCalledWith("qr-data");
expect(setting.currentSettings().remoteConfigurations["legacy-couchdb"]?.uri).toContain(
"sls+http://user:password@localhost:5984"
);
expect(setting.currentSettings().activeConfigurationId).toBe("legacy-couchdb");
});
});


@@ -106,10 +106,10 @@ export class ModuleLiveSyncMain extends AbstractModule {
this._log($msg("moduleLiveSyncMain.logReadChangelog"), LOG_LEVEL_NOTICE);
}
//@ts-ignore
if (this.isMobile) {
this.settings.disableRequestURI = true;
}
// //@ts-ignore
// if (this.isMobile) {
// this.settings.disableRequestURI = true;
// }
if (last_version && Number(last_version) < VER) {
this.settings.liveSync = false;
this.settings.syncOnSave = false;


@@ -176,7 +176,7 @@ export async function adjustSettingToRemote(
...config,
...Object.fromEntries(differentItems),
} satisfies ObsidianLiveSyncSettings;
await host.services.setting.applyPartial(config, true);
await host.services.setting.applyExternalSettings(config, true);
log("Remote configuration applied.", LOG_LEVEL_NOTICE);
canProceed = true;
const updatedConfig = host.services.setting.currentSettings();


@@ -49,6 +49,10 @@ const createSettingServiceMock = () => {
return {
settings,
currentSettings: vi.fn(() => settings),
applyExternalSettings: vi.fn((partial: any, _feedback?: boolean) => {
Object.assign(settings, partial);
return Promise.resolve();
}),
applyPartial: vi.fn((partial: any, _feedback?: boolean) => {
Object.assign(settings, partial);
return Promise.resolve();
@@ -552,7 +556,7 @@ describe("Red Flag Feature", () => {
await adjustSettingToRemote(host as any, createLoggerMock(), config);
expect(host.mocks.ui.confirm.askSelectStringDialogue).toHaveBeenCalled();
expect(host.mocks.setting.applyPartial).toHaveBeenCalled();
expect(host.mocks.setting.applyExternalSettings).toHaveBeenCalled();
}
);
const mismatchAcceptedKeys = Object.keys(TweakValuesRecommendedTemplate).filter(
@@ -579,7 +583,7 @@ describe("Red Flag Feature", () => {
await adjustSettingToRemote(host as any, createLoggerMock(), config);
expect(host.mocks.setting.applyPartial).toHaveBeenCalled();
expect(host.mocks.setting.applyExternalSettings).toHaveBeenCalled();
expect(host.mocks.ui.confirm.askSelectStringDialogue).not.toHaveBeenCalled();
}
);


@@ -0,0 +1,34 @@
import { type SetupManager, UserMode } from "@/modules/features/SetupManager";
import type { SetupFeatureHost } from "@lib/serviceFeatures/setupObsidian/types";
import { EVENT_REQUEST_OPEN_P2P_SETTINGS, EVENT_REQUEST_OPEN_SETUP_URI } from "@lib/events/coreEvents";
import { eventHub } from "@lib/hub/hub";
import { fireAndForget } from "@lib/common/utils";
import type { NecessaryServices } from "@lib/interfaces/ServiceModule";
export async function openSetupURI(setupManager: SetupManager) {
await setupManager.onUseSetupURI(UserMode.Unknown);
}
export async function openP2PSettings(host: SetupFeatureHost, setupManager: SetupManager) {
return await setupManager.onP2PManualSetup(UserMode.Update, host.services.setting.currentSettings(), false);
}
export function useSetupManagerHandlersFeature(
host: NecessaryServices<"API" | "UI" | "setting" | "appLifecycle", never>,
setupManager: SetupManager
) {
host.services.appLifecycle.onLoaded.addHandler(() => {
host.services.API.addCommand({
id: "livesync-opensetupuri",
name: "Use the copied setup URI (Formerly Open setup URI)",
callback: () => fireAndForget(openSetupURI(setupManager)),
});
eventHub.onEvent(EVENT_REQUEST_OPEN_SETUP_URI, () => fireAndForget(() => openSetupURI(setupManager)));
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P_SETTINGS, () =>
fireAndForget(() => openP2PSettings(host, setupManager))
);
return Promise.resolve(true);
});
}


@@ -0,0 +1,87 @@
import { describe, expect, it, vi, afterEach } from "vitest";
import { eventHub } from "@lib/hub/hub";
import { EVENT_REQUEST_OPEN_P2P_SETTINGS, EVENT_REQUEST_OPEN_SETUP_URI } from "@lib/events/coreEvents";
import { openP2PSettings, openSetupURI, useSetupManagerHandlersFeature } from "./setupManagerHandlers";
vi.mock("@/modules/features/SetupManager", () => {
return {
UserMode: {
Unknown: "unknown",
Update: "unknown",
},
};
});
describe("setupObsidian/setupManagerHandlers", () => {
afterEach(() => {
vi.restoreAllMocks();
vi.clearAllMocks();
});
it("openSetupURI should delegate to SetupManager.onUseSetupURI", async () => {
const setupManager = {
onUseSetupURI: vi.fn(async () => await Promise.resolve(true)),
} as any;
await openSetupURI(setupManager);
expect(setupManager.onUseSetupURI).toHaveBeenCalledWith("unknown");
});
it("openP2PSettings should delegate to SetupManager.onP2PManualSetup", async () => {
const settings = { x: 1 };
const host = {
services: {
setting: {
currentSettings: vi.fn(() => settings),
},
},
} as any;
const setupManager = {
onP2PManualSetup: vi.fn(async () => await Promise.resolve(true)),
} as any;
await openP2PSettings(host, setupManager);
expect(setupManager.onP2PManualSetup).toHaveBeenCalledWith("unknown", settings, false);
});
it("useSetupManagerHandlersFeature should register onLoaded handler that wires command and events", async () => {
const addHandler = vi.fn();
const addCommand = vi.fn();
const onEventSpy = vi.spyOn(eventHub, "onEvent");
const host = {
services: {
API: {
addCommand,
},
appLifecycle: {
onLoaded: {
addHandler,
},
},
setting: {
currentSettings: vi.fn(() => ({ x: 1 })),
},
},
} as any;
const setupManager = {
onUseSetupURI: vi.fn(async () => await Promise.resolve(true)),
onP2PManualSetup: vi.fn(async () => await Promise.resolve(true)),
} as any;
useSetupManagerHandlersFeature(host, setupManager);
expect(addHandler).toHaveBeenCalledTimes(1);
const loadedHandler = addHandler.mock.calls[0][0] as () => Promise<boolean>;
await loadedHandler();
expect(addCommand).toHaveBeenCalledWith(
expect.objectContaining({
id: "livesync-opensetupuri",
name: "Use the copied setup URI (Formerly Open setup URI)",
})
);
expect(onEventSpy).toHaveBeenCalledWith(EVENT_REQUEST_OPEN_SETUP_URI, expect.any(Function));
expect(onEventSpy).toHaveBeenCalledWith(EVENT_REQUEST_OPEN_P2P_SETTINGS, expect.any(Function));
});
});


@@ -0,0 +1,37 @@
import { LOG_LEVEL_NOTICE, LOG_LEVEL_VERBOSE } from "@lib/common/types";
import type { LogFunction } from "@lib/services/lib/logUtils";
import { createInstanceLogFunction } from "@lib/services/lib/logUtils";
import type { SetupFeatureHost } from "@lib/serviceFeatures/setupObsidian/types";
import { configURIBase } from "@/common/types";
import type { NecessaryServices } from "@lib/interfaces/ServiceModule";
import { type SetupManager, UserMode } from "@/modules/features/SetupManager";
async function handleSetupProtocol(setupManager: SetupManager, conf: Record<string, string>) {
if (conf.settings) {
await setupManager.onUseSetupURI(UserMode.Unknown, `${configURIBase}${encodeURIComponent(conf.settings)}`);
} else if (conf.settingsQR) {
await setupManager.decodeQR(conf.settingsQR);
}
}
export function registerSetupProtocolHandler(host: SetupFeatureHost, log: LogFunction, setupManager: SetupManager) {
try {
host.services.API.registerProtocolHandler("setuplivesync", async (conf) => {
await handleSetupProtocol(setupManager, conf);
});
} catch (e) {
log("Failed to register protocol handler. This feature may not work in some environments.", LOG_LEVEL_NOTICE);
log(e, LOG_LEVEL_VERBOSE);
}
}
export function useSetupProtocolFeature(
host: NecessaryServices<"API" | "UI" | "setting" | "appLifecycle", never>,
setupManager: SetupManager
) {
const log = createInstanceLogFunction("SF:SetupProtocol", host.services.API);
host.services.appLifecycle.onLoaded.addHandler(() => {
registerSetupProtocolHandler(host, log, setupManager);
return Promise.resolve(true);
});
}

View File

@@ -0,0 +1,131 @@
import { describe, expect, it, vi, afterEach } from "vitest";
import { registerSetupProtocolHandler, useSetupProtocolFeature } from "./setupProtocol";
vi.mock("@/common/types", () => {
return {
configURIBase: "mock-config://",
};
});
vi.mock("@/modules/features/SetupManager", () => {
return {
UserMode: {
Unknown: "unknown",
Update: "unknown",
},
};
});
describe("setupObsidian/setupProtocol", () => {
afterEach(() => {
vi.restoreAllMocks();
vi.clearAllMocks();
});
it("registerSetupProtocolHandler should route settings payload to onUseSetupURI", async () => {
let protocolHandler: ((params: Record<string, string>) => Promise<void>) | undefined;
const host = {
services: {
API: {
registerProtocolHandler: vi.fn(
(_action: string, handler: (params: Record<string, string>) => Promise<void>) => {
protocolHandler = handler;
}
),
},
},
} as any;
const log = vi.fn();
const setupManager = {
onUseSetupURI: vi.fn(async () => await Promise.resolve(true)),
decodeQR: vi.fn(async () => await Promise.resolve(true)),
} as any;
registerSetupProtocolHandler(host, log, setupManager);
expect(host.services.API.registerProtocolHandler).toHaveBeenCalledWith("setuplivesync", expect.any(Function));
await protocolHandler!({ settings: "a b" });
expect(setupManager.onUseSetupURI).toHaveBeenCalledWith(
"unknown",
`mock-config://${encodeURIComponent("a b")}`
);
expect(setupManager.decodeQR).not.toHaveBeenCalled();
});
it("registerSetupProtocolHandler should route settingsQR payload to decodeQR", async () => {
let protocolHandler: ((params: Record<string, string>) => Promise<void>) | undefined;
const host = {
services: {
API: {
registerProtocolHandler: vi.fn(
(_action: string, handler: (params: Record<string, string>) => Promise<void>) => {
protocolHandler = handler;
}
),
},
},
} as any;
const log = vi.fn();
const setupManager = {
onUseSetupURI: vi.fn(async () => await Promise.resolve(true)),
decodeQR: vi.fn(async () => await Promise.resolve(true)),
} as any;
registerSetupProtocolHandler(host, log, setupManager);
await protocolHandler!({ settingsQR: "qr-data" });
expect(setupManager.decodeQR).toHaveBeenCalledWith("qr-data");
expect(setupManager.onUseSetupURI).not.toHaveBeenCalled();
});
it("registerSetupProtocolHandler should log and continue when registration throws", () => {
const host = {
services: {
API: {
registerProtocolHandler: vi.fn(() => {
throw new Error("register failed");
}),
},
},
} as any;
const log = vi.fn();
const setupManager = {
onUseSetupURI: vi.fn(),
decodeQR: vi.fn(),
} as any;
registerSetupProtocolHandler(host, log, setupManager);
expect(log).toHaveBeenCalledTimes(2);
});
it("useSetupProtocolFeature should register onLoaded handler", async () => {
const addHandler = vi.fn();
const registerProtocolHandler = vi.fn();
const host = {
services: {
API: {
addLog: vi.fn(),
registerProtocolHandler,
},
appLifecycle: {
onLoaded: {
addHandler,
},
},
},
} as any;
const setupManager = {
onUseSetupURI: vi.fn(),
decodeQR: vi.fn(),
} as any;
useSetupProtocolFeature(host, setupManager);
expect(addHandler).toHaveBeenCalledTimes(1);
const loadedHandler = addHandler.mock.calls[0][0] as () => Promise<boolean>;
await loadedHandler();
expect(registerProtocolHandler).toHaveBeenCalledWith("setuplivesync", expect.any(Function));
});
});


@@ -0,0 +1,76 @@
import { eventHub, EVENT_REQUEST_OPEN_P2P } from "@/common/events";
import { reactiveSource } from "octagonal-wheels/dataobject/reactive_v2";
import type { NecessaryServices } from "@lib/interfaces/ServiceModule";
import { type UseP2PReplicatorResult } from "@/lib/src/replication/trystero/UseP2PReplicatorResult";
import { P2PLogCollector } from "@/lib/src/replication/trystero/P2PLogCollector";
import { P2PReplicatorPaneView, VIEW_TYPE_P2P } from "@/features/P2PSync/P2PReplicator/P2PReplicatorPaneView";
import type { LiveSyncCore } from "@/main";
/**
* ServiceFeature: P2P Replicator lifecycle management.
* Binds a LiveSyncTrysteroReplicator to the host's lifecycle events,
* following the same middleware style as useOfflineScanner.
*
* @param viewTypeAndFactory Optional [viewType, factory] pair for registering the P2P pane view.
* When provided, also registers commands and ribbon icon via services.API.
*/
export function useP2PReplicatorUI(
host: NecessaryServices<
| "API"
| "appLifecycle"
| "setting"
| "vault"
| "database"
| "databaseEvents"
| "keyValueDB"
| "replication"
| "config"
| "UI"
| "replicator",
never
>,
core: LiveSyncCore,
replicator: UseP2PReplicatorResult
) {
// const env: LiveSyncTrysteroReplicatorEnv = { services: host.services as any };
const getReplicator = () => replicator.replicator;
const p2pLogCollector = new P2PLogCollector();
const storeP2PStatusLine = reactiveSource("");
p2pLogCollector.p2pReplicationLine.onChanged((line) => {
storeP2PStatusLine.value = line.value;
});
// Register view, commands and ribbon if a view factory is provided
const viewType = VIEW_TYPE_P2P;
const factory = (leaf: any) => {
return new P2PReplicatorPaneView(leaf, core, {
replicator: getReplicator(),
p2pLogCollector,
storeP2PStatusLine,
});
};
const openPane = () => host.services.API.showWindow(viewType);
host.services.API.registerWindow(viewType, factory);
host.services.appLifecycle.onInitialise.addHandler(() => {
eventHub.onEvent(EVENT_REQUEST_OPEN_P2P, () => {
void openPane();
});
host.services.API.addCommand({
id: "open-p2p-replicator",
name: "P2P Sync : Open P2P Replicator",
callback: () => {
void openPane();
},
});
host.services.API.addRibbonIcon("waypoints", "P2P Replicator", () => {
void openPane();
})?.addClass?.("livesync-ribbon-replicate-p2p");
return Promise.resolve(true);
});
return { replicator: getReplicator(), p2pLogCollector, storeP2PStatusLine };
}


@@ -3,6 +3,31 @@ set -e
script_dir=$(dirname "$0")
webpeer_dir=$script_dir/../../src/apps/webpeer
docker run -d --name relay-test -p 4000:7777 \
--tmpfs /app/strfry-db:rw,size=256m \
--entrypoint sh \
ghcr.io/hoytech/strfry:latest \
-lc 'cat > /tmp/strfry.conf <<"EOF"
db = "./strfry-db/"
relay {
bind = "0.0.0.0"
port = 7777
nofiles = 100000
info {
name = "livesync test relay"
description = "local relay for livesync p2p tests"
}
maxWebsocketPayloadSize = 131072
autoPingSeconds = 55
writePolicy {
plugin = ""
}
}
EOF
exec /app/strfry --config /tmp/strfry.conf relay'
npm run --prefix "$webpeer_dir" build
docker run -d --name webpeer-test -p 8081:8043 -v "$webpeer_dir/dist":/srv/http pierrezemb/gostatic


@@ -3,6 +3,72 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)
The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 got a lot of updates, thankfully, compatibility has been kept and we do not need breaking changes! In other words, once things are stable enough, the next version will be v1.0.0. Though that is only my hope.
## Unreleased 2
3rd April, 2026
As this commit touches a rather fragile matter, I shall add a note here.
As you know, untagged updates are not tested well; please be careful when using your own build. In most cases, I check that the warnings have disappeared, that the code compiles successfully without any warnings, and that it runs on the desktop.
### Fixed
- No unexpected error (about a replicator) occurs during the early stage of initialisation.
### New features
- Now we can configure multiple Remote Databases of the same type, e.g., multiple CouchDBs or S3 remotes.
- We can switch between multiple Remote Databases in the settings dialogue.
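A sketch of what keeping several named remotes and switching between them could look like. The names `RemoteConfig` and `RemoteRegistry` are hypothetical illustrations; the plug-in's actual settings schema may differ:

```typescript
// Hypothetical shape for holding multiple named remote configurations and an
// active selection. Illustration only; not the plug-in's real schema.
interface RemoteConfig {
    type: "couchdb" | "s3";
    uri: string;
}

class RemoteRegistry {
    private remotes = new Map<string, RemoteConfig>();
    private activeName?: string;

    add(name: string, conf: RemoteConfig): void {
        this.remotes.set(name, conf);
    }
    // Switching only changes the active selection; stored remotes are kept,
    // so we can move back and forth without re-entering credentials.
    switchTo(name: string): void {
        if (!this.remotes.has(name)) throw new Error(`Unknown remote: ${name}`);
        this.activeName = name;
    }
    current(): RemoteConfig | undefined {
        return this.activeName ? this.remotes.get(this.activeName) : undefined;
    }
}
```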
## Unreleased
2nd April, 2026
### CLI
#### Fixed
- Replication progress is now correctly saved and restored in the CLI.
## ~~0.25.55~~ 0.25.56
30th March, 2026
### Fixed
- The `Peer-to-Peer Sync is not enabled. We cannot open a new connection.` error no longer occurs when we have not enabled P2P sync and do not expect to use it (#830).
### CLI
- Fixed incomplete localStorage support in the CLI (#831). Thank you so much @rewse !
- Fixed the issue where the CLI could not be connected to the remote which had been locked once (#833), also thanks to @rewse !
## 0.25.54
18th March, 2026
### Fixed
- Remote storage size check now works correctly again (#818).
- Some buttons on the settings dialogue now respond correctly again (#827).
### Refactored
- P2P replicator has been refactored to be a little more robust and easier to understand.
- Deleted items which are no longer used and might cause potential problems.
### CLI
- Fixed the corrupted display of the help message.
- Removed some unnecessary code.
### WebApp
- Fixed the issue where the detail level was not being applied in the log pane.
- Pop-ups are now shown in the web app as well.
- Added test coverage.
## 0.25.53
17th March, 2026
@@ -167,91 +233,6 @@ As a result of recent refactoring, we are able to write tests more easily now!
- `ModuleObsidianAPI` has been removed and implemented in `APIService` and `RemoteService`.
- Now `APIService` is responsible for the network-online-status, not `databaseService.managers.networkManager`.
## 0.25.44
24th February, 2026
This release represents a significant architectural overhaul of the plug-in, focusing on modularity, testability, and stability. While many changes are internal, they pave the way for more robust features and easier maintenance.
However, as this update is very substantial, please do feel free to let me know if you encounter any issues.
### Fixed
- Ignore files (e.g., `.ignore`) are now handled efficiently.
- Replication & Database:
- Replication statistics are now correctly reset after switching replicators.
- A fix for `File already exists` errors on .md files has been merged (PR #802). So thanks @waspeer for the contribution!
### Improved
- Now we can configure network-error banners as icons, or hide them completely with the new `Network Warning Style` setting in the `General` pane of the settings dialogue. (#770, PR #804)
- Thanks so much to @A-wry!
### Refactored
#### Architectural Overhaul:
- A major transition from Class-based Modules to a Service/Middleware architecture has begun.
- Many modules (for example, `ModulePouchDB`, `ModuleLocalDatabaseObsidian`, `ModuleKeyValueDB`) have been removed or integrated into specific Services (`database`, `keyValueDB`, etc.).
- Reduced reliance on dynamic binding and inverted dependencies; dependencies are now explicit.
- `ObsidianLiveSyncPlugin` properties (`replicator`, `localDatabase`, `storageAccess`, etc.) have been moved to their respective services for better separation of concerns.
- In this refactoring, services will henceforth, as a rule, cease to use setHandler (that is to say, simple lazy binding).
- Handlers will be implemented directly in the service.
- However, not everything will be middlewarised. Modules that maintain state or make decisions based on the results of multiple handlers are permitted.
- Lifecycle:
- Application LifeCycle now starts in `Main` rather than `ServiceHub` or `ObsidianMenuModule`, ensuring smoother startup coordination.
#### New Services & Utilities:
- Added a `control` service to orchestrate other services (for example, handling stop/start logic during settings realisation).
- Added `UnresolvedErrorManager` to handle and display unresolved errors in a unified way.
- Added `logUtils` to unify logging injection and formatting.
- `VaultService.isTargetFile` now uses multiple, distinct checkers for better extensibility.
#### Code Separation:
- Separated Obsidian-specific logic from base logic for `StorageEventManager` and `FileAccess` modules.
- Moved reactive state values and statistics from the main plug-in instance to the services responsible for them.
#### Internal Cleanups:
- Many functions have been renamed for clarity (for example, `_isTargetFileByLocalDB` is now `_isTargetAcceptedByLocalDB`).
- Added `override` keywords to overridden items and removed dynamic binding for clearer code inheritance.
- Moved common functions to the common library.
#### Dependencies:
- Bumped dependencies to a point where they can be considered problem-free (checked by a human-powered artefact diff).
- Svelte, terser, and some others will be bumped later; they have a significant impact on the diff and would obscure it entirely.
- You may be surprised, but when I bump a library, I actually check the diff for any unintended code.
## 0.25.43
5th February, 2026
### Fixed
- Encryption/decryption issues when using Object Storage as remote have been fixed.
- Now the plug-in falls back to V1 encryption/decryption when V2 fails (if not configured as ForceV1).
- This may fix the issue reported in #772.
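The try-V2-then-V1 behaviour described above can be sketched as a plain fallback wrapper. `decryptWithFallback` and its parameters are hypothetical names for illustration; the plug-in's actual encryption code lives in its common library:

```typescript
// Hypothetical sketch of the described fallback: try V2 decryption first and
// fall back to V1 on failure, unless ForceV1 pins the older scheme.
async function decryptWithFallback(
    data: string,
    decryptV2: (d: string) => Promise<string>,
    decryptV1: (d: string) => Promise<string>,
    forceV1 = false
): Promise<string> {
    if (forceV1) {
        // Configured as ForceV1: never attempt V2.
        return decryptV1(data);
    }
    try {
        return await decryptV2(data);
    } catch {
        // V2 failed (e.g., data written by an older version): retry with V1.
        return decryptV1(data);
    }
}
```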
### Notice
Quite a few packages have been updated in this release. Please report if you find any unexpected behaviour after this update.
## 0.25.42
2nd February, 2026
This release is identical to 0.25.41-patched-3, except for the version number.
### Refactored
- Now the service context is `protected` instead of `private` in `ServiceBase`.
- This change allows derived classes to access the context directly.
- Some dynamically bound features have been moved into services for better dependency management.
- `WebPeer` has been moved to the main repository from the sub repository `livesync-commonlib` for correct dependency management.
- Migrated from the outdated, unstable platform abstraction layer to services.
- A bit more services will be added in the future for better maintainability.
Full notes are in
[updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md).


@@ -5,6 +5,23 @@ import path from "path";
import dotenv from "dotenv";
import { grantClipboardPermissions, writeHandoffFile, readHandoffFile } from "./test/lib/commands";
// P2P test environment variables
// Configure these in .env or .test.env, or inject via shell before running tests.
// Shell-injected values take precedence over dotenv files.
//
// Required:
// P2P_TEST_ROOM_ID - Shared room identifier for peers to discover each other
// P2P_TEST_PASSPHRASE - Encryption passphrase shared between test peers
//
// Optional:
// P2P_TEST_HOST_PEER_NAME - Name used to identify the host peer (default varies)
// P2P_TEST_RELAY - Nostr relay server URL used for peer signalling/discovery
// P2P_TEST_APP_ID - Application ID scoping the P2P session
// P2P_TEST_HANDOFF_FILE - File path used to pass state between up/down test phases
//
// General test options (also read from env):
// ENABLE_DEBUGGER - Set to "true" to attach a debugger and pause before tests
// ENABLE_UI - Set to "true" to open a visible browser window during tests
const defEnv = dotenv.config({ path: ".env" }).parsed;
const testEnv = dotenv.config({ path: ".test.env" }).parsed;
// Merge: dotenv files < process.env (so shell-injected vars like P2P_TEST_* take precedence)
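The precedence stated in the comment above (`.env` < `.test.env` < `process.env`) can be sketched with object spreads, where later spreads win. `mergeTestEnv` is a hypothetical helper name for illustration, not part of the actual config:

```typescript
// Sketch of the described precedence: .env < .test.env < process.env.
// Hypothetical helper; shown only to make the merge order explicit.
type EnvMap = Record<string, string | undefined>;

function mergeTestEnv(defEnv: EnvMap | undefined, testEnv: EnvMap | undefined, shellEnv: EnvMap): EnvMap {
    // Later spreads overwrite earlier ones, so shell-injected variables
    // (e.g., P2P_TEST_*) take precedence over both dotenv files, and
    // .test.env overrides .env. Spreading undefined contributes nothing.
    return { ...defEnv, ...testEnv, ...shellEnv };
}
```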