mirror of
https://github.com/vrtmrz/obsidian-livesync.git
synced 2026-04-29 12:28:35 +00:00
Compare commits
22 Commits
0.25.56+pa
...
0.25.60
| Author | SHA1 | Date |
|---|---|---|
| | 81d8224330 | |
| | cc466a4b3c | |
| | ceebca7de9 | |
| | c2f696d0a4 | |
| | 1aa7c45794 | |
| | faefa80cbd | |
| | 3737eacffd | |
| | 4c0af0b608 | |
| | bb69eb13e7 | |
| | 7c9db6376f | |
| | 4c04e4e676 | |
| | 14ec35b257 | |
| | b609e4973c | |
| | 354f0be9a3 | |
| | 16804ed34c | |
| | 31bd270869 | |
| | b5d054f259 | |
| | 1ef2955d00 | |
| | 6ef56063b3 | |
| | a912585800 | |
| | 7a863625bc | |
| | 12f04f6cf7 |
107
.github/ISSUE_TEMPLATE/issue-report.md
vendored
@@ -2,77 +2,104 @@
name: Issue report
about: Create a report to help us improve
title: ''
labels: ''
labels: 'bug'
assignees: ''

---

Thank you for taking the time to report this issue!
To improve the process, I would like to ask you to let me know the information in advance.
Before filling in this form, please read: [How to report an issue](../docs/to_issue_reporting.md).

All instructions and examples, and empty entries can be deleted.
Just for your information, a [filled example](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Issue+example) is also written.
Issues with sufficient information will be prioritised.

## Abstract
The synchronisation hung up immediately after connecting.
---

## Expected behaviour
- Synchronisation ends with the message `Replication completed`
- Everything synchronised
## Required

## Actually happened
- Synchronisation has been cancelled with the message `TypeError ...` (captured in the attached log, around LL.10-LL.12)
- No files synchronised
### Abstract
<!-- Briefly describe the problem in one or two sentences. -->

## Reproducing procedure
### Expected behaviour
<!-- What did you expect to happen? -->

1. Configure LiveSync as in the attached material.
2. Click the replication button on the ribbon.
3. Synchronising has begun.
4. About two or three seconds later, we got the error `TypeError ...`.
5. Replication has been stopped. No files synchronised.
### Actually happened
<!-- What actually happened? Include any error messages. -->

Note: If you do not catch the reproducing procedure, please let me know the frequency and signs.

## Report materials
If the information is not available, do not hesitate to report it as it is. You can also of course omit it if you think this is indeed unnecessary. If it is necessary, I will ask you.

### Report from the LiveSync
For more information, please refer to [Making the report](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Making+the+report).
<details>
<summary>Report from hatch</summary>

```
<!-- paste here -->
```
</details>
### Reproducing procedure
<!-- Step-by-step instructions to reproduce the issue. If you cannot reproduce it reliably, please describe the frequency and any signs you noticed. -->

### Obsidian debug info
Please provide debug info for **each device involved**. The primary device (where the issue occurred) is required; others are strongly recommended. If your issue involves synchronisation between devices, debug info from relevant devices is very helpful.
To get it: open the command palette → "Show debug info".

<details>
<summary>Debug info</summary>
<summary>Device 1 (primary)</summary>

```
<!-- paste here -->
```
</details>

<details>
<summary>Device 2 (if applicable)</summary>

```
<!-- paste here -->
```
</details>

### LiveSync version
The hatch report (below) includes version information. If you cannot provide the report, please fill in the version here.

- Self-hosted LiveSync version: <!-- e.g. 0.23.0 — find it in Obsidian Settings → Community Plugins -->

### Report from LiveSync
Open the `Hatch` pane in LiveSync settings and press `Make report`. Paste here or upload to [Gist](https://gist.github.com/) and share the link.

<details>
<summary>Report from hatch (primary)</summary>

```
<!-- paste here or link to Gist -->
```
</details>

<details>
<summary>Report from hatch (if applicable)</summary>

```
<!-- paste here or link to Gist -->
```
</details>

### Plug-in log
We can see the log by tapping the Document box icon. If you noticed something suspicious, please let me know.
Note: **Please enable `Verbose Log`**. For detail, refer to [Logging](https://docs.vrtmrz.net/LiveSync/hintandtrivia/Logging), please.
Enable `Verbose Log` in General Settings first, then reproduce the issue and copy the log (tap the document box icon in the ribbon).
Paste here or upload to [Gist](https://gist.github.com/) and share the link.

<details>
<summary>Plug-in log</summary>
<summary>Plug-in log (primary)</summary>

```
<!-- paste here -->
<!-- paste here or link to Gist -->
```
</details>

### Network log
Network logs displayed in DevTools will possibly help with connection-related issues. To capture that, please refer to [DevTools](https://docs.vrtmrz.net/LiveSync/hintandtrivia/DevTools).

<details>
<summary>Plug-in log (if applicable)</summary>

```
<!-- paste here or link to Gist -->
```
</details>

---

## Optional

### Screenshots
If applicable, please add screenshots to help explain your problem.

### Other information, insights and intuition.
### Other information, insights and intuition
Please provide any additional context or information about the problem.

@@ -24,7 +24,7 @@ Additionally, it supports peer-to-peer synchronisation using WebRTC now (experim
- WebRTC is a peer-to-peer synchronisation method, so **at least one device must be online to synchronise**.
- Instead of keeping your device online as a stable peer, you can use two pseudo-peers:
  - [livesync-serverpeer](https://github.com/vrtmrz/livesync-serverpeer): A pseudo-client running on the server for receiving and sending data between devices.
  - [webpeer](https://github.com/vrtmrz/livesync-commonlib/tree/main/apps/webpeer): A pseudo-client for receiving and sending data between devices.
  - [webpeer](https://github.com/vrtmrz/obsidian-livesync/tree/main/src/apps/webpeer): A pseudo-client for receiving and sending data between devices.
    - A pre-built instance is available at [fancy-syncing.vrtmrz.net/webpeer](https://fancy-syncing.vrtmrz.net/webpeer/) (hosted on the vrtmrz blog site). This is also peer-to-peer. Feel free to use it.
- For more information, refer to the [English explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync-en.html) or the [Japanese explanatory article](https://fancy-syncing.vrtmrz.net/blog/0034-p2p-sync).

92
aggregator.html
Normal file
@@ -0,0 +1,92 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Self-hosted LiveSync Setup QR Aggregator</title>
    <style>
        body { font-family: sans-serif; display: flex; flex-direction: column; align-items: center; justify-content: center; height: 100vh; margin: 0; background-color: #f4f4f9; color: #333; }
        .container { background: white; padding: 2rem; border-radius: 8px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); text-align: center; max-width: 90%; }
        .progress { margin: 20px 0; font-size: 1.2rem; font-weight: bold; }
        .status { margin-bottom: 20px; color: #666; }
        .btn { display: inline-block; padding: 12px 24px; background-color: #7c4dff; color: white; text-decoration: none; border-radius: 4px; font-weight: bold; transition: background-color 0.2s; border: none; cursor: pointer; }
        .btn:hover { background-color: #651fff; }
        .btn:disabled { background-color: #ccc; cursor: not-allowed; }
        .grid { display: grid; grid-template-columns: repeat(auto-fit, minmax(40px, 1fr)); gap: 8px; margin: 20px 0; }
        .tile { width: 40px; height: 40px; border: 2px solid #ddd; border-radius: 4px; display: flex; align-items: center; justify-content: center; font-size: 0.8rem; }
        .tile.filled { background-color: #7c4dff; color: white; border-color: #7c4dff; }
    </style>
</head>
<body>
    <div class="container">
        <h1>LiveSync Setup</h1>
        <div id="app">
            <p>Checking hash data...</p>
        </div>
    </div>

    <script>
        function updateUI() {
            const hash = window.location.hash.substring(1);
            const params = new URLSearchParams(hash);

            const id = params.get('id');
            const total = parseInt(params.get('n') || '0');
            const index = parseInt(params.get('i') || '-1');
            const data = params.get('d');

            const app = document.getElementById('app');

            // Reject missing parameters; `total > 0` and `index >= 0` also reject NaN from parseInt
            if (!id || !(total > 0) || !(index >= 0) || !data) {
                app.innerHTML = '<p class="status">Invalid setup URL. Please scan the QR code correctly.</p>';
                return;
            }

            // Get session data
            const storageKey = 'ls_agg_' + id;
            let session = JSON.parse(localStorage.getItem(storageKey) || '{}');

            // Save current data
            session[index] = data;
            localStorage.setItem(storageKey, JSON.stringify(session));

            const receivedIndexes = Object.keys(session).map(Number);
            const count = receivedIndexes.length;

            let html = `
                <div class="status">Session ID: ${id}</div>
                <div class="progress">${count} / ${total} Loaded</div>
                <div class="grid">
            `;

            for (let i = 0; i < total; i++) {
                const isFilled = session[i] !== undefined;
                html += `<div class="tile ${isFilled ? 'filled' : ''}">${i + 1}</div>`;
            }
            html += `</div>`;

            if (count === total) {
                const sortedData = Array.from({length: total}, (_, i) => session[i]).join('');
                // Use the correct protocol for settings
                const obsidianUri = `obsidian://setuplivesync?settingsQR=${sortedData}`;

                html += `
                    <p>All parts have been collected!</p>
                    <a href="${obsidianUri}" class="btn">Open Obsidian to complete setup</a>
                    <p style="margin-top:20px; font-size:0.8rem; color: #999;">Note: If the button does not respond, please ensure you are opening this in a browser that can trigger Obsidian.</p>
                `;
            } else {
                html += `
                    <p class="status">Please scan the next QR code.</p>
                    <button class="btn" disabled>Waiting...</button>
                `;
            }

            app.innerHTML = html;
        }

        window.addEventListener('hashchange', updateUI);
        updateUI();
    </script>
</body>
</html>
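The reassembly step in the aggregator can be sketched as a standalone function (the function name and argument shapes here are ours, not part of the plugin): scanned chunks are kept keyed by their index, and once all `total` parts are present they are joined in index order.

```javascript
// Sketch of the QR-chunk reassembly logic used by the aggregator page.
// `session` maps chunk index -> chunk string (as restored from localStorage).
function assembleChunks(session, total) {
    const received = Object.keys(session).length;
    if (received !== total) return null; // still waiting for more QR codes
    // Join chunks strictly in index order, regardless of scan order
    return Array.from({ length: total }, (_, i) => session[i]).join('');
}
```

Because the chunks are stored by index rather than appended in scan order, scanning the QR codes out of order still yields the correct settings string.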
@@ -1,7 +1,6 @@
# For details and other explanations about this file refer to:
# https://github.com/vrtmrz/obsidian-livesync/blob/main/docs/setup_own_server.md#traefik

version: "2.1"
services:
  couchdb:
    image: couchdb:latest

@@ -230,7 +230,6 @@ And, be sure to check the server log and be careful of malicious access.
If you are using Traefik, this [docker-compose.yml](https://github.com/vrtmrz/obsidian-livesync/blob/main/docker-compose.traefik.yml) file (also pasted below) has all the right CORS parameters set. It assumes you have an external network called `proxy`.

```yaml
version: "2.1"
services:
  couchdb:
    image: couchdb:latest

@@ -71,7 +71,6 @@ obsidian-livesync

You can edit `docker-compose.yml` referring to the following:
```yaml
version: "2.1"
services:
  couchdb:
    image: couchdb

||||
145
docs/to_issue_reporting.md
Normal file
@@ -0,0 +1,145 @@
# How to report an issue

Thank you for helping improve Self-hosted LiveSync!

This document explains how to collect the information needed for an issue report. Issues with sufficient information will be prioritised.

---

## Filled example

Here is an example of a well-filled report for reference.

### Abstract

The synchronisation hung up immediately after connecting.

### Expected behaviour

- Synchronisation ends with the message `Replication completed`
- Everything synchronised

### Actually happened

- Synchronisation was cancelled with the message `TypeError: Failed to fetch` (visible in the plug-in log around lines 10–12)
- No files synchronised

### Reproducing procedure

1. Configure LiveSync with the settings shown in the attached report.
2. Click the sync button on the ribbon.
3. Synchronisation begins.
4. About two or three seconds later, the error `TypeError: Failed to fetch` appears.
5. Replication stops. No files synchronised.

### Obsidian debug info (Device 1 — Windows desktop)

```
SYSTEM INFO:
Obsidian version: v1.2.8
Installer version: v1.1.15
Operating system: Windows 10 Pro 10.0.19044
Login status: logged in
Catalyst license: supporter
Insider build toggle: off
Community theme: Minimal v6.1.11
Snippets enabled: 3
Restricted mode: off
Plugins installed: 35
Plugins enabled: 11
1: Self-hosted LiveSync v0.19.4
...
```

### Report from LiveSync

```
----remote config----
cors:
credentials: "true"
...
---- Plug-in config ---
couchDB_URI: self-hosted
couchDB_USER: REDACTED
...
```

### Plug-in log

```
2023/5/24 10:50:33->HTTP:GET to:/ -> failed
2023/5/24 10:50:33->TypeError:Failed to fetch
2023/5/24 10:50:33->could not connect to https://example.com/ : your vault
(TypeError:Failed to fetch)
```

---

## How to collect each piece of information

### Obsidian debug info

Open the command palette (`Ctrl/Cmd + P`) and run **"Show debug info"**. Copy the output and paste it into the issue.

If multiple devices are involved in the problem (e.g., sync between a phone and a desktop), please provide the debug info for each device. The device where the issue occurred is required; information from other devices is strongly recommended.

### Report from LiveSync (hatch report)

1. Open LiveSync settings.
2. Go to the **Hatch** pane.
3. Press the **Make report** button.

The report will be copied to your clipboard. It contains your LiveSync configuration and the remote server configuration, with credentials automatically redacted.

**Tip:** For large reports, consider uploading to [GitHub Gist](https://gist.github.com/) and sharing the link instead of pasting directly into the issue. This makes it easier to manage, and if you accidentally leave sensitive data in, a Gist can be deleted.

If you paste directly, wrap it in a `<details>` tag to keep the issue readable:

````
<details>
<summary>Report from hatch</summary>

```
----remote config----
:
```
</details>
````

### Plug-in log

The plug-in log is volatile by default (not saved to disk) and shown only in the log dialogue, which can be opened by tapping the **document box icon** in the ribbon.

#### Enable verbose log

Before reproducing the issue, enable **Verbose Log** in LiveSync's **General Settings** pane. Without this, many diagnostic messages will be suppressed.

#### Persist the log to a file (optional)

If you need to capture a log across a restart, enable **"Write logs into the file"** in General Settings. Note that log files may contain sensitive information — use this option only for troubleshooting, and disable it afterwards.

As with the hatch report, consider uploading large logs to [GitHub Gist](https://gist.github.com/).

### Network log (for connection-related issues only)

If the issue is related to network connectivity (e.g., cannot connect to the server, authentication errors), a network log captured from browser DevTools can be very helpful. You do not need to include this for non-connection issues.

#### Opening DevTools

| Platform | Shortcut |
|----------|----------|
| Windows / Linux | `Ctrl + Shift + I` |
| macOS | `Cmd + Shift + I` |
| Android | Use [Chrome remote debugging](https://developer.chrome.com/docs/devtools/remote-debugging/) |
| iOS | Use [Safari Web Inspector](https://developer.apple.com/documentation/safari-developer-tools/inspecting-ios) on a Mac |

#### What to capture

1. Open the **Network** pane in DevTools.
2. Reproduce the issue.
3. Look for requests marked in red.
4. Capture screenshots of the **Headers**, **Payload**, and **Response** tabs for those requests.

**Important — redact before sharing:**
- Headers: conceal the request URL path, Remote Address, `authority`, and `authorization` values.
- Payload / Response: the `_id` field contains your file paths — redact if needed.

@@ -1,7 +1,7 @@
{
  "id": "obsidian-livesync",
  "name": "Self-hosted LiveSync",
  "version": "0.25.56+patched5",
  "version": "0.25.60",
  "minAppVersion": "0.9.12",
  "description": "Community implementation of self-hosted livesync. Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
  "author": "vorotamoroz",

81
package-lock.json
generated
@@ -1,12 +1,12 @@
{
  "name": "obsidian-livesync",
  "version": "0.25.56+patched5",
  "version": "0.25.60",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "obsidian-livesync",
      "version": "0.25.56+patched5",
      "version": "0.25.60",
      "license": "MIT",
      "dependencies": {
        "@aws-sdk/client-s3": "^3.808.0",
@@ -924,13 +924,13 @@
      }
    },
    "node_modules/@aws-sdk/xml-builder": {
      "version": "3.972.15",
      "resolved": "https://registry.npmjs.org/@aws-sdk/xml-builder/-/xml-builder-3.972.15.tgz",
      "integrity": "sha512-PxMRlCFNiQnke9YR29vjFQwz4jq+6Q04rOVFeTDR2K7Qpv9h9FOWOxG+zJjageimYbWqE3bTuLjmryWHAWbvaA==",
      "version": "3.972.19",
      "resolved": "https://registry.npmjs.org/@aws-sdk/xml-builder/-/xml-builder-3.972.19.tgz",
      "integrity": "sha512-Cw8IOMdBUEIl8ZlhRC3Dc/E64D5B5/8JhV6vhPLiPfJwcRC84S6F8aBOIi/N4vR9ZyA4I5Cc0Ateb/9EHaJXeQ==",
      "license": "Apache-2.0",
      "dependencies": {
        "@smithy/types": "^4.13.1",
        "fast-xml-parser": "5.5.8",
        "@smithy/types": "^4.14.1",
        "fast-xml-parser": "5.7.1",
        "tslib": "^2.6.2"
      },
      "engines": {
@@ -2668,6 +2668,18 @@
        "url": "https://paulmillr.com/funding/"
      }
    },
    "node_modules/@nodable/entities": {
      "version": "2.1.0",
      "resolved": "https://registry.npmjs.org/@nodable/entities/-/entities-2.1.0.tgz",
      "integrity": "sha512-nyT7T3nbMyBI/lvr6L5TyWbFJAI9FTgVRakNoBqCD+PmID8DzFrrNdLLtHMwMszOtqZa8PAOV24ZqDnQrhQINA==",
      "funding": [
        {
          "type": "github",
          "url": "https://github.com/sponsors/nodable"
        }
      ],
      "license": "MIT"
    },
    "node_modules/@nodelib/fs.scandir": {
      "version": "2.1.5",
      "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz",
@@ -3945,9 +3957,9 @@
      }
    },
    "node_modules/@smithy/types": {
      "version": "4.13.1",
      "resolved": "https://registry.npmjs.org/@smithy/types/-/types-4.13.1.tgz",
      "integrity": "sha512-787F3yzE2UiJIQ+wYW1CVg2odHjmaWLGksnKQHUrK/lYZSEcy1msuLVvxaR/sI2/aDe9U+TBuLsXnr3vod1g0g==",
      "version": "4.14.1",
      "resolved": "https://registry.npmjs.org/@smithy/types/-/types-4.14.1.tgz",
      "integrity": "sha512-59b5HtSVrVR/eYNei3BUj3DCPKD/G7EtDDe7OEJE7i7FtQFugYo6MxbotS8mVJkLNVf8gYaAlEBwwtJ9HzhWSg==",
      "license": "Apache-2.0",
      "dependencies": {
        "tslib": "^2.6.2"
@@ -6073,9 +6085,9 @@
      }
    },
    "node_modules/basic-ftp": {
      "version": "5.2.0",
      "resolved": "https://registry.npmjs.org/basic-ftp/-/basic-ftp-5.2.0.tgz",
      "integrity": "sha512-VoMINM2rqJwJgfdHq6RiUudKt2BV+FY5ZFezP/ypmwayk68+NzzAQy4XXLlqsGD4MCzq3DrmNFD/uUmBJuGoXw==",
      "version": "5.3.0",
      "resolved": "https://registry.npmjs.org/basic-ftp/-/basic-ftp-5.3.0.tgz",
      "integrity": "sha512-5K9eNNn7ywHPsYnFwjKgYH8Hf8B5emh7JKcPaVjjrMJFQQwGpwowEnZNEtHs7DfR7hCZsmaK3VA4HUK0YarT+w==",
      "dev": true,
      "license": "MIT",
      "engines": {
@@ -8155,9 +8167,9 @@
      "license": "MIT"
    },
    "node_modules/fast-xml-builder": {
      "version": "1.1.4",
      "resolved": "https://registry.npmjs.org/fast-xml-builder/-/fast-xml-builder-1.1.4.tgz",
      "integrity": "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg==",
      "version": "1.1.5",
      "resolved": "https://registry.npmjs.org/fast-xml-builder/-/fast-xml-builder-1.1.5.tgz",
      "integrity": "sha512-4TJn/8FKLeslLAH3dnohXqE3QSoxkhvaMzepOIZytwJXZO69Bfz0HBdDHzOTOon6G59Zrk6VQ2bEiv1t61rfkA==",
      "funding": [
        {
          "type": "github",
@@ -8170,9 +8182,9 @@
      }
    },
    "node_modules/fast-xml-parser": {
      "version": "5.5.8",
      "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.5.8.tgz",
      "integrity": "sha512-Z7Fh2nVQSb2d+poDViM063ix2ZGt9jmY1nWhPfHBOK2Hgnb/OW3P4Et3P/81SEej0J7QbWtJqxO05h8QYfK7LQ==",
      "version": "5.7.1",
      "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.7.1.tgz",
      "integrity": "sha512-8Cc3f8GUGUULg34pBch/KGyPLglS+OFs05deyOlY7fL2MTagYPKrVQNmR1fLF/yJ9PH5ZSTd3YDF6pnmeZU+zA==",
      "funding": [
        {
          "type": "github",
@@ -8181,9 +8193,10 @@
      ],
      "license": "MIT",
      "dependencies": {
        "fast-xml-builder": "^1.1.4",
        "path-expression-matcher": "^1.2.0",
        "strnum": "^2.2.0"
        "@nodable/entities": "^2.1.0",
        "fast-xml-builder": "^1.1.5",
        "path-expression-matcher": "^1.5.0",
        "strnum": "^2.2.3"
      },
      "bin": {
        "fxparser": "src/cli/cli.js"
@@ -11013,9 +11026,9 @@
      }
    },
    "node_modules/path-expression-matcher": {
      "version": "1.2.0",
      "resolved": "https://registry.npmjs.org/path-expression-matcher/-/path-expression-matcher-1.2.0.tgz",
      "integrity": "sha512-DwmPWeFn+tq7TiyJ2CxezCAirXjFxvaiD03npak3cRjlP9+OjTmSy1EpIrEbh+l6JgUundniloMLDQ/6VTdhLQ==",
      "version": "1.5.0",
      "resolved": "https://registry.npmjs.org/path-expression-matcher/-/path-expression-matcher-1.5.0.tgz",
      "integrity": "sha512-cbrerZV+6rvdQrrD+iGMcZFEiiSrbv9Tfdkvnusy6y0x0GKBXREFg/Y65GhIfm0tnLntThhzCnfKwp1WRjeCyQ==",
      "funding": [
        {
          "type": "github",
@@ -11238,9 +11251,9 @@
      }
    },
    "node_modules/postcss": {
      "version": "8.5.8",
      "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.8.tgz",
      "integrity": "sha512-OW/rX8O/jXnm82Ey1k44pObPtdblfiuWnrd8X7GJ7emImCOstunGbXUpp7HdBrFQX6rJzn3sPT397Wp5aCwCHg==",
      "version": "8.5.10",
      "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.10.tgz",
      "integrity": "sha512-pMMHxBOZKFU6HgAZ4eyGnwXF/EvPGGqUr0MnZ5+99485wwW41kW91A4LOGxSHhgugZmSChL5AlElNdwlNgcnLQ==",
      "dev": true,
      "funding": [
        {
@@ -12927,9 +12940,9 @@
      }
    },
    "node_modules/strnum": {
      "version": "2.2.2",
      "resolved": "https://registry.npmjs.org/strnum/-/strnum-2.2.2.tgz",
      "integrity": "sha512-DnR90I+jtXNSTXWdwrEy9FakW7UX+qUZg28gj5fk2vxxl7uS/3bpI4fjFYVmdK9etptYBPNkpahuQnEwhwECqA==",
      "version": "2.2.3",
      "resolved": "https://registry.npmjs.org/strnum/-/strnum-2.2.3.tgz",
      "integrity": "sha512-oKx6RUCuHfT3oyVjtnrmn19H1SiCqgJSg+54XqURKp5aCMbrXrhLjRN9TjuwMjiYstZ0MzDrHqkGZ5dFTKd+zg==",
      "funding": [
        {
          "type": "github",
@@ -14218,9 +14231,9 @@
      }
    },
    "node_modules/vite": {
      "version": "7.3.1",
      "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.1.tgz",
      "integrity": "sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==",
      "version": "7.3.2",
      "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.2.tgz",
      "integrity": "sha512-Bby3NOsna2jsjfLVOHKes8sGwgl4TT0E6vvpYgnAYDIF/tie7MRaFthmKuHx1NSXjiTueXH3do80FMQgvEktRg==",
      "dev": true,
      "license": "MIT",
      "peer": true,

@@ -1,6 +1,6 @@
{
  "name": "obsidian-livesync",
  "version": "0.25.56+patched5",
  "version": "0.25.60",
  "description": "Reflect your vault changes to some other devices immediately. Please make sure to disable other synchronize solutions to avoid content corruption or duplication.",
  "main": "main.js",
  "type": "module",

@@ -45,11 +45,73 @@ CLI Main
- Settings management (JSON file)
- Graceful shutdown handling

## Something I realised later that could lead to misunderstandings
## Usage

The term `vault` in this README refers to the directory containing your local database and settings file, not the actual files you want to sync. I will fix this later, but please bear this in mind for now.
The CLI operates on a **database directory** which contains PouchDB data and settings.

## Docker
> [!NOTE]
> `livesync-cli` is the alias for the CLI executable. Please replace it with the actual command of your installation (e.g. `npm run --silent cli --` or `docker run ...`).

```bash
livesync-cli [database-path] [command] [args...]
```

### Arguments

- `database-path`: Path to the directory where the `.livesync` folder and `settings.json` are (or will be) located.
  - Note: In previous versions, this was referred to as the "vault" path. Now it is clearly distinguished from the actual vault (the directory containing your `.md` files).

### Commands

- `sync`: Run one replication cycle with the remote CouchDB.
- `mirror [vault-path]`: Bidirectional sync between the local database and a local directory (**the actual vault**).
  - If `vault-path` is provided, the CLI will synchronise the database with files in the vault directory.
  - If `vault-path` is omitted, it defaults to `database-path` (compatibility mode).
  - Use this command to keep your local `.md` files in sync with the database.
- `ls [prefix]`: List files currently stored in the local database.
- `push <src> <dst>`: Push a local file `<src>` into the database at path `<dst>`.
- `pull <src> <dst>`: Pull a file `<src>` from the database into the local file `<dst>`.
- `cat <src>`: Read a file from the database and write it to stdout.
- `put <dst>`: Read from stdin and write to the database path `<dst>`.
- `init-settings [file]`: Create a default settings file.

### Examples

```bash
# Basic sync with remote
livesync-cli ./my-db sync

# Mirroring to your actual Obsidian vault
livesync-cli ./my-db mirror /path/to/obsidian-vault

# Manual file operations
livesync-cli ./my-db push ./note.md folder/note.md
livesync-cli ./my-db pull folder/note.md ./note.md
```

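The positional layout above (`[database-path] [command] [args...]`) can be sketched in a few lines. This is a hypothetical illustration of the argument split, not the actual CLI source; the function name `parseCliArgs` is ours.

```javascript
// Hypothetical sketch: split CLI arguments into the database path, the
// command, and its remaining arguments. `argv` excludes the runtime and
// script name (i.e. process.argv.slice(2)).
function parseCliArgs(argv) {
    const [databasePath, command, ...args] = argv;
    return { databasePath, command, args };
}
```

For example, `livesync-cli ./my-db push ./note.md folder/note.md` would yield `databasePath` `./my-db`, `command` `push`, and `args` `['./note.md', 'folder/note.md']`.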
## Installation
|
||||
|
||||
### Build from source
|
||||
|
||||
```bash
|
||||
# Install dependencies (ensure you are in repository root directory, not src/apps/cli)
|
||||
# due to shared dependencies with webapp and main library
|
||||
npm install
|
||||
# Build the project (ensure you are in `src/apps/cli` directory)
|
||||
npm run build
|
||||
```
|
||||
|
||||
Run the CLI:
|
||||
|
||||
```bash
|
||||
# Run with npm script (from repository root)
|
||||
npm run --silent cli -- [database-path] [command] [args...]
|
||||
# Run the built executable directly
|
||||
node src/apps/cli/dist/index.cjs [database-path] [command] [args...]
|
||||
```
|
||||
|
||||
### Docker
|
||||
|
||||
A Docker image is provided for headless / server deployments. Build from the repository root:
|
||||
|
||||
@@ -61,28 +123,28 @@ Run:
|
||||
|
||||
```bash
|
||||
# Sync with CouchDB
|
||||
docker run --rm -v /path/to/your/vault:/data livesync-cli sync
|
||||
docker run --rm -v /path/to/your/db:/data livesync-cli sync
|
||||
|
||||
# Mirror to a specific vault directory
|
||||
docker run --rm -v /path/to/your/db:/data -v /path/to/your/vault:/vault livesync-cli mirror /vault
|
||||
|
||||
# List files in the local database
|
||||
docker run --rm -v /path/to/your/vault:/data livesync-cli ls
|
||||
|
||||
# Generate a default settings file
|
||||
docker run --rm -v /path/to/your/vault:/data livesync-cli init-settings
|
||||
docker run --rm -v /path/to/your/db:/data livesync-cli ls
|
||||
```
|
||||
|
||||
The vault directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
|
||||
The database directory is mounted at `/data` by default. Override with `-e LIVESYNC_DB_PATH=/other/path`.
|
||||
|
||||
### P2P (WebRTC) and Docker networking
|
||||
#### P2P (WebRTC) and Docker networking
|
||||
|
||||
The P2P replicator (`p2p-host`, `p2p-sync`, `p2p-peers`) uses WebRTC and generates
|
||||
three kinds of ICE candidates. The default Docker bridge network affects which
|
||||
candidates are usable:
|
||||
|
||||
| Candidate type | Description | Bridge network |
|
||||
|---|---|---|
|
||||
| `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
|
||||
| `srflx` | Host public IP via STUN reflection | Works over the internet |
|
||||
| `relay` | Traffic relayed via TURN server | Always reachable |
|
||||
| Candidate type | Description | Bridge network |
|
||||
| -------------- | ---------------------------------- | -------------------------- |
|
||||
| `host` | Container bridge IP (`172.17.x.x`) | Unreachable from LAN peers |
|
||||
| `srflx` | Host public IP via STUN reflection | Works over the internet |
|
||||
| `relay` | Traffic relayed via TURN server | Always reachable |
|
||||
|
||||

**LAN P2P on Linux** — use `--network host` so that the real host IP is
advertised as the `host` candidate:

```bash
docker run --rm --network host -v /path/to/your/vault:/data livesync-cli p2p-host
```

Note: if you want to use the `livesync-cli` alias for P2P commands, add `--network host` to the alias as well.

> `--network host` is not available on Docker Desktop for macOS or Windows.

**LAN P2P on macOS / Windows Docker Desktop** — configure a TURN server in the settings.

**CouchDB sync only (no P2P)** — no special network configuration is required.

## Installation

### Adding `livesync-cli` alias

To use the `livesync-cli` command globally, add an alias to your shell configuration file (e.g., `.zshrc` or `.bashrc`).

If you are using `npm run`, add the following lines:

```bash
# Install dependencies (ensure you are in the repository root directory, not src/apps/cli,
# due to shared dependencies with the webapp and main library)
npm install
# Build the project (ensure you are in the `src/apps/cli` directory)
npm run build
alias livesync-cli='npm run --silent --prefix /path/to/repository/src/apps/cli cli --'
# or
alias livesync-cli="npm run --silent --prefix $PWD cli --"
```

Alternatively, if you want to use the built executable directly:

```bash
alias livesync-cli='node /path/to/repository/src/apps/cli/dist/index.cjs'
# or
alias livesync-cli="node $PWD/dist/index.cjs"
```

If you prefer using Docker:

```bash
alias livesync-cli='docker run --rm -v /path/to/your/db:/data livesync-cli'
```

After adding the alias, restart your shell or run `source ~/.zshrc` (or `source ~/.bashrc`).

## Usage

### Basic Usage

The CLI is designed to be used in a headless environment.

```bash
# Sync local database with CouchDB (no files will be changed).
livesync-cli /path/to/your-local-database --settings /path/to/settings.json sync

# Push files to local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json push /your/storage/file.md /vault/path/file.md

# Pull files from local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull /vault/path/file.md /your/storage/file.md

# Verbose logging
livesync-cli /path/to/your-local-database --settings /path/to/settings.json --verbose

# Apply setup URI to settings file (settings only; does not run synchronisation)
livesync-cli /path/to/your-local-database --settings /path/to/settings.json setup "obsidian://setuplivesync?settings=..."

# Put text from stdin into local database
echo "Hello from stdin" | livesync-cli /path/to/your-local-database --settings /path/to/settings.json put /vault/path/file.md

# Output a file from local database to stdout
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat /vault/path/file.md

# Output a specific revision of a file from local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json cat-rev /vault/path/file.md 3-abcdef

# Pull a specific revision of a file from local database to local storage
livesync-cli /path/to/your-local-database --settings /path/to/settings.json pull-rev /vault/path/file.md /your/storage/file.old.md 3-abcdef

# List files in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json ls /vault/path/

# Show metadata for a file in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json info /vault/path/file.md

# Mark a file as deleted in local database
livesync-cli /path/to/your-local-database --settings /path/to/settings.json rm /vault/path/file.md

# Resolve conflict by keeping a specific revision
livesync-cli /path/to/your-local-database --settings /path/to/settings.json resolve /vault/path/file.md 3-abcdef
```

### Configuration

The CLI uses the same settings format as the Obsidian plugin.

```
Usage:
  livesync-cli <database-path> [options] <command> [command-args]
  livesync-cli init-settings [path]

Arguments:
  database-path                     Path to the local database directory (required except for init-settings)

Options:
  --settings, -s <path>             Path to settings file (default: .livesync/settings.json in local database directory)
  --force, -f                       Overwrite existing file on init-settings
  --verbose, -v                     Enable verbose logging
  --debug, -d                       Enable debug logging (includes verbose)
  --help, -h                        Show help message

Commands:
  init-settings [path]              Create settings JSON from DEFAULT_SETTINGS
  p2p-host                          Start P2P host mode and wait until interrupted (Ctrl+C)
  push <src> <dst>                  Push local file <src> into local database path <dst>
  pull <src> <dst>                  Pull file <src> from local database into local file <dst>
  pull-rev <src> <dst> <rev>        Pull specific revision <rev> into local file <dst>
  setup <setupURI>                  Apply setup URI to settings file
  put <dst>                         Read text from standard input and write to local database path <dst>
  cat <src>                         Write latest file content from local database to standard output
  cat-rev <src> <rev>               Write specific revision <rev> content from local database to standard output
  ls [prefix]                       List files as path<TAB>size<TAB>mtime<TAB>revision[*]
  info <path>                       Show file metadata including current and past revisions, conflicts, and chunk list
  rm <path>                         Mark file as deleted in local database
  resolve <path> <rev>              Resolve conflict by keeping the specified revision
  mirror [vaultPath]                Mirror database contents to the local file system (vaultPath defaults to database-path)
```
Run via npm script:

5. **Categorisation and synchronisation** — The union of both file sets is split into three groups and processed concurrently (up to 10 files at a time):

| Group                         | Condition                    | Action                                                                                                                                                                                                                                               |
| ----------------------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **UPDATE DATABASE**           | File exists in storage only  | Store the file into the local database.                                                                                                                                                                                                              |
| **UPDATE STORAGE**            | File exists in database only | If the entry is active (not deleted) and not conflicted, restore the file from the database to storage. Deleted entries and conflicted entries are skipped.                                                                                           |
| **SYNC DATABASE AND STORAGE** | File exists in both          | Compare `mtime` freshness. If storage is newer → write to database (`STORAGE → DB`). If database is newer → restore to storage (`STORAGE ← DB`). If equal → do nothing. Conflicted documents and files exceeding the size limit are always skipped. |

6. **Initialisation flag** — On the very first successful run, writes `initialized = true` to the key-value database so that subsequent runs can restore state in step 2.
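
The three-way grouping in step 5 can be sketched as a small pure function. This is an illustrative model only: the type and names below are not from the actual implementation, and the conflict and size-limit checks are omitted for brevity.

```typescript
// Illustrative model of the step-5 categorisation; not the actual implementation.
// Conflicted documents and over-size files (always skipped) are left out for brevity.
type FileState = { inStorage: boolean; inDb: boolean; storageMtime?: number; dbMtime?: number };
type Action = "STORAGE -> DB" | "STORAGE <- DB" | "SKIP";

function categorise(f: FileState): Action {
    if (f.inStorage && !f.inDb) return "STORAGE -> DB"; // UPDATE DATABASE
    if (!f.inStorage && f.inDb) return "STORAGE <- DB"; // UPDATE STORAGE
    // SYNC DATABASE AND STORAGE: compare mtime freshness
    const storage = f.storageMtime ?? 0;
    const db = f.dbMtime ?? 0;
    if (storage > db) return "STORAGE -> DB";
    if (storage < db) return "STORAGE <- DB";
    return "SKIP"; // equal mtime: do nothing
}
```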

Note: `mirror` does not respect file deletions.

Create default settings, apply a setup URI, then run one sync cycle.

```bash
livesync-cli init-settings /data/livesync-settings.json
printf '%s\n' "$SETUP_PASSPHRASE" | livesync-cli /data/vault --settings /data/livesync-settings.json setup "$SETUP_URI"
livesync-cli /data/vault --settings /data/livesync-settings.json sync
```

### 2. Scripted import and export

Push local files into the database from automation, and pull them back for export or backup.

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json push ./note.md notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json pull notes/note.md ./exports/note.md
```

### 3. Revision inspection and restore

List metadata, find an older revision, then restore it by content (`cat-rev`) or file output (`pull-rev`).

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json cat-rev notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json pull-rev notes/note.md ./restore/note.old.md 3-abcdef
```

### 4. Conflict and cleanup workflow

Inspect conflicted revisions, resolve by keeping one revision, then delete obsolete files.

```bash
livesync-cli /data/vault --settings /data/livesync-settings.json info notes/note.md
livesync-cli /data/vault --settings /data/livesync-settings.json resolve notes/note.md 3-abcdef
livesync-cli /data/vault --settings /data/livesync-settings.json rm notes/obsolete.md
```

### 5. CI smoke test for content round-trip

Validate that `put`/`cat` behaves as expected in a pipeline.

```bash
echo "hello-ci" | livesync-cli /data/vault --settings /data/livesync-settings.json put ci/test.md
livesync-cli /data/vault --settings /data/livesync-settings.json cat ci/test.md
```

## Development

```diff
@@ -5,13 +5,13 @@ import { configURIBase } from "@lib/common/models/shared.const";
 import { DEFAULT_SETTINGS, type FilePathWithPrefix, type ObsidianLiveSyncSettings } from "@lib/common/types";
 import { stripAllPrefixes } from "@lib/string_and_binary/path";
 import type { CLICommandContext, CLIOptions } from "./types";
-import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toVaultRelativePath } from "./utils";
+import { promptForPassphrase, readStdinAsUtf8, toArrayBuffer, toDatabaseRelativePath } from "./utils";
 import { collectPeers, openP2PHost, parseTimeoutSeconds, syncWithPeer } from "./p2p";
 import { performFullScan } from "@lib/serviceFeatures/offlineScanner";
 import { UnresolvedErrorManager } from "@lib/services/base/UnresolvedErrorManager";

 export async function runCommand(options: CLIOptions, context: CLICommandContext): Promise<boolean> {
-    const { vaultPath, core, settingsPath } = context;
+    const { databasePath, core, settingsPath } = context;

     await core.services.control.activated;
     if (options.command === "daemon") {
```

```diff
@@ -77,16 +77,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
             throw new Error("push requires two arguments: <src> <dst>");
         }
         const sourcePath = path.resolve(options.commandArgs[0]);
-        const destinationVaultPath = toVaultRelativePath(options.commandArgs[1], vaultPath);
+        const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[1], databasePath);
         const sourceData = await fs.readFile(sourcePath);
         const sourceStat = await fs.stat(sourcePath);
-        console.log(`[Command] push ${sourcePath} -> ${destinationVaultPath}`);
+        console.log(`[Command] push ${sourcePath} -> ${destinationDatabasePath}`);

-        await core.serviceModules.storageAccess.writeFileAuto(destinationVaultPath, toArrayBuffer(sourceData), {
+        await core.serviceModules.storageAccess.writeFileAuto(destinationDatabasePath, toArrayBuffer(sourceData), {
             mtime: sourceStat.mtimeMs,
             ctime: sourceStat.ctimeMs,
         });
-        const destinationPathWithPrefix = destinationVaultPath as FilePathWithPrefix;
+        const destinationPathWithPrefix = destinationDatabasePath as FilePathWithPrefix;
         const stored = await core.serviceModules.fileHandler.storeFileToDB(destinationPathWithPrefix, true);
         return stored;
     }
```
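
The `push` path converts the Node `Buffer` to an `ArrayBuffer` via `toArrayBuffer` (shown in the utils diff further down). A Buffer can be a view into a shared allocation pool, so taking `.buffer` directly could expose unrelated bytes; slicing by `byteOffset`/`byteLength` avoids that. A self-contained sketch:

```typescript
// Same shape as the toArrayBuffer helper in utils: copy exactly this view's bytes.
function toArrayBuffer(data: Buffer): ArrayBuffer {
    return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
}

// A subarray shares the parent allocation but has a non-zero byteOffset.
const view = Buffer.from("xxabcxx").subarray(2, 5);
console.log(Buffer.from(toArrayBuffer(view)).toString("utf-8")); // "abc"
```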

```diff
@@ -95,16 +95,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 2) {
             throw new Error("pull requires two arguments: <src> <dst>");
         }
-        const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
         const destinationPath = path.resolve(options.commandArgs[1]);
-        console.log(`[Command] pull ${sourceVaultPath} -> ${destinationPath}`);
+        console.log(`[Command] pull ${sourceDatabasePath} -> ${destinationPath}`);

-        const sourcePathWithPrefix = sourceVaultPath as FilePathWithPrefix;
+        const sourcePathWithPrefix = sourceDatabasePath as FilePathWithPrefix;
         const restored = await core.serviceModules.fileHandler.dbToStorage(sourcePathWithPrefix, null, true);
         if (!restored) {
             return false;
         }
-        const data = await core.serviceModules.storageAccess.readFileAuto(sourceVaultPath);
+        const data = await core.serviceModules.storageAccess.readFileAuto(sourceDatabasePath);
         await fs.mkdir(path.dirname(destinationPath), { recursive: true });
         if (typeof data === "string") {
             await fs.writeFile(destinationPath, data, "utf-8");
```

```diff
@@ -118,16 +118,16 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 3) {
             throw new Error("pull-rev requires three arguments: <src> <dst> <rev>");
         }
-        const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
         const destinationPath = path.resolve(options.commandArgs[1]);
         const rev = options.commandArgs[2].trim();
         if (!rev) {
             throw new Error("pull-rev requires a non-empty revision");
         }
-        console.log(`[Command] pull-rev ${sourceVaultPath}@${rev} -> ${destinationPath}`);
+        console.log(`[Command] pull-rev ${sourceDatabasePath}@${rev} -> ${destinationPath}`);

         const source = await core.serviceModules.databaseFileAccess.fetch(
-            sourceVaultPath as FilePathWithPrefix,
+            sourceDatabasePath as FilePathWithPrefix,
             rev,
             true
         );
```

```diff
@@ -175,11 +175,11 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 1) {
             throw new Error("put requires one argument: <dst>");
         }
-        const destinationVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+        const destinationDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
         const content = await readStdinAsUtf8();
-        console.log(`[Command] put stdin -> ${destinationVaultPath}`);
+        console.log(`[Command] put stdin -> ${destinationDatabasePath}`);
         return await core.serviceModules.databaseFileAccess.storeContent(
-            destinationVaultPath as FilePathWithPrefix,
+            destinationDatabasePath as FilePathWithPrefix,
             content
         );
     }
```

```diff
@@ -188,10 +188,10 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 1) {
             throw new Error("cat requires one argument: <src>");
         }
-        const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
-        console.error(`[Command] cat ${sourceVaultPath}`);
+        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
+        console.error(`[Command] cat ${sourceDatabasePath}`);
         const source = await core.serviceModules.databaseFileAccess.fetch(
-            sourceVaultPath as FilePathWithPrefix,
+            sourceDatabasePath as FilePathWithPrefix,
             undefined,
             true
         );
```

```diff
@@ -212,14 +212,14 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 2) {
             throw new Error("cat-rev requires two arguments: <src> <rev>");
         }
-        const sourceVaultPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+        const sourceDatabasePath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
         const rev = options.commandArgs[1].trim();
         if (!rev) {
             throw new Error("cat-rev requires a non-empty revision");
         }
-        console.error(`[Command] cat-rev ${sourceVaultPath} @ ${rev}`);
+        console.error(`[Command] cat-rev ${sourceDatabasePath} @ ${rev}`);
         const source = await core.serviceModules.databaseFileAccess.fetch(
-            sourceVaultPath as FilePathWithPrefix,
+            sourceDatabasePath as FilePathWithPrefix,
             rev,
             true
         );
```

```diff
@@ -239,7 +239,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
     if (options.command === "ls") {
         const prefix =
             options.commandArgs.length > 0 && options.commandArgs[0].trim() !== ""
-                ? toVaultRelativePath(options.commandArgs[0], vaultPath)
+                ? toDatabaseRelativePath(options.commandArgs[0], databasePath)
                 : "";
         const rows: { path: string; line: string }[] = [];

```

```diff
@@ -261,6 +261,8 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         rows.sort((a, b) => a.path.localeCompare(b.path));
         if (rows.length > 0) {
             process.stdout.write(rows.map((e) => e.line).join("\n") + "\n");
+        } else {
+            process.stderr.write("[Info] No documents found in the local database.\n");
         }
         return true;
     }
```
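
Each `ls` row is tab-separated (`path<TAB>size<TAB>mtime<TAB>revision[*]`, with `*` marking a conflicted document). A standalone sketch of such a row builder; the field formatting below is an illustrative assumption, not the actual code:

```typescript
// Illustrative row builder for the ls output format; not the actual implementation.
function formatRow(filePath: string, size: number, mtimeMs: number, rev: string, conflicted: boolean): string {
    const mtime = new Date(mtimeMs).toISOString();
    return [filePath, String(size), mtime, rev + (conflicted ? "*" : "")].join("\t");
}

console.log(formatRow("notes/hello.md", 1024, 0, "3-abcdef", true));
// notes/hello.md	1024	1970-01-01T00:00:00.000Z	3-abcdef*
```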

```diff
@@ -269,7 +271,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 1) {
             throw new Error("info requires one argument: <path>");
         }
-        const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+        const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);

         for await (const doc of core.services.database.localDatabase.findAllNormalDocs({ conflicts: true })) {
             if (doc._deleted || doc.deleted) continue;
```

```diff
@@ -313,7 +315,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 1) {
             throw new Error("rm requires one argument: <path>");
         }
-        const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath);
+        const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath);
         console.error(`[Command] rm ${targetPath}`);
         return await core.serviceModules.databaseFileAccess.delete(targetPath as FilePathWithPrefix);
     }
```

```diff
@@ -322,7 +324,7 @@ export async function runCommand(options: CLIOptions, context: CLICommandContext
         if (options.commandArgs.length < 2) {
             throw new Error("resolve requires two arguments: <path> <revision-to-keep>");
         }
-        const targetPath = toVaultRelativePath(options.commandArgs[0], vaultPath) as FilePathWithPrefix;
+        const targetPath = toDatabaseRelativePath(options.commandArgs[0], databasePath) as FilePathWithPrefix;
         const revisionToKeep = options.commandArgs[1].trim();
         if (revisionToKeep === "") {
             throw new Error("resolve requires a non-empty revision-to-keep");
```

```diff
@@ -58,7 +58,7 @@ async function createSetupURI(passphrase: string): Promise<string> {

 describe("runCommand abnormal cases", () => {
     const context = {
-        vaultPath: "/tmp/vault",
+        databasePath: "/tmp/vault",
         settingsPath: "/tmp/vault/.livesync/settings.json",
     } as any;
```

```diff
@@ -32,7 +32,7 @@ export interface CLIOptions {
 }

 export interface CLICommandContext {
-    vaultPath: string;
+    databasePath: string;
     core: LiveSyncBaseCore<ServiceContext, any>;
     settingsPath: string;
 }
```

```diff
@@ -5,19 +5,19 @@ export function toArrayBuffer(data: Buffer): ArrayBuffer {
     return data.buffer.slice(data.byteOffset, data.byteOffset + data.byteLength) as ArrayBuffer;
 }

-export function toVaultRelativePath(inputPath: string, vaultPath: string): string {
+export function toDatabaseRelativePath(inputPath: string, databasePath: string): string {
     const stripped = inputPath.replace(/^[/\\]+/, "");
     if (!path.isAbsolute(inputPath)) {
         const normalized = stripped.replace(/\\/g, "/");
-        const resolved = path.resolve(vaultPath, normalized);
-        const rel = path.relative(vaultPath, resolved);
+        const resolved = path.resolve(databasePath, normalized);
+        const rel = path.relative(databasePath, resolved);
         if (rel.startsWith("..") || path.isAbsolute(rel)) {
             throw new Error(`Path ${inputPath} is outside of the local database directory`);
         }
         return rel.replace(/\\/g, "/");
     }
     const resolved = path.resolve(inputPath);
-    const rel = path.relative(vaultPath, resolved);
+    const rel = path.relative(databasePath, resolved);
     if (rel.startsWith("..") || path.isAbsolute(rel)) {
         throw new Error(`Path ${inputPath} is outside of the local database directory`);
     }
```
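
The renamed helper rejects anything that escapes the database directory and normalises path separators. A standalone re-implementation for illustration; the final `return` of the absolute-path branch is elided from the hunk above, so it is assumed here:

```typescript
import * as path from "path";

// Standalone sketch mirroring toDatabaseRelativePath from the diff above.
// The final return of the absolute-path branch is an assumption (elided from the hunk).
function toDbRelative(inputPath: string, databasePath: string): string {
    const stripped = inputPath.replace(/^[/\\]+/, "");
    if (!path.isAbsolute(inputPath)) {
        const normalized = stripped.replace(/\\/g, "/");
        const resolved = path.resolve(databasePath, normalized);
        const rel = path.relative(databasePath, resolved);
        if (rel.startsWith("..") || path.isAbsolute(rel)) {
            throw new Error(`Path ${inputPath} is outside of the local database directory`);
        }
        return rel.replace(/\\/g, "/");
    }
    const resolved = path.resolve(inputPath);
    const rel = path.relative(databasePath, resolved);
    if (rel.startsWith("..") || path.isAbsolute(rel)) {
        throw new Error(`Path ${inputPath} is outside of the local database directory`);
    }
    return rel.replace(/\\/g, "/");
}

console.log(toDbRelative("notes\\daily\\note.md", "/tmp/db")); // "notes/daily/note.md" on POSIX
```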

```diff
@@ -25,15 +25,15 @@ export function toVaultRelativePath(inputPath: string, vaultPath: string): strin
 }

 export async function readStdinAsUtf8(): Promise<string> {
-    const chunks: Buffer[] = [];
+    const chunks = [];
     for await (const chunk of process.stdin) {
         if (typeof chunk === "string") {
             chunks.push(Buffer.from(chunk, "utf-8"));
         } else {
-            chunks.push(chunk);
+            chunks.push(chunk as Buffer);
         }
     }
-    return Buffer.concat(chunks).toString("utf-8");
+    return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
 }

 export async function promptForPassphrase(prompt = "Enter setup URI passphrase: "): Promise<string> {
```
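
The chunk-collection pattern in `readStdinAsUtf8` works for any async iterable, so factoring the loop out makes it easy to exercise without real stdin. A sketch, not the exported function:

```typescript
// Same collection pattern as readStdinAsUtf8, generalised to any async iterable source.
async function collectUtf8(source: AsyncIterable<Buffer | string>): Promise<string> {
    const chunks: Buffer[] = [];
    for await (const chunk of source) {
        // stdin may deliver strings (when an encoding is set) or Buffers; normalise to Buffer.
        chunks.push(typeof chunk === "string" ? Buffer.from(chunk, "utf-8") : chunk);
    }
    return Buffer.concat(chunks as Uint8Array[]).toString("utf-8");
}
```

With `process.stdin` as the source this behaves like the function in the diff.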

```diff
@@ -1,29 +1,33 @@
 import * as path from "path";
 import { describe, expect, it } from "vitest";
-import { toVaultRelativePath } from "./utils";
+import { toDatabaseRelativePath } from "./utils";

-describe("toVaultRelativePath", () => {
-    const vaultPath = path.resolve("/tmp/livesync-vault");
+describe("toDatabaseRelativePath", () => {
+    const databasePath = path.resolve("/tmp/livesync-vault");

     it("rejects absolute paths outside vault", () => {
-        expect(() => toVaultRelativePath("/etc/passwd", vaultPath)).toThrow("outside of the local database directory");
+        expect(() => toDatabaseRelativePath("/etc/passwd", databasePath)).toThrow(
+            "outside of the local database directory"
+        );
     });

     it("normalizes leading slash for absolute path inside vault", () => {
-        const absoluteInsideVault = path.join(vaultPath, "notes", "foo.md");
-        expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("notes/foo.md");
+        const absoluteInsideVault = path.join(databasePath, "notes", "foo.md");
+        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("notes/foo.md");
     });

     it("normalizes Windows-style separators", () => {
-        expect(toVaultRelativePath("notes\\daily\\2026-03-12.md", vaultPath)).toBe("notes/daily/2026-03-12.md");
+        expect(toDatabaseRelativePath("notes\\daily\\2026-03-12.md", databasePath)).toBe("notes/daily/2026-03-12.md");
     });

     it("returns vault-relative path for another absolute path inside vault", () => {
-        const absoluteInsideVault = path.join(vaultPath, "docs", "inside.md");
-        expect(toVaultRelativePath(absoluteInsideVault, vaultPath)).toBe("docs/inside.md");
+        const absoluteInsideVault = path.join(databasePath, "docs", "inside.md");
+        expect(toDatabaseRelativePath(absoluteInsideVault, databasePath)).toBe("docs/inside.md");
     });

     it("rejects relative path traversal that escapes vault", () => {
-        expect(() => toVaultRelativePath("../escape.md", vaultPath)).toThrow("outside of the local database directory");
+        expect(() => toDatabaseRelativePath("../escape.md", databasePath)).toThrow(
+            "outside of the local database directory"
+        );
     });
 });
```

```diff
@@ -36,14 +36,15 @@ function printHelp(): void {
 Self-hosted LiveSync CLI

 Usage:
-  livesync-cli [database-path] [options] [command] [command-args]
+  livesync-cli <database-path> [options] <command> [command-args]
   livesync-cli init-settings [path]

 Arguments:
-  database-path         Path to the local database directory (required)
+  database-path         Path to the local database directory

 Commands:
   sync                  Run one replication cycle and exit
-  p2p-peers <timeout>   Show discovered peers as [peer]<TAB><peer-id><TAB><peer-name>
+  p2p-peers <timeout>   Show discovered peers as [peer]\t<peer-id>\t<peer-name>
   p2p-sync <peer> <timeout>
                         Sync with the specified peer-id or peer-name
   p2p-host              Start P2P host mode and wait until interrupted
@@ -54,28 +55,29 @@ Commands:
   put <dst>             Read UTF-8 content from stdin and write to local database path <dst>
   cat <src>             Read file <src> from local database and write to stdout
   cat-rev <src> <rev>   Read file <src> at specific revision <rev> and write to stdout
-  ls [prefix]           List DB files as path<TAB>size<TAB>mtime<TAB>revision[*]
+  ls [prefix]           List DB files as path\tsize\tmtime\trevision[*]
   info <path>           Show detailed metadata for a file (ID, revision, conflicts, chunks)
   rm <path>             Mark a file as deleted in local database
   resolve <path> <rev>  Resolve conflicts by keeping <rev> and deleting others
+  mirror [vault-path]   Mirror database contents to the local file system (vault-path defaults to database-path)

 Examples:
   livesync-cli ./my-database sync
   livesync-cli ./my-database p2p-peers 5
   livesync-cli ./my-database p2p-sync my-peer-name 15
   livesync-cli ./my-database p2p-host
   livesync-cli ./my-database --settings ./custom-settings.json push ./note.md folder/note.md
   livesync-cli ./my-database pull folder/note.md ./exports/note.md
   livesync-cli ./my-database pull-rev folder/note.md ./exports/note.old.md 3-abcdef
   livesync-cli ./my-database setup "obsidian://setuplivesync?settings=..."
   echo "Hello" | livesync-cli ./my-database put notes/hello.md
   livesync-cli ./my-database cat notes/hello.md
   livesync-cli ./my-database cat-rev notes/hello.md 3-abcdef
   livesync-cli ./my-database ls notes/
   livesync-cli ./my-database info notes/hello.md
   livesync-cli ./my-database rm notes/hello.md
   livesync-cli ./my-database resolve notes/hello.md 3-abcdef
   livesync-cli init-settings ./data.json
   livesync-cli ./my-database --verbose
 `);
 }
```
@@ -112,6 +114,7 @@ export function parseArgs(): CLIOptions {
            case "-d":
                // debugging automatically enables verbose logging, as it is intended for debugging issues.
                debug = true;
            // falls through
            case "--verbose":
            case "-v":
                verbose = true;
@@ -220,34 +223,34 @@ export async function main() {
        return;
    }

    // Resolve vault path
    const vaultPath = path.resolve(options.databasePath!);
    // Check if vault directory exists
    // Resolve database path
    const databasePath = path.resolve(options.databasePath!);
    // Check if database directory exists
    try {
        const stat = await fs.stat(vaultPath);
        const stat = await fs.stat(databasePath);
        if (!stat.isDirectory()) {
            console.error(`Error: ${vaultPath} is not a directory`);
            console.error(`Error: ${databasePath} is not a directory`);
            process.exit(1);
        }
    } catch (error) {
        console.error(`Error: Vault directory ${vaultPath} does not exist`);
        console.error(`Error: Database directory ${databasePath} does not exist`);
        process.exit(1);
    }

    // Resolve settings path
    const settingsPath = options.settingsPath
        ? path.resolve(options.settingsPath)
        : path.join(vaultPath, SETTINGS_FILE);
    configureNodeLocalStorage(path.join(vaultPath, ".livesync", "runtime", "local-storage.json"));
        : path.join(databasePath, SETTINGS_FILE);
    configureNodeLocalStorage(path.join(databasePath, ".livesync", "runtime", "local-storage.json"));

    infoLog(`Self-hosted LiveSync CLI`);
    infoLog(`Vault: ${vaultPath}`);
    infoLog(`Database Path: ${databasePath}`);
    infoLog(`Settings: ${settingsPath}`);
    infoLog("");

    // Create service context and hub
    const context = new NodeServiceContext(vaultPath);
    const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(vaultPath, context);
    const context = new NodeServiceContext(databasePath);
    const serviceHubInstance = new NodeServiceHub<NodeServiceContext>(databasePath, context);
    serviceHubInstance.API.addLog.setHandler((message: string, level: LOG_LEVEL) => {
        let levelStr = "";
        switch (level) {
@@ -321,7 +324,11 @@ export async function main() {
    const core = new LiveSyncBaseCore(
        serviceHubInstance,
        (core: LiveSyncBaseCore<NodeServiceContext, any>, serviceHub: InjectableServiceHub<NodeServiceContext>) => {
            return initialiseServiceModulesCLI(vaultPath, core, serviceHub);
            const mirrorVaultPath =
                options.command === "mirror" && options.commandArgs[0]
                    ? path.resolve(options.commandArgs[0])
                    : databasePath;
            return initialiseServiceModulesCLI(mirrorVaultPath, core, serviceHub);
        },
        (core) => [
            // No modules need to be registered for P2P replication in CLI. Directly using Replicators in p2p.ts

@@ -331,8 +338,8 @@ export async function main() {
        (core) => {
            // Add target filter to prevent internal files from being handled
            core.services.vault.isTargetFile.addHandler(async (target) => {
                const vaultPath = stripAllPrefixes(getPathFromUXFileInfo(target));
                const parts = vaultPath.split(path.sep);
                const targetPath = stripAllPrefixes(getPathFromUXFileInfo(target));
                const parts = targetPath.split(path.sep);
                // If any part of the path starts with a dot, treat it as an internal file and ignore it.
                if (parts.some((part) => part.startsWith("."))) {
                    return await Promise.resolve(false);
@@ -393,7 +400,7 @@ export async function main() {
        infoLog("");
    }

    const result = await runCommand(options, { vaultPath, core, settingsPath });
    const result = await runCommand(options, { databasePath, core, settingsPath });
    if (!result) {
        console.error(`[Error] Command '${options.command}' failed`);
        process.exitCode = 1;
@@ -17,7 +17,7 @@ describe("CLI parseArgs", () => {
    });

    it("exits 1 when --settings has no value", () => {
        process.argv = ["node", "livesync-cli", "./vault", "--settings"];
        process.argv = ["node", "livesync-cli", "./databasePath", "--settings"];
        const exitMock = mockProcessExit();
        const stderr = vi.spyOn(console, "error").mockImplementation(() => {});

@@ -37,7 +37,7 @@ describe("CLI parseArgs", () => {
    });

    it("exits 1 for unknown command after database-path", () => {
        process.argv = ["node", "livesync-cli", "./vault", "unknown-cmd"];
        process.argv = ["node", "livesync-cli", "./databasePath", "unknown-cmd"];
        const exitMock = mockProcessExit();
        const stderr = vi.spyOn(console, "error").mockImplementation(() => {});

@@ -60,28 +60,28 @@ describe("CLI parseArgs", () => {
    });

    it("parses p2p-peers command and timeout", () => {
        process.argv = ["node", "livesync-cli", "./vault", "p2p-peers", "5"];
        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-peers", "5"];
        const parsed = parseArgs();

        expect(parsed.databasePath).toBe("./vault");
        expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-peers");
        expect(parsed.commandArgs).toEqual(["5"]);
    });

    it("parses p2p-sync command with peer and timeout", () => {
        process.argv = ["node", "livesync-cli", "./vault", "p2p-sync", "peer-1", "12"];
        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-sync", "peer-1", "12"];
        const parsed = parseArgs();

        expect(parsed.databasePath).toBe("./vault");
        expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-sync");
        expect(parsed.commandArgs).toEqual(["peer-1", "12"]);
    });

    it("parses p2p-host command", () => {
        process.argv = ["node", "livesync-cli", "./vault", "p2p-host"];
        process.argv = ["node", "livesync-cli", "./databasePath", "p2p-host"];
        const parsed = parseArgs();

        expect(parsed.databasePath).toBe("./vault");
        expect(parsed.databasePath).toBe("./databasePath");
        expect(parsed.command).toBe("p2p-host");
        expect(parsed.commandArgs).toEqual([]);
    });
@@ -27,10 +27,10 @@ import { DatabaseService } from "@lib/services/base/DatabaseService";
import type { ObsidianLiveSyncSettings } from "@/lib/src/common/types";

export class NodeServiceContext extends ServiceContext {
    vaultPath: string;
    constructor(vaultPath: string) {
    databasePath: string;
    constructor(databasePath: string) {
        super();
        this.vaultPath = vaultPath;
        this.databasePath = databasePath;
    }
}

@@ -64,7 +64,7 @@ class NodeDatabaseService<T extends NodeServiceContext> extends DatabaseService<
    ): { name: string; options: PouchDB.Configuration.DatabaseConfiguration } {
        const optionPass = {
            ...options,
            prefix: this.context.vaultPath + nodePath.sep,
            prefix: this.context.databasePath + nodePath.sep,
        };
        const passSettings = { ...settings, useIndexedDBAdapter: false };
        return super.modifyDatabaseOptions(passSettings, name, optionPass);
49
src/apps/cli/test/repro-issue-860.sh
Executable file
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
CLI_DIR="$(cd -- "$SCRIPT_DIR/.." && pwd)"
cd "$CLI_DIR"
source "$SCRIPT_DIR/test-helpers.sh"

display_test_info "Test for Issue #860: Empty output from ls and mirror"

RUN_BUILD="${RUN_BUILD:-1}"
cli_test_init_cli_cmd

WORK_DIR="$(mktemp -d "${TMPDIR:-/tmp}/livesync-repro-860.XXXXXX")"
trap 'rm -rf "$WORK_DIR"' EXIT

SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
mkdir -p "$VAULT_DIR"

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."
    npm run build
fi

echo "[INFO] generating settings -> $SETTINGS_FILE"
cli_test_init_settings_file "$SETTINGS_FILE"

# 1. Test 'ls' on empty database
echo "[INFO] Testing 'ls' on empty database..."
LS_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" ls)
if [[ -z "$LS_OUTPUT" ]]; then
    echo "[REPRODUCED] 'ls' returned empty output for empty database."
else
    echo "[INFO] 'ls' output: $LS_OUTPUT"
fi

# 2. Test 'mirror' on empty vault
echo "[INFO] Testing 'mirror' on empty vault..."
MIRROR_OUTPUT=$(run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror 2>&1)
if [[ "$MIRROR_OUTPUT" == *"[Command] mirror"* ]] && [[ ! "$MIRROR_OUTPUT" == *"[Mirror]"* ]]; then
    # Note: currently it prints [Command] mirror to stderr.
    # Let's see if it prints anything else.
    echo "[REPRODUCED] 'mirror' produced no functional logs (only command header)."
else
    echo "[INFO] 'mirror' output: $MIRROR_OUTPUT"
fi

echo "[DONE] finished repro-860 test"
83
src/apps/cli/test/test-mirror-linux.sh
Normal file → Executable file
@@ -28,7 +28,9 @@ trap 'rm -rf "$WORK_DIR"' EXIT

SETTINGS_FILE="$WORK_DIR/data.json"
VAULT_DIR="$WORK_DIR/vault"
DB_DIR="$WORK_DIR/db"
mkdir -p "$VAULT_DIR/test"
mkdir -p "$DB_DIR"

if [[ "$RUN_BUILD" == "1" ]]; then
    echo "[INFO] building CLI..."

@@ -41,6 +43,20 @@ cli_test_init_settings_file "$SETTINGS_FILE"
# isConfigured=true is required for mirror (canProceedScan checks this)
cli_test_mark_settings_configured "$SETTINGS_FILE"

# Preparation: copy the settings for the separated-paths layout
DB_SETTINGS="$DB_DIR/settings.json"
cp "$SETTINGS_FILE" "$DB_SETTINGS"

# Helper for standard run (separated paths)
run_mirror_test() {
    run_cli "$DB_DIR" --settings "$DB_SETTINGS" mirror "$VAULT_DIR"
}

# Helper for compatibility run (same path)
run_mirror_compat() {
    run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
}

PASS=0
FAIL=0

@@ -78,19 +94,27 @@ portable_touch_timestamp() {
# Case 1: File exists only in storage → should be synced into DB after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 1: storage-only → DB ==="
echo "=== Case 1: storage-only → DB (Separated Paths) ==="

printf 'storage-only content\n' > "$VAULT_DIR/test/storage-only.md"

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
echo "[DEBUG] DB_DIR: $DB_DIR"
echo "[DEBUG] VAULT_DIR: $VAULT_DIR"

run_mirror_test

RESULT_FILE="$WORK_DIR/case1-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/storage-only.md "$RESULT_FILE"
# Try 'ls' first to see what's in the DB
echo "--- DB contents ---"
run_cli "$DB_DIR" --settings "$DB_SETTINGS" ls
echo "-------------------"

run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/storage-only.md "$RESULT_FILE"

if cmp -s "$VAULT_DIR/test/storage-only.md" "$RESULT_FILE"; then
    assert_pass "storage-only file was synced into DB"
    assert_pass "storage-only file was synced into DB using separated paths"
else
    assert_fail "storage-only file NOT synced into DB"
    assert_fail "storage-only file NOT synced into DB with separated paths"
    echo "--- storage ---" >&2; cat "$VAULT_DIR/test/storage-only.md" >&2
    echo "--- cat ---" >&2; cat "$RESULT_FILE" >&2
fi

@@ -99,9 +123,9 @@ fi
# Case 2: File exists only in DB → should be restored to storage after mirror
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 2: DB-only → storage ==="
echo "=== Case 2: DB-only → storage (Separated Paths) ==="

printf 'db-only content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/db-only.md
printf 'db-only content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/db-only.md

if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
    assert_fail "db-only.md unexpectedly exists in storage before mirror"

@@ -109,7 +133,7 @@ else
    echo "[INFO] confirmed: test/db-only.md not in storage before mirror"
fi

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test

if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
    STORAGE_CONTENT="$(cat "$VAULT_DIR/test/db-only.md")"

@@ -119,19 +143,19 @@ if [[ -f "$VAULT_DIR/test/db-only.md" ]]; then
        assert_fail "DB-only file restored but content mismatch (got: '${STORAGE_CONTENT}')"
    fi
else
    assert_fail "DB-only file was NOT restored to storage"
    assert_fail "DB-only file NOT restored to storage after mirror"
fi

# ─────────────────────────────────────────────────────────────────────────────
# Case 3: File deleted in DB → should NOT be created in storage
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 3: DB-deleted → storage untouched ==="
echo "=== Case 3: DB-deleted → storage untouched (Separated Paths) ==="

printf 'to-be-deleted\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/deleted.md
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" rm test/deleted.md
printf 'to-be-deleted\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/deleted.md
run_cli "$DB_DIR" --settings "$DB_SETTINGS" rm test/deleted.md

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test

if [[ ! -f "$VAULT_DIR/test/deleted.md" ]]; then
    assert_pass "deleted DB entry was not restored to storage"

@@ -143,19 +167,19 @@ fi
# Case 4: Both exist, storage is newer → DB should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 4: storage newer → DB updated ==="
echo "=== Case 4: storage newer → DB updated (Separated Paths) ==="

# Seed DB with old content (mtime ≈ now)
printf 'old content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-storage-newer.md
printf 'old content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-storage-newer.md

# Write new content to storage with a timestamp 1 hour in the future
printf 'new content\n' > "$VAULT_DIR/test/sync-storage-newer.md"
touch -t "$(portable_touch_timestamp '+1 hour')" "$VAULT_DIR/test/sync-storage-newer.md"

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test

DB_RESULT_FILE="$WORK_DIR/case4-pull.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
run_cli "$DB_DIR" --settings "$DB_SETTINGS" pull test/sync-storage-newer.md "$DB_RESULT_FILE"
if cmp -s "$VAULT_DIR/test/sync-storage-newer.md" "$DB_RESULT_FILE"; then
    assert_pass "DB updated to match newer storage file"
else

@@ -168,16 +192,16 @@ fi
# Case 5: Both exist, DB is newer → storage should be updated
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 5: DB newer → storage updated ==="
echo "=== Case 5: DB newer → storage updated (Separated Paths) ==="

# Write old content to storage with a timestamp 1 hour in the past
printf 'old storage content\n' > "$VAULT_DIR/test/sync-db-newer.md"
touch -t "$(portable_touch_timestamp '-1 hour')" "$VAULT_DIR/test/sync-db-newer.md"

# Write new content to DB only (mtime ≈ now, newer than the storage file)
printf 'new db content\n' | run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" put test/sync-db-newer.md
printf 'new db content\n' | run_cli "$DB_DIR" --settings "$DB_SETTINGS" put test/sync-db-newer.md

run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror
run_mirror_test

STORAGE_CONTENT="$(cat "$VAULT_DIR/test/sync-db-newer.md")"
if [[ "$STORAGE_CONTENT" == "new db content" ]]; then

@@ -186,6 +210,25 @@ else
    assert_fail "storage NOT updated to match newer DB entry (got: '${STORAGE_CONTENT}')"
fi

# ─────────────────────────────────────────────────────────────────────────────
# Case 6: Compatibility test - omitted vault-path
# ─────────────────────────────────────────────────────────────────────────────
echo ""
echo "=== Case 6: omitted vault-path (Compatibility Mode) ==="

# We use VAULT_DIR as the "main" database path for this part.
printf 'compat-content\n' > "$VAULT_DIR/compat.md"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" mirror

# In compat mode, it should find the file in the DB at root
CAT_RESULT="$WORK_DIR/compat-cat.txt"
run_cli "$VAULT_DIR" --settings "$SETTINGS_FILE" pull compat.md "$CAT_RESULT"
if [[ "$(cat "$CAT_RESULT")" == "compat-content" ]]; then
    assert_pass "Compatibility mode works (omitted vault-path)"
else
    assert_fail "Compatibility mode failed to sync file into DB"
fi

# ─────────────────────────────────────────────────────────────────────────────
# Summary
# ─────────────────────────────────────────────────────────────────────────────
@@ -138,7 +138,7 @@ export const _requestToCouchDBFetch = async (
        authorization: authHeader,
        "content-type": "application/json",
    };
    const uri = `${baseUri}/${path}`;
    const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
    const requestParam = {
        url: uri,
        method: method || (body ? "PUT" : "GET"),

@@ -162,7 +162,7 @@ export const _requestToCouchDB = async (
    const authHeaderGen = new AuthorizationHeaderGenerator();
    const authHeader = await authHeaderGen.getAuthorizationHeader(credentials);
    const transformedHeaders: Record<string, string> = { authorization: authHeader, origin: origin, ...customHeaders };
    const uri = `${baseUri}/${path}`;
    const uri = `${baseUri.replace(/\/+$/, "")}/${path}`;
    const requestParam: RequestUrlParam = {
        url: uri,
        method: method || (body ? "PUT" : "GET"),
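The change above strips trailing slashes from the base URI before joining it with the request path, so a user-entered URI such as `https://couch.example.test/` no longer produces a double-slash URL. A minimal sketch of the behaviour (the `joinUri` helper is illustrative only, not part of the plugin):

```typescript
// Hypothetical helper demonstrating the trailing-slash normalisation used above.
function joinUri(baseUri: string, path: string): string {
    // Remove any run of trailing slashes, then add exactly one separator.
    return `${baseUri.replace(/\/+$/, "")}/${path}`;
}

console.log(joinUri("https://couch.example.test/", "_all_dbs")); // https://couch.example.test/_all_dbs
console.log(joinUri("https://couch.example.test", "_all_dbs")); // https://couch.example.test/_all_dbs
```

Without the normalisation, the first call would yield `https://couch.example.test//_all_dbs`, which some reverse proxies reject or route differently.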
@@ -30,7 +30,8 @@
    type JSONData = Record<string | number | symbol, any> | [any];

    const docsArray = $derived.by(() => {
        if (docs && docs.length >= 1) {
        // The merge pane compares two revisions, so guard against incomplete input before reading docs[1].
        if (docs && docs.length >= 2) {
            if (keepOrder || docs[0].mtime < docs[1].mtime) {
                return { a: docs[0], b: docs[1] } as const;
            } else {
@@ -636,10 +636,24 @@ Offline Changed files: ${processFiles.length}`;

    // --> Conflict processing

    // Keep one in-flight conflict check per path so repeated sync events do not close the active merge dialogue.
    pendingConflictChecks = new Set<FilePathWithPrefix>();

    queueConflictCheck(path: FilePathWithPrefix) {
        if (this.pendingConflictChecks.has(path)) return;
        this.pendingConflictChecks.add(path);
        this.conflictResolutionProcessor.enqueue(path);
    }

    finishConflictCheck(path: FilePathWithPrefix) {
        this.pendingConflictChecks.delete(path);
    }

    requeueConflictCheck(path: FilePathWithPrefix) {
        this.finishConflictCheck(path);
        this.queueConflictCheck(path);
    }

    async resolveConflictOnInternalFiles() {
        // Scan all conflicted internal files
        const conflicted = this.localDatabase.findEntries(ICHeader, ICHeaderEnd, { conflicts: true });

@@ -648,7 +662,7 @@ Offline Changed files: ${processFiles.length}`;
            for await (const doc of conflicted) {
                if (!("_conflicts" in doc)) continue;
                if (isInternalMetadata(doc._id)) {
                    this.conflictResolutionProcessor.enqueue(doc.path);
                    this.queueConflictCheck(doc.path);
                }
            }
        } catch (ex) {

@@ -679,21 +693,27 @@ Offline Changed files: ${processFiles.length}`;
        const cc = await this.localDatabase.getRaw(id, { conflicts: true });
        if (cc._conflicts?.length === 0) {
            await this.extractInternalFileFromDatabase(stripAllPrefixes(path));
            this.finishConflictCheck(path);
        } else {
            this.conflictResolutionProcessor.enqueue(path);
            this.requeueConflictCheck(path);
        }
        // check the file again
    }
    conflictResolutionProcessor = new QueueProcessor(
        async (paths: FilePathWithPrefix[]) => {
            const path = paths[0];
            sendSignal(`cancel-internal-conflict:${path}`);
            try {
                // Retrieve data
                const id = await this.path2id(path, ICHeader);
                const doc = await this.localDatabase.getRaw<MetaEntry>(id, { conflicts: true });
                if (doc._conflicts === undefined) return [];
                if (doc._conflicts.length == 0) return [];
                if (doc._conflicts === undefined) {
                    this.finishConflictCheck(path);
                    return [];
                }
                if (doc._conflicts.length == 0) {
                    this.finishConflictCheck(path);
                    return [];
                }
                this._log(`Hidden file conflicted:${path}`);
                const conflicts = doc._conflicts.sort((a, b) => Number(a.split("-")[0]) - Number(b.split("-")[0]));
                const revA = doc._rev;

@@ -725,7 +745,7 @@ Offline Changed files: ${processFiles.length}`;
                    await this.storeInternalFileToDatabase({ path: filename, ...stat });
                    await this.extractInternalFileFromDatabase(filename);
                    await this.localDatabase.removeRevision(id, revB);
                    this.conflictResolutionProcessor.enqueue(path);
                    this.requeueConflictCheck(path);
                    return [];
                } else {
                    this._log(`Object merge is not applicable.`, LOG_LEVEL_VERBOSE);

@@ -743,6 +763,7 @@ Offline Changed files: ${processFiles.length}`;
                await this.resolveByNewerEntry(id, path, doc, revA, revB);
                return [];
            } catch (ex) {
                this.finishConflictCheck(path);
                this._log(`Failed to resolve conflict (Hidden): ${path}`);
                this._log(ex, LOG_LEVEL_VERBOSE);
                return [];

@@ -761,15 +782,22 @@ Offline Changed files: ${processFiles.length}`;
            const prefixedPath = addPrefix(path, ICHeader);
            const docAMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revA });
            const docBMerge = await this.localDatabase.getDBEntry(prefixedPath, { rev: revB });
            if (docAMerge != false && docBMerge != false) {
                if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
                    // Again for other conflicted revisions.
                    this.conflictResolutionProcessor.enqueue(path);
            try {
                if (docAMerge != false && docBMerge != false) {
                    if (await this.showJSONMergeDialogAndMerge(docAMerge, docBMerge)) {
                        // Again for other conflicted revisions.
                        this.requeueConflictCheck(path);
                    } else {
                        this.finishConflictCheck(path);
                    }
                    return;
                } else {
                    // If either revision could not be read, force resolving by the newer one.
                    await this.resolveByNewerEntry(id, path, doc, revA, revB);
                }
                return;
            } else {
                // If either revision could not be read, force resolving by the newer one.
                await this.resolveByNewerEntry(id, path, doc, revA, revB);
            } catch (ex) {
                this.finishConflictCheck(path);
                throw ex;
            }
        },
        {

@@ -793,6 +821,8 @@ Offline Changed files: ${processFiles.length}`;
        const storeFilePath = strippedPath;
        const displayFilename = `${storeFilePath}`;
        // const path = this.prefixedConfigDir2configDir(stripAllPrefixes(docA.path)) || docA.path;
        // Cancel only when replacing an existing dialogue for the same path, not on every queue pass.
        sendSignal(`cancel-internal-conflict:${docA.path}`);
        const modal = new JsonResolveModal(this.app, storageFilePath, [docA, docB], async (keep, result) => {
            // modal.close();
            try {

@@ -1164,7 +1194,7 @@ Offline Changed files: ${files.length}`;
        // Check if the file is conflicted, and if so, enqueue to resolve.
        // Until the conflict is resolved, the file will not be processed.
        if (docMeta._conflicts && docMeta._conflicts.length > 0) {
            this.conflictResolutionProcessor.enqueue(path);
            this.queueConflictCheck(path);
            this._log(`${headerLine} Hidden file conflicted, enqueued to resolve`);
            return true;
        }
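The `pendingConflictChecks` set introduced above gives each path an explicit in-flight lifecycle: queue, finish, or requeue. A minimal standalone sketch of the same de-duplication pattern (the class and method names here are simplified stand-ins, not the plugin's API):

```typescript
// Sketch of a Set-guarded queue: a path with a pending check is not enqueued
// again, so repeated sync events cannot re-trigger (and thereby close) an
// active merge dialogue for the same file.
class ConflictQueue {
    private pending = new Set<string>();
    private queue: string[] = [];

    enqueue(path: string): void {
        if (this.pending.has(path)) return; // already in flight; drop the duplicate
        this.pending.add(path);
        this.queue.push(path);
    }

    finish(path: string): void {
        this.pending.delete(path); // the path may be queued again later
    }

    get size(): number {
        return this.queue.length;
    }
}

const q = new ConflictQueue();
q.enqueue("notes/a.md");
q.enqueue("notes/a.md"); // ignored while the first check is pending
console.log(q.size); // 1
q.finish("notes/a.md");
q.enqueue("notes/a.md"); // accepted again after finish
console.log(q.size); // 2
```

Note that, as in the diff above, every exit path (success, no conflicts, error) must call `finish`, otherwise the path would be blocked from ever being checked again.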
@@ -781,7 +781,8 @@ Success: ${successCount}, Errored: ${errored}`;
    const credential = generateCredentialObject(this.settings);
    const request = async (path: string, method: string = "GET", body: any = undefined) => {
        const req = await _requestToCouchDB(
            this.settings.couchDB_URI + (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
            this.settings.couchDB_URI.replace(/\/+$/, "") +
                (this.settings.couchDB_DBNAME ? `/${this.settings.couchDB_DBNAME}` : ""),
            credential,
            window.origin,
            path,
2
src/lib
Submodule src/lib updated: dd48d97e71...91b5981219
@@ -137,6 +137,23 @@ export function paneHatch(this: ObsidianLiveSyncSettingTab, paneEl: HTMLElement,
    pluginConfig.accessKey = REDACTED;
    pluginConfig.secretKey = REDACTED;
    const redact = (source: string) => `${REDACTED}(${source.length} letters)`;
    const toSchemeOnly = (uri: string) => {
        try {
            return `${new URL(uri).protocol}//`;
        } catch {
            const matched = uri.match(/^[A-Za-z][A-Za-z0-9+.-]*:\/\//);
            return matched?.[0] ?? REDACTED;
        }
    };
    pluginConfig.remoteConfigurations = Object.fromEntries(
        Object.entries(pluginConfig.remoteConfigurations || {}).map(([id, config]) => [
            id,
            {
                ...config,
                uri: toSchemeOnly(config.uri),
            },
        ])
    );
    pluginConfig.region = redact(pluginConfig.region);
    pluginConfig.bucket = redact(pluginConfig.bucket);
    pluginConfig.pluginSyncExtendedSetting = {};
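The `toSchemeOnly` redaction above keeps only the URI scheme when exporting remote configurations, so hostnames and paths never leak into a shared configuration report. A standalone sketch of the same logic (`REDACTED` is a stand-in for the plugin's constant):

```typescript
// Stand-in for the plugin's REDACTED marker constant.
const REDACTED = "<redacted>";

// Keep only the scheme of a URI; fall back to a regex prefix match for
// strings the URL parser rejects, and to REDACTED when nothing scheme-like
// can be recovered.
const toSchemeOnly = (uri: string): string => {
    try {
        return `${new URL(uri).protocol}//`;
    } catch {
        const matched = uri.match(/^[A-Za-z][A-Za-z0-9+.-]*:\/\//);
        return matched?.[0] ?? REDACTED;
    }
};

console.log(toSchemeOnly("https://couch.example.test:5984/db")); // "https://"
console.log(toSchemeOnly("not a uri")); // "<redacted>"
```

The try/catch split matters: `new URL` throws on malformed input, so even a garbage value still redacts safely instead of passing through unchanged.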
@@ -32,6 +32,7 @@ import SetupRemote from "../SetupWizard/dialogs/SetupRemote.svelte";
import SetupRemoteCouchDB from "../SetupWizard/dialogs/SetupRemoteCouchDB.svelte";
import SetupRemoteBucket from "../SetupWizard/dialogs/SetupRemoteBucket.svelte";
import SetupRemoteP2P from "../SetupWizard/dialogs/SetupRemoteP2P.svelte";
import { syncActivatedRemoteSettings } from "./remoteConfigBuffer.ts";

function getSettingsFromEditingSettings(editingSettings: AllSettings): ObsidianLiveSyncSettings {
    const workObj = { ...editingSettings } as ObsidianLiveSyncSettings;

@@ -183,6 +184,11 @@ export function paneRemoteConfig(
    }, true);

    if (synchroniseActiveRemote) {
        // Keep both buffers aligned with the newly activated remote before saving any remaining dirty keys.
        syncActivatedRemoteSettings(this.editingSettings, this.core.settings);
        if (this.initialSettings) {
            syncActivatedRemoteSettings(this.initialSettings, this.core.settings);
        }
        await this.saveAllDirtySettings();
    }
17
src/modules/features/SettingDialogue/remoteConfigBuffer.ts
Normal file
@@ -0,0 +1,17 @@
import { pickBucketSyncSettings, pickCouchDBSyncSettings, pickP2PSyncSettings } from "@lib/common/utils.ts";
import type { ObsidianLiveSyncSettings } from "@lib/common/types.ts";

// Keep the setting dialogue buffer aligned with the current core settings before persisting other dirty keys.
// This also clears stale dirty values left from editing a different remote type before switching active remotes.
export function syncActivatedRemoteSettings(
    target: Partial<ObsidianLiveSyncSettings>,
    source: ObsidianLiveSyncSettings
): void {
    Object.assign(target, {
        remoteType: source.remoteType,
        activeConfigurationId: source.activeConfigurationId,
        ...pickBucketSyncSettings(source),
        ...pickCouchDBSyncSettings(source),
        ...pickP2PSyncSettings(source),
    });
}
@@ -0,0 +1,83 @@
|
||||
import { describe, expect, it } from "vitest";
import { DEFAULT_SETTINGS, REMOTE_COUCHDB, REMOTE_MINIO } from "../../../lib/src/common/types";
import { syncActivatedRemoteSettings } from "./remoteConfigBuffer";

describe("syncActivatedRemoteSettings", () => {
    it("should copy active MinIO credentials into the editing buffer", () => {
        const target = {
            ...DEFAULT_SETTINGS,
            remoteType: REMOTE_COUCHDB,
            activeConfigurationId: "old-remote",
            accessKey: "",
            secretKey: "",
            endpoint: "",
            bucket: "",
            region: "",
            encrypt: true,
        };
        const source = {
            ...DEFAULT_SETTINGS,
            remoteType: REMOTE_MINIO,
            activeConfigurationId: "remote-s3",
            accessKey: "access",
            secretKey: "secret",
            endpoint: "https://minio.example.test",
            bucket: "vault",
            region: "sz-hq",
            bucketPrefix: "folder/",
            useCustomRequestHandler: false,
            forcePathStyle: true,
            bucketCustomHeaders: "",
        };

        syncActivatedRemoteSettings(target, source);

        expect(target.remoteType).toBe(REMOTE_MINIO);
        expect(target.activeConfigurationId).toBe("remote-s3");
        expect(target.accessKey).toBe("access");
        expect(target.secretKey).toBe("secret");
        expect(target.endpoint).toBe("https://minio.example.test");
        expect(target.bucket).toBe("vault");
        expect(target.region).toBe("sz-hq");
        expect(target.bucketPrefix).toBe("folder/");
        expect(target.encrypt).toBe(true);
    });

    it("should clear stale dirty values from a different remote type", () => {
        const target = {
            ...DEFAULT_SETTINGS,
            remoteType: REMOTE_MINIO,
            activeConfigurationId: "remote-s3",
            accessKey: "access",
            secretKey: "secret",
            endpoint: "https://minio.example.test",
            bucket: "vault",
            region: "sz-hq",
            couchDB_URI: "https://edited.invalid",
            couchDB_USER: "edited-user",
            couchDB_PASSWORD: "edited-pass",
            couchDB_DBNAME: "edited-db",
        };
        const source = {
            ...DEFAULT_SETTINGS,
            remoteType: REMOTE_MINIO,
            activeConfigurationId: "remote-s3",
            accessKey: "access",
            secretKey: "secret",
            endpoint: "https://minio.example.test",
            bucket: "vault",
            region: "sz-hq",
            couchDB_URI: "https://current.example.test",
            couchDB_USER: "current-user",
            couchDB_PASSWORD: "current-pass",
            couchDB_DBNAME: "current-db",
        };

        syncActivatedRemoteSettings(target, source);

        expect(target.couchDB_URI).toBe("https://current.example.test");
        expect(target.couchDB_USER).toBe("current-user");
        expect(target.couchDB_PASSWORD).toBe("current-pass");
        expect(target.couchDB_DBNAME).toBe("current-db");
    });
});
updates.md (84 lines changed)
@@ -3,60 +3,76 @@ Since 19th July, 2025 (beta1 in 0.25.0-beta1, 13th July, 2025)

The head note of 0.25 is now in [updates_old.md](https://github.com/vrtmrz/obsidian-livesync/blob/main/updates_old.md). Because 0.25 received a lot of updates, compatibility has thankfully been kept and no breaking changes were needed! Once it is stable enough, the next version will be v1.0.0. At least, that is my hope.
-## 0.25.56+patched5
+## 0.25.60

-6th April, 2026
+29th April, 2026

### Fixed

- Now larger settings can be exported and imported via QR code without issues. (#595)
    - When the settings data exceeds the QR code capacity, it is now split into multiple QR codes.
    - These QR codes are reassembled by the aggregator page, which collects the split data and reconstructs the original settings.
    - The aggregator page is available at `https://vrtmrz.github.io/obsidian-livesync/aggregator.html`; this file is also included in the repository.
    - We never send the settings data to any server. The QR code data is generated and processed entirely on the client side, ensuring that your settings remain private and secure. However, please be careful about your network environment.
- Fixed some errors during serialisation and deserialisation of the settings, which caused issues in some cases when importing/exporting settings via QR code.
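The split-and-reassemble flow described above can be sketched in a few lines. This is a hedged illustration only: the chunk size, header format, and function names here are invented for the demo and do not reflect the plugin's actual QR payload format.

```typescript
// Tiny payload capacity so the demo produces several chunks; real QR
// codes hold far more.
const CAPACITY = 16;

// Split data into indexed chunks; each carries "index/total:" so the
// aggregator can reassemble them regardless of scan order.
function splitForQr(data: string): string[] {
    const total = Math.ceil(data.length / CAPACITY);
    const parts: string[] = [];
    for (let i = 0; i < total; i++) {
        parts.push(`${i + 1}/${total}:` + data.slice(i * CAPACITY, (i + 1) * CAPACITY));
    }
    return parts;
}

// Parse the header of each scanned chunk, sort by index, and join the
// bodies back into the original settings string.
function reassemble(codes: string[]): string {
    return codes
        .map((c) => {
            const sep = c.indexOf(":"); // header contains no ":", so this is safe
            const [index] = c.slice(0, sep).split("/").map(Number);
            return { index, body: c.slice(sep + 1) };
        })
        .sort((a, b) => a.index - b.index)
        .map((p) => p.body)
        .join("");
}

const settingsJson = JSON.stringify({ remoteType: "minio", endpoint: "https://example.test" });
const codes = splitForQr(settingsJson);
// Scanning in reverse order still reconstructs the original data.
const restored = reassemble([...codes].reverse());
console.log(restored === settingsJson); // true
```

The indexed-header approach is what makes an out-of-order scan workable: the aggregator needs no state beyond the chunks themselves.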

### Fixed (CLI)

- `ls` and `mirror` commands now provide informative feedback when no documents are found or filters skip all files, resolving the issue where they would exit silently (#860).
- Improved the clarity of CLI command logs by including the total count of processed items.
- The command-line argument `vault` has been renamed to the more appropriate `databaseDir`.
    - The `mirror` command now accepts a `vault` directory, which specifies the location where the actual files are stored. For compatibility reasons, the previous behaviour is still supported.
## 0.25.59

### Fixed

- The setup wizard no longer silently drops the username and password. (#865)
    - Thank you so much, @koteitan!
- Setup URI is now correctly imported (#859).
    - Also, thank you so much, @koteitan!

### Improved

- A French translation has been added by @foXaCe! Thank you so much!

## 0.25.58

### Fixed

- Credentials are no longer broken during object storage configuration (related: #852).
- Fixed a worker-side recursion issue that could raise `Maximum call stack size exceeded` during chunk splitting (related: #855).
- Improved background worker crash cleanup so pending split/encryption tasks are released cleanly instead of being left in a waiting state (related: #855).
- On start-up, the selected remote configuration is now applied to runtime connection fields as well, reducing intermittent authentication failures caused by stale runtime settings (related: #855).
- Issue report generation now redacts `remoteConfigurations` connection strings and keeps only the scheme (e.g. `sls+https://`), so credentials are not exposed in reports.
- Hidden file JSON conflicts no longer keep re-opening and dismissing the merge dialogue before we can act, which fixes persistent unresolvable `data.json` conflicts in plug-in settings sync (related: #850).
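Scheme-only redaction, as mentioned for issue reports, can be done with a small pattern match. This sketch is illustrative; the function name and fallback value are assumptions, not the plugin's actual implementation.

```typescript
// Keep only the URI scheme (e.g. "sls+https://db.example/vault" -> "sls+https://")
// so host, path, and embedded credentials never reach the report.
function redactConnectionString(uri: string): string {
    const m = uri.match(/^[a-z0-9+.-]+:\/\//i);
    return m ? m[0] : "<redacted>";
}

console.log(redactConnectionString("sls+https://user:pass@db.example/vault")); // "sls+https://"
console.log(redactConnectionString("not a uri")); // "<redacted>"
```

Redacting down to the scheme still tells a maintainer which remote type was in use, which is usually all a bug report needs.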

## 0.25.57

9th April, 2026

- Packing a batch during the journal sync now continues even if the batch contains no items to upload.

## 0.25.56+patched4

6th April, 2026

### Fixed

- Remote configuration URIs are now correctly encrypted when saved after editing in the settings dialogue.
- No unexpected error (about a replicator) during the early stage of initialisation.
- Error messages are now kept hidden if showing the status inside the editor is disabled (related: #829).
- Fixed an issue where devices could no longer upload after another device performed 'Fresh Start Wipe' and 'Overwrite remote' in Object Storage mode (#848).
    - Each device's local deduplication caches (`knownIDs`, `sentIDs`, `receivedFiles`, `sentFiles`) now track the remote journal epoch (derived from the encryption parameters stored on the remote).
    - When the epoch changes, the plugin verifies whether the device's last uploaded file still exists on the remote. If the file is gone, it confirms a remote wipe and automatically clears the stale caches. If the file is still present (e.g. a protocol upgrade without a wipe), the caches are preserved and only the epoch is updated. This means normal upgrades never cause unnecessary re-processing.
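The epoch-change check described above distinguishes a remote wipe from a harmless protocol upgrade by probing for the device's last uploaded file. A hedged sketch of that decision (the type names, storage shapes, and function signature here are hypothetical, not the plugin's real API):

```typescript
// Minimal stand-in for the per-device deduplication caches.
type DedupeCaches = { knownIDs: Set<string>; sentFiles: Set<string> };

interface Remote {
    epoch: string; // derived from the encryption parameters stored on the remote
    fileExists(id: string): Promise<boolean>;
}

// Returns the epoch the device should record after reconciliation.
async function reconcileCaches(
    remote: Remote,
    localEpoch: string,
    lastUploadedId: string | undefined,
    caches: DedupeCaches
): Promise<string> {
    if (remote.epoch === localEpoch) return localEpoch; // nothing changed

    // Epoch differs: probe whether our last upload survived.
    const stillThere = lastUploadedId ? await remote.fileExists(lastUploadedId) : false;
    if (!stillThere) {
        // Remote was wiped: stale dedupe state must be discarded.
        caches.knownIDs.clear();
        caches.sentFiles.clear();
    }
    // A protocol upgrade without a wipe keeps the caches; either way we
    // adopt the new epoch so the check is not repeated.
    return remote.epoch;
}
```

The key property is that a plain upgrade (file still present) costs nothing, while a genuine wipe resets only the caches that would otherwise suppress re-uploads.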

## 0.25.56+patched3

5th April, 2026

### Fixed

- Remote configurations are now reliably editable in the settings dialogue.
- We can fetch remote settings from the remote and apply them to the local settings for each remote configuration entry.
- The layout no longer breaks when the description of a remote configuration entry is too long.

## 0.25.56+patched2

5th April, 2026

Beta release tagging is now changed to +patched1, +patched2, and so on.

### Translations

- Russian translation has been added! Thank you so much for the contribution!

### Fixed

- No unexpected error (about a replicator) during the early stage of initialisation.
- Error messages are now kept hidden if showing the status inside the editor is disabled.
- Russian translation has been added! Thank you so much for the contribution, @vipka1n! (#845)

### New features

- We can now configure multiple Remote Databases of the same type, e.g., multiple CouchDBs or S3 remotes.
    - A user interface for managing multiple remote databases has been added to the settings dialogue. I think no explanation is needed for the UI, but please let me know if you have any questions.
- We can switch between multiple Remote Databases in the settings dialogue.

### CLI

#### Fixed

-- Replication progress is now correctly saved and restored in the CLI.
+- Replication progress is now correctly saved and restored in the CLI (related: #846).

## ~~0.25.55~~ 0.25.56