I had a motherboard failure about 45 days ago on an Umbrel Ubuntu system. It took some time to build a new Umbrel VM. I applied my seed phrase when installing the Lightning node and let the system grind for a week. I also tried the restore option with my latest channel backup; no channels were restored and no on-chain balance returned. So my Lightning node has no channels and no sats balance to speak of. Amboss still shows my node with its channels, so I think I haven't quite got things restored. I don't know what else to try at this point; any tips appreciated.
Sorry to hear that.
Firstly, you can try restoring with this seed phrase in another wallet. I would recommend BlueWallet or Sparrow, or anything else you're familiar with (remember to have Taproot enabled); this should at least give you an immediate way to access your on-chain funds.
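One caveat worth noting: LND's 24-word seed is aezeed rather than BIP39, so not every external wallet can import it directly (BlueWallet can; Sparrow generally can't without conversion). If you instead redo the recovery on the node itself, a larger address look-ahead can help. A minimal sketch, assuming shell access and a freshly reset wallet (the container name matches the Docker list in the debug output below; 5000 is just an illustrative value):

docker exec -it lightning_lnd_1 lncli create
# Answer "y" to the existing cipher seed mnemonic prompt, enter the 24 words,
# and set the optional address look-ahead above the default 2500 (e.g. 5000)
# so the rescan checks deeper into the address chain.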
I can see in a past post that you used a backup image. Would it be possible for you to navigate to the settings dashboard and press 'START' under Troubleshooting? If you could share the output of that here, along with the approximate date your node went down, we can use it to search for older Lightning channel backups.
Let me know how you go with the above, and we can look at some next steps. Outside of that, you could always do another rescan too:
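A minimal sketch of that rescan, assuming an Umbrel 0.5.x install where the lightning app's lnd.conf lives under ~/umbrel/app-data/lightning/data/lnd/ (the path is an assumption; adjust to your layout):

# Tell LND to drop and rescan all on-chain wallet transactions on next start.
echo "reset-wallet-transactions=true" >> ~/umbrel/app-data/lightning/data/lnd/lnd.conf
sudo ~/umbrel/scripts/app restart lightning
# Remove the line again once the rescan finishes, so it doesn't re-run on
# every restart.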
Here is my debug log:
=====================
= Umbrel debug info =
=====================
Umbrel version
0.5.4
Memory usage
total used free shared buff/cache available
Mem: 11G 2.8G 492M 2.0M 8.6G 8.8G
Swap: 2.0G 992M 1.1G
total: 23.6%
bitcoin: 9.1%
lightning: 6%
system: 5.4%
thunderhub: 1.1%
nostr-relay: 0.8%
snowflake: 0.6%
snort: 0.6%
Memory monitor logs
2023-09-28 18:04:51 Memory monitor running!
857369 pts/0 R+ 0:00 bash ./scripts/memory-monitor
Memory monitor is already running
2023-09-29 16:16:31 Memory monitor running!
1870984 pts/3 R+ 0:00 bash ./scripts/memory-monitor
1870986 pts/3 R+ 0:00 bash ./scripts/memory-monitor
Memory monitor is already running
2023-10-23 22:40:17 Memory monitor running!
2023-10-24 07:34:32 Memory monitor running!
2023-10-29 11:33:44 Memory monitor running!
Filesystem information
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 50G 29G 19G 61% /
/dev/sdb1 2.0T 1.2T 697G 64% /mnt/d_disk
Karen logs
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:08 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:12 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:13 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:15 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:16 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:17 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:18 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:21 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:22 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:23 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:24 --:--:-- 0
100 10569 0 0 100 10569 0 396 0:00:26 0:00:26 --:--:-- 1856
100 10569 0 0 100 10569 0 382 0:00:27 0:00:27 --:--:-- 1855
100 10569 0 0 100 10569 0 369 0:00:28 0:00:28 --:--:-- 1856
100 10569 0 0 100 10569 0 356 0:00:29 0:00:29 --:--:-- 1856
100 10569 0 0 100 10569 0 344 0:00:30 0:00:30 --:--:-- 1856
100 10569 0 0 100 10569 0 334 0:00:31 0:00:31 --:--:-- 0
100 10569 0 0 100 10569 0 323 0:00:32 0:00:32 --:--:-- 0
100 10569 0 0 100 10569 0 314 0:00:33 0:00:33 --:--:-- 0
100 10715 100 146 100 10569 4 309 0:00:36 0:00:34 0:00:02 32
100 10715 100 146 100 10569 4 309 0:00:36 0:00:34 0:00:02 41
{"message":"Successfully uploaded backup 1698763669016.tar.gz.pgp for backup ID 19a5a855b0cf0c60e1683295a6bae136444d2c34590b27d34d3c062d7110e14a"}
====== Backup success =======
Got signal: backup
karen is getting triggered!
Deriving keys…
Creating backup…
Adding random padding…
1+0 records in
1+0 records out
4293 bytes (4.3 kB, 4.2 KiB) copied, 3.4025e-05 s, 126 MB/s
Creating encrypted tarball…
backup/
backup/channel.backup
backup/.padding
Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 5084 100 146 100 4938 46 1579 0:00:03 0:00:03 --:--:-- 1626
100 5084 100 146 100 4938 46 1579 0:00:03 0:00:03 --:--:-- 1626
{"message":"Successfully uploaded backup 1698806015805.tar.gz.pgp for backup ID 19a5a855b0cf0c60e1683295a6bae136444d2c34590b27d34d3c062d7110e14a"}
====== Backup success =======
Got signal: backup
karen is getting triggered!
Deriving keys…
Creating backup…
Adding random padding…
1+0 records in
1+0 records out
7922 bytes (7.9 kB, 7.7 KiB) copied, 3.3805e-05 s, 234 MB/s
Creating encrypted tarball…
backup/
backup/channel.backup
backup/.padding
Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 8593 0 0 100 8593 0 4526 0:00:01 0:00:01 --:--:-- 4525
100 8739 100 146 100 8593 59 3477 0:00:02 0:00:02 --:--:-- 3536
{"message":"Successfully uploaded backup 1698825088289.tar.gz.pgp for backup ID 19a5a855b0cf0c60e1683295a6bae136444d2c34590b27d34d3c062d7110e14a"}
====== Backup success =======
Got signal: change-password
karen is getting triggered!
This script must only be run on Umbrel OS
Got signal: debug
karen is getting triggered!
Docker containers
NAMES STATUS
bitcoin_server_1 Up 2 days
nostr-relay_relay-proxy_1 Up 2 days
bitcoin_tor_1 Up 2 days
bitcoin_tor_server_1 Up 2 days
bitcoin_bitcoind_1 Up 2 days
bitcoin_app_proxy_1 Up 2 days
bitcoin_i2pd_daemon_1 Up 2 days
snort_tor_server_1 Up 2 days
snort_web_1 Up 2 days
snort_app_proxy_1 Up 2 days
lightning_app_1 Up 2 days
lightning_lnd_1 Up 2 days
lightning_app_proxy_1 Up 2 days
lightning_tor_1 Up 2 days
lightning_tor_server_1 Up 2 days
nostr-relay_web_1 Up 2 days
nostr-relay_relay_1 Up 2 days
nostr-relay_tor_server_1 Up 2 days
nostr-relay_app_proxy_1 Up 2 days
thunderhub_tor_server_1 Up 2 days
thunderhub_app_proxy_1 Up 2 days
thunderhub_web_1 Up 2 days
snowflake_proxy_1 Up 2 days
snowflake_app_proxy_1 Up 2 days
snowflake_web_1 Up 2 days
snowflake_tor_server_1 Up 2 days
nginx Up 2 days
tor_proxy Up 2 days
dashboard Up 2 days
tor_server Up 2 days
auth Up 2 days
manager Up 2 days
Umbrel logs
Attaching to manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:29 GMT] "GET /v1/system/update-status HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:39 GMT] "GET /v1/apps HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:39 GMT] "GET /v1/apps?installed=1 HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:39 GMT] "GET /v1/system/memory HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:39 GMT] "GET /v1/system/storage HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:39 GMT] "GET /v1/system/get-update HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:46 GMT] "POST /v1/system/debug HTTP/1.0" 200 17 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:47 GMT] "GET /v1/system/debug-result HTTP/1.0" 200 23 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:48 GMT] "GET /v1/system/debug-result HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 01 Nov 2023 12:14:49 GMT] "GET /v1/system/debug-result HTTP/1.0" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
manager |
manager | umbrel-manager
Tor Proxy logs
Attaching to tor_proxy
tor_proxy | Nov 01 04:33:49.000 [notice] While bootstrapping, fetched this many bytes: 693202 (consensus network-status fetch); 14101 (authority cert fetch); 12176039 (microdescriptor fetch)
tor_proxy | Nov 01 04:33:49.000 [notice] While not bootstrapping, fetched this many bytes: 3379424 (consensus network-status fetch); 34928 (authority cert fetch); 1882875 (microdescriptor fetch)
tor_proxy | Nov 01 06:04:59.000 [warn] Received http status code 404 ("Not found") from server 51.161.32.229:443 while fetching "/tor/keys/fp/EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97".
tor_proxy | Nov 01 09:33:00.000 [warn] Received http status code 404 ("Not found") from server 51.161.32.229:443 while fetching "/tor/keys/fp/EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97".
tor_proxy | Nov 01 10:33:49.000 [notice] Heartbeat: Tor's uptime is 2 days 18:00 hours, with 9 circuits open. I've sent 381.44 MB and received 100.02 MB. I've received 41 connections on IPv4 and 0 on IPv6. I've made 139 connections with IPv4 and 0 with IPv6.
tor_proxy | Nov 01 10:33:49.000 [notice] While bootstrapping, fetched this many bytes: 693202 (consensus network-status fetch); 14101 (authority cert fetch); 12176039 (microdescriptor fetch)
tor_proxy | Nov 01 10:33:49.000 [notice] While not bootstrapping, fetched this many bytes: 3429558 (consensus network-status fetch); 38598 (authority cert fetch); 1975883 (microdescriptor fetch)
tor_proxy | Nov 01 11:47:02.000 [warn] Received http status code 404 ("Not found") from server 51.161.32.229:443 while fetching "/tor/keys/fp/EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97".
App logs
bitcoin
Attaching to bitcoin_server_1, bitcoin_tor_1, bitcoin_tor_server_1, bitcoin_bitcoind_1, bitcoin_app_proxy_1, bitcoin_i2pd_daemon_1
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
bitcoind_1 | 2023-11-01T11:44:47Z [net] Saw new cmpctblock header hash=00000000000000000001f71352dcfe3c0e873e3cde1ca8d79faea3e457dac6d0 peer=11
bitcoind_1 | 2023-11-01T11:44:47Z UpdateTip: new best=00000000000000000001f71352dcfe3c0e873e3cde1ca8d79faea3e457dac6d0 height=814830 version=0x3fff0000 log2_work=94.512129 tx=912199964 date='2023-11-01T11:44:49Z' progress=1.000000 cache=150.4MiB(970676txo)
bitcoind_1 | 2023-11-01T11:52:56Z Saw new header hash=00000000000000000003c059d005128533365b1f3583b6038a8c144511a1b488 height=814831
bitcoind_1 | 2023-11-01T11:52:56Z [net] Saw new cmpctblock header hash=00000000000000000003c059d005128533365b1f3583b6038a8c144511a1b488 peer=11
bitcoind_1 | 2023-11-01T11:52:57Z UpdateTip: new best=00000000000000000003c059d005128533365b1f3583b6038a8c144511a1b488 height=814831 version=0x30000000 log2_work=94.512142 tx=912203277 date='2023-11-01T11:52:49Z' progress=1.000000 cache=151.3MiB(977466txo)
bitcoind_1 | 2023-11-01T12:05:06Z Saw new header hash=000000000000000000026e1a54765eb62838d2a8aaba353bb4fed903a8f0a370 height=814832
bitcoind_1 | 2023-11-01T12:05:06Z [net] Saw new cmpctblock header hash=000000000000000000026e1a54765eb62838d2a8aaba353bb4fed903a8f0a370 peer=11
bitcoind_1 | 2023-11-01T12:05:06Z UpdateTip: new best=000000000000000000026e1a54765eb62838d2a8aaba353bb4fed903a8f0a370 height=814832 version=0x2001a000 log2_work=94.512156 tx=912206052 date='2023-11-01T12:04:50Z' progress=1.000000 cache=152.6MiB(986618txo)
bitcoind_1 | 2023-11-01T12:09:56Z New outbound peer connected: version: 70016, blocks=814832, peer=915 (block-relay-only)
server_1 | umbrel-middleware
server_1 | ::ffff:10.21.0.20 - - [Tue, 31 Oct 2023 00:19:46 GMT] "GET /v1/bitcoind/info/connections HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36"
server_1 |
server_1 | umbrel-middleware
server_1 | ::ffff:10.21.0.20 - - [Tue, 31 Oct 2023 00:19:46 GMT] "GET /v1/bitcoind/info/stats HTTP/1.1" 200 126 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36"
server_1 |
server_1 | umbrel-middleware
server_1 | ::ffff:10.21.0.20 - - [Tue, 31 Oct 2023 00:19:47 GMT] "GET /v1/bitcoind/info/sync HTTP/1.1" 200 103 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36"
server_1 |
server_1 | umbrel-middleware
tor_1 | Nov 01 10:33:53.000 [notice] Heartbeat: Tor's uptime is 2 days 18:00 hours, with 22 circuits open. I've sent 303.25 MB and received 399.27 MB. I've received 583 connections on IPv4 and 0 on IPv6. I've made 138 connections with IPv4 and 0 with IPv6.
tor_1 | Nov 01 10:33:53.000 [notice] While bootstrapping, fetched this many bytes: 693202 (consensus network-status fetch); 14101 (authority cert fetch); 12174865 (microdescriptor fetch)
tor_1 | Nov 01 10:33:53.000 [notice] While not bootstrapping, fetched this many bytes: 3434581 (consensus network-status fetch); 37716 (authority cert fetch); 1978160 (microdescriptor fetch)
tor_1 | Nov 01 10:33:53.000 [notice] Average packaged cell fullness: 64.702%. TLS write overhead: 2%
i2pd_daemon_1 | 11:37:23@657/error - Garlic: Can't handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 11:38:08@657/error - Garlic: Can't handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 11:39:24@657/error - Garlic: Can't handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 11:40:40@657/error - Garlic: Can't handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 11:45:19@834/error - Destination: Can't publish LeaseSet. Destination is not ready
i2pd_daemon_1 | 11:47:44@657/error - Garlic: Can't handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 11:56:02@834/error - SAM: Read error: End of file
i2pd_daemon_1 | 11:57:09@657/error - Garlic: Can't handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 11:58:40@657/error - ElGamal decrypt hash doesn't match
i2pd_daemon_1 | 11:58:40@657/error - Garlic: Can't handle ECIES-X25519-AEAD-Ratchet message
lightning
Attaching to lightning_app_1, lightning_lnd_1, lightning_app_proxy_1, lightning_tor_1, lightning_tor_server_1
app_1 | [backup-monitor] Checking channel backup…
app_1 | [backup-monitor] Sleeping…
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
tor_1 | Nov 01 05:24:07.000 [notice] No circuits are opened. Relaxed timeout for circuit 4356 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [1 similar message(s) suppressed in last 6000 seconds]
tor_1 | Nov 01 07:22:04.000 [notice] Our directory information is no longer up-to-date enough to build circuits: We're missing descriptors for 1/3 of our primary entry guards (total microdescriptors: 8205/8248). That's ok. We will try to fetch missing descriptors soon.
tor_1 | Nov 01 07:22:04.000 [notice] We now have enough directory information to build circuits.
tor_1 | Nov 01 08:15:06.000 [notice] No circuits are opened. Relaxed timeout for circuit 4531 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [1 similar message(s) suppressed in last 10260 seconds]
tor_1 | Nov 01 09:49:07.000 [notice] No circuits are opened. Relaxed timeout for circuit 4574 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [3 similar message(s) suppressed in last 5640 seconds]
tor_1 | Nov 01 10:33:52.000 [notice] Heartbeat: Tor's uptime is 2 days 18:00 hours, with 15 circuits open. I've sent 56.45 MB and received 55.77 MB. I've received 0 connections on IPv4 and 0 on IPv6. I've made 8 connections with IPv4 and 0 with IPv6.
tor_1 | Nov 01 10:33:52.000 [notice] While bootstrapping, fetched this many bytes: 693202 (consensus network-status fetch); 14101 (authority cert fetch); 12176039 (microdescriptor fetch)
tor_1 | Nov 01 10:33:52.000 [notice] While not bootstrapping, fetched this many bytes: 673953 (consensus network-status fetch); 2394 (authority cert fetch); 1972691 (microdescriptor fetch)
lnd_1 | 2023-11-01 11:52:57.883 [INF] UTXN: Attempting to graduate height=814831: num_kids=0, num_babies=0
lnd_1 | 2023-11-01 11:52:57.893 [INF] CHDB: Pruned unconnected node 03f48f7d16dba3006858f5005bee98d56fb219fc46d8427aeaf842a48c3688181b from channel graph
lnd_1 | 2023-11-01 11:52:57.895 [INF] CHDB: Pruned unconnected node 0238bff817f31bd9ed2effd3f9421752443161e451141667196f86124d601041c2 from channel graph
lnd_1 | 2023-11-01 11:52:57.895 [INF] CHDB: Pruned unconnected node 02fd3a026e90da8b7b010bbe7656b3442a2e65bb72789a050482497a75662e7761 from channel graph
lnd_1 | 2023-11-01 11:52:57.895 [INF] CHDB: Pruned 3 unconnected nodes from the channel graph
lnd_1 | 2023-11-01 11:52:57.903 [INF] CRTR: Block 00000000000000000003c059d005128533365b1f3583b6038a8c144511a1b488 (height=814831) closed 8 channels
lnd_1 | 2023-11-01 12:05:06.877 [INF] CRTR: Pruning channel graph using block 000000000000000000026e1a54765eb62838d2a8aaba353bb4fed903a8f0a370 (height=814832)
lnd_1 | 2023-11-01 12:05:06.905 [INF] CRTR: Block 000000000000000000026e1a54765eb62838d2a8aaba353bb4fed903a8f0a370 (height=814832) closed 6 channels
lnd_1 | 2023-11-01 12:05:06.923 [INF] NTFN: New block: height=814832, sha=000000000000000000026e1a54765eb62838d2a8aaba353bb4fed903a8f0a370
lnd_1 | 2023-11-01 12:05:06.923 [INF] UTXN: Attempting to graduate height=814832: num_kids=0, num_babies=0
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
nostr-relay
Attaching to nostr-relay_relay-proxy_1, nostr-relay_web_1, nostr-relay_relay_1, nostr-relay_tor_server_1, nostr-relay_app_proxy_1
relay_1 | Nov 01 12:04:57.179 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 88.689µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:05:57.181 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 87.716µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:06:57.181 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 66.947µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:07:57.183 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 65.915µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:08:57.183 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 73.59µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:09:57.185 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 221.45µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:10:57.187 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 64.301µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:11:57.189 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 64.904µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:12:57.190 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 61.817µs (result: Ok, WAL size: 0)
relay_1 | Nov 01 12:13:57.192 INFO nostr_rs_relay::repo::sqlite: checkpoint ran in 72.878µs (result: Ok, WAL size: 0)
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://nostr-relay_web_1:3000
web_1 | {"level":"info","ts":1698597229.6937203,"msg":"serving initial configuration"}
web_1 | {"level":"info","ts":1698597229.6937785,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
web_1 | {"level":"info","ts":1698597229.695004,"logger":"tls","msg":"finished cleaning storage units"}
web_1 | {"level":"info","ts":1698597474.7583938,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"10.21.0.8","remote_port":"52264","proto":"HTTP/1.1","method":"GET","host":"192.168.3.52:4848","uri":"/","headers":{"Accept":["*/*"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36"],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["192.168.3.52:4848"],"X-Forwarded-For":["::ffff:10.21.0.1"],"Accept-Language":["en-US,en;q=0.9"],"Accept-Encoding":["gzip, deflate"],"Connection":["close"]}},"user_id":"","duration":0.010047612,"size":366,"status":200,"resp_headers":{"Content-Encoding":["gzip"],"Vary":["Accept-Encoding"],"Server":["Caddy"],"Etag":["\"ryphtdge\""],"Content-Type":["text/html; charset=utf-8"],"Last-Modified":["Tue, 01 Aug 2023 09:46:25 GMT"]}}
web_1 | {"level":"info","ts":1698683630.973027,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
web_1 | {"level":"info","ts":1698683630.9752572,"logger":"tls","msg":"finished cleaning storage units"}
web_1 | {"level":"info","ts":1698710014.3377163,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"10.21.0.8","remote_port":"44464","proto":"HTTP/1.1","method":"GET","host":"192.168.3.52:4848","uri":"/","headers":{"Connection":["close"],"X-Forwarded-Proto":["http"],"X-Forwarded-For":["::ffff:10.21.0.1"],"Accept-Language":["en-US,en;q=0.9"],"Accept-Encoding":["gzip, deflate"],"Accept":["*/*"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36"],"X-Forwarded-Host":["192.168.3.52:4848"]}},"user_id":"","duration":0.00355439,"size":366,"status":200,"resp_headers":{"Last-Modified":["Tue, 01 Aug 2023 09:46:25 GMT"],"Content-Encoding":["gzip"],"Vary":["Accept-Encoding"],"Server":["Caddy"],"Etag":["\"ryphtdge\""],"Content-Type":["text/html; charset=utf-8"]}}
web_1 | {"level":"info","ts":1698770030.8827107,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
web_1 | {"level":"info","ts":1698770030.88392,"logger":"tls","msg":"finished cleaning storage units"}
web_1 | {"level":"info","ts":1698840818.869003,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"10.21.0.8","remote_port":"53642","proto":"HTTP/1.1","method":"GET","host":"192.168.3.52:4848","uri":"/","headers":{"X-Forwarded-For":["::ffff:10.21.0.1"],"Connection":["close"],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["192.168.3.52:4848"],"Accept-Language":["en-US,en;q=0.9"],"Accept-Encoding":["gzip, deflate"],"Accept":["*/*"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"]}},"user_id":"","duration":0.00390631,"size":366,"status":200,"resp_headers":{"Vary":["Accept-Encoding"],"Server":["Caddy"],"Etag":["\"ryphtdge\""],"Content-Type":["text/html; charset=utf-8"],"Last-Modified":["Tue, 01 Aug 2023 09:46:25 GMT"],"Content-Encoding":["gzip"]}}
relay-proxy_1 | Nostr relay-proxy server is listening on port 80
relay-proxy_1 | Connected to ws://nostr-relay_relay_1:8080
relay-proxy_1 | No store.json file created yet
app_proxy_1 | Waiting for nostr-relay_web_1:3000 to open…
app_proxy_1 | Nostr Relay is now ready…
app_proxy_1 | Listening on port: 4848
snort
Attaching to snort_tor_server_1, snort_web_1, snort_app_proxy_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://snort_web_1:80
app_proxy_1 | Waiting for snort_web_1:80 to open…
app_proxy_1 | Snort is now ready…
app_proxy_1 | Listening on port: 52027
web_1 | 2023/10/29 16:33:50 [notice] 6#6: using the "epoll" event method
web_1 | 2023/10/29 16:33:50 [notice] 6#6: nginx/1.25.2
web_1 | 2023/10/29 16:33:50 [notice] 6#6: built by gcc 12.2.1 20220924 (Alpine 12.2.1_git20220924-r10)
web_1 | 2023/10/29 16:33:50 [notice] 6#6: OS: Linux 6.2.0-35-generic
web_1 | 2023/10/29 16:33:50 [notice] 6#6: getrlimit(RLIMIT_NOFILE): 1048576:1048576
web_1 | 2023/10/29 16:33:50 [notice] 6#6: start worker processes
web_1 | 2023/10/29 16:33:50 [notice] 6#6: start worker process 29
web_1 | 2023/10/29 16:33:50 [notice] 6#6: start worker process 30
web_1 | 2023/10/29 16:33:50 [notice] 6#6: start worker process 31
web_1 | 2023/10/29 16:33:50 [notice] 6#6: start worker process 32
snowflake
Attaching to snowflake_proxy_1, snowflake_app_proxy_1, snowflake_web_1, snowflake_tor_server_1
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | Validating token: b1248859ed3a …
app_proxy_1 | [HPM] Upgrading to WebSocket
app_proxy_1 | [HPM] Client disconnected
app_proxy_1 | Validating token: 95df57d7ff23 …
app_proxy_1 | Validating token: 95df57d7ff23 …
app_proxy_1 | Validating token: 95df57d7ff23 …
app_proxy_1 | Validating token: 95df57d7ff23 …
app_proxy_1 | [HPM] Upgrading to WebSocket
app_proxy_1 | [HPM] Client disconnected
web_1 | 2023/11/01 12:13:50 10.21.0.5:36402 200 GET /
web_1 | 2023/11/01 12:13:50 10.21.0.5:36424 200 GET /auth_token.js
web_1 | 2023/11/01 12:13:50 10.21.0.5:36428 200 GET /js/gotty.js
web_1 | 2023/11/01 12:13:50 10.21.0.5:36412 200 GET /js/hterm.js
web_1 | 2023/11/01 12:13:50 New client connected: 10.21.0.5:36430
web_1 | 2023/11/01 12:13:50 Command is running for client 10.21.0.5:36430 with PID 21 (args="-c tail -n 10000 -f /snowflake/snowflake.log | grep \"Traffic Relayed\""), connections: 1
web_1 | 2023/11/01 12:13:50 10.21.0.5:36430 101 GET /ws
web_1 | 2023/11/01 12:14:03 websocket: close 1001
web_1 | 2023/11/01 12:14:03 Command exited for: 10.21.0.5:36430
web_1 | 2023/11/01 12:14:03 Connection closed: 10.21.0.5:36430, connections: 0
proxy_1 | 2023/11/01 11:45:17 error sending answer to client through broker: error sending answer to broker: remote returned status code 400
proxy_1 | 2023/11/01 11:50:33 sdp offer successfully received.
proxy_1 | 2023/11/01 11:50:33 Generating answer…
proxy_1 | 2023/11/01 11:50:38 error sending answer to client through broker: error sending answer to broker: remote returned status code 400
proxy_1 | 2023/11/01 11:54:23 sdp offer successfully received.
proxy_1 | 2023/11/01 11:54:23 Generating answer…
proxy_1 | 2023/11/01 11:54:28 error sending answer to client through broker: error sending answer to broker: remote returned status code 400
proxy_1 | 2023/11/01 12:12:29 sdp offer successfully received.
proxy_1 | 2023/11/01 12:12:29 Generating answer…
proxy_1 | 2023/11/01 12:12:34 error sending answer to client through broker: error sending answer to broker: remote returned status code 400
thunderhub
Attaching to thunderhub_tor_server_1, thunderhub_app_proxy_1, thunderhub_web_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://thunderhub_web_1:3000
app_proxy_1 | Waiting for thunderhub_web_1:3000 to open…
app_proxy_1 | ThunderHub is now ready…
app_proxy_1 | Listening on port: 3000
web_1 | {
web_1 | message: 'UnableToConnectToAnyNode',
web_1 | level: 'error',
web_1 | timestamp: '2023-10-29T16:33:57.812Z'
web_1 | }
web_1 | {
web_1 | level: 'error',
web_1 | message: 'Initiating subscriptions failed: ',
web_1 | timestamp: '2023-10-29T16:33:57.812Z'
web_1 | }
==== Result ====
==== END =====
Oh, the channel.backup file I used was from around Aug 14th, 2023. I think I had done an rsync backup the day before, so that's where that .backup file came from. It's 2K in size, so I figure it had most of the channel info.
The wallet reset procedure seems to be working... my on-chain transactions are being restored. The next trick is to get back to restoring channels, or at least closing them so I can get back the sats there.
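For reference, that channel-restore step can also be driven from the command line with LND's static channel backup (SCB) tooling. A minimal sketch, assuming the backup file has been copied to a path visible inside the lightning_lnd_1 container (the path below is illustrative):

docker exec -it lightning_lnd_1 lncli restorechanbackup --multi_file /data/.lnd/channel.backup
# Note: restoring an SCB does not reopen channels; it asks the remote peers
# to force-close, and the funds return on-chain once the timelocks expire.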
Great to hear.
I'll wait for an update from you, but I'll also send this to the team and see if we have encrypted backups to send through.
It seems to have restored my on-chain balance. No channel restore, though.
At this point I am confident that my on-chain funds have been restored, but the 14 or so channels I had are not there. Interestingly, Amboss still lists my channels, and my node hasn't updated there for 40 days, so that's the last time it was running properly. I guess we can try to find a channel backup that will work for a restore. The file I have is from the backup I ran before things broke; I've tried restoring from it, but it hasn't worked.
Hey mate, just DM'd you two backups to try.
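Before restoring either file, each can be sanity-checked against the current wallet first. A minimal sketch (file path illustrative):

docker exec -it lightning_lnd_1 lncli verifychanbackup --multi_file /data/.lnd/channel-aug14.backup
# Errors out if the file is corrupt or was made under a different seed;
# a clean, empty response means the backup at least decrypts and parses.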
I used one of the channel backup files and restored my Lightning wallet from the command line. I had 14 channels; most of the off-chain funds have been returned on-chain, and 4 channels are still closing. Whew, good! I'm waiting for the final ones to close, then I'll start working on liquidity again. Thanks, Smolgrrr
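For anyone tracking the same process, the remaining force-closes and the returning funds can be watched from the CLI. A minimal sketch:

docker exec -it lightning_lnd_1 lncli pendingchannels
# Lists waiting-close and force-closing channels, with blocks left to maturity.
docker exec -it lightning_lnd_1 lncli walletbalance
# Shows confirmed vs unconfirmed on-chain balance as the sweeps land.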
I have a similar problem. After my node crashed I had to set up a new one. I had a channel backup file from pretty much right before the crash. I used the 'sed' command as instructed, but my on-chain balance is only a fraction of what it was before recovery. Channels are 0 and the Lightning wallet is 0. Not sure what else I can do. Do I just have to wait, or is there anything else I can try?
I also used the Lightning wallet for receiving sats, so it wasn't just the channel balance on there.
Pretty expensive endeavour if all the channels are now closing as well.
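Worth checking before assuming funds are gone: after a seed restore, LND rescans the chain in the background, and the on-chain balance keeps growing until that scan finishes. A minimal sketch for checking progress (the command is available in LND 0.10+):

docker exec -it lightning_lnd_1 lncli getrecoveryinfo
# {"recovery_mode": true, "recovery_finished": false, "progress": 0.45}
# means the rescan is still running; the balance isn't final until
# progress reaches 1.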
I have the same problem. My backup files are quite old and don't pick up the on-chain balance, and I've got an LND channel that hasn't closed for close to 10 weeks now.
I would really like some guidance on how to recover my BTC.
As @dizco mentioned, this is quite an expensive endeavour.
Please help. It's been almost a week and I still have three channels waiting to close. On top of that, my on-chain balance shows as pending, so I can't do anything with it. I think my days of running a node are over. This was a very expensive mistake.