How Long For Lightning Channels to Close?

My node crashed when upgrading to the new Umbrel version. After several weeks there are still 4 channels waiting to close. I tried ~/umbrel/bin/lncli pendingchannels and it shows the 4 channels "waiting to close", with nothing for remote or local balance; only the capacity is shown. The local balance (i.e., my funds) should actually be most of the channel capacity.

I tried the command ~/umbrel/bin/lncli closechannel --force (channel_point value) 1 and all I got back was:
*** Deprecation notice ***
In a future version of Umbrel, 'lncli' will be removed.
Please instead use:
./scripts/app compose lightning exec lnd lncli

Any ideas?

Hi @gotagreatdog

As per the error, you’ll need to use the updated commands now. Please see the following guide: Steps for Waiting Close Channels and Unconfirmed Transactions.

Could you please share the output of ~/umbrel/scripts/app compose lightning exec lnd lncli pendingchannels? Feel free to remove any identifiable node information.
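
For reference, the force-close syntax with the updated wrapper is otherwise the same as before: the funding txid and the output index are the two halves of the channel point, split at the colon. A sketch with placeholder values, not something to run just yet:

~/umbrel/scripts/app compose lightning exec lnd lncli closechannel --force <funding_txid> <output_index>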

Kind regards,

William H

Here is the output; I don’t mind sharing the node info.

umbrel@umbrel:~ $ ~/umbrel/scripts/app compose lightning exec lnd lncli pendingchannels
{
    "total_limbo_balance": "0",
    "pending_open_channels": [
    ],
    "pending_closing_channels": [
    ],
    "pending_force_closing_channels": [
    ],
    "waiting_close_channels": [
        {
            "channel": {
                "remote_node_pub": "0298f6074a454a1f5345cb2a7c6f9fce206cd0bf675d177cdbf0ca7508dd28852f",
                "channel_point": "aca482b98702787896393ac4b5bad24a5832be36834868058a445c3d5d6619dd:1",
                "capacity": "10000000",
                "local_balance": "0",
                "remote_balance": "0",
                "local_chan_reserve_sat": "0",
                "remote_chan_reserve_sat": "0",
                "initiator": "INITIATOR_LOCAL",
                "commitment_type": "STATIC_REMOTE_KEY",
                "num_forwarding_packages": "0",
                "chan_status_flags": "ChanStatusRestored",
                "private": true
            },
            "limbo_balance": "0",
            "commitments": {
                "local_txid": "",
                "remote_txid": "",
                "remote_pending_txid": "",
                "local_commit_fee_sat": "0",
                "remote_commit_fee_sat": "0",
                "remote_pending_commit_fee_sat": "0"
            },
            "closing_txid": ""
        },
        {
            "channel": {
                "remote_node_pub": "02b56712f65c281a1eacee0bb034699a716535a1c8ae1c912e232893d4ad27d3a7",
                "channel_point": "78f29d9603fe441ee22d0d54f0c2cf0269c27988edae42dafec679e920b4dfd4:0",
                "capacity": "2000000",
                "local_balance": "0",
                "remote_balance": "0",
                "local_chan_reserve_sat": "0",
                "remote_chan_reserve_sat": "0",
                "initiator": "INITIATOR_REMOTE",
                "commitment_type": "ANCHORS",
                "num_forwarding_packages": "0",
                "chan_status_flags": "ChanStatusLocalDataLoss|ChanStatusRestored",
                "private": true
            },
            "limbo_balance": "0",
            "commitments": {
                "local_txid": "",
                "remote_txid": "",
                "remote_pending_txid": "",
                "local_commit_fee_sat": "0",
                "remote_commit_fee_sat": "0",
                "remote_pending_commit_fee_sat": "0"
            },
            "closing_txid": ""
        },
        {
            "channel": {
                "remote_node_pub": "02defb1fcf3d8e6254abffe422327ab049848b80d2198bf41afb39d86ee70c4797",
                "channel_point": "39cd769e5c8df97e77ea2f70c70b80fa750cb7a14d0eaab46962fb2794bc03b5:1",
                "capacity": "700933",
                "local_balance": "0",
                "remote_balance": "0",
                "local_chan_reserve_sat": "0",
                "remote_chan_reserve_sat": "0",
                "initiator": "INITIATOR_REMOTE",
                "commitment_type": "ANCHORS",
                "num_forwarding_packages": "0",
                "chan_status_flags": "ChanStatusRestored",
                "private": true
            },
            "limbo_balance": "0",
            "commitments": {
                "local_txid": "",
                "remote_txid": "",
                "remote_pending_txid": "",
                "local_commit_fee_sat": "0",
                "remote_commit_fee_sat": "0",
                "remote_pending_commit_fee_sat": "0"
            },
            "closing_txid": ""
        },
        {
            "channel": {
                "remote_node_pub": "02e9046555a9665145b0dbd7f135744598418df7d61d3660659641886ef1274844",
                "channel_point": "0d9995d4136eadacf887931107a9f0cd029e497040ee3c508fbb37cd189731bd:0",
                "capacity": "20000000",
                "local_balance": "0",
                "remote_balance": "0",
                "local_chan_reserve_sat": "0",
                "remote_chan_reserve_sat": "0",
                "initiator": "INITIATOR_LOCAL",
                "commitment_type": "ANCHORS",
                "num_forwarding_packages": "0",
                "chan_status_flags": "ChanStatusRestored",
                "private": true
            },
            "limbo_balance": "0",
            "commitments": {
                "local_txid": "",
                "remote_txid": "",
                "remote_pending_txid": "",
                "local_commit_fee_sat": "0",
                "remote_commit_fee_sat": "0",
                "remote_pending_commit_fee_sat": "0"
            },
            "closing_txid": ""
        }
    ]
}

When I try to force close, for all of these channels it says:
cannot close channel with state: ChanStatusRestored

Hello William, are you seeing anything in the output I posted? Thank you for your help.

Ok, so the most likely cause of this is that the back-up that was used did not contain the last state of the channel DB before your node crashed.
Do you know exactly when your node crashed? It may be worth reinstalling the lightning app, and attempting to use a different back-up state.
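
For context: a channel restored from a static channel backup carries the ChanStatusRestored flag, and LND refuses to close it locally, because a restored node no longer holds the latest commitment transaction. The intended recovery path is that LND reconnects to each peer, the peer force-closes from its side, and your balance is swept back on-chain. If a peer doesn’t reconnect on its own, you can try nudging it manually. A sketch, with the pubkey taken from your pendingchannels output and the host/port a placeholder you’d need to look up:

~/umbrel/scripts/app compose lightning exec lnd lncli connect 0298f6074a454a1f5345cb2a7c6f9fce206cd0bf675d177cdbf0ca7508dd28852f@<host>:9735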

This guide may be helpful too, but feel free to shoot me any questions. https://www.node-recovery.com/

It crashed attempting to install 0.5.4 on Sunday morning, Aug 20. I initially reinstalled with a backup file from the previous day, which I thought was the last one before the crash. But there were also a couple of backup files from Aug 20, the day of the crash. Which backup file do you suggest I use? I have deleted/reinstalled the Lightning Node app; it is currently syncing.

Try the latest one from the 20th of Aug, I think. Let me know how that goes.

There are no channels in the last backup of the day on Aug 20. That backup would be from after I had reinstalled the node anyway. There are a number of backups from Aug 20, so do I try the one before the last one of the day, then if nothing’s there, the one before that, and so on till I get to a backup that has channels?

Hi @gotagreatdog

Yep, you can do that.
The only problem is that if you use an older back-up, LND might require you to close those channels instead of keeping them open. It’s OK: you’ll get all your funds back, but you’ll have to open your channels again.
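
For completeness: as far as I know, the Recover Channels flow in the UI drives LND’s static-channel-backup restore, which from the CLI would look roughly like this (the backup file path is illustrative):

~/umbrel/scripts/app compose lightning exec lnd lncli restorechanbackup --multi_file /data/.lnd/channel.backup

An SCB restore always ends with the channels being closed: the backup holds just enough data to ask each peer to force-close and to sweep your funds, not enough to resume operating the channels.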

But the re-install caused all my channels to close anyway, except those 4 it can’t resolve. I need a bit more guidance here.

My node crashed trying to update the Umbrel Lightning Node app on Aug 20. I had to reflash the SD card and reinstall Umbrel, which took place sometime in the afternoon of Aug 20. There were about 6 channel backups from Aug 20, the day of the crash, but I didn’t know which one to choose. I thought it would be safe to go back to the last backup from the day before, i.e., Aug 19. As I explained, this force-closed most of my channels (which was a surprise; I thought it would restore, not close, channels), leaving 4 that it can’t resolve. On your advice I then deleted and reinstalled the Umbrel Lightning Node app and selected the final backup from the day of the crash, Aug 20. This backup, however, has no channels at all.

Please confirm:

  1. Should I next try the 2nd-last backup from the day of the crash? Then, if there are no channels in that backup, the one before that, and so on, till I find those 4 channels?
  2. It seems you have to delete the Umbrel Lightning Node app and reinstall it each time to try a different backup; is that correct? When I tried Recover Channels again with a different backup without deleting/reinstalling the app, it said the backup was not found.

I can’t even get backups anymore, see attached. What now?

Ok, I can check with the team for any automatic back-ups.

Can you please generate the troubleshooting logs, found in Settings under TROUBLESHOOT > START, and then share them here. They will contain an ID for finding your encrypted back-ups.

Here it is. BTW, I wanted to mention that when entering the 2FA code, I always have to enter the code, then go back and enter the first number a second time to get it to work.

Umbrel logs

=====================
= Umbrel debug info =
=====================

Umbrel version

0.5.4

Flashed OS version

v0.5.4

Raspberry Pi Model

Revision : c03111
Serial : 10000000a359826b
Model : Raspberry Pi 4 Model B Rev 1.1

Firmware

May 9 2023 12:16:34
Copyright (c) 2012 Broadcom
version 30aa0d70ab280427ba04ebc718c81d4350b9d394 (clean) (release) (start)

Temperature

temp=53.5'C

Throttling

throttled=0x0

Memory usage

               total        used        free      shared  buff/cache   available
Mem:            3.8G        2.2G        473M        2.0M        1.1G        1.5G
Swap:           4.1G        1.1G        3.0G

total: 57.5%
bitcoin: 23.8%
lightning: 22.2%
electrs: 10.3%
system: 1.2%

Memory monitor logs

2023-08-04 13:07:37 Memory monitor running!
2023-08-11 15:45:52 Warning memory usage at 93%
2023-08-11 15:46:53 Warning memory usage at 94%
2421 ? S 0:31 bash ./scripts/memory-monitor
Memory monitor is already running
2023-08-19 15:34:19 Memory monitor running!
2023-08-19 16:02:46 Memory monitor running!
2023-08-19 17:28:40 Memory monitor running!
2023-09-07 23:35:41 Memory monitor running!
2023-09-08 04:19:27 Memory monitor running!

Filesystem information

Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/root        30G  3.6G    25G   13%  /
/dev/sda1       916G  647G   223G   75%  /home/umbrel/umbrel

Startup service logs

Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating lightning_app_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating lightning_tor_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating lightning_lnd_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating electrs_electrs_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating electrs_tor_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating lightning_app_proxy_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating electrs_app_proxy_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating bitcoin_i2pd_daemon_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating bitcoin_bitcoind_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating bitcoin_app_proxy_1 …
Sep 08 04:19:42 umbrel umbrel startup[1637]: Creating bitcoin_tor_1 …
Sep 08 04:19:44 umbrel umbrel startup[1637]: Creating lightning_tor_1 … done
Sep 08 04:19:44 umbrel umbrel startup[1637]: Creating electrs_tor_1 … done
Sep 08 04:19:44 umbrel umbrel startup[1637]: Creating lightning_app_1 … done
Sep 08 04:19:45 umbrel umbrel startup[1637]: Creating electrs_electrs_1 … done
Sep 08 04:19:45 umbrel umbrel startup[1637]: Creating electrs_app_1 …
Sep 08 04:19:45 umbrel umbrel startup[1637]: Creating lightning_app_proxy_1 … done
Sep 08 04:19:45 umbrel umbrel startup[1637]: Creating bitcoin_i2pd_daemon_1 … done
Sep 08 04:19:46 umbrel umbrel startup[1637]: Creating electrs_app_proxy_1 … done
Sep 08 04:19:46 umbrel umbrel startup[1637]: Creating bitcoin_app_proxy_1 … done
Sep 08 04:19:46 umbrel umbrel startup[1637]: Creating bitcoin_tor_1 … done
Sep 08 04:19:46 umbrel umbrel startup[1637]: Creating lightning_lnd_1 … done
Sep 08 04:19:46 umbrel umbrel startup[1637]: Creating bitcoin_bitcoind_1 … done
Sep 08 04:19:46 umbrel umbrel startup[1637]: Creating bitcoin_server_1 …
Sep 08 04:19:47 umbrel umbrel startup[1637]: Creating electrs_app_1 … done
Sep 08 04:19:48 umbrel umbrel startup[1637]: Creating bitcoin_server_1 … done
Sep 08 04:19:48 umbrel umbrel startup[1637]: Umbrel is now accessible at
Sep 08 04:19:48 umbrel umbrel startup[1637]: http://umbrel.local
Sep 08 04:19:48 umbrel umbrel startup[1637]: http://192.168.50.51
Sep 08 04:19:48 umbrel systemd[1]: Started Umbrel Startup Service.

External storage service logs

Sep 08 04:19:02 umbrel external storage mounter[479]: Blacklisting USB device IDs against UAS driver…
Sep 08 04:19:02 umbrel external storage mounter[479]: Rebinding USB drivers…
Sep 08 04:19:03 umbrel external storage mounter[479]: Checking USB devices are back…
Sep 08 04:19:03 umbrel external storage mounter[479]: Waiting for USB devices…
Sep 08 04:19:04 umbrel external storage mounter[479]: Waiting for USB devices…
Sep 08 04:19:05 umbrel external storage mounter[479]: Checking if the device is ext4…
Sep 08 04:19:05 umbrel external storage mounter[479]: Yes, it is ext4
Sep 08 04:19:05 umbrel external storage mounter[479]: Checking filesystem for corruption…
Sep 08 04:19:05 umbrel external storage mounter[479]: e2fsck 1.44.5 (15-Dec-2018)
Sep 08 04:19:05 umbrel external storage mounter[479]: umbrel: recovering journal
Sep 08 04:19:09 umbrel external storage mounter[479]: Setting free inodes count to 60804414 (was 60804712)
Sep 08 04:19:09 umbrel external storage mounter[479]: Setting free blocks count to 71046266 (was 70975492)
Sep 08 04:19:09 umbrel external storage mounter[479]: umbrel: clean, 250562/61054976 files, 173143942/244190208 blocks
Sep 08 04:19:09 umbrel external storage mounter[479]: Mounting partition…
Sep 08 04:19:09 umbrel external storage mounter[479]: Checking if device contains an Umbrel install…
Sep 08 04:19:09 umbrel external storage mounter[479]: Yes, it contains an Umbrel install
Sep 08 04:19:09 umbrel external storage mounter[479]: Bind mounting external storage over local Umbrel installation…
Sep 08 04:19:09 umbrel external storage mounter[479]: Bind mounting external storage over local Docker data dir…
Sep 08 04:19:09 umbrel external storage mounter[479]: Bind mounting external storage to /swap
Sep 08 04:19:09 umbrel external storage mounter[479]: Bind mounting SD card root at /sd-card…
Sep 08 04:19:09 umbrel external storage mounter[479]: Checking Umbrel root is now on external storage…
Sep 08 04:19:11 umbrel external storage mounter[479]: Checking /var/lib/docker is now on external storage…
Sep 08 04:19:11 umbrel external storage mounter[479]: Checking /swap is now on external storage…
Sep 08 04:19:11 umbrel external storage mounter[479]: Setting up swapfile
Sep 08 04:19:11 umbrel external storage mounter[479]: Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
Sep 08 04:19:11 umbrel external storage mounter[479]: no label, UUID=03058f3c-f974-4d48-b9b4-ac174afe93f7
Sep 08 04:19:11 umbrel external storage mounter[479]: Checking SD Card root is bind mounted at /sd-root…
Sep 08 04:19:11 umbrel external storage mounter[479]: Starting external drive mount monitor…
Sep 08 04:19:11 umbrel external storage mounter[479]: Mount script completed successfully!
Sep 08 04:19:11 umbrel systemd[1]: Started External Storage Mounter.

External storage SD card update service logs

– Logs begin at Thu 2019-02-14 10:11:58 UTC, end at Wed 2023-09-13 16:14:25 UTC. –
Sep 08 04:19:27 umbrel systemd[1]: Starting External Storage SDcard Updater…
Sep 08 04:19:27 umbrel external storage updater[1555]: Checking if SD card Umbrel is newer than external storage…
Sep 08 04:19:27 umbrel external storage updater[1555]: No, SD version is not newer, exiting.
Sep 08 04:19:27 umbrel systemd[1]: Started External Storage SDcard Updater.

Karen logs

0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
100 8699 100 146 100 8553 78 4598 0:00:01 0:00:01 --:–:-- 4676
100 8699 100 146 100 8553 78 4598 0:00:01 0:00:01 --:–:-- 4674
{"message":"Successfully uploaded backup 1694582386167.tar.gz.pgp for backup ID 7ca6258b54102ec4b77a37e341c3a93a932c1b3e1f8a1860b1bccbe6b9d63003"}

====== Backup success =======

Got signal: backup
karen is getting triggered!
Deriving keys…
Creating backup…
Adding random padding…
1+0 records in
1+0 records out
10004 bytes (10 kB, 9.8 KiB) copied, 0.000391776 s, 25.5 MB/s
Creating encrypted tarball…
backup/
backup/channel.backup
backup/.padding
Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
100 10815 100 146 100 10669 79 5779 0:00:01 0:00:01 --:–:-- 5858
100 10815 100 146 100 10669 79 5779 0:00:01 0:00:01 --:–:-- 5855
{"message":"Successfully uploaded backup 1694603489199.tar.gz.pgp for backup ID 7ca6258b54102ec4b77a37e341c3a93a932c1b3e1f8a1860b1bccbe6b9d63003"}

====== Backup success =======

Got signal: backup
karen is getting triggered!
Deriving keys…
Creating backup…
Adding random padding…
1+0 records in
1+0 records out
7288 bytes (7.3 kB, 7.1 KiB) copied, 0.000359125 s, 20.3 MB/s
Creating encrypted tarball…
backup/
backup/channel.backup
backup/.padding
Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
0 0 0 0 0 0 0 0 --:–:-- 0:00:02 --:–:-- 0
100 7961 0 0 100 7961 0 2098 0:00:03 0:00:03 --:–:-- 2098
100 8107 100 146 100 7961 34 1861 0:00:04 0:00:04 --:–:-- 1895
{"message":"Successfully uploaded backup 1694615986658.tar.gz.pgp for backup ID 7ca6258b54102ec4b77a37e341c3a93a932c1b3e1f8a1860b1bccbe6b9d63003"}

====== Backup success =======

Got signal: change-password
karen is getting triggered!
New password: Retype new password: passwd: password updated successfully
Got signal: debug
karen is getting triggered!

Docker containers

NAMES STATUS
lightning_app_1 Up 42 hours
lightning_tor_1 Up 42 hours
lightning_lnd_1 Up 42 hours
lightning_app_proxy_1 Up 42 hours
bitcoin_server_1 Up 5 days
electrs_app_1 Up 5 days
bitcoin_tor_1 Up 5 days
bitcoin_i2pd_daemon_1 Up 5 days
bitcoin_app_proxy_1 Up 5 days
bitcoin_bitcoind_1 Up 5 days
electrs_app_proxy_1 Up 5 days
electrs_tor_1 Up 5 days
electrs_electrs_1 Up 5 days
nginx Up 5 days
auth Up 5 days
tor_proxy Up 5 days
manager Up 5 days
dashboard Up 5 days

Umbrel logs

Attaching to manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:47 GMT] “GET /v1/system/storage HTTP/1.0” 200 486 “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:47 GMT] “GET /v1/system/memory HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:47 GMT] “GET /v1/system/info HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:47 GMT] “GET /v1/system/get-update HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:47 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:48 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:49 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:50 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:51 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Wed, 13 Sep 2023 16:14:52 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
manager |
manager | umbrel-manager

Tor Proxy logs

Attaching to tor_proxy
tor_proxy | Sep 13 04:42:37.000 [notice] Heartbeat: Tor’s uptime is 4 days 23:59 hours, with 9 circuits open. I’ve sent 247.28 MB and received 503.21 MB. I’ve received 136 connections on IPv4 and 0 on IPv6. I’ve made 108 connections with IPv4 and 0 with IPv6.
tor_proxy | Sep 13 04:42:37.000 [notice] While bootstrapping, fetched this many bytes: 676628 (consensus network-status fetch); 14349 (authority cert fetch); 11943491 (microdescriptor fetch)
tor_proxy | Sep 13 04:42:37.000 [notice] While not bootstrapping, fetched this many bytes: 1691948 (consensus network-status fetch); 63378 (authority cert fetch); 4145941 (microdescriptor fetch)
tor_proxy | Sep 13 04:42:37.000 [notice] Average packaged cell fullness: 46.518%. TLS write overhead: 2%
tor_proxy | Sep 13 10:42:37.000 [notice] Heartbeat: Tor’s uptime is 5 days 5:59 hours, with 9 circuits open. I’ve sent 250.29 MB and received 506.30 MB. I’ve received 137 connections on IPv4 and 0 on IPv6. I’ve made 110 connections with IPv4 and 0 with IPv6.
tor_proxy | Sep 13 10:42:37.000 [notice] While bootstrapping, fetched this many bytes: 676628 (consensus network-status fetch); 14349 (authority cert fetch); 11943491 (microdescriptor fetch)
tor_proxy | Sep 13 10:42:37.000 [notice] While not bootstrapping, fetched this many bytes: 1731697 (consensus network-status fetch); 65151 (authority cert fetch); 4358189 (microdescriptor fetch)
tor_proxy | Sep 13 10:42:37.000 [notice] Average packaged cell fullness: 46.903%. TLS write overhead: 2%

App logs

bitcoin

Attaching to bitcoin_server_1, bitcoin_tor_1, bitcoin_i2pd_daemon_1, bitcoin_app_proxy_1, bitcoin_bitcoind_1
bitcoind_1 | 2023-09-13T15:52:29Z UpdateTip: new best=000000000000000000051c467fd2d90238851f5eef1df99de10c5122de0a5d64 height=807485 version=0x201fe000 log2_work=94.415275 tx=893562596 date=‘2023-09-13T15:52:51Z’ progress=1.000000 cache=87.8MiB(515514txo)
bitcoind_1 | 2023-09-13T15:54:56Z Saw new header hash=000000000000000000034ffd34913cbdafa24799598285bf1acce49f8da97a00 height=807486
bitcoind_1 | 2023-09-13T15:54:56Z [net] Saw new cmpctblock header hash=000000000000000000034ffd34913cbdafa24799598285bf1acce49f8da97a00 peer=1572
bitcoind_1 | 2023-09-13T15:54:56Z UpdateTip: new best=000000000000000000034ffd34913cbdafa24799598285bf1acce49f8da97a00 height=807486 version=0x2544a000 log2_work=94.415288 tx=893565759 date=‘2023-09-13T15:54:29Z’ progress=1.000000 cache=88.6MiB(522446txo)
bitcoind_1 | 2023-09-13T16:05:29Z Socks5() connect to 109.250.173.160:8333 failed: general failure
bitcoind_1 | 2023-09-13T16:05:45Z Socks5() connect to 187.74.195.215:8333 failed: general failure
bitcoind_1 | 2023-09-13T16:06:16Z Socks5() connect to 2a03:b0c0:1:d0::1246:1001:8333 failed: general failure
bitcoind_1 | 2023-09-13T16:06:32Z Socks5() connect to 46.125.78.178:8333 failed: general failure
bitcoind_1 | 2023-09-13T16:11:06Z Socks5() connect to 2601:195:c580:d910::3bee:8333 failed: general failure
bitcoind_1 | 2023-09-13T16:14:56Z Socks5() connect to 2001:8003:4036:6000:38d8:8705:3dbf:6b09:8333 failed: general failure
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / → http://10.21.22.2:3005
app_proxy_1 | Waiting for 10.21.22.2:3005 to open…
app_proxy_1 | Bitcoin Node is now ready…
app_proxy_1 | Listening on port: 2100
server_1 | yarn run v1.22.18
server_1 | $ node ./bin/www
server_1 | Fri, 08 Sep 2023 04:19:54 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:33:9
server_1 | Fri, 08 Sep 2023 04:19:55 GMT morgan deprecated default format: use combined format at app.js:33:9
server_1 | Listening on port 3005
i2pd_daemon_1 | 15:30:01@552/error - Garlic: Can’t handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 15:34:31@552/error - Garlic: Can’t handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 15:39:46@135/error - Garlic: Can’t handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 15:41:41@135/error - Garlic: Can’t handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 15:48:30@144/error - Tunnels: Can’t select next hop for LxOmvshqoiWXjWC9AjV4TWpG57AmXUhbGZpTCL554Nw=
i2pd_daemon_1 | 15:48:30@144/error - Tunnels: Can’t create inbound tunnel, no peers available
i2pd_daemon_1 | 15:51:01@552/error - Garlic: Can’t handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 16:02:51@552/error - Garlic: Can’t handle ECIES-X25519-AEAD-Ratchet message
i2pd_daemon_1 | 16:08:19@144/error - Tunnel: Tunnel with id 2094136105 already exists
i2pd_daemon_1 | 16:12:01@135/error - Garlic: Can’t handle ECIES-X25519-AEAD-Ratchet message
tor_1 | Sep 13 15:13:19.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Sep 13 15:31:30.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Sep 13 15:46:28.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Sep 13 15:46:41.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Sep 13 16:05:29.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Sep 13 16:05:45.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Sep 13 16:06:32.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Sep 13 16:14:56.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.

electrs

Attaching to electrs_app_1, electrs_app_proxy_1, electrs_tor_1, electrs_electrs_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / → http://10.21.22.4:3006
app_proxy_1 | Waiting for 10.21.22.4:3006 to open…
app_proxy_1 | Electrs is now ready…
app_proxy_1 | Listening on port: 2102
tor_1 | Sep 13 04:42:46.000 [notice] Heartbeat: Tor’s uptime is 4 days 23:59 hours, with 10 circuits open. I’ve sent 61.51 MB and received 78.97 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 75 connections with IPv4 and 0 with IPv6.
tor_1 | Sep 13 04:42:46.000 [notice] While bootstrapping, fetched this many bytes: 676628 (consensus network-status fetch); 14101 (authority cert fetch); 11943491 (microdescriptor fetch)
tor_1 | Sep 13 04:42:46.000 [notice] While not bootstrapping, fetched this many bytes: 1024957 (consensus network-status fetch); 145386 (authority cert fetch); 4142797 (microdescriptor fetch)
tor_1 | Sep 13 09:32:37.000 [notice] No circuits are opened. Relaxed timeout for circuit 3811 (a Measuring circuit timeout 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
app_1 | > umbrel-electrs@1.0.1 dev:backend
app_1 | > npm run start -w umbrel-electrs-backend
app_1 |
app_1 |
app_1 | > umbrel-electrs-backend@0.1.12 start
app_1 | > node ./bin/www
app_1 |
app_1 | Fri, 08 Sep 2023 04:20:00 GMT morgan deprecated morgan(options): use morgan(“default”, options) instead at app.js:28:9
app_1 | Fri, 08 Sep 2023 04:20:00 GMT morgan deprecated default format: use combined format at app.js:28:9
app_1 | Listening on port 3006
tor_1 | Sep 13 10:42:46.000 [notice] Heartbeat: Tor’s uptime is 5 days 5:59 hours, with 9 circuits open. I’ve sent 64.38 MB and received 82.28 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 77 connections with IPv4 and 0 with IPv6.
tor_1 | Sep 13 10:42:46.000 [notice] While bootstrapping, fetched this many bytes: 676628 (consensus network-status fetch); 14101 (authority cert fetch); 11943491 (microdescriptor fetch)
tor_1 | Sep 13 10:42:46.000 [notice] While not bootstrapping, fetched this many bytes: 1061660 (consensus network-status fetch); 150705 (authority cert fetch); 4353232 (microdescriptor fetch)
tor_1 | Sep 13 14:28:36.000 [notice] No circuits are opened. Relaxed timeout for circuit 3966 (a Measuring circuit timeout 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
electrs_1 | [2023-09-13T14:35:48.351Z INFO electrs::index] indexing 1 blocks: [807482…807482]
electrs_1 | [2023-09-13T14:35:48.526Z INFO electrs::chain] chain updated: tip=000000000000000000009db86c91ac3ce9cab592e9dd067674099d65f5131f6a, height=807482
electrs_1 | [2023-09-13T15:02:26.798Z INFO electrs::index] indexing 1 blocks: [807483…807483]
electrs_1 | [2023-09-13T15:02:26.945Z INFO electrs::chain] chain updated: tip=00000000000000000002463dc3e19898fe74be93b69c1d9e71d02b5a5dbc1f24, height=807483
electrs_1 | [2023-09-13T15:11:57.762Z INFO electrs::index] indexing 1 blocks: [807484…807484]
electrs_1 | [2023-09-13T15:11:57.906Z INFO electrs::chain] chain updated: tip=000000000000000000042e661b9241089467e35cb3e7c68762178aec064870a8, height=807484
electrs_1 | [2023-09-13T15:52:39.658Z INFO electrs::index] indexing 1 blocks: [807485…807485]
electrs_1 | [2023-09-13T15:52:39.793Z INFO electrs::chain] chain updated: tip=000000000000000000051c467fd2d90238851f5eef1df99de10c5122de0a5d64, height=807485
electrs_1 | [2023-09-13T15:54:57.153Z INFO electrs::index] indexing 1 blocks: [807486…807486]
electrs_1 | [2023-09-13T15:54:57.332Z INFO electrs::chain] chain updated: tip=000000000000000000034ffd34913cbdafa24799598285bf1acce49f8da97a00, height=807486

lightning

Attaching to lightning_app_1, lightning_tor_1, lightning_lnd_1, lightning_app_proxy_1
app_1 | ::ffff:10.21.0.2 - - [Wed, 13 Sep 2023 16:14:30 GMT] “GET /v1/lnd/channel HTTP/1.1” 304 - “-” “Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36”
app_1 |
app_1 | umbrel-lightning
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
app_proxy_1 | Validating token: ab4ea7513139 …
tor_1 | Sep 13 10:02:59.000 [notice] While not bootstrapping, fetched this many bytes: 332788 (consensus network-status fetch); 1449 (authority cert fetch); 1624369 (microdescriptor fetch)
tor_1 | Sep 13 11:26:13.000 [warn] Received http status code 404 (“Not found”) from server 37.120.174.24:443 while fetching “/tor/keys/fp/EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97”.
tor_1 | Sep 13 13:13:13.000 [warn] Received http status code 404 (“Not found”) from server 37.120.174.24:443 while fetching “/tor/keys/fp/EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97”.
tor_1 | Sep 13 15:33:13.000 [warn] Received http status code 404 (“Not found”) from server 94.16.113.114:9001 while fetching “/tor/keys/fp/EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97”.
tor_1 | Sep 13 15:34:19.000 [notice] No circuits are opened. Relaxed timeout for circuit 2360 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [1 similar message(s) suppressed in last 19980 seconds]
tor_1 | Sep 13 16:02:59.000 [notice] Heartbeat: Tor’s uptime is 1 day 18:00 hours, with 20 circuits open. I’ve sent 34.71 MB and received 39.28 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 25 connections with IPv4 and 0 with IPv6.
tor_1 | Sep 13 16:02:59.000 [notice] While bootstrapping, fetched this many bytes: 681857 (consensus network-status fetch); 14352 (authority cert fetch); 12030013 (microdescriptor fetch)
tor_1 | Sep 13 16:02:59.000 [notice] While not bootstrapping, fetched this many bytes: 372204 (consensus network-status fetch); 1638 (authority cert fetch); 1721275 (microdescriptor fetch)
lnd_1 | 2023-09-13 15:52:30.111 [INF] CRTR: Pruning channel graph using block 000000000000000000051c467fd2d90238851f5eef1df99de10c5122de0a5d64 (height=807485)
lnd_1 | 2023-09-13 15:52:30.236 [INF] NTFN: New block: height=807485, sha=000000000000000000051c467fd2d90238851f5eef1df99de10c5122de0a5d64
lnd_1 | 2023-09-13 15:52:30.237 [INF] UTXN: Attempting to graduate height=807485: num_kids=0, num_babies=0
lnd_1 | 2023-09-13 15:52:36.157 [INF] CRTR: Block 000000000000000000051c467fd2d90238851f5eef1df99de10c5122de0a5d64 (height=807485) closed 0 channels
lnd_1 | 2023-09-13 15:53:08.351 [INF] CRTR: Examining channel graph for zombie channels
lnd_1 | 2023-09-13 15:53:08.445 [INF] CRTR: Pruning 0 zombie channels
lnd_1 | 2023-09-13 15:54:57.712 [INF] CRTR: Pruning channel graph using block 000000000000000000034ffd34913cbdafa24799598285bf1acce49f8da97a00 (height=807486)
lnd_1 | 2023-09-13 15:54:57.851 [INF] CRTR: Block 000000000000000000034ffd34913cbdafa24799598285bf1acce49f8da97a00 (height=807486) closed 0 channels
lnd_1 | 2023-09-13 15:54:57.918 [INF] NTFN: New block: height=807486, sha=000000000000000000034ffd34913cbdafa24799598285bf1acce49f8da97a00
lnd_1 | 2023-09-13 15:54:57.919 [INF] UTXN: Attempting to graduate height=807486: num_kids=0, num_babies=0
app_proxy_1 | Validating token: ab4ea7513139 …

==== Result ====

The debug script did not automatically detect any issues with your Umbrel.

Are you able to restore the backups from Aug 19-20?

Any progress getting the backups? I’m getting worried with no response.

Hi @gotagreatdog

Just in reference to your messages: unfortunately, the back-ups aren’t working.
The next best solution here will be to use guggero’s recovery tools: https://www.node-recovery.com/

Let me know how you go with filling out the node-recovery form, and then we can look at chantools.
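
To give you an idea of where that leads: for channels stuck in ChanStatusRestored, chantools has a triggerforceclose command that connects to the peer and asks it to force-close from its side. A sketch only, using your first channel’s details from the pendingchannels output, with the host a placeholder; double-check the chantools documentation for the current flags before running anything:

chantools triggerforceclose \
  --peer 0298f6074a454a1f5345cb2a7c6f9fce206cd0bf675d177cdbf0ca7508dd28852f@<host>:9735 \
  --channel_point aca482b98702787896393ac4b5bad24a5832be36834868058a445c3d5d6619dd:1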

Hi William,

What happened to the channel backups? When I first tried recovery there were a number of backups from Aug 20 and Aug 21; my node crashed on Sunday morning NY time, Aug 20. Where did those go? Why are they unavailable now?

Thank you,
Clint Farrell

Hello William,

Do you have the previous backup? It is this one:

{"message":"Successfully uploaded backup 1692500044648.tar.gz.pgp for backup ID 7ca6258b54102ec4b77a37e341c3a93a932c1b3e1f8a1860b1bccbe6b9d63003"}
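
(A side note on the filenames: these look like millisecond Unix timestamps; the Sep backups in the karen log above decode that way too. If so, 1692500044648 decodes to 2023-08-20 02:54:04 UTC, which is about 10:54 PM on Saturday evening, Aug 19, NY time, i.e. just before the crash.)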

Clint Farrell