Recovering a lightning wallet without a backup file

After a recent update, my bitcoin and lightning nodes wouldn’t open. The apps would load to about 60% and then freeze.

I deleted and reinstalled both. The bitcoin node is fully synced, and I’ve reinstalled my lightning node using my seed phrase. At this point I’m back up and running, but the sats in my lightning wallet are missing (the sats in my on-chain wallet are there).

Prior to my node stalling out, I did not have a backup of my payment channels, and I’m not sure how to proceed with the recovery process without that file, because the automatic backup files are not showing up either.

Is there anything I can do?

Here is a guide that may help you.
To restore your Lightning node and payment channels, follow these steps:

  1. Install/re-install the Lightning Node app
  2. Select “Recover your previous node” when prompted
  3. Enter your 24 secret words
  4. Select an automatic encrypted backup of your channels or upload your own backup file, and that’s it. Here is a demo of the Automatic Channel Recovery tool.

By the way, I am facing an issue where no backup files show up at step 4, and I’m waiting for further advice from Umbrel.

Thanks! I’ve followed those steps and am at the same spot as you… backup files don’t show up, and I didn’t have my own files downloaded.

So at the last step, do you see this screen below?


yep, that’s where I’m stuck and that’s exactly what I’ve been seeing for the past week or so.

Same here, since last week. I am waiting for support from @smolgrrr on this issue.

Good luck!

Do you have a link to your other thread/topic? I’d like to follow along, because I have a not-insignificant amount of sats in my lightning wallet and I really do not want to lose them.

I also have the same problem. I am following this thread.

In the best case, if you had open channels and didn’t save any backup locally, I would hope that Umbrel has those backups stored somewhere for you. That is what I expect from following the official troubleshooting guide (Official Umbrel Troubleshooting Guide and FAQ).

However, if for some reason they don’t have those backups, I can only cross my fingers and wait for my peers to force close the channels in the latest valid state.

One could check whether those channels are still open by loading the seed phrase into wallet software and tracing that wallet’s UTXOs in a blockchain explorer. I intend to do this soon.

Update:
Oh! I’ve just discovered that the 24 words do not satisfy the BIP39 checksum.
How can I extract the master key for the LND wallet? There isn’t anything like it in the user interface.
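For anyone who wants to verify the checksum claim above themselves, here is a small standard-library-only Python sketch of the BIP39 checksum test. The function name and the wordlist-as-a-list interface are my own; you would pass in the real 2048-word English wordlist from the BIP39 repository:

```python
import hashlib

def bip39_checksum_ok(words, wordlist):
    """Check the BIP39 checksum of a 12/15/18/21/24-word mnemonic.

    words    -- the mnemonic, as a list of words
    wordlist -- the 2048-word BIP39 wordlist, in order
    """
    if len(wordlist) != 2048 or len(words) not in (12, 15, 18, 21, 24):
        return False
    index = {w: i for i, w in enumerate(wordlist)}
    if any(w not in index for w in words):
        return False  # a word outside the list can never validate
    # Concatenate the 11-bit wordlist index of each word.
    bits = "".join(format(index[w], "011b") for w in words)
    cs_len = len(words) // 3  # 24 words carry 8 checksum bits
    entropy = int(bits[:-cs_len], 2).to_bytes((len(bits) - cs_len) // 8, "big")
    # The checksum is the first cs_len bits of SHA256(entropy).
    expected = format(hashlib.sha256(entropy).digest()[0], "08b")[:cs_len]
    return bits[-cs_len:] == expected
```

An aezeed mnemonic (see the next update) uses a different encoding and its own checksum, so a seed like Umbrel’s would be expected to fail this test, which matches what I saw.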

Update2:
The seed phrase encodes the master key in the aezeed format. Here is a guide to decoding it (Restoring your Umbrel node wallet – Telegraph).

I suggest you contact the Umbrel support team, since you have a significant amount of sats stuck there.
Last year I experienced a similar situation, with 5 million sats stuck. Umbrel later released the automatic backup method. I was able to restore a little over 3 million sats and lost around 1.5 million.

I am also thinking of reinstalling my Lightning node again to see if the automatic backup files will show up.

Hello. Today I tried again to recover the channels using the automatic backup feature, and this time the backup dialog showed up, allowing me to select the latest backup file (from Umbrel’s cloud) from before my node went down. I invite you to try as well. Good luck!

Oh cool, congratulations!

Hi @butterknuckles

Sorry to hear that.

If you could please follow these steps and share the output with me, I will forward the backup to you.

  1. Open a terminal window on your computer. On macOS, you can open the Terminal app that’s installed by default on every Mac. On Windows, you can open Command Prompt or the PowerShell app.
  2. Type in the following command* and press the Enter key: ssh -t umbrel@umbrel.local
     *In the command above, you can replace umbrel.local with the local IP of your Umbrel if you prefer. If you are using PowerShell on Windows 10, you may need to run ssh umbrel@umbrel instead of the command given above.
  3. You will then be prompted to enter your password. This is the same password you use to access your Umbrel through a browser. The default password before you have created an account on your Umbrel is moneyprintergobrrr on Raspberry Pi, and umbrel on Umbrel Home. You will not be able to see your password as you type it into the terminal. Once you have typed in your password, press the Enter key.
  4. Run cat ~/umbrel/logs/karen.log | grep 'backup ID' | tail -n 1

Thanks for the help here, @smolgrrr. Running the command cat ~/umbrel/logs/karen.log | grep 'backup ID' | tail -n 1 returns the following: "no such file or directory"

Interesting. Instead, can you please go to the settings dashboard and press START under Troubleshooting.

You should be able to download the log and share it here. I can try to find the backup from there. Thanks

thanks again for helping me look into this. Here are the troubleshooting logs:

=====================
= Umbrel debug info =
=====================

Umbrel version

0.5.4

Flashed OS version

v0.5.3

Raspberry Pi Model

Revision : d03115
Serial : 10000000bef849d3
Model : Raspberry Pi 4 Model B Rev 1.5

Firmware

Dec 1 2021 15:01:54
Copyright (c) 2012 Broadcom
version 71bd3109023a0c8575585ba87cbb374d2eeb038f (clean) (release) (start)

Temperature

temp=51.6’C

Throttling

throttled=0x0

Memory usage

          total        used        free      shared  buff/cache   available

Mem: 7.8G 2.0G 139M 5.0M 5.7G 5.7G
Swap: 4.1G 1.9G 2.2G

total: 25.8%
bitcoin: 11.2%
lightning: 8.6%
electrs: 3.6%
system: 2%
thunderhub: 0.4%

Memory monitor logs

2023-02-02 01:47:58 Memory monitor running!
2023-02-02 02:58:48 Memory monitor running!
2023-06-13 15:53:44 Memory monitor running!
2419 ? S 0:22 bash ./scripts/memory-monitor
Memory monitor is already running
2023-07-13 00:03:39 Warning memory usage at 91%
2023-09-15 13:24:56 Memory monitor running!
2023-09-19 15:11:26 Memory monitor running!
2023-09-26 16:22:43 Memory monitor running!
2023-09-27 02:43:11 Memory monitor running!

Filesystem information

Filesystem Size Used Avail Use% Mounted on
/dev/root 235G 3.2G 223G 2% /
/dev/sda1 916G 655G 215G 76% /home/umbrel/umbrel

Startup service logs

Sep 27 02:43:44 umbrel umbrel startup[1025]: Executing hook: /home/umbrel/umbrel/app-data/bitcoin/hooks/pre-start
Sep 27 02:43:44 umbrel umbrel startup[1025]: Executing hook: /home/umbrel/umbrel/app-data/lightning/hooks/pre-start
Sep 27 02:43:46 umbrel umbrel startup[1025]: The APP_MEMPOOL_PORT variable is not set. Defaulting to a blank string.
Sep 27 02:43:46 umbrel umbrel startup[1025]: The APP_MEMPOOL_HIDDEN_SERVICE variable is not set. Defaulting to a blank string.
Sep 27 02:43:47 umbrel umbrel startup[1025]: Creating lightning_app_proxy_1 …
Sep 27 02:43:47 umbrel umbrel startup[1025]: Creating lightning_lnd_1 …
Sep 27 02:43:47 umbrel umbrel startup[1025]: Creating lightning_app_1 …
Sep 27 02:43:47 umbrel umbrel startup[1025]: Creating lightning_tor_1 …
Sep 27 02:43:47 umbrel umbrel startup[1025]: Creating bitcoin_app_proxy_1 …
Sep 27 02:43:47 umbrel umbrel startup[1025]: Creating bitcoin_i2pd_daemon_1 …
Sep 27 02:43:48 umbrel umbrel startup[1025]: Creating bitcoin_bitcoind_1 …
Sep 27 02:43:48 umbrel umbrel startup[1025]: Creating bitcoin_tor_1 …
Sep 27 02:43:58 umbrel umbrel startup[1025]: Creating lightning_app_1 … done
Sep 27 02:43:59 umbrel umbrel startup[1025]: Creating lightning_tor_1 … done
Sep 27 02:44:01 umbrel umbrel startup[1025]: Creating bitcoin_tor_1 … done
Sep 27 02:44:02 umbrel umbrel startup[1025]: Creating lightning_lnd_1 … done
Sep 27 02:44:04 umbrel umbrel startup[1025]: Creating bitcoin_i2pd_daemon_1 … done
Sep 27 02:44:08 umbrel umbrel startup[1025]: Creating bitcoin_app_proxy_1 … done
Sep 27 02:44:08 umbrel umbrel startup[1025]: Creating lightning_app_proxy_1 … done
Sep 27 02:44:09 umbrel umbrel startup[1025]: Creating bitcoin_bitcoind_1 … done
Sep 27 02:44:09 umbrel umbrel startup[1025]: Creating bitcoin_server_1 …
Sep 27 02:44:24 umbrel umbrel startup[1025]: Creating bitcoin_server_1 … done
Sep 27 02:44:24 umbrel umbrel startup[1025]: Umbrel is now accessible at
Sep 27 02:44:24 umbrel umbrel startup[1025]: http://umbrel.local
Sep 27 02:44:24 umbrel umbrel startup[1025]: http://192.168.0.121
Sep 27 02:44:24 umbrel systemd[1]: Started Umbrel Startup Service.
Sep 28 12:59:01 umbrel passwd[28384]: pam_unix(passwd:chauthtok): password changed for umbrel
Sep 29 02:25:15 umbrel passwd[17504]: pam_unix(passwd:chauthtok): password changed for umbrel
Oct 01 15:36:26 umbrel passwd[27284]: pam_unix(passwd:chauthtok): password changed for umbrel
Oct 02 13:59:38 umbrel passwd[20405]: pam_unix(passwd:chauthtok): password changed for umbrel

External storage service logs

Sep 27 02:42:50 umbrel external storage mounter[541]: Running external storage mount script…
Sep 27 02:42:50 umbrel external storage mounter[541]: Found device "JMicron "
Sep 27 02:42:50 umbrel external storage mounter[541]: Blacklisting USB device IDs against UAS driver…
Sep 27 02:42:50 umbrel external storage mounter[541]: Rebinding USB drivers…
Sep 27 02:42:50 umbrel external storage mounter[541]: Checking USB devices are back…
Sep 27 02:42:50 umbrel external storage mounter[541]: Waiting for USB devices…
Sep 27 02:42:51 umbrel external storage mounter[541]: Waiting for USB devices…
Sep 27 02:42:52 umbrel external storage mounter[541]: Waiting for USB devices…
Sep 27 02:42:53 umbrel external storage mounter[541]: Checking if the device is ext4…
Sep 27 02:42:53 umbrel external storage mounter[541]: Yes, it is ext4
Sep 27 02:42:53 umbrel external storage mounter[541]: Checking filesystem for corruption…
Sep 27 02:42:53 umbrel external storage mounter[541]: e2fsck 1.44.5 (15-Dec-2018)
Sep 27 02:42:54 umbrel external storage mounter[541]: umbrel: clean, 194921/61054976 files, 164019108/244190208 blocks
Sep 27 02:42:54 umbrel external storage mounter[541]: Mounting partition…
Sep 27 02:42:54 umbrel external storage mounter[541]: Checking if device contains an Umbrel install…
Sep 27 02:42:54 umbrel external storage mounter[541]: Yes, it contains an Umbrel install
Sep 27 02:42:54 umbrel external storage mounter[541]: Bind mounting external storage over local Umbrel installation…
Sep 27 02:42:54 umbrel external storage mounter[541]: Bind mounting external storage over local Docker data dir…
Sep 27 02:42:54 umbrel external storage mounter[541]: Bind mounting external storage to /swap
Sep 27 02:42:54 umbrel external storage mounter[541]: Bind mounting SD card root at /sd-card…
Sep 27 02:42:54 umbrel external storage mounter[541]: Checking Umbrel root is now on external storage…
Sep 27 02:42:55 umbrel external storage mounter[541]: Checking /var/lib/docker is now on external storage…
Sep 27 02:42:55 umbrel external storage mounter[541]: Checking /swap is now on external storage…
Sep 27 02:42:55 umbrel external storage mounter[541]: Setting up swapfile
Sep 27 02:43:02 umbrel external storage mounter[541]: Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
Sep 27 02:43:02 umbrel external storage mounter[541]: no label, UUID=747063f5-e68f-49d6-ace4-7ec0a2db2ff7
Sep 27 02:43:02 umbrel external storage mounter[541]: Checking SD Card root is bind mounted at /sd-root…
Sep 27 02:43:02 umbrel external storage mounter[541]: Starting external drive mount monitor…
Sep 27 02:43:02 umbrel external storage mounter[541]: Mount script completed successfully!
Sep 27 02:43:02 umbrel systemd[1]: Started External Storage Mounter.

External storage SD card update service logs

– Logs begin at Wed 2023-09-27 02:42:44 UTC, end at Thu 2023-10-05 13:43:01 UTC. –
Sep 27 02:43:08 umbrel systemd[1]: Starting External Storage SDcard Updater…
Sep 27 02:43:09 umbrel external storage updater[976]: Checking if SD card Umbrel is newer than external storage…
Sep 27 02:43:09 umbrel external storage updater[976]: No, SD version is not newer, exiting.
Sep 27 02:43:09 umbrel systemd[1]: Started External Storage SDcard Updater.

Karen logs

Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
100 10429 100 146 100 10283 89 6285 0:00:01 0:00:01 --:–:-- 6370
100 10429 100 146 100 10283 89 6277 0:00:01 0:00:01 --:–:-- 6363
{“message”:“Successfully uploaded backup 1696465495303.tar.gz.pgp for backup ID b39c73333fd133c24163e97876403f37963491e525be11bc3ff1e620855d265f”}

====== Backup success =======

Got signal: backup
karen is getting triggered!
Deriving keys…
Creating backup…
Adding random padding…
1+0 records in
1+0 records out
7265 bytes (7.3 kB, 7.1 KiB) copied, 0.000307682 s, 23.6 MB/s
Creating encrypted tarball…
backup/
backup/.padding
backup/channel.backup
Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
100 8085 100 146 100 7939 88 4829 0:00:01 0:00:01 --:–:-- 4914
100 8085 100 146 100 7939 88 4826 0:00:01 0:00:01 --:–:-- 4914
{“message”:“Successfully uploaded backup 1696479628341.tar.gz.pgp for backup ID b39c73333fd133c24163e97876403f37963491e525be11bc3ff1e620855d265f”}

====== Backup success =======

Got signal: backup
karen is getting triggered!
Deriving keys…
Creating backup…
Adding random padding…
1+0 records in
1+0 records out
2821 bytes (2.8 kB, 2.8 KiB) copied, 0.000235793 s, 12.0 MB/s
Creating encrypted tarball…
backup/
backup/.padding
backup/channel.backup
Uploading backup…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
0 0 0 0 0 0 0 0 --:–:-- --:–:-- --:–:-- 0
100 3463 0 0 100 3463 0 2302 0:00:01 0:00:01 --:–:-- 2300
100 3609 100 146 100 3463 54 1286 0:00:02 0:00:02 --:–:-- 1340
100 3609 100 146 100 3463 54 1286 0:00:02 0:00:02 --:–:-- 1340
{“message”:“Successfully uploaded backup 1696480951158.tar.gz.pgp for backup ID b39c73333fd133c24163e97876403f37963491e525be11bc3ff1e620855d265f”}

====== Backup success =======

Got signal: debug
karen is getting triggered!

Docker containers

NAMES STATUS
thunderhub_app_proxy_1 Up 41 hours
thunderhub_web_1 Up 41 hours
electrs_app_1 Up 3 days
electrs_electrs_1 Up 3 days
electrs_tor_1 Up 3 days
electrs_app_proxy_1 Up 3 days
lightning_tor_1 Up 8 days
lightning_app_1 Up 8 days
lightning_lnd_1 Up 8 days
lightning_app_proxy_1 Up 8 days
bitcoin_server_1 Up 8 days
bitcoin_bitcoind_1 Up 8 days
bitcoin_tor_1 Up 8 days
bitcoin_i2pd_daemon_1 Up 8 days
bitcoin_app_proxy_1 Up 8 days
nginx Up 8 days
manager Up 8 days
tor_proxy Up 8 days
dashboard Up 8 days
auth Up 8 days

Umbrel logs

Attaching to manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:45 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:46 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:47 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:48 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:49 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:50 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:51 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:52 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:53 GMT] “GET /v1/system/debug-result HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager
manager | ::ffff:10.21.21.2 - - [Thu, 05 Oct 2023 13:43:54 GMT] “GET /v1/system/is-sd-card-failing HTTP/1.0” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
manager |
manager | umbrel-manager

Tor Proxy logs

Attaching to tor_proxy
tor_proxy | Oct 05 02:43:40.000 [notice] Heartbeat: Tor’s uptime is 8 days 0:00 hours, with 20 circuits open. I’ve sent 336.46 MB and received 553.26 MB. I’ve received 42486 connections on IPv4 and 0 on IPv6. I’ve made 35 connections with IPv4 and 0 with IPv6.
tor_proxy | Oct 05 02:43:40.000 [notice] While bootstrapping, fetched this many bytes: 705729 (consensus network-status fetch); 14103 (authority cert fetch); 12521464 (microdescriptor fetch)
tor_proxy | Oct 05 02:43:40.000 [notice] While not bootstrapping, fetched this many bytes: 1605605 (consensus network-status fetch); 99344 (authority cert fetch); 5190103 (microdescriptor fetch)
tor_proxy | Oct 05 02:43:40.000 [notice] Average packaged cell fullness: 58.621%. TLS write overhead: 3%
tor_proxy | Oct 05 08:43:40.000 [notice] Heartbeat: Tor’s uptime is 8 days 6:00 hours, with 12 circuits open. I’ve sent 343.08 MB and received 559.90 MB. I’ve received 43808 connections on IPv4 and 0 on IPv6. I’ve made 35 connections with IPv4 and 0 with IPv6.
tor_proxy | Oct 05 08:43:40.000 [notice] While bootstrapping, fetched this many bytes: 705729 (consensus network-status fetch); 14103 (authority cert fetch); 12521464 (microdescriptor fetch)
tor_proxy | Oct 05 08:43:40.000 [notice] While not bootstrapping, fetched this many bytes: 1668783 (consensus network-status fetch); 99344 (authority cert fetch); 5257465 (microdescriptor fetch)
tor_proxy | Oct 05 08:43:40.000 [notice] Average packaged cell fullness: 58.827%. TLS write overhead: 3%

App logs

bitcoin

Attaching to bitcoin_server_1, bitcoin_bitcoind_1, bitcoin_tor_1, bitcoin_i2pd_daemon_1, bitcoin_app_proxy_1
bitcoind_1 | 2023-10-05T13:16:33Z UpdateTip: new best=00000000000000000001be6ad82531117998b9000a058819adabd0866cfe2987 height=810752 version=0x2001e000 log2_work=94.457787 tx=903223580 date=‘2023-10-05T13:15:50Z’ progress=1.000000 cache=98.9MiB(595953txo)
bitcoind_1 | 2023-10-05T13:30:57Z Saw new header hash=00000000000000000002c97085d960385763ab3fc7f633f39f65e2ff0aef7c62 height=810753
bitcoind_1 | 2023-10-05T13:30:57Z [net] Saw new cmpctblock header hash=00000000000000000002c97085d960385763ab3fc7f633f39f65e2ff0aef7c62 peer=1372
bitcoind_1 | 2023-10-05T13:30:57Z UpdateTip: new best=00000000000000000002c97085d960385763ab3fc7f633f39f65e2ff0aef7c62 height=810753 version=0x2000e000 log2_work=94.457800 tx=903227163 date=‘2023-10-05T13:30:30Z’ progress=1.000000 cache=100.9MiB(611603txo)
bitcoind_1 | 2023-10-05T13:35:49Z Saw new header hash=00000000000000000000b7842cdf5e5c7e4d95ce853722f8023e0e9f8256e18c height=810754
bitcoind_1 | 2023-10-05T13:35:49Z [net] Saw new cmpctblock header hash=00000000000000000000b7842cdf5e5c7e4d95ce853722f8023e0e9f8256e18c peer=1372
bitcoind_1 | 2023-10-05T13:35:50Z UpdateTip: new best=00000000000000000000b7842cdf5e5c7e4d95ce853722f8023e0e9f8256e18c height=810754 version=0x20600000 log2_work=94.457813 tx=903228834 date=‘2023-10-05T13:35:16Z’ progress=1.000000 cache=101.3MiB(614783txo)
bitcoind_1 | 2023-10-05T13:35:54Z Saw new header hash=00000000000000000004bf5be53a23efd5f2c7372dad07b10e53328325780dbd height=810755
bitcoind_1 | 2023-10-05T13:35:54Z [net] Saw new cmpctblock header hash=00000000000000000004bf5be53a23efd5f2c7372dad07b10e53328325780dbd peer=1582
bitcoind_1 | 2023-10-05T13:35:54Z UpdateTip: new best=00000000000000000004bf5be53a23efd5f2c7372dad07b10e53328325780dbd height=810755 version=0x28954000 log2_work=94.457826 tx=903228835 date=‘2023-10-05T13:35:48Z’ progress=1.000000 cache=101.3MiB(614803txo)
i2pd_daemon_1 | 13:18:45@5/error - Tunnel: Tunnel with id 3524419745 already exists
i2pd_daemon_1 | 13:18:46@5/error - Tunnel: Tunnel with id 3312621742 already exists
i2pd_daemon_1 | 13:26:40@944/error - SSU2: RelayIntro unknown router to introduce
i2pd_daemon_1 | 13:27:37@5/error - Tunnel: Tunnel with id 2985842992 already exists
i2pd_daemon_1 | 13:37:03@5/error - Tunnel: Tunnel with id 2940978240 already exists
i2pd_daemon_1 | 13:40:44@983/error - Destination: Can’t publish LeaseSet. Destination is not ready
i2pd_daemon_1 | 13:41:20@983/error - Garlic: Failed to decrypt message
i2pd_daemon_1 | 13:41:34@983/error - SAM: Stream read error: Operation canceled
i2pd_daemon_1 | 13:41:34@983/error - SAM: Read error: Operation canceled
i2pd_daemon_1 | 13:41:34@983/error - SAM: Read error: End of file
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
server_1 | umbrel-middleware
server_1 | ::ffff:10.21.0.3 - - [Tue, 03 Oct 2023 15:59:12 GMT] “GET /v1/bitcoind/info/blocks?from=810483&to=810486 HTTP/1.1” 200 1023 “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
server_1 |
server_1 | umbrel-middleware
server_1 | ::ffff:10.21.0.3 - - [Tue, 03 Oct 2023 15:59:12 GMT] “GET /v1/bitcoind/info/sync HTTP/1.1” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
server_1 |
server_1 | umbrel-middleware
server_1 | ::ffff:10.21.0.3 - - [Tue, 03 Oct 2023 15:59:12 GMT] “GET /v1/bitcoind/info/blocks?from=810343&to=810486 HTTP/1.1” 200 36586 “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
server_1 |
server_1 | umbrel-middleware
tor_1 | Oct 05 11:24:18.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 11:46:22.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 11:53:13.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 12:07:53.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 12:11:20.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 12:16:49.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 12:24:57.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 13:01:42.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor_1 | Oct 05 13:32:00.000 [warn] Received http status code 404 (“Not found”) from server 166.84.6.10:9001 while fetching “/tor/keys/fp/EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97”.
tor_1 | Oct 05 13:32:06.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60000ms after 18 timeouts and 1000 buildtimes.

electrs

Attaching to electrs_app_1, electrs_electrs_1, electrs_tor_1, electrs_app_proxy_1
electrs_1 | [2023-10-05T13:05:06.360Z INFO electrs::index] indexing 1 blocks: [810751…810751]
electrs_1 | [2023-10-05T13:05:06.564Z INFO electrs::chain] chain updated: tip=000000000000000000016a846a6499d729a4296b3a43e11d0f1b126e4c0330ab, height=810751
electrs_1 | [2023-10-05T13:16:33.905Z INFO electrs::index] indexing 1 blocks: [810752…810752]
electrs_1 | [2023-10-05T13:16:34.014Z INFO electrs::chain] chain updated: tip=00000000000000000001be6ad82531117998b9000a058819adabd0866cfe2987, height=810752
electrs_1 | [2023-10-05T13:30:57.782Z INFO electrs::index] indexing 1 blocks: [810753…810753]
electrs_1 | [2023-10-05T13:30:57.933Z INFO electrs::chain] chain updated: tip=00000000000000000002c97085d960385763ab3fc7f633f39f65e2ff0aef7c62, height=810753
electrs_1 | [2023-10-05T13:35:50.495Z INFO electrs::index] indexing 1 blocks: [810754…810754]
electrs_1 | [2023-10-05T13:35:50.584Z INFO electrs::chain] chain updated: tip=00000000000000000000b7842cdf5e5c7e4d95ce853722f8023e0e9f8256e18c, height=810754
electrs_1 | [2023-10-05T13:35:54.477Z INFO electrs::index] indexing 1 blocks: [810755…810755]
electrs_1 | [2023-10-05T13:35:54.491Z INFO electrs::chain] chain updated: tip=00000000000000000004bf5be53a23efd5f2c7372dad07b10e53328325780dbd, height=810755
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_1 | umbrel-middleware
app_1 | ::ffff:10.21.0.5 - - [Tue, 03 Oct 2023 15:55:46 GMT] “GET /v1/electrs/syncPercent HTTP/1.1” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
app_1 |
app_1 | umbrel-middleware
app_1 | ::ffff:10.21.0.5 - - [Tue, 03 Oct 2023 15:59:46 GMT] “GET /v1/electrs/syncPercent HTTP/1.1” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
app_1 |
app_1 | umbrel-middleware
app_1 | ::ffff:10.21.0.5 - - [Tue, 03 Oct 2023 16:01:32 GMT] “GET /v1/electrs/syncPercent HTTP/1.1” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
app_1 |
app_1 | umbrel-middleware
tor_1 | Oct 04 20:22:41.000 [notice] No circuits are opened. Relaxed timeout for circuit 1737 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [2 similar message(s) suppressed in last 14400 seconds]
tor_1 | Oct 05 02:13:21.000 [notice] Heartbeat: Tor’s uptime is 2 days 12:00 hours, with 10 circuits open. I’ve sent 31.96 MB and received 44.85 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 9 connections with IPv4 and 0 with IPv6.
tor_1 | Oct 05 02:13:21.000 [notice] While bootstrapping, fetched this many bytes: 711775 (consensus network-status fetch); 14356 (authority cert fetch); 12507534 (microdescriptor fetch)
tor_1 | Oct 05 02:13:21.000 [notice] While not bootstrapping, fetched this many bytes: 517306 (consensus network-status fetch); 56768 (authority cert fetch); 1528432 (microdescriptor fetch)
tor_1 | Oct 05 05:40:44.000 [notice] No circuits are opened. Relaxed timeout for circuit 1990 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway.
tor_1 | Oct 05 08:13:21.000 [notice] Heartbeat: Tor’s uptime is 2 days 18:00 hours, with 9 circuits open. I’ve sent 34.71 MB and received 48.03 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 9 connections with IPv4 and 0 with IPv6.
tor_1 | Oct 05 08:13:21.000 [notice] While bootstrapping, fetched this many bytes: 711775 (consensus network-status fetch); 14356 (authority cert fetch); 12507534 (microdescriptor fetch)
tor_1 | Oct 05 08:13:21.000 [notice] While not bootstrapping, fetched this many bytes: 575058 (consensus network-status fetch); 58542 (authority cert fetch); 1595648 (microdescriptor fetch)

lightning

Attaching to lightning_tor_1, lightning_app_1, lightning_lnd_1, lightning_app_proxy_1
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_1 | umbrel-lightning
app_1 | ::ffff:10.21.0.2 - - [Thu, 05 Oct 2023 13:43:31 GMT] “GET /img/icon.2405747e.svg HTTP/1.1” 304 - “-” “Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Safari/605.1.15”
app_1 |
app_1 | umbrel-lightning
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
app_1 | Checking LND status…
app_1 | LND already unlocked!
tor_1 | Oct 05 02:49:17.000 [notice] While bootstrapping, fetched this many bytes: 705729 (consensus network-status fetch); 14103 (authority cert fetch); 12521464 (microdescriptor fetch)
tor_1 | Oct 05 02:49:17.000 [notice] While not bootstrapping, fetched this many bytes: 2328977 (consensus network-status fetch); 1323 (authority cert fetch); 5071546 (microdescriptor fetch)
tor_1 | Oct 05 04:27:14.000 [notice] No circuits are opened. Relaxed timeout for circuit 11069 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [1 similar message(s) suppressed in last 27420 seconds]
tor_1 | Oct 05 08:44:13.000 [notice] No circuits are opened. Relaxed timeout for circuit 11244 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [1 similar message(s) suppressed in last 15420 seconds]
tor_1 | Oct 05 08:49:17.000 [notice] Heartbeat: Tor’s uptime is 8 days 6:00 hours, with 52 circuits open. I’ve sent 159.46 MB and received 137.48 MB. I’ve received 0 connections on IPv4 and 0 on IPv6. I’ve made 27 connections with IPv4 and 0 with IPv6.
tor_1 | Oct 05 08:49:17.000 [notice] While bootstrapping, fetched this many bytes: 705729 (consensus network-status fetch); 14103 (authority cert fetch); 12521464 (microdescriptor fetch)
tor_1 | Oct 05 08:49:17.000 [notice] While not bootstrapping, fetched this many bytes: 2379710 (consensus network-status fetch); 1323 (authority cert fetch); 5136299 (microdescriptor fetch)
tor_1 | Oct 05 10:42:15.000 [notice] No circuits are opened. Relaxed timeout for circuit 11351 (a Measuring circuit timeout 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [3 similar message(s) suppressed in last 7140 seconds]
lnd_1 | 2023-10-05 13:30:58.666 [INF] UTXN: Attempting to graduate height=810753: num_kids=0, num_babies=0
lnd_1 | 2023-10-05 13:30:58.685 [INF] CRTR: Block 00000000000000000002c97085d960385763ab3fc7f633f39f65e2ff0aef7c62 (height=810753) closed 40 channels
lnd_1 | 2023-10-05 13:35:50.870 [INF] CRTR: Pruning channel graph using block 00000000000000000000b7842cdf5e5c7e4d95ce853722f8023e0e9f8256e18c (height=810754)
lnd_1 | 2023-10-05 13:35:51.037 [INF] CRTR: Block 00000000000000000000b7842cdf5e5c7e4d95ce853722f8023e0e9f8256e18c (height=810754) closed 11 channels
lnd_1 | 2023-10-05 13:35:51.043 [INF] NTFN: New block: height=810754, sha=00000000000000000000b7842cdf5e5c7e4d95ce853722f8023e0e9f8256e18c
lnd_1 | 2023-10-05 13:35:51.043 [INF] UTXN: Attempting to graduate height=810754: num_kids=0, num_babies=0
lnd_1 | 2023-10-05 13:35:54.475 [INF] CRTR: Pruning channel graph using block 00000000000000000004bf5be53a23efd5f2c7372dad07b10e53328325780dbd (height=810755)
lnd_1 | 2023-10-05 13:35:54.484 [INF] NTFN: New block: height=810755, sha=00000000000000000004bf5be53a23efd5f2c7372dad07b10e53328325780dbd
lnd_1 | 2023-10-05 13:35:54.485 [INF] UTXN: Attempting to graduate height=810755: num_kids=0, num_babies=0
lnd_1 | 2023-10-05 13:35:54.552 [INF] CRTR: Block 00000000000000000004bf5be53a23efd5f2c7372dad07b10e53328325780dbd (height=810755) closed 0 channels

thunderhub

Attaching to thunderhub_app_proxy_1, thunderhub_web_1
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | [HPM] Upgrading to WebSocket
app_proxy_1 | [HPM] Client disconnected
app_proxy_1 | Validating token: 7d8db12cac55 …
app_proxy_1 | Validating token: 7d8db12cac55 …
web_1 | {
web_1 | message: 'Client connected: 7sqs04MBp9WSaZssAAAP',
web_1 | level: 'info',
web_1 | timestamp: '2023-10-04T01:09:15.790Z'
web_1 | }
web_1 | {
web_1 | message: 'Client disconnected: 7sqs04MBp9WSaZssAAAP',
web_1 | level: 'info',
web_1 | timestamp: '2023-10-04T01:10:00.751Z'
web_1 | }

==== Result ====

The debug script did not automatically detect any issues with your Umbrel.

I have the same issue. Please let me know of any developments.

If anyone has attempted a recovery, try uninstalling and reinstalling the Lightning Node app (we've also seen that rebooting Umbrel, and rebooting your router prior to reinstalling, gets the recovery window to populate channels).

The intended behavior of Automated Recovery is demonstrated here.
It will recover your channels and close them, returning all funds on-chain.
We can then also investigate any problematic channels with offline peers, etc.

Then follow the steps from the previous post. If the backup is not populating, we can provide the channel backup file to you manually. To do that, we request the following information:

  1. An estimate of the date/time when your node went down.

  2. The time when you brought the Lightning Node back online (as close as you can), plus your backup ID.

  3. You can run this command to grab your backup ID:

    cat ~/umbrel/logs/karen.log | grep 'backup ID' | tail -n 1

Post the backup ID it prints here and we can send you the channel backup file to upload.

(Also make sure your seed phrase is only in use on one node instance at a time.)

You can reference the FAQ here on how to SSH.

The output will only be a few lines, including a long alphanumeric backup ID string; feel free to post it here along with the info requested in the steps above. Any other details also help us investigate: extenuating factors, the last time your node was online, when it went down, the last time you opened or closed any channels, or whatever issue led you to perform a disaster recovery.
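As a self-contained sketch of what that `grep`/`tail` pipeline does (the sample log lines below are hypothetical stand-ins I made up for illustration, not the real karen.log format), `grep` keeps only the lines mentioning a backup ID and `tail -n 1` prints the most recent one:

```shell
# Write two hypothetical log lines standing in for karen.log
# (the real file lives at ~/umbrel/logs/karen.log and its format may differ).
cat > /tmp/sample-karen.log <<'EOF'
2023-10-01 backup ID: aaaa1111
2023-10-05 backup ID: bbbb2222
EOF

# Same shape as the command above: filter matching lines, keep the last match.
grep 'backup ID' /tmp/sample-karen.log | tail -n 1
```

On this sample data the pipeline prints only the second line, i.e. the newest backup ID; against the real log it should likewise surface the most recently logged ID.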