Fool-proof Core Lightning Backup and Restore to a new machine?

I did some research, and I understand from the CLN docs here that saving the private key (via xxd) and the channels db is not “rocket science” but requires surgical precision.

I need confirmation in the steps below:

  1. stop the Core Lightning app (unclear how, I suppose via shell command)
  2. back up the private key and the channel db from the bitcoin directory at app-data/core-lightning/data/lightning/ (see the sketch after this list)
  3. build and run the new node, install Bitcoin Core, and wait for it to reach 100% sync
  4. install Core-Lightning on the new server
  5. stop it (again, via shell command?)
  6. restore keys and replace the db in the app-data/core-lightning/data/lightning/ folder
  7. restart the CLN app (or reboot the machine)
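
To make steps 2 and 6 concrete, here is a rough sketch of the copy commands I have in mind, assuming the data directory from step 2 (whether the files sit directly there or in a bitcoin/ network subfolder is part of what I'd like confirmed) and an external drive mounted at /mnt/backup:

# on the old machine, with the app already stopped (see question 1 below)
CLN_DIR=~/umbrel/app-data/core-lightning/data/lightning   # assumed path
sudo cp "$CLN_DIR"/hsm_secret "$CLN_DIR"/lightningd.sqlite3 /mnt/backup/

# on the new server, with Core Lightning installed and stopped again
sudo cp /mnt/backup/hsm_secret /mnt/backup/lightningd.sqlite3 "$CLN_DIR"/
sudo chown root:root "$CLN_DIR"/hsm_secret "$CLN_DIR"/lightningd.sqlite3
sudo chmod 0400 "$CLN_DIR"/hsm_secret    # keep the key readable by root only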

What is unclear:

  1. how do I properly stop Core Lightning so that no routed transactions “sneak through” right after I complete my manual backup?
  2. are there any other files located elsewhere that require a restore, such as different RPC parameters for Bitcoin Core, or any of the Tor services running under Docker Compose?

Thank you =)

Hi, stopping Core Lightning (or any other umbrel app) is pretty easy. You have two options:

  1. via web interface: right click on the app icon and hit “stop”
  2. via terminal (in your case the app-id is core-lightning):
umbreld client apps.stop.mutate --appId <app-id>
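
Either way, it is worth double-checking that the app is really down before copying any files. Umbrel apps are plain Docker Compose projects, so a command along these lines should return nothing once the app has stopped (the relevant containers are the ones whose names start with core-lightning):

docker ps --filter name=core-lightning --format '{{.Names}}: {{.Status}}'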

Thank you - actually there’s no right-click menu, at least on Firefox, so I operated via shell.

It seemed to work for the first 10 minutes, but then it got stuck in a reboot loop, with Karen triggered:

core-lightning

Attaching to core-lightning_lightningd_1, core-lightning_c-lightning-rest_1, core-lightning_app_proxy_1, core-lightning_app_1, core-lightning_tor_1
c-lightning-rest_1  |     at Socket.<anonymous> (/usr/src/app/lightning-client-js.js:80:23)
c-lightning-rest_1  |     at Socket.emit (node:events:513:28)
c-lightning-rest_1  |     at emitErrorNT (node:internal/streams/destroy:157:8)
c-lightning-rest_1  |     at emitErrorCloseNT (node:internal/streams/destroy:122:3)
c-lightning-rest_1  |     at processTicksAndRejections (node:internal/process/task_queues:83:21) {
c-lightning-rest_1  |   errno: -111,
c-lightning-rest_1  |   code: 'ECONNREFUSED',
c-lightning-rest_1  |   syscall: 'connect',
c-lightning-rest_1  |   address: '/root/.lightning/bitcoin/lightning-rpc'
c-lightning-rest_1  | }
app_1               | 2024/07/02 10:18:24 socat[351956] E connect(5, AF=1 "/root/.lightning/bitcoin/lightning-rpc", 40): Connection refused
app_1               | 
app_1               | Waiting for lightningd
app_1               | 2024/07/02 10:18:24 socat[351961] E connect(5, AF=1 "/root/.lightning/bitcoin/lightning-rpc", 40): Connection refused
app_1               | 
app_1               | Waiting for lightningd
app_1               | 2024/07/02 10:18:25 socat[351967] E connect(5, AF=1 "/root/.lightning/bitcoin/lightning-rpc", 40): Connection refused
app_1               | 
app_1               | Waiting for lightningd
app_1               | 2024/07/02 10:18:26 socat[351973] E connect(5, AF=1 "/root/.lightning/bitcoin/lightning-rpc", 40): Connection refused
app_proxy_1         | yarn run v1.22.19
app_proxy_1         | $ node ./bin/www
app_proxy_1         | [HPM] Proxy created: /  -> http://10.21.21.94:2103
app_proxy_1         | Waiting for 10.21.21.94:2103 to open...
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: ccan/ccan/io/io.c:59 (next_plan) 0x55c74d00a24b
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: ccan/ccan/io/io.c:407 (do_plan) 0x55c74d00a6d2
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: ccan/ccan/io/io.c:417 (io_ready) 0x55c74d00a76b
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: ccan/ccan/io/poll.c:453 (io_loop) 0x55c74d00c05a
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: gossipd/gossipd.c:684 (main) 0x55c74ceba05f
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: (null):0 ((null)) 0x7f2d37d98d09
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: (null):0 ((null)) 0x55c74ceb6d99
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** gossipd: backtrace: (null):0 ((null)) 0xffffffffffffffff
lightningd_1        | 2024-07-01T19:52:34.795Z **BROKEN** connectd: STATUS_FAIL_GOSSIP_IO: gossipd exited?
lightningd_1        | lightningd: connectd failed (exit status 242), exiting.
tor_1               | Jul 02 04:17:21.000 [notice] No circuits are opened. Relaxed timeout for circuit 317 (a Measuring circuit timeout 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [7 similar message(s) suppressed in last 6840 seconds]
tor_1               | Jul 02 06:51:19.000 [notice] No circuits are opened. Relaxed timeout for circuit 376 (a Hidden service: Uploading HS descriptor 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [2 similar message(s) suppressed in last 9240 seconds]
tor_1               | Jul 02 07:52:13.000 [notice] Heartbeat: Tor's uptime is 12:00 hours, with 9 circuits open. I've sent 6.65 MB and received 18.40 MB. I've received 0 connections on IPv4 and 0 on IPv6. I've made 6 connections with IPv4 and 0 with IPv6.
tor_1               | Jul 02 07:52:13.000 [notice] While bootstrapping, fetched this many bytes: 646096 (consensus network-status fetch); 14101 (authority cert fetch); 10952505 (microdescriptor fetch)
tor_1               | Jul 02 07:52:13.000 [notice] While not bootstrapping, fetched this many bytes: 101426 (consensus network-status fetch); 3579 (authority cert fetch); 874941 (microdescriptor fetch)
tor_1               | Jul 02 08:30:18.000 [warn] Received http status code 404 ("Not found") from server 208.115.216.54:9002 while fetching "/tor/keys/fp/D586D18309DED4CD6D57C18FDB97EFA96D330566+EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97".
tor_1               | Jul 02 08:31:20.000 [notice] No circuits are opened. Relaxed timeout for circuit 431 (a Measuring circuit timeout 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [5 similar message(s) suppressed in last 6060 seconds]
tor_1               | Jul 02 10:14:18.000 [warn] Received http status code 404 ("Not found") from server 208.115.216.54:9002 while fetching "/tor/keys/fp/D586D18309DED4CD6D57C18FDB97EFA96D330566+EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97".
tor_1               | Jul 02 10:15:22.000 [notice] No circuits are opened. Relaxed timeout for circuit 502 (a Measuring circuit timeout 4-hop circuit in state doing handshakes with channel state open) to 60000ms. However, it appears the circuit has timed out anyway. [8 similar message(s) suppressed in last 6300 seconds]

Bitcoin seems to be working:

bitcoind_1     | 2024-07-02T09:54:56Z New outbound-full-relay v2 peer connected: version: 70016, blocks=850361, peer=245
bitcoind_1     | 2024-07-02T10:07:43Z New block-relay-only v2 peer connected: version: 70016, blocks=850361, peer=249
bitcoind_1     | 2024-07-02T10:12:22Z New outbound-full-relay v1 peer connected: version: 70016, blocks=850361, peer=252
bitcoind_1     | 2024-07-02T10:15:25Z Saw new header hash=000000000000000000002f3a9dd003942f90efeb54cd57e2c151adfc0138aea1 height=850362
bitcoind_1     | 2024-07-02T10:15:26Z UpdateTip: new best=000000000000000000002f3a9dd003942f90efeb54cd57e2c151adfc0138aea1 height=850362 version=0x2648e000 log2_work=95.015939 tx=1033766477 date='2024-07-02T10:14:57Z' progress=1.000000 cache=110.6MiB(912813txo)
bitcoind_1     | 2024-07-02T10:15:31Z Saw new header hash=0000000000000000000093f9ada7182df4fbdc4e705b16dc19126ba9a11eec90 height=850363
bitcoind_1     | 2024-07-02T10:15:31Z UpdateTip: new best=0000000000000000000093f9ada7182df4fbdc4e705b16dc19126ba9a11eec90 height=850363 version=0x26dca000 log2_work=95.015952 tx=1033770900 date='2024-07-02T10:15:27Z' progress=1.000000 cache=111.1MiB(917884txo)
bitcoind_1     | 2024-07-02T10:16:50Z Saw new header hash=00000000000000000000447a85698c9d35e8731b9335d72a0929cbe3e0d821a2 height=850364
bitcoind_1     | 2024-07-02T10:16:50Z Saw new cmpctblock header hash=00000000000000000000447a85698c9d35e8731b9335d72a0929cbe3e0d821a2 peer=0
bitcoind_1     | 2024-07-02T10:16:51Z UpdateTip: new best=00000000000000000000447a85698c9d35e8731b9335d72a0929cbe3e0d821a2 height=850364 version=0x29752000 log2_work=95.015965 tx=1033777937 date='2024-07-02T10:16:10Z' progress=1.000000 cache=111.1MiB(918833txo)

Update: stopped core-lightning via shell, removed the gossip file, and restarted.
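
For anyone hitting the same gossipd crash: the gossip_store file is only a cache of network gossip, so removing it is safe and lightningd rebuilds it on the next start. The commands were roughly the following (the path is an assumption, adjust it to wherever your data directory actually lives):

umbreld client apps.stop.mutate --appId core-lightning
sudo rm ~/umbrel/app-data/core-lightning/data/lightning/bitcoin/gossip_store   # assumed path
# then started the app again from the Umbrel web UI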

Hi,
I was running core-lightning on my home NAS server within a Docker container … I made a file-system backup of my lightning node before I lost all its data to a logical storage disk failure. I had approx. 80 funded channels plus some funds in the Lightning wallet that I would be glad to recover. This is what I have done so far, but without success:

  • I installed Core Lightning on Umbrel 1.2.2
  • I stopped the Core Lightning app
  • I checked the directory; it contains the following files:
-rw-r--r-- 1 root   root    36864 Aug  7 15:32 accounts.sqlite3
-rw------- 1 root   root      241 Aug  7 15:32 ca-key.pem
-rw-r--r-- 1 root   root      554 Aug  7 15:32 ca.pem
-rw------- 1 root   root      241 Aug  7 15:32 client-key.pem
-rw-r--r-- 1 root   root      530 Aug  7 15:32 client.pem
-r-------- 1 root   root       57 Aug  7 15:32 emergency.recover
-rw------- 1 root   root        1 Aug  7 15:32 gossip_store
-r-------- 1 root   root       32 Aug  7 15:32 hsm_secret
srw------- 1 root   root        0 Aug  7 15:32 lightning-rpc
-rw-r--r-- 1 root   root   667648 Aug  7 15:58 lightningd.sqlite3
-rw------- 1 root   root      241 Aug  7 15:32 server-key.pem
-rw-r--r-- 1 root   root      530 Aug  7 15:32 server.pem
  • Following the guidelines from this thread, I copied the following files from my file-system backup into this directory (roughly as in the sketch after this list):
  hsm_secret
  lightningd.sqlite3
  emergency.recover
  • I started the Core Lightning app from the web UI
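
For completeness, the copy itself looked roughly like this, trying to match the ownership and modes shown in the listing above (the host-side path is my best guess at where that directory lives, and /path/to/backup stands for wherever the file-system backup is mounted):

CLN_NET_DIR=/home/umbrel/umbrel/app-data/core-lightning/data/lightningd/bitcoin   # assumed host path, adjust to your install
sudo cp /path/to/backup/hsm_secret /path/to/backup/lightningd.sqlite3 /path/to/backup/emergency.recover "$CLN_NET_DIR"/
sudo chown root:root "$CLN_NET_DIR"/hsm_secret "$CLN_NET_DIR"/lightningd.sqlite3 "$CLN_NET_DIR"/emergency.recover
sudo chmod 0400 "$CLN_NET_DIR"/hsm_secret "$CLN_NET_DIR"/emergency.recover   # modes as in the listing above
sudo chmod 0644 "$CLN_NET_DIR"/lightningd.sqlite3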

Unfortunately, these steps do not make the Core Lightning app start… I’m getting errors in the log file:

core-lightning_app_1                | Waiting for lightningd
core-lightning_app_1               | 2024/08/02 12:46:53 socat[646] E connect(5, AF=1 "/root/.lightning/bitcoin/lightning-rpc", 40): Connection refused
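
The app container only shows socat waiting for the lightning-rpc socket, so I suppose the real failure reason is inside the lightningd container itself; I assume I can pull that log directly with something like (container name taken from the log earlier in this thread):

docker logs --tail 200 core-lightning_lightningd_1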

I’m thinking there are other things that may need to be done… I found a config file:

/home/umbrel/umbrel/app-data/core-lightning/data/lightningd/.commando-env
LIGHTNING_PUBKEY="03f084e...............630"
LIGHTNING_RUNE="LJV43bcm......................pb24b"

1/ Obviously, the pubkey of my previous lightning node was different. Should I update the pubkey value in this .commando-env config file before starting the core-lightning app?

2/ I found other files in umbrel related to the core-lightning app, like:

/home/umbrel/umbrel/app-data/core-lightning/data/c-lightning-rest/certs/
total 24
drwxr-xr-x 2 umbrel umbrel 4096 Aug 7 15:32 .
drwxr-xr-x 2 umbrel umbrel 4096 Aug 7 12:02 ..
-rw-r--r-- 1 root root  114 Aug 7 15:32 access.macaroon
-rw-r--r-- 1 root root 1310 Aug 7 15:32 certificate.pem
-rw------- 1 root root 1704 Aug 7 15:32 key.pem
-rw-r--r-- 1 root root  128 Aug 7 15:32 rootKey.key

What should I do with these files, can I keep them as they are?

3/ The public IP address of the current lightning node hosted on umbrel is different compared to the old LN node… is this a problem for the open channels?

4/ What should I do with the other files from the old lightning node’s filesystem backup? Should I also copy them over the existing files, or not? For example:

-rw-r--r-- 1 root   root    36864 Aug  7 15:32 accounts.sqlite3
-rw------- 1 root   root      241 Aug  7 15:32 ca-key.pem
-rw-r--r-- 1 root   root      554 Aug  7 15:32 ca.pem
-rw------- 1 root   root      241 Aug  7 15:32 client-key.pem
-rw-r--r-- 1 root   root      530 Aug  7 15:32 client.pem
-rw------- 1 root   root      241 Aug  7 15:32 server-key.pem
-rw-r--r-- 1 root   root      530 Aug  7 15:32 server.pem

Thank you so much for your help.