Node stuck syncing, few/0 Network Connections

I’m having trouble with the initial sync of my new node. I successfully configured a Raspberry Pi 4 with a Crucial MX500 1000GB SSD running Umbrel 13.2. The initial sync reached 79.6% and has been stuck there for 3 days (block 625,156 of 687,985 on the screen). My “Network Connections” count fluctuates between 0 and 3, but is usually 0 or 1. I initially set up the node on WiFi, but as a troubleshooting step I have now connected it directly via ethernet cable. I’ve also tried reflashing the microSD card, but ended up back in the same place (since I did not change the data on the SSD). I’ve rebooted the Pi several times.
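(A side note on the numbers: the on-screen percentage is bitcoind’s verification-progress estimate, which is weighted by transaction count rather than being a simple block ratio, so it lags the raw block count — later blocks are much denser. A quick sanity check on the figures above:)

```shell
# Raw block ratio for the numbers reported (625,156 of 687,985 blocks).
# Verification progress (79.6%) is lower because recent blocks carry far
# more transactions and therefore more of the validation work.
awk 'BEGIN { printf "%.1f%%\n", 625156 / 687985 * 100 }'
# prints 90.9%
```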

I have read through all the forums and message boards I can find, to no avail. The debug script log appears to show problems connecting to Tor (based on my novice interpretation).

https://umbrel-paste.vercel.app/e2d82cfa8539310a1da3b015a24f1c16

I can access the node through the Tor Browser, Safari on Mac, and Firefox on PC. Strangely, though, the password sometimes does not work: I retype it carefully several times and it just won’t log in, while other times it logs in with no problem. Probably user error, but I have not been able to log in again from my iPhone or iPad after doing so successfully several times. I wonder if there are somehow multiple passwords, since I attempted to change the password via SSH (though I believe that failed).

Likely unrelated, but I also picked an auto-generated (complicated) password when I set up the account. Due to all the problems I’ve been having and so many logins, I tried to change it to something simpler temporarily, but the “Change Password” function in Settings does not seem to work (I can’t click the “Change Password” button after entering the existing and new passwords).

I’m at a loss and not sure what to do other than reformat the SSD and start all over. Thanks for the help.

Shot in the dark, but have you tried setting a static IP address yet? I had issues with mine going offline every 24 hours like clockwork after the 0.3.12 update, but the static IP has helped with this so far. It also addressed a behavior where I’d only have 3-5 Lightning peers; that’s now in the 8-10 peer range.
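In case it helps, here’s roughly what that looks like on a dhcpcd-based Pi image (the addresses below are examples for a typical 192.168.1.x LAN, not your actual values — and a DHCP reservation on the router achieves the same thing):

```
# Hypothetical /etc/dhcpcd.conf addition -- pins the Pi to one LAN address.
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```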

Hope this helps.

I tried setting a static IP and nothing changed. Still stuck syncing at the exact same spot. Going on a week now.

This shows up occasionally (but not every time I run the debug script). Bad SSD? Or a corrupt data file?

bitcoin | 2021-06-21T20:00:38Z LevelDB read failure: Corruption: block checksum mismatch: /data/.bitcoin/chainstate/427627.ldb
bitcoin | 2021-06-21T20:00:38Z Fatal LevelDB error: Corruption: block checksum mismatch: /data/.bitcoin/chainstate/427627.ldb
bitcoin | 2021-06-21T20:00:38Z You can use -debug=leveldb to get more complete diagnostic messages
bitcoin | 2021-06-21T20:00:38Z Error: Error reading from database, shutting down.
bitcoin | Error: Error reading from database, shutting down.
bitcoin | 2021-06-21T20:00:38Z Error reading from database: Fatal LevelDB error: Corruption: block checksum mismatch: /data/.bitcoin/chainstate/427627.ldb
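A checksum mismatch in chainstate/ usually points to either failing storage or a corrupted copy of the UTXO database. If the SSD’s SMART data looks healthy, bitcoind can rebuild the chainstate from the blocks already on disk instead of redownloading everything. A sketch, assuming you can reach the node’s bitcoin.conf (its exact location under Umbrel is an assumption here):

```
# Hypothetical one-off addition to bitcoin.conf -- equivalent to starting
# bitcoind with -reindex-chainstate. Rebuilds the UTXO set from the local
# block files (takes hours, but no re-download). Remove after a clean start.
reindex-chainstate=1
```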

I just reran the debug script and did not get the Fatal LevelDB error:
Attaching to bitcoin
bitcoin | 2021-06-21T20:10:33Z Config file arg: zmqpubrawblock="tcp://0.0.0.0:28332"
bitcoin | 2021-06-21T20:10:33Z Config file arg: zmqpubrawtx="tcp://0.0.0.0:28333"
bitcoin | 2021-06-21T20:10:33Z Command-line arg: zmqpubrawblock="tcp://0.0.0.0:28332"
bitcoin | 2021-06-21T20:10:33Z Command-line arg: zmqpubrawtx="tcp://0.0.0.0:28333"
bitcoin | 2021-06-21T20:10:33Z Using at most 125 automatic connections (1048576 file descriptors available)
bitcoin | 2021-06-21T20:10:33Z Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements
bitcoin | 2021-06-21T20:10:33Z Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements
bitcoin | 2021-06-21T20:10:33Z Script verification uses 3 additional threads
bitcoin | 2021-06-21T20:10:33Z scheduler thread start
bitcoin | 2021-06-21T20:10:33Z HTTP: creating work queue of depth 16
bitcoin | 2021-06-21T20:10:33Z Using random cookie authentication.
bitcoin | 2021-06-21T20:10:33Z Generated RPC authentication cookie /data/.bitcoin/.cookie
bitcoin | 2021-06-21T20:10:33Z Using rpcauth authentication.
bitcoin | 2021-06-21T20:10:33Z HTTP: starting 4 worker threads
bitcoin | 2021-06-21T20:10:33Z Using wallet directory /data/.bitcoin
bitcoin | 2021-06-21T20:10:33Z init message: Verifying wallet(s)…
bitcoin | 2021-06-21T20:10:33Z init message: Loading banlist…
bitcoin | 2021-06-21T20:10:33Z SetNetworkActive: true
bitcoin | 2021-06-21T20:10:33Z Using /16 prefix for IP bucketing
bitcoin | 2021-06-21T20:10:33Z Cache configuration:
bitcoin | 2021-06-21T20:10:33Z * Using 2.0 MiB for block index database
bitcoin | 2021-06-21T20:10:33Z * Using 24.8 MiB for transaction index database
bitcoin | 2021-06-21T20:10:33Z * Using 21.7 MiB for basic block filter index database
bitcoin | 2021-06-21T20:10:33Z * Using 8.0 MiB for chain state database
bitcoin | 2021-06-21T20:10:33Z * Using 143.6 MiB for in-memory UTXO set (plus up to 286.1 MiB of unused mempool space)
bitcoin | 2021-06-21T20:10:33Z init message: Loading block index…
bitcoin | 2021-06-21T20:10:33Z Switching active chainstate to Chainstate [ibd] @ height -1 (null)
bitcoin | 2021-06-21T20:10:33Z Opening LevelDB in /data/.bitcoin/blocks/index
bitcoin | 2021-06-21T20:10:34Z Opened LevelDB successfully
bitcoin | 2021-06-21T20:10:34Z Using obfuscation key for /data/.bitcoin/blocks/index: 0000000000000000


And the Tor log usually says something like this:
Attaching to tor
tor | Jun 21 19:55:20.000 [notice] Bootstrapped 85% (ap_conn_done): Connected to a relay to build circuits
tor | Jun 21 19:55:20.000 [notice] Bootstrapped 89% (ap_handshake): Finishing handshake with a relay to build circuits
tor | Jun 21 19:55:20.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
tor | Jun 21 19:55:20.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
tor | Jun 21 19:55:21.000 [notice] Bootstrapped 100% (done): Done
tor | Jun 21 19:55:28.000 [warn] Guard TorOverBridge ($F709A054E8E003A108A59A2AD727360B02ACA276) is failing an extremely large amount of circuits. This could indicate a route manipulation attack, extreme network overload, or a bug. Success counts are 3/151. Use counts are 1/1. 3 circuits completed, 0 were unusable, 0 collapsed, and 0 timed out. For reference, your timeout cutoff is 60 seconds.
tor | Jun 21 19:55:28.000 [notice] Guard TorOverBridge ($F709A054E8E003A108A59A2AD727360B02ACA276) is failing more circuits than usual. Most likely this means the Tor network is overloaded. Success counts are 101/180. Use counts are 82/82. 101 circuits completed, 0 were unusable, 0 collapsed, and 0 timed out. For reference, your timeout cutoff is 60 seconds.
tor | Jun 21 19:55:28.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60s after 18 timeouts and 142 buildtimes.
tor | Jun 21 19:55:39.000 [warn] Guard TorOverBridge ($F709A054E8E003A108A59A2AD727360B02ACA276) is failing a very large amount of circuits. Most likely this means the Tor network is overloaded, but it could also mean an attack against you or potentially the guard itself. Success counts are 148/299. Use counts are 109/109. 148 circuits completed, 0 were unusable, 0 collapsed, and 0 timed out. For reference, your timeout cutoff is 60 seconds.
tor | Jun 21 19:55:46.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60s after 18 timeouts and 247 buildtimes.
tor | Jun 21 19:56:03.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor | Jun 21 19:56:10.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $08CE3DBFDAA27DB6C044A677AF68D7235C2AFC85~DigiGesTor4e4 [1vwKVjDxpr5w8yPeR391Rh7RWo3Zfzl1wEYqJ/EBv2c] at 195.176.3.20. Retrying on a new circuit.
tor | Jun 21 19:56:10.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $08CE3DBFDAA27DB6C044A677AF68D7235C2AFC85~DigiGesTor4e4 [1vwKVjDxpr5w8yPeR391Rh7RWo3Zfzl1wEYqJ/EBv2c] at 195.176.3.20. Retrying on a new circuit.
tor | Jun 21 19:56:10.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $08CE3DBFDAA27DB6C044A677AF68D7235C2AFC85~DigiGesTor4e4 [1vwKVjDxpr5w8yPeR391Rh7RWo3Zfzl1wEYqJ/EBv2c] at 195.176.3.20. Retrying on a new circuit.
tor | Jun 21 19:56:11.000 [notice] New control connection opened from 10.21.21.9.
tor | Jun 21 19:56:14.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor | Jun 21 19:56:25.000 [notice] Have tried resolving or connecting to address ‘[scrubbed]’ at 3 different places. Giving up.
tor | Jun 21 19:56:25.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $7BE4E70CFFE53C480C46655F91C13D70A97EFF0B~181DusExitRelay [FVDHZE+kzfjglzpEfsHS7hnnPodb7ajeQxMy4hbXsNQ] at 213.202.216.189. Retrying on a new circuit.
tor | Jun 21 19:56:25.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $7BE4E70CFFE53C480C46655F91C13D70A97EFF0B~181DusExitRelay [FVDHZE+kzfjglzpEfsHS7hnnPodb7ajeQxMy4hbXsNQ] at 213.202.216.189. Retrying on a new circuit.
tor | Jun 21 19:56:40.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $623CCCC1A1370700DD03046A85D953D35CAB5C21~FriendlyExitNode2 [CLsfsK1TE8KASOdJRo3MGGjuqI3kDaUhgdKP+Tfp+rU] at 209.141.54.195. Retrying on a new circuit.
tor | Jun 21 19:56:40.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $623CCCC1A1370700DD03046A85D953D35CAB5C21~FriendlyExitNode2 [CLsfsK1TE8KASOdJRo3MGGjuqI3kDaUhgdKP+Tfp+rU] at 209.141.54.195. Retrying on a new circuit.
tor | Jun 21 19:56:55.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $4625F385CA5364CDD63C791893973B0CAD49C4E0~HoustonTexas4Torcom [NH+BK4yA7HUQuGZpBbQyf3AnF/XscIsh7nHMpJKJ0k8] at 144.172.118.4. Retrying on a new circuit.
tor | Jun 21 19:56:55.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $4625F385CA5364CDD63C791893973B0CAD49C4E0~HoustonTexas4Torcom [NH+BK4yA7HUQuGZpBbQyf3AnF/XscIsh7nHMpJKJ0k8] at 144.172.118.4. Retrying on a new circuit.
tor | Jun 21 19:57:15.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $E9F5BA6C38A43293BA2725523B970B54E7BCAD94~China [iP2HvqJWtNbEqELShaz9q35l1jRegHvlHMuV03vjV/g] at 91.219.236.228. Retrying on a new circuit.
tor | Jun 21 19:57:15.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $E9F5BA6C38A43293BA2725523B970B54E7BCAD94~China [iP2HvqJWtNbEqELShaz9q35l1jRegHvlHMuV03vjV/g] at 91.219.236.228. Retrying on a new circuit.
tor | Jun 21 19:57:31.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $14EF5E383B0163C926E3C2DFD9D126FE4ABA4057~azechartorrelay1 [41wXZnEseXrvTVa68qbNQ6mYW4uts0YWFAkTa4fvsY4] at 209.141.38.113. Retrying on a new circuit.
tor | Jun 21 19:57:31.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $14EF5E383B0163C926E3C2DFD9D126FE4ABA4057~azechartorrelay1 [41wXZnEseXrvTVa68qbNQ6mYW4uts0YWFAkTa4fvsY4] at 209.141.38.113. Retrying on a new circuit.
tor | Jun 21 19:57:31.000 [notice] Tried for 124 seconds to get a connection to [scrubbed]:8333. Giving up.
tor | Jun 21 19:57:31.000 [notice] Tried for 124 seconds to get a connection to [scrubbed]:8333. Giving up.
tor | Jun 21 20:00:41.000 [notice] We tried for 15 seconds to connect to ‘[scrubbed]’ using exit $7327876AE79C997DFE311A7B15B4FA875736BBD1~F3Netze [LqeU711tRrsCBctUvVOrCsmTgfp/mAkMj59IiH+B9Xs] at 185.220.100.255. Retrying on a new circuit.
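Given all those failing circuits and the repeated “Giving up” on port 8333 (Bitcoin’s P2P port), one way to tell whether Tor itself is the bottleneck is to temporarily let bitcoind dial clearnet peers as well — a privacy trade-off, for diagnosis only. A sketch, assuming the node’s bitcoin.conf pins bitcoind to Tor with onlynet=onion (worth verifying against your own config before touching it):

```
# Hypothetical diagnostic tweak to bitcoin.conf: commenting out the
# Tor-only restriction lets bitcoind also make direct IPv4/IPv6
# connections, so peers should appear even while Tor circuits keep failing.
# onlynet=onion
```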

I also tried plugging my Umbrel node into another network (a friend’s house) to see if my own network was somehow blocking updates. I got the same behavior there: I could log in to the Umbrel node, but it was stuck syncing exactly as on my network.

I’m really at a loss. I guess I need to reformat the SSD and just start over.

Oh well… I wiped the SSD and started over. It’s downloading block headers now. I still only have 1 peer connection most of the time, so it’s going very slowly. Something definitely seems wrong with the 0.3.12 update, as lots of users are having syncing problems. At this rate it’ll probably take months to sync the node. At least this experience taught me not to trust this node with any serious funds (minimal only), as it is still clearly beta software (as they disclose in the setup process). I love the UI, though, and hope the team at Umbrel gets things working and more stable!

Hi, try this command to prune Docker. Hope this helps!

sudo systemctl stop umbrel-startup.service && docker system prune --force --all && sudo systemctl start umbrel-startup.service