My Umbrel is stuck at this update stage. Attached debug + screenshot please someone can help me?
Debug 3.txt (12.0 KB)
Hi Holyspawn,
Sorry, I don't have a solution, but we are in the same boat.
Yesterday I rushed to update and my Umbrel got stuck on "Configuring new release…":
=====================
= Umbrel debug info =
=====================

Umbrel version
0.4.4
Memory usage
total used free shared buff/cache available
Mem: 16G 1,7G 10G 13M 3,9G 13G
Swap: 2,0G 0B 2,0G

total: 10,7%
bitcoin: 3,5%
lnd: 2,2%
electrs: 1,2%
tor: 0,1%
system: %

Memory monitor logs
2021-10-24 09:38:02 Memory monitor running!
2021-10-24 09:38:05 Memory monitor running!
2021-10-24 09:38:08 Memory monitor running!
2021-10-24 09:38:12 Memory monitor running!
2021-10-24 09:38:15 Memory monitor running!
2021-10-24 09:38:18 Memory monitor running!
2021-10-24 09:38:21 Memory monitor running!
2021-10-24 09:38:24 Memory monitor running!
2021-10-24 09:38:27 Memory monitor running!
2021-10-24 09:38:30 Memory monitor running!

Filesystem information
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 2,0T 482G 1,4T 26% /
/dev/sda5 2,0T 482G 1,4T 26% /

Karen logs
karen is running in /home/umbrel/umbrel/events
(the line above is repeated ~49 more times in the log)

Docker containers
NAMES STATUS
middleware Up 21 minutes
bitcoin Up 21 minutes
nginx Up 21 minutes
lnd Up 21 minutes
manager Up 21 minutes
umbrel_app_2_tor_1 Up 21 minutes
dashboard Up 21 minutes
umbrel_app_3_tor_1 Up 21 minutes
tor Up 21 minutes
umbrel_app_tor_1 Up 21 minutes
electrs Up 21 minutes

Bitcoin Core logs
Attaching to bitcoin
bitcoin | 2021-10-24T07:18:02Z init message: Done loading
bitcoin | 2021-10-24T07:18:02Z msghand thread start
bitcoin | 2021-10-24T07:18:03Z New outbound peer connected: version: 70015, blocks=706409, peer=0 (block-relay-only)
bitcoin | 2021-10-24T07:18:04Z New outbound peer connected: version: 70015, blocks=706409, peer=2 (block-relay-only)
bitcoin | 2021-10-24T07:18:08Z New outbound peer connected: version: 70016, blocks=706409, peer=3 (outbound-full-relay)
bitcoin | 2021-10-24T07:18:32Z New outbound peer connected: version: 70016, blocks=706409, peer=5 (outbound-full-relay)
bitcoin | 2021-10-24T07:18:32Z Imported mempool transactions from disk: 1429 succeeded, 0 failed, 0 expired, 1 already there, 0 waiting for initial broadcast
bitcoin | 2021-10-24T07:18:32Z loadblk thread exit
bitcoin | 2021-10-24T07:18:35Z P2P peers available. Skipped DNS seeding.
bitcoin | 2021-10-24T07:18:35Z dnsseed thread exit
bitcoin | 2021-10-24T07:18:37Z New outbound peer connected: version: 70016, blocks=706409, peer=6 (outbound-full-relay)
bitcoin | 2021-10-24T07:18:41Z Socks5() connect to 31.208.66.28:8333 failed: connection refused
bitcoin | 2021-10-24T07:19:03Z New outbound peer connected: version: 70015, blocks=706409, peer=7 (outbound-full-relay)
bitcoin | 2021-10-24T07:19:19Z UpdateTip: new best=0000000000000000000a070a51610ed17896f3b383dd7f31c6a818b14ae723cc height=706410 version=0x20c00004 log2_work=93.135193 tx=680845281 date='2021-10-24T07:19:06Z' progress=1.000000 cache=1.8MiB(13385txo)
bitcoin | 2021-10-24T07:20:46Z New outbound peer connected: version: 70016, blocks=706410, peer=8 (outbound-full-relay)
bitcoin | 2021-10-24T07:21:22Z New outbound peer connected: version: 70016, blocks=706410, peer=9 (outbound-full-relay)
bitcoin | 2021-10-24T07:21:33Z New outbound peer connected: version: 70016, blocks=706410, peer=10 (outbound-full-relay)
bitcoin | 2021-10-24T07:21:46Z New outbound peer connected: version: 70016, blocks=706410, peer=11 (outbound-full-relay)
bitcoin | 2021-10-24T07:21:46Z New outbound peer connected: version: 70016, blocks=706410, peer=12 (block-relay-only)
bitcoin | 2021-10-24T07:21:50Z UpdateTip: new best=000000000000000000035dc2adaed0fee967160cb301a7c181c5e04c1f648963 height=706411 version=0x3fffe004 log2_work=93.135205 tx=680845750 date='2021-10-24T07:21:41Z' progress=1.000000 cache=2.3MiB(16584txo)
bitcoin | 2021-10-24T07:22:28Z New outbound peer connected: version: 70015, blocks=706411, peer=13 (block-relay-only)
bitcoin | 2021-10-24T07:25:54Z Socks5() connect to 192.42.116.26:8333 failed: connection refused
bitcoin | 2021-10-24T07:27:17Z Socks5() connect to 37.142.66.233:8333 failed: connection refused
bitcoin | 2021-10-24T07:29:03Z Socks5() connect to 188.241.82.29:8333 failed: general failure
bitcoin | 2021-10-24T07:30:52Z Socks5() connect to 2601:1c2:4e02:e6e0::db95:8333 failed: general failure
bitcoin | 2021-10-24T07:33:17Z UpdateTip: new best=00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000 height=706412 version=0x20600004 log2_work=93.135216 tx=680847156 date='2021-10-24T07:32:32Z' progress=1.000000 cache=3.1MiB(23119txo)
bitcoin | 2021-10-24T07:34:23Z New outbound peer connected: version: 70015, blocks=706412, peer=15 (block-relay-only)
bitcoin | 2021-10-24T07:35:56Z Socks5() connect to 185.132.134.58:8333 failed: general failure
bitcoin | 2021-10-24T07:35:59Z Socks5() connect to 2a01:5241:478:dd00:8070:f228:4326:d180:8333 failed: host unreachable
bitcoin | 2021-10-24T07:36:34Z Socks5() connect to ia6owbzczv5owdyhnpptuo2dfqf4ewvgvfpj4myp4shexypwktj4llqd.onion:8333 failed: host unreachable

LND logs
Attaching to lnd
lnd | 2021-10-24 07:27:02.326 [INF] CRTR: Processed channels=1 updates=75 nodes=0 in last 1m0.000347382s
lnd | 2021-10-24 07:27:34.334 [INF] DISC: Broadcasting 126 new announcements in 13 sub batches
lnd | 2021-10-24 07:28:02.325 [INF] CRTR: Processed channels=0 updates=93 nodes=2 in last 59.999551028s
lnd | 2021-10-24 07:29:02.325 [INF] CRTR: Processed channels=0 updates=71 nodes=0 in last 59.999728427s
lnd | 2021-10-24 07:29:04.334 [INF] DISC: Broadcasting 119 new announcements in 12 sub batches
lnd | 2021-10-24 07:30:02.325 [INF] CRTR: Processed channels=0 updates=64 nodes=0 in last 59.999828736s
lnd | 2021-10-24 07:30:34.334 [INF] DISC: Broadcasting 118 new announcements in 12 sub batches
lnd | 2021-10-24 07:31:02.325 [INF] CRTR: Processed channels=0 updates=101 nodes=0 in last 59.999843407s
lnd | 2021-10-24 07:31:30.989 [INF] DISC: GossipSyncer(02fde713655dacc8a68195e3925be3da27348a184bf634c223b003c60dd0ebeb5a): fetching chan anns for 500 chans
lnd | 2021-10-24 07:32:02.325 [INF] CRTR: Processed channels=0 updates=89 nodes=2 in last 59.999365561s
lnd | 2021-10-24 07:32:04.334 [INF] DISC: Broadcasting 138 new announcements in 14 sub batches
lnd | 2021-10-24 07:33:02.326 [INF] CRTR: Processed channels=0 updates=82 nodes=1 in last 1m0.000617084s
lnd | 2021-10-24 07:33:17.740 [INF] CRTR: Pruning channel graph using block 00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000 (height=706412)
lnd | 2021-10-24 07:33:17.790 [INF] NTFN: New block: height=706412, sha=00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000
lnd | 2021-10-24 07:33:17.791 [INF] UTXN: Attempting to graduate height=706412: num_kids=0, num_babies=0
lnd | 2021-10-24 07:33:17.793 [INF] NTFN: Block disconnected from main chain: height=706412, sha=00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000
lnd | 2021-10-24 07:33:17.847 [INF] CRTR: Block 00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000 (height=706412) closed 0 channels
lnd | 2021-10-24 07:33:17.850 [INF] NTFN: New block: height=706412, sha=00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000
lnd | 2021-10-24 07:33:17.850 [INF] UTXN: Attempting to graduate height=706412: num_kids=0, num_babies=0
lnd | 2021-10-24 07:33:17.902 [INF] CRTR: Pruning channel graph using block 00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000 (height=706412)
lnd | 2021-10-24 07:33:18.023 [INF] CRTR: Block 00000000000000000007ed4ecf47815a90ff0cdb8f8027c269566bf452aff000 (height=706412) closed 0 channels
lnd | 2021-10-24 07:33:34.333 [INF] DISC: Broadcasting 151 new announcements in 16 sub batches
lnd | 2021-10-24 07:34:02.326 [INF] CRTR: Processed channels=0 updates=122 nodes=0 in last 59.998729853s
lnd | 2021-10-24 07:35:02.325 [INF] CRTR: Processed channels=0 updates=95 nodes=9 in last 59.99913981s
lnd | 2021-10-24 07:35:04.334 [INF] DISC: Broadcasting 159 new announcements in 16 sub batches
lnd | 2021-10-24 07:36:02.326 [INF] CRTR: Processed channels=0 updates=80 nodes=3 in last 1m0.000478177s
lnd | 2021-10-24 07:36:34.334 [INF] DISC: Broadcasting 134 new announcements in 14 sub batches
lnd | 2021-10-24 07:37:02.326 [INF] CRTR: Processed channels=0 updates=94 nodes=1 in last 1m0.00041531s
lnd | 2021-10-24 07:38:02.325 [INF] CRTR: Processed channels=0 updates=78 nodes=3 in last 59.999181694s
lnd | 2021-10-24 07:38:04.334 [INF] DISC: Broadcasting 128 new announcements in 13 sub batches

Tor logs
Attaching to tor
tor | Oct 24 06:26:39.000 [notice] Have tried resolving or connecting to address '[scrubbed]' at 3 different places. Giving up.
tor | Oct 24 06:27:14.000 [notice] Have tried resolving or connecting to address '[scrubbed]' at 3 different places. Giving up.
tor | Oct 24 06:50:18.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.
tor | Oct 24 07:16:29.000 [notice] Catching signal TERM, exiting cleanly.
tor | Oct 24 07:17:11.047 [notice] Tor 0.4.5.7 running on Linux with Libevent 2.1.8-stable, OpenSSL 1.1.1d, Zlib 1.2.11, Liblzma N/A, Libzstd N/A and Glibc 2.28 as libc.
tor | Oct 24 07:17:11.047 [notice] Tor can't help you if you use it wrong! Learn how to be safe at Tor Project | Download
tor | Oct 24 07:17:11.048 [notice] Read configuration file "/etc/tor/torrc".
tor | Oct 24 07:17:11.049 [warn] You have a ControlPort set to accept connections from a non-local address. This means that programs not running on your computer can reconfigure your Tor. That's pretty bad, since the controller protocol isn't encrypted! Maybe you should just listen on 127.0.0.1 and use a tool like stunnel or ssh to encrypt remote connections to your control port.
tor | Oct 24 07:17:11.059 [notice] You configured a non-loopback address "10.21.21.11:9050" for SocksPort. This allows everybody on your local network to use your machine as a proxy. Make sure this is what you wanted.
tor | Oct 24 07:17:11.059 [warn] You have a ControlPort set to accept connections from a non-local address. This means that programs not running on your computer can reconfigure your Tor. That's pretty bad, since the controller protocol isn't encrypted! Maybe you should just listen on 127.0.0.1 and use a tool like stunnel or ssh to encrypt remote connections to your control port.
tor | Oct 24 07:17:11.059 [notice] Opening Socks listener on 10.21.21.11:9050
tor | Oct 24 07:17:11.060 [notice] Opened Socks listener connection (ready) on 10.21.21.11:9050
tor | Oct 24 07:17:11.060 [notice] Opening Control listener on 10.21.21.11:29051
tor | Oct 24 07:17:11.060 [notice] Opened Control listener connection (ready) on 10.21.21.11:29051
tor | Oct 24 07:17:11.000 [notice] Bootstrapped 0% (starting): Starting
tor | Oct 24 07:17:11.000 [notice] Starting with guard context "default"
tor | Oct 24 07:17:12.000 [notice] Bootstrapped 5% (conn): Connecting to a relay
tor | Oct 24 07:17:12.000 [notice] Bootstrapped 10% (conn_done): Connected to a relay
tor | Oct 24 07:17:12.000 [notice] Bootstrapped 14% (handshake): Handshaking with a relay
tor | Oct 24 07:17:13.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
tor | Oct 24 07:17:13.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
tor | Oct 24 07:17:13.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
tor | Oct 24 07:17:13.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
tor | Oct 24 07:17:14.000 [notice] Bootstrapped 100% (done): Done
tor | Oct 24 07:17:22.000 [notice] Your network connection speed appears to have changed. Resetting timeout to 60s after 18 timeouts and 1000 buildtimes.
tor | Oct 24 07:17:22.000 [notice] Guard beluga ($36CCD481B3D9D72097323A8AE69E9B63D124A0E4) is failing more circuits than usual. Most likely this means the Tor network is overloaded. Success counts are 152/217. Use counts are 85/85. 152 circuits completed, 0 were unusable, 0 collapsed, and 3 timed out. For reference, your timeout cutoff is 60 seconds.
tor | Oct 24 07:18:27.000 [notice] New control connection opened from 10.21.21.9.
tor | Oct 24 07:35:56.000 [notice] Have tried resolving or connecting to address '[scrubbed]' at 3 different places. Giving up.
tor | Oct 24 07:35:59.000 [notice] Have tried resolving or connecting to address '[scrubbed]' at 3 different places. Giving up.
tor | Oct 24 07:36:34.000 [notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.

==== Result ====
This script could not automatically detect an issue with your Umbrel.
Please copy the entire output of this script and paste it in the Umbrel Telegram group (Telegram: Contact @getumbrel) so we can help you with your problem.
It's recommended to upload the output somewhere and share a link to it. Run this script with '--upload' to automatically generate a link to share.
I will post anything I find here.
For all of you guys, please follow the steps explained in the Umbrel troubleshooting guide - how to manually update your node.
From my debug log it seems my update hadn't even started (even though I let it run all night).
So I simply cleared the contents of update-status.json:
cp ./umbrel/statuses/update-status.json ./umbrel/statuses/update-status.json.bkp
truncate -s 0 ./umbrel/statuses/update-status.json
Now at least I can reach the dashboard and back up my passphrase. And the Install now button came right back. I tried re-running the update, but this time I got "Unable to start the update process":
Thanks DarthCoin, I just tried the manual update but it failed too.
Error "An update is already in progress. Exiting now."
===========
Edit: Finally managed to install the update! I had to remove the update-in-progress file:
sudo rm ./statuses/update-in-progress
After that the manual update worked flawlessly:
Thanks!
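For anyone following along, the recovery above (back up update-status.json, empty it, remove the stale update-in-progress lock) can be sketched as one small shell routine. This is just an illustration based on the commands in this thread, not an official Umbrel tool; `clear_update_lock` is a made-up name, and it is demonstrated on a throwaway directory rather than a live node:

```shell
#!/bin/sh
# Sketch of the recovery steps from this thread (assumed file layout:
# update-status.json and update-in-progress under the "statuses" dir).
clear_update_lock() {
  dir="$1"  # path to the Umbrel "statuses" directory, e.g. ~/umbrel/statuses
  # Keep a backup of the old status before emptying it
  if [ -f "$dir/update-status.json" ]; then
    cp "$dir/update-status.json" "$dir/update-status.json.bkp"
    : > "$dir/update-status.json"   # same effect as: truncate -s 0
  fi
  # Remove the stale lock behind "An update is already in progress."
  rm -f "$dir/update-in-progress"
}

# Demonstrated on a throwaway directory standing in for ~/umbrel/statuses:
demo=$(mktemp -d)
echo '{"state":"installing"}' > "$demo/update-status.json"
touch "$demo/update-in-progress"
clear_update_lock "$demo"
ls "$demo"   # update-status.json (now empty) and its .bkp remain
```

On a real node you would call it as `clear_update_lock ~/umbrel/statuses` (with sudo if the files are root-owned), then re-run the updater.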
Darth, I did that and ran sudo rm statuses/update-in-progress since I'm stuck, but I get "rm: cannot remove './statuses/update-in-progress': No such file or directory".
This is instead the message I get if I run cd ~/umbrel && sudo ./scripts/update/update --repo getumbrel/umbrel#v0.4.5:
umbrel@umbrel:~ $ cd ~/umbrel && sudo ./scripts/update/update --repo getumbrel/umbrel#v0.4.5
Cloning into '/tmp/umbrel-update'…
remote: Enumerating objects: 3594, done.
remote: Counting objects: 100% (1191/1191), done.
remote: Compressing objects: 100% (387/387), done.
remote: Total 3594 (delta 1044), reused 807 (delta 804), pack-reused 2403
Receiving objects: 100% (3594/3594), 960.26 KiB | 9.06 MiB/s, done.
Resolving deltas: 100% (2177/2177), done.
Note: checking out '551b9b583cfb14c92dbaeab7bda8f5e47356c958'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
An update is already in progress. Exiting now.
I'm adding the complete debug file here and updating the one in the main post.
Debug 3.txt (12.0 KB)
Any help? I've been stuck for 24h+.
The file update-in-progress was not deleted; make sure you run 'sudo rm ./statuses/update-in-progress' from within Umbrel's directory.
try this:
cd umbrel
sudo rm ./statuses/update-in-progress
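If you're not sure whether the lock file is even there (which would explain the "No such file or directory" error above), a quick existence check avoids guessing at paths. Sketch only; `check_lock` is a made-up helper name, demonstrated on a temporary directory rather than a real install:

```shell
#!/bin/sh
# Report whether the update lock exists under a given Umbrel directory.
check_lock() {
  if [ -f "$1/statuses/update-in-progress" ]; then
    echo "lock present"
  else
    echo "lock absent"
  fi
}

# Demonstration on a throwaway directory (a real node would use ~/umbrel):
demo=$(mktemp -d)
mkdir -p "$demo/statuses"
check_lock "$demo"                           # prints "lock absent"
touch "$demo/statuses/update-in-progress"
check_lock "$demo"                           # prints "lock present"
```

If the check says "lock absent" under ~/umbrel, there is nothing to rm and the "update already in progress" error likely comes from somewhere else.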
Gonna try this.
Thank you m8.
Ok Darth, I think I'm cursed.
I successfully flashed the SD card and started the node, then I got the usual error "Error: Failed to connect external drive" that I usually solve with this command: ssh -t umbrel@umbrel.local "sed -i 's/ blacklist_uas/ #blacklist_uas/g' /home/umbrel/umbrel/scripts/umbrel-os/external-storage/mount && sudo reboot". The problem is that when I launch ssh -t umbrel@umbrel.local, the command prompt window opens and closes immediately (after about 1 second), so I basically can't access the node and type the command that usually solves the error… any suggestion, please?
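For context on what that one-liner actually does: the sed expression comments out the blacklist_uas step in Umbrel's mount script, so the enclosure is not forced off the kernel's UAS driver. Here it is demonstrated on a throwaway copy of a sample line, not the real script (the sample line is invented for illustration):

```shell
#!/bin/sh
# Show the effect of the sed command from the thread on a sample line.
demo=$(mktemp)
printf '%s\n' 'run blacklist_uas "$device"' > "$demo"
# Same substitution as in the ssh one-liner: prefix the call with '#'
sed -i 's/ blacklist_uas/ #blacklist_uas/g' "$demo"
cat "$demo"   # prints: run #blacklist_uas "$device"
```

The real command simply applies this substitution to /home/umbrel/umbrel/scripts/umbrel-os/external-storage/mount and reboots.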
When this red umbrella appears, it clearly means you have a hardware issue with the connection to your drive.
Try shutting down the node, check the cabling and the case, change USB port, use the original power source, disconnect anything else from the Pi, make sure you have a good connection to your drive, and start again.
Ok, I'll check all the cables tomorrow and update you. Thank you, Darth.
Ok Darth,
the curse continues ^^.
I flashed the micro SD card successfully (verified), but every time I power on the Raspberry Pi I get "Failed to connect to external drive". I tried all 4 USB ports (both 3.0 and 2.0); I see the SSD LED blink for some seconds and then stay on.
IMPORTANT NOTE: I've updated through several previous OS versions, and every time I updated the OS the Raspberry Pi wasn't able to see the SSD correctly, and every time I "forced" the OS to see the SSD correctly using this command: ssh -t umbrel@umbrel.local "sed -i 's/ blacklist_uas/ #blacklist_uas/g' /home/umbrel/umbrel/scripts/umbrel-os/external-storage/mount && sudo reboot", after which everything worked again. This time the new OS version doesn't seem to let me reach the command prompt (I launch ssh -t umbrel@umbrel.local and it opens for 1-2 seconds, then closes immediately), so I can't enter that command and I'm stuck at the moment… and since my node holds 11M sats, this is quite frustrating…
My SSD model is: Samsung MZ-77Q1T0BW 870 QVO Internal SSD, 1 TB, SATA, 2.5"
My SSD case is: UGREEN External HDD Case 2.5" USB-C 3.1 Gen 2 with UASP up to 6 Gbps, HDD/SSD Thunderbolt 3 compatible, for SATA 7mm/9.5mm (https://www.amazon.it/gp/product/B07Y825V4N/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1)
Is my only solution to buy a new 1 or 2 TB SSD from the 100% compatibility list, or can I try other things to fix this? Maybe connecting my current SSD to a PC and changing some files, or replacing the SSD? Or just changing the SSD case model?
I desperately need some help and deeply appreciate your support.
I want my node up and running again ASAP. This new OS totally screwed me… really frustrating…!
Check the section about "my external drive is not mounting" in the Troubleshooting guide.
Checked that BUT:
Darth, how can I type commands if my command prompt closes 2 seconds after I open it, every time? Windows 10 64-bit here. Any reason why this is happening? How can I access the command prompt without it closing immediately?
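One generic workaround for the vanishing window (not Umbrel-specific): open PowerShell or cmd first and run ssh from inside it, so the window survives the failure, and redirect the output to a file so the error message is kept. The `run_logged` helper below is hypothetical, shown with a stand-in command instead of a real ssh session:

```shell
#!/bin/sh
# Run a command and keep its output in a log file, so the failure reason
# survives even if the terminal window closes immediately.
run_logged() {
  logfile="$1"; shift
  "$@" > "$logfile" 2>&1       # capture both stdout and stderr
  echo "exit status $? (output saved to $logfile)"
}

# Stand-in demonstration; on the real machine you might instead run:
#   run_logged ssh-debug.log ssh -v umbrel@umbrel.local
run_logged demo.log sh -c 'echo connecting; echo some error >&2'
cat demo.log
```

ssh's -v flag makes it print each step of the connection, so the log usually shows exactly where it drops.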
That means you have serious issues with your hardware. Maybe the power source is not right.
You should start by replacing it with a good one: original, new.
There can also be many other causes: the USB cable, the drive case, the SSD itself, or even the damn microSD card can fail.
This is my Raspberry Pi, and I have the original power cable. I will follow your steps, replacing things in this order: the power cable, the USB cable, then the drive case, then the SSD. I would rule out the microSD card, since I flashed it without problems and I see the error message when I start the Pi, suggesting the OS is apparently working. It's strange, however, that this supposed "hardware issue" started right after the 0.4.5 OS update, after 5 months of perfect operation…
Regarding the drive case, can you please suggest a model that should be fully reliable and compatible? Thanks.
Just an update, since I have finally almost solved this nightmare. The issue was the SSD external case. I replaced it, and I managed to start the Umbrel dashboard and begin the sync from scratch again. Now I apparently have one last little issue: whenever I try to open the Windows 10 PowerShell with the command ssh umbrel@umbrel, the new window pops up and closes automatically after 1 second. I need SSH access in order to recover my Lightning channels, so I must find a way. At least the node is working again. If anyone has a tip or a suggestion on how I can successfully reach the SSH command prompt, please share; for now I will try the PuTTY software.