Dennys Umbrel App Store ☂️

Sorry. Doom is built for the amd64 architecture (e.g., an Umbrel Home or a virtual machine running Linux), so unfortunately it won't run on a Raspberry Pi or other ARM-based devices. You'll need a system with an amd64 processor or a Linux VM to use it.

Thank you so much, Denny, for your quick reply! Yes, I’m using a DTV Electronics CmRat with a Radxa Compute Module CM3, which runs on ARM architecture. That explains why it doesn’t boot.

I have another question for you. I was chatting with Fiatjaf recently about a solution he introduced for RSS and Nostr feeds called Noflux. I'm a big fan of FreshRSS on UmbrelOS, but I've noticed it's the only feed reader available. It would be great to see Noflux added as an option, especially since it comes from the creator of Nostr. He even set up a Docker container to simplify installation!


Here’s the link to the Docker container: fiatjaf/noflux on Docker Hub

And the discussion on Nostr can be found here: Primal discussion.

Looking forward to hearing your thoughts!

Done. Can you test it and give me some quick feedback? :muscle:

1 Like

Thanks, Denny, for the quick upload! I just installed the app and tested it with a few RSS feeds and Nostr public keys. It's definitely faster and more user-friendly than FreshRSS. I really like the variety of integrations, especially the Telegram bot for notifications when new content or notes are published. It's running perfectly on Umbrel. I'll suggest another app tomorrow!

1 Like

I’m really pleased that everything is working the way you want it to. Thanks for this great suggestion. :innocent:

I’ll take the screenshots, improve the description and then get it out there. I’m also thinking about making a PR on GitHub. :muscle:

I noticed that we don’t have an SEO app in Umbrel yet, so here’s a solution: SerpBear! This tool offers the same functionality as big-name SEO (Search Engine Optimization) platforms like Semrush and Ahrefs, which often cost hundreds of dollars a month. The twist? SerpBear is free and self-hosted on your Umbrel device. Pretty cool, right?

SerpBear is an open-source app designed for search engine position tracking and keyword research. It lets you monitor your website’s keyword rankings on Google and even sends you notifications about changes in their positions. How handy is that? It would be awesome to see this added to Denny’s Umbrel App Store, one of the best in the Umbrel Community.

Here’s the link to the Docker container (this is the most up-to-date version on Docker Hub, but there are others): https://hub.docker.com/r/towfiqi/serpbear.
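For anyone who wants to try it before it lands in the store, here's the kind of docker run I had in mind. Treat it as a sketch: the port and the environment variable names are my best recollection from the Docker Hub page, so double-check them there before relying on this.

```
# Sketch only — verify the port (3000) and the USER/PASSWORD/SECRET/APIKEY
# variable names against the SerpBear documentation on Docker Hub.
docker run -d \
  --name serpbear \
  -p 3000:3000 \
  -e USER=admin \
  -e PASSWORD=changeme \
  -e SECRET=some-random-string \
  -e APIKEY=another-random-string \
  -v ./serpbear-data:/app/data \
  towfiqi/serpbear
```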

A review to understand how it works: My review of self hosted Serpbear for SEO (Without Spending a Fortune on Monthly Subscriptions) - AI Augmented Living

Let me know what you think!

Thanks for the tip on SerpBear! It sounds like a great, cost-effective solution for SEO tracking, especially since it’s self-hosted and privacy-friendly. :+1:t2:

I’ll take a look at the Docker container and test how it works in the coming days. If everything goes well, it could be a fantastic addition to Umbrel. I’ll update you once I’ve had a chance to try it out. :blush:

Added Noflux. (Thanks to @blockdyor)

1 Like

Added SerpBear. (Another great suggestion from @blockdyor)

1 Like

Thank you, Denny! I’m testing SerpBear, and it’s working great. After installation, the main step is linking it to a scraping service API, like ScrapingRobot. Many of these services include a free tier, allowing a set number of scrapes each month. For instance, ScrapingRobot provides up to 5,000 scrapes per month at no cost. If you need more frequent scrapes, you can upgrade for a fraction of what pricier SEO tools charge. This will definitely simplify things for a lot of website owners. Thanks again!
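To put that free tier in perspective, a quick back-of-the-envelope calculation: with 5,000 scrapes a month and one scrape per keyword per day, you can track roughly 161 keywords even in a 31-day month.

```shell
# 5,000 free scrapes / 31 days ≈ keywords that can be scraped once daily
echo $(( 5000 / 31 ))   # prints 161
```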

1 Like

Added Wakapi.

1 Like

Added Quake III Arena.

1 Like

Added BookStack and Haven. (Haven: host your own private blog)

1 Like

Hi,

I have a question about SerpBear. The app works great for almost everything, but I’ve encountered an issue with automating the scraping process (e.g., scheduling it to run daily). Here’s the error I get:

```
denny-serpbear_app_1 | [0] GET /api/searchconsole?domain=blockdyor-com
denny-serpbear_app_1 | [0] GET /api/keywords?domain=blockdyor.com
denny-serpbear_app_1 | [1] ERROR Making SERP Scraper Cron Request..
denny-serpbear_app_1 | [1] TypeError: fetch failed
denny-serpbear_app_1 | [1] at node:internal/deps/undici/undici:13392:13
denny-serpbear_app_1 | [1] at process.processTicksAndRejections (node:internal/process/task_queues:105:5) {
denny-serpbear_app_1 | [1] [cause]: Error: connect ECONNREFUSED 10.21.21.9:3232
denny-serpbear_app_1 | [1] at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1610:16) {
denny-serpbear_app_1 | [1] errno: -111,
denny-serpbear_app_1 | [1] code: 'ECONNREFUSED',
denny-serpbear_app_1 | [1] syscall: 'connect',
denny-serpbear_app_1 | [1] address: '10.21.21.9',
denny-serpbear_app_1 | [1] port: 3232
denny-serpbear_app_1 | [1] }
```

Interestingly, manual scraping works without any issues, so the app itself seems fine. I'm wondering if this could be resolved by adjusting the Docker Compose file or making some other configuration change. I attempted to raise an issue on the developer's GitHub, but I haven't received a response yet.
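In case it helps pinpoint it: since manual scraping (triggered from the browser) works but the internal cron gets ECONNREFUSED when calling 10.21.21.9:3232, I was wondering whether the cron should be told to call the app on localhost inside the container instead. Something like this in the compose file — both the variable name and the port are guesses on my part from skimming the SerpBear docs, not a confirmed fix:

```
# Guess: point SerpBear's internal cron at the app inside the container
# rather than the 10.21.21.9:3232 address it currently fails to reach.
services:
  serpbear:
    environment:
      - NEXT_PUBLIC_APP_URL=http://localhost:3000
```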

Would you have any advice on how to fix this?

Thanks!