Adding Emergency Alerts to the Pi, and Finally Setting Up Backups

Since the last post, three things happened on the mesh node. The news site got wider. A whole new node appeared for emergency warnings. And I finally got a backup system in place, because losing everything to a corrupt SD card had started to feel like the kind of thing that would eventually happen if I kept putting it off.
This is how all of that went.
Broadening the news
The starting lineup on the news node was nine Canadian feeds: CBC, National Post, Globe and Mail, Rebel News, True North, the Macdonald-Laurier Institute, the Fraser Institute, Straight Arrow News, and OpenCanada. Looking at that list with fresh eyes, something stood out. Five of the nine leaned clearly to the right. The public broadcasters were there, the national dailies were there, but the overall editorial weight was uneven.
That's a defensible choice if the node is just my personal news diet. The whole point of setting it up to show at Maker Faire, though, is that strangers are going to browse it. Strangers deserve range.
I added five more feeds. The Tyee, an independent BC outlet that runs investigative work. Policy Options, the non-partisan magazine from the Institute for Research on Public Policy. Western Standard, which keeps the Alberta flavour without repeating what Rebel and True North already cover. The Hub, a newer centre-right outlet with more policy focus than opinion. And the Financial Post, for business and markets.
Fourteen feeds total now. The mix feels like what a curious person's reading might actually look like instead of a single editorial viewpoint.
The fetcher script only needed new entries in its dictionary. The matching page scripts came from the same template I'd used for the original nine. No new architecture, just more content.
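A sketch of what that dictionary change looks like. The keys are illustrative and the URLs are placeholders, not the outlets' real RSS endpoints:

```python
# Sketch of extending the fetcher's feed dictionary. Keys are
# illustrative and the URLs are placeholders -- the node's real
# entries point at each outlet's actual RSS endpoint.
FEEDS = {
    # ...the original nine entries stay as they were...
    "tyee": "https://example.org/tyee.rss",
    "policy_options": "https://example.org/policy-options.rss",
    "western_standard": "https://example.org/western-standard.rss",
    "the_hub": "https://example.org/the-hub.rss",
    "financial_post": "https://example.org/financial-post.rss",
}
```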
A new node for emergency alerts
Here's what I got curious about. If the internet goes down during an emergency, what's the point of a mesh network that can't tell you there's an emergency?
Canada has a national alerting system called Alert Ready. Provinces and territories feed alerts into a central aggregator called NAAD (the National Alert Aggregation and Dissemination system), run by Pelmorex, the company behind The Weather Network. Every severe thunderstorm warning, AMBER alert, and civil emergency passes through it. Some individual provinces also publish their own direct feeds, though coverage is uneven.

For a Calgary-based node, two feeds cover the country:
- Alberta Emergency Alert publishes a clean public Atom feed at emergencyalert.alberta.ca.
- NAAD aggregates everything nationwide at rss.naad-adna.pelmorex.com.
I set up this new node the same way the news node works. A separate config directory (~/.nomadnetwork_warnings/), its own systemd service (nomadnet_warnings.service), its own fetcher script running on a tighter schedule. News can refresh every 30 minutes. Emergency alerts need to be fresh, so the fetcher runs every 10.
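For reference, the warnings node's unit file looks roughly like this. The user, paths, and flags here are assumptions from my setup, not something you can paste verbatim; check `nomadnet --help` for the exact option names on your version:

```ini
# Sketch of nomadnet_warnings.service -- user and paths are assumptions
[Unit]
Description=NomadNet warnings node
After=network.target

[Service]
User=pi
ExecStart=/usr/bin/nomadnet --daemon --config /home/pi/.nomadnetwork_warnings
Restart=on-failure

[Install]
WantedBy=multi-user.target
```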
Sorting by province
The NAAD feed returns everything for the entire country in one stream. A recent fetch pulled 214 active alerts. That's way too many to show as a flat list, and anyone trying to find out whether something relevant to them was happening would give up fast.
So the page needed to group alerts by province. Easy enough in concept. The hard part is that NAAD doesn't tag alerts with a structured province field. You have to figure out where each alert belongs by reading the text.
Environment Canada helps. Weather alerts end with a small signature line that includes a hashtag like #ABStorm or #ONStorm, and an email address like meteoAB@ec.gc.ca. Both are unambiguous. Fall back to matching province names and major cities in the alert text, and you can classify most alerts.
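The classifier can be sketched like this. The hashtag and email patterns follow the ECCC signature lines described above; the code tables and city hints are illustrative, not the node's actual lists:

```python
import re

# Sketch of the province classifier. The regexes follow the ECCC
# signature conventions (#ABStorm, meteoAB@ec.gc.ca); the hint
# table below is a small illustrative sample, not the full list.
PROVINCE_CODES = {"AB", "BC", "MB", "NB", "NL", "NS", "NT", "NU",
                  "ON", "PE", "QC", "SK", "YT"}
HASHTAG = re.compile(r"#([A-Z]{2})Storm\b")
ECCC_EMAIL = re.compile(r"meteo([A-Z]{2})@ec\.gc\.ca")
NAME_HINTS = {  # fallback: province names and major cities
    "AB": ["Alberta", "Calgary", "Edmonton"],
    "NL": ["Newfoundland", "Avalon Peninsula", "St. John's"],
    "ON": ["Ontario", "Toronto", "Ottawa"],
}

def detect_province(text: str) -> str:
    """Return a two-letter province code, or 'XX' for unclassified."""
    for pattern in (HASHTAG, ECCC_EMAIL):
        m = pattern.search(text)
        if m and m.group(1) in PROVINCE_CODES:
            return m.group(1)
    for code, hints in NAME_HINTS.items():
        if any(hint in text for hint in hints):
            return code
    return "XX"
```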
I wrote that logic, deployed it, and looked at the output. Newfoundland alerts sorted correctly into Newfoundland because they mentioned Avalon Peninsula and St. John's. But almost every other alert ended up in an "Other / Unclassified" pile.
The truncation bug
What caught this was a batch of winter storm warnings that clearly came from Environment Canada. They should have sorted into whichever province they covered. Instead they all said unknown.
The cause took me a minute to find. My fetcher stores each alert's description in the cache, truncating to 200 characters for display. Which is fine for humans reading the page. But the province-detection logic was running against that same truncated text, and the ECCC hashtags I was pattern-matching on come at the very end of the full description, after the alert body and the boilerplate about how to report severe weather. The 200-character cut chopped off exactly the part I needed.
The fix was to run detection on the untruncated text, before truncating for display. Save the province code into the cache alongside the title and date. Then the page script just reads the precomputed code and doesn't need to do any pattern-matching itself.
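The corrected ordering, sketched with illustrative names (the stub classifier stands in for the real pattern-matching):

```python
# Sketch of the fixed caching order: classify on the full text first,
# then truncate only the copy stored for display. Field names and the
# stub classifier are illustrative, not the node's actual code.
def detect_province(text: str) -> str:
    # Stand-in for the real classifier; the hashtag sits at the
    # very end of the full description.
    return "AB" if "#ABStorm" in text else "XX"

def cache_alert(title: str, description: str) -> dict:
    province = detect_province(description)   # runs on untruncated text
    return {
        "title": title,
        "province": province,                 # precomputed for the page script
        "summary": description[:200],         # display copy, safe to cut
    }
```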
I ran the fetcher again, and the province breakdown went from "one province classified, twelve unclassified" to most alerts in their correct provinces, with the quiet ones listed compactly at the bottom. Much better.
Filtering the noise
One more thing. NAAD broadcasts periodic test messages. They're important for the system itself but useless on a public node where visitors are looking for real alerts. The fetcher now filters them before caching. English and French variants both, since NAAD is bilingual.
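The filter is a simple substring check in both languages. The marker phrases below are assumptions about typical Alert Ready test wording, not the node's exact match list:

```python
# Sketch of the test-broadcast filter. The marker phrases are
# assumptions about typical test wording, not the exact match list.
TEST_MARKERS = ("this is a test", "ceci est un test")

def is_test_alert(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in TEST_MARKERS)
```

The fetcher drops anything where this returns true before writing the cache.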
Backups
Somewhere during the warnings-node work, I realized I'd built three NomadNet instances, a stack of custom scripts, a guestbook with real entries from people who had visited, and twelve feed cache files. If my SD card died tomorrow, all of it was gone. That's not a risk I can keep carrying.
The fix is simple and cheap. A USB drive plugged into the Pi. Automated daily backups to it. Seven dollars, a cron entry, done.
Formatting the USB
I had a spare 4 GB Kingston stick lying around, formatted FAT32 like they usually come. FAT32 works for most things but it loses Unix file ownership and permissions, which matters when you're backing up config files that need to be restored with the right owner. So I reformatted it to ext4, which is what the Pi's main filesystem uses.
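The reformat is two commands. The device name /dev/sda1 is an assumption from my setup; check yours with `lsblk` first, because mkfs erases the drive:

```shell
sudo umount /dev/sda1                  # unmount first if it auto-mounted
sudo mkfs.ext4 -L pibackup /dev/sda1   # erases everything on the stick
```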
The -L flag gives the drive a label so it's obvious what the stick is if I ever pick it up later and wonder.
Persistent mount
I wanted the drive to mount automatically whenever the Pi boots, at a consistent path. That means adding it to /etc/fstab with its UUID rather than its device name (device names can shift if other USB devices are plugged in):
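The entry looks like this; the UUID below is a placeholder, so substitute the one blkid reports for your own drive:

```shell
# /etc/fstab -- UUID is a placeholder; use the value from blkid
UUID=1234abcd-0000-0000-0000-000000000000  /mnt/backup  ext4  defaults,nofail  0  2
```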
The nofail option is the one that matters. If the USB ever comes unplugged and I reboot, the Pi should still boot normally. Without nofail, a missing drive can drop the system into recovery mode, which is the last thing you want when the Pi lives in a closet.
To get your own UUID, run sudo blkid /dev/sda1 and copy the UUID value.
One small gotcha
I set the mount point's ownership before mounting the drive. Then mounted the drive. Then discovered I couldn't write to /mnt/backup as my regular user.
The reason: when you chown a directory and then mount a filesystem on top of it, the filesystem's root permissions cover up whatever you set on the underlying directory. You have to chown after mounting, so you're changing the mounted filesystem's root rather than the mount-point directory underneath.
Fix was one command:
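Assuming the default pi user (substitute your own):

```shell
sudo chown pi:pi /mnt/backup   # run AFTER the drive is mounted
```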
Writes worked after that.
The backup script
The backup itself is a shell script. It tars all the important directories and files into a timestamped archive on the USB drive, keeps the last 30 days' worth, and logs each run to a file on the drive.
What goes in the tar:
- ~/.reticulum/ (the network identity, the thing you really can't lose)
- ~/.nomadnetwork/, ~/.nomadnetwork_news/, ~/.nomadnetwork_warnings/ (all three nodes)
- The fetcher scripts
- The systemd unit files from /etc/systemd/system/
- A dump of the user's crontab
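A sketch of what that script can look like. The destination path and 30-day retention come from this setup; the archive name, log format, and exact tar invocation are illustrative:

```shell
#!/bin/sh
# Nightly backup sketch -- /mnt/backup and the home paths match this
# post's layout; adjust for your own.
DEST=/mnt/backup
STAMP=$(date +%Y-%m-%d)
ARCHIVE="$DEST/node-backup-$STAMP.tar.gz"

# Capture the crontab so it travels with the configs.
crontab -l > /tmp/crontab.dump 2>/dev/null || true

tar -czf "$ARCHIVE" \
    "$HOME/.reticulum" \
    "$HOME/.nomadnetwork" \
    "$HOME/.nomadnetwork_news" \
    "$HOME/.nomadnetwork_warnings" \
    /etc/systemd/system/nomadnet*.service \
    /tmp/crontab.dump

# Keep only the last 30 days of archives, then log the run.
find "$DEST" -name 'node-backup-*.tar.gz' -mtime +30 -delete
echo "$(date) wrote $ARCHIVE" >> "$DEST/backup.log"
```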
Cron runs it nightly at 3 AM:
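The crontab entry, assuming the script lives at ~/backup.sh (adjust the path to wherever yours is):

```shell
# crontab -e entry: nightly at 3 AM
0 3 * * * /home/pi/backup.sh
```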
The first test run produced a backup file under 10 MB. Thirty days of those fit on a 4 GB USB with room to spare.
What this doesn't protect against
Worth being honest about. A USB drive plugged into the Pi protects against the SD card failing, which is the most common failure mode. It does not protect against the Pi being stolen, the house burning down, or pulling the USB out during a write. For any of those scenarios I'd want a second backup target somewhere off the property. That's a later project, probably rclone pushing to Proton Drive. One resilience layer at a time.
Where things stand
Three nodes running on one Pi. Fourteen news feeds across the political spectrum. Emergency alerts from every province, sorted sensibly, with test broadcasts filtered out. Daily backups to a USB stick that will survive the SD card dying.
Maker Faire is a few weeks away and the node is finally starting to feel ready.

