A review of operating systems/software for your NAS
Posted on: Sunday 23 Jan 2022
I’ve used almost all the major NAS operating systems/software for DIY NASes, so I wanted to write up a few thoughts on each. At the time of this writing, ZFS is the only serious contender for NAS filesystems, so this review only covers operating systems with ZFS support. HAMMERFS is still only available on DragonFly BSD, which does not support jails and only recently gained VM support, making it unsuitable as a host for software such as Plex. BTRFS is still not stable, and is only available on Linux.
I have installed all of these operating systems on the same hardware/ZFS pool, so ZFS has easily passed the vendor lock-in test: I haven’t had any data loss or incompatibility importing my NAS pool into any of these installs. My comments relate only to administration and general use as a NAS.
- FreeNAS: Altogether, a pretty nice piece of software. This gives you the “router webpage UI” feel for your NAS and makes it incredibly easy to one-click most common actions. FreeNAS is just a fancy web interface on top of a standard FreeBSD installation, so there is no worry of data lock-in here. The backend software driving the web interface is rather slow, and FreeNAS is picky about you not installing any software in the “main” system. You are expected to make a jail or VM to install any tools (such as 7zip), which can make basic tasks like unzipping downloads on your NAS annoying; the preferred method is to unzip them locally, then copy the files over. FreeNAS also had a major software rewrite during one of the major FreeBSD transitions, in which they completely dropped jail support unannounced and lost the majority of their userbase along with it. They have since reverted this change. A really good OS if you don’t want to deal with how painful the various network sharing service configurations are.
- FreeBSD: FreeBSD feels refreshing to use, like older Debian installations. It doesn’t fight with you and doesn’t have many opinions. For me, the system became a death-by-a-thousand-cuts situation. You have to configure all your network sharing services yourself: Samba is never fun to configure, and Avahi is miserable. Ezjails aren’t really as “EZ” as described. Packages aren’t built with an ffmpeg that can transcode MP3, so you will likely have to build all of X in ports (a 3-4 hour endeavor, plus learning the system) just to play your music. Once you get your jails created, keeping both them and the base system updated is another adventure. You have all the tools available, but if you don’t have heavy FreeBSD administration experience, the system becomes a real drag to maintain.
- Illumos: It just ain’t Linux. It ain’t BSD. It’s not bad, but it’s just too far away from what you’re probably familiar with to get used to using. (If you’re a Solaris administrator, ignore this.) The man pages are great, but what the system is lacking is a high-level “getting started” guide to introduce the system and give you good defaults for setting up a NAS. There is some amazing technology here, and I would definitely recommend playing with Illumos at some point, but for a NAS, it was just an uphill battle to learn the system and get back to the same functionality I had with FreeNAS. The package system is well integrated with the zones (“jails”), but setting up both zones and VMs involved hand-editing some JSON. If you have some network administration experience, Illumos may feel very comfortable to you. If you have a non-IT background, I don’t recommend using it on your NAS (but still try it!)
- NetBSD: It just feels ancient. I come from an OpenBSD background, and it’s missing all the quality-of-life features I would expect from OpenBSD. If you come from a Linux background, I would recommend trying NetBSD in a VM first to get a feel for it. I got annoyed while trying to set up my disks (the disk subsystem is just slightly different from both FreeBSD and OpenBSD) and the system didn’t last much longer past that. The documentation is well written, as you would expect from a BSD project, but the man pages lack the “see also” section, so it becomes tricky to find the exact tools you’re looking for when you’re not familiar with an area.
- Linux: systemd. There are many jokes about NIH (not invented here) syndrome, but every time I pick up Linux again, I have to learn at least one new core tool. Much like FreeBSD, you are going to have to configure all the network sharing services yourself, but unlike FreeBSD you are more likely to find help on the internet for the exact version your distro ships, so it won’t be as painful (usually). Linux also lacks the platform stability you will find in other systems (such as the BSDs). The distros that provide a more modern set of packages tend to have an unstable base that wants frequent updates and suffers occasional breakage during them, while the distros that provide a more stable base are plagued by woefully out-of-date packages. Neither is the situation you want for a NAS.
- NixOS: NixOS is a rather opinionated package and configuration management system that hides a Linux underneath. You’re not allowed to touch the Linux, just edit your configuration.nix. If you can’t express it in configuration.nix, you are in for a miserable time. In practice, this tends to work out great for a NAS. It breaks the packages away from the base system, as you find in the BSD world, and splits your configuration away from both. This provides a much gentler segregation than the FreeNAS “one jail, one binary” approach to getting CLI tools on your NAS, while still making it incredibly easy to throw out and recreate your installation. The configuration.nix feels like all the wisdom of the Arch Linux wiki distilled down into a tiny service block. All in all, this approach seems to work great for configuring servers and makes service management on the NAS pretty enjoyable while still maintaining control of the parts you want. The documentation for NixOS is rather spotty, and the main manual is a several-hundred-page HTML file that seems designed to choke all major browsers.
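As a hedged illustration of what such a service block looks like (option names follow the NixOS module system; this is a minimal sketch, not my actual configuration.nix):

```nix
{ config, pkgs, ... }:

{
  # Enable SSH for remote administration of the NAS.
  services.openssh.enable = true;

  # Enable Samba for network shares; share definitions omitted here.
  services.samba.enable = true;
}
```

The point is that enabling and configuring a service is a few declarative lines, and throwing the whole configuration away and rebuilding is equally cheap.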
At the time of this writing, I am using NixOS on my NAS and would highly recommend either NixOS or FreeNAS, unless you have substantial experience using another system.
Introducing naughty.st
Posted on: Friday 21 May 2021
Naughty.st is a small service to redirect one domain to another while preserving the rest of the URL. This is best used for redirecting links to services such as Twitter or Youtube to their free/open/less annoying counterparts such as Nitter or Cloudtube/Invidious. Even if you don’t care about the privacy benefits of these services, you might appreciate not having your page bogged down by overuse of Javascript, not sitting through a minute of ads for a 15-second video, or simply having a better connection to your locally hosted instance.
Why not use X?
If you had this same exact problem and found a solution to it already, definitely stick with that. I ran into a lot of partial solutions, though, so I ended up making this.
Rewriting URLs with a proxy
Rewriting was the first thing I tried; I found it was a huge pain to intercept all SSL traffic, and then a rewrite caused all kinds of chaos when the expected/pinned certificates did not match the redirected site. If you have the tools to properly set this up on your network and all your devices, rewriting URLs at the proxy is by far the best way to go. But for me this was just adding more complications to my network while still not working in most cases.
Browser addons
If you only use one browser and it has an addon to redirect/rewrite URLs and links, that’s great. I use multiple browsers on multiple devices, and most mobile browsers don’t have good (or any) addon support. On top of that, the list of sites you are redirecting to slowly gets out of sync across all the addons over time. Naughty.st provides a single place to update that list, and if a service goes down it becomes simple to swap out the URLs. It is also easy to integrate naughty.st into addons to remove the list synchronization problem. There is already an iOS share sheet shortcut, and I plan to create addons for other browsers.
Is it safe to use naughty.st directly or should I self host?
There is no logging on naughty.st, but you also have no way to verify that the code I am running on my server is what you see in the repository. I tried to make naughty.st as easy as possible to self-host, and the iOS shortcut doesn’t hard-code the service URL.
How does naughty.st work?
I definitely recommend looking at the code, as it’s pretty simple even if you’re not familiar with golang. Naughty.st starts up on port `:8476` and then sends any request it receives to the function named `urlHandler()`. `urlHandler()` does some sanity checking rather than just passing garbage or something unintended to the service (say, by being triggered on a page we don’t support). After that, it’s just a simple switch/case matching on the host names of the services. If a match is found, the host name in the URL structure is swapped out (all these services use compatible URL formats) and naughty.st returns a standard HTTP redirect to let your browser do all the hard work. Naughty.st never has to look at the content of either page to perform this action. If a match isn’t found, a helpful message is returned.
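The flow above can be sketched in Go. This is a simplified illustration, not the actual naughty.st source: the target instance hostnames are placeholders, and passing the URL as a `url` query parameter is an assumption made for the sketch.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// swapHost swaps the host portion of a URL for a known alternative
// frontend, leaving the rest of the URL intact. The target instances
// below are illustrative placeholders.
func swapHost(raw string) (string, bool) {
	u, err := url.Parse(raw)
	if err != nil || u.Host == "" {
		return "", false
	}
	switch u.Host {
	case "twitter.com", "www.twitter.com":
		u.Host = "nitter.example"
	case "youtube.com", "www.youtube.com":
		u.Host = "invidious.example"
	default:
		return "", false
	}
	return u.String(), true
}

// urlHandler mirrors the flow described above: sanity-check, match,
// then hand the browser a standard HTTP redirect.
func urlHandler(w http.ResponseWriter, r *http.Request) {
	target, ok := swapHost(r.URL.Query().Get("url"))
	if !ok {
		fmt.Fprintln(w, "sorry, I don't know how to redirect that URL")
		return
	}
	http.Redirect(w, r, target, http.StatusFound)
}

func main() {
	if out, ok := swapHost("https://twitter.com/user/status/123"); ok {
		fmt.Println(out)
	}
	// To serve requests, as naughty.st does on :8476:
	// http.HandleFunc("/", urlHandler)
	// http.ListenAndServe(":8476", nil)
}
```

Because the redirect is a plain HTTP 302, the service never needs to fetch or inspect either page.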
Introducing smallcms
Posted on: Friday 11 Sep 2015
Smallcms is a simple content management system (CMS) for allowing sections of your front page to become dynamically editable. Smallcms is a perl CGI app that you can drop into most sites.
Smallcms will iterate over any tag with a class that ends in `-editable` and present it as a text box, making it ideal for quick news tickers and small boxes that need to be updated frequently but don’t warrant adding a database to your site. The smallcms code is shorter than a page and easy to understand. It currently does not offer any features except for `<br>` to `\n` conversions as appropriate. Smallcms does not care about its name; it is suggested that the binary be renamed to something more appropriate, such as `edit_news.pl`, when installed.
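For illustration only (smallcms itself is a perl CGI script; this Go sketch just shows the two behaviors described above, with hypothetical names):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// editable matches any tag whose class attribute ends in -editable,
// approximating the scan smallcms performs over the page.
var editable = regexp.MustCompile(`class="[^"]*-editable"`)

// brToNewline performs the <br> -> \n conversion applied when a
// block is presented in a text box for editing.
func brToNewline(s string) string {
	return strings.ReplaceAll(s, "<br>", "\n")
}

func main() {
	html := `<div class="news-editable">line one<br>line two</div>`
	fmt.Println(editable.MatchString(html))
	fmt.Println(brToNewline("line one<br>line two"))
}
```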
Smallcms may be found on gitlab.
Smallblog.pl, or how not to create a static blog in shell (part 4)
Posted on: Thursday 10 Sep 2015
Continued from part 3.
Much like the third part of this series, this coincides with the release of smallblog.sh 0.3 and smallblog.pl 0.5. Both of these releases attempt to address problems encountered with smallblog on ZFS. On ZFS, `open(2)` and `stat(2)` seem to be heavy operations. This would likely be mitigated on a dedicated server by the various caches, but on a virtual server, these become rather heavy compared to Linux’s filesystems. Smallblog.sh 0.3 attempted to work around this by moving the operations heaviest in lookups and reads into plugins, so that code path wouldn’t always be executed and it would be easy to replicate the ill-performant code in another language. Smallblog.pl addresses this with a rewrite and re-architecture of the code to avoid so many lookups and reads. This approach isn’t incompatible with shell, but it does not fit the architecture of smallblog.sh, which relies on piping data rather than ever storing it to reference later. Smallblog.sh could have been rewritten, but I see no advantage to shell when used as an application language. I also wanted to leverage a templating system to remove the HTML generation from the middle of the script, and perl offers several nice choices in that area. Again, this approach isn’t incompatible with shell, and a template would work the same in both applications. What finally caused me to abandon shell was the lack of compatibility for non-POSIX features and extensions across shells. As I don’t use BASH, writing BASH-specific features into my code was never an issue, but I have a fully heterogeneous set of systems and expect smallblog to run on all of them. The disparity among shells finally became too much, so I switched to a language with a living specification, not a frozen one.
A rewrite
Due to the small size of *small*blog, a rewrite was easy. I had a set of inputs and outputs to test against, and a rather small specification for the output (the jekyll default theme). So I just retraced the same steps I used to create smallblog.sh, starting at the core and layering on functionality as I worked outwards towards a complete implementation.
I began with the familiar
```perl
my $text = read_file($path);
my $html = markdown($text);
```
and called it in a loop with a
```perl
my @paths = split("\n", `ls -r */*/*/*.md`);
foreach my $path (@paths) {
    ...
}
```
Even for those not familiar with perl, this snippet may trigger some memories. It looks rather similar to a certain
```sh
for post in `ls -r */*/*/*.md`; do
    ...
done
```
All that perl snippet is doing is shelling out and using `ls(1)` to build an array of paths to posts. Why waste time digging through perl libraries when we can just ask the shell? Beats me. A proper perl implementation of this would still take just as many reads to the filesystem to build the list.
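The same list can be built without shelling out at all. This is a Go illustration of the idea (smallblog.pl is perl, where a plain `glob()` call would do the same job); the function name is hypothetical:

```go
package main

import (
	"fmt"
	"path/filepath"
	"sort"
)

// postPaths collects post paths natively instead of shelling out to
// ls(1). It still costs the same directory reads, as noted above.
func postPaths(root string) ([]string, error) {
	paths, err := filepath.Glob(filepath.Join(root, "*/*/*/*.md"))
	if err != nil {
		return nil, err
	}
	// ls -r is reverse lexical order, which is newest-first for a
	// year/month/day directory layout.
	sort.Sort(sort.Reverse(sort.StringSlice(paths)))
	return paths, nil
}

func main() {
	paths, err := postPaths(".")
	fmt.Println(paths, err)
}
```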
Less stat, more storage
As `stat(2)` and `open(2)` were what I wanted to avoid, I had to change from querying files every time I needed a single line to storing everything in a data structure and working against that. As it turns out, the access pattern for every function (except the main index page) is to read every file, process part of it, then pipe that output to disk in its final form. By simply saving both the path and the file contents once, then acting on that, filesystem access was reduced to a directory lookup and a read for each file (as opposed to a lookup and read on every file in each function). The single exception, the main index page `index.html`, only does this for the top few newest files (5 by default). One loop with a maximum count, and there are now 10 fewer filesystem accesses.
It should be noted that this approach incurs a memory penalty, as each post, its path, and its HTML version are now stored in memory. While the name *small*blog isn’t meant to restrict this program’s scope, I suspect most installations using this software will fit easily in RAM, even on smaller virtual servers. This is a trade-off, but I believe larger installations should make use of a proper database to allow more refined queries into the available data.
Templates, now with less shell substitutions
Templating systems are great. Content is removed from code, logic flow becomes cleaner and simpler, and anyone can edit the output without knowing how to “code”. Smallblog.sh was slowly growing into templates, despite my effort not to reinvent a templating library. To allow for dynamic titles on each page (just the post title, or the site title for the main page), labels in the style of `%TITLE%` were sneaking into `$blog_header` and `$blog_footer`, and then being regexed back out in `make_index()`.
With real templates, the giant data structure I collected can now be passed to the template, and the files just call out the variables they want in the form of `${site.title}`. Because the chosen templating system, Template Toolkit, allows multiple formats for variable tags, both its standard `[% var %]` and shell-style `${var}`, converting the existing HTML generation code involved one regex and a handful of renamed variables.
As the templating system is modular, it supports more include options than just dropping a header and footer onto a page. The main index page was a particularly tedious point, as it duplicated most of the individual post page generation code but couldn’t reuse that code without rewinding the logic to remove headers and footers. With templates, `post.tmpl` is now only the HTML to print a post, and the page generation code has become
```
[% INCLUDE site_header.tmpl title=post.title %]
[% INCLUDE post.tmpl %]
<br />
[% INCLUDE site_footer.tmpl %]
```
while the main index page can wrap the `[% INCLUDE post.tmpl %]` in a `FOREACH` loop and still insert the “all posts” link at the bottom of the page:
```
[% INCLUDE site_header.tmpl title=site.title %]
[% FOREACH post=posts %]
[% INCLUDE post.tmpl %]
<br />
[% END %]
<h3>
  <a class="extra" href="${site.prefix}/archive.html">all posts</a>
</h3>
[% INCLUDE site_footer.tmpl %]
```
The “all posts” `archive.html` page has also substantially benefited from templates, if anyone wants to peek.
What next?
Smallblog.pl has now caught up to smallblog.sh in features, while being easier for me to maintain (and use.) As such, smallblog.sh is being deprecated in favor of smallblog.pl. I have purposefully skipped a version number to allow for one last smallblog.sh release should any bugs or interesting features come up.
As for smallblog.pl, it is a fairly direct translation from smallblog.sh, re-architecting aside. There are plenty of places the code and templates can be cleaned up. I would also like to further reduce the number of filesystem accesses. Right now, every file is blindly regenerated, even if the source and templates haven’t changed. I would like to avoid the needless churn, as in most cases, only the main index page and archives need updating.
There likely won’t be any new features introduced to smallblog.pl for a while, as it is currently feature-complete for my usage. If there are any features or changes that would interest you, feel free to email me, or simply file an issue.