A few months back, I decided I should have my files and media backed up on something better than a couple of flash drives. The concept of a NAS on my network that all of my internal devices could connect to was very appealing. However, I had some trouble picking out which style of NAS to go with.
Pre-Packaged Solution
Amazon and Newegg sell a variety of pre-packaged NAS solutions. However, these typically run proprietary software, which I’m not a fan of. I’d also really prefer to avoid talking to an outsourced tech support hotline when I eventually need to restore a RAID array. They also typically lacked the number of drive bays I wanted.
Installable Operating System
I had seen an ad for TrueNAS on big Linus’ show some time before. Its web interface was appealing since I would be able to set everything up in a more guided fashion. Also, using a system that serves enterprise customers made me feel like it would be more stable than a platform I built on my own, especially for managing updates.
Build Your Own
Of course, there’s always the option of building your own NAS from scratch. However, when I was getting started, I figured this option would be more work than it was worth.
I eventually decided to install TrueNAS on custom-built hardware. All-in-all, the hardware cost ~$1,000. I was able to set up six 6TB drives with OpenZFS’s raidz2 pretty easily using the TrueNAS web UI. I had some trouble managing permissions, but overall the install went great. My computer was backed up using rsync and I had all my files on my own hardware.
Update Failed
However, the first time I clicked the update button in TrueNAS, it destroyed its own network configuration. I was unable to connect to it through SSH or the web app. The data was not lost, but I would have had to re-install from the ISO in order to continue using TrueNAS on the machine.
Instead of doing that, I decided to put Debian on the NAS and build my own solution. It ended up being a much simpler setup (although it did still take a few hours to come up with). The rest of this article is my notes from setting up a NAS on Debian:
OS Install
Install Debian via ISO
- No GNOME
- Yes SSH Server
Enable nomodeset flag from GRUB
- Fixes blank screen on TTY setup
- Edit `/etc/default/grub`
- Add `nomodeset` to `GRUB_CMDLINE_LINUX_DEFAULT`
- Run `update-grub` as root
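After the edit, the relevant line in `/etc/default/grub` should look something like this (the `quiet` flag is a guess at your existing defaults; keep whatever flags you already have):

```shell
# /etc/default/grub -- "quiet" is Debian's stock default; keep your existing flags
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
```

`update-grub` regenerates `/boot/grub/grub.cfg` from this file, so the flag takes effect on the next boot.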
Initial Setup
Install `sudo` and give to unprivileged user
- `apt install sudo`
- `usermod -aG sudo <user>`
Install `htop` for process monitoring
- `apt install htop`
Install `nload` for network monitoring
- `apt install nload`
Ensure system is in proper timezone (important for predictable crontab setup later)
- I often choose the default of ET by mistake
- `timedatectl` - find current time zone
- `timedatectl list-timezones` - list available time zones
- `timedatectl set-timezone <timezone>` - set the time zone
SSH Configuration
Disable password authentication in `/etc/ssh/sshd_config`
- `PasswordAuthentication no`
Port Forward 22 (for external SSH)
- this is router-specific
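For reference, a minimal sketch of the relevant `sshd_config` lines (`ChallengeResponseAuthentication` and `PermitRootLogin` are suggested additions, not part of the original notes):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```

Apply with `sudo systemctl restart ssh` (the Debian unit name is `ssh`, not `sshd`).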
ZFS Setup
Install ZFS
- This installs the “backported” version so the zfs package is closer to bleeding edge
- Add the `contrib` flag to source lists in `/etc/apt/sources.list`
- Add the `deb http://deb.debian.org/debian bullseye-backports main contrib non-free` source to `/etc/apt/sources.list`
- `sudo apt update`
- `sudo apt install linux-headers-amd64`
  - Needed to build the zfsutils-linux package
- `sudo apt install -t bullseye-backports zfsutils-linux`
  - note: `bullseye` is the “codename” of the Debian version
Mount ZFS RAID Pool
- This is coming from an existing ZFS raid-z2 array previously configured using TrueNAS
- Import an existing ZFS Pool
  - `zpool import` - list pools
  - `zpool import <pool-name>` - imports a pool (and mounts it!)
    - Add the `-f` flag if the pool was accessed by another device
  - `zpool status` - view ZFS status
  - `zpool status -x` - simplified “are all pools healthy?”
- Change mountpoint
  - `zfs get mountpoint` - lists all pools’ mountpoints
  - `zfs set mountpoint=/mnt <pool-name>` - sets the mountpoint
- Useful commands
  - `zpool list` - shows total array usage
  - `zfs list` - shows by-dataset usage
Remove TrueNAS `.system` Datasets
- For example, from `zfs list`:
  - pool-michael/.system
  - pool-michael/.system/configs-81963fc7279b4cf49c43e5a8cbe36cdb
- `zfs destroy -r pool-michael/.system`
Sync Setup
Install `rsync`
- `sudo apt install rsync`
Install Sync PC -> NAS
- systemd timers + rsync services on PC
- See systemd-timers-as-cron.md
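As a sketch of what those PC-side units can look like (the unit names, the `nas.local` hostname, and the paths here are illustrative, not my exact config):

```
# ~/.config/systemd/user/backup-to-nas.service
[Unit]
Description=rsync home directory to the NAS

[Service]
Type=oneshot
# -a preserves attributes, -z compresses; %h expands to the home directory
ExecStart=/usr/bin/rsync -az --delete %h/ michael@nas.local:backups/desktop/

# ~/.config/systemd/user/backup-to-nas.timer
[Unit]
Description=Nightly rsync to the NAS

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl --user enable --now backup-to-nas.timer`; `Persistent=true` fires a missed run at the next boot.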
Install Sync NAS -> Family NAS scripts
- `sudo apt install -t bullseye-backports curl` - install `curl` dependency
  - Need backported version likely because of something with the previous zfs install
- `mkdir scripts`
- `mkdir logs`
- Install scripts to `michael`
  - scripts/
    - update-known-hosts.sh
    - update-beefslab-ip.sh
    - sync-to-family.sh
  - `chmod u+x scripts/*`
- ipdir@beefslab.com config
  - `ssh-keygen` on NAS
  - Install `id_rsa.pub` to `beefslab.com:/home/ipdir/.ssh/authorized_keys`
  - `ssh ipdir@beefslab.com` on NAS to get the beefslab.com hosts file entries up to date on NAS
- test out scripts
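For the curious, `sync-to-family.sh` boils down to an rsync-over-ssh call. Here is a hedged sketch (the `FAMILY_NAS` variable, source path, and remote destination are stand-ins, not the real values):

```shell
#!/usr/bin/env bash
# sync-to-family.sh (sketch) -- remote host, user, and paths are placeholders
set -u

sync_to_family() {
    local remote="${FAMILY_NAS:-}"                  # e.g. michael@beefslab.com
    local src="${SYNC_SRC:-/mnt/pool-michael/}"     # trailing slash: copy contents
    local log="${SYNC_LOG:-$HOME/logs/sync-to-family.log}"

    if [ -z "$remote" ]; then
        echo "FAMILY_NAS not set; skipping"
        return 0
    fi

    # -a preserve attributes, -z compress, --delete mirror deletions remotely
    rsync -az --delete -e ssh "$src" "$remote:backups/michael/" >>"$log" 2>&1
}

sync_to_family
```

The guard keeps the script a no-op when the remote isn’t configured, which also makes it safe to dry-run before wiring it into cron.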
Install cronjob tasks
`crontab -e` as `michael` user

```
# min hour dom mon dow command
@reboot sleep 15 && $HOME/scripts/update-beefslab-ip.sh >> $HOME/logs/update-beefslab-ip.log
0 0 * * * $HOME/scripts/update-beefslab-ip.sh >> $HOME/logs/update-beefslab-ip.log
10 0 * * * $HOME/scripts/update-known-hosts.sh >> $HOME/logs/update-known-hosts.log
15 0 * * * $HOME/scripts/sync-to-family.sh
```
Hardening
In one month of running `fail2ban`, I have blocked almost 1,000 IPs of bots / scanners.
- Install `fail2ban` to block IPs of repeated SSH failures
- Install `iptables` and `ufw`
  - `apt install iptables ufw`
- Allow port 22 (ssh) and enable ufw
  - `ufw allow 22`
  - `ufw enable`
- Install `/etc/fail2ban/jail.local`
  - see `fail2ban/jail.local`
- Useful Commands
  - `fail2ban-client status`
  - `fail2ban-client status sshd`
  - `fail2ban-client set sshd unbanip <ip>` - unban an IP
  - `ufw status`
  - `iptables --list`
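The `jail.local` file itself isn’t reproduced here, but a minimal sshd jail looks something like this (the retry count and time windows are illustrative defaults, not my exact values):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1d
```

Reload with `sudo systemctl restart fail2ban`, then confirm the jail is live with `fail2ban-client status sshd`.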