Self-Hosting Journey - Part 6 - Providing Storage with NAS

Welcome back! In our last post, we forged the network backbone with OpenWrt, creating a secure and segmented environment for our services. With the highways built, it’s time to construct the vault. A self-hosted environment is nothing without a safe place to store its data, so today, we’re building the heart of our homelab: a resilient Network Attached Storage (NAS) solution.

My vision for a NAS is simple: a centralized, high-capacity, and reliable storage server that makes accessing my data easy and secure. To achieve this, we’ll lean on two powerful concepts: the 3-2-1 backup rule and the data integrity features of the ZFS filesystem.

The 3-2-1 backup rule is a strategy to protect data:

  • 3 copies of your data (1 primary + 2 backups)
  • 2 different storage types (e.g., external drive, cloud)
  • 1 backup stored offsite (e.g., cloud or remote location)

In this blog post, we’ll walk through how to set up, configure, and organize your data, and how to conveniently share it using a NAS container.

My Storage Implementation

  1. Dedicated VM Storage: My mini-PC includes three SSDs. The first will be allocated for virtual machine storage, ensuring optimal performance and separation from backup and archive workloads.

  2. Redundant Data Archive: The remaining two SSDs will be configured in a ZFS RAID-1 mirror, providing a reliable and fault-tolerant location for snapshots, backups, and archived data. ZFS adds data integrity through self-healing checksums and protection against drive failure.

  3. Convenient Access: To streamline access across my devices and VMs, I’ll set up a dedicated NAS machine (as an LXC container) and configure multiple sharing protocols (NFS, SMB, and Syncthing) to make data accessible across all my devices and VMs.

  4. Offsite Backup: For added resilience, the mirrored ZFS pool will be replicated to an external HDD managed by a Raspberry Pi located offsite, 400 km away. This ensures secure, remote backups in case of local hardware failure or data loss.

Step 1: Setting up the ZFS Storage Pool

The first step takes place on the Proxmox host itself, where I initialized my two data SSDs.

  1. Identify the Drives: Before doing anything, it’s critical to identify the correct drive names (lsblk). In my case, they were /dev/nvme0n1 and /dev/nvme1n1.
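
A quick illustrative check (output will differ per system) helps avoid wiping the wrong disk; the stable /dev/disk/by-id names are also what we'll feed to zpool create later:

# List block devices with size and model to spot the data SSDs
lsblk -o NAME,SIZE,MODEL

# Show the stable by-id names of the NVMe drives
ls -l /dev/disk/by-id/ | grep nvme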

  2. Wipe the Drives: To ensure a clean slate, I zapped all existing partition tables.

sgdisk --zap-all /dev/nvme0n1
sgdisk --zap-all /dev/nvme1n1

  3. Create the ZFS Pool: I then created a new, encrypted ZFS pool in a RAID-1 configuration (mirror). The command below sets up the pool with modern features like LZ4 compression, TRIM for SSD health, and encryption. The encryption key is stored in a file on the host for automatic mounting on boot.

# Create an encryption keyfile
dd if=/dev/urandom of=/root/keyfilepve bs=1 count=32
chmod 600 /root/keyfilepve

# Create the mirrored zpool named 'ssd'
zpool create \
-o ashift=12 \
-o autotrim=on \
-O encryption=on -O keyformat=raw -O keylocation=file:///root/keyfilepve \
-O compression=lz4 \
-O atime=off \
ssd mirror /dev/disk/by-id/<ID-of-drive-1> /dev/disk/by-id/<ID-of-drive-2>
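
Once the pool is created, it's worth confirming the topology and properties before putting data on it; a quick check, using the pool name from the command above:

# Verify the mirror layout and pool health
zpool status ssd

# Confirm compression, encryption, and atime were applied
zfs get compression,encryption,atime ssd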

ZFS ARC Memory Usage and Tuning

By default, ZFS uses up to 50% of system memory for the Adaptive Replacement Cache (ARC). This caching mechanism plays a critical role in I/O performance, so any reduction should be made with care.

A good rule of thumb for ARC sizing is:

2 GiB base + 1 GiB per TiB of usable storage

For example, a system with a 2 TiB ZFS pool should allocate at least 4 GiB for ARC. Keep in mind that ZFS enforces a minimum ARC size of 64 MiB.

To make ARC limits persistent across reboots, add (or modify) the following lines in /etc/modprobe.d/zfs.conf:

options zfs zfs_arc_max=4294967296   # 4 GiB
options zfs zfs_arc_min=1073741824   # 1 GiB

These values are in bytes, so 4 * 2^30 for 4 GiB and 1 * 2^30 for 1 GiB.

⚠️ Important: If zfs_arc_max is less than or equal to zfs_arc_min (which defaults to 1/32 of total system RAM), it will be ignored unless you explicitly set zfs_arc_min to a value less than zfs_arc_max.
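
The modprobe options only take effect at the next boot, but the same values can be applied immediately through sysfs. A minimal sketch (run as root, values matching the config above); on systems that load ZFS from the initramfs, as Proxmox does with root on ZFS, the initramfs also needs to be rebuilt:

# Apply the ARC limits at runtime (values in bytes)
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_min

# With root on ZFS, rebuild the initramfs so the options apply at boot
update-initramfs -u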

Step 2: Organizing Our Data with ZFS Datasets

One of the best features of ZFS is the ability to create “datasets”, which are like smart, independent filesystems within the main pool. This allows us to apply different settings for different types of data.

Here’s the structure I created:

# For personal files from Nextcloud, Immich, etc.
zfs create ssd/archive

# To store Proxmox VM and container backups
zfs create ssd/backup

# For Docker configurations
zfs create ssd/docker

# For general network sharing via Samba
zfs create ssd/share

# For real-time file synchronization across devices
zfs create ssd/sync

# For sensitive data to be encrypted and pushed to the cloud
zfs create ssd/cloud
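
Because each dataset is an independent filesystem, settings can now be tuned per workload. As an illustration (these particular values are hypothetical, not my actual configuration):

# Stronger compression for rarely-read backup data
zfs set compression=zstd ssd/backup

# Cap how much space the sync folder may consume
zfs set quota=200G ssd/sync

# Review the resulting layout and properties
zfs list -o name,used,avail,compression,quota -r ssd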

Step 3: Building the NAS Machine (LXC Container)

With our storage pool ready, we need a machine to manage and serve it. I created a privileged Debian LXC container to act as our NAS. A privileged container is necessary to support NFS, which requires kernel-level access.

After creating the container, I passed the ZFS datasets directly into it by editing its configuration file (/etc/pve/lxc/<ID>.conf) on the Proxmox host:

mp0: /ssd/archive,mp=/ssd/archive
mp1: /ssd/docker,mp=/ssd/docker
mp2: /ssd/share,mp=/ssd/share
mp3: /ssd/sync,mp=/ssd/sync
mp4: /ssd/cloud,mp=/ssd/cloud

This makes the ZFS datasets appear as local directories inside the container, ready to be shared.
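
Equivalently, the mount points can be added with Proxmox's pct tool instead of editing the config file by hand; a sketch, assuming container ID 202 as used later in this post:

# Bind-mount a host directory into container 202
pct set 202 -mp0 /ssd/archive,mp=/ssd/archive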

Step 4: Configuring the Sharing Protocols

Inside the NAS container, I set up three different sharing methods to cover all my use cases.

  1. NFS for Linux Machines: Ideal for sharing data with other VMs and Linux systems on my network. After installing nfs-kernel-server on both the Proxmox host and the NAS container, I configured /etc/exports in the NAS to share the ssd/archive dataset.

  2. SMB/Samba for Cross-Platform Sharing: The universal standard for sharing with Windows, macOS, and Linux. I installed samba, created a dedicated user, and configured a secure share in /etc/samba/smb.conf pointing to the ssd/share/samba directory.

  3. Syncthing for Real-Time Device Sync: A fantastic tool for keeping folders synchronized across my phone, laptop, and other devices. I installed Syncthing and configured it to run as a systemd service, using the ssd/sync dataset as its primary folder.

Configuring the NFS Server

These are the steps to set up and configure an NFS server on the NAS machine.

  1. To enable NFS in an LXC container on Proxmox, first install nfs-kernel-server on the Proxmox host, since NFS relies on kernel modules which are shared with containers.
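
On the Proxmox host:

# NFS relies on the host's kernel server; install it on Proxmox first
apt install nfs-kernel-server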

  2. Next, configure the container as privileged and enable nesting.

This is an example of my /etc/pve/lxc/202.conf file

arch: amd64
cores: 1
features: nesting=1
hostname: nas
memory: 512
mp0: /ssd/archive,mp=/ssd/archive
nameserver: 10.0.20.1
net0: name=eth0,bridge=vmbr1,hwaddr=BC:24:11:EB:E3:DE,ip=dhcp,tag=20,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-202-disk-0,size=8G
searchdomain: home.local
startup: order=2
swap: 512

  3. Then start the container, enter it (e.g. with pct enter 202 from the Proxmox host), and run:

    # Install the NFS server inside the container as well
    apt install nfs-kernel-server

    # Export the shared directory to the 10.0.20.0/24 subnet
    echo "/ssd/archive 10.0.20.0/24(rw,sync,no_subtree_check)" > /etc/exports

    # Adjust the desired permissions on the folder, e.g.
    chown 1000:1000 /ssd/archive

    # Apply changes in /etc/exports
    exportfs -a

    # Reset current and apply new changes in /etc/exports
    exportfs -ra

    # Check shared directories
    exportfs -v

    # Check the status of nfs-kernel-server
    systemctl status nfs-kernel-server

Your container now exposes /ssd/archive via NFS, ready to be mounted by remote machines.

Mounting the NFS Share

These are the steps to mount the NFS share on a client machine.

  1. Install the NFS client package and mount the share.

    # On Debian/Ubuntu
    sudo apt install nfs-common

    # On Fedora
    sudo dnf install nfs-utils

    # Mount the remote NFS share
    sudo mount <nfs-server-ip>:<remote-path> <local-mount-point>
  2. To make the mount persistent across reboots, add the following line to /etc/fstab:

    10.0.20.2:/ssd/archive /mnt/archive nfs4 rw,tcp,intr,noatime 0 0

Replace the IP address, remote path, and local mount point with your actual setup.
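
To sanity-check the export from the client side (using the example address above):

# List the exports offered by the NAS
showmount -e 10.0.20.2

# Confirm the share is mounted and writable
df -h /mnt/archive
touch /mnt/archive/.write-test && rm /mnt/archive/.write-test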

Setting up Samba SMB Server

These are the steps to set up and configure an SMB server on the NAS machine.

  1. Install Samba on the container
sudo apt install samba

  2. Create a Dedicated Samba User and Group

# Create a group for Samba users
sudo groupadd sambausers

# Create a system user (no home, no shell)
sudo useradd -M -s /sbin/nologin samba

# Add the user to the Samba group
sudo usermod -aG sambausers samba

# Add the user to the Samba password database
sudo smbpasswd -a samba

# Enable the Samba account
sudo smbpasswd -e samba

Useful Samba User Management Commands

# Remove a Samba user (does not delete the Unix user)
sudo smbpasswd -x samba

# List all Samba users
sudo pdbedit -L

  3. Configure Samba: /etc/samba/smb.conf

Edit your Samba configuration file:

[global]
workgroup = WORKGROUP
server string = Samba Server
server role = standalone server

map to guest = Bad User
guest account = nobody

log file = /var/log/samba/log.%m
max log size = 1000
panic action = /usr/share/samba/panic-action %d

server min protocol = SMB2
server max protocol = SMB3

[share]
comment = Network Share used for Apps Data
path = /ssd/share/samba
guest ok = no
browseable = yes
writable = yes
valid users = @sambausers
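
Before restarting the service, the configuration can be validated with Samba's built-in checker:

# Parse smb.conf and report syntax errors or questionable settings
testparm
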
  4. Create and Set Permissions for the Shared Folder

    # Create the shared folder
    sudo mkdir -p /ssd/share/samba

    # Set group ownership and permissions
    sudo chown root:sambausers /ssd/share/samba
    sudo chmod 2770 /ssd/share/samba

The 2770 permission ensures that:

  • Only the owner and group can read/write/execute
  • New files inherit the group (setgid bit)

  5. Restart Samba

    sudo systemctl restart smbd

You’re now ready to connect to your Samba share from other devices using the samba user credentials.
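
A quick way to verify the share from another Linux machine is smbclient (shipped in the smbclient package); replace nas with your server's hostname or IP:

# List the shares visible to the samba user
smbclient -L //nas -U samba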

Mounting the SMB Share
  1. Install the cifs package

    apt install cifs-utils
  2. Mount the share

mount -t cifs //server/share ~/sambashare -o username=your_samba_username,password=your_samba_password,uid=1000,gid=1000,iocharset=utf8,vers=3.0

If you want to set up an auto-mount, store the credentials in a file:

~/.smbcredentials

username=your_samba_username
password=your_samba_password

Secure the credentials file by setting restrictive permissions:

chmod 600 ~/.smbcredentials

Then append the following line to the /etc/fstab file:

//server/share /home/username/sambashare cifs credentials=/home/username/.smbcredentials,uid=username,gid=groupname,iocharset=utf8,vers=3.0 0 0
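
After saving /etc/fstab, the entry can be tested without a reboot:

# Mount everything listed in /etc/fstab and surface any errors
sudo mount -a

# Confirm the share is attached
findmnt ~/sambashare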

Setting Up a Synced Folder with Syncthing

To enable continuous file synchronization via Syncthing, start by setting up the package repository and creating a dedicated system user.

  1. Add Syncthing Package Repository

First, add the Syncthing signing key and configure the APT source list:

sudo mkdir -p /etc/apt/keyrings
sudo curl -L -o /etc/apt/keyrings/syncthing-archive-keyring.gpg https://syncthing.net/release-key.gpg

echo "deb [signed-by=/etc/apt/keyrings/syncthing-archive-keyring.gpg] https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list

Install Syncthing:

sudo apt update
sudo apt install syncthing

  2. Create a Dedicated Syncthing User and Directory

# Create the sync folder
sudo mkdir -p /ssd/sync/syncthing

# Create a dedicated user without login privileges
sudo useradd -M -s /sbin/nologin syncthing

# Assign ownership of the sync folder
sudo chown syncthing:syncthing /ssd/sync/syncthing

# Set the folder as the user's home directory
sudo usermod -d /ssd/sync/syncthing syncthing

  3. Configure the Systemd Service

Create a systemd unit file at /etc/systemd/system/syncthing@.service:

(This file is based on the official systemd unit file)

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target
StartLimitIntervalSec=60
StartLimitBurst=4

[Service]
User=%i
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart --logflags=0
Restart=on-failure
RestartSec=1
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

# Hardening options
ProtectSystem=full
PrivateTmp=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
NoNewPrivileges=true

# Optional: allow Syncthing to change file ownership
# AmbientCapabilities=CAP_CHOWN CAP_FOWNER

[Install]
WantedBy=multi-user.target

  4. Enable and Start the Service

Enable and start Syncthing for the dedicated user:

# Reload systemd so it picks up the new unit file
sudo systemctl daemon-reload

sudo systemctl enable syncthing@syncthing.service
sudo systemctl start syncthing@syncthing.service

# Check service status
sudo systemctl status syncthing@syncthing.service

  5. Access the Web Interface

By default, Syncthing’s web UI binds to localhost only. To access it remotely, set up an SSH port forward:

# Replace user@nas with your actual SSH user and NAS hostname/IP
ssh -L 8022:localhost:8384 user@nas

Then navigate to http://localhost:8022/ in your local browser.

Syncthing Web Page

This allows secure and quick access to the Syncthing interface without exposing it to the wider network.

Conclusion

In this post, we’ve laid the groundwork for our storage system. Our data now lives on a resilient, self-healing, and mirrored ZFS pool, with seamless access from any device on the network.

In the next post, we’ll take it a step further, organizing and automating backups and snapshots to protect our data against nearly any threat.

Stay tuned! 🚀