Self-Hosting Journey - Part 4 - Building the Foundation with Proxmox

The planning is done, the hardware has arrived, and the excitement is real. It’s time to move from blueprint to build. The very first step is to install the hypervisor, the operating system that will manage all our virtual machines and containers. As planned, we’ll be using Proxmox VE.

In this post, I’ll walk you through preparing the installer, installing Proxmox, and performing the essential post-installation tasks to create a robust and flexible foundation for our homelab.

Step 1: Preparing the Proxmox Installer

Before we can do anything on the new mini-PC, we need to create a bootable USB drive on another computer.

  1. Download the Proxmox ISO

    Visit the Proxmox VE download page and copy the ISO link. In this example, I’ll use version 8.3-1.

    cd /tmp
    wget https://enterprise.proxmox.com/iso/proxmox-ve_8.3-1.iso
  2. Verify the Download (Optional but Recommended)

    shasum -a 256 proxmox-ve_8.3-1.iso

    Ensure the output matches the checksum listed on the Proxmox website; a one-liner that does the comparison for you is sketched right after this list.

  3. Create a Bootable USB

    Insert a USB drive (at least 2GB), identify it using lsblk, and write the ISO with:

    sudo umount /dev/sdX1  # Replace sdX1 with your USB partition
    sudo dd bs=1M conv=fdatasync if=proxmox-ve_8.3-1.iso of=/dev/sdX

    ⚠️ Make sure to double-check the device name (e.g., /dev/sdX) to avoid overwriting the wrong disk.

  4. Boot from USB

    Plug the USB into your target machine, boot from it, and begin the installation.
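
Rather than eyeballing two long hash strings in step 2, you can let shasum do the comparison itself. A minimal sketch, assuming you paste the SHA-256 value published on the Proxmox download page in place of the placeholder (note the two spaces between hash and filename):

echo "<published-sha256>  proxmox-ve_8.3-1.iso" | shasum -a 256 -c -

If the hash matches, the command prints proxmox-ve_8.3-1.iso: OK; otherwise it reports a failure.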

Step 2: Installing the Proxmox OS

The installation process is straightforward, but there is one key decision to make: which filesystem to use. The rest of the setup involves setting your location, password, and network configuration.

Proxmox Setup

Choosing the Filesystem

For less experienced readers, here’s a brief and simplified explanation of what a filesystem is.

Whenever you format an entire disk or a partition, such as when installing a new operating system, you’re asked to choose a filesystem for that partition. A filesystem is the method an operating system uses to organize and store data on a storage device like a hard drive (HDD), solid-state drive (SSD), or USB stick. It determines how files are named, saved, accessed, and managed.

The choice of filesystem can affect several important aspects, including:

  • Performance
  • Compatibility
  • Security
  • Limits on file and volume size

Example: If you’re using a USB drive that needs to work with both Windows and macOS, FAT32 or exFAT is a good choice because of its broad compatibility. On the other hand, if you’re setting up a Linux server, ext4 is usually a better option due to its reliability and performance.

The most common filesystems for computer workstations are:

  • FAT32 – Simple and widely compatible (Windows, macOS, Linux), but limited to files no larger than 4 GB.
  • NTFS – Used by Windows; supports large files and file permissions.
  • ext4 – Commonly used on Linux; reliable and fast.
  • APFS – Used by macOS; optimized for SSDs.

Since I am working only with Linux systems and also plan to create a NAS solution, I need to evaluate more advanced Linux filesystems that are optimized for specific purposes.

The following table summarizes key information about the most common Linux-compatible filesystems.

| Feature | EXT4 | XFS | BtrFS | OpenZFS |
|---|---|---|---|---|
| Online Enlarge | ✅ | ✅ | ✅ | ✅ |
| Online Shrink | ❌ | ❌ | ✅ | ❌ |
| Offline Enlarge | ✅ | ❌ | ❌ | ❌ |
| Offline Shrink | ✅ | ❌ | ❌ | ❌ |
| Compression | ❌ | ❌ | ✅ | ✅ |
| Encryption | ✅ (fscrypt) | ❌ | ❌ | ✅ |
| Checksum | ❌ (only on metadata) | ❌ (only on metadata) | ✅ | ✅ |
| Snapshots | ❌ | ❌ | ✅ | ✅ |
| Deduplication | ❌ | ❌ | ✅ | ✅ |
| Journaling | ✅ | ✅ | CoW | CoW |
| RAID Support | ❌ | ❌ | Basic | Advanced |
| Limitations | Few | Few | Needs regular scrub/rebalance; unstable RAID 5/6 | High RAM usage; out-of-tree kernel module; max usage limit 94% |
| Best For | General purpose | Large files | Modern Linux distros | Storage |

Linux Filesystems Comparison

My Choice

As I outlined in my planning post, I have three SSDs.

  • For the Proxmox OS drive: I chose EXT4. It’s simple, stable, and perfectly suited for the hypervisor’s operating system.

  • For the two data drives: I will configure these later with ZFS in a RAID-1 mirror. ZFS offers advanced features like data integrity, snapshots, and encryption, which are ideal for a reliable NAS.

Once the installation finishes, the system will reboot. You can now access the Proxmox web interface from another computer on the same LAN at:

https://your-ip-address:8006
Proxmox Login screen
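
If you are not sure which address the host ended up with, you can check it from the Proxmox console (assuming the default bridge name vmbr0):

ip -4 addr show vmbr0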

Step 3: Essential Post-Installation Tweaks

With Proxmox running, a few initial configuration steps will set us up for success.

1. Configure Community Repositories: By default, Proxmox uses enterprise repositories that require a paid subscription. For a homelab, we’ll switch to the free, “no-subscription” repositories.

  • In the Proxmox UI, go to Datacenter > pve > Updates > Repositories.
  • Disable the pve-enterprise repository.
  • Click Add, select the No-Subscription repository, and add it.
  • Reload the package list to apply the changes.
Proxmox screen showing final state for repositories settings
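
If you prefer the shell over the UI, the same change boils down to two repository files. A sketch, assuming Proxmox VE 8 on Debian bookworm (file names and the suite differ on other releases):

# Disable the enterprise repository
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repository and refresh the package lists
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update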

2. Set Up Network Bridges: As planned, we’ll use an OpenWRT VM to manage our network. To facilitate this, we need two “virtual switches” (Linux Bridges): one for the public internet (WAN) and one for our private network (LAN).

  • vmbr0 (WAN): Proxmox creates this bridge by default during installation. We’ll connect this to our physical internet connection.
  • vmbr1 (LAN): We need to create this one. Go to pve > System > Network, click Create > Linux Bridge, name it vmbr1, and check the VLAN Aware box. This bridge will handle all our internal VM traffic.
Setting Up Network Bridge VMBR1 for LAN

This setup gives our future OpenWRT VM the connections it needs to route traffic between our home environment and the outside world.

Window showing networking final configuration
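
For reference, the final /etc/network/interfaces on the Proxmox host looks roughly like the sketch below; the physical NIC name and the addresses are placeholders and will differ on your hardware:

auto lo
iface lo inet loopback

iface enp1s0 inet manual

# WAN bridge created by the installer, attached to the physical NIC
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

# LAN bridge for internal VM traffic: VLAN aware, no physical port
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094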

Step 4: Building Our VM Factory with Cloud-Init

To speed up VM creation, we’ll create templates. Instead of installing an OS from scratch every time, we can clone a template and have a new machine running in seconds. The magic behind this is Cloud-Init, a tool that automates the initial setup (hostname, users, SSH keys, etc.).

I’ll cover two ways to create cloud-init templates:

Method 1: Manual ISO-Based Installation

Step 1: Download the ISO

Visit the Debian download page and grab the netinst ISO.

Debian download official webpage

For example:

https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.8.0-amd64-netinst.iso
Debian installer sha-512 hash

Save the SHA-512 checksum and upload the ISO in Proxmox via:

  • Datacenter > pve1 > local(pve1) > ISO Images > Download from URL

Paste the ISO URL, click Query URL, then select the hash algorithm and paste the SHA-512 value for automatic verification.

Proxmox window showing download completed
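
Alternatively, you can fetch the ISO directly from the Proxmox shell into the ISO directory of the local storage (assuming the default storage layout) and check the hash by hand:

cd /var/lib/vz/template/iso
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.8.0-amd64-netinst.iso

# Compare the output with the SHA-512 published on the Debian download page
sha512sum debian-12.8.0-amd64-netinst.iso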

Step 2: Create a VM

Create a new VM manually as follows:

  1. Choose an ID. I will use 8000 for templates.

  2. Choose a name for the template.

    Proxmox VM creation step 1
  3. Select the image you previously downloaded as the ISO image.

    Proxmox VM creation step 2
  4. Select “q35” as the machine type, enable “QEMU Agent”, and optionally choose UEFI BIOS (as I did).

    Proxmox VM creation step 3
  5. Select a disk size. If you’re using SSDs, choose “SSD emulation”, and leave the other options at their defaults.

    Proxmox VM creation step 4
  6. Select “host” as the CPU type and leave the other options at their defaults.

    Proxmox VM creation step 5
  7. Choose the desired memory amount and leave the other options at their defaults.

    Proxmox VM creation step 6
  8. Leave vmbr0 selected for now; we will modify it later after setting up the VM.

    Proxmox VM creation step 7
  9. Finally, click “Finish” to create the VM.

    Proxmox VM creation step 8

Alternatively, you can achieve the same result with these commands on the Proxmox host:

# Create the VM: q35 machine, QEMU agent, host CPU, 2 cores, 2 GB RAM, 20 GB disk, Debian ISO attached as CD-ROM
qm create 8000 --name debian12-cloudinit --machine q35 --agent enabled=1 --cpu host --cores 2 --memory 2048 --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:20,format=raw --ide2 local:iso/debian-12.8.0-amd64-netinst.iso,media=cdrom --boot order=scsi0
# Switch to UEFI (OVMF) and add the EFI disk it requires
qm set 8000 --bios ovmf --efidisk0 local-lvm:1,format=raw,efitype=4m,pre-enrolled-keys=1

Install Debian as usual and then prepare the system for cloud-init:

sudo apt update
sudo apt install cloud-init
# Blank the root password; credentials will be handled by Cloud-Init from now on
sudo passwd -d root
# Clear the machine ID so that every clone generates a unique one on first boot
sudo truncate -s 0 /etc/machine-id /var/lib/dbus/machine-id
# Reset the cloud-init state so it runs again on the next boot
sudo cloud-init clean

Step 3: Finalize the Template

  1. Shut down the VM. Go to the “Hardware” tab, select the CD/DVD drive holding the installation ISO, and click “Remove” to delete it; this frees the ide2 slot for the Cloud-Init drive.

    Proxmox Template finalization step 1
  2. After removal, the setup should look like this:

    Proxmox Template finalization step 2
  3. Next, go to Hardware > Add > CloudInit Drive.

    Proxmox Template finalization step 3
  4. Select the ide2 slot and choose “local-lvm” as the storage location, then click “Add”.

  5. Once this is done, the configuration is complete. You can now right-click on the VM and select “Convert to Template”.

    Proxmox Template finalization step 5
  6. Now, go to the Cloud-Init section of the VM template to edit your settings as desired.

    Proxmox Template finalization step 6

Now you can clone the template anytime and Proxmox will inject the desired settings at boot.

Proxmox Template finalization step 7
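
Cloning also works from the shell. A quick sketch, where the new VM ID 101 and its name are arbitrary examples:

# Full clone of the template, then set the Cloud-Init network config and boot it
qm clone 8000 101 --name debian-test --full
qm set 101 --ipconfig0 ip=dhcp
qm start 101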

Method 2: Using Prebuilt Cloud Images

A faster way is to use official cloud-ready QCOW2 images. Here’s how to do it for Debian and Fedora:

Debian Cloud Image

Go to the official Debian Cloud Image website and copy the URL to download the QCOW2 image file.

The Debian Cloud Image on Official Website

Then, on the Proxmox host, run:

cd /tmp
wget https://cdimage.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2

# Create the VM shell, import the cloud image as its disk (local-lvm stores it as raw), and grow it to 20 GB
qm create 8000 --name debian12-cloudinit --agent enabled=1 --cpu host --cores 1 --memory 2048 --net0 virtio,bridge=vmbr1,tag=20 --scsihw virtio-scsi-pci --machine q35
qm importdisk 8000 debian-12-generic-amd64.qcow2 local-lvm
qm set 8000 --scsi0 local-lvm:vm-8000-disk-0
qm disk resize 8000 scsi0 20G

# Boot from the imported disk with UEFI, attach the Cloud-Init drive, and convert the VM into a template
qm set 8000 --boot order=scsi0 --bios ovmf --efidisk0 local-lvm:1,format=raw,efitype=4m,pre-enrolled-keys=1 --ide2 local-lvm:cloudinit
qm template 8000
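
Optionally, before the final qm template step you can also preset some Cloud-Init values on the VM, so that every clone inherits them (the user name and key path below are placeholders):

qm set 8000 --ciuser debian --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp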

Fedora Cloud Image

The same procedure can be followed for other distributions such as Fedora or Ubuntu. For example, here’s how it works with Fedora:

Get the official Fedora Cloud Image download link:

The Fedora Cloud Image on Official Website

Then, on the Proxmox host, run:

cd /tmp
wget https://download.fedoraproject.org/pub/fedora/linux/releases/41/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-41-1.4.x86_64.qcow2

qm create 8001 --name fedora41-cloudinit --agent enabled=1 --cpu host --cores 1 --memory 2048 --net0 virtio,bridge=vmbr1,tag=20 --scsihw virtio-scsi-pci --machine q35
qm importdisk 8001 Fedora-Cloud-Base-Generic-41-1.4.x86_64.qcow2 local-lvm
qm set 8001 --scsi0 local-lvm:vm-8001-disk-0
qm disk resize 8001 scsi0 20G
qm set 8001 --boot order=scsi0 --bios ovmf --efidisk0 local-lvm:1,format=raw,efitype=4m,pre-enrolled-keys=1 --ide2 local-lvm:cloudinit
qm template 8001

These templates are ready to be cloned and launched with minimal setup.

Step 5: Securing Access with SSH Keys

Password-based logins are vulnerable to brute-force attacks. SSH keys are the industry standard for secure remote access.

Generate a Secure SSH Key

On your personal computer (not the server), generate an ED25519 key, which is modern, fast, and secure.

ssh-keygen -t ed25519 -a 100 -C "your-identifier"
  • -t ed25519: Use the modern, secure ED25519 algorithm.
  • -a 100: Apply 100 rounds of the key derivation function to harden the passphrase protection of the private key.
  • -C: Add a comment to identify the key (like your email or device name).

It is recommended to protect the key with a strong passphrase to safeguard it in case the device is stolen or compromised.

You’ll get two files:

  • ~/.ssh/id_ed25519 (your private key, keep it secret!)
  • ~/.ssh/id_ed25519.pub (your public key, share it freely)

Deploy Your Key

Copy the contents of your public key (id_ed25519.pub). In the Proxmox UI, go to your VM template’s Cloud-Init tab and paste the public key into the “SSH public key” field. Now, any VM cloned from this template will automatically trust your key, allowing you to log in securely without a password.
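
For machines that were not provisioned through Cloud-Init (the Proxmox host itself, for instance), the classic way of deploying the key still works; the user and address below are placeholders, and it assumes password login is still enabled at this point:

ssh-copy-id -i ~/.ssh/id_ed25519.pub root@192.168.1.10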

Configuring the SSH Client (~/.ssh/config)

To simplify connections and enforce good defaults, create a config file:

vim ~/.ssh/config

Here’s a sample:

Host *
ForwardAgent no
ForwardX11 no
StrictHostKeyChecking yes
UserKnownHostsFile ~/.ssh/known_hosts
LogLevel INFO
ServerAliveInterval 60
ServerAliveCountMax 3
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256

Host pve-vm
HostName 192.168.1.100
User user
IdentityFile ~/.ssh/id_ed25519
IdentitiesOnly yes

This lets you SSH into your VM with:

ssh pve-vm

If the key is not defined in the configuration file, you can specify it explicitly with the -i flag:

ssh -i ~/.ssh/id_ed25519 user@machine

Alternatively, you can let the ssh-agent supply your keys automatically. If you want to use the ssh-agent, make sure to remove the IdentitiesOnly yes option from the config file.

ssh-agent

The ssh-agent, shipped with the openssh-client package and installed by default on almost every Linux distribution, is a background daemon that securely stores your decrypted SSH private keys in memory. It allows you to enter your passphrase once and reuse the keys for subsequent SSH sessions without retyping it.

While the ssh-agent is running, it keeps the loaded keys in memory and exposes a Unix socket through which SSH clients can use them.

The following are some common commands to manage the ssh-agent.

# Start the agent; this sets the SSH_AUTH_SOCK and SSH_AGENT_PID environment variables
eval "$(ssh-agent -s)"

# List the fingerprints of the loaded keys
ssh-add -l

# List the public keys of the loaded keys
ssh-add -L

# Add private key to the agent
ssh-add ~/.ssh/id_ed25519

# Remove private key from the agent
ssh-add -d ~/.ssh/id_ed25519

# Remove all private keys from the agent
ssh-add -D

# Stop the agent
eval "$(ssh-agent -k)"

Another useful feature of ssh-agent is agent forwarding, which comes in handy when you need to log in to a remote host through a jump host using an SSH key. With it, you avoid storing your SSH key on the jump host. Agent forwarding works by forwarding access to your local ssh-agent's Unix socket to the remote host: if you initiate an SSH connection from there, the remote host can relay signing requests back to your local agent.

To enable agent forwarding, use the -A option when connecting via SSH, or enable it in the configuration file by adding ForwardAgent yes for a specific host.
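
For the configuration-file route, a per-host sketch (the host alias and address are placeholders):

Host jumphost
    HostName 203.0.113.10
    User user
    ForwardAgent yes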

Pay particular attention when using agent forwarding: use it only with trusted machines. The forwarded Unix socket can be abused if the jump host is shared with other users or has been compromised. An attacker could use your keys (via the forwarded socket) to authenticate elsewhere, even though they cannot extract the private key.

Additionally, there was a notable vulnerability, CVE-2023-38408, which was patched in OpenSSH 9.3p2 and later. This vulnerability, related to OpenSSH’s forwarding via PKCS#11, could allow remote code execution on your local machine.

Conclusion

We’ve successfully laid the foundation. Proxmox is installed, our network is prepped for advanced configuration, and we have a streamlined process for deploying new virtual machines. The server is no longer just a box of components; it’s a living, breathing platform ready for action.

In the next post, we’ll tackle the next major piece of our architecture: networking. I’ll show you how to install and configure OpenWRT as a virtual router inside Proxmox, giving us fine-grained control over our digital domain with VLANs, firewalls, and more.
Stay tuned! 🚀