Arvandor
Production-grade infrastructure-as-code for running services on Proxmox with enterprise HA patterns.
Overview
Arvandor provides a complete infrastructure stack:
- Terraform - VM provisioning on Proxmox (Linux and Windows)
- Ansible - Configuration management
- Nebula - Encrypted overlay network
- Active Directory - Windows domain services (hybrid on-prem/cloud)
- Vault - Secrets management (3-node Raft cluster)
- PostgreSQL - Database (3-node Patroni + etcd)
- Valkey - Cache/queue (3-node Sentinel)
- Garage - S3-compatible storage (3-node cluster)
Architecture
┌─────────────────────────────────────────────────────────────────────────┐
│ Proxmox Host │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Management │ │ Services │ │ Data │ │ Workloads │ │
│ │ 1000-1999 │ │ 2000-2999 │ │ 3000-3999 │ │ 4000-4999 │ │
│ │ │ │ │ │ │ │ │ │
│ │ DNS, Caddy │ │ Vault │ │ PostgreSQL │ │ Your Apps │ │
│ │ Lighthouse │ │ Gitea │ │ Valkey │ │ │ │
│ │ AD (DC, CA) │ │ │ │ Garage │ │ │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │ │
│ └────────────────┴────────────────┴────────────────┘ │
│ │ │
│ Nebula Overlay (10.10.10.0/24) │
└─────────────────────────────────────────────────────────────────────────┘
Quick Start
1. Prerequisites
- Proxmox VE host
- Arch Linux VM template (VMID 9000; an example build is sketched after this list)
- Windows Server 2025 sysprepped VM template (VMID 10000, optional)
- Terraform, Ansible installed locally
- Nebula binary for certificate generation
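For reference, the Arch Linux cloud-init template (VMID 9000) can be built on the Proxmox host roughly as follows; the image URL and the local-lvm storage name are examples, so substitute your own:

# build the Arch cloud-init template on the Proxmox host (sketch; adjust image URL and storage)
wget https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
qm create 9000 --name arch-template --memory 2048 --cores 2 --net0 virtio,bridge=vmbr1 --scsihw virtio-scsi-pci
qm importdisk 9000 Arch-Linux-x86_64-cloudimg.qcow2 local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0 --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket --agent enabled=1
qm template 9000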
2. Configure
# Clone repository
git clone <repo-url> arvandor
cd arvandor
# Configure Terraform
cp terraform/terraform.tfvars.example terraform/terraform.tfvars
vim terraform/terraform.tfvars
# Configure Ansible
cp ansible/inventory.ini.example ansible/inventory.ini
vim ansible/inventory.ini
# Generate Nebula CA
cd nebula
nebula-cert ca -name "Arvandor CA"
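After the CA exists, sign a certificate for each host with its overlay IP and Nebula security groups. The host name, address, and group below are placeholders, and the exact output layout under nebula/configs/ may differ:

# run from the nebula/ directory so ca.crt and ca.key are picked up
nebula-cert sign -name "dns01" -ip "10.10.10.2/24" -groups "infrastructure"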
3. Provision
# Create VMs
cd terraform
terraform init
terraform plan
terraform apply
# Bootstrap VMs (in order)
cd ../ansible
ansible-playbook -i inventory.ini playbooks/bootstrap.yml
ansible-playbook -i inventory.ini playbooks/security.yml
ansible-playbook -i inventory.ini playbooks/nebula.yml
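A quick way to confirm every inventory host is reachable before running the later playbooks is a standard Ansible ad-hoc ping (not a playbook from this repo):

ansible -i inventory.ini all -m ping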
4. Deploy Services
# DNS server
ansible-playbook -i inventory.ini playbooks/dns.yml
# PostgreSQL HA cluster
ansible-playbook -i inventory.ini playbooks/postgres-ha.yml
# Valkey Sentinel
ansible-playbook -i inventory.ini playbooks/valkey-sentinel.yml
# Garage S3
ansible-playbook -i inventory.ini playbooks/garage.yml
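Each cluster can then be checked with its own tooling. These are the upstream CLIs rather than repo playbooks, and the Patroni config path is an assumption:

# PostgreSQL node: Patroni cluster membership and replication roles
patronictl -c /etc/patroni/patroni.yml list
# Valkey node: ask Sentinel which masters it monitors
valkey-cli -p 26379 SENTINEL masters
# Garage node: cluster layout and node health
garage status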
5. Windows AD Infrastructure (Optional)
DC01 is provisioned manually (the first domain controller bootstraps the forest). CA01 and RDS01 are provisioned via Terraform from a sysprepped Windows Server template.
# Create CA and RDS VMs from Windows template
cd terraform
terraform apply
# After OOBE, on each Windows VM:
# - Set static IP and DNS (point to DC01)
# - Join domain: Add-Computer -DomainName "yourdomain.internal" -Restart
# - Install roles:
# CA01: Install-WindowsFeature AD-Certificate-Services -IncludeManagementTools
# RDS01: Install-WindowsFeature RDS-Session-Host,FS-FileServer -IncludeManagementTools
Nebula runs on Windows as a service, providing the same encrypted overlay connectivity as on the Linux VMs. Install the Windows Nebula binary, sign a host certificate, and register it as a service:
nebula.exe -service install -config C:\nebula\config.yml
Start-Service nebula
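A minimal C:\nebula\config.yml for a Windows member might look like the sketch below. The lighthouse overlay IP (10.10.10.1), file paths, adapter name, and firewall policy are assumptions to adapt to your deployment:

# C:\nebula\config.yml (sketch; IPs, paths, and groups are assumptions)
pki:
  ca: C:\nebula\ca.crt
  cert: C:\nebula\host.crt
  key: C:\nebula\host.key
static_host_map:
  "10.10.10.1": ["<lighthouse-ip>:4242"]
lighthouse:
  am_lighthouse: false
  hosts:
    - "10.10.10.1"
listen:
  host: 0.0.0.0
  port: 4242
punchy:
  punch: true
tun:
  dev: nebula1
firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      groups:
        - admin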
Directory Structure
arvandor/
├── terraform/ # VM provisioning (Linux + Windows)
│ ├── modules/vm/ # Reusable VM module (os_type: linux/windows)
│ ├── management.tf # DNS, Caddy, AD (DC, CA, RDS)
│ ├── services.tf # Vault, Gitea
│ └── data.tf # PostgreSQL, Valkey, Garage
├── ansible/ # Configuration management
│ ├── playbooks/ # Core playbooks
│ ├── templates/ # Jinja2 templates
│ └── vault/ # Ansible Vault secrets
├── nebula/ # Overlay network
│ └── configs/ # Per-host certificates
├── network/ # Host networking
└── docs/ # Documentation
Network Design
Two-Network Model
| Network | CIDR | Purpose |
|---|---|---|
| Bridge (vmbr1) | 192.168.100.0/24 | Provisioning only |
| Nebula | 10.10.10.0/24 | All application traffic |
VMs accept traffic only from the Proxmox host (for Ansible) and from the Nebula overlay, so application traffic stays isolated even if an attacker gains access to the bridge network.
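In practice this means each VM's input policy is default-deny with two narrow exceptions, roughly like the iptables sketch below; the repo's playbooks manage the actual rules, and the host address 192.168.100.1 and interface name nebula1 are assumptions:

# illustrative only; the playbooks apply the real firewall rules
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i nebula1 -j ACCEPT
iptables -A INPUT -s 192.168.100.1 -p tcp --dport 22 -j ACCEPT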
Security Groups (Nebula)
| Group | Purpose |
|---|---|
| admin | Full access (your devices) |
| infrastructure | Core services |
| projects | Application workloads |
| games | Isolated game servers |
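These groups are embedded in each host certificate (see the nebula-cert example in the Quick Start) and enforced by every node's Nebula firewall. As a sketch, a PostgreSQL node could admit admins broadly and core services only on the database port; the exact per-service policy shown here is illustrative:

# inbound section of a data-tier host's Nebula config (sketch)
firewall:
  inbound:
    - port: any
      proto: icmp
      host: any
    - port: any
      proto: any
      groups:
        - admin
    - port: 5432
      proto: tcp
      groups:
        - infrastructure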
Windows AD Integration
The VM module supports both Linux and Windows VMs via the os_type variable. Windows VMs use UEFI (OVMF) firmware and the q35 machine type, and skip cloud-init initialization.
| VM | VMID | Role | Provisioning |
|---|---|---|---|
| dc01 | 1003 | Domain Controller | Manual (forest bootstrap) |
| ca01 | 1005 | Certificate Authority | Terraform + manual role install |
| rds01 | 1006 | Remote Desktop + File Server | Terraform + manual role install |
Design: DC01 is the only manually provisioned Windows VM (same pattern as the Nebula lighthouse). It bootstraps the AD forest, after which all other Windows VMs can be domain-joined. Nebula provides encrypted connectivity for AD traffic (Kerberos, LDAP, DNS) without exposing ports to the internet.
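As a sketch, a Windows VM defined in management.tf might call the module like this; os_type is the documented switch, while the other variable names are assumptions about the module's interface:

# management.tf (sketch; only os_type is a documented variable)
module "ca01" {
  source   = "./modules/vm"
  name     = "ca01"
  vmid     = 1005
  os_type  = "windows"   # selects UEFI (OVMF), q35, and skips cloud-init
  template = 10000       # sysprepped Windows Server 2025 template VMID
  cores    = 4
  memory   = 8192
}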
Documentation
- Getting Started - Detailed setup guide
- Architecture - Design decisions
- Provisioning Guide - Adding new VMs
License
MIT