Designing a 5-Node OKD Cluster

March 2, 2026

The previous post laid out the goals: production-grade bare-metal OKD, distributed storage, VLAN segmentation, virtualization. Now I turn those into concrete specs.

No part numbers, no pricing, no vendor links. Requirements become decisions, decisions get justified. The BOM comes later.

Important (Design constraints)
  • Budget. Every component needs to justify its cost — not cheapest, but right for the purpose.
  • Physical space. 19-inch rack, not a server room. Reasonable footprint, no jet engine noise.
  • Power. Five nodes running 24/7. Desktop-class CPUs at roughly 65 W versus server-class Xeons at 150 W+ — across five machines, the difference adds up. In Europe, electricity isn’t cheap.
  • Used market. The Polish used market (Allegro.pl) has plenty of ex-corporate machines. What’s actually available shapes every hardware choice.
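To make the power constraint concrete, here is a back-of-the-envelope sketch of the annual electricity cost, assuming illustrative figures only: 65 W and 150 W average draw per node, and a hypothetical tariff of €0.30/kWh (not from the original post).

```python
# Back-of-the-envelope annual electricity cost for 5 nodes running 24/7.
# All figures are illustrative assumptions, not measured values.
NODES = 5
HOURS_PER_YEAR = 24 * 365
PRICE_EUR_PER_KWH = 0.30  # assumed European tariff

def annual_cost_eur(watts_per_node: float) -> float:
    """Annual cost in EUR for the whole cluster at a given per-node draw."""
    kwh_per_year = NODES * watts_per_node * HOURS_PER_YEAR / 1000
    return kwh_per_year * PRICE_EUR_PER_KWH

desktop = annual_cost_eur(65)    # desktop-class CPUs
server = annual_cost_eur(150)    # server-class Xeons

print(f"desktop-class: ~{desktop:.0f} EUR/yr")   # ~854 EUR/yr
print(f"server-class:  ~{server:.0f} EUR/yr")    # ~1971 EUR/yr
print(f"difference:    ~{server - desktop:.0f} EUR/yr")  # ~1117 EUR/yr
```

Even under these rough assumptions, the gap is on the order of a thousand euros per year — enough to dominate the cost of used desktop-class hardware over its lifetime.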

The design breaks down into three areas, each covered in its own subpost:

Summary (Design areas)
  1. Compute — How many nodes, deployment phases, CPU and memory sizing, chassis requirements, failure domains, and node roles.
  2. Network — VLAN segmentation, 10 Gbps storage networking, and why the network is the foundation everything else depends on.
  3. Storage — Tiered Ceph with fast NVMe and slow HDD pools, replication strategies, and Rook-Ceph.

Start with the compute design to understand the physical foundation of the cluster.