Hardware for Self-Hosting: A Low Time Preference Approach to Digital Infrastructure
We obsess over monthly cloud bills while refusing to spend $500 on hardware that could run for a decade. This inversion reveals something deeper than poor financial planning—it shows how thoroughly we’ve internalized the idea that infrastructure is something we rent, not own.
Let me tell you about a different approach.
The Time Preference Framework#
In economics, time preference describes how we value present versus future satisfaction. High time preference means wanting everything now, even at the cost of tomorrow. Low time preference means accepting short-term friction for long-term gain.
This maps perfectly to infrastructure choices:
High time preference infrastructure: Rent everything, optimize for immediate deployment, accept ongoing dependency. Subscribe to services that can disappear, change terms, or raise prices. Trade autonomy for convenience.
Low time preference infrastructure: Own your foundation, invest in expandable systems, accept initial complexity for long-term control. Build infrastructure that cannot be taken away by a single vendor decision.
The enterprise world knows this instinctively: no serious organization puts core systems on platforms it doesn’t control. Yet somehow, this wisdom evaporates when we discuss personal or small business infrastructure.
Digital sovereignty requires the patience to build properly. It means choosing hardware that will still serve you in 2030, not just solve today’s problem.
The Evolution: From Simplicity Through Complexity to Simplicity Again#
My journey:
Stage 1: The Raspberry Pi Promise
I started where most people start—a Raspberry Pi running a few services. Elegant. Minimal. Low power. The perfect introduction to self-hosting.
Until you hit the limitations. ARM compatibility issues. Performance constraints. Storage bottlenecks. The Pi is an excellent learning platform, but treating it as production infrastructure is high time preference thinking—it solves today’s simple problem while creating tomorrow’s complex migration.
Stage 2: The AMD64 Awakening
An old laptop became my first real server. Suddenly, everything worked. Docker containers ran at full speed. Storage wasn’t a constant negotiation. I could run actual workloads.
This taught me the first principle: architecture is cheaper than workarounds. The hours I’d spent optimizing for ARM constraints evaporated with proper hardware.
Stage 3: The Overclocking Monster
Then came the DIY server—a custom-built, overclocked beast that did everything. Hypervisor, storage, routing, services. One machine to rule them all.
It was magnificent. It was powerful. It was a single point of failure that kept me awake at night.
Stage 4: Separation and Simplicity
Today, my infrastructure reflects a hard-learned lesson: simplicity comes from separation, not consolidation.
- Router: Dedicated hardware, isolated from everything else (custom pfSense build with 10Gbps NICs)
- Networking: Mikrotik switches handling 10Gbps fabric
- Storage: Separate NAS, not virtualized
- Compute: Hypervisor running services, but never the foundation
When your router is a VM, you cannot troubleshoot the hypervisor without losing network access. When your storage is virtualized, you cannot fix the hypervisor without losing your data. These aren’t theoretical problems—they’re 2 AM disasters.
The mistake: trying to virtualize everything. The wisdom: virtualize what changes, own what must remain stable.
The Great Hardware Convergence of 2025#
Here’s a truth that would have seemed impossible a decade ago: consumer hardware has converged with enterprise reliability for most self-hosting use cases.
Enterprise gear still wins on specific features—out-of-band management, redundant power supplies, hot-swap everything. But for the core metrics that matter to self-hosting?
RAM reliability: Consumer ECC is now available, and even non-ECC DDR5 ships with on-die ECC, giving error rates that enterprise DDR3 would have envied. I’ve run production workloads on consumer RAM for years without corruption.
Storage endurance: Consumer NVMe drives now ship with endurance ratings that exceed most homelab write patterns. A quality consumer SSD will outlast your use case.
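The endurance claim is easy to sanity-check with back-of-envelope arithmetic. The figures below (a 1200 TBW rating, 50 GB of writes per day) are illustrative assumptions, not specs for any particular drive:

```python
# Rough sketch: how long a consumer NVMe drive's rated endurance lasts
# under homelab write patterns. Both inputs are illustrative assumptions.

def years_of_endurance(rated_tbw: float, gb_written_per_day: float) -> float:
    """Years until the drive's rated terabytes-written (TBW) is exhausted."""
    tb_per_year = gb_written_per_day * 365 / 1000
    return rated_tbw / tb_per_year

# A 2 TB consumer NVMe drive rated around 1200 TBW,
# under a fairly busy homelab load of 50 GB/day:
print(round(years_of_endurance(1200, 50), 1))  # → 65.8 years
```

Even if you triple the daily writes, the drive’s warranty period expires long before its flash does.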
Platform longevity: A well-chosen consumer platform from 2020 runs 2025 workloads better than enterprise gear from 2015. The upgrade cycle has slowed enough that buying quality consumer hardware is genuinely low time preference.
The enterprise premium now buys you vendor support and compliance checkboxes, not fundamental reliability. For self-hosting, that premium is dead weight.
Architecture Decisions That Actually Matter#
Forget the spec sheets. Here’s what determines whether your infrastructure serves you in 2030:
1. Expandability vs. Simplicity#
I spent years chasing expandability—server cases with 12 drive bays, motherboards with seven PCIe slots, elaborate plans for future growth.
Today, I optimize for simplicity. A few Mini PCs running compute workloads. Dedicated networking gear. A focused storage solution. When I need more capacity, I add another simple node, not complexity to an existing one.
The principle: Horizontal scaling with simple components beats vertical scaling with complex ones. It’s the same pattern that makes microservices resilient—failure domains are isolated, expansion doesn’t require rearchitecture.
2. Separation of Concerns#
This isn’t just good software architecture—it’s essential infrastructure design:
Network services must be independent of compute services. Your router cannot be a VM. Your DNS cannot depend on your application stack. These are foundational—treat them as such.
Storage must be independent of compute. When the hypervisor needs maintenance, your data remains accessible. When storage needs expansion, compute continues running.
Services can be virtualized; foundations cannot. Containers and VMs are perfect for applications. They’re disasters for infrastructure that everything else depends on.
This separation brings something beyond reliability—it brings peace of mind. I can reboot my hypervisor without checking if DNS will survive. I can upgrade my router without verifying that storage remains mounted. Each system has one job, done well.
3. The Energy Abundance Mindset#
Let’s address the elephant in the data center: power consumption.
The typical response is guilt—apologizing for running “inefficient” old hardware, justifying power usage, calculating carbon footprints. This is high time preference thinking masquerading as environmental consciousness.
Low time preference says: produce more energy, don’t constrain capability.
Humanity’s creativity is infinite. Our ability to solve problems—including energy production—is unbounded given enough time and freedom. The constraint isn’t available energy; it’s our willingness to build systems that generate it.
My infrastructure draws significant power. It also enables work that would otherwise require cloud services running in massive data centers. The total energy calculation favors local compute—no data transfer overhead, no redundant cooling, no geographic replication burning watts for resilience I don’t need.
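To put numbers on “significant power,” here is a minimal cost sketch. The wattage and electricity price are assumptions for illustration, not measurements of my setup:

```python
# Back-of-envelope electricity cost for an always-on homelab.
# Both inputs are illustrative assumptions; plug in your own.

def annual_power_cost(watts: float, price_per_kwh: float) -> float:
    """Yearly cost in currency units for a constant draw in watts."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

# A few Mini PCs and a NAS drawing ~150 W total, at $0.30/kWh:
print(round(annual_power_cost(150, 0.30), 2))  # → 394.2, about $33/month
```

That is a fraction of what equivalent always-on cloud compute costs, before counting the data transfer and replication overhead mentioned above.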
Beyond the math, there’s a philosophical point: optimizing for energy efficiency over capability is the infrastructure equivalent of austerity economics. It assumes scarcity is permanent rather than a problem to solve.
Build for capability. Address energy through generation, not limitation.
City Build vs. Shed Rack: Deployment Models#
Your physical environment shapes your hardware strategy more than any spec sheet.
The City Build (Where I Am)#
Living in a city apartment means infrastructure shares space with life:
Noise matters: Screaming server fans aren’t an option. Mini PCs and consumer gear win here—they’re designed for inhabited spaces.
Heat is visible: A rack generating kilowatts of heat in a small apartment is a climate control problem. Distributed Mini PCs spread the thermal load.
Space is expensive: A 42U rack in a city apartment is thousands of dollars of opportunity cost. Compact, efficient hardware maximizes value per square meter.
Accessibility is constant: When hardware is in your living space, it’s always accessible for maintenance. This changes your fault tolerance requirements—you can afford less redundancy because recovery time is measured in minutes, not hours.
My current setup reflects this: several Mini PCs for compute, a compact NAS for storage, networking gear in a small cabinet. Total footprint: about 0.5 square meters. Total capability: enough to run production workloads that would cost hundreds monthly in cloud fees.
The Shed Rack (Where I’m Going)#
But I’m planning the migration to a different model—a dedicated space for infrastructure:
Noise becomes irrelevant: Enterprise gear with proper cooling stops being a compromise.
Expandability is physical: Rack space enables growth without rearchitecture. Need more compute? Add a server. Need more storage? Expand the array.
Environmental control is isolated: Cooling and power distribution become dedicated problems with dedicated solutions.
The homelab becomes a lab: Distance from living space creates mental separation—infrastructure becomes a distinct domain, not furniture.
The shed rack isn’t about capability (my current setup handles my workloads fine). It’s about optionality. It’s infrastructure as a long-term platform, not a collection of compromises with domestic life.
A Decision Framework#
So what should you actually buy? Here’s how I think about it now:
If You’re Starting#
Don’t begin with complexity. A used business desktop or Mini PC gives you real AMD64 capability without exotic parts. Add a second machine for storage—even another old desktop with drives. Run TrueNAS or whatever works for you.
Total investment: $400-800 in used gear. Capability: enough to run a dozen services, learn the patterns, and understand what you actually need.
This is low time preference because it teaches you the patterns without locking you into proprietary platforms. You control everything from the hardware up.
If You’re Expanding: Separation and Specialization#
Add capability through specialized nodes, not by replacing what works:
Dedicated router hardware: Custom build. This is your network foundation—it’s worth doing properly. My pfSense box with 10Gbps NICs cost $200 in parts and has handled every expansion since.
Separate storage from compute: Storage decisions have decade-long consequences. Isolate them. DIY builds give you control and upgradeability that appliances never will.
Networking that scales: Quality switches with room to grow. Mikrotik 10Gbps gear—prosumer pricing, enterprise capability. Your bottleneck shouldn’t be the fabric connecting your nodes.
If You’re Rebuilding: Simplicity Over Power#
After years of complexity, I optimize for:
- Fewer, better nodes over many marginal ones
- Clear separation over elegant consolidation
- Boring reliability over impressive specs
- Maintenance simplicity over feature maximization
The most powerful question: “Can I fix this at 2 AM without documentation?”
The Sovereignty Calculation#
Here’s the real math on hardware for self-hosting:
- Cloud costs: $50-200/month for modest workloads, or $600-2400/year
- Quality hardware investment: $1500-3000
- Break-even: 15-36 months
- Hardware lifespan: 5-10 years with proper selection
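The break-even arithmetic above can be sketched in a few lines. The inputs are the ranges from the text; adjust them to your own bills:

```python
# Months until owned hardware pays for itself versus recurring cloud fees.
# Inputs are the illustrative ranges from the text, not quotes for real gear.

def break_even_months(hardware_cost: float, cloud_monthly: float) -> float:
    """Simple payback period, ignoring power and depreciation."""
    return hardware_cost / cloud_monthly

print(break_even_months(1500, 100))  # → 15.0 months
print(break_even_months(3000, 200))  # → 15.0 months at the high end of both
```

Note what the sketch deliberately leaves out: electricity, your time, and hardware resale value. Even with those included, a 5-10 year lifespan dwarfs a 15-36 month payback.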
But the calculation misses the point. This isn’t about saving money—it’s about who controls your infrastructure.
Cloud services can raise prices arbitrarily, change terms, discontinue features, get acquired, implement surveillance, or disappear entirely.
Hardware you own costs exactly what you paid, follows terms you set, maintains features you chose, answers to no one but you, and continues operating regardless of external decisions.
The sovereignty calculation isn’t financial—it’s about independence as infrastructure.
Every month you pay rent on cloud infrastructure that could have been owned. Every subscription is a decision to remain dependent rather than become sovereign.
The hardware you choose isn’t just a technical decision—it’s a statement about whether you believe in digital independence or digital serfdom.
I chose independence. The hardware investments I made years ago still serve me today. The systems I own cannot be taken away by vendor decisions. The infrastructure I control enables work that would be prohibitively expensive or impossible in rented environments.
Your digital sovereignty begins with hardware you own. Everything else follows from that choice.