Solving container deployment problems one by one (restarts, routing, secrets, scheduling) quietly converges on a worse, hand-rolled Kubernetes.
Key Takeaways
The post’s argument: each operational fix you bolt on mirrors a K8s primitive, until your DIY stack has the same surface area with worse docs.
Kubernetes complexity is framed as convergent rather than arbitrary - the problems it solves are real and keep appearing regardless of stack.
The title implies recognition, not shame: by the time you notice, you have already committed to the design.
Mac’s Tech Blog positions this as a trap for teams who defer orchestration decisions while scaling container workloads.
Hacker News Comment Review
Commenters broadly accept the convergence argument for SaaS web products but dispute its universality - on-prem deployments, single-node clusters, and edge workloads are cited as cases where K8s is overkill or a poor fit.
A sharp meta-observation from the thread: K8s is not the end state either - you still end up with a deploy.sh to inject image tags, then Helm, then a wrapper around Helm, so the configuration clock never stops.
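To make the commenter's point concrete, here is a minimal sketch of the kind of deploy.sh being described - Kubernetes manifests are static YAML, so something has to splice the freshly built image tag in before applying. All names here (the `__IMAGE_TAG__` placeholder, the template filename, the registry) are illustrative assumptions, not details from the post or thread.

```shell
#!/bin/sh
# Hypothetical deploy.sh: template an image tag into a manifest, then apply it.
set -eu

IMAGE_TAG="${1:-v1.0.0}"

# A one-line stand-in for a real Deployment manifest template.
cat > deployment.yaml.tpl <<'EOF'
        image: registry.example.com/app:__IMAGE_TAG__
EOF

# The entire "deploy tool" is a sed substitution; in real life the result
# would then go to `kubectl apply -f deployment.yaml`.
sed "s/__IMAGE_TAG__/${IMAGE_TAG}/g" deployment.yaml.tpl > deployment.yaml
cat deployment.yaml
```

Once the values being substituted multiply (replicas, env vars, resource limits, per-environment overrides), this sed call grows into Helm templating, and then a wrapper script around Helm - which is exactly the recursion the thread is pointing at.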
Erlang/OTP/BEAM drew repeated mentions as a genuine escape hatch: distributed scheduling, restarts, and failover are baked into OTP, so many Elixir/Gleam shops reach far greater scale before needing pods.
Notable Comments
@et1337: “pretty soon you’re back to ‘my dear friend you have built a Helm’” - the recursion point that the article’s own framing invites.
@oddurmagnusson: Shipped Yoink as a personal middle ground - K8s concepts without the cluster overhead, pointed at a single bare-metal Hetzner machine.