Starcloud Bets AI Compute Will Move to Orbit


Published 2026-05-06 - Runtime about 12 min - Watch on YouTube

Philip Johnston’s core claim is simple: the cheapest future AI compute may be in orbit, not on Earth, because space removes land, battery, and sunlight constraints while launch costs keep falling. Starcloud is trying to prove that with real GPUs, real satellites, and a planned inference-first constellation.

What Matters

  • Starcloud says it has already trained nanoGPT in space on an NVIDIA H100, and has run a version of Gemini along with high-powered SAR inference.
  • The economic wedge is brutal: Earth solar needs permitted land and batteries; space skips both and collects roughly 8x more energy per square meter of panel, thanks to continuous, unfiltered sunlight.
  • Johnston pegs break-even launch cost at about $500/kg, about 10x below today, and says Starship’s $10-$20/kg target clears it.
  • The near-term product is inference, not training: he says inference will be 99% of AI compute soon, while large training in space stays hard.
  • Starcloud has filed for an 88,000-satellite constellation, each around 200 kilowatts, aiming at roughly 20 gigawatts of compute in low Earth orbit.
  • The operating pitch is always-on power and sub-50 millisecond latency via optical links in dawn-dusk sun-synchronous orbit.
  • Heat is the real engineering constraint: a 400-square-meter solar array needs about 100 square meters of radiator, and running chips hotter shrinks that radiator, since radiated power scales with the fourth power of temperature.
  • Johnston frames the 5-gigawatt, 4 km by 4 km concept as a 15-year problem, not a next-year one, and puts the constellation's capex at roughly $100 billion to start.
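
The fourth-power scaling behind the heat point can be sketched with the Stefan-Boltzmann law. This is an illustrative back-of-envelope, not Starcloud's thermal model: the emissivity of 0.9, two-sided radiation, a zero-temperature sink, and reusing the 200 kW per-satellite figure as the heat load are all assumptions made here.

```python
# Radiator sizing via the Stefan-Boltzmann law: P = 2 * eps * sigma * A * T^4
# (both faces radiating, environmental heat load neglected -- simplifying
# assumptions for illustration, not figures from the talk).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to reject heat_w watts at surface temperature temp_k."""
    return heat_w / (2 * emissivity * SIGMA * temp_k**4)

# Rejecting a 200 kW heat load (the per-satellite power figure, assumed here
# to all end up as waste heat):
print(round(radiator_area_m2(200_000, 300)))  # ~242 m^2 at 27 C
print(round(radiator_area_m2(200_000, 360)))  # ~117 m^2 at 87 C
```

Raising the radiator from 300 K to 360 K roughly halves the required area, which is why hotter-running chips directly cut radiator mass, and why the ~100 m² figure for a 200 kW-class satellite implies a fairly warm radiator.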