The massive energy demands of modern AI models are driving tech giants to look toward space. While the hurdles—from cooling to radiation—are immense, companies are thinking in decades rather than years.
Major tech companies are scrambling to find ways to power the training and operation of artificial intelligence. According to a Wall Street Journal report, both Elon Musk's SpaceX and Jeff Bezos' Blue Origin are working on concepts for orbital data centers. Blue Origin has reportedly had a team assigned to the topic for over a year.
The logic is simple: in space, the sun shines constantly. In certain orbits, solar modules receive up to eight times more energy annually than they would at a mid-latitude location on Earth. Moving infrastructure to orbit also eliminates conflicts over land use and water consumption.
SpaceX plans to equip an upgraded version of its Starlink satellites with AI computing capabilities, launching them into orbit aboard the Starship mega-rocket. Musk recently claimed on X that Starship could deploy solar-powered AI satellites adding up to 500 gigawatts of generating capacity per year. The initiative is tied to a share sale that could value SpaceX at $800 billion.
That figure looks enormous against current technical estimates. Google manager Travis Beals estimates that replacing the computing power of a single modern 1-gigawatt data center on Earth would require a swarm of roughly 10,000 satellites. By that yardstick, 300 to 500 gigawatts would require a fleet of roughly three to five million high-performance satellites, a logistical and financial feat far beyond today's capabilities.
At Blue Origin, founder Jeff Bezos sees the main advantage in virtually unlimited solar energy, but he expects it to take up to 20 years before orbital data centers become cheaper than terrestrial facilities.
Google is already pursuing a concrete timeline in cooperation with satellite operator Planet Labs. As part of Project "Suncatcher," two test satellites equipped with Google's Tensor Processing Units (TPUs) are scheduled to launch in early 2027. Beals described the project as a "moonshot."
Scaling requires massive constellations
Google's approach differs from monolithic space stations. Instead of massive single structures, researchers propose constellations of smaller satellites. To replicate the capacity of a terrestrial gigawatt data center, Beals says 10,000 satellites of the 100-kilowatt class would be necessary, a power class comparable to what SpaceX's new Starlink v3 satellites reportedly generate.
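The constellation sizing above is straightforward arithmetic, sketched here to make the scaling explicit (the 100-kilowatt per-satellite figure is the one cited in the article, not an independent estimate):

```python
SAT_POWER_KW = 100            # per-satellite power, 100-kilowatt class
KW_PER_GW = 1_000_000         # kilowatts in one gigawatt

def satellites_for(gigawatts: float) -> float:
    """Satellites needed to match a given data-center capacity."""
    return gigawatts * KW_PER_GW / SAT_POWER_KW

print(satellites_for(1))      # 10000.0 -> one terrestrial gigawatt center
print(satellites_for(500))    # 5000000.0 -> the upper end of Musk's claim
```

The same arithmetic is what turns the 500-gigawatt figure into a fleet of millions of satellites.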
The system design calls for these satellites to fly in a sun-synchronous "dawn-dusk" orbit that keeps them in near-constant sunlight. According to the Google paper, solar modules in this orbit receive about eight times more energy per year than at an average location on Earth. The biggest challenge, however, isn't generating energy but communication between the computing units.
In terrestrial data centers, AI chips like Google's TPUs connect via extremely high-bandwidth fiber optic cables. In space, lasers must handle this task. The Google team proposes Free-Space Optics (FSO). To achieve the required data rates of several terabits per second, the satellites must fly extremely close together.
The paper describes formations in which satellites are only a few hundred meters apart. One proposed cluster consists of 81 satellites within a one-kilometer radius. This proximity allows the use of commercial optical transceivers, since received power falls with the square of the distance. At short ranges, multiple laser links can also run in parallel (spatial multiplexing) to further increase bandwidth.
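A toy link-geometry model illustrates why proximity matters. All parameters here are illustrative assumptions, not values from the Google paper: a roughly diffraction-limited beam spreads linearly with range, so the fraction of power a fixed receive aperture captures falls approximately with the square of the distance.

```python
WAVELENGTH_M = 1.55e-6   # common telecom laser wavelength (assumption)
APERTURE_M = 0.05        # 5 cm transmit/receive aperture (assumption)

def received_fraction(range_m: float) -> float:
    """Fraction of transmitted power captured at a given range (toy model)."""
    divergence = WAVELENGTH_M / APERTURE_M            # ~diffraction-limited spread angle
    beam_diameter = APERTURE_M + 2 * range_m * divergence
    return min(1.0, (APERTURE_M / beam_diameter) ** 2)

for r in (200, 1_000, 10_000):                        # meters
    print(r, received_fraction(r))
```

At a few hundred meters most of the beam still lands on the receiver; at tens of kilometers the geometric loss alone is two orders of magnitude worse, which is why the clusters are flown so tightly.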
Radiation poses a training risk
Another core problem is the aggressive environment. Space electronics face constant radiation that can cause malfunctions or destroy chips. To test feasibility, researchers subjected Google's 6th generation AI chips, the "Trillium" TPUs, to a stress test.
Using a cyclotron, the chips were bombarded with 67 MeV proton beams to simulate five years in low Earth orbit. The result: the hardware survived the total dose without permanent failure. However, "Single Event Effects"—spontaneous bit flips in memory—did occur.
While this error rate is acceptable for running finished AI models (inference), researchers say it poses a risk for training new ones. An unnoticed calculation error could corrupt days of training. Robust error correction mechanisms would be essential here.
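One common defensive pattern against silent bit flips is to checksum data that should not have changed and re-verify it before trusting a checkpoint. The sketch below is illustrative (names and structure are not from the Google paper):

```python
import hashlib
import struct

def checksum(params: list[float]) -> str:
    """SHA-256 over the raw bytes of a parameter list."""
    raw = b"".join(struct.pack("<d", p) for p in params)
    return hashlib.sha256(raw).hexdigest()

params = [0.25, -1.5, 3.0]
ref = checksum(params)        # recorded when the values were known-good

# Simulate a radiation-induced single-event upset in one weight:
params[1] = -1.5000001

corrupted = checksum(params) != ref
print("corruption detected:", corrupted)   # True
```

Detection alone isn't enough for training, where values legitimately change every step; there, redundant computation or algorithm-level error correction would be needed, which is the robustness the researchers call for.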
Economic viability hinges on Starship
Technical feasibility hinges largely on transport costs. According to the Google paper's analysis, the price for transport to low Earth orbit (LEO) must fall to around $200 per kilogram for the concept to compete economically with terrestrial data centers.
This brings the focus back to SpaceX. The research team places high hopes on Starship. If SpaceX achieves full and rapid reusability, launch costs could drop drastically. If components are reused 100 times, the paper predicts theoretical internal costs for SpaceX of under $15 per kilogram. Even with high profit margins for SpaceX, a customer price under $200/kg is conceivable by the mid-2030s.
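The reuse economics reduce to simple amortization. The vehicle, payload, and per-flight figures below are placeholder assumptions chosen only to show the shape of the curve, not actual SpaceX numbers:

```python
VEHICLE_COST_USD = 100_000_000   # assumed build cost of one reusable stack
PAYLOAD_KG = 100_000             # assumed payload to LEO per flight
REFURB_COST_USD = 500_000        # assumed per-flight refurbishment + propellant

def cost_per_kg(flights: int) -> float:
    """Launch cost per kilogram, amortizing the vehicle over its flights."""
    total = VEHICLE_COST_USD + flights * REFURB_COST_USD
    return total / (flights * PAYLOAD_KG)

print(cost_per_kg(1))     # expendable-style economics: vehicle cost dominates
print(cost_per_kg(100))   # 100 reuses: 15.0 under these assumptions
```

With these placeholder inputs, 100 reuses drive the per-kilogram figure down by nearly two orders of magnitude, which is the mechanism behind the paper's sub-$15 internal-cost projection.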
Thermodynamics remains the biggest hurdle
Beyond radiation and cost, thermodynamics is the biggest enemy of high-performance electronics in space. In a vacuum, there is no air for convection cooling; heat can only be released through radiation. The paper describes thermal management as one of the most critical optimization tasks for operating power-dense TPUs in a vacuum.
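The radiator sizing follows from the Stefan-Boltzmann law, Q = ε·σ·A·T⁴. The sketch below ignores absorbed sunlight and the view factor to Earth, both of which a real design must include, and the emissivity and operating temperature are assumptions:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9         # typical radiator coating (assumption)

def radiator_area_m2(heat_w: float, temp_k: float) -> float:
    """Radiator area needed to reject a heat load at a given temperature."""
    return heat_w / (EMISSIVITY * SIGMA * temp_k ** 4)

# 100 kW of waste heat rejected at 320 K -> roughly 190 m^2 of radiator
print(radiator_area_m2(100_000, 320.0))
```

Because the rejected power scales with T⁴, running the electronics hotter shrinks the radiator dramatically, one reason thermal design and chip operating temperature are coupled optimization problems.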
Google relies primarily on a passive system of heat pipes and dedicated radiator surfaces to maximize reliability and avoid the mechanical failure points of active pumps. The planned tight formation poses a specific challenge: orbit calculations must ensure the satellites don't shadow one another, since a neighboring satellite blocks not only sunlight for power generation but also part of the cold sky a radiator needs to reject waste heat.
Regarding materials, Google emphasizes the need for advanced thermal interface materials to efficiently transfer the enormous heat load from chips to radiators. While the current design assumes discrete components for computing load, satellite bus, and radiators, researchers outline a future evolution toward highly integrated designs. Similar to smartphones, future systems could merge compute units, power supply, and radiators into a single structure to save mass.
Asteroid mining offers radical cooling alternatives
The idea of moving computing power into space isn't new. The 2023 paper "Space-Based Data Centers and Cooling" considered even more radical approaches that border on science fiction.
In the study published in the journal Symmetry, authors explore the theoretical possibility of using water from asteroids to cool space data centers. The concept relies on the fact that certain asteroids contain significant amounts of water. Private spacecraft could target these asteroids, mine the water, and use it as coolant for server habitats—complete data centers rather than satellite swarms.
Using database analyses, the study identified around 20 asteroids that are both water-rich and energetically accessible from Earth (less than 0.26 astronomical units away). Such an approach would reduce reliance on complex radiator systems but presupposes a mining infrastructure in space that does not yet exist.