UII UPDATE 465 | FEBRUARY 2026
OPINION
(This is an updated version of a report previously published in December 2025)
When Elon Musk announced the merger of two of his most highly valued companies, SpaceX and xAI, in early February, it triggered a wave of reactions. Some observers saw the move as inevitable; others viewed it as a cynical attempt to secure future financing for both X and xAI; supporters applauded Musk's visionary genius. Many were in awe: the combined group could be worth $1.25 trillion. This would potentially make it the largest merger in history, while the planned IPO in 2026 would be the biggest ever.
For Uptime Institute and its community, it is the technical and strategic justification of the merger that is most striking. One of the key rationales stated for the merger is to put a large number of data centers in space: for communications, for AI, for space exploration and to utilize free solar energy. Less prosaically, in Musk's words, the companies are "scaling to make a sentient sun to understand the Universe and extend the light of consciousness to the stars!"
In December 2025, Uptime Intelligence published a report expressing skepticism about the practicality of data centers in space. This was in response to a series of statements by Musk and other chief executives, including Jeff Bezos (Amazon) and Sundar Pichai (Google), as well as former Google chief executive Eric Schmidt, who voiced support for the idea.
At the time, the support expressed for data centers in space was largely verbal and speculative. However, the new mission of SpaceX and xAI (with all its executive intent, expertise and resources) changes all this. Unless the project is viewed as little more than financial engineering (which some critics believe), there is now serious and credible momentum behind the idea of building data centers in space.
Apart from SpaceX/xAI, Google, in particular, has set up a formal project (known as Suncatcher) to explore the concept, with the first satellite launches scheduled for 2027. The ultimate goal is a huge, kilometer-wide data center constellation comprising 81 satellites. In addition, Starcloud, a startup formerly known as Lumens, has raised $20 million in seed funding to date and has already had a lightweight satellite, carrying a small server equipped with an Nvidia GPU, inserted into Low Earth Orbit (LEO) for testing. This will be followed by a second, larger mission into space in 2026. The company's stated long-term aim is to build gigawatt-scale data centers in orbit. The startup already has a neocloud partner in US company Crusoe.
The underlying driver, and assumption, behind these efforts is that generative AI will become a foundational, ubiquitous technology that will need vast amounts of compute and power. Proponents argue that it will ultimately be more efficient to power the training work in space and transmit the resulting bits to Earth, rather than move the electrical energy to Earth after solar conversion or use terrestrial power sources for the training process.
The prospect of building data centers in space has some appealing benefits. Solar panels there can provide free energy for most of the day, every day, at more than 30% efficiency, compared with less than 20% on Earth. Cooling could, in theory (see below), be completely free; real estate is endless; and laser-based optical communications in a vacuum can outperform even physical fiber, opening up many new configuration possibilities. Launch costs, meanwhile, are falling year by year. For SpaceX/xAI, launches will be both a cost and a source of profit and competitive advantage.
Google appears to believe that the idea of putting large amounts of compute in space is worth some investment, but it is not yet clear whether it is as committed as some media reports suggest. Discussing the Suncatcher project, chief executive Sundar Pichai said that "like any moonshot, it's going to require us to solve a lot of complex engineering challenges." The key words in this sentence are "a lot" and "complex."
Alongside its announcement, Google published a technical paper examining the issues (see Towards a future space-based, highly scalable AI infrastructure system design). The paper states that it is "working backward from an eventual future in which the majority of AI computation happens in space." Using projections and comparisons with terrestrial data centers, it suggests that such an approach could work in theory and might even be economically competitive; but only if many factors go in its favor.
One immediate issue is that, as on Earth, renewable energy (even from solar panels in orbit) has low power density, so a lot of real estate is needed. Powering tens of thousands of GPUs and other IT would require many thousands of square meters of solar panels. The result is a vast operation, with significant weight to lift into space. Rather than build one very large modular satellite, Google and others propose deploying many smaller ones orbiting in a close formation (Google's concept envisions 81).
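A rough calculation illustrates the scale involved. The figures below are illustrative assumptions (per-GPU power, panel efficiency from the article's ">30% in space" claim), not numbers from Google's paper:

```python
# Back-of-envelope sizing of an orbital solar array (illustrative assumptions).
SOLAR_CONSTANT_W_M2 = 1361   # solar irradiance above the atmosphere, W/m^2
PANEL_EFFICIENCY = 0.30      # "more than 30% efficiency" in space (article)
GPU_POWER_W = 1_000          # assumed per-GPU power, including overheads
NUM_GPUS = 10_000            # lower bound of "tens of thousands"

it_load_w = NUM_GPUS * GPU_POWER_W                           # 10 MW
electric_w_per_m2 = SOLAR_CONSTANT_W_M2 * PANEL_EFFICIENCY   # ~408 W/m^2
array_area_m2 = it_load_w / electric_w_per_m2                # ~24,500 m^2

print(f"IT load: {it_load_w / 1e6:.0f} MW")
print(f"Solar array area needed: {array_area_m2:,.0f} m^2")
```

Even this conservative 10 MW case implies an array of roughly 2.5 hectares, consistent with the "many thousands of square meters" claim; a gigawatt-scale facility would multiply that area a hundredfold.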
These smaller satellites would be positioned just a few hundred meters apart and communicate using optical links, which can offer lower latency and higher bandwidth than current optical fiber. In theory, this would allow a GPU cluster to be distributed across many satellites while still communicating with Earth at acceptable latencies for most applications. However, such networks use active laser links that, like the servers themselves, require continuous power.
This raises another issue. To maximize solar exposure, the satellites would need to operate in a dawn-to-dusk Sun-synchronous orbit (SSO), passing above a given point on the ground at exactly the same local time every day. This would maintain near-continuous sunlight for most of the year; but not all the time, due to equinox eclipses. If the data center is to operate without regular downtime, it would need batteries to ride through these interruptions, roughly two days a year, usually in short periods of up to 5 minutes each. In other words, orbital data centers are going to need batteries; possibly in similar configurations to terrestrial facilities. Unlike on Earth, however, these solar outages occur every 90 minutes during equinoxes, so a reserve would be needed to allow for recharging, further adding complexity and weight.
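The battery requirement can be sketched with simple arithmetic. The load, depth-of-discharge and eclipse figures below are assumptions for illustration (the eclipse duration and 90-minute recurrence are from the article):

```python
# Rough sizing of eclipse ride-through batteries (assumed figures).
LOAD_MW = 10                # assumed cluster load
ECLIPSE_MIN = 5             # "up to 5 minutes" per eclipse (article)
ORBIT_MIN = 90              # eclipses recur every ~90 minutes at equinox
DEPTH_OF_DISCHARGE = 0.8    # usable fraction of battery capacity (assumed)

energy_per_eclipse_mwh = LOAD_MW * ECLIPSE_MIN / 60            # ~0.83 MWh
battery_mwh = energy_per_eclipse_mwh / DEPTH_OF_DISCHARGE      # ~1.04 MWh
# Extra solar capacity, beyond the IT load, needed to refill the battery
# before the next eclipse one orbit later:
recharge_mw = energy_per_eclipse_mwh / ((ORBIT_MIN - ECLIPSE_MIN) / 60)

print(f"Battery capacity: {battery_mwh:.2f} MWh")
print(f"Recharge power margin: {recharge_mw:.2f} MW")
```

The battery itself is modest, but note the second figure: the solar array must be oversized by several percent simply to recharge between eclipses, which is the extra complexity and weight the article refers to.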
An additional facility resiliency challenge is the risk of cascading damage: space debris could strike one satellite, generating further debris that hits adjacent satellites and setting off a chain of collisions known as the Kessler syndrome. Operators may also be concerned that large orbital data centers are more exposed to hostile action and electronic interference than their terrestrial sites.
Cooling presents a major challenge as well. Liquid cooling systems, needed for high-performance compute, would require major reengineering because the movement of fluids and bubbles is different in microgravity. Thermal management systems researched by NASA for space electronics, such as two-phase loops and capillary-driven flows, would need to be adapted, if possible, for dense data centers. Even then, removing heat from electronics is only half of the problem. Heat rejection, in the absence of an atmosphere, would need vast surface areas for radiators (high emissivity in the infrared spectrum) pointing toward deep space; the exact opposite direction to the one the solar panels must face.
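The radiator area problem follows directly from the Stefan-Boltzmann law. The emissivity, radiator temperature and waste-heat figures below are illustrative assumptions, and the calculation ignores view factors and any residual heat absorbed from the Sun or Earth, so real radiators would need to be larger:

```python
# Radiative heat rejection to deep space (simplified: ignores view factors
# and absorbed solar/Earth infrared flux). Figures are assumptions.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 * K^4)
EMISSIVITY = 0.9      # high-emissivity infrared coating (assumed)
T_RADIATOR_K = 300    # assumed radiator surface temperature
WASTE_HEAT_W = 10e6   # assume nearly all of a 10 MW IT load becomes heat

flux_w_m2 = EMISSIVITY * SIGMA * T_RADIATOR_K**4   # ~413 W/m^2
radiator_area_m2 = WASTE_HEAT_W / flux_w_m2        # ~24,000 m^2

print(f"Rejection flux: {flux_w_m2:.0f} W/m^2")
print(f"Radiator area needed: {radiator_area_m2:,.0f} m^2")
```

Under these assumptions the radiators rival the solar array in area; raising the radiator temperature helps (flux scales with the fourth power of temperature) but makes the chips harder to keep cool.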
A further problem is radiation exposure, which significantly shortens the lifespans of microelectronic components, from embedded controllers to IT silicon. Mitigating this requires extensive shielding, adding further weight and cost. This also raises the question of how operators would handle equipment failures or maintenance of cooling and power systems. While failed processors can be left in situ, failures in cooling or power are more problematic. Beyond outright hardware failures, space radiation also multiplies the probability of data corruption and computational errors going undetected. To date, proponents have not clearly addressed this issue.
For all these issues, and many others, SpaceX/xAI is clearly committed to pushing ahead, while Google is at a more exploratory and non-committal stage. Yet even if the many technical hurdles are overcome, the economics remain tough. Google has estimated that launch prices would need to fall to around $200 per kilogram before space-based data center costs are in a similar range to those of terrestrial facilities. Today, launch costs are more than $3,500 per kilogram. They would have to fall by nearly 95%; and Google agrees this might only happen if the industry is working at a large scale. SpaceX has indicated it intends to dramatically increase the number of launches it undertakes in the years ahead.
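The required cost reduction follows from the two figures cited:

```python
# Launch-cost gap between today's prices and Google's estimated threshold.
current_usd_per_kg = 3500   # today's launch cost (article)
target_usd_per_kg = 200     # Google's estimated break-even threshold

reduction = 1 - target_usd_per_kg / current_usd_per_kg
print(f"Required cost reduction: {reduction:.1%}")   # prints 94.3%
```

A 94.3% fall, which is the "nearly 95%" the article cites; for comparison, this is a steeper decline than launch prices have achieved over the entire reusable-rocket era to date.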
At that scale, however, emissions from rocket launches, including carbon and soot, become such a serious issue that building data center constellations in space would likely be viewed as a backward environmental strategy; one that also contributes to space junk and non-recyclable waste. In 2024, Microsoft reported that 90% of its server components for the cloud were recycled. In space, this figure might drop to zero. It is possible that environmental and political opposition to big data centers will apply to space data centers as much as it does currently to terrestrial ones.
For all the celestial engineering and economic challenges, the biggest hurdle to the development of commercial-scale space data centers will likely lie on Earth. At least three technologies could start making a big difference to power availability in the 2030s and 2040s: small modular reactors; carbon capture at scale (new metal-organic frameworks are promising), which would enable continued large-scale use of fossil fuels; and, longer term, fusion energy. If any of these succeed at scale (alongside the more prosaic use of renewable energy and battery storage) then power is likely to become less of a constraint. Some projects are moonshots, but some are more moonshot than others.