Dr. Owen Rogers

Dr. Owen Rogers is Uptime Institute’s Senior Research Director of Cloud Computing. Dr. Rogers has been analyzing the economics of cloud for over a decade as a chartered engineer, product manager and industry analyst. Rogers covers all areas of cloud, including AI, FinOps, sustainability, hybrid infrastructure and quantum computing.

Latest Research

Cloud cost savings depend on application design

Scalability and cost efficiency are the top reasons enterprises migrate to the cloud, but scalability issues caused by application design flaws can lead to spiraling costs, and even prompt some workload repatriation to on-premises facilities.

 
Outage data shows cloud apps must be designed for failure

On average, cloud apps achieve availabilities of 99.97% regardless of their architecture. However, for the unlucky few that experience issues, a dual-region design suffers one-fifth the downtime of one based on a single data center.
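The availability figure above translates directly into hours of downtime per year; the quick calculation below illustrates it. The single-region downtime figure is a hypothetical placeholder used only to show the five-to-one ratio, not a number from the report:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Average availability reported for cloud apps, regardless of architecture
availability = 0.9997

# Annual downtime implied by 99.97% availability
downtime_hours = HOURS_PER_YEAR * (1 - availability)
print(f"Average annual downtime: {downtime_hours:.2f} hours")  # 2.63 hours

# Illustrative comparison for the unlucky apps that do hit problems:
# a dual-region design sees one-fifth the downtime of a single data center.
# The absolute figure below is hypothetical, chosen only to show the ratio.
single_region_hours = 10.0  # hypothetical hours/year for a single-region app
dual_region_hours = single_region_hours / 5
print(f"Equivalent dual-region downtime: {dual_region_hours:.1f} hours")
```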

 
Build resilient apps: do not rely solely on cloud infrastructure

When building cloud applications, organizations cannot rely solely on cloud provider infrastructure for resiliency. Instead, they must architect their applications to survive occasional service and data center outages.

Briefing Reports 15 min read
 
Cloud a viable choice amidst uncertain AI returns

Dedicated AI infrastructure helps ensure data is controlled, compliant and secure, while models remain accurate and differentiated. However, this reassurance comes at a cost that may not be justified compared with cheaper options.

 
Neoclouds: a cost-effective AI infrastructure alternative

A new wave of GPU-focused cloud providers is offering high-end hardware at prices lower than those charged by hyperscalers. Dedicated infrastructure needs to be highly utilized to outperform these neoclouds on cost.

 
How AWS’s own silicon and software deliver cloud scalability

Hyperscalers design their own servers and silicon to scale colossal server estates effectively. AWS uses a system called Nitro to offload virtualization, networking and storage management from the server processor onto a custom chip.

 
REPLAY | Five data center predictions for 2025

Uptime Intelligence surveys the data center industry landscape for a deeper look at what could actually happen in 2025 and beyond, based on the latest trends and developments. The hold that AI has on the industry is a constant discussion, but how…

 
Sweat dedicated GPU clusters to beat cloud on cost

Dedicated GPU infrastructure can beat the public cloud on cost. Companies considering purchasing an AI cluster need to consider utilization as the key variable in their calculations.
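Why utilization is the key variable can be sketched as a simple break-even calculation. All prices below are hypothetical placeholders for illustration, not figures from the report:

```python
# Hypothetical costs (placeholders, not Uptime Institute figures)
CLOUD_GPU_HOURLY = 4.00           # $/GPU-hour, on-demand public cloud
DEDICATED_ANNUAL_PER_GPU = 9000.0 # $/GPU-year, dedicated cluster (amortized capex + opex)

HOURS_PER_YEAR = 24 * 365

def effective_dedicated_rate(utilization: float) -> float:
    """Cost per *useful* GPU-hour: the fixed annual cost is paid whether or
    not the hardware is busy, so low utilization inflates the effective rate."""
    return DEDICATED_ANNUAL_PER_GPU / (HOURS_PER_YEAR * utilization)

# Utilization at which the dedicated cluster matches the cloud on cost
break_even = DEDICATED_ANNUAL_PER_GPU / (HOURS_PER_YEAR * CLOUD_GPU_HOURLY)
print(f"Break-even utilization: {break_even:.1%}")  # ~25.7%

for u in (0.10, 0.30, 0.80):
    print(f"{u:.0%} utilized -> ${effective_dedicated_rate(u):.2f}/GPU-hour")
```

Under these assumed prices, a cluster idling at 10% utilization costs over $10 per useful GPU-hour, while one sweated at 80% costs about $1.28 — the same hardware, very different economics.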

 
Five data center predictions for 2025

Uptime Intelligence looks beyond the more obvious trends of 2025 and examines some of the latest developments and challenges shaping the data center industry.

Keynote Reports 28 min read
 
Why AWS’s AI strategy is a sprint

Cloud providers need to win AI use cases in their early stages of development. If a provider fails to attract customers early, those customers' AI applications may become locked in to rival platforms and harder to move, which can have serious repercussions.

 
How tagging provides better management of cloud costs

Visibility into costs remains a top priority for enterprises that are consuming cloud services. Improving the tagging of workloads and resources may help them to spot, and curb, rising costs.
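The idea behind tag-based cost visibility is simple enough to show in a few lines. The tag keys and billing records below are invented for illustration, not drawn from any provider's billing format:

```python
from collections import defaultdict

# Hypothetical billing line items, each carrying resource tags
line_items = [
    {"cost": 120.0, "tags": {"team": "payments", "env": "prod"}},
    {"cost": 45.0,  "tags": {"team": "payments", "env": "dev"}},
    {"cost": 300.0, "tags": {"team": "search",   "env": "prod"}},
    {"cost": 80.0,  "tags": {}},  # untagged spend: invisible to any breakdown
]

def costs_by_tag(items, key):
    """Aggregate spend by a tag key; untagged resources fall into 'untagged'."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(key, "untagged")] += item["cost"]
    return dict(totals)

print(costs_by_tag(line_items, "team"))
# {'payments': 165.0, 'search': 300.0, 'untagged': 80.0}
```

The "untagged" bucket is the point: spend that carries no tags cannot be attributed to a team or workload, which is why improving tag coverage is often the first step in spotting and curbing rising costs.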

 
Most AI models will be trained in the cloud

The cost and complexity of deploying large-scale GPU clusters for generative AI training will drive many enterprises to the public cloud. Most enterprises will use pre-trained foundation models to reduce computational overheads.

 
Why technology business management does more than FinOps

While FinOps aims to manage cloud costs alone, technology business management seeks to aggregate all costs of IT, including data centers, servers, software and labor, to identify savings and manage return on investment.

 
Understanding AI deployment methods and locations

Enterprises have various options on how and where to deploy their AI training and inference workloads. This report explains how these different options balance cost, complexity and customization.

 
What is the outlook for GPU cloud providers?

To meet the demand driven by AI workloads, a new breed of cloud provider has emerged, delivering inexpensive GPU infrastructure as a service. Their services are in high demand today but, longer term, the market is ripe for consolidation.