UII UPDATE 470 | FEBRUARY 2026

Intelligence Update

Digital twins and DCIM: why data quality must come first

Digital twin software: Part 3

Many operators consider data center infrastructure management (DCIM) essential for running modern, flexible and efficient data centers.

DCIM is often described as a type of digital twin (DT) because it offers a single "pane of glass" for configuring a digital replica of the data center's operating IT infrastructure, including configurations, connections and layouts. Details on equipment types and settings are collected from IT asset management and configuration management databases, while operational power and performance data are obtained from equipment sensors to monitor and manage IT equipment usage and availability.

Digital twins and DCIM use physical data from connected IT and OT systems, sensors, and networked infrastructure. DCIM uses this data to provide visibility into IT assets down to the component level — including network ports, power connections, and switches — and to visualize them via dashboards and 2D/3D interactive displays for analysis and change. AI/machine learning-assisted workflows are increasingly used for recommendations, predictions, and task automation (see part one of this series, Digital twins: reshaping AI infrastructure planning).

A key benefit of a digital twin is its ability to identify discrepancies between the virtual and physical models, uncovering hidden issues such as system inefficiencies, equipment faults and errors, as well as security and safety risks. Greater functionality and user interactivity can help operations teams troubleshoot and correct problems while optimizing on the go (see part two, Digital twins: the role of simulations).

Precise physical data is the foundation of accurate digital twin models and AI-driven decision-making. However, data interoperability, compatibility and quality issues remain widespread, leaving many operators distrustful of AI for operational decisions (see the Uptime Institute Global Data Center Survey 2025). Despite this caution, most respondents (73%) to Uptime's annual survey said they would trust AI if the models were adequately trained on operational sensor data.

This report examines the challenges and opportunities for DCIM in building a trusted data platform for data center digital twins (DC-DTs).

DCIM's digital twin characteristics

DCIM can deliver micro-level digital twin data for facility-specific IT assets, such as servers, racks, switches and ports. This data typically includes ID references, manufacturer, model, U position, power draw, firmware and sensor readings from connected power and cooling equipment.

A DC-DT for operations seeks to unify all IT and OT equipment, systems and data within a single platform, bringing together both micro- and macro-level data in the process. This is likely to include DCIM data, plus entire facility CAD/BIM designs, as well as mechanical and electrical systems, such as building management systems (BMS), supervisory control and data acquisition (SCADA), and programmable logic controllers (PLCs).

A DC-DT will ingest orders of magnitude more data than DCIM alone, processing both historical and live data streams to enable real-time system monitoring and simulation analysis. However, this significantly increases complexity (see Real-time data challenges).

In part one of this series, we identified five key attributes of a digital twin (see left-hand column of Table 1) that relate to its ability to support operator objectives for availability, resiliency, and cost economics. The right-hand column explains the similarities and differences between DCIM and digital twins.

Table 1: DCIM capabilities against DT characteristics


Data quality, access, and interoperability

While DCIM offers some capabilities and characteristics of a digital twin, several foundational issues remain that undermine the digital twin concept. These include:

Data quality degradation

Operators often manage tens of thousands of data points across BMS, environmental, cooling, and IT power systems. This data can generate valuable information, but only if it is accurately captured, measured, stored, and made accessible. Over time, data quality can degrade due to poor configuration, missed system updates, and undocumented changes between IT and facilities teams.
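To make this concrete, a minimal audit of the kind of degradation described above might flag readings that have gone stale or drifted outside plausible bounds. Everything here — the field names, metrics, and thresholds — is a hypothetical sketch for illustration, not a real DCIM export format:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical DCIM export rows: asset ID, metric, value, last-seen timestamp.
READINGS = [
    {"asset": "rack-42-pdu-a", "metric": "power_kw", "value": 3.2,
     "last_seen": datetime.now(timezone.utc) - timedelta(minutes=2)},
    {"asset": "crah-07", "metric": "supply_temp_c", "value": 61.0,  # implausible
     "last_seen": datetime.now(timezone.utc) - timedelta(minutes=5)},
    {"asset": "rack-42-pdu-b", "metric": "power_kw", "value": 3.1,
     "last_seen": datetime.now(timezone.utc) - timedelta(days=9)},  # stale
]

# Assumed plausibility bounds per metric; real limits would come from
# equipment specifications, not guesses.
BOUNDS = {"power_kw": (0.0, 20.0), "supply_temp_c": (10.0, 40.0)}
STALE_AFTER = timedelta(days=7)

def audit(readings):
    """Return (asset, problem) pairs for out-of-range or stale records."""
    issues = []
    now = datetime.now(timezone.utc)
    for r in readings:
        lo, hi = BOUNDS.get(r["metric"], (float("-inf"), float("inf")))
        if not lo <= r["value"] <= hi:
            issues.append((r["asset"], "out-of-range " + r["metric"]))
        if now - r["last_seen"] > STALE_AFTER:
            issues.append((r["asset"], "stale reading"))
    return issues

for asset, problem in audit(READINGS):
    print(asset, problem)
```

Running such checks continuously, rather than at audit time, is one way to catch the undocumented changes before they propagate into reports and models.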

No common language across systems

An average data center may rely on six or more suppliers in the power chain alone. Many of these systems require unique integrations and connector paths, such as APIs and network protocols, which may not be readily available in the DCIM systems being used. This adds to the complexity of managing multi-vendor data environments.

Data siloes persist

Establishing a single source of truth for all assets, connections, and their dependencies at both the macro and micro levels is essential. However, achieving this requires connected systems to "speak the same language", which is challenging because vendors often use their own taxonomies, schemas, and registries.
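As an illustration of what "speaking the same language" involves, the sketch below maps two hypothetical vendor record formats onto one canonical schema, converting units along the way. The vendor names, field names, and mappings are all invented for this example:

```python
# Hypothetical per-vendor field mappings: each vendor names the same
# quantities differently, so a translation table maps them onto one
# canonical schema before records can be merged.
VENDOR_MAPS = {
    "vendor_a": {"devName": "asset_id", "kW": "power_kw", "tempF": "temp_f"},
    "vendor_b": {"id": "asset_id", "power_watts": "power_w", "temp_c": "temp_c"},
}

def to_canonical(vendor, record):
    """Translate a vendor-specific record into the canonical schema,
    converting units to kW and degrees Celsius along the way."""
    mapped = {VENDOR_MAPS[vendor][k]: v for k, v in record.items()}
    out = {"asset_id": mapped["asset_id"]}
    if "power_kw" in mapped:
        out["power_kw"] = mapped["power_kw"]
    elif "power_w" in mapped:
        out["power_kw"] = mapped["power_w"] / 1000.0
    if "temp_c" in mapped:
        out["temp_c"] = mapped["temp_c"]
    elif "temp_f" in mapped:
        out["temp_c"] = round((mapped["temp_f"] - 32) * 5 / 9, 1)
    return out

print(to_canonical("vendor_a", {"devName": "pdu-1", "kW": 2.4, "tempF": 77.0}))
print(to_canonical("vendor_b", {"id": "pdu-2", "power_watts": 2600, "temp_c": 24.5}))
```

The difficulty in practice is not writing such a table but maintaining one per vendor, per product line, and per firmware generation — which is why shared taxonomies and schemas matter.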

Data integration limitations

APIs and open-standard protocols can be used to connect DCIM with IT and OT data from third-party software, such as ITSM/ITOM systems, IP networks, and intelligent PDUs. However, support for non-standard requirements is often limited, requiring further customization and configuration.

Data-sharing restrictions

Data-sharing restrictions between IT and OT networks often prevent DCIM from accessing data outside the IT network. Internal policies, data privacy requirements, and cybersecurity concerns further complicate cross-network data sharing (see DCIM vulnerabilities increase the threat of cyberattacks).

Real-time data challenges

For many operators, existing systems and processes are unable to capture and process the real-time data needed to measure the live environment accurately. This is a significant barrier to implementing any operational digital twin.

Time-series data

Time-series data — commonly used by DCIM software — is critical for monitoring IT and OT equipment because it provides time-stamped telemetry readings at set intervals, such as hourly or per minute. High-quality time-series data can support accurate DCIM dashboards, reports and analytics, and provide valuable baseline data for AI modeling.

Real-time monitoring of IT and OT equipment requires continuous data streaming and processing, often involving thousands of data points per second. Most DCIM tools are not designed for real-time data streaming, though some offer APIs for integration with third-party systems, depending on data-sharing permissions.
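The fixed-interval pattern described above can be sketched in a few lines: raw samples arriving every few seconds are rolled up into per-interval averages of the kind a DCIM time-series store typically keeps. The sample data and interval here are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical raw telemetry: (epoch_seconds, power_kw) samples arriving
# far more often than a DCIM store would retain them.
samples = [(0, 3.0), (20, 3.2), (55, 3.4), (65, 3.6), (90, 3.8), (130, 4.0)]

def downsample(samples, interval_s=60):
    """Roll raw samples up into per-interval averages — the fixed-interval
    records a DCIM time-series database typically keeps."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % interval_s].append(value)  # bucket by interval start
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

print(downsample(samples))
```

Real-time streaming inverts this model: instead of batching samples into intervals after the fact, every sample must be processed as it arrives, which is the capability most DCIM tools lack.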

Data normalization

System interoperability issues identified earlier are exacerbated in real-time data environments, where high volumes of data require immediate processing.

To address these challenges, some DCIM/DCM-C solutions deploy proprietary data collectors. Hosted on a dedicated server or virtual machine, data collectors poll local equipment sensors using standard IT/OT protocols such as Modbus and SNMP. The gathered data is then normalized to align with common communication formats, schemas, value sets, and taxonomies. Finally, the data is encrypted and made available to the DCIM or DCM-C platform.
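A data collector of this kind can be sketched as a poll-normalize-publish loop. In the sketch below, the devices are simulated stand-ins for Modbus/SNMP endpoints, the key mappings are invented, and the encryption step is omitted — this is an assumption-laden illustration of the pattern, not any vendor's implementation:

```python
import json, time

# Simulated sensor endpoints standing in for Modbus/SNMP devices; a real
# collector would poll registers or OIDs over the network instead.
def read_sensor(device):
    fake = {"ups-1": {"out_w": 4200, "batt_pct": 97},
            "crah-2": {"sup_temp": 21.5, "ret_temp": 29.0}}
    return fake[device]

def normalize(device, raw):
    """Map device-specific keys and units onto a common schema."""
    rename = {"out_w": "power_kw", "batt_pct": "battery_percent",
              "sup_temp": "supply_temp_c", "ret_temp": "return_temp_c"}
    record = {"device": device, "ts": int(time.time())}
    for key, value in raw.items():
        value = value / 1000.0 if key == "out_w" else value  # W -> kW
        record[rename[key]] = value
    return record

def collect(devices):
    """One poll cycle: read, normalize, and serialize each record for
    publication (a real collector would also encrypt the payload)."""
    return [json.dumps(normalize(d, read_sensor(d))) for d in devices]

for payload in collect(["ups-1", "crah-2"]):
    print(payload)
```

Hosting this loop on a dedicated server or virtual machine close to the equipment, as described above, keeps polling traffic off the wider network and presents the platform with a single, uniform feed.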

Despite the options available today, it is widely acknowledged that significant time and investment are still needed in areas such as data engineering, data management, and data science to turn data into meaningful, actionable intelligence.

The Uptime Intelligence View

While organizations face mounting pressure to adopt AI and explore digital twin concepts, operators remain conservative in their approach. Many still grapple with foundational data quality and system compatibility issues, as discussed in this report. However, relying solely on software is insufficient when attempting to resolve long-standing issues stemming from restrictive policies and legacy approaches to connectivity.

DCIM could — and should — provide valuable physical data on IT and OT assets and connected systems, supporting both historical trend analysis and time-delayed operational analysis. At the same time, operators are demanding more real-time data streaming capabilities to model live environments. Achieving this will require suppliers to adopt common standards and terminology to simplify data sharing and reduce the barriers to system interoperability. For their part, operators should look for ways to modernize their internal policies to break down data siloes wherever practical.

 

Other related reports published by Uptime Institute include: 
Digital twins: reshaping AI infrastructure planning 
Digital twins: the role of simulations 
Nvidia's vision: digital twins and automated facilities 
DCIM past and present: what's changed? 
DCIM vulnerabilities increase the threat of cyberattacks 
Data center management software: the evolving role of DCIM

 

About the Author

John O'Brien

John is Uptime Institute’s Senior Research Analyst for Cloud and Software Automation. A technology industry analyst for over two decades, John has spent the past decade analyzing the impact of cloud migration, modernization and optimization. He covers hybrid and multi-cloud infrastructure, sustainability, and emerging AIOps, DataOps and FinOps practices.
