UII UPDATE 440 | DECEMBER 2025

Intelligence Update

AI in data: sorting reality from hallucination

The term artificial intelligence is widely misused: vendors, investors and even some operators label everything from basic automation scripts to deep learning controllers as AI. This inflation of the term has commercial and strategic motives: AI branding helps attract funding, creates differentiation in the market, and positions traditional analytics as cutting-edge solutions.

However, this broad usage also breeds confusion and skepticism. Data center operators, uncertain about the level of autonomy or risk they face, often hesitate to implement even safe, deterministic systems.

Many operators remain hesitant to implement AI in their data centers, often citing fears of hallucination — the risk that an AI system might generate false or invented information. Yet not all AI behaves this way, and the term is frequently misapplied. By clarifying the different types of AI, how they vary in capability and reliability, and which pose genuine hallucination risks, operators can better distinguish dependable automation from the marketing-driven “AI washing” that fuels confusion and obscures real risk.

The spectrum of AI in data centers

AI in data centers spans a broad continuum, from deterministic, data-driven algorithms to advanced systems capable of adaptive or autonomous decision-making. Treating these technologies as a single category obscures important differences in capability, reliability and operational risk. Understanding this spectrum is critical for evaluating what each system can — and cannot — safely automate.

Table 1 compares the different types of AI used in modern data centers.

Table 1 AI types used in data centers


Mislabeling AI: the root of hallucination fears

Across the tech sector, and within data center operations in particular, everything from basic regression models to large transformer networks is labeled as AI. This conflation blurs the operational reality:

  • Predictive and optimization models (ML, neural networks) rely on measurable data and statistical learning. They rarely improvise.
  • Generative and language models (LLMs, GenAI) produce content probabilistically, often without grounding in external data, which creates a risk of fabrication.
  • Agentic AI orchestrates systems and can call on other models (including LLMs) to plan or communicate, but its reliability depends on which components it uses.

This terminological blur feeds operator anxiety. A predictive control loop that tunes chillers based on real-time feedback is not at risk of hallucination, yet many operators equate it with the behavior of chatbots and generative systems. In practice, hallucination is a property of generative AI, not of deterministic automation or data-driven control.
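
To illustrate the distinction, the following minimal Python sketch shows the shape of such a control loop. The variable names, setpoints and gains are illustrative assumptions, not a description of any specific product. The point is that the output is a bounded, deterministic function of measured inputs: identical readings always yield identical commands, and there is no generative step that could fabricate information.

    # Minimal sketch of a deterministic control loop (illustrative only).
    # All names, setpoints and gains are hypothetical; real chiller control
    # sits in the BMS/DCIM and plant controllers, not in a script like this.

    def next_setpoint(measured_supply_temp_c: float,
                      target_supply_temp_c: float = 7.0,
                      current_setpoint_c: float = 7.0,
                      gain: float = 0.5,
                      min_setpoint_c: float = 5.0,
                      max_setpoint_c: float = 10.0) -> float:
        """Return a new chilled-water setpoint from measured feedback.

        The result is a bounded, deterministic function of the measured
        input, so the same reading always produces the same setpoint.
        """
        error = measured_supply_temp_c - target_supply_temp_c
        proposed = current_setpoint_c - gain * error
        # Clamp to the safe operating envelope defined by the plant design.
        return max(min_setpoint_c, min(max_setpoint_c, proposed))

    print(next_setpoint(8.2))  # same input -> same output, every time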

Understanding which AI types can hallucinate, and why, is essential for evaluating their operational reliability. Table 2 below clarifies the differences across major AI categories used in data centers.

Table 2 Hallucination behavior and risks across AI types


Managing and mitigating hallucination risks

Operators can apply a focused set of safeguards that keep AI useful while limiting unsafe or fabricated outputs:

  • Constrain generative models to verified, domain‑specific sources such as maintenance manuals, runbooks, building management system (BMS)/data center infrastructure management (DCIM) logs, incident records and approved knowledge articles.
  • Use retrieval‑augmented generation (RAG) so that models base responses on current operational data rather than general training alone.
  • Adopt hybrid architectures that pair LLM copilots with deterministic rule engines or physics‑based digital twins, which can verify or veto proposed actions before they affect live systems (a minimal veto pattern is sketched after this list).
  • Require human‑in‑the‑loop validation before AI can change configurations, control physical systems, or execute high‑impact runbook steps.
  • Establish clear governance that makes a distinction between “assistive AI” (documentation, recommendations, analysis) and “operational AI” (any system that can directly change configurations or physical infrastructure).
  • Apply strict scoping and access control so more powerful generative or agentic components start in read‑only or advisory modes and follow least‑privilege principles for credentials and APIs.
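
The sketch below, referenced above, illustrates one way the last four safeguards can fit together: a generative copilot's structured proposal is screened by a deterministic rule engine and, if it passes, queued for human approval rather than executed directly. The proposal format, action list and thresholds are illustrative assumptions; a real deployment would draw them from site policy and the BMS/DCIM.

    # Minimal sketch of a deterministic veto layer in front of a generative
    # assistant. The proposal schema, approved actions and limits below are
    # hypothetical examples, not a reference implementation.

    from dataclasses import dataclass

    @dataclass
    class Proposal:
        action: str    # e.g., "raise_chw_setpoint"
        target: str    # e.g., "chiller_plant_1"
        value: float   # proposed new value
        source: str    # "llm_copilot", "operator", ...

    ALLOWED_ACTIONS = {"raise_chw_setpoint": (5.0, 10.0)}  # safe range (degC)
    READ_ONLY_SOURCES = {"llm_copilot"}  # assistive AI cannot act directly

    def review(proposal: Proposal) -> str:
        """Apply deterministic rules, then defer execution to a human."""
        limits = ALLOWED_ACTIONS.get(proposal.action)
        if limits is None:
            return "rejected: action not on the approved list"
        low, high = limits
        if not (low <= proposal.value <= high):
            return f"rejected: value outside safe range {low}-{high}"
        if proposal.source in READ_ONLY_SOURCES:
            # Generative components stay advisory: queue for approval.
            return "queued for human-in-the-loop approval"
        return "approved for execution"

    print(review(Proposal("raise_chw_setpoint", "chiller_plant_1", 8.5, "llm_copilot")))
    print(review(Proposal("raise_chw_setpoint", "chiller_plant_1", 14.0, "llm_copilot")))

In this pattern the generative component can only suggest; the rule engine enforces the approved-action list and safe ranges, and anything that passes still requires an operator to execute it.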

The Uptime Intelligence View

Much of the data center industry’s caution around AI appears to come from treating it as a single, generative technology rather than a stack of distinct capabilities. In real deployments, predictive models are typically aligned with control and optimization tasks; emerging agentic approaches support orchestrated, multi-step decision flows; and LLMs or other generative systems are best suited for documentation, reasoning support, and advisory use under governance constraints. When these distinctions are made explicit, AI is more likely to become an enabler of resilient, self‑optimizing facilities than a direct threat to uptime.

About the Author

Dr. Rand Talib


Dr. Rand Talib is a Research Analyst at Uptime Institute with expertise in energy analysis, building performance modeling, and sustainability. Dr. Talib holds a Ph.D. in Civil Engineering with a concentration in building systems and energy efficiency. Her background blends academic research and real-world consulting, with a strong foundation in machine learning, energy audits, and high-performance infrastructure systems.
