
Definition (in the context of data centers):
Redundancy refers to the practice of duplicating critical resources and infrastructure—such as cables, servers, power supplies, or cooling systems—to ensure service continuity in case one component fails.
Main purpose:
- Reliability and availability: if one element fails, its redundant counterpart takes over automatically, without interrupting service.
- Operational safety: helps reduce the risk of data loss or downtime in critical applications.
Example:
In a data center, having multiple fiber optic lines connecting the same node means that if one line fails, the others can maintain data flow.
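To make the failover idea concrete, here is a minimal sketch in Python. The hostnames and the open_connection helper are illustrative assumptions, not part of any specific data-center stack, but the pattern (try the primary path, fall back to a redundant one) is the same whether the redundant resources are fiber lines, power feeds, or servers.

```python
import socket

# Hypothetical redundant uplinks to the same node: primary first, then backups.
UPLINKS = [
    ("fiber-a.example.net", 443),
    ("fiber-b.example.net", 443),
    ("fiber-c.example.net", 443),
]

def open_connection(timeout: float = 2.0) -> socket.socket:
    """Try each redundant uplink in order and return the first that connects."""
    last_error = None
    for host, port in UPLINKS:
        try:
            # If this link is down, the attempt fails or times out and we fall
            # through to the next redundant path instead of losing the service.
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err
    raise ConnectionError("all redundant uplinks failed") from last_error
```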
Key note:
While more redundancy generally means greater resilience and capacity, it also means higher energy consumption, higher costs, and more digital infrastructure to power and maintain.
See also: High Availability (HA), Fault Tolerance.