Case study: Saudi Sinnovate smart facility
The Saudi Sinnovate Smart Technology Hub required a robust information and communications technology (ICT) infrastructure to support the project goals for a smart and flexible facility. The project is located in the high-tech district of the King Abdullah Economic City (KAEC) smart city, about 62 miles north of Jeddah, Saudi Arabia, and includes four specialized buildings on an integrated campus. A convention center, an innovation center, and a joint-development center are dedicated to supporting information technology (IT) collaboration and lecture spaces for student and business use. They include classrooms, meeting rooms, knowledge incubators, an auditorium, and a technology showcase. The fourth building is a data center with traditional and modular spaces as well as an auditorium and a business-tour route for marketing. The data center is designed to be concurrently maintainable, so major components can be serviced without a shutdown.
To meet the goals for a smart and flexible ICT infrastructure, the systems were selected to communicate using Ethernet across a fiber-optic backbone. The fiber-optic cable provides dedicated circuits for each network type under one cable jacket, along with spare fibers, while taking up significantly less space than a bundle of copper cables. This allowed us to integrate the security, facility-automation, and IT systems across the campus using a common infrastructure (see Figure 5). The site is connected to the smart city infrastructure through dual entrance rooms in the data center. The other campus buildings have ICT-specific telecom rooms (TRs) housing servers, network switches, and other Ethernet-based support hardware. The buildings include at least one TR per floor, each with dual fiber-optic-backbone distribution pathways and horizontal distribution pathways. Classrooms, conference rooms, and hallways have raised access floors to enable flexibility and economy in future reconfigurations and technology upgrades.
The security network uses an Ethernet-based physical security information-management (PSIM) software system to integrate the individual security systems for monitoring (see Figure 6). The security network is physically and logically segregated from all the other networks, including separate underground pathways and entrances into the TRs. The video surveillance system also uses Power over Ethernet (PoE), which powers the remotely mounted closed-circuit television cameras over the same cable that carries their data. The security system network architecture includes a dedicated redundant fiber-optic backbone for the campus that serves the security control rooms in the buildings as well as connections to the specialized site-security buildings. The servers supporting the system are secured in shared rooms with cages or in dedicated rooms in the buildings. Security systems are monitored and controlled from a security command center, where the PSIM software consolidates them onto a single platform including video walls, workstations, and other specialized security-monitoring equipment.
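When PoE cameras are planned, the switch's shared power budget and per-port limits both have to be checked. The sketch below illustrates that check; the camera wattages and switch budget are hypothetical examples, not project data, while the per-port limits reflect the IEEE 802.3af/802.3at standards.

```python
# Sketch: checking a PoE budget for a bank of CCTV cameras.
# Camera draws and switch budget below are illustrative assumptions;
# real values come from the camera and switch datasheets.

# IEEE 802.3af (Type 1) sources up to 15.4 W per port;
# IEEE 802.3at (Type 2, PoE+) sources up to 30.0 W per port.
POE_PORT_LIMIT_W = {"802.3af": 15.4, "802.3at": 30.0}

def poe_budget_ok(camera_draws_w, switch_budget_w, standard="802.3at"):
    """Return True if every camera fits its port limit and the total
    draw fits within the switch's shared PoE power budget."""
    port_limit = POE_PORT_LIMIT_W[standard]
    if any(draw > port_limit for draw in camera_draws_w):
        return False
    return sum(camera_draws_w) <= switch_budget_w

# Hypothetical example: eight 12 W cameras on a switch with a 370 W PoE budget.
cameras = [12.0] * 8
print(poe_budget_ok(cameras, switch_budget_w=370.0))  # True
```

The same check scales to PoE door controllers or wireless access points on the security network; only the per-device draws change.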
The facility-automation systems are all integrated onto a converged Ethernet network and include the systems listed in Figure 6. The facility-automation system network architecture uses the enterprise IT fiber-optic backbone and the TRs within each building. The servers supporting the facility-automation systems sit in dedicated racks in their associated buildings, with physically separated redundant systems in the data center to ensure concurrent maintainability. Information from the facility-automation systems is also made available to the data center infrastructure-management (DCIM) system via a network security device connecting it to the IT network.
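One way to make readings from many subsystems usable in a single DCIM platform is to normalize them into a common record before they cross the network security device. The sketch below shows that idea; the field names, subsystem labels, and point identifiers are assumptions for illustration, not the project's actual data model.

```python
# Sketch: normalizing facility-automation readings from different subsystems
# into one record format for collection by a DCIM system.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    source: str          # originating subsystem, e.g. "power-monitoring"
    point: str           # point identifier within that subsystem
    value: float
    units: str
    timestamp: datetime  # stamped in UTC so sources can be correlated

def normalize(source: str, point: str, value: float, units: str) -> Reading:
    """Stamp a subsystem reading so readings from HVAC, lighting, power,
    and water systems can all land in one DCIM table."""
    return Reading(source, point, value, units, datetime.now(timezone.utc))

r = normalize("power-monitoring", "PDU-2A/kW", 42.7, "kW")
print(r.source, r.point, r.value, r.units)
```

Keeping the record flat and self-describing also simplifies filtering at the security boundary, since the device only has to pass one well-known message shape.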
Monitoring of the facility-automation systems occurs in dedicated building-management system (BMS) monitoring rooms in both the joint-development center and the data center. The systems integrated on the facility-automation network have different Ethernet architectures that were considered when designing the buildings. The lighting-control, HVAC, leak-detection, and water-system controls rely on hubs and controllers that aggregate non-Ethernet-capable devices onto the Ethernet network. The power-monitoring system differs in that most of its devices can communicate with the associated system server directly over Ethernet. Performance-monitoring applications on kiosks in lobbies provide feedback on metrics, e.g., energy-use intensity (EUI) in the innovation center and power usage effectiveness (PUE) in the data center.
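Both kiosk metrics reduce to simple ratios over metered data: PUE is total facility energy divided by IT equipment energy, and EUI is annual energy consumption per unit of floor area. A minimal sketch, with illustrative numbers rather than project readings:

```python
# Sketch of the kiosk performance metrics; input values are
# illustrative assumptions, not project data.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment
    energy. An ideal data center approaches 1.0."""
    return total_facility_kwh / it_equipment_kwh

def eui(annual_energy_kwh: float, floor_area_m2: float) -> float:
    """Energy-use intensity: annual energy per unit of floor area
    (kWh per square meter); lower means a more efficient building."""
    return annual_energy_kwh / floor_area_m2

print(round(pue(1_500_000, 1_200_000), 2))  # 1.25
print(round(eui(900_000, 12_000), 1))       # 75.0
```

Because the facility-automation network already collects power and water data into one system, these calculations need no additional instrumentation, only the metered totals.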
The IT network is converged and includes common enterprise IT systems as well as integration with the systems in Figure 5 and the DCIM. The IT-system network architecture includes the aforementioned redundant fiber-optic backbone for the campus and the TR-based building-distribution systems housing the aggregation and distribution switches and other network hardware. The highly integrated components include smart boards, digital signage, and the DCIM system for the data center. The smart boards are integrated with the audio/visual and lighting-control systems in the conference rooms and classrooms for coordinated room control, and they enable videoconferencing, collaboration, and presentation of lecture or meeting content. The digital-signage system distributes video display information for a number of systems, including enterprise marketing, operational and monitoring information, interactive kiosks, and the data center tour route.
Connected end devices range from interactive wayfinding screens to video walls in the data center network operations center (NOC). Kiosks in the lobbies display wayfinding applications, performance-monitoring applications, and marketing materials. The DCIM system integrates monitoring and assessment of the IT systems as well as secure collection of information from the facility-automation systems for monitoring in the NOC (see Figure 6).
The Ethernet-based ICT-system network architectures achieve the client's goals for a smart and flexible facility. Integration of the systems remains economical and flexible as they evolve, requiring minimal rework in the building spaces. TRs and data center rooms enable quick cross-connection of existing and future systems, and concurrently maintainable systems allow seamless software and hardware upgrades. Connections to the city infrastructure are in place and can grow as the city grows. Facility-automation system integration enables metrics to be calculated by collecting equipment, power, and water data into a single system for calculation and visualization.
Timothy Kuhlman is an electrical engineer and telecommunications technologist at CH2M with 27 years of experience in the design and construction of structured cabling systems. James Godfrey is an automation technologist at CH2M with 16 years of experience in commercial and industrial facility control-systems design and commissioning; he holds an MS in electrical engineering.