Edge compute and edge networking are rapidly gaining traction. Enormous amounts of critical data are being produced by local sensors and devices, forcing businesses to move services closer to where the data is generated. According to IDC, by 2023 more than 50% of new enterprise IT infrastructure deployed will be at the edge. But as Jason Carolan of Flexential points out in a Voices of the Industry article, data can be lost forever at the edge, making a solid architecture to enable edge technologies an important consideration. [1]

Before discussing architectures for edge data centers, it is important to define what we mean by “the Edge.” Since most companies refer to the edge as one step closer to the consumer (i.e., the generator or viewer of data) than their business currently serves, the term has a range of meanings. For the large cloud providers, the edge is the multi-tenant data centers (MTDCs) that host their points of presence. The large MTDC operators refer to the edge as the second- or third-tier cities outside of the “NFL city” locations where they currently have real estate. Even closer to the end users are edge sites that might be co-located with cell towers to offer the lowest latency. There are also exceptions, such as AWS Outposts, where cabinets from the cloud provider are installed on the user's premises. While these different cases can all be referred to as “the Edge,” the terms “Edge,” “Edgier” and “Edge-iest” (or far edge) are sometimes used to distinguish the different scales and requirements. The chart below summarizes some of the varying requirements and definitions for data center sizes. With these varying definitions of the edge, the design and requirements for a data center will vary widely.

For the rest of this article, we will discuss the design and requirements of the “Edgier” data center described above. These could also be described as metro data centers and correspond to the central offices of traditional telecommunication companies.

It is helpful to evaluate edge data center requirements by describing a specific use case. While much has been written about how autonomous vehicles will be a driving factor for 5G and the need for lower latency (no pun intended), I prefer a more grounded use case: cloud-based CAD design. CAD is a compute-heavy application and has traditionally been done on high-performance laptops. As with most applications, however, it is now possible to get CAD-as-a-service with all the benefits that come with software-as-a-service offerings. Latency is critical, though, since users will get frustrated with slow responses to their requests to manipulate or rotate drawings. In addition to the reduced latency for the consumer, bringing compute functionality to the edge of the network reduces the transport cost of sending large amounts of data from the user to the large hyperscale data center and back. While data will still be sent to the hyperscale data centers, local processing can greatly reduce the amount of data sent (and hence the transport cost). Based on this use case, some requirements for an edge data center are listed below:

  • Multiple distributed locations near users to minimize latency

  • Connection to multiple telecom service providers (AT&T, Verizon…) for customer access to the data center

  • Availability of enterprise cages with enough on-site compute capacity for local processing of customer data

  • Connectivity to cloud providers (AWS, Microsoft Azure) to provide compute, backup and long-term storage capacity for data

  • Redundant connectivity to cloud providers for a multi-cloud service option (to avoid outages)

  • Unmanned operation due to the size and remote nature of these edge data centers

  • Remote diagnostic and reconfiguration capability to monitor the network and respond to customer requests

Based on the above use case, as well as input from the hyperscale data center companies who want to extend their reach to these customers, we can see how these requirements translate into the data center design. These data centers will be approximately 1 to 2 MW with around 5,000 square feet of space. With an average rack power density of 10 kW per rack, they can host between 100 and 200 racks, offering significant on-site computing power. These data centers should be highly connected, with multiple service provider and cloud offerings. Their locations will be determined by the goal of offering 1 to 5 ms of latency, spreading them well beyond the “NFL city” locations where the majority of MTDCs sit today.
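To make those numbers concrete, the back-of-the-envelope sketch below reproduces the sizing arithmetic and adds an illustrative latency-to-distance conversion. The ~5 µs/km fiber propagation delay and the round-trip interpretation of the latency budget are assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope sizing for a metro edge data center.
# Assumptions (illustrative, not from the article): ~5 microseconds of
# fiber propagation delay per km, and the latency budget counted round trip.

FIBER_DELAY_US_PER_KM = 5.0  # roughly c divided by the fiber's refractive index (~1.47)

def rack_count(facility_power_kw: float, rack_density_kw: float = 10.0) -> int:
    """Racks supportable at a given average power density."""
    return int(facility_power_kw / rack_density_kw)

def service_radius_km(round_trip_ms: float) -> float:
    """Max one-way fiber distance that keeps the round trip within budget."""
    one_way_us = (round_trip_ms * 1000.0) / 2.0
    return one_way_us / FIBER_DELAY_US_PER_KM

for power_mw in (1.0, 2.0):
    print(f"{power_mw} MW facility -> ~{rack_count(power_mw * 1000)} racks at 10 kW/rack")

for budget_ms in (1.0, 5.0):
    print(f"{budget_ms} ms round trip -> ~{service_radius_km(budget_ms):.0f} km of fiber reach")
```

Under these assumptions, a 1 to 2 MW facility lands at the 100 to 200 racks cited above, and a 1 to 5 ms budget limits the fiber path to users to roughly 100 to 500 km, which is why sites must spread into second- and third-tier cities.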

While most MTDC operators are used to having multiple employees on site in their data centers, offering everything from security to “smart hands” services, the size of the metro edge data centers precludes on-site staffing. The need to manage customer service requirements remotely has been one of the challenges in operationalizing these smaller data centers.

While much of the architecture can be leveraged from existing data centers, such as rack designs, electrical power management and cooling architectures, the unmanned nature of these smaller data centers requires a system that can provide interconnection management and diagnostics remotely. Telescent has developed a robotic Network Topology Manager (NTM™) that allows remote management and diagnostics of the fiber layer using an SDN interface or a turn-key network orchestrator, and it can meet the requirements listed above. The robotic system allows connections, reconfigurations and disconnections to be handled remotely while offering ultra-low loss and latching performance equal to a traditional fiber connector. Once the robot has cleaned the fiber and placed it into the new connection, the connection behaves like a static patch panel. And since the Telescent system includes a power monitor as well as optional equipment such as an OTDR, any issue with the fiber can be monitored and diagnosed remotely, greatly reducing time-to-repair. The Telescent NTM™ offers a pay-as-you-grow model and can scale from a few hundred ports to over 1,000 ports in a single system, a size well suited to edge data centers. Because reliability is critical for operation in unmanned locations, the Telescent NTM™ has been certified to NEBS Level 3, has passed multiple customer trials simulating a 10-year lifetime, and has over 1 billion port hours in operation. Based on business models developed with multiple customers, avoiding the cost of truck rolls to the remote site alone easily provides the business justification for the Telescent NTM.
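To illustrate the kind of remote workflow this enables, the sketch below shows how an operator's automation might request a re-patch and then verify the new link through a generic SDN-style REST interface. The controller URL, endpoint paths, field names and port identifiers are hypothetical placeholders for illustration only, not the actual Telescent API.

```python
import requests  # generic HTTP client; any SDN controller client would work similarly

# Hypothetical controller URL and endpoints -- placeholders, not the Telescent API.
CONTROLLER = "https://ntm.example-edge-site.net/api/v1"

def reconnect(session: requests.Session, src_port: str, dst_port: str) -> None:
    """Ask the robot to move the fiber on src_port to a new mate on dst_port."""
    resp = session.post(f"{CONTROLLER}/cross-connects",
                        json={"source": src_port, "destination": dst_port})
    resp.raise_for_status()

def power_ok(session: requests.Session, port: str, min_dbm: float = -25.0) -> bool:
    """Read the built-in power monitor to confirm the new link without a truck roll."""
    resp = session.get(f"{CONTROLLER}/ports/{port}/optical-power")
    resp.raise_for_status()
    return resp.json()["power_dbm"] >= min_dbm

with requests.Session() as s:
    reconnect(s, src_port="A-017", dst_port="B-142")   # remote re-patch (example port IDs)
    if not power_ok(s, "B-142"):
        print("Power below threshold -- trigger a remote OTDR trace before dispatching anyone")
```

The point of the sketch is simply that both the reconfiguration and the health check happen over the network, which is what makes unmanned operation of these sites practical.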

While there is a range of definitions for “the Edge,” the Telescent NTM addresses the requirements of a metro edge data center, bringing compute functionality closer to the consumer with a remotely controlled, reconfigurable cross-connect system with built-in fiber diagnostic capability. For more information about the Telescent NTM, please contact the company directly at info@telescent.com.

[1] Jason Carolan (Flexential), “Edge to Core Networking is Critical for IoT Devices, AI and Automation,” Data Center Frontier, Voices of the Industry, Jan. 10, 2022.