The Edge Data Center Becomes a Thing
By Drew Robb
Data centers were the thing to have in the early 2000s. Just about every mid-sized and large organization, and a whole lot of smaller ones, rushed to build their own data center. The trend became so prevalent that many of them wanted another one – an additional data center for business continuity or resilience purposes.
But the cloud and software as a service (SaaS) put an end to that era. Gradually, the allure of the in-house data center diminished. More and more applications, services and platforms shifted to the cloud. The heyday of the data center was over. Articles began to appear about the end of the data center.
Of course, the mega-cloud developers built massive data centers of their own, taking the state of the art to new heights. Facebook, Google, Amazon and other large service providers became the data center kings. While many enterprises retain their own in-house resources, their duties are split between managing internal data centers and external cloud-based assets. But a renaissance beckons.
No, it is not the abandonment of the cloud and the pendulum swinging back towards in-house data centers. Rather, it is the emergence of the edge data center.
An edge data center can have many different functions. It may be a relay point for real-time information, a center of analysis, a pre-processing center or even a semi-autonomous arm of a much larger network. And as the name implies, these units are at the edge of the network, located wherever they are needed.
Edge traffic gives rise to edge data centers
This trend is being driven by a change in traffic volume and direction. Instead of all data flowing to and from a central hub, more than half of all data will be generated at the edge of the network, according to IDC, by as many as 80 billion Internet of Things (IoT) devices. The fallout from this trend is that 70% of enterprises will be forced to institute data processing at the edge by 2023. IoT and the growing maturity of artificial intelligence (AI) mean that it is no longer feasible to transmit all that traffic centrally for processing, analysis and action.
Latency is one reason why some are turning to edge data centers – user expectations and application requirements often demand real-time response. But there are other practicalities. Much of the data generated only has value for a very short time frame. After that, it can be summarized and the vast bulk of it discarded.
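The summarize-then-discard pattern can be sketched in a few lines. The function and field names below are hypothetical, not from any particular edge platform; the idea is simply that an edge node reduces a window of raw readings to a compact summary, forwards only the summary upstream, and drops the bulk of the raw data.

```python
from statistics import mean

def summarize_window(readings):
    """Reduce a window of raw sensor samples to a compact summary.

    'readings' is a hypothetical list of numeric samples collected at
    the edge. Only this summary is transmitted centrally; the raw
    window is discarded afterwards.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# Example: a short window of temperature samples from one edge sensor.
window = [21.1, 21.3, 21.2, 24.8, 21.2]
summary = summarize_window(window)
# Forward 'summary' upstream; 'window' can now be dropped, shrinking
# the data sent centrally from five values to four.
```

In practice the summary would be richer (percentiles, anomaly flags, timestamps), but the economics are the same: the edge node trades a small amount of local compute for a large reduction in backhaul traffic.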
Take applications such as self-driving/connected cars. Edge sensors, compute resources and, in some cases, edge data centers are needed to receive, process, analyze and summarize data, then transmit it both centrally and to the vehicles in the vicinity. After all, the modern car is filled with sensors and transmitters. Newer models include imaging systems, radar and LiDAR (light detection and ranging), GPS, self-parking/self-driving functions, collision avoidance, blind spot monitoring and lane departure warnings.
Extrapolate this down the road (no pun intended) a few years into a potentially autonomous driving world. Edge compute power is vital in providing the real-time responses needed to make such an intricate system work – and keep everyone safe.
That’s just one use case. What about next-generation smartphones, augmented reality (AR), virtual reality (VR), robotics, 5G, gaming, medical applications, drones, content streaming and interactive entertainment? To work well, they need to minimize latency and must be programmed to act automatically based on policy and certain data parameters. They can’t wait for centralized data centers to control their actions.
The rise of the edge data center, then, is about locating IT resources where they are most needed – closest to users, devices, sensors and equipment. However, it does not mean that CIOs should rush off to install their own edge data centers. More than likely, very specific use cases and value propositions will determine where and when they will be deployed.