Data growth is exploding. Data creation is expected to reach a staggering 180 zettabytes by the year 2025—that’s 118.8 zettabytes more than in 2020. As the volume of data we produce increases exponentially, so do the demands on data storage.
This increase is being fueled by several factors. First, the number of base stations—the fixed transceivers that serve as the main communication points for mobile technologies like 5G (and eventually 6G)—is expected to boom as those technologies become more popular. According to a report by Allied Market Research, the 5G base station market is expected to exceed $190 billion by 2030.
Additionally, as the number of self-driving cars increases, so does the need for local storage and processing power in the vehicles themselves. As of mid-2021, there were over 1,400 self-driving cars in the U.S.; however, one report projects that there could be as many as 33 million autonomous vehicles (AVs) on the road by 2040. Other technologies like augmented reality (AR) and virtual reality (VR), plus the digitization of manufacturing (Industry 4.0), are further driving this trend.
Needless to say, it will be impossible to store all of this data in the cloud. As the demand to access and process data anytime, anywhere grows, backbone network bandwidth and the data processing capacity of centralized data centers are falling dramatically behind.
So what’s the solution?
Keeping up will require a completely different information technology (IT) infrastructure paradigm, one that moves computation and data storage much closer to data sources and end users than today's centralized data centers allow. The answer is edge computing.
Edge computing offers a significant reduction in data traffic over the backbone network and much better quality of service (QoS), both natural effects of moving computation and storage closer to data sources and end users. Some industries have already caught on and are beginning to reap these benefits. For example, telecom companies have started to deploy edge computing infrastructure to steadily improve QoS for their customers as 5G/6G wireless communication systems grow. And rightly so: edge computing is essentially the only option telecom companies have for reducing data round-trip latency. This will also become a critical imperative for other emerging applications, such as AVs.
But aside from these emerging technologies, all enterprises should have storage tiers at the edge. These tiers could fall into two categories. The first would be a highly distributed caching tier that keeps hot data closer to end users in order to minimize the network traffic from centralized data centers. The second would be a highly distributed raw data staging tier that pre-processes raw data locally, reducing both the network traffic to centralized data centers and the processing burden on them. The sketch below illustrates both tiers.
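To make these two tiers concrete, here is a minimal, hypothetical sketch in Python. The names (CentralStore, EdgeCache, RawDataStager) and the LRU and batching policies are illustrative assumptions, not a specific product or API; the point is simply that the caching tier absorbs repeat reads at the edge, while the staging tier forwards compact summaries upstream instead of every raw record.

from collections import OrderedDict
from statistics import fmean


class CentralStore:
    """Stand-in for an object store in a centralized data center."""

    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value

    def get(self, key):
        return self._objects.get(key)


class EdgeCache:
    """Tier 1: keeps hot data close to end users, with simple LRU eviction."""

    def __init__(self, central, capacity=128):
        self._central = central
        self._capacity = capacity
        self._cache = OrderedDict()

    def get(self, key):
        if key in self._cache:                   # served locally, no backbone traffic
            self._cache.move_to_end(key)
            return self._cache[key]
        value = self._central.get(key)           # cache miss: one trip over the backbone
        if value is not None:
            self._cache[key] = value
            if len(self._cache) > self._capacity:
                self._cache.popitem(last=False)  # evict the least recently used item
        return value


class RawDataStager:
    """Tier 2: buffers raw data locally and forwards only a compact summary."""

    def __init__(self, central, batch_size=100):
        self._central = central
        self._batch_size = batch_size
        self._readings = []
        self._batches_sent = 0

    def ingest(self, reading):
        self._readings.append(reading)
        if len(self._readings) >= self._batch_size:
            self.flush()

    def flush(self):
        if not self._readings:
            return
        # Forward a small summary instead of every raw reading.
        summary = {
            "count": len(self._readings),
            "mean": fmean(self._readings),
            "min": min(self._readings),
            "max": max(self._readings),
        }
        self._batches_sent += 1
        self._central.put(f"batch-{self._batches_sent}", summary)
        self._readings = []


if __name__ == "__main__":
    central = CentralStore()
    central.put("video-chunk-42", b"...")

    cache = EdgeCache(central)
    cache.get("video-chunk-42")    # first read: fetched from the central store
    cache.get("video-chunk-42")    # repeat read: served from the edge cache

    stager = RawDataStager(central, batch_size=3)
    for reading in (21.0, 21.4, 20.9):
        stager.ingest(reading)     # the third reading triggers an upstream flush
    print(central.get("batch-1"))  # count/mean/min/max summary for the batch

In practice the eviction policy, batch size, and summary format would be tuned per workload; the structure, however, mirrors the two tiers described above.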
The success of edge computing largely depends on its cost-effectiveness. As semiconductor technology scaling reaches its limit, heterogeneous and domain-specific computing become critical to maintaining the cost-effectiveness of IT infrastructure. Another key component of future heterogeneous computing paradigms is computational storage, which moves computation (for example, compression, filtering, and other data-reduction tasks) into or next to the storage device itself; it will serve as an essential building block for cost-effective edge computing infrastructure.
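As a rough illustration of why that matters, the hypothetical Python sketch below contrasts a conventional drive, which must ship every stored record to the host before filtering, with a computational drive that runs the filter on the device and returns only the matches. The classes, record counts, and the byte-count proxy are invented for illustration and do not model any real device or API.

import json


class ConventionalDrive:
    """Host-side filtering: every stored record crosses the storage interface."""

    def __init__(self, records):
        self._records = records

    def read_all(self):
        return self._records  # the host receives everything, then filters


class ComputationalDrive:
    """Device-side filtering: the predicate runs inside the drive."""

    def __init__(self, records):
        self._records = records

    def query(self, predicate):
        return [r for r in self._records if predicate(r)]  # only matches leave the device


def payload_size(records):
    """Rough proxy for bytes moved between the drive and the host."""
    return len(json.dumps(records).encode())


if __name__ == "__main__":
    records = [{"sensor": i % 8, "value": i * 0.5} for i in range(10_000)]

    def hot(record):
        return record["value"] > 4_900.0  # illustrative predicate

    conventional = ConventionalDrive(records)
    host_side = [r for r in conventional.read_all() if hot(r)]

    computational = ComputationalDrive(records)
    device_side = computational.query(hot)

    assert host_side == device_side
    print("conventional transfer:", payload_size(records), "bytes")
    print("computational transfer:", payload_size(device_side), "bytes")

The fewer bytes that cross the storage interface and the backbone network, the less an edge site has to spend on bandwidth and host compute, which is exactly the cost argument made above.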
Together, edge computing and the cloud will complement each other to form the foundation of future pervasive IT infrastructure. In the coming years, we can expect to see the continued growth of the cloud in the form of centralized data centers, alongside the rapid expansion of edge computing.
The implications of edge computing are far-reaching and will be felt by businesses and consumers alike. The pervasiveness and performance of edge computing will directly determine the QoS experienced by everyday consumers, whether that is video upload/download speed or query response time. The backbone network and centralized data centers are no longer sufficient for meeting the new demands of data accessibility and processing. By adopting edge computing, enterprises and end users can expect possibilities like never before.
About the Author
Dr. Tong Zhang is co-founder and Chief Scientist of ScaleFlux. Dr. Zhang is a well-established researcher with significant contributions to the areas of data storage systems and VLSI signal processing. He is responsible for developing key techniques and algorithms for Computational Storage products and exploring their optimal use in mainstream application domains such as databases. He is currently a Professor at Rensselaer Polytechnic Institute (RPI). His current and past research spans a wide range of areas, including databases, filesystems, solid-state and magnetic data storage devices and systems, digital signal processing and communication, error correction coding, VLSI architectures, and computer architecture. He has published over 150 technical papers at prestigious USENIX/IEEE/ACM conferences and journals, with a citation h-index of 36.